\section{Introduction} Two of the most-studied quantities in the Markov chain literature are the \textit{mixing time} and \textit{hitting times} associated with a chain. In \citep{finitemixhit, oliveira2012mixing}, the authors showed that these quantities are equal up to universal constants for reversible discrete time Markov processes with finite state space. In this paper, we use Nonstandard Analysis to extend this result to discrete time Markov processes on $\sigma$-compact state spaces that satisfy the \textit{strong Feller property} (see Theorem \ref{mixhit}). As in \citep{finitemixhit, oliveira2012mixing}, it is generally easier to obtain upper bounds on hitting times and lower bounds on mixing times, so these results let us estimate whichever quantity is more convenient. Recall that the mixing time measures the number of steps a Markov chain must take to (approximately) forget its initial condition. This quantity is fundamental in computer science and computational statistics, where it measures the efficiency of algorithms based on Markov chains; it is also important in the statistical physics literature, where it provides a way to qualitatively describe the behaviour of a material (see \textit{e.g.} the overviews \citep{markovmix, diaconis2009markov,montenegro2006mathematical,aldous2002reversible,meyn2012markov}). The hitting time measures the number of steps a Markov chain must take before entering a set for the first time. This quantity is not always directly relevant for applications, but it is usually easier to compute or estimate, and many tools have been developed for estimating hitting times and relating them to other quantities of interest (see \textit{e.g.} the role of hitting time calculations in the theory of metastability \citep{bovier2006metastability}). \subsection{Nonstandard Analysis} In this paper, we extend known results about Markov processes with a finite state space to those with a continuum state space.
Our arguments are based on nonstandard analysis, which allows construction of a single object---a hyperfinite probability space---that satisfies all the first order logic properties of a finite probability space, but can simultaneously be viewed via the Loeb construction \citep{Loeb75} as a measure-theoretic probability space. This construction often allows one to make discrete arguments about the hyperfinite probability space, and then use the Loeb construction to express the results in measure-theoretic terms. In order to do this, one has to establish appropriate notions of liftings (hyperfinite processes that sit ``above'' the measure-theoretic objects of interest) and pushdowns (projections of hyperfinite objects to the measure-theoretic objects). These liftings and pushdowns form a ``dictionary'' that must be chosen specifically to represent the type of probabilistic process of interest. Dictionaries for Lebesgue integration, Brownian motion and It\^o integration were given in \citep{andersonisrael} and \citep{anderson87}, for stochastic differential equations in \citep{Keisler87}, and for Markov chains in \citep{Markovpaper}. One of the main contributions of this paper is an expansion of the dictionary for Markov chains in \citep{Markovpaper}. This expansion lets us translate the proofs of existing discrete results to obtain several new results, and we anticipate it being useful for the translation of further Markov chain results in the future. \subsection{Related Literature} \label{SecRelLit} \subsubsection{Computational Statistics} Although Markov chains on infinite state spaces occur in many areas, we are especially interested in the mixing properties of Markov chains used in Markov chain Monte Carlo (MCMC) algorithms. Most algorithms used in MCMC do \textit{not} satisfy the strong Feller condition, and so our main result, Theorem \ref{mixhit}, does not apply directly.
In Section \ref{statapp}, we explain how our main result can still be applied to popular MCMC chains. We note that most chains used in MCMC are \textit{geometrically ergodic} but do not have finite mixing times. Our main results can still be applied in this situation, and this is the subject of a companion paper. \subsubsection{Equivalence and Sensitivity} There are many different ways to measure the time it takes for a Markov chain to ``get random.'' The present paper belongs to the large literature, started in \citep{aldous1982some,aldous1997mixing}, devoted to understanding how much different measures of this time can disagree. These ``equivalence'' results are closely related to the problem of studying the \textit{sensitivity} of Markov chains to qualitatively-small changes (see \textit{e.g.} \citep{addario2017mixing,hermon2018sensitivity}) and to the study of \textit{perturbations} of Markov chains (see \textit{e.g.} \citep{mitrophanov2005sensitivity,herve2014approximating,pillai2014ergodicity,rudolf2018perturbation,bardenet2017markov,negrea2017error}). While perturbations have been studied on very general state spaces, to our knowledge all research related to sensitivity has been focused on Markov chains on discrete state spaces. Finally, the relationship between hitting and mixing times has been refined since \citep{finitemixhit, oliveira2012mixing}; see \textit{e.g.} \citep{basu2015characterization}. \subsection{Overview of the Paper} In Section \ref{secpre}, we give basic definitions and inequalities related to mixing and hitting times. We also state the main results. In Section \ref{sechyprob} and Section \ref{sechymarkov}, we introduce hyperfinite representations for probability spaces and discrete-time Markov processes developed in \citep{Markovpaper}. Namely, we show that, for every discrete-time Markov process satisfying appropriate conditions, there exists a corresponding hyperfinite Markov process.
In Section \ref{secmha}, we show that the mixing times and hitting times of a discrete-time Markov process on a compact state space can be approximated by the mixing times and hitting times of its corresponding hyperfinite Markov process. This leads to a proof in Section \ref{SecMixHitCompact} that mixing times and hitting times are asymptotically equivalent for discrete-time Markov processes on a compact state space satisfying Assumption \ref{assumptiondsf}. We extend to $\sigma$-compact spaces in Section \ref{secsigcomp}. Finally, in Section \ref{statapp} and Section \ref{AppOtherExt} we show how to apply our results to some popular chains from statistics. Various elementary proofs and lemmas are deferred to the appendices. \section{Preliminaries and Main Results}\label{secpre} We fix a $\sigma$-compact metric state space $X$ endowed with Borel $\sigma$-algebra $\mathcal{B} X$ and let $\{P_{x}(\cdot)\}_{x\in X}$ denote the transition kernel of a Markov process with unique stationary measure $\pi$. Throughout the paper, we include $0$ in $\mathbb{N}$. For $x\in X$, $t\in \mathbb{N}$ and $A\in \mathcal{B} X$, we write $P_{x}^{(t)}(A)$ or $P^{(t)}(x,A)$ for the transition probability from $x$ to $A$ in $t$ steps. We write $P_{x}(A)$ and $P(x,A)$ as abbreviations for $P_{x}^{(1)}(A)$ and $P^{(1)}(x,A)$, respectively. Recall that $\{P_{x}(\cdot)\}_{x\in X}$ is said to be \emph{reversible} if \[ \label{EqDefRev} \int_{A}P(x,B)\pi(\mathrm{d} x)=\int_{B}P(x,A)\pi(\mathrm{d} x) \] for every $A,B\in \mathcal{B} X$. For probability measures $\mu, \nu$ on $(X, \mathcal{B} X)$, we denote by \[ \parallel \mu - \nu \parallel = \sup_{A \in \mathcal{B} X} |\mu(A) - \nu(A)| \] the usual \textit{total variation distance} between $\mu$ and $\nu$. Our main result will require the following continuity condition on $\{P_{x}(\cdot)\}_{x\in X}$.
\begin{assumption}{DSF}\label{assumptiondsf} The transition kernel $\{P_{x}(\cdot)\}_{x\in X}$ satisfies the \emph{strong Feller property}: for every $x\in X$ and every $\epsilon>0$ there exists $ \delta>0$ such that \[ (\forall y\in X) (d(x,y)<\delta \implies ( \parallel P_{x}(\cdot) - P_{y}(\cdot) \parallel <\epsilon)). \] \end{assumption} We define the mixing time: \begin{defn}\label{defmix} Let $\epsilon\in \Reals_{> 0}$. The mixing time $t_{m}(\epsilon)$ of $\{P_{x}(\cdot)\}_{x\in X}$ is \[ \min\{t\geq 0: d(t)\leq \epsilon\}, \] where $d(t)=\sup_{x\in X}\parallel P^{(t)}(x,\cdot)-\pi(\cdot) \parallel$. \end{defn} It is clear that $d(t)$ is a non-increasing function. The ``lazy'' transition kernel associated with $\{P_{x}(\cdot)\}_{x\in X}$ is: \begin{defn}\label{deflazy} The \emph{lazy kernel} $\{P_{L}(x,\cdot)\}_{x\in X}$ of a transition kernel $\{P(x,\cdot)\}_{x\in X}$ is given by \[ P_{L}(x,A)=\frac{1}{2}P(x,A)+\frac{1}{2}\delta(x,A) \] for every $x\in X$ and every $A\in \mathcal{B} X$, where $$ \delta(x,A)= \begin{cases} 1, \qquad x\in A \\ 0, \qquad x\not\in A. \end{cases} $$ \end{defn} For $\epsilon\in \Reals_{> 0}$, we denote the mixing time of the lazy chain by $t_{L}(\epsilon)$. For notational convenience, we will simply write $t_{L}$ and $t_m$ when $\epsilon=\frac{1}{4}$. We now denote by $\{X_{t}\}_{t \in \mathbb{N}}$ a Markov chain with transition kernel $\{P_{x}(\cdot)\}_{x\in X}$ and arbitrary starting point $X_{0} = x_{0} \in X$. Recall that the \textit{hitting time} of a set $A \in \mathcal{B} X$ for this Markov chain is defined to be: \[ \tau_{A} = \min \{ t \in \mathbb{N} \, : \, X_{t} \in A \}. \] We now introduce the maximum hitting time of large sets. \begin{defn} Let $\alpha\in \Reals_{> 0}$.
The maximum hitting time with respect to $\alpha$ is \[ t_{H}(\alpha)=\sup\{\expect_{x}(\tau_{A}): x\in X, A\in \mathcal{B} X\ \text{such that}\ \pi(A)\geq \alpha\}, \] where $\expect_{x}$ denotes expectation with respect to the law of the Markov process started at $X_{0}=x$. \end{defn} We now quote the main results from \citep{finitemixhit, oliveira2012mixing}: \begin{thm}[{\citep[][Thm.~1.1]{finitemixhit}};{\citep[][Thm.~1.3]{oliveira2012mixing}}]\label{fmixhit} Let $0 < \alpha<\frac{1}{2}$. Then there exist universal positive constants $c'_{\alpha},c_{\alpha}$ so that for every finite reversible Markov process \[ c'_{\alpha}t_{H}(\alpha)\leq t_{L}\leq c_{\alpha}t_{H}(\alpha). \] \end{thm} Throughout the paper, we denote by $\mathcal{M}$ the collection of discrete time reversible transition kernels with a stationary distribution on a $\sigma$-compact metric state space satisfying Assumption \ref{assumptiondsf}. Note that transition kernels on finite state spaces belong to $\mathcal{M}$. The main result of this paper generalizes Theorem \ref{fmixhit} to $\mathcal{M}$: \begin{thm}\label{mixhit} Let $0 < \alpha<\frac{1}{2}$. Then there exist universal constants $0<a_{\alpha},a'_{\alpha}<\infty$ such that, for every $\kernel \in \mathcal{M}$, we have \[ a'_{\alpha}t_{H}(\alpha)\leq t_{L}\leq a_{\alpha}t_{H}(\alpha). \] \end{thm} The \textit{first} inequality in Theorem \ref{mixhit} is straightforward and well-known (see \textit{e.g.} Lemma \ref{maxhitless}). The second is more difficult. The compact version of Theorem \ref{mixhit} is proved in Theorem \ref{mixhitcompact} and the general version is proved in Theorem \ref{mixhitpf}. \subsection{Equivalent Forms of Mixing Times and Hitting Times}\label{secequivalent} In this section, we define several quantities that are asymptotically equivalent to the mixing times and the maximum hitting times defined in the previous section.
These equivalent forms play important roles throughout the entire paper, since they are easier to work with for general Markov processes. First, let \[ \overline{d}(t)=\sup_{x,y\in X}\parallel P^{(t)}(x,\cdot)-P^{(t)}(y,\cdot) \parallel. \] We recall two important results on $\overline{d}(t)$:\footnote{The referenced proofs are stated for discrete spaces, but the arguments apply immediately in the current setting.} \begin{lemma}[{\citep[][Lemma~4.11]{markovmix}}]\label{mixequal} For every $t\in \mathbb{N}$, we have $d(t)\leq \overline{d}(t)\leq 2d(t)$. \end{lemma} \begin{lemma}[{\citep[][Lemma~4.12]{markovmix}}]\label{submulti} The function $\overline{d}$ is sub-multiplicative. That is, for $s, t \in \mathbb{N}$, \[ \overline{d}(s+t)\leq \overline{d}(s)\overline{d}(t). \] \end{lemma} For every $\epsilon\in \Reals_{> 0}$, define the \emph{standardized mixing time} to be \[ \overline{t}_{m}(\epsilon)=\min\{t\geq 0: \overline{d}(t)\leq \epsilon\}. \] Similarly, we define \[ \overline{t}_{L}(\epsilon)=\min\{t\geq 0: \overline{d}_{L}(t)\leq \epsilon\}, \] where $\overline{d}_{L}$ denotes the analogue of $\overline{d}$ for the lazy kernel. For convenience, we write $\overline{t}_{m}$ and $\overline{t}_{L}$ when $\epsilon=\frac{1}{4}$. The following well-known equivalence between mixing times and standardized mixing times follows immediately from Lemmas \ref{mixequal} and \ref{submulti}: \begin{lemma}\label{mixequivalent} For every transition kernel $\{P_{x}(\cdot)\}_{x\in X}$, we have $\overline{t}_{m}\leq 2t_{m}\leq 2\overline{t}_{m}$. \end{lemma} Next, we define the \emph{large hitting time}: \begin{defn}\label{largehit} Let $\alpha\in \Reals_{> 0}$. The large hitting time with respect to $\alpha$ is \[ \tau_{g}(\alpha)=\min\{t\in \mathbb{N}: \inf\{\Prob_{x}(\tau_{A}\leq t): x\in X, A\in \mathcal{B} X \ \text{such that}\ \pi(A)\geq \alpha\}>0.9\}, \] where $\Prob_{x}$ denotes the law of the Markov process started at $X_{0}=x$.
\end{defn} Unsurprisingly, the maximum hitting time is asymptotically equivalent to the large hitting time: \begin{lemma}\label{maxlarge} For every $\alpha\in \Reals_{> 0}$ and every Markov process, we have $0.1\tau_{g}(\alpha)\leq t_{H}(\alpha)\leq 2\tau_{g}(\alpha)$. \end{lemma} \begin{proof} See Appendix \ref{SecElEq}. \end{proof} The following is an immediate consequence of Lemma \ref{maxlarge} and Theorem \ref{fmixhit}. \begin{thm}\label{mhequal} Let $0 < \alpha<\frac{1}{2}$. Then there exist universal positive constants $c'_{\alpha},c_{\alpha}$ so that for every finite reversible Markov process \[ c'_{\alpha}\tau_{g}(\alpha)\leq t_{L}\leq c_{\alpha}\tau_{g}(\alpha). \] \end{thm} \subsection{Notation from Nonstandard Analysis} In this paper, we use nonstandard analysis, powerful machinery derived from mathematical logic, as our main toolkit. For those who are not familiar with nonstandard analysis, \citep{Markovpaper,nsbayes} provide reviews tailored to statisticians and probabilists, while \citep{NSAA97,NDV,NAW} provide thorough introductions. We briefly introduce the setting and notation from nonstandard analysis. We use $\NSE{}$ to denote the nonstandard extension map taking elements, sets, functions, relations, etc., to their nonstandard counterparts. In particular, $\NSE{\Reals}$ and $\NSE{\mathbb{N}}$ denote the nonstandard extensions of the reals and natural numbers, respectively. An element $r\in \NSE{\Reals}$ is \emph{infinite} if $|r|>n$ for every $n\in \mathbb{N}$ and is \emph{finite} otherwise. An element $r \in \NSE{\Reals}$ with $r > 0$ is \textit{infinitesimal} if $r^{-1}$ is infinite. For $r,s \in \NSE{\Reals}$, we use the notation $r \approx s$ as shorthand for the statement ``$|r-s|$ is infinitesimal,'' and similarly we use $r \gtrapprox s$ as shorthand for the statement ``either $r \geq s$ or $r \approx s$.'' Given a topological space $(X,\topology)$, the monad of a point $x\in X$ is the set $\bigcap_{ U\in \topology \, : \, x \in U}\NSE{U}$.
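For example, when $X=\Reals$ carries its usual topology, the monad of $x\in \Reals$ is \[ \bigcap_{\epsilon\in \Reals_{>0}}\NSE{(x-\epsilon,x+\epsilon)}=\{y\in \NSE{\Reals}: y\approx x\}, \] the set of hyperreals infinitely close to $x$.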
An element $x\in \NSE{X}$ is \emph{near-standard} if it is in the monad of some $y\in X$. We say $y$ is the standard part of $x$ and write $y=\mathsf{st}(x)$. Note that such $y$ is unique. We use $\NS{\NSE{X}}$ to denote the collection of near-standard elements of $\NSE{X}$ and we say $\NS{\NSE{X}}$ is the \emph{near-standard part} of $\NSE{X}$. The standard part map $\mathsf{st}$ is a function from $\NS{\NSE{X}}$ to $X$, taking near-standard elements to their standard parts. In both cases, the notation elides the underlying space $X$ and the topology $\topology$, because the space and topology will always be clear from context. For a metric space $(X,d)$, two elements $x,y\in \NSE{X}$ are \emph{infinitely close} if $\NSE{d}(x,y)\approx 0$. An element $x\in \NSE{X}$ is near-standard if and only if it is infinitely close to some $y\in X$. An element $x\in \NSE{X}$ is finite if there exists $y\in X$ such that $\NSE{d}(x,y)$ is finite, and is infinite otherwise. Let $X$ be a topological space endowed with Borel $\sigma$-algebra $\mathcal{B} X$ and let $\FM{X}$ denote the collection of all finitely additive probability measures on $(X,\mathcal{B} X)$. An internal probability measure $\mu$ on $(\NSE{X},\NSE{\mathcal{B} X})$ is an element of $\NSE{\FM{X}}$. Namely, an internal probability measure $\mu$ on $(\NSE{X},\NSE{\mathcal{B} X})$ is an internal function from $\NSE{\mathcal{B} X}\to \NSE{[0,1]}$ such that \begin{enumerate} \item $\mu(\emptyset)=0$; \item $\mu(\NSE{X})=1$; and \item $\mu$ is hyperfinitely additive.
\end{enumerate} The Loeb space of the internal probability space $(\NSE{X},\NSE{\mathcal{B} X}, \mu)$ is a countably additive probability space $(\NSE{X},\Loeb{\NSE{\mathcal{B} X}}, \Loeb{\mu})$ such that \[ \Loeb{\NSE{\mathcal{B} X}}=\{A\subset \NSE{X}|(\forall \epsilon>0)(\exists A_i,A_o\in \NSE{\mathcal{B} X})(A_i\subset A\subset A_o\wedge \mu(A_o\setminus A_i)<\epsilon)\} \] and \[ \Loeb{\mu}(A)=\sup\{\mathsf{st}(\mu(A_i))|A_i\subset A,A_i\in \NSE{\mathcal{B} X}\}=\inf\{\mathsf{st}(\mu(A_o))|A_o\supset A,A_o\in \NSE{\mathcal{B} X}\}. \] Every standard model is closely connected to its nonstandard extension via the \emph{transfer principle}, which asserts that a first order statement is true in the standard model if and only if it is true in the nonstandard model. Finally, given a cardinal number $\kappa$, a nonstandard model is called $\kappa$-saturated if the following condition holds: if $\mathcal F$ is a family of internal sets with cardinality less than $\kappa$ and the finite intersection property, then the intersection of all members of $\mathcal F$ is non-empty. In this paper, we assume our nonstandard model is as saturated as we need (see \textit{e.g.} {\citep[][Thm.~1.7.3]{NSAA97}} for the existence of $\kappa$-saturated nonstandard models for any uncountable cardinal $\kappa$). \section{Hyperfinite Representation of Probability Spaces}\label{sechyprob} In this section, we give an overview of the hyperfinite representation of probability spaces developed in \citep{Markovpaper}. All the proofs can be found in {\citep[][Section~6]{Markovpaper}}. We use similar notation to \citep{Markovpaper} and \citep{nsbayes}. The following theorem gives a nonstandard characterization of compact topological spaces. \begin{thm} [{\citep[][Thm.~4.1.13]{AR65}}]\label{HBfinite} A topological space $X$ is compact if and only if every $x\in \NSE{X}$ is near-standard.
\end{thm} In the following, we use the common notation $d(x,A) = \inf \{d(x,y) \, : \, y \in A \}$ for every $x\in X$ and every $A\subset X$. We now introduce the concept of a hyperfinite representation of a Heine-Borel metric space $X$. The intuition is to take a ``large enough'' portion of $\NSE{X}$ containing $X$ and then partition it into hyperfinitely many *Borel sets with infinitesimal diameter. We then pick one ``representative'' from each piece to form a hyperfinite set. The formal definition is given below. \begin{defn}\label{hyperapproxsp} Let $(X,d)$ be a metric space satisfying the Heine-Borel condition. Let $\delta\in {^{*}\mathbb{R}^{+}}$ be an infinitesimal and $r$ be an infinite hyperreal number. A $(\delta,r)$-hyperfinite representation of $X$ is a tuple $(S,\{B(s)\}_{s\in S})$ such that \begin{enumerate} \item $S$ is a hyperfinite subset of $\NSE{X}$. \item $s\in B(s)\in \NSE{\mathcal{B} X}$ for every $s\in S$. \item For every $s\in S$, the diameter of $B(s)$ is no greater than $\delta$. \item $B(s_1)\cap B(s_2)=\emptyset$ for every $s_1\neq s_2\in S$. \item For any $x\in \NS{^{*}X}$, ${^{*}d}(x,{^{*}X}\setminus \bigcup_{s\in S}B(s))>r$. \item There exist $a_0\in X$ and some infinite $r_0$ such that \[ \NS{^{*}X}\subset \bigcup_{s\in S}B(s)=\overline{U(a_0,r_0)} \] where $\overline{U(a_0,r_0)}=\{x\in {^{*}X}: {^{*}d}(x,a_0)\leq r_0\}$. \end{enumerate} The set $S$ is called the \emph{base set} of the hyperfinite representation. For every $x\in \bigcup_{s\in S}B(s)$, we use $s_{x}$ to denote the unique element in $S$ such that $x\in B(s_x)$. \end{defn} If $X$ is compact, we have $\NS{\NSE{X}}=\NSE{X}$ by Theorem \ref{HBfinite}. In this case, we can pick $S$ such that $\bigcup_{s\in S}B(s)=\NSE{X}$, and hence the second parameter of a $(\delta,r)$-hyperfinite representation is redundant. Thus, we shall simply work with a $\delta$-hyperfinite representation in the case where $X$ is compact.
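As a concrete finite analogue of Definition \ref{hyperapproxsp}, the following sketch partitions the compact space $X=[0,1]$ into $n$ disjoint cells of diameter at most $\delta=1/n$, each containing its representative. This is illustrative only: in the definition itself $n$ is an infinite hypernatural, $\delta$ is a positive infinitesimal, and the cells are internal *Borel sets; the function names \texttt{representation} and \texttt{s\_x} are ours.

```python
from fractions import Fraction

# Finite-n analogue of a delta-hyperfinite representation of X = [0, 1]:
# partition X into n disjoint cells B(s) of diameter <= delta = 1/n, each
# containing its representative s.  In the paper's definition, n is an
# infinite hypernatural, so delta becomes a positive infinitesimal.
def representation(n):
    delta = Fraction(1, n)
    # cell k is [k/n, (k+1)/n); we take the left endpoint as representative
    cells = [(k * delta, (k * delta, (k + 1) * delta)) for k in range(n)]
    return delta, cells

def s_x(x, cells):
    """The unique representative s_x with x in B(s_x) (last cell is closed)."""
    for s, (lo, hi) in cells:
        if lo <= x < hi or (x == 1 and hi == 1):
            return s

delta, cells = representation(10)
assert all(lo <= s < hi for s, (lo, hi) in cells)       # s lies in B(s)
assert all(hi - lo <= delta for _, (lo, hi) in cells)   # diameter at most delta
assert s_x(Fraction(47, 100), cells) == Fraction(2, 5)  # x = 0.47 has s_x = 0.4
```

Exact rational arithmetic (via \texttt{Fraction}) stands in for the exact internal arithmetic of the nonstandard model.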
The set $\overline{U(a_0,r_0)}$ can be seen as the *closure of the nonstandard open ball $U(a_0,r_0)$. As $X$ satisfies the Heine-Borel condition, by the transfer principle, $\overline{U(a_0,r_0)}$ is a *compact set. That is, $\overline{U(a_0,r_0)}$ satisfies all the first-order logic properties of a compact set. The next theorem shows that hyperfinite representations always exist. Although the statement appears to be slightly stronger than {\citep[][Thm.~6.6]{Markovpaper}}, its proof is almost identical to the proof of {\citep[][Thm.~6.6]{Markovpaper}} and hence is omitted. \begin{thm}\label{exhyper} Let $X$ be a metric space satisfying the Heine-Borel condition. Then, for every positive infinitesimal $\delta$ and every positive infinite $r$, there exists a $(\delta,r)$-hyperfinite representation $(S_{\delta}^{r},\{B(s)\}_{s\in S_{\delta}^{r}})$ of ${^{*}X}$ such that $X\subset S_{\delta}^{r}$. \end{thm} Suppose $X$ is a Heine-Borel metric space endowed with Borel $\sigma$-algebra $\mathcal{B} X$. Let $P$ be a probability measure on $(X,\mathcal{B} X)$. Let $S$ be the base set of a $(\delta,r)$-hyperfinite representation of $X$ for some positive infinitesimal $\delta$ and some positive infinite number $r$. The next theorem shows that we can define an internal probability measure on $(S,\mathcal{I}(S))$ that gives a ``nice'' approximation of $P$. \begin{thm}[{\citep[][Thm.~6.11]{Markovpaper}}]\label{representationthm} Let $(X,\mathcal{B} X,P)$ be a Borel probability space where $X$ is a metric space satisfying the Heine-Borel condition, and let $(\NSE{X},{}^{*}\mathcal{B} X,{^{*}P})$ be its nonstandard extension. For every positive infinitesimal $\delta$, every positive infinite $r$ and every $(\delta,r)$-hyperfinite representation $(S,\{B(s)\}_{s\in S})$ of ${^{*}X}$, define an internal probability measure $P'$ on $(S,\mathcal{I}(S))$ by letting $P'(\{s\})=\frac{\NSE{P}(B(s))}{{^{*}P(\bigcup_{t\in S}B(t))}}$ for every $s\in S$.
Then we have \begin{enumerate} \item $P'(\{s\})\approx {^{*}P}(B(s))$. \item ${^{*}P(\bigcup_{s\in S}B(s))}\approx 1$. \item $P(E)=\Loeb{P'}(\mathsf{st}^{-1}(E)\cap S)$ for every $E\in \mathcal{B} X$. \end{enumerate} \end{thm} \section{Hyperfinite Representation of General Markov Processes}\label{sechymarkov} Let $\{ P_{x} \}_{x \in X}$ be the transition kernel of a discrete-time Markov process with state space $X$. We assume that $X$ is a metric space satisfying the Heine-Borel condition throughout the rest of the paper unless otherwise mentioned. The transition probability can be viewed as a function $g: X\times \mathbb{N}\times \mathcal{B} X\to [0,1]$ by letting $g(x,t,A)=P_{x}^{(t)}(A)$ for every $x\in X$, $t\in \mathbb{N}$ and $A\in \mathcal{B} X$. We will use $g(x,t,A)$ and $P_{x}^{(t)}(A)$ interchangeably. For any $x\in X$ and $A\in \mathcal{B} X$, let $P_{x}^{(0)}(A)=1$ if $x\in A$ and $P_{x}^{(0)}(A)=0$ if $x\not\in A$. We will construct a hyperfinite object to represent the Markov process $\{X_t\}_{t\in \mathbb{N}}$ associated with the transition kernel $g$. We fix a set $T=\{1,2,\dotsc,K\}$ for some infinite $K\in \NSE{\mathbb{N}}$ throughout the paper. A \emph{hyperfinite Markov process} is defined analogously to finite Markov processes. Namely, a hyperfinite Markov process is characterized by the following four ingredients: \begin{enumerate} \item A \emph{state space} $S$ which is a non-empty hyperfinite set. \item A time line $T$. \item A set $\{\nu_i: i\in S\}\subset \NSE{\mathbb{R}}$ where each $\nu_i\geq 0$ and $\sum_{i\in S}\nu_i=1$. \item A set $\{p_{ij}\}_{i,j\in S}$ of non-negative hyperreals with $\sum_{j\in S}p_{ij}=1$ for every $i\in S$. \end{enumerate} The following theorem shows that it is always possible to construct a hyperfinite Markov process with these parameters.
\begin{thm}[{\citep[][Thm.~7.2]{Markovpaper}}]\label{HMexist} Fix a hyperfinite state space $S$, a time line $T$, a hyperfinite set $\{\nu_i\}_{i\in S}$ and a hyperfinite set $\{p_{ij}\}_{i,j\in S}$ that satisfy the immediately-preceding conditions. Then there exists an internal probability triple $(\Omega,\mathcal{A},\Prob)$ with an internal stochastic process $\{X_t\}_{t\in T}$ defined on $(\Omega,\mathcal{A},\Prob)$ such that \[ \Prob(X_0=i_0,X_{1}=i_{1},\dotsc,X_{t}=i_t)=\nu_{i_{0}}p_{i_{0}i_{1}}\dotsm p_{i_{t-1}i_{t}} \] for all $t\in T$ and $i_0,\dotsc,i_{t}\in S$. \end{thm} As in \citep{Markovpaper}, we will construct a hyperfinite Markov process $\{X'_t\}_{t\in T}$ which is a ``nice'' representation of $\{X_t\}_{t\in \mathbb{N}}$. Due to the similarities between finite objects and hyperfinite objects, $\{X'_t\}_{t\in T}$ inherits many key properties from finite Markov processes. $\{X'_t\}_{t\in T}$ will play an essential role throughout the paper. Pick any positive infinitesimal $\delta$ and any positive infinite number $r$. Let $(S,\{B(s)\}_{s\in S})$ be a $(\delta, r)$-hyperfinite representation of $\NSE{X}$. Let us recall some key properties of $(S,\{B(s)\}_{s\in S})$. \begin{enumerate} \item $s\in B(s)$ for every $s\in S$. \item For every $s\in S$, the diameter of $B(s)$ is no greater than $\delta$. \item $B(s_1)\cap B(s_2)=\emptyset$ for every $s_1\neq s_2\in S$. \item $\NS{^{*}X}\subset \bigcup_{s\in S}B(s)$. \end{enumerate} For every $x\in {^{*}X}$, we know that ${^{*}g}(x,1,\cdot)$ is an internal probability measure on $({^{*}X}, {^{*}\mathcal{B} X})$. We can construct $(S, \{B(s)\}_{s\in S})$ so that: \begin{lemma}[{\citep[][Lemma~9.14]{Markovpaper}}]\label{dsfconsequence} Suppose that $g$ satisfies Assumption \ref{assumptiondsf}.
There exists a hyperfinite representation $(S,\{B(s)\}_{s\in S})$ of $\NSE{X}$ such that, for every $s\in S$, every positive $n\in \mathbb{N}$ and every $A\in \NSE{\mathcal{B} X}$, we have \[ \NSE{g}(x_1,n,A)\approx \NSE{g}(x_2,n,A) \] for every $x_1,x_2\in B(s)$. \end{lemma} We fix such a representation $(S, \{B(s)\}_{s\in S})$ for the rest of the paper. When $X$ is non-compact, $\bigcup_{s\in S}B(s)\neq {^{*}X}$. Hence, we need to truncate ${^{*}g}$ to obtain an internal probability measure on $\bigcup_{s\in S}B(s)$. \begin{defn}\label{dtrunchain} For $i\in \{0,1\}$, let $g'(x,i,A): \bigcup_{s\in S}B(s)\times {^{*}\mathcal{B} X} \to {^{*}[0,1]}$ be given by \[ g'(x,i,A)={^{*}g(x,i,A\cap\bigcup_{s\in S}B(s))}+\delta(x,A){^{*}g(x,i, {^{*}X}\setminus\bigcup_{s\in S}B(s))}, \] where $\delta(x,A)=1$ if $x\in A$ and $\delta(x,A)=0$ otherwise. \end{defn} We now define a hyperfinite Markov process $\{X'_t\}_{t\in T}$ on $S$ by specifying its internal transition kernels. We will use $G_{i}^{(t)}(\{j\})$ or $G_{ij}^{(t)}$ to denote the internal transition probability from $i$ to $j$ at time $t$. For $i,j\in S$, define $G_{ij}^{(0)}=g'(i,0,B(j))$ and $G_{ij}=g'(i,1,B(j))$. For every $t\in T$, we define $G_{ij}^{(t)}$ by the inductive formula $G_{ij}^{(t+1)} = \sum_{k} G_{ik} G_{kj}^{(t)}$. For any internal set $A\subset S$ and any $i\in S$, let $G_{i}^{(0)}(A)=\sum_{j\in A}G_{ij}^{(0)}$ and $G_{i}(A)=\sum_{j\in A}G_{ij}$. It follows from the definition that $G_{ij}^{(0)}=1$ if $i=j$ and $G_{ij}^{(0)}=0$ otherwise. The following lemma verifies that $G_{i}^{(t)}(\cdot)$ defines an internal probability measure on $S$ for every $t\in \mathbb{N}$ and $i\in S$. \begin{lemma}[{\citep[][Lemma~8.13]{Markovpaper}}]\label{allinternal} For any $i\in S$ and any $t\in \mathbb{N}$, $G_{i}^{(t)}(\cdot)$ is an internal probability measure on $(S,\mathcal{I}(S))$. \end{lemma} We now quote the following two key results from \citep{Markovpaper}.
\begin{thm}[{\citep[][Thm.~8.14]{Markovpaper}}]\label{Gapprox} Suppose $\{g(x,1,\cdot)\}_{x\in X}$ satisfies Assumption \ref{assumptiondsf}. Then for any $t\in \mathbb{N}$, any $s\in \NS{S}$ and any $A\in {^{*}\mathcal{B} X}$, we have ${^{*}g}(s,t,\bigcup_{a\in A\cap S}B(a))\approx G_{s}^{(t)}(A\cap S)$. \end{thm} \begin{thm}[{\citep[][Lemma~8.15]{Markovpaper}}]\label{disrepresent} Suppose $\{g(x,1,\cdot)\}_{x\in X}$ satisfies Assumption \ref{assumptiondsf}. Then for any $s\in \NS{S}$, any $t\in \mathbb{N}$ and any $E\in \mathcal{B} X$, $g(\mathsf{st}(s),t,E)=\overline{G}_{s}^{(t)}(\mathsf{st}^{-1}(E)\cap S)$. \end{thm} These theorems show that the transition probabilities of $\{X_t\}_{t\in \mathbb{N}}$ agree with the Loeb extension of the internal transition probabilities of $\{X'_t\}_{t\in T}$ via the standard part map. Such an $\{X'_t\}_{t\in T}$ is called a hyperfinite representation of $\{X_t\}_{t\in \mathbb{N}}$. \subsection{Hyperfinite Representation of the Lazy Chain}\label{seclazy} For discrete-time Markov processes, one considers a lazy version of the original Markov process to avoid periodicity and near-periodicity issues. Let $g$ be the transition kernel of a discrete-time Markov process. We denote by $g_{L}$ the lazy kernel of $g$, given by the formula $g_L(x,1,A)=\frac{1}{2}g(x,1,A)+\frac{1}{2}\delta(x,A)$ for every $x\in X$ and every $A\in \mathcal{B} X$, where we recall $\delta(x,A)=1$ if $x\in A$ and $\delta(x,A)=0$ if $x\not\in A$. Note that $g_{L}$ generally does not satisfy Assumption \ref{assumptiondsf} even if $g$ does. Suppose $g$ satisfies Assumption \ref{assumptiondsf} and let $G$ be a hyperfinite representation of $g$. In addition, let $\{X'_t\}_{t\in T}$ be a hyperfinite Markov process associated with the internal transition kernel $G$. Both $G$ and $\{X'_t\}_{t\in T}$ will be fixed until the applications in Section \ref{statapp}.
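The purpose of passing to the lazy kernel is easiest to see in a small finite example (illustrative only, not part of the hyperfinite construction above; the helper names are ours): the deterministic two-state flip chain is reversible but has period two, so its $t$-step distributions oscillate forever, while its lazy chain converges to $\pi=(\tfrac{1}{2},\tfrac{1}{2})$.

```python
# Finite-state sketch of the lazy kernel L = (1/2)P + (1/2)I and why it is
# introduced: the deterministic two-state "flip" chain has period 2, so its
# t-step distributions never converge, while its lazy chain reaches the
# stationary distribution pi = (1/2, 1/2).  (Illustrative finite analogue.)

def lazify(P):
    n = len(P)
    return [[0.5 * P[i][j] + (0.5 if i == j else 0.0) for j in range(n)]
            for i in range(n)]

def step(mu, P):  # one transition: row vector mu times matrix P
    n = len(P)
    return [sum(mu[i] * P[i][j] for i in range(n)) for j in range(n)]

def tv(mu, nu):   # total variation distance on a finite state space
    return 0.5 * sum(abs(m - v) for m, v in zip(mu, nu))

P = [[0.0, 1.0], [1.0, 0.0]]   # flip chain: reversible, but period 2
L = lazify(P)                  # for this P, L is the uniform kernel
pi = [0.5, 0.5]

mu_P, mu_L = [1.0, 0.0], [1.0, 0.0]
for _ in range(10):
    mu_P, mu_L = step(mu_P, P), step(mu_L, L)

assert tv(mu_P, pi) == 0.5     # the flip chain never gets closer than 1/2
assert tv(mu_L, pi) == 0.0     # the lazy chain has already mixed exactly
```

Here the lazy chain happens to mix in a single step; in general laziness only guarantees aperiodicity, at the cost of a factor of two in the mixing time.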
The lazy chain of $\{X'_t\}_{t\in T}$ is defined to be a hyperfinite Markov process with transition probabilities $L_{ij}^{(0)}=G_{ij}^{(0)}$ and $L_{ij}=\frac{1}{2}G_{ij}+\frac{1}{2}\Delta(i,j)$, where $\Delta(i,j)=1$ if $i=j$ and $\Delta(i,j)=0$ if $i\neq j$. Thus, for every $i\in S$ and $A\in \mathcal{I}(S)$ we have \[ L_{i}(A)=\sum_{j\in A}L_{ij}=\sum_{j\in A}\Bigl(\frac{1}{2}G_{ij}+\frac{1}{2}\Delta(i,j)\Bigr)=\frac{1}{2}G_{i}(A)+\frac{1}{2}\Delta(i,A), \] where $\Delta(i,A)=1$ if $i\in A$ and $\Delta(i,A)=0$ if $i\not\in A$. For every $i\in S$, $A\in \mathcal{I}(S)$ and every $t\in T$, we define $L_{i}^{(t+1)}(A)$ by the inductive formula $L_{i}^{(t+1)}(A)=\sum_{j\in S}L_{ij}L_{j}^{(t)}(A)$. Before proving the main result of this section, we quote the following useful lemma. \begin{lemma}[{\citep[][Lemma~7.24]{Markovpaper}}] \label{tvfunction} Let $P_1$ and $P_2$ be two internal probability measures on a hyperfinite set $S$. Then \[ \parallel P_1(\cdot)-P_2(\cdot) \parallel \geq \NSE{\sup}_{f: S\to {^{*}[0,1]}}\Bigl|\sum_{i\in S}P_{1}(\{i\})f(i)-\sum_{i\in S}P_{2}(\{i\})f(i)\Bigr|, \] where $\parallel P_1(\cdot)-P_2(\cdot) \parallel=\NSE{\sup}_{A\in \mathcal{I}(S)}|P_{1}(A)-P_{2}(A)|$ and the $\NSE{\sup}$ is taken over all internal functions $f$. \end{lemma} We now prove the following representation theorem, which is similar in spirit to Theorem \ref{Gapprox}: \begin{thm}\label{lazystar} Suppose $\{g(x,1,\cdot)\}_{x\in X}$ satisfies Assumption \ref{assumptiondsf}. Then for any $t\in \mathbb{N}$, any $x\in \NS{\NSE{X}}$ and any $A\in {^{*}\mathcal{B} X}$, we have ${^{*}g_{L}}(x,t,\bigcup_{a\in A\cap S}B(a))\approx L_{s_{x}}^{(t)}(A\cap S)$, where $s_{x}$ is the unique element in $S$ such that $x\in B(s_x)$. \end{thm} \begin{proof} We prove it by induction on $t\in \mathbb{N}$. Let $t=1$.
Pick any $x\in \NS{\NSE{X}}$ and any $A\in \NSE{\mathcal{B} X}$. By Lemma \ref{dsfconsequence} and Theorem \ref{Gapprox}, we have \[ {^{*}g_{L}}(x,1,\bigcup_{a\in A\cap S}B(a))&=\frac{1}{2}\NSE{g}(x,1,\bigcup_{a\in A\cap S}B(a))+\frac{1}{2}\NSE{\delta}(x,\bigcup_{a\in A\cap S}B(a))\\ &\approx \frac{1}{2}\NSE{g}(s_x,1,\bigcup_{a\in A\cap S}B(a))+\frac{1}{2}\Delta(s_x,A\cap S)\\ &\approx \frac{1}{2}G_{s_{x}}(A\cap S)+\frac{1}{2}\Delta(s_{x},A\cap S)=L_{s_{x}}(A\cap S). \] Suppose the theorem holds for $t=n$. We now show that the theorem holds for $t=n+1$. By the transfer of the Markov property, we have \[\label{glcalstart} &{^{*}g_{L}}(x,n+1,\bigcup_{a\in A\cap S}B(a))\\ &=\int \NSE{g_{L}}(y,n,\bigcup_{a\in A\cap S}B(a))\NSE{g_{L}}(x,1,\mathrm{d} y)\\ &\approx \int_{\bigcup_{s\in S}B(s)}\NSE{g_{L}}(y,n,\bigcup_{a\in A\cap S}B(a))\NSE{g_{L}}(x,1,\mathrm{d} y), \] where the last $\approx$ follows from the fact that $\NSE{g_{L}}(x,1,\bigcup_{s\in S}B(s))=1$. By the induction hypothesis, we know that $\NSE{g_{L}}(y,n,\bigcup_{a\in A\cap S}B(a))\approx L_{s_{y}}^{(n)}(A\cap S)$ for every $y\in \bigcup_{s\in S}B(s)$. Thus, we have \[ &\int_{\bigcup_{s\in S}B(s)}\NSE{g_{L}}(y,n,\bigcup_{a\in A\cap S}B(a))\NSE{g_{L}}(x,1,\mathrm{d} y)\\ &\approx \int_{\bigcup_{s\in S}B(s)}L_{s_{y}}^{(n)}(A\cap S)\NSE{g_{L}}(x,1,\mathrm{d} y)\\ &=\sum_{s\in S}L_{s}^{(n)}(A\cap S)\NSE{g_{L}}(x,1,B(s))\\ &=\sum_{s\in S}L_{s}^{(n)}(A\cap S)(\frac{1}{2}\NSE{g}(x,1,B(s))+\frac{1}{2}\NSE{\delta}(x,B(s)))\\ &=L_{s_x}^{(n)}(A\cap S)(\frac{1}{2}\NSE{g}(x,1,B(s_x))+\frac{1}{2})+\frac{1}{2}\sum_{s\neq s_x}L_{s}^{(n)}(A\cap S)\NSE{g}(x,1,B(s)). \label{glcalend} \] We must now calculate the second term. By Lemma \ref{dsfconsequence}, we have $\NSE{g}(x,1,B(s_x))\approx \NSE{g}(s_x,1,B(s_x))$. By Definition \ref{dtrunchain}, we know that $\NSE{g}(s_x,1,B(s_x))\approx G_{s_{x}s_{x}}$.
We will now show that \[ \sum_{s\neq s_x}L_{s}^{(n)}(A\cap S)\NSE{g}(x,1,B(s))\approx \sum_{s\neq s_x}L_{s}^{(n)}(A\cap S)\NSE{g}(s_x,1,B(s)) \] by considering two cases: $\NSE{g}(x,1,\bigcup_{s\neq s_x}B(s))\approx 0$ and $\NSE{g}(x,1,\bigcup_{s\neq s_x}B(s))\not\approx 0$. If $\NSE{g}(x,1,\bigcup_{s\neq s_x}B(s))\approx 0$, by Section \ref{assumptiondsf}, we have $\NSE{g}(s_x,1,\bigcup_{s\neq s_x}B(s))\approx 0$. Thus, we have $\sum_{s\neq s_x}L_{s}^{(n)}(A\cap S)\NSE{g}(x,1,B(s))\approx \sum_{s\neq s_x}L_{s}^{(n)}(A\cap S)\NSE{g}(s_x,1,B(s))\approx 0$. In the case $\NSE{g}(x,1,\bigcup_{s\neq s_x}B(s))\not\approx 0$, by Section \ref{assumptiondsf}, we have $\NSE{g}(s_x,1,\bigcup_{s\neq s_x}B(s))\not\approx 0$. This allows us to define $P_1,P_2: \mathcal{I}(S)\to \NSE{[0,1]}$ by the formulae $P_1(A)=\frac{\NSE{g}(x,1,\bigcup_{s\in (A\cap S\setminus\{s_x\})}B(s))}{\NSE{g}(x,1,\bigcup_{s\neq s_x}B(s))}$ and $P_2(A)=\frac{\NSE{g}(s_x,1,\bigcup_{s\in (A\cap S\setminus\{s_x\})}B(s))}{\NSE{g}(s_x,1,\bigcup_{s\neq s_x}B(s))}$. Then both $P_1$ and $P_2$ are internal probability measures on $S$. By Section \ref{assumptiondsf}, we know that $\parallel P_1(\cdot)-P_2(\cdot) \parallel\approx 0$. By Lemma \ref{tvfunction}, this implies \[ \sum_{s\neq s_x}L_{s}^{(n)}(A\cap S)\NSE{g}(x,1,B(s))\approx \sum_{s\neq s_x}L_{s}^{(n)}(A\cap S)\NSE{g}(s_x,1,B(s)) \] in our second case as well, so this equality always holds. By Definition \ref{dtrunchain}, we know that $\NSE{g}(s_x,1,B(s))=G_{s_{x}s}$ for $s\neq s_{x}$. Hence we always have $\frac{1}{2}\sum_{s\neq s_x}L_{s}^{(n)}(A\cap S)\NSE{g}(x,1,B(s))\approx \frac{1}{2}\sum_{s\neq s_x}L_{s}^{(n)}(A\cap S)G_{s_{x}s}$. 
Thus, combining \eqref{glcalstart}--\eqref{glcalend}, we have \[ &{^{*}g_{L}}(x,n+1,\bigcup_{a\in A\cap S}B(a))\\ &=L_{s_x}^{(n)}(A\cap S)(\frac{1}{2}\NSE{g}(x,1,B(s_x))+\frac{1}{2})+\frac{1}{2}\sum_{s\neq s_x}L_{s}^{(n)}(A\cap S)\NSE{g}(x,1,B(s))\\ &\approx L_{s_x}^{(n)}(A\cap S)(\frac{1}{2}G_{s_{x}s_{x}}+\frac{1}{2})+\frac{1}{2}\sum_{s\neq s_x}L_{s}^{(n)}(A\cap S)G_{s_{x}s}. \] On the other hand, we have \[ L_{s_x}^{(n+1)}(A\cap S)&=\sum_{s\in S}L_{s_{x}s}L_{s}^{(n)}(A\cap S)\\ &=\sum_{s\in S}(\frac{1}{2}G_{s_{x}s}+\frac{1}{2}\Delta(s_x,s))L_{s}^{(n)}(A\cap S)\\ &=\sum_{s\neq s_x}\frac{1}{2}G_{s_{x}s}L_{s}^{(n)}(A\cap S)+(\frac{1}{2}G_{s_{x}s_{x}}+\frac{1}{2})L_{s_x}^{(n)}(A\cap S). \] Thus, we can conclude that ${^{*}g_{L}}(x,n+1,\bigcup_{a\in A\cap S}B(a))\approx L_{s_{x}}^{(n+1)}(A\cap S)$. By induction, we have the desired result. \end{proof} The following well-known nonstandard representation theorem is due to Robert Anderson. \begin{lemma}[{\citep[][Thm.~3.3]{anderson87}}]\label{loebapproximate} Let $(X,\mathcal{B} X,\mu)$ be a $\sigma$-compact Borel probability space. Then $\mathsf{st}$ is measure preserving from $({^{*}X},\overline{^{*}\mathcal{B} X},\overline{^{*}\mu})$ to $(X,\mathcal{B} X,\mu)$. That is, we have $\mu(E)=\overline{^{*}\mu}(\mathsf{st}^{-1}(E))$ for all $E\in \mathcal{B} X$. \end{lemma} Note that every Heine-Borel space is $\sigma$-compact. We also recall that the hyperfinite state space $S$ of $\{X'_t\}_{t\in T}$ contains $X$ as a subset. We now present the following hyperfinite representation theorem for lazy chains. The proof is very similar to that of Theorem \ref{disrepresent} and is hence omitted. \begin{thm}\label{lazystandard} Suppose that the transition kernel of $\{X_t\}_{t\in \mathbb{N}}$ satisfies Section \ref{assumptiondsf}. Then for every $x\in X$, every $t\in \mathbb{N}$ and every $E\in \mathcal{B} X$, we have $g_{L}(x,t,E)=\Loeb{L}_{x}^{(t)}(\mathsf{st}^{-1}(E)\cap S)$.
\end{thm} \subsection{Hyperfinite Representation of Stationary Distribution} Let $\pi$ be a stationary distribution of $\kernel$. We construct an analogous object in the hyperfinite representation $\kerp$, called the ``weakly stationary distribution''. \begin{defn}\label{defnstationary} Let $\Pi$ be an internal probability measure on $(S,\mathcal{I}(S))$. We say $\Pi$ is a weakly stationary distribution for $\kerp$ if there exists an infinite $t_0\in T$ such that for any $t\leq t_0$ and any $A\in \mathcal{I}(S)$ we have $\Pi(A)\approx\sum_{i\in S}\Pi(\{i\})G_{i}^{(t)}(A)$. \end{defn} In \citep{Markovpaper}, the authors show that weakly stationary distributions exist for hyperfinite representations of general state space continuous time Markov processes under moderate regularity conditions. In this section, we show that weakly stationary distributions exist for Markov processes with transition kernel satisfying Section \ref{assumptiondsf}. We start by giving an explicit construction of a weakly stationary distribution from the standard stationary distribution. \begin{defn}\label{defwksta} Let $\pi$ be the stationary distribution for $\kernel$. Let $\pi'$ be an internal probability measure on $(S,\mathcal{I}(S))$ satisfying \begin{itemize} \item For all $s\in S$, let $\pi'(\{s\})=\frac{\NSE{\pi}(B(s))}{\NSE{\pi}(\bigcup_{t\in S}B(t))}$. \item For every internal set $A\subset S$, let $\pi'(A)=\sum_{s\in A}\pi'(\{s\})$. \end{itemize} \end{defn} It is straightforward to verify from Definition \ref{defwksta} that \[ \label{EqPiPrimeSt} \pi'(A)\approx \NSE{\pi}(\bigcup_{s\in A}B(s)) \] for every $A\in \mathcal{I}(S)$. The following theorem relates $\pi'$ and $\pi$. \begin{thm}[{\citep[][Lemma~8.15]{Markovpaper}}]\label{starepresent} $\pi'$ is an internal probability measure on $(S,\mathcal{I}(S))$. Moreover, for every $A\in \mathcal{B} X$, we have $\Loeb{\pi'}(\mathsf{st}^{-1}(A)\cap S)=\pi(A)$. \end{thm} We now show that $\pi'$ is a weakly stationary distribution for $\kerp$.
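Before doing so, we record why the approximation \eqref{EqPiPrimeSt} holds, assuming (as in the construction of the hyperfinite representation) that the sets $B(s)$, $s\in S$, are disjoint and that $\NSE{\pi}(\bigcup_{t\in S}B(t))\approx 1$:

```latex
% By Definition \ref{defwksta} and additivity of the internal measure
% \NSE{\pi} over the disjoint sets B(s),
\[
  \pi'(A)
  = \sum_{s\in A}\frac{\NSE{\pi}(B(s))}{\NSE{\pi}(\bigcup_{t\in S}B(t))}
  = \frac{\NSE{\pi}(\bigcup_{s\in A}B(s))}{\NSE{\pi}(\bigcup_{t\in S}B(t))}
  \approx \NSE{\pi}\Big(\bigcup_{s\in A}B(s)\Big),
\]
% where the last step divides by a quantity infinitesimally close to 1.
```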
\begin{thm}\label{wkstationary} Suppose $\{g(x,1,\cdot)\}_{x\in X}$ satisfies Section \ref{assumptiondsf}. Let $\pi$ be the stationary distribution of $\kernel$. Then $\pi'$ as in Definition \ref{defwksta} is a weakly stationary distribution for $\kerp$. \end{thm} \begin{proof} Pick an internal set $A\subset S$ and some $t\in \mathbb{N}$. By the transfer principle, we have $\pi'(A)\approx \NSE{\pi}(\bigcup_{a\in A}B(a))=\int \NSE{g}(x,t, \bigcup_{a\in A}B(a))\NSE{\pi}(\mathrm{d} x)$. Pick $\epsilon>0$. There is a compact set $K\subset X$ such that $\sum_{s\in S}\pi'(\{s\})G_{s}^{(t)}(A)-\sum_{s\in \NSE{K}\cap S}\pi'(\{s\})G_{s}^{(t)}(A)<\epsilon$ and $\int \NSE{g}(x,t, \bigcup_{a\in A}B(a))\NSE{\pi}(\mathrm{d} x)-\int_{S_{K}} \NSE{g}(x,t, \bigcup_{a\in A}B(a))\NSE{\pi}(\mathrm{d} x)<\epsilon$, where $S_K=\bigcup_{s\in \NSE{K}\cap S}B(s)$. As our choice of $\epsilon$ is arbitrary, to show $\pi'(A)\approx \sum_{s\in S}\pi'(\{s\})G_{s}^{(t)}(A)$, it is sufficient to show that $\sum_{s\in \NSE{K}\cap S}\pi'(\{s\})G_{s}^{(t)}(A)\approx \int_{S_{K}} \NSE{g}(x,t, \bigcup_{a\in A}B(a))\NSE{\pi}(\mathrm{d} x)$. Note that we have \[ \int_{S_{K}} \NSE{g}(x,t, \bigcup_{a\in A}B(a))\NSE{\pi}(\mathrm{d} x)&=\int_{\bigcup_{s\in \NSE{K}\cap S}B(s)}\NSE{g}(x,t, \bigcup_{a\in A}B(a))\NSE{\pi}(\mathrm{d} x)\\ &=\sum_{s\in \NSE{K}\cap S}\int_{B(s)}\NSE{g}(x,t,\bigcup_{a\in A}B(a))\NSE{\pi}(\mathrm{d} x). \] By Lemma \ref{dsfconsequence}, we have \[ \label{ApprEqBsNse} \int_{B(s)}\NSE{g}(x,t,\bigcup_{a\in A}B(a))\NSE{\pi}(\mathrm{d} x)\approx \int_{B(s)}\NSE{g}(s,t,\bigcup_{a\in A}B(a))\NSE{\pi}(\mathrm{d} x). \] As $\sum_{s\in \NSE{K}\cap S}\NSE{\pi}(B(s))<1$, by Theorem \ref{Gapprox}, we have \[ &\sum_{s\in \NSE{K}\cap S}\int_{B(s)}\NSE{g}(x,t,\bigcup_{a\in A}B(a))\NSE{\pi}(\mathrm{d} x)\\ &\stackrel{\text{Eq.
} \eqref{ApprEqBsNse}}{\approx} \sum_{s\in \NSE{K}\cap S}\int_{B(s)}\NSE{g}(s,t,\bigcup_{a\in A}B(a))\NSE{\pi}(\mathrm{d} x)\\ &=\sum_{s\in \NSE{K}\cap S}\NSE{g}(s,t,\bigcup_{a\in A}B(a))\NSE{\pi}(B(s))\\ &\stackrel{\text{Thm.~\ref{Gapprox}}}{\approx} \sum_{s\in \NSE{K}\cap S}G_{s}^{(t)}(A)\NSE{\pi}(B(s))\\ &\stackrel{\text{Eq. } \eqref{EqPiPrimeSt}}{\approx} \sum_{s\in \NSE{K}\cap S}G_{s}^{(t)}(A)\pi'(\{s\}). \] Thus, we can conclude that $\pi'(A)\approx \sum_{s\in S}\pi'(\{s\})G_{s}^{(t)}(A)$ for every $A\in \mathcal{I}(S)$ and every $t\in \mathbb{N}$. Let $\mathcal{D}=\{t\in T: (\forall t'\in T\ \text{with}\ 1\leq t'\leq t)(\forall A\in \mathcal{I}(S))(|\pi'(A)-\sum_{s\in S}\pi'(\{s\})G_{s}^{(t')}(A)|<\frac{1}{t'})\}$. Then $\mathcal{D}$ is internal and contains every positive $t\in \mathbb{N}$. By overspill, there exists an infinite $t_0\in \mathcal{D}$; in particular, $|\pi'(A)-\sum_{s\in S}\pi'(\{s\})G_{s}^{(t)}(A)|<\frac{1}{t}$ for all $A\in \mathcal{I}(S)$ and all $1\leq t\leq t_0$. For infinite $t\leq t_0$ this bound is infinitesimal, and for finite $t$ the approximation was established above. Thus, we have $\pi'(A)\approx \sum_{s\in S}\pi'(\{s\})G_{s}^{(t)}(A)$ for all $A\in \mathcal{I}(S)$ and all $t\leq t_0$. \end{proof} \subsection{Hyperfinite Representation of Reversible Markov Processes}\label{secreverse} Recall that a Markov process is \emph{reversible} if it satisfies Equation \eqref{EqDefRev}; in particular, a Markov chain on a finite state space is reversible if and only if \[ \pi(\{i\})g(i,1,\{j\})=\pi(\{j\})g(j,1,\{i\}) \] for every $i,j$ in the state space $X$. If $\{g(x,1,\cdot)\}_{x\in X}$ is reversible, then its hyperfinite representation $\kerp$ is ``almost'' reversible in the sense that \[ \sum_{s\in S_1}G_{s}^{(t)}(S_2)\pi'(\{s\})\approx \sum_{s\in S_2}G_{s}^{(t)}(S_1)\pi'(\{s\}) \] for every $S_1,S_2\in \mathcal{I}(S)$ and every $t\in \mathbb{N}$. We now show that $\kerp$ is ``infinitesimally close'' to a *reversible process. \begin{thm}\label{closereverse} Suppose $\{g(x,1,\cdot)\}_{x\in X}$ is reversible with stationary measure $\pi$ and satisfies Section \ref{assumptiondsf}.
Then there exists an internal transition kernel $\{H_{s}(\cdot)\}_{s\in S}$ that is *reversible with respect to $\pi'$ and satisfies \[ \max_{s\in S}\parallel G^{(t)}_{s}(\cdot)-H^{(t)}_{s}(\cdot) \parallel\approx 0 \] for every $t\in \mathbb{N}$. \end{thm} \begin{proof} For every $x\in \NSE{X}$ and $A\in \NSE{\mathcal{B} X}$, define \[ F(x,A)=\NSE{\delta}(x,A)\NSE{g}(x,1,\NSE{X}\setminus \bigcup_{s\in S}B(s)) \] for notational convenience. Thus, for every $i,j\in S$, by Definition \ref{dtrunchain}, we have $G_{ij}=\NSE{g}(i,1,B(j))+F(i,B(j))$. Note that $\pi'(\{i\})=\frac{\NSE{\pi}(B(i))}{\NSE{\pi}(\bigcup_{s\in S}B(s))}$. For every $i,j\in S$ with $\pi'(\{i\})\neq 0$, define \[ H_{ij}=\frac{1}{\pi'(\{i\})}\int_{B(i)}\big(\NSE{g}(x,1,B(j))+F(x,B(j))\big)\frac{\NSE{\pi}(\mathrm{d} x)}{\NSE{\pi}(\bigcup_{s\in S}B(s))}. \] For $i$ with $\pi'(\{i\})=0$, set $H_{ij}=G_{ij}=\NSE{g}(i,1,B(j))+F(i,B(j))$. For every $i\in S$ and $A\in \mathcal{I}(S)$, define $H_{i}(A)=\sum_{j\in A}H_{ij}$. It is straightforward to verify that $H_{i}(\cdot)$ defines an internal probability measure on $(S, \mathcal{I}(S))$ for every $i\in S$. We now show that the hyperfinite Markov process with internal transition kernel $\{H_i(\cdot)\}_{i\in S}$ is *reversible. \begin{claim}\label{starproperty} The internal transition matrix $\{H_i(\cdot)\}_{i\in S}$ is *reversible with *stationary distribution $\pi'$. \end{claim} \begin{proof} We start by showing that $\pi'$ is a *stationary distribution of $\{H_i(\cdot)\}_{i\in S}$. Let $S_0=\{s\in S: \NSE{\pi}(B(s))>0\}$. For $s\in S\setminus S_0$, note that $\int_{B(s)}\NSE{g}(x,1,B(j))\NSE{\pi}(\mathrm{d} x)=0$ for every $j\in S$.
Thus, for every $j\in S$, we have \[ &\sum_{i\in S}\pi'(\{i\})H_{ij}\\ &=\frac{1}{\NSE{\pi}(\bigcup_{s\in S}B(s))}\sum_{i\in S_{0}}\pi'(\{i\})\frac{\int_{B(i)}\NSE{g}(x,1,B(j))+F(x,B(j))\NSE{\pi}(\mathrm{d} x)}{\pi'(\{i\})}\\ &=\frac{1}{\NSE{\pi}(\bigcup_{s\in S}B(s))}\sum_{i\in S}\int_{B(i)}\NSE{g}(x,1,B(j))+F(x,B(j))\NSE{\pi}(\mathrm{d} x)\\ &=\frac{1}{\NSE{\pi}(\bigcup_{s\in S}B(s))}(\NSE{\pi}(B(j))-\int_{\NSE{X}\setminus \bigcup_{s\in S}B(s)}\NSE{g}(x,1,B(j))\NSE{\pi}(\mathrm{d} x)+ \int_{B(j)}F(x,B(j))\NSE{\pi}(\mathrm{d} x))\\ &=\frac{\NSE{\pi}(B(j))}{\NSE{\pi}(\bigcup_{s\in S}B(s))}=\pi'(\{j\}). \] Hence $\pi'$ is a *stationary distribution of $\{H_i(\cdot)\}_{i\in S}$. We now show that the hyperfinite Markov process with internal transition kernel $\{H_i(\cdot)\}_{i\in S}$ is *reversible with respect to its *stationary distribution $\pi'$. For $t\in S\setminus S_0$, we have \[ \pi'(\{t\})H_{tj}=0=\frac{1}{\NSE{\pi}(\bigcup_{s\in S}B(s))}\int_{B(t)}\NSE{g}(x,1,B(j))+F(x,B(j))\NSE{\pi}(\mathrm{d} x) \] for every $j\in S$. Thus, for every $i,j\in S$, we have \[ &\pi'(\{i\})H_{ij}\\ &=\frac{1}{\NSE{\pi}(\bigcup_{s\in S}B(s))}\int_{B(i)}\NSE{g}(x,1,B(j))+F(x,B(j))\NSE{\pi}(\mathrm{d} x)\\ &=\frac{1}{\NSE{\pi}(\bigcup_{s\in S}B(s))}(\int_{B(j)}\NSE{g}(x,1,B(i))\NSE{\pi}(\mathrm{d} x)+\int_{B(j)}F(x,B(i))\NSE{\pi}(\mathrm{d} x))\\ &=\frac{1}{\NSE{\pi}(\bigcup_{s\in S}B(s))}(\int_{B(j)}\NSE{g}(x,1,B(i))+F(x,B(i))\NSE{\pi}(\mathrm{d} x))\\ &=\pi'(\{j\})H_{ji}. \] \end{proof} We now prove the theorem by induction on $t\in \mathbb{N}$. Let $t=1$. Pick $s\in S$ and $A\in \mathcal{I}(S)$. If $\NSE{\pi}(B(s))=0$, then we have $G_{s}(A)=H_{s}(A)$. Suppose $\NSE{\pi}(B(s))\neq 0$. Pick $m\in \mathbb{N}$.
We have \[ &|G_{s}(A)-H_{s}(A)|\\ &\leq \frac{\int_{B(s)}|\NSE{g}(s,1,\bigcup_{a\in A}B(a))+F(s,\bigcup_{a\in A}B(a))-\NSE{g}(x,1,\bigcup_{a\in A}B(a))-F(x,\bigcup_{a\in A}B(a))|\NSE{\pi}(\mathrm{d} x)}{\NSE{\pi}(B(s))}\\ &\stackrel{\text{Lemma \ref{dsfconsequence}}}{\leq} \frac{\frac{1}{m}\NSE{\pi}(B(s))}{\NSE{\pi}(B(s))}=\frac{1}{m}. \] As our choices of $s$, $A$ and $m$ are arbitrary, we have $\max_{s\in S} \parallel G_{s}(\cdot)-H_{s}(\cdot) \parallel\approx 0$. Suppose that the theorem holds for $t=n$. We now establish the result for $t=n+1$. Pick $s\in S$ and $A\in \mathcal{I}(S)$. We have \[ &|G_{s}^{(n+1)}(A)-H_{s}^{(n+1)}(A)|\\ &=|\sum_{i\in S}G_{si}G_{i}^{(n)}(A)-\sum_{i\in S}H_{si}H_{i}^{(n)}(A)|\\ &\leq \sum_{i\in S}G_{si}|G_{i}^{(n)}(A)-H_{i}^{(n)}(A)|+|\sum_{i\in S}(G_{si}-H_{si})H_{i}^{(n)}(A)|\\ &\leq \max_{i\in S}\parallel G_{i}^{(n)}(\cdot)-H_{i}^{(n)}(\cdot) \parallel+\parallel G_{s}(\cdot)-H_{s}(\cdot) \parallel\approx 0, \] where the first term is infinitesimal by the induction hypothesis and the second by Lemma \ref{tvfunction} and the base case. As our choices of $s$ and $A$ are arbitrary, we have $\max_{s\in S} \parallel G_{s}^{(n+1)}(\cdot)-H_{s}^{(n+1)}(\cdot) \parallel\approx 0$, completing the proof. \end{proof} Throughout the paper, we shall denote the hyperfinite Markov process on $S$ with the internal transition matrix $\{H_{ij}\}_{i,j\in S}$ by $\{Z_{t}\}_{t\in T}$. As the total variation distance between $\kerp$ and $\{H_i\}_{i \in S}$ is infinitesimal, it is not surprising that $\{H_i\}_{i \in S}$ can be used as a hyperfinite representation of $\kerp$. The following two theorems follow easily from Theorems \ref{Gapprox}, \ref{disrepresent} and \ref{closereverse}; hence their proofs are omitted. \begin{thm}\label{hGapprox} Suppose $\{g(x,1,\cdot)\}_{x\in X}$ satisfies Section \ref{assumptiondsf}. Then for any $t\in \mathbb{N}$, any $s\in \NS{S}$ and any $A\in {^{*}\mathcal{B} X}$, we have ${^{*}g}(s,t,\bigcup_{a\in A\cap S}B(a))\approx H_{s}^{(t)}(A\cap S)$. \end{thm} \begin{thm}\label{hdisrepresent} Suppose $\{g(x,1,\cdot)\}_{x\in X}$ satisfies Section \ref{assumptiondsf}.
Then for any $s\in \NS{S}$, any $t\in \mathbb{N}$ and any $E\in \mathcal{B} X$, $g(\mathsf{st}(s),t,E)=\overline{H}_{s}^{(t)}(\mathsf{st}^{-1}(E)\cap S)$. \end{thm} Define the lazy transition kernel $\{I_{ij}\}_{i,j \in S}$ associated with $\{H_{ij}\}_{i,j \in S}$ to be a collection of internal transition probabilities satisfying the initial conditions $I^{(0)}_{ij}=H^{(0)}_{ij}$ and $I_{ij}=\frac{1}{2}H_{ij}+\frac{1}{2}\Delta(i,j)$, where $\Delta(i,j)=1$ if $i=j$ and $\Delta(i,j)=0$ if $i\neq j$. For every $i\in S$ and $A\in \mathcal{I}(S)$ we then have \[ I_{i}(A) \equiv \sum_{j\in A}I_{ij}=\sum_{j\in A}(\frac{1}{2}H_{ij}+\frac{1}{2}\Delta(i,j))=\frac{1}{2}H_{i}(A)+\frac{1}{2}\Delta(i,A) \] where $\Delta(i,A)=1$ if $i\in A$ and $\Delta(i,A)=0$ if $i\not\in A$. The following result shows that the total variation distance between the lazy chain of $\kerp$ and the lazy chain of $\{H_{i}\}_{i \in S}$ is infinitesimal. \begin{lemma}\label{lazyclose} Suppose $\{g(x,1,\cdot)\}_{x\in X}$ is reversible and satisfies Section \ref{assumptiondsf}. Then we have \[ \max_{s\in S}\parallel L^{(t)}_{s}(\cdot)-I^{(t)}_{s}(\cdot) \parallel\approx 0 \] for every $t\in \mathbb{N}$. \end{lemma} \begin{proof} We prove the result by induction on $t\in \mathbb{N}$. Let $t=1$. By Theorem \ref{closereverse} and the construction of the lazy chains, we have \[ \max_{s\in S} \parallel L_{s}(\cdot)-I_{s}(\cdot) \parallel \approx 0. \] Assume that the result holds for $t=n$. We now prove the case $t=n+1$. Pick $s\in S$ and $A\in \mathcal{I}(S)$. We have \[ &|L_{s}^{(n+1)}(A)-I_{s}^{(n+1)}(A)|\\ &=|\sum_{i\in S}L_{si}L_{i}^{(n)}(A)-\sum_{i\in S}I_{si}I_{i}^{(n)}(A)|\\ &\leq \sum_{i\in S}L_{si}|L_{i}^{(n)}(A)-I_{i}^{(n)}(A)|+|\sum_{i\in S}(L_{si}-I_{si})I_{i}^{(n)}(A)|\\ &\leq \max_{i\in S}\parallel L_{i}^{(n)}(\cdot)-I_{i}^{(n)}(\cdot) \parallel+\parallel L_{s}(\cdot)-I_{s}(\cdot) \parallel\approx 0, \] where the first term is infinitesimal by the induction hypothesis and the second by Lemma \ref{tvfunction} and the base case. As our choices of $s$ and $A$ are arbitrary, we have $\max_{s\in S} \parallel L_{s}^{(n+1)}(\cdot)-I_{s}^{(n+1)}(\cdot) \parallel\approx 0$, completing the proof.
\end{proof} It is not surprising that the lazy transition kernel $\{I_{s}(\cdot)\}_{s\in S}$ is a hyperfinite representation of the standard lazy transition kernel $\{g_{L}(x,1,\cdot)\}_{x\in X}$. The following two results follow directly from Theorems \ref{lazystar} and \ref{lazystandard} and Lemma \ref{lazyclose}. \begin{thm}\label{hlazystar} Suppose $\{g(x,1,\cdot)\}_{x\in X}$ satisfies Section \ref{assumptiondsf}. Then for any $t\in \mathbb{N}$, any $x\in \NS{\NSE{X}}$ and any $A\in {^{*}\mathcal{B} X}$, we have ${^{*}g_{L}}(x,t,\bigcup_{a\in A\cap S}B(a))\approx I_{s_{x}}^{(t)}(A\cap S)$ where $s_{x}$ is the unique element in $S$ such that $x\in B(s_x)$. \end{thm} \begin{thm}\label{hlazystandard} Suppose $\{g(x,1,\cdot)\}_{x\in X}$ satisfies Section \ref{assumptiondsf}. Then for every $x\in X$, every $t\in \mathbb{N}$ and every $E\in \mathcal{B} X$, we have $g_{L}(x,t,E)=\Loeb{I}_{x}^{(t)}(\mathsf{st}^{-1}(E)\cap S)$. \end{thm} \section{Mixing Times and Hitting Times with Their Nonstandard Counterparts}\label{secmha} In this section, we develop nonstandard notions of mixing and hitting times for hyperfinite Markov processes and show that the nonstandard and standard notions agree with each other. We assume that the underlying state space $X$ is compact for some theorems in this section. Recall that $X$ is compact if and only if $\NSE{X}=\NS{\NSE{X}}$. \subsection{Agreement of Mixing Time}\label{secmix} Let $\{X_t\}_{t\in \mathbb{N}}$ be a discrete time Markov process on a general state space $X$ with transition probabilities denoted by $\{g(x,t,A)\}_{x\in X,t\in \mathbb{N}, A\in \mathcal{B} X}$ and stationary distribution $\pi$. The following theorem shows that the mixing time of the lazy chain is no greater than the mixing time of the hyperfinite lazy chain. \begin{thm}\label{mixlemma} Suppose $\gkernel$ satisfies Section \ref{assumptiondsf}.
For every $\epsilon\in \Reals_{> 0}$, we have \[\label{mixsteq} t_{L}(\epsilon) \leq \min\{t\in \mathbb{N}: \sup_{i\in S}\mathsf{st}(\parallel I_{i}^{(t)}(\cdot)-\pi'(\cdot) \parallel) \leq \epsilon\}. \] \end{thm} \begin{proof} By the definition of the Loeb measure, \ref{hlazystandard} and \ref{starepresent}, we have \[ &\sup_{i\in S}\mathsf{st}(\parallel I_{i}^{(t)}(\cdot)-\pi'(\cdot) \parallel)\\ &\geq \sup_{i\in S}\sup_{A\in \mathcal{B} X}|\Loeb{I}_{i}^{(t)}(\mathsf{st}^{-1}(A)\cap S)-\Loeb{\pi'}(\mathsf{st}^{-1}(A)\cap S)|\\ &\geq \sup_{x\in X}\sup_{A\in \mathcal{B} X}|g_{L}(x,t,A)-\pi(A)|\\ &=\sup_{x\in X}\parallel g_{L}(x,t,\cdot)-\pi(\cdot) \parallel. \] Thus, $t_{L}(\epsilon)\leq \min\{t\in \mathbb{N}: \sup_{i\in S}\mathsf{st}(\parallel I_{i}^{(t)}(\cdot)- \pi'(\cdot)\parallel) \leq \epsilon\}$ for all $\epsilon>0$. \end{proof} The following result is an immediate consequence of Theorem \ref{mixlemma}. \begin{cor}\label{mixcor} Suppose $\gkernel$ satisfies Section \ref{assumptiondsf}. For every $\epsilon\in \Reals_{> 0}$, we have \[ t_{L}(\epsilon)\leq\NSE{\min}\{t\in T: \NSE{\sup}_{i\in S}\parallel I_{i}^{(t)}(\cdot)-\pi'(\cdot) \parallel \leq \epsilon\}. \] \end{cor} \begin{proof} Pick $\epsilon\in \Reals_{> 0}$. If $\NSE{\sup}_{i\in S}\parallel I_{i}^{(t)}(\cdot)-\pi'(\cdot) \parallel \leq \epsilon$ then \[ \sup_{i\in S}\mathsf{st}(\parallel I_{i}^{(t)}(\cdot)-\pi'(\cdot) \parallel) \leq \epsilon. \] The result then follows from Theorem \ref{mixlemma}. \end{proof} \subsection{Agreement of Hitting Time}\label{sechit} Let $\{X_t\}_{t\in \mathbb{N}}$ be a discrete time Markov process with transition probabilities $\{g(x,1,A)\}_{x\in X, A\in \mathcal{B} X}$ and initial distribution $\nu$.
By the Kolmogorov existence theorem, there exists a probability measure $\Prob$ on $(X^{\mathbb{N}},\mathcal{B} X^{\mathbb{N}})$ such that \[\label{prodeq} &\Prob(X_0\in A_0,X_1\in A_1,\dotsc, X_n\in A_n)\\ &=\int_{A_0}\nu(\mathrm{d} x_0)\int_{A_1}g(x_0,1,\mathrm{d} x_1)\dotsm\int_{A_{n-1}}g(x_{n-2},1,\mathrm{d} x_{n-1})\,g(x_{n-1},1,A_n) \] for all $n\in \mathbb{N}$ and all $A_0,A_1,\dotsc, A_n\in \mathcal{B} X$. We write $\Prob_{x}(\cdot)$ for the probability of an event conditional on $X_0=x$. Let $\{H_{ij}\}_{i,j\in S}$ and $\mu$ denote the internal transition matrix and the initial internal distribution of a *reversible hyperfinite Markov process $\{Z_t\}_{t\in T}$ defined in Section \ref{secreverse}, respectively. By \ref{HMexist}, we have: \begin{thm}\label{hyexist} There exists a hyperfinite probability space $(\Omega,\mathcal{I}(\Omega),\IProb)$ such that \[ \IProb(Z_{0}=i_0,Z_{1}=i_1,\dotsc,Z_{t}=i_t)=\mu(\{i_0\})H_{i_{0}i_{1}}H_{i_{1}i_{2}}\dotsm H_{i_{t-1}i_{t}} \] for all $t\in T$ and $i_0,i_1,\dotsc,i_t\in S$. \end{thm} We write $\IProb_{s}(\cdot)$ for the internal probability of an internal event conditional on $Z_0=s$. The first hitting time $\tau_{A}$ of a set $A\in \mathcal{B} X$ for $\{X_t\}_{t\in \mathbb{N}}$ is $\min\{t>0: X_t\in A\}$. It is straightforward to see that $\Prob_{x}(\tau_{A}=1)=g(x,1,A)$. For $t\geq 1$, we have $\Prob_{x}(\tau_{A}=t+1)=\int_{X\setminus A} \Prob_{y}(\tau_{A}=t)\, g(x,1,\mathrm{d} y)$. Similarly, the first internal hitting time $\tau'_{A}$ of an internal set $A\subset S$ is defined to be $\min\{t\in T: t>0,\ Z_t\in A\}$. It is easy to verify that $\IProb_{s}(\tau'_{A}=1)=H_{s}(A)$ for every $s\in S$ and $A\in \mathcal{I}(S)$. For $t>1$, $s\in S$ and $A\in \mathcal{I}(S)$, we have $\IProb_{s}(\tau'_{A}=t)=\sum_{s_1,s_2,\dotsc,s_{t-1}\in S\setminus A}H_{ss_{1}}H_{s_{1}s_{2}}\dotsm H_{s_{t-1}}(A)$. Thus, for $t\geq 1$, we have $\IProb_{s}(\tau'_{A}=t+1)=\int_{S\setminus A} \IProb_{y}(\tau'_{A}=t)H_{s}(\mathrm{d} y)$.
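As a quick sanity check on the hitting-time recursion (with a hypothetical two-point chain, not part of the development above): if $g(0,1,\{1\})=p$ and $g(0,1,\{0\})=1-p$, the hitting time of $A=\{1\}$ from $0$ is geometric.

```latex
% With \Prob_0(\tau_A = 1) = g(0,1,A) = p, the recursion
% \Prob_x(\tau_A = t+1) = \int_{X\setminus A}\Prob_y(\tau_A = t)\,g(x,1,\mathrm{d} y)
% reduces (the only state outside A is 0) to
\[
  \Prob_{0}(\tau_{A}=t+1) = (1-p)\,\Prob_{0}(\tau_{A}=t),
  \qquad\text{so}\qquad
  \Prob_{0}(\tau_{A}=t) = (1-p)^{t-1}p,
\]
% the geometric distribution, as expected.
```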
In order to apply nonstandard extensions and the transfer principle more easily, we define $\cP: X\times \mathcal{B} X\times \mathbb{N}\to [0,1]$ to be $\cP(x,B,t)=\Prob_{x}(\tau_{B}=t)$ and define $\mathcal{Q}: S\times \mathcal{I}(S)\times T\to \NSE{[0,1]}$ to be $\mathcal{Q}(s,A,t)=\IProb_{s}(\tau'_{A}=t)$. \begin{thm}\label{stopprob} Suppose $\gkernel$ satisfies Section \ref{assumptiondsf}. Moreover, assume that the state space $X$ is compact. For every $x\in \NSE{X}$, every $A\in \mathcal{I}(S)$ and every $t\in \mathbb{N}$, we have $\NSE{\cP}(x,\bigcup_{a\in A}B(a),t)\approx \mathcal{Q}(s_{x},A,t)$ where $s_x$ is the unique element in $S$ with $x\in B(s_x)$. \end{thm} \begin{proof} For $t=1$, by Section \ref{assumptiondsf} and Theorem \ref{hGapprox}, we have \[ \NSE{\cP}(x,\bigcup_{a\in A}B(a),1)=\NSE{g}(x,1,\bigcup_{a\in A}B(a))\approx H_{s_{x}}(A)=\mathcal{Q}(s_{x},A,1) \] for every $x\in \NSE{X}$ and every $A\in \mathcal{I}(S)$. Fix $n \in \mathbb{N}$ and suppose we have $\NSE{\cP}(x,\bigcup_{a\in A}B(a),t)\approx \mathcal{Q}(s_{x},A,t)$ for every $x\in \NSE{X}$, every $A\in \mathcal{I}(S)$ and every $t\leq n$. We now prove the case where $t=n+1$. By the induction hypothesis, we have \[ &\NSE{\cP}(x,\bigcup_{a\in A}B(a),n+1)\\ &=\int_{\NSE{X}\setminus \bigcup_{a\in A}B(a)} \NSE{\cP}(y,\bigcup_{a\in A}B(a),n)\NSE{g}(x,1,\mathrm{d} y)\\ &=\int_{\bigcup_{s\in S\setminus A}B(s)} \NSE{\cP}(y,\bigcup_{a\in A}B(a),n)\NSE{g}(x,1,\mathrm{d} y)\\ &\approx\int_{\bigcup_{s\in S\setminus A}B(s)} \mathcal{Q}(s_y,A,n)\NSE{g}(x,1,\mathrm{d} y). \] By Lemmas \ref{dsfconsequence} and \ref{tvfunction}, we have \[ \int_{\bigcup_{s\in S\setminus A}B(s)} \mathcal{Q}(s_y,A,n)\NSE{g}(x,1,\mathrm{d} y)\\ \approx \int_{\bigcup_{s\in S\setminus A}B(s)} \mathcal{Q}(s_y,A,n)\NSE{g}(s_x,1,\mathrm{d} y)\\ =\sum_{s\in S\setminus A}\mathcal{Q}(s,A,n)\NSE{g}(s_x,1,B(s)). \] As $X$ is compact, by Definition \ref{dtrunchain}, we have $\NSE{g}(s_x,1,B(s))=G_{s_{x}s}$.
Thus, we have \[ \sum_{s\in S\setminus A}\mathcal{Q}(s,A,n)\NSE{g}(s_x,1,B(s))=\sum_{s\in S\setminus A}\mathcal{Q}(s,A,n)G_{s_{x}s}. \] By Lemma \ref{tvfunction} and \ref{closereverse}, we have \[ &|\sum_{s\in S\setminus A}\mathcal{Q}(s,A,n)G_{s_{x}s}-\sum_{s\in S\setminus A}\mathcal{Q}(s,A,n)H_{s_{x}s}|\\ &\leq \parallel G_{s_{x}}(\cdot)-H_{s_{x}}(\cdot) \parallel \approx 0. \] Hence, we have $\NSE{\cP}(x,\bigcup_{a\in A}B(a),n+1)\approx \mathcal{Q}(s_x, A,n+1)$, completing the proof. \end{proof} The following result shows that the large hitting time of the standard Markov process defined in \ref{largehit} is bounded from below by the large hitting time of its hyperfinite representation. \begin{thm}\label{hitlemma} Let $\alpha\in \Reals_{> 0}$. Suppose $\gkernel$ satisfies Section \ref{assumptiondsf}. Moreover, assume that the state space $X$ is compact. Then \[ \tau_{g}(\alpha)\geq \NSE{\min}\{t\in T: \NSE{\inf}\{\sum_{k=1}^{t}\mathcal{Q}(s,A,k): s\in S, A\in \mathcal{I}(S)\ \text{such that}\ \pi'(A)\geq \alpha\}> 0.9\}, \] provided that $\tau_{g}(\alpha)$ exists. \end{thm} \begin{proof} Pick $\alpha\in \Reals_{> 0}$ and suppose $\tau_{g}(\alpha)$ exists. By the transfer principle, we have \[ \tau_{g}(\alpha)=\NSE{\min}\{t\in T: \NSE{\inf}\{\sum_{k=1}^{t}\NSE{\cP}(x,A,k): x\in \NSE{X}, A\in \NSE{\mathcal{B} X} \ \text{such that}\ \NSE{\pi}(A)\geq \alpha\}>0.9\}. \] For every $A\in \mathcal{I}(S)$ with $\pi'(A)>\alpha$, by Definition \ref{defwksta}, we have $\NSE{\pi}(\bigcup_{a\in A}B(a))>\alpha$. Thus, for every $n\in \mathbb{N}$, we have \[ &\NSE{\inf}\{\sum_{k=1}^{n}\NSE{\cP}(x,A,k): x\in \NSE{X}, A\in \NSE{\mathcal{B} X} \ \text{such that}\ \NSE{\pi}(A)\geq \alpha\}\\ &\leq \NSE{\inf}\{\sum_{k=1}^{n}\NSE{\cP}(s,\bigcup_{a\in A}B(a),k): s\in S, A\in \mathcal{I}(S) \ \text{such that}\ \pi'(A)\geq \alpha\}\\ &\lessapprox \NSE{\inf}\{\sum_{k=1}^{n}\mathcal{Q}(s,A,k): s\in S, A\in \mathcal{I}(S) \ \text{such that}\ \pi'(A)\geq \alpha\}.
\] As $\tau_{g}(\alpha)$ exists, we have \[ \NSE{\inf}\{\sum_{k=1}^{\tau_{g}(\alpha)}\mathcal{Q}(s,A,k): s\in S, A\in \mathcal{I}(S) \ \text{such that}\ \pi'(A)\geq \alpha\}>0.9. \] Hence, we have the desired result. \end{proof} \subsection{Mixing Times and Hitting Times on Compact Sets} \label{SecMixHitCompact} In this section, we use techniques developed in previous sections to prove Theorem \ref{mixhit} for reversible Markov processes with compact state spaces. The following lemma is well-known (for completeness, a proof can be found in \ref{wellknownbound} in the appendix): \begin{lemma}\label{maxhitless} Let $0 < \alpha<\frac{1}{2}$. Let $\mathcal{D}$ denote the collection of discrete time transition kernels with a stationary distribution on a $\sigma$-compact metric state space. Then there exists a universal constant $d'_{\alpha}$ such that, for every $\gkernel \in \mathcal{D}$, we have \[ d'_{\alpha}t_{H}(\alpha) \leq t_{L}. \] \end{lemma} We now prove our main result, Theorem \ref{mixhit}, in the special case where the underlying state space is compact: \begin{thm}\label{mixhitcompact} Let $0 < \alpha<\frac{1}{2}$. Then there exist universal constants $d_{\alpha},d'_{\alpha}$ such that, for every $\gkernel \in \mathcal{C}$, we have \[ d'_{\alpha}t_{H}(\alpha)\leq t_{L}\leq d_{\alpha}t_{H}(\alpha). \] \end{thm} \begin{proof} Suppose $t_{H}(\alpha)$ is infinite. By \ref{maxhitless}, we know that $t_{L}$ is infinite. Thus, the result follows immediately in this case. Suppose $t_{H}(\alpha)$ is finite. Let $c_{\alpha}$ be the constant given in Theorem \ref{fmixhit}. Let $\{I_{i}(\cdot)\}_{i\in S}$ be the internal transition probability matrix defined after \ref{hdisrepresent}. By \ref{starproperty}, we know that $\{I_{i}(\cdot)\}_{i\in S}$ is a *reversible process with *stationary distribution $\pi'$. Let \[ T_{L}=\NSE{\min}\{t\in T: \NSE{\sup}_{i\in S}\parallel I_{i}^{(t)}(\cdot)-\pi'(\cdot) \parallel \leq \frac{1}{4}\}.
\] Let \[ T_{g}(\alpha)=\NSE{\min}\{t\in T: \NSE{\inf}\{\sum_{k=1}^{t}\mathcal{Q}(s,A,k): s\in S, A\in \mathcal{I}(S)\ \text{such that}\ \pi'(A)\geq \alpha\}> 0.9\} \] where $\mathcal{Q}(s,A,k)$ is defined in \ref{sechit}. By the transfer of \ref{mhequal}, we know that $T_{L}\leq 2c_{\alpha}T_{g}(\alpha)$. By \ref{mixcor}, we have $t_{L}\leq T_{L}$. By \ref{hitlemma}, we have $\tau_{g}(\alpha)\geq T_{g}(\alpha)$. Thus, we have $t_{L}\leq 2c_{\alpha}\tau_{g}(\alpha)$. Let $d_{\alpha}=20c_{\alpha}$. By \ref{maxlarge}, we have $t_{L}\leq d_{\alpha}t_{H}(\alpha)$. By \ref{maxhitless}, we have the desired result. \end{proof} \section{Mixing Times and Hitting Times on $\sigma$-Compact Sets}\label{secsigcomp} We fix notation as in Section \ref{SecMixHitCompact}, but relax the assumption that $(X,d)$ is a compact metric space to the assumption that $(X,d)$ is a $\sigma$-compact metric space. As before, all $\sigma$-algebras should be taken to be the usual Borel $\sigma$-algebra. We recall the definition of the \textit{trace} of a Markov chain: \begin{defn} \label{DefTraceChain} Let $g$ be the transition kernel of a Markov chain on state space $X$ with stationary measure $\pi$ and Borel $\sigma$-field $\mathcal{B} X$. Let $S \in \mathcal{B} X$ have measure $\pi(S) > 0$. Fix $x \in X$ and let $\{X_{t}\}_{t \geq 0}$ be a Markov chain with transition kernel $g$ and starting point $X_{0} = x$. Then define the sequence $\{\eta_{i}\}_{i \in \mathbb{N}}$ by setting \[ \eta_{0} = \min \{t \geq 0 \, : \, X_{t} \in S \} \] and recursively setting \[ \eta_{i+1} = \min \{t > \eta_{i} \, : \, X_{t} \in S \}. \] We then define the \textit{trace} of $g$ on $S$ to be the Markov chain with transition kernel \[ \label{EqDefTrace} g^{(S)}(x,t,A) = \mathbb{P}_{x}[X_{\eta_{t}} \in A]. \] \end{defn} \begin{rem} Suppose that the original transition kernel $g$ has stationary distribution $\pi$. 
For $S\in \mathcal{B} X$ with $\pi(S)>0$, the normalization of $\pi$ to the set $S$ is the stationary distribution of the trace transition kernel $g^{(S)}$. Moreover, if $g$ is ergodic and reversible with respect to the stationary distribution $\pi$, then $g^{(S)}$ is reversible with respect to the normalization of $\pi$ to the set $S$. \end{rem} \begin{rem} \label{RemCoupChainTrace} Note that Definition \ref{DefTraceChain} naturally constructs a coupling of $\{X_{t}\}_{t \in \mathbb{N}} \sim g$ and $\{X_{t}^{(S)}\}_{t \in \mathbb{N}} \sim \{g^{(S)}(x,1,\cdot)\}_{x\in S}$ on the same probability space. \end{rem} \begin{rem} \label{RemLazyAlt} We give an alternative definition of the ``lazy'' kernel from \ref{deflazy} that is similar to the coupling in Remark \ref{RemCoupChainTrace}. Let $\{\zeta_{i} \}_{i \in \mathbb{N}}$ be a sequence of i.i.d. geometric random variables with mean 2, and define $L(t) = \max \{i \, : \, \sum_{j=1}^{i} \zeta_{j} \leq t\}.$ Observe that the chain $\{X_{t}^{(L)}\}_{t \in \mathbb{N}}$ given by \[\label{EqLazyRep} X_{t}^{(L)} = X_{L(t)} \] satisfies $X_{0}^{(L)} = x$ and $\{X_{t}^{(L)}\}_{t \in \mathbb{N}} \sim g_{L}$. \end{rem} A simple coupling argument, expanded in \ref{SubsecTraceProp}, gives: \begin{lemma}\label{tracedsf} Let $g$ be a transition kernel with stationary measure $\pi$ that satisfies Section \ref{assumptiondsf}, and let $S \in \mathcal{B} X$ be a set with measure $\pi(S) > 0$. Then the trace $g^{(S)}$ of $g$ on $S$ also satisfies Section \ref{assumptiondsf}. \end{lemma} For the rest of the section, let $\mathcal{K}(X)$ denote the collection of all compact subsets of $X$ that are also in $\mathcal{B} X$.
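The trace construction can be made concrete with a hypothetical three-state illustration (the numbers below are ours, purely for intuition): excursions outside $S$ are collapsed into single steps of the trace chain.

```latex
% On X = \{a,b,c\} with S = \{a,b\}, suppose
% g(a,1,\{b\}) = g(a,1,\{c\}) = 1/2 and g(c,1,\{a\}) = g(c,1,\{b\}) = 1/2.
% Starting from a, the chain next visits S either immediately (in b) or
% after a one-step excursion through c, so
\[
  g^{(S)}(a,1,\{b\})
  = g(a,1,\{b\}) + g(a,1,\{c\})\,g(c,1,\{b\})
  = \tfrac{1}{2} + \tfrac{1}{2}\cdot\tfrac{1}{2}
  = \tfrac{3}{4},
\]
% and similarly g^{(S)}(a,1,\{a\}) = 1/4.
```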
The next theorem shows that the standardized mixing time of the original Markov chain is bounded by the supremum over standardized mixing times of associated trace chains.\footnote{We freely use here the fact that the operation taking a kernel to its associated ``lazy" kernel and the operation taking a kernel to its associated ``trace" kernel commute. We include a proof of this fact in \ref{LemmaTraceLazyCommute} of the appendix for completeness.} \begin{lemma} \label{LemmaIneqMixComp} Let $g$ be the transition kernel of a Markov chain on state space $X$ with stationary measure $\pi$. For $S \in \mathcal{B} X$ with $\pi(S) > 0$, denote by $\overline{t}_{m}^{(S)}$ the standardized mixing time with respect to $g^{(S)}$. Then \[ \overline{t}_{m} \leq \sup_{S\in \mathcal{K}(X)}\overline{t}_{m}^{(S)}. \] \end{lemma} \begin{proof} By the definition of $\overline{t}_{m}$, there exist $\epsilon > 0$, points $x,y\in X$ and a set $A\in \mathcal{B} X$ such that \[ \label{IneqLimCompBdOne} | g(x,t,A) - g(y,t,A) | > 0.25 + \epsilon \] for $t =\overline{t}_{m}-1$. Next, note that $\{g(x,n,\cdot), g(y,n,\cdot) \}_{n=0}^{\overline{t}_{m}}$ is a finite collection of measures, and in particular it is tight. Therefore, there exists a compact set $S$ such that $\min_{0 \leq n \leq \overline{t}_{m}}\min\{g(x,n,S), g(y,n,S)\}\geq 1 - \frac{\epsilon}{100 \overline{t}_{m}}$ and $x,y\in S$. Combining this with Inequality \eqref{IneqLimCompBdOne}, the transition probabilities $g^{(S)}$ satisfy \begin{align} &|g^{(S)}(x,t,A \cap S) - g^{(S)}(y,t, A \cap S) |\\ &\geq |g(x,t,A)-g(y,t,A)|-|g(x,t,A)-g^{(S)}(x,t,A)|-|g(y,t,A)-g^{(S)}(y,t,A)|\\ &\geq | g(x,t,A) - g(y,t,A) |-\sum_{n=0}^{t}g(x,n,X\setminus S)-\sum_{n=0}^{t}g(y,n,X\setminus S)\\ &\geq | g(x,t,A) - g(y,t,A) |-2(t+1)\max_{0\leq n\leq t}\max\{g(x,n,X\setminus S),g(y,n,X\setminus S)\}\\ &> 0.25 + \frac{98}{100} \epsilon.
\end{align} Thus, the standardized mixing time of $g^{(S)}$ is also at least $\overline{t}_{m}$, so we conclude \[ \overline{t}_{m} \leq \sup_{S\in \mathcal{K}(X)} \overline{t}_{m}^{(S)}. \] \end{proof} By the coupling in \ref{RemCoupChainTrace}, we have: \begin{lemma}\label{mhitboundlemma} Let $0 < \alpha<\frac{1}{2}$. Let $g$ be the transition kernel of a Markov chain on state space $X$ with stationary measure $\pi$. For $S \in \mathcal{B} X$ with $\pi(S) > 0$, denote by $\tau_{g}^{(S)}(\alpha)$ the large hitting time with respect to $g^{(S)}$. Then \[ \tau_{g}(\alpha)\geq \sup_{S\in \mathcal{K}(X)} \tau_{g}^{(S)}(\alpha). \] \end{lemma} We can now prove Theorem \ref{mixhit}, the main result of this section: \begin{thm}\label{mixhitpf} Let $0 < \alpha<\frac{1}{2}$. Then there exist universal constants $0<a_{\alpha},a'_{\alpha} < \infty$ such that, for every $\gkernel \in \mathcal{M}$, we have \[ a'_{\alpha}t_{H}(\alpha)\leq t_{L}\leq a_{\alpha}t_{H}(\alpha). \] \end{thm} \begin{proof} By \ref{maxhitless}, there exists a universal constant $a'_{\alpha}>0$ such that, for every $\gkernel\in \mathcal{M}$, we have $a'_{\alpha}t_{H}(\alpha)\leq t_{L}$. Recall that $\mathcal{C}$ is the collection of discrete time reversible transition kernels with compact state space satisfying Section \ref{assumptiondsf}. By Theorem \ref{mixhitcompact}, there exists a universal constant $d_{\alpha}>0$ such that, for every $\gkernel\in \mathcal{C}$, the mixing time of the lazy chain is bounded by $d_{\alpha}$ times the maximal hitting time. For every $\gkernel\in \mathcal{M}$, by \ref{mixequivalent}, we have $t_{L}\leq 2\overline{t}_{L}$. By \ref{LemmaTraceLazyCommute}, \ref{LemmaIneqMixComp}, \ref{tracedsf} and Theorem \ref{mixhitcompact}, we have \[ \overline{t}_{L}\leq \sup_{S\in \mathcal{K}(X)}\overline{t}_{L}^{(S)}\leq d_{\alpha}\sup_{S\in \mathcal{K}(X)}\tau_{g}^{(S)}(\alpha).
\] By \ref{mhitboundlemma} and \ref{maxlarge}, we have \[ \sup_{S\in \mathcal{K}(X)}\tau_{g}^{(S)}(\alpha)\leq \tau_{g}(\alpha)\leq 10t_{H}(\alpha). \] Let $a_{\alpha}=20d_{\alpha}$. We have $t_{L}\leq a_{\alpha}t_{H}(\alpha)$ for every $\gkernel\in \mathcal{M}$. \end{proof} \section{Statistical Applications and Extensions}\label{statapp} In this section, we give results that allow us to apply our main result, Theorem \ref{mixhit}, to obtain useful bounds for various Markov chains that do not satisfy its main assumptions. Our main motivation is the study of Markov chain Monte Carlo (MCMC) algorithms. MCMC is ubiquitous in statistical computation, and in this context small mixing times correspond to efficient algorithms (see \textit{e.g.} \citep{brooks2011handbook} for an overview of MCMC, \citep{gelman1995bayesian} for applications, and \citep{meyn2012markov} for analyses). Very few algorithms used for MCMC satisfy the strong Feller condition of Section \ref{assumptiondsf}. We begin by showing in Section \ref{SecMHMod} that our results apply without change to the Metropolis-Hastings algorithm, one of the most popular algorithms in computational statistics. In Section \ref{SecAsfIntro}, we introduce a relaxation of the strong Feller condition of Section \ref{assumptiondsf} and then show that this relaxed property is satisfied by many other MCMC chains. Appendix Section \ref{AppOtherExt} contains further applications. \subsection{Strong Feller Functions of Metropolis-Hastings Chains} \label{SecMHMod} We begin with the following definition of a large class of Metropolis-Hastings chains: \begin{defn} [Metropolis-Hastings Chain] \label{EqDefMH} Fix a distribution $\pi$ with continuous density $\rho$ supported on $\mathbb{R}^{d}$. Also fix a reversible kernel $\{q(x,1,\cdot)\}_{x\in \mathbb{R}^{d}}$ on $\mathbb{R}^{d}$ with stationary measure $\nu$. For every $x\in \mathbb{R}^{d}$, assume that $q(x,1,\cdot)$ has continuous density $q_{x}$ and $\nu$ has continuous density $\phi$.
Define the \textit{acceptance function} by the formula \[ \beta(x,y) = \min(1, \frac{\rho(y) q_{y}(x)}{\rho(x) q_{x}(y)}). \] Finally, define $g$ to be the transition kernel given by the formula \[\label{gtranform} g(x,1,A) = \int_{y \in A} q_{x}(y) \beta(x,y) dy + \delta(x,A) \int_{\mathbb{R}^{d}} q_{x}(y) (1 - \beta(x,y))dy. \] For a transition kernel of this form, define the constant \[ \gamma = \inf_{x} \int_{\mathbb{R}^{d}} q_{x}(y) \beta(x,y) dy. \] \end{defn} \begin{rem} It is well-known that, under these conditions, $g$ will be reversible with stationary measure $\pi$ (see \textit{e.g.} \citep{chib1995understanding}). \end{rem} Let $g$ be a Metropolis-Hastings kernel of the form given in \ref{EqDefMH}, and let $\{X_{t}\}_{t \in \mathbb{N}} \sim g$. Then define inductively $\eta_{0} = 0$ and \[ \eta_{i+1} = \min \{t > \eta_{i} \, : \, X_{t} \neq X_{\eta_{i}} \}. \] Define the \textit{skeleton} of $\{X_{t}\}_{t \in \mathbb{N}}$ by \[ \label{EqSkelDef} Y_{t} = X_{\eta_{t}}. \] The process $\{(Y_{t}, \eta_{t})\}_{t \in \mathbb{N}}$ is a Markov chain. We denote by $g'$ its transition kernel, and by $\pi'$ its stationary measure on $X \times \mathbb{N}$. We remark that it is easy to reconstruct $\{X_{t}\}_{t \in \mathbb{N}}$ from $\{(Y_{t},\eta_{t})\}_{t \in \mathbb{N}}$. For this section only, denote by $t_{m}', t_{L}'$ and $t_{H}'(\alpha)$ the mixing time, lazy mixing time and maximum hitting time of $g'$. We then have: \begin{thm}\label{IneqMHAlt} Let $\mathcal{B}$ be the collection of transition kernels of the form given in \ref{gtranform} with finite mixing time, and for which $q_{x}(y)$ is jointly continuous in $x,y$. Then for all $0 < \alpha < \frac{1}{2}$, there exists a universal constant $0 < c_{\alpha} < \infty$ so that \[ t_{L}' \leq c_{\alpha} (1 - \delta)^{-1} t_{H}(\delta \alpha) \] for every $g\in \mathcal{B}$ and every $0 < \delta < 1$.
\end{thm} \begin{proof} Since $g$ is of the form \ref{gtranform}, it is straightforward to see that $g'$ satisfies Section \ref{assumptiondsf}. Thus, one can apply Theorem \ref{mixhit} to show that, for any $0 < \hat{\alpha} < \frac{1}{2}$, \[ \label{EqAppMainSkel} t_{L}' \sim t_{H}'(\hat{\alpha}), \] where (as in Theorem \ref{mixhit}) the implied constant depends on $\hat{\alpha}$. Next, we must relate $t_{H}'$ to $t_{H}$. For $x \in X$, let $\lambda(x) = g(x,1,\{x\}^{c})$. For $\lambda \in (0,1]$, denote by $L_{\lambda}$ the law of the geometric random variable with success probability $\lambda$ and let $\Prob_{\lambda}$ denote its associated probability mass function. For $A \subset X \times \mathbb{N}$, we observe \[ \pi'(A) = \sum_{n \in \mathbb{N}} \int_{X} L_{\lambda(x)}(n) \mathbf{1}_{(x,n) \in A} \pi(dx). \] Fix a measurable set $A' \subset X \times \mathbb{N}$ with stationary measure $\pi'(A') \geq \alpha$. Define the associated ``core" set $A \subset X$ by \[ \label{EqCoreDef} A = \{x \in X \, : \, \Prob_{\lambda(x)}(\{n \, : \, (x,n) \in A' \}) \geq (1- \delta) \alpha \}. \] Since $\pi'(A') \geq \alpha$, we must have $\pi(A) \geq \delta \alpha$. For chains $\{X_{t}\}_{t \in \mathbb{N}}$ and $\{(Y_{t},\eta_{t})\}_{t \in \mathbb{N}}$ coupled as in Equation \eqref{EqSkelDef}, define the hitting times \[ \tau_{A} = \min \{t \, : \, X_{t} \in A \}, \, \tau_{A}' = \min \{t \, : \, Y_{t} \in A \}, \, \tau_{A'}' = \min \{t \, : \, (Y_{t},\eta_{t}) \in A' \}. \] By the definition of the ``core" set in Equation \eqref{EqCoreDef}, \[ \E_{x}[\tau_{A'}'] \lesssim (1-\delta)^{-1} \, \E_{x}[\tau_{A}'] \] for all starting points $x \in X$. Under our coupling of $\{X_{t}\}_{t \in \mathbb{N}}$ and $\{(Y_{t},\eta_{t})\}_{t \in \mathbb{N}}$, \[ \E_{x}[\tau_{A}'] \leq \E_{x}[\tau_{A}] \] for all starting points $x \in X$.
Combining these two inequalities, we have \[ \E_{x}[\tau_{A'}'] \lesssim (1-\delta)^{-1} \E_{x}[\tau_{A}] \] for all measurable $A' \subset X \times \mathbb{N}$ with $\pi'(A') \geq \alpha$ and all $x \in X$. Furthermore, $\pi(A) \geq \delta \alpha$, so \[ t_{H}'(\alpha) \lesssim (1 - \delta)^{-1} t_{H}(\delta \alpha). \] Combining this with Equation \eqref{EqAppMainSkel} completes the proof. \end{proof} \subsection{Almost-Strong Feller Chains} \label{SecAsfIntro} We do not know a general way to extend the trick in Section \ref{SecMHMod}. Fortunately for us, in the context of MCMC, the user does not usually care about the mixing time of a \textit{specific} Markov chain - it is enough to estimate the mixing time of \textit{some} Markov chain that is both fast and easy to implement. We give the mathematical results first, then explain their relevance to MCMC in Section \ref{SubsubsecUsingMain}. \subsubsection{Generic Bounds}\label{SecASFGen} Let $\{g(x,1,\cdot)\}_{x\in X}$ be the transition kernel of a Markov process. For every $k\in \mathbb{N}$, denote by $g^{(k)}$ the transition kernel \[ g^{(k)}(x,t,A) = g(x,kt, A) \] for every $x\in X$, $t\in \mathbb{N}$ and $A\in \mathcal{B} X$. We call $\{g^{(k)}(x,1,\cdot)\}_{x\in X}$ the \emph{$k$-skeleton} of $\{g(x,1,\cdot)\}_{x\in X}$. We will use the superscript $(k)$ to extend our notation for the kernel $g$ to the kernel $g^{(k)}$. For example, for every $\epsilon>0$, we use $\overline{t}_{m}^{(k)}(\epsilon)$ to denote the standardized mixing time of $g^{(k)}$. We observe some simple relationships between $g$ and $g^{(k)}$, with details in Appendix \ref{SecMixHitSkel} for completeness: \begin{lemma}\label{LemmaASFClaim1} For all $\epsilon > 0$ and all $k \in \mathbb{N}$, \[ \overline{t}_{m}^{(k)}(\epsilon) = \lceil \frac{\overline{t}_{m}(\epsilon)}{k} \rceil.
\] \end{lemma} \begin{lemma}\label{LemmaASFClaim2} For all $\alpha > 0$ there exists a constant $0 < C_{\alpha} < \infty$ so that for all $k \in \mathbb{N}$, \[ t_{H}^{(k)}(\alpha) \leq C_{\alpha} \lceil \frac{1}{k} \overline{t}_{m} \rceil. \] \end{lemma} Next, we give a definition that relaxes the strong Feller condition in a quantitatively-useful way. We first make a small remark on three operations on kernels that we've defined: the trace of a kernel on a set, the $k$-skeleton of a kernel, and the ``lazy" version of a kernel. As shown in \ref{LemmaTraceLazyCommute}, the ``trace" and ``lazy" transformations commute - the trace of the lazy chain is equal to the lazy version of the trace chain. However, the $k$-skeleton and ``lazy" transformations \textit{do not} generally commute. As such, we occasionally use parentheses in the following notation to emphasize the order in which these transformations occur, with subscripts taking precedence. For example, $g_{L}^{(k)}$ is the $k$-skeleton of the chain $g_{L}$, while $(g^{(k)})_{L}$ is the lazy version of $g^{(k)}$. The chain $(g_{L}^{(k)})_{L}$, the lazy version of the $k$-skeleton of $g_{L}$, will play an important role, and so we introduce the shorthand \[ \label{EqGDef} G \equiv (g_{L}^{(k)})_{L}. \] We also define $T_{m}$, $T_{L}$, and $T_{H}$ to be the mixing time, lazy mixing time and maximum hitting time of $G$. \begin{defn}[$(k,C)$-almost Strong Feller] \label{DefASF} For $k,C \in \mathbb{N}$, we say that a kernel $\{g(x,1,\cdot)\}_{x\in X}$ is \textit{$(k,C)$-almost strong Feller} if there exist kernels $\{G_{1}(x,1,\cdot), G_{2}(x,1,\cdot)\}_{x\in X}$ so that the following are satisfied: \begin{enumerate} \item $G_{1}$ is reversible and satisfies Section \ref{assumptiondsf}, and \item For some \[ \label{IneqASFGoodApprX} 0 \leq p \leq \frac{1}{\asfc}, \] we have \[ \label{IneqASFGoodAppr} g_{L}^{(k)} = (1 - p)G_{1} + pG_{2}.
\] \end{enumerate} \end{defn} For the rest of the paper, we let $\mathcal{E}(k,C)$ be the collection of $(k,C)$-almost strong Feller transition kernels on a $\sigma$-compact metric state space $X$. \begin{rem} Any strong Feller chain is $(1,C)$-almost strong Feller for all $C \geq 0$. Our condition is inspired by the famous \textit{asymptotically strong Feller} condition of \cite{hairer2006ergodicity}. \end{rem} To lessen notation in the rest of this section, we use ``$x \lesssim y$" as shorthand for the longer phrase ``there exists a universal constant $D$ such that $x \leq D y$," and $x \sim y$ for ``$x \lesssim y$ and $y \lesssim x$." We use the ``prime" superscript to denote quantities related to chains drawn from $G_{1}$. For example, we denote by $t_{m}'$ the mixing time of $G_{1}$ and by $t_{L}'$ the mixing time of its associated lazy chain. We then have the main result of this section, which shows that $T_{m}$ is bounded from above by $\ell_{H}^{(k)}(\alpha)$ under condition \ref{DefASF}: \begin{thm} \label{ThmAsfMainBd} There exists a universal constant $C_0$ such that, for every $0 < \alpha < 0.5$, there exists a universal constant $d_{\alpha}$ such that for all $C > C_{0}$, all $k \in \mathbb{N}$ and all $\{g(x,1,\cdot)\}_{x\in X}\in \mathcal{E}(k,C)$, we have \[ d_{\alpha} T_{m} \leq \ell_{H}^{(k)}(\alpha), \] where $\ell_{H}^{(k)}$ denotes the maximum hitting time of the transition kernel $g_{L}^{(k)}$. \end{thm} \begin{proof} Fix $0<\alpha<\frac{1}{2}$, generic $k,C \in \mathbb{N}$ and $g \in \mathcal{E}(k,C)$, and let $G_{1}, G_{2},$ and $p$ be the associated kernels and constant from \ref{DefASF}. By \ref{LemmaASFClaim1}, \[ t_{L}^{(k)} \sim 1 + \frac{t_{L}}{k}. \] Applying \ref{LemmaElCompLazyMix}, the mixing time $T_{m}$ of $G=(g_{L}^{(k)})_{L}$ satisfies \[ T_{m} \lesssim t_{L}^{(k)} \sim 1 + \frac{t_{L}}{k}.
\] Applying \ref{LemmaElPertMixing} and \ref{IneqASFGoodAppr}, there exists a constant $C_{0}>0$ so that for all $C > C_{0}$, all $k\in \mathbb{N}$ and $g \in \mathcal{E}(k,C)$, we have \[\label{llequal} T_{m} \sim t_{L}' \] as well. We restrict ourselves to $C > C_{0}$ for the remainder of the proof. Since the transition kernel $\{G_1(x,1,\cdot)\}_{x\in X}$ satisfies Section \ref{assumptiondsf}, Theorem \ref{mixhit} gives \[ \label{IneqStar1} t_{L}' \sim t_{H}'(\alpha). \] Applying \ref{IneqStar1} with \ref{LemmaASFClaim2} and \ref{LemmaElPertHitting}, we further have \[ t_{L}' \sim t_{H}'(\alpha) \lesssim \ell_{H}^{(k)}(\alpha). \] Combining this with Inequality \eqref{llequal} completes the proof. \end{proof} \subsubsection{Gibbs Samplers}\label{SecASFGibbs} We will show that \ref{ThmAsfMainBd} can be used to obtain nontrivial mixing bounds related to the following class of Gibbs samplers: \begin{defn} [Gibbs Sampler] \label{EqDefGibbs} Fix a distribution $\pi$ with continuous density $\rho > 0$ on $\mathbb{R}^{d}$. For $x \in \mathbb{R}^{d}$, $i \in \{1,2,\ldots,d\}$ and $z \in \mathbb{R}$, define \[ \rho_{x,i}(z) = \frac{\rho(x[1],x[2],\ldots,x[i-1],z,x[i+1],\ldots,x[d])}{\int \rho(x[1],x[2],\ldots,x[i-1],y,x[i+1],\ldots,x[d]) dy}, \] the $i$'th conditional distribution of $\rho$. Let $F_{x,i}$ be the CDF of $\rho_{x,i}$. We then define a Markov chain as follows. Fix a starting point $X_{0} = x$. Let $i_{t} \stackrel{iid}{\sim} \mathrm{Unif}(\{1,2,\ldots,d\})$ and $U_{t} \stackrel{iid}{\sim} \mathrm{Unif}([0,1])$ be two i.i.d. sequences. We iteratively define $X_{t+1}$ by the equation \[ \label{EqGibbsForwardMap} X_{t+1} = (X_{t}[1],\ldots,X_{t}[i_{t}-1], F_{X_{t},i_{t}}^{-1}(U_{t}), X_{t}[i_{t}+1],\ldots,X_{t}[d]). \] We define the transition kernel $g$ by setting \[ \label{EqGibbsMapToKern} g(x,t,A) = \mathbb{P}_{x}[X_{t} \in A] \] where $\Prob$ is a product measure that generates this Markov process.
Equation \eqref{EqGibbsForwardMap} is the usual ``forward mapping" representation of a ``random-scan" Gibbs sampler. Note that, since $\rho$ is continuous and nonzero everywhere, $F_{x,i}^{-1}(u)$ always contains exactly one element for $x \in \mathbb{R}^{d}$, $i \in \{1,2,\ldots,d\}$ and $u \in [0,1]$. Under the same setting as \eqref{EqGibbsMapToKern}, we define the associated ``conditional" update kernels $\{ g^{(i)}\}_{1 \leq i \leq d}$ by their one-step transition probabilities: \[ g^{(i)}(x,1,A) = \mathbb{P}_{x}[X_{1} \in A | i_{0} = i]. \] \end{defn} The MCMC literature has many variants of the Gibbs sampler, but we focus on this popular simple case. Before stating our main result, we recall that any sequence of transition kernels $g_{1},\ldots,g_{k}$ on the same space has a product kernel, which we denote $\prod_{j=1}^{k} g_{j}$. Informally, this product is obtained by ``proposing from these kernels in order"; see \textit{e.g.} Theorem 5.17 of \cite{FMP2} for a formal justification of the notation. Our main result is: \begin{lemma} \label{LemmaGibbsSamplersAsf} Let $\mathcal{A}$ be the collection of transition kernels of the form given in \ref{EqDefGibbs}, that also have finite mixing time. Then, for all $0 < C < \infty$, there exists a universal constant $K_{C}$ so that $\{g(x,1,\cdot)\}_{x\in X} \in \mathcal{A}$ is $(k,C)$-almost strong Feller for any $k \geq K_{C} d \log(t_{L})$. \end{lemma} \begin{rem} The condition $\rho(\theta) > 0$ for all $\theta \in \mathbb{R}^{d}$ is only used as a simple sufficient condition for the chain $G_{1}$ defined in the proof to satisfy Section \ref{assumptiondsf}. In many other situations, this can be checked directly. \end{rem} \begin{proof} Throughout this proof, we fix $g \in \mathcal{A}$ and use notation from \ref{EqDefGibbs} freely. We begin by bounding the mixing time from below. For $x \in \mathbb{R}^{d}$, define the collection \[ H(x) = \{y \in \mathbb{R}^{d} \, : \, \exists \, n \in \{1,2,\ldots,d\} \, \text{ s.t.
} \, y[n] = x[n] \} \] of vectors that share at least one entry with $x$. Since $\pi$ has a density $\rho$, we have for any $x \in \mathbb{R}^{d}$ that \[ \pi(H(x)) = 0. \] Thus, for any $x \in \mathbb{R}^{d}$ and $t \in \mathbb{N}$, we have \[ \label{IneqNonSingGibbs} \| g(x,t,\cdot) - \pi(\cdot) \| \geq P[\cup_{s=0}^{t-1} \{i_{s}\} \neq \{1,2,\ldots,d\}]. \] By the representation for the lazy chain in \ref{RemLazyAlt}, we also have \[ \| g_{L}(x,t,\cdot) - \pi(\cdot) \| \geq P[\cup_{s=0}^{t-1} \{i_{s}\} \neq \{1,2,\ldots,d\}]. \] By the well-known ``coupon collector" bound (see the main theorem of \cite{erdHos1961classical}), there exists some $d_{0} \in \mathbb{N}$ such that for all $d \geq d_{0}$, \[ \label{IneqCoupQuote} P[\cup_{s=0}^{\frac{1}{2} d \log(d) - 1} \{i_{s}\} \neq \{1,2,\ldots,d\}] \geq 1 - e^{-d}. \] Putting together Inequalities \eqref{IneqNonSingGibbs} to \eqref{IneqCoupQuote}, this implies that there exists some universal constant $0 < c_{1} < \infty$ so that for all $d \in \mathbb{N}$ and all $g \in \mathcal{A}$ on $\mathbb{R}^{d}$, \[ \label{IneqGibbsAsfcKeyBd} t_{m}, \, t_{L} \geq c_{1} d \log(d). \] Denote by $k \in \mathbb{N}$ a constant that will be fixed later in the proof. Let $\{X_{t}\}_{t \in \mathbb{N}} \sim g$, let $L$ be the (random) function from Equation \eqref{ezlform}, and let $\{\zeta_{i} \}_{i \in \mathbb{N}}$ be the i.i.d. geometric(2) random variables used to construct $L$ in Equation \eqref{ezlform}. Recall that $\{X_{L(t)}\}_{t \in \mathbb{N}} \sim g_{L}$. For this choice, define the event \[ \mathcal{E} = \{\cup_{t=1}^{k} \{i_{L(t)}\} = \{1,2,\ldots,d\} \}, \] and define the kernels $G_{1}, G_{2}$ by setting \begin{align*} G_{1}(x,1,A) &= \mathbb{P}_{x}[X_{L(k)} \in A | \mathcal{E}] \\ G_{2}(x,1,A) &= \mathbb{P}_{x}[X_{L(k)} \in A | \mathcal{E}^{c}]. \end{align*} In the notation of Definition \ref{DefASF}, the constant $p$ associated with this choice of $k, G_{1}, G_{2}$ is \[ p = P[\mathcal{E}^{c}].
\] We observe that, for any fixed $j \in \{1,2,\ldots,d\}$ and $s \in \mathbb{N}$, \[ P[ j \in \cup_{t=0}^{s-1} \{i_{t}\}] = 1- (1 - \frac{1}{d})^{s}. \] On the other hand, since $L(k)$ is exactly the number of successes in $k$ independent fair coin flips, Hoeffding's inequality gives \[ P[L(k) < \frac{k}{4}] \leq e^{- \frac{k}{8}} \] for all $k \in \mathbb{N}$. Combining these two bounds, \[ \label{IneqPUpGib1} p \leq P[\cup_{t=0}^{\frac{k}{4}-1} \{i_{t}\} \neq \{1,2,\ldots,d\}] + P[L(k) < \frac{k}{4}] \leq d (1 - \frac{1}{d})^{\frac{k}{4}} + e^{- \frac{k}{8}} \leq d e^{-\frac{k}{4d}} + e^{- \frac{k}{8}}. \] Noting $k,d \geq 1$, we have: \[ \label{IneqPUpGib} p \leq 2d e^{-\frac{k}{8d}}. \] To satisfy Inequalities \eqref{IneqASFGoodAppr} and \eqref{IneqASFGoodApprX}, we just need our choice of $k$ to ensure that $p \leq \frac{1}{\asfc}$. Inspecting these inequalities, there exists a universal constant $K$ so that this inequality is satisfied as long as \[ k > K d \log(\max(d, t_{L})). \] On the other hand, by Inequality \eqref{IneqGibbsAsfcKeyBd}, there exists a universal constant $A$ so that \[ t_{L} \geq A d \log(\max(d,t_{L})). \] Inspecting these final two bounds, we see that for all $k$ sufficiently large compared to $d \log(t_{L}) \lesssim t_{L}$, this choice of $k$, $p$ and $G_{1}$ satisfies \eqref{IneqASFGoodAppr} and \eqref{IneqASFGoodApprX}. Next, we must check that $G_{1}$ is reversible. To see this, we begin by noting that $\mathcal{E}$ depends only on the sequence $\{i_{t}\}_{t \in \mathbb{N}}$ of ``index" variables in our forward-mapping representation. Next, we check that, even after conditioning on $\mathcal{E}$, these index variables have a certain exchangeability-like property. For $m \in \mathbb{N}$ and any sequence $J \in \mathbb{R}^{n}$ with $n \geq m+1$, define the ``reversal" function \[ \label{EqReverseOp} w_{m}(J) = (J[m], J[m-1],\ldots,J[1],J[0]). \] Let $T=L(k)$.
We observe that for any sequence $j_{0},j_{1},\ldots \in \{1,2,\ldots,d\}$, we have \[ \mathbb{P}[(i_{0}, i_{1},\ldots,i_{T}) = (j_{0},j_{1},\ldots,j_{T}) | \mathcal{E}] = \mathbb{P}[(i_{0},i_{1},\ldots,i_{T}) = w_{T}(j_{0},j_{1},\ldots,j_{T}) | \mathcal{E}] \] and so for any $m \in \mathbb{N}$ and $J \in \{1,2,\ldots,d\}^{m+1}$ \[ \mathbb{P}[\{T=m\} \cap \{(i_{0},i_{1},\ldots,i_{T}) = J \} | \mathcal{E}] = \mathbb{P}[\{T=m\} \cap \{(i_{0},i_{1},\ldots,i_{T}) = w_{m}(J) \} | \mathcal{E}]. \] Then, for $\{X_{t}\}_{t \in \mathbb{N}} \sim g$ drawn according to the forward-mapping representation, and for any $x \in X$, we have \begin{align*} G_{1}&(x,1,A) = \mathbb{P}_{x}[X_{L(k)} \in A | \mathcal{E}] \\ &= \sum_{m \geq 0} \sum_{J \in \{1,\ldots,d\}^{m+1}} \mathbb{P}_{x}[X_{L(k)} \in A | \mathcal{E}; \, \{ T=m\} \cap \{(i_{0},i_{1},\ldots,i_{T}) = J\}] \\ & \qquad \times \mathbb{P}[\{ T=m\} \cap \{(i_{0},i_{1},\ldots,i_{T}) = J\} | \mathcal{E}] \\ &= \frac{1}{2} \sum_{m \geq 0} \sum_{J \in \{1,\ldots,d\}^{m+1}} (\mathbb{P}_{x}[X_{L(k)} \in A | \mathcal{E}; \, \{ T=m\} \cap \{(i_{0},i_{1},\ldots,i_{T}) = J\}] \\ &\qquad +\mathbb{P}_{x}[X_{L(k)} \in A | \mathcal{E}; \, \{ T=m\} \cap \{(i_{0},i_{1},\ldots,i_{T}) = w_{m}(J)\}] ) \\ &\qquad \times \mathbb{P}[\{ T=m\} \cap \{(i_{0},i_{1},\ldots,i_{T}) = J\} | \mathcal{E}] \\ &= \sum_{m \geq 0} \sum_{J \in \{1,\ldots,d\}^{m+1}} \frac{1}{2} (\prod_{\ell=0}^{m} g^{(J[\ell])} + \prod_{\ell=0}^{m} g^{(w_{m}(J)[\ell])})(x,1,A) \, \mathbb{P}[\{ T=m\} \cap \{(i_{0},i_{1},\ldots,i_{T}) = J\} | \mathcal{E}]. \end{align*} Recalling that $g^{(i)}$ is $\pi$-reversible for every $i$, we see that $\frac{1}{2} (\prod_{\ell=0}^{m} g^{(J[\ell])} + \prod_{\ell=0}^{m} g^{(w_{m}(J)[\ell])})$ is the additive reversibilization of the kernel $\prod_{\ell=0}^{m} g^{(J[\ell])}$.
Hence, $\frac{1}{2} (\prod_{\ell=0}^{m} g^{(J[\ell])} + \prod_{\ell=0}^{m} g^{(w_{m}(J)[\ell])})$ is itself $\pi$-reversible (see \textit{e.g.} the introduction of \cite{choi2017metropolis} for a careful presentation of the additive reversibilization and a general argument as to why it is reversible). Thus, $G_{1}$ is $\pi$-reversible. Finally, the fact that $G_{1}$ satisfies Section \ref{assumptiondsf} follows immediately from the fact that the function in Equation \eqref{EqGibbsForwardMap} that gives the forward-mapping representation of $g$ is continuous. \end{proof} We conclude: \begin{thm}\label{ThmGibbsConc} Let $\mathcal{A}$ be the collection of transition kernels of the form given in \ref{EqDefGibbs}, that also have finite mixing time and satisfy $\rho(\theta) > 0$ for all $\theta \in \mathbb{R}^{d}$. Then there exists a universal constant $0<K<\infty$ such that, for all $0 < \alpha < 0.5$, there exist constants $0 < c_{\alpha} < \infty$, $0<c_{\alpha}'<\infty$ so that for all $\{g(x,1,\cdot)\}_{x\in X}\in \mathcal{A}$, \[ c_{\alpha} T_{m} \leq \ell_{H}^{(k)}(\alpha) \leq c_{\alpha}' (1 + \frac{t_{L}}{k}) \] uniformly in $k \geq Kd \log(t_{L}) $. \end{thm} \begin{proof} The first inequality follows immediately from \ref{ThmAsfMainBd} and \ref{LemmaGibbsSamplersAsf}. The second follows from \ref{LemmaASFClaim2} and \ref{mixequivalent}. \end{proof} \subsubsection{Using \ref{ThmGibbsConc} and \ref{MhThmMainCorr} }\label{SubsubsecUsingMain} We note that there are two obvious obstacles to using our main results, \ref{ThmGibbsConc} and \ref{MhThmMainCorr}, for practical problems: \begin{enumerate} \item Both refer to the transition kernel $G$ rather than the original kernel $g$ of interest. \item Both require some choice of $k$, which in turn requires some a-priori bound on the mixing time of $g_{L}$.
\end{enumerate} The first is not a practical problem, as it is straightforward to sample from $G$: \begin{enumerate} \item Sample a Markov chain $\{X_{t}\}_{t \in \mathbb{N}} \sim g$. \item Transform the sequence $\{X_{0},X_{1},X_{2},\ldots\}$ by repeating each element a number of times given by a geometric(2) random variable, resulting in the sequence $\{X_{0}',X_{1}', X_{2}',\ldots\}$, where $X_{0}' = X_{0}$. \item Take the $k$-skeleton of this sequence, $\{Y_{0},Y_{1},Y_{2},\ldots\} \equiv \{X_{0}',X_{k}',X_{2k}',\ldots\}$. \item As in step \textbf{(2)}, transform the sequence $\{Y_{0},Y_{1},Y_{2},\ldots\}$ by repeating each element a number of times given by a geometric(2) random variable, resulting in the sequence $\{Y_{0}',Y_{1}', Y_{2}',\ldots\} \sim G$. \end{enumerate} The second point is slightly more subtle. In practice, it is often possible to find a \textit{weak} upper bound on a mixing time, even if \textit{practical} upper bounds are much harder. For a typical example, the paper \cite{collins2013accessibility} finds a very generic upper bound on the mixing time of a family of Markov chains that includes many Gibbs samplers. For many well-studied target distributions on $\mathbb{R}^{d}$ such as the uniform distribution on the simplex, box or ball, the upper bounds in \cite{collins2013accessibility} are roughly of the form $t_{m} \lesssim e^{c_{1} d}$. On the other hand, after many years of careful study, the true mixing times of many of these Markov chains were shown to be polynomial in $d$ (see \textit{e.g.} \cite{vempala2005geometric}). In this situation, it was not too difficult to show an \textit{exponential} bound on the mixing time, but it was quite hard to show a \textit{polynomial} bound. This is exactly the situation in which \ref{ThmGibbsConc} and \ref{MhThmMainCorr} are useful.
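As a purely illustrative sketch, the four-step recipe for sampling from $G$ described above can be written generically. The helper names (`lazify`, `skeleton`, `G_sampler`) are ours, and a deterministic counter stands in for a draw from $g$:

```python
import itertools
import random

def lazify(chain):
    """Steps (2)/(4): repeat each state a geometric(2) number of times
    (mean 2), which simulates the corresponding lazy chain."""
    def lazy():
        for x in chain():
            yield x
            while random.random() < 0.5:  # hold with probability 1/2
                yield x
    return lazy

def skeleton(chain, k):
    """Step (3): keep every k-th state, i.e. the k-skeleton."""
    def skel():
        for i, x in enumerate(chain()):
            if i % k == 0:
                yield x
    return skel

def G_sampler(chain, k):
    # Lazy, then k-skeleton, then lazy again: exactly (g_L^{(k)})_L.
    return lazify(skeleton(lazify(chain), k))

# Toy base "chain": a deterministic counter standing in for {X_t} ~ g.
base = lambda: itertools.count(0)
draws = list(itertools.islice(G_sampler(base, k=3)(), 20))
assert draws[0] == 0 and draws == sorted(draws)
```

Since the counter is monotone, any trajectory of the composed chain must start at $0$ and be nondecreasing, which the final assertion checks regardless of the realized geometric holding times.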
If one can show that \textit{e.g.} $t_{L} \lesssim 2^{d}$, one can then choose $k \approx d \log(2^{d}) \approx d^{2}$ and use \ref{ThmGibbsConc} or \ref{MhThmMainCorr} to obtain much sharper estimates on the mixing time. \bibliographystyle{imsart-nameyear}
\section{Introduction} \IEEEPARstart{M}{olecular} communication (MC) is an emerging field in which nano-scale devices communicate with each other via chemical signaling, based on exchanging small {\em information particles} \cite{eckBook, far16ST}. For instance, in biological systems MC can take place using hormones, pheromones, or ribonucleic acid molecules. To embed information in these particles one may use the particle's type \cite{kim13}, concentration \cite{kur12, nak12}, number \cite{far15TNANO}, or the time of release \cite{eck07, ITsubmission}. Particles can be transported from the transmitter to the receiver via diffusion, active transport, bacteria, and flow, as described in \cite[Sec. III.B]{far16ST} and the references therein. Although this new field is still in its infancy, several basic experimental systems serve as a proof of concept for transmitting short messages at low bit rates \cite{far13,lee2015_infocom,koo16}. There are several similarities between traditional electromagnetic (EM) communication and MC. As a result, several prior works have used tools and algorithms developed for EM communication in the design of MC systems. In particular, the work \cite{kil13} studied on-off transmission via diffusion of information particles, where the information is recovered at the receiver based on the measured concentration. A channel model with finite memory was proposed, which involves additive Gaussian noise, along with several sequence detection algorithms such as maximum a-posteriori (MAP) detection and maximum likelihood (ML) detection. The work \cite{Meng14} studied a similar setup proposing a technique for inter-symbol interference (ISI) mitigation and deriving a reduced-state ML sequence detection algorithm. 
Finally, \cite{noe14_RxDesign} studied on-off transmission over diffusive molecular channels with flow, proposed an ML sequence detection algorithm for this channel, and \changeYony{designed a family of sub-optimal weighted sums detectors with relatively low complexity}. While the above works build upon the similarities between EM communication and MC, namely, linear channel models with additive (and in some cases Gaussian) noise, there are aspects in which MC is fundamentally different from traditional EM communication. For instance, in EM communication the symbol duration is fixed, while in MC the symbol duration is often a random variable (RV). Therefore, information particles may arrive out-of-order, which makes correctly detecting particles in the order in which they were transmitted very challenging, in particular when the transmitted information particles are indistinguishable \cite{rose15, rose:InscribedPart1, nanoComNet}. This work focuses on receiver design for MC systems where information is modulated through the {\em time of release of the information particles}, which is reminiscent of pulse position-modulation \cite{shiu99}. A common assumption, which is accurate for many sensors, is that after some time duration each particle is absorbed by the receiver and removed from the environment. In this case, the random delay until a released particle arrives at the receiver can be modeled as a channel with an additive noise term. For diffusion-based channels {\em without flow}, this additive noise is L\'evy-distributed \cite{yilmaz20143dChannelCF}, while for diffusion-based channels {\em with flow}, this additive noise follows an inverse Gaussian (IG) distribution \cite{sri12}. Fig.~\ref{fig:diffuseMolComm} illustrates the additive noise timing channel model studied in this work. 
At first glance, the cases of diffusion with and without flow may seem similar; however, a closer look reveals a fundamental difference which stems from the different properties of the additive noise modeling the random propagation delay of each particle. The L\'evy distribution has an algebraic tail\footnote{An RV $X$ has an algebraic tail if there exist $\rho_1,\rho_2 > 0$ such that $\lim_{x \to \infty} x^{\rho_2} \Pr \{ |X| > x \} = \rho_1$.} \cite{nol15, gonzales07}, while the tail of the IG distribution, similarly to the standard Gaussian distribution, decays exponentially. Thus, traditional linear detection and signal processing techniques, which work well in the presence of noise with exponentially-decaying distributions such as Gaussian or IG noise, may perform poorly in the presence of additive L\'evy noise. The need for new detection methods in communication systems operating over channels with additive noise characterized by algebraic tails was observed in \cite{nikias-book} based on numerical simulations. \changeYony{In this work we rigorously prove that, when multiple particles are simultaneously released, the detection performance in diffusion-based molecular timing (DBMT) channels {\em without} flow {\em cannot be improved} by linear processing, compared to optimal detection when a single particle is released. While in the case of the DBMT channel without flow the noise is L\'evy distributed, thus belonging to the family of $\alpha$-stable distributions \cite{nol15, zolotarev-book}, our result regarding the inefficiency of linear processing extends to any $\alpha$-stable noise with $\alpha < 1$.} We note that $\alpha$-stable distributions are commonly used to model impulsive noise \cite{nikias-book, yang14, Fah_arXiv16}. Yet, the focus of the studies \cite{nikias-book, yang14, Fah_arXiv16} was on {\em symmetric} stable distributions.
On the other hand, in this work we focus on the DBMT channel without flow, in which the additive noise follows the {\em asymmetric} L\'evy distribution. In addition to the fact that the tails of the additive noise decay slowly, ordering in time is not preserved in the considered diffusion-based timing channel. In particular, the information particles associated with a given symbol may arrive later than particles associated with a subsequent symbol. This gives rise to ISI. \changeYony{In the works \cite{nanoComNet, nanocom16} we designed a sequence detector for time-slotted transmission over DBMT channels without flow, when a single information particle is used per symbol. In this work, on the other hand, we focus on systems for which the ISI is negligible and multiple information particles are used to modulate a symbol. This setting arises in molecular communication systems with long symbol times such that the propagation delay of information particles is typically less than a symbol time.} Negligible ISI also arises in systems with one-shot bursty communication, such as a sensor that occasionally sends a single symbol conveying one or more bits, and then remains silent for many symbol times. Since we neglect ISI in our model, each symbol transmission can be analyzed independently. Specifically, we consider an MC system in which the information is encoded in the time of release of the information particles, where this time is selected out of a set with {\em finite cardinality}, namely, a {\em finite constellation} is used. At each transmission {\em $M$ information particles are simultaneously} released at the time corresponding to the current symbol, while the receiver's objective is to detect this transmission time. Note that $M$ is constant and does not change from one transmission to the next, i.e., information is not encoded in the number of particles. The $M$ particles travel over a DBMT channel without flow. 
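The "long symbol time" requirement can be quantified under the L\'evy propagation-delay model of the DBMT channel without flow. The stdlib-only Python sketch below (with an arbitrary illustrative scale $c = 1$\,s) evaluates the probability that a particle is still in flight $T_s$ seconds after its release; because the tail is algebraic, this probability decays only like $T_s^{-1/2}$, so the slot length needed for a given ISI level grows quadratically with the reliability target.

```python
import math

C = 1.0  # Levy scale parameter c = d^2/(2D), in seconds (illustrative value)

def prob_late(T_s, c=C):
    """Pr{Z > T_s} for a Levy(0, c) propagation delay Z: the chance that a
    particle released at t = 0 has not arrived by the end of a slot of
    length T_s."""
    return 1.0 - math.erfc(math.sqrt(c / (2.0 * T_s)))

# The late-arrival probability shrinks only by ~10x per 100x increase in T_s.
for T_s in (1e1, 1e3, 1e5):
    approx = math.sqrt(2.0 * C / (math.pi * T_s))  # leading-order tail term
    print(f"T_s = {T_s:8.0f} s:  Pr(late) = {prob_late(T_s):.5f}  (~ {approx:.5f})")
```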
We assume that consecutive channel uses are independent and identically distributed (i.i.d.). We derive the ML detection rule for our system, which, as expected, entails high computational complexity. This motivates studying detectors with lower complexity. A common approach to reducing detector complexity in traditional EM communication, which was also proposed in \cite{sri12} for an MC system, is to use a linear detector. \changeYony{We show that for $M$ i.i.d. samples of {\em any} $\alpha$-stable additive noise with $\alpha<1$, and in particular for L\'evy-distributed noise, linearly combining these samples results in an $\alpha$-stable RV with a dispersion larger than or equal to the dispersion of the original samples. Here dispersion is a parameter of the distribution measuring its spread,\footnote{The variance of a stable RV with $\alpha<2$ is infinite.} see \cite[Defs. 1.7 and 1.8]{nol15}.} This increased dispersion degrades the probability of correct detection, compared to the case of a {\em single particle}. In other words, a linear detector in our system has better performance when a single particle is used to convey the symbol time ($M=1$) compared to when multiple particles convey the symbol time ($M>1$). To the best of our knowledge this is the first proof that linear processing degrades the performance of multiple particle release relative to single particle release in an MC system. In order to take advantage of multiple transmitted particles per symbol, we propose a new detector based on the first arrival (FA) among the $M$ information particles. \changeYony{We show that the probability density of the FA, conditioned on the transmitted time, concentrates towards the transmission time when $M$ increases, see Fig. \ref{fig:CondDistributions} in Section \ref{subsec:FA}. This increases the probability of correct detection, compared to the case of a single particle.
This is in contrast to the probability density of a linear combination of the arrival times, conditioned on the transmission time, which disperses from the transmission time compared to the case of a single particle.} Furthermore, we show that the performance of the proposed FA detector is very close to that of the optimal ML detector, for small values of $M$ \changeYony{(on the order of tens)}. On the other hand, we use error exponent analysis to show that for large values of $M$, i.e., $M \to \infty$, ML significantly outperforms the FA detector, which agrees with the fact that the FA is {\em not} a sufficient statistic for the arrival time of all $M$ transmitted particles, as is computed by the ML detector. The rest of this paper is organized as follows. The problem formulation is presented in Section \ref{sec:ProbForm}. Sections \ref{sec:DBMT}--\ref{subsec:PerfCompare} study the case of a binary constellation \changeYony{($M$ particles are simultaneously released in one of two pre-defined timings)}: The ML detector and linear detection are studied in Section \ref{sec:DBMT}. The FA detector is derived in Section \ref{subsec:FA}, while its performance is compared to the performance of the ML detector in Section \ref{subsec:PerfCompare}. The FA detector is extended to the case of larger constellations in Section \ref{sec:beyondBinary}. Numerical results are presented in Section \ref{sec:numRes}, and concluding remarks are provided in Section \ref{sec:conc}. {\bf {\slshape Notation}:} We denote the set of real numbers by $\mathcal{R}$, the set of positive real numbers by $\mathcal{R}^{+}$, and the set of integers by $\mathcal{N}$. Other than these sets, we denote sets with calligraphic letters, e.g., $\mathcal{B}$. We denote RVs with upper case letters, e.g., $X$, $Y$, and their realizations with lower case letters, e.g., $x$, $y$. An RV takes values in the set $\mathcal{X}$, and we use $|\mathcal{X}|$ to denote the cardinality of a finite set. 
We use $f_{Y}(y)$ to denote the probability density function (PDF) of a continuous RV $Y$ on $\mathcal{R}$, $f_{Y|X}(y|x)$ to denote the conditional PDF of $Y$ given $X$, and $F_{Y|X}(y|x)$ to denote the conditional cumulative distribution function (CDF). We denote vectors with boldface letters, e.g., $\mathbf{x}, \mathbf{y}$, where the $k^{\text{th}}$ element of a vector $\mathbf{x}$ is denoted by $x_k$. Finally, we use $\Phi_{\text{G}}(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x}{e^{-u^2/2} du}$ to denote the CDF of a standard Gaussian RV, $\mathsf{erfc}(x) = \frac{2}{\sqrt{\pi}} \int_{x}^{\infty}{e^{-u^2} du}$ to denote the complementary error function, $\log (\cdot)$ to denote the natural logarithm, and $\mathbb{E}\{\cdot\}$ to denote stochastic expectation. \section{Problem Formulation} \label{sec:ProbForm} \subsection{System Model} \label{subsec:sysModel} Fig. \ref{fig:diffuseMolComm} illustrates a molecular communication channel in which information is modulated on {\em the time of release of the information particles}. We assume that the information particles themselves are {\em identical and indistinguishable} at the receiver. Therefore, the receiver can only use the time of arrival to decode the intended message. The information particles propagate from the transmitter to the receiver through some random propagation mechanism (e.g. diffusion). We make the following assumptions about the system: \begin{figure}[t] \begin{center} \includegraphics[width=0.8\columnwidth,keepaspectratio]{DiffuseMolComm.pdf} \end{center} \vspace{-0.3cm} \captionsetup{font=footnotesize} \caption{\label{fig:diffuseMolComm} Diffusion-based molecular communication timing channel.
$X$ denotes the release time, $Z$ denotes the random propagation time, and $Y$ denotes the arrival time.} \vspace{-0.25cm} \end{figure} \begin{enumerate}[label = {\bf A{\arabic*}})] \item \label{assmp:synch} The transmitter perfectly controls the release time of each information particle, and the receiver perfectly measures the arrival times of the information particles. Moreover, the transmitter and the receiver are perfectly synchronized in time. \item \label{assmp:Arrival} An information particle that arrives at the receiver is absorbed and removed from the propagation medium. \item \label{assmp:indep} All information particles propagate independently of each other, and their trajectories are random according to an i.i.d. random process. This is a reasonable assumption for many different propagation schemes in molecular communication such as diffusion in dilute solutions, i.e., when the number of particles released is much smaller than the number of molecules of the solution. \end{enumerate} \noindent Note that these assumptions have traditionally been made in previous works, e.g. \cite{nak12,ITsubmission,ata13,pie13, li14}, in order to make the models tractable. % % % Let $\mathcal{X}$ be a finite set of constellation points on the real line: $\mathcal{X} \triangleq \{\xi_0, \xi_1, \dots, \xi_{L-1} \}$, $0 \le \xi_0 \le \dots \le \xi_{L-1}$, and let $\xi_{L-1} < T_s < \infty$ denote the symbol duration. The $k^{\text{th}}$ transmission takes place at time $(k-1)T_s + X_k, X_k \in \mathcal{X}, k=1,2,\dots,K$. At this time, $M \in \mathcal{N}$ information particles are {\em simultaneously} released into the medium by the transmitter. We assume that at each transmission the same number of information particles is released. The transmitted information is encoded in the sequence $\{(k-1)T_s + X_k \}_{k=1}^K$, which is assumed to be independent of the random propagation time of {\em each} of the information particles.
Let $\mathbf{Y}_k$ denote an $M$-length vector consisting of the times of arrival of each of the information particles released at time $(k-1)T_s + X_k$. It follows that $Y_{k,m} > (k-1)T_s + X_k, m=1,2,\dots,M$. Thus, we obtain the following additive noise channel model: \begin{align} Y_{k,m} = (k-1)T_s + X_k + Z_{k,m}, \label{eq:LevyChan_k} \end{align} \noindent for $k \mspace{-2mu} = \mspace{-2mu} 1,2,\dots,K, m \mspace{-2mu} = \mspace{-2mu}1,2,\dots,M$, where $Z_{k,m}$ is a random noise term representing the propagation time of the $m^{\text{th}}$ particle of the $k^{\text{th}}$ transmission. Note that Assumption \ref{assmp:indep} implies that all the RVs $Z_{k,m}$ are independent. In the channel model \eqref{eq:LevyChan_k}, particles may arrive out of order, which results in a channel with memory. In this work, however, we assume that each information particle arrives before the next transmission takes place. This assumption can be formally stated as: \begin{enumerate}[label = {\bf A{\arabic*}}), resume] \item \label{assmp:memoryless} $T_s$ is a fixed constant chosen to be large enough such that the arrival times obey $Y_{k,m} \le k T_s$ with high probability.\footnote{Formally, let $\eta$ be a probability arbitrarily close to one; then we choose $T_s$ such that $\Pr \{ Y_{k,m} \le k T_s \} > \eta, k=1,2,\dots,K, m=1,2,\dots,M$.} \end{enumerate} \noindent With this assumption, we obtain an i.i.d. memoryless channel model which can be written as: \begin{align} Y_{m} = {X} + Z_{m}, \quad m=1,2,\dots,M. \label{eq:LevyChan} \end{align} \noindent In the rest of this work we focus on this memoryless channel model. Assumption \ref{assmp:memoryless} implies that $T_s$ is chosen such that consecutive transmissions are sufficiently separated in time relative to the random propagation delays of each particle. Thus, the effective communication channel is memoryless.
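The memoryless model \eqref{eq:LevyChan} is straightforward to simulate, which is useful for sanity-checking the detectors derived below. A L\'evy$(0,c)$ variate can be drawn as $c/N^2$ with $N$ a standard Gaussian, since $\Pr\{c/N^2 \le z\} = \Pr\{|N| \ge \sqrt{c/z}\} = \mathsf{erfc}(\sqrt{c/(2z)})$, matching the CDF given in the next subsection. The following stdlib-only Python sketch (illustrative $c$) draws one channel use and checks the sampler against the theoretical median $\approx 2.198\,c$.

```python
import math
import random

def sample_levy(c, rng):
    """Draw Z ~ Levy(0, c) as c / N^2 with N ~ N(0, 1)."""
    n = rng.gauss(0.0, 1.0)
    return c / (n * n)

def channel_use(x, M, c, rng):
    """One use of Y_m = x + Z_m, m = 1..M (M particles released at time x)."""
    return [x + sample_levy(c, rng) for _ in range(M)]

rng = random.Random(0)
c = 1.0

# Sampler check: the Levy(0, c) median solves erfc(sqrt(c/(2z))) = 1/2,
# which gives z ~ 2.198 * c.
samples = sorted(sample_levy(c, rng) for _ in range(200_000))
emp_median = samples[len(samples) // 2]
print(f"empirical median = {emp_median:.3f}  (theory ~ 2.198)")

print("one channel use, x = 0.5, M = 3:", channel_use(0.5, 3, c, rng))
```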
To simplify the presentation, in most of this work we restrict our attention to the case of binary modulation, i.e., $\mathcal{X} = \{ \xi_0, \xi_1 \}$. Higher-order modulations are discussed in Section \ref{sec:beyondBinary}. Let $S \in \{ 0,1 \}$ be an equiprobable bit to be sent over the channel \eqref{eq:LevyChan} to the receiver, and denote the estimate of $S$ at the receiver by $\hat{S}$. We note that our results can be easily extended to the case of different a-priori probabilities on the transmitted bits. Our objective is to design a receiver that minimizes the probability of error $P_{\varepsilon} = \Pr \{S \neq \hat{S} \}$. In order to minimize $P_{\varepsilon}$ we maximize the spacing between $\xi_0$ and $\xi_1$, and without loss of generality we use the following mapping for transmission: \begin{equation} X(s) = \begin{cases} 0, & s = 0 \\ \Delta , & s = 1. \end{cases} \label{eq:TxMapping} \end{equation} \noindent Note that the above description of communication over a molecular timing (MT) channel is fairly general and can be applied to different propagation mechanisms as long as Assumptions \ref{assmp:synch}--\ref{assmp:memoryless} are not violated. Next, we describe the DBMT channel. \subsection{The DBMT Channel} \label{subsec:DBMTdef} In diffusion-based propagation, the released particles follow a random Brownian path from the transmitter to the receiver. In this case, to specify the random additive noise term $Z_m$ in \eqref{eq:LevyChan}, we define a L\'evy-distributed RV as follows: \begin{definition}[{\em L\'evy distribution}] Let $Z$ be L\'evy-distributed with location parameter $\mu$ and scale parameter $c$ \cite{nol15}.
Then its PDF is given by: \begin{align} \label{eq:LevyNoise} f_Z(z)= \begin{cases} \sqrt{\frac{c}{2 \pi (z-\mu)^3}}\exp \left( -\frac{c}{2(z-\mu)} \right), & z>\mu \\ 0, & z\leq \mu \end{cases}, \end{align} \noindent and its CDF is given by: \begin{align} \label{eqn:LevyCDF} F_Z(z) = \begin{cases} \mathsf{erfc}\left(\sqrt{\frac{c}{2(z-\mu)}}\right), & z>\mu \\ 0, & z\leq\mu \end{cases}. \end{align} \end{definition} Let $d$ denote the distance between the transmitter and the receiver, and $D$ denote the diffusion coefficient of the information particles in the propagation medium. Following along the lines of the derivations in \cite[Sec. II]{sri12}, and using the results of \cite[Sec. 2.6.A]{karatzas-shreve}, it can be shown that for 1-dimensional pure diffusion, the propagation time of each of the information particles follows a L\'evy distribution, denoted in this work by $\mathscr{L}(\mu,c)$, with $c = \frac{d^2}{2D}$ and $\mu=0$. Thus, $Z_m \sim \mathscr{L}(0,c), m=1,2,\dots,M$. \changeYony{Note that the scale parameter $c$ increases quadratically with the distance $d$ between the transmitter and the receiver, and is inversely proportional to the diffusion coefficient $D$, which has units of square meters per second. Thus, the scale parameter $c$ has units of seconds.} \begin{remark}[{\em Scaled L\'evy distribution for 3-D space}] The work \cite{yilmaz20143dChannelCF} showed that a scaled L\'evy distribution can also model the first arrival time in the case of an infinite, three-dimensional homogeneous medium without flow. Hence, our results can be extended to 3-D space by simply introducing a scalar factor. \end{remark} The L\'evy distribution belongs to the class of stable distributions, discussed in the next subsection. For a detailed description we refer the reader to \cite{nol15, zol86-book}.
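As a concrete numerical example, the following stdlib-only Python snippet evaluates \eqref{eq:LevyNoise} and \eqref{eqn:LevyCDF}, computes the scale $c = d^2/(2D)$ for illustrative (hypothetical, not experimentally calibrated) values of $d$ and $D$, and verifies numerically that the derivative of the CDF reproduces the PDF.

```python
import math

def levy_pdf(z, mu=0.0, c=1.0):
    """Levy PDF: sqrt(c/(2*pi*(z-mu)^3)) * exp(-c/(2*(z-mu))) for z > mu."""
    if z <= mu:
        return 0.0
    return math.sqrt(c / (2.0 * math.pi * (z - mu) ** 3)) * math.exp(-c / (2.0 * (z - mu)))

def levy_cdf(z, mu=0.0, c=1.0):
    """Levy CDF: erfc(sqrt(c/(2*(z-mu)))) for z > mu."""
    if z <= mu:
        return 0.0
    return math.erfc(math.sqrt(c / (2.0 * (z - mu))))

# Scale parameter for 1-D diffusion: c = d^2/(2D) (units of seconds).
d = 1e-6    # transmitter-receiver distance [m], illustrative
D = 1e-10   # diffusion coefficient [m^2/s], illustrative
c = d ** 2 / (2.0 * D)
print(f"c = {c:.4f} s")

# Consistency check: the numerical derivative of the CDF matches the PDF.
z, h = 0.02, 1e-7
cdf_slope = (levy_cdf(z + h, c=c) - levy_cdf(z - h, c=c)) / (2.0 * h)
print(f"CDF'({z}) = {cdf_slope:.5f},  PDF({z}) = {levy_pdf(z, c=c):.5f}")
```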
\subsection{Stable Distributions} \begin{definition}[{\em Stable distribution}] An RV $X$ has a stable distribution if for two independent copies of $X$, $X_1$ and $X_2$, and for any constants $a_1, a_2 \in \mathcal{R}^{+}$, there exist constants $a_3 \in \mathcal{R}^{+}$ and $a_4 \in \mathcal{R}$ such that: \begin{align} a_1 X_1 + a_2 X_2 \stackrel{d}{=} a_3 X + a_4, \label{eq:stableDistDef} \end{align} \noindent where $\stackrel{d}{=}$ denotes equality in distribution, i.e., both expressions follow the same probability law. \end{definition} Stable distributions can also be defined via their characteristic function. \begin{definition}[{\em Characteristic function of a stable distribution}] Let $-\infty < \mu < \infty, c\ge 0, 0 < \alpha \le 2$, and $-1 \le \beta \le 1$. Further define: \begin{align*} \Phi(t,\alpha) \triangleq \begin{cases} \tan \left( \frac{\pi \alpha}{2}\right), & \alpha \ne 1 \\ -\frac{2}{\pi} \log (|t|), & \alpha = 1 \end{cases}. \end{align*} \noindent Then, the characteristic function of a stable RV $X$, with location parameter $\mu$, scale (or dispersion) parameter $c$, characteristic exponent $\alpha$, and skewness parameter $\beta$, is given by: \begin{align} \varphi(t;\mu,c,\alpha,\beta) \mspace{-3mu} = \mspace{-3mu} \exp \left\{ j \mu t \mspace{-3mu} - \mspace{-3mu} |ct|^\alpha (1 \mspace{-3mu} - \mspace{-3mu} j \beta \mathsf{sgn}(t) \Phi(t,\alpha))\right\}. \label{eq:stableCharFunc} \end{align} \end{definition} In the following, we use the notation $\mathscr{S}(\mu, c, \alpha, \beta)$ to represent a stable distribution with the parameters $\mu, c, \alpha$, and $\beta$. Apart from several special cases, stable distributions do not have closed-form PDFs. The exceptional cases are the Gaussian distribution ($\alpha = 2$), the Cauchy distribution $(\alpha = 1)$, and the case of $\alpha = \frac{1}{2}$ which was very recently derived in \cite[Theorem 2]{NGCE:15}.
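A quick Monte Carlo illustration of \eqref{eq:stableDistDef}, using the L\'evy noise of the previous subsection: taking $a_1 = a_2 = 1$ and $a_4 = 0$, the sum of two i.i.d. L\'evy$(0,c)$ variates is again L\'evy, with scale $4c$ (the characteristic functions multiply, and $2\sqrt{c} = \sqrt{4c}$). The stdlib-only Python sketch below checks this empirically; the sampling identity $Z = c/N^2$ with $N$ standard Gaussian is an assumption justified by matching CDFs.

```python
import math
import random

def sample_levy(c, rng):
    """Draw a Levy(0, c) variate as c / N^2, N ~ N(0, 1)."""
    n = rng.gauss(0.0, 1.0)
    return c / (n * n)

def levy_cdf(z, c):
    """Levy(0, c) CDF."""
    return math.erfc(math.sqrt(c / (2.0 * z))) if z > 0.0 else 0.0

rng = random.Random(42)
c, n_trials = 1.0, 200_000

# Stability check: X1 + X2 with X_i ~ Levy(0, c) should be Levy(0, 4c).
sums = sorted(sample_levy(c, rng) + sample_levy(c, rng) for _ in range(n_trials))
for z in (2.0, 10.0, 50.0):
    emp = sum(1 for s in sums if s <= z) / n_trials
    print(f"z = {z:5.1f}:  empirical CDF {emp:.4f}   Levy(0, 4c) CDF {levy_cdf(z, 4.0 * c):.4f}")
```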
Note that the L\'evy distribution is a special case of the results of \cite{NGCE:15} with $\beta = 1$, i.e., the L\'evy distribution belongs to the class of stable distributions with the parameters $\mathscr{S}(\mu, c, \frac{1}{2}, 1)$. Thus, its characteristic function is given by: \begin{align*} \varphi(t) = \exp \left\{ j \mu t - \sqrt{-2jct} \right\}. \end{align*} \noindent Finally, we note that all stable distributions, apart from the case $\alpha = 2$, have infinite variance, and all stable distributions with $\alpha \le 1$ also have infinite mean. In fact, for $\alpha < 2$, the absolute moment of order $p$ is finite if and only if $p < \alpha$, see \cite{gonzales07}. Next, we study ML and linear detection of particle arrival time for transmission over the DBMT channel. \section{Transmission over the DBMT Channel: ML and Linear Detection} \label{sec:DBMT} \subsection{Transmission over the Single-Particle DBMT Channel} \label{sec:SingleMolecule} We begin this section with the relatively simple case in which a single information particle is released, i.e., $M=1$. For this setup, the decision rule that minimizes the probability of error, and the corresponding minimal probability of error, are given in the following proposition: \begin{proposition} \label{prop:decRuleSymbBySymb} The decision rule that minimizes the probability of error when $M=1$ is given by: \begin{align} \hat{S}_{\text{ML}}(y_1) = \begin{cases} 0, & y_1 < \theta \\ 1, & y_1 \ge \theta, \end{cases} \label{eq:decisionRule} \end{align} \noindent where $\theta$ is the unique solution, in the interval $[\Delta, \Delta + \frac{c}{3}]$, of the following equation in $y_1$: \begin{align} y_1(y_1-\Delta) \log \left( \frac{y_1}{y_1-\Delta} \right) = \frac{c \Delta}{3}, \quad y_1 > \Delta > 0.
\label{eq:thetaEquation} \end{align} \noindent Furthermore, the probability of error of this decision rule is given by: \begin{align} P_{\varepsilon} = 0.5 \left(1 - \mathsf{erfc} \left( \sqrt{\frac{c}{2\theta}} \right) + \mathsf{erfc} \left( \sqrt{\frac{c}{2(\theta - \Delta)}} \right) \right). \label{eq:errProbSymbBySymb} \end{align} \end{proposition} \begin{remark}[{\em Asymmetric channel}] \label{rem:asymmetry} The first term on the right-hand-side (RHS) of \eqref{eq:errProbSymbBySymb} corresponds to the probability of error when $X=0$ is transmitted, while the second term corresponds to the case of $X = \Delta$. As we consider a non-negative and heavy-tailed distribution, it follows that: \begin{equation*} 1 - \mathsf{erfc} \left( \sqrt{\frac{c}{2\theta}} \right) \gg \mathsf{erfc} \left( \sqrt{\frac{c}{2(\theta - \Delta)}} \right). \end{equation*} \noindent This implies that the channel is asymmetric, and the probabilities of error in sending $S=0$ or $S=1$ are different. \changeYony{The probabilities of error for each of the symbols can be made equal} by alternating the assignments of bits in \eqref{eq:TxMapping} over time, or by applying coding dedicated to asymmetric channels, see \cite{Mondelli_arXiv14, constantin, zhou13} and references therein. \end{remark} \begin{proof}[Proof of Proposition \ref{prop:decRuleSymbBySymb}] The optimal symbol-by-symbol decision rule is the MAP rule \cite[Ch. 4.1]{ProakisDigComm}. As we consider a binary detection problem with equiprobable constellation points, the MAP rule specializes to the ML rule, which using the mapping \eqref{eq:TxMapping} is written as: \begin{align} \frac{f_{Y|X}(y_1|x=0)}{f_{Y|X}(y_1|x=\Delta)} \mspace{8mu} \begin{matrix} \hat{S} = 0 \\ \gtrless \\ \hat{S} = 1 \end{matrix} \mspace{8mu} 1, \quad y_1 > \Delta.
\label{eq:MAPrule} \end{align} \noindent Plugging the density in \eqref{eq:LevyNoise} with $\mu = x$ into the left hand side (LHS) of \eqref{eq:MAPrule}, and applying $\log(\cdot)$ on both sides, we obtain \eqref{eq:thetaEquation}. The uniqueness of the threshold $\theta$ follows from the fact that the PDFs for both hypotheses are shifted versions of the L\'evy PDF, which is unimodal \cite[Ch. 2.7]{zolotarev-book}. A formal and rigorous proof for this uniqueness is provided in Appendix \ref{annex:Uniqueness_proof}. Regarding the probability of error, for the case of $y_1<\Delta$, we note that due to causality $s$ must be equal to $0$. For $y_1 \ge \Delta$ we write: \begin{align*} P_{\varepsilon} & \stackrel{(a)}{=} 0.5 \left( \Pr \{ y > \theta | s=0 \} + \Pr \{ y \le \theta | s=1 \} \right) \\ % & \stackrel{(b)}{=} 0.5 \left(1 - \mathsf{erfc} \left( \sqrt{\frac{c}{2\theta}} \right) + \mathsf{erfc} \left( \sqrt{\frac{c}{2(\theta - \Delta)}} \right) \right), \end{align*} \noindent where (a) follows from the assumption that the symbols are equiprobable, and (b) follows from \eqref{eqn:LevyCDF}. We emphasize that this proposition can be easily extended to the case of unequal a-priori symbol probabilities. \end{proof} The probability of error in molecular communication with optimal detection can be reduced by transmitting multiple information particles for each symbol \cite{sri12}, \cite{men12}, namely, using $M>1$ particles for each transmission.\footnote{As we assume that the transmitter and the receiver are perfectly synchronized, the best strategy is to simultaneously release $M$ molecules. Releasing the $M$ molecules at different times can only increase the ambiguity at the receiver and therefore increase the probability of error \cite[Sec IV.C]{sri12}.} \changeYony{In fact, in \cite{ITsubmission} we showed that the capacity of the DBMT channel scales at least poly-logarithmically with $M$.
Yet, capacity analysis in general, including that of \cite{ITsubmission}, does not provide an analysis of the probability of error, nor does it provide decoding methods for practical modulations.} In this section we first present the ML detector for the DBMT channel, and then discuss lower-complexity detection approaches. \subsection{ML Detection for $M>1$} Let $\mathbf{y} = \{ y_m\}_{m=1}^M$. The following proposition characterizes the ML detector based on the channel outputs $\mathbf{y}$: \begin{proposition} \label{prop:MLdetectorMultiple} The decision rule that minimizes the probability of error for $M \ge 1$ is given by: \begin{align} \hat{S}_{\text{ML}}(\mathbf{y}) \mspace{-3mu} = \mspace{-3mu} \begin{cases} 1, & \forall y_m: y_m \mspace{-3mu} > \mspace{-3mu} \Delta, \text{ and } \\ & \mspace{10mu} \sum_{m=1}^{M}{\log \mspace{-2mu} \left( \mspace{-2mu} \frac{y_m-\Delta}{y_m} \mspace{-2mu} \right) \mspace{-3mu} + \mspace{-3mu} \frac{c \Delta}{3} \frac{1}{y_m(y_m - \Delta)}} \mspace{-3mu} \le \mspace{-3mu} 0 \\ 0, & \text{otherwise}. \end{cases} \label{eq:decisionRuleMultiple} \end{align} \end{proposition} \begin{proof} The proof follows along the same lines as the proof of Prop. \ref{prop:decRuleSymbBySymb}. More precisely, as the a-priori probabilities are equal, the optimal detection rule is ML. Using Assumption \ref{assmp:indep} the joint conditional density of $\mathbf{y}$ is a product of the individual conditional densities, and applying $\log(\cdot)$ results in the condition $\sum_{m=1}^{M}{\log \left( \frac{y_m-\Delta}{y_m} \right) + \frac{c \Delta}{3} \frac{1}{y_m(y_m - \Delta)}} \le 0$. Finally, as the additive noise is positive, if $\exists y_m: y_m \le \Delta$, then the likelihood of $\mathbf{y}$ under $x = \Delta$ is zero, and therefore $\hat{S}_{\text{ML}}(\mathbf{y}) = 0$.
\end{proof} Although the above ML detector minimizes the probability of error, it lacks an exact performance analysis and is relatively complicated to compute; this in particular holds for the $\log(\cdot)$ operation \cite{paul09, gutierrez11, klinefelter15}. In the following we denote the probability of error of the ML detector by $P_{\varepsilon, \text{ML}}$. In traditional wireless communication, the common approach for reducing the complexity of detection is to apply {\em linear signal processing} to the sequence $\mathbf{y}$. The complexity of such a receiver is significantly lower compared to that of the ML detector, and for an AWGN channel this approach is known to be optimal \cite[Ch. 3.3]{TV:05}. \changeYony{In fact, even in non-Gaussian problems such as transmission over a timing channel with drift \cite[Sec. IV.C.2]{sri12}, modeled by the additive IG noise (AIGN) channel, the performance with linear detection improves by releasing multiple particles per symbol versus releasing just a single particle}. In the next subsection we argue that for the DBMT channel a linear receiver performs better when each symbol consists of a single particle release versus multiple particle releases. The sub-optimality of multiple particle releases versus a single particle release when linear signal processing is applied at the receiver, for channels with $\alpha$-stable additive noise, was observed in \cite[Ch. 10.4.6]{nikias-book}. Yet, to the best of our knowledge, the analysis in the next subsection is the first to rigorously show this effect. \subsection{Linear Detection for $M>1$} In this subsection we consider linear detection of signals transmitted over an additive channel corrupted by $\alpha$-stable noise with characteristic exponent smaller than unity, namely, we use the channel model \eqref{eq:LevyChan}, with the minor change that $Z_m \sim \mathscr{S}(0,c,\alpha,\beta), \alpha < 1$.
Thus, the results presented in this subsection also hold for the L\'evy-distributed noise. Let $\{ w_m \}_{m=1}^M$, $w_m \mspace{-3mu} \in \mspace{-3mu} \mathcal{R}^{+}, \sum_{m=1}^M \mspace{-3mu} w_m \mspace{-3mu} = \mspace{-3mu} 1$ be a set of coefficients, and consider ML detection based on $Y_{\text{LIN}} \mspace{-3mu} \triangleq \mspace{-3mu} \sum_{m=1}^M \mspace{-3mu} w_m Y_m$: \begin{align} \hat{X}_{\text{LIN}} = \operatornamewithlimits{argmax}_{x \in \{0, \Delta\}} f_{Y_{\text{LIN}}|X}(y_{\text{LIN}}|X=x). \label{eq:lin_detector} \end{align} \noindent Let $P_{\varepsilon, \text{LIN}}$ denote the probability of error of the detector $\hat{X}_{\text{LIN}}$. We now have the following theorem: \begin{theorem} \thmlabel{thm:badlindet} The probability of error of the linear detector under multiple particle release ($M>1$) is at least as high as the probability of error of the detector with the decision rule \eqref{eq:decisionRule} under single particle release ($M=1$), namely, $P_{\varepsilon, \text{LIN}} \ge P_{\varepsilon}$, where $P_{\varepsilon}$ is given in \eqref{eq:errProbSymbBySymb}. \end{theorem} \begin{proof} We show that given $X = x$, $Y_{\text{LIN}} \sim \mathscr{S}(x, c_{\text{LIN}},\alpha,\beta)$, with $c_{\text{LIN}} \ge c$. Note that given $X=x$, the $Y_m$'s are independent.
Therefore, the characteristic function of $Y_{\text{LIN}}$, given $X=x$, is given by: \begin{align*} & \varphi_{Y_{\text{LIN}}|X=x}(t) \nonumber \\ & \quad = \prod_{m=1}^M \exp \Big\{ j x w_m t \nonumber \\ & \mspace{120mu} - |c w_m t|^\alpha (1 - j \beta \mathsf{sgn}(w_m t) \Phi(w_m t,\alpha)) \Big\} \\ % & \quad \stackrel{(a)}{=} \prod_{m=1}^M \exp \left\{ j x w_m t - |c w_m t|^\alpha (1 - j \beta \mathsf{sgn}(t) \Phi(t,\alpha))\right\} \\ % & \quad = \exp \left\{ \sum_{m=1}^M \left\{ j x w_m t - |c w_m t|^\alpha (1 - j \beta \mathsf{sgn}(t) \Phi(t,\alpha)) \right\} \right\} \\ % & \quad \stackrel{(b)}{=} \exp \left\{ j x t - \left( \sum_{m=1}^M (c w_m)^\alpha \right) |t|^\alpha (1 - j \beta \mathsf{sgn}(t) \Phi(t,\alpha)) \right\} \\ % & \quad \stackrel{(c)}{=} \exp \left\{ j x t - |c_{\text{LIN}}t|^\alpha (1 - j \beta \mathsf{sgn}(t) \Phi(t,\alpha)) \right\}, \end{align*} \noindent where (a) follows from the fact that $w_m \mspace{-3mu} > \mspace{-3mu} 0$ and from the fact that $\Phi(t,\alpha)$ is independent of $t$, for $\alpha \mspace{-3mu} < \mspace{-3mu} 1$; (b) follows from the fact that $\sum_{m=1}^M w_m = 1$; and (c) follows by defining $c_{\text{LIN}} = c \cdot \left( \sum_{m=1}^M w_m^{\alpha} \right)^{\frac{1}{\alpha}}$. Therefore, given $X=x$, we have $Y_{\text{LIN}} \sim \mathscr{S}(x, c_{\text{LIN}},\alpha,\beta)$. Since $\alpha < 1$ and $0 < w_m \le 1, m=1,2,\dots,M$, we have $w_m^{\alpha} \ge w_m$, and therefore $\sum_{m=1}^M w_m^{\alpha} \ge \sum_{m=1}^M w_m = 1$, which implies $\left( \sum_{m=1}^M w_m^{\alpha} \right)^{\frac{1}{\alpha}} \ge 1$ and $c_{\text{LIN}} \ge c$. Finally, as $c$ is the dispersion of the distribution, and since stable distributions are unimodal \cite[Ch. 2.7]{zolotarev-book}, it follows that the probability of error increases with $c$. Therefore, we conclude that $P_{\varepsilon, \text{LIN}} \ge P_{\varepsilon}$.
\end{proof} As the L\'evy distribution is a special case of the family $\mathscr{S}(0,c,\alpha,\beta), \alpha < 1$, we have the following corollary: \begin{corollary} \label{cor:linDegradeLevy} In DBMT channels without flow and $M>1$, the linear detector has worse performance compared to the case of $M=1$. \end{corollary} The result of Corollary \ref{cor:linDegradeLevy} is demonstrated in Section \ref{sec:numRes}. \begin{remark}[{\em Comparison to the AIGN channel}] The difference between the AIGN channel (or the AWGN channel) and the channel considered in this paper stems from the fact that for the AIGN, the (weighted) averaging associated with linear detection can decrease the noise variance, namely, the tails of the noise. On the other hand, in the case of the L\'evy distribution, averaging leads to a heavier tail, and therefore to a higher probability of error. \end{remark} \begin{remark}[{\em Decision delay}] In order to implement the ML detector \eqref{eq:decisionRuleMultiple}, the receiver must wait for all particles to arrive as all particle arrival times are used in the detection algorithm. However, as the L\'evy distribution has heavy tails, this may result in very long decision delays. In fact, the average decision delay of such a receiver will be infinite. \end{remark} In the next section we present a simple detector that is based on the time associated with the first particle arrival. This detector requires a short reception interval (on the order of the single-particle case) and achieves performance very close to that achieved by the ML detector for small values of $M$. \section{Transmission over the DBMT Channel for $M>1$: FA Detection} \label{subsec:FA} The detector proposed in this section detects the transmitted symbol based only on the FA among the $M$ particles, namely, it waits for the first particle to arrive and then applies ML detection based on this arrival.
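The intuition behind this choice can be previewed with a small Monte Carlo sketch (stdlib-only Python, illustrative $c$ and $M$): the minimum of $M$ i.i.d. L\'evy delays concentrates toward the release time as $M$ grows, whereas their equal-weight average is, by the dispersion computation in the preceding proof (with $w_m = 1/M$ and $\alpha = 1/2$, so $c_{\text{LIN}} = Mc$), again L\'evy and more spread out than a single sample.

```python
import random

def sample_levy(c, rng):
    """Draw a Levy(0, c) variate as c / N^2 with N ~ N(0, 1)."""
    n = rng.gauss(0.0, 1.0)
    return c / (n * n)

def median(xs):
    xs = sorted(xs)
    return xs[len(xs) // 2]

rng = random.Random(7)
c, M, trials = 1.0, 10, 100_000

fa, lin = [], []
for _ in range(trials):
    z = [sample_levy(c, rng) for _ in range(M)]
    fa.append(min(z))        # first-arrival statistic
    lin.append(sum(z) / M)   # equal-weight linear combination

print("median of a single delay ~ 2.198 * c (theory)")
print(f"median of FA  (M = {M}) = {median(fa):.3f}  -> concentrates")
print(f"median of LIN (M = {M}) = {median(lin):.3f} -> Levy(0, M*c), disperses")
```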
In terms of complexity, the FA detector simply compares the first arrival to a threshold; this is in contrast to the complicated ML detector in \eqref{eq:decisionRuleMultiple}. Let $y_{\text{FA}} = \min \{ y_1,y_2,\dots,y_M\}$. In the sequel we show that the PDF of $Y_{\text{FA}}$ is more concentrated towards the release time than the original L\'evy distribution. The FA detector is presented in the following theorem: \begin{theorem} \label{thm:FA_detector} The decision rule that minimizes the probability of error, based on $y_{\text{FA}}$, is given by: \begin{align} \hat{S}_{\text{FA}}(y_{\text{FA}}) = \begin{cases} 0, & y_{\text{FA}} < \theta_M \\ 1, & y_{\text{FA}} \ge \theta_M, \end{cases} \label{eq:decisionRuleFA} \end{align} \noindent where $\Delta \le \theta_M \le \theta_{M-1}, \theta_1 = \theta$, is the solution of the following equation in $y_{\text{FA}}$: \begin{align} & y_{\text{FA}}(y_{\text{FA}} \mspace{-3mu} - \mspace{-3mu} \Delta) \mspace{-3mu} \cdot \mspace{-3mu} \log \left( \frac{y_{\text{FA}}}{y_{\text{FA}} \mspace{-3mu} - \mspace{-3mu} \Delta} \right) \nonumber \\ & \mspace{10mu} + \mspace{-3mu} y_{\text{FA}}(y_{\text{FA}} \mspace{-3mu} - \mspace{-3mu} \Delta) \mspace{-3mu} \cdot \mspace{-3mu} \log \mspace{-3mu} \left( \mspace{-3mu} \frac{1 \mspace{-3mu} - \mspace{-3mu} \mathsf{erfc} \mspace{-3mu} \left( \sqrt{\frac{c}{2(y_{\text{FA}} - \Delta)}} \right)}{1 \mspace{-3mu} - \mspace{-3mu} \mathsf{erfc} \mspace{-3mu} \left( \sqrt{\frac{c}{2y_{\text{FA}}}} \right)} \mspace{-3mu} \right)^{\mspace{-10mu} \frac{2(M \mspace{-3mu} - \mspace{-3mu}1)}{3}} \mspace{-12mu} = \mspace{-3mu} \frac{c \Delta}{3}, \label{eq:thetaMEquation} \end{align} \noindent for $y_{\text{FA}} \ge \Delta > 0$. 
The probability of error of the FA detector is given by: \begin{align} P_{\varepsilon, \text{FA}} & = 0.5 \Bigg( \left( 1 - \mathsf{erfc} \left( \sqrt{\frac{c}{2\theta_M}} \right) \right)^M \nonumber \\ & \mspace{40mu} + 1 - \left(1 - \mathsf{erfc} \left( \sqrt{\frac{c}{2(\theta_M - \Delta)}} \right) \right)^M \Bigg). \label{eq:errProbSymbBySymbFA} \end{align} \end{theorem} \begin{proof} The detection rule that minimizes the probability of error is the ML detector based on $Y_{\text{FA}}$. This requires the PDF and CDF of $Y_{\text{FA}}$ given $X$. Let $F_{Y|X}(y|x)$ denote the CDF of $Y_m$ given $X$. Assumption \ref{assmp:indep} implies that given $X$, the channel outputs $Y_1,Y_2,\dots,Y_M$ are independent. Hence, using basic results from order statistics \cite[Ch. 2.1]{OrderStat-book}, we write: \begin{align} F_{Y_{\text{FA}}|X}(y_{\text{FA}}|x) & = 1 - \left( \Pr \{ Y > y_{\text{FA}} | X=x \} \right)^M \nonumber \\ % & \stackrel{(a)}{=} 1 - \left(1 - \mathsf{erfc} \left( \sqrt{\frac{c}{2(y_{\text{FA}} - x)}} \right) \right)^M \nonumber \\ % & \triangleq \Psi \left( c, M, y_{\text{FA}}-x \right), \label{eq:yFAcdf} \end{align} \noindent where (a) follows from \eqref{eqn:LevyCDF}. Next, to obtain the PDF of $Y_{\text{FA}}$ given $X$, we write: \begin{align} f_{Y_{\text{FA}}|X}(y_{\text{FA}}|x) & = \frac{\partial F_{Y_{\text{FA}}|X}(y_{\text{FA}}|x)}{ \partial y_{\text{FA}}} \nonumber \\ % & = M \cdot f_{Y|X}(y_{\text{FA}}|x) \cdot \left( 1 - \mathsf{erfc} \left( \sqrt{\frac{c}{2(y_{\text{FA}} - x)}} \right) \right)^{M-1} \nonumber \\ % & = M \cdot \sqrt{\frac{c}{2 \pi (y_{\text{FA}}-x)^3}} \exp \left( - \frac{c}{2(y_{\text{FA}}-x)} \right) \nonumber \\ & \mspace{45mu} \times \left(1 - \mathsf{erfc} \left( \sqrt{\frac{c}{2(y_{\text{FA}} - x)}} \right) \right)^{M-1}.
\label{eq:yFApdf} \end{align} \noindent Hence, the ML decision rule based on the measurement $y_{\text{FA}}$ is given by: \begin{align} \frac{f_{Y_{\text{FA}}|X}(y_{\text{FA}}|x=0)}{f_{Y_{\text{FA}}|X}(y_{\text{FA}}|x=\Delta)} \mspace{8mu} \begin{matrix} \hat{S} = 0 \\ \gtrless \\ \hat{S} = 1 \end{matrix} \mspace{8mu} 1, \quad y_{\text{FA}} > \Delta. \label{eq:MAPruleFA} \end{align} \noindent Plugging the density in \eqref{eq:yFApdf} into the LHS of \eqref{eq:MAPruleFA}, and applying some algebraic manipulations, we obtain \eqref{eq:thetaMEquation}. To show that $\theta_M \le \theta_{M-1}$, we first note that by plugging \eqref{eq:yFApdf} into \eqref{eq:MAPruleFA} it follows that $\theta_M$ is the solution of the following equation: \begin{align} \frac{f_{Y|X}(y_{\text{FA}}|x=0)}{f_{Y|X}(y_{\text{FA}}|x=\Delta)} = \left(\frac{1 - \mathsf{erfc} \left( \sqrt{\frac{c}{2 (y_{\text{FA}} - \Delta)}} \right) }{1 - \mathsf{erfc} \left( \sqrt{\frac{c}{2y_{\text{FA}} }} \right)} \right)^{\mspace{-5mu} M-1}. \label{eq:equibal_ofM} \end{align} \noindent Now, for $M=1$, the RHS of \eqref{eq:equibal_ofM} equals 1, and $\theta_1 \in [\Delta, \Delta + \frac{c}{3}]$; thus, in this interval, the LHS of \eqref{eq:equibal_ofM} attains the value 1. An explicit evaluation of the derivative of the LHS of \eqref{eq:equibal_ofM} shows that in this range the derivative is negative, and therefore the LHS of \eqref{eq:equibal_ofM} decreases with $y_{\text{FA}}$, independently of $M$. On the other hand, the RHS of \eqref{eq:equibal_ofM} increases with $M$ for all $y_{\text{FA}} \ge \Delta$. Therefore, we conclude that the solution of \eqref{eq:equibal_ofM} decreases with $M$. Regarding the probability of error, we first note that for $y_{\text{FA}}<\Delta$, due to the causality of the arrival time, $S$ must be equal to $0$, and therefore the probability of error is zero.
For $y_{\text{FA}} \ge \Delta$ we write: \begin{align*} P_{\varepsilon, \text{FA}} & = 0.5 \Big( 1 - F_{Y_{\text{FA}}|X}(\theta_M|x=0) \nonumber \\ & \mspace{65mu} + F_{Y_{\text{FA}}|X}(\theta_M|x = \Delta) \Big). \end{align*} \noindent By plugging the CDF in \eqref{eq:yFAcdf} into this expression we obtain \eqref{eq:errProbSymbBySymbFA}. Finally, we note that this theorem can also be easily extended to the case of unequal a priori symbol probabilities. \end{proof} \begin{example} Consider sending information particles with diffusion coefficient $D=10~\mu\text{m}^2/\text{s}$, see \cite{ber-book}, and let the distance between the transmitter and the receiver be $d=4\sqrt{10}~\mu\text{m}$. This implies that $c = 2~\text{s}$. We further set $\Delta = 1$, and using Prop. \ref{prop:decRuleSymbBySymb}, for $M=1$, we obtain the optimal decision threshold $\theta = 1.372$. The conditional probability densities $f_{Y|X}(y|x=0)$ and $f_{Y|X}(y|x=\Delta)$ are illustrated in Fig. \ref{fig:CondDistributions}. Fig. \ref{fig:CondDistributions} also depicts the conditional probability distributions for $M=3$ and $M=15$. For these cases the optimal decision thresholds are $\theta_3 = 1.286$ and $\theta_{15} = 1.146$. It can be observed that as $M$ increases, the conditional PDFs concentrate towards $X=0$ and $X=\Delta$. Moreover, the tails of the conditional PDFs in the case of $M=15$ are significantly smaller than those in the cases of $M=3$ and $M=1$. Finally, note that while the tail decreases exponentially in $M$, the PDF around $X=0$ or $X=\Delta$ increases linearly with $M$, see \eqref{eq:yFApdf}. \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{CondDistributions.pdf} \captionsetup{font=footnotesize} \caption{The conditional probability densities $f_{Y|X}(y|x=0)$ and $f_{Y|X}(y|x=\Delta)$, for $c=2$, and $\Delta = 1$.} \label{fig:CondDistributions} \end{figure} \end{example}
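Since \eqref{eq:thetaMEquation} is a one-dimensional root-finding problem, $\theta_M$ is easy to compute with a bracketing solver. The following Python sketch (scipy assumed; function names are ours) solves for $\theta_M$ and evaluates \eqref{eq:errProbSymbBySymbFA}; it reproduces the thresholds quoted in the example above:

```python
import numpy as np
from scipy.special import erfc
from scipy.optimize import brentq

def fa_threshold(c, delta, M):
    """Solve the threshold equation (thetaMEquation) for theta_M."""
    def h(y):
        a = y * (y - delta)
        r = (1.0 - erfc(np.sqrt(c / (2.0 * (y - delta))))) \
            / (1.0 - erfc(np.sqrt(c / (2.0 * y))))
        return a * np.log(y / (y - delta)) \
            + a * (2.0 * (M - 1) / 3.0) * np.log(r) - c * delta / 3.0
    # h < 0 just above delta and h > 0 for large y, so the root is bracketed.
    return brentq(h, delta + 1e-9, delta + 10.0 * c)

def fa_error_prob(c, delta, M):
    """Probability of error of the FA detector, eq. (errProbSymbBySymbFA)."""
    th = fa_threshold(c, delta, M)
    p0 = (1.0 - erfc(np.sqrt(c / (2.0 * th)))) ** M
    p1 = 1.0 - (1.0 - erfc(np.sqrt(c / (2.0 * (th - delta))))) ** M
    return 0.5 * (p0 + p1)

for M in (1, 3, 15):
    print(M, fa_threshold(2.0, 1.0, M), fa_error_prob(2.0, 1.0, M))
```

For $c=2$ and $\Delta=1$ this yields $\theta_1 \approx 1.372$, $\theta_3 \approx 1.286$, and $\theta_{15} \approx 1.146$, with the probability of error decreasing in $M$.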
Consider a fluid medium {\em with drift velocity} $v$. Similarly to Section \ref{subsec:DBMTdef}, let $D$ denote the diffusion coefficient and $d$ denote the distance between the transmitter and the receiver. Moreover, let $\kappa \triangleq \frac{d}{v}$ and $\lambda \triangleq \frac{d^2}{2D}$. In this case, the additive noise $Z_m$ in \eqref{eq:LevyChan} follows an inverse Gaussian distribution, $Z_m \sim \mathscr{IG}(\kappa, \lambda)$. The conditional PDF of the AIGN channel output $Y$, given channel input $X=x$, is given in \cite[eq. (7)]{sri12} as: \begin{align} f^{\text{IG}}_{Y|X}(y|x) = \begin{cases} \sqrt{\frac{\lambda}{2 \pi (y - x)^3}} \exp \left( - \frac{\lambda (y - x - \kappa)^2}{2 \kappa^2 (y-x)} \right), & y > x \\ 0, & y \le x, \end{cases} \label{eq:IGcondPDF} \end{align} \noindent while the conditional CDF is given in \cite[eq. (22)]{sri12} as: \begin{align} F^{\text{IG}}_{Y|X}(y|x) = \begin{cases} \Phi_{\text{G}} \left( \sqrt{\frac{\lambda}{y-x}} \left(\frac{y-x}{\kappa} - 1 \right) \right) \\ \quad + \mspace{3mu} e^{\frac{2\lambda}{\kappa}} \Phi_{\text{G}} \left( - \sqrt{\frac{\lambda}{y-x}} \left(\frac{y-x}{\kappa} + 1 \right) \right), & y > x \\ 0, & y \le x. \end{cases} \label{eq:IGcondCDF} \end{align} \noindent In \cite[Sec. IV.C.2]{sri12} the authors proposed to average the $M$ channel outputs as: \begin{align} Y_{\text{LIN}} = \frac{1}{M} \sum_{m=1}^M Y_m \stackrel{(a)}{=} X + Z_{\text{LIN}}, \label{eq:avgIG} \end{align} \noindent where (a) follows by defining $Z_{\text{LIN}} \triangleq \frac{1}{M} \sum_{m=1}^{M} Z_m$, where $Z_{\text{LIN}} \sim \mathscr{IG}(\kappa, M \cdot \lambda)$. Then, $\hat{X}_{\text{LIN}}$ is detected from $y_{\text{LIN}}$ as in \eqref{eq:lin_detector}.
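In scipy's parameterization, $\mathscr{IG}(\kappa,\lambda)$ corresponds to \texttt{invgauss(mu=kappa/lam, scale=lam)}. The sketch below (our code; $\kappa=\lambda=1$ and $M=4$) checks by simulation that the average of $M$ i.i.d. $\mathscr{IG}(\kappa,\lambda)$ samples behaves as $\mathscr{IG}(\kappa, M\lambda)$, and compares the tail probabilities of the single-particle, linear, and FA statistics:

```python
import numpy as np
from scipy.stats import invgauss

kappa, lam, M = 1.0, 1.0, 4
ig = lambda k, l: invgauss(mu=k / l, scale=l)   # IG(kappa, lambda) in scipy terms

# Z_LIN = mean of M i.i.d. IG(kappa, lambda) samples, expected ~ IG(kappa, M*lambda):
rng = np.random.default_rng(1)
z_lin = ig(kappa, lam).rvs(size=(200_000, M), random_state=rng).mean(axis=1)
print(z_lin.mean(), z_lin.var(), kappa, kappa ** 3 / (M * lam))

# Tail probabilities Pr{Y > t} at t = 2 (conditioned on x = 0):
t = 2.0
tail_single = ig(kappa, lam).sf(t)
tail_lin = ig(kappa, M * lam).sf(t)             # linear detector statistic
tail_fa = ig(kappa, lam).sf(t) ** M             # min of M i.i.d. arrivals
print(tail_fa, tail_lin, tail_single)
```

The FA tail is orders of magnitude below the linear-detector tail at the same parameters, consistent with the qualitative comparison that follows.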
Note that the variance of an IG-distributed RV is given by $\frac{\kappa^3}{\lambda}$. Therefore, compared to the single-particle case, the averaging in \eqref{eq:avgIG} decreases the variance by a factor of $M$, which partially explains the performance improvement, compared to the case of $M=1$, reported in \cite[Fig. 9]{sri12}. Let $f^{\text{IG}}_{Y_{\text{FA}}|X}(y_{\text{FA}}|x=0)$ denote the conditional PDF of $Y_{\text{FA}}$, given $X=0$. This PDF can be obtained by following the steps leading to \eqref{eq:yFApdf}, using the PDF and CDF of the IG distribution given in \eqref{eq:IGcondPDF} and \eqref{eq:IGcondCDF}, respectively. To qualitatively compare the FA detector and the linear detector presented in \cite[Sec. IV.C.2]{sri12}, in the case of the AIGN channel, we propose to examine the tails of $f^{\text{IG}}_{Y_{\text{LIN}}|X}(y_{\text{LIN}}|x=0)$ and $f^{\text{IG}}_{Y_{\text{FA}}|X}(y_{\text{FA}}|x=0)$. Both PDFs are more concentrated around $X=0$ than $f^{\text{IG}}_{Y_m|X}(y|x=0)$. While in the case of the linear detector this is a result of the lower variance, in the case of the FA detector it is a result of multiplying the single-particle PDF by the term $\big(1 - F^{\text{IG}}_{Y|X}(y|x)\big)^{M-1}$, cf. \eqref{eq:yFApdf}. Our analysis shows that $f^{\text{IG}}_{Y_{\text{FA}}|X}(y_{\text{FA}}|x=0)$ is more concentrated around $X = 0$ than $f^{\text{IG}}_{Y_{\text{LIN}}|X}(y_{\text{LIN}}|x=0)$, as indicated in the following example. \begin{example} To obtain some intuition as to why the FA detector improves upon the linear detector in \eqref{eq:avgIG}, Fig. \ref{fig:CondDistributionsIG} depicts $f^{\text{IG}}_{Y_m|X}(y|x=0), f^{\text{IG}}_{Y_{\text{LIN}}|X}(y_{\text{LIN}}|x=0)$, and $f^{\text{IG}}_{Y_{\text{FA}}|X}(y_{\text{FA}}|x=0)$, for $\lambda = 1, \kappa = 1$ and $M=4$. It can be observed that $f^{\text{IG}}_{Y_{\text{FA}}|X}(y_{\text{FA}}|x=0)$ is significantly more concentrated towards the origin compared to $f^{\text{IG}}_{Y_{\text{LIN}}|X}(y_{\text{LIN}}|x=0)$.
Thus, as the detector compares two shifted versions of the same PDF, this leads to a lower probability of error. It can further be observed that the tail of $f^{\text{IG}}_{Y_{\text{LIN}}|X}(y_{\text{LIN}}|x=0)$ is smaller than the tail of $f^{\text{IG}}_{Y_m|X}(y|x=0)$, reflecting the smaller variance. This supports the performance gain of the linear detector compared to ML detection for $M=1$. \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{CondDistributionsIG.pdf} \captionsetup{font=footnotesize} \caption{DBMT channel with drift - The conditional probability densities $f^{\text{IG}}_{Y_m|X}(y|x=0), f^{\text{IG}}_{Y_{\text{LIN}}|X}(y_{\text{LIN}}|x=0)$, and $f^{\text{IG}}_{Y_{\text{FA}}|X}(y_{\text{FA}}|x=0)$, for $\lambda = 1, \kappa = 1$ and $M=4$.} \label{fig:CondDistributionsIG} \end{figure} \end{example} Finally, we note that while Fig. \ref{fig:CondDistributionsIG} provides a qualitative explanation for the superiority of the FA detector compared to the linear detector, in Fig. \ref{fig:IG_PeVsVelocity_DifMs} we provide simulation results that support this observation. \section{Performance Comparison of the ML and FA Detectors} \label{subsec:PerfCompare} In this section we compare the probability of error of the FA detector to the probability of error of the ML detector. Clearly, $Y_{\text{FA}}$ is {\em not} a sufficient statistic for decoding based on $\mathbf{y}$; yet, our numerical simulations indicate that for low values of $M$ (up to the order of tens), these detectors have almost equivalent performance. On the other hand, when $M$ is large (i.e., $M \to \infty$), we use error exponent analysis to show the superiority of the ML detector over the FA detector. \subsection{Small $M$} To study the performance gap between the two detectors when $M$ is small, we derive an upper bound on the probability that there is a mismatch between the decisions of the two detectors.
More precisely, let $P_{\text{mm}} \triangleq \Pr \{\hat{S}_{\text{ML}}(\mathbf{y}) \neq \hat{S}_{\text{FA}}(\mathbf{y})\}$. The following theorem upper bounds $P_{\text{mm}}$: \begin{theorem} \thmlabel{thm:PmmUB} Let $g(x) \triangleq \log \left( \frac{x - \Delta}{x} \right) + \frac{c\Delta}{3 x (x-\Delta)}, x > \Delta$. The equation $g(x)=0$ has a unique solution $x^{\ast}$, where $g(x) > 0$ for $\Delta < x < x^{\ast}$, and $g(x) < 0$ for $x^{\ast} < x$. Furthermore, the mismatch probability is upper bounded by: \begin{align} P_{\text{mm}} \le P_{\text{mm}}^{\text{(ub)}} & = 0.5 \sum_{i=0}^1 \Big\{ \Psi \left( c, M, x^{\ast} - i\cdot \Delta \right) \nonumber \\ & \mspace{110mu} - \Psi \left( c, M, \bar{i} \Delta \right) \Big\}, \label{eq:Pmm_ub} \end{align} \noindent where $\bar{0} = 1$, and $\bar{1} = 0$. \end{theorem} \begin{proof} The proof is provided in Appendix \ref{annex:thm_PmmUB_proof}. \end{proof} \begin{remark}[{\em Tightness of the bound in \eqref{eq:Pmm_ub}}] Recall that $f_{Y_{\text{FA}}|X}(y|x=x_0)$ concentrates around $x_0$ as $M$ increases. On the other hand, $x^{\ast}$ and $\Delta$ are independent of $M$ and depend on the propagation of a {\em single} particle. Therefore, as $M$ increases, the upper bound in \eqref{eq:Pmm_ub} becomes loose. We further note that the upper bound in \eqref{eq:Pmm_ub} becomes tighter as $\Delta$ increases. For instance, let $M=2, c=1$, and $\Delta = 1$. For this setting $P_{\varepsilon,\text{ML}} = 0.2174, P_{\varepsilon,\text{FA}} = 0.2186$, and $P_{\text{mm}}^{\text{(ub)}} = 0.0283$. Increasing $\Delta$ to $5$ yields: $P_{\varepsilon,\text{ML}} = 0.05896, P_{\varepsilon,\text{FA}} = 0.05898$, and $P_{\text{mm}}^{\text{(ub)}} = 0.0012$.
On the other hand, for larger values of $M$, e.g., $M=5$, we have $P_{\varepsilon,\text{ML}} = 0.06501, P_{\varepsilon,\text{FA}} = 0.06554, P_{\text{mm}}^{\text{(ub)}} = 0.0337$, for $\Delta = 1$, and $P_{\varepsilon,\text{ML}} = 0.002403, P_{\varepsilon,\text{FA}} = 0.002408, P_{\text{mm}}^{\text{(ub)}} = 0.001$, for $\Delta = 5$. \end{remark} For large values of $M$, we next analyze the error exponents of the FA and ML detectors, and show that in this regime the ML detector significantly outperforms the FA detector. \subsection{Large $M$} \label{subsec:LargeM} Let $P_{\varepsilon}^{(M)}$ denote the probability of error of a given detector, as a function of $M$. The error exponent is then given by: \begin{align} \mathsf{E} = \lim_{M \to \infty} - \frac{\log P_{\varepsilon}^{(M)}}{M}. \label{eq:errExpDef} \end{align} \begin{remark} From \Thmref{thm:badlindet} it follows that for the linear detector the probability of error {\em does not} decrease as $M$ increases, namely, $P_{\varepsilon, \text{LIN}}^{(M)} \ge P_{\varepsilon}$. Therefore, the definition of the error exponent in \eqref{eq:errExpDef} implies that $\mathsf{E}_{\text{LIN}} = 0$. \end{remark} In the following we first derive the error exponent of the FA detector, and then numerically compare it to the exponent of the ML detector. This numerical comparison indicates that the error exponent of the ML detector is higher than that of the FA detector. This implies that the two detectors are not equivalent, even though, based on our simulation results, they achieve very similar performance for low values of $M$. This performance gap is due to the fact that the first arrival {\em is not a sufficient statistic} for optimal decoding based on the received vector $\mathbf{y}$.
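Before moving on, we note that the threshold $x^{\ast}$ and the bound \eqref{eq:Pmm_ub} are straightforward to evaluate numerically; the following sketch (our code, scipy assumed) reproduces the values quoted in the mismatch remark above:

```python
import numpy as np
from scipy.special import erfc
from scipy.optimize import brentq

def psi(c, M, z):
    """Psi(c, M, z) from eq. (yFAcdf); Psi(c, M, 0) = 0 by continuity."""
    if z <= 0.0:
        return 0.0
    return 1.0 - (1.0 - erfc(np.sqrt(c / (2.0 * z)))) ** M

def mismatch_ub(c, delta, M):
    """Upper bound on Pr{S_ML != S_FA}, eq. (Pmm_ub)."""
    g = lambda x: np.log((x - delta) / x) + c * delta / (3.0 * x * (x - delta))
    # g > 0 just above delta and g < 0 for large x, so the unique root is bracketed.
    x_star = brentq(g, delta + 1e-9, delta + 100.0 * c)
    return 0.5 * ((psi(c, M, x_star) - psi(c, M, delta))
                  + (psi(c, M, x_star - delta) - psi(c, M, 0.0)))

print(mismatch_ub(1.0, 1.0, 2), mismatch_ub(1.0, 5.0, 2))
```

For $M=2$ and $c=1$ this returns approximately $0.0283$ for $\Delta=1$ and $0.0012$ for $\Delta=5$, matching the remark.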
The following theorem presents the error exponent of the FA detector: \begin{theorem} \label{thm:ErrExp_FA} The error exponent of the FA detector is given by: \begin{align} \mathsf{E}_{\text{FA}} = - \log \left( 1 - \mathsf{erfc} \left( \sqrt{\frac{c}{2 \Delta}} \right) \right). \label{eq:FA_errExp} \end{align} \end{theorem} \begin{proof}[Proof Outline] Recall the probability of error of the FA detector in \eqref{eq:errProbSymbBySymbFA}, repeated here for ease of reference: \begin{align*} P_{\varepsilon, \text{FA}} &= 0.5 \bigg( \left( 1 - \mathsf{erfc} \left( \sqrt{\frac{c}{2\theta_M}} \right) \right)^M \nonumber \\ & \mspace{65mu} + 1 - \left(1 - \mathsf{erfc} \left( \sqrt{\frac{c}{2(\theta_M - \Delta)}} \right) \right)^M \bigg). \end{align*} \noindent Based on the observations in Remark \ref{rem:asymmetry}, and noting that both PDFs are right-sided, namely, nonzero only for $y>x$, we intuitively expect the first term on the RHS of \eqref{eq:errProbSymbBySymbFA} to be larger than the second term. In this case, the error exponent of the FA detector is governed by the first term, and as $\theta_M \to \Delta$ when $M \to \infty$, we obtain \eqref{eq:FA_errExp}. In Appendix \ref{annex:thm_ErrExp_FA_proof} we rigorously analyze the scaling behavior of the second term in \eqref{eq:errProbSymbBySymbFA} and show that it yields {\em the same error exponent as the first term}, thus leading to \eqref{eq:FA_errExp}. \end{proof} Next, we discuss the error exponent of the ML detector. Deriving a closed-form expression for this error exponent seems intractable; therefore, we present an implicit expression and evaluate it numerically. The problem of recovering $x$ based on the $M$ i.i.d. realizations $\{y_m\}_{m=1}^M$ belongs to the class of binary hypothesis testing problems, which are studied in \cite[Ch. 11]{cover-book}. In particular, the error exponent for the probability of error is exactly the Chernoff information \cite[Theorem 11.9.1]{cover-book}.
We emphasize that this optimal error exponent is independent of the prior probabilities associated with the two values of the transmitted symbol $x$, see the discussion in \cite[p. 388]{cover-book}. Thus, the assumption of equiprobable bits places no limitation on the error exponent of the ML detector. Let $\pi_0>0$ and $\pi_{\Delta}>0$ denote the a priori probabilities of sending $x=0$ and $x=\Delta$, respectively, for a fixed $M$. Furthermore, let $g_{0}(y)$ and $g_{\Delta}(y)$ denote the likelihood functions corresponding to $x=0$ and $x=\Delta$, respectively. Finally, let $\mathtt{I}(\text{``condition"})$ denote the indicator function that takes the value 1 if the ``condition" is satisfied and zero otherwise. Since, given $x$, the $\{y_m\}_{m=1}^M$ are independent, it follows that the probability of error of the ML detector, as a function of $M$, can be written as: \begin{align} P_{\varepsilon, \text{ML}}^{(M)} & = \pi_0 \int_{\mathbf{y}} \prod_{m=1}^{M} g_0(y_m) \mathtt{I} \left(\prod_{m=1}^M g_0(y_m) < \prod_{m=1}^M g_{\Delta} (y_m)\right) d\mathbf{y} \nonumber \\ & \mspace{8mu} + \pi_{\Delta} \int_{\mathbf{y}} \prod_{m=1}^M g_{\Delta}(y_m) \mathtt{I} \left(\prod_{m=1}^M g_0(y_m) > \prod_{m=1}^M g_{\Delta} (y_m) \right) d\mathbf{y}. \label{eq:Pe_ML_ErrExp} \end{align} \noindent Next, we define: \begin{equation*} \mathtt{J}_{M} \triangleq \int_{\mathbf{y}} \min \left\{ \prod_{m=1}^M g_0(y_m), \prod_{m=1}^M g_{\Delta}(y_m) \right\} d\mathbf{y}, \end{equation*} \noindent and note that \eqref{eq:Pe_ML_ErrExp} satisfies: \begin{align} \label{eq:1} \min \left\{ \pi_0, \pi_{\Delta} \right\} \mathtt{J}_M \leq P_{\varepsilon, \text{ML}}^{(M)} \leq \max \left\{\pi_0, \pi_{\Delta} \right\} \mathtt{J}_M.
\end{align} \noindent Observe that for fixed $\pi_0$ and $\pi_{\Delta}$, the error exponent $\lim_{M\rightarrow \infty} \frac{-\log(P_{\varepsilon,\text{ML}}^{(M)})}{M}$ equals the error exponent of $\mathtt{J}_M$, namely, \begin{align*} \lim_{M\rightarrow \infty} \frac{-\log(P_{\varepsilon, \text{ML}}^{(M)})}{M} = \lim_{M\rightarrow \infty} \frac{-\log(\mathtt{J}_M)}{M}, \end{align*} \noindent which is exactly the {\em Chernoff information}, see \cite[p. 387]{cover-book}. We further write: \begin{align*} \mathtt{J}_M & = \int_{\mathbf{y}} \min \left\{ \prod_{m=1}^M g_0(y_m), \prod_{m=1}^M g_{\Delta}(y_m) \right\} d\mathbf{y} \\ % & \stackrel{(a)}{\leq} \min_{s:0 \leq s \leq 1} \int_{\mathbf{y}} \left( \prod_{m=1}^M g_0(y_m) \right)^{s} \left( \prod_{m=1}^M g_{\Delta}(y_m) \right)^{(1-s)} d\mathbf{y} \\ % & \stackrel{(b)}{\leq} e^{-M \mathsf{E}_{\text{ML}}}, \end{align*} \noindent where (a) follows from the fact that for any positive numbers $a,b$ and a real number $s \in [0, 1]$, we have $ \min(a, b) \leq a^sb^{1-s}$, and (b) follows from defining: \begin{align} \mathsf{E}_{\text{ML}} \triangleq - \min_{s: 0 \leq s \leq 1} \log\left(\int_{y} g^{s}_0(y) g^{1-s}_{\Delta}(y) dy\right). \label{eq:ML_errExp} \end{align} \noindent The above argument establishes an upper bound on the error exponent $\lim_{M \rightarrow \infty} \frac{-\log(\mathtt{J}_M)}{M}$. A lower bound follows directly from \cite[Theorem 11.9.1]{cover-book}, namely, the best achievable exponent. The two bounds coincide as $M \rightarrow \infty$, see the discussion in \cite[pp. 387--389]{cover-book}. Thus, we conclude that the error exponent of the ML detector is given by \eqref{eq:ML_errExp}. \begin{example} In contrast to \eqref{eq:FA_errExp}, deriving a closed-form expression for the error exponent \eqref{eq:ML_errExp} seems intractable; hence, we evaluated it numerically.
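One concrete way to carry out this evaluation (a sketch of our own; direct quadrature over the two L\'evy likelihoods, scipy assumed) is to compute $\int_y g_0^s(y) g_{\Delta}^{1-s}(y)\,dy$ for each $s$ and minimize over $s \in [0,1]$:

```python
import math
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar
from scipy.special import erfc

def levy_pdf(y, c):
    """Levy density with dispersion c and location 0."""
    if y < 1e-100:          # the density vanishes at the origin
        return 0.0
    return math.sqrt(c / (2.0 * math.pi * y ** 3)) * math.exp(-c / (2.0 * y))

def fa_exponent(c, delta):
    """E_FA of eq. (FA_errExp)."""
    return -np.log(1.0 - erfc(np.sqrt(c / (2.0 * delta))))

def chernoff_exponent(c, delta):
    """E_ML of eq. (ML_errExp): minimize the log of the s-mixed likelihood integral."""
    def log_integral(s):
        val, _ = quad(lambda y: levy_pdf(y, c) ** s
                                * levy_pdf(y - delta, c) ** (1.0 - s),
                      delta, np.inf, limit=200)
        return np.log(val)
    res = minimize_scalar(log_integral, bounds=(1e-6, 1.0 - 1e-6), method='bounded')
    return -res.fun

print(chernoff_exponent(1.0, 0.2), fa_exponent(1.0, 0.2))
```

The minimization over $s$ is one-dimensional and well behaved, so a bounded scalar minimizer suffices.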
Table \ref{tab:ErrExp} details both $\mathsf{E}_{\text{ML}}$ and $\mathsf{E}_{\text{FA}}$ for $\Delta \in \{0.1, 0.2, 0.3, 0.4\}$, and $c \in \{0.5, 1, 2\}$. Note that when $M$ increases, very small values of $\Delta$ can be used. For instance, for $M=2\cdot 10^4, c=2$ and $\Delta=0.1$, we obtain $P_{\varepsilon,\text{FA}} = 2\cdot 10^{-4}$. It can be observed that for small values of $\Delta$ and large values of $c$, the relative difference between the two error exponents is larger. \begin{table}[h] \begin{center} \footnotesize \begin{tabular}[t]{|c|c|c|c|c|} \hline & $\Delta=0.1$ & $\Delta=0.2$ & $\Delta=0.3$ & $\Delta=0.4$ \\ \hline \hline $ c= 0.5, \mathsf{E}_{\text{ML}}$ & 0.044106 & 0.132051 & 0.223149 & 0.306514 \\ \hline $ c= 0.5, \mathsf{E}_{\text{FA}}$ & 0.025674 & 0.120865 & 0.219034 & 0.305917 \\ \hline \hline $ c= 1, \mathsf{E}_{\text{ML}}$ & 0.012413 & 0.044103 & 0.086111 & 0.132012 \\ \hline $ c= 1, \mathsf{E}_{\text{FA}}$ & 0.001567 & 0.025674 & 0.070304 & 0.120865 \\ \hline \hline $ c= 2, \mathsf{E}_{\text{ML}}$ & 0.003230 & 0.012413 & 0.026441 & 0.044099 \\ \hline $ c= 2, \mathsf{E}_{\text{FA}}$ & 0.000008 & 0.001567 & 0.009872 & 0.025674 \\ \hline \end{tabular} \captionsetup{font=small} \caption{$\mathsf{E}_{\text{ML}}$ and $\mathsf{E}_{\text{FA}}$ for different values of $\Delta$ and $c$. \label{tab:ErrExp}} \vspace{-0.5cm} \end{center} \end{table} \end{example} \section{Non-Binary Constellations} \label{sec:beyondBinary} In this section we study communication over DBMT channels when $|\mathcal{X}| = 2^L, L > 1$. We restrict our attention to the FA detection framework due to the complexity of the ML analysis. Let $L$ be a fixed number of bits to be transmitted, $\Delta$ a fixed time interval, and $\{\xi_i \}_{i=0}^{2^L - 1}$ a set of distinct points in the interval $[0, \Delta]$. One can send the $L$ bits by releasing the $M$ particles at one of the $\xi_i$ time points.
The results of Section \ref{subsec:FA}, indicating that simultaneous release of multiple particles can dramatically decrease the probability of error in the binary case, also apply in this non-binary case. Therefore, for a fixed $L$, one can achieve a desired (symbol) probability of error by increasing the number of released particles. On the other hand, for a fixed $M$, increasing $L$ increases the number of bits conveyed in each symbol at the cost of smaller spacing between the $\xi_i$'s. This leads to two questions associated with the non-binary case: \begin{itemize} \item {\em What is the complexity of FA detection in the case of $L>1$? Does it grow exponentially with $L$?} We show that given a simple choice of the points $\{\xi_i \}_{i=0}^{2^L - 1}$, the FA detector for the case of $L>1$ amounts to the FA detector presented in Thm. \ref{thm:FA_detector}. \item {\em What is the scaling behavior of $L$ as a function of $M$ that ensures a decreasing symbol error probability?} We show that if $L$ scales at most as $\log \log M$, then the symbol error probability decreases to zero when $M, L \to \infty$. \end{itemize} We begin by formally introducing the transmission scheme. The transmitter divides the interval $[0,\Delta]$ into $2^L - 1$ equal-length sub-intervals. Let $\tilde{\Delta}$ be the length of each such sub-interval. The constellation points (release times) are given by $n \cdot \tilde{\Delta}, n=0,1,\dots,2^L-1$. Observing a sequence of $L$ equiprobable and independent bits, the transmitter uses a predefined bits-to-symbol mapping and releases $M$ particles at the corresponding time. While in this work we focus on the {\em symbol} error rate, the bit error rate can be easily obtained via a bits-to-symbol mapping such as Gray coding \cite{agrell2004} and the approximation that a symbol error leads to a single bit error. The transmission scheme is illustrated in Fig. \ref{fig:BeyondBinarry}.
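The bits-to-symbol mapping itself is elementary; a minimal Python sketch of the Gray labeling used in Fig. \ref{fig:BeyondBinarry} (helper names are ours):

```python
def gray_to_index(bits: str) -> int:
    """Decode a binary-reflected Gray codeword (MSB first) to its integer index."""
    n = 0
    for b in bits:
        # next decoded bit = previous decoded bit XOR current Gray bit
        n = (n << 1) | ((n & 1) ^ int(b))
    return n

def release_time(bits: str, delta: float) -> float:
    """Map L bits to a release time n * delta_tilde on the grid covering [0, Delta]."""
    L = len(bits)
    delta_tilde = delta / (2 ** L - 1)   # 2^L - 1 equal sub-intervals
    return gray_to_index(bits) * delta_tilde

print(gray_to_index("110"), release_time("110", 1.0))
```

For $L=3$ and the input sequence $110$, this yields the index $4$ and release time $4\tilde{\Delta} = \frac{4\Delta}{7}$, matching the figure.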
\begin{figure}[t] \begin{center} \includegraphics[width=0.8\columnwidth,keepaspectratio]{BeyondBinarry.pdf} \end{center} \vspace{-0.3cm} \captionsetup{font=footnotesize} \caption{\label{fig:BeyondBinarry} Illustration of transmission when $L=3$. The bits-to-symbol mapping is the commonly-used Gray coding. Assuming the binary input sequence $110$, the transmitter releases $M$ particles at time $4\tilde{\Delta} = \frac{4\Delta}{7}$. The dashed arrows indicate other possible release times and the respective bit tuples.} \vspace{-0.35cm} \end{figure} Let $\omega_M$ denote the mode of the density of the FA, given in \eqref{eq:yFApdf}, assuming the offset parameter is zero.\footnote{Note that the unimodality of the density in \eqref{eq:yFApdf} follows from the fact that the density of the L\'evy distribution is unimodal.} Furthermore, let $n_{\text{floor}}(y_{\text{FA}}) \triangleq \left\lfloor \frac{(y_{\text{FA}} - \omega_M)}{\tilde{\Delta}} \right\rfloor$ and let $\hat{S}_{\text{FA}}^{\Delta^{\ast}}(y_{\text{FA}})$ denote the optimal FA detector stated in Thm. \ref{thm:FA_detector}, for the binary case, when the spacing between the two possible release times is $\Delta^{\ast}$. The optimal detector, based on $y_{\text{FA}}$, is presented in the following theorem. \begin{theorem} \label{thm:nonBin_FA_Det} The decision rule that minimizes the {\em symbol} probability of error, based on $y_{\text{FA}}$, is given by: \begin{align} \hat{n}_{\text{FA}}(y_{\text{FA}}) = \begin{cases} 0, & n_{\text{floor}}(y_{\text{FA}}) < 0 \\ 2^L - 1, & n_{\text{floor}}(y_{\text{FA}}) \ge 2^L - 1 \\ n_{\text{floor}}(y_{\text{FA}}) \\ \mspace{6mu} + \mspace{2mu} \hat{S}_{\text{FA}}^{\tilde{\Delta}}(y_{\text{FA}} - n_{\text{floor}}(y_{\text{FA}}) \tilde{\Delta}), & \text{otherwise}.
\end{cases} \label{eq:nonBin_FA_Det} \end{align} \noindent The probability of error of this detector is exactly $\frac{2^L - 1}{2^{L-1}}$ times the probability of error in \eqref{eq:errProbSymbBySymbFA}, with $\Delta$ replaced by $\tilde{\Delta}$, where $\theta_M$ in \eqref{eq:errProbSymbBySymbFA} is the solution of \eqref{eq:thetaMEquation}, again with $\Delta$ replaced by $\tilde{\Delta}$. \end{theorem} \begin{proof} We first discuss the optimality of the detector in \eqref{eq:nonBin_FA_Det}. The extreme cases in \eqref{eq:nonBin_FA_Det} are straightforward; thus, we discuss the ``middle'' points. The optimal detector based on $y_{\text{FA}}$ is the ML detector: \begin{align} \hat{n}_{\text{FA}}(y_{\text{FA}}) = \operatornamewithlimits{argmax}_{n \in \{0,1\dots,2^{L}-1\}}f_{Y_{\text{FA}}|X}(y_{\text{FA}}|x = n \tilde{\Delta}). \label{eq:MLfaNonBin} \end{align} \noindent By the definition of $n_{\text{floor}}(y_{\text{FA}})$, we have $n_{\text{floor}}(y_{\text{FA}}) \tilde{\Delta} + \omega_M \le y_{\text{FA}} < (n_{\text{floor}}(y_{\text{FA}}) + 1) \tilde{\Delta} + \omega_M$. Since the density of the FA is unimodal (with mode $\omega_M$), we have: \begin{align*} & f_{Y_{\text{FA}}|X}(y_{\text{FA}}|x = n^{\ast} \tilde{\Delta}) \nonumber \\ & \quad \le f_{Y_{\text{FA}}|X}(y_{\text{FA}}|x = n_{\text{floor}}(y_{\text{FA}}) \tilde{\Delta}), \mspace{7mu} \forall n^{\ast} \le n_{\text{floor}}(y_{\text{FA}}), \\ % & f_{Y_{\text{FA}}|X}(y_{\text{FA}}|x = n^{\ast} \tilde{\Delta}) \nonumber \\ & \quad \le f_{Y_{\text{FA}}|X}(y_{\text{FA}}|x = (n_{\text{floor}}(y_{\text{FA}}) + 1) \tilde{\Delta}), \mspace{7mu} \forall n^{\ast} \ge n_{\text{floor}}(y_{\text{FA}}) + 1.
\end{align*} \noindent Thus, in the maximization \eqref{eq:MLfaNonBin} one needs to consider only $n_{\text{floor}}(y_{\text{FA}})$ and $n_{\text{floor}}(y_{\text{FA}}) + 1$. The problem is reduced to a binary detection setup with spacing of $\tilde{\Delta}$. The optimal detector for this problem is given in Thm. \ref{thm:FA_detector}. For the probability of error we note that the first (in time) and last constellation points exactly correspond to the binary case discussed in Thm. \ref{thm:FA_detector}. The other $2^L - 2$ constellation points have two adjacent neighbors (a preceding constellation point and a succeeding one). Letting $\theta_M(\tilde{\Delta})$ denote the decision threshold, the probability of error for the ``middle'' constellation points is given by $1 - \left(\Psi(c,M,\theta_M(\tilde{\Delta})) - \Psi(c,M,\theta_M(\tilde{\Delta}) - \tilde{\Delta})\right)$, where $\Psi(\cdot)$ is defined in \eqref{eq:yFAcdf}. Finally, we note that there are $2^L - 2$ ``middle'' constellation points and two edge points, thus the overall probability of error is $\frac{2^L - 1}{2^{L-1}}$ times the one stated in \eqref{eq:errProbSymbBySymbFA} (with the proper $\tilde{\Delta}$ and $\theta_M(\tilde{\Delta})$). This concludes the proof. \end{proof} Next, we consider the scaling order of $L$, as a function of $M$, which ensures a vanishing symbol error probability. We note that a linear increase in $L$ results in an exponential decrease in $\tilde{\Delta}$, thus, $L$ should scale at most logarithmically with $M$. The next theorem shows that a slower, double-logarithmic scaling of $L$ with $M$ ensures a vanishing probability of error. \begin{theorem} \label{thm:nonBinScaling} The symbol probability of error of the detector in \eqref{eq:nonBin_FA_Det} vanishes when $L,M \to \infty$, if $L$ scales at most as $\log \log M^{1 -\epsilon}$, for some $\epsilon > 0$. \end{theorem} \begin{proof} The proof is provided in Appendix \ref{annex:thm_nonBinScaling_proof}.
\end{proof} \noindent Thus, to reliably send a large number of bits using the above transmission scheme, a very large $M$ is required. Next, we present our numerical results. \section{Numerical Results} \label{sec:numRes} \begin{figure}[t!] \centering \includegraphics[width=0.8\columnwidth]{PeVsDelta_c1DiffMs.pdf} \captionsetup{font=footnotesize} \caption{$P_{\varepsilon}$ vs. $\Delta$, for $c=1 [s]$ and $M=1,2,3$.} \label{fig:PeVsDelta_c1DiffMs} \vspace{-0.3cm} \end{figure} In this section we numerically evaluate the performance of the different detectors as a function of the channel and modulation parameters. Fig. \ref{fig:PeVsDelta_c1DiffMs} depicts the probability of error versus different values of $\Delta$, for $M=1,2,3$, for the ML, FA, and linear detectors. Throughout this section $10^6$ trials were carried out for each $\Delta$ point. When $M=1$ all the detectors have identical performance. For larger values of $M$, the probability of error of the ML detector was evaluated numerically, while the probability of error of the FA detector was calculated using \eqref{eq:errProbSymbBySymbFA}. For the linear detector we assumed $w_m = \frac{1}{M}$, which leads to $c_{\text{LIN}} = Mc$. It can be observed that, as expected, the probability of error decreases with $\Delta$. For the ML and FA detectors, the error probability also decreases with $M$, but for the linear detector, the error probability increases with $M$. Moreover, as stated in Section \ref{subsec:PerfCompare}, Fig. \ref{fig:PeVsDelta_c1DiffMs} shows that the performance of the ML and FA detectors is practically indistinguishable for small values of $M$. Fig. \ref{fig:PeVsM_c2DiffDeltas} depicts the probability of error versus the number of released particles $M$, for the ML and FA detectors, for $\Delta = 0.2, 0.5$, and $c=2$. Here, $10^6$ trials were carried out for each $M$ point. It can be observed that for small values of $M$, as indicated by Fig.
\ref{fig:PeVsDelta_c1DiffMs}, the FA and ML detectors are indistinguishable. On the other hand, when $M$ increases, e.g., $M > 100$, the superiority of the ML detector is revealed. This supports the results of Section \ref{subsec:LargeM}. \begin{figure}[t] \centering \includegraphics[width=0.8\columnwidth]{PeVsM_c2DiffDeltas.pdf} \captionsetup{font=footnotesize} \caption{$P_{\varepsilon}$ vs. $M$, for $c=2 [s]$ and $\Delta=0.2,0.5 [s]$.} \label{fig:PeVsM_c2DiffDeltas} \vspace{-0.3cm} \end{figure} Note that Fig. \ref{fig:PeVsM_c2DiffDeltas} also indicates that for large enough $M$, the probability of error decays exponentially with $M$. This implies that if $c$ changes, e.g., the distance between the transmitter and receiver increases, one can achieve the same $P_{\varepsilon}$ by increasing $M$. This is demonstrated in Fig. \ref{fig:MVsC_Pe0p01}, where $P_{\varepsilon}$ is fixed to $0.01$, and the required $M$ is presented as a function of $c$, for different values of $\Delta$. \changeYony{Note that for an uncoded $P_{\varepsilon}$ of $0.01$, coding can be used to drive down the BER to a desired level.} As discussed in Section \ref{sec:beyondBinary}, one can trade off the probability of error with the data rate, i.e., the number of bits conveyed in each transmitted symbol, $L$. More precisely, for a given $\Delta$ and $L$, by using $M$ large enough, one can achieve the desired probability of error. This is demonstrated in Fig. \ref{fig:PeVsDelta_c1DiffLs}, which shows that a {\em symbol} probability of error $P_{\varepsilon,s} = 0.01$ can be achieved when $\Delta$ is about 3 seconds by using different $(M,L)$ pairs. This implies that by using a large number of particles, the transmitter can send short messages using a single-shot transmission with relatively small values of $\Delta$. It can further be observed that the required $M$ must scale significantly faster than exponentially with $L$.
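The $c$--$M$ tradeoff of Fig. \ref{fig:MVsC_Pe0p01} can be reproduced directly from the closed-form FA error probability. The following Python sketch is an illustration only (not the code used to generate the figures): it uses the two error terms $a_M = \left(1 - \mathsf{erfc}\left(\sqrt{c/(2\theta)}\right)\right)^M$ and $b_M = 1 - \left(1 - \mathsf{erfc}\left(\sqrt{c/(2(\theta - \Delta))}\right)\right)^M$, matching the definitions in Appendix \ref{annex:thm_ErrExp_FA_proof}, and approximates the decision threshold by a grid search over $(\Delta, \Delta + c/3]$ instead of solving \eqref{eq:thetaEquation}:

```python
import math

def fa_error_prob(c, delta, M, theta):
    # Binary FA error probability 0.5*(a_M + b_M), with
    # a_M = (1 - erfc(sqrt(c/(2*theta))))^M           (decide "1" although x = 0)
    # b_M = 1 - (1 - erfc(sqrt(c/(2*(theta-delta)))))^M  (decide "0" although x = delta)
    a = (1.0 - math.erfc(math.sqrt(c / (2.0 * theta)))) ** M
    b = 1.0 - (1.0 - math.erfc(math.sqrt(c / (2.0 * (theta - delta))))) ** M
    return 0.5 * (a + b)

def min_fa_error(c, delta, M, grid=500):
    # Approximate the optimal threshold by a grid search over (delta, delta + c/3].
    return min(fa_error_prob(c, delta, M, delta + (c / 3.0) * k / grid)
               for k in range(1, grid + 1))

def required_M(c, delta, target=0.01, M_max=5000):
    # Smallest M whose (threshold-optimized) FA error probability is below target.
    for M in range(1, M_max + 1):
        if min_fa_error(c, delta, M) < target:
            return M
    return None

for c in (1.0, 2.0, 4.0):
    print("c =", c, " required M =", required_M(c, delta=0.5))
```

Consistently with the figure, the required $M$ grows rapidly with $c$ for a fixed $\Delta$.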
Yet, to clearly observe the double exponential scaling of $M$ with $L$, much larger values of $(M,L)$ should be considered. \changeYony{Unfortunately, this leads to instabilities in the numerical computations.} \begin{figure}[t] \centering \includegraphics[width=0.8\columnwidth]{MVsC_Pe0p01.pdf} \captionsetup{font=footnotesize} \caption{The number of particles $M$ required to achieve $P_{\varepsilon}=0.01$, as a function of $c [s]$, for the FA detector.} \label{fig:MVsC_Pe0p01} \vspace{-0.3cm} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.8\columnwidth]{PeVsDelta_c1DiffLs.pdf} \captionsetup{font=footnotesize} \caption{$P_{\varepsilon,s}$ vs. $\Delta$, for $c \mspace{-3mu} = \mspace{-3mu} 1 [s]$ and the $(M,L)$ pairs: $(25,3)$, $(90,4)$, $(350,5)$.} \label{fig:PeVsDelta_c1DiffLs} \vspace{-0.2cm} \end{figure} \tyony{Finally, we consider the case of diffusion with a drift, i.e., the AIGN channel. The ML detector for the case of $M>1$ was presented in \cite[eq. (45)]{sri12}, while the FA and linear detectors are discussed in Section \ref{subsec:FA}. Fig. \ref{fig:IG_PeVsVelocity_DifMs} depicts the probability of error versus different values of the drift velocity $v$, for the AIGN channel, and for $M=1,2,4$. Here $\Delta = 1[s], D=0.5 [\mu m^2 /s]$ and $d = 1 [\mu m]$, which implies that $\lambda = 1 [s]$. This setting is equivalent to the one simulated in \cite[Fig. 9]{sri12}. $10^6$ trials were carried out for each $v$ point. It can be observed that, similar to the case of diffusion without a drift, the FA detector significantly outperforms the linear detector, and is almost indistinguishable from the ML detector. Only when $M$ is very large does the performance gap between the FA and ML detectors become apparent. \begin{figure}[t] \centering \includegraphics[width=0.8\columnwidth]{IG_PeVsVelocity_DifMs.pdf} \captionsetup{font=footnotesize} \caption{AIGN channel: $P_{\varepsilon}$ vs.
$v$, for $D=2 [\mu m^2 /s]$ and $d = 1 [\mu m]$.} \label{fig:IG_PeVsVelocity_DifMs} \vspace{-0.3cm} \end{figure} } \section{Conclusions} \label{sec:conc} We have studied communication over DBMT channels assuming that multiple information particles are simultaneously released at each transmission. We first derived the optimal ML detector, which has high complexity. We next considered the linear detection framework, which was shown to be effective in Gaussian channels or in MT channels in the presence of drift. However, we showed that when the noise is stable with characteristic exponent smaller than unity, as is the case in MT channels without drift, then linear processing increases the noise dispersion, which results in a higher probability of error than for the single particle release modulation. We then proposed the FA detector and showed that for low to medium values of $M$ it achieves a probability of error very close to that of the ML detector with a complexity similar to that of the linear detector. On the other hand, since the first arrival is not a sufficient statistic for the detection problem, it is not expected that the FA and the ML detectors will be equivalent for all values of $M$. To rigorously prove this statement we derived the error exponent of both detectors and showed that, indeed, for large values of $M$ the performance of ML is superior. \tyony{While the focus of this paper is on DBMT channels without drift, we showed that the above results also extend to the case of diffusion with drift (the AIGN channel). More precisely, for small values of $M$ (up to the order of tens), the FA detector outperforms the linear detector, and closely approaches the performance of the ML detector. Thus, the FA detector provides a very good approximation for the ML detector in DBMT channels.} \changeYony{Our derivations indicate that the FA detector has the appealing property that the conditional densities concentrate towards the particles' release time}, see e.g., Figs.
\ref{fig:CondDistributions} and \ref{fig:CondDistributionsIG}. This implies that by using $M$ large enough, one can use large constellations, thus, conveying several bits in each transmission. This property is very attractive for molecular nano-scale sensors that are required to send a limited number of bits and then remain quiet for a long period of time. \appendices \numberwithin{equation}{section} \section{Proof for the Uniqueness of $\theta$ in \eqref{eq:thetaEquation}} \label{annex:Uniqueness_proof} First we note that the mode of a standard L\'evy-distributed RV is $\frac{c}{3}$, and therefore, the decision threshold must lie in the interval $[\Delta, \Delta + \frac{c}{3}]$. The uniqueness of the solution stems from the fact that the PDF of the L\'evy distribution is unimodal \cite[Ch. 2.7]{zolotarev-book}, and from the fact that the PDFs in the two hypotheses are shifted versions of the L\'evy distribution. More precisely, note that for $y_1 \to \Delta$, the LHS of \eqref{eq:thetaEquation} tends to zero, while for $y_1 \to \Delta + \frac{c}{3}$ the LHS of \eqref{eq:thetaEquation} is larger than $\frac{c \Delta}{3}$. We now show that the derivative of the LHS of \eqref{eq:thetaEquation}, which is given by $(2y_1 - \Delta) \log \left( \frac{y_1}{y_1 - \Delta} \right) - \Delta$, is positive. This implies that \eqref{eq:thetaEquation} has a unique solution. We write: \begin{align*} & (2y_1 - \Delta) \log \left( \frac{y_1}{y_1 - \Delta} \right) - \Delta \nonumber \\ & \mspace{80mu} \stackrel{(a)}{=} \Delta \left( \log \left( w \right) \left(1 - \frac{2}{1 - w} \right) - 1 \right) \\ % & \mspace{80mu} \stackrel{(b)}{\ge} \Delta \left( \left(w - 1 \right) \left(1 - \frac{2}{1 - w} \right) - 1 \right) \\ % & \mspace{80mu} = \Delta \left( (1 + w) - 1 \right) \\ % & \mspace{80mu} = \Delta w \ge 0. \end{align*} \noindent Here, (a) follows by setting $w = 1 - \frac{\Delta}{y_1}$. Note that since $y_1 \ge \Delta$, then $w \in [0,1]$. For step (b) we use the inequality $\log (w) \le w - 1$, together with the fact that the factor $1 - \frac{2}{1 - w}$ is negative for $w \in (0,1)$, which reverses the direction of the inequality. Thus, as the derivative is positive, we conclude that \eqref{eq:thetaEquation} has a unique solution in the desired range. \section{Proof of Theorem \ref{thm:PmmUB}} \label{annex:thm_PmmUB_proof} First we prove that the equation $g(x) = 0$ has a unique solution. Then we derive the properties of $g(x)$, and finally, we derive the upper bound on the mismatch probability. Let $\alpha = \frac{c \Delta}{3}$. Thus, we can write $g(x)$ as $g(x) = \log \left( \frac{x - \Delta}{x} \right) + \frac{\alpha}{x (x-\Delta)}, x > \Delta$. First, we show that $g(x)$ has a single extreme point which is larger than $\Delta$. Writing the derivative of $g(x)$ we have: \begin{align*} \frac{\partial g(x)}{\partial x} = \frac{\alpha (\Delta - 2x) + \Delta x (x - \Delta)}{x^2(x-\Delta)^2}. \end{align*} \noindent Thus, the extreme points of $g(x)$ are the roots of the polynomial $x^2 - \frac{2 \alpha + \Delta^2}{\Delta} x + \alpha$. Plugging $\alpha = \frac{c \Delta}{3}$ and using the expressions for roots of a quadratic equation we obtain that the extreme points are given by: \begin{align*} x_1 & = \frac{c}{3} + \frac{\Delta}{2} \left( 1 + \sqrt{1 + \frac{4c^2}{9\Delta^2}} \right), \nonumber \\ x_2 & = \frac{c}{3} + \frac{\Delta}{2} \left( 1 - \sqrt{1 + \frac{4c^2}{9\Delta^2}} \right). \end{align*} \noindent Now, it can be observed that $x_1 > \frac{c}{3} + \Delta > \Delta $ which proves the existence of an extreme point larger than $\Delta$. For $x_2$ we write: \begin{align*} x_2 - \Delta & = \frac{c}{3} + \frac{\Delta}{2} \left( 1 - \sqrt{1 + \frac{4c^2}{9\Delta^2}} \right) - \Delta \nonumber \\ & = \frac{c}{3} - \frac{\Delta}{2} \left( 1 + \sqrt{1 + \frac{4c^2}{9\Delta^2}} \right) \nonumber \\ & < 0. \end{align*} \noindent Hence, in the range $x>\Delta$, the function $g(x)$ has a single extreme point. Next, we note that $\lim_{x \to \Delta} g(x) = \infty$, while $\lim_{x \to \infty} g(x) = 0$.
Therefore, $x_1$ is a minimum point. Thus, the equation $g(x)=0$ has a single solution in the range $x>\Delta$. Next, we upper bound the mismatch probability. Let ``mismatch'' denote the event of $\hat{S}_{\text{ML}}(\mathbf{y}) \neq \hat{S}_{\text{FA}}(\mathbf{y})$. We write: \begin{align} \Pr \{ \text{mismatch} \} & = 0.5 \Big( \Pr \{ \text{mismatch} | x = 0 \} \nonumber \\ & \mspace{60mu} + \Pr \{ \text{mismatch} | x = \Delta \} \Big). \label{eq:Prob_mismatch} \end{align} \subsection{Upper Bounding the Mismatch Probability for $\Delta < y_{\text{FA}} < \theta_M$} We begin by upper bounding $\Pr \{ \text{mismatch} | x = 0 \}$. Note that if $y_{\text{FA}} \le \Delta$ then $\hat{S}_{\text{ML}}(\mathbf{y}) = \hat{S}_{\text{FA}}(\mathbf{y}) = 0$, and therefore we analyze only the case of $\Delta < y_{\text{FA}}$. Recall that for $\Delta < y_{\text{FA}} < \theta_M$ the FA detector decides $\hat{S}_{\text{FA}}(\mathbf{y}) = 0$. Hence, a mismatch event occurs when the ML detector declares $\hat{S}_{\text{ML}}(\mathbf{y}) = 1$, which occurs if (see \eqref{eq:decisionRuleMultiple}): \begin{align} \sum_{m=1}^{M}{\log \left( \frac{y_m-\Delta}{y_m} \right) + \frac{c \Delta}{3} \frac{1}{y_m(y_m - \Delta)}} < 0. \label{eq:ML_1_decision} \end{align} \noindent The LHS of \eqref{eq:ML_1_decision} can be written as: \begin{align*} & \sum_{m=1}^{M}{\log \left( \frac{y_m-\Delta}{y_m} \right) + \frac{c \Delta}{3} \frac{1}{y_m(y_m - \Delta)}} \nonumber \\ & \qquad= \log \left( \frac{y_\text{FA}-\Delta}{y_\text{FA}} \right) + \frac{c \Delta}{3} \frac{1}{y_\text{FA}(y_\text{FA} - \Delta)} \nonumber \\ & \qquad \qquad + \sum_{m=2}^{M}{\log \left( \frac{y_m-\Delta}{y_m} \right) + \frac{c \Delta}{3} \frac{1}{y_m(y_m - \Delta)}} .
\end{align*} \noindent Therefore, \eqref{eq:ML_1_decision} can be written as: \begin{align} & \log \left( \frac{y_\text{FA}-\Delta}{y_\text{FA}} \right) + \frac{c \Delta}{3} \frac{1}{y_\text{FA}(y_\text{FA} - \Delta)} \nonumber \\ & \qquad < \sum_{m=2}^{M}{\log \left( \frac{y_m}{y_m-\Delta} \right) - \frac{c \Delta}{3} \frac{1}{y_m(y_m - \Delta)}}. \end{align} \noindent Let $\phi(y) = \log \left( \frac{y-\Delta}{y} \right) + \frac{c \Delta}{3} \frac{1}{y(y - \Delta)}$. Next, we define the set: \begin{align*} \mathcal{B}_1(y) & \triangleq \Bigg\{(y_2,y_3,\dots,y_M): \nonumber \\ & \qquad \phi(y) \mspace{-3mu} < \mspace{-3mu} \sum_{m=2}^{M}{\log \left( \frac{y_m}{y_m \mspace{-3mu} - \mspace{-3mu} \Delta} \right) \mspace{-3mu} - \mspace{-3mu} \frac{c \Delta}{3} \frac{1}{y_m(y_m \mspace{-3mu} - \mspace{-3mu} \Delta)}}\Bigg\}. \end{align*} \noindent Thus, $\Pr \{ \text{mismatch} | x \mspace{-3mu} = \mspace{-3mu} 0 \}$, when $\Delta \mspace{-3mu} < \mspace{-3mu} y_{\text{FA}} \mspace{-3mu} < \mspace{-3mu} \theta_M$, is given by: \begin{align} & \Pr \{ \text{mismatch} | x = 0, \Delta < y_{\text{FA}} < \theta_M \} \nonumber \\ & \qquad = \int_{\Delta}^{\theta_M} f_{Y_{\text{FA}|X}}(y|x=0) \nonumber \\ & \qquad \qquad \times \int_{\mathcal{B}_1(y)}{f_{\{Y_j\}_{j=2}^M|X}(\{y_j\}_{j=2}^M|x=0)} \{dy_j\}_{j=2}^M dy \nonumber \\ % & \qquad \stackrel{(a)}{\le} \int_{\Delta}^{\theta_M}{f_{Y_{\text{FA}|X}}(y|x=0) dy} \nonumber \\ % & \qquad = \Psi(c,M,\theta_M) - \Psi(c,M,\Delta), \label{eq:probMM_firstTermBound} \end{align} \noindent where (a) follows from the fact that the inner integrand is a joint PDF, and therefore the inner integral is upper bounded by 1.
\noindent Following similar arguments, for $\Pr \{ \text{mismatch} | x = \Delta \}$, when $\Delta < y_{\text{FA}} < \theta_M$, we obtain: \begin{align} & \Pr \{ \text{mismatch} | x \mspace{-2mu} = \mspace{-2mu} \Delta, \Delta < y_{\text{FA}} \mspace{-2mu} < \mspace{-2mu} \theta_M \} \nonumber \\ & \qquad \qquad \le \Psi(c,M,\theta_M-\Delta) - \Psi(c,M,0) . \label{eq:probMM_thirdTermBound} \end{align} \subsection{Upper Bounding the Mismatch Probability for $\theta_M < y_{\text{FA}}$} First, we recall that when $\theta_M < y_{\text{FA}}$ then $\hat{S}_{\text{FA}}(y_{\text{FA}})=1$. Hence, a mismatch event takes place if the ML detector declares $\hat{S}_{\text{ML}}(\mathbf{y}) = 0$, which occurs if (see \eqref{eq:decisionRuleMultiple}): \begin{align} \sum_{m=1}^{M}{\log \left( \frac{y_m-\Delta}{y_m} \right) + \frac{c \Delta}{3} \frac{1}{y_m(y_m - \Delta)}} > 0. \label{eq:ML_0_decision} \end{align} \noindent We showed that if $y_{\text{FA}} > x^{\ast}$ then $g(y_{m}) < 0, \forall m=1,2,\dots,M$. In this case the LHS of \eqref{eq:ML_0_decision} is negative and $\hat{S}_{\text{ML}}(\mathbf{y}) = 1$, thus, there is no mismatch. Therefore, $\Pr \{ \text{mismatch} | x = 0, \theta_M < y_{\text{FA}} \} = \Pr \{ \text{mismatch} | x = 0, \theta_M < y_{\text{FA}} < x^{\ast} \}$.
Now, we define the set: \begin{align*} \mathcal{B}_2(y) & \triangleq \Bigg\{(y_2,y_3,\dots,y_M): \nonumber \\ & \qquad \phi(y) \mspace{-3mu} > \mspace{-3mu} \sum_{m=2}^{M}{\log \left( \frac{y_m}{y_m \mspace{-3mu} - \mspace{-3mu} \Delta} \right) \mspace{-3mu} - \mspace{-3mu} \frac{c \Delta}{3} \frac{1}{y_m(y_m \mspace{-3mu} - \mspace{-3mu} \Delta)}}\Bigg\}, \end{align*} \noindent and write $\Pr \{ \text{mismatch} | x = 0, \theta_M < y_{\text{FA}} < x^{\ast} \}$ as: \begin{align} & \Pr \{ \text{mismatch} | x = 0, \theta_M < y_{\text{FA}} < x^{\ast} \} \nonumber \\ & \qquad = \int_{\theta_M}^{x^{\ast}}f_{Y_{\text{FA}|X}}(y|x=0) \nonumber \\ & \qquad \qquad \times \int_{\mathcal{B}_2(y)}{f_{\{Y_j\}_{j=2}^M|X}(\{y_j\}_{j=2}^M|x=0)} \{dy_j\}_{j=2}^M dy \nonumber \\ % & \qquad \le \int_{\theta_M}^{x^{\ast}}{f_{Y_{\text{FA}|X}}(y|x=0) dy} \nonumber \\ % & \qquad = \Psi(c,M,x^{\ast}) - \Psi(c,M,\theta_M). \label{eq:probMM_secondTermBound} \end{align} \noindent Following similar arguments, $\Pr \{ \text{mismatch} | x \mspace{-3mu} = \mspace{-3mu} \Delta, \theta_M \mspace{-3mu} < \mspace{-3mu} y_{\text{FA}} \}$ is upper bounded by: \begin{align} & \Pr \{ \text{mismatch} | x = \Delta, \theta_M < y_{\text{FA}} \} \nonumber \\ & \qquad \qquad \le \Psi(c,M,x^{\ast} - \Delta) - \Psi(c,M,\theta_M - \Delta). \label{eq:probMM_fourthTermBound} \end{align} \noindent Combining \eqref{eq:probMM_firstTermBound}, \eqref{eq:probMM_thirdTermBound}, \eqref{eq:probMM_secondTermBound}, and \eqref{eq:probMM_fourthTermBound} we conclude the proof. \section{Proof of Theorem \ref{thm:ErrExp_FA}} \label{annex:thm_ErrExp_FA_proof} Let $a_M \mspace{-3mu} \triangleq \mspace{-3mu} \left( 1 \mspace{-3mu} - \mspace{-3mu} \mathsf{erfc} \left( \sqrt{\frac{c}{2\theta_M}} \right) \right)^M$ and $b_M \mspace{-3mu} \triangleq \mspace{-3mu} 1 \mspace{-3mu} - \mspace{-3mu} \left(1 \mspace{-3mu} - \mspace{-3mu} \mathsf{erfc} \left( \sqrt{\frac{c}{2(\theta_M - \Delta)}} \right) \right)^M$.
Then, explicitly writing the probability of error in \eqref{eq:errProbSymbBySymbFA}, the error exponent of the FA detector is given by: \begin{align} \mathsf{E}_{\text{FA}} & = \lim_{M \to \infty} - \frac{\log P_{\varepsilon, \text{FA}}^{(M)}}{M} \nonumber \\ & = \lim_{M \to \infty} - \frac{\log \left( 0.5 \left( a_M + b_M \right) \right)}{M} \nonumber \\ % & = \min \left\{ \lim_{M \to \infty} - \frac{\log \left( a_M \right)}{M}, \lim_{M \to \infty} - \frac{\log \left( b_M \right)}{M} \right\}. \label{eq:Efa_General} \end{align} Since $\theta_M \to \Delta$ as $M \to \infty$, we write: \begin{align} & \lim_{M \to \infty} - \frac{\log \left( a_M \right)}{M} \nonumber \\ & \qquad = \lim_{M \to \infty} - \frac{\log \left( \left( 1 - \mathsf{erfc} \left( \sqrt{\frac{c}{2\theta_M}} \right) \right)^M \right)}{M} \nonumber \\ & \qquad = - \log \left( 1 - \mathsf{erfc} \left( \sqrt{\frac{c}{2 \Delta}} \right) \right). \label{eq:ErrExp_1stTerm} \end{align} \noindent Next, we analyze the second term in \eqref{eq:Efa_General}, and note that this term depends on the rate of convergence of $\theta_M$ to $\Delta$. Again, we use the fact that $\theta_M \to \Delta$ as $M \to \infty$, and write $\theta_M = \Delta + \delta_M$, where $\delta_M \to 0$. We then characterize the rate at which $\delta_M$ decays to zero, for large values of $M$. As $\theta_M$ is the decision threshold, by equating the two PDFs in \eqref{eq:yFApdf}, we have the following equality in terms of $\delta_M$: \begin{align} & \left( \frac{\delta_M}{\Delta + \delta_M} \right)^{\frac{3}{2}} e^{-\frac{c}{2} \left(\frac{\Delta}{\delta_M (\Delta + \delta_M)} \right)} \nonumber \\ & \qquad \quad = \left(\frac{1 - \mathsf{erfc} \left( \sqrt{\frac{c}{2 (\Delta + \delta_M)}} \right) }{1 - \mathsf{erfc} \left( \sqrt{\frac{c}{2 \delta_M} } \right)} \right)^{M-1}.
\label{eq:vanish_deltam_1} \end{align} \noindent Let $M$ be sufficiently large, and recall the limit $\lim_{x \to \infty} \frac{- \log \left(\mathsf{erfc}(x) \right)}{x^2} = 1$. Furthermore, for $M$ large enough we can write $\Delta + \delta_M \approx \Delta$. Thus, we write \eqref{eq:vanish_deltam_1} as: \begin{align} \left( \frac{\delta_M}{\Delta + \delta_M} \right)^{\frac{3}{2}} e^{-\frac{c}{2} \left(\frac{\Delta}{\delta_M (\Delta + \delta_M)} \right)} & \approx \left( \frac{\delta_M}{\Delta} \right)^{\frac{3}{2}} e^{-\frac{c}{2} \left(\frac{1}{\delta_M} \right)} \nonumber \\ & \approx \left(\frac{\beta}{1 - e^{\frac{-c}{2 \delta_M}}} \right)^{M-1}, \label{eq:vanish_deltam_2} \end{align} \noindent where we let $\beta \triangleq 1 - \mathsf{erfc} \left( \sqrt{\frac{c}{2 \Delta}} \right) \le 1$, and note that $1 - \mathsf{erfc} \left( \sqrt{\frac{c}{2 (\Delta + \delta_M)}} \right) \approx \beta$. We now assume that $\delta_M$ scales as $\frac{d_1}{M}$, for some constant $d_1$, and show that for large enough $M$ the LHS and RHS of \eqref{eq:vanish_deltam_2} have the same scaling. This also enables finding the constant $d_1$ and calculating the error exponent of the second term in \eqref{eq:Efa_General}. We write \eqref{eq:vanish_deltam_2} as: \begin{align} \left( \frac{d_1}{\Delta} \right)^{\frac{3}{2}} M^{\frac{-3}{2}} e^{-\frac{c}{2d_1} M } & \mspace{-3mu} = \mspace{-3mu} \frac{\beta^{M-1}}{\left( 1 - e^{\frac{-c}{2d_1}M} \right)^{M-1}} \nonumber \\ & \mspace{-3mu} \approx \mspace{-3mu} \beta^{M-1} \left( 1 \mspace{-3mu} + \mspace{-3mu} (M \mspace{-3mu} - \mspace{-3mu} 1) e^{\frac{-c}{2d_1}M} \right).
\label{eq:vanish_deltam_3} \end{align} \noindent Thus, by noting that the two sides of \eqref{eq:vanish_deltam_3} must scale to zero at the same rate, we write:\footnote{Note that as we are interested in the error exponent, we apply analysis which focuses only on the scaling law, and therefore we ignore terms which scale slower, e.g., $M^{\frac{-3}{2}}$.} \begin{align} e^{-\frac{c}{2d_1} M } = e^{M(1+\log(\frac{\beta}{e}))} \Rightarrow d_1 = \frac{-c}{2(1+\log(\frac{\beta}{e}))}. \end{align} \noindent Having the scaling law of $\delta_M$, we now write the second term in \eqref{eq:Efa_General} as: \begin{align} & \lim_{M \to \infty} - \frac{\log \left( b_M \right)}{M} \nonumber \\ & \qquad = \lim_{M \to \infty} - \frac{\log \left( 1 - \left(1 - \mathsf{erfc} \left( \sqrt{\frac{c}{2(\theta_M - \Delta)}} \right) \right)^M \right)}{M} \nonumber \\ % & \qquad = \lim_{M \to \infty} \frac{ - \log \left(M e^{\frac{-c}{2 d_1} M} \right) }{M} \nonumber \\ % & \qquad = -1 - \log \left(\frac{\beta}{e} \right) \nonumber \\ % & \qquad = - \log \left( 1 - \mathsf{erfc} \left( \sqrt{\frac{c}{2 \Delta}} \right) \right). \label{eq:ErrExp_2ndTerm} \end{align} \noindent Finally, by plugging \eqref{eq:ErrExp_1stTerm} and \eqref{eq:ErrExp_2ndTerm} into \eqref{eq:Efa_General}, we conclude the proof. 
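As a numerical sanity check of this result (a sketch, not part of the original derivation: it reuses the terms $a_M$ and $b_M$ defined above and optimizes the threshold by a grid search over $(\Delta, \Delta + c/3]$ rather than solving for $\theta_M$), one can verify that $-\log P_{\varepsilon, \text{FA}}^{(M)}/M$ indeed approaches $-\log\left(1 - \mathsf{erfc}\left(\sqrt{c/(2\Delta)}\right)\right)$ as $M$ grows:

```python
import math

def fa_min_error(c, delta, M, grid=800):
    # Threshold-optimized binary FA error probability 0.5*(a_M + b_M),
    # with a_M = (1 - erfc(sqrt(c/(2*theta))))^M and
    # b_M = 1 - (1 - erfc(sqrt(c/(2*(theta - delta)))))^M,
    # and theta searched over (delta, delta + c/3].
    best = 1.0
    for k in range(1, grid + 1):
        theta = delta + (c / 3.0) * k / grid
        a = (1.0 - math.erfc(math.sqrt(c / (2.0 * theta)))) ** M
        b = 1.0 - (1.0 - math.erfc(math.sqrt(c / (2.0 * (theta - delta))))) ** M
        best = min(best, 0.5 * (a + b))
    return best

c, delta = 1.0, 0.5
exponent = -math.log(1.0 - math.erfc(math.sqrt(c / (2.0 * delta))))  # E_FA of the theorem
for M in (50, 100, 200, 400):
    print(M, -math.log(fa_min_error(c, delta, M)) / M, "->", exponent)
```

The empirical exponent converges to the predicted value from above, with a gap that shrinks roughly like $\log(2)/M$.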
\section{Proof of Theorem \ref{thm:nonBinScaling}} \label{annex:thm_nonBinScaling_proof} We first recall the probability of error of the decision rule \eqref{eq:nonBin_FA_Det}: \begin{align} P_{\varepsilon, \text{FA}} & = \frac{2^L - 1}{2^L} \Bigg( \left( 1 \mspace{-3mu} - \mspace{-3mu} \mathsf{erfc} \left( \sqrt{\frac{c}{2\theta_M}} \right) \right)^M \nonumber \\ & \mspace{80mu} + \mspace{-3mu} 1 \mspace{-3mu} - \mspace{-3mu} \left(1 - \mathsf{erfc} \left( \sqrt{\frac{c}{2(\theta_M \mspace{-3mu} - \mspace{-3mu} \tilde{\Delta})}} \right) \right)^M \Bigg), \label{eq:errProbNonBinFA} \end{align} \noindent where $\theta_M$ is the solution of the following equation in $y_{\text{FA}}$: \begin{align} & y_{\text{FA}}(y_{\text{FA}} \mspace{-3mu} - \mspace{-3mu} \tilde{\Delta}) \mspace{-3mu} \cdot \mspace{-3mu} \log \left( \frac{y_{\text{FA}}}{y_{\text{FA}} \mspace{-3mu} - \mspace{-3mu} \tilde{\Delta}} \right) \nonumber \\ & \mspace{10mu} + \mspace{-3mu} y_{\text{FA}}(y_{\text{FA}} \mspace{-3mu} - \mspace{-3mu} \tilde{\Delta}) \mspace{-3mu} \cdot \mspace{-3mu} \log \mspace{-3mu} \left( \mspace{-3mu} \frac{1 \mspace{-3mu} - \mspace{-3mu} \mathsf{erfc} \mspace{-3mu} \left( \sqrt{\frac{c}{2(y_{\text{FA}} - \tilde{\Delta})}} \right)}{1 \mspace{-3mu} - \mspace{-3mu} \mathsf{erfc} \mspace{-3mu} \left( \sqrt{\frac{c}{2y_{\text{FA}}}} \right)} \mspace{-3mu} \right)^{\mspace{-10mu} \frac{2(M \mspace{-3mu} - \mspace{-3mu}1)}{3}} \mspace{-12mu} = \mspace{-3mu} \frac{c \tilde{\Delta}}{3}. \label{eq:thetaMEquation_nonBin} \end{align} \noindent As in Appendix \ref{annex:thm_ErrExp_FA_proof}, we let $\theta_M = \tilde{\Delta} + \delta_M$, where $\delta_M \to 0$.
Plugging the expression for $\theta_M$ into \eqref{eq:errProbNonBinFA} we write: \begin{align} P_{\varepsilon, \text{FA}} & \propto \left( 1 - \mathsf{erfc} \left( \sqrt{\frac{c}{2\theta_M}} \right) \right)^M \nonumber \\ & \qquad + 1 - \left(1 - \mathsf{erfc} \left( \sqrt{\frac{c}{2(\theta_M - \tilde{\Delta})}} \right) \right)^M \nonumber \\ % & \stackrel{(a)}{=} \left( 1 - \mathsf{erfc} \left( \sqrt{\frac{c}{2(\tilde{\Delta} + \delta_M)}} \right) \right)^M \nonumber \\ & \qquad + 1 - \left(1 - \mathsf{erfc} \left( \sqrt{\frac{c}{2\delta_M }} \right) \right)^M \nonumber \\ % & \stackrel{(b)}{\approx} \left( 1 - \frac{1}{6} e^{-\frac{c}{2} \frac{1}{\tilde{\Delta} + \delta_M}} \right)^M + 1 - \left(1 - \frac{1}{6} e^{-\frac{c}{2} \frac{1}{\delta_M}} \right)^M \nonumber \\ % & \stackrel{(c)}{\approx} e^{-\frac{M}{6} e^{-\frac{c}{2} \frac{1}{\tilde{\Delta} + \delta_M}}} + 1 - e^{-\frac{M}{6} e^{-\frac{c}{2} \frac{1}{\delta_M}}}, \label{eq:eq:errProbNonBinFA_approx_1} \end{align} \noindent where (a) follows by plugging $\theta_M = \tilde{\Delta} + \delta_M$; (b) follows from the approximation $\mathsf{erfc}(x) \approx \frac{1}{6} e^{-x^2}$ \cite[Eq. (14)]{chiani2003}; and (c) follows from the limit definition of the exponential function. Next, recall that $\delta_M$ scales as $\frac{d_1}{M}$ (for the details see Appendix \ref{annex:thm_ErrExp_FA_proof}). Plugging this scaling into the second exponential term in \eqref{eq:eq:errProbNonBinFA_approx_1} we obtain: \begin{align} 1 - e^{-\frac{M}{6} e^{-\frac{c}{2} \frac{1}{\delta_M}}} \approx 1 - e^{-\frac{M}{6} e^{-\frac{c}{2} \frac{M}{d_1}}} \to_{M \to \infty} 0.
\end{align} \noindent For the first term in \eqref{eq:eq:errProbNonBinFA_approx_1} we note that $\tilde{\Delta} \mspace{-3mu} \approx \mspace{-3mu} 2^{-L}\Delta$ and write: \begin{equation*} e^{-\frac{M}{6} e^{-\frac{c}{2} \frac{1}{\tilde{\Delta} + \delta_M}}} \mspace{-3mu} = \mspace{-3mu} e^{-\frac{M}{6} g(M,L)}, \end{equation*} \noindent where $g(M,L) \mspace{-3mu} \triangleq \mspace{-3mu} e^{-\frac{c}{2} \frac{1}{2^{-L} \Delta + \frac{d_1}{M}}}$. Now, for the probability of error to vanish, it is required that $M \cdot g(M,L) \to \infty$. Thus, we require $g(M,L) \propto M^{-(1-\epsilon)}$ for some $\epsilon > 0$. Explicitly writing this relationship we obtain: \begin{align} \log M^{1-\epsilon} \mspace{-3mu} \propto \mspace{-3mu} \frac{1}{2^{-L} \Delta + \frac{d_1}{M}} \Rightarrow 2^{-L} \Delta \mspace{-3mu} + \mspace{-3mu} \frac{d_1}{M} \mspace{-3mu} \propto \mspace{-3mu} \frac{1}{\log M^{1-\epsilon}}. \end{align} \noindent Thus, \begin{align} \Delta 2^{L} \propto \log M^{1-\epsilon} \Rightarrow L \propto \log \log M^{1-\epsilon}. \end{align} \noindent In conclusion, for the probability of error to vanish, $L$ should scale at most as $\log \log M^{1-\epsilon}$. \vspace{-0.2cm}
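The scaling above can also be illustrated numerically. The following sketch (an illustration only: it evaluates the symbol error probability \eqref{eq:errProbNonBinFA} with spacing $\tilde{\Delta} = \Delta/(2^L-1)$ and a grid-searched threshold in place of the solution of \eqref{eq:thetaMEquation_nonBin}) finds, for $c=1$ and $\Delta=3$, the smallest $M$ achieving $P_{\varepsilon,s}=0.01$ for $L=3,4,5$:

```python
import math

def symbol_error(c, dt, L, M, grid=400):
    # Symbol error probability ((2^L - 1)/2^L) * (a + b), cf. the expression for
    # P_{eps,FA} above, with the threshold grid-searched over (dt, dt + c/3].
    factor = (2.0 ** L - 1.0) / 2.0 ** L
    best = 1.0
    for k in range(1, grid + 1):
        theta = dt + (c / 3.0) * k / grid
        a = (1.0 - math.erfc(math.sqrt(c / (2.0 * theta)))) ** M
        b = 1.0 - (1.0 - math.erfc(math.sqrt(c / (2.0 * (theta - dt))))) ** M
        best = min(best, factor * (a + b))
    return best

def required_M(c, delta, L, target=0.01, M_max=2000):
    dt = delta / (2 ** L - 1)  # spacing between adjacent release times
    for M in range(1, M_max + 1):
        if symbol_error(c, dt, L, M) < target:
            return M
    return None

Ms = [required_M(1.0, 3.0, L) for L in (3, 4, 5)]
print(Ms)
```

The computed values are of the same order as the $(M,L)$ pairs in Fig. \ref{fig:PeVsDelta_c1DiffLs}, and the required $M$ grows much faster than exponentially in $L$, in line with the theorem.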
\section{Introduction} Understanding the mechanism of turbulence has been a great challenge for over a century, and we are still very far from a comprehensive theory and a final resolution of the turbulence problem \cite{lumly}. Reynolds (1883) \cite{reynolds} performed the first famous experiments on pipe flow, demonstrating the transition from laminar to turbulent flow. Since then, various stability theories for this phenomenon have emerged during the past 120 years, but few satisfactorily explain the various flow instabilities and the related complex flow phenomena. Pipe Poiseuille flow (Hagen-Poiseuille flow) is linearly stable for all Reynolds numbers $Re$ by eigenvalue analysis \cite{landau} \cite{drazin} \cite{schmid01} \cite{trefethen93} \cite{gross00}. However, experiments show that the flow becomes turbulent if $Re$ ($=\rho UD/\mu $) exceeds a value of about $2000$. Experiments also show that if disturbances in a laminar flow are carefully avoided or considerably reduced, the onset of turbulence is delayed to Reynolds numbers up to $Re=O(10^{5})$ \cite{trefethen93} \cite{gross00}. For $Re$ above $2000$, the transition to turbulence depends on the disturbance amplitude and frequency; there exists a threshold amplitude, depending on $Re$, below which transition from the laminar to the turbulent state does not occur \cite{wygnanski} \cite{darby}. Thus, it is clear that the transition to turbulence for $Re>2000$ is dominated by the behaviours of both the mean flow and the disturbance. Only when the combined effect of the two factors reaches the critical condition can the transition occur for $Re>2000$. These observations are summarized in Table 1.
Linear stability analysis of plane parallel flows gives a critical Reynolds number $Re$ ($=\rho V_{0}h/\mu $) of $5772$ for plane Poiseuille flow, while experiments show that transition to turbulence occurs at Reynolds numbers of order $1000$ \cite{orszag71} \cite{landau} \cite{trefethen93} \cite{gross00}, even though the flow can also be kept laminar up to $Re=O(10^{5})$ \cite{Nishioak75}. One resolution of these paradoxes is that the domain of attraction for the laminar state shrinks for large $Re$ (as $Re^{\gamma}$ say, with $\gamma <0$), so that small but finite perturbations lead to transition \cite{trefethen93} \cite{chapamn2002}. Grossmann \cite{gross00} commented that this discrepancy demonstrates that the nature of the onset-of-turbulence mechanism in this flow must be different from an eigenvalue instability. Orszag and Patera \cite{orszag80} remarked that the mechanism of transition is not properly represented by parallel-flow linear stability analysis. They proposed a linear three-dimensional mechanism to predict the transitional Reynolds number. Some nonlinear stability theories have also been proposed, for example, in \cite{stuart}. However, these theories do not seem to offer good agreement with the experimental data. The energy method has also been used in the study of flow instabilities \cite{LinCC1955} \cite{Betchov} \cite{joseph76} \cite{drazin} \cite{hinze} \cite{schmid01}. In the energy method, one observes the rate of increase of the disturbance energy to study the instability of the \emph{flow system}. The critical condition is determined by the maximum Reynolds number at which the disturbance energy in the system monotonically decreases. In the flow system, it is considered that the turbulent shear stress interacts with the velocity gradient and that the disturbance gains energy from the mean flow in this way. Thus, the disturbance is amplified and the instability occurs as the disturbance energy increases.
It is thus recognized that it is the vorticity of the basic state that leads to instability. The energy method, however, does not agree with the experiments either \cite{Betchov} \cite{drazin} \cite{schmid01}. In recent years, various scenarios have been proposed for the subcritical transition \cite{trefethen93} \cite{gross00} \cite{Waleffe} \cite{baggett} \cite{Zikanov} \cite{reddy1998} \cite{chapamn2002} \cite{Meseguer}. Although these scenarios give a better understanding of the transition process, the mechanism is still not fully understood and agreement with experimental data remains unsatisfactory. Generally, the transition from laminar to turbulent flow does not occur suddenly in the entire flow field; it first starts somewhere in the field and then spreads gradually from that position. As is well known in solid mechanics, the failure of a metal component generally starts from some weak area such as a manufacturing fault, a crack, a stress concentration, or a fatigue position. In fluid mechanics, we likewise consider that the breakdown of a steady laminar flow should start from a most dangerous position. The consequent questions are: (a) \emph{Where is this most dangerous position for Poiseuille flow?} (b) \emph{What are the mechanism and the dominant factor of this phenomenon?} (c) \emph{What parameter should be used to characterize this position?} These questions are our concern, and answering them is important for understanding the phenomenon and estimating the flow transition. Because the transition to turbulence generally results from flow instability \cite{LinCC1955}, we argue that the critical condition of transition should be determined by the position where the \emph{flow instability} first takes place. If the mechanism of flow instability is identified and the most dangerous position is found, the critical condition of transition can be determined.
\begin{table}[tbp] \centering \begin{tabular}{|l|l|l|} \hline Base flow & Disturbance & Resulting flow state \\ \hline $Re<2000$ & No matter how large. & Disturbance decays; flow remains laminar. \\ \hline $Re\thicksim 10^{5}$ & Kept low. & Flow remains laminar. \\ \hline $Re\geq 2000$ & Large enough, depending on $Re$. & Transition occurs. \\ \hline \end{tabular} \caption{Characteristics of the transition of Hagen-Poiseuille flow (pipe Poiseuille flow).} \end{table} In this study, we explore the critical condition of the main flow for instability and turbulence transition, rather than the detailed process of disturbance amplification. An energy gradient theory is proposed to explain the mechanism of flow instability and turbulence transition in parallel flows, and a new dimensionless parameter characterizing the critical condition of flow instability is introduced. Comparison with experimental data for plane Poiseuille flow and pipe Poiseuille flow at subcritical transition indicates that the proposed idea is valid. \section{Proposed Mechanism of Flow Instability} \begin{figure}[tbp] \centering \includegraphics[height=3.0952in]{PARAB11.ps} \caption{Velocity distribution variation with increasing Reynolds number for given fluid and geometry in plane Poiseuille flow. $Re=\rho UL/\mu $, $L=2h$, where $h$ is the half-width of the channel.} \end{figure} \begin{figure}[tbp] \centering \includegraphics[height=3.0943in]{PARAB1.ps} \caption{Velocity profiles for laminar and turbulent flows.} \end{figure} The plane Poiseuille flow in a channel is shown in Fig.1. For a given flow geometry and fluid, with increasing mean velocity $U$ the flow may undergo transition to turbulence if $Re$ exceeds a critical value (under a certain disturbance). The velocity profiles for laminar and turbulent flows are shown in Fig.2.
It can be imagined that there is a \textquotedblleft driving factor\textquotedblright\ which pulls the laminar velocity profile outward toward the walls when transition takes place. What could this driving factor be? From a large number of observations, we propose that the gradient of the fluid kinetic energy in the transverse direction, $\frac{\partial }{\partial y}(\frac{1}{2}V^{2})=V\frac{\partial V}{\partial y}$, may form a \textquotedblleft \emph{driving force}\textquotedblright\ that amplifies the flow disturbance under a given flow condition, while the gradient of the viscous friction force resists or absorbs the disturbance. Here, $V$ is the magnitude of the local velocity. The stability of the flow depends on the balance of these two roles. As the mean velocity $U$ of a parallel flow increases, the energy gradient in the transverse direction increases. If this energy gradient is large enough, it leads to amplification of a disturbance, while the viscous friction caused by the shear stress stabilizes the flow by absorbing the velocity fluctuation. When the transverse energy gradient exceeds a critical value, the laminar flow can no longer balance the disturbance and flow instability may be excited. Finally, turbulence is triggered if the transverse energy gradient remains large enough as the flow proceeds downstream; the transverse energy gradient drives the exchange of energy between fluid layers and sustains the turbulence. Therefore, it is proposed that \emph{the necessary condition for the turbulence transition is that there is an energy gradient in the transverse direction of the main flow.} We now show that this necessary condition holds at least for parallel flows. If gravitational energy is neglected, the total energy gradient in the transverse direction is $\frac{\partial }{\partial y}\left( p+\frac{1}{2}\rho V^{2}\right) $.
For parallel flows, $\frac{\partial p}{\partial y}=0$ and $V=u$. If the energy gradient vanishes, $\frac{\partial }{\partial y}(\frac{1}{2}V^{2})=V\frac{\partial V}{\partial y}=0$, then $\frac{\partial V}{\partial y}=0$ wherever $V\neq 0$. In that case the rate of increase of the disturbance energy is negative, because the disturbance cannot extract energy from the base flow when the velocity gradient is zero, while viscous dissipation drains it \cite{Betchov} \cite{drazin} \cite{schmid01}. The disturbance must therefore decay. In this way, it is proved that an energy gradient in the transverse direction is a necessary condition for the flow transition. In addition, a pressure gradient normal to the flow direction can also produce a flow instability, even when the Reynolds number is low. Both centrifugal and Coriolis instabilities are caused by such pressure gradients, and elastic instability is likewise produced by a transverse pressure gradient \cite{shaqfeh96} \cite{dou02a}. For these cases, the instability mechanism should account for the variation of the cross-streamline pressure, which may initiate or accelerate the instability. In some incompressible flows, such as stratified flows, the gravitational energy should also be taken into account. For a given flow geometry and fluid, it is proposed that the flow stability condition can be expressed as \begin{equation} \frac{\partial }{\partial y}\left( \rho g_{y}y+p+\frac{1}{2}\rho V^{2}\right) <C\text{,} \label{E1} \end{equation} where $g_{y}$ is the component of the gravitational acceleration in the $y$ direction and $C$ is a constant related to the fluid properties and the geometry. The $x$ axis is along the flow direction and the $y$ axis is along the transverse direction. In this study, we first show that the proposed idea is correct for Poiseuille flows.
\section{Formulation and Theory Description} The conservation of momentum for an incompressible Newtonian fluid is (neglecting the gravity force) \begin{equation} \rho (\frac{\partial \mathbf{V}}{\partial t}+\mathbf{V}\cdot \nabla \mathbf{V})=-\nabla p+\mu \nabla ^{2}\mathbf{V}\text{.} \label{E3} \end{equation} Using the identity \begin{equation} \mathbf{V}\cdot \nabla \mathbf{V}=\frac{1}{2}\nabla \left( \mathbf{V}\cdot \mathbf{V}\right) -\mathbf{V}\times \nabla \times \mathbf{V}\text{,} \label{E4} \end{equation} equation (\ref{E3}) can be rearranged as \begin{equation} \rho \frac{\partial \mathbf{V}}{\partial t}+\nabla (p+\frac{1}{2}\rho V^{2})=\mu \nabla ^{2}\mathbf{V}+\rho \mathbf{(V}\times \nabla \times \mathbf{V)}\text{,} \label{E5} \end{equation} where $\rho $ is the fluid density, $t$ the time, $\mathbf{V}$ the velocity vector, $p$ the hydrodynamic pressure, and $\mu $ the dynamic viscosity of the fluid. If the viscous force is zero, the above equation reduces to the Lamb form of the momentum equation, which can be found in most textbooks. For incompressible flow, the total pressure in Eq. (\ref{E5}) represents the total energy. The energy equation has long been used for stability analysis, as mentioned above \cite{hinze} \cite{drazin} \cite{schmid01} \cite{joseph76} \cite{Betchov}. In the previous sections, it was proposed that the instability of a viscous flow depends on the relative magnitudes of the transverse energy gradient and the viscous friction term. A large energy gradient in the transverse direction tends to amplify a disturbance, while a large shear stress gradient in the streamwise direction tends to absorb the disturbance and maintain the original laminar state. The transition to turbulence depends on the relative magnitude of these two roles, energy gradient amplification and viscous friction damping, under a given disturbance.
We now propose a parameter characterizing the relative magnitude of these effects. Let $d\mathbf{s}$ represent the differential length along a streamline in a Cartesian coordinate system, \begin{equation} d\mathbf{s}=d\mathbf{x}+d\mathbf{y}\text{.} \end{equation} Taking the dot product of Eq. (\ref{E5}) with $d\mathbf{s}$, we obtain \begin{equation} \rho \frac{\partial \mathbf{V}}{\partial t}\cdot d\mathbf{s}+\nabla (p+\frac{1}{2}\rho V^{2})\cdot d\mathbf{s}=\mu \nabla ^{2}\mathbf{V}\cdot d\mathbf{s}+\rho \mathbf{(V}\times \nabla \times \mathbf{V)}\cdot d\mathbf{s}\text{.} \label{E6} \end{equation} Since $\mathbf{(V}\times \nabla \times \mathbf{V)}\cdot d\mathbf{s}=0$ along a streamline, for steady flows the energy gradient along the streamline is \begin{equation} \partial (p+\frac{1}{2}\rho V^{2})/\partial s=\mu \nabla ^{2}\mathbf{V}\cdot \frac{d\mathbf{s}}{\left\vert d\mathbf{s}\right\vert }=(\mu \nabla ^{2}\mathbf{V)}_{s}\text{.} \label{E7} \end{equation} This equation shows that the total energy gradient along the streamwise direction equals the viscous term $(\mu \nabla ^{2}\mathbf{V)}_{s}$. For pressure-driven flows, $(\mu \nabla ^{2}\mathbf{V)}_{s}$ represents the energy loss due to friction, so the total energy decreases along the streamwise direction. The energy gradient along the transverse direction is \begin{equation} \partial (p+\frac{1}{2}\rho V^{2})/\partial n=\frac{\partial p}{\partial n}+\rho V\frac{\partial V}{\partial n}\text{.} \label{E8} \end{equation} The transverse energy gradient thus depends on the velocity gradient and the velocity magnitude as well as on the transverse pressure gradient. The relative magnitude of the energy gradients in the two directions can be expressed by a \emph{new dimensionless parameter} $K$, the ratio of the energy gradient in the transverse direction to that in the streamwise direction, \begin{equation} K=\frac{\partial E/\partial n}{\partial E/\partial s}=\frac{\partial (p+\frac{1}{2}\rho V^{2})/\partial n}{\partial (p+\frac{1}{2}\rho V^{2})/\partial s}=\frac{\partial p/\partial n+\rho V(\partial V/\partial n)}{(\mu \nabla ^{2}\mathbf{V)}_{s}}\text{,} \label{KNum} \end{equation} where $E=p+\frac{1}{2}\rho V^{2}$ is the total energy, $n$ denotes the direction normal to the streamwise direction, and $s$ denotes the streamwise direction. The parameter $K$ is a field variable and represents the direction of the local total energy gradient vector. When $K$ is small, the viscous term in Eq.(\ref{KNum}) dominates and the flow tends to be stable; when $K$ is large, the numerator dominates and the flow tends to be unstable. For a given flow field, the maximum of $K$ in the domain marks the most probable unstable location: in regions of high $K$ the flow is more prone to instability than in regions of low $K$. For a given disturbance, the first instability should be initiated at the maximum of $K$, denoted $K_{max}$; in other words, the position of the maximum of $K$ is the most dangerous position. The magnitude of $K_{max}$ in the flow field indicates how close the flow is to instability. In particular, if $K_{max}=\infty $, the flow has the potential to become unstable under some disturbance. For a given flow disturbance, there is a critical value of $K_{max}$ above which the flow becomes unstable.
In particular, \emph{at the subcritical transition in wall-bounded parallel flows, $K_{max}$ reaches its critical value, below which no transition occurs regardless of the disturbance amplitude.} For unidirectional parallel flows, this critical value of $K_{max}$ should be a constant, independent of the fluid properties and the geometrical parameters. Owing to the complexity of the flow, it is difficult to predict this critical value theoretically; nevertheless, it can be determined from experimental data for given flows. For parallel flows, the $s$-$n$ coordinates coincide with the $x$-$y$ coordinates and the pressure gradient $\partial p/\partial n=\partial p/\partial y=0$. Thus, using the global scaling $\partial (\frac{1}{2}\rho V^{2})/\partial y\backsim \rho U^{2}/L$ and $\left( \mu \nabla ^{2}\mathbf{V}\right) _{s}\backsim \mu U/L^{2}$, we obtain $K\backsim \rho UL/\mu =Re$, where $L$ is a characteristic length in the transverse direction. Thus, the parameter $K$ is equivalent to the Reynolds number in the global sense for parallel flows. The negative energy gradient in the streamwise direction resists or absorbs the disturbance and therefore plays a stabilizing role, while the energy gradient in the transverse direction amplifies the disturbance and therefore plays a destabilizing role. This is not to say that a high transverse energy gradient necessarily leads to instability, but it carries a \emph{potential for instability}; whether instability occurs also depends on the magnitude of the disturbance. From this discussion, it is easily understood that the disturbance amplitude required for instability is small when the transverse energy gradient is high (high $Re$) \cite{darby}. \begin{figure}[tbp] \centering \includegraphics[height=3.0952in]{angle3a0.ps} \caption{Schematic of energy gradient and energy angle for plane Poiseuille flows. (a) Energy angle increases with the Reynolds number; (b) Definition of the energy angle.} \end{figure} \begin{figure}[tbp] \centering \includegraphics[height=3.0943in]{CENTER2A.ps} \caption{Schematic of the direction of the total energy gradient and energy angle for flow with an inflection point, at which the energy angle equals 90 degrees.} \end{figure} The value of $\arctan K$ expresses the angle between the direction of the total energy gradient and the streamwise direction. We write \begin{equation} \alpha =\arctan K\text{.} \end{equation} In this paper, the angle $\alpha $ is named the \textquotedblleft \emph{energy angle},\textquotedblright\ as shown in Fig.3. The absolute value of the energy angle can also be used to express how close the flow is to the onset of instability. Fig.3 and Fig.4 show schematics of the energy angle for some flows. There is a critical value of the energy angle, $\alpha _{c}$, corresponding to the critical value of $K_{max}$; when $\alpha >\alpha _{c}$, the flow becomes unstable. For Poiseuille flows, $0^{\circ }\leq \alpha <90^{\circ }$ and the stability depends on the magnitudes of the energy angle and the disturbance. For a parallel flow with a velocity inflection, $\alpha =90^{\circ }$ ($K_{max}=\infty $) at the inflection point and the flow is therefore unstable. This is the first theoretical demonstration that \emph{viscous flow with a velocity inflection is unstable}.
Conversely, as discussed before, a viscous flow without a velocity inflection is not necessarily stable; this depends on the flow conditions (as shown in Table 1). These results hold at least for \emph{pressure-driven flows}. Fig.4 gives a clear illustration of the inflectional instability of viscous flows. According to Eq.(\ref{KNum}), the physics of the criterion presented in this paper is easily understood. The transverse energy gradient tends to amplify a small disturbance, while the streamwise energy loss due to friction damps the disturbance. The parameter $K$ represents the relative magnitude of disturbance amplification by the energy gradient and disturbance damping by viscous loss. When there is no inflection point in the velocity distribution, the amplification or decay of a disturbance depends on the balance of these two roles, i.e., on the value of $K$. When there is an inflection point, the viscous term vanishes at that point while the transverse energy gradient persists; even a small disturbance is then amplified by the energy gradient at this location, and the flow is unstable. Rayleigh (1880) \cite{rayleigh} proved only that inviscid flow with an inflection point is unstable, whereas we demonstrate here that viscous flow with an inflection point is unstable. Rayleigh's criterion was derived mathematically from inviscid theory, and its physical meaning has remained unclear. It is well known that viscosity plays a dual role in flow instability and disturbance amplification \cite{drazin} \cite{hinze} \cite{schmid01} \cite{LinCC1955}. It can be seen from equation (\ref{KNum}) that the higher the viscosity, the larger the viscous friction loss and hence the more stable the flow. If the vorticity and the streamwise velocity are large, the transverse energy gradient is large and the flow is more unstable.
From this discussion, it follows that \emph{viscosity mainly plays a stabilizing role in the initiation of flow instability at subcritical transition}, by affecting the base flow through the viscous friction of the streamwise velocity. This is consistent with the Reynolds number criterion: the Reynolds number is a dimensionless parameter representing the ratio of the convective inertia force to the viscous force in the Navier-Stokes equations. However, the magnitude of the Reynolds number is only a global indication of the flow state and a rough expression of the transition condition. At the same $Re$, the behaviour of the flow instability may differ owing to different combinations of the viscosity with the other parameters when $Re$ exceeds the indifference Reynolds number, as shown by the solution chart of the Orr-Sommerfeld equation \cite{hinze} \cite{white} \cite{schlichting}. The role of viscosity is complicated by the variations of flow parameters and flow conditions. The generation of turbulence is not simply caused by increasing $Re$ but by increasing the value of $K$; when $Re$ increases, it inevitably leads to an increase of $K$ in the flow field. The magnitude of the local disturbance at a fixed point depends not only on the apparent $Re$ but also on the local flow conditions. For steady Poiseuille flow, the convective inertia term is zero, yet the flow becomes turbulent at high $Re$; the occurrence of turbulence in this case is therefore not due to the convective inertia term. In Poiseuille flow, the local Reynolds number is high in the core region along the centerline, but the degree of turbulence there is low; near the wall, the local Reynolds number is low, but the degree of turbulence is high. In uniform flows, a high Reynolds number does not necessarily lead to turbulence.
In summary, the Reynolds number is only a global parameter, whereas $K$ is a local parameter which represents the local behaviour of the flow and best reflects the role of the energy gradient. \section{Analysis of Poiseuille Flows} \subsection{Plane Poiseuille Flow} \begin{figure}[tbp] \centering \includegraphics[height=3.1436in]{px1.ps} \caption{Kinetic energy gradient and viscous force term versus the axial pressure gradient for plane Poiseuille flow at any position in the flow field. $\left\vert dp/dx\right\vert \propto U\propto Re$ for given fluid and geometry.} \end{figure} The instability of Poiseuille flows with increasing mean velocity $U$ can be demonstrated as follows for a given fluid and flow geometry by using the energy gradient concept. For 2D Poiseuille flow, the momentum equation is \begin{equation} 0=-\frac{\partial p}{\partial x}+\mu \frac{\partial ^{2}u}{\partial y^{2}}\text{,} \label{P1} \end{equation} showing that the viscous force term is proportional to the streamwise pressure gradient. Integration of this equation gives \begin{equation} u=u_{0}\left( 1-\frac{y^{2}}{h^{2}}\right) \text{,} \end{equation} where $u_{0}=-\frac{1}{2\mu }h^{2}\frac{\partial p}{\partial x}$ is the centerline velocity and $h$ is the channel half-width.
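As a quick numerical check (our illustrative sketch, not part of the original analysis), integrating the parabolic profile across the channel confirms the relation $U=\frac{2}{3}u_{0}$ between the average and centerline velocities that is used below:

```python
import numpy as np

# Parabolic laminar profile u(y) = u0 * (1 - (y/h)^2) over -h <= y <= h.
u0, h = 1.5, 1.0  # arbitrary centerline velocity and half-width for the check
y = np.linspace(-h, h, 100001)
u = u0 * (1.0 - (y / h)**2)

# Average velocity U = (1/2h) * integral of u dy (trapezoidal rule);
# analytically U = (2/3) u0 for the parabolic profile.
dy = y[1] - y[0]
U = np.sum(0.5 * (u[:-1] + u[1:]) * dy) / (2.0 * h)
print(U / u0)  # ~0.6667
```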
Although the energy gradient does not appear explicitly in the above equation, it can be expressed for any position in the flow field (noting $v=0$) as \begin{equation} \frac{\partial }{\partial y}\left( \frac{1}{2}\rho V^{2}\right) =-\frac{\rho }{2\mu ^{2}}h^{2}y\left( 1-\frac{y^{2}}{h^{2}}\right) \left( \frac{\partial p}{\partial x}\right) ^{2}\text{.} \label{P2} \end{equation} Thus, for any position in the flow field, \begin{equation} \left\vert \frac{\partial }{\partial y}\left( \frac{1}{2}\rho V^{2}\right) \right\vert \propto \left( \frac{\partial p}{\partial x}\right) ^{2}\text{.} \label{P3} \end{equation} The behaviours of equations (\ref{P1})--(\ref{P3}) are shown in Fig.5: the viscous force term is linear in the pressure gradient, while the kinetic energy gradient increases quadratically with it. Therefore, at low mean velocity $U$, the viscous friction term can balance the disturbance amplification caused by the energy gradient; at large $U$, the viscous friction term may not constrain the disturbance amplification and the flow may undergo transition to turbulence. For plane Poiseuille flow, the ratio of the energy gradient to the viscous force term, $K$, is ($\partial p/\partial y=0$) \begin{eqnarray} K &=&\frac{\partial }{\partial y}\left( \frac{1}{2}\rho V^{2}\right) /\left( \mu \frac{\partial ^{2}u}{\partial y^{2}}\right) =-\frac{2\rho yu_{0}}{h^{2}}u_{0}\left( 1-\frac{y^{2}}{h^{2}}\right) /\left( -\mu \frac{2u_{0}}{h^{2}}\right) \nonumber \\ &=&\frac{3}{2}\frac{\rho Uh}{\mu }\frac{y}{h}\left( 1-\frac{y^{2}}{h^{2}}\right) =\frac{3}{4}Re\frac{y}{h}\left( 1-\frac{y^{2}}{h^{2}}\right) \text{.} \label{aa} \end{eqnarray} Here, $Re=\rho UL/\mu $ with $L=2h$, and $U=\frac{2}{3}u_{0}$ has been used for plane Poiseuille flow; $u_{0}$ is the maximum velocity at the centerline and $U$ is the average velocity. It can be seen that $K$ is proportional to $Re$ at a fixed point in the flow field.
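As a numerical check of Eq. (\ref{aa}) (a sketch of ours, not from the original analysis), the following Python code locates the maximum of $K$ across the half-channel. Analytically, $dK/d(y/h)=0$ gives $y/h=1/\sqrt{3}\approx 0.5774$ and $K_{\max }=Re/(2\sqrt{3})\approx 0.29\,Re$, which yields $K_{\max }\approx 389$ at $Re_{c}=1350$:

```python
import numpy as np

def K_plane(eta, Re):
    """Energy gradient parameter K for plane Poiseuille flow, Eq. (aa):
    K = (3/4) * Re * (y/h) * (1 - (y/h)**2), with Re = rho*U*(2h)/mu."""
    return 0.75 * Re * eta * (1.0 - eta**2)

Re = 1350.0  # critical Re for plane Poiseuille flow (Patel & Head)
eta = np.linspace(0.0, 1.0, 200001)  # y/h over the half-channel
K = K_plane(eta, Re)
i = np.argmax(K)
print(eta[i])  # ~0.5774 = 1/sqrt(3), the most dangerous position
print(K[i])    # ~389.7, the critical K_max at transition
```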
The distributions of $u$, $E$, and $K$ along the transverse direction for plane Poiseuille flow are shown in Fig.6. There is a \emph{maximum} of $K$ at $y/h=\pm 0.5774$, as shown in Fig.6; this maximum can also be obtained by differentiating equation (\ref{aa}) with respect to $y/h$ and setting the derivative to zero. Since we are concerned with the magnitude of $K$ and the velocity profile is symmetric about the centerline, we refer to the maximum of $K$ by its positive value hereafter. We expect that the breakdown of Poiseuille flow does not occur suddenly in the entire flow field, but first takes place at the location of $K_{\max }$ in the domain and then spreads out according to the distribution of the $K$ value. The formation of turbulence spots in shear flows may be related to this process. \begin{figure}[tbp] \centering \includegraphics[height=3.5483in]{pp.ps} \caption{Velocity, energy, and $K$ along the transverse direction $y/h$ for plane Poiseuille flow, each normalized by its maximum.} \end{figure} \begin{figure}[tbp] \centering \includegraphics[height=3.5483in]{pp3.ps} \caption{Velocity, energy, and $K$ along the transverse direction $r/R$ for pipe Poiseuille flow, each normalized by its maximum.} \end{figure} \subsection{Pipe Poiseuille Flow} An analysis similar to that for plane Poiseuille flow can be carried out for pipe Poiseuille flow. For circular pipe Poiseuille flow, the momentum equation is \begin{equation} 0=-\frac{\partial p}{\partial z}+\mu \left( \frac{\partial ^{2}u_{z}}{\partial r^{2}}+\frac{1}{r}\frac{\partial u_{z}}{\partial r}\right) \text{,} \end{equation} showing that the viscous force term is again proportional to the streamwise pressure gradient. Integration of this equation gives the axial velocity \begin{equation} u_{z}=u_{0}\left( 1-\frac{r^{2}}{R^{2}}\right) \text{,} \end{equation} where $u_{0}=-\frac{1}{4\mu }R^{2}\frac{\partial p}{\partial z}$ is the centerline velocity, $z$ is the axial direction and $r$ the radial direction of the cylindrical coordinates, and $R$ is the radius of the pipe. The energy gradient at any position in the flow field is (noting $u_{r}=0$) \begin{equation} \frac{\partial }{\partial r}\left( \frac{1}{2}\rho V^{2}\right) =-\frac{\rho }{8\mu ^{2}}R^{2}r\left( 1-\frac{r^{2}}{R^{2}}\right) \left( \frac{\partial p}{\partial z}\right) ^{2}\text{.} \end{equation} As for plane Poiseuille flow, the viscous force term is linear in the pressure gradient while the kinetic energy gradient increases quadratically with it. For pipe Poiseuille flow, the ratio of the energy gradient to the viscous force term, $K$, is ($\partial p/\partial r=0$) \begin{eqnarray} K &=&\frac{\partial }{\partial r}\left( \frac{1}{2}\rho V^{2}\right) /\mu \left( \frac{\partial ^{2}u_{z}}{\partial r^{2}}+\frac{1}{r}\frac{\partial u_{z}}{\partial r}\right) =-\frac{2\rho ru_{0}}{R^{2}}u_{0}\left( 1-\frac{r^{2}}{R^{2}}\right) /\left( -\mu \frac{4u_{0}}{R^{2}}\right) \nonumber \\ &=&\frac{\rho UR}{\mu }\frac{r}{R}\left( 1-\frac{r^{2}}{R^{2}}\right) =\frac{1}{2}Re\frac{r}{R}\left( 1-\frac{r^{2}}{R^{2}}\right) \text{.} \label{bb} \end{eqnarray} Here, $Re=\rho UD/\mu $, and $U=\frac{1}{2}u_{0}$ has been used for pipe Poiseuille flow; $u_{0}$ is the maximum velocity at the centerline and $U$ is the average velocity. The distribution of $K$ along the transverse direction for pipe Poiseuille flow, normalized by its maximum, is the same as that for plane Poiseuille flow with $y/h$ replaced by $r/R$; the \emph{maximum} of $K$ again occurs at $r/R=0.5774$, as shown in Fig.7. \section{Comparison with Experiments} \begin{table}[tbp] \centering \begin{tabular}{|l|l|l|l|} \hline Flow type & Authors & $Re_{c}$ & $Re_{c}$ \\ \hline\hline Poiseuille pipe & & & $Re=\rho UD/\mu $ \\ \hline & Reynolds (1883) & & $2300$ \\ \hline & Patel \& Head (1969) & & $2000$ \\ \hline & Darbyshire \& Mullin (1995) & & $1760\thicksim 2300$ \\ \hline & Most literature cited & & $2000$ \\ \hline\hline Poiseuille plane & & $Re=\rho UL/\mu $ & $Re=\rho u_{0}h/\mu $ \\ \hline & Davies \& White (1928) & $1440$ & $1080$ \\ \hline & Patel \& Head (1969) & $1350$ & $1012$ \\ \hline & Carlson et al. (1982) & $1340$ & $1005$ \\ \hline & Alavyoon et al. (1986) & $1466$ & $1100$ \\ \hline & Most literature cited & & $1000$ \\ \hline \end{tabular} \caption{Collected experimental data on the critical Reynolds number for plane Poiseuille flow and pipe Poiseuille flow. $U$ is the average velocity. For pipe Poiseuille flow, $u_0=2U$ and $D$ is the diameter of the pipe. For plane Poiseuille flow, $u_0=1.5U$ and $L=2h$ is the height of the channel.} \end{table} Experiments on Poiseuille flows indicate that when the Reynolds number is below a critical value, the flow remains laminar regardless of the disturbance. For pipe Poiseuille flow (Hagen-Poiseuille), Reynolds (1883) \cite{reynolds} carried out the first systematic experiments on the flow transition and found that the critical Reynolds number for transition to turbulence is about $Re_{c}=\rho UD/\mu =2300$, where $U$ is the average velocity and $D$ is the diameter of the pipe. The most widely accepted critical value today is $Re_{c}=\rho UD/\mu =2000$, as demonstrated by numerous experiments \cite{schlichting}; the collected data fall in the range $1760$ to $2300$ \cite{darby}. Experimental data for the transition in plane Poiseuille flow are also available in the literature.
Davies and White \cite{davies} showed that the critical Reynolds number for transition to turbulence is $Re_{c}=\rho UL/\mu =1440$ for plane Poiseuille flow, where $U$ is the average velocity and $L=2h$ is the width of the channel. Patel and Head \cite{patel69}, through detailed measurements, obtained critical values of $Re_{c}=2000$ for pipe Poiseuille flow and $Re_{c}=1350$ for channel flow. Carlson et al. \cite{carlson82} found transition at about $Re_{c}=1340$ for plane Poiseuille flow using a flow visualization technique. Alavyoon et al.'s experiments \cite{alavyoon} show that the transition to turbulence for plane Poiseuille flow occurs around $Re_{c}=\rho u_{0}h/\mu =1100$. The most accepted value of the minimum $Re_{c}$ for plane Poiseuille flow is about $Re_{c}=\rho u_{0}h/\mu \thickapprox 1000$ \cite{trefethen93} \cite{gross00}. All the collected experimental data are listed in Table 2. Although these experiments were performed under various environmental conditions, they all cluster near a commonly accepted critical Reynolds number. In the following, we show that there is a critical value of $K_{\max }$ at which the flow becomes turbulent. In order to compare plane Poiseuille flow with pipe Poiseuille flow under the same experimental conditions, we use Patel and Head's data \cite{patel69} to evaluate the parameters at the critical conditions; their data fit the collected data well and are cited by most of the literature. We now calculate the critical value of $K_{\max }$ at the transition condition for plane and pipe Poiseuille flow using Eqs.(\ref{aa}) and (\ref{bb}), respectively. For plane Poiseuille flow, one obtains $K_{\max }=389$ at the critical Reynolds number $Re_{c}=1350$. For pipe Poiseuille flow, one obtains $K_{\max }=385$ at the critical Reynolds number $Re_{c}=2000$. These results are shown in Table 3.
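These critical values can be reproduced numerically; the following sketch (our illustration, based on Eqs. (\ref{aa}) and (\ref{bb})) evaluates $K_{\max }$ at the experimental critical Reynolds numbers of Patel and Head:

```python
import numpy as np

# K profiles from Eqs. (aa) and (bb); eta = y/h (plane) or r/R (pipe).
def K_plane(eta, Re):   # Re = rho*U*(2h)/mu
    return 0.75 * Re * eta * (1.0 - eta**2)

def K_pipe(eta, Re):    # Re = rho*U*D/mu
    return 0.50 * Re * eta * (1.0 - eta**2)

eta = np.linspace(0.0, 1.0, 200001)

# K_max at the experimental critical Reynolds numbers (Patel & Head 1969).
K_max_plane = K_plane(eta, 1350.0).max()  # ~389.7
K_max_pipe = K_pipe(eta, 2000.0).max()    # ~384.9
print(K_max_plane, K_max_pipe)
```

Both values fall in the narrow band $385\backsim 389$ quoted above, illustrating that the two flows share a common critical $K_{\max }$ even though their critical Reynolds numbers differ.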
In this table, the critical Reynolds number obtained from the energy method is also listed. From the comparison of the critical values of $K_{\max }$ for plane Poiseuille flow and pipe Poiseuille flow, we find that although the critical Reynolds number differs between the two flows, the turbulence transition takes place at the same $K_{\max }$ value, about $385\backsim 389$. This demonstrates that $K_{\max }$ is indeed the dominant parameter for the transition, and that $K_{\max }$ is a better criterion than the $Re$ number for the transition condition. We can further conclude that the energy gradient theory is better suited than linear stability theory for predicting the critical Reynolds number of subcritical transition. In this way, the proposed idea is verified for wall-bounded parallel shear flows. Therefore, it may be presumed that the transition to turbulence in other, more complicated shear flows also depends on the $K_{\max }$ in the flow field. \begin{table}[tbp] \centering \begin{tabular}{|l|l|l|l|l|l|} \hline Flow type & $Re$ expression & Eigenvalue analysis & Energy & Experiments & $K_{\max }$ at Exp \\ & & $Re_{c}$ & method $Re_{c}$ & $Re_{c}$ & $Re_{c}$ value \\ \hline Poiseuille pipe & $Re=\rho UD/\mu $ & stable for all $Re$\cite{gross00} & 81.5 & $2000$\cite{patel69} & $385$ \\ \hline Poiseuille plane & $Re=\rho UL/\mu $ & $7696$\cite{orszag71} & 68.7 & $1350$\cite{patel69} & $389$ \\ \cline{2-6} & $Re=\rho u_{0}h/\mu $ & $5772$\cite{orszag71} & 49.6 & $1012$\cite{patel69} & $389$ \\ \hline Plane Couette & $Re=\rho u_{h}h/\mu $ & stable for all $Re$\cite{gross00} & 20.7 & $370$\cite{dav, till} & $370$ \\ \hline \end{tabular} \caption{Comparison of the critical Reynolds number and the ratio of the energy gradient to the viscous force term, $K_{\max }$, for plane Poiseuille flow and pipe Poiseuille flow. The critical Reynolds number by the energy method is taken from Schmid and Henningson (2001).
} \end{table} Nishioka et al. (1975)'s famous experiments \cite{Nishioak75} on plane Poiseuille flow showed details of the outline and process of the flow breakdown. The measured instantaneous velocity distributions suggest that the breakdown of the flow is a local phenomenon, at least in its initial stage. As in Fig.8, the base flow is laminar, and the instantaneous velocity distribution breaks at the position $y/h=0.50$ ($T=4$ to $6$) to $0.62$ ($T=8$ to $9$), showing an oscillation of velocity in $y/h=0.50\thicksim 0.62$. The profiles show an inflectional velocity in this range of $y/h$. This result means that the flow breakdown first occurs in the range of $y/h=0.50\thicksim 0.62$. This coincides with the prediction of our theory, i.e., the position of $K_{\max }$, which occurs at $y/h=0.5774$, is the most dangerous point. These results suffice to confirm the validity of the \textquotedblleft energy gradient\textquotedblright\ theory, at least for Poiseuille flows (pressure-driven flow). For pipe flow, in a recent study \cite{wedin}, Wedin and Kerswell showed the presence of a \textquotedblleft shoulder\textquotedblright\ in the velocity profile at about $r/R=0.6$ in their solution of travelling waves. They suggested that this corresponds to where the fast streaks of travelling waves reach from the wall. This kind of velocity profile obtained by simulation is similar to that of Nishioka et al.'s experiments for channel flows. The location of the \textquotedblleft shoulder\textquotedblright\ is about the same as that of $K_{\max }$. According to the present theory, this \textquotedblleft shoulder\textquotedblright\ may then be intricately related to the distribution of the energy gradient. The solution of travelling waves has been confirmed by experiments more recently \cite{hof}. \FRAME{ftbpFU}{3.9972in}{3.1704in}{0pt}{\Qcb{Instantaneous velocity distributions in a plane Poiseuille flow (Nishioka et al.
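The predicted position $y/h=0.5774$ can be recovered from the parabolic base profile. Assuming $u=u_{0}(1-(y/h)^{2})$, the transverse kinetic-energy gradient $\rho u\,\partial u/\partial y$ varies as $\eta (1-\eta ^{2})$ with $\eta =y/h$, while the streamwise loss term is uniform across the channel, so $K$ peaks where $\eta (1-\eta ^{2})$ peaks. This is a sketch under these assumptions, not a full evaluation of Eq.(\ref{aa}):

```python
import numpy as np

# Sketch: for u = u0*(1 - eta^2) with eta = y/h, the transverse energy
# gradient is proportional to eta*(1 - eta^2); locate its maximum.
eta = np.linspace(0.0, 1.0, 100001)
profile = eta * (1.0 - eta**2)
eta_numeric = eta[np.argmax(profile)]
eta_analytic = 1.0 / np.sqrt(3.0)   # from d/d_eta (eta - eta^3) = 0

print(f"{eta_analytic:.4f}")        # 0.5774, inside the measured 0.50-0.62 range
assert abs(eta_numeric - eta_analytic) < 1e-4
```

The maximum at $1/\sqrt{3}\approx 0.5774$ falls inside the breakdown range $0.50\thicksim 0.62$ measured by Nishioka et al.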
1975) \protect\cite{Nishioak75}. Time T corresponding to each distribution is noted on the trace of the u fluctuation at y/h=0.6, sketched in the figure. Solid circle is the mean velocity. Uc is the velocity on the channel center-plane. y/h=0 is at the center-plane and y/h=1 is at the wall. Instantaneous velocity U+u at a point is composed of the mean velocity U and the fluctuation velocity u. (Courtesy of Nishioka; used with permission of Cambridge University Press).}}{}{nishioka1975.ps}{\raisebox{-3.1704in}{\includegraphics[height=3.1704in]{Nishioka1975.ps}}} The energy gradient theory can also be used to explain why it is not appropriate to scale the outer flow or overlap profiles of channel flow and those of pipe flow at the same Reynolds number in turbulent flows \cite{wosnik00}. This is easily understood from the fact that pipe Poiseuille flow has the same velocity and energy gradient distributions in the radial direction as plane Poiseuille flow has in the $y$ direction, but the former has a smaller hydraulic diameter than the latter and hence more viscous friction. Therefore, the scaling should be carried out at the same $K_{\max }$ value, not at the same Reynolds number (Fig.9). At the same Reynolds number, say $1000$, plane Poiseuille flow reaches the critical $Re$ number for transition, while pipe Poiseuille flow is far from the critical $Re$ number, as shown in Fig.9a. The flow states of these two flows are definitely different at this $Re$ number. This principle also applies to the turbulent flow range. If we compare the two types of flows at the same $K_{\max }$ value, they should have the same flow behaviour (Fig.9b). \FRAME{ftbpFU}{4.1165in}{3.0943in}{0pt}{\Qcb{Scaling of plane Poiseuille flow with pipe Poiseuille flow. (a) Keeping $Re$ constant. (b) Keeping $K_{\max }$ constant.
The correlation should be carried out at the same $K_{\max }$ value (b), rather than at the same $Re$ (a).}}{}{rere1.ps}{\raisebox{-3.0943in}{\includegraphics[height=3.0943in]{Rere1.ps}}} In a separate paper \cite{dou2004} (owing to the space limit here), we apply the energy gradient theory to shear-driven flows and show that this theory also holds for plane Couette flow. We obtain $K_{\max }=370$ at the critical transition condition determined by experiments, below which no turbulence occurs (see Table 3). This value is near the value for Poiseuille flows, $385\backsim 389$. The minute difference in the number is not important, because there is some difference in the determination of the critical condition. For example, the judgement of transition is made from the chart of drag coefficient in Patel and Head \cite{patel69}, while a visualization method is used in \cite{dav, till}. These results demonstrate that the critical value of $K_{\max }$ at subcritical transition for wall-bounded parallel flows, including both pressure-driven and shear-driven flows, is about $370\backsim 389$. \FRAME{ftbpFU}{6.3287in}{5.6299in}{0pt}{\Qcb{Comparison of the theory with the experimental data for the instability condition of Taylor-Couette flow between concentric rotating cylinders ($R_{1}$=3.80cm, $R_{2}$=4.035 cm). $R_{1}$: radius of the inner cylinder; $R_{2}$: radius of the outer cylinder. $\protect\omega _{1}$ and $\protect\omega _{2}$ are the angular velocities of the inner and outer cylinders, respectively. The critical value of the energy gradient parameter Kc=139 is determined by the experimental data at $\protect\omega _{2}=0$ and $\protect\omega _{1}\neq 0$ (the outer cylinder is fixed, the inner cylinder is rotating).
With $K_{c}$=139, the critical value of $\protect\omega _{1}/\protect\nu $ versus $\protect\omega _{2}/\protect\nu $ is calculated using the energy gradient theory for $\protect\omega _{2}/\protect\nu $ = $-2200$ to $900$ \protect\cite{douTC}.}}{}{ta44.ps}{\raisebox{-5.6299in}{\includegraphics[height=5.6299in]{ta44.ps}}} More recently, the energy gradient theory has been applied to the Taylor-Couette flow between concentric rotating cylinders \cite{douTC}. The detailed derivation for the calculation of the energy gradient parameter is provided in that study. The theoretical results for the critical condition of primary instability are in very good agreement with the experiments of Taylor (1923) \cite{Taylor1923} and others; see Fig.10. Taylor (1923) used mathematical theory and linear stability analysis and showed that linear theory agreed with the experiments. However, as is well known and discussed above, linear theory fails for wall-bounded parallel flows. As shown in this paper, the present theory is valid for all of these flows. Therefore, it is concluded that the energy gradient theory is a universal theory for flow instability and turbulent transition, valid for both pressure- and shear-driven flows in both parallel and rotating flow configurations. The Rayleigh-Benard convective instability and the stratified flow instability could also be considered as being produced by the energy gradient transverse to the flow (thermal or gravitational energy). The energy gradient theory can not only be used to predict the generation of turbulence, but may also be applied to the prediction of catastrophic events, such as weather forecasting, earthquakes, landslides, mountain coast, snow avalanches, motion of the mantle, and movement of sand piles in deserts, etc. The breakdown of these mechanical systems can be universally described in detail using this theory.
In a material system, when the maximum of the energy gradient in some direction is greater than a critical value for the given material properties, the system will be unstable. If a disturbance is input to this system, the energy gradient may amplify the disturbance and lead to system breakdown. This problem will be further addressed in a future study. \section{Conclusions} The mechanism of flow instability and turbulent transition in parallel shear flows is studied in this paper. The energy gradient theory is proposed for the flow instability. The theory is applied to plane channel flow and Hagen-Poiseuille flow. The main conclusions of this study are as follows: \begin{enumerate} \item A mechanism of flow instability and turbulence transition is presented for parallel shear flows. The theory of energy gradient is proposed to explain the mechanism of flow instability and turbulence transition. It is stated that the energy gradient in the transverse direction tends to amplify a small disturbance, while viscous friction in the streamwise direction can resist or absorb this small disturbance. The initiation of instability depends upon the balance of these two roles for a given initial disturbance. Viscosity mainly plays a stabilizing role in the initiation of flow instability by affecting the base flow. \item A universal criterion for the initiation of flow instability has been formulated for wall shear flows. A new dimensionless parameter characterizing the flow instability, $K$, defined as the ratio of the energy gradient in the transverse direction to that in the streamwise direction, is proposed for wall-bounded shear flows. The most dangerous position in the flow field is represented by the maximum of $K$. The initiation of flow breakdown should take place at this position first. This idea is confirmed by Nishioka et al.'s experiments. \item The concept of energy angle is proposed for flow instability. This concept helps to understand the mechanism of viscous instabilities.
Using the concepts of energy gradient and energy angle, it is theoretically demonstrated for the first time that \emph{viscous flow with a velocity inflection is unstable}. \item It is demonstrated that there is a critical value of the parameter $K_{\max }$ at which the flow transitions to turbulence for both plane Poiseuille flow and pipe Poiseuille flow, below which no turbulence exists. This value is about $K_{\max }=385\backsim 389$. Although the critical Reynolds number differs between the two flows, the turbulence transition takes place at the same $K_{\max }$ value. \item The energy gradient theory is a universal theory for flow instability and turbulent transition, valid for both pressure- and shear-driven flows in both parallel and rotating flow configurations. \end{enumerate} \subsection*{Acknowledgment} The author would like to thank Professors N. Phan-Thien (National University of Singapore) and J. M. Floryan (University of Western Ontario) for their comments on the first version of the manuscript.
\section{Introduction} The need for cyber security is growing by the day to provide protection against cyber attacks \cite{biju2019cyber}, including denial of service (DoS), distributed denial of service (DDoS), phishing, eavesdropping, cross-site scripting (XSS), drive-by download, man-in-the-middle, password attacks, SQL injection, and malware. Further, intrusions due to unauthorized access can compromise the confidentiality, integrity, and availability (CIA) triad of cyber systems. To detect intrusions, intrusion detection systems (IDS) are used, based on techniques like signatures, anomaly-based methodologies, and stateful protocol analysis \cite{liao2013intrusion}. In this paper, we focus on alert prioritization \cite{alsubhi2008alert} using IDS. Specifically, an IDS generates a high volume of security alerts to ensure that no malicious behaviour is missed. Several techniques exist for reducing the number of alerts without impairing the capacity of intrusion detection \cite{hubballi2014false}, \cite{salah2013model}. However, given the complexity of the environment and the numerous aspects that influence the generation of alerts, the quantity of alerts generated can be overwhelming for cyber security professionals to monitor, and hence it is necessary to prioritize these alerts for investigation. There exist various traditional approaches for prioritizing alerts, like fuzzy-logic-based alert prioritization \cite{alsubhi2012fuzmet} and game-theoretic prioritization methods like GAIN \cite{laszka2017game} and RIO \cite{yan2018get}, but these methods show poor prioritization capability because they fail to model the attacker's dynamic adaptation to the defender's alert inspection. Several supervised and unsupervised machine learning techniques have also been used for prioritizing alerts \cite{mcelwee2017deep}, \cite{sharafaldin2018toward}.
However, these methods perform poorly on large datasets and are incapable of efficiently identifying dynamic intrusions due to their reliance on fixed characteristics of earlier cyber attacks and their inability to adapt to changing attack patterns. To address the challenges of a dynamic environment, reinforcement learning (RL) \cite{sutton2018reinforcement} has been used, as it learns through exploration and exploitation of an unknown and dynamic environment. The integration of DL with RL, i.e., deep reinforcement learning (DRL) \cite{franccois2018introduction}, has immense potential for defending against more sophisticated cyber threats and providing effective cyber security solutions \cite{nguyen2019deep}. Further, DRL-based actor-critic methods \cite{sewak2019deep}, \cite{lazaric2007reinforcement}, \cite{saxena2020nancy}, \cite{naresh2022sac}, \cite{naresh2022deep} are considered an efficient solution for dealing with continuous and large action spaces, implying their applicability in a dynamic cyber security environment \cite{lopez2020application}, \cite{shukla2021detection}. Recently, \cite{tong2020finding} presented a framework that integrates adversarial reinforcement learning \cite{uther1997adversarial} and game theory \cite{osborne1994course} for prioritizing alerts. The interaction between an attacker (an unauthorized user of a system) and the defender (who safeguards the system) is modeled as a zero-sum game, with the attacker selecting which attacks to perform and the defender selecting which alerts to investigate. Furthermore, the double oracle framework \cite{mcmahan2003planning} is used to obtain the approximate mixed strategy Nash equilibrium (MSNE) of the game. Using this framework, \cite{tong2020finding} uses deep deterministic policy gradient (DDPG) \cite{lillicrap2015continuous}, which computes a pure strategy best response against the opponent's mixed strategy.
However, DDPG may entail instability due to its off-policy nature, and it may be prone to overfitting. Specifically, most actor-critic methods like TRPO \cite{schulman2015trust}, PPO \cite{schulman2017proximal}, A3C \cite{mnih2016asynchronous}, etc., are on-policy RL methods that improve the stability of training but lead to poor sample efficiency and weak convergence. Off-policy methods like DDPG \cite{lillicrap2015continuous} improve sample efficiency but are sensitive to hyperparameter tuning and prone to overfitting. In this paper, we present a soft actor-critic based DRL algorithm for alert prioritization (SAC-AP), an off-policy method that overcomes the limitations of DDPG. SAC-AP is based on the maximum entropy reinforcement learning framework, which aims to maximize the expected reward while also maximizing the entropy. The main contribution of the current work is to obtain robust alert investigation policies by using a soft actor-critic for alert prioritization. We present the overall design of SAC-AP and evaluate its performance compared to other state-of-the-art alert prioritization methods. Furthermore, we present the benefits of SAC-AP for two use-cases: fraud detection using a fraud dataset \cite{link} and network intrusion detection using the CICIDS2017 dataset \cite{sharafaldin2018toward}. We consider the defender's loss, i.e., the loss incurred when the defender fails to investigate the alerts that are triggered due to attacks, as the performance metric. Our results show that SAC-AP achieves up to a 30\% decrease in the defender's loss as compared to the DDPG-based alert prioritization method \cite{tong2020finding} and hence provides better protection against intrusions. Moreover, the benefits are even higher when SAC-AP is compared to other traditional alert prioritization methods, including Uniform, GAIN \cite{laszka2017game}, RIO \cite{yan2018get}, and Suricata \cite{link2}. The rest of this paper is organized as follows.
A brief background on DRL-based actor-critic methods in the context of our work is given in Section \uppercase\expandafter{\romannumeral 2 \relax}. The system architecture is described in Section \uppercase\expandafter{\romannumeral 3 \relax}. Our proposed approach, SAC-AP, to find the approximate best responses is presented in Section \uppercase\expandafter{\romannumeral 4 \relax}. The experimental results are provided in Section \uppercase\expandafter{\romannumeral 5 \relax}. Section \uppercase\expandafter{\romannumeral 6 \relax} concludes the paper. \section{Background} In RL, an agent interacts with the environment and, at each time step $t$, it observes state $s_t \in S$ and takes an action $a_t \in A$. The action selection is based on the chosen policy $\pi(a|s)$, which is a mapping from a state $s$ to the action $a$ to be taken in that state. The agent then receives a reward $R_{t+1}$ and moves to the next state $s_{t+1}$ based on action $a_t$ \cite{sutton2018reinforcement}. The goal in RL is to maximize the total discounted reward and to obtain an optimal policy. Let us also denote the value function, $V_\pi(s)$, as a function of the discounted reward, specifying how good it is to be in a particular state $s$. It is given by, \begin{equation}\label{eq_v} V_\pi(s) = E_\pi\Bigg[\sum_{k=0}^{\infty}\gamma^kR_{t+k+1}|s_t=s\Bigg] \end{equation} where $V_\pi(s)$ is the value function for a state $s$ and policy $\pi$, and $\gamma\in[0,1]$ is the discount factor.
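The discounted sum inside the expectation can be computed by backward accumulation, $G_t = R_{t+1} + \gamma G_{t+1}$; a minimal sketch with a hypothetical reward sequence:

```python
# Discounted return sum_k gamma^k * R_{t+k+1}, computed backwards as
# G = R + gamma * G_next; V_pi(s) is the expectation of this over trajectories.
def discounted_return(rewards, gamma):
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

print(discounted_return([1.0, 1.0, 1.0], gamma=0.5))  # 1 + 0.5 + 0.25 = 1.75
```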
Similarly, the action value function $Q_\pi(s,a)$ is defined as the value of taking action $a$ in a state $s$ under policy $\pi$: \begin{equation}\label{eq_q} Q_\pi(s,a) = E_\pi\Bigg[\sum_{k=0}^{\infty}\gamma^kR_{t+k+1}|s_t=s, a_t=a \Bigg] \end{equation} The optimal $Q(s,a)$ is calculated as follows: \begin{equation}\label{eq_qstar} Q^*(s,a) = \max_{\pi\in\Pi} Q_\pi(s,a) \end{equation} We say that the policy $\pi$ is better than the policy $\pi^{'}$ if $Q_{\pi}(s,a)$ is greater than $Q_{\pi^{'}}(s,a)$ for all state-action pairs. The optimal policy $\pi^{*}$ is given by, \begin{equation}\label{eq_pistar} \pi^{*}(s) = \textit{argmax}_{a\in A}( Q^{*}(s,a)) \end{equation} In reinforcement learning, value-based methods obtain the optimal policy by choosing the best action at each state using the estimates generated by an optimal action value function. Some of the value-based methods include SARSA, Q-learning, and DQN \cite{sutton2018reinforcement}. These methods are not suitable for large and continuous action spaces, such as those in robotics. Furthermore, policy-based methods are another type of reinforcement learning method that directly optimizes a policy via the policy gradient, as in the REINFORCE algorithm \cite{sutton2018reinforcement}, without using any explicit value function. These methods work well for continuous action spaces but are prone to high variance and slow convergence. To overcome these limitations of value-based and policy-based methods, actor-critic methods \cite{sewak2019deep} were introduced, which combine value-based and policy-based methods in a single algorithm. \subsection{Overview of actor-critic methods} Actor-Critic (AC) methods utilize two different models: an actor and a critic, as shown in Figure \ref{picture1}. The actor model takes the current environment state as input and outputs the action to take \cite{sewak2019deep}.
The critic model is a value-based model that takes the state and reward as inputs and measures how good the action taken in that state was, using state-value estimates. Actor-critic methods combine the advantages of both value-based and policy-based methods, including learning stochastic policies, faster convergence, and lower variance. The advantage function specifies how much better the outcome is than the expected value of the state. If the rewards are better than the expected value of the state, the probability of taking that action is increased. The advantage function for state $s$ at time $t$ is calculated as follows: \begin{equation}\label{eq_adv} \delta_t = R_t + \gamma V(s_{t+1};W_{t}) - V(s_t;W_t) \end{equation} \begin{figure}[t] \begin{center} \includegraphics[scale=0.5]{fac.png} \end{center} \caption{Actor Critic Framework} \label{picture1} \end{figure} where $R_t$ is the reward at time $t$, $\gamma$ is the discount factor, and $V(s_{t+1};W_{t})$ is the state value estimator function parametrized by weight vector $W$, which is updated as follows: \begin{equation}\label{eq_w} W_{t+1} = W_t + \alpha_w\delta_t\nabla_wV(s;W_t) \end{equation} with $\alpha_w$ as the learning rate. Similar to the critic model, the policy estimator function of an actor model with policy $\pi$ is also updated in each iteration by updating its parameter vector $\theta$ using the following equation: \begin{equation} \theta_{t+1} = \theta_t + \alpha_\theta \delta_t\nabla_\theta \log\pi_\theta(a_t|s_t;\theta_t) \end{equation} The actor-critic method converges and obtains the optimal policy by following the updates described in this section. \subsection{Overview of Soft Actor-Critic (SAC)} The soft actor-critic method is an off-policy, actor-critic deep reinforcement learning algorithm. SAC is based on the maximum entropy reinforcement learning framework, where the actor aims to maximize the expected reward while also maximizing the entropy.
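Before turning to SAC, a single step of the generic actor-critic updates above can be sketched in tabular form, with the policy parametrized by softmax preferences so that $\nabla_\theta \log\pi_\theta(a|s)$ takes the one-hot-minus-probabilities form (all numbers below are hypothetical, for illustration only):

```python
import numpy as np

# One actor-critic step: TD error (advantage), critic update, actor update.
n_states, n_actions = 4, 2
V = np.zeros(n_states)                   # critic: tabular state values
theta = np.zeros((n_states, n_actions))  # actor: softmax action preferences
alpha_w, alpha_theta, gamma = 0.1, 0.1, 0.9

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

s, a, r, s_next = 0, 1, 1.0, 2           # one observed transition (hypothetical)
delta = r + gamma * V[s_next] - V[s]     # TD error delta_t
V[s] += alpha_w * delta                  # critic update
grad_log_pi = -softmax(theta[s])         # grad of log pi(a|s) wrt preferences
grad_log_pi[a] += 1.0
theta[s] += alpha_theta * delta * grad_log_pi  # actor update

print(V[0], theta[0])                    # 0.1 [-0.05  0.05]
```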
SAC is stable and highly sample efficient, provides better robustness, and mitigates overfitting, and hence performs better than other state-of-the-art actor-critic methods, including DDPG. SAC aims to optimize the maximum entropy objective function as follows \cite{haarnoja2018soft}: \begin{equation} \pi^{*}(.|s_t) = \textit{argmax}_\pi E_\pi\Bigg[\sum_{t} R_t + \beta H\Big(\pi(.|s_t)\Big)\Bigg] \end{equation} where $H(\pi(.|s_t))$ is the entropy of the policy for the visited state under that policy $\pi$. The entropy is multiplied by the coefficient $\beta$; if $\beta$ is zero, the optimal policy is deterministic, otherwise the optimal policy is stochastic. Soft policy iteration evaluates the policy using the following Bellman equation, which converges to the optimal policy, with the Bellman backup operator $\tau^\pi$ given by, \begin{equation} \tau^{\pi}Q(s_t,a_t) \displaystyle\leftarrow R_t + E_{s_{t+1}\sim p}\Big[V(s_{t+1})\Big] \end{equation} where $p$ is the probability density of $s_{t+1}$ given action $a_t$ and current state $s_t$ induced by policy $\pi$. The value function is calculated using the following soft Bellman iteration: \begin{equation} V_{s_{t}} = E_{a_{t}\sim \pi}\Bigg[Q(s_{t},a_{t}) - \beta \log \pi(a_{t}|s_{t})\Bigg] \end{equation} The policy is updated towards the exponential of the new soft Q-function using the soft policy improvement step as follows: \begin{equation} \pi_{new} = \textit{argmin}_{\pi^{'}\in\Pi} D_{KL}\left (\pi^{'}(.|s_t)||\frac{\exp \Big(\frac{1}{\beta} Q^{\pi_{old}}(s_t,.)\Big)}{Z^{\pi_{old}}(s_t)} \right ) \end{equation} where $Z^{\pi_{old}}(s_t)$ normalizes the distribution, and we have $Q^{\pi_{new}}\geq Q^{\pi_{old}}$ for the new policy, which leads to a sequence of policies with monotonically increasing Q-values. In addition to using function approximators for both the Q-function and the policy, SAC uses a state value function to approximate the soft value in order to stabilize training.
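The soft value $V(s_t)=E_{a_t\sim\pi}[Q(s_t,a_t)-\beta \log\pi(a_t|s_t)]$ can be checked numerically for a discrete toy state: when $\pi \propto \exp(Q/\beta)$, it collapses exactly to $\beta \log\sum_a \exp(Q(s,a)/\beta)$. A sketch with hypothetical Q-values (not the paper's network-based estimate):

```python
import numpy as np

# Soft value V = E_{a~pi}[Q - beta*log pi] for a single discrete state with
# hypothetical soft Q-values; with pi proportional to exp(Q/beta), this equals
# beta * log-sum-exp(Q/beta) exactly.
beta = 0.5
Q = np.array([1.0, 2.0, 0.5])                    # hypothetical Q(s, a)
pi = np.exp(Q / beta) / np.exp(Q / beta).sum()   # softmax policy
soft_v = float(np.sum(pi * (Q - beta * np.log(pi))))
logsumexp_v = beta * np.log(np.sum(np.exp(Q / beta)))

print(abs(soft_v - logsumexp_v) < 1e-9)          # True
```

Note that the entropy bonus makes the soft value strictly larger than the greedy value $\max_a Q(s,a)$ whenever the policy is stochastic.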
\section{System Architecture} The system architecture is divided into four modules: regular users, the attack detection environment (ADE), the adversary, and the defender, as shown in Figure \ref{picturee}. The regular users are the system's registered users. The adversary is an unauthorized user of the system who intends to launch various forms of attacks. The ADE generates alerts as a result of the adversary's actions, but on numerous occasions alerts are also generated as a result of normal user activity. The defender's role is to safeguard the system by investigating the alerts generated by the ADE. \begin{figure*}[t] \begin{center} \includegraphics[scale=0.5]{fsm.png} \end{center} \caption{System Architecture} \label{picturee} \end{figure*} The ADE contains the system state, as presented in Table \ref{table1}. It includes $N_t^k$, i.e., the number of alerts of type $t$ that have not been investigated by the defender at time $k$. We assume that both the defender and the attacker are aware of the value of $N_t^k$. Additionally, the system state contains $M_a^k$, a binary value that indicates whether an attack $a$ was executed, which is known only by the attacker. Furthermore, the system state also includes $S_{a,t}^k$, which denotes the number of alerts of type $t$ that were raised due to an attack $a$, and it is also known only by the attacker. The adversary's knowledge includes the system's state at time period $k$ and the defender's alert investigation policies in the previous rounds. The number of attacks the adversary can execute is limited by its budget constraint $D$, and the adversary must adhere to the following constraint: \begin{equation} \sum_{a\in A} \alpha_{-1,a}E_a\leq D \end{equation} where $\alpha$ is a player's action, $D$ is the adversary's budget, and $E_a$ is the cost of mounting attack $a$. The adversary is represented by two modules: an attack oracle and an attack generator.
The attack oracle runs a policy at each time period $k$ that maps the state to a choice of the subset of attacks to be executed by the attack generator. The adversary succeeds if the defender does not investigate any of the alerts generated as a result of an attack. Additionally, the number of alerts the defender can choose to investigate is limited by its budget constraint $B$, and the defender must adhere to the following constraint: \begin{equation} \sum_{t\in T} \alpha_{+1,t}^{k}C_t\leq B \end{equation} where $\alpha$ is a player's action, $B$ is the defender's budget, and $C_t$ is the cost of investigating alert type $t$. The defender consists of two modules: a defense oracle and an alert analyzer. At time $k$, the defense oracle runs a policy that maps the state to a choice of the subset of alerts to be investigated by the alert analyzer. \begin{table}[hbtp]\centering \caption{System state at time period $k$} \label{table1} \begin{tabular}{ |p{1.5cm}|p{2.5cm}| } \hline Symbol & Description \\ \hline $N_t^k$ & Number of alerts of type $t$ not investigated by the defender (observed by both defender and attacker)\\ \hline $M_a^k$ & Binary indicator of whether attack $a$ was executed (observed only by the attacker)\\ \hline $S_{a,t}^k$ & Number of alerts of type $t$ raised due to attack $a$ (observed only by the attacker) \\ \hline \end{tabular} \end{table} \section{Proposed Solution: SAC-AP} In this section, we present an overview of the solution and the proposed soft actor-critic based alert prioritization (SAC-AP) algorithm. \subsection{Solution Overview} The interaction between the adversary and the defender (denoted by $-1, +1$) is modeled as a two-player zero-sum game. In a given state, the attacker's set of possible actions includes all subsets of attacks, and the defender's set of possible actions includes all subsets of alerts, satisfying the budget constraints mentioned in the previous section.
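The two budget constraints amount to a simple feasibility check over chosen subsets; a minimal sketch (the costs, budgets, and attack names below are hypothetical):

```python
# Feasibility of a chosen subset under a budget: the attacker needs
# sum of E_a <= D over chosen attacks, and the defender needs
# sum of C_t <= B over chosen alert types (hypothetical costs and budgets).
def within_budget(chosen, costs, budget):
    return sum(costs[x] for x in chosen) <= budget

attack_costs = {"dos": 3.0, "sqli": 2.0, "xss": 1.0}   # E_a (hypothetical)
print(within_budget({"dos", "xss"}, attack_costs, budget=5.0))          # True
print(within_budget({"dos", "sqli", "xss"}, attack_costs, budget=5.0))  # False
```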
The attacker's policy maps the state to a subset of attacks, and the defender's policy maps the state to a subset of alerts. We consider players' deterministic policies as pure strategies and stochastic policies as mixed strategies. Let $\Pi_v$ and $\Sigma_v$ denote the sets of player $v$'s pure strategies and mixed strategies, respectively. Let us also denote by $U_v(\pi_v,\pi_{-v})$ the utility of player $v$, where $\sum_{v\in(-1,+1)}U_v(\pi_v,\pi_{-v}) = 0$ since it is a zero-sum game. The expected utility when a player $v$ selects pure strategy $\pi_v\in\Pi_v$ and its opponent selects mixed strategy $\sigma_{-v}\in\Sigma_{-v}$ is given as follows \cite{tong2020finding}: \begin{equation} U_v(\pi_v,\sigma_{-v}) = \sum_{\pi_{-v}\in\Pi_{-v}} \sigma_{-v}(\pi_{-v})U_v(\pi_v,\pi_{-v}) \end{equation} Further, player $v$'s expected utility when it selects mixed strategy $\sigma_v\in\Sigma_v$ and its opponent also selects mixed strategy $\sigma_{-v}\in\Sigma_{-v}$ is given by: \begin{equation}\label{eqq} U_v(\sigma_v,\sigma_{-v}) = \sum_{\pi_v\in\Pi_v} \sigma_{v}(\pi_{v})U_v(\pi_v,\sigma_{-v}) \end{equation} Using the above equations, the utility of the defender when it chooses pure strategy $\pi_{+1}\in\Pi_{+1}$ and its opponent (the attacker) also chooses pure strategy $\pi_{-1}\in\Pi_{-1}$, with discount factor $\gamma$, is given by: \begin{equation} U_{+1}(\pi_{+1},\pi_{-1}) = E\bigg[\sum_{k=0}^{\infty}\gamma^k R_{+1}^{(k)}\bigg] \end{equation} where $R_{+1}^{(k)}$ is the defender's reward at time period $k$, given by: \begin{equation} R_{+1}^{(k)} = -\sum_{a\in A}L_a \hat M_a^{(k)} \end{equation} with $L_a$ as the defender's loss when attack $a$ is not detected and $\hat M_a^{(k)}$ as a binary indicator that an executed attack has not been investigated. The utility of the attacker is given by $U_{-1}(\pi_{+1},\pi_{-1}) = -U_{+1}(\pi_{+1},\pi_{-1})$. To solve this zero-sum game, we use an extension of the double oracle algorithm \cite{tsai2012security}.
The goal is to compute the MSNE to find a robust alert investigation policy. The MSNE condition for two players with mixed strategy profile $(\sigma_v^*,\sigma_{-v}^*)$ is given as follows: \begin{equation} U_v(\sigma_v^*,\sigma_{-v}^*) \geq U_v(\sigma_v,\sigma_{-v}^*) \quad \forall \sigma_v\in\Sigma_v \end{equation} where the MSNE of the game is computed by solving the following linear program: \begin{align*} \textit{ max } U_v^*\\ \textit{ s.t. } U_v(\sigma_v,\pi_{-v}) \geq U_{v}^{*}, \forall \pi_{-v}\in\Pi_{-v}\\ \sum_{\pi_v\in\Pi_{v}}\sigma_v(\pi_v) = 1\\ \sigma_v(\pi_v)\geq0, \forall\pi_v\in\Pi_v \end{align*} The provisional equilibrium mixed strategies are obtained by solving the above linear program over the current sets of players' policies $(\Pi_{+1},\Pi_{-1})$, which are initialized with random policies. The attack oracle then computes the adversary's best response $\pi_{-1}(\sigma_{+1})$ to the equilibrium mixed strategy $\sigma_{+1}$ of the defender. Similarly, the defense oracle computes the defender's best response $\pi_{+1}(\sigma_{-1})$ to the equilibrium mixed strategy $\sigma_{-1}$ of the attacker. These best responses are then added to the policy sets $(\Pi_{+1},\Pi_{-1})$. The process repeats until there is no further improvement in either player's best response against the provisional equilibrium mixed strategy. The following subsection describes our proposed SAC-AP approach for calculating the best response oracles of the attacker and defender. \subsection{SAC-AP} In this paper, we propose a soft actor-critic based deep RL algorithm for alert prioritization, i.e., SAC-AP, where both the attack and defense oracles use SAC-AP to compute a pure strategy best response against the opponent's mixed strategy. SAC-AP operates on continuous action spaces and computes an approximate pure strategy best response $\pi_v$ to an opponent using a mixed strategy $\sigma_{-v}$.
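Restricted to the current policy sets, the linear program above is a standard zero-sum matrix-game LP and can be solved with an off-the-shelf solver. A minimal sketch (assuming SciPy is available; the $2\times 2$ payoff matrix is hypothetical):

```python
import numpy as np
from scipy.optimize import linprog

# MSNE of a zero-sum matrix game: maximize U* subject to
# sigma^T A[:, j] >= U* for every opponent pure strategy j, sigma a distribution.
def msne_row_strategy(A):
    n_rows, n_cols = A.shape
    c = np.zeros(n_rows + 1)
    c[-1] = -1.0                                    # linprog minimizes, so use -U*
    A_ub = np.hstack([-A.T, np.ones((n_cols, 1))])  # U* - sigma^T A[:, j] <= 0
    b_ub = np.zeros(n_cols)
    A_eq = np.hstack([np.ones((1, n_rows)), np.zeros((1, 1))])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * n_rows + [(None, None)])
    return res.x[:-1], res.x[-1]

# Matching-pennies payoffs: the equilibrium mixes uniformly with game value 0.
sigma, value = msne_row_strategy(np.array([[1.0, -1.0], [-1.0, 1.0]]))
print(np.round(sigma, 3))                           # [0.5 0.5]
```

In the double oracle loop, each row/column of the payoff matrix would correspond to one policy currently in $(\Pi_{+1},\Pi_{-1})$.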
For each player $v$, SAC-AP uses neural networks to represent the policy network (actor), the value network, and the action-value networks (critics). These networks are described below. \subsubsection{Critic Network} The two critic networks $Q_{1,2}(O_v,\alpha_v|\theta_v^{Q_{1,2}})$ map an observation and an action to a value with parameters $\theta_{v}^{Q_1}$ and $\theta_{v}^{Q_2}$, respectively. SAC uses two critics to mitigate the positive bias in the policy improvement step that is known to degrade the performance of value-based methods. This technique also speeds up training and leads to better convergence. We train the two critic functions independently to optimize the critic objective function $J_{Q_{i}}$ ($i = 1,2$): \begin{equation} J_{Q_{i}} = E\big[\frac{1}{2}\big(Q^{'}-Q_i(\theta_{v}^{Q_i})\big)^2\big] \end{equation} where $Q^{'} = R_v + \gamma E\big[V\big(\theta_{v}^{v^{'}}\big)\big]$, and the target value network (with parameters $\theta_{v}^{v^{'}}$) is used to estimate the state value instead of the actual value network to improve stability. To update the weights of the critic networks with gradient descent, the loss function $J_{Q_{i}}$ is minimized as follows: \begin{equation} \theta_{v}^{Q_i} \leftarrow \theta_{v}^{Q_i} - \lambda_Q\nabla J_{Q_i} \end{equation} \subsubsection{Value Network} The state value function approximates the soft value. The value network $V(O_v|\theta_v^v)$ has parameters $\theta_v^v$ and maps an observation to a value.
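The clipped double-Q computation described for the critics can be sketched with made-up minibatch numbers; this is our illustration, not the paper's network code.

```python
import numpy as np

# Each critic regresses onto the same backup Q' = r + gamma * V_target(next obs);
# taking the minimum of the two critic estimates in the later value and actor
# updates curbs the positive (overestimation) bias mentioned above.
# All numbers below are illustrative.

gamma = 0.95
r = np.array([1.0, 0.0, -1.0])             # minibatch rewards
v_target_next = np.array([2.0, 1.0, 0.5])  # target value net at next observation
q1 = np.array([2.5, 1.2, -0.4])            # critic 1 estimates
q2 = np.array([3.1, 0.8, -0.6])            # critic 2 estimates

q_backup = r + gamma * v_target_next                 # Q'
j_q1 = 0.5 * np.mean((q_backup - q1) ** 2)           # critic loss J_{Q_1}
j_q2 = 0.5 * np.mean((q_backup - q2) ** 2)           # critic loss J_{Q_2}
q_min = np.minimum(q1, q2)                           # pessimistic Q estimate
```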
The soft value function is trained to minimize the squared residual error $J_v$: \begin{equation} J_v = E\big[\frac{1}{2}\big( V^{'}-V(\theta_v^v)\big)^2\big] \end{equation} where \begin{equation} V^{'} = E \big[Q-\log\big( \pi_v(\theta_v^\pi)\big)\big] \end{equation} and \begin{equation} Q = \min \big[E\big[Q_1\big(O_v^k,\pi_v(\theta_v^\pi)\big)\big], E \big[Q_2\big(O_v^k,\pi_v(\theta_v^\pi)\big)\big]\big] \end{equation} Further, we use gradient descent \cite{alpaydin2020introduction} to train the weights of the value network and an exponential moving average to update the target value network: \begin{align*} \theta_v^v \leftarrow \theta_v^v-\lambda_v\nabla J_v\\ \theta_v^{v^{'}} \leftarrow \tau\theta_v^v + (1-\tau)\theta_v^{v^{'}} \end{align*} \subsubsection{Actor Network} The actor network $\pi_v(O_v|\theta_v^\pi)$ maps an observation $O_v$ into an action with parameters $\theta_v^\pi$. It uses policy improvement to learn the pure-strategy best response $\pi_v$. The objective function $J_\pi$ is defined as: \begin{equation} J_\pi = E\big[-Q + \beta \log\big(\pi_v(\theta_v^\pi)\big)\big] \end{equation} where $\beta$ is the entropy regularization factor, which controls the exploration-exploitation trade-off, and \begin{equation} Q = \min\big[E\big[Q_1\big(O_v^k,\pi_v(\theta_v^\pi)\big)\big], E \big[Q_2\big(O_v^k,\pi_v(\theta_v^\pi)\big)\big]\big] \end{equation} Similar to the critic and value networks, we use gradient descent to minimize the above loss: \begin{equation} \theta_v^\pi \leftarrow \theta_v^\pi-\lambda_\pi\nabla J_\pi \end{equation} Initially, all four neural networks are randomly initialized. Then, they are trained over multiple episodes. At the beginning of each episode, the opponent samples a deterministic policy $\pi_{-v}$ according to its mixed strategy $\sigma_{-v}$. The networks are trained as follows. First, the actor generates an action using the $\epsilon$-greedy method.
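Two of the update rules just described, the $\epsilon$-greedy action selection and the exponential-moving-average update of the target value network, can be sketched as follows (a minimal illustration; the parameter names are ours, not the paper's code):

```python
import random

def epsilon_greedy(greedy_action, action_space, epsilon, rng=random):
    """With probability epsilon pick a random action; otherwise act greedily."""
    if rng.random() < epsilon:
        return rng.choice(action_space)
    return greedy_action

def ema_update(theta_target, theta, tau=0.01):
    """theta_target <- tau * theta + (1 - tau) * theta_target, per weight."""
    return [tau * w + (1.0 - tau) * wt for w, wt in zip(theta, theta_target)]

# With tau = 0.5 the target weights move halfway toward the current weights.
new_target = ema_update([1.0, -2.0], [0.0, 0.0], tau=0.5)
# With epsilon = 0 the greedy action is always returned (no exploration).
a = epsilon_greedy("best", ["best", "other"], epsilon=0.0)
```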
Player $v$ then executes the action $\alpha_v$ using policy $\pi_v$, and player $-v$ executes an action $\alpha_{-v}$ using policy $\pi_{-v}$. Player $v$ receives a reward based on the state transition. It stores the $k$-th transition $\langle O_v^k, \alpha_v^k, r_v^k, O_v^{k+1}\rangle$ into a memory buffer. Player $v$ then samples a minibatch, a subset of transitions randomly sampled from the memory buffer, to update the networks. After a fixed number of episodes, the resulting policy network $\pi_v(O_v|\theta_v^\pi)$ is returned as the parameterized best response to an opponent with mixed strategy $\sigma_{-v}$. The complete SAC-AP algorithm is described in Algorithm \ref{alg}, and all the relevant hyperparameters are listed in Table \ref{table3}. \begin{table}[hbtp]\centering \caption{Hyperparameter configurations for SAC-AP experiments } \label{table3} \begin{tabular}{ |p{2 cm}|p{4 cm}|p{1 cm}| } \hline Hyperparameter & Description &Value \\ \hline $\lambda_Q$ & Learning rate of critic networks & 0.002\\ \hline $\lambda_v$ &Learning rate of value network &0.002\\ \hline $\lambda_\pi$&Learning rate of actor network &0.001\\ \hline $\gamma$ &Discount factor &0.95\\ \hline $\beta$&Entropy regularization factor &0.5\\ \hline $\tau$&Smoothing constant &0.01\\ \hline $\epsilon_{max}$&Maximum epsilon value &1\\ \hline $\epsilon_{discount}$&Epsilon discount factor &0.99\\ \hline \end{tabular} \end{table} \begin{algorithm} \caption{SAC-AP Algorithm: Compute the pure-strategy best response of player $v$ when its opponent takes mixed strategy $\sigma_{-v}$.}\label{alg} \hspace*{\algorithmicindent} \textbf{Input}: The set of opponent’s pure strategies, $\Pi_{-v}$ and mixed strategy of the opponent, $\sigma_{-v}$;\\ \hspace*{\algorithmicindent} \textbf{Output}: The policy network of player $v$, $\pi_v(O_v|\theta_v^\pi)$, the value network of player $v$, $V(O_v|\theta_v^v)$ and the critic networks of player $v$, $Q_{1,2}(O_v,\alpha_v|\theta_v^{Q_{1,2}})$ ; \begin{algorithmic}[1]
\raggedright \State Randomly initialize $\pi_v(O_v|\theta_v^\pi)$, $V(O_v|\theta_v^v)$ and $Q_{1,2}(O_v,\alpha_v|\theta_v^{Q_{1,2}})$; \State Initialize replay memory $D$; \For{$episode = 0$ \textbf{to} $M-1$} \State Initialize the system state $\langle N^{(0)}, M^{(0)}, S^{(0)}\rangle$; \State Sample opponent’s policy $\pi_{-v}$, with its mixed \hspace*{5mm}strategy $\sigma_{-v}$ over $\Pi_{-v}$; \For{$k = 0$ \textbf{to} $K-1$} \State With probability $\epsilon$ select a random action $\alpha_v^{(k)}$; \State Otherwise select $\alpha_v^{(k)}$ $=$ $\pi_v(O_v|\theta_v^\pi)$; \State Execute $\alpha_v^{(k)}$ and $\alpha_{-v}^{(k)} = \pi_{-v}(O_{-v}^{(k)})$, observe \hspace*{1cm} reward $r_v^k$ and transition the system state to $s^{k+1}$; \State Store transition $\langle O_v^k, \alpha_v^k, r_v^k, O_v^{k+1}\rangle$ in $D$; \State Sample a random minibatch of $N$ transitions \hspace*{1cm}$\langle O_v^k, \alpha_v^k, r_v^k, O_v^{k+1}\rangle$ from $D$; \State Set $Q=\min\big[E\big[Q_1\big(O_v^k,\pi_v(\theta_v^\pi)\big)\big]$, \hspace*{1cm}$E \big[Q_2\big(O_v^k,\pi_v(\theta_v^\pi)\big)\big]\big]$; \State Set $J_\pi = E\big[-Q + \beta \log\big(\pi_v(O_v|\theta_v^\pi)\big)\big]$; \State Set $V^ {'} = E \big[Q-\log\big( \pi_v(\theta_v^\pi)\big)\big]$; \State Set $J_v = E\big[\frac{1}{2}\big( V^{'}-V(\theta_v^v)\big)^2\big]$; \State Set $Q^{'} = r_v^k + \gamma V\big(\theta_v^{v^{'}}\big)$; \State Set $J_{Q_{1}} = \frac{1}{2}E\big[\big( Q^{'}-Q_1(\theta_v^{Q_1})\big)^2\big]$; \State Set $J_{Q_2} =\frac{1}{2}E\big[\big( Q^{'}-Q_2(\theta_v^{Q_2})\big)^2\big]$; \State $\theta_v^{Q_1}\leftarrow\theta_v^{Q_1}-\lambda_Q\nabla J_{Q_1}$; \State $\theta_v^{Q_2}\leftarrow\theta_v^{Q_2}-\lambda_Q\nabla J_{Q_2}$; \State $\theta_v^{\pi}\leftarrow\theta_v^{\pi}-\lambda_\pi\nabla J_{\pi}$; \State $\theta_v^{v}\leftarrow\theta_v^{v}-\lambda_v\nabla J_{v}$; \State $\theta_v^{v^{'}}\leftarrow \tau\theta_v^{v}+(1-\tau)\theta_v^{v^{'}}$; \EndFor \EndFor \State \textbf{return} player $v$'s policy network $\pi_v(O_v|\theta_v^\pi)$; \end{algorithmic} \end{algorithm} \section{Results} In this section, we present the experimental
setup and the results. \subsection{Simulation setup} The SAC-AP algorithm is implemented in Tensorflow \cite{abadi2016tensorflow}, an open-source library for training neural networks. We used the Adam optimizer for learning the parameters of the networks. The values of the various hyperparameters are shown in Table \ref{table3}. We investigate two case studies in the experiments: (i) fraud detection \cite{link} and (ii) network intrusion \cite{link2}. We compare SAC-AP with several state-of-the-art and traditional alert prioritization methods, including Uniform, GAIN \cite{laszka2017game}, RIO \cite{yan2018get}, Suricata \cite{link2}, and the recently proposed DDPG-MIX algorithm, which is based on deep reinforcement learning. Furthermore, we consider both the case when the defender is aware of the adversary's capabilities and the case when it is not. The expected loss of the defender, i.e., its negative utility (Equation \ref{eqq}), is used as the performance metric to evaluate the proposed approach and compare it with other state-of-the-art alert prioritization methods. \subsection{Case study 1: Fraud detection} Our first case study uses a learning-based fraud detector along with a fraud dataset \cite{link}. There are 284,807 credit card transactions in the fraud dataset, with 482 of them being fraudulent. Each transaction is represented by a vector of 30 numerical attributes, and each feature vector has a binary label that indicates the transaction type. First, we consider the cases when the defender is aware of the adversary's attack budget. Figure \ref{f_1} presents the performance comparison of different alert prioritization techniques for different defense budget values when the adversary's attack budget is fixed at 2. Our results show that SAC-AP performs 20.54\%, 14.63\%, and 7.18\% better than DDPG-MIX for defense budget values of 10, 20, and 30, respectively. The gains are even higher when SAC-AP is compared with Uniform, RIO, and GAIN.
Suricata is not used for fraud detection since it is specifically designed for intrusion detection; it is therefore used in the next subsection for the network intrusion dataset. Further, Figure \ref{f_2} presents the performance evaluation for different adversary attack budgets when the defense budget is fixed at 20. For this case, our results show that SAC-AP performs 13.86\%, 14.63\%, and 3.29\% better than DDPG-MIX for attack budgets of 1, 2, and 3, respectively. Second, we consider the case when the defender is not aware of the adversary's attack budget. Figure \ref{f_3} presents the performance comparison of different alert prioritization techniques when the defense budget is fixed at 20. Since the defender is unaware of the adversary's attack budget, we assume that it estimates it as 2. Figure \ref{f_3} shows results when the actual attack budgets are 1, 2, and 3. For this case, our results show that SAC-AP performs 13.72\%, 14.63\%, and 6.04\% better than DDPG-MIX for actual attack budget values of 1, 2, and 3, respectively.
\begin{figure}[h] \begin{center} \includegraphics[scale=0.37]{figure.png} \end{center} \caption{Use case: Fraud detection - Performance comparison of alert prioritization techniques for different defense budget values when the adversary's attack budget is fixed at 2.} \label{f_1} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[scale=0.37]{ff2.png} \end{center} \caption{Use case: Fraud detection - Performance comparison of alert prioritization techniques for different adversary attack budget values when the defender's defense budget is fixed at 20.} \label{f_2} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[scale=0.37]{finalf3.png} \end{center} \caption{Use case: Fraud detection - Performance comparison of alert prioritization techniques for different adversary attack budget values when the defender's defense budget is fixed at 20 and the estimated attack budget is 2.} \label{f_3} \end{figure} \subsection{Case study 2: Network intrusion detection} Our second case study uses the Suricata IDS \cite{link2}, an open-source NIDS, along with the CICIDS2017 dataset \cite{sharafaldin2018toward}. This dataset was developed in 2017 by the Canadian Institute for Cybersecurity (CIC). It is a large dataset containing around 3 million network flows in different files; specifically, it has 2,830,743 records, each with 78 different features. First, we consider the cases when the defender is aware of the adversary's attack budget. Figure \ref{i_1} presents the performance comparison of different alert prioritization techniques when the adversary's attack budget is fixed at 120. Our results show that SAC-AP performs 20.75\%, 17.14\%, and 4.54\% better than DDPG-MIX for defense budget values of 500, 1000, and 1500, respectively. The gain is even higher when SAC-AP is compared with the other methods, including Uniform and Suricata. For this case, RIO and GAIN are not used since they are not suitable for the large size of the CICIDS2017 dataset and are computationally expensive.
Figure \ref{i_2} presents the gain of using SAC-AP for a fixed defense budget of 1000 and different adversary attack budget values. \begin{figure}[hbt!] \begin{center} \includegraphics[scale=0.36]{fi1.png} \end{center} \caption{Use case: Network intrusion detection - Performance comparison of alert prioritization techniques for different defense budget values when the adversary's attack budget is fixed at 120.} \label{i_1} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=0.37]{fi2.png} \end{center} \caption{Use case: Network intrusion detection - Performance comparison of alert prioritization techniques for different adversary attack budget values when the defender's defense budget is fixed at 1000.} \label{i_2} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=0.37]{fi3.png} \end{center} \caption{Use case: Network intrusion detection - Performance comparison of alert prioritization techniques for different adversary attack budget values when the defender's defense budget is fixed at 1000 and the estimated attack budget is 120.} \label{i_3} \end{figure} We have also considered the case when the defender is unaware of the adversary's attack budget. Figure \ref{i_3} presents the scenario when the defense budget is fixed at 1000 and the defender is unaware of the attacker's budget. We assume that the defender estimates the attack budget as 120, and we evaluate the defender's loss for attack budgets of 60, 120, and 180. Specifically, when the actual attack budget is 60, the defender is overestimating, while when the actual attack budget is 180, the defender is underestimating. In all these cases, SAC-AP performs 30\%, 17.14\%, and 11.76\% better than DDPG-MIX, respectively. Our results show that the proposed approach outperforms several state-of-the-art and traditional alert prioritization techniques in the above case studies, in both scenarios, whether the defender is aware or unaware of the adversary's attack budget.
\section{Conclusion} In this work, we proposed SAC-AP, a novel alert prioritization method based on deep reinforcement learning and the double oracle framework. SAC-AP builds on the maximum-entropy reinforcement learning framework. Our proposed approach outperforms various state-of-the-art alert prioritization methods and achieves up to 30\% gain compared to the recently proposed RL-based DDPG-MIX algorithm. We conducted extensive experiments for two use cases, fraud detection and intrusion detection, and demonstrated the efficacy of the proposed approach in obtaining robust alert investigation policies. Future work includes a comprehensive study of the performance of SAC-AP in scenarios with multiple attackers. \bibliographystyle{ieeetr}
\section{Introduction} The rank of a matrix is an important invariant for several problems in mathematics. In probability theory the rank can be used to determine when two discrete random variables are independent. More precisely, if $X$ and $Y$ are discrete random variables, and $A=(a_{i,j})$ is the joint probability matrix determined by $a_{i,j}=P(X=i,Y=j)$ for all $1\leq i\leq n$ and $1\leq j\leq m$, then $X$ and $Y$ are independent if and only if the rank of the $n\times m$ matrix $A$ is equal to $1$ (see \cite{dss}). The map $det^{S^2}:V_2^6\to k$ (for a vector space $V_2$ of dimension $2$) was introduced in \cite{sta2} as a natural generalization of the determinant map. It was obtained from an exterior algebra-like construction inspired by work of Pirashvili \cite{p} and Voronov \cite{vo}. Properties of this map and generalizations were studied in \cite{lss} and \cite{sv}. In particular, there exists a geometrical interpretation of the condition $det^{S^2}((v_{i,j})_{1\leq i<j\leq 4})=0$ that we will use in this paper (see Theorem \ref{th1}). Using the $det^{S^2}$ map, we introduce the notion of $S^2$-rank of a matrix of size $d\times \frac{s(s-1)}{2}$. We investigate those matrices that have the $S^2$-rank equal to $1$, and give an application to probability theory. More precisely, if $X:D\to \{1,2,\dots, s\}$, and $Y:D\to \{1,2,\dots, d\}$ are random variables, and $B=(b_{i,j}^k)$ is the conditional probability matrix (i.e. $b_{i,j}^k=P(Y=k\vert i<X\leq j)$ for $1\leq i<j\leq s$ and $1\leq k\leq d$), then the $S^2$-rank of $B$ is $1$. We show that under some mild conditions the converse of this statement is also true. We give a few examples, and discuss possible applications to statistics. \section{Preliminaries} \subsection{The $det^{S^2}$ map} In this paper $k$ is a field with $char(k)=0$, $V_d=k^d$ is a $k$-vector space of dimension $d$, and $\mathcal{B}_d=\{e_1,e_2,\dots, e_d\}$ is the standard basis in $V_d$. For applications we take $k=\mathbb{R}$.
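The rank criterion for independence recalled in the introduction is easy to check numerically. The sketch below (with made-up marginal distributions) contrasts an independent joint probability matrix with a perturbed, dependent one:

```python
import numpy as np

# Illustrative marginals for X (3 values) and Y (2 values).
px = np.array([0.2, 0.3, 0.5])            # marginal of X
py = np.array([0.6, 0.4])                 # marginal of Y

A_indep = np.outer(px, py)                # independent joint: rank 1
A_dep = A_indep.copy()
A_dep[0, 0] += 0.05                       # move mass within row 0:
A_dep[0, 1] -= 0.05                       # still a distribution, now rank 2

rank_indep = np.linalg.matrix_rank(A_indep)
rank_dep = np.linalg.matrix_rank(A_dep)
```

The outer product of the marginals has rank $1$ (independence), while the perturbed matrix has rank $2$, so the perturbed $X$ and $Y$ are dependent.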
Consider $v_{i,j}\in V_d$ for all $1\leq i <j\leq s$. We denote by $$\mathfrak{V}=(v_{i,j})_{1\leq i<j\leq s}\in V_d^{\frac{s(s-1)}{2}},$$ the collection of $\frac{s(s-1)}{2}$ vectors $v_{i,j}$. Using the standard basis $\mathcal{B}_d$, we can identify $\mathfrak{V}$ with a $d\times \frac{s(s-1)}{2}$ matrix whose columns are determined by the vectors $v_{i,j}=\sum_{k=1}^dv_{i,j}^ke_k$. The rows of this matrix are indexed by elements in the set $\{1,2,\dots, d\}$, while the columns are indexed by pairs $(i,j)$ where $1\leq i<j\leq s$. For example, if $d=2$, $s=4$ and $v_{i,j}=\begin{bmatrix} \alpha_{i,j}\\ \beta_{i,j} \end{bmatrix}$, then we identify $\mathfrak{V}=(v_{i,j})_{1\leq i<j\leq 4}\in V_2^6$ with the $2\times 6$ matrix $$\begin{bmatrix} \alpha_{1,2} & \alpha_{2,3} & \alpha_{3,4} & \alpha_{1,3} & \alpha_{2,4} &\alpha_{1,4}\\ \beta_{1,2} & \beta_{2,3} & \beta_{3,4} &\beta_{1,3} & \beta_{2,4}&\beta_{1,4}\\ \end{bmatrix}.$$ Notice that the order of the columns has to be fixed (we cannot permute them). To keep track of the columns and of the operations allowed on this matrix, see the tensor upper-triangular notation and the results from \cite{lss}, \cite{sta2} and \cite{sv}. In this paper we do not need that much detail, so we will use this simpler matrix notation. We recall from \cite{sta2} the formula for the map $det^{S^2}:V_2^6\to k$.
For $v_{i,j}=\begin{bmatrix} \alpha_{i,j}\\ \beta_{i,j} \end{bmatrix}\in k^2$ we have \begin{eqnarray*} &det^{S^2}\begin{bmatrix} \alpha_{1,2} & \alpha_{2,3} & \alpha_{3,4} & \alpha_{1,3} & \alpha_{2,4} &\alpha_{1,4}\\ \beta_{1,2} & \beta_{2,3} & \beta_{3,4} &\beta_{1,3} & \beta_{2,4}&\beta_{1,4}\\ \end{bmatrix}=&\\ &\alpha_{1,2}\alpha_{2,3}\alpha_{3,4}\beta_{1,3}\beta_{2,4}\beta_{1,4}+ \alpha_{1,2}\beta_{2,3}\alpha_{3,4}\beta_{1,3}\beta_{2,4}\alpha_{1,4}+ \alpha_{1,2}\beta_{2,3}\beta_{3,4}\alpha_{1,3}\alpha_{2,4}\beta_{1,4}&\\ &+\beta_{1,2}\beta_{2,3}\alpha_{3,4}\alpha_{1,3}\alpha_{2,4}\beta_{1,4}+ \beta_{1,2}\alpha_{2,3}\beta_{3,4}\beta_{1,3}\alpha_{2,4}\alpha_{1,4}+ \beta_{1,2}\alpha_{2,3}\beta_{3,4}\alpha_{1,3}\beta_{2,4}\alpha_{1,4}&\\ &-\beta_{1,2}\beta_{2,3}\beta_{3,4}\alpha_{1,3}\alpha_{2,4}\alpha_{1,4}- \beta_{1,2}\alpha_{2,3}\beta_{3,4}\alpha_{1,3}\alpha_{2,4}\beta_{1,4}- \beta_{1,2}\alpha_{2,3}\alpha_{3,4}\beta_{1,3}\beta_{2,4}\alpha_{1,4}&\\ &-\alpha_{1,2}\alpha_{2,3}\beta_{3,4}\beta_{1,3}\beta_{2,4}\alpha_{1,4}- \alpha_{1,2}\beta_{2,3}\alpha_{3,4}\alpha_{1,3}\beta_{2,4}\beta_{1,4}- \alpha_{1,2}\beta_{2,3}\alpha_{3,4}\beta_{1,3}\alpha_{2,4}\beta_{1,4}.& \end{eqnarray*} \begin{remark} This formula was obtained from an exterior algebra-like construction. It was proved in \cite{sta2} that the map $det^{S^2}$ is the unique nontrivial multilinear map defined on $V_2^6$ which has the property that $det^{S^2}((v_{i,j})_{1\leq i<j\leq 4})=0$ if there exists $1\leq x<y<z\leq 4$ such that $v_{x,y}=v_{x,z}=v_{y,z}$.
\end{remark} Alternatively, one can show that \begin{eqnarray} det^{S^2}((v_{i,j})_{1\leq i<j\leq 4})=det\begin{bmatrix} \alpha_{1,2} & \alpha_{2,3} & 0 & -\alpha_{1,3} & 0 &0\\ \beta_{1,2} & \beta_{2,3} & 0 &-\beta_{1,3} & 0&0\\ \alpha_{1,2} & 0 & 0 & 0 & \alpha_{2,4} & -\alpha_{1,4}\\ \beta_{1,2} & 0 & 0 & 0 & \beta_{2,4} & -\beta_{1,4}\\ 0 & 0 & \alpha_{3,4} & \alpha_{1,3} & 0 &-\alpha_{1,4}\\ 0 & 0 & \beta_{3,4} & \beta_{1,3} & 0 &-\beta_{1,4} \end{bmatrix}, \label{eqdS2} \end{eqnarray} where $det$ is the usual determinant map (see \cite{sv}). \begin{theorem} (\cite{sv}) Take $V_2=\mathbb{R}^2$ and let $(v_{i,j})_{1\leq i<j\leq 4}\in V_2^6$. Then the following are equivalent: \begin{enumerate} \item $det^{S^2}((v_{i,j})_{1\leq i<j\leq 4})=0$.\\ \item There exist four points $Q_1$, $Q_2$, $Q_3$, $Q_4$ in the plane $\mathbb{R}^2$ and $\lambda_{i,j}\in\mathbb{R}$ for $1\leq i<j\leq 4$ not all zero such that $\lambda_{i,j}v_{i,j}=\overrightarrow{Q_iQ_j}$. \end{enumerate} \label{th1} \end{theorem} \begin{remark} For a vector space $V_d$ of dimension $d$, there exists a map $det^{S^2}:V_d^{d(2d-1)}\to k$ that is nontrivial, multilinear, and has the property that $det^{S^2}((v_{i,j})_{1\leq i<j\leq 2d})=0$ if there exist $1\leq x<y<z\leq 2d$ such that $v_{x,y}=v_{x,z}=v_{y,z}$ (see \cite{m4}). In this paper we will only use the case $d=2$, so we do not give details about the general case. For $d=2$ and $d=3$ it is known that a map with the above property is unique up to a constant (see \cite{lss} and \cite{sta2}). For $d>3$ uniqueness is still an open question. \end{remark} \subsection{Independent variables} To put the results from this paper in context, we recall from \cite{dss} a theorem about independent variables and the joint probability matrix. Let $P$ be a probability function on a space $D$, and take $X:D\to \{1,2,...,n\}$ and $Y:D\to \{1,2,...,m\}$ to be two discrete random variables.
We say that $X$ and $Y$ are independent if $$P(X=i,Y=j)=P(X=i)P(Y=j),$$ for all $1\leq i\leq n$, and $1\leq j\leq m$ (see \cite{lm}). Define the joint probability matrix of $X$ and $Y$ as the $n\times m$ matrix with entries $$a_{i,j}=P(X=i, Y=j),$$ for all $1\leq i\leq n$, and all $1\leq j\leq m$. One has the following result (see \cite{dss}). \begin{proposition} The two random variables $X$ and $Y$ are independent if and only if the joint probability matrix $(a_{i,j})_{i,j}$ has rank $1$. \end{proposition} \section{Conditional Probability Matrix} In this section we discuss the notion of $S^2$-rank and show that the conditional probability matrix has the $S^2$-rank equal to $1$. \begin{definition} Let $\mathfrak{V}=(v_{i,j})_{1\leq i<j\leq s}\in V_d^{\frac{s(s-1)}{2}}$ be a nonzero element such that $\displaystyle{v_{i,j}=\sum_{k=1}^dv_{i,j}^ke_k}$. For every $1\leq x_1<x_2<x_3<x_4\leq s$, and every $1\leq a_1<a_2\leq d$ we define the $2$-minor $M_{x_1,x_2,x_3,x_4}^{a_1,a_2}(\mathfrak{V})$ as the element $(w_{i,j})_{1\leq i<j\leq 4}\in V_2^6$ determined by \begin{eqnarray} w_{i,j}=\begin{bmatrix} v_{x_i,x_j}^{a_1}\\ v_{x_i,x_j}^{a_2} \end{bmatrix}. \end{eqnarray} We say that $\mathfrak{V}$ has the $S^2$-rank equal to $1$ if for every $1\leq x_1<x_2<x_3<x_4\leq s$, and every $1\leq a_1<a_2\leq d$ we have $det^{S^2}(M_{x_1,x_2,x_3,x_4}^{a_1,a_2}(\mathfrak{V}))=0$. \label{rankS2} \end{definition} \begin{remark} As mentioned above, for every $q$ there exists a map $det^{S^2}_{q}:V_q^{q(2q-1)}\to k$ that generalizes the determinant map. One can easily extend Definition \ref{rankS2} by saying that $\mathfrak{V}=(v_{i,j})_{1\leq i<j\leq s}\in V_d^{\frac{s(s-1)}{2}}$ has $S^2$-rank equal to $q-1$ if there exists a $(q-1)$-minor of $\mathfrak{V}$ such that $det^{S^2}_{q-1}(M_{y_1,\dots,y_{2q-2}}^{b_1,\dots,b_{q-1}}(\mathfrak{V}))\neq 0$, and for every $q$-minor of $\mathfrak{V}$ we have $det^{S^2}_{q}(M_{x_1,\dots,x_{2q}}^{a_1,\dots,a_{q}}(\mathfrak{V}))=0$.
Since we are only interested in the case $q=2$ we will not elaborate on the general definition. \end{remark} \begin{definition} Let $P$ be a probability function on $D$. Take $X:D\to \{1,2,\dots,s\}$ and $Y:D\to \{1,2,\dots,d\}$ to be two discrete random variables such that $X(\delta)>1$ for all $\delta\in D$, and $P(i<X\leq j)>0$ for all $1\leq i<j\leq s$. \begin{enumerate} \item The conditional probability matrix is the $d\times \frac{s(s-1)}{2}$ matrix determined by $$v_{i,j}^a=P(Y=a ~\vert ~ i<X\leq j)=\frac{P(Y=a,~i<X\leq j)}{P(i<X\leq j)},$$ for all $1\leq a\leq d$, and $1\leq i<j\leq s$. \item The distribution vectors $p_i\in V_d$ for $1\leq i\leq s$, are determined by $$p_i^a=P(Y=a,X\leq i),$$ for all $1\leq a\leq d$ and $1\leq i\leq s$. \item The distribution weights $\lambda_{i,j}\in [0,1]$ are defined by $$\lambda_{i,j}=P(i<X\leq j),$$ for all $1\leq i<j\leq s$. \end{enumerate} \end{definition} Notice that the conditional probability matrix can be identified with $\mathfrak{V}_{X,Y}=(v_{i,j})_{1\leq i<j\leq s}\in V_d^{\frac{s(s-1)}{2}}$, where $v_{i,j}=\sum_{a=1}^dv_{i,j}^ae_a\in V_d$. Also, because $X(\delta)>1$ for all $\delta\in D$ we have that $p_1=0\in V_d$. \begin{theorem} Let $X:D\to \{1,2,\dots,s\}$ and $Y:D\to \{1,2,\dots,d\}$ be discrete random variables such that $X(\delta)>1$ for all $\delta\in D$, and $P(i<X\leq j)>0$ for all $1\leq i<j\leq s$. Consider the conditional probability matrix $\mathfrak{V}_{X,Y}=(v_{i,j})_{1\leq i<j\leq s}\in V_d^{\frac{s(s-1)}{2}}$, the distribution vectors $p_i\in V_d$ for all $1\leq i\leq s$, and the distribution weights $\lambda_{i,j}$ for all $1\leq i<j\leq s$ defined as above.
Then we have \begin{enumerate} \item For all $1\leq i<j\leq s$, and $1\leq a\leq d$ we have $\lambda_{i,j}\in (0,1]$, $v_{i,j}^a\in [0,1]$, ${\displaystyle \sum_{b=1}^{d}v_{i,j}^b=1}$, and $$\lambda_{i,j}v_{i,j}=(p_j-p_i).$$ \item For all $1\leq i< j< k\leq s$ there exists $\alpha_{i,j,k}\in (0,1)$ such that $$v_{i,k}=\alpha_{i,j,k}v_{i,j}+(1-\alpha_{i,j,k})v_{j,k},$$ in particular $rank[v_{i,j},v_{i,k},v_{j,k}]\leq 2$. \item The $S^2$-rank of the conditional probability matrix $\mathfrak{V}_{X,Y}=(v_{i,j})_{1\leq i<j\leq s}\in V_d^{\frac{s(s-1)}{2}}$ is $1$. \end{enumerate} \label{th2} \end{theorem} \begin{proof} The first statement follows directly from the definitions of $v_{i,j}$, $\lambda_{i,j}$ and $p_i$, and the fact that $P(Y=a,i<X\leq k)=P(Y=a,i<X\leq j)+P(Y=a,j<X\leq k)$. For the second statement notice that if $1\leq i<j<k\leq s$ then \begin{eqnarray}\label{eql1} \lambda_{i,j}v_{i,j}-\lambda_{i,k}v_{i,k}+\lambda_{j,k}v_{j,k}=0, \end{eqnarray} so we can take $\alpha_{i,j,k}=\frac{\lambda_{i,j}}{\lambda_{i,k}}$, and use the fact that $\lambda_{i,k}=\lambda_{i,j}+\lambda_{j,k}$. The last statement is a consequence of Theorem \ref{th1} (it also follows directly from equations \ref{eqdS2} and \ref{eql1}). \end{proof} \begin{example} \label{example1} Suppose that we want to analyze whether students in a certain class watched the Super Bowl halftime show. For this purpose we ask two of the students in that class to calculate the conditional probabilities $P(Y=1|i<X\leq j)$ for the variables $X$ and $Y$ defined below. We take $D$ to be the set of all students in the class, and $Y:D\to \{1,2\}$, $Y=1$ if the student watched the show, and $2$ otherwise. Finally, $X:D\to \{1,2,3,4\}$ with the convention that $X=2$ if the student is a freshman or sophomore, $X=3$ if the student is a junior or senior, and $X=4$ if the student is a graduate student. The two students submit the tables from Figure \ref{studentA} and Figure \ref{studentB}.
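As a numerical sanity check (ours, not part of the students' computation), one can evaluate the $6\times 6$ determinant form of $det^{S^2}$ from equation (\ref{eqdS2}) on both tables. In the sketch below, the second row of each conditional probability matrix is $1$ minus the first, since $d=2$:

```python
import numpy as np

def det_s2(alpha):
    """det^{S^2} via the 6x6 determinant, columns ordered
    (1,2),(2,3),(3,4),(1,3),(2,4),(1,4); beta = 1 - alpha."""
    a12, a23, a34, a13, a24, a14 = alpha
    b12, b23, b34, b13, b24, b14 = (1 - x for x in alpha)
    M = np.array([
        [a12, a23, 0.0, -a13, 0.0, 0.0],
        [b12, b23, 0.0, -b13, 0.0, 0.0],
        [a12, 0.0, 0.0, 0.0, a24, -a14],
        [b12, 0.0, 0.0, 0.0, b24, -b14],
        [0.0, 0.0, a34, a13, 0.0, -a14],
        [0.0, 0.0, b34, b13, 0.0, -b14],
    ])
    return np.linalg.det(M)

det_A = det_s2((0.5, 0.8, 0.2, 0.7, 0.7, 0.6))       # student A's table
det_B = det_s2((0.5, 0.75, 0.25, 0.7, 0.65, 0.625))  # student B's table
```

Comparing the two determinants indicates which table can come from an actual distribution.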
\begin{figure}[htbp] \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline $(i,j)$&$(1,2)$&$(2,3)$&$(3,4)$&$(1,3)$&$(2,4)$&$(1,4)$\\ \hline $P(Y=1|i<X\leq j)$ &0.5&0.8&0.2&0.7&0.7&0.6\\ \hline \end{tabular} \caption{Table from Student A \label{studentA}} \end{figure} \begin{figure}[htbp] \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline $(i,j)$&$(1,2)$&$(2,3)$&$(3,4)$&$(1,3)$&$(2,4)$&$(1,4)$\\ \hline $P(Y=1|i<X\leq j)$ &0.5&0.75&0.25&0.7&0.65&0.625\\ \hline \end{tabular} \caption{Table from Student B \label{studentB}} \end{figure} Assuming that one of them is right, we want to decide which one has done the correct computation, and what is the minimum number of students in that class. We consider the conditional probability matrix corresponding to tables submitted by students A and B, respectively $$\mathfrak{V}_A=\begin{bmatrix} 0.5 & 0.8 & 0.2& 0.7 & 0.7&0.6\\ 0.5 & 0.2& 0.8 &0.3 & 0.3&0.4\\ \end{bmatrix},$$ and $$\mathfrak{V}_B=\begin{bmatrix} 0.5 & 0.75 & 0.25& 0.7 & 0.65&0.625\\ 0.5 & 0.25& 0.75 &0.3 & 0.35&0.375\\ \end{bmatrix}.$$ One can compute $det^{S^2}(\mathfrak{V}_A)=-0.007\neq 0$, and $det^{S^2}(\mathfrak{V}_B)=0$. From Theorem \ref{th2} we know that the table submitted by student A is wrong. Next, consider the system associated to $\mathfrak{V}_B$ \begin{eqnarray*} \begin{cases} \lambda_{1,2}v_{1,2}-\lambda_{1,3}v_{1,3}+\lambda_{2,3}v_{2,3}=0\\ \lambda_{1,2}v_{1,2}-\lambda_{1,4}v_{1,4}+\lambda_{2,4}v_{2,4}=0\\ \lambda_{1,3}v_{1,3}-\lambda_{1,4}v_{1,4}+\lambda_{3,4}v_{3,4}=0, \end{cases} \end{eqnarray*} or equivalently $$ \begin{bmatrix} 0.5 & 0.75 & 0 & -0.7 & 0 &0\\ 0.5 & 0.25 & 0 &-0.3 & 0&0\\ 0.5 & 0 & 0 & 0 & 0.65 & -0.625\\ 0.5 & 0 & 0 & 0 & 0.35 & -0.375\\ 0 & 0 & 0.25 & 0.7 & 0 &-0.625\\ 0 & 0 & 0.75& 0.3 & 0 &-0.375 \end{bmatrix} \begin{bmatrix} \lambda_{1,2}\\ \lambda_{2,3}\\ \lambda_{3,4}\\ \lambda_{1,3}\\ \lambda_{2,4}\\ \lambda_{1,4} \end{bmatrix}= \begin{bmatrix} 0\\ 0\\ 0\\ 0\\ 0\\ 0 \end{bmatrix}. 
$$ After we solve it, we get the general solution \begin{eqnarray} \begin{bmatrix} \lambda_{1,2}\\ \lambda_{2,3}\\ \lambda_{3,4}\\ \lambda_{1,3}\\ \lambda_{2,4}\\ \lambda_{1,4} \end{bmatrix}=\lambda \begin{bmatrix} 1\\ 4\\ 1\\ 5\\ 5\\ 6 \end{bmatrix}.\label{eqlam} \end{eqnarray} The smallest solution for which $\lambda_{i,j}v_{i,j}\in \mathbb{Z}_+^2$ for all $1\leq i<j\leq 4$ is when $\lambda=4$, so the minimum class size is $24$. In that case, the distribution is described in the table from Figure \ref{studentB2}. \begin{figure}[htbp] \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline & Fr. or Soph. &Jr. or Snr.& Grad. & UG & Jr, Snr. or Gr. &All\\ \hline Watched the show &2&12&1&14&13&15\\ \hline Did not watch the show &2&4&3&6&7&9\\ \hline \end{tabular} \caption{Distribution table for $\mathfrak{V}_B$\label{studentB2}} \end{figure} \end{example} \section{Main Result} In this section we prove that under suitable conditions the converse of Theorem \ref{th2} holds. First, we recall a trivial linear algebra result that will be used several times in this section. \begin{remark} Let $v_1,v_2,v_3\in \mathbb{R}^d$ such that $rank[v_1,v_2,v_3]=2$. Then the vector equation \begin{eqnarray}x_1v_1+ x_2v_2+x_3v_3=0,\label{eq2} \end{eqnarray} has a nontrivial solution that is unique up to multiplication with a constant. In particular, if $(\lambda_1,\lambda_2,\lambda_3)$ and $(\mu_1,\mu_2,\mu_3)$ are solutions for the equation \ref{eq2} such that $\lambda_1=\mu_1\neq 0$, then $\lambda_2=\mu_2$, and $\lambda_3=\mu_3$. \label{remark1} \end{remark} We have the following result that generalizes Theorem \ref{th1}. \begin{lemma} Let $v_{i,j}=\begin{bmatrix} \alpha_{i,j}^1\\ \alpha_{i,j}^2\\ \vdots\\ \alpha_{i,j}^d \end{bmatrix}\in \mathbb{R}^d$ for all $1\leq i<j\leq s$. Suppose that \begin{enumerate} \item For every $1\leq i<j<k\leq s$, there exist $a_{i,j,k}> 0$, $b_{i,j,k}> 0$, and $c_{i,j,k}> 0$ such that \begin{eqnarray} a_{i,j,k}v_{i,j}-b_{i,j,k}v_{i,k}+c_{i,j,k}v_{j,k}=0. 
\end{eqnarray} \item For every $1\leq i<j<k<l\leq s$, and all $1\leq a\leq d$ there exists $b$ (that depends on $a$, $i$, $j$, $k$ and $l$) such that \begin{eqnarray}rank\begin{bmatrix} \alpha_{i_1,i_2}^a&\alpha_{i_1,i_3}^a&\alpha_{i_2,i_3}^a\\ \alpha_{i_1,i_2}^b&\alpha_{i_1,i_3}^b&\alpha_{i_2,i_3}^b \end{bmatrix}=2, \end{eqnarray} for all $i_1<i_2<i_3\in \{i,j,k,l\}$. \item The $S^2$-rank of $(v_{i,j})_{1\leq i<j\leq s}\in V_d^{\frac{s(s-1)}{2}}$ is $1$. \end{enumerate} Then there exist $p_i\in \mathbb{R}^d$ for $1\leq i\leq s$, and $\lambda_{i,j}>0$ for $1\leq i<j\leq s$ such that \begin{eqnarray}\lambda_{i,j}v_{i,j}=p_j-p_i, \end{eqnarray} for all $1\leq i<j\leq s$. \label{lemma1} \end{lemma} \begin{proof} We will prove this result by induction. When $s=4$ and $d=2$ this follows from Theorem \ref{th1}. Indeed, since $d=2$, and the $S^2$-rank of $(v_{i,j})_{1\leq i<j\leq 4}$ is $1$ (i.e. $v_{i,j}\in \mathbb{R}^2$ and $det^{S^2}((v_{i,j})_{1\leq i<j\leq 4})=0$), we know from Theorem \ref{th1} that there exist $p_1$, $p_2$, $p_3$, $p_4\in \mathbb{R}^2$, and $\lambda_{i,j}\in \mathbb{R}$ not all zero, such that $$\lambda_{i,j}v_{i,j}=p_j-p_i,$$ for all $1\leq i<j\leq 4$. We still need to show that $\lambda_{i,j}> 0$ for all $1\leq i<j\leq 4$. Since not all $\lambda_{i,j}$ are zero, let's assume that $\lambda_{1,2}\neq 0$ (the other cases are similar). If necessary, after multiplying all $\lambda_{i,j}$ with $-1$, we may assume that $\lambda_{1,2}> 0$. Notice that $(\lambda_{1,2},\lambda_{1,3}, \lambda_{2,3})$ and $(a_{1,2,3},b_{1,2,3},c_{1,2,3})$ are nontrivial solutions for the equation $$x_{1,2}v_{1,2}-x_{1,3}v_{1,3}+x_{2,3}v_{2,3}=0.$$ Since $rank[v_{1,2},v_{1,3},v_{2,3}]=2$, it follows from Remark \ref{remark1} that $(\lambda_{1,2},\lambda_{1,3}, \lambda_{2,3})=c(a_{1,2,3},b_{1,2,3},c_{1,2,3})$ for some constant $0\neq c\in \mathbb{R}$. 
Finally, because $a_{1,2,3}>0$, $b_{1,2,3}>0$, $c_{1,2,3}>0$, and $\lambda_{1,2}>0$, it follows that $c>0$, and so $\lambda_{1,3}>0$ and $\lambda_{2,3}> 0$. This argument can be extended to show that $\lambda_{i,j}> 0$ for all $1\leq i<j\leq 4$. More precisely, using the fact that $\lambda_{1,2}> 0$ and the linear dependence relation among $v_{1,2}$, $v_{1,4}$ and $v_{2,4}$, one gets that $\lambda_{1,4}> 0$ and $\lambda_{2,4}> 0$. Then, using the fact that $\lambda_{2,3}> 0$, and the linear dependence relation among $v_{2,3}$, $v_{2,4}$ and $v_{3,4}$, one gets that $\lambda_{3,4}> 0$, which proves our statement for the case $s=4$ and $d=2$. First we will take $s=4$ and do induction over $d$. Notice that from the case $d=2$ we know that for every $1\leq a\leq d$ there exists $1\leq b\leq d$, $\lambda_{i,j}^{a,b}> 0$, and $p_i^{a,b}\in \mathbb{R}^2$ such that $$\lambda_{i,j}^{a,b}\begin{bmatrix} \alpha_{i,j}^a\\ \alpha_{i,j}^b \end{bmatrix}=p_j^{a,b}-p_{i}^{a,b},$$ for all $1\leq i<j\leq 4$. We need to show that $\lambda_{i,j}^{a,b}$ does not depend on $(a,b)$, and that we can glue together the vectors $p_i^{a,b}$ to get the statement we want. After permuting the elements of $\{1,2,\dots, d\}$ we may assume that we have solved the problem for the set $\{1,2,\dots,d-1\}$. More precisely, if $w_{i,j}=\begin{bmatrix} \alpha_{i,j}^1\\ \alpha_{i,j}^2\\ \vdots\\ \alpha_{i,j}^{d-1} \end{bmatrix}\in \mathbb{R}^{d-1}$, there exist $\lambda_{i,j}> 0$ and $p_i=\begin{bmatrix} p_i^1\\ p_i^2\\ \vdots\\ p_i^{d-1} \end{bmatrix}\in \mathbb{R}^{d-1}$ such that $\lambda_{i,j}w_{i,j}=p_j-p_i$, for all $1\leq i<j\leq 4$.
Notice that for all $1\leq i<j<k\leq 4$ we have that $(\lambda_{i,j}, \lambda_{i,k},\lambda_{j,k})$ and $(a_{i,j,k}, b_{i,j,k}, c_{i,j,k})$ are nontrivial solutions of the vector equation $$x_{i,j}w_{i,j}-x_{i,k}w_{i,k}+x_{j,k}w_{j,k}=0.$$ Since $rank([w_{i,j},w_{i,k},w_{j,k}])=2$, from Remark \ref{remark1} we know that $(\lambda_{i,j}, \lambda_{i,k},\lambda_{j,k})$ is a multiple of $(a_{i,j,k}, b_{i,j,k}, c_{i,j,k})$. Let $a=d$; then there exists $b\in \{1,2,\dots,d-1\}$ such that if $u_{i,j}=\begin{bmatrix} \alpha_{i,j}^a\\ \alpha_{i,j}^b\\ \end{bmatrix}$ then $rank([u_{i,j},u_{i,k},u_{j,k}])=2$ for all $1\leq i<j<k\leq 4$. From the case $d=2$ we know that there exist $\mu_{i,j}> 0$ for all $1\leq i<j\leq 4$, and $q_{i}=\begin{bmatrix} q_i^a\\ q_i^b\\ \end{bmatrix}\in \mathbb{R}^2$ for all $1\leq i\leq 4$, such that $$\mu_{i,j}u_{i,j}=q_j-q_i$$ for all $1\leq i<j\leq 4$. For all $1\leq i<j<k\leq 4$ we have that $(\mu_{i,j},\mu_{i,k},\mu_{j,k})$ and $(a_{i,j,k}, b_{i,j,k}, c_{i,j,k})$ are nontrivial solutions of the vector equation $$x_{i,j}u_{i,j}-x_{i,k}u_{i,k}+x_{j,k}u_{j,k}=0.$$ Since $rank([u_{i,j},u_{i,k},u_{j,k}])=2$, from Remark \ref{remark1} we have that $(\mu_{i,j}, \mu_{i,k},\mu_{j,k})$ is a nonzero multiple of $(a_{i,j,k}, b_{i,j,k}, c_{i,j,k})$. After rescaling $(\mu_{i,j})_{1\leq i<j\leq 4}$ we may assume that $\mu_{1,2}=\lambda_{1,2}>0$. Since $(\mu_{1,2}, \mu_{1,3},\mu_{2,3})$ and $(\lambda_{1,2}, \lambda_{1,3},\lambda_{2,3})$ are both nonzero multiples of $(a_{1,2,3}, b_{1,2,3}, c_{1,2,3})$, we get that $\mu_{1,3}=\lambda_{1,3}$ and $\mu_{2,3}=\lambda_{2,3}$. Similarly, using the linear dependence relation among $w_{1,2}$, $w_{1,4}$ and $w_{2,4}$, we get that $\mu_{1,4}=\lambda_{1,4}$ and $\mu_{2,4}=\lambda_{2,4}$. Finally, using the linear dependence relation among $w_{2,3}$, $w_{2,4}$ and $w_{3,4}$ (and the fact that now we know $\mu_{2,3}=\lambda_{2,3}$), we get that $\mu_{3,4}=\lambda_{3,4}$.
This shows that, if we take $\widetilde{p_i}=\begin{bmatrix} p_i^1\\ p_i^2\\ \vdots\\ p_i^{d-1}\\ q_i^{d} \end{bmatrix}\in \mathbb{R}^d$, we have $$\lambda_{i,j}v_{i,j}=\widetilde{p_j}-\widetilde{p_i},$$ for all $1\leq i<j\leq 4$. And so, by induction, we proved our statement when $s=4$. Next we will do induction over $s$. We just checked the case $s=4$, so we take $s>4$. Assume that for all $1\leq i< j\leq s-1$ there exist $\lambda_{i,j}>0$, and for all $1\leq i\leq s-1$ there exist $p_i\in \mathbb{R}^d$ such that \begin{eqnarray} \lambda_{i,j}v_{i,j}=p_j-p_i, \end{eqnarray} for all $1\leq i<j\leq s-1$. We need to construct $\lambda_{i,s}>0$ for all $1\leq i\leq s-1$ and $p_{s}\in \mathbb{R}^d$. For any $3\leq k\leq s-1$, from the case $s=4$ applied to the set $\{1,2,k,s\}\subseteq \{1,2,\dots, s\}$ we know that there exist $\mu_{i,j}^{(k)}>0$ for all $i<j\in \{1,2,k,s\}$ and $q_i^{(k)}\in \mathbb{R}^d$ for all $i\in \{1,2,k,s\}$ such that $$\mu_{i,j}^{(k)}v_{i,j}=q_j^{(k)}-q_i^{(k)},$$ for all $i<j\in \{1,2,k,s\}$. By replacing $q_i^{(k)}$ with $q_i^{(k)}+p_1-q_1^{(k)}$, we may assume that $q_1^{(k)}=p_1$. After rescaling $\mu_{i,j}^{(k)}$ we may assume that $\mu_{1,2}^{(k)}=\lambda_{1,2}$. Notice that $(\lambda_{1,2},\lambda_{1,k}, \lambda_{2,k})$ and $(\mu_{1,2}^{(k)},\mu_{1,k}^{(k)},\mu_{2,k}^{(k)})$ are nontrivial solutions for the vector equation $$x_{1,2}v_{1,2}-x_{1,k}v_{1,k}+x_{2,k}v_{2,k}=0.$$ Since $rank[v_{1,2},v_{1,k},v_{2,k}]=2$, it follows from Remark \ref{remark1} that $\lambda_{1,k}=\mu_{1,k}^{(k)}$ and $\lambda_{2,k}=\mu_{2,k}^{(k)}$ for all $3\leq k\leq s-1$. In particular, we have $$\lambda_{1,2}v_{1,2}=p_2-p_1=q_2^{(k)}-p_1,$$ and $$\lambda_{1,k}v_{1,k}=p_k-p_1=q_k^{(k)}-p_1,$$ which implies that $q_2^{(k)}=p_2$ and $q_k^{(k)}=p_k$ for all $3\leq k\leq s-1$.
Notice that for all $3\leq k<l\leq s-1$ we have that $(\mu_{1,2}^{(k)},\mu_{1,s}^{(k)},\mu_{2,s}^{(k)})$ and $(\mu_{1,2}^{(l)},\mu_{1,s}^{(l)},\mu_{2,s}^{(l)})$ are nontrivial solutions of the vector equation $$x_{1,2}v_{1,2}-x_{1,s}v_{1,s}+x_{2,s}v_{2,s}=0.$$ Since $\mu_{1,2}^{(k)}=\mu_{1,2}^{(l)}=\lambda_{1,2}$, and $rank[v_{1,2},v_{1,s},v_{2,s}]=2$ we have that $\mu_{1,s}^{(k)}=\mu_{1,s}^{(l)}$ and $\mu_{2,s}^{(k)}=\mu_{2,s}^{(l)}$ for all $3\leq k<l\leq s-1$. So we can denote these constants by $\lambda_{1,s}$ and $\lambda_{2,s}$ respectively. We have $$\lambda_{1,s}v_{1,s}=q_{s}^{(k)}-q_{1}^{(k)}=q_{s}^{(l)}-q_{1}^{(l)}$$ for all $3\leq k<l\leq s-1$. Since $q_{1}^{(k)}=q_{1}^{(l)}=p_1$, we get that $q_{s}^{(k)}=q_{s}^{(l)}$ for all $3\leq k<l\leq s-1$, so we can denote this vector by $p_{s}$ and we get \begin{eqnarray} \lambda_{1,s}v_{1,s}=p_{s}-p_1, \end{eqnarray} \begin{eqnarray} \lambda_{2,s}v_{2,s}=p_{s}-p_2. \end{eqnarray} Finally for all $3\leq k\leq s-1$ we define $$\lambda_{k,s}=\mu_{k,s}^{(k)}.$$ We know that $$\mu_{1,k}^{(k)}v_{1,k}-\mu_{1,s}^{(k)}v_{1,s}+\mu_{k,s}^{(k)}v_{k,s}=0,$$ and so since $\lambda_{1,k}=\mu_{1,k}^{(k)}$, and $\lambda_{1,s}=\mu_{1,s}^{(k)}$ we get $$(p_k-p_1)-(p_{s}-p_1)+\lambda_{k,s}v_{k,s}=0$$ or in other words \begin{eqnarray} \lambda_{k,s}v_{k,s}=p_{s}-p_k. \end{eqnarray} And so, by induction we proved our statement. \end{proof} \begin{remark} Condition $a_{i,j,k}>0$, $b_{i,j,k}>0$ and $c_{i,j,k}>0$ is not necessary for the proof of this lemma. One can replace it with $a_{i,j,k}\neq 0$, $b_{i,j,k}\neq 0$ and $c_{i,j,k}\neq 0$, and get a result where $\lambda_{i,j}\neq 0$. However, in the next theorem we need the result with $\lambda_{i,j}>0$. \end{remark} We have the following converse to Theorem \ref{th2}. \begin{theorem} Let $v_{i,j}=\begin{bmatrix} \alpha_{i,j}^1\\ \alpha_{i,j}^2\\ \vdots\\ \alpha_{i,j}^d \end{bmatrix}\in \mathbb{R}^d$ for all $1\leq i<j\leq s$. 
Assume that \begin{enumerate} \item For all $1\leq i<j\leq s$, and all $1\leq a\leq d$ we have $v_{i,j}^a\in [0,1]$, and $\sum_{a=1}^{d}v_{i,j}^a=1$. \item For every $1\leq i<j<k\leq s$, there exist $a_{i,j,k}> 0$, $b_{i,j,k}> 0$, $c_{i,j,k}> 0$ such that \begin{eqnarray} a_{i,j,k}v_{i,j}-b_{i,j,k}v_{i,k}+c_{i,j,k}v_{j,k}=0. \end{eqnarray} \item For every $1\leq i<j<k<l\leq s$, and all $1\leq a\leq d$ there exists $b$ (that depends on $a$, $i$, $j$, $k$ and $l$) such that $$rank\begin{bmatrix} \alpha_{i_1,i_2}^a&\alpha_{i_1,i_3}^a&\alpha_{i_2,i_3}^a\\ \alpha_{i_1,i_2}^b&\alpha_{i_1,i_3}^b&\alpha_{i_2,i_3}^b \end{bmatrix}=2$$ for all $i_1<i_2<i_3\in \{i,j,k,l\}$. \item The $S^2$-rank of $(v_{i,j})_{1\leq i<j\leq s}$ is $1$. \end{enumerate} Then there exist two random variables $X:(0,1]\to \{1,2,\dots,s\}$, and $Y:(0,1]\to \{1,2,\dots,d\}$ such that $$v_{i,j}^a=P(Y=a\vert i<X\leq j),$$ for all $1\leq i<j\leq s$, and $1\leq a\leq d$. \label{th3} \end{theorem} \begin{proof} From Lemma \ref{lemma1} we know that there exist $\lambda_{i,j}>0$ for all $1\leq i<j\leq s$, and $p_i\in \mathbb{R}^d$ for all $1\leq i\leq s$, such that $\lambda_{i,j}v_{i,j}=p_j-p_i$ for all $1\leq i<j\leq s$. We can normalize the $(\lambda_{i,j})_{1\leq i<j\leq s}$ such that $\lambda_{1,s}=1$. Changing $p_i$ to $p_i-p_1$ we may assume that $p_1=0\in \mathbb{R}^d$. For $1\leq i<j<k\leq s$ we have $\lambda_{i,j}v_{i,j}-\lambda_{i,k}v_{i,k}+\lambda_{j,k}v_{j,k}=0$. Summing all the entries in these vectors we get $$\lambda_{i,j}(\sum_{a=1}^dv_{i,j}^a)-\lambda_{i,k}(\sum_{a=1}^dv_{i,k}^a)+\lambda_{j,k}(\sum_{a=1}^dv_{j,k}^a)=0.$$ Since $\sum_{a=1}^dv_{i,j}^a=1$ for all $1\leq i<j\leq s$ this means that $$\lambda_{i,j}-\lambda_{i,k}+\lambda_{j,k}=0,$$ in particular $\lambda_{i,j}=\lambda_{1,j}-\lambda_{1,i}$. Notice that if $1\leq i<j\leq s$ then $\lambda_{1,i}<\lambda_{1,j}$.
Define $X:(0,1]\to \{1,2,\dots, s\}$, and $Y:(0,1]\to \{1,2,\dots, d\}$ determined by $$X(t)=k~ {\rm if} ~t\in (\lambda_{1,k-1},\lambda_{1,k}],$$ (here we use the convention $\lambda_{1,1}=0$), and $$Y(t)=h ~ {\rm if} ~t\in (\lambda_{1,i-1}+\sum_{a=1}^{h-1}(p_i^a-p_{i-1}^a),\lambda_{1,i-1}+\sum_{a=1}^{h}(p_i^a-p_{i-1}^a)]$$ for some $1< i\leq s$. Let $P$ be the probability given by the Lebesgue measure on the interval $(0,1]$. From the above definitions we have that for every $1\leq i<j\leq s$ \begin{eqnarray*} P(i<X\leq j)&=&\lambda_{1,j}-\lambda_{1,i}\\&=&\lambda_{i,j}. \end{eqnarray*} For every $h\in \{1,2,\dots,d\}$ we have \begin{eqnarray*} P(Y=h)&=&\sum_{1< a\leq s}(p^h_a-p_{a-1}^h)\\ &=&p_{s}^h-p_{1}^h\\ &=&p_{s}^h\\ &=&\lambda_{1,s}v_{1,s}^h\\ &=&v_{1,s}^h.\\ \end{eqnarray*} More generally, for every $1\leq i<j\leq s$, and every $h\in \{1,2,\dots,d\}$ we have \begin{eqnarray*} P(Y=h,~ i<X\leq j)&=&\sum_{i< a\leq j}(p^h_a-p_{a-1}^h)\\ &=&p_j^h-p_i^h\\ &=&\lambda_{i,j}v_{i,j}^h. \end{eqnarray*} In particular we get $$P(Y=h~\vert~ i<X\leq j)=v_{i,j}^h,$$ which proves our statement. \end{proof} \begin{remark} Let $v_{1,2}=\begin{bmatrix} 0.5\\ 0.5 \end{bmatrix}$, $v_{2,3}=\begin{bmatrix} 0.8\\ 0.2 \end{bmatrix}$, $v_{3,4}=\begin{bmatrix} 0.2\\ 0.8 \end{bmatrix}$, $v_{1,3}=\begin{bmatrix} 0.7\\ 0.3 \end{bmatrix}$, $v_{2,4}=\begin{bmatrix} 0.7\\ 0.3 \end{bmatrix}$ and $v_{1,4}=\begin{bmatrix} 0.6\\ 0.4 \end{bmatrix}$. One can see that if we take $\alpha_{1,2,3}=\frac{1}{3}$, $\alpha_{1,2,4}=\frac{1}{2}$, $\alpha_{1,3,4}=\frac{4}{5}$, and $\alpha_{2,3,4}=\frac{5}{6}$ then we have $$v_{i,k}=\alpha_{i,j,k}v_{i,j}+(1-\alpha_{i,j,k})v_{j,k},$$ for all $1\leq i<j<k\leq 4$. As was noticed in Example \ref{example1}, $det^{S^2}((v_{i,j})_{1\leq i<j\leq 4})=-0.007\neq 0$, and so the $S^2$-rank of $(v_{i,j})_{1\leq i<j\leq 4}$ is not equal to $1$. This means that condition four in the statement of Theorem \ref{th3} is not a consequence of the other three.
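As a quick numerical check of this remark (our own sketch, not part of the paper), the convex-combination identities $v_{i,k}=\alpha_{i,j,k}v_{i,j}+(1-\alpha_{i,j,k})v_{j,k}$ can be verified directly for the vectors and coefficients listed above:

```python
# Vectors v_{i,j} and coefficients alpha_{i,j,k} copied from the remark above.
v = {(1, 2): (0.5, 0.5), (2, 3): (0.8, 0.2), (3, 4): (0.2, 0.8),
     (1, 3): (0.7, 0.3), (2, 4): (0.7, 0.3), (1, 4): (0.6, 0.4)}
alpha = {(1, 2, 3): 1/3, (1, 2, 4): 1/2, (1, 3, 4): 4/5, (2, 3, 4): 5/6}

def identity_holds(i, j, k, tol=1e-12):
    # v_{i,k} = alpha_{i,j,k} v_{i,j} + (1 - alpha_{i,j,k}) v_{j,k}, component-wise
    a = alpha[(i, j, k)]
    return all(abs(v[(i, k)][t] - (a * v[(i, j)][t] + (1 - a) * v[(j, k)][t])) < tol
               for t in range(2))

all_identities_hold = all(identity_holds(i, j, k) for (i, j, k) in alpha)
```

All four triples satisfy the identity, even though the $S^2$-rank condition fails for this family.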
Condition $rank[v_{i,j},v_{i,k},v_{j,k}]=2$ is not necessary. For example, if the two variables $X$ and $Y$ are independent then the conditional probability matrix will have rank equal to $1$. However, we were not able to prove a statement without it. \label{remark2} \end{remark} \begin{example} Let's assume that in Example \ref{example1} we have a third student who considered the set $E$ of all undergraduate students in that class, and a random variable $Z:E\to \{1,2,3,4\}$ defined by $Z=2$ if the student is a freshman or sophomore, $Z=3$ if the student is a junior, and $Z=4$ if the student is a senior. He submitted the table from Figure \ref{studentC}. \begin{figure}[htbp] \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline $(i,j)$&$(1,2)$&$(2,3)$&$(3,4)$&$(1,3)$&$(2,4)$&$(1,4)$\\ \hline $P_C(Y=1|i<Z\leq j)$ &0.5&1&0.6&0.8&0.75&0.7\\ \hline \end{tabular} \caption{Table from Student C \label{studentC}} \end{figure} The corresponding matrix is $\mathfrak{V}_C=\begin{bmatrix} 0.5 & 1& 0.6& 0.8 & 0.75&0.7\\ 0.5 & 0& 0.4 &0.2 & 0.25&0.3\\ \end{bmatrix}.$ Just like in Example \ref{example1} one can find the general solution \begin{equation}\begin{bmatrix} \mu_{1,2}\\ \mu_{2,3}\\ \mu_{3,4}\\ \mu_{1,3}\\ \mu_{2,4}\\ \mu_{1,4} \end{bmatrix}=\mu \begin{bmatrix} 2\\ 3\\ 5\\ 5\\ 8\\ 10 \end{bmatrix}.\label{eqlam2} \end{equation} If we denote by $w_{i,j}$ the columns of the matrix $\mathfrak{V}_C$, then the smallest value of $\mu$ for which $\mu_{i,j}w_{i,j}\in \mathbb{Z}_+^2$ for all $1\leq i<j\leq 4$ is $\mu=1$, so the minimum class size is $10$. In that case, the distribution is described in the following table. \begin{figure}[htbp] \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline & Fr. or Soph. &Jr. & Snr. & Fr. or Soph.
or Jr. & Jr. or Snr. &UG\\ \hline Watched the show &1&3&3&4&6&7\\ \hline Did not watch the show &1&0&2&1&2&3\\ \hline \end{tabular} \caption{Distribution table for $\mathfrak{V}_C$} \end{figure} Next, let's notice that Table \ref{studentB} and Table \ref{studentC} are compatible. Indeed, one can see that $P(Y=1|1<X\leq 2)=P(Y=1|1<Z\leq 2)$, $P(Y=1|2<X\leq 3)=P(Y=1|2<Z\leq 4)$ and $P(Y=1|1<X\leq 3)=P(Y=1|1<Z\leq 4)$. This allows us to define a new random variable $T:D\to \{1,2,3,4,5\}$ determined by $T=2$ if the student is a freshman or sophomore, $T=3$ if the student is a junior, $T=4$ if the student is a senior, and $T=5$ if the student is a graduate student. Using this new variable $T$, the combined information is presented in Table \ref{studentComb}. \begin{figure}[htbp] \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline $(i,j)$&$(1,2)$&$(2,3)$&$(3,4)$&$(4,5)$&$(1,3)$&$(2,4)$&$(3,5)$&$(1,4)$&$(2,5)$&$(1,5)$\\ \hline $P(Y=1|i<T\leq j)$ &0.5&1&0.6&0.25 &0.8&0.75&?&0.7&0.65&0.625\\ \hline \end{tabular} \caption{Combined information \label{studentComb}} \end{figure} Take $t_{i,j}=\begin{bmatrix} P(Y=1|i<T\leq j) \\ P(Y=2|i<T\leq j) \end{bmatrix}$ for all $1\leq i<j\leq 5$. Notice that there is missing information in the table from Figure \ref{studentComb}: we do not know the value of $P(Y=1|3<T\leq 5)$. However, we can use the other information to determine it. If the data from these two tables is compatible we can take $\mu=2$ in Equation \ref{eqlam2}, and $\lambda=4$ in Equation \ref{eqlam} to get $\mu_{1,3}=\nu_{1,3}=10$ and $\lambda_{1,4}=\nu_{1,5}=24$, where $\nu_{i,j}$ denote the corresponding constants for the variable $T$.
The equation $\nu_{1,3}t_{1,3}-\nu_{1,5}t_{1,5}+\nu_{3,5}t_{3,5}=0$ becomes $$10\begin{bmatrix} 0.8 \\ 0.2 \end{bmatrix}-24\begin{bmatrix} 0.625 \\ 0.375 \end{bmatrix}+\nu_{3,5}t_{3,5}=0,$$ which implies that $\nu_{3,5}t_{3,5}=\begin{bmatrix} 7 \\ 7 \end{bmatrix}$ and so $$t_{3,5}=\begin{bmatrix} 0.5 \\ 0.5 \end{bmatrix}~~~{\rm and }~~~\nu_{3,5}=14.$$ The corresponding distribution is presented in Table \ref{dist35}, and the minimum class size is still $24$. \begin{figure}[htbp] \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline & Fr. &Jr. & Snr. & Gr & Fr. & Jr. & Sr. & UG & Jr. & All \\ & Soph. & & & & Soph & Snr. &Gr. & & Snr.& \\ & & & & & Jr. & & & & Gr. & \\ \hline Watched the show &2&6&6&1&8&12& 7& 14 & 13&15\\ \hline Did not watch the show &2&0&4&3&2&4 & 7& 6 & 7& 9\\ \hline \end{tabular} \caption{Combined distribution table \label{dist35}} \end{figure} \end{example} \begin{example} Building again on Example \ref{example1}, let's assume that after initially submitting an incorrect answer, student A tries to get extra credit and submits the new data from Table \ref{studentA2}. Here, $U:D\to \{1,2,3\}$ is determined by $U=1$ if the student enjoyed the show, $U=2$ if the student did not enjoy the show, and $U=3$ if the student did not watch the show. \begin{figure}[htbp] \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline $(i,j)$&$(1,2)$&$(2,3)$&$(3,4)$&$(1,3)$&$(2,4)$&$(1,4)$\\ \hline $P_A(U=1|i<X\leq j)$ &0.375&0.4375 &0.125& ? &?&?\\ \hline $P_A(U=2|i<X\leq j)$ &0.125&0.3125 &0.125&? &?&? \\ \hline $P_A(U=3|i<X\leq j)$ &0.5 &0.25 &0.75 &0.3 &0.35 &0.375\\ \hline \end{tabular} \caption{New table from Student A \label{studentA2}} \end{figure} Notice that the data from Table \ref{studentB} and Table \ref{studentA2} are compatible, since $P_B(Y=2|i<X\leq j)=P_A(U=3| i<X\leq j)$ for all $1\leq i<j\leq 4$.
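The recovery of the missing entry $t_{3,5}$ above amounts to solving a two-component linear relation; the short Python sketch below (variable names are ours, not the paper's) reproduces it:

```python
# nu_{1,3} t_{1,3} - nu_{1,5} t_{1,5} + nu_{3,5} t_{3,5} = 0, with the values above
nu13, t13 = 10, (0.8, 0.2)
nu15, t15 = 24, (0.625, 0.375)

w = tuple(nu15 * y - nu13 * x for x, y in zip(t13, t15))  # equals nu_{3,5} * t_{3,5}
nu35 = sum(w)                        # the entries of t_{3,5} must sum to 1
t35 = tuple(x / nu35 for x in w)     # recovered conditional probabilities
```

This yields $\nu_{3,5}=14$ and $t_{3,5}=(0.5,0.5)$, in agreement with the computation above.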
Again, there is some missing information in Table \ref{studentA2}; however, it can be recovered by noticing that Equation \ref{eqlam} is the general solution of the problem associated to the data in Table \ref{studentA2}. In particular one can check that the smallest solution for which $\lambda_{i,j}v_{i,j}\in \mathbb{Z}_+^3$ for all $1\leq i<j\leq 4$ is when $\lambda=8$, so the minimum class size is $48$. In that case, the distribution is described in the table from Figure \ref{studentA3}. \begin{figure}[htbp] \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline & Fr. or Soph. &Jr. or Snr.& Grad. & UG & Jr, Snr. or Gr. &All\\ \hline Enjoyed the show &3&14&1&17&15&18\\ \hline Did not enjoy the show &1&10&1&11&11&12\\ \hline Did not watch the show &4&8&6&12&14&18\\ \hline \end{tabular} \caption{Distribution table for $\mathfrak{V}_{A_{NEW}}$\label{studentA3}} \end{figure} \end{example} \begin{remark} It would be interesting to see if these results can be applied to statistics. One serious problem is that the rank and the $S^2$-rank are highly sensitive to small variations of the vectors $v_{i,j}$. So, in order to apply the ideas from this paper to explicit problems, one needs to decide what range of values of $det$ and $det^{S^2}$ is small enough to be considered $0$ in this context, and how to correct the errors (fit data) in that situation. Since we don't have any expertise in that area, we leave this problem to statisticians to decide. \end{remark} \section*{Acknowledgment} We thank J. Chen and S. Lippold for feedback on an earlier version of this paper. \bibliographystyle{amsalpha}
\section{Introduction} 2D quantum gravity on $AdS_2$ geometry is important due to its essential role in the context of black hole physics. Indeed the $AdS_2$ geometry is the factor which appears in the near-horizon geometry of extremal black holes in any dimension. Therefore understanding quantum gravity on $AdS_2$ might ultimately help us understand the origin of the black hole entropy in other dimensions. The main problem which prevents us from explaining quantum gravity on $AdS_2$ geometry is the fact that it is not quite clear what it actually means. Indeed this is the case for any dimension. An attempt to understand, or better, to make sense of quantum gravity in three dimensions has been made by Witten in \cite{witten}, where it was argued that 3D quantum gravity makes sense only on $AdS_3$. The main reason supporting the argument is the existence of non-trivial three dimensional black holes, the BTZ solutions, which carry non-zero entropy \cite{Banados:1992wn}. Since the background is AdS, it is natural to define the quantum gravity in terms of the dual CFT via the AdS/CFT correspondence \cite{Maldacena:1997re}. In 2D Maxwell-dilaton gravity there are several classical solutions with non-zero entropy which may be interpreted as 2D extremal black holes. Therefore we would expect to have non-trivial 2D quantum gravity on $AdS_2$ geometry. Following the idea explored in \cite{witten}, one may suspect that quantum gravity on $AdS_2$ can be defined via its CFT dual. We note, however, that although the AdS$_{d+1}$/CFT$_{d}$ correspondence has been understood for $d\geq 2$, mainly due to explicit examples, little is known for the case of $d=1$, and indeed it remains enigmatic.
Nevertheless there are several attempts to explore the $AdS_2/CFT_1$ correspondence, including \cite{{Strominger:1998yg},{Cadoni:1999ja},{NavarroSalas:1999up}, {Park:1999hs},{Cadoni:2000ah}, {Cadoni:2000gm},{Astorino:2002bj},{Kim:1998wy},{Leiva:2003kd},{Hyun:2007ii},{Correa:2008bi},{Kang:2004js}, {HS},{Sen:2008yk},{Alishahiha:2008tv},{Gupta:2008ki},{Cadoni:2008mw}, {Castro:2008ms},{Sen:2008vm},{Morita:2008qn}}. The aim of the present article is to further study 2D gravity on $AdS_2$ along the lines of the recent studies \cite{{HS},{Alishahiha:2008tv},{Castro:2008ms}} where 2D Maxwell-dilaton gravity has been considered. It was shown in \cite{HS} that, in order to have consistent boundary conditions, the asymptotic symmetry of the model is generated by a twisted energy momentum tensor whose central charge is non-zero. This central charge, along with the eigenvalue of $L_0$ of the dual CFT, can be used to {\it consistently} reproduce the entropy of the bulk gravity via the Cardy formula. This was taken as evidence that the CFT dual to gravity on $AdS_2$ should be {\it a chiral half of a 2D CFT}. To elaborate on the above statement we will consider 2D Maxwell-dilaton gravity in the presence of higher order corrections given by a 2D Maxwell-gravitational Chern-Simons term. To have consistent boundary conditions one needs to work with the twisted energy momentum tensor, though in this case, due to the presence of the Chern-Simons term, the corresponding central charge gets a correction. An important observation is that the correction not only depends on the coefficient of the Chern-Simons term, but is also sensitive to the sign of the electric charge. This sign-dependent effect should, indeed, be associated with the fact that the dual theory should be a chiral half of a 2D CFT. To study the vacuum solutions of the model we should solve the equations of motion with a constant dilaton.
Equivalently, we may utilize the entropy function formalism \cite{SEN}, by which we are also able to find the entropy of the corresponding solutions. From the equations of motion we find three distinct $AdS_2$ vacuum solutions. Using the asymptotic symmetry of the theory, together with the requirement of a consistent picture, we will be able to read off the central charge of the corresponding solutions as well as the level of the $U(1)$ current. The 2D solutions may be uplifted to three dimensions. The obtained 3D solutions are purely geometric solutions that will be either $AdS_3$ or warped $AdS_3$ with an identification. The warped $AdS_3$ solution has recently been studied in \cite{strominger2} (see also \cite{{compere},{Anninos:2008qb},{Carlip:2008eq},{Gibbons:2008vi}}). The paper is organized as follows. In the next section we will introduce our model and apply the entropy function formalism to find the vacuum solutions as well as their entropy. Re-writing the entropy in a suggestive form, we will give an expression for the corrected central charge. In section 3, by requiring consistent boundary conditions, we will find the asymptotic symmetry of the theory, which can be used to read off the central charge. In section 4 we uplift the 2D solutions to three dimensions, which may be compared with the 3D solutions in \cite{strominger2}. The last section is devoted to discussions.
\section{2D Maxwell-dilaton gravity with Chern-Simons term} Let us consider 2D Maxwell-dilaton gravity with the action \begin{equation}\label{action1} S=S_{EH}+S_{CS} \end{equation} where $S_{EH}$ is the Einstein-Hilbert action \begin{eqnarray}\label{actions} S_{EH}&=&\frac{1}{8G}\int d^2x\sqrt{-g}\; e^{\phi}\left(R+2\partial_\mu\phi\,\partial^\mu\phi+\frac{2}{l^2}e^{2\phi}-\frac{l^2}{4}F_{\mu\nu}F^{\mu\nu}\right), \end{eqnarray} and $S_{CS}$ is the two dimensional Chern-Simons term given by \begin{equation} S_{CS}=-\frac{1}{32G\mu}\int d^2x\left(lR\epsilon^{\mu\nu}F_{\mu\nu}+l^3 \epsilon^{\mu\nu}F_{\mu\rho}F^{\rho\delta} F_{\delta\nu}\right). \end{equation} The action $S_{EH}$ can actually be obtained from 3D pure gravity with a cosmological constant by reducing to two dimensions along an $S^1$ \cite{Strominger:1998yg}. Similarly one may start from the 3D gravitational Chern-Simons term and reduce along an $S^1$ to arrive at the 2D Chern-Simons term $S_{CS}$ \cite{{Guralnik:2003we},{Grumiller:2003ad}}. From the three dimensional point of view these actions have been used to study the entropy of extremal black holes in the presence of higher order corrections (see for example \cite{sen2}). The aim of this section is to study the vacuum solutions of the model given by the action \eqref{action1}, which can be obtained by solving its equations of motion.
In fact, setting $F_{\mu\nu}=\sqrt{-g}\epsilon_{\mu\nu} F$, the equations of motion are given by \begin{equation}\label{eq metric} \begin{split} &g_{\mu\nu}\left(\nabla^2e^\phi+\frac{1}{l^2}\,e^{3\phi}-\frac{l^2F^2}{4}\,e^\phi +e^\phi\partial_\alpha\phi\,\partial^\alpha\phi\right)-\nabla_\mu\nabla_\nu e^\phi-2e^{\phi}\partial_\mu\phi\partial_\nu\phi \cr &-\frac{l}{2\mu}\left[g_{\mu\nu}\left( \nabla^2F-{l^2F^3}-\frac{R}{2}F\right)-\nabla_\mu\nabla_\nu F \right]=0 \end{split} \end{equation} \begin{equation} \label{eq scalar} R+\frac{6}{l^2}e^{2\phi}+\frac{l^2}{2}F^2+2e^\phi\partial_\alpha\phi\partial^\alpha\phi-4\nabla^2e^\phi=0,\;\;\; \epsilon^{\mu\nu}\partial_\mu\bigg{(}e^\phi F+\frac{1}{2\mu l}(R+3l^2F^2)\bigg{)}=0 \end{equation} It is useful to work with the trace and traceless parts of equation \eqref{eq metric} \begin{equation}\label{trace part} \nabla^2e^\phi+\frac{2}{l^2}\,e^{3\phi}-\frac{l^2F^2}{2}\,e^\phi =\frac{l}{2\mu}\left( \nabla^2F-2{l^2F^3}-{R}F\right) \end{equation} \begin{equation}\label{non trace part} g_{\mu\nu}(\nabla^2e^\phi+2e^\phi\partial_\alpha\phi\,\partial^\alpha\phi)-2( e^\phi\partial_\mu\phi\,\partial_\nu\phi+\nabla_\mu\nabla_\nu e^\phi)=\frac{l}{2\mu}\left(g_{\mu\nu}\nabla^2F-2\nabla_\mu\nabla_\nu F\right) \end{equation} This model admits $AdS_2$ vacuum solutions. To find them we should look for solutions with a constant dilaton. In this case one has \begin{equation} \frac{2}{l^2}\,e^{3\phi}-\frac{l^2F^2}{2}\,e^\phi =-\frac{l}{2\mu}\left(2{l^2F^3}+{R}F\right),\;\;\;\; R+\frac{6}{l^2}e^{2\phi}+\frac{l^2}{2}F^2=0, \end{equation} which, for a given gauge field, can be solved to find the constant dilaton. Indeed, these equations reduce to the following equation for the dilaton \begin{equation} \left(e^{\phi}-\frac{3l}{2\mu}F\right) \left(\frac{2}{l^2}e^{2\phi}-\frac{l^2}{2}F^2\right)=0.
\end{equation} Therefore, for arbitrary $\mu$ and $l$, the model may have three different vacuum solutions with constant dilaton given by \begin{equation} e^{\phi}=\pm\frac{l^2}{2}F,\;\;\;\;\;\;\;\;\;e^{\phi}=\frac{3}{\mu l}\frac{l^2}{2}F. \end{equation} It is worth noting that, as is evident from the above expressions, in the special case of $\mu l=3$ the third solution degenerates with the solution with the positive sign in the first expression. We will come back to this point later. To find the full solutions we need to plug these expressions for the dilaton into the equations of motion and solve for the metric and gauge field. Equivalently, since the solutions we are looking for are $AdS_2$, one may utilize the entropy function formalism \cite{SEN}. An advantage of the entropy function formalism is that with this method we can not only find the solutions, but also read off the entropy of the corresponding solutions. To proceed let us start from an ansatz preserving the $SO(1,2)$ symmetry of the $AdS_2$ solution \begin{equation} ds^2=v(-r^2dt^2+\frac{dr^2}{r^2}),\;\;\;\;\;\;e^\phi=u, \;\;\;\;\;F_{01}=\frac{e}{l^2}. \end{equation} The entropy function is given by \begin{equation} {\cal E}=2\pi[qe-f(e,v,u)] \end{equation} where $f(e,v,u)$ is the Lagrangian density evaluated for the above ansatz. The parameters $e,v$ and $u$ can be obtained by extremizing the entropy function with respect to them. Then the entropy is given by the value of the entropy function evaluated at the extremum.
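As a numerical illustration (our own sketch, not part of the paper; the sample values $G=l=1$, $\mu=5$, $q=1$ are assumptions made purely for the check), one can verify the factorized constant-dilaton equation above, and check that the $q>0$ solution obtained below indeed extremizes the entropy function, with the extremal value matching the Cardy form:

```python
import math
import random

random.seed(0)

# sample parameters (an assumption for illustration; units with G = l = 1)
G, l, mu, q = 1.0, 1.0, 5.0, 1.0

# check of the dilaton factorization: substituting R = -6 e^{2phi}/l^2 - l^2 F^2/2
# into the first constant-dilaton equation reproduces the factorized form
for _ in range(5):
    u, F = random.uniform(0.1, 2.0), random.uniform(0.1, 2.0)
    R = -6*u**2/l**2 - l**2*F**2/2
    lhs = 2*u**3/l**2 - l**2*F**2*u/2 + (l/(2*mu))*(2*l**2*F**3 + R*F)
    rhs = (u - 3*l*F/(2*mu))*(2*u**2/l**2 - l**2*F**2/2)
    assert abs(lhs - rhs) < 1e-10

def f(e, v, u):
    # Lagrangian density of the action evaluated on the AdS2 ansatz
    return (1/(8*G))*(-2*u + 2*u**3*v/l**2 + e**2*u/(2*v*l**2)
                      + (1/(2*mu))*(2*e/(v*l) - e**3/(v**2*l**3)))

def entropy_function(e, v, u):
    return 2*math.pi*(q*e - f(e, v, u))

# the second solution of the text (valid for q > 0)
v2 = (1 - 1/(mu*l))/(16*G*q)
e2 = l*math.sqrt((1 - 1/(mu*l))/(16*q*G))
u2 = math.sqrt(4*G*q*l**2/(1 - 1/(mu*l)))

# numerical gradient of the entropy function at the claimed extremum
h = 1e-6
grad_norm = max(
    abs(entropy_function(e2 + h, v2, u2) - entropy_function(e2 - h, v2, u2)),
    abs(entropy_function(e2, v2 + h, u2) - entropy_function(e2, v2 - h, u2)),
    abs(entropy_function(e2, v2, u2 + h) - entropy_function(e2, v2, u2 - h)),
) / (2*h)

# entropy at the extremum versus the Cardy formula S = 2 pi sqrt(L0 c / 6), L0 = q l^2
S_extremum = entropy_function(e2, v2, u2)
c_L = (3/(2*G))*(1 - 1/(mu*l))
S_cardy = 2*math.pi*math.sqrt(q*l**2*c_L/6)
```

With these sample values the gradient vanishes to numerical accuracy and the two entropies agree, consistent with the identification of $ql^2$ with $L_0$ discussed below.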
Using the above ansatz, the entropy function for the action \eqref{action1} reads \begin{equation} {\cal E}=2\pi\left\{qe-\frac{1}{8G}\left[-2u+\frac{2u^3v}{l^2}+\frac{e^2u}{2vl^2}+\frac{1}{2\mu} \left(\frac{2e}{vl}-\frac{e^3}{v^2 l^3}\right)\right]\right\} \end{equation} Extremizing the entropy function with respect to the parameters $v,u$ and $e$, for generic $\mu$ and $l$ we find three different solutions \begin{eqnarray}\label{solutions} 1:&&v=\frac{1+1/\mu l}{-16Gq},\;\;\;e^{2\phi}=\frac{-4Gql^2}{1+1/\mu l},\;\;\;\;\;\;\; \frac{e}{l}=- \sqrt{\frac{1+1/\mu l}{-16qG}},\;\;\;\;\;\;\;\;\;q< 0,\cr &&\cr 2:&&v=\frac{1-1/\mu l}{16Gq},\;\;\;e^{2\phi}=\frac{4Gql^2}{1-1/\mu l},\;\;\;\;\;\; \frac{e}{l}= \sqrt{\frac{1-1/\mu l}{16qG}},\;\;\;\;\;\;\;\;\;\;\;\;\;q> 0,\cr &&\cr 3:&&v=\frac{1}{8Gq\mu l},\;\;\;\;\;\;e^{2\phi}=\frac{72Gq\mu l^3}{\mu^2l^2+27},\;\;\;\;\; \frac{e}{l}=\sqrt{\frac{\mu l}{2Gq(\mu^2l^2+27)}},\;\;\;q>0. \end{eqnarray} The entropy of the corresponding solutions written in a suggestive form is given by \begin{eqnarray} 1:&&S=2\pi\sqrt{\frac{-ql^2}{6}\frac{3}{2G}(1+\frac{1}{\mu l})},\cr 2:&&S=2\pi\sqrt{\frac{ql^2}{6}\frac{3}{2G}(1-\frac{1}{\mu l})},\cr 3:&&S=2\pi \sqrt{\frac{ql^2}{6}\frac{12\mu l}{G(\mu^2l^2+27)}}, \end{eqnarray} which may be compared with the Cardy formula for the entropy $S=2\pi\sqrt{\frac{L_0}{6}c}$. Following the general philosophy of the AdS/CFT correspondence \cite{Maldacena:1997re} if we assume that the 2D gravity on the $AdS_2$ solutions \eqref{solutions} has a dual CFT, it is then natural to identify $ql^2$ with the eigenvalue of $L_0$ of the dual CFT. Then the central charges of the corresponding CFTs read \begin{equation}\label{central charges1} 1:\;\;c_R=\frac{3}{2G}(1+\frac{1}{\mu l}),\;\;\;\;\;\;2:\;\; c_L=\frac{3}{2G}(1-\frac{1}{\mu l}),\;\;\;\;\;\; 3:\;\;c_L=\frac{12\mu l}{G(\mu^2 l^2+27)}. 
\end{equation} If correct, this means that the 2D Maxwell-dilaton gravity on the $AdS_2$ backgrounds \eqref{solutions} is dual to a chiral half of a 2D CFT characterized by the above central charges. We note, however, that since the identification $L_0=ql^2$ was speculative, the above presentation cannot be considered as an argument supporting the $AdS_2/CFT_1$ correspondence. The best we can say is that as far as the entropy is concerned, with this identification, the picture seems self-consistent. It is worth noting that for the case of $\mu\rightarrow \infty$, where the effect of the Chern-Simons term is zero, we recover the known results in the literature (see for example \cite{{Alishahiha:2008tv},{Castro:2008ms}}), legitimizing our identifications. In the next section we will present another calculation supporting the self-consistency of the above picture. The indices $L,R$ in equations \eqref{central charges1} refer to the fact that, depending on the sign of $q$, the dual chiral CFT is left- or right-handed. Moreover, as we have already mentioned, $\mu l=3$ is a special point. Indeed at this point the solution (3) degenerates with solution (2), where we get \begin{equation} c_R=\frac{2}{G},\;\;\;\;\;\;\;c_L=\frac{1}{G}. \end{equation} Another interesting point is $\mu l=\pm 1$, where we have two solutions with the following central charges \begin{equation} c_R=\frac{3}{G},\;\;\;\;\;\;\;c_L=\frac{3}{7G}. \end{equation} In section 4 we will compare these results with the solutions of 3D gravity coupled to the Chern-Simons term. \section{Asymptotic symmetry and central charge} In this section we closely follow \cite{HS} to study 2D Maxwell-dilaton quantum gravity on the three different $AdS_2$ backgrounds in \eqref{solutions}. We will see that, in order to have consistent boundary conditions, the usual conformal diffeomorphisms, generated by the energy momentum tensor of \eqref{action1}, must be accompanied by a $U(1)$ gauge transformation.
As a result we will have to work with a twisted energy momentum tensor whose central charge is non-zero \cite{HS}. We note, however, that although we would expect to get three different central charges for the three solutions in \eqref{solutions}, since all the solutions are obtained from the same action, \eqref{action1}, the procedure as well as the expressions for the different quantities must be universal. To proceed we note that the $AdS_2$ vacuum solutions, setting $r=\frac{1}{\sigma}$, can be recast in the following form \begin{equation} ds^2=-4v\frac{dt^+dt^-}{(t^+-t^-)^2},\;\;\;\;\;\;\;\; A_{\pm}=-\frac{e}{2\sigma l^2},\;\;\;\;\;\;\;\;\;u=\eta={\rm constant}, \end{equation} where $t^{\pm}=t\pm\sigma$ and $v,e,u$ are given in \eqref{solutions}. Now the aim is to study 2D quantum gravity whose vacuum is given by any of the above solutions. To do so, we first need to understand the action of the conformal group on the theory. For this purpose, following the standard procedure in 2D CFT, we choose an appropriate gauge for the metric and the gauge field. For the metric we choose the conformal gauge \begin{equation} ds^2=-e^{2\rho} dt^+dt^- \end{equation} and for the gauge field the Lorentz gauge \begin{equation} \partial_+A_-+\partial_-A_+=0 \end{equation} In this gauge the gauge field can be written as $A_{\pm}=\pm\partial_{\pm}a$, for a scalar field $a$, such that $F_{+-}=-2\partial_+\partial_-a$.
Our gauge choice fixes the coordinates and the $U(1)$ gauge field up to residual conformal and gauge transformations generated by \begin{equation} t^{\pm}\rightarrow t^{\pm}+\zeta^{\pm}(t^{\pm})\ , \ \ \ \ \ \ \ \ \ a\rightarrow a+\theta(t^+)-\tilde{\theta}(t^-) \end{equation} In this gauge the action \eqref{action1} reads \begin{eqnarray} S_{GF}&=&\frac{1}{4G}\int\ d^2t \ [-2\partial_-\eta\partial_+\rho+\frac{e^{2\rho}}{2l^2}\ \eta^3-2\frac{\partial_+\eta\partial_-\eta}{\eta}+\frac{l^2}{2}\ e^{-2\rho}\eta\ (F_{+-})^2] \cr \nonumber\\ &-&\frac{l}{4G\mu}\int\ d^2t\ [2e^{-2\rho}\partial_+\partial_-\rho\ F_{+-}+l^2e^{-4\rho}(F_{+-})^3] \end{eqnarray} This action should be accompanied by the equations of motion for the fields that have been fixed by the gauge choice. These show up as the following constraints \begin{eqnarray} \frac{2}{\sqrt{-g}}\frac{\delta S}{\delta g^{\pm\pm}}\equiv T_{\pm\pm}&=&\frac{1}{4G}\left(-2\partial_{\pm}\rho\partial_{\pm}\eta+\partial_{\pm}\partial_{\pm}\eta+2 \frac{\partial_{\pm}\eta\partial_{\pm}\eta}{\eta}\right)\cr &&\cr &+& \frac{l}{8G\mu}\bigg(-2\partial_{\pm}\rho\partial_{\pm}F+\partial_{\pm}\partial_{\pm}F\bigg)=0 \end{eqnarray} \begin{equation}\label{Gcons} -\frac{\delta S}{\delta A_{\pm}}\equiv G_{\mp}=\pm \frac{l^2}{8G}\partial_{\mp}(\eta F)+j_{\mp}=0 \end{equation} where \begin{equation}\label{div} j_{\pm}=\mp\frac{l}{16G\mu}\partial_{\pm}(8e^{-2\rho}\partial_+\partial_-\rho+3l^2F^2),\;\;\;\;\;\;{\rm with}\;\;\;\partial_-j_++\partial_+j_-=0. \end{equation} On the other hand, since we require no current flow out of the boundary, one should impose the condition $j_\sigma|_{\sigma=0}=0$, which, using equation \eqref{Gcons}, gives \begin{equation}\label{FF} j_{\sigma}= j_+-j_-=-\frac{l^2}{8G}\ \partial_t(\eta F)=0, \;\;\;\;{\rm at}\;\;\;\sigma=0.
\end{equation} As a result, the boundary terms in the variation of the action will vanish if\footnote{The constraints \eqref{Gcons} together with \eqref{div} and the boundary condition $j_{\sigma}|_{\sigma=0}=0$, completely determine $j$. Indeed from the variation of the action with respect to $a$ we find a boundary term of the form $\partial_+(\eta F)\;\delta a$ which must vanish at the boundary. On the other hand, due to \eqref{FF} we are led to $\delta a|_{\sigma=0}=0$. This forces a Dirichlet boundary condition for the field $a$.} \begin{equation}\label{boundary} \partial_t a|_{\sigma=0}=A_{\sigma}|_{\sigma=0}=0 \end{equation} In general, the boundary condition \eqref{boundary} is not preserved by the remaining allowed diffeomorphisms and hence the coordinate transformations should be accompanied by appropriate gauge transformations \cite{HS} \begin{equation}\label{gauge} \theta(t^+)=\frac{e}{2 l^2}\partial_+\zeta^+\ ,\ \ \ \ \ \ \ \ \tilde{\theta}(t^-)=-\frac{e}{2l^2}\partial_-\zeta^-\ . \end{equation} Therefore the improved conformal transformations are generated by the twisted energy momentum tensor \begin{equation} {\tilde T}_{\pm\pm}=T_{\pm\pm}\mp \frac{e}{2l^2}\partial_\pm {\cal G}_\pm, \end{equation} where ${\cal G}_{\pm}$ is the current that generates the gauge transformations \eqref{gauge}. Denoting by $k$ the level of the $U(1)$ current, which parameterizes the gauge anomaly due to the Schwinger term, the central charge of the model reads \begin{equation}\label{central} c=3k\frac{e^2}{l^4}. \end{equation} The main challenge is to find the $U(1)$ level $k$. In general it can be obtained by making use of anomaly calculations \cite{{Manton:1985jm},{Heinzl:1991vd}}. We note, however, that it can be fixed using the known solutions. In particular, for the case of $\mu \rightarrow \infty$ where the theory is given by the first action in \eqref{action1}, the central charge is found to be $\frac{3}{2G}$ \cite{{Alishahiha:2008tv},{Castro:2008ms}}.
Equating this value with the central charge in \eqref{central} and using the first or second solution in \eqref{solutions} in the limit of $\mu \rightarrow \infty$, one finds $k=8|q| l^2$. Plugging this back into the equation \eqref{central} we get \begin{equation} 1)\;\;c_R=\frac{3}{2G}(1+\frac{1}{\mu l}),\;\;\;\;\;\;2)\;\;c_L=\frac{3}{2G}(1-\frac{1}{\mu l}),\;\;\;\;\;\; 3)\;\;c_L=\frac{12\mu l}{G(\mu^2 l^2+27)}, \end{equation} in agreement with the results of the previous section, \eqref{central charges1}. \section{Relation to 3D gravity} In this section we would like to compare our 2D solutions with those in 3D Einstein-Chern-Simons gravity which have recently been studied in \cite{strominger2}. To do so, we note that the two dimensional $AdS_2$ solutions \eqref{solutions} may be uplifted to three dimensions. In general if we start from a 2D solution \begin{equation} ds_2^2=g_{\mu\nu}dx^\mu dx^\nu,\;\;\;\;\;e^{\phi},\;\;\;\;\;A_\mu, \end{equation} which we assume to be symmetric under an isometry group ${\cal G}$, we can find a purely geometric 3D gravity solution \begin{equation} ds^2_3=e^{2\phi}\bigg{[}ds_2^2+\bigg{(}dy+l A_\mu dx^\mu\bigg{)}^2\bigg{]} \end{equation} with isometry ${\cal G}\times U(1)$. Here $y$ is a coordinate that parameterizes an $S^1$ with period $2\pi l$. In particular consider the case where the two dimensional solution is $AdS_2$. The isometry group of the solution is $SL(2,R).$\footnote{The solution may have extra symmetries. For example for the solutions \eqref{solutions} we have $U(1)$ gauge symmetry as well.} Being symmetric under the $SL(2,R)$ group, the solution has constant dilaton and $F_{tr}$. By uplifting the solution to three dimensions we find a purely geometric solution whose isometry is $SL(2,R)\times U(1)$; the obtained solution will be $S^1$ fibered over $AdS_2$. In other words, in light of the recent terminology, the solution may be thought of as {\it warped} $AdS_3$ \cite{strominger2}.
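As a cross-check of these expressions (our own arithmetic, spelled out for convenience), evaluating them at the special points mentioned in section 2 reproduces the central charges quoted there: \[ \mu l=3:\quad c_R=\frac{3}{2G}\Big(1+\frac{1}{3}\Big)=\frac{2}{G},\qquad c_L=\frac{12\cdot 3}{G(9+27)}=\frac{1}{G};\qquad \mu l=1:\quad c_R=\frac{3}{2G}\Big(1+1\Big)=\frac{3}{G},\qquad c_L=\frac{12}{G(1+27)}=\frac{3}{7G}. \]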
For particular values of the radius of the $AdS_2$ space and field strength, the resultant three dimensional solution describes a locally $AdS_3$ solution. However, globally it is $AdS_3$ with an identification. The effect of this identification is that the isometry group of $AdS_3$, $SL(2,R)\times SL(2,R)$, breaks to $SL(2,R)\times U(1)$, as mentioned above. Applying the above procedure to the solutions \eqref{solutions} we get \begin{eqnarray}\label{3dsolutions} 1: && ds^2=\frac{l^2}{4}\left(-r^2dt^2+\frac{dr^2}{r^2}+(dz-rdt)^2\right) , \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;q<0,\cr &&\cr 2: && ds^2=\frac{l^2}{4}\left(-r^2dt^2+\frac{dr^2}{r^2}+(dz+rdt)^2\right) , \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;q>0,\cr &&\cr 3: &&ds^2=\frac{9l^2}{\mu^2 l^2+27}\left(-r^2dt^2+\frac{dr^2}{r^2}+\frac{4\mu^2 l^2}{\mu^2l^2+27}(dz+rdt)^2\right),\;\;\;\;q>0, \end{eqnarray} where $z=l\theta/|e|$ with the identification $z\sim z+ 2\pi l n \frac{l}{|e|}$. Here $n$ is an integer. We note, however, that the above description must be considered with special care. It is known that the asymptotic symmetry of $AdS_2$ is a copy of the Virasoro algebra whose global part is an $SL(2,R)$ \cite{Strominger:1998yg}. This is, indeed, the analogue of the $AdS_3$ case, where the asymptotic symmetry is two copies of the Virasoro algebra with $SL(2,R)_L\times SL(2,R)_R$ global part \cite{BH}. It is crucial to note that, in general, the global part of the Virasoro algebra of the $AdS_2$ geometry is not necessarily the $SL(2,R)$ symmetry which only leaves the metric invariant. Indeed, as we have seen in the previous section, the asymptotic symmetry of the $AdS_2$ solutions of \eqref{solutions} is given by the twisted energy momentum tensor. Now, uplifting the solutions to three dimensions, the resultant $SL(2,R)$ must be read from the twisted energy momentum tensor.
In other words, if we denote the left/right handed energy momentum tensor of the three dimensional theory by $T^{(3)}_{\pm\pm}$, one should identify $T^{(3)}_{\pm\pm}={\tilde T}_{\pm\pm}$ \cite{Strominger:1998yg}. Since in two dimensions the theory is chiral, upon uplifting the theory to three dimensions we only get non-zero excitations for one chirality. In other words, depending on whether the two dimensional solution is left/right handed we will have a left/right handed three dimensional energy momentum tensor. Actually from the 3D point of view, as we have already mentioned, due to the identification the excitation states live purely in the $SL(2,R)_L$ or $SL(2,R)_R$ factor of the isometry group. On the other hand, as we have seen in the previous section, the 2D twisted energy momentum tensor has non-zero central charge given by \eqref{central charges1}. Therefore the corresponding central charge of the dual CFT of the three dimensional solutions is given by \begin{equation}\label{c1} 1:\;\;c_R=\frac{3l}{2G_3}(1+\frac{1}{\mu l}),\;\;\;\;\;\; 2:\;\;c_L=\frac{3l}{2G_3}(1-\frac{1}{\mu l}),\;\;\;\;\;\; 3:\;\;c_L=\frac{12\mu l^2}{G_3(\mu^2 l^2+27)}, \end{equation} where $G_3$ is the 3D Newton constant. Of course, although the theory we get has excitations of only one chirality, the other sector exists but has zero excitations. Thus the 2D CFT dual to the above 3D solutions has both $c_L$ and $c_R$. Using the diffeomorphism anomaly, which gives $c_L-c_R=-3/\mu G_3$ \cite{KL}, one finds \begin{equation}\label{c2} 1:\;\;c_L=\frac{3l}{2G_3}(1-\frac{1}{\mu l}),\;\;\;\;\;\; 2:\;\;c_R=\frac{3l}{2G_3}(1+\frac{1}{\mu l}),\;\;\;\;\;\; 3:\;\;c_R=\frac{15\mu^2 l^2+81}{\mu G_3(\mu^2 l^2+27)}, \end{equation} in agreement with \cite{sen2} and \cite{strominger2}. As we have already mentioned the $z$ coordinate in solutions \eqref{3dsolutions} is periodic. Therefore one may interpret the solutions as 3D extremal black holes.
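Before turning to the thermodynamics, let us record the arithmetic behind the third entry of \eqref{c2} (spelled out here for convenience), which follows directly from the anomaly relation applied to \eqref{c1}: \[ c_R=c_L+\frac{3}{\mu G_3}=\frac{12\mu l^2}{G_3(\mu^2l^2+27)}+\frac{3}{\mu G_3}=\frac{12\mu^2l^2+3(\mu^2l^2+27)}{\mu G_3(\mu^2l^2+27)}=\frac{15\mu^2 l^2+81}{\mu G_3(\mu^2 l^2+27)}. \]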
Due to the identification, the $SL(2,R)_L/SL(2,R)_R$-invariant $AdS_3$ vacuum should give a thermal state for the left/right movers of the boundary CFT with zero right/left temperature and non-zero left/right temperature. On the other hand, the $t$ direction can be treated as the null coordinate of the boundary, while $z$ should be considered as a Rindler coordinate. Therefore the left/right temperature of the dual CFT is proportional to the magnitude of the shift in the $z$ direction. More precisely, one gets\footnote{Note that such a treatment for warped $AdS_3$ is tricky due to its boundary. Nevertheless, in writing the expression for this case we are encouraged by the fact that the picture fits nicely here as well.} \begin{equation} 1:\;T_R=\frac{2l}{\pi} \sqrt{\frac{-qG_3}{l(1+\frac{1}{\mu l})}},\;\; 2:\;T_L=\frac{2l}{\pi} \sqrt{\frac{qG_3}{l(1-\frac{1}{\mu l})}},\;\; 3:\;T_L=\frac{2l}{\pi}\sqrt{\frac{qG_3(\mu^2 l^2+27)}{8\mu l^2}}. \end{equation} The corresponding entropy using the Cardy formula $S=\frac{\pi^2}{3}c_{L/R} T_{L/R}$ reads \begin{eqnarray} 1:&&S=2\pi\sqrt{\frac{-ql^2}{6}\frac{3l}{2G}(1+\frac{1}{\mu l})},\cr 2:&&S=2\pi\sqrt{\frac{ql^2}{6}\frac{3l}{2G}(1-\frac{1}{\mu l})},\cr 3:&&S=2\pi \sqrt{\frac{ql^2}{6}\frac{12\mu l^2}{G(\mu^2l^2+27)}}, \end{eqnarray} which are compatible with those we have found in section two from the 2D point of view. \section{Conclusions} In this paper we have studied 2D Maxwell-dilaton gravity on the $AdS_2$ geometry in the presence of the higher order correction given by the Chern-Simons term. The model admits three distinct $AdS_2$ vacuum solutions characterized by the sign of the electric field. Using the entropy function formalism we have evaluated the entropy of the solutions. Note that at leading order, when the action is given by the Einstein-Hilbert action, the model has only one solution.
Adding the Chern-Simons term, the solution receives corrections which depend on the coefficient of the Chern-Simons term as well as the sign of the electric charge, leading to three different solutions. The sign-dependent nature of the corrections may be associated with the fact that the dual CFT is believed to be a chiral half of a 2D CFT. When the coefficient of the Chern-Simons term is set to zero, solution (1) degenerates into solution (2) while the third one disappears. In other words, the solutions (1) and (2) are Einstein solutions while the last one is not. Of course for particular values of $\mu l$ the third one degenerates into solution (2) as well. Following \cite{HS} we have studied the action of the conformal group in the model, where we have seen that in order to have consistent boundary conditions we have to work with a twisted energy momentum tensor. The twisted energy momentum tensor has non-zero central charge which should be associated with the central charge of the dual CFT. Requiring a consistent picture, we have been able to read off the corresponding central charge as well as the level of the $U(1)$ current. We have compared our solutions with those in 3D gravity by uplifting the solutions to three dimensions. The solutions (1) and (2) have been uplifted to a solution which is locally $AdS_3$ though globally it is $AdS_3$ with an identification. The third one has been uplifted to a solution which is known as warped $AdS_3$ \cite{strominger2} with an identification. Due to the identification, the resultant 3D solutions may be thought of as 3D extremal black holes. We have also determined the entropy of the extremal black holes, which is given by the Cardy formula using the obtained central charges.
The consistency of the results points toward the conjecture made in \cite{strominger2}, where the authors proposed that the 3D gravity on the warped $AdS_3$ geometry is dual to a 2D CFT with left and right handed central charges given by the third entries of equations \eqref{c1} and \eqref{c2}. The $AdS_3$ and warped $AdS_3$ solutions in 3D gravity are believed to be dual to 2D CFTs with $c_L$ and $c_R$ given by equations \eqref{c1} and \eqref{c2}. Therefore, in general, one might expect that from the two dimensional point of view we should have got four solutions corresponding to four different sectors which are obtained from 3D $AdS_3$ and warped $AdS_3$. But, as we have seen, in two dimensions only three of them can be realized. The missing one is the right handed sector of the warped $AdS_3$ solution with central charge given by the third one of \eqref{c2}. This means that if we consider an extremal black hole in warped $AdS_3$, there is only one possibility, in which the left movers survive. This is unlike an extremal black hole in $AdS_3$, where it could be either left or right handed with non-zero excitations of left or right mover states, respectively. It is worth mentioning that, as it was observed by the authors of \cite{compere}, when we study the asymptotic symmetry of the warped $AdS_3$ solution one gets a copy of the Virasoro algebra with central charge given by the third one of \eqref{c2}. This is exactly the one which cannot be realized from the 2D point of view. It would be interesting to elucidate the physics behind this special behavior of the warped $AdS_3$. It was shown in \cite{LSS} that TMG makes sense quantum mechanically only at $\mu l=1$, where we get chiral gravity. Although we have observed that $\mu l=1$ is a special point, it is not a priori clear from the 2D point of view why we should set $\mu l=1$.
Indeed, there are several examples in string theory of extremal black holes which, upon reduction to two dimensions, yield an action very similar to that in \eqref{actions}. In these cases the coefficient of the Chern-Simons term is usually fixed by a topological number and the charges of the black holes. As far as the black holes are concerned there are no conditions on the coefficient of the Chern-Simons term. It would be interesting to understand this point better. \vspace*{1cm} {\bf Acknowledgments} We would like to thank Farhad Ardalan for useful and illustrative discussions on $AdS_2/CFT_1$ correspondence. This work is supported in part by Iranian TWAS chapter at ISMO.
\section{Introduction} \label{sec:introduction} Throughout this paper, we discuss integer covering problems of the form \begin{align*} \min_{x \in \mathbb{Z}_+^n} \left\{ c^T x \mid Ax \geq r \right\} \tag{P} \label{LP:P} \end{align*} where $A \in \mathbb{R}^{m \times n}_+$, $r \in \mathbb{R}^m$, and $c\in \mathbb{R}_+^n$. We denote the row index set by $\mathcal{L}$ and the column index set by $E$. The entries of matrix $A$ are denoted by $a_{i,e}$ for row $i \in \mathcal{L}$ and column / item $e \in E$, and can be interpreted as the weight of element $e$ with respect to row / constraint $i$. We call the constraints $\{a_i^T x \geq r(i),\ i\in \mathcal{L}\}$ \emph{weighted covering constraints} since they ensure that the weighted sum of the multiplicities of items selected into a solution $x\in \mathbb{Z}_+^{|E|}$ covers $r(i)$, for each $i\in \mathcal{L}$. Throughout, we call $r(i)$ the \emph{rank of $i$}, and denote the support of $ i \in \mathcal{L}$ by $\text{supp}(i) = \{ e \in E : a_{i,e} > 0 \}$. Note that the non-negativity assumption on the cost vector $c \in \mathbb{R}_+^{|E|}$ is non-restrictive, as variables with non-positive objective value may be selected infinitely often, thus, either rendering constraints redundant or rendering the problem instance unbounded. For ease of notation, we assume that there is a trivial row $i \in \mathcal{L}$ with $supp(i) = \emptyset$ and $r(i) \leq 0$. Without loss of generality, we also assume that there is some row $i' \in \mathcal{L}$ with $supp(i') = E$. Many interesting combinatorial optimization problems can be formulated in this manner: subset cover, cut covering, optimization over contra-polymatroids, and the knapsack cover problem, to mention just a few. Consider, for example, the special case where $A$ is the incidence matrix of the family of all subsets $S$ of a finite set $E$. That is, $\mathcal{L} = 2^E$ and, for each subset $S \subseteq E$, $a_{S,e} = 1$ if $e \in S$, and $a_{S,e} = 0$, otherwise.
If $r: 2^E \to \mathbb{R}$ satisfies the three conditions (i) $r(\emptyset) = 0$, (ii) $r(S) \leq r(T)$ whenever $S \subset T$, and (iii) $r(S) + r(T) \leq r(S \cap T) + r(S \cup T)$ for all $S, T\subseteq E$, the polytope $\{x\in \mathbb{R}_+^{|E|}\mid Ax\ge r\}$ corresponding to system $(A,r)$ is called a \emph{contra-polymatroid}. The well-known primal-dual \emph{(contra-) polymatroid greedy algorithm} \cite{edmonds1970submodular} determines for each cost function $c$ an optimal integral solution for $(\ref{LP:P})$ and its dual linear program \begin{align*} \max_{y \in \mathbb{R}_+^{|\mathcal{L}|}} \left\{ y^T r \mid \sum_{i \in \mathcal{L}} a_{i,e} y_i \leq c_e \; \forall e \in E \right\}. \tag{D} \label{LP:D} \end{align*} The polymatroid greedy algorithm will be described below. Conditions (ii) and (iii) are usually called \emph{monotonicity} and \emph{supermodularity}, respectively. The goal of this paper is to develop and analyze an extension of the primal-dual polymatroid greedy algorithm towards more general systems $(A,r)$ which may consist of an arbitrary integral matrix $A \in \mathbb{R}_+^{|\mathcal{L}| \times |E|}$ and an arbitrary rank function $r:\mathcal{L} \to \mathbb{R}$. Our primal-dual algorithm will return primal and dual candidate solutions, $x$ and $y$, with the following properties: $y$ is a feasible solution to the dual problem (D), and $x$ is an integral vector. We establish conditions on system $(A,r)$ which guarantee 1) feasibility of the primal solution $x$ and 2) a bounded performance guarantee. We distinguish a primal and dual phase of the algorithm. During the dual phase, the algorithm constructs a feasible dual solution $y^*$ together with a collection of \emph{bottleneck elements} $E^* \subseteq E$ corresponding to a set of tight dual constraints with respect to $y^*$. The primal phase assigns non-negative integral values to the bottleneck elements in such a way that the primal constraints are fulfilled at least on the support of $y^*$.
\sectionheadline{Dual phase} In general, given a feasible dual solution $y$, we call $i \in \mathcal{L}$ \emph{augmentable} if there exists some positive amount $\epsilon > 0$ such that $y + \epsilon \chi_i$ remains a feasible solution (here, as usual, $\chi_i\in \{0,1\}^{|\mathcal{L}|}$ is all-zero, except for component $i$, which is $1$). Recall the dual phase of the polymatroid greedy algorithm: Starting with the all-zero vector $y\equiv 0$, the algorithm iteratively selects an augmentable $i \in \mathcal{L}$ of largest rank (as long as augmentable variables exist), and raises $y_i$ as far as possible, that is, until the dual constraint of some element $e \in E \setminus E^*$ becomes tight. If we define the support of $i \in \mathcal{L}$ by $S_i = supp(a_i)$, the algorithm always selects an augmentable $i \in \mathcal{L}$ with inclusion-wise maximal set $S_i$. A similar dual greedy approach can be applied to general packing problems of type (\ref{LP:D}) with arbitrary matrix $A \in \mathbb{R}_+^{|\mathcal{L}| \times |E|}$. In fact, it is probably the most naive approach one can think of: Take some precedence rule $(\mathcal{L}, \preceq)$ on the row index set $\mathcal{L}$ such that the rank is monotone with respect to the precedence rule, that is, $i \preceq j$ implies $r(i) \leq r(j)$. Apply the following iterative procedure. \begin{enumerate} \item Initially, let $y^* \equiv 0$. \item While $\mathcal{L} \neq \emptyset$ \begin{enumerate} \item Select $i \in \mathcal{L}$ with $r(i)$ maximum. If there are multiple choices, select one which is maximum with respect to $(\mathcal{L}, \preceq)$. \label{alg:dual-phase:select-row} \item STOP if $r(i) \leq 0$. \label{alg:dual-phase:stop} \item Raise $y^*_i$ until some element $e^* \in E \setminus E^*$ becomes tight. \item Add $e^*$ to $E^*$ and iterate with $\mathcal{L} = \{i \in \mathcal{L} : a_{i, e^*} = 0\}$. 
\label{alg:dual-phase:lattice-subtract} \end{enumerate} \end{enumerate} Certainly, this approach always returns a feasible dual solution $y^*$. The performance of this algorithm can, however, be arbitrarily bad, even in case of binary matrices. In order to get upper bounds on the performance guarantee of the algorithm, let us first extend the dual phase by an associated primal phase. \sectionheadline{Primal phase} Note that, for each problem of type (D), the dual greedy algorithm returns a feasible dual solution $y^* \in \mathbb{R}_+^{|\mathcal{L}|}$ whose support forms a sequence $$\text{supp}(y^*)=\{i_1, \dots, i_{\ell+1}\}$$ in the order in which variables were considered during the dual phase. Here, we assume that $i_{\ell+1}$ is actually not raised but used in the STOP criterion \ref{alg:dual-phase:stop}). Recall that we assumed that there is always some trivial row in $A$ with $r(i) \leq 0$ and $supp(i) = \emptyset$. Thus, the STOP criterion is reached after at most $|E|$ iterations. Moreover, the choice of variables to be increased during the dual phase implies $r(i_1) \geq \dots \geq r(i_\ell) > 0 \geq r(i_{\ell+1})$. Let $E^*=\{e_1, \dots, e_\ell\} \subseteq E$ be the associated bottleneck-elements satisfying $$e_j\in \text{supp}(a_{i_j}) \quad \mbox{ and } \quad a_{i_k, e_j}=0 \quad \forall 1 \leq j < k \leq \ell + 1.$$ In the special case where $(A,r)$ describes a contra-polymatroid, the sequence $\{i_1, \dots, i_{\ell+1}\}$ corresponds to a chain of sets $E_{\ell+1} \subset \dots \subset E_1$ with $E_1 = E$ and $E_{j + 1} = E_j \setminus {\{e_j\}}$ for $j = 1, \dots, \ell$. Moreover, the chain satisfies $r(E_\ell) > 0 = r(E_{\ell+1})$. The primal phase of the polymatroid greedy algorithm simply constructs a primal vector $x \in \mathbb{Z}_+^{|E|}$ by setting $x_{e_\ell} = r(E_\ell)$ and $x_{e_j} = r(E_j) - r(E_{j+1})$ for $j = \ell-1$ down to $j = 1$. 
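Both phases, specialized to the contra-polymatroid case just described, can be sketched in a few lines of code. The following Python toy is our own illustration, not from the paper; the instance with rank function $r(S)=|S|^2$ (monotone and supermodular) and the particular cost values are assumptions made purely for the example.

```python
from itertools import combinations

# Toy contra-polymatroid instance: E = {0,1,2}, rows L = 2^E with
# incidence weights a_{S,e} = 1 iff e in S, and rank r(S) = |S|^2.
E = [0, 1, 2]
rows = [frozenset(c) for k in range(len(E) + 1) for c in combinations(E, k)]
r = {S: len(S) ** 2 for S in rows}
c = {0: 4.0, 1: 7.0, 2: 6.0}           # costs c_e (illustrative values)

# --- dual phase: raise y along a chain of rows until elements get tight ---
y = {S: 0.0 for S in rows}
slack = dict(c)                         # slack_e = c_e - sum_{S: e in S} y_S
active = list(rows)
chain, bottlenecks = [], []
while True:
    S = max(active, key=lambda T: r[T])     # augmentable row of largest rank
    chain.append(S)
    if r[S] <= 0:
        break
    eps = min(slack[e] for e in S)          # raise y_S until some e is tight
    y[S] += eps
    for e in S:
        slack[e] -= eps
    e_star = min(S, key=lambda e: slack[e])  # a newly tight bottleneck element
    bottlenecks.append(e_star)
    active = [T for T in active if e_star not in T]   # rows avoiding e*

# --- primal phase: x_{e_j} = r(E_j) - r(E_{j+1}) along the resulting chain ---
x = {e: 0 for e in E}
for j, e in enumerate(bottlenecks):
    x[e] = r[chain[j]] - r[chain[j + 1]]

feasible = all(sum(x[e] for e in S) >= r[S] for S in rows)
primal_cost = sum(c[e] * x[e] for e in E)
dual_value = sum(y[S] * r[S] for S in rows)
```

On this instance the greedy chain is $E \supset \{1,2\} \supset \{1\} \supset \emptyset$, the primal solution is feasible, and the primal cost equals the dual objective value, reflecting Edmonds' optimality result for (contra-)polymatroids; for general systems the same two phases only give the approximation guarantees discussed below.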
A natural extension of the polymatroid greedy algorithm towards more general systems $(A,r)$ is the following procedure: Given the sequence $\{i_1, \dots, i_{\ell+1}\} \subseteq \mathcal{L}$ and $E^* = \{e_1, \dots, e_\ell\} \subseteq E$ as constructed during the dual phase, set \begin{align} x^*_{e_j} = \left\lceil \frac{r(i_j)^+ - r(i_{j+1})^+}{a_{i_j,e_j}} \right\rceil \quad j = 1, \dots, \ell. \label{alg:primal-phase:set-x} \end{align} Here, $r(i)^+ = \max\{r(i), 0\}$ is the positive part of the rank. Our primal-dual greedy algorithm for weighted covering and packing problems of type (\ref{LP:P}) and (\ref{LP:D}) consists of the concatenation of the dual and primal phase as described above. In this paper, we discuss properties of system $(A,r)$ that ensure the following two properties of the primal-dual greedy algorithm: (1) the primal greedy solution $x^*$ is feasible and (2) it has a bounded performance guarantee. We also provide complementary complexity-theoretic results and lower bounds on integrality gaps under the assumed properties. \sectionheadline{Related work} Integrality of polyhedra described by systems $(A,r)$ plays an important role in combinatorial optimization. Probably the most famous condition on a system $(A,r)$ guaranteeing integrality of the polyhedron is total unimodularity of the matrix $A$ together with integrality of $r$, which was discussed by Hoffman \cite{hoffman1976total}. The same effect appears if $(A,r)$ is totally dual integral, a condition introduced by Giles and Pulleyblank \cite{giles1979total}. Instead of solving a linear program to optimality via a general purpose linear programming algorithm, the primal-dual method has been widely used in order to obtain optimal solutions. Many classical algorithms can also be cast as primal-dual methods, see e.g.\ Williamson and Shmoys \cite{williamson2011design} or Papadimitriou and Steiglitz \cite{papadimitriou1982combinatorial} for an overview of such connections.
For matrices $A$ with coefficients in $\{-1,0,1\}$, many structural properties ensuring optimality of the primal-dual method were studied. Optimization over polymatroids is probably one of the most famous results of this type due to Edmonds \cite{edmonds1970submodular}. Following this result, many generalizations were established, such as \cite{faigle2009general,faigle2000order,faiglegreedy,faigle2012ranking,faigle2010two,frank1999increasing,fujishige1984note,fujishige2004dual}, to mention just a few. Sub- or supermodularity of $A$ (and/or $r$) usually plays an important role in these optimality results. Almost four decades ago, the primal-dual method was first used in order to obtain approximation algorithms for integer programs. Bar-Yehuda and Even obtained a primal-dual approximation algorithm for vertex cover \cite{bar1981linear}. Following this work, network design problems were considered e.g.\ by Agrawal et al. \cite{agrawal1995trees} or Goemans and Williamson \cite{goemans1995general}. The latter introduced a fairly general framework which applies to many network design problems modelled via so-called proper functions. See e.g.\ Bertsimas and Tao \cite{bertsimas1998valid}, Williamson and Shmoys \cite{williamson2011design} or Vazirani \cite{vazirani2013approximation} for surveys. Still, most approximation results in this direction consider only matrices $A$ with coefficients in $\{-1,0,1\}$. Carnes and Shmoys \cite{carnes2008primal} considered the knapsack cover problem and showed that a formulation with strengthened inequalities can be solved via the primal-dual method. The strengthening is due to Carr et al. \cite{carr2000strengthening}. Bar-Noy et al.\ \cite{bar2001unified} solved the flow cover on a line problem via a local-ratio technique. This technique can equivalently be seen as a primal-dual approach.
\sectionheadline{Our contribution and structure of the paper} \begin{table}[tb] \begin{center} \begin{tabular}{l|l l} & \multicolumn{2}{c}{Approximation factor} \\ Problem & Best known & Our bound \\ \hline Optimization over contra-polymatroids & 1 \cite{edmonds1970submodular} & 1 $^*$ \\ Knapsack cover & 2 \cite{carnes2008primal,mccormick2016primal} & 2 $^*$ \\ Subset cover & $\log (\max_i |T_i|)$ \cite{chvatal1979greedy} & $\max_i |T_i|$ $^*$ \\ $p$-Contra-polymatroid intersection & $p$ \cite{jenkyns1976efficacy} & $p$ $^\dagger$ \\ Flow cover on $k$ lines & $k = 1: 4$ \cite{bar2001unified} & $4k$ $^\dagger$ \\ & $k > 1:$ none & \\ Knapsack cover with precedence constraints & $w$ \cite{mccormick2016primal} & $w$ $^\dagger$ \\ Generalized Steiner tree & $2$ \cite{goemans1997primal} & $2$ $^\dagger$ \\ Minimum multicut on trees & $2$ \cite{garg1997primal} & $2$ $^\dagger$ \\ \end{tabular} \caption{Exemplary results derivable from this work. $^*$ via Theorem \ref{thm:sv:greedy-approximation}, $^\dagger$ via Theorem \ref{thm:ps:greedy-approximation}.} \label{table:intro:result-summary} \end{center} \end{table} We call a system $(A,r)$ a \emph{greedy system}, if it satisfies certain properties which are formalized in Section \ref{sec:simple-version}. Intuitively, these properties can be seen as generalizations of properties that define matroids. In contrast to matroids, however, our results do not necessarily provide optimum solutions but only approximation guarantees. Despite the fact that greedy systems can have an unbounded integrality gap, we can show that a careful truncation of the coefficients of matrix $A$ yields strong approximation results. We provide an approximation factor of $(2\delta + 1)$, or $2 \delta$ if $r$ is non-negative. Under certain conditions, the additional factor of $2$ vanishes. The characteristic $\delta$ depends on the range of coefficients in the truncated matrix, which is small in many applications.
We also show that there are greedy systems such that the ratio between an optimum dual solution and the dual solution constructed by the greedy algorithm is $\delta$. This implies that the dependency on $\delta$ in our approximation ratio cannot be improved (up to constant factors) unless the type of solution computed in the dual phase is changed. Finally, we show that the properties required for a greedy system are necessary in order to ensure that the discussed greedy algorithm always obtains a feasible solution. To be able to further increase our modelling power, we provide a generalization in Section \ref{sec:product-version}. The generalization can be seen as a composition of a system $(A,r)$ from multiple greedy systems on the same column set. We call such a system a greedy product system. For greedy product systems, we can obtain similar results, proving approximation guarantees of $k (\delta + 1)$, or $k \delta$ if the truncation is a binary matrix. Here, $k$ is a characteristic that is problem-specific, but small in the discussed applications. Again, we can show that the dependency on $k$ cannot be improved unless the type of solution constructed in the dual phase is changed. Table \ref{table:intro:result-summary} contains some exemplary results derivable from this paper. A detailed discussion of the table including a description of the problems and all proofs can be found in the appendix. Although the result regarding subset cover does not coincide with the best-known result, this discrepancy is perhaps to be expected. To the best of our knowledge, no logarithmic approximation guarantee based on a primal-dual analysis for subset cover is known. Instead, primal averaging arguments are commonly used. The result for flow cover on $k$ lines was not known before and looks like a promising direction for further modelling applications.
\section{Sufficient conditions for feasibility and bounded performance} \label{sec:simple-version} In this section, we will discuss sufficient conditions for system $(A,r)$ in order to ensure that the primal solution obtained by the primal-dual greedy algorithm (from the previous section) is always feasible and has a bounded performance guarantee. Throughout this section, we assume that some given partial order $(\mathcal{L},\preceq)$ is fixed which is used in order to choose a dual variable to be increased. We call a system $(A,r)$ a \emph{greedy system} (with respect to $(\mathcal{L},\preceq)$), if it satisfies the following properties. Whenever we talk about a greedy system in the remainder of this work, we always assume that this is with respect to this fixed partial order. In order to simplify notation, we will use $S \in \mathcal{L}$ to denote the row $i$ with $supp(i) = S \subseteq E$. One of the subsequent properties will ask for the support of rows to be unique, hence, we can talk about \emph{the} rows. \begin{enumerate-prop} \item $r$ is monotone non-decreasing on $(\mathcal{L}, \preceq)$: $r(S) \leq r(T)$ for all $S \preceq T$. \label{prop:first} \label{prop:r-monotone} \item For each element $e \in E$, $a_{*,e}$ is monotone non-decreasing on $(\mathcal{L}, \preceq)$: $a_{S,e} \leq a_{T,e}$ for all $S \preceq T$. \label{prop:A-monotone} \item $(\mathcal{L},\preceq)$ is a modular lattice with join $\vee$ and meet $\wedge$ such that $i,j \in \mathcal{L}$ with $i \neq j$ implies $supp(i) \neq supp(j)$. Moreover, we require that for all $i,j \in \mathcal{L}$ and $e \in E$ it is true that $e \not\in supp(i) \cup supp(j) \Rightarrow e \not\in supp(i \vee j)$. 
\label{prop:lattice-modular} \item The system $(A,r)$ is \emph{weighted supermodular} on $(\mathcal{L},\preceq)$: \label{prop:A-r-coupling} $$ \frac{r(T) - r(S \wedge T)}{a_{T,e}} \leq \frac{r(S \vee T) - r(S)}{a_{S \vee T,e}} \quad \forall S,T \in \mathcal{L}, e \in T \setminus (S \wedge T).$$ \label{prop:last} \end{enumerate-prop} \vspace*{-0.5cm} In this paper, we will often say that matrix $A$ is monotone; by this, we mean that \ref{prop:A-monotone} holds. A partial order $(\mathcal{L}, \preceq)$ is called a lattice if any two elements $i,j \in \mathcal{L}$ have a unique least common upper bound (join $i \vee j = \inf\{k \in \mathcal{L} : i,j \preceq k \}$) and a unique greatest common lower bound (meet $i \wedge j = \sup\{ k \in \mathcal{L} : k \preceq i,j \}$). In the case of the Boolean lattice $(2^E, \subseteq)$, the join and meet are set union and intersection, respectively. A lattice is called modular if for all $i,j,k \in \mathcal{L}$ with $i \preceq k$ the following holds: $i \vee (j \wedge k) = (i \vee j) \wedge k$. A subset $I \subseteq \mathcal{L}$ is called a \emph{chain} if the ordering relation $\preceq$ yields a total order on $I$. A chain is called \emph{dense} if there is no $k \in \mathcal{L} \setminus I$ which can be added to $I$ without violating the chain property. Modularity will be an important property, as it implies the following: Let $i,j \in \mathcal{L}$ and consider any dense chains $I_i \subseteq \{k \in \mathcal{L} : i \preceq k \preceq i \vee j \}$ and $I_j \subseteq \{ k \in \mathcal{L} : i \wedge j \preceq k \preceq j \}$. Modularity implies that there is an isomorphism $\psi: I_i \rightarrow I_j$ (cf.\ Theorem 13 in \cite{birkhoff1948lattice}). The existence of this isomorphism will help us prove feasibility and a bounded approximation factor of the constructed primal solution. For more information on lattice theory, see e.g.\ \cite{birkhoff1948lattice}.
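As a concrete illustration (our own sketch, not part of the paper), the following snippet instantiates join, meet, and the modular law on the Boolean lattice $(2^E, \subseteq)$ for a small ground set:

```python
from itertools import combinations

def join(i, j):  # least common upper bound in (2^E, ⊆): set union
    return i | j

def meet(i, j):  # greatest common lower bound: set intersection
    return i & j

E = frozenset(range(4))
subsets = [frozenset(s) for k in range(len(E) + 1)
           for s in combinations(E, k)]

# The Boolean lattice is modular: for i ⊆ k,
# i ∨ (j ∧ k) = (i ∨ j) ∧ k holds for every j.
for i in subsets:
    for j in subsets:
        for k in subsets:
            if i <= k:
                assert join(i, meet(j, k)) == meet(join(i, j), k)
```

Since the Boolean lattice is distributive, and every distributive lattice is modular, the assertion holds for all such triples.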
Note that \ref{prop:A-monotone} and the fact that the support of row $S \in \mathcal{L}$ equals $S$ imply that the support along chains is monotonically increasing, that is, $S \preceq T$ implies $S \subseteq T$. For a row $S \in \mathcal{L}$ and an element $e \in E$, we use the notation $\phi_e(S) = \max \{ T \in \mathcal{L} \mid T \preceq S, e \not\in T \}$ to denote the maximum row $T \preceq S$ which does not contain element $e$ in its support. Observe that $\phi$ always returns a unique element due to Lemma \ref{lem:sv:sublattice}: removing an element from the lattice yields a sublattice of $\mathcal{L}$. Note that $\phi_e(S) = S$ if $e \not\in S$. The function $\phi_e(S)$ has a strong connection with Line~\ref{alg:dual-phase:lattice-subtract}) in the dual phase of the greedy algorithm: If $S_{\ell+1} \prec \dots \prec S_1$ is the support of the dual solution obtained with bottleneck elements $e_i, 1 \leq i \leq \ell$, then $S_{i+1} = \phi_{e_i}(S_i)$. To see this, it is helpful to realize that the restriction in Line~\ref{alg:dual-phase:lattice-subtract}) describes a sublattice of $\mathcal{L}$. Since $\mathcal{L}$ is a lattice, it always contains a unique maximum element (the join of all elements). Hence, the row chosen in Line~\ref{alg:dual-phase:select-row}) will also always be unique, and due to \ref{prop:r-monotone}, the maximum row will also have the maximum rank value. In other words, instead of computing the sublattice in iteration $i$ of Line~\ref{alg:dual-phase:lattice-subtract}) explicitly, we can select $S_{i+1} = \phi_{e_i}(S_i)$ as the row to be considered in the subsequent iteration. We will use two observations in the following sections. These ensure that the step in Line \ref{alg:dual-phase:lattice-subtract}) maintains the properties of a greedy system. \begin{restatable}{lemma}{svObsPhiOrderPreserving} Let $S \preceq T \in \mathcal{L}$ and $e \in E$. Then $\phi_e(S) \preceq \phi_e(T)$.
\end{restatable} \begin{proof} See appendix for the proof. \end{proof} \begin{restatable}{lemma}{svObsSublattice} Let $(A,r)$ be a greedy system and $e \in E$. Then the system restricted to $\mathcal{L}' = \{ L \in \mathcal{L} : e \not\in L \}$ is a greedy system. \label{lem:sv:sublattice} \end{restatable} \begin{proof} See appendix for the proof. \end{proof} \sectionheadline{Feasibility} Let us assume that the greedy algorithm returns the dual solution $y^*$ with support (in the order the sets occurred during the dual phase) $S_1, \dots, S_{\ell+1}$ and bottleneck elements $e_1,\dots,e_\ell$, and let $x^*$ be the corresponding primal vector. Then $r(S_{\ell+1}) \leq 0 < r(S_\ell)$. First, note that \ref{prop:r-monotone}, \ref{prop:A-monotone} and the choice of $S$ in every iteration ensure that $S_{\ell+1} \prec S_\ell \prec \dots \prec S_1$ forms a chain in $(\mathcal{L},\preceq)$. Moreover, the rank differences considered to construct $x^*$ are always non-negative. Hence, $x^*$ will be a non-negative vector. The following Lemma \ref{lem:sv:set-feasible} shows that it will also be a feasible primal solution. Before we prove feasibility of $x^*$, we state two simple observations. The first concerns the marginal-increase version of supermodularity; the second concerns the behavior of elements just before the rank becomes negative. \begin{restatable}{lemma}{obsSvMarginalIncreaseSupermodularity} Let $S \preceq T \in \mathcal{L}$ and let $e \in S$. Then $\frac{r(S) - r(\phi_e(S))}{a_{S,e}} \leq \frac{r(T) - r(\phi_e(T))}{a_{T,e}}$. \label{lem:sv:marginal-increase-supermodularity} \end{restatable} \begin{proof} See appendix for the proof. \end{proof} \begin{restatable}{lemma}{obsSvMonotoneAroundZero} Let $S \preceq T \in \mathcal{L}$ and let $e \in S$ such that $r(S), r(T) \geq 0$ and $r(\phi_e(S)), r(\phi_e(T)) \leq 0$.
Then $\frac{r(S)}{a_{S,e}} \leq \frac{r(T)}{a_{T,e}}$. \label{lem:sv:monotone-around-zero} \end{restatable} \begin{proof} See appendix for the proof. \end{proof} \begin{restatable}{lemma}{lemSvFeasible} Let $(A,r)$ be a greedy system and let $(x^*,y^*)$ be obtained by the primal-dual greedy algorithm. Then $x^*$ is feasible for $(\ref{LP:P})$. \label{lem:sv:set-feasible} \end{restatable} \begin{proof} Let $S_{\ell + 1} \prec \dots \prec S_1$ be the support of $y^*$. Again, we assume that $S_{\ell+1}$ was used during the STOP criterion, that is, $r(S_{\ell+1}) \leq 0 < r(S_{\ell})$. Let $T \in \mathcal{L}$ be any row with $r(T) > 0$. We will show that $a_T x^* \geq r(T)$ holds, where $a_T$ denotes the row of $A$ induced by index $T$. Let $e_1,\dots,e_\ell$ be the bottleneck elements and let us consider the following chain: $T'_1 = T$, $T'_{i+1} = \phi_{e_i}(T'_i)$ for $i = 1,\dots,\ell$. Then $r(T'_{\ell+1}) \leq 0$, as $T'_{\ell+1} \preceq S_{\ell+1}$. Moreover, for all $1 \leq i \leq \ell$, $T'_i \preceq S_i$ holds due to modularity of the lattice. Note that the chain $T'_i$ may contain the same element multiple times, that is, $T'_i = T'_{i+1}$ may hold for some $i$. In this case, $e_i \not\in T'_i$. Let $\alpha = \max\{1 \leq i \leq \ell : r(T'_i) > 0 \}$ be the maximum index such that $T'_\alpha$ has positive rank. An important observation is that $a_{T'_\alpha,e_\alpha} > 0$: if this were not the case, then $e_\alpha \not\in T'_\alpha$, hence $\phi_{e_\alpha}(T'_\alpha) = T'_\alpha$, that is, $\alpha$ could be increased. Moreover, let $I = \{1 \leq i < \alpha : a_{T'_i,e_i} > 0\}$ be the set of indices of the distinct chain elements prior to $T'_\alpha$. Then \begin{align*} a_T x^* &= \sum_{i=1}^{\ell} a_{T,e_i} x^*_{e_i} \overset{\ref{prop:A-monotone}}{\geq} \sum_{i=1}^{\ell} a_{T'_i,e_i} x^*_{e_i} \geq \sum_{i \in I} a_{T'_i,e_i} \frac{r(S_i)^+ - r(S_{i+1})^+}{a_{S_i,e_i}} + a_{T'_\alpha,e_\alpha} x^*_{e_\alpha} \\ &\overset{Lem.
\ref{lem:sv:marginal-increase-supermodularity}}{\geq} \sum_{i \in I} a_{T'_i,e_i} \frac{r(T'_i) - r(T'_{i+1})}{a_{T'_i,e_i}} + a_{T'_\alpha,e_\alpha} x^*_{e_\alpha} = r(T'_1) - r(T'_{\alpha}) + a_{T'_\alpha,e_\alpha} x^*_{e_\alpha} \end{align*} The second inequality uses the definition of $x^*$ and of the index set $I$. The third inequality additionally uses the fact that only $r(S_{\ell+1})$ is possibly negative; hence, all positive parts in the sum can be dropped. It remains to bound the last term. If $\alpha = \ell$, we have \begin{align*} a_{T'_\alpha,e_\alpha} x^*_{e_\alpha} \geq a_{T'_\ell,e_\ell} \frac{r(S_\ell)}{a_{S_\ell,e_\ell}} \overset{Lem. \ref{lem:sv:monotone-around-zero}}{\geq} a_{T'_\ell,e_\ell} \frac{r(T'_\ell)}{a_{T'_\ell,e_\ell}} = r(T'_\ell) = r(T'_\alpha), \end{align*} which implies $a_T x^* \geq r(T'_1) = r(T)$. Note that Lemma \ref{lem:sv:monotone-around-zero} is applicable, as $r(S_{\ell+1}) \leq 0$ and, moreover, $\phi_{e_\ell}(T'_\ell) \preceq S_{\ell+1}$, which implies by \ref{prop:r-monotone} that $r(\phi_{e_\ell}(T'_\ell)) \leq r(S_{\ell+1}) \leq 0$. If $\alpha < \ell$, then \begin{align*} a_{T'_\alpha,e_\alpha} x^*_{e_\alpha} \geq a_{T'_\alpha,e_\alpha} \frac{r(S_\alpha) - r(S_{\alpha+1})}{a_{S_\alpha, e_\alpha}} \overset{Lem. \ref{lem:sv:marginal-increase-supermodularity}}{\geq} a_{T'_\alpha,e_\alpha} \frac{r(T'_\alpha) - r(\phi_{e_\alpha}(T'_\alpha))}{a_{T'_\alpha, e_\alpha}} = r(T'_\alpha) - r(\phi_{e_\alpha}(T'_\alpha)), \end{align*} which concludes the proof, as $r(\phi_{e_\alpha}(T'_\alpha)) \leq 0$ by the choice of $\alpha$. \end{proof} Unfortunately, formulations satisfying \ref{prop:first} - \ref{prop:last} may have an unbounded integrality gap. The integrality gap of an integer program is the ratio between the value of an optimum integral solution and the value of an optimum fractional solution.
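As a toy illustration of this definition (our own example, not the paper's), the gap of a single-constraint covering problem can be computed directly: the fractional optimum puts all weight on the most cost-efficient element, while the integer optimum is found by enumeration.

```python
from itertools import product

# min { x1 + x2 : 2 x1 + 3 x2 >= 4, x >= 0 }  (a toy instance of ours)
c, a, r = [1, 1], [2, 3], 4

# Fractional optimum: scale the most cost-efficient element alone.
frac_opt = r * min(ci / ai for ci, ai in zip(c, a))          # 4/3

# Integer optimum by brute force over a small box.
int_opt = min(sum(ci * xi for ci, xi in zip(c, x))
              for x in product(range(5), repeat=2)
              if sum(ai * xi for ai, xi in zip(a, x)) >= r)  # 2

print(int_opt / frac_opt)   # integrality gap 1.5
```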
Consider for example the knapsack cover instance $\min \{ D x_1 + x_2 \mid D x_1 + (D-1) x_2 \geq D, x \in \{0,1\}^2 \}.$ It can be reformulated in such a way that the explicit variable upper bounds are no longer required. In its reformulated version, the system satisfies \ref{prop:first} - \ref{prop:last} and has an integrality gap of $D$. To obtain a stronger LP relaxation of (\ref{LP:P}), we truncate the coefficients of matrix $A$. This yields a stronger relaxation without cutting off any integer feasible solutions. \begin{restatable}{definition}{defSvTruncation} Let $(A,r)$ be a greedy system and define $A' \in \mathbb{R}_+^{|\mathcal{L}| \times |E|}$ with coefficients as follows. For $S \in \mathcal{L}$ and $e \in E$, set $a'_{S,e} = \min\{ a_{S,e}, r(S)^+ - r(\phi_e(S))^+ \}$. We call the system $(A',r)$ the \emph{truncation} of $(A,r)$. \label{def:sv:truncation} \end{restatable} The truncation ensures that the coefficient $a'_{S,e}$ of an element $e$ with respect to row $S$ reflects at most the difference between the positive parts of the ranks $r(S)$ and $r(\phi_e(S))$. The row $\phi_e(S)$ is chosen as the first row below $S$ in which the coefficient of $e$ vanishes. Intuitively, variables $x_f, f \in E$ with a positive coefficient $a'_{\phi_e(S),f}$ have to ensure that the residual rank $r(\phi_e(S))$ is covered; thus, larger coefficients $a'_{S,e}$ should not help. We will now prove that the truncated system \begin{align*} \min_{x \in \mathbb{Z}^{|E|}_+} \{ c^T x \mid A'x \geq r \} \tag{T} \label{LP:T} \end{align*} contains the same integer feasible points as the original system (\ref{LP:P}). \begin{restatable}{lemma}{lemSvTruncationFeasible} Let $(A,r)$ be a greedy system with truncation $(A',r)$ and let $x \in \mathbb{Z}_+^{|E|}$. Then $x \in (\ref{LP:P})$ if and only if $x \in (\ref{LP:T})$. \label{lem:sv:truncation-feasible} \end{restatable} \begin{proof} See appendix for the proof.
\end{proof} The truncated system (\ref{LP:T}) no longer necessarily satisfies \ref{prop:A-r-coupling}. However, $A'$ is still monotone and both polyhedra describe the same integer points. Hence, one might ask whether we can still apply the greedy algorithm to (\ref{LP:T}) in order to obtain a feasible solution. This is indeed true, as the following lemma shows. \begin{restatable}{lemma}{lemRfGreedyTruncationFeasible} The greedy algorithm applied to the truncation (\ref{LP:T}) of a greedy system $(A,r)$ obtains a feasible primal solution to (\ref{LP:T}) and (\ref{LP:P}). \label{lem:sv:greedy-truncation-feasible} \end{restatable} \begin{proof} See appendix for the proof. \end{proof} Note that, in the case of the knapsack cover problem, the truncation coincides with the polyhedron discussed in \cite{carnes2008primal}, which has an integrality gap of at most 2. One might hope that the integrality gap of the truncation is always bounded by a small constant. Let us, however, proceed with some negative results before we derive an approximation guarantee. \sectionheadline{Inapproximability and integrality gaps} We can construct simple examples with an integrality gap linear in the number of elements. Moreover, we can show that one cannot expect a $(1- o(1)) \log n$ approximation unless $NP \subseteq DTIME(n^{O(\log\log n)})$. \begin{restatable}{proposition}{prpSvLogInapproximable} The subset cover problem can be modeled in the form (\ref{LP:P}) as a greedy system $(A,r)$. Hence, no $(1-o(1)) \log n$ approximation for (\ref{LP:T}) exists unless $NP \subseteq DTIME(n^{O(\log\log n)})$. \label{prp:sv:log-inapproximable} \end{restatable} \begin{proof} See appendix for the proof. \end{proof} \begin{restatable}{proposition}{prpSvLinearGap} There exists a family of instances such that the truncation (\ref{LP:T}) has an integrality gap linear in the number of elements. \label{prp:sv:linear-gap} \end{restatable} \begin{proof} See appendix for the proof.
\end{proof} \sectionheadline{Approximation guarantee} The preceding two results might suggest that the truncation does not help to obtain general approximation guarantees. A closer look at the required properties of $(A,r)$, however, reveals the underlying issue: in both constructions, the efficiency of elements drops drastically before vanishing. The instance considered in Proposition \ref{prp:sv:linear-gap} is very simple and defined on the Boolean lattice $\mathcal{L} = 2^E$ with $|E| = n$. Each element $e \in E$ has the same weight $a_{S,e} = 2^{|E|}$ for all $S \subseteq E, e \in E$, and the rank function is symmetric and defined as $r(S) = 2^n(2^{-(n-|S|)} - 2^{-\frac{n}{2}})$. That is, $r(S)$ for $S \in \mathcal{L}$ is positive if and only if $|S| > \frac{n}{2}$. Moreover, the marginal differences $r(S) - r(\phi_e(S))$ decrease exponentially with decreasing cardinality of the set $S$. This implies that the truncation has large coefficients $a'_{E,e}$ and very small coefficients $a'_{S,e}$ for $S \in \mathcal{L}$ with $|S| = \frac{n}{2} + 1$ (assuming $n$ even). In particular, $a'_{E,e} \gg a'_{S,e}$ holds for these sets, which poses an obstacle to small approximation factors. A similar effect may appear if the ratio between coefficients in the matrix $A$ is large, that is, if $a_{E,e} \gg a_{S,e} > 0$ holds for some $S \in \mathcal{L}$ and $e \in E$. Let us consider the following parameter as a measure of the range of efficiency of elements. Given $S \in \mathcal{L}$ and $e \in E$, define \begin{align*} \delta_{S,e} = \begin{cases} \frac{a'_{E,e}}{a'_{S,e}} & a'_{S,e} > 0 \text{ and } (r(\phi_e(S)) \geq 0 \text{ or } a'_{S,e} = a_{S,e}), \\ 1 & \text{otherwise}, \end{cases} \end{align*} and let $\delta = \max \{ \delta_{S,e} : S \in \mathcal{L}, e \in E \}$. Then $\delta$ can be used to bound the ratio between coefficients in matrix $A'$.
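The following sketch (ours) evaluates the truncation and $\delta$ on the symmetric instance above for $n = 6$; on the Boolean lattice, $\phi_e(S)$ is simply $S \setminus \{e\}$, and $\delta$ evaluates to $2^{n/2-1}$.

```python
from itertools import combinations

n = 6
E = frozenset(range(n))
L = [frozenset(s) for k in range(n + 1) for s in combinations(E, k)]

def r(S):            # symmetric rank: positive iff |S| > n/2
    return 2**n * (2.0**(-(n - len(S))) - 2.0**(-n / 2))

def a(S, e):         # original coefficients: all equal to 2^n
    return 2**n

def phi(S, e):       # on the Boolean lattice, phi_e(S) = S \ {e}
    return S - {e}

pos = lambda v: max(v, 0.0)

def a_trunc(S, e):   # truncated coefficient a'_{S,e}
    return min(a(S, e), pos(r(S)) - pos(r(phi(S, e))))

def delta():
    d = 1.0
    for S in L:
        for e in S:
            at = a_trunc(S, e)
            if at > 0 and (r(phi(S, e)) >= 0 or at == a(S, e)):
                d = max(d, a_trunc(E, e) / at)
    return d

print(delta())   # → 4.0, i.e. 2^(n/2 - 1) for n = 6
```

The maximizing pairs are the sets of cardinality $\frac{n}{2}+1$, matching the discussion above.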
Note that we exclude certain rows and elements from the bound in order to make it potentially smaller. If the rank is non-negative, this has no impact; if the rank may be negative, however, we obtain better bounds, as we will see later. Indeed, we can bound the quality of a solution obtained by the greedy algorithm in terms of $\delta$, as the following theorem shows. The role of $b$ in the theorem can be seen as follows. Recall how the primal phase constructs the vector $x^*$ in (\ref{alg:primal-phase:set-x}). For $b = 1$, the rounding does not affect $x^*_e$ for any bottleneck element $e \in E$, that is, the vector will naturally be integral and, in particular, $a'_{S_i,e} x^*_e = r(S_i)^+ - r(S_{i+1})^+$ will hold for all $1 \leq i \leq \ell$. If $b = 2$, the rounding may have an impact on $x^*_e$. In this case, it is easy to see that the marginal rank differences are oversubscribed by at most a factor of two, that is, $a'_{S_i,e} x^*_e \leq 2 (r(S_i)^+ - r(S_{i+1})^+)$. \begin{restatable}{theorem}{thmSvGreedyApproximation} Let $(x^*,y^*)$ be a solution returned by the primal-dual greedy algorithm applied to the truncation (\ref{LP:T}) of a greedy system $(A,r)$. Then the cost of $x^*$ is no larger than $b \delta \text{OPT}$, if $r$ is non-negative, and $(b \delta + 1) \text{OPT}$, otherwise. Here, $b = 1$, if $\frac{r(S)^+ - r(\phi_e(S))^+}{a'_{S,e}} \in \mathbb{Z}_+$ for all $S \in \mathcal{L}$ and $e \in S$ with $a'_{S,e} > 0$, and $b = 2$, otherwise. \label{thm:sv:greedy-approximation} \end{restatable} \begin{proof} In this version, we provide only a brief outline of the proof technique for $b = 2$ with possibly negative rank. The full proof can be found in the appendix. Let $S_{\ell+1} \prec S_\ell \prec \dots \prec S_1$ be the dual chain constructed by the algorithm, where $r(S_{\ell+1}) \leq 0 < r(S_\ell)$, and let $e_1,\dots,e_\ell$ be the bottleneck elements. Consider the left-hand side of the constraint corresponding to some index $t$.
For an element $e_j$ with index $t \leq j < \ell$, we can use $\delta$ to bound the coefficient in $A'$, as $r(\phi_{e_j}(S_t)) \geq r(S_{j+1}) > 0$. This is true since $\mathcal{L}$ is modular; hence, $S_{j+1} = \phi_{e_j}(S_j) \preceq \phi_{e_j}(S_t)$. \begin{align*} a'_{S_t,e_j} x^*_{e_j} \leq \delta a'_{S_j, e_j} x^*_{e_j} = \delta a'_{S_j, e_j} \left\lceil \frac{r(S_j)^+ - r(S_{j+1})^+}{a'_{S_j,e_j}} \right\rceil \leq 2 \delta \left( r(S_j) - r(S_{j+1}) \right). \end{align*} The first inequality is due to the definition of $\delta$; the subsequent equality is due to the construction of $x^*$, which covers the rank differences in each iteration. Note that this argument does not necessarily hold for the final element $e_\ell$, as the definition of $\delta$ does not cover this element if $r(S_{\ell+1}) < 0$ and $a'_{S_\ell,e_\ell} < a_{S_\ell,e_\ell}$. If $x^*_{e_\ell} = 1$, then $a'_{S_t,e_\ell} x^*_{e_\ell} = a'_{S_t,e_\ell} \leq r(S_t)$. But $x^*_{e_\ell} > 1$ implies $a'_{S_{\ell},e_\ell} = a_{S_{\ell},e_\ell} < r(S_{\ell})$. Hence, we can use $\delta$ and get \begin{align*} a'_{S_t,e_\ell} x^*_{e_\ell} \leq \delta a'_{S_\ell,e_\ell} x^*_{e_\ell} = \delta a'_{S_\ell,e_\ell} \left\lceil \frac{r(S_{\ell})^+ - r(S_{\ell+1})^+}{a'_{S_{\ell},e_\ell}} \right\rceil \leq 2 \delta r(S_\ell). \end{align*} Combining both cases yields $$a'_{S_t,e_\ell} x^*_{e_\ell} \leq 2 \delta r(S_\ell) + r(S_t).$$ Hence, for the constraint corresponding to $S_t$, we get: \begin{align*} \sum_{e \in S_t} a'_{S_t,e} x^*_e &= \sum_{j=t}^{\ell - 1} a'_{S_t,e_j} x^*_{e_j} + a'_{S_t,e_\ell} x^*_{e_\ell} \leq \sum_{j=t}^{\ell - 1} 2 \delta \left( r(S_j) - r(S_{j+1}) \right) + 2 \delta r(S_\ell) + r(S_t) \\ &= 2 \delta \left( r(S_t) - r(S_{\ell}) \right) + 2 \delta r(S_\ell) + r(S_t) = (2 \delta + 1)r(S_t). \end{align*} The first equality in the second row uses that the sum is telescoping.
This implies the following approximate complementary slackness conditions: If $y^*_S > 0$, then $r(S) \leq a'_S x^* \leq (2 \delta + 1) r(S)$. Moreover, the primal solution is constructed in such a way that $x^*_e > 0$ implies $\sum_{S \in \mathcal{L}} a'_{S,e} y^*_S = c_e$. Hence, standard techniques for primal-dual approximation algorithms can be used to conclude the proof. The other cases are proven analogously and can be found in the appendix. \end{proof} Hence, if $\delta$ is bounded by a small constant, the greedy algorithm obtains good solutions. Note that the instance from Proposition \ref{prp:sv:linear-gap} shows that the integrality gap of a truncation can be of order $\Omega(\log \delta)$. We can also show that the analysis in Theorem \ref{thm:sv:greedy-approximation} is essentially tight. \begin{corollary} There exists a family of instances such that the truncation (\ref{LP:T}) of a greedy system $(A,r)$ has an integrality gap of $\Omega(\log \delta)$. \end{corollary} \begin{restatable}{proposition}{prpSvGreedyLowerBound} The analysis in Theorem \ref{thm:sv:greedy-approximation} is tight up to constant factors. \label{prp:sv:greedy-bad-dual-solution} \end{restatable} \begin{proof} See appendix for the proof. \end{proof} To round off this section, we show that all properties \ref{prop:first} - \ref{prop:last} are necessary in the sense that the removal of any one of them results in a situation where the greedy algorithm does not provide a feasible solution. While this does not rule out that other greedy algorithms may perform nicely, it points out certain limits of this analysis. \begin{restatable}{proposition}{prpSvNecessityFeasible} Suppose that a system $(A,r)$ satisfies \ref{prop:first} - \ref{prop:last} except for any one of the properties. Then the greedy algorithm does not necessarily terminate with a feasible solution. \label{prp:sv:greedy-necessity-feasible} \end{restatable} \begin{proof} See appendix for the proof.
\end{proof} \section{Generalization to multiple greedy systems} \label{sec:product-version} Although greedy systems already capture some well-known problems such as knapsack cover or optimization over contra-polymatroids, their modelling power is limited. In this section, we discuss a generalization towards problems that are composed of multiple greedy systems on the same column set. A full version of this section can be found in the appendix; due to space limitations, we give only a brief overview of the main results without proofs. Again, let $E$ be the index set of the columns. Let $\mathcal{L}$ be a family of subsets of $E$ with some partial order $(\mathcal{L}, \preceq)$ associated. Moreover, we assume that $\mathcal{L}$ is a lattice with join $\vee$ and meet $\wedge$. This family plays the same role as in the previous section and satisfies the properties elaborated there. In particular, we assume that the sets in $\mathcal{L}$ are pairwise different. Moreover, let $\mathcal{U}$ be a family of subsets of $E$; multiple copies of the same subset are allowed in $\mathcal{U}$. Let $\mathcal{B} = \mathcal{U} \times \mathcal{L}$, $A \in \mathbb{R}_+^{|\mathcal{B}| \times |E|}$ and $r: \mathcal{B} \rightarrow \mathbb{R}$. This time, the rows $a_{(U,S)}$ of matrix $A$ are indexed by tuples $(U,S) \in \mathcal{B}$. The coefficients of matrix $A$ are denoted by $a_{(U,S),e}$ for a row $(U,S) \in \mathcal{B}$ and the column indexed by $e \in E$. We require $\{e \in E : a_{(U,S),e} > 0 \} = U \cap S$ for all $(U,S) \in \mathcal{B}$, that is, the support of each row of matrix $A$ indexed by a tuple $(U,S)$ equals the intersection of $U$ and $S$. If $\mathcal{U} = \{E\}$, the situation from Section \ref{sec:simple-version} is recovered.
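A minimal sketch (with hypothetical $0/1$ coefficients of our choosing) of this support requirement on the rows:

```python
from itertools import combinations

E = frozenset(range(3))
L = [frozenset(s) for k in range(4) for s in combinations(E, k)]
U_family = [frozenset({0, 1}), frozenset({1, 2})]

def a(U, S, e):
    # hypothetical 0/1 coefficients with supp a_{(U,S)} = U ∩ S
    return 1 if e in (U & S) else 0

# Verify that the support of every row (U, S) equals U ∩ S.
for U in U_family:
    for S in L:
        assert {e for e in E if a(U, S, e) > 0} == U & S
```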
We are interested in conditions on the system $(A,r)$ such that problems of the type \begin{align*} \min_{x \in \mathbb{Z}_+^{|E|}} \left\{ c^T x \mid a_{(U,S)} x \geq r(U,S)\; \forall (U,S) \in \mathcal{B} \right\} \tag{P} \end{align*} admit a bounded approximation guarantee via a simple primal-dual greedy algorithm. In order to use the primal-dual greedy algorithm from Section \ref{sec:introduction}, we need an ordering on $\mathcal{B}$ that determines the variables to be increased during the dual phase. Note that we already assumed that $(\mathcal{L}, \preceq)$ is a partial order on $\mathcal{L}$. We additionally assume that further partial orders are provided as follows: for every $S \in \mathcal{L}$, let $(\mathcal{U}, \preceq_S)$ be a partial order on $\mathcal{U}$. The partial orders are not required to be correlated in any way. With these orderings, we compose the following lexicographic ordering for $\mathcal{B}$: \begin{align*} (U,S) \preceq_{\mathcal{B}} (U',S')\quad &\Leftrightarrow \begin{aligned} & \left( S \prec S' \right) \text{ or} \\ & \left( S = S' \text{ and } r(U,S) < r(U',S') \right) \text{ or} \\ & \left( S = S' \text{ and } r(U,S) = r(U',S') \text{ and } U \preceq_{S} U' \right) \end{aligned} \end{align*} We use $(A,r)_{|U}$ to denote the subsystem of $(A,r)$ induced by fixing a set $U \in \mathcal{U}$. Precisely, we say that $(A,r)_{|U} = (\bar{A}, \bar{r})$, where $\bar{A} \in \mathbb{R}_+^{|\mathcal{L}| \times |E|}$ with coefficients $\bar{a}_{S,e} = a_{(U,S),e}$ and $\bar{r}: \mathcal{L} \rightarrow \mathbb{R}, \bar{r}(S) = r(U,S)$. Note that the ordering of $\mathcal{B}$ restricted to a subsystem is consistent in the sense that a chain $(U_\ell,S_\ell) \preceq_{\mathcal{B}} \dots \preceq_{\mathcal{B}} (U_1,S_1)$ in $\mathcal{B}$ induces a chain $S_\ell \preceq \dots \preceq S_1$ in every subsystem $(A,r)_{|U}$.
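The lexicographic ordering can be sketched as a comparison predicate (helper names and the toy rank function are ours, not from the paper):

```python
def leq_B(t1, t2, leq_L, rank, leq_U):
    """(U,S) <=_B (U',S'): first compare S in L, then the rank value,
    then U in the order <=_S associated with S."""
    (U1, S1), (U2, S2) = t1, t2
    if S1 != S2:
        return leq_L(S1, S2)
    if rank(U1, S1) != rank(U2, S2):
        return rank(U1, S1) < rank(U2, S2)
    return leq_U(S1, U1, U2)

# Toy instance: L is the chain {1} < {1,2}; U contains two sets.
E = frozenset({1, 2})
chain_pos = {frozenset({1}): 0, E: 1}
leq_L = lambda S, T: chain_pos[S] <= chain_pos[T]
rank = lambda U, S: len(U & S)          # hypothetical rank function
leq_U = lambda S, U1, U2: True          # all U-copies comparable

assert leq_B((E, frozenset({1})), (E, E), leq_L, rank, leq_U)
assert leq_B((frozenset({1}), E), (E, E), leq_L, rank, leq_U)
```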
For any $e \in E$, we use the notation $\mathcal{B} \setminus \{e\} = \{ (U,S) \in \mathcal{B} : e \not\in S \}$ to denote the restriction of $\mathcal{B}$ to a sublattice of $\mathcal{L}$ that does not contain element $e$ in its support (with respect to the $\mathcal{L}$-component). Note that this operation has no effect on the $\mathcal{U}$-component, that is, there may be tuples $(U,S) \in (\mathcal{B} \setminus \{e\})$ with $e \in U$. The previous section elaborated that \ref{prop:first} - \ref{prop:last} are useful for proving approximation guarantees for the subsystems $(A,r)_{|U}$. In this section, we assume that the restricted system $(A,r)_{|U}$ satisfies \ref{prop:first} - \ref{prop:last} for all $U \in \mathcal{U}$. \begin{definition} A system $(A,r)$ on $\mathcal{B}$ (with respect to $(\mathcal{B}, \preceq_{\mathcal{B}})$) is called a \emph{greedy product system} if, for every $U \in \mathcal{U}$, the subsystem $(A,r)_{|U}$ satisfies \ref{prop:first} - \ref{prop:last}. \end{definition} Analogously to Section \ref{sec:simple-version}, we consider a truncated version of the system $(A,r)$; otherwise, the integrality gap may be unbounded. We apply the truncation from Definition \ref{def:sv:truncation} to each subsystem $(A,r)_{|U}, U \in \mathcal{U}$ individually and call the resulting system the \emph{truncation} of $(A,r)$. \begin{restatable}{definition}{defPlTruncation} Let $(A,r)$ be a greedy product system and define $A' \in \mathbb{R}_+^{|\mathcal{B}| \times |E|}$ with coefficients as follows. For $(U,S) \in \mathcal{B}$ and $e \in E$, set $a'_{(U,S),e} = \min\{a_{(U,S),e}, r(U,S)^+ - r(U,\phi_e(S))^+ \}$. We call the system $(A',r)$ the \emph{truncation} of $(A,r)$.
\label{def:pl:truncation} \end{restatable} In this section, we apply a revised version of the primal-dual greedy algorithm to the system \begin{align*} \min_{x \in \mathbb{Z}^{|E|}_+} \{ c^T x \mid A'x \geq r \} \tag{T} \label{LP:T:product} \end{align*} and prove a bounded approximation guarantee similar to the previous section. \sectionheadline{The revised primal-dual greedy algorithm} In order to obtain results similar to Section \ref{sec:simple-version}, we need to slightly modify the greedy algorithm from Section \ref{sec:introduction}. This time, we combine the dual and the primal phase in a single algorithm, which is given in Figure \ref{alg:product:pseudocode}. \begin{figure}[tb] \begin{enumerate} \item Initially, let $y^* \equiv 0$, $x^* \equiv 0$, and $E^* = \emptyset$. \item While $\mathcal{B} \neq \emptyset$ \begin{enumerate} \item Let $B \subseteq \mathcal{B}$ be the set of maximal tuples in $\mathcal{B}$ with respect to the ordering $\preceq_{\mathcal{B}}$. \label{alg:product:select-rows} \item STOP if $r(U,S) \leq 0$ for all $(U,S) \in B$. \label{alg:product:stop} \item Raise $y^*_{(U,S)}$ for all $(U,S) \in B$ uniformly until some element $e^* \in E \setminus E^*$ becomes tight. \item Let $S' = \phi_{e^*}(S)$ and set $x^*_{e^*} = \max \left\{ \left\lceil \frac{r(W,S)^+ - r(W,S')^+}{a'_{(W,S),e^*}} \right\rceil : W \in \mathcal{U}, a'_{(W,S),e^*} > 0 \right\}.$ \label{alg:product:set-x} \item Add $e^*$ to $E^*$ and iterate with $\mathcal{B} = \mathcal{B} \setminus \{e^*\}$. \label{alg:product:lattice-subtract} \end{enumerate} \item For all bottleneck elements $e^*$ in reverse order: Decrease $x^*_{e^*}$ as long as the solution remains feasible for all $(U,S) \in \mathcal{B}$.
\label{alg:product:cleanup} \end{enumerate} \caption{Pseudocode of the revised primal-dual greedy algorithm.} \label{alg:product:pseudocode} \end{figure} In contrast to Section \ref{sec:introduction}, we now increase the dual variables of \emph{all maximal} tuples $(U,S) \in \mathcal{B}$ with respect to $\preceq_{\mathcal{B}}$ uniformly. Since $(\mathcal{L}, \preceq)$ is a lattice, all variables that are increased simultaneously during a single iteration share the same set $S \in \mathcal{L}$. Moreover, by the definition of the lexicographic order, they share the same rank value $r^*$. If each partial order $(\mathcal{U}, \preceq_S)$ for $S \in \mathcal{L}$ has a unique maximal element, then $\preceq_{\mathcal{B}}$ also has a unique maximal element. We also adapt the construction of $x^*_{e^*}$ for bottleneck elements. This time, we consider all rank differences $r(W,S)^+ - r(W,S')^+$ of sets $W \in \mathcal{U}$ and set $x^*_{e^*}$ sufficiently large as to cover \emph{all} these differences. The element $S' = \phi_{e^*}(S) \in \mathcal{L}$ is chosen such that it is the set considered in the subsequent iteration of the main loop. This will ensure primal feasibility. Finally, we add a cleanup phase. This is beneficial, as variables from later iterations may render variables from previous iterations redundant. In this case, we may carefully decrease variables in a post-processing step. In general, deciding whether a variable can be decreased by one may be a non-trivial task. Moreover, determining the maximum in Line~\ref{alg:product:set-x}) is not simple, either. In Table \ref{table:intro:result-summary} we provide some examples in which this is possible. \sectionheadline{Approximation guarantee for the revised greedy algorithm} Similar to Section \ref{sec:simple-version}, we can show that the truncation of a greedy product system does not cut off any integer feasible points.
Moreover, we can show that the greedy algorithm always obtains feasible primal solutions. Due to space restrictions, we omit all feasibility results and provide only a summary regarding the approximability. \begin{restatable}{lemma}{lemPlCleanupNecessary} Without the cleanup phase in Line~\ref{alg:product:cleanup}, an analysis similar to Theorem~\ref{thm:sv:greedy-approximation} for a greedy product system results in an approximation factor of at least $|E|$. \end{restatable} To characterize the effect of the cleanup phase on the elements, let us consider a solution $x^* \in \mathbb{Z}_+^{|E|}$ obtained by the revised greedy algorithm. To get an intuition, let us assume for a moment that the algorithm increased a single dual variable in each iteration, that is, Line~\ref{alg:product:select-rows}) returned a single maximum tuple in each iteration. Let $(U_{\ell+1},S_{\ell+1}) \prec \dots \prec (U_1, S_1)$ be the constructed dual chain and let $e_i \in S_i \setminus S_{i+1}, 1 \leq i \leq \ell$, be the bottleneck elements. As in the previous section, $r(U_{\ell+1},S_{\ell+1}) \leq 0 < r(U_{\ell},S_{\ell})$. During the cleanup phase, the value $x^*_{e_i}$ of element $e_i$ was not further reduced because either $x^*_{e_i} = 0$, or there is at least one tuple $(U,S) \in \mathcal{B}$ such that $$\sum_{f \in U \cap S} a'_{(U,S),f} x^*_f - a'_{(U,S),e_i} < r(U,S) \leq \sum_{f \in U \cap S} a'_{(U,S),f} x^*_f.$$ We call this tuple $(U,S)$ a \emph{witness} of bottleneck element $e_i$. Note that this tuple was not necessarily considered in Line~\ref{alg:product:select-rows}). But let us suppose that some element $e_i$ has a witness $(U_t, S_t)$ on the dual chain ($1 \leq t \leq \ell$). Then $i \geq t$; otherwise, $e_i \not\in S_t$.
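The witness condition can be phrased as a small predicate (a sketch with hypothetical names; `row` maps elements $f$ to the truncated coefficients $a'_{(U,S),f}$):

```python
def is_witness(row, x, e, r_val):
    """True iff the covering constraint of this row holds under x, but
    would be violated after removing one contribution of element e."""
    lhs = sum(coeff * x[f] for f, coeff in row.items())
    return lhs - row.get(e, 0) < r_val <= lhs

row = {"a": 3, "b": 2}                      # hypothetical a' coefficients
x = {"a": 1, "b": 1}                        # lhs = 5
assert is_witness(row, x, "a", 4)           # 5 - 3 = 2 < 4 <= 5
assert not is_witness(row, x, "b", 2)       # 5 - 2 = 3 >= 2
```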
The definition of witnesses implies $$\sum_{f \in U_t \cap S_t} a'_{(U_t,S_t),f} x^*_f \leq 2 r(U_t,S_t).$$ In other words, if \emph{every} tuple $(U_t,S_t)$ of the dual chain was a witness for some element $e_i$, then $x^*$ would be a $2$-approximation for (\ref{LP:T}) by standard primal-dual approximation arguments (cf.\ the proof of Theorem~\ref{thm:sv:greedy-approximation}). Of course, we cannot expect this to happen in general. But the following observation establishes a strong connection between witnesses and elements on the dual chain. We now cover the case in which (possibly) multiple dual variables were increased simultaneously. We define a \emph{multiplicity witness-cover} as follows. Let $\mathcal{I} \subseteq \mathcal{B}$ be a family of tuples that were increased simultaneously in one iteration of the revised algorithm. In this case, by definition of $(\mathcal{B}, \preceq_{\mathcal{B}})$, all these tuples share the same rank value $r^*$. We call $\mathcal{C} \subseteq \mathcal{B}$ a \emph{multiplicity witness-cover} of $\mathcal{I}$ if each tuple $(U,S) \in \mathcal{C}$ is a witness for some element $e \in E$, $r(U,S) \leq r^*$, and every element $e \in E$ with $x^*_e > 0$ appears at least as often in $\mathcal{C}$ as it appears in $\mathcal{I}$. That is, for all $e \in E$, $$|\{ (U,S) \in \mathcal{I} : a'_{(U,S),e} x^*_e > 0 \}| \leq |\{ (U,S) \in \mathcal{C} : a'_{(U,S),e} x^*_e > 0 \}|.$$ If $\mathcal{C}$ is of small cardinality, we can show that $x^*$ is a good approximation. We generalize our definition of $\delta$ from Section \ref{sec:simple-version} slightly to cover this case. Given $(U,\emptyset), (W,S) \in \mathcal{B}, e \in E$, let \begin{align*} & \delta_{U,(W,S),e} = \begin{cases} \frac{a'_{(U,\emptyset),e}}{a'_{(W,S),e}} & a'_{(W,S),e} > 0 \text{ and } (r(W,\phi_e(S)) \geq 0 \text{ or } a'_{(W,S),e} = a_{(W,S),e}), \\ 1 & \text{otherwise}, \end{cases} \end{align*} and set $\delta = \max_{U,(W,S),e} \left\{ \delta_{U,(W,S),e} \right\}$.
The following Theorem \ref{thm:ps:greedy-approximation} yields bounds on the solution cost, depending on $A$ being binary or a general matrix. Finally, Proposition \ref{prp:ps:greedy-bad-dual-solution} shows that the dependency on $k$ is inherent in the type of dual solution constructed. \begin{restatable}{theorem}{thmPsMultiDualGreedyApproximation} Let $(A,r)$ be a greedy product system and let $k \in \mathbb{Z}_+$. Let $(x^*,y^*)$ be the solution obtained by the revised greedy algorithm with dual support $\mathcal{I}_i \subseteq \mathcal{B}$ in iteration $i$. If each family $\mathcal{I}_i$ has a witness cover of size at most $k |\mathcal{I}_i|$, then $x^*$ has cost no larger than $k (\delta + 1) OPT$. If, additionally, the truncation $A'$ is a binary matrix, then the solution has cost bounded by $k OPT$. \label{thm:ps:greedy-approximation} \end{restatable} \begin{restatable}{proposition}{prpPsGreedyLowerBound} For every $k \in \mathbb{Z}_+$ there is a greedy product system $(A,r)$ with truncation $A' \in \{0,1\}^{|\mathcal{B}| \times |E|}$ such that the dual $y^*$ obtained by the revised greedy algorithm has optimality gap $k$. \label{prp:ps:greedy-bad-dual-solution} \end{restatable}
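To make the witness condition and the cleanup phase concrete, the following is a minimal Python sketch. The data layout is our own (each truncated row is a pair \texttt{(coeffs, rank)} with \texttt{coeffs[f]} $= a'_{(U,S),f}$ and \texttt{rank} $= r(U,S)$); none of these names are from the paper, and the greedy main loop itself is not reproduced.

```python
# Sketch of the witness test and the greedy cleanup phase, under a
# hypothetical data layout: each truncated constraint row is (coeffs, rank),
# where coeffs maps elements f to a'_{(U,S),f} and rank is r(U,S).

def satisfied(rows, x):
    """Check that all truncated constraints  sum_f a'_f x_f >= r  hold."""
    return all(sum(a.get(f, 0) * x.get(f, 0) for f in a) >= r for a, r in rows)

def is_witness(coeffs, rank, x, e):
    """(U,S) witnesses e iff  sum_f a'_f x_f - a'_e < r(U,S) <= sum_f a'_f x_f,
    i.e. removing one unit of x_e would violate this constraint."""
    total = sum(coeffs.get(f, 0) * x.get(f, 0) for f in coeffs)
    return total - coeffs.get(e, 0) < rank <= total

def cleanup(rows, x):
    """Post-processing: greedily decrease variables by one unit at a time
    while every truncated constraint remains satisfied."""
    changed = True
    while changed:
        changed = False
        for e in list(x):
            while x[e] > 0:
                x[e] -= 1
                if satisfied(rows, x):
                    changed = True
                else:
                    x[e] += 1  # undo: this unit is needed
                    break
    return x
```

After the cleanup terminates, every element $e$ with $x^*_e > 0$ has, by construction, some constraint that witnesses it.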
\section{Introduction} The HIPPARCOS satellite (Perryman 1989) has provided high quality parallaxes for $\sim 118,000$\ stars. These data can be used to address a variety of different astrophysical problems. HIPPARCOS parallaxes for about one hundred nearby dwarfs with metal abundances in the range $-2.5<$[Fe/H]$<0.2$\ were available to us, before their release to the general public, as a result of the Hipparcos FAST proposal n.022. These objects had been originally selected because they seemed the most suited for globular cluster distance and age derivations via the main sequence fitting technique. They have already been used in a careful revision of the age of the oldest globular clusters by Gratton et al. (1997a), who obtained ages consistent with inflationary models for the universe, if the value of the Hubble constant is in the range 50-75 km~s$^{-1}$~Mpc$^{-1}$. Besides accurate parallaxes, an appropriate determination of {\bf magnitudes}, {\bf colours} and {\bf chemical composition} of the dwarf sample was crucial in the derivation of distances and ages of GCs via main sequence fitting (Gratton et al., 1997a). The relation between colour and magnitude of main sequence stars near the turnoff is very steep: $4<dV/d(B-V)<7$. Thus when fitting the main sequence of GC's with sequences of local subdwarfs, an error of only 0.01 mag in the colours translates into an uncertainty of $\sim 1$ Gyr in the derived cluster ages, $\sim$ 50 K in the derived effective temperatures, and, in turn, 0.04-0.05 dex in the derived abundances. Moreover, a basic assumption of Gratton et al.'s (1997a) age derivation was that the nearby subdwarfs share the same chemical composition as main sequence stars in globular clusters. This assumption was verified by a homogeneous spectroscopic analysis of both cluster and field stars.
Abundance determinations for giants in globular clusters were published by Carretta \& Gratton (1997); cluster main sequence stars are too faint for a reliable analysis until more 8~m class telescopes become available. Here, we present the parallel spectroscopic study for 66 of the 99 nearby dwarfs in our sample. The main purposes of this study were: \begin{itemize} \item to provide a photometric data-base for the programme stars to be used both in the Globular Cluster distance and age derivation and in the abundance analysis. An accurate and homogeneous photometric data-base for these stars is needed for a number of reasons: first because the determination of reliable absolute magnitudes directly bears upon the availability of accurate apparent magnitudes; second because $B-V$, and possibly $V-I$ colours (the very deep, high resolution, colour magnitude diagrams recently obtained for a number of GCs by HST are in a photometric band similar to the I band) are required to build up the subdwarf template sequences to compare with the Globular Cluster main sequences; and third because colours ($B-V$, $V-K$, $b-y$ and possibly $V-I$) are necessary to derive the effective temperatures used in abundance analyses. The vast amount of data available in the literature should be homogenized, compared and supplemented with the photometric data collected by the Hipparcos/Tycho mission, and transformed to a common uniform photometric system. \item to provide metal abundances for the programme stars.
The analysis should be homogeneous with the globular cluster one, to avoid spurious errors in the derived distances and ages; accurate because one anticipates that only a few stars will finally be used in the main sequence fitting (once metal-rich, binary, and too-distant stars are discarded); and should use updated model atmospheres (Kurucz 1993), in order to have consistent results for both the Sun and the programme stars (else systematic errors will be introduced in the globular cluster ages when turnoff magnitudes are compared to isochrones). \item to derive abundances not only for Fe, but also for O and for the $\alpha$-elements (mainly Mg and Si; but also Ca and Ti), because derived ages are affected at a significant level by non-solar abundance ratios. Many studies, beginning with the pioneering study by Wallerstein et al. (1963), through the early echelle surveys of Cohen, Pilachowski, and Peterson (see Pilachowski, Sneden \& Wallerstein, 1983, and references therein), to recent large-sample abundance determinations in individual clusters (reviewed by e.g. Kraft 1994; Carretta, Gratton \& Sneden, 1997), have shown that these elements are overabundant in globular cluster stars. Available data indicate that a similar overabundance is shared by metal-poor field stars (see e.g. Wheeler, Sneden \& Truran 1989). However, it is important to verify that this also holds for the subdwarfs used in the main sequence fitting procedure, since a few subdwarfs are known to have no excess of heavy elements (King 1997, Carney et al. 1997). \item to determine high precision ($<1$~km/s error) radial velocities for the target stars. These velocities can be compared with values in the literature, thus providing useful information on possible unknown binaries present in our sample. Binary contamination is one of the major concerns in the derivation of distances to globular clusters using the subdwarf main sequence fitting method.
\item to compare stellar gravities deduced from Hipparcos parallaxes with those from ionization equilibrium computations. Several authors have suggested that there may be an appreciable Fe overionization in the atmospheres of late F-early K dwarfs (Bikmaev et al., 1990; Magain \& Zhao, 1996; Feltzing \& Gustafsson, 1998). If this were true, Fe abundances derived assuming LTE would be largely in error. Gratton et al. (1997b) have shown that it is difficult to reconcile a large Fe overionization in cool dwarfs with the rather good ionization equilibrium found for lower gravity, warmer stars (see e.g. the case of RR Lyrae stars: Clementini et al. 1995). But now, thanks to Hipparcos, we are able to derive accurate surface gravities directly from luminosities and masses (the latter obtained using stellar evolution models) for a large sample of metal-poor dwarfs; we can then test whether a significant Fe overionization actually exists in these stars by simply comparing the abundances provided by neutral and singly ionized lines. \item Finally, to use this large, homogeneous data set to recalibrate photometric and low $S/N$\ spectroscopic abundances, which may be used to obtain moderate accuracy (errors $\sim 0.15$~dex) abundances for thousands of metal-poor stars. In particular, we have tied to our scale the metal abundances by Schuster \& Nissen (1989), Carney et al. (1994), and Ryan \& Norris (1991). \end{itemize} \section{Basic data for the Subdwarfs} \subsection{Photometric Data} Photometric data ($UBVRIJHK$ and Str\"omgren $b-y$, $m_1$\ and $c_1$) for all the programme stars (99 objects) have been collected from a careful selection of the literature data available in the SIMBAD data-base. A few stars (five objects) were re-observed. The Hipparcos catalog provides $V$ magnitudes and $B-V$, $V-I$ colours (in the Johnson-Cousins system) for all our objects.
All $V-I$ colours are from the literature, while $V$ and $B-V$ are either from the literature or measured from the Hipparcos/Tycho missions (Grossmann et al., 1995). In particular, 55 of our objects have $V$\ magnitudes directly measured by Hipparcos ($V_H$), and 24 stars have $B-V$ colours measured by Tycho, $(B-V)_T$. Data from different sources and in different photometric systems were transformed to a uniform photometric system, using equations which are available in electronic form upon request to the first author. Average magnitudes and colours for the programme stars were derived combining literature and Hipparcos data according to an accurate standardization procedure that is briefly outlined below. Final adopted magnitudes and colours for the programme stars are given in Table 1 where $V$, $B-V$, and $V-K$ are in the Johnson system, and the $V-I$ and $b-y$ colours are in Cousins and Str\"omgren photometric systems, respectively. Uncertain values are marked by a colon. Only the first 20 entries of the table are given in the paper; data for the remaining stars are available in electronic form. \subsubsection{$V$\ magnitudes and $B-V$ colours} Mean $V$\ magnitudes ($V_{\rm g}$) and $B-V$ colours, $(B-V)_{\rm g}$, were computed as the average of the independent literature data. These mainly consist of Johnson $UBV$ photometry by Sandage \& Kowal (1986), Carney (1978, 1983a, 1983b), Carney \& Latham (1987), Laird, Carney \& Latham (1988), and Carney et al. (1994). According to the distribution of individual measures around the average, measures that were more than 0.07 and 0.03 mag off the mean V and $B-V$ values, respectively, were discarded. The ($V_{\rm g}$) and $(B-V)_{\rm g}$ values thus obtained were compared with $V_{\rm H}$'s for the 55 objects with Hipparcos $V$ magnitudes and $(B-V)_{\rm T}$'s for the 24 objects with Tycho colours, respectively. 
The best fit regression lines are: $V_{\rm H} = 1.003 V_{\rm g} - 0.038$ and $(B-V)_{\rm T} = 1.002(B-V)_{\rm g} + 0.012$. $V_{\rm H}$'s were transformed to $V_{\rm g}$'s using the regression equation given above, and then averaged with the ground-based measures; the mean values so derived are listed in column 7 of Table 1. The average uncertainty of these mean values is 0.014~mag (average on 87 objects). Figure~\ref{f:figure01} shows the residuals $(B-V)_{\rm g} - (B-V)_{\rm T}$ versus $(B-V)_{\rm g}$. A zero point offset of $-$0.01~mag is present in this plot, together with a strong scatter for colours redder than 0.75~mag. However, we only have 5 objects with $B-V>0.75$ in this plot, so it is difficult to assess the actual meaning and relevance of this scatter. We have thus resolved to use Tycho $B-V$'s only if they differed from the ground-based mean by less than the residual of the mean and, in any case, by no more than 0.015~mag. This is true for 13 out of the 24 objects with $(B-V)_{\rm T}$. New mean $B-V$'s were computed as the average of the independent literature data and the $(B-V)_{\rm T}$'s for these objects. The adopted $B-V$'s are listed in column 10 of Table 1. The average uncertainty of these values is 0.011~mag (average on 73 objects). \begin{figure} \vspace{5.5cm} \special{psfile=figure01.eps vscale=70 hscale=70 voffset=-215 hoffset=-100} \caption{ Residuals $\Delta (B-V)=(B-V)_{\rm g} - (B-V)_{\rm T}$ of the mean $B-V$ colours estimated from the literature data, $(B-V)_g$, and Tycho $B-V$'s, $(B-V)_{\rm T}$, for the 24 objects with Tycho measures, plotted versus $(B-V)_{\rm g}$} \label{f:figure01} \end{figure} \subsubsection{$V-K$ colours} Forty-seven of our objects have $K$\ magnitudes published in the literature. These mainly include $K$\ photometry in the CIT system (Carney 1983b; Laird 1985; Laird, Carney \& Latham 1988), in the TCS system by Alonso, Arribas \& Martinez-Roger (1994) and in the OT system by Arribas \& Martinez Roger (1987).
There are also a few data in the Johnson, OT, KPNO, OAN, AAO and Glass (1974) systems. $V-K$ colours in the various photometric systems were formed from the $V$ values listed in column 7 of Table 1, transformed to Johnson and then averaged. The mean $V-K$'s in the Johnson system so derived are listed in column 12 of Table 1. The average uncertainty of these values is 0.016~mag (average on 24 objects). \subsubsection{$V-I$ colours} In order to check whether we could build up subdwarf template main sequences in $V-I$, we also compiled the available photometry in $R-I$\ and $V-I$\ colours. We found literature $R$ and $I$ photometric data for 59 of our stars. The literature sources are Carney (1983a,b), Laird (1985), Bessell (1990), Cutispoto (1991a,b, and unpublished), Upgren (1974), Eggen (1978) and Weis (1996). Data are in three different photometric systems: Cousins, Johnson, and Kron (Cousins 1976a,b; Johnson et al. 1966; Kron, White \& Gascoigne 1953). $V-I$ colours directly in the Cousins system (hereinafter $(V-I)_{\rm Cg}$) are available only for 27 of our targets. For the remaining objects the literature $(R-I)_{\rm K}$, $R_{\rm K}$, $(R-I)_{\rm J}$, and $R_{\rm J}$ values were combined with the $V$ values in Table 1 to form $(V-I)_{\rm K}$'s and $(V-I)_{\rm J}$'s; these were then transformed to Cousins using the Weis (1996) and Bessell (1979) relations, respectively. Finally, average $(V-I)_{\rm g}$ colours in the Cousins system were derived as the mean of all the three sets of values; they are listed in Column 13 of Table 1. The average uncertainty of these values is 0.022~mag (average on 24 objects). The Hipparcos catalog lists $V-I$ values in the Cousins system for all our stars. The $(V-I)_{\rm g}$ were compared to $(V-I)_{\rm H}$, yielding the regression line $(V-I)_{\rm H} = 0.996(V-I)_{\rm g} - 0.019$. The residuals, $(V-I)_{\rm g} - (V-I)_{\rm H}$, versus $(V-I)_{\rm g}$ are shown in Figure~\ref{f:figure02}.
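The standardization step described above (regressing the space-based magnitudes onto the ground-based system before averaging) can be sketched as follows. The regression coefficients are those quoted in the text; the data values and array names are purely illustrative.

```python
import numpy as np

# Hypothetical ground-based mean magnitudes V_g and Hipparcos magnitudes V_H
# for a few stars (illustrative values only).
V_ground = np.array([8.46, 9.68, 7.42])
V_hip = np.array([8.45, 9.66, 7.41])

# Invert the quoted best-fit relation  V_H = 1.003 V_g - 0.038
# to bring Hipparcos magnitudes onto the ground-based system...
V_hip_on_ground = (V_hip + 0.038) / 1.003

# ...and average with the ground-based means (equal weights in this sketch;
# the paper averages Hipparcos values with the independent literature data).
V_mean = 0.5 * (V_ground + V_hip_on_ground)
```

The same pattern applies to the $(B-V)_{\rm T}$ and $(V-I)_{\rm H}$ regressions quoted in the text, with their respective coefficients.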
\begin{table*} \begin{minipage}{160mm} \begin{center} \caption{Hipparcos parallaxes and colours for the programme stars} \end{center} \scriptsize \begin{tabular}{rrrlrcrcccccccc} \hline\hline HIP~~&LTT~~&HD~~&~Gliese&$\pi$~~~&$\delta \pi/\pi$&$V~$&$M_v$&$\sigma (M_V)$&$B-V$&$b-y$ &$V-K$&$V-I$&N&Comments \\ & & & &(mas)& & & & & & & & & \\ \hline 999&10065& &G030-52&24.69 &0.049& 8.46&5.43&0.10 &0.790:&0.498 &2.183&0.960&2&SBO(2),TP(2)\\ 1897&10137& &G032-16&17.09 &0.080& 9.68&5.84&0.17 &0.880&0.519& & &0& \\ 3985&10310& 4906&G032-53&~9.04 &0.153& 8.76&3.54&0.31 &0.780&0.483& & &3& \\ 6012& & 7783& &10.02 &0.191& 9.42&4.42&0.38 &0.660& & & &0& \\ 6037& & 7808& &33.18 &0.062& 9.76&7.36&0.13 &1.000&0.572& &1.125 &0& \\ 6159&730 & 7983&G271-34&14.91 &0.082& 8.90&4.76&0.17 &0.585&0.387&1.515&0.705 &0& \\ 6448&10502& 8358&G071-03&15.21 &0.063& 8.28&4.19&0.13 &0.723&0.466&2.014 &0.875 &2&FR(1,2),SB2P(2)\\ 7217&10541& 9430&G034-36&15.33 &0.081& 9.04&4.96&0.17 &0.625& & & &2&SSB(2)\\ 8798&1007 & 11505&G071-40&26.56 &0.040& 7.42&4.55&0.08 &0.634&0.401& &0.711 &1&S?(1)\\ 10140&10733& &G074-05&17.66 &0.073& 8.76&4.99&0.15 &0.576&0.390&1.524&0.708:&4& \\ 10652&10774& 14056&G004-10&14.43 &0.092& 9.05&4.85&0.19 &0.620&0.404& & &4& \\ 10921&10794& &G073-44&18.40 &0.075& 9.12&5.44&0.16 &0.790&0.470& & &2&SBP(2)\\ 12306&10869& 16397&G036-28&27.89 &0.040& 7.36&4.58&0.09 &0.588&0.385&1.485&0.666 &2& \\ 13366&10934& 17820&G004-44&15.38 &0.090& 8.39&4.32&0.19 &0.546&0.377& &0.674:&2&S?(1)\\ 13631&10956& &G004-46&17.36 &0.084& 9.75&5.95&0.17 &0.800& & & &0& \\ 14401&17462& &G078-14&18.42 &0.077& 9.71&6.04&0.16 &0.777&0.465& &0.879 &0& \\ 14594&11021& 19445&G037-26&25.85 &0.044& 8.06&5.12&0.09 &0.457&0.355&1.361&0.614 &3& \\ 14992& & &G076-68&15.89 &0.115& 9.84:&5.85:&0.24 &0.935&0.552& 2.486:& &0& \\ 15394&11087& 20512&G005-27&17.54 &0.073& 7.42&3.64&0.15 &0.800&0.485& &0.830 &2&SBP(2)\\ 15797&11113& &G078-33&39.10 &0.032& 8.96&6.92&0.07 &0.980&0.558& & &2&SSB(1)\\ \hline \end{tabular} 
\medskip \medskip FR=Fast rotator, SSB=Suspected spectroscopic binary, VB=Visual binary, SBP=Spectroscopic binary with preliminary orbital solution, SB2P=Double-lined spectroscopic binary with preliminary orbital solution, SB2=Double-lined spectroscopic binary with orbital solution, SBO=Spectroscopic binary with orbital solution, TP=multiple system with preliminary orbital solution, S?=suspected spectroscopic binary detected on the basis of the comparison with Carney et al. (1994; see Section 2.4); (1) this paper, (2) Carney et al. (1994), (3) Peterson et al. (1980). \end{minipage} \end{table*} \begin{figure} \vspace{5.5cm} \special{psfile=figure02.eps vscale=70 hscale=70 voffset=-215 hoffset=-100} \caption{ Residuals $\Delta (V-I)=(V-I)_g - (V-I)_H$ of the mean $(V-I)$ colours estimated from the literature data, $(V-I)_g$, and those listed in the Hipparcos catalog $(V-I)_H$, plotted versus $(V-I)_g$} \label{f:figure02} \end{figure} \begin{figure} \vspace{5.5cm} \special{psfile=figure03.eps vscale=70 hscale=70 voffset=-215 hoffset=-100} \caption{Comparison between Hipparcos $(V-I)$'s and Cousins literature $(V-I)$'s, $(V-I)_H$ and $(V-I)_{Cg}$, respectively, for the 27 objects directly measured in the Cousins system} \label{f:figure03} \end{figure} A very large scatter and a zero point offset of +0.02~mag are present in this plot. To investigate whether at least part of this scatter might be caused by the transformations between photometric systems, in Figure~\ref{f:figure03} we compare the Hipparcos $(V-I)$'s and the Cousins literature $(V-I)$'s, for the 27 objects directly measured in the Cousins system. A trend and a zero point offset of $-0.065$~mag are present in this plot. Given this overall large uncertainty, we decided not to use the $V-I$ colours either in the present study or in that of Gratton et al. (1997a), and suggest some caution in using the $V-I$ colours listed in the Hipparcos catalog.
\subsubsection{$b-y$ colours} Str\"omgren $b-y$ colours for 92 of the programme stars have been published by Laird, Carney \& Latham (1988), Carney (1983b), Laird (1985), Olsen (1983, 1984, 1994a,b), Schuster \& Nissen (1988), Schuster, Parrao \& Contreras Martinez (1993), Anthony-Twarog \& Twarog (1987), Twarog \& Anthony-Twarog (1995), and in the Eggen system by Eggen (1955, 1956, 1968a,b, 1972, 1978, 1979, 1987a,b). Mean $b-y$'s were computed as the average of the independent data available in the literature (see Column 11 of Table 1). $b-y$ values in Eggen photometric system were also averaged since $(b-y)_{\rm E}$=$b-y$ (Eggen 1976). \subsection{Parallaxes} HIPPARCOS parallaxes for the programme stars are given in column 5 of Table 1. The parallaxes as well as the absolute magnitudes $M_V$\ listed in this table do not include Lutz-Kelker corrections (Lutz \& Kelker 1973). These corrections depend on the distribution of the parallaxes of the population from which the observed sample is extracted. We refer to Gratton et al. (1997a) for a thorough discussion of this point. \begin{figure*} \vspace{12cm} \special{psfile=figure04.eps vscale=70 hscale=70 voffset=-100 hoffset=55} \caption{ Portions of the spectra of HD19445 (HIP 14594) and LTT10733 (HIP 10140) obtained with the Asiago (top panel) and McDonald (bottom panel) telescopes, respectively. The spectra of LTT10733 were shifted vertically by 0.2 for a more clear display. Note the large S/N ratio of both sets of spectra, and the higher resolution of the McDonald spectra} \label{f:figure04} \end{figure*} \subsection{Spectroscopic data: Observations and Reductions} High dispersion spectra for the programme stars were acquired using the 2d echelle coud\`e spectrograph at the 2.7~m telescope at McDonald and the REOSC echelle spectrograph at the 1.8~m telescope at Cima Ekar (Asiago), during the years 1994, 1995 and 1996. 
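The absolute magnitudes $M_V$ and their uncertainties listed in Table 1 follow directly from the Hipparcos parallaxes discussed above, via the standard distance modulus; a minimal sketch (the function names are ours, and the parallax $\pi$ is taken in milliarcsec):

```python
import math

def abs_mag(V, parallax_mas):
    """M_V = V + 5 log10(pi) - 10, for the parallax pi in milliarcsec."""
    return V + 5.0 * math.log10(parallax_mas) - 10.0

def abs_mag_error(rel_parallax_error):
    """Parallax-dominated error: sigma(M_V) ~ (5 / ln 10) * (dpi / pi)."""
    return 5.0 / math.log(10.0) * rel_parallax_error

# First entry of Table 1 (HIP 999): V = 8.46, pi = 24.69 mas, dpi/pi = 0.049
# gives M_V ~ 5.42 and sigma(M_V) ~ 0.11, close to the tabulated 5.43 and
# 0.10 (the small differences reflect rounding of the tabulated V and pi).
```

Note that, as stated in the text, no Lutz-Kelker correction is included here.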
McDonald spectra have a very high quality (resolution R=60,000, $S/N\sim 200$, spectral coverage from about 4,000 to 9,000 \AA); they are available for 22 stars (most of them with [Fe/H]$<-0.8$). The Cima Ekar telescope provided somewhat lower quality spectra (resolution R=15,000, $S/N\sim 200$, two spectral ranges 4,500$< \lambda <$7,000 and 5,500 $<\lambda<$8,000~\AA) for 58 stars. There are 14 stars in common between the two samples. Portions of the spectra of some of our targets taken with the two different instrumental configurations are shown in Figure~\ref{f:figure04}. Spectral ranges were selected in order to cover a large variety of lines, including the permitted OI triplet at 7771-7774~\AA, which is the only feature due to O easily measured in the spectrum of the metal-poor dwarfs. We collected spectra for 66 of the objects listed in Table 1. The number of spectra available for each object is given in column 14 of the Table. Some of the spectra were not useful for abundance measurements. For example, HIP 46191 turned out to be a double-lined spectroscopic binary, while the spectra of HIP 999, 6448 and 116005 have very broad lines, suggesting that they may be either fast rotating objects or double-lined spectroscopic binaries not resolved in our spectra. Carney et al. (1994) published data for HIP 999 and 6448. They flagged HIP 999 as a spectroscopic binary with an orbital solution and as a multiple system with a preliminary orbital solution. Only HIP 6448, which they classify as a double-lined spectroscopic binary with preliminary orbital solution, is included among the ``rapid rotating objects'' (see their Table 9). Those stars, as well as other known or suspected binaries present in our sample, have been flagged in Column 15 of Table 1. Finally, 4 of the stars in Table~1 (namely HIP 17147, 38625, 76976 and 100568) have abundances from high resolution spectroscopy recently published by Gratton, Carretta \& Castelli (1997); these objects were not reobserved.
Two-dimensional spectra provided by the large-format CCD detectors were reduced to one-dimensional spectra using standard routines in the IRAF package. The subsequent steps were performed using the ISA package written by one of the authors (R.G.G.) and running on PCs. Considerable care was devoted to the somewhat subjective reduction to a fiducial continuum level. The bluest orders ($\lambda<4,900$~\AA) were not used; however, assignment of a fiducial continuum is still difficult on the Asiago spectra of the coolest and most metal-rich stars. To reduce this concern, in the final analysis we rejected all lines for which the average value of the spectrum $c$\ (normalized to the fiducial continuum) is smaller than 0.9 over a region 200 pixels wide ($\sim \lambda/200$) centered on the line. Equivalent widths ($EW$s) of the lines were measured by means of a Gaussian fitting routine applied to the core of the lines; they are available in electronic form from R. Gratton. This procedure neglects the contribution of the damping wings, which are well developed in strong lines in the spectra of dwarfs (given the higher resolution, the effect is more evident in the McDonald spectra than in the Asiago ones). However, fitting by Voigt profiles would make the results very sensitive to the presence of nearby lines and to even small errors in the location of the fiducial continuum, a well-known problem in solar analysis (see e.g. the discussion in Anstee, O'Mara, \& Ross 1997). A full analysis would have required a very time-consuming line-by-line comparison with synthetic spectra. We deemed it beyond the purposes of the present study, although the high quality of the McDonald spectra makes this effort worthwhile in future studies. Here, we simply applied an average empirical correction to the $EW$s of strong lines ($EW>80$~m\AA).
This correction was obtained from a comparison of $EW$s measured using the Gaussian routine and by direct integration for both the clean Ca~I line at 6439~\AA\ and synthetic spectra of typical Fe lines. Since the corrections depend on the instrumental profile (and hence on the resolution), individual corrections were derived for the Asiago and McDonald spectra. In the end, to avoid use of large (and hence uncertain) corrections, only lines with $\log {EW/\lambda}<-4.7$\ were used in the final analysis (corrections to the $EW$s for these lines are $\leq 7$~m\AA, which is well below 10\%). We also dropped weak lines ($\log {EW/\lambda}<-5.7$) measured on Asiago spectra, since they were deemed too close to the noise level. The large overlap between Asiago and McDonald samples (13 stars in common, after the short-period double-lined spectroscopic binary HIP 48215 is eliminated) allowed standardization of the equivalent widths used in the analysis. Our procedure was to empirically correct the Asiago $EW$s to the McDonald ones. The final correction (based on 346 lines) is: $$EW_{\rm final} = (1.079\pm 0.020)~EW_{\rm original}+(42\pm 28)(1-c)-5.5~~ {\rm m\AA}$$ The r.m.s. scatter around this relation is 7.8~m\AA. External checks on our $EW$s are possible with Edvardsson et al. (1993) and Tomkin et al. (1992). Comparisons performed using McDonald $EW$s alone show that they have errors of $\pm 4$~m\AA. From the r.m.s. scatter between Asiago and McDonald $EW$s, we then estimate that the former have errors of $\pm 6.7$~m\AA. When Asiago and McDonald $EW$s are considered together, we find average residuals (this paper$-$others) of $-0.2\pm 1.0$~m\AA\ (39 lines, r.m.s. scatter 6.1~m\AA) and $+0.8\pm 1.0$~m\AA\ (36 lines, r.m.s. scatter 5.9~m\AA) with Edvardsson et al. (1993) and Tomkin et al. (1992), respectively. Our $EW$s are on the same system as these two papers.
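The $EW$ standardization and the line-selection cuts described above can be sketched as follows; the coefficients are the central values of the relation quoted in the text, while the function names are ours.

```python
import math

def asiago_to_mcdonald(ew_mA, c):
    """Quoted correction: EW_final = 1.079 EW_original + 42 (1 - c) - 5.5 [mA],
    where c is the mean normalized continuum level around the line."""
    return 1.079 * ew_mA + 42.0 * (1.0 - c) - 5.5

def line_usable(ew_mA, wavelength_A, asiago=False):
    """Keep only lines with log10(EW/lambda) < -4.7; on Asiago spectra,
    additionally drop weak lines with log10(EW/lambda) < -5.7."""
    logr = math.log10(ew_mA * 1.0e-3 / wavelength_A)  # EW converted mA -> A
    if logr >= -4.7:
        return False  # strong line: damping-wing correction too uncertain
    if asiago and logr < -5.7:
        return False  # too close to the noise level on Asiago spectra
    return True
```

For example, a 50~m\AA\ line at 5000~\AA\ ($\log EW/\lambda = -5.0$) passes both cuts, while a 120~m\AA\ line at the same wavelength is rejected as too strong.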
\subsection{Radial Velocities: Searching for Unknown Binaries} Radial velocities (RV's) were measured from the spectra of our stars (62 objects, once the 3 objects with very broad lines HIP 999, 6448 and 116005, and the double-lined spectroscopic binary HIP 46191 are discarded). Forty-seven of these objects have multiple observations, but most are consecutive exposures and thus cannot give useful information about unknown binary systems contaminating the sample. Average RV's (with individual values weighted according to their $\sigma$), have been derived for the objects with multiple observations. Radial velocities are natural by-products of the EWs measurements, since the Gaussian fitting routine used to measure the EWs also measures the radial velocity of the centroid of the lines. About 50 - 100 lines were measured in each star and the zero point of our radial velocities was set by measuring $\sim 10$\ telluric lines present in the spectra. \begin{table} \scriptsize \caption{Radial Velocities} \begin{tabular}{rrrrrc} \hline\hline \multicolumn{1}{r}{HIP}& \multicolumn{1}{c}{HD/Gliese}& \multicolumn{1}{c}{~RV}& \multicolumn{1}{c}{$\sigma$}& \multicolumn{1}{c}{$\Delta$RV}&Comments\\ &\multicolumn{1}{c}{(this paper)}& \multicolumn{1}{c}{(this paper)}& &\\ \multicolumn{1}{c}{}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{(km/s)}&\multicolumn{1}{c}{(km/s)}& \multicolumn{1}{c}{(km/s)}&\\ \hline 3985 & 4906& $-$83.2~~~~~& 1.6~~~~~~ & 0.0~~~~&\\ 7217 & 9430& $-$54.7~~~~~& 0.2~~~~~~ &$-$1.3~~~~&SSB\\ 8798 & 11505& $-$13.8~~~~~& & 3.0~~~~&*\\ 10140&G074-05& 27.4~~~~~& 1.2~~~~~~ & 0.3~~~~&\\ 10652& 14056& $-$21.8~~~~~& 0.4~~~~~~ &$-$0.7~~~~&\\ 10921&G073-44& 43.3~~~~~& 0.0~~~~~~ &$-$1.2~~~~&SBP\\ 12306& 16397& $-$100.0~~~~~& 0.5~~~~~~ &$-$0.1~~~~&\\ 13366& 17820& 5.0~~~~~& 0.2~~~~~~ &$-$1.3~~~~&*\\ 14594& 19445& $-$140.0~~~~~& 0.3~~~~~~ & 0.5~~~~&\\ 15394& 20512& 7.2~~~~~& 0.4~~~~~~ &$-$1.2~~~~&SBP\\ 15797&G078-33& 7.2~~~~~& 2.9~~~~~~ & &*\\ 16169& 21543& 64.4~~~~~& 0.6~~~~~~ & 
0.9~~~~&\\ 16788& 22309& $-$28.2~~~~~& 0.6~~~~~~ &$-$0.7~~~~&\\ 20094& 27126& $-$44.8~~~~~& 0.3~~~~~~ &$-$1.9~~~~&*\\ 21272& 28946& $-$46.2~~~~~& 1.3~~~~~~ & &\\ 21703& 29528& $-$19.3~~~~~& 0.6~~~~~~ &$-$0.3~~~~&\\ 29759&G098-58& 242.2~~~~~& 1.4~~~~~~ &0.0~~~~&SSB\\ 30862& 45391& $-$6.1~~~~~& 0.2~~~~~~ & &\\ 34414& 53927& 18.9~~~~~& 0.2~~~~~~ & &\\ 35377& 56513& $-$34.2~~~~~& 0.5~~~~~~ & &\\ 36491& 59374& 90.5~~~~~& 0.2~~~~~~ &$-$0.4~~~~&\\ 37335&G112-36& 49.6~~~~~& 0.6~~~~~~ & 0.4~~~~&\\ 38541& 64090& $-$234.8~~~~~& 0.1~~~~~~ & 0.2~~~~&SSB\\ 48215& 85091& 32.7~~~~~&28.5~~~~~~ &$-$10.8~~~~&SBO\\ 49615& 87838& 23.1~~~~~& 1.2~~~~~~ &$-$0.1~~~~&\\ 49988& 88446& 61.0~~~~~& 0.3~~~~~~ &$-$0.6~~~~&\\ 50139& 88725& $-$22.1~~~~~& 0.9~~~~~~ &$-$0.1~~~~&\\ 53070& 94028& 66.1~~~~~& 0.4~~~~~~ & 0.7~~~~&SSB\\ 54196& 96094& 0.5~~~~~& 0.1~~~~~~ &$-$0.1~~~~&\\ 57939& 103095& $-$98.9~~~~~& &$-$0.5~~~~&SSB\\ 60956& 108754& 0.4~~~~~& 1.6~~~~~~ &$-$0.3~~~~&SBO\\ 62607& 111515& 3.9~~~~~& 1.3~~~~~~ & 1.5~~~~&SSB\\ 64115& 114095& 77.8~~~~~& & &\\ 64345& 114606& 26.0~~~~~& 0.3~~~~~~ &$-$0.7~~~~&\\ 64426& 114762& 50.5~~~~~& 0.5~~~~~~ & 1.2~~~~&SBO\\ 65982& 117635& $-$50.8~~~~~& 0.1~~~~~~ & 0.7~~~~&SSB\\ 66509& 118659& $-$44.7~~~~~& 1.5~~~~~~ & 0.6~~~~&\\ 66860& 119288& $-$10.9~~~~~& 0.3~~~~~~ & &\\ 72998& 131653& $-$67.8~~~~~& & &\\ 74033& 134113& $-$58.7~~~~~& 0.1~~~~~~ & 1.9~~~~&SBP\\ 74234& 134440& 311.1~~~~~& &$-$0.4~~~~&\\ 74235& 134439& 310.5~~~~~& &$-$0.1~~~~&\\ 80837& 148816& $-$47.6~~~~~& 0.5~~~~~~ & 0.3~~~~&\\ 81170& 149414& $-$152.1~~~~~& & 17.7~~~~&SBP\\ 81461& 149996& $-$36.6~~~~~& &$-$0.7~~~~&\\ 85007& 157466& 34.8~~~~~& 0.4~~~~~~ & &\\ 85378& 158226& $-$73.5~~~~~& &$-$0.1~~~~&\\ 85757& 158809& 4.5~~~~~& & 0.7~~~~&SSB\\ 92532& 174912& $-$13.0~~~~~& 1.1~~~~~~ & 0.2~~~~&\\ 95727& 231510& 5.5~~~~~& 0.4~~~~~~ & 0.7~~~~&\\ 96077& 184448& $-$21.7~~~~~& & 0.4~~~~&\\ 97023& 186379& $-$8.6~~~~~& 0.1~~~~~~ & &\\ 97527& 187637& $-$0.4~~~~~& & &\\ 100792&194598& $-$248.2~~~~~& 0.4~~~~~~ 
&$-$0.5~~~~&\\ 103987&200580& 4.6~~~~~& 0.6~~~~~~ & 6.1~~~~&TP\\ 104659&201891& $-$44.7~~~~~& 1.3~~~~~~ &$-$0.2~~~~&\\ 105888&204155& $-$84.5~~~~~& & 0.1~~~~&\\ 109450&210483& $-$73.2~~~~~& 0.8~~~~~~ & &\\ 109563&210631& $-$12.5~~~~~& & 0.0~~~~&SBP\\ 112229&215257& $-$33.5~~~~~& 1.0~~~~~~ & 0.1~~~~&\\ 112811&216179& $-$4.3~~~~~& &$-$0.2~~~~&\\ 117918&224087& $-$28.3~~~~~& 0.5~~~~~~ &$-$0.1~~~~&SBO\\ \hline \end{tabular} \label{t:velrad} \end{table} Carney et al. (1994) have published radial velocities for 50 of these objects; among them 18 are confirmed or suspected binaries. A comparison of the remaining objects in common (28 and 13 stars for the Asiago and McDonald samples, respectively) shows that there is a systematic shift and a trend between Asiago estimates and Carney et al.'s values, described by the linear regression RV$_{\rm this~paper}-RV_{\rm Carney}= -0.0146 RV -5.8970$, with correlation coefficient $r=-$0.726. The trend and zero point with the McDonald spectra are much smaller: RV$_{\rm this~paper}-RV_{\rm Carney}= -0.0076 RV -0.4278$, $r=-$0.923. We think that the lower resolution of the Asiago spectra is responsible for the rather large offset found between Asiago and Carney et al.'s values. The radial velocities measured from our spectra (corrected according to the equations given above, to tie them to Carney et al.'s system) are listed in Table~\ref{t:velrad}. Also shown are the standard deviations of the means (for stars with multiple observations, Column 4). These standard deviations may give some information on the binary contamination suffered by our sample. The quadratic mean of the rms deviations corresponds to 0.81 km/s, after the previously mentioned 18 known or suspected binaries are eliminated (average on 33 objects). This value drops to 0.57 km/s (average on 32 objects) if we also eliminate HIP 15797, whose $\sigma$ is higher than 2.5 $\times$ the quadratic mean.
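The screening statistic just described (quadratic mean of the per-star rms deviations, with a 2.5$\times$ rejection threshold, as applied to HIP 15797) can be sketched as follows; the data values are illustrative.

```python
import math

def quadratic_mean(sigmas):
    """Quadratic mean (rms) of the per-star standard deviations, in km/s."""
    return math.sqrt(sum(s * s for s in sigmas) / len(sigmas))

def flag_suspects(sigmas, factor=2.5):
    """Flag stars whose sigma exceeds factor x the quadratic mean, as done
    in the text to single out likely unrecognized binaries."""
    qm = quadratic_mean(sigmas)
    return [i for i, s in enumerate(sigmas) if s > factor * qm]
```

With, say, ten stars at $\sigma = 0.5$ km/s and one at 2.9 km/s, only the last star exceeds the 2.5$\times$ threshold and is flagged.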
We suspect that HIP 15797 is likely to be a previously unrecognized binary. We can conservatively assume that our radial velocity measurements are accurate to 1 km/s. Column 5 of Table~\ref{t:velrad} gives the residuals, this paper $-$ Carney et al. (1994), for the stars in common. These residuals point to possible unknown binaries contaminating the sample. Their quadratic mean, once all known or suspected binaries are eliminated, corresponds to 0.62 km/s (average over 32 objects). If we eliminate objects whose residuals are higher than 2.5 $\times$ the quadratic mean we obtain 0.24 km/s (average over 30 stars, discarding HIP 8798 and 20094), and 0.19 km/s if we also eliminate HIP 13366 (average over 29 stars). In summary, besides the 18 already known or suspected binaries, Table~\ref{t:velrad} includes 3 objects whose binarity is highly probable (namely, HIP 8798, 15797, and 20094) and 1 more object (HIP 13366) whose binarity is likely. All these objects are marked by an asterisk in Column 6. Radial velocities measured from the individual spectra of known and newly discovered binaries are available in electronic form upon request to the first author. Overall, we find that 26 out of 66 objects with spectroscopic observations are confirmed or suspected binaries, i.e. $\sim 39 \%$ of the total sample. Of them, 20 are already known binaries and 6 are new possible binaries detected in the present study. We refer to Gratton et al. (1997a) for a more thorough discussion of the binary contamination affecting our sample. \section{Abundance Analysis} \subsection{Atmospheric Parameters} The abundance derivation followed precepts very similar to the reanalysis of $\sim 300$\ field dwarfs and $\sim 150$\ globular cluster giants described in Gratton, Carretta \& Castelli (1997) and Carretta \& Gratton (1997). The same line parameters were adopted.
The atmospheric parameters needed in the analysis were derived according to the following procedure: \begin{enumerate} \item we assumed initial values of $\log g=4.5$\ and metal abundances derived from the $uvby$\ photometry using the calibration by Schuster \& Nissen (1989) \item initial effective temperatures were derived from the $B-V$, $b-y$, and $V-K$\ colours listed in Table 1 using the empirical calibration of Gratton, Carretta \& Castelli (1997) for population I stars (assumed to be valid for [Fe/H]=0), and the abundance dependence given by Kurucz (1993) models. Following Carney et al. (1994), who assumed zero reddening values for all the stars in our sample, colours were not corrected for reddening. \item a first iteration value of $\log g$\ was then obtained from the absolute bolometric magnitude (derived from the apparent $V$\ magnitudes, the parallaxes from Hipparcos, and the bolometric corrections $BC$\ given by Kurucz, 1993), and the masses derived by interpolating in $T_{\rm eff}$\ and [A/H] within the Bertelli et al. (1994) isochrones \item the two last steps were iterated until a consistent set of values was obtained for $T_{\rm eff}$, $\log g$, and [A/H] \item the equivalent widths were then analyzed, providing new values for $v_{\rm t}$\ and [A/H] (assumed to be equal to the [Fe/H] value obtained from neutral lines) \item the whole procedure was iterated until a consistent parameter set was obtained. Note that only $BC$'s and masses are modified, so that convergence is actually very fast. 
\end{enumerate} \begin{table*} \caption{Atmospheric parameters and abundances for the programme stars} \scriptsize \begin{tabular}{rrcccrrcrrcrrr} \hline\hline HIP&~~HD/Gliese&&$T_{\rm eff}$&$\log g$&[Fe/H]&[O/Fe]&[Mg/Fe]&[Si/Fe]&[Ca/Fe]& [Ti/Fe]&~~[$\alpha$/Fe]&~[Cr/Fe]&[Ni/Fe]\\ \\ &&&(K)& & &&&&&&&&\\ \hline \\ \multicolumn{14}{c}{Single stars}\\ 3985&4906&A&5149&3.61&$-$0.65& & & 0.19& 0.42& & 0.30&$-$0.04&0.00\\ 10140&G074-05&M&5755&4.38&$-$0.85& 0.44&0.40& 0.19& 0.22&0.24& 0.26&0.00& 0.02\\ 10140&G074-05&A&5755&4.38&$-$0.78& 0.18& & 0.13& 0.19& & 0.16& 0.02&$-$0.16\\ 10652&14056&A&5647&4.31&$-$0.58& 0.34& & 0.14& 0.34& & 0.24&$-$0.01& 0.06\\ 12306&16397&A&5810&4.28&$-$0.50& 0.47& & 0.24& 0.17& & 0.20& 0.08&$-$0.05\\ 14594&19445&M&6059&4.49&$-$1.91& 0.67&0.32& & 0.36&0.45& 0.38& 0.01&\\ 14594&19445&A&6059&4.49&$-$2.03& & & & 0.46& & 0.46&$-$0.04&\\ 16169&21543&A&5673&4.37&$-$0.50& 0.33& & 0.07& 0.23& & 0.15& 0.01&$-$0.04\\ 16788&22309&A&5873&4.29&$-$0.25& 0.22& &$-$0.07& 0.02& &$-$0.03&$-$0.19&$-$0.15\\ 21272&28946&A&5288&4.55&$-$0.03&$-$0.26& & 0.06& 0.09& & 0.07& 0.11&0.00\\ 21703&29528&M&5331&4.35& 0.15& 0.02& & 0.02& 0.11&0.22& 0.11&$-$0.12& 0.02\\ 21703&29528&A&5331&4.35& 0.03& & & 0.08& 0.04& & 0.06& 0.01& 0.06\\ 30862&45391&A&5707&4.46&$-$0.37& 0.08& &$-$0.05& 0.16& & 0.05& 0.11&$-$0.08\\ 34414&53927&A&4937&4.66&$-$0.37& 0.07& & 0.10& 0.10& & 0.10& 0.02& 0.16\\ 35377&56513&A&5659&4.50&$-$0.38&$-$0.22& & 0.09& 0.20& & 0.14& 0.04& 0.04\\ 36491&59374&A&5903&4.44&$-$0.88& 0.25& & 0.24& 0.30& & 0.27& 0.03&$-$0.01\\ 37335&G112-36&A&5036&2.92&$-$0.82& 0.34& & 0.23& 0.46& & 0.34&$-$0.09&$-$0.06\\ 49615&87838&A&6078&4.30&$-$0.30&$-$0.17& & 0.08& 0.13& & 0.11& 0.24&$-$0.16\\ 49988&88446&A&5935&3.97&$-$0.36& 0.35& & 0.14&$-$0.05& & 0.04&$-$0.08&$-$0.18\\ 50139&88725&M&5695&4.39&$-$0.55& 0.16&0.31& 0.11& 0.22&0.34& 0.24&$-$0.01&$-$0.01\\ 50139&88725&A&5695&4.39&$-$0.57& 0.26& & 0.25& 0.40& & 0.33& 0.00&$-$0.05\\ 54196&96094&A&5879&3.97&$-$0.33& 0.23& & 0.01& 0.03& 
& 0.02&$-$0.13&$-$0.20\\ 57939&103095&M&5016&4.80&$-$1.30& & & & 0.23&0.35& 0.29&$-$0.08&$-$0.06\\ 64115&114095&A&4741&2.69&$-$0.35&$-$0.35& &$-$0.54& 0.39& &$-$0.07& & 0.02\\ 64345&114606&A&5611&4.28&$-$0.39& 0.11& & 0.18& 0.21& & 0.19& 0.08&$-$0.06\\ 66509&118659&A&5494&4.37&$-$0.55& 0.51& & 0.13& 0.03& & 0.08&$-$0.10&$-$0.10\\ 66860&119288&A&6566&4.19&$-$0.27& 0.15& & 0.06& 0.06& & 0.06&$-$0.14&$-$0.02\\ 72998&131653&M&5356&4.65&$-$0.63& 0.36& & 0.30& 0.23&0.41& 0.31& 0.01&$-$0.06\\ 74234&134440&M&4879&4.74&$-$1.28& & & & 0.11&0.18& 0.15& 0.06&$-$0.16\\ 74235&134439&M&5106&4.74&$-$1.30& & & 0.51& 0.15&0.22& 0.29&$-$0.06&$-$0.15\\ 80837&148816&M&5923&4.16&$-$0.64& 0.23&0.33& 0.23& 0.33&0.31& 0.30&$-$0.01&$-$0.03\\ 80837&148816&A&5923&4.16&$-$0.61& 0.39& & 0.19& 0.30& & 0.25& &$-$0.18\\ 81461&149996&M&5726&4.14&$-$0.38& 0.09&0.23& 0.13& 0.29&0.23& 0.22&$-$0.02&$-$0.02\\ 85007&157466&M&6053&4.39&$-$0.32&$-$0.02&0.15& 0.00& 0.10&0.05& 0.08&$-$0.05&$-$0.08\\ 85007&157466&A&6053&4.39&$-$0.38& & &$-$0.01& 0.02& & 0.00&$-$0.21&$-$0.13\\ 85378&158226&A&5803&4.18&$-$0.42& & & 0.44& 0.55& & 0.50& 0.21& 0.04\\ 92532&174912&M&5954&4.49&$-$0.39&$-$0.10&0.03& 0.05& 0.14&0.24& 0.11& 0.02&$-$0.04\\ 92532&174912&A&5954&4.48&$-$0.29& & & 0.15& 0.12& & 0.14& 0.04& 0.03\\ 95727&231510&M&5253&4.59&$-$0.44& 0.34&0.20& 0.14& 0.07&0.16& 0.14& 0.02& 0.00\\ 96077&184448&A&5656&4.20&$-$0.22& & & 0.19& 0.19& & 0.19&$-$0.09&$-$0.15\\ 97023&186379&A&5894&3.99&$-$0.20&$-$0.03& & 0.12& 0.10& & 0.11& 0.13&$-$0.17\\ 97527&187637&A&6169&4.23&$-$0.10& & & 0.02& 0.04& & 0.03&$-$0.01&$-$0.24\\ 100792&194598&M&6032&4.33&$-$1.02& 0.35&0.25& 0.18& 0.28&0.19& 0.08&$-$0.10&$-$0.12\\ 100792&195598&A&6032&4.33&$-$0.99& & & & 0.37& & 0.37& 0.04&$-$0.43\\ 104659&201891&M&5957&4.31&$-$0.97& 0.35&0.32& 0.27& 0.31&0.34& 0.31&$-$0.07&$-$0.10\\ 104659&201891&A&5957&4.31&$-$0.91& 0.47& & 0.28& 0.26& & 0.27&$-$0.08&$-$0.13\\ 105888&204155&A&5816&4.08&$-$0.56& & & 0.18& 0.29& & 0.24&$-$0.02&$-$0.08\\ 
109450&210483&A&5847&4.20&$-$0.03& 0.05& & 0.09& 0.00& & 0.04& 0.02&$-$0.11\\ 112229&215257&A&6008&4.23&$-$0.66& 0.14& & 0.03& 0.24& & 0.14& 0.12&$-$0.01\\ 112811&216179&M&5443&4.46&$-$0.66& 0.45&0.34& 0.23& 0.31&0.28& 0.29& 0.02& 0.01\\ \\ \multicolumn{14}{c}{Known or suspected binaries}\\ 7217&9430&A&5689&4.40&$-$0.34& & & 0.30& 0.38& & 0.34& 0.08&$-$0.09\\ 8798&11505&A&5695&4.31&$-$0.05& & & 0.19& 0.09& & 0.14&0.00& 0.03\\ 10921&G073-44&A&5267&4.40&$-$0.12& & & 0.19& 0.06& & 0.13& 0.49&$-$0.09\\ 13366&17820&A&5849&4.19&$-$0.59& 0.37& & 0.43& 0.37& & 0.40& 0.22&$-$0.01\\ 15394&20512&A&5253&3.72& 0.10&$-$0.55& &$-$0.07& 0.12& & 0.03& 0.21&$-$0.05\\ 15797&G078-33&A&4734&4.68&$-$0.41& & & 0.27& 0.05& & 0.16&$-$0.12&$-$0.01\\ 20094&27126&A&5538&4.39&$-$0.26&$-$0.07& & 0.15& 0.23& & 0.19& 0.03& 0.05\\ 29759&G098-58&M&5432&3.37&$-$1.83& 0.65&0.38& & 0.26&0.25& 0.30&$-$0.27&$-$0.32\\ 29759&G098-58&A&5432&3.37&$-$1.84& & & & 0.43& & 0.43&$-$0.18& \\ 38541&64090&M&5475&4.62&$-$1.49& &0.40& & 0.27&0.28& 0.32&$-$0.05&$-$0.02\\ 38541&64090&A&5475&4.62&$-$1.59& & & & 0.33& & 0.33& 0.08&$-$0.18\\ 48215&85091&M&5698&4.15&$-$0.29& 0.10& & 0.03& 0.12&0.48& 0.21&$-$0.11&$-$0.17\\ 48215&85091&A&5698&4.15&$-$0.50& 0.42& & 0.43&$-$0.02& & 0.21& 0.09&$-$0.15\\ 53070&94028&M&6049&4.31&$-$1.32& 0.43&0.48& 0.52& 0.33&0.24& 0.39&$-$0.12&$-$0.12\\ 53070&94028&A&6049&4.31&$-$1.35& 0.42& & 0.23& 0.19& & 0.21&$-$0.20&$-$0.06\\ 60956&108754&A&5388&4.42&$-$0.58& 0.42& & 0.08& 0.21& & 0.14& 0.15&$-$0.07\\ 62607&111515&A&5446&4.49&$-$0.52& 0.29& & 0.16& 0.04& & 0.10& &$-$0.07\\ 64426&114762&A&5928&4.18&$-$0.66& 0.49& & 0.12& 0.21& & 0.16&$-$0.15&$-$0.12\\ 65982&117635&A&5197&4.10&$-$0.48& 0.43& &$-$0.12& 0.42& & 0.15& &$-$0.11\\ 74033&134113&A&5776&4.11&$-$0.66& 0.29& &$-$0.03& 0.36& & 0.16& 0.20&$-$0.18\\ 81170&149414A&M&5185&4.50&$-$1.14& 0.45&0.53& 0.25& 0.29&0.38& 0.36& 0.04&$-$0.02\\ 85757&158809&M&5527&4.07&$-$0.53& 0.50&0.37& 0.21& 0.29&0.38& 0.31& 0.02&$-$0.04\\ 
103987&200580&A&5934&3.93&$-$0.43& 0.05& &$-$0.06& 0.10& & 0.02&$-$0.07&$-$0.27\\ 109563&210631&A&5785&4.12&$-$0.37& & &$-$0.02& 0.30& & 0.14& 0.20&$-$0.12\\ 117918&224087&A&5164&4.42&$-$0.25& & &$-$0.05& 0.05& & 0.00&$-$0.14&$-$0.05\\ \hline \end{tabular} \label{t:paratm} \end{table*} \normalsize We list in Table~3 the atmospheric parameters adopted for the programme stars, as well as the derived abundances for Fe, the $\alpha-$elements (O, Mg, Si, Ca, and Ti), and the Fe-group elements Cr and Ni. Column 3 of this table indicates the source of the spectroscopic material (A=Asiago, M=McDonald). \normalsize \subsection{Error analysis} Errors in the atmospheric parameters used in the analysis were estimated as follows. Random errors in $T_{\rm eff}$\ can be obtained by comparing temperatures derived from different colours. The mean quadratic error estimated in this way (once the different weights attributed to the colours are considered: $B-V$: weight 1; $b-y$: weight 1; $V-K$: weight 4) is $\pm 45$~K. Systematic errors may be larger: the $T_{\rm eff}$-scale used in this paper is discussed in detail in Gratton, Carretta \& Castelli (1997). We will assume that systematic errors in the adopted $T_{\rm eff}$'s are $\leq 100$~K. Random errors in the gravities may be directly estimated from the errors in the masses (5\%\footnote{This value includes internal as well as external errors. In fact, independently of the adopted isochrone set, for masses of $0.75-0.85~M_{\odot}$, an error of $\sim 4 \times 10^{9}$~yr in the age leads to uncertainties of 0.02-0.03 $M_{\odot}$ in the mass, at $M_{V}=5$ and 4, respectively. Additionally, the ability of different isochrone sets to reproduce the Sun ensures that contributions of external errors, possibly due to incorrect input physics, are unlikely to be larger than an additional $2-3\%$.}), in the $M_V$'s (mean quadratic error of 0.18 mag), and in the $T_{\rm eff}$'s (0.8\%), neglecting the small contribution due to $BC$'s.
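These contributions can be combined in quadrature; an illustrative propagation, assuming the standard relation $g \propto M\,T_{\rm eff}^4/L$ (and neglecting the small $BC$ term), gives
\[
\sigma(\log g) \simeq \left[ \left(\frac{\sigma_M/M}{\ln 10}\right)^{2}
 + \left(\frac{4\,\sigma_{T_{\rm eff}}/T_{\rm eff}}{\ln 10}\right)^{2}
 + \left(0.4\,\sigma_{M_V}\right)^{2} \right]^{1/2}
 \simeq \left(0.022^2+0.014^2+0.072^2\right)^{1/2} \simeq 0.08~{\rm dex},
\]
dominated by the uncertainty in $M_V$.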
Expected random errors in the gravities are 0.09~dex. Systematic errors are mainly due to errors in the $T_{\rm eff}$\ scale and in the solar $M_V$\ value. They are about 0.04~dex. Random errors in microturbulent velocities can be estimated from the residuals around the fitting relation in $T_{\rm eff}$\ and $\log g$. We obtain random errors of 0.47 and 0.17~km~s$^{-1}$\ for Asiago and McDonald spectra, respectively. While systematic errors may be rather large (mainly depending on the adopted collisional damping parameters, but also on the structure of the atmosphere), they are less important in the abundance analysis, since the microturbulent velocity is an empirical parameter derived so that abundances from (saturated) strong lines agree with those provided by (unsaturated) weak lines, which are insensitive to the velocity field. On the other hand, for this same reason the very low values we obtain for the cooler stars should be reexamined more thoroughly before any physical meaning is attributed to them (although convection velocities are indeed expected to be much lower in the photosphere of K-dwarfs, at least insofar as mixing length theory is adopted: Kurucz, private communication). Random errors in the equivalent widths and in the line parameters significantly affect the abundances when few lines are measured for a given species. Roughly speaking, these errors should scale as $\sigma/\sqrt{n}$, where $\sigma$\ is the typical error in the abundance from a single line (0.14~dex for the Asiago spectra, and 0.11 dex for the McDonald ones, as derived from Fe~I lines) and $n$\ is the number of lines used in the analysis (with $14<n<79$). However, errors may be larger if all lines for a given element are in a small spectral range (like e.g.
O, for which only lines of the IR triplet at 7771-74~\AA\ were used), since in this case errors in the $EW$s for individual lines (mainly due to uncertainties in the correct location of the continuum level) are not independent of each other. Furthermore, undetected blends may contribute significantly to errors when the spectra are very crowded: this is expected to occur mainly for the Asiago spectra of cool, metal-rich stars. These limitations should be kept in mind in the discussion of our abundances. \normalsize \begin{table*} \caption{Random errors in the abundances} \begin{tabular}{lcccccccc} \hline \hline Parameter & Unit & Error& [Fe/H]&[Fe/H](II-I)&[O/Fe]&[Si/Fe]&[Ca/Fe]&[Ti/Fe]\\ \hline $T_{\rm eff}$ & (K) & $\pm 45$ & 0.033& 0.056& 0.022& 0.029& 0.008& 0.015\\ $\log g$ & (dex)&$\pm 0.09$& 0.009& 0.048& 0.008& 0.016& 0.012& 0.000\\ ${\rm [A/H]~(A)}$& (dex)&$\pm 0.07$& 0.007& 0.013& 0.019& 0.002& 0.001& 0.007\\ ${\rm [A/H]~(M)}$& (dex)&$\pm 0.04$& 0.005& 0.008& 0.012& 0.001& 0.000& 0.005\\ $v_t$~(A) &(km/s)&$\pm 0.47$& 0.055& 0.008& 0.032& 0.041& 0.008& 0.031\\ $v_t$~(M) &(km/s)&$\pm 0.17$& 0.020& 0.003& 0.012& 0.015& 0.003& 0.011\\ r.m.s. lines (A)& & & 0.024& 0.081& 0.070& 0.083& 0.085& 0.085\\ r.m.s. lines (M)& & & 0.019& 0.041& 0.044& 0.057& 0.039& 0.035\\ \hline total (A) & & & 0.069& 0.110& 0.083& 0.098& 0.087& 0.092\\ total (M) & & & 0.044& 0.085& 0.053& 0.068& 0.042& 0.040\\ \hline \end{tabular} \label{t:err} \end{table*} Internal errors in the model metal abundances were simply obtained by summing in quadrature the errors due to the other sources. Systematic errors can be due to non-solar abundance ratios. We will reexamine this point in Section 3.4. Table~\ref{t:err} gives the sensitivity of the Fe abundances and of the abundance ratios computed in this paper to the various error sources considered above.
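As a simple consistency check, the rows labelled ``total'' in Table~\ref{t:err} are the quadrature sums of the individual contributions; for instance, for the Fe abundances from the Asiago spectra,
\[
\sigma_{\rm [Fe/H]}({\rm A}) = \left(0.033^2+0.009^2+0.007^2+0.055^2+0.024^2\right)^{1/2} \simeq 0.069~{\rm dex}.
\]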
Random errors in the Fe abundances are $\sim 0.07$ and $\sim 0.04$~dex for abundances derived from Asiago and McDonald spectra, respectively. Systematic errors are mainly due to the $T_{\rm eff}$\ scale (which in turn depends on the adopted set of model atmospheres): they are $\sim 0.08$~dex. \subsection{ Errors in abundance analysis due to binarity } An additional problem in the abundance analysis is posed by the presence of known and undetected spectroscopic binaries. This issue is more relevant for dwarfs than for giants, since magnitude differences between the two components are generally smaller for dwarfs. A large variety of possible combinations of components exists. In the following discussion, we will only consider the case of a main sequence primary with a smaller-mass (low main sequence) secondary. This is likely to be the most frequent combination. If neglected, binarity may affect the abundance analysis of the primary components of such systems in various ways: \begin{itemize} \item effective temperatures, derived from the combined light, are underestimated. Panels {\it a} and {\it b} of Figure~\ref{f:figure05} show the run of the differences between temperatures derived from colours of the combined light and from colours of the primary alone, as a function of the difference in magnitude between primary and secondary components, for typical main sequence stars. Temperatures derived from the combined light may be as much as 200~K lower than the temperature of the primary. The effect is more relevant for $V-K$\ colours. In fact, when the magnitude difference is in the range 2-5~mag, temperatures derived from $V-K$\ colours are lower than those derived from $B-V$\ by more than 50~K (see panel {\it c} of Figure~\ref{f:figure05}), raising the possibility of detecting the presence of a companion from its infrared excess (see Gratton et al., 1997a).
When the magnitude difference is lower than 2.5 mag there is no detectable infrared excess, but in some cases the companion may be bright enough to be directly detected in the spectrum\footnote{Indeed, we detected faint lines due to secondaries in the spectra of HIP 48215 and HIP 81170. In both cases the difference in magnitude between the primary and secondary component could be estimated to be about 2.5 mag.}, when the phase or orbital inclination is favourable (if the separation is large, there is a good chance that the star is a known visual binary). On the other hand, when the secondary is more than 5~mag fainter than the primary, temperatures derived from colours are underestimated by $<20$\ and $<70$~K from $B-V$\ and $V-K$\ colours, respectively \item gravities derived from luminosities (via Hipparcos parallaxes) and estimated masses are underestimated because the luminosity is overestimated (a concurrent, smaller effect is due to the lower temperatures). Since gravities are inversely proportional to the luminosities, the effect may be as large as 0.3~dex, but is $<0.04$~dex when the magnitude difference between the two components is $>2.5$~mag. On average, we expect that gravities for binaries with magnitude differences $<6$~mag are underestimated by $0.05-0.10$~dex, depending on the assumed distribution of the luminosity differences between primaries and secondaries. On the other hand, gravities derived from the equilibrium of ionization will also be underestimated, because temperatures are underestimated: gravities from ionization equilibrium are 0.23~dex too low if temperatures are underestimated by 100~K. Panel {\it a} of Figure~\ref{f:figure06} shows the difference between the gravity derived from the combined light and that derived from the primary alone, as a function of the magnitude difference between the primary and secondary component, for typical main sequence stars (again, the secondary is assumed to be a faint main sequence star).
Results obtained from the location in the colour-magnitude diagram are shown as a solid line; those obtained from the ionization equilibrium of Fe are shown as dotted (temperatures from $B-V$\ colour) and dashed lines (temperatures from $V-K$\ colour), respectively. Panel {\it b} of Figure~\ref{f:figure06} displays the run of the differences between the spectroscopic gravities and those from the location in the c-m diagram. In this plot the solid line represents results for temperatures derived from $B-V$, and the dotted line those for temperatures derived from $V-K$\ colours. It is clear that gravities derived from the ionization equilibrium will in most cases be lower than those obtained from the c-m diagram if a secondary component is present; the difference is larger when $V-K$\ colours are used. On average, we expect that for binaries the gravities derived from the equilibrium of ionization should be lower than those derived from the colour-magnitude diagram by 0.07~dex if temperatures are derived from $B-V$, and by 0.18~dex if temperatures are derived from $V-K$. These estimates were obtained by considering only binary systems where the magnitude difference between primary and secondary is $<6$~mag, and by assuming that the secondary components are distributed in luminosity as field stars, i.e. that there is no correlation between the masses of the two components, following the luminosity function of Kroupa, Tout \& Gilmore (1993). The assumption about the luminosity distribution of the secondaries is not critical, though: in fact, if we assume a flat luminosity function we obtain average offsets of 0.09 and 0.20~dex in the gravities derived using temperatures from $B-V$\ and $V-K$, respectively.
\item equivalent widths of lines are also affected, but a quantitative analysis is extremely difficult, since the effect depends on the magnitude difference between primary and secondary, on the velocity difference between the components (in relation to the adopted spectral resolution), and on the selected spectral region. When the lines from the two components are not resolved, the equivalent widths of the primary will be slightly overestimated (lines are stronger in the spectrum of the cooler secondary), partially balancing the effects of the underestimated temperatures. When lines from the two components are resolved, the equivalent widths of the primary will be underestimated. However, these general predictions may be wrong if errors in the positioning of the continuum are also taken into account: due to the higher line density, the continuum will generally be underestimated, thus reducing the equivalent widths. On the whole, errors in abundances as large as 0.2~dex may well be present when the magnitude difference is smaller than a couple of magnitudes (errors are likely much smaller in element-to-element abundance ratios) \end{itemize} Summarizing, abundances derived for binaries are less reliable than those for single stars. We expect that our analysis will underestimate temperatures, gravities, and metal abundances for binaries. In extreme cases errors may be as large as 200~K, 0.3~dex, and 0.2~dex, respectively, although typical values should be much smaller (roughly one third of these values, on average). In general, errors are larger when temperatures from $V-K$\ are used rather than those from blue colours. We caution that corrections for individual cases may be very different from the average ones (and, moreover, other combinations of evolutionary stages exist).
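The luminosity term of the gravity bias discussed above has a simple closed form: since the gravity scales inversely with the adopted luminosity, the extra light of the secondary gives
\[
\Delta \log g = -\log\left(1+10^{-0.4\,\Delta m}\right),
\]
which amounts to $-0.30$~dex for $\Delta m=0$~mag and to $-0.04$~dex for $\Delta m=2.5$~mag, consistent with the limiting values quoted above.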
Rather than applying uncertain corrections, we prefer to keep results obtained from known or suspected binaries clearly distinct from those obtained from {\it bona fide} single stars (see Table 3 and Figure 7). Of course, some {\it bona fide} single star may indeed be a binary, if orbital circumstances were not favourable to its detection. However, in most cases the contamination by possible secondary components is likely to be small for our {\it bona fide} single stars, and we expect that systematic errors due to undetected binaries are on average much smaller than uncertainties in the temperature scale. \begin{figure} \vspace{11.5cm} \special{psfile=figure05.eps vscale=60 hscale=60 voffset=-85 hoffset=-75} \caption{Run of the difference between temperatures derived from $B-V$\ colours (panel {\it a}) and $V-K$\ colours (panel {\it b}) of the combined light for a binary system, and from the colours of the primary alone, as a function of the magnitude difference between primary and secondary, for typical main sequence stars (the secondary is assumed to be a faint main sequence star too). Differences between temperatures derived from $V-K$\ and $B-V$\ are shown in panel {\it c}. 
A vertical dashed line separates regions where double-lined spectroscopic binaries are expected (magnitude differences $<2.5$~mag) from those where binaries can be detected from their infrared excess (magnitude differences in the range from 2.5 to 5 mag); binaries with magnitude differences $>5$~mag are single-lined spectroscopic binaries, or visual binaries, and may easily go unnoticed if extensive and accurate radial velocity observations are not available.} \label{f:figure05} \end{figure} \begin{figure} \vspace{10.5cm} \special{psfile=figure06.eps vscale=70 hscale=70 voffset=-130 hoffset=-100} \caption{Panel {\it a}: difference between gravities derived from the combined light of binary systems and from the primary alone, as a function of the magnitude difference between primary and secondary, for typical main sequence stars (the secondary is assumed to be a faint main sequence star too). Results obtained from the gravities derived from luminosities (via Hipparcos parallaxes) and estimated masses (labelled log g(cmd)) are shown as a solid line; those obtained from ionization equilibrium of Fe are shown as dotted (temperatures from $B-V$\ colour) and dashed lines (temperatures from $V-K$\ colour), respectively. Panel {\it b}: run of the differences between spectroscopic gravities and those from the location in the c-m diagram. In this case the solid line represents results for temperatures derived from $B-V$, and the dotted line those for temperatures derived from $V-K$\ colours. The vertical dashed line has the same meaning as in Figure 5.} \label{f:figure06} \end{figure} \subsection{Fe abundances} Since gravities are derived from masses and luminosities rather than from the ionization equilibrium for Fe, we may test whether the predictions based on LTE are satisfied for the programme stars. This is a crucial test, since several authors (Bikmaev et al.
1990, Magain \& Zhao 1996, Feltzing \& Gustafsson 1998) have suggested that Fe abundances are significantly affected by departures from LTE in late F-K dwarfs and subdwarfs. A proper model atmosphere analysis of the Fe ionization equilibrium must take into account the well-known overabundance of O and $\alpha-$elements in metal-poor stars (see e.g. Wheeler, Sneden \& Truran 1989, and the following subsections of the present paper). This affects abundance derivations mainly in two ways: (i) due to the excess of Mg and Si (which are nearly as abundant as Fe and have similar ionization potentials), more free electrons are available (increasing the continuum opacity due to $H^-$\ and affecting the Saha equilibrium of ionization); and (ii) a stronger blanketing effect occurs (cooling the outer layers of the atmospheres). A full consideration of these effects would require the computation of new model atmospheres with appropriate non-solar abundance ratios; this is beyond the purpose of the present paper. Here, to estimate the impact of the non-solar abundance ratios, we simply assumed that the model atmosphere most appropriate for each star had a metallicity scaling as [(Mg+Si+Fe)/H]. In practice, due to the small number of Mg and Si lines available, we assumed [Mg/Fe]=0.38 and [Si/Fe]=0.32 for [Fe/H]$<-0.5$, and [Mg/Fe]=$-$0.76~[Fe/H] and [Si/Fe]=$-$0.64~[Fe/H] for [Fe/H]$>-0.5$\ (as we will see below, these are representative average values for the programme stars). The net result of the application of these corrections for the metal-poor stars is to increase the Fe~I abundances by $\sim 0.02$~dex, and the Fe~II abundances by $\sim 0.07$~dex; the differences between Fe~I and Fe~II abundances are thus reduced by $\sim 0.05$~dex. Figure~\ref{f:figure07} displays the run of the differences between abundances from neutral and singly ionized Fe lines, corrected for the effect of the excess of $\alpha$-elements, as a function of effective temperature and metal abundance.
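For reference, the adopted enhancements can be written as continuous, piecewise-linear functions of [Fe/H], the two branches joining at ${\rm [Fe/H]}=-0.5$:
\[
{\rm [Mg/Fe]} = \left\{
\begin{array}{ll}
0.38 & {\rm [Fe/H]} \leq -0.5\\
-0.76\,{\rm [Fe/H]} & {\rm [Fe/H]} > -0.5
\end{array}\right.
\qquad
{\rm [Si/Fe]} = \left\{
\begin{array}{ll}
0.32 & {\rm [Fe/H]} \leq -0.5\\
-0.64\,{\rm [Fe/H]} & {\rm [Fe/H]} > -0.5
\end{array}\right.
\]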
Different symbols are used for results obtained from McDonald and Asiago spectra, and for {\it bona fide} single stars and known or suspected binaries. Once appropriate weights are attributed to the individual data points of Figure~\ref{f:figure07} (McDonald spectra have higher weight because their better resolution allowed us to measure a larger number of Fe~II lines, 10--20, with smaller errors in the $EW$s, while, conversely, very few Fe~II lines could be measured in the crowded spectra of cool and/or metal-rich stars observed with the Asiago telescope), the average difference between abundances from Fe I and Fe II lines is only marginally different from zero. Mean differences (FeI$-$FeII) are given in Table~\ref{t:fe1fe2m}. As expected, results for single stars have smaller scatter than those for binaries. Also, the smaller scatter of the higher-quality McDonald spectra is evident. The r.m.s. scatter we get for the {\it bona fide} single stars (0.07 dex for the McDonald spectra, and 0.09 dex for the Asiago ones) agrees well with the expected random errors in temperatures and equivalent widths (see Table~\ref{t:err}). The lower average differences for the Asiago spectra likely reflect some residual contamination of the few Fe~II lines measurable in these lower-resolution spectra. Reversing the results of Table~\ref{t:fe1fe2m}, we conclude that the Fe equilibrium of ionization would provide gravities on average $0.09\pm 0.04$~dex larger than those given by masses and luminosities, with an r.m.s. scatter of 0.13~dex for individual stars (here we only consider results from {\it bona fide} single stars with McDonald spectra; however, results obtained from the other samples are not very different). This small difference could be explained without invoking departures from LTE if the adopted $T_{\rm eff}$\ scale were too high by $\sim 40$~K, well within the quoted error bar of $\pm 100$~K.
\begin{figure*} \vspace{5.5cm} \special{psfile=figure07.eps vscale=70 hscale=70 voffset=-200 hoffset=55} \caption{ Run of the difference between the abundances derived from neutral and singly ionized Fe lines, including correction for non-solar [$\alpha$/Fe] values (see text), as a function of temperature (panel {\it a}) and overall metal abundance (panel {\it b}). Open symbols are abundances obtained from the Asiago spectra; filled symbols are abundances obtained from the McDonald spectra. Squares are {\it bona fide} single stars; triangles are known or suspected binaries.} \label{f:figure07} \end{figure*} \normalsize \begin{table} \caption{Mean differences between abundances given by Fe~I and II lines. For each group of spectra, the number of stars used, the average value, and the r.m.s. scatter of individual values are given.} \scriptsize \begin{tabular}{lcccccc} \hline \hline Group & \multicolumn{3}{c}{McDonald} & \multicolumn{3}{c}{Asiago} \\ \hline Single stars & 15 & $+0.04\pm 0.02$ & 0.07 & 31 & $-0.02\pm 0.02$ & 0.09 \\ Binaries & ~6 & $+0.07\pm 0.03$ & 0.10 & 18 & $-0.01\pm 0.04$ & 0.16 \\ All & 21 & $+0.05\pm 0.02$ & 0.08 & 49 & $-0.02\pm 0.02$ & 0.12 \\ \hline \end{tabular} \label{t:fe1fe2m} \end{table} We therefore conclude that {\bf in our analysis the Saha ionization equilibrium for Fe is well satisfied in late F-K dwarfs of any metallicity}. We must stress, however, that while our gravities determined from masses and luminosities are very robust (expected errors are mainly due to the adopted temperature scale, and are smaller than 0.04 dex), our results on the validity of the ionization equilibrium for Fe directly depend on the adopted model atmospheres and temperature scale, as well as on details of the abundance analysis procedure, such as the adopted oscillator strengths. These issues will be addressed in more detail in the remaining part of this section. We have 9 stars in common with Nissen et al. (1997).
Our gravities average only $0.03\pm 0.01$~dex (9 stars; r.m.s. of 0.02 dex) larger than those of Nissen et al. (1997). This (small) systematic offset is entirely due to our higher temperature scale (higher by $119\pm 16$~K, with an r.m.s. scatter of 48~K). Fuhrmann et al. (1997) derived surface gravities from the wings of strong, pressure-broadened Mg I lines. We have three stars in common with them (HD19445=HIP 14594, HD194598=HIP 100792, and HD201891=HIP 104659). The Fuhrmann et al. (1997) gravities for these stars are lower than those we derive from luminosities and masses by only $0.10\pm 0.02$~dex (r.m.s.=0.04 dex), on the whole supporting the weight Fuhrmann et al. attribute to this gravity indicator. (The temperatures adopted by Fuhrmann et al. are very similar to ours, the differences being on average only $3\pm 11$~K, r.m.s.=19~K.) With these gravities, Fuhrmann et al. (1997) find that abundances from Fe I lines are significantly lower than those given by Fe II lines, and suggest that some overionization of Fe occurs in the atmospheres of solar-type stars, although they do not rule out other possible explanations of this discrepancy. This result is at odds with our findings. For these three stars our abundances from Fe I lines are higher than those from Fe II lines by $0.10\pm 0.03$~dex (r.m.s.=0.05 dex): that is, using the ionization equilibrium we would derive gravities even larger (by $0.19\pm 0.05$~dex) than those we obtain from masses and luminosities. The gravities Fuhrmann et al. derive for these three stars from the equilibrium of ionization of Fe are on average $0.23\pm 0.07$~dex smaller than those obtained by us from masses and luminosities. In order to understand the reasons for this large ($\sim 0.4$~dex) discrepancy between gravities derived with the same method (LTE equilibrium of ionization), we considered the Fe I and Fe II abundances separately.
Once corrected for the effects of the $\alpha-$element enhancement, our Fe II abundances are roughly identical to those derived by Fuhrmann et al. (the difference is $0.00\pm 0.03$~dex, r.m.s.=0.06~dex). However, our abundances from the Fe I lines are $0.17\pm 0.01$~dex (r.m.s.=0.02~dex) larger than those by Fuhrmann et al. (1997). The reason for these differences is not clear. They are not due to differences in the atmospheric parameters (for either the programme stars or the Sun): in fact, if allowance is made for our larger gravities (0.10 dex), smaller microturbulent velocities (0.34~km/s), and higher solar temperature (5770~K rather than 5750~K), we would expect our abundances to be larger than those by Fuhrmann et al. by 0.07~dex and 0.09~dex for Fe~I and Fe~II lines, respectively. Indeed, our Fe I abundances are unexpectedly larger than those of Fuhrmann et al. by 0.10 dex, while those for Fe II are lower by 0.09 dex. While both sets of results should be essentially differential with respect to the Sun, the present analysis differs from that of Fuhrmann et al. in a number of respects: first, different model atmospheres are used; second, we used laboratory oscillator strengths for Fe (see Clementini et al., 1995, for references). Our abundance analysis is differential in the sense that we repeated it for both the programme stars and the Sun. The solar abundance is determined from weak lines, and it is then insensitive to the adopted collisional damping and to uncertainties in the equivalent widths due to extended wings. Unfortunately these lines do not coincide with those used for subdwarfs, and the accuracy of our abundances is determined by the reliability of the $gf$-scale. Fuhrmann et al. preferred to use solar $gf$'s: the same line list is then used for both the Sun and the programme stars. 
However, since lines are much weaker in subdwarfs than in the Sun, solar $gf$'s for those lines measurable in the subdwarf spectra are sensitive to the adopted damping parameters and to the accuracy of continuum location (see Anstee, O'Mara \& Ross, 1997). Unfortunately, errors induced by these uncertainties do not cancel out for metal-poor dwarfs, because lines are weak in the spectra of these stars and damping is unimportant. Lacking more details about the line parameters used by Fuhrmann et al., it is not clear which analysis should be preferred, but it is evident that systematic differences as large as $\sim 0.1$~dex may well be present in the derived abundances. Caution should be used when considering gravities derived from ionization equilibrium, since the results are influenced by the adopted temperature scale, by uncertainties in the model atmospheres, and by the line parameters. However, evidence for iron overionization in the photosphere of subdwarfs is weak. In fact, both our results and those of Fuhrmann et al. clearly exclude departures from LTE affecting the abundances by a factor larger than 2. A further strong argument against a significant Fe overionization in subdwarfs comes from the extensive, consistent, statistical equilibrium calculations for Fe in stellar atmospheres over a wide range of temperatures and gravities by Gratton et al. (1997b). These authors normalized the uncertain collisional cross sections in order to reproduce observations for RR Lyraes. Since these are warm, low-gravity, metal-poor stars, overionization is expected to be much larger than in late F-K dwarfs. The lower limit to the collisional cross sections given by the absence of detectable overionization in RR Lyrae spectra (Clementini et al. 1995) implies that LTE should be a very good approximation for the formation of Fe lines in dwarfs. 
\subsection{O and $\alpha-$element abundances} Oxygen abundances were derived from the permitted IR triplet, and include non-LTE corrections computed for every line in each star following the precepts of Gratton et al. (1997b). We found that O and the other $\alpha-$elements are overabundant in all metal-poor stars in our sample (see panels {\it a} and {\it b} of Figure~\ref{f:figure08}). The average overabundances in stars with [Fe/H]$<-0.5$ are: $$ {\rm [O/Fe]}= +0.38\pm 0.13$$ $$ {\rm [\alpha/Fe]}= +0.26\pm 0.08,$$ where the errors are the r.m.s. scatters of the individual values around the mean, and not the standard errors of the mean (which are 0.02~dex in both cases). The moderate value of the O excess derived from the IR permitted lines is a consequence of the rather high temperature scale adopted (see also King 1993), which directly stems from the use of the Kurucz (1993) model atmospheres and colours. If this procedure is adopted, abundances from permitted OI lines agree with those determined from the forbidden [OI] and the OH lines (Carretta, Gratton \& Sneden 1997). \begin{figure} \vspace{12.0cm} \special{psfile=figure08.eps vscale=70 hscale=70 voffset=-115 hoffset=-100} \caption{Run of the overabundances of O (panel {\it a}) and $\alpha-$elements (panel {\it b}) as a function of [Fe/H] for the programme subdwarfs. Filled squares are abundances from McDonald spectra; open squares are abundances from Asiago spectra } \label{f:figure08} \end{figure} \subsection{Cr and Ni abundances} \begin{figure} \vspace{12.0cm} \special{psfile=figure09.eps vscale=70 hscale=70 voffset=-115 hoffset=-100} \caption{Runs of the underabundances of Cr (panel {\it a}) and Ni (panel {\it b}) as a function of [Fe/H] for the programme subdwarfs. Filled squares are abundances from McDonald spectra; open squares are abundances from Asiago spectra } \label{f:figure09} \end{figure} Figure~\ref{f:figure09} displays the run of the [Cr/Fe] and [Ni/Fe] abundance ratios with [Fe/H]. 
Cr and Ni are very slightly deficient in the most metal-poor stars of our sample. \section{Comparison with previous work} \normalsize \begin{table} \begin{minipage}{160mm} \caption{Average abundances in metal-poor stars ([Fe/H]$<-0.8$)} \scriptsize \begin{tabular}{lrrcrrcrc} \hline \hline Element&\multicolumn{3}{c}{McDonald}&\multicolumn{3}{c}{Asiago}&Others&Ref\\ \hline ${\rm [O/Fe]}$ & 7 & 0.48 & 0.13 & 5 & 0.33 & 0.12 & 0.45 & 1 \\ ${\rm [Mg/Fe]}$ & 8 & 0.38 & 0.09 & ~ & ~ & ~ & 0.38 & 1 \\ ${\rm [Si/Fe]}$ & 6 & 0.32 & 0.15 & 5 & 0.22 & 0.06 & 0.30 & 2 \\ ${\rm [Ca/Fe]}$ & 11 & 0.26 & 0.07 & 9 & 0.33 & 0.11 & 0.29 & 2 \\ ${\rm [Ti/Fe]}$ & 11 & 0.28 & 0.09 & ~ & ~ & ~ & 0.28 & 2 \\ ${\rm [Cr/Fe]}$ & 12 & $-$0.05 & 0.09 & 9 & $-$0.05 & 0.10 & $-$0.04 & 2 \\ ${\rm [Ni/Fe]}$ & 10 & $-$0.10 & 0.10 & 7 & $-$0.15 & 0.14 & $-$0.04 & 2 \\ \hline \end{tabular} \medskip \medskip 1. Carretta, Gratton \& Sneden 1998; 2. Gratton \& Sneden 1991 \label{t:mean} \end{minipage} \end{table} A vast literature exists for some of the stars in our list. In general, agreement is quite good once differences in the atmospheric parameters and in the solar analysis are taken into account. The abundances presented in this paper are on the same scale as those of Gratton et al. (1998), to which we refer for a thorough comparison with results from other authors. The overabundances of O and $\alpha-$elements we find for the field subdwarfs are also similar to the excesses found for the globular cluster giants (apart from those stars affected by the O-Na anticorrelation, see Kraft 1994). Table~\ref{t:mean} gives the average element-to-iron abundance ratios for metal-poor stars ([Fe/H]$<-0.8$) obtained in this paper, along with the number of stars and the r.m.s. scatter of individual stars around the mean value. Averages are computed separately for McDonald and Asiago spectra. 
For comparison, we also give the same values derived from the analysis of Carretta, Gratton \& Sneden (1998: O and Mg) and Gratton \& Sneden (1991: all other elements). These authors use an abundance analysis technique similar to that described in this paper, but different observational material. The agreement is excellent, in particular for the McDonald spectra. \section{Calibration of the photometric abundances} Once combined with the abundances obtained by Gratton, Carretta \& Castelli (1997), the sample of late F to early K-type field stars with internally homogeneous and accurate high dispersion abundances includes nearly 400 stars. Although the number of entries in this database may seem quite large, the vast majority of stars of these spectral classes (the most useful in galactic evolution studies, and those to be used to derive ages for globular clusters via main-sequence fitting techniques) present in the HIPPARCOS catalogue still lack accurate metal abundances. However, Schuster \& Nissen (1989) have shown that fairly accurate metal abundances for late F to early K-type stars can be obtained using the Str\"omgren $uvby$\ photometry, which is available for a considerable fraction of the HIPPARCOS stars. Furthermore, the extensive binary search by Carney et al. (1994) has provided a large number of metal abundances derived from an empirical calibration of the cross correlation dips for metal-poor dwarfs. Finally, metal abundances from an index based on the strength of the Ca~II K line in low dispersion spectra have been obtained by Ryan \& Norris (1991). While the small scatter of the correlations with our abundances (see below) testifies to the efforts made by these authors to accurately calibrate their indices, their metallicity scales were at the mercy of a heterogeneous collection of literature studies based on high dispersion analysis. 
In most cases these calibrating abundances were derived using model atmospheres different from Kurucz (1993), and unable to provide solar abundances in agreement with e.g. the meteoritic value. It is often difficult to establish what solar abundances were adopted in these abundance systems; it is therefore worthwhile to recalibrate these abundance scales. The Schuster \& Nissen (1989) and Carney et al. (1994) abundances correlate very well with our high dispersion results. Schuster \& Nissen's abundances differ from ours only by a zero-point offset (see panel {\it a} of Figure~\ref{f:figure10}); the mean difference is: $${\rm [Fe/H]}_{\rm us}={\rm [Fe/H]}_{\rm SN} + (0.102\pm 0.012),$$ based on 152 stars (the r.m.s. scatter for a single star is 0.151~dex). Note that here we considered all the stars having high dispersion abundances from Gratton, Carretta \& Castelli (1997) and the present work, and for all these stars we derived abundances following the precepts of Schuster \& Nissen (1989). In the case of Carney et al. (1994: panel {\it c} of Figure~\ref{f:figure10}), a small linear term is also required. The best fit line (based on 66 stars) is: $${\rm [Fe/H]}_{\rm us}=(0.935\pm 0.032){\rm [Fe/H]}_{\rm Carney~et~al.} + (0.181\pm 0.173),$$ where the error on the intercept is the r.m.s. scatter around the best fit line. Finally, the scatter is somewhat larger for the abundances determined by Ryan \& Norris (1991: panel {\it b} of Figure~\ref{f:figure10}). Also, the range where these abundances are available is quite restricted ([Fe/H]$<-1$), because the index used saturates for more metal-rich stars. Hence, only the offset can be determined. The best calibration we get (excluding one star, G059-27, which gives discrepant results) is: $${\rm [Fe/H]}_{\rm us}={\rm [Fe/H]}_{\rm Ryan~\&~Norris} + (0.40\pm 0.04),$$ with an r.m.s. scatter of 0.23 dex for individual stars. 
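The three relations above are simple linear maps onto our high-dispersion scale and can be applied programmatically. In the sketch below, the dictionary layout and function name are our own illustrative choices, not part of the original calibrations:

```python
import numpy as np

# Linear (re)calibrations quoted in the text, mapping literature
# metallicities onto the high-dispersion scale of this paper:
#   [Fe/H]_us = slope * [Fe/H]_lit + offset
CALIBRATIONS = {
    "schuster_nissen": (1.0,   0.102),  # zero-point offset only
    "carney":          (0.935, 0.181),  # small linear term required
    "ryan_norris":     (1.0,   0.40),   # offset only; valid for [Fe/H] < -1
}

def to_our_scale(feh_lit, source):
    """Convert a literature [Fe/H] value to the scale of this paper."""
    slope, offset = CALIBRATIONS[source]
    return slope * np.asarray(feh_lit, dtype=float) + offset
```

For example, a Schuster \& Nissen metallicity of $-1.50$ maps to $-1.398$ on our scale.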
\begin{figure} \vspace{13cm} \special{psfile=figure10.eps vscale=60 hscale=60 voffset=-80 hoffset=-75} \caption{ Comparison between the abundances obtained from high dispersion spectra (present analysis or Gratton, Carretta \& Castelli, 1997), and those provided by the original calibration of Schuster \& Nissen (1989, panel {\it a}), Ryan \& Norris (1991, panel {\it b}), and Carney et al. (1994, panel {\it c}) } \label{f:figure10} \end{figure} The offsets between the high dispersion abundances and those provided by the above metallicity indicators are mainly due to different assumptions about the solar abundances in the high dispersion analyses originally used in the calibrations. In most cases, the stellar abundances were derived using the MARCS grid of model atmospheres (Gustafsson et al. 1975), while the solar abundances obtained with the empirical model of Holweger \& M\"uller (1974) were used. Since the MARCS solar model is much cooler in the line forming region than that of Holweger \& M\"uller, there is an offset of $\sim 0.15$\ dex between solar and stellar abundances. This offset cancels out when consistent model atmospheres are used for the Sun and the stars (e.g. the scaled solar atmospheres sometimes used, or model atmospheres extracted from the same grid). Given the heterogeneous nature of the calibrating samples, it is not possible to go into further detail. Once corrected to place them on our scale, errors (derived from the r.m.s. scatter of differences with our estimates) are 0.15~dex for abundances from the Str\"omgren photometry and 0.18~dex for those derived from Carney et al. (1994). For the stars having both (independent) estimates (a large fraction of the over one thousand metal-poor dwarfs in the full HIPPARCOS catalogue) errors are as low as 0.12~dex. 
\section{Discussion and Summary} We have collected literature photometric data for a sample of 99 dwarfs with parallaxes measured by the Hipparcos satellite, and high resolution spectra have been obtained for 66 of them. The photometric data were selected through a careful revision of the literature data, and were compared and implemented with the Hipparcos/Tycho photometric measurements in order to get a homogeneous and accurate photometric data-base that includes Johnson $V$, $B-V$, $V-K$, Cousins $V-I$ and Str\"omgren $b-y$'s. Typical accuracies are 0.014, 0.011, 0.016 and 0.022~mag in $V$, $B-V$, $V-K$ and $V-I$, respectively. The spectroscopic data set consists of high dispersion ($15\,000<R<60\,000$), high $S/N$\ ($>200$) spectra obtained at the Asiago and McDonald Observatories. They were used to measure radial velocities and to derive high accuracy abundances of Fe, O, and the $\alpha-$elements Mg, Si, Ca, and Ti for the programme stars, according to a procedure totally consistent with that used in Gratton, Carretta \& Castelli (1997, $\sim 300$~field stars) and Carretta \& Gratton (1997; giants in 24 globular clusters). This large and homogeneous photometric and spectroscopic data base has been used to derive accurate ages of galactic globular clusters (Gratton et al., 1997a). \bigskip\bigskip\noindent ACKNOWLEDGEMENTS The Hipparcos data used in the present analysis were the result of the FAST proposal n. 022; we are grateful to P.L. Bernacca for allowing us to have access to them and for continuous help in the related procedures. We wish to thank G. Cutispoto for his collaboration in the data acquisition. E. Carretta gratefully acknowledges the support of the Consiglio Nazionale delle Ricerche. The financial support of the {\it Agenzia Spaziale Italiana} (ASI) is also gratefully acknowledged. C. Sneden was supported by NSF grants AST-9315068 and AST-9618364. This research has made use of the SIMBAD data-base, operated at CDS, Strasbourg, France.
\section{Results} \subsection{Percolation model} A discrete-time quantum walk (DTQW), defined in analogy with a random walk, is a particular quantum mechanical process on a prescribed graph, consisting of iterated applications of a unitary operator, usually called a step, which factorizes as $\hat{U}=\hat{S}\hat{C}$. The \textit{coin operator} $\hat{C}$ modifies the walker's internal coin state and is crucial for the non-trivial quantum dynamics, while the \textit{shift operator} $\hat{S}$ implements transitions across the links of the graph in dependence on the internal state. The extension of quantum walks to dynamical percolation graph structures leads to the concept of percolation quantum walks \cite{kollar_asymptotic_2012} (PQW). Here, a finite set of vertices is considered, where at each step a graph with an edge configuration $\kappa$ is probabilistically chosen from all possible configurations $\mathcal{K}$. On the graph with configuration $\kappa$ the dynamics are defined in analogy to the DTQW. At the gaps, the shift operator $\hat{S}$ is modified by inserting reflection operators, leading to the shift operator \cite{kollar_asymptotic_2012} $\hat{S}_{\kappa}$. The probabilistic nature of the choice of the configuration $\kappa$ models an open system dynamics. The evolution of the walker's state from $\hat{\rho}(n-1)$ to $\hat{\rho}(n)$ is described by the random unitary map (RUM) \begin{equation} \hat{\rho}(n) = \sum_{\kappa \in \mathcal{K}} p(\kappa,\mathfrak{p}) \left(\hat{S}_{\kappa} \hat{C}\right) \hat{\rho}(n-1) \left(\hat{S}_{\kappa} \hat{C}\right)^{\dagger}, \label{RUO_iteration} \end{equation} where $p(\kappa,\mathfrak{p})$ is the probability of each configuration $\kappa$. Generally, it is assumed that open dynamics results in the gradual loss of information about the initial state, destroying all coherence. 
The system evolution under a RUM contradicts this intuition and can result in a variety of non-trivial asymptotic states~\cite{kollar_asymptotic_2012} attained after a dynamically rich transient regime. The typical characteristics already manifest themselves for a graph describing a linear chain. For their experimental observation we needed to design an apparatus able to provide the following capabilities: first, the implementation of finite graph structures along with the dynamical creation or removal of edges between vertices; second, the easy and quick reconfigurability of the apparatus for the collection of data over the large configuration space $\mathcal{K}$ in a short time; third, full access to the coin state to track coherences in the walker's state during its evolution. \subsection{Experimental realisation} We based our simulator on the time-multiplexing technique \cite{schreiber_photons_2010, schreiber_2d_2012}. Thus it inherits advantageous features such as remarkable resource efficiency, excellent access to all degrees of freedom throughout the entire time evolution, and stability sustained over many consecutive measurements, providing sufficient statistical ensembles. As before, the input state is prepared by weak coherent light at the single photon level, which is appropriate for studying any single particle properties of our walk \cite{knight_quantum_2003}. Our detection apparatus is adapted to single photon detection, which makes our interference circuit compatible with future multi-particle experiments with coincidence detection. The greatest challenge in this experiment has been the implementation of the dynamically changing shift operator $\hat{S}_{\kappa}$, which realises the reflecting boundary conditions as well as the dynamical creation of edges between vertices. The implementation of the walk is based on a loop architecture where the walker is realised by an attenuated laser pulse \cite{schreiber_photons_2010, schreiber_2d_2012}. 
Its polarization, expressed in the horizontal and vertical basis states $\ket{H}$ and $\ket{V}$, is used as the internal quantum coin and manipulated by standard linear elements, performing the \textit{coin operation} $\hat{C}$. Different fibre lengths in the loop setup introduce a well defined time delay between the polarisation components, where different position states are uniquely represented by discrete time bins (mapping the position information into the time domain). To attain repeated action, we have completed the apparatus with a loop geometry that consists of the two paths A and B (see Fig.~1), similarly to the 2d quantum walk \cite{schreiber_2d_2012}. In contrast to previous experiments, here one full step of the PQW is executed by two round-trips in the loop architecture, alternating between paths A and B. In addition to the standard half-wave plate (HWP) in path $A$ (red area), we include a fast electro-optic modulator (EOM) in path B (green area), which now allows us to change the underlying graph structure and defines the additional \textit{graph operation} $\hat{G}_{\kappa}$. It is embedded between two partial shifts $\hat{S}$ making up a full shift operation as $\hat{S}_{\kappa} = \hat{S} \hat{G}_{\kappa} \hat{S}$, thus implementing the unitary $\hat{U}_{\kappa} = \hat{S}_{\kappa} \hat{C}$. The EOM is programmed to perform the transmission $\hat{T}$ or reflection operation $\hat{R}$ depending on whether a link is present or absent at the particular time-encoded position in the configuration $\kappa$. Thus, changing the structure or size of the graph requires only a reprogramming of the timings delivered to the EOM. Detection at each step by a pair of avalanche photo diodes gives us access to the time evolution in the coin as well as in the position degree of freedom. 
Aiming for the reconstruction of the (reduced) density matrix $\hat{\sigma}(n)$ of the coin at the $n$th step, we perform a full tomography \cite{james_measurement_2001} of the coin state and demonstrate its evolution over six full steps. We test our simulator by performing a PQW with a Hadamard coin on a graph consisting of three vertices and at most two links. This choice of system size enables us to observe all relevant dynamical features within the limits on the number of possible iterations set by round-trip losses. The sample space for the complete dynamics over $n$ steps corresponds to the set $\mathcal{K}^n$ of all possible patterns of length $n$. A restriction to the configurations $\mathcal{K}'$, obtained from $\mathcal{K}$ after removing graphs with both links present or both absent, reduces the size of the experimental sample space to $64$ for a $6$ step dynamics, while leaving the asymptotic behaviour unaffected (see supplementary material). We realise all configuration patterns from $\mathcal{K}^{\prime 6}$, which corresponds to a link probability $\mathfrak{p}=1/2$. The transmission $\hat{T}$ and reflection $\hat{R}$ operations realized by the EOM in the setup yield stationary asymptotic dynamics characterized by the single asymptotic state $\hat{\rho}_{\infty}$ being the identity. The study of the distance between the completely mixed coin state $\hat{\sigma}_{\infty} = \frac{1}{2}(\ket{H}\!\bra{H} + \ket{V}\!\bra{V})$ and our measured $\hat{\sigma}(n)$ thus yields two kinds of information. First, it allows us to track how far the system is from the asymptotic state; second, any increase of the distance from the stationary state, which in our case is the completely mixed state, is a clear signature of non-Markovian evolution in the coin space. 
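The RUM of Eq.~(\ref{RUO_iteration}) for this three-vertex, two-link configuration can be sketched numerically. The snippet below is our own simplified model: the shift with reflections is built directly, rather than via the experimental decomposition $\hat{S}\hat{G}_{\kappa}\hat{S}$, the reflection convention follows the operator $\hat{R}$ quoted in the Methods section, and all function names are illustrative.

```python
import numpy as np

H, V = 0, 1          # coin basis states
NPOS = 3             # three vertices, two possible links

def idx(x, c):
    """Position-major index of basis state |x, c>."""
    return 2 * x + c

def shift_kappa(links):
    """Shift with reflections for edge configuration `links`,
    a pair of booleans for the links (0,1) and (1,2)."""
    S = np.zeros((2 * NPOS, 2 * NPOS), dtype=complex)
    for x in range(NPOS):
        # H tries to move right, V to move left; closed links reflect
        if x + 1 < NPOS and links[x]:
            S[idx(x + 1, H), idx(x, H)] = 1
        else:
            S[idx(x, V), idx(x, H)] = 1j   # reflect: H -> iV
        if x - 1 >= 0 and links[x - 1]:
            S[idx(x - 1, V), idx(x, V)] = 1
        else:
            S[idx(x, H), idx(x, V)] = 1j   # reflect: V -> iH
    return S

HAD = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
C = np.kron(np.eye(NPOS), HAD)                 # Hadamard coin operation
CONFIGS = [(True, False), (False, True)]       # the restricted set K'

def rum_step(rho):
    """One application of the random unitary map, p = 1/2 per pattern."""
    out = np.zeros_like(rho)
    for links in CONFIGS:
        U = shift_kappa(links) @ C
        out += 0.5 * U @ rho @ U.conj().T
    return out

def coin_state(rho):
    """Partial trace over position: reduced 2x2 coin state."""
    sigma = np.zeros((2, 2), dtype=complex)
    for x in range(NPOS):
        sigma += rho[2 * x:2 * x + 2, 2 * x:2 * x + 2]
    return sigma

# Walker starts at the central vertex, horizontally polarised
psi0 = np.zeros(2 * NPOS, dtype=complex)
psi0[idx(1, H)] = 1
rho = np.outer(psi0, psi0.conj())
for n in range(6):
    rho = rum_step(rho)
sigma = coin_state(rho)
# Hilbert-Schmidt distance Tr[(sigma - sigma_inf)^2]:
# 0.5 for a pure coin state, 0 for the completely mixed state
dist = np.real(np.trace((sigma - np.eye(2) / 2) @ (sigma - np.eye(2) / 2)))
```

Tracking `dist` step by step reproduces, in this toy model, the non-monotonic approach to the completely mixed state discussed below.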
\subsection{Finite graphs} The individual analyses of the experimental measurement results for each of the 64 patterns can be used to reveal the accuracy to which the step operators $\hat{S}_{\kappa}\hat{C}$ were realized. Residual populations outside the positions $-1, 0$ and $1$ constituted less than $2\,\%$ on average, confirming the realization of a finite graph. Since an unconfined walker would have spread over a length of 12 sites over the 6 steps, the strong confinement to three sites, achieved by a programmable boundary and not by a fixed one \cite{meinecke_coherent_2013}, is remarkable. For horizontally polarised initial states, the experimentally obtained spatial distributions are displayed in Fig.~2 for selected configuration patterns, demonstrating the precision of the implementation of the dynamically changing graph structure. (See the supplementary material and extended data figures for vertically polarised input.) \subsection{Quantum percolation walk} We implement the open system dynamics by averaging tomographic data over 64 patterns at each step $n$, corresponding to taking the average over a fluctuating external field \cite{Alicki_opensystems_2007}. The open system dynamics arises from the loss of knowledge about the external field, and not from a coupling to some external quantum heat bath. We reconstruct the reduced coin state $\hat{\sigma}(n)$ by determining the Stokes parameters presented in Fig.~3a. The measured parameters (red lines) are compared both to the ideal model (dotted lines) and to a realistic model incorporating the systematic errors present in the experiment (blue lines). All Stokes parameters are in very good agreement with the theoretical models: $S_1(n)$ and $S_3(n)$ show the oscillatory behaviour, and $S_2(n)$ is zero within the error bars. The systematic errors lead to small deviations only in the amplitude but not in the qualitative form and oscillation periods compared to the ideal model theory. 
Details on the realistic model and the errorbars can be found in the Methods section. In Fig.~3b we present the Hilbert--Schmidt distance \cite{buzek_quantum_1996} of our measured density matrix $\hat{\sigma}(n)$ from the completely mixed state $\hat{\sigma}_{\infty}$. \section{Discussion} The initially pure reduced state $\hat{\sigma}(0)=\ket{H}\!\bra{H}$ (at distance $0.5$, not shown on the plot) becomes completely mixed in a single step; however, the distance soon increases again. The observed curve is part of an oscillatory evolution \cite{kollar_asymptotic_2012} that eventually decays to the maximally mixed state for the set of coin operators used in the experiment. The non-Markovian behaviour reflected in the revival from the completely mixed state witnesses that sufficient coherence between the position and coin degrees of freedom survives the averaging over 64 patterns. The excellent agreement with the realistic model proves that the evolution is dominantly dictated by the controlled random unitary evolution, and that other sources of decoherence, such as dephasing, are negligible. In summary, we have demonstrated the percolation quantum walk over 6 steps using a quantum simulator exploiting enhanced time-multiplexing techniques. As a highlight, our system is capable of realizing the walk on arbitrary, dynamically changing linear graph structures in a programmable way. By its design the device allows access to internal and external degrees of freedom, facilitating the study of their intricate interplay, in particular revealing the exchange of coherences. The clear revival of coherences in the coin state obtained by tomographic measurements confirms the precise control of the open system dynamics and proves the sustained high stability of the system. Our work paves the way for studying the coherence properties of systems with changing connectivity, relevant for materials that resemble grainy or porous substances in structure and function. 
While losses restrict our proof-of-principle experiment, there is no geometric limitation on the size of the graph. Classical light sources and amplification can be used for studying coherence properties over 300 steps \cite{wimmer_optical_2013}. Prospective phenomena to investigate experimentally include boundary induced effects such as edge states \cite{kollar_discrete_2014} and non-trivial asymptotic behaviour, transport on percolation structures, and critical phenomena in higher dimensions. The introduction of multiple single photon states in a system without amplifiers will open the route for the full experimental exploration of quantum interference effects in percolated media. \section{Methods} \subsection{Experimental setup.} The laser used in the experiment is a diode laser with a central wavelength of 805\,nm. It produces pulses of approximately 88\,ps FWHM duration which are attenuated by several neutral density filters to a level of about 135 photons per pulse after the incoupling mirror of the experiment. This leads to only 1.2 photons arriving at the detectors in the first step relevant to our measurements, whereas the overall round trip losses sum up to 50\,\%. The main contributions are the coupling losses at the fibres and the losses at the incoupling and outcoupling mirrors, where we probabilistically couple 0.2\,\% into the setup and 7\,\% out at each round trip. The repetition rate is variable and chosen with respect to the duration of a full quantum walk. To realise the partial shift, two single mode fibres of 135\,m and 145\,m length have been used, leading to a position separation of 46\,ns. This allows for 13 occupied positions without any overlap and signifies the maximum possible system size with this specific set of fibres. The EOM and its characteristics are discussed in the next section. 
The detectors used are silicon-based avalanche photo diodes operating in Geiger mode with a dead time of about 50 ns and detection efficiencies around 65\,\%. The single photon detectors were chosen since their dynamic range is more accessible in comparison to regular photo diodes, and also as a preparation for future genuine single photon input. \subsection{Characteristics and description of the EOM. \label{EOM}} The operation of the electro-optic modulator (EOM) in our experiment is based on the Pockels effect. It has a rise time of below 5\,ns and can perform consecutive switchings less than 50\,ns apart, which is comparable to the distance between neighbouring positions in the quantum walk. The switched pattern can be an arbitrary non-periodic signal, although some technical restrictions apply. It consists of two identical birefringent crystals with their optical axes rotated relative to each other by $90^\circ$ to compensate for their natural birefringence inducing a phase $\varphi$. By applying a voltage $U$ an additional phase retardation $\phi_U$ can be achieved. The pair of crystals is rotated by $45^\circ$ with respect to the horizontal and vertical polarisation axes defined by the polarising beam splitters in our setup. The action of the EOM on arbitrary polarization states is given in the $\{ \ket{H}, \ket{V} \}$ basis by the matrix \begin{eqnarray}\nonumber \hat{G}_\mathrm{EOM}(U) &=& R(45^\circ) \cdot \hat{G}_\mathrm{crystal~1} \cdot \hat{G}_\mathrm{crystal~2} \cdot R(-45^\circ) \\ &=& \frac{1}{2} \begin{pmatrix} 1 & -1 \\ 1 & 1 \\ \end{pmatrix} \begin{pmatrix} e^{i \phi_U} & 0 \\ 0 & e^{i \varphi} \\ \end{pmatrix} \begin{pmatrix} e^{i \varphi} & 0 \\ 0 & e^{-i \phi_U} \\ \end{pmatrix} \begin{pmatrix} 1 & 1 \\ -1 & 1 \\ \end{pmatrix} \\ \nonumber &=& e^{i \varphi} \cdot \begin{pmatrix} \cos(\phi_U) & i \sin(\phi_U) \\ i \sin(\phi_U) & \cos(\phi_U) \\ \end{pmatrix}. 
\label{eq:C_EOM_U} \end{eqnarray} For $\phi_U = 0$ (at $U = 0$) the EOM realises the transmission operator $\hat{T}$, and for an appropriate choice of $U$ yielding $\phi_U = \frac{\pi}2$ the reflection operator $\hat{R}$, with \begin{equation} \begin{split} \hat{T} = \begin{pmatrix} 1 & 0 \\ 0 & 1\\ \end{pmatrix},~~~~ \hat{R} = \begin{pmatrix} 0 & i \\ i & 0 \\ \end{pmatrix}. \end{split} \label{eq:C_EOM_T} \end{equation} \subsection{The realistic model and calculation of errorbars.} We have identified four sources of systematic errors to define a realistic model of our experiment: first, the detector and power dependent detection efficiencies, which were determined in a separate measurement; second, the different losses experienced in different paths due to dissimilar coupling efficiencies and path geometries, which were similarly estimated in an independent measurement with an accuracy of $\pm 2\,\%$; third, the transmission through the (switched) EOM, which is greater than $98\,\%$ but not exactly known; fourth, the angle of the HWP defining $\hat{C}$, which can be set only with a precision of $0.2^\circ$. The power dependence of the detector efficiencies is constant from the second step on, since the power in our experiment drops exponentially from step to step. To keep the number of parameters small, the resulting correction factor for the final steps was applied as a global correction factor, yielding larger errors, and hence larger errorbars, for the first step. For the determination of the parameters of the other three errors we resorted to a numerical model. We manually varied the parameters in the ranges suggested by the corresponding independent measurement results and device specifications. The parameters yielding the best fit within these ranges were chosen for the realistic model presented in the figures. The mean deviation of the statistics produced by a Monte Carlo scan of the parameters within these ranges was used to determine the size of the errorbars. 
For the first step, the errorbars produced by the Monte Carlo simulation vanish due to a symmetry, leaving the aforementioned deviation of detection efficiencies as the only source of error.\\
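As an arithmetic cross-check of Eq.~(\ref{eq:C_EOM_U}), the four-matrix product can be evaluated numerically. The following sketch is our addition (not part of the experiment); the value chosen for the natural birefringence phase $\varphi$ is arbitrary. It confirms the closed form and the $\hat{T}$/$\hat{R}$ limits at $\phi_U = 0$ and $\phi_U = \pi/2$:

```python
import numpy as np

def eom(phi_U, phi):
    """Jones matrix of the rotated crystal pair, as in Eq. (C_EOM_U)."""
    R45 = 0.5 * np.array([[1, -1], [1, 1]])      # includes the global factor 1/2
    Rm45 = np.array([[1, 1], [-1, 1]])
    crystal1 = np.diag([np.exp(1j * phi_U), np.exp(1j * phi)])
    crystal2 = np.diag([np.exp(1j * phi), np.exp(-1j * phi_U)])
    return R45 @ crystal1 @ crystal2 @ Rm45

phi = 0.3                                        # arbitrary test value for varphi
for phi_U in (0.0, 0.7, np.pi / 2):
    closed_form = np.exp(1j * phi) * np.array(
        [[np.cos(phi_U), 1j * np.sin(phi_U)],
         [1j * np.sin(phi_U), np.cos(phi_U)]])
    assert np.allclose(eom(phi_U, phi), closed_form)

# phi_U = 0 gives the transmission operator T (up to the global phase e^{i phi});
# phi_U = pi/2 gives the reflection operator R = [[0, i], [i, 0]].
assert np.allclose(eom(0.0, phi), np.exp(1j * phi) * np.eye(2))
assert np.allclose(eom(np.pi / 2, phi),
                   np.exp(1j * phi) * np.array([[0, 1j], [1j, 0]]))
```

The global phase $e^{i\varphi}$ factors out of both limits, consistent with the identification of $\hat{T}$ and $\hat{R}$ in Eq.~(\ref{eq:C_EOM_T}).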
\section{Introduction} Fault-tolerant quantum computing is the research effort to make quantum computers reliable despite the many ways that quantum hardware suffers from errors beyond an experimenter's control. Techniques from physics, information theory, and computer science are employed to develop robust quantum processors. The development of quantum error correction was a critical result for quantum computing in general, because it showed that arbitrarily complex computations could be executed on hardware with nonzero error rate~\cite{Calderbank1996,Steane1996,Preskill1998,Nielsen2000,Knill2005}. However, optimism was tempered by the realization that the \emph{resource overhead} (the redundancy in hardware that enables error correction) could be several orders of magnitude larger than a noise-free circuit for plausible error rates and interesting quantum algorithms~\cite{Knill2005,Isailovic2008,Jones2012,Fowler2012b}. Current hardware designs can control fewer than ten quantum bits~\cite{Ladd2010,Lucero2012,Blatt2012,Politi2009,Maurer2012,Shulman2012}, so the million-qubit devices that implement fault-tolerant computation must be several technology generations away from the current state of the art. To bridge the gap, research in fault-tolerant quantum computing focuses on developing methods to reduce the overhead and to perform reliable quantum computing on hardware that is simpler to design and fabricate. This work addresses the most resource-intensive component in most, if not all, quantum computations. An important result from quantum error correction is that, in any quantum code, there always exists one crucial operation that is not natively available~\cite{Zeng2007,Eastin2009}, and hence it is expensive to prepare. A commonly selected operation is the Toffoli gate~\cite{Barenco1995,Jones2013,Eastin2012}, defined by $U_{\mathrm{Tof}}\ket{a,b,c} = \ket{a,b,c \oplus ab}$, where $(a,b,c)$ are binary variables and operation $\oplus$ is binary XOR. 
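As an illustration of this definition, $U_{\mathrm{Tof}}$ is the $8 \times 8$ permutation matrix that flips the target bit exactly when both controls are set. The following minimal sketch (ours, purely illustrative) builds the matrix from the rule $\ket{a,b,c} \mapsto \ket{a,b,c \oplus ab}$:

```python
import numpy as np

# Build U_Tof from the rule |a,b,c> -> |a,b,c XOR ab>.
U = np.zeros((8, 8), dtype=int)
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            src = 4 * a + 2 * b + c
            dst = 4 * a + 2 * b + (c ^ (a & b))
            U[dst, src] = 1

assert np.array_equal(U @ U, np.eye(8, dtype=int))     # Toffoli is self-inverse
# Only the basis states |110> and |111> (indices 6 and 7) are exchanged:
assert [s for s in range(8) if U[s, s] == 0] == [6, 7]
```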
This paper introduces a fault-tolerant construction for the Toffoli gate which can substantially lower the resource overhead in fault-tolerant quantum computing. The two-round error detection in this paper is an improvement over the one-round error detection in Refs.~\cite{Jones2013,Eastin2012}. By incorporating this construction into recent analyses of fault-tolerant quantum architectures~\cite{Jones2012,Fowler2012b}, we anticipate that the resource costs determined therein could be reduced by more than an order of magnitude. The paper is organized as follows. Section~\ref{preliminaries_section} outlines some notation and preliminary assumptions. Section~\ref{overview_section} summarizes the error-detection methods implemented in the paper. Section~\ref{error_detection_section} gives an explicit quantum-circuit procedure for producing a composite Toffoli gate. Section~\ref{analysis_section} calculates the suppressed error probability that results from this construction. Section~\ref{teleportation_section} shows how to make use of the composite Toffoli gate with teleportation. Section~\ref{discussion_section} discusses the impact of these results on fault-tolerant quantum computing. \section{Preliminaries} \label{preliminaries_section} An important distinction in this paper is made between quantum gates that are ``easy'' and ``hard.'' An operation is easy when it has a direct and low-overhead implementation within a chosen error-correcting code. Traditionally, these operations were labeled ``transversal,'' because they could be applied element-wise to a code block or in matching pairs element-wise between two code blocks, which ensured fault tolerance~\cite{Nielsen2000}. However, modern codes like surface codes~\cite{Raussendorf2007,Fowler2009} do not actually use transversal gates. Still, the distinction is important because some operations are hard, meaning they require substantially more overhead to perform. 
Often, the hard operations invoke many easy operations to perform some distillation procedure~\cite{Knill2004,Bravyi2005,Meier2012,Bravyi2012,Jones2012c}. In the important family of CSS codes~\cite{Calderbank1996,Steane1996}, as well as many more stabilizer codes~\cite{Gottesman1997}, the easy operations are Clifford gates. The group of Clifford gates includes the Pauli operators $\sigma^x \equiv X$, \emph{etc.}, and is generated by the phase gate $S = \exp[i \pi (I - Z)/4]$, the Hadamard gate $H = (1/\sqrt{2})(X+Z)$, and CNOT. For convenience, we will also consider initialization and measurement in the $X$ and $Z$ bases to be easy operations, so we may say they are ``Clifford'' although they are not unitary. By contrast, non-Clifford gates tend to be much more difficult. References~\cite{Zeng2007,Eastin2009} show that there is always one operation required for universal quantum computing that is not transversal in a given code. In the surface code~\cite{Raussendorf2007,Fowler2009}, only a subset of the Clifford group is natively available, while the rest must be ``injected'' into the code space. Injection is not a fault-tolerant process, so the injected states must be purified of errors, which is costly. Distilling the non-Clifford operation $T = \exp[i \pi (I - Z) /8]$ requires about $50\times$ the circuit resources of a fault-tolerant CNOT~\cite{Fowler2012d}. This disparity motivates our efforts to find a more efficient non-Clifford operation in the form of Toffoli gates. Moreover, the high cost of fault-tolerant non-Clifford gates (compared to Clifford gates) is the justification for another assumption, that only errors in the non-Clifford $T$~gates are considered. This paper derives quantum circuits in a way that is well-suited to surface code error correction. The features of the surface code make some logical code operations more convenient than others. 
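The conventions above are easy to verify directly. The following sketch (our addition) checks that $S = \mathrm{diag}(1, i)$, that $T^2 = S$, and the conjugation relations that make $H$ and $S$ Clifford gates:

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)

# S = exp[i pi (I - Z)/4] and T = exp[i pi (I - Z)/8] are diagonal,
# so the matrix exponential reduces to exponentiating the diagonal entries.
S = np.diag(np.exp(1j * np.pi * np.diag(I - Z) / 4))    # = diag(1, i)
T = np.diag(np.exp(1j * np.pi * np.diag(I - Z) / 8))    # = diag(1, e^{i pi/4})
H = (X + Z) / np.sqrt(2)

assert np.allclose(S, np.diag([1, 1j]))
assert np.allclose(T @ T, S)                 # T is a "square root" of S
assert np.allclose(H @ H, I)
assert np.allclose(H @ X @ H, Z)             # H interchanges X and Z
assert np.allclose(S @ X @ S.conj().T, Y)    # S maps X to Y under conjugation
```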
In particular, only CNOT, Hadamard, and $X$- and $Z$-basis initialization and measurement are natively available~\cite{Fowler2009,Fowler2012}. We later demonstrate an ancilla-aided $Y$-basis measurement. Rotations by angles $\pi/2$ or $\pi/4$ about the $X$ and $Z$ axes on the Bloch sphere are possible but more costly, as they require magic-state distillation. We assume that the non-Clifford $T$~gate (rotation by $\pi/4$ around the $Z$ axis) is available to produce logical Toffoli gates with error detection. \section{Overview of main results} \label{overview_section} We briefly summarize the main points of the composite-Toffoli construction to show what the analysis in later sections accomplishes. This high-level description is also useful for reference. A form of the composite Toffoli gate is shown in Fig.~\ref{composite_CCZ_logical}, which depicts four controlled-controlled-$Z$ (CCZ) gates. This circuit has flexibility to turn any particular qubit line into Toffoli-gate target(s) using Hadamard gates, which are local and Clifford. Throughout most of the paper our approach is to create the composite-CCZ gate in Fig.~\ref{composite_CCZ_logical} and to assume that the appropriate Hadamard gates are inserted when this gate is used in an algorithm. An important constraint to note is that these CCZ gates are inseparable, meaning they must all be implemented in the shown arrangement, without inserting any gates in the middle of the circuit. \begin{figure} \centering \includegraphics[width=5cm]{Fig1.pdf}\\ \caption{A composite CCZ gate acting on eight qubits, which are numbered for later reference. Any qubit line could be converted to Toffoli target(s) using Hadamard gates, because CCZ is symmetric in its inputs.} \label{composite_CCZ_logical} \end{figure} The composite Toffoli is constructed with two rounds of error detection. For now, we consider the only source of failure to be $T$~gates having $Z$ errors, each with independent probability $p$. 
This simplifies the analysis and allows us to focus on the non-Clifford gates, which previous investigations found to be the most resource-costly component of fault-tolerant quantum computing~\cite{Jones2012,Fowler2012b,Fowler2012d,Fowler2013}. Each round uses the $C_4$ error-detecting code~\cite{Knill2005}, which has distance two and which can detect a single error on any qubit. The composite construction with two rounds of $C_4$ error detection has distance four with respect to $T$~gates. As a result, the distance-four circuit will have postselected error of $O(p^4)$. In the first round of error detection, a $C_4$ code enables the construction of magic states for the controlled-$S$ gate. The gate $S$ is a Clifford gate, but its controlled version is a non-Clifford gate from which Toffoli can be constructed. The initial state of the error-detection circuit consists of a bare $\ket{+} = (1/\sqrt{2})(\ket{0} + \ket{1})$ qubit and a $C_4$ code block with two $\ket{+}$ encoded qubits. The magic-state preparation will use four controlled-$H$ gates produced using eight $T$~gates, as we explain later. Because transversal $H$ is a logical operation in $C_4$, the controlled-$H$ with the control on the bare qubit is also logical with respect to the code block~\cite{Meier2012}, as shown in Fig.~\ref{C4_controlled_H_simple}. There are several important steps needed to make this process successful, and the procedure is detailed in Sec.~\ref{error_detection_section}. Stabilizer measurements will detect a single error in the code block, and we later show that this will detect a single error in any of eight $T$~gates used in this procedure. The output is a three-qubit magic state that can be used to produce two controlled-$S$ gates with a common control qubit (or common target, as controlled-$S$ is a symmetric operation). The error probability for this three-qubit state is $28p^2$. 
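The quoted $28p^2$ follows from simple counting, sketched below (our addition). With eight $T$~gates and any single $Z$ error caught by the $C_4$ stabilizers, the leading undetected events are two-error pairs:

```python
from math import comb

# Eight T gates implement the transversal controlled-H construction; the C4
# stabilizers catch any single Z error, so the leading undetected events
# are the two-error pairs among the eight T gates.
undetected_pairs = comb(8, 2)
assert undetected_pairs == 28    # each pair has probability p^2 -> 28 p^2 total
# As detailed in the error analysis, the 28 pairs group into 7 distinct
# output patterns on the three-qubit state, 4 pairs per pattern:
assert 7 * 4 == undetected_pairs
```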
\begin{figure} \centering \includegraphics[width=8.3cm]{Fig2.pdf}\\ \caption{Circuit for constructing the three-qubit register encoding two coupled controlled-$H$ gates. The right-hand side shows the equivalent logical circuit.} \label{C4_controlled_H_simple} \end{figure} The second round of error detection also uses $C_4$ code blocks. Transversal controlled-$Z$ is a logical operation between two $C_4$ codes. Similar to above, we implement CCZ with a bare qubit controlling a transversal controlled-$Z$ operation between $C_4$ codes. CCZ gates are constructed using controlled-$S$ gates, which are supplied by the magic states from above. The controlled-$S$ gates act on the $C_4$ blocks, and a single error in any controlled-$S$ gate in each code block can be detected using the stabilizers of all $C_4$ code blocks. To ensure independence of errors, any pair of controlled-$S$ gates with common control must place their targets in separate $C_4$ blocks, as explained later. Since these gates are still linked, the final logical operation has common control lines, as shown in Fig.~\ref{composite_CCZ_overview}. \begin{figure} \centering \includegraphics[width=8.3cm]{Fig3.pdf}\\ \caption{A high-level depiction of the second round of error detection in the composite-Toffoli circuit. Each of the CCZ gates requires two controlled-$S$ gates produced using magic states from the first round shown in Fig.~\ref{C4_controlled_H_simple}. The common control line from each coupled pair of controlled-$S$ gates is aligned with the top $C_4$ block. The correspondence with Fig.~\ref{composite_CCZ_logical} is as follows: the bare qubits are inputs $(1,2)$; the pair of encoded qubits in each $C_4$ block are inputs $(3,4)$, $(5,6)$, and $(7,8)$, from top to bottom.} \label{composite_CCZ_overview} \end{figure} The second round of error detection uses eight copies of the output of the first round, so 64 $T$~gates are required in total. 
The analysis in Sec.~\ref{analysis_section} shows that the error probability for the output state is $3072p^4$ to lowest order. After decoding the three $C_4$ blocks, the output state is equivalent to the result of applying the composite gate in Fig.~\ref{composite_CCZ_logical} to eight qubits, each of which is in the $\ket{+}$ state. Section~\ref{teleportation_section} shows how this resource state can teleport the composite-Toffoli gate into any quantum circuit. \section{Error-detection circuits} \label{error_detection_section} The two rounds of error detection in the composite Toffoli gate are (1) building controlled-$S$ gates from $T$~gates and (2) building CCZ gates from controlled-$S$ gates. The techniques in both rounds are similar, but there are important differences as well. In this section, we examine the two steps separately for pedagogical clarity. Furthermore, we assume that Clifford operations are error-free, including initialization and measurement, and that the only errors come from the non-Clifford $T$~gates. The first round of error detection implements transversal controlled-$H$ gates on a $C_4$ code block. The first detail we must address is which implementation of the $C_4$ code we use. All implementations are generated by the stabilizers $g_1 = X_1 X_2 X_3 X_4$ and $g_2 = Z_1 Z_2 Z_3 Z_4$, where the subscript on each Pauli operator denotes one of the four qubits in the code. However, logical operators can be chosen in multiple distinct ways, and this choice determines encoding/decoding circuits and the set of transversal gates. 
We will label our first implementation the ``$X$/$Y$ encoding'' because the logical $X$ and $Y$ operators on both encoded qubits are weight-2; they can be written as: \begin{eqnarray} \overline{X}_1 & = & X_1 X_2 \nonumber \\ \overline{X}_2 & = & X_1 X_3 \nonumber \\ \overline{Y}_1 & = & Y_1 Y_3 \nonumber \\ \overline{Y}_2 & = & Y_1 Y_2, \label{C4_operators} \end{eqnarray} where the bar in $\overline{X}_1$ distinguishes logical code operators from physical qubit operators, and the subscript corresponds to one of the two encoded qubits. Importantly, the $X$/$Y$ encoding does not yield a code where transversal CNOT implements encoded CNOT. The second implementation we use is the standard $X$/$Z$ encoding~\cite{Knill2005}: \begin{eqnarray} \overline{X}_1 & = & X_1 X_2 \nonumber \\ \overline{X}_2 & = & X_1 X_3 \nonumber \\ \overline{Z}_1 & = & Z_1 Z_3 \nonumber \\ \overline{Z}_2 & = & Z_1 Z_2. \label{C4_operators_XZ} \end{eqnarray} The $X$/$Z$ encoding does permit transversal CNOT, and conversion between encodings will be necessary to satisfy our aim of using only $X$- and $Z$-axis rotations. Note that our derivation using different encodings is just one way to explain this circuit. A different interpretation, where there is a single fixed encoding and where all operations that commute with the $C_4$ stabilizers are logical operators, is equally valid. The $X$/$Y$ encoding permits a transversal $K = T X T^{\dag} = (1/\sqrt{2})(X+Y)$ operation. In particular, the operator $U = K_1 K_2 K_3 K_4$ preserves the stabilizer group and implements $\overline{K}_1 \overline{K}_2$ and SWAP on the two encoded qubits. Simply put, $K$ interchanges $X$ and $Y$ operators, just as Hadamard interchanges $X$ and $Z$ operators; in a later step, we map $K$ to $H$. A similar circuit was used for magic-state distillation with the $C_4$ code using the $X$/$Z$ encoding with transversal $H$~\cite{Meier2012}. 
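These algebraic properties of $K$ can be checked numerically. The sketch below (our addition) verifies $K = T X T^{\dag} = (1/\sqrt{2})(X+Y)$, the interchange of $X$ and $Y$, the later map from $K$ to $H$ under conjugation by $R_x(\pi/2)$, and that $K^{\otimes 4}$ preserves the $C_4$ stabilizer group:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
I = np.eye(2)
T = np.diag([1, np.exp(1j * np.pi / 4)])

# K = T X T^dag = (1/sqrt(2))(X + Y) interchanges X and Y under conjugation.
K = T @ X @ T.conj().T
assert np.allclose(K, (X + Y) / np.sqrt(2))
assert np.allclose(K @ X @ K.conj().T, Y)
assert np.allclose(K @ Y @ K.conj().T, X)

# The later basis change R_x(pi/2) = exp[i pi (I - X)/4] maps Y -> Z,
# and therefore maps K to H = (1/sqrt(2))(X + Z):
Rx = np.exp(1j * np.pi / 4) * (I - 1j * X) / np.sqrt(2)
H = (X + Z) / np.sqrt(2)
assert np.allclose(Rx @ Y @ Rx.conj().T, Z)
assert np.allclose(Rx @ K @ Rx.conj().T, H)

# On a C4 block, K^{x4} maps the stabilizer X1X2X3X4 to Y1Y2Y3Y4 = g1*g2,
# so the stabilizer group (and hence the code space) is preserved.
def kron4(op):
    out = op
    for _ in range(3):
        out = np.kron(out, op)
    return out

g1, g2 = kron4(X), kron4(Z)
K4 = kron4(K)
assert np.allclose(K4 @ g1 @ K4.conj().T, g1 @ g2)
```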
Using the $X$/$Y$ encoding, we initialize the circuit to logical $\ket{+}$ qubits, apply controlled-$K$ transversally to the code block, and verify the result. This procedure is depicted in Fig.~\ref{C4_controlled_H}. The initialization procedure prepares two encoded $\ket{+}$ qubits as well as a bare $\ket{+}$ qubit. Next, the $T$~gates and CNOT perform transversal $\overline{K}_1 \overline{K}_2$ and SWAP controlled by the bare qubit. Since the encoded qubits are identical, the SWAP is trivial. The stabilizer measurement of the $C_4$ code can detect a single error in any of the $T$~gates. The final step in this round is to transform this code from $X$/$Y$ to $X$/$Z$ encoding. The reasons for doing so are twofold. The $X$/$Z$ encoding has simpler decoding circuits for the $C_4$ block; moreover, the $X$/$Z$ encoding has a transversal, encoded CNOT that enables access to the logical state without decoding. The code transformation is simple and fault-tolerant. Apply transversal $R_x(\pi/2) = \exp[i \pi (I - X)/4]$ to each qubit, as shown in Fig.~\ref{C4_controlled_H}. This operation maps $Y$ operators to $Z$ operators: $[R_x(\pi/2)] Y [R_x(-\pi/2)] = Z$. The stabilizers are unchanged, but the encoding of logical operators is modified. $K$ maps to $H$, so the entire circuit is equivalent to applying controlled-$H$ transversally to an $X$/$Z$-encoded $C_4$ block. The reason for the two-step procedure with $X$/$Y$ and $X$/$Z$ encodings is subtle: it enables better fault-tolerant circuits because stabilizers can be measured before and after the $R_x(\pi/2)$ gates in Fig.~\ref{C4_controlled_H}. \begin{figure} \centering \includegraphics[width=8.3cm]{Fig4.pdf}\\ \caption{Detailed construction of the circuit in Fig.~\ref{C4_controlled_H_simple}. After initializing in the $X$/$Y$ encoding, controlled-$K$ gates are produced using $T$~gates and CNOTs. The stabilizer measurement can detect a single $Z$ error occurring in any of the $T$~gates. 
The transversal $R_x(\pi/2)$ gates transform the $C_4$ block to $X$/$Z$ encoding, and in this basis the controlled-$K$ gates are mapped to controlled-$H$.} \label{C4_controlled_H} \end{figure} The three-qubit magic state created with (effective) controlled-$H$ gates can be used to teleport controlled-$S$ gates. A circuit for doing so is shown in Fig.~\ref{Controlled_H_to_S}. The $Y$-basis measurement is not desirable for surface code error correction, but at least one such non-native gate or measurement seems necessary. We give a fault-tolerant, $C_4$-encoded circuit for this measurement at the end of this section. The residual $S^{\dag}$ gate will be handled in a later step. \begin{figure*} \centering \includegraphics[width=12cm]{Fig5.pdf}\\ \caption{Circuit for teleporting two coupled controlled-$S$ gates using the magic state (dashed box in upper left) prepared by the first round of error detection in Fig.~\ref{C4_controlled_H}. The measurement results are recorded in binary variables $(m_1, m_2, m_3)$. Subsequent corrections are conditionally implemented based on these measurements, with the conditions for each gate given by the binary expression above the gate. Overbar here denotes logical inverse, and symbol $\oplus$ denotes binary operation XOR. The $Z$ operator in the dashed box is incorporated into the Pauli frame~\cite{Knill2005,DiVincenzo2007}.} \label{Controlled_H_to_S} \end{figure*} The second round of error detection implements transversal controlled-$Z$ between two $C_4$ code blocks, controlled by a bare qubit, as illustrated in Fig.~\ref{composite_CCZ_overview}. As before, the inputs to the circuit will all be logical $\ket{+}$ qubits. Controlled-$Z$ is a transversal operation in $C_4$ codes; the logical operation is controlled-$Z$ with swapped targets, which is trivial when the targets are identical. CCZ gates are broken down into controlled-$S$ gates. 
However, the controlled-$S$ magic states from the first round come in coupled pairs which must fan out to separate CCZ gates to ensure that errors in any one $C_4$ block are independent. The resulting arrangement of CCZ gates with common controls leads to the composite CCZ operation in Fig.~\ref{composite_CCZ_logical}. A construction for CCZ using controlled-$S$ magic states is shown in Fig.~\ref{CCZ_construction}. Referring back to Fig.~\ref{composite_CCZ_overview}, we see that each of the four adjacent pairs of coupled-CCZ gates (sharing one common control) is implemented by the circuit in Fig.~\ref{CCZ_construction}. Each coupled-CCZ gate uses two copies of Fig.~\ref{C4_controlled_H}, or 16 $T$~gates. The entire circuit thus uses 64 $T$~gates. \begin{figure*} \centering \includegraphics[width=12cm]{Fig6.pdf}\\ \caption{Construction of two coupled CCZ gates using teleported controlled-$S$ gates from Fig.~\ref{Controlled_H_to_S}. The controlled-$S^{\dag}$ gates are created by conceptually applying $Z$ and controlled-$Z$ gates after the output of Fig.~\ref{Controlled_H_to_S}, which in practice is absorbed into the existing conditional operations. Note that the residual $S$ and $S^{\dag}$ gates cancel. The overbar in each $\overline{M_y}$ measurement symbol denotes that controlled-$Z$ gates are conditioned on the qubit being in the $(1/\sqrt{2})(\ket{0} - i \ket{1})$ state, the $(-1)$ eigenvector of $Y$.} \label{CCZ_construction} \end{figure*} Although Fig.~\ref{CCZ_construction} builds a coupled pair of CCZ gates, each has distance two with respect to $T$-gate error with probability $p$, resulting in total error probability $56p^2$ to leading order (using error detection in the first round only). By using another round of error detection with $C_4$ codes, we can achieve distance four and error probability of $3072p^4$ for a composite operation of four CCZ gates. 
The next section calculates the error probability of this composite CCZ gate when one assumes that $T$~gates are the dominant failure mechanism. The final circuit component we require is a fault-tolerant $Y$-basis measurement $M_y$. A simple way to do this is to perform the gate $R_x(\pi/2)$ followed by $Z$-basis measurement. However, our circuit constructions use $C_4$-encoded qubits, so we would like to perform $C_4$-encoded $M_y$. The logical operation $R_x(\pi/2)$ is not transversal in $C_4$, so it is not convenient to implement. However, we can implement $M_y$ using operations transversal in $C_4$ with the aid of the ancilla state $S^{\dag}\ket{+}$, as shown in Fig.~\ref{Y_measurement_circuit}a. The $Y$-basis measurement is given by the binary XOR of the $M_x$ and $M_z$ results. By encoding two $S^{\dag}\ket{+}$ qubits in a $C_4$ code, we can perform encoded $M_y$ using transversal operations, as shown in Fig.~\ref{Y_measurement_circuit}b. This is a fault-tolerant $M_y$ with respect to the $C_4$ code blocks, because the single-qubit measurements can be used to reconstruct both the logical $Y$-basis measurements and the stabilizer parity measurements for error detection. State $S^{\dag}\ket{+}$ is not natively available in the surface code, so it may require distillation~\cite{Raussendorf2007,Fowler2009,Fowler2012d}. The protocol in Ref.~\cite{Aliferis_thesis} (p.~94) can be adapted to distilling $C_4$-encoded $S^{\dag}\ket{+}$ qubits. \begin{figure} \centering \includegraphics[width=8.3cm]{Fig7.pdf}\\ \caption{$Y$-basis measurement using ancillas and operations transversal in $C_4$ codes. \textbf{(a)} Logical measurement circuit using the $S^{\dag}\ket{+}$ ancilla state. The measurement result is $M_y = m_1 \oplus m_2$, where $\oplus$ denotes binary XOR. \textbf{(b)} $Y$-basis measurement in $C_4$ code blocks. The dashed box shows an encoding circuit for the ancilla block that is prepared, though distillation of this register may also be required. 
Both logical $Y$-basis measurement and code stabilizers can be reconstructed from the single-qubit measurements.} \label{Y_measurement_circuit} \end{figure} \section{Error analysis} \label{analysis_section} Determining the probability of error in the output of the composite-Toffoli circuit is simplified by the operating assumption that errors only occur in $T$~gates with independent probability $p$. We assume that $p \ll 1$ so that the output error is approximated well by the first non-vanishing term in a power-series expansion in $p$, and we show that this term is $O(p^4)$. As before, we analyze the two rounds of the protocol, where the second round detects some errors missed in the first. In the first round of error detection shown in Fig.~\ref{C4_controlled_H}, there are eight $T$~gates which may each have a $Z$~error. Any single error will be detected by the $C_4$ stabilizers, while any combination of two errors will not be detected. There are 28 distinct arrangements of two errors, and they can be grouped into seven error patterns at the output. After teleportation of coupled-controlled-$S$ gates in Fig.~\ref{Controlled_H_to_S}, the possible error configurations are the seven configurations of one or more $Z$~errors on the three output qubits. Each of these configurations has probability $4p^2$ because each can arise in four different patterns of $T$-gate errors. The total error probability for this operation is $28p^2$, as expected. An important design feature of the composite Toffoli gate is that the three outputs of the first round fan out to different $C_4$ blocks in the second round. The most likely patterns of errors which evade detection in the second round are those where two instances of the first round both had undetected errors at their respective outputs. As before, a single error in any $C_4$ block will be detected, so the two faulty instances of coupled-controlled-$S$ must have exactly the same error configuration. 
If not, there will be a single error in at least one block, which is detected. None of the $C_4$ codes detect errors when any two first-round states have matched errors, so these events represent the most likely errors at the output of the second round. For a few configurations, errors from the first round can cancel. Referring to Fig.~\ref{CCZ_construction}, if both coupled-controlled-$S$ instances have a single $Z$ error on the $S$/$S^{\dag}$ qubit, these will cancel without any effect on the broader circuit. There are four different possible patterns for this event. As a result, undetected output errors can occur via six matching first-round error patterns, each having 28 permutations, or the seventh first-round pattern with just 24 permutations (the other four self-cancel), which adds up to 192 distinct configurations. Each first-round error pattern has probability $4p^2$, so the total probability of error in the composite CCZ gate is $192 \times (4p^2)^2 = 3072p^4$. The use of error detection, instead of correction, implies that known faulty states are discarded. In such an event, some or all of the preparation steps must be repeated. The probability of detected circuit failure can be upper bounded by $p_{\mathrm{fail}} \le 1 - (1-p)^{64} \le 64p$. This assumes the entire circuit fails on any single $T$-gate error. Less overhead from repeating circuits is required if one repeats only the round which failed; if one of the eight copies of first-round error detection fails, repeat just that circuit rather than the entire composite CCZ gate. To accommodate failure, we prepare encoded states before teleporting data through the gate. \section{Teleportation into quantum algorithms} \label{teleportation_section} The composite CCZ operation (or equivalently composite Toffoli) in Fig.~\ref{composite_CCZ_logical} can be encoded into a quantum register by applying this gate to eight $\ket{+}$ qubits. 
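The counting behind the $3072p^4$ figure, together with the discard bound, can be reproduced in a few lines. The sketch below (our addition) reads the 28 permutations per matched pattern as the $\binom{8}{2}$ pairings of the eight first-round copies, and the value $p = 10^{-4}$ is purely illustrative:

```python
from math import comb

# Each undetected first-round output pattern has probability 4 p^2
# (four T-gate error pairs map to the same output pattern; 7 x 4 = 28).
assert 7 * 4 == comb(8, 2)

# Second round: an undetected event needs two of the eight first-round
# copies to carry the *same* error pattern.  Six patterns admit
# comb(8, 2) = 28 such pairings; for the seventh, 4 of the 28 self-cancel.
configs = 6 * comb(8, 2) + (comb(8, 2) - 4)
assert configs == 192

# Each matched event costs (4 p^2)^2 = 16 p^4, giving 192 * 16 = 3072,
# i.e. a composite-CCZ error probability of 3072 p^4 to leading order.
assert configs * 16 == 3072

p = 1e-4                              # illustrative T-gate error probability
assert 3072 * p**4 < 1e-12            # deep in the postselected regime
assert 1 - (1 - p)**64 <= 64 * p      # bound on the chance of discarding
```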
After constructing and verifying this state, the gate interacts with data qubits using teleportation, which is an extension of the methods developed in Ref.~\cite{Gottesman1999}. The teleportation circuit is shown in Fig.~\ref{composite_CCZ_teleportation}. \begin{figure*} \centering \includegraphics[width=\textwidth]{Fig8.pdf}\\ \caption{Teleportation circuit for the composite CCZ gate shown in Fig.~\ref{composite_CCZ_logical}. CCZ~gates are symmetric in their inputs, so placing Hadamard gates on both sides of the teleportation circuit on the same data qubit will convert the affected CCZ gates to Toffoli (targeting this same qubit). Operations in dashed boxes, which are all in the Clifford group, are implemented conditioned on the indicated measurement result being logical $\ket{1}$. In many cases, the conditional corrections can be delayed or combined with other gates.} \label{composite_CCZ_teleportation} \end{figure*} If the four coupled CCZ operations are problematic, one can sacrifice two CCZ gates to leave two uncoupled CCZ gates. Referring to Fig.~\ref{composite_CCZ_logical}, if one sets inputs 6 and 7 to $\ket{0}$ while the others are set to $\ket{+}$, then the second and third CCZ gates act trivially. Equivalently, the teleportation circuit in Fig.~\ref{composite_CCZ_teleportation} is modified by deleting lines 6 and 7, as well as any gates which touch them. As a result, two independent CCZ gates are produced. The total error probability will be lower because some errors become trivial. \section{Discussion} \label{discussion_section} Using our results, circuits that depend on Toffoli gates have reduced fault-tolerant resource overhead. The precise improvement factor depends generally on too many parameters and assumptions to be covered here. Instead, we give an illustrative example showing how resource costs are lowered. Suppose that we are using surface code error correction as in Refs.~\cite{Jones2012,Fowler2012b}. 
A typical implementation of Shor's algorithm~\cite{Shor1999} may require an error probability per Toffoli gate around $10^{-12}$. The simplest Toffoli circuit uses four $T$~gates, which would each require error probability $2 \times 10^{-13}$. The error-detecting constructions in Refs.~\cite{Jones2013,Eastin2012} require eight $T$~gates with error probability $2 \times 10^{-7}$. Increasing the acceptable probability of error means one fewer round of magic-state distillation is required, reducing total resources by about a factor of ten~\cite{Fowler2012b,Jones2013}. Additionally, intermediate Clifford operations can use lower code distance~\cite{Fowler2012b}. The construction in this paper continues this trend. With two rounds of error detection, the $T$-gate error probability need only be $10^{-4}$. Even less magic-state distillation is required in this instance, and intermediate operations can tolerate higher probability of errors. By a cursory resource counting, the savings can be a factor of 20 to 50 for producing Toffoli gates, using methods developed in Refs.~\cite{Fowler2012b,Jones2013}. Moreover, error rate $10^{-4}$ is plausibly achievable by physical gates without error correction, which could make magic-state distillation unnecessary and save even more resources. Some important considerations must be mentioned. We assumed that non-Clifford operations dominate resource costs, which was borne out in previous investigations~\cite{Isailovic2008,Jones2012,Fowler2012b,Fowler2012d,Fowler2013}. However, the composite Toffoli changes the situation when its resource cost no longer dominates the total cost of the computation. Other operations in a quantum algorithm, like routing of qubits for long-range interactions, may become important. 
The resource savings factor for the entire algorithm will always be less than that for the individual Toffoli gates; still, most quantum algorithms like factoring~\cite{Shor1999,Isailovic2008,Jones2012} and simulation~\cite{Lloyd1996,Jones2012b} benefit substantially from a more efficient Toffoli construction. More research is needed to fully understand resource costs of the composite Toffoli construction in the context of a chosen quantum code. Similar work has been performed to optimize magic-state distillation protocols implemented in a surface code~\cite{Fowler2012d,Fowler2013}. Our results could also be implemented within other codes, such as Bacon-Shor codes~\cite{Aliferis2007}. In such analysis, another opportunity beyond $T$~gates for saving resources is in the Clifford gates. We have assumed throughout that Clifford operations are perfect, but this is never the case in practice. Instead, Clifford operations can have arbitrarily low error for some resource cost. The constructions in this paper use $C_4$ codes to detect errors in $T$~gates, but they can also detect errors in other gates~\cite{Knill2005}. For example, the Clifford operations produced using a surface code could have higher error rate if one knew that errors would be caught by the $C_4$ error-detection circuits. When higher error rates are allowed, lower code distance can be used, which means fewer hardware resources are required for the same circuit. The composite Toffoli gate demonstrates several important techniques in fault-tolerant quantum computing that merit further investigation. A quantum operation is encoded into a known state that is verified before being teleported into the rest of the quantum circuit. Early work on teleportation gates focused on one-, two-, or three-qubit operations~\cite{Gottesman1999,Nielsen2000}; by comparison, the composite Toffoli gate is an eight-qubit operation. 
The process of compiling quantum operations into encoded states with verification followed by teleportation is a powerful technique for generating fault-tolerant quantum circuits. We propose the term \emph{quantum logic synthesis} for methods of synthesizing arbitrary-size, fault-tolerant quantum logic networks in a hierarchical arrangement of preparation and teleportation. The possible techniques go far beyond ``sequential'' decompositions~\cite{Isailovic2008,Jones2012,Jones2012b,Fowler2012b}, where a quantum algorithm is decomposed into a long sequence of fundamental gates from a small set. For each fundamental gate, fault-tolerant constructions are known, but the cost of each is high because every operation must have a very low error rate. By contrast, hierarchical designs weave error checking into the algorithm, allowing higher error rates throughout. Quantum logic synthesis can compress operations larger and more complex than Toffoli gates, which is the subject of forthcoming work.
\section{Introduction} \label{sec:intro} Magnetic fields are expected to play an important role in the process of star formation. During the main accretion phase, magnetic fields in protostellar envelopes regulate mass and momentum accretion onto a disk and consequently affect disk formation and disk properties, such as mass and size. Thus, magnetic fields may have an impact on the subsequent planet formation in the system, since such protostellar disks provide the initial conditions for planet formation. On one hand, magnetohydrodynamics (MHD) simulations have suggested that magnetic fields strongly suppress disk formation in the protostellar phase through so-called magnetic braking \citep[e.g., ][]{me.li08, he.ci09, joos12}. On the other hand, theoretical studies also suggest that non-ideal MHD effects (Ohmic dissipation, ambipolar diffusion, and the Hall effect) can weaken the effects of magnetic braking, enhancing disk formation \citep[e.g., ][]{inut10, kras11, dapp12, tomi15, tsuk15, zhao18}. Observations of magnetic fields are thus necessary to verify the above theoretical predictions and reveal how magnetic fields affect mass accretion onto disks in the disk formation phase. Polarized emission from dust grains enables us to observe magnetic fields. However, recent observational studies have reported that magnetic fields are not the only source of polarized emission on disk scales in young stellar objects (YSOs). For example, protoplanetary disks show polarized dust emission arising from self-scattering \citep{kata16, kata17, ohas19} at millimeter wavelengths. This interpretation is based on theoretical predictions of the polarization direction and fraction: polarization directions along the disk minor axis and polarization fractions on the order of 1\% independent of the polarization intensity \citep{yang17}. In addition to self-scattering, the mechanical momentum of the protostellar outflow can align dust grains, producing polarized dust emission.
If the gas flow aligns the major axis of dust grains along the outflow direction \citep{gold52}, the resultant polarization direction is parallel to the outflow axis. In contrast, if torque generated by the outflow rotates dust grains around the outflow axis, the resultant polarization direction is perpendicular to the outflow axis \citep{lego19}. Recent observations toward protostars on multiple spatial scales suggest that magnetic fields affect circumstellar materials to different degrees on different spatial scales \citep{hull17a, hull17b}. It is, therefore, important to observationally investigate magnetic fields, along with kinematics, specifically on the protostellar envelope-disk scales in order to understand how magnetic fields regulate mass accretion onto disks. In order to investigate the effects of magnetic fields on mass accretion onto a protostellar disk, we observed the Class I protostar TMC-1A using the SMA and ALMA in polarized dust continuum emission at 1.3 mm. TMC-1A is a good target for this purpose because previous observational studies have already revealed the kinematics in its protostellar disk and envelope. TMC-1A is in the Taurus molecular cloud, located at a distance of 130-160 pc \citep{gall18}; we adopt 140 pc as the distance to TMC-1A in this paper. The disk in TMC-1A was identified kinematically by a power-law index of radial profiles of rotational velocity close to $-0.5$ \citep{hars14, aso15}. \citet{hars14} fitted a disk and envelope model to the continuum visibilities at 1.3 mm derived from observations using the Plateau de Bure Interferometer (PdBI) at an angular resolution of $0\farcs 7$-$1\farcs 3$, estimating the disk size, disk inclination angle, and central stellar mass to be 80-100 au, $55\arcdeg$, and $0.53~M_{\sun}$, respectively.
\citet{aso15} reproduced, with a disk and envelope model, position-velocity (PV) diagrams along the major and minor axes of the disk in the C$^{18}$O $J=2-1$ line observed with ALMA at an angular resolution of $\sim 1\arcsec$, estimating the disk size, disk inclination angle, and central stellar mass to be $\sim 100$ au, $\sim 65\arcdeg$, and $0.68~M_{\sun}$, respectively. They also suggested a radial infall velocity of $\sim 0.3$ times the free-fall velocity from their model and estimated the magnetic-field strength required to explain the slow infall velocity to be $\sim 2$~mG. In addition to the disk and envelope, \citet{bjer16} revealed a disk wind launched from radii of 7-22 au in the TMC-1A disk. Previous observations at an 8-au resolution revealed that optically thick dust obscures the C$^{18}$O and $^{13}$CO $J=2-1$ lines in the central $\pm40$ au region \citep{hars18}. The authors interpreted this high optical depth as a result of dust grains with mm sizes rather than a result of a massive disk, because the high mass required to explain the high optical depth would cause gravitational instability, which is inconsistent with the smooth structure observed in the continuum emission. This paper is structured as follows. We describe the settings of our SMA and ALMA observations in Section \ref{sec:obs}. Section \ref{sec:results} shows results of the polarized 1.3 mm continuum emission observed with the SMA and ALMA, and Stokes $I$ of the CO, $^{13}$CO, and C$^{18}$O $J=2-1$ lines observed with ALMA. We apply the Davis-Chandrasekhar-Fermi (DCF) method to the SMA result and analyze a non-axisymmetric component in the ALMA continuum image in Section \ref{sec:analyses}. Possible origins of the observed polarization and the non-axisymmetric component are discussed in Section \ref{sec:discussion}. We summarize our conclusions in Section \ref{sec:conclusions}.
\clearpage \section{SMA and ALMA Observations} \label{sec:obs} \subsection{SMA} \label{sec:sma} We observed TMC-1A using the SMA for one track on 2018 January 3 with the full polarization mode. The total observing time is $\sim525$ min (8.75 hr) for TMC-1A including overhead. Seven antennas were used in the compact configuration. The minimum projected baseline length is 16 m. Any emission beyond $9\arcsec$ (1300 au) was resolved out by $\gtrsim 63\%$ with this antenna configuration \citep{wi.we94}. Our SMA observations used two orthogonally polarized receivers, tuned to the same frequency range in the full polarization mode, and the SWARM correlator. The upper sideband (USB) and lower sideband (LSB) covered 213--221 and 229--237 GHz, respectively. Each sideband was divided into four `chunks' with a bandwidth of 2 GHz, and each `chunk' has a fixed channel width of 140 kHz. The SMA data were calibrated with the MIR software package\footnote{https://github.com/qi-molecules/sma-mir}. Instrumental polarization leakage was calibrated independently for the USB and LSB using the MIRIAD task {\it gpcal} and removed from the data. The calibrated visibility data in Stokes $I$, $Q$, and $U$ were Fourier transformed and CLEANed with natural weighting, using the MIRIAD package \citep{saul95}, by integrating channels without line emission. The polarized intensity, position angle, and polarization percentage were derived from the Stokes $I$, $Q$, and $U$ maps using the MIRIAD task {\it impol}. The parameters of our SMA observations mentioned above and others are summarized in Table \ref{tab:sma}.
\begin{deluxetable*}{cc} \tablecaption{Summary of the SMA observational parameters \label{tab:sma}} \tablehead{ \colhead{Date} & 2018 January 3\\ \colhead{Projected baseline length} & \colhead{16--77 m (12--58 k$\lambda$)}\\ \colhead{Primary beam} & \colhead{$54\arcsec$}\\ \colhead{Passband/polarization calibrator} & \colhead{3C279 (7.2 Jy), 3C454.3 (12 Jy)}\\ \colhead{Flux calibrator} & \colhead{Neptune}\\ \colhead{Gain/polarization calibrator} & \colhead{0510+180 (1.5 Jy), NRAO530 (1.5 Jy)}\\ \colhead{Coordinate center (J2000)} & \colhead{$04^{\rm h}39^{\rm m}$35\fs 20, $25^{\circ}41\arcmin 44\farcs 35$} } \startdata Frequency (GHz) & 224 \\ Bandwidth (GHz) & 8 \\ Beam (P.A.) & $2\farcs 8\times 2\farcs4\ (47\arcdeg)$ \\ rms noise level (${\rm mJy~beam^{-1}}$) & 0.75 (Stokes I), 0.26 (Stokes Q/U) \enddata \end{deluxetable*} \subsection{ALMA} \label{sec:alma} We also observed TMC-1A using ALMA in Cycle 6 on 2018 November 18 with the full polarization mode. The total observing time is $\sim202$ min (3.37 hr), while the on-source observing time for TMC-1A is $\sim 43$ min. The number of antennas was 49. The array configuration was C43-5, whose minimum projected baseline length is 12 m. Any emission beyond $12\arcsec$ (1700 au) was resolved out by $\gtrsim 63\%$ with this antenna configuration \citep{wi.we94}. The spectral windows for the $^{12}$CO, $^{13}$CO, and C$^{18}$O ($J=2-1$) lines have 1920, 960, and 960 channels covering bandwidths of 117, 59, and 59 MHz, respectively, at a frequency resolution of 61.0 kHz. In making maps, channels are binned to produce a velocity resolution of $0.1\ {\rm km~s^{-1}}$ for all three lines. Another spectral window covers 232-234 GHz, which was assigned to the continuum emission. All imaging procedures were carried out with the Common Astronomical Software Applications (CASA) package, version 5.6.2.
The visibilities in Stokes $I$, $Q$, $U$, and $V$ were Fourier transformed and CLEANed with Briggs weighting, a robust parameter of 0.5, and a threshold of 3$\sigma$. Emission of the three lines was detected only in Stokes $I$. The CLEAN process using the CASA task {\it tclean} adopted auto-masking (auto-multithresh) with the parameters sidelobethreshold=3.0, noisethreshold=3.0, lownoisethreshold=1.5, and negativethreshold=3.0. The polarized intensity and position angle were derived from the Stokes $I$, $Q$, and $U$ maps using the CASA task {\it immath}. We also performed self-calibration for Stokes $I$ of the continuum data using tasks in CASA ({\it clean}, {\it gaincal}, and {\it applycal}). Only the phase was calibrated, with the solution interval decreased from `inf' to 90 s and then to 18 s. The self-calibration improved the rms noise level of the continuum maps by a factor of $\sim 120$. The obtained calibration tables for the Stokes $I$ continuum data were also applied to the other Stokes continuum data and the line data for consistency. The noise level of the line maps was measured in emission-free channels. The parameters of our ALMA observations mentioned above and others are summarized in Table \ref{tab:alma}.
\begin{deluxetable*}{ccccc} \tablecaption{Summary of the ALMA observational parameters \label{tab:alma}} \tablehead{ \multicolumn{2}{c}{Date}&\multicolumn{3}{c}{2018 November 21}\\ \multicolumn{2}{c}{Projected baseline length}&\multicolumn{3}{c}{12--1376 m (9.4--1070 k$\lambda$)}\\ \multicolumn{2}{c}{Primary beam}&\multicolumn{3}{c}{$27\arcsec$}\\ \multicolumn{2}{c}{Bandpass/flux calibrator}&\multicolumn{3}{c}{J0510+1800}\\ \multicolumn{2}{c}{Polarization calibrator}&\multicolumn{3}{c}{J0522-3627 (1.0 Jy)}\\ \multicolumn{2}{c}{Check source}&\multicolumn{3}{c}{J0426+2952}\\ \multicolumn{2}{c}{Phase calibrator}&\multicolumn{3}{c}{J0438+3004 (0.23 Jy)}\\ \multicolumn{2}{c}{Coordinate center (ICRS)}&\multicolumn{3}{c}{$04^{\rm h}39^{\rm m}$35\fs 20, $25^{\circ}41\arcmin 44\farcs 23$} } \startdata &Continuum&CO $J=2-1$&$^{13}$CO $J=2-1$&C$^{18}$O $J=2-1$\\ \hline Frequency (GHz)&234&230.538000&220.398684&219.560354\\ Bandwidth/resolution&2 GHz&$0.1\ {\rm km~s^{-1}}$&$0.1\ {\rm km~s^{-1}}$&$0.1\ {\rm km~s^{-1}}$\\ Beam (P.A.)&$0\farcs 37\times 0\farcs26\ (-2\arcdeg)$&$0\farcs 40\times 0\farcs 28\ (-1\arcdeg)$&$0\farcs 42\times 0\farcs 29\ (-1\arcdeg)$&$0\farcs 42\times 0\farcs 29\ (-1\arcdeg)$\\ rms noise level&$25~{\rm \mu Jy~beam^{-1}}$&$2~{\rm mJy~beam^{-1}}$&$2~{\rm mJy~beam^{-1}}$ & $2~{\rm mJy~beam^{-1}}$ \enddata \end{deluxetable*} \clearpage \section{Results} \label{sec:results} \subsection{SMA Polarized 1.3 mm Continuum} Figure \ref{fig:smacont} shows the polarized continuum emission at 1.3 mm observed with the SMA. The Stokes $I$ map shows a compact component associated with TMC-1A, an extension to the east at the $3\sigma$ level, and a slight extension to the southwest.
Gaussian fitting to the Stokes $I$ emission provides a peak position of $\alpha ({\rm J2000})=04^{\rm h}39^{\rm m}35\fs 20,\ \delta ({\rm J2000})=25\arcdeg 41\arcmin 44\farcs 14$ and a deconvolved size of $0\farcs 66 \pm 0\farcs 09\times 0\farcs 50 \pm 0\farcs 11$ with a position angle of $66\arcdeg \pm 28\arcdeg$. This size corresponds to $92\times 70$~au at the distance of TMC-1A, 140 pc. The major axis of the emission is consistent with the major axis of the disk around TMC-1A and perpendicular to the associated outflow \citep{aso15}. The peak intensity and total flux density measured in our SMA observation are $0.171~{\rm Jy~beam^{-1}}$ and 0.182~Jy, respectively. \begin{figure*}[htbp] \gridline{ \fig{tmc1a_polcont_sma0.png}{0.36\textwidth}{(a)} \fig{tmc1a_polcont_sma90.png}{0.36\textwidth}{(b)} } \caption{The continuum emission at 1.3 mm observed with the SMA. The contour map shows Stokes $I$ with contour levels of 3, 6, 12, 24,...$\times \sigma$, where $1\sigma$ corresponds to $0.75\ {\rm mJy~beam^{-1}}$. The color map shows polarization fraction, not de-biased, in the region where Stokes $I$ is above the $2\sigma$ level. The segments show (a) polarization angles and (b) those rotated by $90\arcdeg$, de-biased with the 2$\sigma$ ($0.52\ {\rm mJy~beam^{-1}}$) cutoff. The diagonal lines denote the major (P.A.$=75\arcdeg$) and minor ($165\arcdeg$) axes of the TMC-1A disk (Section \ref{sec:sym}), centered at the protostellar position. The filled ellipse denotes the SMA synthesized beam. \label{fig:smacont}} \end{figure*} The polarized fraction $\sqrt{Q^2 + U^2} / I$, where $I$, $Q$, and $U$ are the Stokes parameters, is shown in the region where Stokes $I$ is detected above the $2\sigma$ level. The orange segments have a uniform length and show the polarization angles in panel (a) and those rotated by $90\arcdeg$ in panel (b), de-biased with the $2\sigma$ level.
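These polarization quantities can be computed directly from the Stokes maps. The sketch below is a minimal illustration, assuming a simple cutoff-style de-biasing rather than the exact MIRIAD {\it impol} implementation; the arrays and noise value are placeholders:

```python
import numpy as np

def polarization_maps(I, Q, U, sigma_qu, cutoff=2.0):
    """Polarized intensity, fraction, and E-vector angle from Stokes maps.

    De-biasing here is a simple cutoff: pixels with
    sqrt(Q^2 + U^2) < cutoff * sigma_qu are masked, mirroring the
    2-sigma cutoff applied to the SMA segments in the text.
    """
    P = np.hypot(Q, U)                           # polarized intensity
    frac = np.where(I > 0, P / I, np.nan)        # polarization fraction
    angle = 0.5 * np.degrees(np.arctan2(U, Q))   # position angle [deg]
    return P, frac, np.where(P >= cutoff * sigma_qu, angle, np.nan)

# Single-pixel example with placeholder values
I = np.array([[10.0]])
Q = np.array([[1.0]])
U = np.array([[0.0]])
P, frac, ang = polarization_maps(I, Q, U, sigma_qu=0.1)
```

Rotating the resulting angle by $90\arcdeg$, as in panel (b), then gives the inferred magnetic-field direction.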
The polarization fraction is typically $\sim 10\%$ in the north and east of TMC-1A, where the de-biased polarization is detected. In contrast, the fraction is $\lesssim 0.5\%$, i.e., de-polarized, in the northeast, center, and southwest. The polarization angle is $\sim -45\arcdeg$ from the disk minor axis (Section \ref{sec:sym}) on the northern side of the de-polarized layer, while being distributed around $\sim +30\arcdeg$ from the disk minor axis on the southern side of the de-polarized layer. The angle is around the disk minor axis in the eastern extension. The $90\arcdeg$-rotated segments (Figure \ref{fig:smacont}b) are expected to trace the direction of magnetic fields in the protostar on the 1000-au scale observed with the SMA. The $90\arcdeg$-rotated segments have relative angles to the disk minor axis (P.A.$\sim -15\arcdeg$; see Section \ref{sec:sym}) ranging from $\sim 20\arcdeg$ to $\sim 50\arcdeg$ in the northwest, while the relative angles range from $\sim -80\arcdeg$ to $\sim -60\arcdeg$ in the southeast. The inferred magnetic field morphology shows symmetry with respect to the de-polarized layer. The inferred field appears to be drawn from the northeast to the southwest, turning its direction at the de-polarized layer. When the field direction varies on scales smaller than the observational beam size, the region appears de-polarized. Hence, such a drawn morphology of the magnetic field may explain the observed de-polarized layer. De-polarization due to such field variation has been numerically simulated \citep{kata12} and observed in other protostellar systems \citep[e.g.,][]{kwon19}. \subsection{ALMA Polarized 1.3 mm Continuum} \label{sec:almacont} Figure \ref{fig:almacont} shows the polarized continuum emission at 1.3 mm observed with ALMA. The spatial scale is five times smaller than in the SMA figure, Figure \ref{fig:smacont}.
The Stokes $I$ emission consists of a compact, strong component ($\lesssim 50$~au in radius, or a signal-to-noise ratio of $\gtrsim 150\sigma$) and an extended component with radii of $\sim 200$~au in the direction of P.A.$=75\arcdeg$ (the disk major axis in Section \ref{sec:sym}) and $\sim 120$~au in the direction of P.A.$=165\arcdeg$ (the disk minor axis in Section \ref{sec:sym}) at the $3\sigma$ level. The peak intensity and total flux density measured in our ALMA observation are $0.120~{\rm Jy~beam^{-1}}$ and 0.236~Jy, respectively. The direction of the major elongation is consistent with previous 1.3 mm observations at a spatial resolution of $\sim 8$~au \citep{bjer16, hars18}. Because the Stokes $I$ emission shows the compact and extended components, double-Gaussian fitting is more appropriate for this Stokes $I$ emission than single-Gaussian fitting. Double-Gaussian fitting provides a peak position of $\alpha ({\rm J2000})=04^{\rm h}39^{\rm m}35\fs 200,\ \delta ({\rm J2000})=25\arcdeg 41\arcmin 44\farcs 229$ for the compact component and $\alpha ({\rm J2000})=04^{\rm h}39^{\rm m}35\fs 202,\ \delta ({\rm J2000})=25\arcdeg 41\arcmin 44\farcs 235$ for the extended component. We adopt the peak position of the compact component as the protostellar position of TMC-1A in this paper. The deconvolved sizes of the compact and extended components are $0\farcs 25\times 0\farcs 15$ (P.A.$=76\arcdeg$) and $1\farcs 09\times 0\farcs 63$ (P.A.$=73\arcdeg$), corresponding to $35\times 21$~au and $153\times 88$~au, respectively. \begin{figure*}[htbp] \gridline{ \fig{tmc1a_polcont_alma0.png}{0.36\textwidth}{(a)} \fig{tmc1a_polcont_alma90.png}{0.36\textwidth}{(b)} } \caption{The continuum emission at 1.3 mm observed with ALMA. The contour map shows Stokes $I$ with contour levels of 3, 6, 12, 24,...$\times \sigma$, where $1\sigma$ corresponds to $25\ {\rm \mu Jy~beam^{-1}}$.
The color map shows polarization fraction, not de-biased, in the region where Stokes $I$ is above the $3\sigma$ level. The segments show (a) polarization angles and (b) those rotated by $90\arcdeg$, de-biased with the 3$\sigma$ ($75\ {\rm \mu Jy~beam^{-1}}$) cutoff. The diagonal lines denote the major (P.A.$=75\arcdeg$) and minor ($165\arcdeg$) axes of the TMC-1A disk (Section \ref{sec:sym}), centered at the protostellar position. The filled ellipse denotes the ALMA synthesized beam. \label{fig:almacont}} \end{figure*} \begin{deluxetable*}{ccccc} \tablecaption{Results of Gaussian fitting to the SMA and ALMA 1.3 mm continuum images. The fitting to the ALMA image adopts a double-Gaussian function. \label{tab:gauss}} \tablehead{ \colhead{Observations} & \colhead{Flux (mJy)} & \colhead{Center} & \colhead{Deconvolved FWHM} & \colhead{P.A.} } \startdata SMA & $169.8\pm 0.9$ & \begin{tabular}{c} $04^{\rm h}39^{\rm m}35\fs 2024\pm 0.0005{\rm s}$ \\ $25\arcdeg 41\arcmin 44\farcs 145\pm 0.006\arcsec$ \end{tabular} & \begin{tabular}{c} $0\farcs 66\pm 0\farcs 09$ \\ $0\farcs 50 \pm 0\farcs10$ \end{tabular} & $66\arcdeg \pm 28 \arcdeg$ \\ \hline ALMA compact & $111.66\pm 0.03$ & \begin{tabular}{c} $04^{\rm h}39^{\rm m}35\fs 199945\pm 0.000003{\rm s}$ \\ $25\arcdeg 41\arcmin 44\farcs 22931\pm 0.00005\arcsec$ \end{tabular} & \begin{tabular}{c} $0\farcs 2502\pm 0\farcs 0002$ \\ $0\farcs 1542 \pm 0\farcs004$ \end{tabular} & $75.8\arcdeg \pm 0.1 \arcdeg$ \\ ALMA extended & $66.8\pm 0.2$ & \begin{tabular}{c} $04^{\rm h}39^{\rm m}35\fs 2017\pm 0.0001{\rm s}$ \\ $25\arcdeg 41\arcmin 44\farcs 235\pm 0.001\arcsec$ \end{tabular} & \begin{tabular}{c} $1\farcs 085\pm 0\farcs 004$ \\ $0\farcs 632 \pm 0\farcs03$ \end{tabular} & $73.4\arcdeg \pm 0.3 \arcdeg$ \\ \hline \enddata \end{deluxetable*} The polarized fraction is shown in the region where Stokes $I$ is detected above the $3\sigma$ level. The polarization angles are de-biased with the $3\sigma$ level. 
The polarization fraction is typically $\sim 0.7\%$ at the center, where the de-biased polarization is detected. This central polarized region is surrounded by a de-polarized ring with a radius of $\sim $40-60~au showing a polarization fraction of $\lesssim 0.3\%$. The polarization angle in the central region is overall along the disk minor axis (Section \ref{sec:sym}), whereas it also has an azimuthal component, particularly in the outer part of this region. A similar de-polarized ring is also reported in the disk around a massive protostar, GGD27 MM1 \citep{gira18}. In addition to the ring, the disk around GGD27 MM1 shows a spatial distribution of polarization fraction similar to that of TMC-1A: the fraction is $\sim 1\%$ in an inner part of the disk and $>6\%$ in an outer part of the disk. On the other hand, the inner polarization fraction in GGD27 MM1 is higher on the near side of the disk, from which \citet{gira18} suggested that dust settling has not yet occurred in GGD27 MM1. In comparison, the inner polarization in TMC-1A shows a symmetric distribution of the polarization fraction and thus could be interpreted as indicating stronger dust settling than in GGD27 MM1. The outer polarization in GGD27 MM1 shows azimuthal directions, which the authors ascribe to self-scattering in optically thin continuum emission. In comparison, the outer polarization direction in TMC-1A is roughly radial, and thus the polarization mechanism in the outer part is unlikely to be self-scattering. In addition to the central polarized component, de-biased polarization is also detected at $\sim 100$~au north and south of the central protostar, where the polarization fraction is typically $\sim 10\%$.
The segments in the northern and southern regions have relative angles to the disk minor axis (P.A.$= -15\arcdeg$; see Section \ref{sec:sym}) ranging from $\sim 0\arcdeg$ to $\sim 45\arcdeg$ in the northern region, while the relative angles range from $\sim 0\arcdeg$ to $\sim 35\arcdeg$ in the southern region. In other words, the $90\arcdeg$-rotated segments are roughly in the azimuthal direction in the northern and southern regions, as shown in Figure \ref{fig:almacont}(b). The polarization direction in the southern region is more consistent with the SMA result than that in the northern region, although it is difficult to directly compare the SMA and ALMA results because of the spatial scale difference of one order of magnitude. Unlike on the SMA scale, several mechanisms can cause polarization on the 100 au scale around protostars. Potential mechanisms will be discussed in Sections \ref{sec:cpol} and \ref{sec:nspol}. \subsection{ALMA CO Isotopologue Lines} \label{sec:line} Figure \ref{fig:line} shows results of the CO, $^{13}$CO, and C$^{18}$O $J=2-1$ lines observed with ALMA. The spatial scale is the same as that of Figure \ref{fig:almacont}. The blue- and redshifted emission is integrated over the same velocity width from the systemic velocity and thus has the same noise level in each panel. The integrated range is divided at $2~{\rm km~s^{-1}}$, which roughly corresponds to the Keplerian velocity at the disk radius of TMC-1A, $\sim100$~au, with the central protostellar mass, $\sim 0.7~M_{\sun}$ \citep{aso15}, and an inclination angle of $50\arcdeg$--$60\arcdeg$. The high-velocity components of the C$^{18}$O and $^{13}$CO emission (Figures \ref{fig:line}b and \ref{fig:line}d) are integrated up to the highest velocity at which the emission is detected at a signal-to-noise ratio (SNR) of 3.
The boundary between the high-velocity (Figure \ref{fig:line}f) and very-high-velocity (Figure \ref{fig:line}g) components of the CO emission is the highest velocity at which the blueshifted emission is detected at an SNR of 3. While these integrated channel maps are shown on the same scale as the continuum image, the entire emission is shown in Appendix \ref{sec:mom01} using the integrated intensity (moment 0) and mean velocity (moment 1) maps. \begin{figure*}[htbp] \gridline{ \fig{tmc1a_C18Obr_l.png}{0.3\textwidth}{(a)} \fig{tmc1a_C18Obr_h.png}{0.3\textwidth}{(b)} } \gridline{ \fig{tmc1a_13CObr_l.png}{0.3\textwidth}{(c)} \fig{tmc1a_13CObr_h.png}{0.3\textwidth}{(d)} } \gridline{ \fig{tmc1a_CObr_l.png}{0.3\textwidth}{(e)} \fig{tmc1a_CObr_h.png}{0.3\textwidth}{(f)} \fig{tmc1a_CObr_hh.png}{0.3\textwidth}{(g)} } \caption{Integrated channel maps of the CO isotopologue lines observed with ALMA. The integrated velocity range relative to $V_{\rm sys}=6.34\ {\rm km~s^{-1}}$ (Section \ref{sec:kep}) is denoted in each panel. Blue and red contours show blue- and redshifted emission, respectively. Contour levels are in $12\sigma$, $24\sigma$, and $36\sigma$ steps for C$^{18}$O, $^{13}$CO, and CO, respectively, from $12\sigma$, where $1\sigma$ corresponds to (a) 0.9, (b) 1.0, (c) 0.9, (d) 1.2, (e) 0.9, (f) 1.7, and (g) 1.5 ${\rm mJy~beam^{-1}}~{\rm km~s^{-1}}$. The diagonal lines denote the major (P.A.$=75\arcdeg$) and minor ($165\arcdeg$) axes of the TMC-1A disk (Section \ref{sec:sym}), centered at the protostellar position. The filled ellipse at the bottom-left corner of each panel denotes the ALMA synthesized beam. \label{fig:line}} \end{figure*} The C$^{18}$O emission shows a velocity gradient mainly along the disk major axis (Section \ref{sec:sym}) in both the low- (Figure \ref{fig:line}a) and high-velocity (Figure \ref{fig:line}b) components.
In addition to this main velocity gradient, the low-velocity component also shows blueshifted emission in the northwest and redshifted emission in the southeast at SNR $\lesssim 50\sigma$, causing a different, diagonal velocity gradient. This diagonal velocity gradient was reported by \citet{aso15} in the low velocity range $<2~{\rm km~s^{-1}}$. The low-velocity component of the C$^{18}$O emission shows double-peaked morphology on both the blue- and redshifted sides, whereas the high-velocity component shows compact, single-peaked morphology on both sides. The double-peaked morphology in the low velocity range is newly revealed here with a higher angular resolution than in \citet{aso15}. The $^{13}$CO emission shows the same features as the C$^{18}$O emission: the main velocity gradient along the disk major axis, another diagonal velocity gradient in the low-velocity component at SNR $\lesssim 120\sigma$, and compact, single-peaked morphology in the high-velocity component. The emission peaks trace the main velocity gradient along the disk major axis in both the high- and low-velocity components, although the peaks are shifted to the north of the disk major axis in the low-velocity component. Low-SNR emission produces the diagonal velocity gradient, as seen in the low-velocity C$^{18}$O emission above and in \citet{aso15}. The low-velocity emission decreases in the central $\sim 40$~au, i.e., within one beam. This is consistent with the absorption in the $^{13}$CO $J=2-1$ line due to strong continuum emission at the protostellar position, reported from the observation at a $\sim 8$~au resolution \citep{hars18}. The CO emission shows more complicated structures than the C$^{18}$O and $^{13}$CO emission. The fainter, or absorbed, part in the low-velocity component is clearer in the CO line than in the $^{13}$CO line. The high-velocity component (Figure \ref{fig:line}f) clearly traces the associated outflow along the disk minor axis.
Strong emission (SNR$>150\sigma$) shows the main velocity gradient along the disk major axis, the same as in the C$^{18}$O and $^{13}$CO emission. A part of the strong emission is also extended to the north, tracing the outflow, as is also seen in the $^{13}$CO emission. Weak emission (SNR$<150\sigma$) is extended to the southern side along the disk minor axis in both the blue- and redshifted components in the high velocity range (Figure \ref{fig:line}f); these structures are not seen in the C$^{18}$O or $^{13}$CO lines. The very-high-velocity component (Figure \ref{fig:line}g) traces a part of the blueshifted outflow lobe and is more collimated than the high-velocity component (Figure \ref{fig:line}f). This is consistent with the previous observation showing that emission in the blueshifted lobe is more collimated at higher velocities \citep{aso15}. \clearpage \section{Analyses} \label{sec:analyses} \subsection{DCF Method} \label{sec:dcf} TMC-1A has an infalling rotating envelope around its Keplerian disk. \citet{aso15} reported a radial infall velocity of $\sim 0.3$ times the free-fall velocity and suggested that a magnetic field of $\sim 2$~mG in strength is required to reduce the infall velocity to the observed value. Our SMA observations enable us to test this suggestion by estimating the field strength. The Davis-Chandrasekhar-Fermi (DCF) method \citep{davi51, ch.fe53} is the most widely used technique for inferring the magnetic field strength from polarization observations \citep[e.g., ][]{kwon19}.
The method assumes that Alfv\'enic fluctuation dominates the magnetic and velocity fields, with the magnetic-field strength $B_{\rm pos}$ in the plane of the sky estimated as \begin{equation} \label{eq:dcf} B_{\rm pos} = \xi \sqrt{4\pi \langle \rho \rangle} \frac{\delta v_{\rm los}}{\delta \phi}, \end{equation} where $\langle \rho \rangle$, $\delta v_{\rm los}$, and $\delta \phi$ are the mean density, the dispersion of the line-of-sight velocity, and the dispersion of the polarization angle, respectively. The correction factor $\xi$ is usually taken to be $\sim 0.5$ based on numerical simulations \citep{heit01, ostr01, pado01}. \begin{figure*}[htbp] \gridline{ \fig{mapDCF.pdf}{0.42\textwidth}{(a)} \fig{histDCF.pdf}{0.37\textwidth}{(b)} } \caption{(a) Deviation of polarization angles observed with the SMA from 2-beam averaged angles. The blue segments denote the 2-beam averaged angles $\langle \phi \rangle$. The orange segments, the contour map, and the filled ellipse are the same as those in Figure \ref{fig:smacont}. The coordinates are relative to the protostellar position. (b) Cumulative histogram of the relative polarization angle from the 2-beam averaged angle, $\phi - \langle \phi \rangle$. The black curve is the error function with a standard deviation of $21\arcdeg$. \label{fig:dcf}} \end{figure*} Measuring the angle dispersion $\delta \phi$ requires an average angle at each position. We define the average angle $\langle \phi \rangle$ by averaging the Stokes $Q$ and $U$ emission over a 2D Gaussian function with a FWHM of $6\arcsec$, which is larger than the SMA synthesized beam by a factor of $\sim 2$. This size is selected because the polarized emission is detected over an area of $\sim 4$ beams. Figure \ref{fig:dcf}(a) shows the average angles as blue segments and the relative angle ($\phi - \langle \phi \rangle$) as a color map. The dispersion $\delta \phi$ in Equation (\ref{eq:dcf}) is derived as the standard deviation of the relative angle, $21\arcdeg$.
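Averaging Stokes $Q$ and $U$ before taking the angle, rather than averaging the angles themselves, avoids the $\pm 90\arcdeg$ wrapping ambiguity of position angles. A minimal sketch of this step, assuming a {\tt scipy} Gaussian smoothing kernel as a stand-in for the 2D Gaussian weighting used here (the arrays and kernel size are placeholders, not the exact procedure behind Figure \ref{fig:dcf}):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def angle_dispersion(Q, U, fwhm_pix):
    """Standard deviation of polarization angles about locally averaged angles.

    <phi> comes from Gaussian-smoothed Q and U maps, and phi - <phi>
    is wrapped into [-90, 90) deg before taking the dispersion.
    """
    sig = fwhm_pix / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # FWHM -> sigma
    phi = 0.5 * np.degrees(np.arctan2(U, Q))
    phi_avg = 0.5 * np.degrees(np.arctan2(gaussian_filter(U, sig),
                                          gaussian_filter(Q, sig)))
    dphi = (phi - phi_avg + 90.0) % 180.0 - 90.0          # wrap angles
    return np.std(dphi)
```

A perfectly uniform field gives zero dispersion; the SMA maps give $21\arcdeg$.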
Figure \ref{fig:dcf}(b) shows the cumulative histogram of the relative angle overlaid with the error function with a standard deviation of $21\arcdeg$. This histogram indicates that the standard deviation represents the distribution of the relative angle well. The mean density $\langle \rho \rangle$ is derived from the total flux density and the size of the SMA continuum emission. The total flux density of 0.182~Jy corresponds to $\sim 0.024~M_{\sun}$ for a dust opacity coefficient, opacity index, dust temperature, and gas-to-dust mass ratio of $0.035~{\rm cm}^2~{\rm g}^{-1}$ at $850~\micron$ \citep{andr05}, $1.46$, 28~K \citep{chan98}, and 100, respectively. The SMA continuum emission is detected over $\sim 500$~au in radius at the $3\sigma$ level. Averaging the mass of $0.024~M_{\sun}$ over a sphere of 500~au in radius, we obtain a mean density of $\sim 3\times 10^{-17}~{\rm g}~{\rm cm}^{-3}$, which is adopted as $\langle \rho \rangle$ in Equation (\ref{eq:dcf}). The velocity dispersion is difficult to measure directly from the observations toward TMC-1A because this protostar is known to have ordered motions, such as rotation and/or radial infall, which are not included in the original DCF method. For this reason, the observed velocity dispersion provides an upper limit on $\delta v$ in Equation (\ref{eq:dcf}). For example, the C$^{18}$O emission observed in our ALMA observations has a velocity dispersion (standard deviation) of $\sim 1.0~{\rm km~s^{-1}}$ when the emission over the entire spatial area is included. With the three quantities derived above, as well as the correction factor of 0.5, Equation (\ref{eq:dcf}) yields $B_{\rm pos}\sim 3$~mG. This estimate is consistent with the prediction in \citet{aso15} for explaining the radial infall velocity of $\sim 0.3$ times the free-fall velocity in the TMC-1A envelope. We caution the reader that the estimate is rough because each quantity has a large uncertainty.
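As a purely arithmetic check, Equation (\ref{eq:dcf}) can be evaluated with the quoted numbers in CGS units:

```python
import numpy as np

# Quantities quoted in the text (CGS units)
xi = 0.5                      # correction factor
rho = 3.0e-17                 # mean density [g cm^-3]
dv = 1.0e5                    # velocity dispersion, 1.0 km/s [cm s^-1]
dphi = np.radians(21.0)       # angle dispersion [rad]

B_pos = xi * np.sqrt(4.0 * np.pi * rho) * dv / dphi   # [Gauss]
print(f"B_pos = {B_pos * 1e3:.1f} mG")                # prints "B_pos = 2.6 mG"
```

i.e., $\sim 3$~mG as quoted; since $\delta v$ is only an upper limit, this is likewise an upper limit on $B_{\rm pos}$.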
For example, if the averaging area for $\delta \phi$ is changed from 1.5 to 4 beams, $\delta \phi^{-1}$ would vary by $\pm 50\%$. If dust grains grow to such a large size that the dust opacity index is close to zero, $\sqrt{\langle \rho \rangle}$ would decrease by $\sim 40\%$. If the velocity dispersion due to Alfv\'enic motions is similar to the sound speed at 28 K \citep{chan98}, $\sim 0.3~{\rm km~s^{-1}}$, $\delta v$ would decrease by a factor of $\sim 3$. When these uncertainties are taken into account, $B_{\rm pos}$ is estimated to be within a range of 1--5~mG. \subsection{Axisymmetric Component Subtraction} \label{sec:sym} The morphology of the continuum emission helps us reveal the physical origin of the dust polarization. For this purpose, we investigate the morphology of the 1.3 mm continuum emission in our ALMA observations in this subsection. Specifically, we decompose the continuum emission into an axisymmetric component and a residual through model fitting. Our axisymmetric model has a radial intensity profile, $f(r)$, centered at $(x_0, y_0)$. The central coordinates are free parameters, defined as offsets from the protostellar position derived from the double-Gaussian fitting in Section \ref{sec:almacont}. This model is scaled along the minor-axis direction at $\theta _0-90\arcdeg$ by a factor of $\cos i$ and then convolved with a Gaussian beam with the same FWHM as the ALMA synthesized beam. The position angle $\theta _0$ and the inclination angle $i$ are free parameters. To produce an arbitrary radial profile, $f(0~{\rm au})$, $f(20~{\rm au})$, $f(40~{\rm au})$, ..., $f(300~{\rm au})$ are free parameters in this model fitting. The grid spacing of $f(r)$, 20~au, is half the beam size of the ALMA observations. Intensities at radii within 300~au but not on the grid are interpolated from the grid points, and those beyond $300~{\rm au}$ are set to zero. 
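To make the model construction concrete, the steps above (interpolating the tabulated radial profile, foreshortening along the minor axis by $\cos i$, and convolving with the beam) can be sketched as follows. This is illustrative only: the image size, pixel scale, and function names are our assumptions, and a circular Gaussian filter from scipy stands in for convolution with the elliptical synthesized beam.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def axisymmetric_model(f_grid, x0, y0, pa_deg, inc_deg,
                       npix=256, pixscale=5.0, beam_sigma_pix=4.0):
    """Axisymmetric disk image: radial profile f(r) sampled every 20 au,
    foreshortened by cos(i) along the minor axis, then beam-convolved."""
    r_grid = np.arange(len(f_grid)) * 20.0        # radii of f(r) samples [au]
    x = (np.arange(npix) - npix / 2) * pixscale   # sky offsets [au]
    xx, yy = np.meshgrid(x, x)
    # rotate sky offsets into the disk frame (u along the major axis)
    pa = np.radians(pa_deg)
    u = (xx - x0) * np.sin(pa) + (yy - y0) * np.cos(pa)
    v = -(xx - x0) * np.cos(pa) + (yy - y0) * np.sin(pa)
    # undo the cos(i) foreshortening to get the deprojected radius
    r = np.hypot(u, v / np.cos(np.radians(inc_deg)))
    # interpolate the free-parameter profile; zero beyond the last grid radius
    image = np.interp(r, r_grid, f_grid, right=0.0)
    return gaussian_filter(image, beam_sigma_pix)  # mimic beam convolution

# example: a smooth declining profile on the 0-300 au grid (16 points)
f = np.exp(-np.arange(16) * 20.0 / 100.0)
model = axisymmetric_model(f, x0=0.08, y0=0.02, pa_deg=75.0, inc_deg=53.0)
```

In the actual fit the sixteen $f(r)$ values are themselves free parameters, so a routine like this would be called once per likelihood evaluation inside the MCMC.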
The best-fit parameters are derived by minimizing $\chi ^2=\sum (f_{\rm obs}-f_{\rm mod})^2/\sigma ^2$ in the image domain, where $f_{\rm obs}$, $f_{\rm mod}$, and $\sigma$ are the observed intensity, model intensity, and rms noise level of the continuum observations, respectively. For this fitting, we adopt the Markov Chain Monte Carlo method using the open-source code {\it ptemcee} \citep{fore13, vous16}, where the log-likelihood function is $-\chi ^2 / 2$. The numbers of temperatures, walkers, and steps are 2, 190, and 10000, respectively. The first 5000 steps are discarded as burn-in. The uncertainties of the parameters are defined as the 10th and 90th percentiles of the parameter chains. The best-fit coordinates and angles, including their uncertainties, are $(x_0,\ y_0,\ \theta _0,\ i)=(0.08^{+0.02}_{-0.02}~{\rm au},\ 0.02^{+0.02}_{-0.02}~{\rm au},\ 75^{+0.3}_{-0.2}~{\rm deg},\ 53^{+0.2}_{-0.3}~{\rm deg})$. Figure \ref{fig:sym} shows the best-fit model with the observation and the residual, i.e., the observation minus the best-fit model. Although the best-fit model reproduces the overall shape of the observation, the residual interestingly shows positive and negative spiral-like patterns. Note that this residual is a relative structure with respect to the axisymmetric component; in other words, a single one-armed spiral plus an axisymmetric component can produce such positive and negative patterns. The orientation angle $\theta_0=75\arcdeg$ is close to the disk major axis reported in previous works \citep{hars14, aso15, bjer16}, and thus we define this orientation as the disk major axis and the orthogonal direction as the disk minor axis of TMC-1A in this paper. \begin{figure*}[htbp] \gridline{ \fig{sym_obsmod.pdf}{0.42\textwidth}{(a)} \fig{sym_res.pdf}{0.42\textwidth}{(b)} } \caption{(a) Comparison of the ALMA 1.3 mm continuum and an axisymmetric model. (b) Residual, the observation minus the axisymmetric model, in the gray-scale and contour maps. 
The contour levels, diagonal lines, and the filled ellipse in both panels are the same as those in Figure \ref{fig:almacont}. \label{fig:sym}} \end{figure*} To investigate the spiral-like residual in more detail, we determine the relation between the radius and azimuthal angle along the positive and negative patterns. The intensity-weighted mean position (radius) is derived along the radial direction at every $5\arcdeg$. This mean radius uses intensities above $+3\sigma$ for the positive pattern and below $-3\sigma$ for the negative pattern. The derived mean radii are plotted in Figure \ref{fig:dprj} on the residual map de-projected using the orientation and inclination angles, $\theta_0=75\arcdeg$ and $i=53\arcdeg$. The red and green points are the mean radii for the positive and negative patterns, respectively. Figure \ref{fig:dprj} shows that the derived points trace the spiral-like patterns well. The mean radii for the positive pattern (red points) are fitted with a simple logarithmic spiral, $r=r_0~\exp (\alpha \theta)$, on the de-projected plane, where $r$ and $\theta$ are the polar coordinates, while $r_0$ and $\alpha$ are free parameters. The best-fit model is obtained by minimizing $\sum (r_{\rm obs}-r_{\rm mod})^2$ on the angle grid, where $r_{\rm obs}$ and $r_{\rm mod}$ are the observed mean radii (red points in Figure \ref{fig:dprj}) and the model fit, respectively. The best-fit parameters are $r_0=128$~au and $\alpha=-0.195$. This logarithmic spiral is consistent with the positive pattern, as shown in Figure \ref{fig:dprj} (see the orange curve). \begin{figure}[htbp] \epsscale{1} \plotone{sym_res_dprj.pdf} \caption{The residual map de-projected by stretching along the disk-minor axis, P.A.$=165\arcdeg$, by a factor of $1/\cos 53\arcdeg$. The filled ellipse is the de-projected beam, $0\farcs 38\times 0\farcs 34\ (-17.2\arcdeg)$. The diagonal lines are the same as those in Figure \ref{fig:almacont}. 
The red points are the intensity-weighted mean radii at every $5\arcdeg$ of the positive residual, while the red line is an interpolated curve. The green points and line are the counterparts for the negative residual. The orange curve is the logarithmic spiral fitted to the red points, $r=128\ {\rm au}\ \exp (-0.195\theta)$, where $r$ and $\theta$ are the radius and azimuthal angle in the de-projected plane, and $\theta$ is in units of radians. \label{fig:dprj}} \end{figure} \begin{figure*}[htbp] \epsscale{1.17} \plotone{contC18O_chans.png} \caption{Channel maps of the C$^{18}$O line with contours overlaid on the gray-scale map of the continuum residual. The contour levels are in $10\sigma$ steps, where $1\sigma$ corresponds to $1\ {\rm mJy~beam^{-1}}$. The diagonal lines are the same as those in Figure \ref{fig:almacont}. The filled ellipse is the C$^{18}$O synthesized beam. The velocity of each channel is denoted in the top-left corner of each panel. \label{fig:resc18o}} \end{figure*} The continuum residual also shows a correlation with the C$^{18}$O emission. Figure \ref{fig:resc18o} shows the residual map together with the C$^{18}$O channel maps at a velocity resolution of $0.4~{\rm km~s^{-1}}$. One clear correlation can be seen at 4.35--$5.15~{\rm km~s^{-1}}$ and 7.15--$7.95~{\rm km~s^{-1}}$. From 2.35 to 4.35~${\rm km~s^{-1}}$, the emission peak moves outward from the protostellar position along the disk major axis. The emission peak arrives at the eastern edge of the positive residual at $4.35~{\rm km~s^{-1}}$ and is divided into two peaks in the disk minor-axis direction from $4.75~{\rm km~s^{-1}}$, not going beyond the eastern edge of the positive residual. Similarly, from 10.35 to $8.35~{\rm km~s^{-1}}$, the emission peak moves outward from the protostellar position along the disk major axis. 
The emission peak arrives at the western edge of the positive residual at $7.95~{\rm km~s^{-1}}$ and is divided into two peaks in the disk minor-axis direction from $7.55~{\rm km~s^{-1}}$. These results on the eastern and western sides suggest that the C$^{18}$O emission is also concentrated inside the continuum residual, showing the same asymmetry as the continuum residual. The high velocities $V<4.35~{\rm km~s^{-1}}$ and $V>8.35~{\rm km~s^{-1}}$ correspond to $|V-V_{\rm sys}|\gtrsim 2~{\rm km~s^{-1}}$. The fact that the emission peak is located along the disk major axis suggests that the C$^{18}$O emission traces the Keplerian disk around TMC-1A in this velocity range. Furthermore, the emission peak is divided at the lower velocities. This change suggests that the velocity channels at $|V-V_{\rm sys}|\sim 2~{\rm km~s^{-1}}$ trace the C$^{18}$O emission arising from the outer edge of the TMC-1A disk. In other words, the lowest rotational velocity in the TMC-1A disk (at the outer edge) is $\sim 2~{\rm km~s^{-1}}$. This is consistent with the disk radius, $\sim 100$~au, and the central protostellar mass, $\sim 0.7~M_{\sun}$, of TMC-1A previously reported by \citet{aso15}. The C$^{18}$O intensity also implies another correlation with the continuum residual. Figure \ref{fig:meanspi} shows the mean intensity of the C$^{18}$O line on the positive (red in Figure \ref{fig:dprj}) and negative (green in Figure \ref{fig:dprj}) patterns at each velocity channel. The mean intensity on the positive pattern is significantly stronger than that on the negative one within $|V-V_{\rm sys}|<2~{\rm km~s^{-1}}$. This suggests an intensity enhancement in the C$^{18}$O line along the same spiral as the intensity enhancement in the continuum emission, in this velocity range. 
On the other hand, the intensity on the positive pattern is stronger at blueshifted higher velocities ($V-V_{\rm sys}<-2~{\rm km~s^{-1}}$), while the intensity on the negative pattern is stronger at redshifted higher velocities ($V-V_{\rm sys}>2~{\rm km~s^{-1}}$). These differences at higher velocities are consistent with Keplerian rotation, as discussed in more detail in Section \ref{sec:kep} below using a Keplerian disk model; the solid lines without points in Figure \ref{fig:meanspi} are from the model in Section \ref{sec:kep}. \begin{figure}[htbp] \epsscale{1} \plotone{meanspirals.pdf} \caption{Mean intensities of the C$^{18}$O line along the positive and negative residuals of the continuum emission at each velocity channel. The vertical lines are $V_{\rm sys}-2\ {\rm km~s^{-1}}$, $V_{\rm sys}$, and $V_{\rm sys} + 2\ {\rm km~s^{-1}}$. The light-color profiles show the mean intensities of the Keplerian disk model discussed in Section \ref{sec:kep}. \label{fig:meanspi}} \end{figure} \subsection{Keplerian Rotation and Infall Motion} \label{sec:kep} The analysis in the previous subsection revealed correlations between the spatial distributions of the C$^{18}$O emission and the continuum residual. In this subsection, the velocity structure of the C$^{18}$O emission is modeled to help interpret the correlations. TMC-1A is known to have a Keplerian disk \citep{hars14, aso15, bjer16}. Hence, we model the Keplerian disk by fitting a toy model. The main purpose of this fitting is not to constrain the physical conditions of the disk but to derive the distribution of intensity arising from the disk in the position-position-velocity space. The velocity field in the toy model is Keplerian rotation determined by the central protostellar mass $M_*$, and each position has a uniform line profile described as a Gaussian function with a standard deviation of $c_s$. The systemic velocity is $V_{\rm sys}$. 
The intensity in the model disk is a power-law function of radius up to an outer radius $R_{\rm out}$, $I_{100} (r/100~{\rm au})^{-p}$, where $r$, $I_{100}$, and $p$ are the radius on the disk plane, a coefficient, and a power-law index, respectively. This intensity field is located on two parallel planes at $\pm H$ from the midplane, to mimic the scale height of the disk. The model disk is oriented by $\theta _0$ and inclined by $i$: the angles derived from the fitting to the continuum emission (Section \ref{sec:sym}). This model intensity is convolved with a Gaussian beam having the same FWHM as the C$^{18}$O observation. In summary, the toy model has seven free parameters, $M_*$, $c_s$, $V_{\rm sys}$, $I_{100}$, $p$, $R_{\rm out}$, and $H$. The best-fit parameters are derived by minimizing $\chi^2 = \sum (f_{\rm obs}-f_{\rm mod})^2/\sigma ^2$ over the velocity ranges of 1.4--4.4 and 8.4--$11.4~{\rm km~s^{-1}}$ in the image domain, where $f_{\rm obs}$, $f_{\rm mod}$, and $\sigma$ are the observed C$^{18}$O intensity, model intensity, and rms noise level of the C$^{18}$O observation, respectively. The best-fit model has the parameters of $(M_*,\ c_s,\ V_{\rm sys},\ I_{100},\ p,\ R_{\rm out},\ H)=(0.72~M_{\sun},\ 0.38~{\rm km~s^{-1}},\ 6.34~{\rm km~s^{-1}},\ 1.5~{\rm mJy~pixel}^{-1},$ $0.48,\ 150~{\rm au},\ 14~{\rm au})$. The central protostellar mass is consistent with the previous work \citep[$0.68~M_{\sun}$;][]{aso15}. The best-fit model reproduces the observed C$^{18}$O channel maps well in the fitted velocity range, as shown in Appendix \ref{sec:chmod}. This model also reproduces the mean intensities along the positive and negative residuals in the fitted velocity range, $|V-V_{\rm sys}|>2.0~{\rm km~s^{-1}}$, as shown in Figure \ref{fig:meanspi}. The Keplerian disk model can be used to inspect non-Keplerian components in the C$^{18}$O data cube. 
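The kinematics behind such a comparison are simple enough to write down directly. Below is a sketch of the projected line-of-sight velocity of a thin disk with Keplerian rotation plus an optional radial infall term; the parameterization and angle convention are our assumptions for illustration, not the paper's fitting code.

```python
import math

G = 6.674e-8        # gravitational constant [cgs]
M_SUN = 1.989e33    # solar mass [g]
AU = 1.496e13       # astronomical unit [cm]

def v_los(r_au, theta_deg, m_star=0.72, inc_deg=53.0,
          v_sys=6.34, f_infall=0.0):
    """Line-of-sight velocity [km/s] of a thin disk at radius r and
    azimuth theta (measured in the disk plane from the redshifted major
    axis). Rotation is Keplerian; the radial term is f_infall * v_kep."""
    v_kep = math.sqrt(G * m_star * M_SUN / (r_au * AU)) / 1e5  # km/s
    th = math.radians(theta_deg)
    sini = math.sin(math.radians(inc_deg))
    # rotation projects as cos(theta); radial infall projects as sin(theta)
    return v_sys + (v_kep * math.cos(th) + f_infall * v_kep * math.sin(th)) * sini

# Keplerian speed at r = 75 au for 0.72 Msun: ~2.9 km/s
vk = math.sqrt(G * 0.72 * M_SUN / (75 * AU)) / 1e5
# on the minor axis (theta = 90 deg) the rotation term vanishes, so a
# 20% radial infall shifts the line by ~0.2 * vk * sin(53 deg)
shift = v_los(75, 90, f_infall=0.2) - v_los(75, 90)   # ~0.47 km/s
```

A shift of this size on the minor axis is the kind of signature inspected in the position-velocity comparison that follows.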
In particular, it is worth comparing the velocity structure along the positive and negative residuals in the continuum emission with the Keplerian velocity field. For this purpose, Figures \ref{fig:pvsp}(a) and \ref{fig:pvsp}(b) show the position-velocity diagrams of the observed C$^{18}$O emission and the Keplerian disk model along the positive (red in Figure \ref{fig:dprj}) and negative (green in Figure \ref{fig:dprj}) residuals, where the abscissa coordinate is the position angle measured from the north to the east. These diagrams show that the overall velocity structure in the C$^{18}$O line is Keplerian rotation. A possible offset of the emission peak can be found in the positive-residual diagram around the minor axis (P.A.$=165\arcdeg$), as highlighted with an arrow in Figure \ref{fig:pvsp}(a): the observed velocity structure appears slightly redshifted with respect to the Keplerian disk model by 0.5--$0.6~{\rm km~s^{-1}}$. This angle corresponds to a radius of 70--80~au on the de-projected plane (Figure \ref{fig:dprj}). The Keplerian velocity at this radius is 2.8--$3.0~{\rm km~s^{-1}}$ with $M_*=0.72~M_{\sun}$. The ratio between the offset $\sim 0.5~{\rm km~s^{-1}}$ and the Keplerian velocity 2.8--$3.0~{\rm km~s^{-1}}$ is consistent with the logarithmic spiral with $|\alpha| \sim 0.2$ derived in Section \ref{sec:sym}, suggesting a flow along the positive residual with a radial infall velocity of $\sim 0.2$ times the rotational velocity. The presence of the infalling component is consistent with the velocity gradient in the minor-axis direction mentioned in Section \ref{sec:line}. \begin{figure*}[htbp] \gridline{ \fig{obsmod_pvspiral.pdf}{0.4\textwidth}{(a)} \fig{obsmod_pvspineg.pdf}{0.4\textwidth}{(b)} } \gridline{ \fig{obsmod_pvspiral_i.pdf}{0.4\textwidth}{(c)} \fig{obsmod_pvspineg_i.pdf}{0.4\textwidth}{(d)} } \caption{Position-velocity diagrams along the positive (left) and negative (right) residuals of the continuum emission. 
The abscissa value is the position angle from the north to the east. The blue contours show a Keplerian disk model or a Keplerian$+$infall model. The contour levels are in $10\sigma$ steps. The white curve is the velocity field on the midplane. The vertical and horizontal dashed lines denote the disk-minor axis and the systemic velocity, respectively. The arrows point to the main difference between the two models. Note that the positive residual prefers the Keplerian$+$infall model, while the negative residual is better fitted by the Keplerian-only model. \label{fig:pvsp}} \end{figure*} Figure \ref{fig:pvsp}(c) shows a comparison between the observed C$^{18}$O emission (same as in Figure \ref{fig:pvsp}(a)) and an infalling, rotating model that has the Keplerian rotation and a radial infall velocity of 0.2 times the Keplerian rotation. The emission peak of this Keplerian$+$infall model traces the observed peak better around the minor axis than that of the Keplerian disk model, while the other parts are similar to each other. In contrast, the emission peak of the Keplerian disk model is more consistent with the observed peak in the negative-residual diagram than that of the Keplerian$+$infall model (Figures \ref{fig:pvsp}(b) and \ref{fig:pvsp}(d)), in particular, around the minor axis (P.A.$=-15\arcdeg$). This difference implies that only the positive residual has an inflow motion, supporting the possibility that a single one-armed spiral provides both positive and negative patterns as a relative structure with respect to the axisymmetric average component, as mentioned in Section \ref{sec:sym}. The radial infall velocity at $r=70$--80~au revealed in this subsection corresponds to $\sim 14\%$ of the free-fall velocity, since the free-fall velocity is $\sqrt{2}$ times the Keplerian velocity. Although the radial infall velocity at outer radii cannot be evaluated from our results, a larger ratio of $\sim 30\%$ is reported in \citet{aso15} at $r=100$--200 au. 
This difference may suggest a soft landing of the collapsing envelope material, through the one-armed spiral, onto the outer part of the disk. \clearpage \section{Discussion} \label{sec:discussion} The origin of polarization is not as simple on the disk scale around protostars as it is on the envelope scale and larger. Polarization on the larger scales, such as that probed by our SMA observations, is generally thought to come from magnetic alignment. We discuss possible origins of the ALMA polarization, as well as that of the accretion flow, in this section. \subsection{Central Polarization} \label{sec:cpol} The ALMA polarization shows two components clearly divided by a de-polarized ring: a central component ($r\lesssim 50$~au) and a northern/southern component (Figure \ref{fig:almacont}). The central component shows polarization E-vectors along the disk minor axis near the axis and includes an azimuthal component near the outer edge. These two morphological features suggest dust self-scattering occurring on the disk surface as the origin of the central polarization. Polarization due to self-scattering is detected in other Class 0 and I protostars \citep[e.g., ][]{lee18}. Theoretical simulations of self-scattering predict such polarization directions and polarization fractions of $\sim 0.8\%$ to $\sim 1.0\%$ with inclination angles of $50\arcdeg$ to $60\arcdeg $ \citep{yang17}, which is also consistent with the observed polarization fraction. Figure \ref{fig:polfi} shows the observed polarization fraction as a function of Stokes $I$. At the radii of the central component, $r\lesssim 50$~au, the polarization fraction is independent of Stokes $I$. This tendency favors self-scattering over magnetic alignment. The theoretical simulations suggest that self-scattering requires a high optical depth ($\tau \gtrsim 1$) to produce a polarization fraction high enough ($\gtrsim 0.5\%$) to be observed. 
Previous observations toward TMC-1A indeed reported that the 1.3 mm dust continuum emission is optically thick in a central region with $r\lesssim 20$--30~au \citep{hars18}. \begin{figure}[htbp] \epsscale{1} \plotone{tmc1a_polcontFI.pdf} \caption{Polarization fraction as a function of Stokes $I$ of the ALMA 1.3 mm continuum emission. The color denotes the radius of each point. The dashed line is the $3\sigma$ detection level for the polarization intensity. The data points with larger point sizes correspond to the positions where de-biased polarization angles are derived in Figure \ref{fig:almacont}. \label{fig:polfi}} \end{figure} The high optical depth in the TMC-1A disk was interpreted as a result of a high opacity due to dust grains with mm sizes \citep{hars18}. Although a massive disk can also produce a high optical depth, such a disk should be gravitationally unstable and thus show some sign of instability (e.g., spiral arms or fragmentation). Previous observational studies did not show such a sign in the TMC-1A disk at mm wavelengths, favoring the mm-grain scenario over the massive-disk scenario. Millimeter grains are, however, not consistent with the self-scattering origin of polarization, because self-scattering requires a maximum grain size of $\sim 80$--$300~\micron$ to produce a polarization fraction of $\gtrsim 1\%$, while mm-sized grains decrease the predicted polarization fraction to $\sim 0.01\%$ \citep{kata16}. In addition to the self-scattering-induced polarization, we have revealed a spiral-like residual in the 1.3 mm continuum emission. This structure may hint at gravitational instability; hence, the high optical depth in the TMC-1A disk could be due to a massive disk. For these reasons, we suggest that dust grains mainly have sizes up to a few hundred $\micron$ in the TMC-1A disk. 
\subsection{Northern and Southern Polarization} \label{sec:nspol} The northern and southern components of the ALMA polarization are unlikely to be produced by scattering because of their high polarization fraction, their location in the outer part of the disk along the minor axis, and their relatively low optical depth. We consider three alternative possibilities. The first possibility is magnetic alignment by toroidal magnetic fields in the disk or outflow associated with TMC-1A. A toroidal field is described by an ellipse elongated along the disk major axis, which is broadly consistent with the $90\arcdeg$-rotated vectors detected with ALMA, except for the central component. The polarization fraction in the north/south component is $\sim 10\%$ and shows an anticorrelation with Stokes $I$ (Figure \ref{fig:polfi}). Such a high polarization fraction is possible with elongated dust particles aligned in a magnetic field, although boundary areas of a structure in interferometric observations can show a relatively high polarization fraction due to a larger filtering effect in Stokes $I$ \citep{kwon19}. \citet{kwon19} also reported a negative power-law index, $-0.4$ to $-1.0$, between the two quantities on a few 1000 au scale in the protostellar system L1448 IRS 2, where the polarization originates in magnetic alignment. Because the TMC-1A system is inclined from the line of sight, the polarization near the minor axis is expected to be stronger than that near the major axis. This is because non-spherical grains rapidly spinning about their magnetically aligned short axes are effectively oblate spheres; such aligned oblate spheres are seen more edge-on near the minor axis and more face-on (and thus more circular, with lower polarization) near the major axis (see the top panels of Figure 10 of \citet{ch.la07} or the middle panels of Figure 3 of \citet{yang16}). 
In addition, because of the disk inclination, the toroidal fields show a low (high) curvature near (far from) the outflow axis on the projected plane. When magnetic fields have a high curvature, the observed polarization can have a low polarization fraction because signals with different polarization directions are summed within an observational beam and cancel out. These two effects could explain the fact that the polarized emission is detected with ALMA mainly in the north and south of the central protostar. The second possibility is the mechanical alignment called the ``Gold mechanism'' by the associated outflow \citep{gold52}. Dust grains in the outflow lobe are aligned in the outflow direction by the gas motion, as if each grain were a boat in a river. The aligned grains then produce a polarization direction parallel to the outflow direction, as observed in our ALMA observations. The TMC-1A outflow shows a projected velocity faster than $\sim 10~{\rm km~s^{-1}}$, indicating that the velocity in this outflow is sufficiently high to provide the supersonic gas flow that makes the Gold mechanism efficient \citep{laza94}. However, because the disk material is expected to be much denser than the outflow material, most of the dust emission must come from the disk. If the disk emission is not polarized, the intrinsic polarization from the outflow emission must be much higher than the observed 10\%, which is unlikely. Furthermore, while the blueshifted outflow spatially coincides with the northern polarization component, there is no clearly detected redshifted outflow that spatially coincides with the southern polarization component. For these reasons, we regard the Gold mechanism as less likely. We should mention that another version of mechanical alignment was studied by \citet{hoan18}, where the grains are expected to align their long axes perpendicular to the gas flow. 
It would predict polarization E-vectors perpendicular to the minor axis, which are not observed in this component. The third possibility is magnetic alignment by a magnetic field along the accretion flow suggested in Section \ref{sec:kep}. This possibility is motivated by the fact that the northern polarization overlaps the spiral-like residual and the $90\arcdeg$-rotated vectors are also along the residual, as shown in Figure \ref{fig:res90}. This peculiar structure could explain the limited location of the detected polarization. This possibility cannot be applied to the southern polarization, however, because the southern polarization overlaps the negative spiral outside the one-armed spiral. \begin{figure}[htbp] \epsscale{1} \plotone{sym_res_90.pdf} \caption{ALMA polarization directions rotated by $90\arcdeg$ overlaid on the spiral-like residual in the 1.3 mm Stokes $I$. The northern polarization shows the vectors ($90\arcdeg$-rotated) along the spiral-like residual. \label{fig:res90}} \end{figure} \subsection{DCF Method Applied to the ALMA Polarization} \label{sec:dcfalma} The northern and southern components in the ALMA polarized 1.3 mm emission are likely to originate in toroidal magnetic fields in TMC-1A, as discussed in the previous subsection. The DCF method then allows us to roughly estimate the field strength, as it does for our SMA data (Section \ref{sec:dcf}). The dispersion of the polarization angle is measured from the 2-beam averaged angles using a 2D Gaussian function with a FWHM of $\sim 0\farcs 6$. Note that the central polarization is masked by a threshold of polarization fraction $<1\%$ and not used for the average-angle calculation. Figure \ref{fig:dcfalma}(a) shows the comparison of the original and average angles. The dispersion is calculated to be $\delta \phi=16\arcdeg$ in this case. Figure \ref{fig:dcfalma}(b) shows the cumulative histogram of the relative angle with the error function with a standard deviation of $16\arcdeg$. 
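For completeness, the same DCF evaluation as in Section \ref{sec:dcf} can be repeated with this angle dispersion, bracketing the result over the density and velocity-dispersion ranges adopted in this subsection (a sketch in CGS units; variable names are ours):

```python
import math
from itertools import product

xi, dphi = 0.5, math.radians(16.0)   # correction factor; angle dispersion [rad]
rho_range = (0.6e-15, 1.6e-15)       # mean density [g cm^-3]
dv_range = (0.3e5, 1.0e5)            # velocity dispersion [cm s^-1]

# evaluate Eq. (1) at the four corners of the parameter ranges [Gauss]
estimates = [xi * math.sqrt(4 * math.pi * rho) * dv / dphi
             for rho, dv in product(rho_range, dv_range)]
print(f"B_pos ~ {min(estimates)*1e3:.0f}-{max(estimates)*1e3:.0f} mG")  # ~5-25 mG
```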
The northern and southern components have Stokes $I$ of $\sim 0.6~{\rm mJy~beam^{-1}}$. This corresponds to a mass column density of 0.12--$0.33~{\rm g}~{\rm cm}^{-2}$, assuming the same dust opacity, temperature, and gas-to-dust mass ratio as in Section \ref{sec:dcf}. If this column density is distributed over the scale height of 14~au (Section \ref{sec:kep}), the mean density is $\langle \rho \rangle =0.6$--$1.6\times 10^{-15}~{\rm g}~{\rm cm}^{-3}$. The velocity dispersion is also in the same range, 0.3--$1.0~{\rm km~s^{-1}}$, as in Section \ref{sec:dcf}. Consequently, from Equation (\ref{eq:dcf}), the field strength is calculated to be 5--25~mG. \begin{figure*}[htbp] \gridline{ \fig{mapDCFalma.pdf}{0.42\textwidth}{(a)} \fig{histDCFalma.pdf}{0.37\textwidth}{(b)} } \caption{(a) Deviation of polarization angles observed with ALMA from 2-beam averaged angles. The blue segments denote the 2-beam averaged angles $\langle \phi \rangle$. The orange segments, the contour map, and the filled ellipse are the same as those in Figure \ref{fig:almacont}. The coordinates are relative to the protostellar position. (b) Cumulative histogram of the relative polarization angle from the 2-beam averaged angle, $\phi - \langle \phi \rangle$. The black curve is the error function with a standard deviation of $16\arcdeg$. \label{fig:dcfalma}} \end{figure*} \subsection{Accretion Flow} The spiral-like residual suggests a one-armed accretion flow in the TMC-1A disk. A simple interpretation of this flow is occasional mass accretion from the associated envelope. The C$^{18}$O emission also supports a flowing motion along the spiral. Similar morphology has also been identified in multiple protostellar systems. \citet{tobi16} showed a one-armed spiral with a radius of $\sim 200$~au in the triple protostellar system L1448 IRS3B observed in the 1.3 mm continuum emission. Their line observations in C$^{18}$O $J=2-1$ indicate a rotational motion perpendicular to the associated outflow. 
\citet{taka14} showed a one-armed spiral with a radius of $\sim 200$~au in the protostellar binary L1551 NE observed in the 0.9 mm continuum emission. The authors reproduced the spiral structure and kinematics observed in the C$^{18}$O $J=3-2$ line using a hydrodynamic simulation. The similarity of the TMC-1A spiral to those in these multiple systems may hint at gravitational instability in the TMC-1A disk, although the instability may be weaker here, so that the spiral can only be seen as the residual from the axisymmetric component. An inner part of the spiral-like residual also appears to delineate a part of a ring with a radius of $\sim 50$~au. Such a ring may result from the `growth front' \citep[or the pebble production line;][]{la.jo14}, where dust grains drastically grow from $\micron$ size to mm size. The growth front is estimated to be 50--60~au in radius for the typical age of Class I protostars, 0.1~Myr \citep{ohas20}, which is consistent with the inner part of the spiral-like residual. \section{Conclusions} \label{sec:conclusions} We have observed the linearly polarized dust continuum emission at 1.3 mm in the Class I protostellar system TMC-1A using the SMA and ALMA at angular resolutions of $\sim 3\arcsec$ (400 au) and $\sim 0\farcs 3$ (40 au), respectively. The ALMA observations also included the CO, $^{13}$CO, and C$^{18}$O $J=2-1$ lines. The main results are summarized below. \begin{enumerate} \item The SMA polarization observations trace magnetic fields in the TMC-1A envelope on a 1000-au scale. The field directions are intermediate between parallel and perpendicular to the outflow axis. We estimate the field strength to be 1--5 mG by applying the DCF method to the SMA polarization. The diagonal direction and mG-order strength of the magnetic field are consistent with the previous prediction by \citet{aso15} to explain the radial infall velocity at $\sim 30\%$ of the free-fall velocity. 
\item We subtract an axisymmetric component from the ALMA continuum emission, Stokes $I$, and discover a spiral-like residual for the first time in TMC-1A. The deprojected spiral suggests an accretion flow with a radial infall velocity of $20\%$ of the rotational velocity along the spiral-like residual. Comparison of the C$^{18}$O emission with a Keplerian disk model also supports this ratio between the rotational and radial infall velocities. \item The polarized emission observed with ALMA consists of a central component and a north/south component. The central component can be interpreted as a result of self-scattering because the polarization directions are mostly along the disk minor axis, but with an azimuthal component near the major axis, and the polarization fraction is $\sim 0.8$\%, independent of the polarization intensity. \item The north/south polarization component is located along the outflow direction (i.e., in the north and south of the protostellar position), and the polarization E-vectors are also broadly parallel to the outflow direction. Three potential mechanisms are discussed for this polarization: (1) toroidal magnetic fields in the outflow or disk in this system, (2) mechanical grain alignment by the gaseous outflow, and (3) a magnetically channeled accretion flow as suggested by the spiral-like residual in Stokes $I$. \end{enumerate} \acknowledgments This paper makes use of the following ALMA data: ADS/JAO.ALMA\#2018.1.00701.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. The Submillimeter Array is a joint project between the Smithsonian Astrophysical Observatory and the Academia Sinica Institute of Astronomy and Astrophysics and is funded by the Smithsonian Institution and the Academia Sinica. 
SPL acknowledges grants from the Ministry of Science and Technology of Taiwan 109-2112-M-007-010-MY3. ZYL is supported in part by NASA 80NSSC18K1095 and NSF AST-1716259 and 1815784. \vspace{5mm} \facilities{ALMA, SMA} \software{APLpy \citep{ro.br12}, astropy \citep{astr13, astr18}, CASA \citep{mcmu07}, MIR (https://github.com/qi-molecules/sma-mir), MIRIAD \citep{saul95}, ptemcee \citep{fore13, vous16}} \clearpage
\section{Introduction} \label{sec:intro} The Sun exhibits many time scales, from the ten-minute lifetimes of granules to multi-millennial magnetic activity modulations. One of the most prominent of these scales is the 11-year sunspot cycle, during which the number of magnetically active regions waxes and wanes. The Sun also possesses longer-term variability of its magnetic activity such as the 88-year Gleissberg cycle \citep{gleissberg39}. There are also intermittent and aperiodic phenomena commonly described as grand extrema \citep{usoskin13}, such as the Maunder Minimum \citep{eddy76,ribes93}, wherein the overall magnetic activity of the Sun declines or increases for many polarity cycles relative to a long-term average. These longer-term trends in solar activity can be seen both in visual observations of the number of sunspots as well as in less direct measurements such as radioisotopic measurements \citep[e.g.,][]{beer98}. During grand minima, the number of sunspots tends to decrease and sometimes vanish for several polarity cycles. In contrast, their numbers increase over a grand maximum. Furthermore, during extrema such as the Maunder Minimum and the Modern Maximum, measurements of cosmogenic radioisotopes suggest that the heliospheric magnetic field strength can vary by at most a factor of five, but more typically by a factor of two \citep{mccracken07}. The simulation presented here shares some of these characteristics, in that it shows disrupted cycles and a decrease in volume-averaged magnetic energy at lower latitudes during an extended interval covering several magnetic polarity cycles. So, the interval of disrupted magnetic cycles has been tagged as a ``grand minimum.'' In addition to its large range of time scales, the magnetic field at the solar surface exhibits complex, hierarchical structures that persist over a vast range of spatial scales.
Nevertheless, large-scale organized spatial patterns of smaller structures such as Maunder's butterfly diagram, Joy's law, and Hale's polarity law suggest the existence of a structured large-scale magnetic field within the solar convection zone. On the Sun's surface active regions initially emerge at mid-latitudes and appear at progressively lower latitudes as the cycle progresses, thus exhibiting equatorward migration. In contrast, the diffuse field that is composed of small-scale bipolar regions migrates toward the poles, with the global-scale reversal of the polar magnetic field occurring near solar maximum \citep[e.g.,][]{benevolenskaya04,hathaway10}. Other main-sequence stars also exhibit observable magnetic phenomena in several diagnostics, such as Ca II, photometric, spectropolarimetric, and X-ray observations \citep[e.g.,][]{baliunas96, hempelmann96, favata08, metcalf10, fares13, mathur13}. Such observations have shown that solar-mass stars younger than the Sun can also possess magnetic activity cycles. These younger stars tend to rotate more rapidly than the Sun as a consequence of having been born with a relatively high angular momentum and due to their relatively slow rate of angular momentum loss \citep[e.g.,][]{barnes07,matt15}. There are further hints from both observations and from theory that a star's magnetic cycle period should be linked to its rotation rate \citep[e.g.,][]{saar09,jouve10,morgenthaler11}. So, in some senses, the simulation presented here could be considered to be capturing some of the dynamo behavior of a young Sun-like star. Moreover, from a theoretical point of view, the ratio of the polarity cycle period to the relevant dynamical time scale, namely the rotation period, may be of more interest. For the Sun, this ratio is about 287, and as will be seen later this ratio is about 243 for this model.
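The quoted period ratios follow from a simple back-of-the-envelope calculation; the sketch below assumes a solar rotation period of roughly 28 days, the 22-year solar polarity cycle, and this model's 6.2-year polarity cycle at three times the solar rotation rate.

```python
# Ratio of the polarity cycle period to the rotation period.
# Assumed inputs: a ~28-day solar rotation period, the 22-yr solar
# polarity cycle, and this model's 6.2-yr polarity cycle at 3x the
# solar rotation rate (so a rotation period of 28/3 days).
DAYS_PER_YEAR = 365.25

def cycle_to_rotation_ratio(cycle_years, rotation_days):
    """Number of rotation periods contained in one polarity cycle."""
    return cycle_years * DAYS_PER_YEAR / rotation_days

sun_ratio = cycle_to_rotation_ratio(22.0, 28.0)          # ~287 for the Sun
model_ratio = cycle_to_rotation_ratio(6.2, 28.0 / 3.0)   # ~243 for this model
```

With these assumed inputs the two ratios round to 287 and 243, matching the values quoted above.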
\subsection{Cyclic Convective Dynamo Action} \label{sec:cycles} It has been suspected for at least 60 years that the crucial ingredients for the solar dynamo are the shear of the differential rotation and the helical nature of the small-scale convective flows present in the solar convection zone \citep[e.g.,][]{parker55, steenbeck69, parker77}. Other models and observations suggest, though, that the surface magnetic fields play a significant role, such as in the Babcock-Leighton dynamo mechanism \citep[e.g.,][]{babcock61,charbonneau05,miesch12}. Such models have been fairly successful in capturing aspects of the solar cycles. However, when adopting fully nonlinear global-scale 3-D MHD simulations \citep[e.g.,][]{gilman83,glatzmaier85,brun04,browning06}, it has been challenging to achieve dynamo action that exhibits a majority of the properties of the Sun's large-scale magnetism. Recent global-scale simulations of convective dynamos have begun to make substantial contact with some of the properties of the solar dynamo using a wide variety of numerical methods \citep[e.g.,][]{ghizaru10,brown11,racine11,kapyla12,augustson13,passos14,fan14}. The simulation analyzed here fits within this vein of modern stellar dynamo modeling, where it exhibits some features akin to those observed during solar and stellar cycles. In particular, global-scale convective dynamo simulations in rotating spherical shells have recently achieved the long-sought goal of cyclic magnetic polarity reversals with a multi-decadal period. Moreover, some of these simulations have illustrated that large-scale dynamo action is possible within the bulk of the convection zone, even in the absence of a tachocline \citep[e.g.,][]{brun04,brown10,kapyla12}. Global-scale MHD simulations of a more rapidly rotating Sun with the ASH code have produced polarity-reversing dynamo action that possesses strong toroidal wreaths of magnetism that propagate poleward as a cycle progresses \citep{brown11}.
These fields are contained within the convection zone itself, with the majority of the magnetic energy present near the lower boundary. Furthermore, recent simulations with ASH employ a dynamic Smagorinsky diffusion scheme, whereby a greater level of turbulent complexity is achieved for the resolved spatial structures. Those simulations show that the large-scale toroidal wreaths persist despite the greater pummeling they endure from the more complex and vigorous convection \citep{nelson13a}. Not only do the toroids of field persevere, but portions of them can be so amplified that the combination of upward advection and magnetic buoyancy creates loops of magnetic field that rise upward toward the surface \citep{nelson13b}. \subsection{Differing Approaches to Sub-Grid-Scale Dissipation} \label{sec:sgs} Both explicit and implicit large-eddy simulations (LES and ILES) have concurrently paved the road toward more orderly long-term cycles in a setting that may mimic the solar interior. Indeed, the first 3-D simulation to produce regular polarity cycles over a long time period utilized the Eulerian-Lagrangian magnetohydrodynamics code (EULAG-MHD) \citep{ghizaru10}. The polarity cycles in that simulation occur roughly every 80~years, with the magnetic fields existing primarily at higher latitudes and within the tachocline at the base of the convection zone \citep[e.g.,][]{racine11,passos14}. Such dynamo action is likely made possible through two mechanisms: the first being that the ILES formulation of EULAG attempts to maximize the complexity of the flows and magnetic fields for a given Eulerian grid resolution, and the second being the reduction of the enthalpy transport of the largest scales through a relatively simple sub-grid-scale (SGS) model. The latter mechanism operates through the dissipation of entropy structures by adding a thermal drag to the entropy equation.
This reduces the buoyancy of the resolved convective structures, and thereby the root-mean-square velocities, which in turn decreases the Rossby number. The magnetic fields in those EULAG-MHD simulations, however, have primarily shown radial propagation of structures but little latitudinal variation during a cycle. Yet, much like prior simulations using ASH, a recent EULAG-MHD simulation of a Sun-like star rotating at thrice the solar rate can also produce low-latitude poleward propagating solutions \citep{charbonneau13}. Similarly, 3-D MHD simulations in spherical segments employing the Pencil code also possess regularly cyclic magnetic polarity reversals in addition to a rich set of other behavior. In particular, a few of those polarity reversing solutions were the first to exhibit low-latitude equatorward propagating magnetic features \citep{kapyla11b,kapyla13}. In those simulations, the stratification and a sufficient level of turbulence appear to be necessary to achieve the phase alignment between the magnetic field and the differential rotation required to produce the dynamo wave phenomenon known as the Parker-Yoshimura effect \citep{warnecke14}. Inspired by those recent results, a slope-limited diffusion (SLD) scheme was incorporated into ASH with the express goal of achieving a low effective $\mathrm{Pr}$ and $\mathrm{Pm}$ dynamo, thus attempting to better mimic the low fluid and magnetic Prandtl numbers present in the solar interior. This effort minimizes the effects of viscosity, and so extends the inertial range as far as possible for a given resolution, whereas the thermal and magnetic fields retain their LES-SGS eddy diffusivities. Consequently, SLD permits more scales to be captured before entering the dissipation range.
This in turn allows more scale separation between the larger magnetic and smaller kinetic scales participating in the low $\mathrm{Pm}$ dynamo \citep{ponty05, schekochihin07, brandenburg09}, given that the ratio of the magnetic to the viscous dissipation scales is greater than unity. Subsequently, the kinetic helicity is also greater at small scales than otherwise would be achieved with the required Newtonian momentum diffusion at the same resolution, which has been shown to have a large influence on the dynamo efficiency \citep{malyshkin10}. With the newly implemented SLD scheme, a solution has been found that possesses features similar to those of the solar dynamo: (i) a regular magnetic polarity cycle, though with a period of 6.2~years, in which the magnetic polarity reversals occur near the maximum in the magnetic energy, (ii) an equatorward propagation of magnetic features, (iii) a poleward migration of oppositely-signed flux, and (iv) an equilibrium of regular cycles punctuated by an interval where the cycling behavior is disrupted and the magnetic energy is reduced at low latitudes, after which the cycle is recovered. In keeping with the ASH nomenclature for related cases as in \citet{brown10, brown11} and \citet{nelson13a}, this dynamo solution has been called case K3S. \subsection{General Layout} \label{sec:layout} The basic layout of the paper is as follows: \S\ref{sec:methods} contains the details of the equations solved and of their numerical implementation; \S\ref{sec:overview} provides an overview of the dynamics of the solution and references to each of the relevant in-depth analysis sections. Then \S\ref{sec:reversal} assesses the properties of the typical polarity cycles and the mechanisms contributing to the evolution of the poloidal magnetic field within this dynamo. The grand minimum seen in this simulation and its properties are discussed in \S\ref{sec:minimum}.
The processes relevant to the equatorward propagation of the magnetic fields during a cycle are covered in \S\ref{sec:propagate}. An analysis of time scales is given in \S\ref{sec:periods}. Connections to a mean-field description of the simulation are made in \S\ref{sec:alpha}. Discussion of the significance of our findings with K3S, and their relation to other studies, is provided in the concluding \S\ref{sec:conclude}. Appendix A defines the operators used in slope-limited diffusion and illustrates some of its properties. Appendix B provides a derivation of equations governing the evolution of the kinetic energy contained in the differential rotation. \section{Computational Methods} \label{sec:methods} The 3-D simulation of convective dynamo action presented here as case K3S uses the ASH code to evolve the Lantz-Braginski-Roberts (LBR) form of the anelastic MHD equations for a conductive plasma in a rotating spherical shell. ASH solves those equations employing a pseudo-spectral method with spherical harmonic expansions in the horizontal directions of the entropy, magnetic field, pressure, and mass flux \citep{clune99,miesch00}. A fourth-order non-uniform finite difference in the radial direction resolves the radial derivatives. The solenoidality of the mass flux and magnetic vector fields is maintained through the use of a streamfunction formalism \citep{brun04}. The density, entropy, pressure, and temperature are linearized about the spherically symmetric background values $\overline{\rho}$, $\overline{S}$, $\overline{P}$, and $\overline{T}$ respectively, which are functions of the radial coordinate only. These linearized thermodynamic variables are denoted $\rho$, $S$, $P$, and $T$. The reduced pressure $\varomega=P/\overline{\rho}$ is used in the LBR implementation from which the equivalent thermodynamic pressure fluctuations can be recovered. 
The equations solved in ASH retain physical units, are in spherical coordinates $(r,\theta,\varphi)$, and are evolved in time $t$ as \vspace{-0.25truein} \begin{center} \begin{align} \text{continuity:} \quad & \displaystyle \boldsymbol{\nabla}\boldsymbol{\cdot}{\overline{\rho}\mathbf{v}} = 0, \label{eqn:ashcont} \\ \text{momentum:} \quad & \displaystyle \overline{\rho}\ddtime{\mathbf{v}} = -\overline{\rho} \mathbf{v} \boldsymbol{\cdot}\boldsymbol{\nabla} \mathbf{v} -\boldsymbol{\nabla} \varomega + \frac{S g}{c_P} \hat{\boldsymbol{\mathrm{r}}} \nonumber \\ \mbox{} & + 2 \overline{\rho} \mathbf{v} \boldsymbol{\times} \hat{\boldsymbol{\Omega}}_0 + \frac{1}{4\pi}\left(\boldsymbol{\nabla}\boldsymbol{\times}\mathbf{B}\right)\boldsymbol{\times}\mathbf{B} + \boldsymbol{\nabla}\boldsymbol{\cdot}{\mathcal{D}}, \label{eqn:ashmom} \\ \text{energy:} \quad & \displaystyle \overline{\rho}\overline{T}\ddtime{S} = -\overline{\rho}\overline{T}\mathbf{v} \boldsymbol{\cdot}\boldsymbol{\nabla} \left(\overline{S}+S \right) -\nabla \cdot \mathbf{q} + \Phi, \label{eqn:asherg} \\ \text{flux conservation:} \quad & \displaystyle \boldsymbol{\nabla}\boldsymbol{\cdot}{\mathbf{B}} = 0, \\ \text{induction:} \quad & \displaystyle \ddtime{\mathbf{B}} = \boldsymbol{\nabla}\boldsymbol{\times}\left[\mathbf{v}\boldsymbol{\times}\mathbf{B}-\eta\boldsymbol{\nabla}\boldsymbol{\times}\mathbf{B}\right], \label{eqn:ashind} \end{align} \end{center} \noindent with the velocity field being $\mathbf{v}=\mathrm{v_r}\hat{\boldsymbol{\mathrm{r}}}+\mathrm{v}_{\theta}\hat{\boldsymbol{\theta}}+\mathrm{v}_{\varphi}\hat{\boldsymbol{\varphi}}$, and the magnetic field being $\mathbf{B}=B_{\mathrm{r}}\hat{\boldsymbol{\mathrm{r}}}+B_{\theta}\hat{\boldsymbol{\theta}}+B_{\varphi}\hat{\boldsymbol{\varphi}}$. 
$\hat{\boldsymbol{\Omega}}_0=\Omega_0 \hat{\mathbf{z}}$ is the angular velocity of the rotating frame, $\hat{\mathbf{z}}$ is the direction along the rotation axis, and the magnitude of the gravitational acceleration is $g$. The diffusion tensor $\mathcal{D}$, which includes both viscous and slope-limited components, and the dissipative term $\Phi$ are \vspace{-0.25truein} \begin{center} \begin{align} \displaystyle \mathcal{D}_{ij} &= 2 \overline{\rho} \nu \left[ e_{ij} - \frac{1}{3} \boldsymbol{\nabla}\boldsymbol{\cdot}{\mathbf{v}} \delta_{ij} \right] + \mathcal{F}_{\mathbf{v}, ij}^{\mathrm{sld}}, \label{eqn:vsstress} \\ \displaystyle \Phi &= 2\overline{\rho}\nu\left[e_{ij} e_{ij} - \frac{1}{3} \left(\boldsymbol{\nabla}\boldsymbol{\cdot}{\mathbf{v}}\right)^2\right] + \frac{4\pi\eta}{c^2}\mathbf{J}^2 + \boldsymbol{\nabla}\boldsymbol{\cdot}{\mathbf{F}_{\mathrm{ke}}^{\mathrm{sld}}}-\overline{\rho}\mathbf{v}\cdot\boldsymbol{\nabla}\boldsymbol{\cdot}{\mathcal{F}_{\mathbf{v}}^{\mathrm{sld}}}, \label{eqn:heating} \end{align} \end{center} \begin{figure}[t!] \begin{center} \includegraphics[width=0.495\textwidth]{figure1.pdf} \figcaption{Background stratification and energy flux balance in K3S as a function of fractional solar radius ($r/\Rsun{}$). (a) The isentropic background state, showing $\overline{\rho}$ (solid blue) and $\overline{T}$ (solid red). The entropy diffusion coefficient $\kappa$ and magnetic diffusion coefficient $\eta$ are shown as the dashed-blue and dashed-red lines respectively. All quantities have been normalized by their maximum value. (b) The time and horizontally-averaged radial energy fluxes represented as luminosities (e.g., the flux in a quantity $x$ is $L_x = 4\pi r^2 F_x$) in units of the solar luminosity ($L_{\sun}=3.86\times10^{33} \, \mathrm{erg\,s^{-1}}$). The fluxes are shown averaged over two intervals, with solid lines averaged around a magnetic energy maximum and the dashed lines near a magnetic energy minimum. 
The lines are total flux ($L_{\mathrm{sum}}$) in black, radiative flux ($L_{\mathrm{rd}}$) in red, enthalpy flux ($L_{\mathrm{en}}$) in blue, conductive entropy flux ($L_{\mathrm{un}}$) in green, kinetic energy flux ($L_{\mathrm{ke}}$) in light blue, slope-limited diffusion flux ($L_{\mathrm{vd}}$) in teal, and Poynting flux ($L_{\mathrm{me}}$) in orange. \label{fig1}} \end{center} \end{figure} \noindent involving the stress tensor $e_{ij}$, the effective kinematic eddy viscosity $\nu$, the magnetic eddy resistivity $\eta$, and the current density $\mathbf{J} = (c/4\pi)\boldsymbol{\nabla}\boldsymbol{\times}\mathbf{B}$. The slope-limited velocity diffusion tensor is $\mathcal{F}_{\mathbf{v}}^{\mathrm{sld}}$, the kinetic energy slope-limited diffusion flux vector is $\mathbf{F}_{\mathrm{ke}}^{\mathrm{sld}}$, and they are computed using the algorithm shown in Appendix A. The difference of the divergence of the two fluxes accounts for the change in entropy due to the SLD operator acting on the velocity field. The energy flux $\mathbf{q}$ is composed of a radiation flux (in the diffusion approximation) and an inhomogeneous turbulent entropy diffusion flux, \vspace{-0.25truein} \begin{center} \begin{equation} \displaystyle \mathbf{q} = \kappa_r \overline{\rho} \mathrm{c_P} \boldsymbol{\nabla} \left(\overline{T}+T\right)+\kappa \overline{\rho} \overline{T} \boldsymbol{\nabla} S, \label{eqn:ediff} \end{equation} \end{center} \begin{figure}[t!] \begin{center} \includegraphics[width=0.475\textwidth]{figure2.pdf} \figcaption{Evolution of the energy densities as well as the angular velocity variations and mean (longitudinally-averaged) toroidal magnetic field $\langle B_{\varphi} \rangle$ at $\Rsun{0.92}$ over the first 20~years of the simulation.
(a) Time variation of the volume-averaged energy density of the differential rotation (DRE, black), nonaxisymmetric flows (CKE, blue), axisymmetric toroidal magnetic energy (TME, green), axisymmetric poloidal magnetic energy (PME, orange), and nonaxisymmetric magnetic energy (FME, red) in units of $\mathrm{erg}\, \mathrm{cm}^{-3}$. (b) Time-latitude diagram of angular velocity variations $\avg{\Delta\Omega}/\Omega_0=\left(\langle\Omega\rangle-\{\Omega\}\right)/\Omega_0$ in cylindrical projection, elucidating the propagation of equatorial and polar branches of torsional oscillations. The color indicates faster rotation in red and slower rotation in blue, with departures of up to $\pm 10$\% of the bulk rotation rate. (c) Time-latitude diagram of $\langle B_{\varphi} \rangle$ in cylindrical projection, exhibiting the equatorward migration of the wreaths from the tangent cylinder, and the poleward propagation of the higher latitude field, with the polarity of the field such that red (blue) tones indicate positive (negative) toroidal field. \label{fig2}} \end{center} \end{figure} \noindent with $\kappa_r$ the molecular radiation diffusion coefficient, and $c_{\mathrm{P}}$ the specific heat at constant pressure. The entropy diffusion flux has the thermal eddy diffusivity $\kappa$ acting on the entropy fluctuations. A calorically-perfect ideal gas equation of state is used for the mean state, about which the fluctuations are linearized as \vspace{-0.25truein} \begin{center} \begin{align} \displaystyle \overline{P} &= (\gamma-1)\mathrm{c_P}\overline{\rho}\overline{T}/\gamma, \label{eqn:idealgas} \\ \displaystyle \rho/\overline{\rho} &= P/\overline{P} - T/\overline{T} = P/\gamma \overline{P} - S/\mathrm{c_P}, \label{eqn:asheos} \end{align} \end{center} \noindent with $\gamma=5/3$ the adiabatic exponent. The anelastic system of MHD equations requires 12 boundary conditions in order to be well posed.
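As a numerical sanity check on the linearized equation of state above, the relation $\rho/\overline{\rho} = P/\gamma\overline{P} - S/\mathrm{c_P}$ can be recovered by perturbing the ideal-gas entropy directly; the background values and fluctuation amplitudes below are arbitrary illustrative numbers, not the simulation's stratification.

```python
import numpy as np

# Check of the linearized ideal-gas equation of state:
# rho/rho_bar = P/(gamma*P_bar) - S/c_P.
# Background values are arbitrary illustrative numbers.
c_P, gamma = 3.5, 5.0 / 3.0
P_bar, rho_bar = 1.0e6, 1.0e-3

def entropy(P, rho):
    """Ideal-gas specific entropy, up to an additive constant."""
    return c_P / gamma * np.log(P) - c_P * np.log(rho)

# impose small fluctuations and measure the entropy perturbation
P1, rho1 = 1e-4 * P_bar, 1e-4 * rho_bar
S1 = entropy(P_bar + P1, rho_bar + rho1) - entropy(P_bar, rho_bar)

# the linearized relation recovers the imposed density fluctuation
rho1_pred = rho_bar * (P1 / (gamma * P_bar) - S1 / c_P)
```

The agreement is to second order in the fluctuation amplitude, consistent with the linearization.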
One of the primary goals of this work is to assess the generation of magnetic field and how it impacts the organization of angular momentum and energy in the simulation. Thus, the following impenetrable, torque-free, and flux transmitting boundary conditions are employed \vspace{-0.25truein} \begin{center} \begin{equation} \displaystyle \mathrm{v_r} = \ddr{}\left(\frac{\mathrm{v}_{\theta}}{r}\right) = \ddr{}\left(\frac{\mathrm{v}_{\varphi}}{r}\right) = \ddr{S} = 0, \quad \mathrm{on} \; r=r_1 \; \mathrm{and} \; r_2. \label{eqn:bdrycond} \end{equation} \end{center} \noindent The magnetic boundary conditions are perfectly conducting at the lower radial boundary ($r_1$) and matching to a potential field at the upper radial boundary ($r_2$), implying that \vspace{-0.25truein} \begin{center} \begin{equation} \left. \displaystyle B_{\mathrm{r}}\right|_{r_1} = 0 \quad \mathrm{and} \quad \left. \mathbf{B}\right|_{r_2} = \nabla\Psi \Rightarrow \triangle \Psi = 0, \label{eqn:magbdry} \end{equation} \end{center} \noindent with $\Psi$ the magnetic potential. The solution of Laplace's equation defines the three components of $\mathbf{B}$ at the upper boundary. Further details of the implementation and formulation of the ASH code can be found in \citet{clune99} and \citet{brun04}. Here a one-solar-mass star with a solar luminosity is considered, rotating at three times the solar rate. An isentropic background stratification is employed that closely resembles the helioseismically-constrained Model S stratification \citep{christdals96}, with its normalized spherically-symmetric profiles of density ($\overline{\rho}$) and temperature ($\overline{T}$) shown in Figure \ref{fig1}(a). The simulated domain stretches from the base of the convection zone at $r_1=\Rsun{0.72}$ to the upper boundary of the simulation at $r_2=\Rsun{0.97}$, where $\Rsun{}$~$= 6.96\times 10^{10} \, \mathrm{cm}$.
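The potential-field condition at the upper boundary is solvable mode by mode because each exterior harmonic $r^{-(\ell+1)}P_{\ell}(\cos\theta)$ satisfies Laplace's equation; a symbolic check for a single axisymmetric mode (the choice $\ell=2$ below is arbitrary) is sketched here.

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
ell = 2  # arbitrary example mode

# exterior potential for a single axisymmetric spherical harmonic
Psi = r**-(ell + 1) * sp.legendre(ell, sp.cos(th))

# axisymmetric Laplacian in spherical coordinates
laplacian = (sp.diff(r**2 * sp.diff(Psi, r), r) / r**2
             + sp.diff(sp.sin(th) * sp.diff(Psi, th), th) / (r**2 * sp.sin(th)))
```

Because the radial and angular parts cancel identically, matching $B_{\mathrm{r}}$ at $r_2$ determines the amplitude of each exterior harmonic, and hence the horizontal field components there.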
This approximation omits the near-surface region and any regions below the convection zone, such as a tachocline. The simulation K3S has a resolution of $N_r\times N_{\theta} \times N_{\varphi} = 200\times 256\times 512$, corresponding to a horizontal resolution with a maximum spherical harmonic degree of $\ell_{\mathrm{max}} = 170$. In what follows, the operator $\avg{}$ indicates a longitudinal average (or mean) of a quantity, whereas the operator $\{\}$ indicates a longitudinal and temporal average. The extent of the temporal average varies depending upon the context of its use, so that interval will be indicated when the operator is invoked. The SLD mechanism implemented in the ASH code, and used in case K3S, is similar to the schemes presented in \citet{rempel09} and \citet{fan13}, though it has been modified to compensate for the grid convergence at the poles. This diffusive operator is detailed in Appendix A. SLD acts locally to achieve a monotonic solution by limiting the slope in each coordinate direction of a piecewise linear reconstruction of the unfiltered solution. The scheme minimizes the steepest gradient, while the rate of diffusion is regulated by the local velocity. It is further reduced through a function $\varphi$ that depends on the eighth power of the ratio of the cell-edge difference $\delta_i q$ and the cell-center difference $\Delta_i q$ in a given direction $i$ for the quantity $q$. This limits the action of the diffusion to regions with large differences in the reconstructed solutions at cell-edges. Since SLD is computed in physical space, it incurs the cost of smaller time steps due to the convergence of the grid at the poles, which is largely mitigated by introducing a filtering operator that depends upon latitude. The resulting diffusion fields are projected back into spectral space and added to the solution with a forward Euler time step. \begin{figure*}[t!] 
\begin{center} \includegraphics[width=\textwidth]{figure3.pdf} \figcaption{Evolution of the mean (longitudinally-averaged) radial $\avg{B_{\mathrm{r}}}$ and toroidal $\avg{B_{\varphi}}$ magnetic fields over an extended interval of the K3S simulation, with its regular cycling interrupted by a grand minimum during the interval roughly between 33 and 49 years. (a) Time-latitude diagram of $\avg{B_{\mathrm{r}}}$ at $\Rsun{0.92}$ in cylindrical projection, elucidating the poleward propagation of mid and high-latitude magnetic field and the equatorward propagation of lower latitude field. (b) Time-latitude diagram of $\avg{B_{\varphi}}$ at $\Rsun{0.92}$ in cylindrical projection, exhibiting the equatorward migration of the wreaths from the tangent cylinder (horizontal dotted lines at $\pm\dgr{43}$) and the poleward propagation of the higher latitude field. The polarity of the fields is such that red (blue) tones indicate positive (negative) field. The interval containing the grand minimum is marked by vertical dashed lines. Each magnetic energy cycle is labeled starting at unity. \label{fig3}} \end{center} \end{figure*} The SLD has been restricted to act only on the velocity field in this simulation. This mimics a lower thermal and magnetic Prandtl number ($\mathrm{Pr}$, $\mathrm{Pm}$) than otherwise attainable through a purely Newtonian diffusion operator with the spatial resolution used in this simulation. Yet a weak viscous eddy diffusion is retained in addition to the SLD operator in order to reduce the condition number of the matrices used in ASH for the implicit Crank-Nicolson time stepping method. In contrast, the entropy and magnetic fields remain fully under the influence of an eddy diffusion, with both a radially-dependent entropy diffusion $\kappa$ and resistivity $\eta$.
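As a schematic illustration of slope-limited diffusion (a one-dimensional toy, not the ASH implementation detailed in Appendix A), a minmod-limited piecewise-linear reconstruction defines a cell-edge jump, and an eighth-power limiter built from the ratio of that jump to the cell-center difference confines the velocity-regulated diffusion to poorly resolved gradients; the specific limiter form below is an assumption for illustration.

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter."""
    return np.where(a * b > 0.0,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def sld_flux(q, v):
    """Toy 1-D slope-limited diffusive flux at interior interfaces.

    q, v : cell-centered arrays on a unit-spacing grid. Returns fluxes
    at interfaces i+1/2 for i = 1 .. n-3.
    """
    # limited slope of the piecewise-linear reconstruction in each cell
    s = np.zeros_like(q)
    s[1:-1] = minmod(q[2:] - q[1:-1], q[1:-1] - q[:-2])
    # left/right reconstructed states at interface i+1/2
    qL = q[1:-2] + 0.5 * s[1:-2]
    qR = q[2:-1] - 0.5 * s[2:-1]
    dq_edge = qR - qL                # cell-edge jump (delta_i q)
    dq_cent = q[2:-1] - q[1:-2]      # cell-center difference (Delta_i q)
    ratio = np.where(np.abs(dq_cent) > 1e-30,
                     np.abs(dq_edge / dq_cent), 0.0)
    phi = np.minimum(1.0, ratio**8)  # eighth-power reduction factor
    vface = 0.5 * np.abs(v[1:-2] + v[2:-1])  # local velocity sets the rate
    return -0.5 * vface * phi * dq_edge
```

For a step discontinuity the limited slopes vanish, the reconstruction jump equals the cell-center difference, and the flux is fully active; for smoothly varying data the jump vanishes and no diffusion is applied.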
The eddy diffusion coefficients are roughly similar in form to those of case D3 from \citet{brown10} and case D3a of \citet{nelson13a}, with $\kappa$, $\nu$, and $\eta \propto \overline{\rho}^{\; -1/2}$ and where the profiles of $\kappa$ and $\eta$ are shown in Figure \ref{fig1}(a). The values of these diffusion coefficients at the upper boundary are $\kappa(r_2) = 1.6\times 10^{13}$, $\nu(r_2) = 4\times 10^{8}$, and $\eta(r_2) = 8\times 10^{12}$ with the units of each coefficient being $\mathrm{cm}^2\, \mathrm{s}^{-1}$. Since the majority of the viscous diffusion and dissipation is handled with the SLD scheme, it is somewhat involved to estimate standard fluid parameters such as the Reynolds number. However, a detailed analysis carried out in Appendix A.3 provides an estimate for the effective SLD viscosity. This also permits the estimation of the Reynolds number as $\mathrm{Re}\approx 350$, as well as the thermal Prandtl number as $\mathrm{Pr}\approx 0.115$ and the magnetic Prandtl number as $\mathrm{Pr}_\mathrm{m} \approx 0.23$. These are about a factor of two lower than in previous ASH simulations, which typically are carried out with $\mathrm{Pr}=1/4$ and $\mathrm{Pr}_{\mathrm{m}}=1/2$. The effective magnetic Reynolds number is then $\mathrm{Re}_{\mathrm{m}}=\mathrm{Pr}_{\mathrm{m}}\mathrm{Re}_{\mathrm{eff}} \approx 90$. Further, the Rayleigh number can be characterized at mid-convection zone as $\mathrm{Ra}=\Delta\overline{S} g d^3/c_P \nu \kappa \approx 6.3\times 10^5$ and the Taylor number as $\mathrm{Ta}=4\Omega_0^2 d^4/\nu^2 \approx 9.1\times 10^7$, where $\Omega_0=\Osun{3}$ or $7.8\times 10^{-6}\,\mathrm{rad\,s^{-1}}$. The Rossby number, when defined with the enstrophy as $\mathrm{Ro}=|\nabla\times\mathbf{v}|/2\Omega_0$, varies with the magnetic cycle between 0.12 at magnetic maximum to 0.33 at magnetic minimum.
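The quoted $\overline{\rho}^{\,-1/2}$ scaling of the eddy diffusivities can be sketched directly; the function below simply rescales the quoted upper-boundary values and assumes nothing about the actual Model S density profile.

```python
import numpy as np

# Quoted upper-boundary values of the eddy diffusivities (cm^2/s).
KAPPA_TOP, NU_TOP, ETA_TOP = 1.6e13, 4.0e8, 8.0e12

def eddy_profiles(rho_bar, rho_top):
    """kappa, nu, eta profiles, each proportional to rho_bar^(-1/2),
    normalized to the quoted values at the upper boundary."""
    scale = (rho_bar / rho_top) ** -0.5
    return KAPPA_TOP * scale, NU_TOP * scale, ETA_TOP * scale
```

A density contrast of 100 between the base and top of the domain would thus reduce each diffusivity by a factor of ten at the base.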
Thus, some of these parameters differ significantly from other simulations that present features similar to this dynamo, namely those with equatorward propagating fields such as in \citet{racine11} and \citet{warnecke14} where the effective thermal and magnetic Prandtl numbers are between two and ten times larger. However, other relevant parameters are quite similar such as the effective Reynolds numbers as well as the Rayleigh, Taylor, and Rossby numbers. \begin{figure*}[ht!] \begin{center} \includegraphics[width=0.95\textwidth]{figure4.pdf} \figcaption{Changing velocity and magnetic fields during magnetic cycling. (a) The cylindrical projection of the mean toroidal magnetic field $\langle B_{\varphi} \rangle$ is shown with time and latitude at $\Rsun{0.90}$ through the course of a full polarity cycle (capturing cycles 8 and 9), with colorbar as in (l). The times $t_1$ through $t_5$ are indicated with vertical dashed lines. (b)-(f) The horizontal structure of the radial velocity at $\Rsun{0.90}$ is displayed in a global Mollweide projection (with dashed lines marking the equator and latitudes every $\dgr{30}$). The colorbar is given in (b). Downflows are darker tones and upflows lighter tones. Latitudes that are influenced by strong magnetic fields are indicated with arrows. (g)-(k) A 20-day-time and longitude-averaged angular velocity $\{\Omega\}/\Omega_0$ is shown in the meridional plane, with colorbar as in (g) where faster rotation is in red tones and slower in blue tones. (l)-(p) The longitudinal field $B_{\varphi}$ at $\Rsun{0.90}$ is illustrated, with the shared colorbar given in (l). (q)-(u) A 20-day-time and longitude-average of the toroidal magnetic field $\{B_{\varphi}\}$ is shown, with colorbar as in (q). \label{fig4}} \end{center} \end{figure*} \section{Overview of the Cycling Dynamics} \label{sec:overview} With the formulation of the problem established, the discussion now turns to an overview of the dynamics occurring within the K3S simulation. 
A first diagnostic of the K3S simulation is to assess its cycles in a global sense. This is easily achieved by considering the evolution of volume-averaged energy densities of various components of the flow and magnetic field. The definitions of the various energy densities are \vspace{-0.25truein} \begin{center} \begin{align} & \mathrm{DRE} = \frac{1}{2}\overline{\rho}\avg{\mathrm{v}_{\varphi}}^2, \; \mathrm{MCE} = \frac{1}{2}\overline{\rho}\left(\avg{\mathrm{v_r}}^2 + \avg{\mathrm{v}_{\theta}}^2\right), \nonumber \\ &\mathrm{CKE} = \frac{1}{2}\overline{\rho}\left(\mathbf{v}-\avg{\mathbf{v}}\right)^2, \; \mathrm{TME} = \frac{1}{8\pi}\avg{B_{\varphi}}^2, \nonumber \\ &\mathrm{PME} = \frac{1}{8\pi}\left(\avg{B_{\mathrm{r}}}^2 + \avg{B_{\theta}}^2\right), \; \mathrm{FME} = \frac{1}{8\pi}\left(\mathbf{B}-\avg{\mathbf{B}}\right)^2, \end{align} \end{center} \noindent where the total kinetic energy is $\mathrm{DRE}+\mathrm{MCE}+\mathrm{CKE}$ and the total magnetic energy is $\mathrm{TME}+\mathrm{PME}+\mathrm{FME}$. The periodic modulation of the kinetic energy in both the convection (CKE) and the differential rotation (DRE) can be seen in Figure \ref{fig2}(a), which covers the first 20~years of evolution of the simulation. Those changes are accompanied by the waxing and waning of the energy contained in the magnetic field, though with a temporal shift. The mean (longitudinally-averaged) toroidal magnetic fields (TME) contain the most magnetic energy, being formed by the action of the differential rotation on the poloidal magnetic field (e.g., \S\ref{sec:tme}). The nonaxisymmetric magnetic fields (FME) have the second greatest magnetic energy, whereas the mean (longitudinally-averaged) poloidal magnetic fields (PME) contain the least amount of magnetic energy. A double-peaked structure can be seen in the FME as well as in the TME.
These variations in turn are largely due to the modulation of the differential rotation over the course of a cycle, which becomes readily apparent in Figure \ref{fig2}(b). Particularly, the pole and equator are accelerated as the system recovers from the quenching of the differential rotation that occurs during the magnetic maxima, leading to the first peak. There is also a phase difference in the peak in the magnetic energy between the deep and upper convection zones, which results in the second peak. The specific correlations and mechanisms that are behind this oscillatory behavior are covered below and in later sections. The polarity reversals of the magnetic field are illustrated for the first 20~years of the simulation in Figure \ref{fig2}(c) and over an extended interval of the simulation in Figure \ref{fig3}. The magnetic field begins to regularly oscillate roughly every 3.1~years between positive and negative polarity states. Such regular cycling behavior arises shortly after the roughly 2~year kinematic growth phase of the magnetic fields, which began at year zero when this MHD simulation was initialized by inserting a dipolar magnetic field into a preexisting but mature hydrodynamic simulation. That initial magnetic field had a strength of about 100~G at the base of the convection zone. The initial energy in that magnetic field is about $10^5$ times smaller than the total kinetic energy. The oscillations in the magnetic energy then continue throughout the entire evolution of the system. Throughout this paper two cycle periods will be cited. There is a 3.1~year magnetic energy cycle measured between maxima in the magnetic energy (or half-polarity cycle), which could be considered to be akin to the 11~year sunspot cycle. There is also a 6.2~year polarity cycle measured as the interval between magnetic maxima that have the same polarity, as seen in Figure \ref{fig3} for instance. This polarity cycle is akin to the 22~year solar polarity cycle.
The overall structure of the magnetic fields during a magnetic cycle is readily apparent in both Figures \ref{fig3}(a) and (b). The mean radial magnetic field ($\avg{B_{\mathrm{r}}}$) is largely confined to higher latitudes, whereas the mean toroidal magnetic field ($\avg{B_{\varphi}}$) has both prominent polar and low-latitude branches. At the radius where the magnetic fields in Figure \ref{fig3} are sampled, the mean radial and toroidal magnetic fields differ by about a factor of three in magnitude. This ratio is approximately maintained throughout the domain, leading to the roughly order of magnitude difference between the toroidal and poloidal magnetic energies seen in Figure \ref{fig2}(a). There is an interval roughly between years 33 and 49, as seen in Figure \ref{fig3}, during which the system fails to fully reverse its polarity for five magnetic cycles. This interval will be referred to as a ``grand minimum.'' While the choice of the beginning and end of this interval is somewhat arbitrary, this interval was chosen to be between the minimum in the magnetic energy near the upper boundary of the last ``normal'' cycle (near year 33, or cycle 11) and the similar minimum of the last abnormal cycle (near year 49, or cycle 16). The magnetic energy cycle is still operating during that interval, with the polar field waxing and waning but not fully reversing. However, the polarity cycles are disrupted at lower latitudes and the magnetic energy there is significantly reduced. This will be further explored in \S\ref{sec:minimum}. \subsection{Magnetic Energy Cycle in Detail} \label{sec:magcyc} Figure \ref{fig4} illustrates the morphology of the convection, differential rotation, and the longitudinal magnetic fields in space and time over the course of a polarity reversal.
Particularly, Figures \ref{fig4}(b)-(f) show the convective patterns represented in radial velocities that are prevalent during different phases of the cycle (labeled as times $t_1$-$t_5$ in Figure \ref{fig4}), with elongated and north-south aligned flows at low latitudes (banana cells) and apparently smaller scales at higher latitudes. Such flows are typical of the rotationally-constrained convection captured in global-scale large-eddy MHD simulations \citep[e.g.,][]{miesch00,kapyla11a,guerrero13,augustson12,augustson13}. In aggregate, those rotationally-aligned convective cells produce velocity correlations that yield strong Reynolds stresses, which act to accelerate the equator and slow the poles. In concert with turbulent heat transport, such stresses serve to rebuild and maintain the differential rotation during each cycle. Indeed, when combined with the longitudinally-averaged Lorentz force and Maxwell stresses, those stresses induce the modulation in the angular velocity seen in Figures \ref{fig2}(b) and \ref{fig4}(g)-(k). The values of the shear do not quite reach zero, but are reduced by about 60\% relative to their maximum value at lower latitudes. The elements of angular momentum transport that give rise to such modulation are further discussed in \S\ref{sec:dre}. The presence of large-scale and longitudinally-connected magnetic structures is evident in $B_{\varphi}$ as shown in Figures \ref{fig4}(l)-(p). Such toroidal structures have been dubbed wreaths \citep{brown10}. In this simulation, there are two evolving counter-polarized, lower-latitude wreaths that form in the region near the tangent cylinder at nearly all depths, meaning that the latitude of formation decreases with depth. This region is also where the peak in the latitudinal gradient of the differential rotation exists for much of a magnetic energy cycle (Figures \ref{fig2}(b) and \ref{fig4}(g)-(k)).
In the latter set of figures, it is clear that the radial shear is roughly proportional to $r\sin{\theta}$ at low latitudes, for the differential rotation is largely cylindrical, though it tends to decrease near the upper boundary. So, for figures showing radial cuts, a radius of $0.92 R_{\odot}$ was chosen to emphasize the region of greatest shear as it corresponds to the depth where $d\Omega/dr$ is largest. There are also polar caps of magnetism that possess a magnetic polarity that is reversed compared to that of the low-latitude wreaths. These caps act to moderate the polar differential rotation, which would otherwise tend to accelerate and hence establish fast polar vortices. The average structure of the wreaths and caps at each point in the cycle is apparent in $\{B_{\varphi}\}$ exhibited in Figures \ref{fig4}(q)-(u), which is averaged over 20~days at each time $t_i$. The wreaths appear rooted at the base of the convection zone, whereas the caps have the bulk of their energy in the lower convection zone above its base. As will be seen in \S\ref{sec:evodyno}, the wreaths are initially generated higher in the convection zone while the wreath generation mechanism (primarily the $\Omega$-effect) migrates equatorward and toward the base of the convection zone over the course of the cycle. The equatorward migration takes place largely in the upper convection zone nearer the beginning of the cycle at times $t_1$ and $t_2$. At later times, the mean toroidal magnetic field near the base of the convection zone migrates poleward and begins to build up the polar magnetic caps, which have a polarity opposite to the flux generated at lower latitudes. This represents a very different dynamo mechanism relative to a typical flux-transport dynamo. The changes in the structure of the convection seen in Figure \ref{fig4} play a role in the dynamo, for they induce changes in the Reynolds stress and the electromotive force (EMF) that generates the magnetic field.
As a cycle proceeds, the magnetic fields disrupt the alignment and correlations of the convective cells through Lorentz forces, which is particularly evident in Figure \ref{fig4}(c). The presence of the magnetic fields, in addition to modifying the structure of the low-latitude convection, modulates the global convective amplitudes, as might be ascertained by comparing Figures \ref{fig4}(b)-(f). Particularly, as the magnetic field gathers strength during a cycle, the strong longitudinally-connected magnetic fields create a thermal shadow, weakening the thermal driving of the equatorial cells, as indicated with arrows. Such influences of the magnetic fields on the convection and its ability to transport heat are also apparent, albeit less directly, in Figure \ref{fig1}(b) where the enthalpy, entropy diffusion, and kinetic energy fluxes are modulated by about 30\% throughout a cycle. These fluxes vary largely in phase with the magnetic cycle, being smallest at magnetic maximum and largest near minimum. The reduction of the convective amplitudes also leads to the angular momentum transport of the flows being diminished as the magnetic fields become stronger (see \S\ref{sec:dre}). The effects of the magnetic fields on the convection are also captured in the ebb and flow of the kinetic energy contained in the nonaxisymmetric velocity field, which here varies by about 50\% over the magnetic energy cycle (CKE in Figure \ref{fig2}(a)). Indeed, signatures of such in-phase magnetically-modulated convection have also been detected in EULAG-MHD simulations \citep{cossette13,charbonneau14}. These magnetic feedback mechanisms are in keeping with the predicted impacts of strong longitudinal fields in the convection zone suggested by \citet{parker87}. There is also the direct impact of the large-scale Lorentz forces on the differential rotation (e.g., the \citet{malkus75} effect).
This process and the magnetic influences on the convection described above combine to explain why the differential rotation seen in Figure \ref{fig2}(b) cannot be fully maintained during the cycle. Rather, the angular velocity has substantial variations throughout the cycle (Figure \ref{fig2}(a)), which are largely driven by the strong feedback of the magnetic fields (see \S\ref{sec:dre} and Figure \ref{fig5}). Such strong nonlinear Lorentz force feedbacks are not without precedent, as they have been seen in previous convective dynamo simulations as well \citep[e.g.,][]{gilman83,brun04,browning08,brown11}. \begin{figure}[t!] \begin{center} \includegraphics[width=0.45\textwidth]{figure5.pdf} \figcaption{Energy densities and energy generation rates over the average polarity cycle. (a) Log-linear plot of the energy density of the differential rotation (DRE, black), nonaxisymmetric flow (CKE, blue), toroidal magnetic field (TME, green), nonaxisymmetric magnetic field (FME, red), and poloidal magnetic field (PME, orange) in units of $\mathrm{erg\, cm}^{-3}$. (b) Volume-integrated differential rotation energy generation rate ($K_{\varphi}$, as in Equation \ref{eqn:evokdr}), with the rate due to Reynolds stresses (RS, blue), mean magnetic stresses (MM, yellow), and fluctuating Maxwell stresses (FM, orange). (c) Volume-integrated mean toroidal magnetic energy generation rate ($M_{\varphi}$, as in Equation \ref{eqn:tme}), with the rate due to mean shear (MS, blue), compressive motions (CC, green), fluctuating advection (FA, yellow), and resistive diffusion (RD, red). In (b) and (c), the values are normalized by the maximum absolute value of the generation rates. (d) The cycle and longitude-averaged toroidal magnetic field $\langle B_{\varphi} \rangle$ evaluated over the average magnetic polarity cycle shown at $\Rsun{0.92}$, with a color bar given in units of kG.
\label{fig5}} \end{center} \end{figure} \section{Anatomy of a Reversal} \label{sec:reversal} The analysis of dynamical elements contributing to regular magnetic polarity cycles is aided by building a composite of a number of our regular cycles. This composite omits the interval designated as the grand minimum. \subsection{Average Polarity Cycle} \label{sec:avgcyc} The average polarity cycle is shown in Figure \ref{fig5}, which has been formed by identifying the common structures in each polarity cycle, obtaining the times of the beginning and end of each polarity cycle as defined through these structures, and then stretching each polarity cycle to be the same length in time and co-adding them. The statistical significance of this process is greatly aided by the regularity of the magnetic polarity cycle period, which typically varies by only about 10\% of the average polarity cycle period (see \S\ref{sec:periods}). In particular, Figure \ref{fig5}(a) shows the time evolution of the relevant volume-integrated components of the total energy density. The differential rotation energy (DRE) is the largest component of the total kinetic energy, followed by the kinetic energy in the convection (CKE). The energy contained in the meridional circulation is the smallest component, being roughly three orders of magnitude smaller than the DRE, and so it is omitted from Figure \ref{fig5}(a). Such a small contribution of the meridional flow to the overall kinetic energy is typical of global-scale convection simulations \citep[e.g.,][]{brown08,augustson12,brun14}. The total magnetic energy at its peak is about 30\% of the total kinetic energy, or about 50\% of the CKE, placing the K3S dynamo close to equipartition when averaged over the cycles and the domain. There are certainly some cycles, and most certainly some regions in the computational domain, in which the kinetic and magnetic energies are near equipartition.
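The compositing procedure described above (stretching each cycle to a common length and co-adding) amounts to a phase-folded superposed-epoch average, which can be sketched as follows. The data and function below are illustrative only and are not drawn from the simulation's actual analysis pipeline:

```python
import numpy as np

def average_cycle(cycles, nphase=200):
    """Phase-fold a list of unequal-length cycles onto a common grid.

    Each element of `cycles` is a 1-D time series covering exactly one
    polarity cycle; every cycle is stretched to unit phase by linear
    interpolation, and the stretched cycles are then co-added.
    """
    phase = np.linspace(0.0, 1.0, nphase)
    stacked = np.empty((len(cycles), nphase))
    for i, c in enumerate(cycles):
        t = np.linspace(0.0, 1.0, len(c))   # this cycle's own phase axis
        stacked[i] = np.interp(phase, t, c)
    return phase, stacked.mean(axis=0)

# Synthetic test data: one full sine per "cycle", lengths varying by ~10%
# (mimicking the observed spread in polarity cycle periods), plus noise.
rng = np.random.default_rng(1)
cycles = []
for n in rng.integers(180, 220, size=8):
    t = np.linspace(0.0, 1.0, n)
    cycles.append(np.sin(2.0 * np.pi * t) + 0.05 * rng.normal(size=n))

phase, composite = average_cycle(cycles)
```

The small cycle-to-cycle period spread is what makes the linear stretching benign here; with strongly varying periods, the stretching itself would distort the composite.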
The bulk of the magnetic energy resides in the low-latitude magnetic wreaths and polar caps (TME), with the energy in the nonaxisymmetric field (FME) a close second. The energy contained in the mean poloidal field (PME) is about an order of magnitude smaller. Figure \ref{fig5}(a) further elucidates three prominent phase shifts, one between the differential rotation and the convection, a second between the magnetic field and the differential rotation, and a third between the convection and the magnetic field. The first phase shift is apparent when comparing peak values of CKE and DRE, where the CKE precedes the DRE by about a year. The second shift has the peak in DRE preceding that of TME by about 0.6~years. Finally, there is an additional phase shift between maxima of CKE and TME, with CKE preceding TME by about 1.5~years. Each of these phase shifts is related to the nonlinear coupling of the convective Reynolds stresses, the differential rotation, the Lorentz forces, and the Maxwell stresses, as will be further studied in \S\ref{sec:dre} and \ref{sec:tme}. The magnetic energy cycles visible in Figure \ref{fig3} can be numbered starting with unity at year zero and extending up to cycle 24 at year 80. There are quantifiable differences between the even and odd-numbered magnetic energy cycles over that extended interval that become more apparent in the averaged polarity cycle shown in Figure \ref{fig5}. Such a temporal parity is similar to the observed behavior of the sunspots known as the Gnevyshev-Ohl rule \citep[e.g.,][]{charbonneau10}. Those asymmetries are reflected in the behaviors exhibited by the energy densities and their transport mechanisms in the average polarity cycle shown in Figure \ref{fig5}. The average of the odd-numbered cycles is captured there as the first 3.1~year interval, and the average even cycle is shown over the succeeding 3.1~year interval.
In particular, there is a deeper minimum in the magnetic energy densities as the odd-numbered cycles are entered as compared to the entrance of the even-numbered cycles. When TME and PME are assessed alone, the deeper minima become more apparent, with their energy densities being about 50\% and 25\% lower respectively (Figure \ref{fig5}(a)). Such temporal differences in the properties of the odd and even cycles are also evident in the transport mechanisms. For instance, the Reynolds stresses and Maxwell stresses of the odd cycles have a broader variation in time than those of the even cycles, which are more sharply peaked in both of those quantities (Figure \ref{fig5}(b)). In contrast, the magnetic energy production terms exhibit the opposite correlation, with the production terms being more peaked during the even-numbered cycles (Figure \ref{fig5}(c)). Yet this simulation must be run longer to better assess the statistical significance of this temporal asymmetry. However, similar signatures of such differences between even and odd magnetic cycles have been found within the long-running cycles of an EULAG-MHD simulation \citep{passos14}. \subsection{Maintaining the Differential Rotation}\label{sec:dre} The evolution of the energy contained in the differential rotation is critical to the behavior of the K3S dynamo, and so it is considered explicitly here. A detailed derivation of Equation \ref{eqn:evokdr} is given in Appendix B. In particular, it is shown in Appendix B that the boundary fluxes of mean kinetic energy of the longitudinal flows are zero, as is the volume-integrated energy arising from the advection of the angular velocity.
Therefore, the evolution of the volume-integrated differential rotation kinetic energy (DRE) is \vspace{-0.35truein} \begin{center} \begin{align} K_{\varphi} &= \frac{d\mathrm{DRE}}{dt} = \displaystyle \int_V dV \ddtime{}\frac{1}{2}\overline{\rho}\avg{\mathrm{v}_{\varphi}}^2 = \int_V dV \bigg[\overbrace{\overline{\rho}\lambda\avg{\mathrm{v}_{\varphi}'\mathbf{v}'}\boldsymbol{\cdot}\boldsymbol{\nabla}\avg{\Omega}}^{\mathrm{RS}} \nonumber \\ & -\overbrace{\frac{\lambda}{4\pi}\avg{B_{\varphi}}\avg{\mathbf{B}}\boldsymbol{\cdot}\boldsymbol{\nabla}\avg{\Omega}}^{\mathrm{MM}} -\overbrace{\frac{\lambda}{4\pi}\avg{B_{\varphi}'\mathbf{B}'}\boldsymbol{\cdot}\boldsymbol{\nabla}\avg{\Omega}}^{\mathrm{FM}} \bigg], \label{eqn:evokdr} \end{align} \end{center} \noindent with $\mathbf{v}'=\mathbf{v}-\avg{\mathbf{v}}$ the nonaxisymmetric velocity, $\mathbf{B}'=\mathbf{B}-\avg{\mathbf{B}}$ the nonaxisymmetric magnetic field, and $\avg{\mathbf{v}}$ and $\avg{\mathbf{B}}$ the axisymmetric velocity and magnetic field respectively. Figure \ref{fig5}(b) shows the evolution of the primary components contributing to the dissipation and production of DRE given in Equation (\ref{eqn:evokdr}). Clearly, the Reynolds stresses (RS) are the only significant means of producing DRE. The contribution of the SLD viscous stresses is very small when integrated over the volume, being over two orders of magnitude less than the RS; thus they are not shown. The contribution of the RS to the DRE varies substantially throughout a cycle, which is a reflection of the Lorentz force's impact on the morphology of the convective structures that can form, and thus on their capacity to generate DRE. The magnetic fields do not play just a passive role either, for they actively dissipate and transfer energy as well.
Indeed, both the mean magnetic stresses (MM) and the fluctuating Maxwell stresses (FM) contribute to the global transfer of DRE to the magnetic energy reservoir where some of this energy will be dissipated via a resistive channel. More importantly, the FM can act to inhibit local turbulence and vortical motions, acting much like an anisotropic and inhomogeneous viscous dissipation, whereas the MM act primarily on the large-scale flows such as the differential rotation. The FM dominate throughout much of the cycle, though the MM play a larger role during minima. Nevertheless, the amplitudes of the FM and MM during a magnetic energy cycle are tightly correlated with the magnetic energy densities, as expected. The Reynolds stresses (RS) reach a peak about 0.4~years before a magnetic minimum and then begin to decrease through the minimum and the rest of the cycle (Figure \ref{fig5}(b)). If it were primarily the magnetic fields that modify the RS, one might expect that the RS terms would be maximum at the minimum of the magnetic energy. Instead, the RS are maximum when the differential rotation is at a minimum. There are likely two reasons for this: one is that the energy in the convection is growing at that time, leading to an increase in the RS, and the other is that the shear of the differential rotation itself modifies the velocity correlations of the convective structures. The shear of the differential rotation will radially and longitudinally stretch the equatorial columns of convection (banana cells) that are primarily responsible for building it. This tends to diminish the velocity correlations responsible for generating the Reynolds stresses. \begin{figure*}[t!] \begin{center} \includegraphics[width=\textwidth]{figure6.pdf} \figcaption{Time evolution of poloidal magnetic potential $\langle A_{\varphi} \rangle$ through a typical magnetic energy cycle. 
Longitudinal component of the magnetic potential (a) at $t_1$ called $\avg{A_{\varphi,1}}$ and (b) $t_2$ called $\avg{A_{\varphi,2}}$, and their difference (c) $\avg{\Delta A_\varphi}=\avg{A_{\varphi,2}} - \avg{A_{\varphi,1}}$. The sum of the right-hand-side terms in Equation (\ref{eqn:daphi}) is shown as panel (d). The components of the sum are shown individually as: (e) the turbulent EMF $\{\mathcal{E}'_\varphi\}$, (f) the mean EMF $\left[\mathcal{E}_\varphi\right]$, and (g) the resistive diffusion $-\eta \{J_\varphi\}$. The location of the tangent cylinder (T.~C.) is shown in (b). \label{fig6}} \end{center} \end{figure*} \subsection{Building Toroidal Magnetic Structures} \label{sec:tme} The coherent large-scale wreath-like magnetic structures have been realized in many stellar convective dynamo simulations utilizing very different codes such as in \citet{browning06,brown08,ghizaru10,kapyla12,augustson13}, and \citet{nelson13a}. The common feature shared by all those simulations is that the region of the convection zone in which the wreaths form is typically one where the Rossby number is low. For the ASH simulations, another feature that appears to promote the formation of longitudinal magnetic structures is a perfectly conducting lower boundary condition, which requires the field to be horizontal there. Thus the formation of magnetic wreaths in the K3S simulation is generally promoted both through its relatively low Rossby number and through the use of a perfectly conducting lower boundary condition. These mean toroidal magnetic fields $\langle B_{\varphi} \rangle$ are shown in Figures \ref{fig2}, \ref{fig3}, and \ref{fig4}. Such magnetic fields are initially generated and subsequently maintained by similar processes. During the growth phase of the magnetic field, the shear of the differential rotation acts to fold and wind the initial poloidal field into large-scale longitudinal magnetic structures.
In this kinematic phase, the shear and meridional flows are largely unaffected and can be considered stationary relative to the time scales of the growing field. However, once the magnetic fields are strong enough, they begin to impact the convective flows that cross them through Lorentz forces. Hence, the magnetic field strength becomes saturated as the back-reaction of the Lorentz forces increases the alignment of the velocity field and the magnetic field, which both reduces the field's generation and can lead to its destruction. To quantify these processes, consider the time evolution of the toroidal magnetic energy (TME), which can be represented as \vspace{-0.25truein} \begin{center} \begin{align} M_\varphi &= \frac{d\mathrm{TME}}{dt} = \displaystyle \int_V dV\ddtime{}\frac{\avg{B_{\varphi}}^2}{8\pi} \label{eqn:tme} \\ &= \int_V dV\frac{\langle \Bcp\rangle}{4\pi}\hat{\boldsymbol{\varphi}}\cdot\bigg[\overbrace{\langle \vB\rangle\boldsymbol{\cdot}\boldsymbol{\nabla}\langle \vv\rangle}^{\mathrm{MS}} + \overbrace{\bigavg{\mathbf{B}'\boldsymbol{\cdot}\boldsymbol{\nabla}\mathbf{v}'}}^{\mathrm{FS}} - \overbrace{\langle \vv\rangle\boldsymbol{\cdot}\boldsymbol{\nabla}\langle \vB\rangle}^{\mathrm{MA}} \nonumber \\ &- \overbrace{\bigavg{\mathbf{v}'\boldsymbol{\cdot}\boldsymbol{\nabla}\mathbf{B}'}}^{\mathrm{FA}} + \overbrace{\left(\avg{\mathbf{B}}\avg{\mathrm{v_r}} + \avg{\mathbf{B}'\mathrm{v_r}'}\right) \ddr{\ln{\overline{\rho}}}}^{\mathrm{CC}} - \overbrace{\boldsymbol{\nabla}\boldsymbol{\times}\left(\eta\avg{\mathbf{J}}\right)}^{\mathrm{RD}} \bigg]. \nonumber \end{align} \end{center} \noindent A detailed derivation of the production terms for the mean magnetic fields in spherical coordinates is provided in Appendix A of \citet{brown10}.
The terms in Equation (\ref{eqn:tme}) are the production of magnetic energy by mean shear ($\mathrm{MS}$), fluctuating shear ($\mathrm{FS}$), mean advection ($\mathrm{MA}$), fluctuating advection ($\mathrm{FA}$), compressive correlations ($\mathrm{CC}$), and resistive diffusion ($\mathrm{RD}$). The significant volume-integrated components of Equation (\ref{eqn:tme}) are shown in Figure \ref{fig5}(c). As suggested above, the $\Omega$-effect or mean shear (MS) here is the dominant means of producing magnetic energy in the toroidal fields, which is accompanied by a weak contribution from the compressive terms (CC). In contrast, resistive dissipation (RD) and fluctuating advection (FA) dissipate TME. The other terms comprise less than 5\% of the total production or dissipation of TME. While there is a net generation of TME when all the terms are summed during much of the magnetic energy cycle, it is clear that much of the temporally local generation through mean shearing effects (or the $\Omega$-effect) is counter-balanced by dissipative processes. As with the poloidal generation mechanisms seen in \S\ref{sec:genpol}, the rate of generation of the field is greater than its rate of dissipation during the growth phase of the energy cycle, as is to be expected of a convective dynamo whose magnetic Reynolds number is supercritical. During the declining phase of the cycle, in contrast, dissipation dominates these processes and so the magnetic energy declines. There is also a strong correlation between the generation of field through the compressive mechanism and dissipation by fluctuating advection. Their amplitudes are, however, not perfectly matched. Instead, the energy dissipated through the FA term is energy that is converted into either mechanical energy or magnetic energy such as the poloidal and nonaxisymmetric magnetic fields.
Note that the magnetic energy dissipation through the FA term provides a first indication that the dynamo operating in K3S is of an $\alpha$-$\Omega$ type rather than $\alpha^2$-$\Omega$ within the context of mean-field dynamo theory \citep[e.g.,][]{krause80}. Indeed, the dissipative character of the FA term is more reminiscent of the diffusive mean-field $\beta$ effect, which is defined in \S\ref{sec:alpha}. \subsection{Generating Poloidal Fields} \label{sec:genpol} The time evolution of the magnetic field can be recovered from the magnetic vector potential, where for instance the mean toroidal magnetic vector potential $\avg{A_{\varphi}}$ captures the poloidal magnetic field as $\langle \mathbf{B}_P \rangle = \boldsymbol{\nabla}\boldsymbol{\times}(\langle A_{\varphi} \rangle\hat{\boldsymbol{\varphi}})$. In particular, the behavior of $\avg{A_{\varphi}}$ is governed by the following form of the induction equation \vspace{-0.25truein} \begin{center} \begin{align} \displaystyle \ddtime{\langle A_{\varphi} \rangle} = \hat{\boldsymbol{\varphi}}\cdot\left[\avg{\mathbf{v}'\times\mathbf{B}'} + \avg{\mathbf{v}}\times\avg{\mathbf{B}} - \eta\avg{\mathbf{J}}\right]. \label{eqn:daphidt} \end{align} \end{center} \noindent In what follows, $\boldsymbol{\mathcal{E}}=\mathbf{v}\times\mathbf{B}$ is the electromotive force (EMF). Thus the turbulent EMF ($\boldsymbol{\mathcal{E}}'$) is defined as $\avg{\boldsymbol{\mathcal{E}}'}=\avg{\mathbf{v}'\times\mathbf{B}'}$. The diffusion is proportional to the product of the current $\mathbf{J}=\frac{c}{4\pi}\boldsymbol{\nabla}\boldsymbol{\times}\mathbf{B}$ and the magnetic diffusion coefficient $\eta$.
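For reference, since $\avg{A_{\varphi}}$ has no longitudinal dependence, expanding this curl in spherical coordinates (a standard identity) gives the components of the mean poloidal field explicitly: \vspace{-0.25truein} \begin{center} \begin{align} \avg{B_{\mathrm{r}}} = \frac{1}{r\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\,\avg{A_{\varphi}}\right), \quad \avg{B_{\theta}} = -\frac{1}{r}\frac{\partial}{\partial r}\left(r\,\avg{A_{\varphi}}\right), \end{align} \end{center} \noindent which makes explicit that a sign reversal of $\avg{A_{\varphi}}$ corresponds to a polarity reversal of the mean poloidal field.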
As noted in \citet{nelson13a}, the definite time integral of Equation (\ref{eqn:daphidt}) subsequently yields \vspace{-0.25truein} \begin{center} \begin{align} &\displaystyle \Delta\langle A_{\varphi} \rangle = \avg{A_{\varphi,2}} - \avg{A_{\varphi,1}} = \{\mathcal{E}'_\varphi\} + \left[\mathcal{E}_\varphi\right] - \eta \{J_{\varphi}\} \nonumber \\ &=\!\!\!\int_{t_1}^{t_2}\!\!\!\!\!\! dt \hat{\boldsymbol{\varphi}}\cdot\avg{\mathbf{v}'\times\mathbf{B}'} + \int_{t_1}^{t_2}\!\!\!\!\!\! dt \hat{\boldsymbol{\varphi}}\cdot\left(\avg{\mathbf{v}}\times\avg{\mathbf{B}}\right) - \int_{t_1}^{t_2}\!\!\!\!\!\! dt \eta\avg{J_\varphi}, \label{eqn:daphi} \end{align} \end{center} \noindent where $\left[\mathcal{E}_\varphi\right]$ denotes the time integral of the mean component of the EMF. This can be interpreted as the difference between two snapshots of the longitudinal vector potential being equal to the sum of three time-integrated terms: the longitudinal average of the turbulent EMF, the mean EMF, and the magnetic diffusion. Since only $\avg{A_{\varphi}}$ is being considered, Equations (\ref{eqn:daphidt}) and (\ref{eqn:daphi}) are gauge invariant, as $\partial_\varphi \avg{A_{\varphi}} = 0$. The mechanisms that set the time scales relevant to the reversal of the poloidal field are difficult to assess. Namely, these mechanisms require information about the collective action of the turbulent convection upon existing magnetic structures as well as the complex self-interaction of convection to produce differential rotation. These processes are inherently nonlocal in space, as magnetic energy from the local and small-scale action of helical motions upon a large-scale toroidal magnetic structure leads to a large-scale poloidal field, and they are also nonlocal in time, as the large-scale structures evolve on longer time scales than the convection.
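The bookkeeping behind this relation can be checked numerically: integrating the three right-hand-side terms of the induction equation in time must reproduce the change in the vector potential between two instants. The following sketch uses purely synthetic (time, latitude) arrays with no connection to the actual simulation fields:

```python
import numpy as np

rng = np.random.default_rng(2)
nt, nlat = 400, 32
t = np.linspace(0.0, 1.0, nt)

# Toy azimuthally-averaged source terms on a (time, latitude) grid
emf_turb = rng.normal(size=(nt, nlat)) * np.sin(np.pi * t)[:, None]     # <v' x B'>_phi
emf_mean = 0.3 * np.cos(2.0 * np.pi * t)[:, None] * np.ones((1, nlat))  # (<v> x <B>)_phi
eta_J = 0.1 * rng.normal(size=(nt, nlat))                               # eta <J_phi>

# dA_phi/dt as in the induction equation
dA_dt = emf_turb + emf_mean - eta_J

# Cumulative trapezoidal integration gives A_phi(t) up to a constant
dt = np.diff(t)[:, None]
A = np.zeros_like(dA_dt)
A[1:] = np.cumsum(0.5 * (dA_dt[1:] + dA_dt[:-1]) * dt, axis=0)

# Time-integrated contributions between two magnetic maxima t1 and t2
i1, i2 = 50, 350

def tint(f):
    """Trapezoidal time integral of f over [t[i1], t[i2]]."""
    return (0.5 * (f[i1 + 1:i2 + 1] + f[i1:i2]) * dt[i1:i2]).sum(axis=0)

delta_A = A[i2] - A[i1]                              # Delta <A_phi>
rhs = tint(emf_turb) + tint(emf_mean) - tint(eta_J)  # sum of the three terms
```

Because the trapezoidal rule is additive over subintervals, `delta_A` and `rhs` agree to floating-point precision; in the simulation analysis the same decomposition attributes the net change in $\avg{A_{\varphi}}$ to its individual source terms.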
However, these processes can be individually assessed, beginning with an illustration of the several components of the production of poloidal magnetic field. To help further disentangle the various influences of the convection on the turbulent production of magnetic field, a mean-field analysis of the K3S dynamo is provided in \S\ref{sec:alpha}. \begin{figure}[t!] \begin{center} \includegraphics[width=0.485\textwidth]{figure7.pdf} \figcaption{Evolution of total magnetic energy, nonaxisymmetric magnetic energy, and radial magnetic field through the grand minimum. (a) Time variation of the magnetic energy through the grand minimum. The normalized and volume-averaged total magnetic energy is depicted with integrals taken over the total volume (dotted black line) and at lower latitudes between $\pm\dgr{40}$ (red line). (b) Time evolution of the nonaxisymmetric magnetic field magnitude ($|B_r|$ for $m>0$) at $\Rsun{0.95}$, with regions of strong equatorial antisymmetry encircled. (c) Time and latitude dependence of the nonaxisymmetric magnetic field magnitude ($|B_r|$ for $m>0$) at $\Rsun{0.75}$, with hemispheric temporal lags indicated with arrows. (d) Time-latitude diagram of $\avg{B_{\mathrm{r}}}$ at a depth of $\Rsun{0.75}$ with dot-dashed lines helping to illustrate the primary evolution of the polarity cycle. Units in (b), (c), and (d) are kG. The tangent cylinder is indicated with horizontal dotted lines and the grand minimum by vertical dashed lines.\label{fig7}} \end{center} \end{figure} The terms in Equation (\ref{eqn:daphi}) are shown in Figure \ref{fig6}, where the times $t_1$ and $t_2$ in Equation (\ref{eqn:daphi}) are taken at the peak values of the magnetic energy on either side of a minimum in magnetic energy. Thus a magnetic energy cycle and a magnetic polarity reversal are captured.
In order, Figure \ref{fig6} shows the two instances of the vector potential (Figures \ref{fig6}(a), (b)) whose difference (Figure \ref{fig6}(c)) closely corresponds to the sum of the time-integrated components of the EMF and the magnetic diffusion, which are shown individually in the last three panels. Comparing Figures \ref{fig6}(e, g), the predominant competition is between the fluctuating EMF and the resistive diffusion, with the mean EMF (Figure \ref{fig6}(f)) making a modest contribution to the full sum (Figure \ref{fig6}(d)). Despite the large degree of cancellation between the fluctuating EMF and the resistive dissipation, the two act together to reverse the polarity of the poloidal field. In particular, the fluctuating EMF provides the dominant means of reversing the vector potential at lower latitudes (outside the tangent cylinder), whereas the resistive dissipation dominates at the higher latitudes (inside the tangent cylinder). Such an arrangement is largely due to the disparate spatial scales of the convection, and hence of the magnetic field, inside and outside the tangent cylinder, with the more easily dissipated smaller scales being prevalent at higher latitudes. There is also a temporal lag between the generation of the turbulent EMF and the later action of the magnetic diffusion. Nonetheless, at high latitudes and close to the upper boundary, the mean EMF makes a significant contribution to reversing the magnetic vector potential. This portion of the mean EMF is predominantly due to the poleward meridional flow in that region, namely $\avg{\mathrm{v}_{\theta}}\avg{B_{\mathrm{r}}}$. This term must dominate as the longitudinal average of the radial velocity is quite small in this near-surface high-latitude region due to the cancellation of small-scale convective flows and due to the impenetrable boundary condition.
This influence of the meridional flow suggests that some aspects of a flux-transport dynamo could be operating in the near-surface region of this simulation. However, the direction of the transport is reversed relative to a typical flux-transport dynamo. \section{Characterizing the Grand Minimum} \label{sec:minimum} Some 3-D convective dynamo simulations have attained magnetic cycles that also show a longer-term modulation in the amplitude of their peak magnetic energy \citep[e.g.,][]{brown11,augustson13,charbonneau13}. The K3S simulation shows similar properties, though with the additional features of a significant disruption and later recovery of the magnetic polarity cycles. Indeed, Figures \ref{fig3}, \ref{fig7}, and \ref{fig8} show different aspects of the dynamo during the 16~year interval in the evolution of case K3S in which the polarity cycles are substantially disrupted and the magnetic energy is reduced. The volume-averaged magnetic energy density at lower latitudes is decreased by a factor of two (Figure \ref{fig7}(a)), which is reflected in the larger decrease in the magnetic energy of the longitudinal fields relative to the other energy components. During this ``grand minimum,'' the quite regular and self-similar cycles seen prior to it are nearly lost. In particular, the mean radial magnetic field in the deep convection zone does not exhibit a polarity cycle at low latitudes, whereas there is a semblance of a polarity cycle at higher latitudes (Figure \ref{fig7}(d)). In the upper convection zone, the higher latitudes do not reverse their polarity, though the lower latitudes do retain something akin to a polarity cycle (Figure \ref{fig3}(a)). Despite those disruptions, both the spatial and temporal coherency of the cycles are recovered after this interval and persist for the last 30~years of the 80~year-long simulation.
Given the decrease in the volume-averaged magnetic energy density during this interval, and because the simulation retains its magnetic energy cycle of 3.1~years, this interval is fairly similar to the observed characteristics of the Sun during an average grand minimum. The largest difference between what is being called a ``grand minimum'' here and the characteristics of the grand minima seen in cosmogenic isotope data is that the heliospheric magnetic field appears to have maintained the reversals of its dipolar mode, while its magnetic energy was apparently reduced by roughly a factor of four during an average grand minimum \citep{mccracken07}. \subsection{Entering and Exiting the Grand Minimum} \label{sec:spectra} As the interval of reduced magnetic energy and disrupted cycles is entered, there is an anomalous excitation of low-$m$ modes, where $m$ is the longitudinal wavenumber. This event appears to be precipitated by an asymmetry of the magnetic field in time and relative to the equator (Figures \ref{fig7}(b), (c), (d)). The regions of strong magnetic field in the nonaxisymmetric modes are circled in Figure \ref{fig7}(b), and the directions of the hemispheric temporal lags are indicated by arrows in Figure \ref{fig7}(c). The atypical excitation of the nonaxisymmetric modes occurs near the end of cycle 10, during what should normally be a minimum in the magnetic energy. This likely disrupts the normally clean polarity reversals and may permit the longer-term excitation of the axisymmetric even-$\ell$ modes. Those even-$\ell$ modes are equatorially symmetric with $m=0$, where $\ell$ is the spherical harmonic degree. Cycle 10, which begins near year 30, is an atypical magnetic energy cycle during which there is a strong cross-equatorial filament of radial magnetic field (Figure \ref{fig3}(a)) and where only the northern hemisphere exhibits a significant equatorward propagation (Figure \ref{fig3}(b), (d)).
This leads to a substantial temporal lag between the northern and southern hemispheres throughout the magnetic energy cycle. While the precise physical mechanisms that yield such a state are ambiguous, the symmetric modes of the radial magnetic field are strongly excited as the grand minimum is entered at year 33 (Figures \ref{fig7}(c) and \ref{fig8}(b)). Furthermore, the subsequent four energy cycles of the grand minimum do not fully reverse the odd-$\ell$ axisymmetric modes, whereas some of the even-$\ell$ axisymmetric modes do begin to reverse (particularly at depth). Given the larger contribution of the even modes during the grand minimum, the $\Omega$-effect (or $\langle \mathbf{B}_P \rangle\cdot\nabla\avg{\Omega}$) is less efficient at building and maintaining strong longitudinal magnetic fields through latitudinal shear. However, the symmetric modes of the radial magnetic field can influence the dynamo at low latitudes, where the differential rotation has a strong radial gradient due to the cylindrical rotation profile. Hence, during the grand minimum, the longitudinal magnetic field is largely generated by radial shear rather than by latitudinal shear. Both the radial magnetic field and the radial gradient of the angular velocity tend to be weaker than their latitudinal counterparts, leading to the weaker longitudinal magnetic fields seen during this grand minimum. Moreover, as was shown in \citet{strugarek13}, the primary influence on the dipolar mode is the differential rotation, whereas the quadrupolar mode is fed energy through coupling to small-scale convection. Such nonlocal couplings may be at work in K3S as well. Such differences in the primary inverse energy cascades of the dynamo are indicative of the sensitivity of the dynamo to the symmetry of the convection as well as that of the magnetic field.
There is a decay of the even modes and an increase in the energy of the odd modes throughout the magnetic energy cycles of the grand minimum, as indicated with dashed lines in Figures \ref{fig8}(a) and (b). This likely permits the exit from the grand minimum near year 49 into another interval of regular equatorially antisymmetric cycles. These regular magnetic energy cycles involve a prominent alternation in the peak energy of the even modes between successive magnetic energy cycles. Cycle 15 possesses the first such high peak after the grand minimum. Indeed, it appears that the entrance into and the exit from the grand minimum are heralded by the excitation of the even modes of the poloidal magnetic field (Figure \ref{fig8}(b)). Such findings are not without precedent, as similar issues regarding the relative influence of higher-order multipole modes and their interactions within magnetic dynamos have been discussed \citep[e.g.,][]{tobias97,brandenburg08,nishikawa08,gallet09,derosa12,karak13}. What is unique here are the strongly excited low-$m$ modes and the temporal lag between the hemispheres, during the cycles that are active as the grand minimum is entered and exited. These atypical events appear to excite the symmetric dynamo modes and diminish the influence of the antisymmetric modes throughout the grand minimum. \begin{figure}[t!] \begin{center} \includegraphics[width=0.45\textwidth]{figure8.pdf} \figcaption{Time variation of the radial magnetic energy through the grand minimum at two depths. (a) Power in odd $\ell$, $m=0$ modes of $B_{\mathrm{r}}^2/8\pi$ at a depth of $\Rsun{0.75}$ (blue) and $\Rsun{0.95}$ (black). (b) Power in even $\ell$, $m=0$ modes of $B_{\mathrm{r}}^2/8\pi$ at a depth of $\Rsun{0.75}$ (green) and $\Rsun{0.95}$ (black). The red dashed lines in (a) and (b) indicate the general trends of the magnetic energies. 
(c) Magnetic parity, showing the dominance of odd parity during normal cycles and more even parity during the minimum at two depths ($\Rsun{0.75}$ orange, $\Rsun{0.95}$ black), indicating that even parity becomes prominent at depth entering and during the grand minimum. \label{fig8}} \end{center} \end{figure} \subsection{Dynamo Families and Magnetic Field Parity} \label{sec:parity} The interplay of the antisymmetric (primary) and symmetric (secondary) dynamo families and their parity has some precedent within the context of the solar dynamo as explored in \citet{derosa12}. Rather than attempting to assess the behavior of each mode to quantify how these two dynamo families interact within the K3S simulation, it is useful to construct a more encompassing measure of all the modes involved in the dynamo. One such measure is the parity of the radial magnetic field, which provides a scalar indication of the relative importance of the symmetric and antisymmetric modes for each longitudinal wavenumber $m$ and radius $r$. It is defined as \vspace{-0.25truein} \begin{center} \begin{align} &\mathcal{P}\left(r,m\right) = \frac{B_{\mathrm{even}}^2\left(r,m\right) - B_{\mathrm{odd}}^2\left(r,m\right)}{B_{\mathrm{even}}^2\left(r,m\right) + B_{\mathrm{odd}}^2\left(r,m\right)}, \label{eqn:parity} \\ &B_{\mathrm{even}}^2\left(r,m\right) = \sum_{\underset{\mathrm{even}}{\ell + m}} \left| B_{\mathrm{r}}\left(r,\ell, m\right) \right|^2, \; B_{\mathrm{odd}}^2\left(r,m\right) = \sum_{\underset{\mathrm{odd}}{\ell + m}} \left| B_{\mathrm{r}}\left(r,\ell, m\right) \right|^2, \nonumber \end{align} \end{center} \begin{figure*}[t!] \begin{center} \includegraphics[width=\textwidth]{figure9.pdf} \figcaption{Comparison of the evolution of the mean poloidal magnetic energy production through the fluctuating EMF ($P_{FL}$) and mean toroidal magnetic energy production by mean shear ($T_{MS}$) over the average polarity cycle.
An overlay of the contours $\avg{B_{\varphi}}$ at 500~G is also shown, with solid contours being of positive polarity and dash-dotted of negative polarity. (a) Mean poloidal magnetic energy generation $P_{FL}$ at $\Rsun{0.92}$ shown with latitude and time in cylindrical projection, illustrating the strong field generation above about $\pm 30^{\circ}$ and the weak generation that accompanies the equatorward propagation of the toroidal wreaths. (b) $P_{FL}$ plotted with depth $r/\Rsun{}$ at a latitude of $25^{\circ}$. The tapering with depth arises from the strong radial dependence of $\mathbf{B}_P$, with a downward direction of propagation. (c) $P_{FL}$ averaged over the magnetic energy cycle rendered in the meridional plane. (d) Mean toroidal magnetic energy generation $T_{MS}$ exhibited over the average cycle and with latitude in cylindrical projection at $\Rsun{0.92}$, showing the equatorward migration of field generation. (e) $T_{MS}$ in depth and time at a latitude of $25^{\circ}$, with two zones of migration evident. (f) $T_{MS}$ averaged over a magnetic energy cycle, where the equatorial and polar regions represent spatially separated zones of generation. The color bars are shared between (a)-(c) and (d)-(f). \label{fig9}} \end{center} \end{figure*} \noindent with $\ell$ the spherical harmonic degree. When this parity measure is negative, the radial magnetic field favors an equatorially antisymmetric state, whereas a positive parity indicates that the radial magnetic field is in a symmetric state. The axisymmetric ($m=0$) parity of the radial magnetic field in case K3S is usually large and negative, as illustrated by Figure \ref{fig8}(c), meaning that the system strongly prefers to place most of the energy in the antisymmetric (odd-$\ell$) modes during typical cycles. This is especially true in the near-surface region, which possesses an average magnetic parity of $-0.9$.
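The parity measure of Equation (\ref{eqn:parity}) is simple to evaluate from spherical-harmonic coefficients of the radial field. The following minimal sketch (using a hypothetical coefficient array rather than simulation output, with an assumed index layout) illustrates the computation at fixed $r$ and $m$:

```python
import numpy as np

def parity(br_lm, m):
    """Parity P(r, m) from radial-field spherical-harmonic coefficients
    br_lm[l] at a fixed radius and azimuthal order m (hypothetical layout,
    with the array index taken to be the degree l)."""
    ells = np.arange(len(br_lm))
    power = np.abs(br_lm) ** 2
    even = power[(ells + m) % 2 == 0].sum()  # equatorially symmetric modes
    odd = power[(ells + m) % 2 == 1].sum()   # equatorially antisymmetric modes
    return (even - odd) / (even + odd)

# A purely antisymmetric axisymmetric field (only l=1, m=0) gives P = -1:
coeffs = np.zeros(8, dtype=complex)
coeffs[1] = 1.0
print(parity(coeffs, m=0))  # -1.0
```

For $m=0$ this places the even-$\ell$ modes in the symmetric family, so a pure dipole returns $\mathcal{P}=-1$, consistent with the sign convention in the text.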
The magnetic parity in the deeper convection zone is also quite negative, being $-0.7$ on average when the grand minimum is omitted from that average. In contrast, the nonaxisymmetric modes (those with $m>0$) show no parity preference; their near-zero parity is maintained during both typical cycles and the grand minimum. During regular polarity cycles, there is also a distinct phase delay between the peak in magnetic energy in the odd-ordered modes at depth and those near the surface, as seen in Figure \ref{fig8}(a). Furthermore, that figure shows that the reversal in the odd-ordered modes usually occurs later at depth than it does near the surface, indicating the top-down nature of the poloidal magnetic field reversal. In contrast, the even-ordered modes show no such phase discrepancies. Yet in Figure \ref{fig8}(b), there is a prominent alternation in the peak energy of the even modes between successive magnetic energy cycles during the regular cycles after the grand minimum. Such behavior accounts for the similarity to the Gnevyshev-Ohl rule that those cycles seem to obey. However, prior to entering the grand minimum this pattern is lost, and it is only regained as the grand minimum is exited. The three magnetic energy cycles prior to entering the grand minimum were all atypical. In particular, magnetic energy cycles 8 and 9 were more equatorially antisymmetric than a typical cycle, as indicated by the low magnitude of their even-$\ell$ axisymmetric modes (Figure \ref{fig8}(b)). The surface layers also reversed earlier than normal relative to when the magnetic fields in the deep convection zone reversed, being almost 2.5~years earlier instead of the average of one year (Figure \ref{fig8}(a)). This preceded cycle 10, where the lengthened minimum and reversal phase discrepancy were especially pronounced in the southern hemisphere. This breaks the pronounced equatorial antisymmetry of the magnetic field.
Such a change in symmetry is largely maintained throughout the grand minimum, when the parity remains near zero. This indicates the stronger coupling of the symmetric and antisymmetric dynamo families within that interval. Indeed, throughout the grand minimum, both the deep convection zone and the upper convection zone have a significant contribution to the amplitude of the radial magnetic field from both the symmetric (even-$\ell$) and antisymmetric (odd-$\ell$) modes. In that interval, the ratio of the power in those modes remains close to unity within the convection zone, which is particularly evident in the deep convection zone, as visible in the magnetic parity shown in Figure \ref{fig8}(c). Parity variations similar to those seen in Figure \ref{fig8}(c) have also been found and studied in nonlinear mean-field dynamo models \citep{tobias97,moss00,brooke02}. In those studies, there is a clear variability of the parity of the dynamo models between the antisymmetric and symmetric modes, with the dynamo solution often accompanied by a strong quadrupolar component. Such variability, which also leads to intervals of deep minima, confirms that the interplay between the dynamo families is critical to achieving intermittent and aperiodic dynamo states. These studies have further shown that low magnetic Prandtl numbers favor more irregular and time-dependent solutions with extended intervals of minimal activity. Similarly, the SLD treatment used in the K3S simulation, by favoring lower magnetic Prandtl number dynamos, is likely at the origin of the appearance of the deep minimum described above. In the \citet{moss00} and \citet{brooke02} studies, the symmetric dynamo modes can become completely dominant. In contrast, the symmetric dynamo modes do not fully become dominant in the K3S simulation.
Rather, it is likely that the interplay of the two dynamo families leads to the grand minimum and the return to the regular magnetic cycles captured in the low magnetic Prandtl number regime realized here. \section{Equatorward Propagation of Magnetic Wreaths} \label{sec:propagate} Having now studied how the regular cycles could be disrupted, it is appropriate to analyze in further detail how the magnetic energy is generated in both space and time. This will also show how the equatorward propagation may arise in K3S. \subsection{Spatio-Temporal Evolution of Dynamo Mechanisms} \label{sec:evodyno} As will be seen in \S\ref{sec:alpha}, the K3S simulation is most akin to an $\alpha$-$\Omega$ mean-field dynamo, requiring an assessment of the two dominant modes of magnetic energy generation. These modes are the generation of poloidal magnetic energy through the fluctuating EMF (the $\alpha$-effect, e.g., \citet{moffatt78}) and the generation of mean toroidal magnetic energy through the differential rotation ($\Omega$-effect) acting on the mean poloidal magnetic field. The first mechanism is denoted $\avg{P_{\mathrm{FL}}} = \avg{\mathbf{B_P}\cdot\nabla\times\mathcal{E}'_{\varphi}\hat{\boldsymbol{\varphi}}}$ and the second $\avg{T_{\mathrm{MS}}} = \lambda\avg{B_{\varphi}}\langle \mathbf{B}_P \rangle\cdot\nabla\Omega$. Of note, had the generation of the magnetic field itself been shown instead, nearly identical propagating patterns and correlations would be seen. In Figure \ref{fig9}, two primary regions of production of poloidal magnetic field are visible: one at low latitudes and another at high latitudes. The low-latitude regions of poloidal energy production are associated with convective cells acting upon the equatorially migrating magnetic wreaths. While the mechanism is somewhat similar for the high-latitude poloidal field, it primarily originates in the helical action of convection on the polar caps.
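To make the second of these definitions concrete, the mean-shear production term $T_{\mathrm{MS}}$ can be evaluated on a meridional $(r,\theta)$ grid. The sketch below uses synthetic stand-in profiles for the mean fields and for $\Omega$ (not ASH data), so only the structure of the computation is meaningful:

```python
import numpy as np

# Minimal sketch of the mean toroidal production term
# T_MS = lambda * <B_phi> * <B_P> . grad(Omega) on an (r, theta) grid.
# All fields here are synthetic placeholders, not simulation output.
nr, nth = 64, 128
r = np.linspace(0.72, 0.97, nr)               # fractional radius r/Rsun
theta = np.linspace(0.01, np.pi - 0.01, nth)  # colatitude
R, TH = np.meshgrid(r, theta, indexing="ij")
lam = R * np.sin(TH)                          # cylindrical radius

omega = 1.0 + 0.1 * np.sin(TH) ** 2           # toy solar-like Omega(r, theta)
b_r = 0.1 * np.cos(TH)                        # toy mean poloidal components
b_th = 0.05 * np.sin(2 * TH)
b_phi = np.sin(2 * TH) * np.exp(-(R - 0.92) ** 2 / 0.01)

dOm_dr = np.gradient(omega, r, axis=0)
dOm_dth = np.gradient(omega, theta, axis=1)
bp_dot_gradOm = b_r * dOm_dr + (b_th / R) * dOm_dth  # B_P . grad(Omega)
t_ms = lam * b_phi * bp_dot_gradOm                   # mean-shear production
print(t_ms.shape)  # (64, 128)
```

In the simulation, the analogous maps of $T_{\mathrm{MS}}$ over latitude, depth, and cycle phase are what is rendered in Figures \ref{fig9}(d)-(f).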
The primary source of magnetic energy for those caps of toroidal magnetic field is the more rapidly rotating poles, which are established during the magnetic minimum as seen in Figures \ref{fig2} and \ref{fig4}. The spatial and temporal separation of mean poloidal and toroidal field generation is particularly evident when comparing Figures \ref{fig9}(a) and \ref{fig9}(d). The greatest generation of poloidal field is concentrated in the polar regions and near the tangent cylinder, with the low latitudes playing much less of a role. In contrast, the generation of mean toroidal magnetic field is greatest at low latitudes throughout the bulk of the cycle. In Figures \ref{fig9}(b) and \ref{fig9}(e), there is a relatively thin region in radius where the sign of the poloidal magnetic energy generation rate reverses. This sign reversal is largely due to the change in the sign of the kinetic helicity of the convection that occurs near the base of the convection zone. Such a kinetic helicity reversal is expected due to the influence of the lower boundary \citep[e.g.,][]{miesch00}, which has an impact on the $\alpha$-effect as will be discussed in \S\ref{sec:alpha}. The beginning of each magnetic energy cycle in Figure \ref{fig9} occurs at years 0.0 and 3.1, when the magnetic fields are weakest. At those times, the toroidal and poloidal magnetic fields begin to grow at roughly $\pm\dgr{30}$ (Figures \ref{fig2}(c), \ref{fig3}, \ref{fig9}(a), (d)). The turbulent action of the convection on this newly generated wreath sustains the poloidal field generation through $P_{FL}$ on the high-latitude edge of the wreath, which is near the tangent cylinder. In combination with the polar EMF that emerges from the action of convection on the longitudinal fields there, this sustains the poloidal field that in turn allows the wreaths to be maintained through the shearing action of the differential rotation. 
However, once the polar differential rotation has been quenched, the toroidal magnetic fields begin to decay. Subsequently, the poloidal field generation that had been quite prominent at the tangent cylinder vanishes. The remaining generation of the poloidal field then moves equatorward, advancing with the migration of the wreaths. Yet that poloidal field generation is still largely on the high-latitude edge of the low-latitude wreaths. At this point, the strong longitudinal magnetic field at low latitudes has begun to feed back significantly on the equatorial differential rotation, modifying the structure of the convection and diminishing the differential rotation. Indeed, the centroid of the greatest dynamo action propagates equatorward and downward in radius as the magnetic energy cycle progresses, which is evident in the time-latitude diagrams (Figure \ref{fig9}(a), (d)) and which is suggested in the time-radius diagram (Figure \ref{fig9}(b)). Hence, the equatorial migration begun at the surface makes its way deeper into the domain as the magnetic energy cycle advances. The low-latitude wreaths of field eventually lose their coherence and energy through the lack of sufficient differential rotation to sustain them (e.g., Figure \ref{fig2}(b)), the destructive influence of the convection (Figure \ref{fig7}), and cross-equatorial flux cancellation. Once those magnetic field structures have been sufficiently diminished, the diffusion and convection serve to rapidly redistribute the remaining magnetic flux. This is evident in Figures \ref{fig3} and \ref{fig4}. As the end of each cycle is approached, the wreaths converge on the equator and their resulting destruction changes the morphology of the convective cells. The modified convective patterns better permit the poleward migration and diffusion of the surviving low-latitude magnetic field polarity. Such actions lead to the topological reconnection of the large-scale magnetic field.
This migrating field is of the opposite sense to that of the previous cycle's polar cap. Being of greater amplitude than the remaining polar magnetic field, it establishes the sense of the subsequent cycle's polar field. Thus the polarity of the subsequent magnetic field seems to be determined by the EMF generated at the equator, as was also seen in \citet{augustson13} and \citet{nelson13a}. This source of poloidal magnetic field begins to be generated once the toroidal magnetic fields are sufficiently close to the equator to enable a strong cross-equatorial interaction. It is sustained throughout the rest of the cycle. During this period, the origin of this poloidal field generation is the action of convection on the low-latitude edge of the wreaths. \begin{figure}[t!] \begin{center} \includegraphics[width=0.485\textwidth]{figure10.pdf} \figcaption{The latitudinal component of the propagation direction of a dynamo wave, corresponding to Equation (\ref{eqn:stheta}) with $\overline{\alpha}=\alpha_{(\varphi\varphi)}$. (a) Latitudinal propagation velocity $S_\theta$ shown over an average polarity cycle at $\Rsun{0.92}$ with time and latitude. An overlay of the contours $\avg{B_{\varphi}}$ at 500~G is also shown, with solid contours being of positive polarity and dash-dotted of negative polarity. (b) The latitudinal propagation $\{S_\theta\}$ averaged over the polarity cycle and shown in the meridional plane. Dark tones indicate negative latitudinal propagation, light tones positive latitudinal propagation. \label{fig10}} \end{center} \end{figure} \subsection{Kinematic Versus Nonlinear Dynamo Waves} \label{sec:illusions} The equatorward propagation of magnetic features observed in this simulation, which is visible in Figure \ref{fig2}(b) and the broad panorama of Figure \ref{fig3}(b), will now be assessed to disentangle which mechanisms permit such behavior.
The equatorward propagation in kinematic $\alpha$-$\Omega$ dynamo models is traditionally attributed to the propagation of a dynamo wave. In kinematic theory the propagation direction of such a wave is given by the Parker-Yoshimura rule \citep[e.g.,][]{parker55,yoshimura75} as \vspace{-0.25truein} \begin{center} \begin{equation} \mathbf{S} = -\lambda\overline{\alpha}\hat{\boldsymbol{\varphi}}\boldsymbol{\times}\boldsymbol{\nabla}\frac{\Omega}{\Omega_0}, \label{eqn:stheta} \end{equation} \end{center} \noindent where $\lambda = r\sin{\theta}$ and $\overline{\alpha}=-\tau_o\avg{\mathbf{v}'\boldsymbol{\cdot\omega'}}/3$, with $\boldsymbol{\omega'}=\boldsymbol{\nabla}\boldsymbol{\times}{\mathbf{v}'}$. Thus $\overline{\alpha}$ depends on the convective overturning time $\tau_o$ and the kinetic helicity. When Lorentz-force back-reactions are taken into account, there is also a current helicity contribution to $\overline{\alpha}$ \citep[e.g.,][]{pouquet76,gruzinov94}. \citet{augustson14} demonstrated that the kinematic expression in Equation (\ref{eqn:stheta}) with $\overline{\alpha}$ defined using only the kinetic helicity fails to explain the equatorward propagation seen in this dynamo simulation. The current helicity can in principle reverse the sign of the scalar $\alpha$-effect \citep{warnecke14}. Indeed, it has recently been shown that the direction of the propagation of the magnetic field through a cycle can depend upon the Parker-Yoshimura mechanism \citep[e.g.,][]{racine11,kapyla13,warnecke14}. \begin{figure}[t!] \begin{center} \includegraphics[width=0.485\textwidth]{figure11.pdf} \figcaption{Coevolution of the mean toroidal magnetic field $\avg{B_{\varphi}}$ at $\Rsun{0.92}$ over the average magnetic polarity cycle with (a) the magnitude of the mean angular velocity gradient $\Rsun{}$~$|\nabla\Omega|/\Omega_0$ and (b) latitudinal velocity $\avg{\mathrm{v}_{\theta}}$ of the evolving meridional circulation in units of $\mathrm{m\, s^{-1}}$.
Here $\langle B_{\varphi} \rangle$ is overlain with positive magnetic field as solid lines and negative field as dashed lines, with the contours corresponding to a 500~G field strength. \label{fig11}} \end{center} \end{figure} Rather than explicitly including the current helicity here, $\overline{\alpha}$ can be procured from the simulation itself. As described in \S\ref{sec:alpha}, the full $\alpha$ tensor is obtained from a singular value decomposition (SVD). When the scalar $\alpha$-effect utilizes information from that derived $\alpha$ tensor, it will automatically include information about both the current and kinetic helicities. Hence, the $\alpha_{(\varphi\varphi)}$ component of the $\alpha$ tensor is used in place of $\overline{\alpha}$ in Equation (\ref{eqn:stheta}) to determine the propagation direction of the dynamo wave predicted by mean-field theory. This is shown in Figure \ref{fig10}. With such a definition of $\overline{\alpha}$, the Parker-Yoshimura sign rule still does not hold for this simulation. More precisely, the latitudinal propagation of a dynamo wave has the opposite sign to that required for both the equatorward migration of low-latitude magnetic structures and the poleward propagation of magnetic field, as seen in Figure \ref{fig10}. So, although the Parker-Yoshimura mechanism seems to be at work in other convective dynamo simulations \citep{racine11,warnecke14}, it does not appear to be operating here. Thus, another mechanism must be sought to explain the equatorward propagation in the K3S simulation. In contrast to the intuition derived from mean-field theory, the dominant mechanism appears to be the nonlinear feedback of the magnetic fields upon the differential rotation. The tight correlation between the presence of $B_{\varphi}$ and the angular velocity gradient $|\nabla\Omega|$ is demonstrated in Figure \ref{fig11}(a).
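Returning to the kinematic prediction, the pointwise evaluation of Equation (\ref{eqn:stheta}) can be sketched as follows; the rotation profile is synthetic and `alpha` is a toy stand-in for the derived $\alpha_{(\varphi\varphi)}$, so only the structure of the calculation is meaningful. In spherical coordinates, $\hat{\boldsymbol{\varphi}}\times\boldsymbol{\nabla}f = \partial_r f\,\hat{\boldsymbol{\theta}} - r^{-1}\partial_\theta f\,\hat{\mathbf{r}}$, which gives the components used below:

```python
import numpy as np

# Latitudinal component of the Parker-Yoshimura propagation direction,
# S_theta = -lambda * alpha * d(Omega/Omega_0)/dr, from Equation (stheta).
# Profiles are toy stand-ins, not simulation data.
nr, nth = 64, 128
r = np.linspace(0.72, 0.97, nr)               # fractional radius r/Rsun
theta = np.linspace(0.01, np.pi - 0.01, nth)  # colatitude
R, TH = np.meshgrid(r, theta, indexing="ij")
lam = R * np.sin(TH)                          # cylindrical radius

omega_ratio = 1.0 + 0.1 * np.sin(TH) ** 2 * R  # toy Omega/Omega_0 profile
alpha = 0.01 * np.cos(TH)                      # toy alpha_(phi,phi) profile

dOm_dr = np.gradient(omega_ratio, r, axis=0)
dOm_dth = np.gradient(omega_ratio, theta, axis=1)

s_theta = -lam * alpha * dOm_dr                # latitudinal propagation
s_r = lam * alpha * dOm_dth / R                # radial propagation
print(s_theta.shape, s_r.shape)
```

With positive $\overline{\alpha}$ in the north and outwardly increasing $\Omega$, this toy profile yields negative (equatorward-pointing colatitudinal) $S_\theta$ in the northern hemisphere; the sign test in the text applies the same expression to the simulation's own profiles.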
Since the latitudinal shear serves to build and maintain the magnetic wreaths (as in \S\ref{sec:tme}), the latitude of peak magnetic energy corresponds to that of the greatest shear. So the region with available shear moves progressively closer to the equator as the Lorentz forces of the wreaths locally weaken the shear (Figure \ref{fig11}(a)). Hence the appearance of equatorward motion in K3S is attributed to a nonlinear dynamo wave. This interpretation is consistent with the substantial modulations of the differential rotation seen in Figure \ref{fig2} and the equatorward migration of the toroidal source term shown in Figure \ref{fig9}. In a mean-field dynamo, the equatorward propagation of the toroidal source term arises from the equatorward propagation of the poloidal magnetic field. Here, the equatorward propagation of $\nabla\Omega$ also contributes. Accompanying the local weakening of the gradient of the differential rotation is a meridional flow that is gyroscopically induced at the poleward edge of the low-latitude magnetic wreaths, as seen in Figure \ref{fig11}(b), with gyroscopic pumping defined as in \citet{mcintyre98} and \citet{miesch11}. In particular, the torque provided by the divergence of the Lorentz force and the Maxwell stresses produces a change in the local shear, which in turn induces a change in the meridional flow. Indeed, the spatio-temporal correlation between the changing differential rotation, the mean toroidal magnetic field, and the meridional flow seen in Figure \ref{fig11} appears to support a nonlinear dynamo wave. This mechanism for producing an equatorward migration of magnetic field relies upon the complex dynamical coupling of the differential rotation, meridional flows, and the magnetic field. \section{Assessing the Cycle Periods} \label{sec:periods} A correlation analysis of the dominant processes in this simulation is useful to quantitatively ascertain the magnetic energy and polarity cycle periods.
Such analysis also permits the estimation of the variance of those cycles. First consider the dynamical coupling of the mean magnetic fields $\avg{\mathbf{B}}$ and the mean angular velocity $\langle\Omega\rangle$, which plays a crucial role in regulating the magnetic energy cycle. The significant spatial and temporal correlation between $\langle B_{\varphi} \rangle$ and angular velocity variations $\avg{\Delta\Omega}$ during reversals is apparent when comparing Figures \ref{fig2}(a) and \ref{fig2}(b), and it is readily seen in Figure \ref{fig11}(a), revealing the strong nonlinear coupling of the magnetic field and the differential rotation. \subsection{Correlation Analysis} \label{sec:corr} The dynamics that couples the differential rotation and the mean toroidal magnetic field is captured in two terms: the mean toroidal magnetic field generation due to mean shear ($S = \lambda \avg{\mathbf{B}_{P}} \boldsymbol{\cdot}\boldsymbol{\nabla} \langle\Omega\rangle$, with $\avg{\mathbf{B}_{P}}$ the mean poloidal field) and the azimuthal component of the mean Lorentz force $Q = \boldsymbol{\hat{\varphi}\cdot} \avg{\boldsymbol{\nabla}\boldsymbol{\times}{\avg{\mathbf{B}}}} \boldsymbol{\times} \avg{\mathbf{B}}$, which enters the longitudinal component of the momentum equation (Equation (\ref{eqn:ashmom})). As shown in Figure \ref{fig12}(a), the auto-correlation of each of these components of the MHD system reveals that $Q$ varies with a period corresponding to the magnetic energy cycle, whereas $S$ is self-correlated over the polarity cycle period. Similarly, the cross-correlation between $P_{\mathrm{FL}}$ and $T_{\mathrm{MS}}$ indicates their close temporal relationship. There is, however, a phase lag of about $0.05\tau_{\mathrm{M}}$, or 2~months, between the two primary energy generation mechanisms for the mean toroidal and poloidal magnetic fields (Figure \ref{fig12}(b)).
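The auto- and cross-correlation analysis described here can be sketched with synthetic signals standing in for the simulation diagnostics; the 3.1~year energy cycle and the $\sim$2~month lag between the production terms are imposed by construction, so the sketch only illustrates how the periods and lags would be recovered:

```python
import numpy as np

# Synthetic stand-ins for the diagnostics: q for the Lorentz-force term Q,
# p_fl for the poloidal production P_FL, lagged by ~2 months by construction.
dt = 0.01                                    # sample spacing in years
t = np.arange(0.0, 31.0, dt)                 # ten energy cycles
tau_m = 3.1                                  # energy cycle period (years)
q = np.sin(2.0 * np.pi * t / tau_m)
p_fl = np.sin(2.0 * np.pi * (t - 2.0 / 12.0) / tau_m)

def xcorr(a, b, dt):
    """Normalized cross-correlation and the corresponding lags (years)."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    c = np.correlate(a, b, mode="full") / len(a)
    lags = np.arange(-len(a) + 1, len(a)) * dt
    return lags, c

# Auto-correlation of q recovers the energy cycle period (~3.1 yr):
lags, ac = xcorr(q, q, dt)
sel = lags > 0.5 * tau_m
period = lags[sel][np.argmax(ac[sel])]

# Cross-correlation recovers the imposed phase lag (q leads p_fl):
lags, cc = xcorr(q, p_fl, dt)
near = np.abs(lags) < 0.5 * tau_m
lag_peak = lags[near][np.argmax(np.abs(cc[near]))]
print(round(period, 2), round(lag_peak, 2))
```

The peak of the auto-correlation at positive lag gives the cycle period, while the near-zero-lag peak of the cross-correlation gives the phase offset, here about $-1/6$~year.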
Yet there is also a high degree of temporal regularity between cycles, with the auto-correlation of both quantities remaining significant with 95\% confidence for a single polarity cycle and with 67\% confidence for three such cycles, as indicated by the shaded areas of Figures \ref{fig12}(a), (b). To further quantify the regularity of the cycles, one can consider the full-width at half-maximum (FWHM) of the auto-correlations of $S$ and $Q$ as well as the cross-correlation of $P_{\mathrm{FL}}$ and $T_{\mathrm{MS}}$ (Figure \ref{fig12}(b)). Indeed, an average of the FWHM of those correlated quantities yields a standard deviation of the cycle period of about 4~months, or about 10\% of the magnetic energy cycle period of 3.1~years. \begin{figure}[t!] \begin{center} \includegraphics[width=0.45\textwidth]{figure12.pdf} \figcaption{Auto- and cross-correlations of the mean Lorentz force and the mean toroidal magnetic field production by mean shear. (a) Volume-averaged temporal auto-correlation of mean toroidal magnetic field generation by mean shear ($S$, blue curve) and the same for the mean Lorentz force impacting the mean angular velocity ($Q$, red curve) plotted against temporal lags $\Delta \mathrm{t}$ normalized by the magnetic energy cycle period $\tau_\mathrm{M}$. Confidence intervals are shown as shaded gray regions, with the 67\% interval in darker gray and 95\% in lighter gray. (b) Cross-correlation of the mean poloidal energy production through the fluctuating EMF ($P_{\mathrm{FL}}$) and the mean toroidal magnetic energy production due to the mean shear ($T_{\mathrm{MS}}$). (c) The volume-averaged temporal power spectrum for $\avg{B_{\varphi}}$. The red vertical dotted lines indicate the convective overturning time of 0.07~years and the polarity reversal time scale of 6.2~years, while the red dashed line indicates a $1/f^2$ pink noise spectrum. \label{fig12}} \end{center} \end{figure} This analysis has been undertaken primarily to emphasize two basic features of the K3S dynamo.
First, the time scale for the Lorentz-force feedback on the differential rotation, and hence the energy cycle time scale, is $\tau_M$. This is the basic mechanism that sets the clock for the magnetic energy cycles. Second, the polarity cycle time $\tau_P$ is then twice the energy cycle time scale, which follows simply from the fact that the magnetic energy scales as the square of the field, and the square of a sinusoid oscillates with half the period of the sinusoid itself. To further illustrate the time scales that arise in this simulation, the volume-averaged frequency power spectrum of $\avg{B_{\varphi}}$ is shown in Figure \ref{fig12}(c). There is a clear peak in the power spectrum around the average polarity cycle time $\tau_P$ of 6.2~years. The width of this peak gives a hint as to the variance of the magnetic polarity cycle period, which appears to be about one year. If the peak in power at $\tau_P$ is subtracted, then there is a nearly uniform distribution of time scales longer than about 2~years, potentially reflecting the aperiodic character of those time scales. The power at shorter time scales decreases roughly as the inverse square of the frequency, which is common in the pink noise of highly complex and nonlinear systems. There is also a broad peak at time scales corresponding to the convective overturning time of about 25~days, or 0.07~years. \subsection{Potential Time Scales} In addition to the primary identification of these time scales, the processes assessed in previous sections suggest that there are at least two processes that give rise to them. One is the effective rate of conversion of differential rotation kinetic energy into mean toroidal magnetic energy ($\tau_\Omega$). The second is the time required to diffuse magnetic field from the equator to the pole ($\tau_\eta$). A third potentially relevant time scale is set by the rate at which the magnetic field converges toward the equator.
This in turn can be related to the gyroscopically-pumped meridional circulation induced by the changes in the magnetic fields as they approach the equator. The first time scale $\tau_\Omega$ can be measured as \vspace{-0.25truein} \begin{center} \begin{align} \tau_\Omega = \frac{2\pi}{\Delta\Omega}\frac{\left|\avg{B_{\varphi}}\right|}{\left|\langle \mathbf{B}_P \rangle\right|} \approx 3.6\, \mathrm{years}. \end{align} \end{center} \noindent The diffusion time scale from the equator to the pole is \vspace{-0.25truein} \begin{center} \begin{align} \tau_\eta = \frac{\pi}{2\left(r_2-r_1\right)}\int_{r_1}^{r_2} \frac{r^2\, dr}{\eta} \approx 6.7\, \mathrm{years}. \end{align} \end{center} \noindent Finally, the meridional circulation time scale follows from the average time it takes for an ensemble of particles to traverse a path of the meridional circulation in K3S, or \vspace{-0.25truein} \begin{center} \begin{align} \tau_{\mathrm{MC}} = \frac{1}{N_p}\sum_{\mathbf{p}_i}\oint_{\mathbf{p}_i} \frac{d\boldsymbol{\ell}\cdot\mathbf{v}_{\mathrm{MC}}}{\mathrm{v}_{\mathrm{MC}}^2} \approx 1.2\, \mathrm{years}, \end{align} \end{center} \noindent where each particle path is denoted as $\mathbf{p}_i$. With these simple estimates of the relevant time scales, it is seen that the shear time scale and the diffusion time scale are close to the magnetic energy cycle period and to the magnetic polarity cycle period, respectively. However, since all three of these mechanisms are operating concurrently, a single time scale might emerge from their combined influence. One way to combine them is to take the geometric mean of the three time scales, which yields 3.1~years and reflects the observed magnetic energy cycle period $\tau_M$. This, however, only loosely suggests that these three processes may share in setting the magnetic energy cycle period as well as the polarity cycle period.
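The geometric-mean estimate quoted above is straightforward arithmetic; the snippet below simply reproduces it from the three time scales derived in the text.

```python
# Geometric mean of the three candidate time scales derived in the text
# (shear, diffusion, and meridional circulation, in years).
tau_omega, tau_eta, tau_mc = 3.6, 6.7, 1.2
tau_combined = (tau_omega * tau_eta * tau_mc) ** (1.0 / 3.0)
print(f"geometric mean: {tau_combined:.1f} yr")   # ~3.1 yr, matching tau_M
```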
Indeed, the shearing and the diffusion time scales alone are nearly adequate to explain the magnetic and polarity cycle time scales. \section{Mean-Field Analysis} \label{sec:alpha} The complex set of processes at work in this dynamo solution, as just assessed in \S\ref{sec:reversal} and \ref{sec:evodyno}, yields a dynamo that falls outside the broad classes of $\alpha$-$\Omega$ dynamos that underpin much of the mean-field theory (MFT) of MHD \citep[e.g.,][]{steenbeck66,moffatt78,krause80,brandenburg05}, which typically assume a temporally constant differential rotation and fluctuating EMF. Rather, the feedback of the magnetic field on both the nonaxisymmetric and axisymmetric components of the convective flows is critical to the operation of the dynamo running in this simulation. This suggests that, if one were to attempt to fully model these dynamics in the context of MFT, one would need to include an $\alpha$-quenching mechanism and the Malkus-Proctor effect. Nevertheless, helical turbulent convection is largely responsible for the generation of poloidal field. As such, MFT provides a route to assess and quantify the various zeroth-order influences of the turbulent velocity field upon the generation of the turbulent electromotive force (EMF, $\boldsymbol{\mathcal{E}}'$). Thus a spatially varying, temporally constant, and $\delta$-correlated $\alpha$-effect is now examined. \subsection{Examining the Turbulent Electromotive Force} \label{sec:mfemf} As seen in \S\ref{sec:genpol} and \S\ref{sec:evodyno}, $\boldsymbol{\mathcal{E}}'$ is largely responsible for the generation of poloidal magnetic field in this simulation. Therefore, the generation of poloidal field will be characterized through the mean-field evolution of the mean toroidal vector potential $\avg{A_{\varphi}}$ as in Equation (\ref{eqn:daphi}), which is gauge independent since it only considers longitudinally-averaged quantities.
The connection between MFT and the EMF achieved in this simulation will be examined by noting that the first-order expansion of $\boldsymbol{\mathcal{E}}'$ around the mean magnetic field and its gradient is \vspace{-0.25truein} \begin{center} \begin{equation} \avg{\boldsymbol{\mathcal{E}}'} = \alpha \avg{\mathbf{B}} + \beta \nabla \avg{\mathbf{B}} + \mathcal{O}\left(\partial\avg{\mathbf{B}}/\partial t,\nabla^2\avg{\mathbf{B}}\right), \end{equation} \end{center} \noindent where $\alpha$ is a rank-two pseudo-tensor and $\beta$ is a rank-three tensor. In the following the $\beta$ term will be neglected for simplicity. However, this does increase the systematic error in estimating the $\alpha$ tensor. An SVD decomposition that includes the $\beta$-effect has been undertaken in order to provide a lower bound on this systematic error; it is 21\% when averaged over all components of the $\alpha$ tensor. In this analysis $\alpha$ has been expanded as $\alpha \langle\mathbf{B}\rangle = \alpha_S\avg{\mathbf{B}} + \boldsymbol{\gamma}\times\avg{\mathbf{B}}$, with $\alpha_S$ being the symmetric portion of $\alpha$ and $\boldsymbol{\gamma}$ the antisymmetric portion. The latter is also known as the turbulent pumping velocity \citep[e.g.,][]{krause80,kapyla06}. The diagonal components of $\alpha$ are symmetric by construction, and the symmetrized elements of $\alpha$ are denoted with parenthesized indices $(ij)$. To reconstruct the $\alpha$ tensor from the simulation data, its individual components are determined from a temporal sequence of data at each radial and latitudinal grid point using a method similar to the least-squares singular value decomposition (SVD) methodology described in \citet{racine11}. Such a local fitting technique assumes that each point may be treated independently, which precludes the capture of temporal and spatial correlations that can influence the dynamo action \citep{brown11,nelson13a,augustson13}.
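The local least-squares fit just described can be sketched as follows: at each grid point one has a time series of the mean EMF and mean field related by $\boldsymbol{\mathcal{E}}'(t) \approx \alpha \avg{\mathbf{B}}(t)$, and each row of $\alpha$ is recovered in the least-squares sense. This is a toy reconstruction with a known, hypothetical $\alpha$ and random synthetic data, not the ASH analysis pipeline; `np.linalg.lstsq` (which uses an SVD internally) stands in for the SVD methodology of \citet{racine11}.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy version of the local fit: at a single grid point, a time series of
# the mean field <B>(t) and turbulent EMF E'(t) obey E' = alpha <B>.
alpha_true = np.array([[0.8, 0.1, -0.2],
                       [0.3, -0.5, 0.0],
                       [-0.1, 0.2, 0.4]])     # hypothetical alpha tensor

nt = 500
B = rng.standard_normal((nt, 3))              # synthetic <B>(t) samples
emf = B @ alpha_true.T + 0.01 * rng.standard_normal((nt, 3))

# Least-squares solve for alpha; lstsq applies an SVD internally.
sol, *_ = np.linalg.lstsq(B, emf, rcond=None)
alpha_fit = sol.T
print(np.round(alpha_fit, 2))
```

Each grid point is treated independently, mirroring the locality assumption (and its limitations) noted in the text.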
Yet it has the advantage that the magnetic fields and electric currents from the simulation constrain the components of $\alpha$. The reconstruction of $\alpha$ is carried out using the data from the extended interval of 80~years, as shown in Figure \ref{fig3}. \begin{figure}[t!] \begin{center} \includegraphics[width=0.44\textwidth]{figure13.pdf} \figcaption{A mean-field theoretic interpretation of the dynamics. The nine components of the $\alpha$ tensor are shown, as computed using an SVD technique. The diagonal components of $\alpha$ are shown in (a), (e), and (i). The symmetrized components of the tensor are shown in (b), (c), and (f). The antisymmetric turbulent pumping components are shown in (d), (g), and (h). The colorbar and scaling are uniform across all panels. \label{fig13}} \end{center} \end{figure} Figure \ref{fig13} shows the nine components of $\alpha$ that contribute to $\avg{\boldsymbol{\mathcal{E}}'}$. The magnitudes of the components of the $\alpha$ tensor are on par with other solar dynamo simulations \citep{racine11, nelson13a, simard13}. All of the $\alpha$ tensor components contribute to the generation of the magnetic vector potential through the turbulent EMF and thus to the poloidal and toroidal magnetic fields. The spatial structures of $\alpha_{(r\varphi)}$, $\alpha_{(\varphi\varphi)}$, and $\gamma_{\theta}$ visible in Figures \ref{fig13}(c), (g), and (i) are roughly antisymmetric between the two hemispheres, with the other two components $\alpha_{(\theta\varphi)}$ and $\gamma_r$ being nearly north-south symmetric. The antisymmetry arises from the influence of Coriolis forces on the flows, with the cyclonic turbulence present at higher latitudes being an important contributor to the sign and magnitude of these fields. Such an arrangement is also indicative of the dipolar nature of the poloidal magnetic field, which has a symmetric latitudinal magnetic field and an antisymmetric radial magnetic field.
Thus, it is not a surprise that the equatorially symmetric components of $\alpha$ are those that act on the latitudinal magnetic field. All of the components shown also have structures that reflect the global-scale properties of the convection zone (CZ), where there is a radial symmetry or antisymmetry about the middle of the CZ. This is particularly evident in $\alpha_{(\theta\varphi)}$ and $\alpha_{(\varphi\varphi)}$, where the divergence of vortical downflows and the convergence of swirling upflows give rise to this feature. As might be expected, there is a latitudinal dependence to these structures, where a distinct transition in flow morphology occurs across the tangent cylinder. Most of the features seen here are also present in the $\alpha$ tensor obtained in the analysis of an EULAG-MHD simulation by \citet{racine11}. In particular, the components of the tensor possess a radial antisymmetry with respect to a radius related to the point at which the vorticity of the bulk of the flows reverses its sign, which is above the base of the convection zone. Moreover, the radial and longitudinal elements of $\alpha$ are latitudinally antisymmetric, much like the simulation here. The latitudinal components are symmetric, reflecting the importance of the dipolar component of the magnetic field. Some of the tensor elements of the EULAG-MHD simulation may appear to differ by a sign relative to the $\alpha$ tensor seen in Figure \ref{fig13}. However, this is due to the use of latitude rather than co-latitude in displaying the results of \citet{racine11}. \subsection{The Efficiency of the $\alpha$ Effect} \label{sec:aeff} An interesting measure of the dynamo is how efficiently the convective flows can regenerate existing mean magnetic fields. The dynamo efficiency of these flows can be surmised by finding the average magnitude of an estimated $\alpha$-effect relative to the rms value of the nonaxisymmetric velocity field.
One such measure of the dynamo efficiency $E$ is \vspace{-0.25truein} \begin{center} \begin{align} \langle\frac{\alpha}{v_{\mathrm{rms}}}\rangle &\sim E = \frac{3}{2\left(r_2^3-r_1^3\right)}\!\!\sum_{a,b}\iint{\!\!\!\mathrm{d}r \mathrm{d}\theta r^2 \sin{\theta} \sqrt{\frac{\alpha_{ab}\alpha^{ab}}{\{\mathbf{v}'\cdot\mathbf{v}'\}}}}, \label{eqn:dyneff} \end{align} \end{center} \noindent where $\{\mathbf{v}'\cdot\mathbf{v}'\}$ is the sum of the diagonal elements of the Reynolds stress tensor averaged over the duration of the simulation and over all longitudes. For the K3S simulation, this measure yields a dynamo efficiency of 70\%. If one considers only the generation of $\mathcal{E}'_\varphi$ by summing over only the components of $\alpha_{i\varphi}$, this efficiency measure yields 15\%, which is half of the 30\% value found for simulations of F-type stars \citep{augustson13}. Such a level of efficiency is what one might expect given the factor of five between the rate of mean toroidal magnetic energy generation and poloidal energy generation seen in Figures \ref{fig9}(c) and (f). 
One can utilize Equation (\ref{eqn:dyneff}) to provide a measure of the relative importance of each component of $\alpha$ as \vspace{-0.25truein} \begin{center} \begin{align} &\langle\frac{\alpha_{ij}}{v_{\mathrm{rms}}}\rangle \sim \epsilon_{ij} = \frac{3}{2 E\left(r_2^3-r_1^3\right)}\!\!\iint{\!\!\!\mathrm{d}r \mathrm{d}\theta r^2 \sin{\theta} \sqrt{\frac{\alpha_{ij}\alpha^{ij}}{\{\mathbf{v}'\cdot\mathbf{v}'\}}}} \nonumber \\ &= \begin{bmatrix} \epsilon_{(rr)} & \epsilon_{(r\theta)} & \epsilon_{(r\varphi)} \\ \epsilon_{\gamma_\varphi} & \epsilon_{(\theta\theta)} & \epsilon_{(\theta\varphi)} \\ \epsilon_{\gamma_\theta} & \epsilon_{\gamma_r} & \epsilon_{(\varphi\varphi)} \end{bmatrix} = \begin{bmatrix} 0.355 & 0.124 & 0.073 \\ 0.110 & 0.103 & 0.066 \\ 0.062 & 0.054 & 0.053 \end{bmatrix}.\label{eqn:dynnorm} \end{align} \end{center} Equation (\ref{eqn:dynnorm}) clearly indicates that the $\alpha_{(rr)}$ component is dominant, with those processes contributing to $\alpha_{(rr)}$ being about 2.9 times more efficient than the next largest component $\alpha_{(r\theta)}$ and 6.7 times more efficient than the smallest component $\alpha_{(\varphi\varphi)}$. Indeed, the upper two-by-two matrix formed by $\alpha_{(rr)}$, $\alpha_{(r\theta)}$, $\alpha_{(\theta\theta)}$, and $\gamma_\varphi$ possesses the terms that make the largest contribution to the $\alpha$-effect. Specifically, these terms encapsulate those turbulent processes that are the most efficient at converting mean poloidal magnetic field into toroidal magnetic field. However, an assessment of the magnetic energy generating capacity of the $\alpha$ and $\Omega$-effects requires a proper tensor norm of the relevant energy generation terms. Such an assessment also permits the characterization of the dynamo in the context of mean-field dynamo theory.
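The relative-efficiency ratios quoted above follow directly from the $\epsilon_{ij}$ matrix in Equation (\ref{eqn:dynnorm}); the snippet below is a quick check of that arithmetic using the tabulated values.

```python
import numpy as np

# Efficiency norms epsilon_ij tabulated in the text.
eps = np.array([[0.355, 0.124, 0.073],
                [0.110, 0.103, 0.066],
                [0.062, 0.054, 0.053]])

ratio_next = eps[0, 0] / eps[0, 1]   # dominant vs. next-largest component
ratio_min = eps[0, 0] / eps.min()    # dominant vs. smallest component
print(f"{ratio_next:.1f}, {ratio_min:.1f}")   # ~2.9 and ~6.7
```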
\subsection{Mean-Field Characterization of the Dynamo} \label{sec:chardyn} To assess which category of mean-field dynamos the K3S dynamo falls into, one can measure the relative influence of the $\alpha$ effect to that of the $\Omega$ effect. In particular, the following ratio quantifies this: \vspace{-0.25truein} \begin{center} \begin{align} \frac{\alpha_\phi}{\Omega} = \frac{3}{2\left(r_2^3-r_1^3\right)}\!\!\iint\!\!\! dr d\theta r^2 \sin{\theta} \left|\frac{\avg{B_{\varphi}}\hat{\boldsymbol{\varphi}}\boldsymbol{\cdot\nabla\times}\avg{\boldsymbol{\mathcal{E}'}}}{\lambda\avg{B_{\varphi}}\langle \mathbf{B}_P \rangle\boldsymbol{\cdot\nabla}\avg{\Omega}}\right|. \label{eqn:alpvomg} \end{align} \end{center} \noindent With this measure one can also quantify the impact of individual components of the $\alpha$-effect that generate mean toroidal magnetic field. Overall, it is found that the $\alpha$ effect is 11.9 times smaller than the $\Omega$ effect. If each of the six terms in the numerator of Equation (\ref{eqn:alpvomg}) is considered individually, it is seen that the term $\alpha_{(\theta\varphi)}$ is about 50\% larger than the average of the other terms. Moreover, the terms in the longitudinal component of the $\alpha$ effect arising from $\mathcal{E}'_\theta$ are 43\% larger than those from $\mathcal{E}'_r$. This implies that the generation of toroidal magnetic field through the $\alpha$-effect is weak, and that the terms contributing to it are dominated by the horizontal components of $\alpha$. The component $\alpha_{(\varphi\varphi)}$ effectively represents the action of helical convection on the mean toroidal magnetic field. In the context of the classical $\alpha$-$\Omega$ dynamo, that term is usually taken to be the largest if not the only term in the $\alpha$ tensor. In the K3S dynamo this term has the smallest contribution, being about three times smaller than the components with the largest efficiency norm.
However, it is also acting on the mean toroidal field, which is the largest component of the magnetic field. To quantify the efficacy of the poloidal field generation by the turbulent EMF relative to that of the toroidal magnetic field, consider the following measure \vspace{-0.25truein} \begin{center} \begin{align} \frac{\alpha_P}{\alpha_\varphi} = \frac{3}{2\left(r_2^3-r_1^3\right)}\iint dr d\theta r^2 \sin{\theta} \left|\frac{\langle \mathbf{B}_P \rangle\boldsymbol{\cdot\nabla\times}\avg{\boldsymbol{\mathcal{E}'}}}{\avg{B_{\varphi}}\hat{\boldsymbol{\varphi}}\boldsymbol{\cdot\nabla\times}\avg{\boldsymbol{\mathcal{E}'}}}\right|. \label{eqn:alpvalp} \end{align} \end{center} \noindent This average yields a ratio of 4.4 for the relative generation of poloidal and toroidal magnetic field through the $\alpha$ effect. This is consistent with the above argument that the $\alpha$ effect is relatively unimportant for the generation of the toroidal magnetic field. It is evident that the impact of the convection upon the mean magnetic fields provides the primary regenerative mechanism for the mean poloidal field at low latitudes, while the diffusion of poloidal field is the primary mechanism operating at higher latitudes (see \S\ref{sec:genpol}). In contrast, the mean toroidal magnetic fields are primarily built through the interaction of the mean poloidal magnetic field and the rotational shear. Therefore, given the presence of the differential rotation to sustain the mean toroidal magnetic field and the convective regeneration and diffusion of the poloidal magnetic field, this simulation could be characterized as an $\alpha$-$\Omega$ dynamo. However, the nonlocal spatio-temporal correlations contained in the generation terms of the magnetic field in this simulation make an exact characterization of the dynamo within the context of mean-field theory difficult.
For instance, if this dynamo were simply a kinematic $\alpha$-$\Omega$ dynamo, the Parker-Yoshimura mechanism should be sufficient to explain the equatorward propagation of magnetic field structures. \section{Conclusions and Discussion} \label{sec:conclude} The 3-D simulation K3S self-consistently exhibits five prominent features: (i) regular magnetic energy cycles during which the magnetic polarity reverses near the magnetic energy minimum; (ii) magnetic polarity cycles with a period of $\tau_P = 6.2$~years, where the equatorially antisymmetric modes of the poloidal magnetic field return to the polarity of the initial condition; (iii) the equatorward migration of longitudinal field structures during these cycles; (iv) the poleward migration of oppositely-signed flux; and (v) a ``grand minimum,'' where there is a period of relative magnetic quiescence at low-latitudes and disrupted polarity cycles after which the previous polarity cycle is recovered. These aspects bear resemblance to some of the behavior of solar magnetism, though with different time scales. Further, the simulation does not have explicit surface flux emergence, and there is a significant modulation of the differential rotation. The most prevalent properties of K3S involve both a prominent solar-like differential rotation and distinctive wreaths of magnetism. Similar properties have been realized in a broad range of simulations carried out previously with ASH \citep[e.g.,][]{brown10,brown11,augustson13,nelson13a}. The primary characteristic shared among these simulations is that they typically have a low Rossby number, leading to the formation of large-scale toroidal magnetic wreaths. Another common feature in global-scale convective dynamo simulations is that the interplay between the mean toroidal magnetic field and the angular velocity leads to a significant modulation of the differential rotation.
In K3S this contributes to the waxing and waning of the magnetic energy since the production of mean toroidal magnetic energy relies upon the shear of the differential rotation through the $\Omega$ effect. In particular, as the shear reaches a minimum due to the Lorentz forces, the magnetic fields subsequently weaken. Yet once the magnetic fields are sufficiently diminished, the convective patterns at low latitudes regain structures that can more efficiently regenerate the differential rotation. Thus the magnetic field generation starts anew. The strong feedback of the Lorentz forces on the differential rotation leads to the equatorward propagation of the magnetic fields. As the wreaths approach the equator, there is an increase in the cross-equatorial magnetic flux that permits the low-latitude convection to generate poloidal magnetic fields with the opposite polarity of the dominant magnetic wreaths. This oppositely-signed magnetic flux is then advected and diffused, eventually reaching the poles and completing a magnetic polarity reversal. This dynamo regime is distinct from mean-field models of cyclic dynamos that do not involve Lorentz-force feedbacks, and in some ways it operates in contrast to typical flux-transport dynamos. Such robust properties appear to be unaffected by the new SLD treatment for diffusion of vorticity. What SLD admits are noticeably lower magnetic Prandtl numbers $\mathrm{Pm}$, which allow a richer suite of temporal variability. The lower $\mathrm{Pm}$, combined with a strong stratification, has resulted in K3S possessing more regular cycles and reversals than obtained in previous wreath-building ASH simulations. The substantial advances in supercomputing capabilities have led to the recent development of a series of global-scale 3-D MHD simulations that all possess the interplay of convection, rotation, and magnetism.
The K3S simulation builds upon this contemporary work \citep[e.g.,][]{ghizaru10,brown10,racine11,brown11,kapyla12,augustson13,nelson13a,fan14}. The two common threads in those studies are the generation of large-scale coherent magnetic wreaths within the convection zone and the presence of a solar-like differential rotation achieved due to the low Rossby number of the convective flows. Some of those simulations exhibit regular magnetic polarity reversals. Indeed, both K3S and the Millennium simulation \citep{charbonneau14}, which also includes a tachocline at the base of the convection zone, have attained many such reversals over an extended interval. Furthermore, K3S and the Millennium simulation both show that the amplitude of the magnetic cycles appears to be regulated by the feedback of the Lorentz forces on the differential rotation \citep{racine11}. They do, however, appear to operate differently in the context of mean-field theory. Namely, the Millennium simulation seems to be akin to an $\alpha^2$-$\Omega$ dynamo \citep{racine11}, whereas in K3S the $\alpha$ effect generating toroidal magnetic fields is quite weak, leading it to be more similar to the classical $\alpha$-$\Omega$ dynamo. In a smaller subset of those simulations, including K3S, equatorward propagation of the large-scale magnetic structures is another shared feature during their magnetic cycles. In the simulations of \citet{kapyla13} and \citet{warnecke14}, a sufficiently large density stratification and the linear dynamo waves of mean-field theory appear to provide an explanation for the equatorward propagation of the magnetic fields. Similarly, K3S has a large density stratification. However, unlike a kinematic $\alpha$-$\Omega$ dynamo, the equatorward propagation arises from the nonlinear interaction of the magnetic fields and the differential rotation.
The K3S simulation exhibits an interval of 16~years during which the magnetic cycles are disrupted and the magnetic energy is reduced at low latitudes, after which the regular cycles resume. This is somewhat reminiscent of solar grand minima, when the observed solar magnetic energy is substantially reduced and sunspot emergence is largely interrupted \citep[e.g.,][]{ribes93,mccracken07}. As such, the disruption of regular cycling in K3S has been loosely identified as a grand minimum, since this is the first appearance of such long-sought behavior in a 3-D global-scale simulation. The likely mechanism that leads to intermittency such as this grand minimum in K3S is the interplay of symmetric and antisymmetric dynamo families. During the typical magnetic energy cycles, the antisymmetric dynamo family is dominant throughout the majority of the cycle, in which the magnitude of the odd-ordered modes of the poloidal magnetic field is much larger than that of the even-ordered modes. In contrast, during the grand minimum, the even modes can be equal to, and at times greater than, the magnitude of the odd modes. The increased symmetry about the equator of the poloidal field disrupted the ability of the dynamo to fully reverse the antisymmetric modes and led to a weaker dynamo state during the grand minimum, in which the low-latitude volume-integrated magnetic energy decreased by about 50\%. What is particularly striking is that during the magnetic energy cycle preceding the grand minimum, magnetic structures in the northern and the southern hemispheres become highly asynchronous, leading to the strong excitation of many symmetric modes in the poloidal magnetic field. It also appears that the typical phasing of the magnetic field between the deep convection zone (CZ) and that of the upper CZ is disrupted. Together those losses of phase coherence admit dynamo action from both the symmetric and the antisymmetric modes, which heralds the entrance of the grand minimum.
Eventually, the symmetric modes decay in amplitude, allowing the antisymmetric family and thus the dipole mode to reassert its dominance. In earlier studies of simpler though nonlinear mean-field dynamo models, there is a clear variability between the dominance of antisymmetric and symmetric dynamo modes \citep[e.g.,][]{tobias97,moss00,brooke02,bushby06}. Such variability can lead to intervals of deep minima. Moreover, it was seen that the interplay between the dynamo families is critical to achieve intermittent and aperiodic dynamo states. Those studies have further shown that low magnetic Prandtl numbers favor solutions with extended intervals of minimal activity. This provides some background for how the lower $\mathrm{Pm}$ achieved with the SLD treatment in K3S may lead to the appearance of deep minima and the coupling of the symmetric and antisymmetric dynamo families. Though the K3S simulation does share robust features with other global-scale convective dynamos, it must be stated that the results do depend on the dissipation through the effective values of the Reynolds and magnetic Reynolds numbers, Re and Rm. Yet there are currently no global convective dynamo simulations in the known literature that demonstrate convergence with increasing Re and Rm. Since the SLD diffusion is linked with the spatial resolution, this sensitivity to Re and Rm translates to a dependence on resolution. In particular, reducing the effective viscous diffusion using the SLD method likely permitted this solution to enter an interesting parameter regime. Further simulations are being carried out to characterize parameter sensitivities, including the identification of the factors that set the cycle period and the search for asymptotic behavior. These results will be presented in future papers. The SLD method has been shown to converge for solar surface magnetoconvection simulations \citep{rempel12}, but large-scale dynamos may be more subtle.
Nevertheless, the K3S simulation lies in an interesting parameter regime that exhibits a novel, self-consistent, cyclic convective dynamo with intermittent cycle modulation that may share some dynamic features with dynamos in more extreme parameter regimes and, indeed, in stars. Although this model star rotates three times faster than the Sun, and although large portions of the Sun's vast range of scales are parameterized, some of the features of the dynamo that may be active within the Sun's interior have been realized in this global-scale ASH simulation. For instance, the period of the magnetic polarity cycle in K3S is 6.2 years. This period is about 243 times the rotation period, which can be compared to the Sun's ratio of about 287. Under a linear scaling of the rotation rate, a comparable solar simulation could have a half-period of 9.3 years, close to the sunspot cycle period of the Sun. So, although this model star rotates more rapidly than the Sun and has a shorter cycle period, the ratio of the cycle period to the rotation period is not much different. In particular, its rapid rotation helped to put it into an interesting Rossby number regime that permits a solar-like differential rotation (fast equator, slow poles), and once there it produced a cycle period that may be non-dimensionally comparable to that of the Sun. Indeed, the increasingly frequent emergence of 3-D convective dynamo simulations that exhibit solar-like dynamo features, such as this one, is reshaping the understanding of the physics of convective dynamos. Thus, future global-scale dynamo simulations promise new insights into solar and stellar dynamos as supercomputing resources continue to advance. \section*{Acknowledgments} The authors wish to thank an anonymous referee for extensive and helpful comments. A singular thanks is due to Nicholas Featherstone for his effort in greatly improving the computational efficiency and scaling of the ASH code.
The authors also thank Bradley Hindman, Mark Rast, Matthias Rempel, and Regner Trampedach for useful conversations. This research was supported by NASA through the Heliophysics Theory Program grant NNX11AJ36G, with additional support for Augustson first through the NASA NESSF program by award NNX10AM74H and second through the NCAR Advanced Study Program. NCAR is supported by the NSF. A.S. Brun acknowledges financial support through ANR TOUPIES, CNES Solar Orbiter grant and INSU/PNST. The computations were primarily carried out on Pleiades at NASA Ames with SMD grants g26133 and s0943, and also used XSEDE resources for analysis. This work further utilized the Janus supercomputer, which is supported by the NSF award CNS-0821794 and the University of Colorado Boulder.
\section*{Introduction} In 1976, Chouinard gave a general formula for the injective dimension of a module, when it is finite (cf. \cite{Chouinard}). \\ {\bf Theorem.} Let $M$ be an $R$-module of finite injective dimension. Then $$\mbox{inj.dim}\,_R M= \sup \{ \mbox{depth}\, R_\mathfrak{p} - \mbox{width}\,_{R_\mathfrak{p}}M_{\mathfrak{p}}\, | \, \mathfrak{p} \in \mbox{Spec}\,(R) \}.$$ Recall that for a module $M$ over a local ring $R$, $\mbox{width}\,_R M$ is defined as $\inf \{ \, i \, | \, \mbox{Tor}\,_i^R(k,M) \neq 0\}$, where $k$ is the residue field of $R$. This gives a general formula from which the so-called Bass formula can be concluded, namely, the injective dimension of a finite module over a local ring is either infinite or equals the depth of the base ring. Our Theorem \ref{GCH} extends Chouinard's formula to Gorenstein injective dimension. \\ {\bf Theorem.} Let $R$ be a noetherian ring and $M$ an $R$-module of finite Gorenstein injective dimension. Then $$\mbox{Gid}\,_R M = \sup \{\mbox{depth}\, R_{\mathfrak{p}}- \mbox{width}\,_{R_\mathfrak{p}}M_\mathfrak{p} \, | \, \mathfrak{p} \in \mbox{Spec}\,(R)\}.$$ \section{Main Theorem} \begin{defn} An $R$-module $G$ is said to be {\rm Gorenstein injective} if and only if there exists an exact complex of injective $R$-modules, $$I=\cdots \to I_2\longrightarrow I_1\longrightarrow I_0\longrightarrow I_{-1}\longrightarrow I_{-2}\longrightarrow\cdots$$ such that the complex $\mbox{Hom}\,_R(J,I)$ is exact for every injective $R$-module $J$ and $G$ is the kernel in degree 0 of $I$. The {\rm Gorenstein injective dimension} of an $R$-module $M$, $\mbox{Gid}\,_R (M)$, is defined to be the infimum of integers $n$ such that there exists an exact sequence $$0 \to M \to G_0 \to G_{-1} \to \cdots \to G_{-n} \to 0$$ with all $G_i$'s Gorenstein injective. \end{defn} \begin{thm}\label{GCH} Let $R$ be a commutative noetherian ring and $M$ an $R$-module of finite Gorenstein injective dimension.
Then $$\mbox{Gid}\,_R(M)=\sup \{ \mbox{depth}\, R_\mathfrak{p}-\mbox{width}\,_{R_\mathfrak{p}}M_\mathfrak{p} \, | \, \mathfrak{p} \in \mbox{Spec}\,(R) \}.$$ \end{thm} \begin{proof} First assume that $\mbox{Gid}\,_R(M)=0$. By definition, there is an exact sequence $$E_\bullet: \, \, \cdots \to E_1 \to E_0 \to M \to 0$$ such that every $E_i$ is injective. Set $K_i= \ker(E_{i-1} \to E_{i-2})$. \\ For any $\mathfrak{p} \in \mbox{Spec}\,(R)$ and every $R_\mathfrak{p}$-module $T$, we have $\mbox{Ext}\,^i_{R_\mathfrak{p}}(T,M_\mathfrak{p}) \cong \mbox{Ext}\,^{i+t}_{R_\mathfrak{p}}(T,(K_t)_\mathfrak{p})$ for any two positive integers $i$ and $t$. Hence using \cite[5.3(c)]{CFF} we get $$\begin{array}{ll} 0 &= \sup \{ \, i \, |\, \mbox{Ext}\,_{R_\mathfrak{p}}^i(T,M_\mathfrak{p}) \neq 0, \, \, \mathrm{for} \, \, \mathrm{some} \, \, R_\mathfrak{p} \mathrm{-module} \, \, T \, \, \mathrm{with} \, \, \mbox{proj.dim}\,_{R_\mathfrak{p}} T < \infty \} \\ & \geq \mbox{depth}\, R_\mathfrak{p}-\mbox{width}\,_{R_\mathfrak{p}}M_\mathfrak{p}. \end{array}$$ In addition, if $\mathfrak{p}$ is such that $\dim R/\mathfrak{p}=\dim_RM$ then, using \cite[5.3(c)]{CFF} again, we get $\mbox{depth}\, R_\mathfrak{p}-\mbox{width}\,_{R_\mathfrak{p}}M_\mathfrak{p}=0$. \\ Therefore, $\sup \{\mbox{depth}\, R_\mathfrak{p}-\mbox{width}\,_{R_\mathfrak{p}}M_\mathfrak{p} \, | \, \mathfrak{p} \in \mbox{Spec}\,(R)\}=0$ for a Gorenstein injective module $M$. Now assume that $n=\mbox{Gid}\,_RM >0$. By \cite[2.16]{CFH}, there exists a short exact sequence $$0 \to K \to L \to M \to 0,$$ where $K$ is Gorenstein injective and $\mbox{inj.dim}\,_RL=\mbox{Gid}\,_RM=n$. 
\\ Since $K$ is Gorenstein injective, the case treated above gives $$0=\sup\, \{ \, \mbox{depth}\, R_\mathfrak{p}-\mbox{width}\,_{R_\mathfrak{p}}K_\mathfrak{p} \, | \, \mathfrak{p} \in \mbox{Spec}\,(R)\}.$$ On the other hand, by Chouinard's equality \cite[3.1]{C}, we have $$\mbox{inj.dim}\,_RL =\sup \{\mbox{depth}\, R_\mathfrak{p}-\mbox{width}\,_{R_\mathfrak{p}}L_\mathfrak{p} \, | \, \mathfrak{p} \in \mbox{Spec}\,(R) \}.$$ For any $\mathfrak{p} \in \mbox{Spec}\,(R)$, we denote $k(\mathfrak{p})=R_\mathfrak{p}/{\mathfrak{p} R_\mathfrak{p}}$. Then, for any $\mathfrak{p} \in \mbox{Spec}\,(R)$, the exact sequence $0 \to K_\mathfrak{p} \to L_\mathfrak{p} \to M_\mathfrak{p} \to 0$ induces the long exact sequence $$\cdots \to \mbox{Tor}\,_i^{R_\mathfrak{p}}(k(\mathfrak{p}),K_\mathfrak{p}) \to \mbox{Tor}\,_i^{R_\mathfrak{p}}(k(\mathfrak{p}),L_\mathfrak{p}) \to \mbox{Tor}\,_i^{R_\mathfrak{p}}(k(\mathfrak{p}),M_\mathfrak{p}) \to \mbox{Tor}\,_{i-1}^{R_\mathfrak{p}}(k(\mathfrak{p}),K_\mathfrak{p}) \to \cdots$$ This sequence gives rise to the following inequalities: $$\begin{array}{c} \mbox{width}\,_{R_\mathfrak{p}}L_\mathfrak{p} \geq \min\{\mbox{width}\,_{R_\mathfrak{p}}M_\mathfrak{p},\mbox{width}\,_{R_\mathfrak{p}}K_\mathfrak{p}\} \, \, \mathrm{and}\\ \mbox{width}\,_{R_\mathfrak{p}}M_\mathfrak{p} \geq \min\{\mbox{width}\,_{R_\mathfrak{p}}L_\mathfrak{p},\mbox{width}\,_{R_\mathfrak{p}}K_\mathfrak{p}+1\} \end{array}$$ If $\mathfrak{p} \in \mbox{Spec}\,(R)$ is such that $\mbox{width}\,_{R_\mathfrak{p}}K_\mathfrak{p} > \mbox{width}\,_{R_\mathfrak{p}}M_\mathfrak{p}$ then, by the above inequalities, $\mbox{width}\,_{R_\mathfrak{p}}M_\mathfrak{p} = \mbox{width}\,_{R_\mathfrak{p}}L_\mathfrak{p}.$ \\ But for any $\mathfrak{p}$ with $\mbox{width}\,_{R_\mathfrak{p}}K_\mathfrak{p} \leq \mbox{width}\,_{R_\mathfrak{p}}M_\mathfrak{p}$, we also have $\mbox{width}\,_{R_\mathfrak{p}}K_\mathfrak{p} \leq \mbox{width}\,_{R_\mathfrak{p}}L_\mathfrak{p}$.
Thus $$\begin{array}{c} \mbox{depth}\, R_\mathfrak{p} - \mbox{width}\,_{R_\mathfrak{p}}L_\mathfrak{p} \leq \mbox{depth}\, R_\mathfrak{p} - \mbox{width}\,_{R_\mathfrak{p}}K_\mathfrak{p} \leq 0 \, \, \mathrm{and}\\ \mbox{depth}\, R_\mathfrak{p} - \mbox{width}\,_{R_\mathfrak{p}}M_\mathfrak{p} \leq \mbox{depth}\, R_\mathfrak{p} - \mbox{width}\,_{R_\mathfrak{p}}K_\mathfrak{p} \leq 0. \end{array}$$ Therefore we have $$\begin{array}{r} \mbox{Gid}\,_R(M)=\mbox{inj.dim}\,_R(L)=\\ \sup\{\mbox{depth}\, R_\mathfrak{p}-\mbox{width}\,_{R_\mathfrak{p}}L_\mathfrak{p} \, | \, \mathfrak{p} \in \mbox{Spec}\,(R)\}= \\ \sup\{\mbox{depth}\, R_\mathfrak{p}-\mbox{width}\,_{R_\mathfrak{p}}L_\mathfrak{p} \, | \, \mathfrak{p} \, \, \mathrm{with} \, \, \mbox{width}\,_{R_\mathfrak{p}}M_\mathfrak{p} < \mbox{width}\,_{R_\mathfrak{p}}K_\mathfrak{p}\}= \\ \sup\{\mbox{depth}\, R_\mathfrak{p}-\mbox{width}\,_{R_\mathfrak{p}}M_\mathfrak{p} \, | \, \mathfrak{p} \, \, \mathrm{with} \, \, \mbox{width}\,_{R_\mathfrak{p}}M_\mathfrak{p} < \mbox{width}\,_{R_\mathfrak{p}}K_\mathfrak{p}\}= \\ \sup\{\mbox{depth}\, R_\mathfrak{p}-\mbox{width}\,_{R_\mathfrak{p}}M_\mathfrak{p} \, | \, \mathfrak{p} \in \mbox{Spec}\,(R)\}. \end{array}$$ \end{proof} \begin{cor} Let $M$ be an $R$-module and $\mathfrak{p} \subseteq \mathfrak{q}$ prime ideals of $R$. If $\mbox{Gid}\,_{R_\mathfrak{p}}M_\mathfrak{p} < \infty$ then $$\mbox{Gid}\,_{R_\mathfrak{p}}M_\mathfrak{p} \leq \mbox{Gid}\,_{R_\mathfrak{q}}M_\mathfrak{q}. $$ \end{cor}
\section{Introduction} Let $A$ be a $d\times d$ matrix over $\mathbb{Z}$ (the integers), and assume that its eigenvalues $\lambda$ satisfy $|\lambda|>1$. In particular $A$ is assumed invertible. Let $\mathbb{Z}^d[A^{-1}]$ be the associated discrete group obtained as an inductive limit $$\mathbb{Z}^d\stackrel{A}{\longrightarrow}\mathbb{Z}^d\stackrel{A}{\longrightarrow}\mathbb{Z}^d\stackrel{A}{\longrightarrow}\dots$$ i.e., $\mathbb{Z}^d[A^{-1}]=\cup_{k=0}^\infty A^{-k}\mathbb{Z}^d$, and let $\mathcal S_A$ denote the corresponding dual compact abelian group, where duality is in the sense of Pontryagin; $\mathcal S_A$ is a {\it solenoid}. Let $A^T$ be the transposed matrix, and let $i:\mathbb{Z}^d[A^{-1}]\rightarrow\mathbb{R}^d$ be the natural embedding (of groups), so $i$ is a homomorphism of the respective additive groups. Now let $\hat i:\mathbb{R}^d\rightarrow\mathcal S_A$ be the dual homomorphism. Hence $\hat i$ embeds $\mathbb{R}^d$ as a subgroup of the compact solenoid $\mathcal S_A$. We will need this generality in an application to the analysis of wavelet multiresolutions. Here we use $\widehat{\mathbb{R}^d}=\mathbb{R}^d$, i.e., $\mathbb{R}^d$ is its own Pontryagin dual. In a special case of this construction, for $d = 1$, our embedding $\hat i$ corresponds to ergodic theoretic flows on compact spaces studied in a variety of contexts in dynamics. Note $\hat i(A^Tx)=\sigma_A(\hat i(x))$, $x\in\mathbb{R}^d$, where $\sigma_A:\mathcal S_A\rightarrow\mathcal S_A$ is the endomorphism induced by $A$. Motivated by classical number theoretic problems in digital representations of fractions, we explore here symbolic representations of points in $\mathcal S_A$. Our analysis uses an extension of George Mackey's semidirect product construction from representation theory (section 3) combined with a study of a family of combinatorial cycles (section 4.1). 
In section 6 we show that our encoding theorems (sections 4 and 5) apply to the construction of wavelet multiresolutions, i.e., for generalized wavelet constructions where scaling in $\mathbb{R}^d$ corresponds to the matrix multiplication $x\mapsto Ax$, $x\in\mathbb{R}^d$; and the corresponding $\mathbb{Z}$-action $\mathbb{Z}\times\mathbb{R}^d\ni(k,x)\mapsto A^kx\in\mathbb{R}^d$. {\bf Motivation from physics: The renormalization group.} The idea of scale invariance is old. Its best-known modern formulation in mathematics takes the form of iterated function systems (IFS), e.g., \cite{Hut81}; in physics it takes the form of a renormalization group (RG). But scaling arguments are commonplace in pure (e.g., wavelets) and applied mathematics; for example in attempts at explaining turbulence. The renormalization group makes its appearance in physics in different guises, often as a mathematical trick to get rid of infinities, for example in quantum field theory, see e.g., \cite{Fe87, Bal91}. As a pure technique, it reached maturity with, for example, the papers \cite{Fe87, Fe94, KW88, KW91}, among others. The technique was developed also in quantum electrodynamics by R. Feynman, K. Wilson, and others. The physicists devised theories of mass and charge renormalization. Old-style renormalization group (RG) techniques in physics have run into difficulties with the non-renormalizability of gravity. Still they are used in various guises as tools in solid state physics, as they often get around divergence difficulties with the use of perturbation theory. As with iterated function systems (IFS), renormalization groups in physics attempt to describe infinite systems in terms of block variables, i.e., some magnitudes which describe the average behavior of the various constituent blocks; this is often approximately true in practice, and good enough to a first approximation. This more or less amounts to finding the long-term behavior of a suitable RG transformation.
When iterated many times, this RG transformation leads to a certain number of fixed points analogous to those seen in (non-contractive) IFSs. It was suggested in the 1970s by Don Knuth and others that there is an intriguing geometry behind computations in a positional number system. As has been known since Euclid, representation of numbers in a fixed basis entails expansions in powers of the chosen base, say $b$. There is then a subset $\mathcal D$ of the integers $\mathbb{Z}$ of cardinality $|b|$ such that the corresponding ``digital'' expansion of real numbers is encoded by finite or infinite words in the ``alphabet'' $\mathcal D$. In fact Knuth \cite{Knu76} stresses that for a fixed $b$, there are many choices of digit sets $\mathcal D$ which yield a positional number system in a sense which is made precise. Similarly Knuth suggested the use of a matrix in place of $b$. In sections 5 and 6 below, we outline both the initial suggestion and the relevant literature, starting with \cite{Odl78}; and we present our main results. The first sections of our paper address tools from representation theory central to our problem. Both the choice of the base $b$ and the set of digits allow for a great deal of freedom, even if we restrict to numbers on the real line. Don Knuth suggested that this idea works in higher dimensions, i.e., in encoding points in $\mathbb{R}^d$ this way. The case $d = 2$ of course includes positional representations of complex numbers, and associated computer-generated images in the plane. Increasing the dimension further suggests using, as scaling basis, a fixed $d$ by $d$ expansive matrix over $\mathbb{Z}$, say $A$, in place of the base number $b$, and a subset $\mathcal D$ of $\mathbb{Z}^d$ of cardinality $|\det A|$ for digits.
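To make the one-dimensional flexibility concrete, here is a hedged sketch (our illustration, not from the paper) of a nonstandard positional system with base $b=-2$ and digit set $\mathcal D=\{0,1\}$, under which every integer has a unique finite expansion; the function names are our own choices.

```python
# Illustrative sketch (assumption: base -2 with digits {0, 1}, the classical
# "negabinary" system); not code from the paper.

def to_digits(n, b=-2, digits=(0, 1)):
    """Return the expansion of n in base b, least significant digit first."""
    out = []
    while n != 0:
        r = n % abs(b)              # candidate digit in {0, ..., |b|-1}
        if r not in digits:
            raise ValueError("digit set is not complete for this base")
        out.append(r)
        n = (n - r) // b            # exact division: n - r is a multiple of b
    return out

def from_digits(ds, b=-2):
    """Evaluate a digit string back to an integer."""
    return sum(d * b**i for i, d in enumerate(ds))

# Round-trip check over a window of integers.
for n in range(-50, 51):
    assert from_digits(to_digits(n)) == n
```

Note that, unlike ordinary binary, this system needs no sign: negative integers get finite expansions in the same alphabet, which is the kind of freedom in the pair $(b,\mathcal D)$ discussed above.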
This leads to the puzzling question: ``In this geometric formulation, what then are the fractions as subsets of $\mathbb{R}^d$?'' Early computer calculations by Knuth in 2D suggested that ``fractions'' take the form of ``dragon-like'' compact sets, ``Twin Dragons'' etc. Knuth's idea was taken up in later papers (by other authors, e.g., Lagarias and Wang) under the name affine Iterated Function Systems (IFSs), and the set of fractions associated to a fixed pair $(A, \mathcal D)$ in $d$ real dimensions has been made precise in the form of attractors for the IFS defined from $(A, \mathcal D)$, see e.g., \cite{Hut81} and \cite{BrJo99}. For points in Euclidean space, we introduce matrix scaling and digit sets. Our aim is to study the interplay between associated spectra and geometry. For a fixed matrix scaling and digit set we introduce a positional number system where the basis for our representation is a fixed $d$ by $d$ matrix $A$ over $\mathbb{Z}$. Specifically, a pair $(A,\mathcal D)$ is given, the matrix $A$ assumed expansive, and a finite set $\mathcal D$ chosen as a complete digit set, i.e., the points in $\mathcal D$ are in bijective correspondence with the finite group $\mathbb{Z}^d/A\mathbb{Z}^d$. Such higher dimensional ``number systems'' allow more flexibility than the classical one, introducing a computational device for the study of, for example, exotic tilings, wavelet sets and fractals. These are geometric structures in $\mathbb{R}^d$, studied recently in \cite{BMM99, Cho07, JKS07, Shu03, Hut81, BHS05, FMM06, Rud89}. We take advantage of a natural embedding of $\mathbb{R}^d$ in an associated solenoid, and we obtain an explicit solenoid-encoding of geometries in $\mathbb{R}^d$, giving insight into notions of redundancy, and offering a computational tool. This expanded view also introduces novelties such as non-commutativity into encoding.
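The completeness condition just stated, $\mathcal D$ in bijective correspondence with $\mathbb{Z}^d/A\mathbb{Z}^d$, can be tested mechanically in small examples. The following is a hedged Python sketch (our illustration; the concrete matrix and digit sets are our choices, and exact rational arithmetic avoids rounding issues); it checks that $|\mathcal D|=|\det A|$ and that distinct digits lie in distinct residue classes.

```python
from fractions import Fraction

# Illustrative sketch, d = 2 only; the matrices and digit sets are assumptions
# for the example, not data from the paper.

def in_lattice(A, v):
    """True iff v lies in A Z^2, i.e. A^{-1} v has integer entries."""
    a, b, c, e = (Fraction(x) for x in (A[0][0], A[0][1], A[1][0], A[1][1]))
    det = a*e - b*c
    x = ( e*v[0] - b*v[1]) / det        # Cramer's rule, exact rationals
    y = (-c*v[0] + a*v[1]) / det
    return x.denominator == 1 and y.denominator == 1

def is_complete_digit_set(A, D):
    det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
    if len(D) != abs(det):
        return False
    # distinct digits must lie in distinct residue classes mod A Z^2
    return all(not in_lattice(A, (p[0]-q[0], p[1]-q[1]))
               for i, p in enumerate(D) for q in D[:i])

A = ((1, -1), (1, 1))                    # |det A| = 2
assert is_complete_digit_set(A, [(0, 0), (1, 0)])
assert not is_complete_digit_set(A, [(0, 0), (1, 1)])   # (1,1) = A(1,0) lies in A Z^2
```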
We give an explicit geometric representation and encoding for a pair $(A,\mathcal D)$ (Theorem \ref{prop3_9}), i.e., an encoding with specific infinite words in letters from $\mathcal D$. Our positional ``number representation'' takes the form of an explicit IFS-encoding of points in a compact solenoid $\mathcal S_A$ associated with the pair $(A,\mathcal D)$. A crucial part (Theorem \ref{thenccycl}) is played by certain extreme cycles in the integer lattice $\mathbb{Z}^d$ for the initial $(A,\mathcal D)$-IFS. Using these cycles we write down a formula for the two maps which do the encoding as well as the decoding in our positional $\mathcal D$-representation. We will need basic tools from spectral theory (e.g., \cite{Arv02}), but our aim is computational, still using operator algebraic tools, analogous to those used in the analysis of graphs and generalized multiresolutions, see e.g., \cite{BMM99, Cho07, JKS07, Shu03}. Our use of iterated function systems follows conventions from \cite{Hut81, BHS05, FMM06}. When an invertible matrix $A$ and a finite subset $\mathcal D$ in $\mathbb{R}^d$ are given, we consider an associated finite set of affine mappings $\tau_d$, indexed by points $d$ in $\mathcal D$, as follows, $$\tau_d(x):= A^{-1}(x + d).$$ Under suitable conditions on the pair $(A,\mathcal D)$ (see Definition \ref{ifs}), repeated iteration of the combined system $(\tau_d)_{d \in\mathcal D}$ yields certain limit concepts. These take the precise form either of certain compact subsets of $\mathbb{R}^d$ (attractors) or of limiting measures, so-called equilibrium measures. In this formulation, the theory was made precise by J. Hutchinson in the paper \cite{Hut81}, and this gave rise to what is now known as affine iterated function systems (IFSs). As is known, e.g., \cite{Hut81}, to make the limit notions precise, one introduces metrics on families of compact subsets in $\mathbb{R}^d$, or on families of probability measures.
(In their primitive form these metrics generalize the known Hausdorff distance.) In harmonic analysis a subclass of the IFSs have been studied extensively by R. Strichartz and his co-authors, see \cite{Str05, Str06}. The emphasis there is on discrete potential theory, while our present focus is on tiling and coding questions. However much of our motivation derives from harmonic analysis. It was also realized that wavelet algorithms can be put into this framework, and for fixed $(A, \mathcal D)$, one is led to ask for Haar wavelets, and to wavelet sets. In this paper we show that there is a representation theoretic framework for these constructions involving a certain discrete group that is studied in algebra under the name of the Baumslag-Solitar group. In addition to our identifying wavelet sets in a solenoid encoding (section 4), in section 6 we further show that the positional number representation for $\mathbb{R}^d$ associated to a given pair $(A,\mathcal D)$ takes an algorithmic form involving both ``fractions'' and ``integer points''. Here the fractions are represented by a Hutchinson attractor $X(A^T,\mathcal D)$ and the ``$(A,\mathcal D)$-integers'' by a certain lattice $\Gamma$ which makes $X(A^T,\mathcal D)$ tile $\mathbb{R}^d$ by $\Gamma$ translations. To compute this lattice $\Gamma$ which makes an $X(A^T,\mathcal D)$ tiling, we use a certain spectral duality (Lemma \ref{lemfuglede}). Finally (section 6.2) the last mentioned duality is illustrated with specific planar examples, where both cycles and lattices are worked out. \section{Iterated Function Systems} The setting of this paper is a fixed $d$ by $d$ matrix $A$ over the integers $\mathbb{Z}$, satisfying a certain spectral condition, and its relation to the rank-$d$ lattice $\mathbb{Z}^d$ in $\mathbb{R}^d$. 
Traditional wavelet bases in $L^2(\mathbb{R}^d)$ are generated by a distinguished finite family of functions in $L^2(\mathbb{R}^d)$ and the operations of translation by the rank-$d$ lattice $\mathbb{Z}^d$ and scaling by the powers $A^j$, $j \in \mathbb{Z}$. But there are similar wavelet constructions, super wavelets, in other Hilbert spaces which we explore here. See also \cite{BJMP05,DuJo06,DuJo07}. From the initial matrix $A$ we form the discrete group $\bz^d[A^{-1}]$ generated by the powers $A^j$, $j\in\mathbb{Z}$, applied to $\mathbb{Z}^d$; and the compact dual solenoid group $\mathcal S_A$. This solenoid is related to $[0,1)^d \times \Omega$ where $\Omega$ is a compact infinite product of a fixed finite alphabet. But the two spaces $\mathcal S_A$ and $[0,1)^d \times \Omega$ are different, and their relationships are explored below. First recall that matrix multiplication by $A$ induces an automorphism $\sigma_A$ in $\mathcal S_A$. We are interested in periodic points for this action, and in a certain family of extreme orbits called cycles. As is well known, $\mathbb{R}^d$ is naturally embedded in $\mathcal S_A$. Here we continue the study started in \cite{Dut06} of the support of the representations associated to super-wavelets. Starting with an embedding $\hat i$ of $\mathbb{R}^d$ into the solenoid, we have some periodic characters $\chi_i$ in $\mathcal S_A$, associated to the cycles. Then we show that the representation is supported on the union of the sets $\chi_i \hat i(\mathbb{R}^d)$, where the multiplication here is just the multiplication in $\mathcal S_A$. We combine Mackey's theory of induced representations with the analysis of $\mathcal S_A$-cycles. In this connection, we find a dynamical obstruction for embeddings as follows. Intuitively, one wants to encode the solenoid into a symbol space $[0,1)^d \times \Omega$.
This can be a problem when one is dealing with matrices, as compared to the dyadic scaling in one dimension which is the traditional context for wavelet analysis. The reason for the obstruction is that the candidate $[0,1)^d$ is not invariant under the inverse branches $\tau_d(x)=(A^T)^{-1}(x+d)$, $d\in\mathcal D$, where $\mathcal D$ is a chosen finite set of vectors in $\mathbb{R}^d$. These maps serve as inverse branches to the action by $A^T$ on $\mathbb{R}^d/\mathbb{Z}^d$. The other candidate, different from $[0,1)^d$, is the Hutchinson attractor $X(A^T,\mathcal D)$ for the maps $\tau_d$, a compact subset of $\mathbb{R}^d$. But $X(A^T,\mathcal D)$ might not tile $\mathbb{R}^d$ by $\mathbb{Z}^d$. So one question is: Can one choose $\mathcal D$ to be a complete set of representatives for the finite quotient group $\mathbb{Z}^d/ A^T\mathbb{Z}^d$ such that the attractor $X(A^T,\mathcal D)$ of $\{\tau_d\}$ tiles $\mathbb{R}^d$ by $\mathbb{Z}^d$? If not, then how should one choose $A$ such that this is possible? Or perhaps one must replace $A$ with an iterate $A^p$ of $A$ for a suitable $p$? The reader may find our use of the pair consisting of the scaling matrix $A$ and its transpose $A^T$ confusing. It is unavoidable and is dictated by our essential use of Fourier duality: if $A$ is acting in $d$-space, then $A^T$ is acting in the dual vector variable (say frequency), see e.g., Lemma \ref{lemfuglede}. In deciding tiling properties for $X = X(A^T,\mathcal D)$ our use of spectral theory is essential, as there seems to be no direct way of attacking the tiling problem for $X$.
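The inverse branches $\tau_d$ above can be iterated numerically: choosing digits at random (the ``chaos game'') produces points that accumulate on the Hutchinson attractor. The following hedged sketch uses a twin-dragon-type matrix and digit set of our own choosing, not one fixed by the paper.

```python
import random

# Illustrative sketch: random iteration of tau_d(x) = A^{-1}(x + d) for the
# assumed pair A = [[1, -1], [1, 1]] (|det A| = 2) and digits {(0,0), (1,0)}.

A_INV = ((0.5, 0.5), (-0.5, 0.5))        # inverse of A = [[1, -1], [1, 1]]
DIGITS = [(0.0, 0.0), (1.0, 0.0)]        # complete residues mod A Z^2

def tau(d, x):
    """One inverse branch: tau_d(x) = A^{-1}(x + d)."""
    y = (x[0] + d[0], x[1] + d[1])
    return (A_INV[0][0]*y[0] + A_INV[0][1]*y[1],
            A_INV[1][0]*y[0] + A_INV[1][1]*y[1])

def chaos_game(n_iter=20000, burn_in=100, seed=0):
    """Random iteration whose orbit accumulates on the attractor X(A, D)."""
    rng = random.Random(seed)
    x = (0.0, 0.0)
    pts = []
    for i in range(n_iter):
        x = tau(rng.choice(DIGITS), x)
        if i >= burn_in:
            pts.append(x)
    return pts

pts = chaos_game()
# Each tau_d is a contraction (operator norm of A^{-1} is 1/sqrt(2)),
# so the orbit stays in a bounded set.
assert all(abs(p[0]) < 3 and abs(p[1]) < 3 for p in pts)
```

Plotting `pts` would show a dragon-like compact set of the kind mentioned in the introduction; whether such a set tiles $\mathbb{R}^2$ by $\mathbb{Z}^2$ is exactly the question posed above.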
While initially Knuth's analysis \cite{Knu69} of what is now called affine Iterated Function Systems (IFS) was motivated by the desire to introduce geometry into algorithms for general digit sets in positional number systems, the idea of turning ``digits'' into geometry and tiling questions was followed up later by others, e.g., Odlyzko \cite{Odl78}, Hutchinson \cite{Hut81}, Bratteli-Jorgensen \cite{BrJo99}, and Lagarias-Wang \cite{LaWa96a, LaWa96b, LaWa96c, LaWa97, LaWa00}. In \cite{LaWa96a} the authors suggest that the tiling issues implied by the geometric positional ``number'' systems are directly connected with wavelet algorithms. In particular they pointed out that, for a fixed choice of $A$ and $\mathcal D$, the corresponding attractor $X(A^T,\mathcal D)$ as described above does not always tile $\mathbb{R}^d$ by translation vectors from the unit-grid lattice $\mathbb{Z}^d$. When it does, we say that $X(A^T,\mathcal D)$ is a Haar wavelet. The term ``Haar wavelet'' is used because the corresponding indicator function is a scaling function (father function) for an ONB wavelet system in $L^2(\mathbb{R}^d)$. The authors of \cite{LaWa96a} showed that in 2D, every expansive 2 by 2 matrix over $\mathbb{Z}$ has at least one ``digit'' set $\mathcal D$ such that $X(A^T,\mathcal D)$ is a Haar wavelet. It was later proved that in 5D, not every expansive 5 by 5 matrix over $\mathbb{Z}$ can be turned into a Haar wavelet; i.e., for such a matrix $A$, there is no choice of $\mathcal D$ for which $X(A^T,\mathcal D)$ is a Haar wavelet. The fact that there are exotic 5 by 5 expansive matrices $A$ over $\mathbb{Z}$, i.e., $A$ in $\mathcal M_5(\mathbb{Z})$ for which no digit set $\mathcal D$ may be found such that $X(A^T,\mathcal D)$ makes a $\mathbb{Z}^5$-tiling of $\mathbb{R}^5$, was worked out in \cite{LaWa96c, LaWa97, HLR02, HeLa04}.
By digit set we mean a subset $\mathcal D$ of $\mathbb{Z}^5$, in bijective correspondence with $\mathbb{Z}^5/A^T\mathbb{Z}^5$. Such exotic matrices $A$ are said not to allow Haar wavelets. The question came up after Lagarias-Wang \cite{LaWa97} showed that every expansive $A$ in $\mathcal M_2(\mathbb{Z})$ allows digit sets in $\mathbb{Z}^2$ which yield $\mathbb{Z}^2$-tilings, i.e., they allow Haar wavelets. The aim of this paper is to revisit the geometry of sets $X(A^T,\mathcal D)$ in light of recent results on IFS involving dynamics and representation theory, see e.g., \cite{Dut06, DuJo06a, DuJo06b, DuJo06c, DuJo07}. \section{Definitions and notations} While standard wavelet bases built on wavelet filters and on a fixed expansive $d$ by $d$ matrix over $\mathbb{Z}$, say $A$, refer to the Hilbert space $L^2(\mathbb{R}^d)$, many naturally occurring wavelet filters \cite{Dut06} suggest Hilbert spaces other than $L^2(\mathbb{R}^d)$, in fact Hilbert spaces containing a copy of $L^2(\mathbb{R}^d)$. This approach \cite{Dut06} suggests the name ``super wavelets'', and naturally leads to representations of a Baumslag-Solitar group built on the matrix $A$, called wavelet representations. Starting with a fixed $A$, there is a compact solenoid $\mathcal S_A$ with the property that matrix multiplication by $A$ on $\mathbb{R}^d/\mathbb{Z}^d$ induces an automorphism $\sigma_A$ on $\mathcal S_A$. \par In the past decade the literature on self-affine sets, encoding and digit representations for radix matrices has grown, in part because of applications to such areas as number theory, dynamics, and combinatorial geometry. It is not possible here to give a complete list of these directions. Our present work has been motivated by the papers \cite{AkSc05}, \cite{Cur06}, \cite{GaYu06}, \cite{HLR02}, \cite{HeLa04}, \cite{KLSW99}, \cite{Li06}, \cite{Li07}, \cite{Saf98}, \cite{ZZ06}, \cite{LaWa00}. \begin{definition}\label{deftaud} Let $A$ be a $d\times d$ matrix with integer entries.
We say that the matrix $A$ is {\it expansive} if all its eigenvalues $\lambda$ satisfy $|\lambda|>1$. \end{definition} Let $A$ be a $d\times d$ expansive matrix with integer entries. Let \begin{equation}\label{eqzda} \mathbb{Z}^d[A^{-1}]=\{A^{-j}k\,|\,j\in\mathbb{N},k\in\mathbb{Z}^d\}. \end{equation} Note that $\mathbb{Z}^d[A^{-1}]$ is the inductive limit of the group inclusions $$\mathbb{Z}^d\hookrightarrow A^{-1}\mathbb{Z}^d\hookrightarrow A^{-2}\mathbb{Z}^d\hookrightarrow\dots$$ or equivalently $$\mathbb{Z}^d\stackrel{A}{\rightarrow}\mathbb{Z}^d\stackrel{A}{\rightarrow}\mathbb{Z}^d\stackrel{A}{\rightarrow}\dots$$ On $\mathbb{Z}^d[A^{-1}]$ we consider the discrete topology (even though $\mathbb{Z}^d[A^{-1}]$ is a subgroup of $\mathbb{R}^d$). On the group $\bz^d[A^{-1}]$, the map $\alpha_A(x)=Ax$, $x\in\bz^d[A^{-1}]$ defines an automorphism of the group $\bz^d[A^{-1}]$. For the use of these groups in $C^*$-algebras, see e.g. \cite{BrJo99,BJKR01}. \subsection{The group $G_A:=\bz^d[A^{-1}]\rtimes_{\alpha_A}\mathbb{Z}$} The group $G_A:=\bz^d[A^{-1}]\rtimes_{\alpha_A}\mathbb{Z}$ is the semidirect product of $\bz^d[A^{-1}]$ under the action of $\mathbb{Z}$ by the automorphisms $\alpha_A$. This means that \begin{equation}\label{eqga} G_A:=\{(j,b)\,|\,j\in \mathbb{Z}, b\in\bz^d[A^{-1}]\},\quad (j,b)\cdot(k,c)=(j+k,\alpha_A^j(c)+b),\quad(j,k\in\mathbb{Z},b,c\in\bz^d[A^{-1}]). \end{equation} \begin{proposition}\label{propgagen} The group $G_A$ is generated by the elements $u:=(1,0)$ and $t_k=(0,k)$, $k\in\bz^d[A^{-1}]$. Moreover \begin{equation} ut_ku^{-1}=t_{Ak},\quad(k\in\bz^d[A^{-1}]) \end{equation} \begin{equation} t_{A^{-n}k}:=(0,A^{-n}k)=u^{-n}t_ku^n,\quad (j,A^{-n}k)=t_{A^{-n}k}u^j,\quad(n\geq 0,k\in\mathbb{Z}^d,j\in\mathbb{Z}).
\end{equation} \end{proposition} \begin{remark} From Proposition \ref{propgagen} we infer that a unitary representation of the group $G_A$ is completely determined by giving unitary operators $T_k$, $k\in\mathbb{Z}^d$, and $U$ subject to the relation \begin{equation} UT_kU^{-1}=T_{Ak},\quad(k\in\mathbb{Z}^d). \end{equation} \end{remark} In \cite{DJ07b} we use induced representations of $G_A$ in the sense of Mackey in order to encode wavelet sets for a fixed expansive $d$ by $d$ matrix over $\mathbb{Z}$. Note that Mackey's method was developed for continuous groups, where encoding is done with co-adjoint orbits. In contrast, we show in the present paper that encoding in the solenoid is required for wavelet representations, i.e., representations of discrete versions of higher rank $ax + b$ groups. \subsection{The dual group of $\mathbb{Z}^d[A^{-1}]$: the solenoid $\mathcal S_A$.} The dual group of $\mathbb{Z}^d$ is $\mathbb{T}^d$, where $$\mathbb{T}^d:=\{(z_1,\dots,z_d)\,|\, |z_i|=1, i\in\{1,\dots,d\}\}.$$ For $x=(x_1,\dots,x_d)\in\mathbb{R}^d$ let $$e^{2\pi ix}:=(e^{2\pi ix_1},\dots,e^{2\pi ix_d})\in\mathbb{T}^d.$$ For $k=(k_1,\dots,k_d)\in\mathbb{Z}^d$ and $z=(e^{2\pi ix_1},\dots,e^{2\pi ix_d})\in\mathbb{T}^d$, we use the notation $$z^k:=e^{2\pi ik_1x_1+\dots +2\pi ik_dx_d}=e^{2\pi ik\cdot x}\in\mathbb{T}.$$ The duality between $\mathbb{Z}^d$ and $\mathbb{T}^d$ is given by \begin{equation}\label{eqdualzt} \ip{k}{z}=z^k,\quad(k\in\mathbb{Z}^d,z\in\mathbb{T}^d). \end{equation} For $z=(e^{2\pi ix_1},\dots e^{2\pi ix_d})\in\mathbb{T}^d$ we denote by \begin{equation}\label{eqza} z^{A}:=e^{2\pi i(A^T) x}=(e^{2\pi i \sum_{j=1}^da_{j1}x_j},\dots,e^{2\pi i\sum_{j=1}^da_{jd}x_j}). \end{equation} Note that $z^{Ak}=(z^A)^k$ for all $k\in\mathbb{Z}^d$.
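The semidirect-product law $(j,b)\cdot(k,c)=(j+k,\alpha_A^j(c)+b)$ and the commutation relation $ut_ku^{-1}=t_{Ak}$ can be verified mechanically. Below is a hedged Python sketch (the sample matrix is our illustrative choice; exact rational arithmetic represents elements of $\mathbb{Z}^d[A^{-1}]$):

```python
from fractions import Fraction

# Illustrative sketch of the group law of G_A for d = 2; the matrix A below is
# an assumption for the example, not one fixed in the paper.

A = ((2, 1), (0, 2))

def mat_pow_apply(j, v):
    """Apply A^j (j may be negative) to a rational vector v."""
    def apply_A(w):
        return (A[0][0]*w[0] + A[0][1]*w[1], A[1][0]*w[0] + A[1][1]*w[1])
    def apply_A_inv(w):
        det = Fraction(A[0][0]*A[1][1] - A[0][1]*A[1][0])
        return (( A[1][1]*w[0] - A[0][1]*w[1]) / det,
                (-A[1][0]*w[0] + A[0][0]*w[1]) / det)
    for _ in range(abs(j)):
        v = apply_A(v) if j > 0 else apply_A_inv(v)
    return v

def mul(g, h):
    """(j, b) . (k, c) = (j + k, A^j c + b), the semidirect-product law."""
    (j, b), (k, c) = g, h
    Ac = mat_pow_apply(j, c)
    return (j + k, (Ac[0] + b[0], Ac[1] + b[1]))

u     = (1, (Fraction(0), Fraction(0)))
u_inv = (-1, (Fraction(0), Fraction(0)))
k     = (Fraction(3), Fraction(-1))
t_k   = (0, k)

lhs = mul(mul(u, t_k), u_inv)
rhs = (0, mat_pow_apply(1, k))           # t_{Ak}
assert lhs == rhs                        # u t_k u^{-1} = t_{Ak}
```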
\begin{definition}\label{defsole} The dual group of $\bz^d[A^{-1}]$ is the group $\mathcal S_A$ defined by $$\mathcal S_A:=\{(z_n)_{n\in\mathbb{N}}\,|\, z_n\in\mathbb{T}^d, z_{n+1}^A=z_n,\mbox{ for all }n\in\mathbb{N}\}.$$ The group $\mathcal S_A$ is called the {\it solenoid} of $A$. It is a compact Abelian group with the topology induced from the product topology on $\mathbb{T}^\mathbb{N}$. \end{definition} The duality is given by \begin{equation}\label{eqdualza} \ip{A^{-j}k}{(z_n)_{n\in\mathbb{N}}}=\ip{k}{z_j}=z_j^k,\quad(j\in\mathbb{N},k\in\mathbb{Z}^d,(z_n)_{n\in\mathbb{N}}\in\mathcal S_A). \end{equation} The dual of the automorphism $\alpha_A$ on $\bz^d[A^{-1}]$, $\alpha_A(x)=Ax$, is the {\it shift} \begin{equation}\label{eqshift} \sigma_A(z_0,z_1,\dots)=(z_0^A,z_0,z_1,\dots),\quad((z_0,z_1,\dots)\in\mathcal S_A). \end{equation} We denote by $\theta_n$ the projection maps $\theta_n:\mathcal S_A\rightarrow\mathbb{T}^d$, $\theta_n(z_0,z_1,\dots)=z_n$ for $n\in\mathbb{N}$. Note that \begin{equation}\label{eqtetan} \theta_{n+1}\circ\sigma_A=\theta_n,\quad \left(\theta_{n+1}(z_0,z_1,\dots)\right)^A=\theta_n(z_0,z_1,\dots),\quad(n\in\mathbb{N},(z_0,z_1,\dots)\in\mathcal S_A). \end{equation} \section{Embeddings of $\mathbb{R}^d$ into the solenoid $\mathcal S_A$}\label{embed} \par We saw in Proposition \ref{propgagen} that there is a natural semidirect product discrete group $G_A$ which carries a unitary wavelet representation. For wavelets in $\mathbb{R}^d$, this construction begins with a fixed expansive $d$ by $d$ matrix $A$, and the unitary representation will be acting in the Hilbert space $L^2(\mathbb{R}^d)$. It is known that a certain redundancy (\cite{BDP05}, \cite{HaLa00}) in wavelet constructions dictates unitary representations in Hilbert spaces larger than $L^2(\mathbb{R}^d)$, i.e., with $L^2(\mathbb{R}^d)$ embedded as an isomorphic copy in an ambient Hilbert space.
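The intertwining relation $\hat i(A^Tx)=\sigma_A(\hat i(x))$ from the introduction can be checked numerically on finitely many solenoid coordinates. The following hedged sketch (the matrix and test point are our illustrative choices) stores each coordinate $e^{2\pi i(A^T)^{-n}x}$ as a rational vector mod $\mathbb{Z}^2$, so that the shift property appears as an exact equality of tuples.

```python
from fractions import Fraction
import math

# Illustrative sketch for d = 2; A and the test point x are assumptions.

A  = ((2, 0), (1, 2))                    # expansive, integer entries
AT = ((A[0][0], A[1][0]), (A[0][1], A[1][1]))

def apply(M, v):
    return (M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1])

def apply_inv(M, v):
    det = Fraction(M[0][0]*M[1][1] - M[0][1]*M[1][0])
    return (( M[1][1]*v[0] - M[0][1]*v[1]) / det,
            (-M[1][0]*v[0] + M[0][0]*v[1]) / det)

def mod1(v):
    """Reduce a rational vector mod Z^2 (one representative per character)."""
    return (v[0] - math.floor(v[0]), v[1] - math.floor(v[1]))

def i_hat(x, n_coords=6):
    """Truncation of i_hat(x) = (exp(2 pi i (A^T)^{-n} x))_n, as vectors mod 1."""
    out, v = [], x
    for _ in range(n_coords):
        out.append(mod1(v))
        v = apply_inv(AT, v)
    return out

x = (Fraction(5, 3), Fraction(-7, 4))
lhs = i_hat(apply(AT, x))                # i_hat(A^T x)
rhs = i_hat(x)
# sigma_A is a shift: coordinate n+1 of i_hat(A^T x) equals coordinate n of i_hat(x)
assert lhs[1:] == rhs[:-1]
```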
By introducing a specific embedding of $\mathbb{R}^d$ in an ambient solenoid $\mathcal S_A$ we are able to account for super representations (Definition \ref{c1}). The action of $A$ induces an automorphism $\sigma_A$ in $\mathcal S_A$. By computing periodic points for $\sigma_A$ we are able (Theorem \ref{prop3_9}) to account for the super representations acting in an $L^2$ space defined from an induced measure on $\mathcal S_A$. \par In sections 5 and 6 we will further study periodic points and cycles. It turns out that the notion of cycle is different when referring to the integer points and the fractions. Starting with a fixed radix pair $(A,\mathcal D)$, we will make ``integer points'' precise in terms of associated lattices (rank-$d$ subgroups in $\mathbb{R}^d$), and the ``fractions'' will take the form of compact subsets $X$ in $\mathbb{R}^d$ defined by an $(A,\mathcal D)$ self-similarity (Definition \ref{ifs}). We begin by showing how the space $\mathbb{R}^d$ can be seen as a subspace of the solenoid $\mathcal S_A$. \begin{proposition}\label{propembed1} The inclusion $i:\bz^d[A^{-1}]\rightarrow\mathbb{R}^d$ has a dual $\hat i:\mathbb{R}^d\rightarrow\mathcal S_A$ \begin{equation}\label{eqhati} \hat i(x)=(e^{2\pi i(A^T)^{-n}x})_{n\in\mathbb{N}},\quad(x\in\mathbb{R}^d). \end{equation} The map $\hat i$ is one-to-one, and onto the set of sequences $(z_n)_{n\in\mathbb{N}}\in\mathcal S_A$ with the property that $\lim_{n\rightarrow\infty}z_n=\mathbf 1$, where $\mathbf 1=(1,\dots,1)$ is the neutral element of $\mathbb{T}^d$. The map $\hat i$ satisfies the following relation \begin{equation}\label{eqhatia} \hat i(A^Tx)=\sigma_A(\hat i(x)),\quad(x\in\mathbb{R}^d). \end{equation} \end{proposition} \begin{proof} To see that $\hat i$ is one-to-one, we notice that if $x,x'\in\mathbb{R}^d$ and $\hat i(x)=\hat i(x')$ then $(A^T)^{-n}x-(A^T)^{-n}x'\in\mathbb{Z}^d$ for all $n\in\mathbb{N}$. Since $A$ is expansive, the norm $\|(A^T)^{-n}(x-x')\|$ converges to $0$ as $n\rightarrow\infty$.
Thus, $x$ and $x'$ must be equal. Since $A$ is expansive, so is $A^T$, so $(A^T)^{-n}x$ converges to $0$ for all $x\in\mathbb{R}^d$. Therefore $e^{2\pi i(A^T)^{-n}x}$ converges to $\mathbf 1$. Conversely, suppose $(z_n)_{n\in\mathbb{N}}$ is in $\mathcal S_A$ and $z_n$ converges to $\mathbf 1$. Then $z_n=e^{2\pi ix_n}$ for some $x_n\in\mathbb{R}^d$ and, for $n$ large, we can assume $x_n$ is close to $0$. Since we have $z_{n+1}^A=z_n$, this implies that $A^Tx_{n+1}\equiv x_n\mod\mathbb{Z}^d$, so $A^Tx_{n+1}=x_n+l$ for some $l\in\mathbb{Z}^d$. But since both $x_{n+1}$ and $x_n$ are close to $0$, this implies that, for some $n_0\in\mathbb{N}$, we have $A^T x_{n+1}=x_n$ for $n\geq n_0$. Let $x:=(A^T)^{n_0} x_{n_0}$. The previous argument shows that $(A^T)^jx_{n_0}\equiv x_{n_0-j}\mod\mathbb{Z}^d$ for all $j\leq n_0$, so $\hat i(x)=(z_n)_{n\in\mathbb{N}}$. Thus, $\hat i$ is onto. The other assertions follow from some direct computations, using the duality in \eqref{eqdualza}. \end{proof} Now that we have the embedding of $\mathbb{R}^d$ into the solenoid, we can transport the wavelet representation on $L^2(\mathbb{R}^d)$ to the solenoid $\mathcal S_A$. \begin{definition}\label{deftul2} On $L^2(\mathbb{R}^d)$ we denote by $T_k$ the {\it translation operator} $(T_kf)(x)=f(x-k)$, $f\in L^2(\mathbb{R}^d)$, $x\in\mathbb{R}^d$, $k\in\mathbb{Z}^d$, and by $U$ the {\it dilation operator} $(Uf)(x)=\frac{1}{\sqrt{|\det A|}}f(A^{-1}x)$. Their Fourier transforms are \begin{equation}\label{eqhattu} (\hat T_kh)(x)=e^{2\pi ik\cdot x}h(x),\quad(\hat Uh)(x)=\sqrt{|\det A|}h(A^Tx),\quad(h\in L^2(\mathbb{R}^d),x\in\mathbb{R}^d,k\in\mathbb{Z}^d). \end{equation} The operators $\{U,T_k\}$ (or $\{\hat U,\hat T_k\}$) define a representation of the group $G_A$ on $L^2(\mathbb{R}^d)$. \end{definition} \begin{definition} We denote by $\mathcal S_A(1)$ the set of sequences $(z_n)_{n\in\mathbb{N}}\in\mathcal S_A$ such that $\lim_{n\rightarrow\infty}z_n=\mathbf 1$.
On $\mathcal S_A(1)$ consider the measure $\tilde\mu$ defined by \begin{equation}\label{eqmutilda} \int_{\mathcal S_A(1)}f\,d\tilde\mu=\int_{\mathbb{T}^d}\sum_{(z_n)_n\in\mathcal S_A(1),\theta_0((z_n)_{n\in\mathbb{N}})=z}f((z_n)_{n\in\mathbb{N}})\,d\mu(z), \end{equation} where $\mu$ is the Haar measure on $\mathbb{T}^d$. On $L^2(\mathcal S_A(1),\tilde\mu)$ define the operators \begin{equation}\label{eqTtilda} (\tilde T_kf)(z_0,z_1,\dots)=z_0^kf(z_0,z_1,\dots),\quad((z_0,z_1,\dots)\in\mathcal S_A(1),k\in\mathbb{Z}^d), \end{equation} \begin{equation}\label{eqUtilda} (\tilde Uf)(z_0,z_1,\dots)=\sqrt{|\det A|}f(\sigma_A(z_0,z_1,\dots)),\quad((z_0,z_1,\dots)\in\mathcal S_A(1)). \end{equation} \end{definition} Our next theorem shows that this unitary representation on the reduced solenoid $\mathcal S_A(1)$ is a universal super representation of the wavelet group in the following sense: once $A$ is given, the standard $A$-wavelet unitary representation acting on $L^2(\mathbb{R}^d)$ is naturally included in the $\mathcal S_A(1)$ representation via an intertwining isometry $\mathcal W: L^2(\mathbb{R}^d)\rightarrow L^2(\mathcal S_A(1))$. The details of the symbolic encoding of this pair of representations depend on a choice of digit set $\mathcal D$ and on the structure of the associated $(A,\mathcal D)$-cycles (we will give these details in Section 6). \begin{theorem} (i) The measure $\tilde\mu$ satisfies the following invariance property \begin{equation}\label{eqinvmutilda} \int_{\mathcal S_A(1)}f\circ\sigma_A\,d\tilde\mu=\frac1{|\det A|}\int_{\mathcal S_A(1)}f\,d\tilde\mu,\quad(f\in L^1(\mathcal S_A(1),\tilde\mu)). \end{equation} (ii) The operators $\tilde T_k$, $k\in\mathbb{Z}^d$, and $\tilde U$ are unitary and they satisfy the following relation \begin{equation}\label{eqcov} \tilde U\tilde T_k\tilde U^{-1}=\tilde T_{Ak},\quad(k\in\mathbb{Z}^d) \end{equation} so $\{\tilde U,\tilde T_k\}$ generate a representation of the group $G_A$ on $L^2(\mathcal S_A(1),\tilde\mu)$.
(iii) The map $\hat i$ is a measure preserving transformation between $\mathbb{R}^d$ and $\mathcal S_A(1)$. (iv) The operator $\mathcal W:L^2(\mathbb{R}^d)\rightarrow L^2(\mathcal S_A(1),\tilde\mu)$ defined by $\mathcal Wf=f\circ\hat i^{-1}$ is an intertwining isometric isomorphism, \begin{equation}\label{eqhatiinter} \mathcal W\hat T_k=\tilde T_k\mathcal W,\quad(k\in\mathbb{Z}^d),\quad \mathcal W\hat U=\tilde U\mathcal W. \end{equation} \end{theorem} \begin{proof} (i) The Haar measure on $\mathbb{T}^d$ satisfies the following strong invariance property: \begin{equation}\label{eqinvmu} \int_{\mathbb{T}^d}f\,d\mu=\frac{1}{|\det A|}\int_{\mathbb{T}^d}\sum_{y^A=z}f(y)\,d\mu(z),\quad(f\in L^1(\mu)). \end{equation} Using this, we have for $f\in L^1(\mathcal S_A(1),\tilde\mu)$: $$\int_{\mathcal S_A(1)}f\circ\sigma_A\,d\tilde\mu=\int_{\mathbb{T}^d}\sum_{(z_n)_n\in\mathcal S_A(1),z_0=z}f(z_0^A,z_0,z_1,\dots)\,d\mu(z)=$$ $$\frac1{|\det A|}\int_{\mathbb{T}^d}\sum_{y^A=z}\sum_{(z_n)_n\in\mathcal S_A(1), z_0=y}f(z_0^A,z_0,z_1,\dots)\,d\mu(z)= \frac{1}{|\det A|}\int_{\mathbb{T}^d}\sum_{(w_n)_n\in\mathcal S_A(1),w_0=z}f(w_0,w_1,\dots)\,d\mu(z)=$$$$\frac{1}{|\det A|}\int_{\mathcal S_A(1)}f\,d\tilde\mu.$$ (ii) follows from (i) and some direct computations. (iii) First note that, by Proposition \ref{propembed1}, $(z_n)_n\in\mathcal S_A(1)$ with $z_0=e^{2\pi ix}$ iff $(z_n)_n=\hat i(y)$ for some $y\in\mathbb{R}^d$ with $e^{2\pi iy}=e^{2\pi ix}$, i.e., iff $(z_n)_n=\hat i(x+k)$ for some $k\in\mathbb{Z}^d$. Let $f\in L^1(\mathcal S_A(1),\tilde\mu)$. Then $$\int_{\mathbb{R}^d}f(\hat i(x))\,dx=\int_{[0,1)^d}\sum_{k\in\mathbb{Z}^d}f(\hat i(x+k))\,dx=$$ $$\int_{[0,1)^d}\sum_{(z_n)_n\in\mathcal S_A(1),z_0=e^{2\pi ix}}f(z_0,z_1,\dots)\,dx=\int_{\mathbb{T}^d}\sum_{(z_n)_n\in\mathcal S_A(1),z_0=z}f(z_0,z_1,\dots)\,d\mu(z)=\int_{\mathcal S_A(1)}f\,d\tilde\mu.$$ This proves that $\hat i$ is measure preserving. (iv) Since $\hat i$ is measure preserving, $\mathcal W$ is an isometric isomorphism.
The intertwining relation \eqref{eqhatiinter} follows by a direct computation that uses \eqref{eqhatia}. \end{proof} \subsection{Cycles} Next we will show how the ``super-wavelet'' representations from \cite{BDP05} can be realized on the solenoid. \begin{definition}\label{c1} An ordered set $C:=\{\zeta_0,\zeta_1,\dots,\zeta_{p-1}\}$ in $\mathbb{T}^d$ is called a {\it cycle} if $\zeta_{j+1}^A=\zeta_j$ for $j\in\{0,\dots,p-2\}$ and $\zeta_0^A=\zeta_{p-1}$, where $p\geq 1$. The number $p$ is called the {\it period} of the cycle if the points $\zeta_i$ are distinct. \end{definition} \begin{definition}\label{defhc} Let $C=\{\zeta_0,\zeta_1,\dots,\zeta_{p-1}\}$ be a cycle. Denote by \begin{equation}\label{eql2rlap} \mathcal H_C:=\underbrace{L^2(\mathbb{R}^d)\oplus\dots\oplus L^2(\mathbb{R}^d)}_{p\mbox{ times}}=L^2(\mathbb{R}^d\times\mathbb{Z}_p), \end{equation} where $\mathbb{Z}_p=\{0,\dots,p-1\}$ is the cyclic group of order $p$. Define the operators on $\mathcal H_C$: \begin{equation}\label{eqtc} T_{C,k}(f_0,\dots,f_{p-1})=(\zeta_0^kT_kf_0,\dots,\zeta_{p-1}^kT_kf_{p-1}),\quad((f_0,\dots,f_{p-1})\in\mathcal H_C,k\in\mathbb{Z}^d) \end{equation} \begin{equation}\label{equc} U_C(f_0,\dots,f_{p-1})=(Uf_{p-1},Uf_0,\dots,Uf_{p-2}),\quad((f_0,\dots,f_{p-1})\in\mathcal H_C) \end{equation} where $T_k$ and $U$ are the operators on $L^2(\mathbb{R}^d)$ from Definition \ref{deftul2}. We will denote by $\hat T_{C,k}$ and $\hat U_C$ the Fourier transforms of these operators (the Fourier transform being applied on each component of $\mathcal H_C$). \end{definition} Then a simple calculation shows that: \begin{proposition} The operators $T_{C,k}$, $k\in\mathbb{Z}^d$, and $U_C$ are unitary and satisfy the following relation \begin{equation}\label{eqcovc} U_CT_{C,k}U_C^{-1}=T_{C,Ak},\quad(k\in\mathbb{Z}^d) \end{equation} so they define a representation of the group $G_A$ on $\mathcal H_C$. \end{proposition} Some examples and wavelet applications are also included in \cite{Jor03}.
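For $d=1$ and $A=2$, the map $z\mapsto z^A$ is the squaring map on $\mathbb{T}$, and the cycle condition of Definition \ref{c1} can be tested numerically. The following sketch (the helper name \verb|is_cycle| is ours, not from the text) verifies that $\{e^{2\pi i/3},e^{4\pi i/3}\}$ is a cycle of period $2$:

```python
import cmath

def is_cycle(zetas, A=2, tol=1e-12):
    # Cycle condition of the definition: zeta_{j+1}^A = zeta_j for
    # j = 0, ..., p-2, and zeta_0^A = zeta_{p-1}; both cases are
    # covered by the cyclic index (j + 1) mod p below.
    p = len(zetas)
    return all(abs(zetas[(j + 1) % p] ** A - zetas[j]) < tol
               for j in range(p))

# The points e^{2 pi i/3}, e^{4 pi i/3} form a cycle of period 2
# for the squaring map on the circle.
zetas = [cmath.exp(2j * cmath.pi / 3), cmath.exp(4j * cmath.pi / 3)]
```

Here \verb|is_cycle(zetas)| evaluates to \verb|True|, while a pair such as $\{e^{2\pi i/5},e^{4\pi i/5}\}$ fails the condition, since $(e^{4\pi i/5})^2\neq e^{2\pi i/5}$.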
\begin{definition} Let $C:=\{\zeta_0,\dots,\zeta_{p-1}\}$ be a cycle, $\zeta_j=e^{2\pi i\theta_j}$, for some $\theta_j\in\mathbb{R}^d$, $(j\in\{0,\dots,p-1\})$. We will use the notation $\theta_n:=\theta_{n\mod p},\zeta_n:=\zeta_{n\mod p}$ for all $n\in\mathbb{Z}$. We denote by $$\chi_C:=(\zeta_0,\dots,\zeta_{p-1},\zeta_0,\dots,\zeta_{p-1},\dots)\in\mathcal S_A.$$ Let \begin{equation} \mathcal S_A(C):=\bigcup_{j=0}^{p-1}\sigma_A^{-j}(\chi_C)\hat i(\mathbb{R}^d). \end{equation} Let $\hat i_C:\mathbb{R}^d\times\mathbb{Z}_p\rightarrow\mathcal S_A(C)$ \begin{equation} \hat i_C(x,j)=\sigma_A^{-j}(\chi_C)\hat i(x)=(e^{2\pi i((A^T)^{-n}x+\theta_{n+j})})_{n\in\mathbb{N}},\quad(x\in\mathbb{R}^d,j\in\mathbb{Z}_p). \label{eqic} \end{equation} Define the measure $\tilde\mu$ on $\mathcal S_A(C)$ by an equation similar to \eqref{eqmutilda} (the only difference here is the support $\mathcal S_A(C)$ instead of $\mathcal S_A(1)$) $$\int_{\mathcal S_A(C)}f\,d\tilde\mu=\int_{\mathbb{T}^d}\sum_{(z_n)_n\in\mathcal S_A(C),\theta_0((z_n)_{n\in\mathbb{N}})=z}f((z_n)_{n\in\mathbb{N}})\,d\mu(z). $$ Define the operators $\tilde T_k$, $k\in\mathbb{Z}^d$, and $\tilde U$ on $L^2(\mathcal S_A(C),\tilde\mu)$ by the same formulas as in \eqref{eqTtilda} and \eqref{eqUtilda}. \end{definition} \begin{theorem}\label{prop3_9} (i) The point $\chi_C$ is periodic for $\sigma_A$, $\sigma_A^p(\chi_C)=\chi_C$, and $\sigma_A$ permutes cyclically the sets $\sigma_A^{-j}(\chi_C)\hat i(\mathbb{R}^d)$, $j\in\mathbb{Z}_p$. (ii) The set $\mathcal S_A(C)$ consists of exactly the points $(z_n)_{n\in\mathbb{N}}$ with the property that the distance from $z_n$ to $C$ converges to $0$. (iii) Let $\alpha_{A,p}:\mathbb{R}^d\times\mathbb{Z}_p\rightarrow \mathbb{R}^d\times\mathbb{Z}_p$ \begin{equation}\label{eqalphap} \alpha_{A,p}(x,j)=(A^Tx,j-1),\quad(x\in\mathbb{R}^d,j\in\mathbb{Z}_p).
\end{equation} The map $\hat i_C$ is a bijective measure preserving transformation that satisfies the relation \begin{equation} \hat i_C\circ\alpha_{A,p}=\sigma_A\circ\hat i_C. \label{eqica} \end{equation} (iv) The operator $\mathcal W_C: \mathcal H_C=L^2(\mathbb{R}^d\times\mathbb{Z}_p)\rightarrow L^2(\mathcal S_A(C),\tilde\mu)$, $\mathcal W_Cf=f\circ\hat i_C^{-1}$, is an intertwining isometric isomorphism: \begin{equation}\label{eqwcint} \mathcal W_C\hat T_{C,k}=\tilde T_k\mathcal W_C,\quad(k\in\mathbb{Z}^d),\quad\mathcal W_C\hat U_C=\tilde U\mathcal W_C. \end{equation} \end{theorem} \begin{proof} (i) is trivial. (ii) If $(z_n)_n$ is in $\mathcal S_A(C)$ then for some $j$, we have that $(\sigma_A^{-j}(\chi_C))^{-1}(z_n)_n$ is in $\hat i(\mathbb{R}^d)$, so $\zeta_j^{-1}z_{np}$ converges to $\mathbf 1$. Therefore $z_{np}$ converges to $\zeta_j$, so $z_{np-l}=z_{np}^{A^l}$ converges to $\zeta_{j-l}$ for all $l\in\mathbb{Z}_p$. Conversely, suppose $(z_n)_n\in\mathcal S_A$ and $\operatorname*{dist}(z_n,C)$ converges to $0$. We claim that $z_{np}$ converges to one of the points of the cycle $\zeta_i$. Pick an $\epsilon>0$ small enough such that for all $z\in\mathbb{T}^d$ and $i\in\mathbb{Z}_p$, $\operatorname*{dist}(z,\zeta_i)<\epsilon$ implies $\operatorname*{dist}(z,\zeta_{i'})>\epsilon$ for $i'\neq i$. There exists a $\delta>0$, $\delta<\epsilon$, such that for all $i\in\mathbb{Z}_p$, if $\operatorname*{dist}(z,\zeta_i)<\delta$ then $\operatorname*{dist}(z^{A^p},\zeta_i)<\epsilon$. There exists an $n_\epsilon$ such that if $n\geq n_\epsilon$ then $\operatorname*{dist}(z_n,C)<\delta<\epsilon$. Then for some $i\in\mathbb{Z}_p$ we have $\operatorname*{dist}(z_{n_\epsilon},\zeta_i)<\epsilon$. Also for some $i'\in\mathbb{Z}_p$ we have $\operatorname*{dist}(z_{n_\epsilon+p},\zeta_{i'})<\delta$. This implies that $\operatorname*{dist}(z_{n_\epsilon},\zeta_{i'})<\epsilon$, so $i'=i$. By induction, we obtain that $\operatorname*{dist}(z_{n_\epsilon+kp},\zeta_i)<\delta<\epsilon$.
Since $\operatorname*{dist}(z_n,C)$ converges to $0$, this shows that $\operatorname*{dist}(z_{n_\epsilon+kp},\zeta_i)$ converges to $0$. Applying the map $z\mapsto z^A$ several times, we conclude that $z_{np}$ converges to one of the elements of the cycle, $\zeta_j$. Then consider $(w_n)_n:=(\sigma_A^{-j}(\chi_C))^{-1}(z_n)_n\in\mathcal S_A$. Clearly $w_n$ converges to $\mathbf 1$. By Proposition \ref{propembed1}, there exists an $x\in\mathbb{R}^d$ such that $(w_n)_n=\hat i(x)$. Thus $(z_n)_n=\sigma_A^{-j}(\chi_C)\hat i(x)$, and this proves (ii). (iii) Since $\hat i$ is bijective (Proposition \ref{propembed1}), clearly $\hat i_C$ is also bijective. To check that $\hat i_C$ is a measure preserving transformation, since $\hat i$ is measure preserving by Proposition \ref{propembed1}, it is enough to check that multiplication by $\sigma_A^{-j}(\chi_C)$ leaves the measure $\tilde\mu$ invariant, i.e., for a function $f$ defined on $\sigma_A^{-j}(\chi_C)\hat i(\mathbb{R}^d)$, \begin{equation}\label{eqinvtrmutilda} \int_{\sigma_A^{-j}(\chi_C)\hat i(\mathbb{R}^d)}f\,d\tilde\mu=\int_{\hat i(\mathbb{R}^d)}f(\sigma_A^{-j}(\chi_C)(z_n)_n)\,d\tilde\mu(z_n)_n,\quad(j\in\mathbb{Z}_p). \end{equation} It is enough to check this for $j=0$. Using the translation invariance of the Haar measure $\mu$ on $\mathbb{T}^d$, we have: $$\int_{\chi_C\hat i(\mathbb{R}^d)}f\,d\tilde\mu=\int_{\mathbb{T}^d}\sum_{(z_n)_n\in\chi_C\hat i(\mathbb{R}^d),z_0=z}f((z_n)_n)\,d\mu(z)= \int_{\mathbb{T}^d}\sum_{(w_n)_n\in\hat i(\mathbb{R}^d),w_0=z\zeta_0^{-1}}f(\chi_C(w_n)_n)\,d\mu(z)=$$ $$\int_{\mathbb{T}^d}\sum_{(w_n)_n\in\hat i(\mathbb{R}^d),w_0=z}f(\chi_C(w_n)_n)\,d\mu=\int_{\hat i(\mathbb{R}^d)}f(\chi_C(w_n)_n)\,d\tilde\mu(w_n)_n.$$ Equation \eqref{eqica} follows by a direct computation. (iv) Follows from \eqref{eqica}.
\end{proof} \section{Encoding of integer points} The idea of using matrices and geometry in creating a positional number system for points in $\mathbb{R}^d$ was initiated by Don Knuth; see especially \cite{Knu76}, vol.\ 2, chapter 4 (Arithmetic), and in particular section 4.1, which introduces this geometric and algorithmic approach to positional number systems. In fact, the Twin-Dragon appears on page 206 (in vol.\ 2 of \cite{Knu76}). In our present discussion, with a fixed expansive matrix $A^T$ playing the role of the base (or radix) in our radix representations, the natural question arises: ``What is the role of the integer lattice $\mathbb{Z}^d$ relative to our radix system?'' This section gives a preliminary answer to the question, and the next section gives a complete analysis involving cycles. As before, we begin with a choice of expansive matrix $A$ (a fixed $d\times d$ matrix over $\mathbb{Z}$), and we choose a subset $\mathcal D$ of $\mathbb{Z}^d$ (the digits), the points of $\mathcal D$ being in bijective correspondence with $\mathbb{Z}^d/A^T\mathbb{Z}^d$. As it turns out, ``the integers'' relative to the $(A,\mathcal D)$-radix typically will not have finite radix (or Laurent) expansions in positive powers of $A^T$. The reason for this is the presence of certain non-trivial cycles $C$ in $\mathbb{Z}^d$ leading to infinite repetitions, which we will take up systematically in the next section. In this section we will show that when the pair $(A,\mathcal D)$ is fixed, there is an encoding mapping $\phi$ which records the finite words in the alphabet $\mathcal D$ that correspond to the cycles in $\mathbb{Z}^d$ associated with our particular choice of $(A,\mathcal D)$. However, once the cycles in $\mathbb{Z}^d$ are identified, there is a much more detailed encoding directly for $\mathbb{Z}^d$, which will be done in detail in Theorem \ref{thenccycl}.
So our present encoding mapping $\phi : \mathbb{Z}^d\rightarrow \mathcal D^{\mathbb{N}}$, depending on the pair $(A,\mathcal D)$, is an introduction to our analysis of a refined solenoid encoding and of all cycles in the next section. Still, our starting point is a fixed radix pair $(A,\mathcal D)$. The fact that the encoding mapping $\phi : \mathbb{Z}^d\rightarrow \mathcal D^{\mathbb{N}}$ is injective is a consequence of the expansive property. Corresponding to $(A,\mathcal D)$ there is a finite set of finite words $F = F(A,\mathcal D)$ in letters from $\mathcal D$. These words label the $\mathbb{Z}^d$-cycles, and the encoding mapping $\phi : \mathbb{Z}^d\rightarrow \mathcal D^{\mathbb{N}}$ (infinite Cartesian product) maps onto the set of infinite words which terminate in an infinite repetition of one of the words from $F$. Let $d$ be given. Let $B$ be a $d\times d$ matrix over $\mathbb{Z}$, and assume \begin{equation}\label{eqdz1} \bigcap_{k\geq 1} B^k\mathbb{Z}^d=\{0\}. \end{equation} Note that this holds if $B$ is assumed expansive. Let $\mathcal D\subset\mathbb{Z}^d$ be a complete set of representatives for $\mathbb{Z}^d/B\mathbb{Z}^d$. Assume $0\in\mathcal D$. (This assumption is for convenience and can be easily removed {\it mutatis mutandis}.) \begin{definition} Define the $(B,\mathcal D)$-encoding of $\mathbb{Z}^d$, $$\phi:\mathbb{Z}^d\rightarrow\mathcal D^{\mathbb{N}},\quad\phi(x):=d_0d_1d_2\dots,$$ as follows: when $x\in\mathbb{Z}^d$ is given, there is a unique pair $x_1\in\mathbb{Z}^d$ and $d_0\in\mathcal D$ such that \begin{equation}\label{eqdz3} x=d_0+Bx_1. \end{equation} By the same argument, now determine $d_1,d_2,\dots\in\mathcal D$ and $x_2,x_3,\dots\in\mathbb{Z}^d$ recursively such that \begin{equation}\label{eqdz4} x_k=d_k+Bx_{k+1}. \end{equation} \end{definition} \begin{definition} Set $$\Omega:=\mathcal D^{\mathbb{N}}=\prod_{n=0}^\infty \mathcal D.$$ Elements $\omega\in\Omega$ are called infinite words in the alphabet $\mathcal D$.
If $v$ is a finite word, we denote by $\underline v$ the infinite repetition of this word, $vvv\dots$. If there are finite words $v$ and $w$ such that $\omega=v\underline w$, we say that $\omega$ ends in a cycle. \end{definition} \begin{proposition}\label{propdzdz} (i) The encoding mapping $\phi:\mathbb{Z}^d\rightarrow\Omega$ is well defined. (ii) $\phi$ is one-to-one. (iii) $\phi$ maps $\mathbb{Z}^d$ onto the subset of $\Omega$ consisting of the infinite words that end in cycles associated with $(B,\mathcal D)$. \end{proposition} \begin{proof} Part (i) is immediate from \eqref{eqdz3}, \eqref{eqdz4}. (ii) Suppose $x,y\in\mathbb{Z}^d$ and $\phi(x)=\phi(y)$. Then an application of \eqref{eqdz4} shows that $x-y\in\cap_{k\geq 1}B^k\mathbb{Z}^d$, and we conclude that $x=y$ by an application of \eqref{eqdz1}. (iii) Follows immediately from Theorem \ref{thenccycl}. \end{proof} \begin{remark} (i) For $x\in\mathbb{Z}^d$, the encoding $\phi(x)=v\underline w=d_0d_1d_2\dots$ with $d_i\in\mathcal D$, and $v,w$ finite words, is unique; but the formal sum \begin{equation}\label{Eqdz5} d_0+Bd_1+B^2d_2+\dots \end{equation} is not convergent unless $\underline w=\underline 0=000\dots$, the infinite repetition of $0$. In that case there exists $m\in\mathbb{N}$ such that $v=d_0\dots d_{m-1}$ and $$x=d_0+Bd_1+\dots+B^{m-1}d_{m-1}.$$ (ii) Suppose $v=\emptyset$ and $\phi(x)=\underline w$, with $w=l_0l_1\dots l_{p-1}$. Then $-x$ has the following infinite, convergent, periodic fractional expansion $$-x=\sum_{k=0}^\infty\sum_{i=0}^{p-1} B^{-kp-i-1}l_{p-1-i}.$$ See Proposition \ref{proprc} and Theorem \ref{thenccycl} for the proof. \end{remark} \begin{example} The following simple example in 1D illustrates the cases (i) and (ii) above. Let $d=1$, $B=2$, and $\mathcal D=\{0,3\}$. Let $\phi:\mathbb{Z}\rightarrow\Omega$ be the encoding. We have $\phi(11)=3003\underline{30}$, i.e., $v=3003$ and $w=30$.
Similarly, $\phi(18)=033\underline0$, i.e., $v=033$ and $w=0$, corresponding to the finite representation $$18=0+3\cdot 2+3\cdot 2^2.$$ Finally $\phi(-2)=\underline{03}$, i.e., $v=\emptyset$ and $w=03$. Hence by (ii) we get the following infinite fractional dyadic representation $$2=3\cdot 2^{-1}+0\cdot 2^{-2}+3\cdot 2^{-3}+0\cdot 2^{-4}+3\cdot 2^{-5}+\dots$$ Proposition \ref{proprc} shows that the cycles are obtained by intersecting $\mathbb{Z}$ with the set $-X(B,\mathcal D)$, where $X(B,\mathcal D)$ is the attractor of the maps $\tau_0(x)=x/2$, $\tau_3(x)=(x+3)/2$. In our example $X(B,\mathcal D)=[0,3]$. There are two cycles of length one, $\{0\}$ and $\{-3\}$, and one cycle of length two, $\{-1,-2\}$. The encoding mapping $\phi$ records the cycles as follows: the one-cycles $\phi(0)=\underline 0$, $\phi(-3)=\underline 3$; the two-cycle $\phi(-1)=\underline{30}$, $\phi(-2)=\underline{03}$. \end{example} The next section takes up the encodings in general. \section{Encodings of the solenoid} In this section we return to the geometry of general digit sets in positional number systems, turning ``digits'' into geometry and tilings. The starting point is a given pair $(A,\mathcal D)$ with $A$ expansive over $\mathbb{Z}$, and $\mathcal D$ a complete digit set. With the aid of the solenoid we give an explicit encoding. Specifically, we show that the attractor $X(A^T,\mathcal D)$ for the corresponding affine Iterated Function System (IFS) is a set of fractions for an $(A,\mathcal D)$-digital representation of points in $\mathbb{R}^d$. Moreover, our positional ``number representation'' is spelled out in the form of an explicit IFS encoding of the compact solenoid $\mathcal S_A$ associated with the pair $(A,\mathcal D)$. The intricate part (Theorem \ref{thenccycl}) is played by the cycles in $\mathbb{Z}^d$ for the initial $(A,\mathcal D)$-IFS.
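The digit strings in the 1D example above can be generated mechanically from \eqref{eqdz3}--\eqref{eqdz4}: at each step one splits off the unique digit $d\in\mathcal D$ with $x\equiv d\mod B$ and continues with $(x-d)/B$. A short sketch for the scalar case (the function name \verb|encode| is ours, not from the text):

```python
def encode(x, B=2, digits=(0, 3), n=8):
    """Return the first n digits of the (B, D)-encoding of an integer x.

    Each step picks the unique d in D with x = d (mod B) and recurses
    on x -> (x - d) / B, following the recursive definition of phi.
    (Scalar sketch; in general x is a lattice point and B a matrix.)
    """
    out = []
    for _ in range(n):
        d = next(dd for dd in digits if (x - dd) % B == 0)
        out.append(d)
        x = (x - d) // B
    return out
```

For instance, `encode(11)` yields the digits $3,0,0,3$ followed by the repeating block $3,0$, matching $\phi(11)=3003\underline{30}$, and `encode(-2)` yields the periodic string $0,3,0,3,\dots$, matching $\phi(-2)=\underline{03}$.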
Using the cycles we are able to write down formulas for the two maps which do the encoding as well as the decoding in our positional $\mathcal D$-representation. Take a point $(z_n)_{n\in\mathbb{N}}$ in the solenoid $\mathcal S_A$. Then $z_0\in\mathbb{T}^d$, $z_1^A=z_0$, $z_2^A=z_1$, and so on. Since $z_0$ is in $\mathbb{T}^d$ it can be represented by $e^{2\pi ix_0}$, where $x_0\in\mathbb{R}^d$. Note that one has several choices for $x_0$: any of its integer translates $x_0+k$, $k\in\mathbb{Z}^d$, will do. Then $z_1^A=z_0$, so $z_1$ is a ``root'' of $z_0$. There are $|\det A|$ choices: if $z_0=e^{2\pi ix_0}$ then $z_1$ must be one of the points $e^{2\pi i(A^T)^{-1}(x_0+d)}$, $d\in\mathcal D$, where $\mathcal D$ is some complete set of representatives for $\mathbb{Z}^d/A^T\mathbb{Z}^d$. Say $z_1=e^{2\pi i(A^T)^{-1}(x_0+d_0)}=e^{2\pi ix_1}$. At the next step, $z_2$ is a root of $z_1$, so $z_2=e^{2\pi i (A^T)^{-1}(x_1+d_1)}$ for some $d_1\in\mathcal D$. By induction, we get a sequence $d_0,d_1,d_2,\dots$ in $\mathcal D$. Thus, picking a point $(z_n)_{n\in\mathbb{N}}$ in $\mathcal S_A$ amounts to choosing an $x_0\in\mathbb{R}^d$ and an infinite word $d_0d_1\dots\in\Omega:=\mathcal D^{\mathbb{N}}$. Thus we say that $(z_n)_{n\in\mathbb{N}}$ can be {\it encoded} as $$(z_n)_{n\in\mathbb{N}}\leftrightarrow (x_0,d_0d_1\dots).$$ Now note that changing the choice of $x_0$ (to say $x_0+k$) affects the entire sequence $d_0,d_1,\dots$. We want to make this choice unique in some sense. For this we need to find a subset $F$ of $\mathbb{R}^d$ such that for each $z\in\mathbb{T}^d$, there is a unique $x\in F$ such that $z=e^{2\pi ix}$. In other words, $F$ must tile $\mathbb{R}^d$ by $\mathbb{Z}^d$-translations. Of course a first choice of this set $F$ would be $[0,1)^d$. While this works in dimension $d=1$, it may be inappropriate for higher dimensions.
The problem is that we would also need $z_1$ to come from $e^{2\pi ix_1}$ with $x_1\in F$, and this would mean that $x_1=(A^T)^{-1}(x_0+d_0)$ is in $F$. Thus, our set $F$ must have the following property $$\bigcup_{d\in\mathcal D}(A^T)^{-1}(F+d)\subset F.$$ But since $(z_1,z_2,\dots)$ is also an element of $\mathcal S_A$ and $z_1$ can be any point in $\mathbb{T}^d$, it follows that we must actually have \begin{equation}\label{eqF} \bigcup_{d\in\mathcal D}(A^T)^{-1}(F+d)= F. \end{equation} Of course, when we are interested only in measure theoretic notions, we can allow the equalities to hold only almost everywhere. When $F$ is compact, this equation identifies $F$ as the attractor of an affine iterated function system. \begin{definition}\label{ifs} Let $\mathcal D$ be a complete set of representatives for $\mathbb{Z}^d/A^T\mathbb{Z}^d$. For every $d\in\mathcal D$, we denote by $\tau_d$ the map on $\mathbb{R}^d$ defined by \begin{equation}\label{eqtaud} \tau_d(x)=(A^T)^{-1}(x+d),\quad(x\in\mathbb{R}^d). \end{equation} \end{definition} With this notation, if $F$ is compact, equation \eqref{eqF} says that $F$ is the attractor of the affine iterated function system $(\tau_d)_{d\in\mathcal D}$. This identifies $F$ as \begin{equation}\label{eqxa} F=X(A^T,\mathcal D):=\{\sum_{j=1}^\infty (A^T)^{-j}d_j\,|\,d_j\in\mathcal{D}\}. \end{equation} To find an encoding of the solenoid $\mathcal S_A$ means to find a subset $F$ of $\mathbb{R}^d$ and a complete set of representatives $\mathcal D$ of $\mathbb{Z}^d/A^T\mathbb{Z}^d$ such that, if $x\in F$ then $\tau_dx\in F$ for all $d\in\mathcal D$, and the {\it decoding} map $\mathfrak d:F\times\mathcal{D}^\mathbb{N}\rightarrow \mathcal S_A$ defined by $$F\times\mathcal D^{\mathbb{N}}\ni(x_0,d_0d_1\dots)\mapsto (e^{2\pi ix_0},e^{2\pi i\tau_{d_0}x_0},e^{2\pi i\tau_{d_1}\tau_{d_0}x_0},\dots)\in\mathcal S_A$$ is a bijection.
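In the scalar example $B=2$, $\mathcal D=\{0,3\}$ from the previous section, the attractor \eqref{eqxa} is the interval $[0,3]$, and truncating the defining series at a finite depth gives a quick numerical picture of it. A sketch (our own helper, not part of the text):

```python
import itertools

def attractor_approx(B=2.0, digits=(0.0, 3.0), depth=8):
    # Depth-`depth` truncations of X(B, D) = { sum_{j>=1} B^{-j} d_j }:
    # one point for each finite word (d_1, ..., d_depth) in D^depth.
    return [sum(d / B ** j for j, d in enumerate(word, start=1))
            for word in itertools.product(digits, repeat=depth)]
```

The truncated sums all lie in $[0,3]$ and fill the interval out as the depth grows; the largest one, with every digit equal to $3$, is $3(1-2^{-\mathrm{depth}})$.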
Thus, to find this encoding of $\mathcal S_A$ we need a subset $F$ of $\mathbb{R}^d$ and a complete set of representatives $\mathcal D$ that satisfy \eqref{eqF} and such that $F$ tiles $\mathbb{R}^d$ by integer translations, i.e., \begin{equation}\label{eqtile} \bigcup_{k\in\mathbb{Z}^d}(F+k)=\mathbb{R}^d\mbox{ Lebesgue-a.e., and }(F+k)\cap(F+k')=\emptyset\mbox{ Lebesgue-a.e.\ for }k\neq k'\mbox{ in }\mathbb{Z}^d. \end{equation} So the problem of encoding the solenoid into the space $F\times\mathcal D^{\mathbb{N}}$ is equivalent to the following: {\bf Question.} Given an expansive $d\times d$ integer matrix $A$, is it possible to find a complete set of representatives $\mathcal D$ of $\mathbb{Z}^d/A^T\mathbb{Z}^d$ such that the attractor $X(A^T,\mathcal D)$ of the iterated function system $(\tau_d)_{d\in\mathcal D}$ tiles $\mathbb{R}^d$ by $\mathbb{Z}^d$? As explained in the introduction, while this is known to be true for dimension $d=1$ or $d=2$, there are some $5\times 5$ matrices for which such a $\mathcal D$ does not exist. \begin{definition} (a) We say that a subset $F\subset \mathbb{R}^d$ tiles $\mathbb{R}^d$ by a lattice $\Gamma$ iff the following two properties hold: $$\mathbb{R}^d=\bigcup_{\gamma\in\Gamma}(F+\gamma)$$ $$(F+\gamma)\cap(F+\gamma')=\emptyset,\quad\gamma,\gamma'\in\Gamma,\gamma\neq\gamma'.$$ We say that $F$ tiles $\mathbb{R}^d$ up to measure zero by the lattice $\Gamma$ if these two properties hold up to Lebesgue measure zero. (b) By a lattice we mean a rank-$d$ subgroup of $\mathbb{R}^d$. We shall be interested in sublattices $\Gamma\subset\mathbb{Z}^d$. For a fixed $\Gamma\subset\mathbb{Z}^d$ we say that $\Gamma$ is of index $k$ if the order of the quotient $\mathbb{Z}^d/\Gamma$ is $k$. \end{definition} \begin{lemma}\label{lem7_2} Suppose a relatively compact subset $F\subset \mathbb{R}^d$ tiles $\mathbb{R}^d$ by some lattice $\Gamma\subset\mathbb{Z}^d$.
Then the lattice $\Gamma$ is of index $k$ if and only if the mapping $F\ni x\mapsto e^{2\pi ix}\in\mathbb{T}^d$ is $k$-to-$1$ (up to measure zero). \end{lemma} \begin{proof} It follows from the definition that $F$ tiles by $\Gamma$ iff the restriction to $F$ of the quotient mapping $\mathbb{R}^d\rightarrow\mathbb{R}^d/\Gamma$ is bijective up to measure zero. Hence the assertion that the given map is $k$-to-$1$ is equivalent to the natural mapping $\mathbb{R}^d/\Gamma\rightarrow\mathbb{R}^d/\mathbb{Z}^d$ being a $k$-fold cover; but this is so because the fibers of this covering are the cosets of its kernel, which is isomorphic to $\mathbb{Z}^d/\Gamma$, a group of order $k$. \end{proof} \begin{proposition}\label{propmfd} Let $\Omega:=\mathcal D^{\mathbb{N}}$. Define the map $\mathfrak d:\mathbb{R}^d\times\Omega\rightarrow\mathcal S_A$ by \begin{equation}\label{eqmathfrakd} \mathfrak d(x,\omega_0\omega_1\dots)=(e^{2\pi ix},e^{2\pi i\tau_{\omega_0}x},e^{2\pi i\tau_{\omega_1}\tau_{\omega_0}x},\dots),\quad(x\in\mathbb{R}^d,\omega_0\omega_1\dots\in\Omega). \end{equation} \begin{enumerate} \item For each $x\in \mathbb{R}^d$, $k\in\mathbb{Z}^d$, and $\omega\in\Omega$ there is a unique $\omega'=\omega'(x,k,\omega)\in\Omega$ such that $\mathfrak d(x,\omega)=\mathfrak d(x+k,\omega')$. Moreover, if $x,x'\in\mathbb{R}^d$ and $\omega,\omega'\in\Omega$ are such that $\mathfrak d(x,\omega)=\mathfrak d(x',\omega')$, then $x'=x+k$ for some $k\in\mathbb{Z}^d$ and $\omega'=\omega'(x,k,\omega)$. \item Let $F$ be a subset of $\mathbb{R}^d$. The restriction of the map $\mathfrak d$ to $F\times\Omega$ is injective if and only if $F\cap (F+k)=\emptyset$ for all $k\in\mathbb{Z}^d$, $k\neq0$. \item The restriction of $\mathfrak d$ to $F\times\Omega$ is onto if and only if $$\bigcup_{k\in\mathbb{Z}^d}(F+k)=\mathbb{R}^d.$$ \item The restriction of $\mathfrak d$ to $F\times\Omega$ is bijective if and only if $F$ tiles $\mathbb{R}^d$ by $\mathbb{Z}^d$-translations.
\item Define the map $\rho:\mathbb{R}^d\times\Omega\rightarrow\mathbb{R}^d\times\Omega$ \begin{equation}\label{eqrho} \rho(x,\omega_0\omega_1\dots)=(\tau_{\omega_0}x,\omega_1\omega_2\dots),\quad(x\in\mathbb{R}^d,\omega_0\omega_1\dots\in\Omega). \end{equation} Then \begin{equation}\label{eqrhosigma} \mathfrak d\circ\rho=\sigma^{-1}\circ\mathfrak d. \end{equation} \end{enumerate} \end{proposition} \begin{proof} (i) We want $\tau_{\omega_0}x\equiv\tau_{\omega_0'}(x+k)\mod\mathbb{Z}^d$. So $(A^T)^{-1}\omega_0\equiv(A^T)^{-1}(k+\omega_0')\mod\mathbb{Z}^d$, i.e., $\omega_0'\equiv \omega_0-k\mod A^T\mathbb{Z}^d$. Since $\mathcal D$ is a complete set of representatives for $\mathbb{Z}^d/A^T\mathbb{Z}^d$, there is a unique $\omega_0'\in\mathcal D$ such that this is satisfied. Proceeding by induction we see that $\omega_1',\omega_2',\dots$ can be uniquely constructed such that $e^{2\pi i\tau_{\omega_n}\dots\tau_{\omega_0}x}=e^{2\pi i\tau_{\omega_n'}\dots\tau_{\omega_0'}(x+k)}$ for all $n\in\mathbb{N}$. If $\mathfrak d(x,\omega)=\mathfrak d(x',\omega')$ then $e^{2\pi ix}=e^{2\pi ix'}$, so $x'=x+k$ for some $k\in\mathbb{Z}^d$. The rest follows from the uniqueness of $\omega'(x,k,\omega)$. (ii) Suppose $\mathfrak d$ restricted to $F\times\Omega$ is injective. Then, by (i), we cannot have $x$ and $x+k$ in $F$ for some $k\neq 0$. Conversely, if $\mathfrak d$ is not injective on this set, then $\mathfrak d(x,\omega)=\mathfrak d(x',\omega')$ for some $x,x'\in F$ and $\omega,\omega'\in\Omega$ with $(x,\omega)\neq(x',\omega')$. Using (i) again we get that $x'=x+l$ for some $l\in\mathbb{Z}^d$, so $F\cap (F+l)\neq\emptyset$. (iii) Suppose the restriction of $\mathfrak d$ to $F\times\Omega$ is onto. Then for all $y\in\mathbb{R}^d$, there is $x\in F$ and $\omega\in\Omega$ such that $\mathfrak d(x,\omega)=\hat i(y)$. Then $e^{2\pi ix}=e^{2\pi iy}$, so $y=x+k$ for some $k\in\mathbb{Z}^d$, and therefore $y\in F+k$. This shows that $\cup(F+k)=\mathbb{R}^d$. Conversely, take $(z_n)_{n\in\mathbb{N}}\in\mathcal S_A$.
There exist $x_n\in\mathbb{R}^d$ such that $z_n=e^{2\pi ix_n}$ for all $n$. By hypothesis we can take $x_0\in F$. Since $z_1^A=z_0$, we have $A^Tx_1\equiv x_0\mod\mathbb{Z}^d$, so for some $\omega_0\in\mathcal D$, $\tau_{\omega_0}x_0\equiv x_1\mod\mathbb{Z}^d$. Then, by induction, we can construct $\omega_n\in\mathcal D$ such that $\tau_{\omega_n}\dots\tau_{\omega_0}x_0\equiv x_{n+1}\mod\mathbb{Z}^d$. This proves that $\mathfrak d(x_0,\omega)=(e^{2\pi ix_n})_n=(z_n)_n$. (iv) follows directly from (ii) and (iii). (v) requires nothing more than a simple computation. \end{proof} \begin{proposition} Suppose $F$ is a subset of $\mathbb{R}^d$ that tiles $\mathbb{R}^d$ by a sublattice $\Gamma$ of $\mathbb{Z}^d$ with $|\mathbb{Z}^d/\Gamma|=N$. Then the restriction of the map $\mathfrak d$ to $F\times\Omega$ is $N$-to-$1$. \end{proposition} \begin{proof} Since $\cup_{k\in\mathbb{Z}^d}(F+k)\supset \cup_{\gamma\in \Gamma}(F+\gamma)=\mathbb{R}^d$, it follows from Proposition \ref{propmfd}(iii) that the map is onto. We claim that for each $x\in F$ there are exactly $N$ points $k\in\mathbb{Z}^d$ such that $x+k\in F$. Indeed, let $d_1,\dots,d_{N}$ be a complete list of representatives for $\mathbb{Z}^d/\Gamma$. Then for each $i\in\{1,\dots,N\}$ there is a unique $\gamma_i\in\Gamma$ such that $x+d_i\in F+\gamma_i$. Then we cannot have $d_i-\gamma_i=d_{i'}-\gamma_{i'}$ for $i\neq i'$ (that would imply $d_i\equiv d_{i'}\mod\Gamma$), so the points $d_i-\gamma_i$ are distinct and $x+(d_i-\gamma_i)\in F$. Now we can use Proposition \ref{propmfd}(i) to see that $\mathfrak d$ restricted to $F\times\Omega$ is $N$-to-$1$. \end{proof} Consider now the compact attractor $X(A^T,\mathcal D)$ of the iterated function system $(\tau_d)_{d\in\mathcal D}$ given in \eqref{eqxa}. It is known (see \cite{LaWa96c, LaWa97}) that $X(A^T,\mathcal D)$ always tiles $\mathbb{R}^d$ (up to measure zero) by some sublattice $\Gamma$ of $\mathbb{Z}^d$.
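The shift relation \eqref{eqrhosigma} lends itself to a quick numerical check in the scalar case $A=2$, $\mathcal D=\{0,1\}$: applying $\rho$ and then $\mathfrak d$ simply drops the first coordinate of $\mathfrak d(x,\omega)$, which is the inverse shift $\sigma^{-1}$ on the solenoid. A sketch (the helper names \verb|tau| and \verb|decode| are ours):

```python
import cmath

A, D = 2, (0, 1)

def tau(d, x):
    # tau_d(x) = (A^T)^{-1}(x + d), scalar case A = 2
    return (x + d) / A

def decode(x, word):
    # First len(word)+1 coordinates of the decoding map d(x, omega):
    # (e^{2 pi i x}, e^{2 pi i tau_{w0} x}, e^{2 pi i tau_{w1} tau_{w0} x}, ...)
    zs = [cmath.exp(2j * cmath.pi * x)]
    for d in word:
        x = tau(d, x)
        zs.append(cmath.exp(2j * cmath.pi * x))
    return zs
```

The relation $\mathfrak d\circ\rho=\sigma^{-1}\circ\mathfrak d$ then amounts to `decode(tau(w[0], x), w[1:])` agreeing with `decode(x, w)[1:]` coordinate by coordinate; one can also verify the solenoid constraint $z_{n+1}^A=z_n$ along the output.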
\par The connection between lattice tilings and spectral theory was studied systematically in \cite{Fug74} and \cite{Ped96}, and we will introduce the spectrum in subsection 6.2 below. From the choice of digit set $\mathcal D$ for a fixed matrix $A$, we conclude that $X(A^T,\mathcal D)$ has non-empty interior. In fact the $d$-dimensional Lebesgue measure of $X(A^T,\mathcal D)$ must be an integer. It is $1$ if and only if $X(A^T,\mathcal D)$ tiles $\mathbb{R}^d$ by the ``unit lattice'' $\mathbb{Z}^d$. By spectral theory we are referring to the Hilbert space $L^2(X(A^T,\mathcal D))$. \begin{definition}\label{deftile} We say that $(A,\mathcal D)$ satisfies the tiling condition if $X(A^T,\mathcal D)$ tiles $\mathbb{R}^d$ (up to measure zero) by the lattice $\mathbb{Z}^d$. \end{definition} In subsection 6.2 below we give examples for $d = 2$ of pairs $(A,\mathcal D)$ which do not satisfy the tiling condition. Nonetheless, even if some particular pair $(A,\mathcal D)$ in the plane does not satisfy the tiling condition, it is possible to change the digit set $\mathcal D$ into a different one $\mathcal D'$, while keeping the matrix $A$ fixed, such that the modified pair $(A,\mathcal D')$ satisfies the tiling condition. But, as we noted, by going to higher dimensions ($d = 5$) there are matrices $A$ for which no $\mathcal D$ may be chosen making $(A,\mathcal D)$ satisfy the tiling condition. \begin{lemma}\label{lemnooverlap} For all $d,d'\in\mathcal D$, $d\neq d'$, the intersection $\tau_d(X(A^T,\mathcal D))\cap\tau_{d'}(X(A^T,\mathcal D))$ has Lebesgue measure zero.
\end{lemma} \begin{proof} We have the following relations, with $\mu$ the Lebesgue measure: $$X(A^T,\mathcal D)=\bigcup_{d\in\mathcal D}\tau_d(X(A^T,\mathcal D)),$$ $$\mu(\tau_d(X(A^T,\mathcal D)))=\frac{1}{|\det A|}\mu(X(A^T,\mathcal D)).$$ As a result we get $$\mu(X(A^T,\mathcal D))=\sum_{d\in\mathcal D}\frac{1}{|\det A|}\mu(X(A^T,\mathcal D))-\mu(\mbox{ combined overlap sets}).$$ Since $|\mathcal D|=|\det A|$, the sum equals $\mu(X(A^T,\mathcal D))$, so the combined overlap sets must have measure zero. \end{proof} \begin{proposition}\label{proprho} Suppose $(A,\mathcal D)$ satisfy the tiling condition. Then the function $\rho$ defined in \eqref{eqrho} maps $X(A^T,\mathcal D)\times\Omega$ onto itself, and the restriction of $\rho$ to $X(A^T,\mathcal D)\times\Omega$ is injective a.e., in the sense that the set of points $x\in X(A^T,\mathcal D)$ with the property that there exist $x'\in X(A^T,\mathcal D)$ and $\omega,\omega'\in\Omega$ with $(x,\omega)\neq(x',\omega')$ such that $\rho(x,\omega)=\rho(x',\omega')$, has Lebesgue measure zero. The inverse $\rho^{-1}$ of this restriction is defined by \begin{equation}\label{eqrhoi} \rho^{-1}(x,\omega_0\omega_1\dots)=(A^Tx-\omega_x,\omega_x\omega_0\omega_1\dots),\quad(x\in X(A^T,\mathcal D),\omega_0\omega_1\dots\in\Omega), \end{equation} where $\omega_x$ is the unique element of $\mathcal D$ with the property $x\in\tau_{\omega_x}(X(A^T,\mathcal D))$. \end{proposition} \begin{proof} Since \begin{equation}\label{eqatt} X(A^T,\mathcal D)=\cup_{d\in\mathcal D}\tau_d(X(A^T,\mathcal D)), \end{equation} it follows that $\rho$ maps $X(A^T,\mathcal D)\times\Omega$ onto itself. Suppose now $\rho(x,\omega)=\rho(x',\omega')$ for $x,x'\in X(A^T,\mathcal D)$, $\omega,\omega'\in\Omega$, and $(x,\omega)\neq (x',\omega')$. So either $x\neq x'$ or $\omega\neq \omega'$. When $\omega\neq \omega'$, since $\rho(x,\omega)=\rho(x',\omega')$ it follows that $\omega_1\omega_2\dots=\omega_1'\omega_2'\dots$, so $\omega_0\neq \omega_0'$. Also $\tau_{\omega_0}x=\tau_{\omega_0'}x'$.
But $\tau_{\omega_0}(X(A^T,\mathcal D))\cap\tau_{\omega_0'}(X(A^T,\mathcal D))$ has measure zero (see Lemma \ref{lemnooverlap}), so $x$ must be in a set of measure zero. If $\omega=\omega'$ then $\omega_0=\omega_0'$, so $\tau_{\omega_0}x=\tau_{\omega_0'}x'$ implies $x=x'$. This proves the injectivity of $\rho$. Since the sets $\tau_d(X(A^T,\mathcal D))$ intersect only in sets of measure zero (Lemma \ref{lemnooverlap}), the element $\omega_x$ of $\mathcal D$ is well defined for almost every $x$. Then it is easy to check that $\rho^{-1}$ is indeed the inverse of the restriction of $\rho$. \end{proof} \subsection{Encodings of cyclic paths} Let $\mathcal D$ be a complete set of representatives of $\mathbb{Z}^d/A^T\mathbb{Z}^d$, and let $\Omega:=\mathcal D^{\mathbb{N}}$. Consider now a cycle $C:=\{\zeta_0,\zeta_1,\dots,\zeta_{p-1}\}$ and suppose $\zeta_j=e^{2\pi i\theta_j}$ for some $\theta_j\in\mathbb{R}^d$. If in addition $A$ and $\mathcal D$ satisfy the tiling condition, we can pick $\theta_j$ in $X(A^T,\mathcal D)$. Then, since $\zeta_1^A=\zeta_0$, there is an $l_0\in\mathcal D$ such that $\theta_1=\tau_{l_0}\theta_0$. Continuing this process, we can find $l_0,l_1,\dots,l_{p-1}$ such that $\tau_{l_j}\theta_j=\theta_{j+1}$ for $j\in\{0,\dots,p-2\}$ and $\tau_{l_{p-1}}\theta_{p-1}=\theta_0$. Thus the point $\chi_C=(\zeta_0,\dots,\zeta_{p-1},\zeta_0,\dots,\zeta_{p-1},\zeta_0,\dots)$ from $\mathcal S_A$ can be encoded by $(\theta_0,l_0\dots l_{p-1}l_0\dots l_{p-1}l_0\dots)$, i.e., by an infinite repetition of the finite word $l_0\dots l_{p-1}$. \begin{definition}\label{defcycle} A finite set in $\mathbb{R}^d$, $C=\{\theta_0,\dots,\theta_{p-1}\}$, is called a {\it cycle} if there exist $l_0,\dots,l_{p-1}\in\mathcal {D}$ such that $\tau_{l_0}\theta_0=\theta_{1},\tau_{l_1}\theta_{1}=\theta_{2},\dots,\tau_{l_{p-2}}\theta_{p-2}=\theta_{p-1}$ and $\tau_{l_{p-1}}\theta_{p-1}=\theta_{0}$.
Thus $\theta_0$ is the fixed point of $\tau_{l_{p-1}}\dots\tau_{l_0}$, $\theta_1$ is the fixed point of $\tau_{l_0}\tau_{l_{p-1}}\dots\tau_{l_1}$, $\dots$, $\theta_{p-1}$ is the fixed point of $\tau_{l_{p-2}}\dots\tau_{l_0}\tau_{l_{p-1}}$. The points $\theta_0,\dots,\theta_{p-1}$ are called {\it cyclic points}. We say that $\theta_0$ is the {\it cyclic point associated to} $l_0\dots l_{p-1}$, and we say that $C=\{\theta_0,\dots,\theta_{p-1}\}$ is the {\it cycle} associated to $l_0\dots l_{p-1}$. \end{definition} Let $C:=\{e^{2\pi i\theta_0},\dots,e^{2\pi i\theta_{p-1}}\}$ be a cycle associated to $l_0\dots l_{p-1}$. Take now a point $(z_n)_{n\in\mathbb{N}}$ in $\mathcal S_A(C)$. We want to see what the encodings of points in $\mathcal S_A(C)$ look like. By Theorem \ref{prop3_9}, the point $(z_n)_{n\in\mathbb{N}}$ is in one of the sets $\sigma_A^{-j}(\chi_C)\hat i(\mathbb{R}^d)$. Suppose $z_0=e^{2\pi ix}$ for some $x\in\mathbb{R}^d$. Then $(z_n)_{n\in\mathbb{N}}=\sigma_A^{-j}(\chi_C)\hat i(y)$ for some $y\in\mathbb{R}^d$, and, looking at position 0, $x\equiv \theta_j+y$ so $y=x+k-\theta_j$ for some $k\in\mathbb{Z}^d$. Thus $$(z_n)_{n\in\mathbb{N}}=\sigma_A^{-j}(\chi_C)\hat i(x+k-\theta_j)=\hat i_C(x+k-\theta_j,j).$$ On the other hand, according to the previous discussion, $(z_n)_{n\in\mathbb{N}}$ is equal to $(e^{2\pi ix},e^{2\pi i\tau_{\omega_0}x},e^{2\pi i\tau_{\omega_1}\tau_{\omega_0}x},\dots)$ for some infinite word $\omega_0\omega_1\dots$. Thus we must have some precise correspondence between the pair $(k,j)\in\mathbb{Z}^d\times\mathbb{Z}_p$ and the infinite word $\omega_0\omega_1\dots\in\Omega$. Since $z_n$ approaches the cycle $C$ as $n\rightarrow\infty$, one might expect that the infinite word $\omega_0\omega_1\dots$ ends in a repetition of the finite word $l_0\dots l_{p-1}$ that generates the cycle $C$. While this is often true, there might be some other cycles $C'$, congruent to $C$ $\mod\mathbb{Z}^d$, that will affect this encoding $\omega$.
In any case, the word $\omega_0\omega_1\dots$ that corresponds to $(k,j)$ will be eventually periodic, and it will end in an infinite repetition of a finite word that corresponds to such a cycle $C'$. \begin{definition} We denote by $\underline{l_0\dots l_{p-1}}$ the infinite word in $\Omega$ obtained by the infinite repetition of the word $l_0\dots l_{p-1}$. Let $$\Omega_C:=\{\omega_0\dots\omega_n\underline{l_0\dots l_{p-1}}\,|\,\omega_0,\dots,\omega_n\in\mathcal D,n\in\mathbb{N}\},$$ i.e., the set of infinite words that end in an infinite repetition of the word $l_0\dots l_{p-1}$. \end{definition} There are some cycles which have cycle points that differ by integers. Such cycles would make our encoding ambiguous, so we avoid this situation. \begin{example} Let $d=1$, $A=2$ and $\mathcal D=\{0,3\}$. Then $\tau_0x=x/2$, $\tau_3x=(x+3)/2$. It is easy to check that the attractor $X(A^T,\mathcal D)$ is $[0,3]$. The set $\{1,2\}$ is a cycle that corresponds to $\underline{30}$, and its points differ by an integer. \end{example} \begin{definition} We say that the cycle $C=\{\theta_0,\dots,\theta_{p-1}\}$ is {\it simple} if $\theta_j\not\equiv\theta_{j'}\mod\mathbb{Z}^d$ for $j\neq j'$. \end{definition} Following \cite{BrJo99}, for a simple cycle $C=\{\theta_0,\dots,\theta_{p-1}\}$, we define a map $\mathcal R_C$ on the set $\mathbb{Z}^d-C$. Note that since the cycle is simple, the sets $\mathbb{Z}^d-\theta_j$ are mutually disjoint. The map $\mathcal R_C$ is an extension of division with remainder; here we ``divide'' by $A^T$. For each point $a-\theta_j\in\mathbb{Z}^d-\theta_j$, there is a unique ``quotient'' $b-\theta_{j+1}$ in $\mathbb{Z}^d-\theta_{j+1}$ and a unique ``remainder'' $d_0\in\mathcal D$ such that $$a-\theta_j=A^T(b-\theta_{j+1})+d_0.$$ Then $\mathcal R_C(a-\theta_j)$ is defined as the quotient: $\mathcal R_C(a-\theta_j)=b-\theta_{j+1}$.
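Returning for a moment to the one-dimensional examples, the cycle associated to a given finite word (in the sense of Definition \ref{defcycle}) can be computed exactly, since each $\tau_l$ is affine. The following is a minimal sketch, assuming $A=2$ and exact rational arithmetic (the helper names are ours); it reproduces the cycle $\{1,2\}$ for the word $30$ with digit set $\{0,3\}$, and the cycle $\{1/3,2/3\}$ for the word $10$ with digit set $\{0,1\}$.

```python
from fractions import Fraction

def tau(l, x):
    # tau_l(x) = (x + l) / 2 for the one-dimensional dilation A = 2
    return (x + l) / 2

def cycle_of_word(word):
    """Cycle {theta_0, ..., theta_{p-1}} associated to l_0 ... l_{p-1}:
    theta_0 is the fixed point of tau_{l_{p-1}} o ... o tau_{l_0},
    and theta_{j+1} = tau_{l_j}(theta_j)."""
    c = Fraction(0)
    for l in word:                 # track the intercept of the composed affine map
        c = (c + l) / 2
    # the composition is x -> x / 2^p + c; solve theta = theta / 2^p + c
    theta = c / (1 - Fraction(1, 2 ** len(word)))
    cycle = [theta]
    for l in word[:-1]:
        cycle.append(tau(l, cycle[-1]))
    return cycle
```

For instance `cycle_of_word([3, 0])` returns the cyclic points $1$ and $2$, whose difference is an integer, so that cycle is not simple.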
In this construction we used the fact that $A^T\theta_j\equiv\theta_{j-1}\mod\mathbb{Z}^d$, because $\tau_{l_{j-1}}\theta_{j-1}=\theta_j$, for all $j\in\mathbb{Z}$. Recall also that we use the notation $\theta_j:=\theta_{j\mod p}$ for $j\in\mathbb{Z}$. \begin{definition} Let $C=\{\theta_0,\dots,\theta_{p-1}\}$ be a simple cycle. On $\mathbb{Z}^d-C=\bigcup_{j=0}^{p-1}(\mathbb{Z}^d-\theta_j)$ we define the map $\mathcal R_C$ as follows: for each $a\in\mathbb{Z}^d$ and $j\in\{0,\dots,p-1\}$ there exist a unique $b\in\mathbb{Z}^d$ and a unique $d_0\in\mathcal D$ such that \begin{equation}\label{eqdefrc} a-\theta_j=A^T(b-\theta_{j+1})+d_0.\mbox{ We define }\mathcal R_C(a-\theta_j):=b-\theta_{j+1}. \end{equation} Therefore $\mathcal R_C(a-\theta_j)$ is determined by $$(a-\theta_j)-A^T\mathcal R_C(a-\theta_j)\in\mathcal D.$$ Also, $-\mathcal R_C(a-\theta_j)=\tau_{d_0}(-(a-\theta_j))$, where $d_0$ is the unique element of $\mathcal D$ such that $\tau_{d_0}(-(a-\theta_j))\in\theta_{j+1}-\mathbb{Z}^d$. \end{definition} The encoding of $(k,j)\in\mathbb{Z}^d\times\mathbb{Z}_p$ is obtained by a generalized Euclidean algorithm: take $k-\theta_j$ in $\mathbb{Z}^d-\theta_j$, then ``divide'' by $A^T$ and keep the remainder: $k-\theta_j=A^T\mathcal R_C(k-\theta_j)+\omega_0$. Then take the quotient $\mathcal R_C(k-\theta_j)$, divide by $A^T$ and keep the remainder $\omega_1$, and so on to infinity. The infinite sequence of remainders will give us $\omega$. But first, we need some properties of the map $\mathcal R_C$. \begin{proposition}\label{proprc} Let $C=\{\theta_0,\dots,\theta_{p-1}\}$ be a simple cycle. Let $X(A^T,\mathcal D)$ be the attractor of the iterated function system $(\tau_{d})_{d\in \mathcal D}$. \begin{enumerate} \item A point $t\in C-\mathbb{Z}^d$ is a cycle point for the iterated function system $(\tau_{d})_{d\in\mathcal D}$ if and only if there is some $n\geq 1$ such that $\mathcal R_C^n(-t)=-t$, i.e., $-t$ is a periodic point for $\mathcal R_C$.
Moreover if $t$ is associated to $m_0\dots m_{q-1}$ then $q$ is a multiple of $p$ and $$m_n=\mathcal R_C^n(-t)-A^T\mathcal R_C^{n+1}(-t),\quad(n\in\mathbb{N}).$$ \item For every $t\in C-\mathbb{Z}^d$ there exists an $l\geq0$ such that $\mathcal R_C^l(-t)$ is periodic for $\mathcal R_C$, i.e., every point in $\mathbb{Z}^d-C$ is eventually periodic for $\mathcal R_C$. Moreover $-\mathcal R_C^l(-t)$ is in $(C-\mathbb{Z}^d)\cap X(A^T,\mathcal D)$. \item The intersection $(C-\mathbb{Z}^d)\cap X(A^T,\mathcal D)$ consists exactly of the negatives of the periodic points for $\mathcal R_C$. \end{enumerate} \end{proposition} \begin{proof} (i) If $t_0=t\in\theta_j-\mathbb{Z}^d$ is a cyclic point for $(\tau_{d})_{d\in \mathcal D}$, then $\tau_{m_0}t_0=t_1$, $\tau_{m_1}t_1=t_2,\dots,\tau_{m_{q-1}}t_{q-1}=t_0$ for some $m_0,\dots, m_{q-1}\in\mathcal D$ and some $t_1,\dots, t_{q-1}\in\mathbb{R}^d$. Then $t_{q-1}=A^Tt_0-m_{q-1}$ so $t_{q-1}\in\theta_{j-1}-\mathbb{Z}^d$ (because $A^T\theta_j\equiv \theta_{j-1}$). By induction $t_l\in\theta_{j+l-q}-\mathbb{Z}^d$ for $l\in\{q-1,q-2,\dots,0\}$. Since $t_0\in\theta_j-\mathbb{Z}^d$ and also $t_0\in\theta_{j-q}-\mathbb{Z}^d$, as the cycle is simple, it follows that $q$ must be a multiple of $p$. We have $\tau_{m_0}t_0=t_1$ so $-t_0=A^T(-t_1)+m_0$. Also $-t_1\in \mathbb{Z}^d-\theta_{j+1}$. This means that $\mathcal R_C(-t_0)=-t_1$ and $m_0=-t_0-A^T\mathcal R_C(-t_0)$. By induction $\mathcal R_C(-t_{n})=-t_{n+1}$ and $m_n=-t_n-A^T\mathcal R_C(-t_n)$. This proves one direction. For the converse, if $\mathcal R_C^q(-t_0)=-t_0$, for some $t_0\in\theta_j-\mathbb{Z}^d$, then for each $n$ there is some $m_n\in\mathcal D$ such that: $\mathcal R_C^n(-t_0)=A^T\mathcal R_C^{n+1}(-t_0)+m_n$. Thus the sequence $\{m_n\}$ has period $q$, and $\tau_{m_n}(-\mathcal R_C^n(-t_0))=-\mathcal R_C^{n+1}(-t_0)$, which proves that $t_0$ is in the cycle $\{-(-t_0),-\mathcal R_C(-t_0),\dots,-\mathcal R_C^{q-1}(-t_0)\}$.
(ii) Since $A$ is expansive there is a norm on $\mathbb{R}^d$ such that for some $0<c<1$, $\|(A^T)^{-1}x\|\leq c\|x\|$ for all $x\in\mathbb{R}^d$. Then if $R>\frac{c\max_{d\in\mathcal D}\|d\|}{1-c}$, $$\tau_{d}(B(0,R))\subset B(0,R).$$ Indeed $\|\tau_{d}x\|\leq c(\|x\|+\|d\|)<c(R+\|d\|)<R$ for all $x\in B(0,R)$ and all $d\in\mathcal{D}$. \par Take now $a-\theta_j\in\mathbb{Z}^d-C$. Take some $R>\max\{\|a-\theta_j\|,\frac{c\max_{d\in\mathcal D}\|d\|}{1-c}\}$. Then note that $\mathcal R_C(a-\theta_j)=-\tau_{d}(-(a-\theta_j))$ for some $d\in\mathcal D$. Therefore $\mathcal R_C$ maps $B(0,R)\cap\cup_j(\mathbb{Z}^d-\theta_j)$ into itself. So $\{\mathcal R_C^n(a-\theta_j)\,|\,n\in\mathbb{N}\}$ is a finite set. Therefore there exist some $n\in\mathbb{N}$ and $q\geq 1$ such that $\mathcal R_C^n(a-\theta_j)=\mathcal R_C^{n+q}(a-\theta_j)$. Thus $\mathcal R_C^n(a-\theta_j)$ is periodic. From (i) we have that $-\mathcal R_C^n(a-\theta_j)$ is cyclic for $(\tau_{d})_{d\in\mathcal D}$. So $-\mathcal R_C^n(a-\theta_j)$ is in the attractor $X(A^T,\mathcal D)$. (iii) From (i) and (ii) it is clear that the periodic points for $\mathcal R_C$ lie in $(\mathbb{Z}^d-C)\cap(-X(A^T,\mathcal D))$. For the other inclusion take $t_1\in(\mathbb{Z}^d-C)\cap (-X(A^T,\mathcal D))$. Then using the formula \eqref{eqxa}, there exist $d_1,d_2,\dots\in\mathcal D$ such that $$-t_1=(A^T)^{-1}d_1+(A^T)^{-2}d_2+\dots$$ Let $-t_n:=(A^T)^{-1}d_{n}+(A^T)^{-2}d_{n+1}+\dots$. We have \begin{equation}\label{eqtn} A^T(t_n)+d_n=t_{n+1},\quad(n\in\mathbb{N}). \end{equation} Since $t_1\in\mathbb{Z}^d-C$, equation \eqref{eqtn} implies that $t_2$ is in $\mathbb{Z}^d-C$ and $t_1=\mathcal R_C(t_2)$. By induction $t_{n+1}$ is in $\mathbb{Z}^d-C$ and $\mathcal R_C(t_{n+1})=t_n$ for all $n\in\mathbb{N}$. But we have also $t_n\in -X(A^T,\mathcal D)$. And, since $(\mathbb{Z}^d-C)\cap (-X(A^T,\mathcal D))$ is finite, there exist $n,m\geq 1$ such that $t_n=t_{n+m}$.
This implies that $\mathcal R_C^m(t_n)=\mathcal R_C^m(t_{n+m})=t_n$. Since $\mathcal R_C^{n-1}(t_n)=t_1$, it follows that $t_1$ is periodic for $\mathcal R_C$. \end{proof} \begin{theorem}\label{thenccycl} Let $C:=\{\theta_0,\dots,\theta_{p-1}\}$ be a simple cycle. \begin{enumerate} \item For each $k\in\mathbb{Z}^d$ and each $j\in\mathbb{Z}_p$ there is a unique $\omega(k,j)=\omega_0\omega_1\dots\in\Omega$ such that for all $x\in\mathbb{R}^d$, \begin{equation}\label{eqomegak} (e^{2\pi i x},e^{2\pi i\tau_{\omega_0}x},e^{2\pi i\tau_{\omega_1}\tau_{\omega_0}x},\dots)=\hat i_C(x+k-\theta_j,j)=\sigma_A^{-j}(\chi_C)\hat i(x+k-\theta_j)=(e^{2\pi i((A^T)^{-n}(x+k-\theta_j)+\theta_{j+n})})_{n\in\mathbb{N}}. \end{equation} Moreover there exists a cycle $C'\subset (C-\mathbb{Z}^d)\cap X(A^T,\mathcal D)$ such that $\omega(k,j)\in\Omega_{C'}$. \item The infinite word $\omega(k,j)$ can be constructed as follows: \begin{equation} \omega_n=\mathcal R_C^n(k-\theta_j)-A^T\mathcal R_C^{n+1}(k-\theta_j),\quad(n\in\mathbb{N}). \label{eq:defomegank} \end{equation} \item Suppose $C'$ is a cycle in $(C-\mathbb{Z}^d)\cap X(A^T,\mathcal D)$, and $\omega\in\Omega_{C'}$. Then there is a unique $(k(\omega),j(\omega))\in\mathbb{Z}^d\times\mathbb{Z}_p$ such that for all $x\in\mathbb{R}^d$, \begin{equation}\label{eqkomega} (e^{2\pi i x},e^{2\pi i\tau_{\omega_0}x},e^{2\pi i\tau_{\omega_1}\tau_{\omega_0}x},\dots)=\hat i_C(x+k(\omega)-\theta_{j(\omega)},j(\omega)). \end{equation} \item $(k(\omega),j(\omega))$ can be constructed as follows: if $\omega\in\Omega_{C'}$ has the form $\omega_0\dots\omega_{np-1}\underline{m_0\dots m_{q-1}}$, then the fixed point $\eta_0$ of $\tau_{m_{q-1}}\dots\tau_{m_0}$ belongs to $\theta_{j(\omega)}-\mathbb{Z}^d$ for some unique $j(\omega)\in\mathbb{Z}_p$. And \begin{equation}\label{eqdefkomega} k(\omega)=\omega_0+\dots+(A^T)^{np-1}\omega_{np-1}+\theta_{j(\omega)}-(A^T)^{np}\eta_0.
\end{equation} \item Let $$\tilde \Omega_C:=\bigcup\{\Omega_{C'}\,|\, C'\mbox{ cycle in }(C-\mathbb{Z}^d)\cap X(A^T,\mathcal D)\}.$$ The maps $$\mathfrak e_C:\mathbb{Z}^d\times\mathbb{Z}_p\rightarrow \tilde \Omega_C, \mathfrak e_C(k,j)=\omega(k,j),$$ and $$\mathfrak d_C:\tilde \Omega_C\rightarrow\mathbb{Z}^d\times\mathbb{Z}_p, \mathfrak d_C(\omega)=(k(\omega),j(\omega))$$ are inverse to each other. \end{enumerate} \end{theorem} \begin{proof} Let $(k,j)\in\mathbb{Z}^d\times\mathbb{Z}_p$ and let $\omega(k,j)$ be defined as in (ii). We prove that the relation \eqref{eqomegak} is satisfied. We have $$A^T\mathcal R_C(k-\theta_j)+\omega_0=k-\theta_j, A^T\mathcal R_C^2(k-\theta_j)+\omega_1=\mathcal R_C(k-\theta_j),\dots$$ Therefore $$\mathcal R_C(k-\theta_j)=(A^T)^{-1}(k-\theta_j)-(A^T)^{-1}\omega_0, \mathcal R_C^2(k-\theta_j)=(A^T)^{-2}(k-\theta_j)-(A^T)^{-2}\omega_0-(A^T)^{-1}\omega_1,\dots$$ By induction $$\mathcal R_C^n(k-\theta_j)=(A^T)^{-n}(k-\theta_j)-(A^T)^{-n}\omega_0-\dots-(A^T)^{-1}\omega_{n-1}.$$ So $$\tau_{\omega_{n-1}}\dots\tau_{\omega_0}x=(A^T)^{-n}x +(A^T)^{-n}\omega_0+\dots+(A^T)^{-1}\omega_{n-1}=(A^T)^{-n}x+(A^T)^{-n}(k-\theta_j)-\mathcal R_C^n(k-\theta_j).$$ But $\mathcal R_C^n(k-\theta_j)\in\mathbb{Z}^d-\theta_{j+n}$ so $$\tau_{\omega_{n-1}}\dots\tau_{\omega_0}x\equiv (A^T)^{-n}(x+k-\theta_j)+\theta_{j+n}.$$ Therefore the relation \eqref{eqomegak} is satisfied. Next we prove the uniqueness of $\omega$. Suppose $\omega'\in\Omega$ also satisfies \eqref{eqomegak}. Then $\tau_{\omega_0'}x\equiv\tau_{\omega_0}x$ so $(A^T)^{-1}\omega_0'\equiv(A^T)^{-1}\omega_0$ which implies that $\omega_0'-\omega_0\in A^T\mathbb{Z}^d$. Since $\mathcal D$ is a complete set of representatives for $\mathbb{Z}^d/A^T\mathbb{Z}^d$, it follows that $\omega_0'=\omega_0$. By induction $\omega_n'=\omega_n$ so $\omega'=\omega$. To see that $\omega(k,j)$ is in some $\Omega_{C'}$ for a cycle $C'$ in $(C-\mathbb{Z}^d)\cap X(A^T,\mathcal D)$ we use Proposition \ref{proprc}.
There exists an $l$ such that $\mathcal R_C^l(k-\theta_j)$ is periodic for $\mathcal R_C$, so $-\mathcal R_C^l(k-\theta_j)$ is a cycle point for $(\tau_{d})_{d\in\mathcal D}$. Let $C'$ be its corresponding cycle. By Proposition \ref{proprc}, $C'$ is contained in $(C-\mathbb{Z}^d)\cap X(A^T,\mathcal D)$. By the construction of $\omega(k,j)$ given in (ii), and by Proposition \ref{proprc}(i), we see that $\omega(k,j)\in\Omega_{C'}$. Now let $\omega$ be of the form given in (iii). And let $(k,j):=(k(\omega),j(\omega))$ be as in (iv). Since $\eta_0\in \theta_{j}-\mathbb{Z}^d$ it follows that $(A^T)^{np}\eta_0$ is also in $\theta_{j}-\mathbb{Z}^d$ so $k=k(\omega)$ is indeed an integer. We check that if for $(k,j)$ we construct $\nu=\nu_0\nu_1...=\omega(k,j)$ as in (ii), then $\nu=\omega$. This will prove also (v). From \eqref{eqdefkomega}, with $t_0=-(k-\theta_j)$, we have $$t_1:=\tau_{\omega_0}(t_0)=\tau_{\omega_0}(-(k-\theta_j))=-\omega_1-\dots-(A^T)^{np-2}\omega_{np-1}+(A^T)^{np-1}\eta_0\in\theta_{j+1}-\mathbb{Z}^d,$$ since $\eta_0\in \theta_j-\mathbb{Z}^d$ implies that $(A^T)^{np-1}\eta_0\in\theta_{j-(np-1)}-\mathbb{Z}^d=\theta_{j+1}-\mathbb{Z}^d$. This shows that $\mathcal R_C(-t_0)=-t_1$, and $\nu_0=\omega_0$. By induction we obtain $\nu_1=\omega_1,\dots,\nu_{np-1}=\omega_{np-1}$ and that $-\eta_0=\mathcal R_C^{np}(-t_0)$. And since the cycle $C'$ of $\eta_0$ is given by $\underline{m_0\dots m_{q-1}}$, it follows by Proposition \ref{proprc}(i) that $\nu=\omega_0\dots\omega_{np-1}\underline{m_0\dots m_{q-1}}$. For the uniqueness of $(k,j)$, suppose $(k,j)$ and $(k',j')$ satisfy \eqref{eqkomega}. Then $$(A^T)^{-n}(k-\theta_j)+\theta_{n+j}\equiv (A^T)^{-n}(k'-\theta_{j'})+\theta_{n+j'},\quad(n\in\mathbb{N}).$$ Taking the limit as $n\rightarrow\infty$ through multiples of $p$, we obtain $\theta_j-\theta_{j'}\in\mathbb{Z}^d$. Since the cycle is simple, $j=j'$.
Therefore $$(A^T)^{-n}(k-\theta_j)\equiv (A^T)^{-n}(k'-\theta_{j}),\quad(n\in\mathbb{N}).$$ But this means that $\hat i(k-\theta_j)=\hat i(k'-\theta_{j})$, and by Proposition \ref{propembed1} $\hat i$ is injective so $k=k'$. \end{proof} We summarize our results in the following corollary. \begin{corollary}\label{corsum} Suppose $(A,\mathcal D)$ satisfy the tiling condition (Definition \ref{deftile}). Let $C=\{\theta_0,\dots,\theta_{p-1}\}$ be a simple cycle. Let $$\tilde \Omega_C:=\bigcup\{\Omega_{C'}\,|\, C'\mbox{ cycle in }(C-\mathbb{Z}^d)\cap X(A^T,\mathcal D)\}.$$ \begin{enumerate} \item The maps $$\mathfrak d:X(A^T,\mathcal D)\times\tilde \Omega_C\rightarrow\mathcal S_A(C),\quad \mathfrak d(x,\omega)=(e^{2\pi ix},e^{2\pi i\tau_{\omega_0}x},e^{2\pi i\tau_{\omega_1}\tau_{\omega_0}x},\dots),$$ $$\hat i_C:\mathbb{R}^d\times\mathbb{Z}_p\rightarrow \mathcal S_A(C),\quad \hat i_C(x,j)=(e^{2\pi i((A^T)^{-n}x+\theta_{n+j})})_{n\in\mathbb{N}}$$ are bijections. \item $$\hat i_C^{-1}(\mathfrak d(x,\omega))=(x-\theta_{j(\omega)}+k(\omega),j(\omega)),\quad(x\in X(A^T,\mathcal D),\omega\in\tilde\Omega_C),$$ where $k(\omega),j(\omega)$ are defined in Theorem \ref{thenccycl}(iv). $$\mathfrak d^{-1}(\hat i_C(x,j))=(y,\omega(k,j)),\quad(x\in\mathbb{R}^d, j\in\mathbb{Z}_p),$$ where $y\in X(A^T,\mathcal D)$, $k\in\mathbb{Z}^d$ are uniquely defined by $x+\theta_j=y+k$, and $\omega(k,j)$ is defined in Theorem \ref{thenccycl}(ii). 
\item The following diagram is commutative: $$ \begin{CD} X(A^T,\mathcal D)\times\tilde \Omega_C @>\mathfrak d>> \mathcal S_A(C) @<\hat i_C<< \mathbb{R}^d\times\mathbb{Z}_p\\ @VV\rho^{-1}V @VV\sigma_A V @VV \alpha_{A,p} V\\ X(A^T,\mathcal D)\times\tilde \Omega_C@>\mathfrak d>> \mathcal S_A(C) @<\hat i_C<< \mathbb{R}^d\times\mathbb{Z}_p \end{CD} $$ \end{enumerate} \end{corollary} \begin{corollary} With the notations in Theorem \ref{thenccycl}, we have \begin{equation}\label{eqjomega} j(\omega_1\omega_2\dots)=j(\omega_0\omega_1\dots)+1,\quad(\omega_0\omega_1\dots\in\tilde \Omega_C). \end{equation} \begin{equation} (A^T)^{-n}(x-\theta_{j(\omega_0\omega_1\dots)}+k(\omega_0\omega_1\dots))=\tau_{\omega_{n-1}}\dots\tau_{\omega_0}x-\theta_{j(\omega_0\omega_1\dots)+n}+k(\omega_n\omega_{n+1}\dots) \end{equation} for all $x\in X(A^T,\mathcal D),\omega_0\omega_1\dots\in\tilde\Omega_C$. \end{corollary} \begin{proof} We apply the commutative diagram in Corollary \ref{corsum} to $\rho^n$: $$\hat i_C^{-1}\mathfrak d\rho^n(x,\omega)=\hat i_C^{-1}\mathfrak d(\tau_{\omega_{n-1}}\dots\tau_{\omega_0}x,\omega_n\omega_{n+1}\dots)=(\tau_{\omega_{n-1}}\dots\tau_{\omega_0}x-\theta_{j(\omega_n\omega_{n+1}\dots)}+k(\omega_n\omega_{n+1}\dots),j(\omega_n\omega_{n+1}\dots)).$$ $$\alpha_{A,p}^{-n}\hat i_C^{-1}\mathfrak d(x,\omega)=\alpha_{A,p}^{-n}(x-\theta_{j(\omega_0\omega_1\dots)}+k(\omega_0\omega_1\dots),j(\omega_0\omega_1\dots))=$$$$ ((A^T)^{-n}(x-\theta_{j(\omega_0\omega_1\dots)}+k(\omega_0\omega_1\dots)),j(\omega_0\omega_1\dots)+n).$$ Since the two quantities are equal to each other (by the commutative diagram in Corollary \ref{corsum}), the relations follow. \end{proof} With the aid of our cycles and associated encoding/decoding mappings we are now able to state our main result regarding super representations. Notice that the introduction of cycles yields the following improvement of Theorem \ref{prop3_9} in section 4 above.
\begin{corollary} Suppose $(A,\mathcal D)$ satisfies the tiling condition, and let $C$ be a simple cycle of length $p$. On $X(A^T,\mathcal D)\times\tilde\Omega_C$ define the measure $\breve\mu$ by $$\int_{X(A^T,\mathcal D)\times\tilde\Omega_C}f\,d\breve\mu=\int_{X(A^T,\mathcal D)}\sum_{\omega\in\tilde\Omega_C}f(x,\omega)\,dx.$$ Define the operators $\breve T_k$, $k\in\mathbb{Z}^d$ and $\breve U$ on $L^2(X(A^T,\mathcal D)\times\tilde\Omega_C,\breve\mu)$ by $$\breve T_kf(x,\omega)=e^{2\pi ik\cdot x}f(x,\omega),\quad(x\in X(A^T,\mathcal D),\omega\in\tilde\Omega_C),$$ $$\breve Uf=\sqrt{|\det A|}f\circ\rho^{-1}.$$ Then $\{\breve T_k,\breve U\}$ define a unitary representation of $G_A$ and $\mathcal W:L^2(\mathbb{R}^d\times\mathbb{Z}_p)\rightarrow L^2(X(A^T,\mathcal D)\times\tilde\Omega_C,\breve\mu)$, $\mathcal Wf=f\circ\hat i_C^{-1}\circ\mathfrak d$ is an isomorphism that intertwines this representation with the one in Definition \ref{defhc}. \end{corollary} When $(A,\mathcal D)$ satisfy the tiling condition we can say a bit more about the possible extra cycles in $(C-\mathbb{Z}^d)\cap X(A^T,\mathcal D)$: \begin{proposition} Suppose $(A,\mathcal D)$ satisfies the tiling condition. Assume that there is a cycle point $\theta_0\in X(A^T,\mathcal D)$ such that $\theta_0-k\in X(A^T,\mathcal D)$ for some $k\in\mathbb{Z}^d$, $k\neq 0$. Then the entire cycle of $\theta_0$ is on the boundary of $X(A^T,\mathcal D)$. \end{proposition} \begin{proof} Let $X(A^T,\mathcal D)^\circ$ denote the interior of $X(A^T,\mathcal D)$. We will prove first that if a point $x$ is in $X(A^T,\mathcal D)\cap(X(A^T,\mathcal D)+k)$ with $k\in\mathbb{Z}^d$, $k\neq0$, then $x$ is on the boundary of $X(A^T,\mathcal D)$. Suppose not; then $x\in X(A^T,\mathcal D)^\circ$. By \cite{LaWa96c} we know that the closure of $X(A^T,\mathcal D)^\circ$ is $X(A^T,\mathcal D)$. This implies that the neighborhood $X(A^T,\mathcal D)^\circ$ of $x$ must intersect the set $X(A^T,\mathcal D)^\circ +k$.
But since $X(A^T,\mathcal D)$ tiles $\mathbb{R}^d$ by $\mathbb{Z}^d$, this implies that the interiors of $X(A^T,\mathcal D)$ and $X(A^T,\mathcal D)+k$ cannot intersect (the intersection would have positive Lebesgue measure). So $x$ must be on the boundary of $X(A^T,\mathcal D)$. Now consider $\theta_0$, let $C=\{\theta_0,\dots,\theta_{p-1}\}$ be the cycle of $\theta_0$, and let $l_0,\dots,l_{p-1}$ be the corresponding digits. We have $\tau_{l_{p-1}}\dots\tau_{l_j}\theta_j=\theta_0$ for all $j\in\{0,\dots,p-1\}$. Suppose one of the points $\theta_j$ of the cycle $C$ is in $X(A^T,\mathcal D)^\circ$. Since $$X(A^T,\mathcal D)=\bigcup_{d\in\mathcal D}\tau_d(X(A^T,\mathcal D)),$$ we obtain that $\tau_d(X(A^T,\mathcal D)^\circ)\subset X(A^T,\mathcal D)^\circ$ ($\tau_d$ is a homeomorphism). So if $\theta_j$ is an interior point for $X(A^T,\mathcal D)$, then $\theta_0=\tau_{l_{p-1}}\dots \tau_{l_j}\theta_j$ is also in the interior of $X(A^T,\mathcal D)$. This contradiction implies that $C$ is contained in the boundary of $X(A^T,\mathcal D)$. \end{proof} \par To help the reader appreciate our encoding results we present some examples which at the same time stress tiles versus spectrum. Since the technical points are illustrated already for the real line we begin with dimension one, and then turn to the plane $\mathbb{R}^2$. \begin{example} Let us take $d=1$, $A=2$ and $\mathcal D=\{0,1\}$. The maps are $\tau_0x=x/2$, $\tau_1x=(x+1)/2$. Consider the simple cycle $C:=\{\theta_0\}=\{0\}$. It corresponds to $\underline 0$. The attractor $X(A^T,\mathcal D)$ is $[0,1]$. The intersection $(C-\mathbb{Z})\cap X(A^T,\mathcal D)=(-\mathbb{Z})\cap[0,1]=\{0,1\}$, so it consists of the cycles $C'$: $\{0\}$ and $\{1\}$, which correspond to $\underline 0$ and $\underline 1$ respectively. Therefore $\tilde\Omega_0=\Omega_0\cup\Omega_1$, i.e., the words that end in an infinite repetition of $0$ or an infinite repetition of $1$.
We have the map $$\hat i_0^{-1}\mathfrak d: [0,1)\times(\Omega_0\cup\Omega_1)\rightarrow \mathbb{R},\quad \hat i_0^{-1}\mathfrak d(x,\omega)=x+k(\omega),$$ (we used $[0,1)$ here instead of $[0,1]$ to ensure that the map $\hat i_0^{-1}\mathfrak d$ is a true bijection, not just one up to measure $0$), and with formula \eqref{eqdefkomega}: $$k(\omega_0\dots\omega_n\underline0)=\omega_0+2\cdot\omega_1+\dots+2^n\omega_n,\quad k(\omega_0\dots\omega_n\underline1)=\omega_0+2\cdot\omega_1+\dots+2^n\omega_n-2^{n+1}.$$ Applying the commutative diagram in Corollary \ref{corsum} to $\rho^n$, we have $\hat i_0^{-1}\mathfrak d \rho^n(x,\omega)=\frac{1}{2^n}\hat i_0^{-1}\mathfrak d(x,\omega),$ which implies that $$\frac{1}{2^n}(x+k(\omega_0\omega_1\dots))=\tau_{\omega_{n-1}}\dots\tau_{\omega_0}x+k(\omega_n\omega_{n+1}\dots),\quad (x\in[0,1), \omega\in \Omega_0\cup \Omega_1).$$ \end{example} \begin{example} Let $d=1$, $A=2$ and $\mathcal D=\{0,1\}$. Consider the simple cycle $C:=\{\theta_0,\theta_1\}=\{1/3,2/3\}$. It corresponds to $\underline{10}$ (because $\tau_1(1/3)=2/3$, $\tau_0(2/3)=1/3$). The attractor is $X(A^T,\mathcal D)=[0,1]$. Then $(C-\mathbb{Z})\cap [0,1]=C$, so $\tilde \Omega_C=\Omega_C$, i.e., the words that end in $\underline{10}$. The map $$\hat i_C^{-1}\mathfrak d:[0,1)\times\Omega_C\rightarrow \mathbb{R}\times \mathbb{Z}_2,\quad \hat i_C^{-1}\mathfrak d(x,\omega)=(x-\theta_{j(\omega)}+k(\omega),j(\omega))$$ is a bijection, and $$j(\omega_0\dots\omega_{2n-1}\underline{10})=0,\quad j(\omega_0\dots\omega_{2n-1}\underline{01})=1,$$ $$k(\omega_0\dots\omega_{2n-1}\underline{10})=\omega_0+2\cdot\omega_1+\dots+2^{2n-1}\omega_{2n-1}+\frac13-2^{2n}\cdot\frac13,$$ $$k(\omega_0\dots\omega_{2n-1}\underline{01})=\omega_0+2\cdot\omega_1+\dots+2^{2n-1}\omega_{2n-1}+\frac23-2^{2n}\cdot\frac23.$$ The map $\Omega_C\ni \omega\mapsto (k(\omega),j(\omega))\in\mathbb{Z}\times\mathbb{Z}_2$ is a bijection.
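As a numerical sanity check of this bijection, the generalized Euclidean algorithm for this cycle can be sketched with exact rational arithmetic (the helper names below are ours; one step writes $a-\theta_j=2(b-\theta_{j+1})+w$ with $b\in\mathbb{Z}$ and digit $w\in\{0,1\}$). For instance it gives $\omega(5,0)=0101\underline{10}$, and the displayed formula for $k(\omega)$ then recovers $0+2+0+8+\frac13-16\cdot\frac13=5$.

```python
from fractions import Fraction

THETA = [Fraction(1, 3), Fraction(2, 3)]   # theta_0 = 1/3, theta_1 = 2/3

def divide(a, j):
    """One step of R_C on Z - C: write a - theta_j = 2*(b - theta_{j+1}) + w
    with b an integer and remainder w in {0, 1}; return (b, j+1 mod 2, w).
    Exactly one of the two digits gives an integer quotient."""
    j1 = (j + 1) % 2
    for w in (0, 1):
        b = (a - THETA[j] - w) / 2 + THETA[j1]
        if b.denominator == 1:
            return int(b), j1, w

def omega(k, j, n):
    """First n digits of the encoding omega(k, j), by iterated division."""
    digits = []
    for _ in range(n):
        k, j, w = divide(k, j)
        digits.append(w)
    return digits
```

The sequence of quotients eventually falls into the periodic orbit $\{-\theta_0,-\theta_1\}$ of $\mathcal R_C$, which is why every encoding ends in $\underline{10}$ or $\underline{01}$.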
As an example, let us show how to compute the $\omega\in\Omega_C$ associated to $(k,j)=(15,0)$. Take $k-\theta_j=15-\frac13$. We want a $k_1\in\mathbb{Z}$ and $\omega_0\in \mathcal D$ such that $15-\frac13=2(k_1-\frac23)+\omega_0$. We have \begin{align*} 15-\frac13&=2(8-\frac23)+0& 1-\frac13&=2(1-\frac23)+0\\ 8-\frac23&=2(4-\frac13)+0& 1-\frac23&=2(0-\frac13)+1\\ 4-\frac13&=2(2-\frac23)+1& 0-\frac13&=2(0-\frac23)+1\\ 2-\frac23&=2(1-\frac13)+0& 0-\frac23&=2(0-\frac13)+0\\ & &\vdots \end{align*} Thus $\omega(15,0)=001001\underline{10}$. \end{example} \begin{remark} Our next example is in the plane, but it illustrates a more general picture in $\mathbb{R}^d$ for any $d$. Start with a given pair $(A, \mathcal D)$ with the matrix $A$ assumed expansive, and $\mathcal D$ a chosen complete digit set, i.e., in bijective correspondence with the points in $\mathbb{Z}^d/A^T\mathbb{Z}^d$. So in particular, $|\mathcal D| = |\det A|$. In general it is not true that the same set $\mathcal D$ is a digit set for $A$, i.e., that it is a bijective image of $\mathbb{Z}^d/A\mathbb{Z}^d$. Here for $d = 2$, we give an explicit geometric representation of a pair $(A, \mathcal D)$ for which the same $\mathcal D$ is a digit set for both the radix representation with $A$ and with the transposed matrix $A^T$. Hence we get two attractors $X(A^T,\mathcal D)$ and $X(A,\mathcal D)$. Both will be referred to as Cloud Nine, a left-handed version and a right-handed version. That is because there are nine integer points: each intersection with $\mathbb{Z}^2$ consists of nine points, and it is the same set for the two fractals. For each, there are three one-cycles and one six-cycle. While the six-cycles (for $A$ and for $A^T$) are the same as sets, we will see that they are traveled differently under the actions discussed in our encodings from sections 5 and 6 above, the difference being essentially a reversal of orientation.
Hence our encoding with infinite words in letters from $\mathcal D$ will also be different for the two cases, and the details are worked out below. Recall the attractor $X(A^T,\mathcal D)$ is the set of ``fractions'' for our digital representation of points in $\mathbb{R}^2$. The attractor $X(A^T,\mathcal D)$ is also an affine Iterated Function System (IFS) based on $(A,\mathcal D)$. Thus the Cloud Nine examples further illustrate the intricate part played by the cycles in $\mathbb{Z}^2$ for the initial $(A,\mathcal D)$-IFS. In each case, using these cycles we are able to write down formulas for the two maps which do the encoding as well as the decoding in our positional $\mathcal D$-representation. \end{remark} \par For use of matrices in radix representation, the distinction between the radix matrix $A$ and its transpose $A^T$ is important. First, the two matrices sit on opposite sides of a Fourier duality; and secondly, even if the chosen set of digits $\mathcal D$ is the same, the two attractors may be different. In fact, in general the same $\mathcal D$ may not work for both $A$ and $A^T$. There is no natural connection between the two quotients $\mathbb{Z}^d/A\mathbb{Z}^d$ and $\mathbb{Z}^d/A^T\mathbb{Z}^d$, i.e., the one for $A$ and the other for the transpose. \par But in the particular 2D example below, Example \ref{a0}, called Cloud Nine, one may check by hand that, for this matrix $A$, with $\det A=5$, each of the two quotients $\mathbb{Z}^2/A\mathbb{Z}^2$ and $\mathbb{Z}^2/A^T\mathbb{Z}^2$ is in bijective correspondence with the same subset $\mathcal D$ in $\mathbb{Z}^2$. \par As a result it makes sense to analyze the two different Hutchinson attractors $X(A^T, \mathcal D)$ and $X(A, \mathcal D)$, both compact and with non-empty interior. The first one was studied earlier, in a different context, in \cite{BrJo99} and \cite{Jor03}, but both are interesting.
Note that while the two references \cite{BrJo99} and \cite{Jor03} use these examples, the questions addressed in those papers are completely different. \par As we shall see, there is an intriguing connection between cycles, solenoids, and the encodings for the two. \begin{example}\label{a0} Take $d=2$, $A=\left(\begin{array}{cc}1&-2\\2&1\end{array}\right)$ so $A^T=\left(\begin{array}{cc}1&2\\-2&1\end{array}\right)$, and $\mathcal D=\left\{\vectr00,\vectr{\pm3}{0},\vectr{0}{\pm2}\right\}$. We consider the trivial cycle $C:=\{\vectr00\}$. We want to compute $(C-\mathbb{Z}^d)\cap X(A^T,\mathcal D)$, i.e., $\mathbb{Z}^d\cap X(A^T,\mathcal D)$. First, we need to locate the attractor $X(A^T,\mathcal D)$. For this we use the proof of Proposition \ref{proprc}(ii) and conclude that if $R:=\frac{\|(A^T)^{-1}\|\max_{d\in\mathcal D}\|d\|}{1-\|(A^T)^{-1}\|}$, then the ball $B(0,R)$ is invariant under all the maps $\tau_d$, which implies that $X(A^T,\mathcal D)$ is contained in this ball. Since $\|(A^T)^{-1}\|=\frac{1}{\sqrt{5}}$, we conclude that $R=\frac3{\sqrt{5}-1}=2.427\dots$. There are 21 points in $\mathbb{Z}^d\cap B(0,R)$ and we can check how $\mathcal R_C$ acts on each of them. If we want $\vectr{x}{y}=A^T\vectr{a}{b}+\vectr{d_1}{d_2}$ with $\vectr{a}{b}\in\mathbb{Z}^2$ and $\vectr{d_1}{d_2}\in\mathcal D$, then we must have \begin{equation}\label{eqcl9} \frac{(x-d_1)-2(y-d_2)}5=a,\quad \frac{2(x-d_1)+(y-d_2)}5=b. \end{equation} Thus, given $\vectr{x}{y}$, to find $\mathcal R_C\vectr xy=\vectr ab$ and $\vectr{d_1}{d_2}\in\mathcal D$, we first look for an element of $\mathcal D$ with $d_1-2d_2\equiv x-2y\mod 5$ and $2d_1+d_2\equiv 2x+y\mod 5$. Then we compute $a,b$ as in \eqref{eqcl9}. It is interesting to note also that if $d_1-2d_2\equiv x-2y\mod 5$, then the other equivalence $\mod 5$ is satisfied too; this is because $2d_1+d_2\equiv 2(d_1-2d_2)\mod 5$ and, likewise, $2x+y\equiv 2(x-2y)\mod 5$.
We can use the following table \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline $\vectr{d_1}{d_2}$&$\vectr{0}{0}$&$\vectr{3}{0}$&$\vectr{-3}{0}$&$\vectr{0}{2}$&$\vectr{0}{-2}$\\ \hline $(d_1-2d_2)\mod 5$&0&3&2&1&4\\ \hline \end{tabular} \end{center} We apply these ideas to the points in $\mathbb{Z}^2\cap B(0,R)$: $\vectr00=A^T\vectr00+\vectr00$ so $\mathcal R_C\vectr00=\vectr00$, and $-\vectr00$ is a cycle that corresponds to $\underline{\vectr00}$. $\vectr10=A^T\vectr10+\vectr02$ so $\mathcal R_C\vectr10=\vectr10$, and $\vectr{-1}0$ is a cycle that corresponds to $\underline{\vectr02}$. $\vectr{-1}0=A^T\vectr{-1}0+\vectr0{-2}$ so $\mathcal R_C\vectr{-1}{0}=\vectr{-1}{0}$, and $\vectr{1}{0}$ is a cycle that corresponds to $\underline{\vectr{0}{-2}}$. $\vectr01=A^T\vectr{-1}{-1}+\vectr30$, $\vectr{-1}{-1}=A^T\vectr{1}{-1}+\vectr02$, $\vectr{1}{-1}=A^T\vectr{0}{-1}+\vectr30$, $\vectr{0}{-1}=A^T\vectr{1}{1}+\vectr{-3}{0}$, $\vectr{1}{1}=A^T\vectr{-1}{1}+\vectr{0}{-2}$, $\vectr{-1}{1}=A^T\vectr{0}{1}+\vectr{-3}{0}$. 
So $$\vectr01\stackrel{\mathcal R_C}{\rightarrow}\vectr{-1}{-1}\stackrel{\mathcal R_C}{\rightarrow}\vectr{1}{-1}\stackrel{\mathcal R_C}{\rightarrow}\vectr{0}{-1}\stackrel{\mathcal R_C}{\rightarrow}\vectr11\stackrel{\mathcal R_C}{\rightarrow}\vectr{-1}{1}\stackrel{\mathcal R_C}{\rightarrow}\vectr{0}{1}$$ and $\left\{\vectr{0}{-1},\vectr{1}{1},\vectr{-1}{1},\vectr{0}{1},\vectr{-1}{-1},\vectr{1}{-1}\right\}$ is a cycle that corresponds to the word $$\underline{\vectr30\vectr02\vectr30\vectr{-3}0\vectr0{-2}\vectr{-3}{0}}.$$ Similar computations show that $\vectr20\stackrel{\mathcal R_C}{\rightarrow}\vectr12\stackrel{\mathcal R_C}{\rightarrow}\vectr02\stackrel{\mathcal R_C}{\rightarrow}\vectr00$, $\vectr{-2}{0}\stackrel{\mathcal R_C}{\rightarrow}\vectr{-1}{-2}\stackrel{\mathcal R_C}{\rightarrow}\vectr{0}{-2}\stackrel{\mathcal R_C}{\rightarrow}\vectr00$, $\vectr{1}{-2}\stackrel{\mathcal R_C}{\rightarrow}\vectr10$, $\vectr21\stackrel{\mathcal R_C}{\rightarrow}\vectr01$, $\vectr{2}{-1}\stackrel{\mathcal R_C}{\rightarrow}\vectr{0}{1}$, $\vectr{-2}{1}\stackrel{\mathcal R_C}{\rightarrow}\vectr{0}{-1}$, $\vectr{-2}{-1}\stackrel{\mathcal R_C}{\rightarrow}\vectr{0}{-1}$. Thus we have three cycles of length one and one cycle of length six. Consider now the pair $(A^T, \mathcal D)$, that is, replace the matrix $A$ by $A^T$; here $\mathcal D$ is also a complete set of representatives for $\mathbb{Z}^d/(A^T)^T\mathbb{Z}^d=\mathbb{Z}^d/A\mathbb{Z}^d$. As above we get the same cycles:\\ $\vectr00$ is a one-cycle that corresponds to $\underline{\vectr00}$\\ $\vectr10$ is a one-cycle that corresponds to $\underline{\vectr0{2}}$\\ $\vectr{-1}0$ is a one-cycle that corresponds to $\underline{\vectr{0}{-2}}$\\ We also obtain the six-cycle $\left\{\vectr{0}{1},\vectr{1}{-1},\vectr{-1}{-1},\vectr{0}{-1},\vectr{-1}{1},\vectr{1}{1}\right\}$.
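The cycle hunt, and the comparison between the two radix systems, can be automated. In the sketch below (our own illustrative code; all helper names are hypothetical), `rc` inverts $x=Ma+d$ by selecting the unique digit $d\in\mathcal D$ for which $M^{-1}(x-d)$ is integral. It recovers three one-cycles and one six-cycle for both $M=A^T$ and $M=A$, and the two digit words read along the six-cycle through $\vectr0{-1}$ come out different, as claimed.

```python
from itertools import product

D  = [(0, 0), (3, 0), (-3, 0), (0, 2), (0, -2)]
A  = ((1, -2), (2, 1))
AT = ((1, 2), (-2, 1))

def make_rc(M):
    """R_C for the radix system x = M a + d, d in D (here |det M| = 5)."""
    (m11, m12), (m21, m22) = M
    det = m11 * m22 - m12 * m21
    def rc(p):
        x, y = p
        for d1, d2 in D:
            u = m22 * (x - d1) - m12 * (y - d2)
            v = -m21 * (x - d1) + m11 * (y - d2)
            if u % det == 0 and v % det == 0:
                return (u // det, v // det), (d1, d2)
        raise ValueError("D is not a complete digit set for M")
    return rc

def cycles(rc, points):
    """Distinct cycles reached by iterating rc from the given start points."""
    found = set()
    for p in points:
        seen = []
        while p not in seen:
            seen.append(p)
            p, _ = rc(p)
        found.add(frozenset(seen[seen.index(p):]))
    return found

def cycle_word(rc, start):
    """Digits used along the cycle through `start` (start must be a cycle point)."""
    word, p = [], start
    while True:
        p, d = rc(p)
        word.append(d)
        if p == start:
            return word

# the 21 integer points in the ball of radius R = 2.427...
pts = [(x, y) for x, y in product(range(-2, 3), repeat=2) if x * x + y * y <= 5.89]
rcT, rcA = make_rc(AT), make_rc(A)
for rc in (rcT, rcA):
    print(sorted(len(c) for c in cycles(rc, pts)))   # -> [1, 1, 1, 6] both times
print(cycle_word(rcT, (0, -1)))   # -> [(-3,0), (0,-2), (-3,0), (3,0), (0,2), (3,0)]
print(cycle_word(rcA, (0, -1)))   # -> [(3,0), (0,-2), (3,0), (-3,0), (0,2), (-3,0)]
```

The two six-cycles agree as sets, but the digit words along them differ, which is the orientation phenomenon discussed in the remark above.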
This cycle corresponds to a different periodic word: $$\underline{\vectr{3}0\vectr0{-2}\vectr30\vectr{-3}0\vectr0{2}\vectr{-3}{0}}.$$ This means that the 6-cycle is traveled along two different paths, depending on whether the matrix $A$ or $A^T$ is used. \end{example} \subsection{Tiling and spectra in some examples} \par One of the uses of encoding is applications to tiling questions in $\mathbb{R}^d$. The simplest tiles $X$ in $\mathbb{R}^d$ are measurable subsets which make a tiling of $\mathbb{R}^d$ by translations with vectors from some lattice, say $\Gamma$ (i.e., a rank-$d$ subgroup). Since we work in the measurable category, we allow different translates $X + \gamma$, $\gamma\in\Gamma$, in the tiling to overlap on sets of measure zero. One might think that when $X$ is given, the presence of a suitable lattice $\Gamma$ making $X$ into a translation tile for $\mathbb{R}^d$ could be decided by visual inspection, at least in the case $d = 2$. After all, when a pair $(A,\mathcal D)$ is given, there are fast Mathematica programs which produce excellent plots of the attractor sets $X = X(A^T,\mathcal D)$, black on white; see for example \cite{BrJo99}. But except for isolated cases, it turns out that when the planar sets $X$ are represented in black on white, there will typically be many white spots, or gaps, disconnecting $X$ in complicated ways. If some lattice $\Gamma$ is to make $X$ into a translation tile, then the white areas must be filled in by black under the translations $X + \gamma$, $\gamma \in \Gamma$. An inspection of \cite{BrJo99} reveals that this is not easy to discern visually. Hence we resort below to spectral theoretic tools for locating the lattices which do the job. \par A more complicated class of tilings (still with a single base tile) arises when the set $\Gamma$ of translation vectors is not a lattice, e.g., the translation sets of quasiperiodic tilings.
But for our present considerations lattice tilings will suffice. \par The sets $X$ which will interest us are the attractors $X = X(A^T,\mathcal D)$ from affine IFSs as described in Corollary \ref{corsum}. It is known that every such $X$ is compact with non-empty interior, and so in particular it has positive $d$-dimensional Lebesgue measure. \par Hence it is of interest to ask for a spectral analysis of the Hilbert space $L^2(X)$, referring to $d$-dimensional Lebesgue measure. In fact, using Pontryagin duality for abelian groups, one can check that $X$ tiles $\mathbb{R}^d$ with a lattice if and only if the dual lattice yields an orthogonal basis of complex exponentials in $L^2(X)$. The result is often referred to as Fuglede's theorem. For background, see the references \cite{Fug74} and \cite{Rud62}. \par To understand the correspondence between the choice of translation lattice on the one hand and the spectrum on the other we need: \begin{definition} Let $X\subset\mathbb{R}^d$ be measurable with $0<\mu(X)<\infty$, where $\mu$ denotes the $d$-dimensional Lebesgue measure. For $\xi\in\mathbb{R}^d$ set $e_{\xi}(x)=e^{2\pi i\xi\cdot x}$, where $\xi\cdot x:=\langle\xi,x\rangle=\xi_1x_1+\cdots+\xi_dx_d$ and $x=(x_1,\dots,x_d)\in\mathbb{R}^d$. If $\Gamma\subset\mathbb{R}^d$ is a discrete subgroup (in this case a rank-$d$ lattice) set $$E_X(\Gamma):=\{ e_{\xi}|_X\mbox{ : }\xi\in\Gamma\}$$ where $|_X$ denotes restriction to the set $X$. \\ If $\Gamma$ is a lattice we set $$\Gamma^{\circ}:=\{\lambda\in\mathbb{R}^d\mbox{ }|\mbox{ }\lambda\cdot\xi\in\mathbb{Z}\mbox{ for all }\xi\in\Gamma\},$$ called the dual lattice. \\ If $\Lambda\subset\mathbb{R}^d$ is a discrete subset we say that it is a {\it spectrum} for $X$, or that the pair $(X, \Lambda)$ is a {\it spectral pair}, iff $E_X(\Lambda)$ is an orthogonal basis in the Hilbert space $L^2(X)=L^2(X,\mu)$. \end{definition} \begin{lemma}\label{lemfuglede} (Fuglede \cite{Fug74}) Let $0<\mu(X)<\infty$ and let $\Lambda$ be a rank-$d$ lattice.
The following conditions are equivalent:\\ i) $E_X(\Lambda)$ is an orthogonal basis in $L^2(X)$;\\ ii) $X$ tiles $\mathbb{R}^d$ by the dual lattice $\Lambda^{\circ}$. \end{lemma} \begin{remark} We can draw the following stronger conclusion: when $X$ is given, there are no tiling lattices for $X$ other than those which arise as in (ii) by spectral duality. Indeed, every lattice $\Lambda$ satisfies $(\Lambda^{\circ})^{\circ} = \Lambda$, i.e., the double dual yields back the initial lattice; this follows from the standard lattice operations made explicit in the lemma below. \end{remark} Referring to the IFS of Definition \ref{ifs}, we note the following formula for the computation of the $L^2(X(A^T,\mathcal D))$-inner products. Set $X=X(A^T,\mathcal D)$ and $\hat \chi_X(\xi):=\int_Xe_{\xi}(x)dx$. Then $$\hat \chi_X(\xi)=\prod_{n=1}^{\infty}m_{\mathcal D}(A^{-n}\xi)$$ where $$m_{\mathcal D}(\xi):=\frac{1}{|\det A|}\sum_{d\in\mathcal D}e_d(\xi).$$ Recall that $|\det A|$ equals the number of elements of $\mathcal D$. \par Some remarks about lattices in $\mathbb{R}^d$ are in order: every lattice is by definition a rank-$d$ subgroup of $\mathbb{R}^d$, and it can be shown that it has the form $\Gamma=M\mathbb{Z}^d$, where $M$ is an invertible $d\times d$ matrix and where points in $\mathbb{Z}^d$ are represented by column vectors. We will write $\Gamma_M$ to emphasize the matrix $M$ that completely determines the lattice. The next lemma is elementary: \begin{lemma} {\rm (i)} $\Gamma_M\subset\mathbb{Z}^d$ if and only if $M\in\mathcal M_d(\mathbb{Z})$. {\rm (ii)} If $\Gamma=\Gamma_M$ then $\Gamma^{\circ}=\Gamma_{(M^T)^{-1}}$. In other words, if $\Gamma$ is given by $M$ then its dual is given by $(M^T)^{-1}$. {\rm (iii)} $\Gamma^{\circ\circ}=\Gamma$. \end{lemma} \par We will use names from \cite{BrJo99} for the fractals $X = X(A^T,\mathcal D)$ in $\mathbb{R}^2$.
These names refer both to their geometric appearance as planar sets $X$ and to a counting of $\mathbb{Z}^2$-cycles, i.e., the number of points in $(-X)\cap\mathbb{Z}^2$. See \cite{BrJo99} (end of subsection 9.3) for details. For example, Cloud Nine has three one-cycles and one six-cycle in $\mathbb{Z}^2$. What follows is a family of examples in 2D. In each case, we ask the following questions. How much flexibility is there in selecting digits when the base for the 2D number system is fixed? In our case, we are using the positional radix representation for vectors, and thus the base for our number system is a chosen matrix $A$. For several of the examples below, we fix a particular $A$ and then vary our choices of ``digit'' sets $\mathcal D$ in $\mathbb{Z}^2$. The points in $\mathcal D$ will serve as ``digits'' in a positional representation. We are motivated by Knuth's algorithmic approach mentioned in the Introduction: What are the ``integers'' and what are the ``fractions'' in a number system specified by a particular pair $(A,\mathcal D)$? What is the encoding, and what is the decoding? When the matrix $A$ is fixed, how do changes in $\mathcal D$ reflect themselves in the answers to these questions? The examples below are sketched with Mathematica programming in \cite{BrJo99}, and the names we use for the fractals $X$ are consistent with \cite{BrJo99}, i.e., the respective names of the sets $X$, Cloud Nine etc. The examples where $A$ is the same but $\mathcal D$ changes are referred to by the name Cloud, followed by a number; the number indicates the cardinality of $(-X)\cap \mathbb{Z}^2$. However, the questions addressed here are different from those of \cite{BrJo99}.
But when $A$ is given, the choice of $\mathcal D$ is always restricted by demanding a bijection $\mathcal D \leftrightarrow \mathbb{Z}^2/A^T\mathbb{Z}^2$. \begin{itemize} \item Cloud Three. $A=\left(\begin{array}{cc}1&-2\\2&1\end{array}\right)$, $\mathcal D=\left\{\vectr00,\vectr0{\pm 1},\vectr{0}{\pm2}\right\}$. Here the lattice $2\mathbb{Z}\times\mathbb{Z}$ makes $X$ tile $\mathbb{R}^2$. Cloud Three only has one-cycles on $\mathbb{Z}^2$, i.e., $(-X)\cap\mathbb{Z}^2= \mathcal C_1$. Moreover $X$ is not a Haar wavelet. It has measure $2$. \item Cloud Five. $A=\left(\begin{array}{cc}1&-2\\2&1\end{array}\right)$, $\mathcal D=\left\{\vectr00,\vectr{\pm3}0,\vectr{\pm1}0\right\}$. Cloud Five is a lattice tile with lattice $\mathbb{Z}\times 2\mathbb{Z}$. So $X$ is not a Haar wavelet. It has measure $2$. \item Cloud Nine. $A=\left(\begin{array}{cc}1&-2\\2&1\end{array}\right)$, $\mathcal D=\left\{\vectr00,\vectr{\pm3}{0},\vectr{0}{\pm2}\right\}$. Cloud Nine is a lattice tile with the lattice $\mathbb{Z}\times 2\mathbb{Z}$. Cloud Nine $X$ has three one-cycles and one six-cycle. So Cloud Nine is not a Haar wavelet. It has measure $2$. \item Twin Dragon. $A=\left(\begin{array}{cc}1&1\\-1&1\end{array}\right)$, $\mathcal D=\left\{\vectr00,\vectr10\right\}$. The Twin Dragon is a lattice tile with lattice $\mathbb{Z}^2 = \Gamma$. So the Twin Dragon is a Haar wavelet. It has measure $1$. \end{itemize} We use Lemma \ref{lemfuglede} to identify lattices which make the various Cloud examples $X$ tile $\mathbb{R}^2$. For this purpose we must identify our cycles relative to so-called Hadamard systems as defined in \cite{DuJo07c}. A Hadamard system consists of a matrix $A$ and two sets $\mathcal D$ and $\mathcal L$ of dual digits, $\#\mathcal D = \#\mathcal L = |\det A|$. By ``dual'' we mean that the matrix formed from the exponentials, \begin{equation}\label{eqhada} \frac1{\sqrt{|\det A|}} (e^{ 2 \pi i (A^T)^{-1}d\cdot l})_{ d \in\mathcal D, l \in\mathcal L},
\end{equation} is a unitary $|\det A|\times |\det A|$ matrix. Let $N$ be the absolute value of the determinant, and let $\mathbb{Z}_N$ be the cyclic group of order $N$. Then the matrix $U_N$ of the Fourier transform on $\mathbb{Z}_N$ is an example of a Hadamard matrix as in \eqref{eqhada}; specifically the $j,k$ entry in $U_N$ is $\frac{1}{\sqrt{N}} \zeta^{jk}$, $j,k \in \mathbb{Z}_N$, where $\zeta= \zeta_N$ is a fixed principal $N$'th root of $1$. \begin{proof} To find the lattices that give tiles for these examples, we use Lemma \ref{lemfuglede} and find the dual lattices that give orthogonal bases of exponentials. For this we use the techniques introduced in \cite{DuJo06,DuJo07c}. First let us look at the matrix $A=\left(\begin{array}{cc}1&-2\\2&1\end{array}\right)$ for the Cloud examples. We want to find the lattice that makes $X(A^T,\mathcal D)=\{\sum_{j=1}^\infty(A^T)^{-j}d_j\,|\,d_j\in\mathcal D\}$ tile $\mathbb{R}^2$. For this we need a set $\mathcal L$ such that $\frac{1}{\sqrt{|\det A|}}(e^{2\pi i (A^T)^{-1}d\cdot l})_{d\in\mathcal D,l\in\mathcal L}$ is a unitary matrix, i.e., $(A^T, \mathcal D,\mathcal L)$ is a Hadamard triple. It is enough to take $\mathcal L$ a complete set of representatives for $\mathbb{Z}^2/A\mathbb{Z}^2$. We will take $\mathcal L$ to be $$\mathcal L:=\left\{\vectr00,\vectr{\pm3}{0},\vectr{0}{\pm2}\right\}.$$ With this choice of $\mathcal L$, the reader may check that for each of the Cloud examples listed above, the corresponding Hadamard matrix from \eqref{eqhada} turns out, up to permutation, to simply agree with the matrix $U_5$ of the Fourier transform on $\mathbb{Z}_5$. According to \cite{DuJo07c} we have to see if there are any proper invariant subspaces for $A$. But those would give rise to real eigenvalues of $A$, and this is not the case. Thus, by \cite{DuJo07c}, the spectrum of $X(A^T,\mathcal D)$ is determined only by the ``$m_{\mathcal D}$-cycles''.
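The unitarity of \eqref{eqhada} for this choice of $\mathcal D$ and $\mathcal L$ can be confirmed numerically. The following standard-library Python sketch (our own illustrative code, for the Cloud Nine data) checks that the rows of the resulting $5\times5$ matrix are pairwise orthonormal.

```python
import cmath

A = ((1, -2), (2, 1))
DIGITS = [(0, 0), (3, 0), (-3, 0), (0, 2), (0, -2)]
ELL = DIGITS                                   # here the same set serves as L
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]    # = 5

def entry(d, l):
    # exponent is (A^T)^{-1} d . l, with (A^T)^{-1} = (1/det) [[a22,-a21],[-a12,a11]]
    (a11, a12), (a21, a22) = A
    num = (a22 * d[0] - a21 * d[1]) * l[0] + (-a12 * d[0] + a11 * d[1]) * l[1]
    return cmath.exp(2j * cmath.pi * num / det) / det ** 0.5

U = [[entry(d, l) for l in ELL] for d in DIGITS]
# unitarity: the rows must be pairwise orthonormal
err = max(abs(sum(U[i][k] * U[j][k].conjugate() for k in range(5))
              - (1 if i == j else 0))
          for i in range(5) for j in range(5))
print(err < 1e-12)   # -> True
```

Up to the floating-point tolerance, this is the statement that the matrix agrees, after a permutation of rows and columns, with the Fourier matrix $U_5$ on $\mathbb{Z}_5$.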
These are the cycles $C=\{x_0,\dots,x_{p-1}\}$ for the ``dual'' IFS $\sigma_l(x)=A^{-1}(x+l)$, $l\in \mathcal L$, with the property that $|m_{\mathcal D}(x_i)|=1$ for all $i\in\{0,\dots,p-1\}$. Then, by \cite{DuJo06,DuJo07c}, the spectrum of $X(A^T,\mathcal D)$ is the smallest set $\Lambda$ that contains $-C$ for all the $m_{\mathcal D}$-cycles, and such that $A\Lambda+\mathcal L\subset \Lambda$. {\bf Cloud Three}: We have $$m_{\mathcal D}(x,y)=\frac15\left(1+e^{2\pi iy}+e^{-2\pi iy}+e^{2\pi i2y}+e^{-2\pi i2y}\right).$$ If we want $|m_{\mathcal D}(x,y)|=1$ then we must have that all the terms in the sum are $1$, so $y\in\mathbb{Z}$, and $x$ is arbitrary. We are looking for $m_{\mathcal D}$-cycles, so $|m_{\mathcal D}(\sigma_l(x,y))|=1$ for some $l\in\mathcal L$, so $\sigma_l(x,y)$ must have the second component in $\mathbb{Z}$. The inverse of $A$ is $A^{-1}=\frac{1}{5}\left(\begin{array}{cc}1&2\\-2&1\end{array}\right)$. Thus $\frac15(-2(x+l_x)+(y+l_y))\in\mathbb{Z}$. This implies that $x\in\frac12\mathbb{Z}$. We claim that $\Lambda=\frac12\mathbb{Z}\times\mathbb{Z}$. For this, note first that $A(\frac12\mathbb{Z}\times\mathbb{Z})+\mathcal L\subset \frac12\mathbb{Z}\times\mathbb{Z}$. By the previous computation, $\frac12\mathbb{Z}\times\mathbb{Z}$ contains the negative of all the $m_{\mathcal D}$-cycles. Then, take $w_0:=(\frac k2,k')\in\frac12\mathbb{Z}\times\mathbb{Z}$. Then for $l\in\mathcal L$, $$\sigma_l(-\frac k2,-k')=\left(\frac1{10}(-k+2l_x-4k'+4l_y),\frac15(k-2l_x-k'+l_y)\right).$$ Note that there is a unique $l_0\in\mathcal L$ such that $w_1:=-\sigma_{l_0}(-\frac k2,-k')\in\frac12\mathbb{Z}\times\mathbb{Z}$. As in the proof of Proposition \ref{proprc}, there is a sequence $l_0,l_1,\dots\in\mathcal L$ such that if $w_{n+1}=-\sigma_{l_n}(-w_n)$, $n\in\mathbb{N}$, then $w_n\in\frac12\mathbb{Z}\times\mathbb{Z}$ and, for some $m$, $-w_m$ is a cycle point for $(\sigma_l)_{l\in\mathcal L}$.
Note that since $w_n\in\frac12\mathbb{Z}\times\mathbb{Z}$, $-w_m$ is a point in an $m_{\mathcal D}$-cycle. Since $w_{m}=-\sigma_{l_{m-1}}(-w_{m-1})$, we have that $w_{m-1}=Aw_{m}+l_{m-1}\in A(-C)+\mathcal L$, where $C$ is the $m_{\mathcal D}$-cycle of $-w_m$. By induction we obtain that $w_{0}$ must be in $\Lambda$. Thus $\Lambda=\frac12\mathbb{Z}\times\mathbb{Z}$ is the spectrum. Taking the dual we obtain that $X(A^T,\mathcal D)$ tiles $\mathbb{R}^2$ by $2\mathbb{Z}\times\mathbb{Z}$. {\bf Cloud Five}: We have $$m_{\mathcal D}(x,y)=\frac15\left(1+e^{2\pi i3x}+e^{-2\pi i3x}+e^{2\pi ix}+e^{-2\pi ix}\right).$$ Therefore $|m_{\mathcal D}(x,y)|=1$ iff $x\in\mathbb{Z}$. For $m_{\mathcal D}$-cycles, the first component of $\sigma_l(x,y)$ must be in $\mathbb{Z}$ for some $l\in\mathcal L$. This implies that $y\in\frac12\mathbb{Z}$. We claim that $\Lambda=\mathbb{Z}\times\frac12\mathbb{Z}$. The proof works just as for the Cloud Three example so we will leave it to the reader. Thus the dual lattice is $\mathbb{Z}\times2\mathbb{Z}$, and $X(A^T,\mathcal D)$ tiles $\mathbb{R}^2$ by $\mathbb{Z}\times2\mathbb{Z}$. {\bf Cloud Nine}: We have $$m_{\mathcal D}(x,y)=\frac15\left(1+e^{2\pi i3x}+e^{-2\pi i3x}+e^{2\pi i2y}+e^{-2\pi i2y}\right).$$ So $|m_{\mathcal D}(x,y)|=1$ iff $x\in\frac13\mathbb{Z}$ and $y\in\frac12\mathbb{Z}$. For $m_{\mathcal D}$-cycles we must have $(x,y)=(\frac k3,\frac {k'}2)$ with $k,k'\in\mathbb{Z}$ and also $|m_{\mathcal D}(\sigma_l(\frac k3,\frac{k'}2))|=1$ for some $l\in\mathcal L$. This implies that $\frac15\left(-2(\frac k3+l_x)+(\frac{k'}2+l_y)\right)\in\frac12\mathbb{Z}$, so $k$ must be divisible by $3$. Thus the $m_{\mathcal D}$-cycles are contained in $\mathbb{Z}\times\frac12\mathbb{Z}$. Just as in the previous examples we get that $\Lambda=\mathbb{Z}\times\frac12\mathbb{Z}$, so the tiling lattice is $\mathbb{Z}\times 2\mathbb{Z}$. {\bf Twin Dragon}: For $A=\left(\begin{array}{cc}1&1\\-1&1\end{array}\right)$, there are no proper invariant subspaces, so the analysis of the $m_{\mathcal D}$-cycles will suffice.
We can take $\mathcal L:=\left\{\vectr00,\vectr10\right\}$. Then $$m_{\mathcal D}(x,y)=\frac12(1+e^{2\pi ix}).$$ Therefore $|m_{\mathcal D}(x,y)|=1$ iff $x\in\mathbb{Z}$. Also $A^{-1}=\frac12\left(\begin{array}{cc}1&-1\\1&1\end{array}\right)$. We want the first component of $\sigma_l(x,y)$ to be in $\mathbb{Z}$, so $\frac12(x+l_x-y)\in\mathbb{Z}$, and therefore $y\in\mathbb{Z}$. As in the previous examples we can check that $\Lambda=\mathbb{Z}^2$, so the dual lattice is $\mathbb{Z}^2$. \end{proof} The method used in our analysis of the examples may be formalized as follows. Stated in general terms, it applies to a large class of IFSs which carry a fairly minimal amount of intrinsic duality. This is of interest because there are few results in the literature which produce formulas for lattices turning particular attractors $X$ into tiles under the corresponding translations. For the convenience of the reader, we begin with two definitions; for details we refer to \cite{DuJo07c,DuJo06}. \begin{definition} (i) A Hadamard triple in $\mathbb{R}^d$ is a system $(A,\mathcal D, \mathcal L)$ where $A\in\mathcal M_d(\mathbb{Z})$ is expansive, $\mathcal D,\mathcal L$ are subsets of $\mathbb{Z}^d$, and $\mathcal L$ is such that the matrix \eqref{eqhada} is unitary. (ii) For a Hadamard triple $(A,\mathcal D,\mathcal L)$, the cycles $C$ of the IFS $(\sigma_l)_{l\in\mathcal L}$ on which the absolute value of $m_{\mathcal D}$ is 1 are called extreme relative to $m_{\mathcal D}$, or $m_{\mathcal D}$-cycles. \end{definition} We are now ready to state our general tiling result. \begin{corollary} Let $(A,\mathcal D,\mathcal L)$ be a Hadamard triple in $\mathbb{R}^d$ such that $\mathcal D$ is a complete set of representatives for $\mathbb{Z}^d/A^T\mathbb{Z}^d$, and let $X = X(A^T,\mathcal D)$ be the corresponding Hutchinson attractor. Suppose $A$ has no proper invariant subspaces.
Let $\Lambda$ be the smallest lattice in $\mathbb{R}^d$ containing all the sets $-C$, where $C$ runs over the $m_{\mathcal D}$-extreme cycles, and which is invariant under the affine mappings $x\mapsto Ax + l$, $l \in \mathcal L$. Then the dual lattice $\Gamma = \Lambda^\circ$ makes $X$ tile $\mathbb{R}^d$ by $\Gamma$-translations. \end{corollary} \begin{acknowledgements} The second named author had helpful discussions with Prof.\ Sergei Silvestrov, University of Lund, Sweden. Useful suggestions from a referee led to improvements in the presentation. \end{acknowledgements} \bibliographystyle{alpha}
\section{Introduction}\label{sec::introduction} Online communication channels, or mediums of computer mediated communication such as emails, blogs and online social networking websites, represent different forms of social networks. These networks have attracted billions of users in recent years \cite{ahn07}, adding new dimensions to socializing behavior and communication technologies. These networks provide a challenging opportunity for researchers from different domains to analyze and understand how the new age of communication is shaping the future. These networks also help us understand how information disseminates \cite{iribarren11} in social networks and how communication plays a role in the creation of new knowledge \cite{prusak01}. An important aspect of these networks is that they can undergo intentional attacks or random failures, which result in communication breakdown. Thus researchers have focused on studying the stability of networks in terms of how resilient or how robust these networks are against any malicious activity or natural random failures \cite{cohen00,holme02a}. Given a network with $n$ nodes and $m$ edges, targeted or random attacks are modeled by the removal of a series of selected nodes or edges from the network. The way these nodes or links are chosen, known as the attack strategy, determines the impact it has on the resilience of the network. The natural evolution of these networks has introduced several structural properties which play an important role in determining their resilience. These properties characterize the behavior of many social and other real world networks, giving us two important classes: scale free networks \cite{barabasi99} and small world networks \cite{watts98}. Scale free networks have a degree distribution following a power law\footnote{power law $p_k \sim k^{-\alpha}$ where $\alpha$ is usually in the range $[2,3]$}.
Small world networks have low average path lengths (APL), scaling logarithmically with the number of nodes ($n$), and a high clustering coefficient, implying the presence of a large number of triads in the network. Many social networks have both these structural properties, giving us another class of networks: small world-scale free networks. Scale free networks have been extensively studied with respect to resilience, and the Internet provides the perfect dataset for such analysis \cite{cohen00,cohen01,guillaume05}. Researchers have shown that scale free networks are highly sensitive to targeted attacks and very robust against random attack strategies \cite{cohen00,cohen01}. This phenomenon is often termed the `Achilles heel of the Internet'. Resilience of networks with only small world properties, or with both small world and scale free properties, has not been the focus of many studies, even though many social networks around us exhibit both small world and scale free properties \cite{albert02,newman06a}. One example of such networks is the structure of the world wide web studied by \cite{broder00}. The authors found that the web has a bow tie structure and is very robust against targeted attacks on nodes. This result contradicts the finding that scale free networks are fragile to targeted attacks. The reason is that deleting nodes with high degree is not enough to cross the percolation threshold, as the average edge-node ratio (also called density or average degree) of these graphs is very high. This finding is similar to our results for the case of social networks. In this paper, we perform two sets of experiments. The first set of experiments compares the behavior of four different classes of networks, small world networks, scale free networks, small world-scale free networks and random networks, with four equivalent size real social networks.
These social networks are from a political blog, the Epinions who-trust-whom network, the Twitter social network and a co-authorship network of researchers. We study these networks under six different attack strategies: targeted attack on nodes and edges, random failure of nodes and edges, and almost random failure\footnote{defined in section \ref{sec::experimentation}} of nodes and edges \cite{guillaume05}. The idea is to see how the structural organization of these different networks impacts resilience when their edge-node ratio is equivalent to that of semantically different social networks. Our results lead us to these findings: \begin{itemize} \item Five of the six attack strategies behave similarly for all different classes of networks, the exception being targeted attack on nodes. \item Clustering coefficient has no effect on the resilience of networks if networks with high edge-node ratio are studied. \item Results show that scale free and small world-scale free networks are more fragile to targeted attacks. Targeted attack on edges removes the same number of edges from the other classes of networks, and the behavior of all classes including random networks remains the same, indicating that the different behavior of scale free and small world-scale free networks is due to the large number of edges being removed from the network and not due to the structural organization of the network itself. \item Network generation models used to generate small world, scale free, and small world-scale free networks differ largely from the behavior of real networks in terms of resilience, suggesting structural flaws in existing network generation models. \end{itemize} The second experiment studies the resilience of the four real social networks in terms of different attack strategies on nodes, which was found to be more interesting in the previous experiment.
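The attack strategies described above amount to removing nodes (or edges) in a chosen order and tracking the size of the biggest connected component after each removal. The following self-contained Python sketch implements targeted and random node attacks; the toy graph and all helper names are ours, purely for illustration of the bookkeeping.

```python
import random
from collections import deque

def giant_component(adj, removed):
    """Size of the largest connected component, ignoring nodes in `removed`."""
    seen, best = set(removed), 0
    for s in adj:
        if s in seen:
            continue
        queue, size = deque([s]), 0
        seen.add(s)
        while queue:
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, size)
    return best

def attack(adj, order):
    """Remove nodes in `order`; record the biggest component after each removal."""
    removed, sizes = set(), []
    for u in order:
        removed.add(u)
        sizes.append(giant_component(adj, removed))
    return sizes

# toy graph: a hub (node 0) attached to a path through nodes 1..11 --
# a crude stand-in for a network with one high-degree node
adj = {i: set() for i in range(12)}
def link(u, v):
    adj[u].add(v)
    adj[v].add(u)
for i in range(1, 12):
    link(0, i)
for i in range(1, 11):
    link(i, i + 1)

targeted = sorted(adj, key=lambda u: -len(adj[u]))   # highest degree first
random.seed(1)
rand_order = list(adj)
random.shuffle(rand_order)

print(attack(adj, targeted))   # -> [11, 9, 8, 7, 6, 5, 4, 3, 2, 1, 1, 0]
```

Replacing the node order by an edge order (and deleting edges instead of nodes) gives the corresponding edge-attack strategies; the experiments reported in this paper apply the same bookkeeping to much larger graphs.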
The results can be summarized as follows: \begin{itemize} \item We observe only minor differences between random and almost random failures for the blog, Epinions and Twitter networks, as compared to the author network, which demonstrates some differences between the two strategies. \item Targeted attack on nodes clearly differs from random and almost random failures, and the author network appears to be the most fragile. The blog, Epinions and Twitter networks demonstrate graceful degradation in performance in terms of the size of the biggest component. \end{itemize} The rest of the paper is structured as follows. In the next section we discuss several studies pertaining to the resilience of different types of networks. Section \ref{sec::data} provides the details of the real world datasets and the networks generated using different network generation models. In section \ref{sec::experimentation}, we describe our experimental setup and the metrics used for analysis. Section \ref{sec::results} explains the results obtained and provides findings from the experimentation, and finally we conclude in section \ref{sec::conclusion}, also giving future research directions. \section{Related Work}\label{sec::related} One of the earliest studies to demonstrate that scale free networks are more robust against random failures was conducted by \cite{albert00}. The authors also discuss the vulnerability of scale free networks to targeted attacks. Cohen \textit{et al.} \cite{cohen00,cohen01} study the resilience of the Internet under random and targeted attacks on nodes. For the case of random attacks, they conclude that even when the fraction of removed nodes approaches $100\%$, the connectivity of the biggest component remains intact and it spans the whole of the network. The authors claim that this condition will remain true for other networks if their connectivity distribution follows a power law with power law coefficient less than 3.
For the case of targeted attacks, scale free networks are highly sensitive: the biggest connected component disintegrates much sooner. Holme \textit{et al.} \cite{holme02a} study attacks on edges using betweenness centrality, where edges with the highest centrality are removed. They show that recalculating betweenness centrality after each deletion is a more effective attack strategy for complex networks. Paul \textit{et al.} \cite{paul04} discuss that networks with a given degree distribution may be very resilient to one type of failure or attack but not to another. They determine network design strategies to maximize the robustness of networks to both intentional attacks and random failures, keeping the cost of the network constant, where cost is measured in terms of network connections. Analytical solutions for percolation on random graphs with general degree distributions were given by \cite{callaway00} for a variety of cases, such as site and bond percolation. Serrano \textit{et al.} \cite{serrano06b} introduce a framework to analyze percolation properties of random clustered networks and small world-scale free networks. They find that the high number of triads can affect some properties, such as the size and resilience of the biggest connected component. Wang \textit{et al.} \cite{wang06} studied the robustness of scale free networks to random failures from the perspective of network heterogeneity. They examine the relationship between the entropy of the degree distribution, minimal connectivity and the scaling exponent, obtaining an optimal design for scale free networks against random failure. Estrada \cite{estrada06} studied sparse complex networks having high connectivity, known as good expansion. Using a graph spectral method, the author introduces a new parameter to measure good expansion and classifies 51 real-world complex networks into four groups with different resilience against targeted node attacks.
Wang and Rong \cite{wang09} analyse the response of scale free networks to different types of attacks on edges during cascading propagation. They used the scale free model \cite{barabasi99} and reported that scale free networks are more fragile to attacks on the edges with the lowest loads than on the ones with the highest loads. Liu \textit{et al.} also affirm that scale free networks are highly resilient to random failures. The authors suggest network design guidelines which maximize the network robustness to random and targeted attacks. A comprehensive study conducted by Magnien \textit{et al.} \cite{magnien11} surveys the impact of failures and attacks on Poisson and power law random networks, consolidating the main results acquired in the field. The authors also list new findings, which are stated below: \begin{itemize} \item Focusing on the random failure of nodes and edges, although previous researchers had predicted completely different behavior for Poisson and power law networks, in practice the differences are significant but not huge. Our results reinforce this finding, especially for the case of social networks. \item The authors also invalidate the explanation that targeted attacks are very efficient on power-law networks because they remove many links: random removal of as many links also results in breakdown of the network. \item Networks with a Poisson degree distribution behave similarly in the case of random node failures and targeted attacks, although their threshold is significantly lower in the second case. This goes against the often claimed assumption that, because all nodes have almost the same degree in a Poisson network, there is little difference between random node failures and targeted attacks. \end{itemize} Resilience has not been extensively studied for social networks. Moreover, existing studies focus on networks that are either only scale free or whose sizes are not comparable to the online social networks readily available around us.
Considering the new findings that deviate from previous results, we have a strong motivation to further investigate the resilience of different types of complex networks with a focus on social networks. Our empirical results reaffirm most of the findings of \cite{magnien11}, with our focus on semantically different social networks. \section{Data Sets}\label{sec::data} We have used four semantically different real world networks which represent social communication of different forms. These are the Political Blog network, Twitter, Epinions and the Author network, which we collectively abbreviate as real networks (RN) and which are described below. The \textit{Political Blog} network is a network of hyperlinks between weblogs on US politics, recorded in 2005 by Adamic and Glance \cite{adamic05}. The \textit{Twitter} network is one of the most popular online social networks for communication among online users, and we have used the dataset extracted by \cite{hashmi12}. The \textit{Epinions} network is a who-trust-whom online network of the consumer review website Epinions; the data is downloaded from the Stanford website ({\url{http://snap.stanford.edu/data/}}) where it is publicly available. The \textit{Author} network is a co-authorship network where two authors are linked with an edge if they co-authored a common work (an article, book, etc.). The dataset is made available by Vladimir Batagelj and Andrej Mrvar: Pajek datasets (\url{http://vlado.fmf.uni-lj.si/pub/networks/data/}). For all these networks, we only consider the biggest connected component and treat the networks as simple and undirected. Table \ref{tbl::networks} shows the number of nodes and edges in these networks along with the edge-node ratio. For each of these real networks, we generated equivalent-size networks using the four network generation models described below. The introduction of real data not only allowed us to select realistic edge-node ratios, but also to compare these models with real data.
We have also used four network generation models to represent different types of networks: the small world (SW) model of Watts and Strogatz \cite{watts98}, the scale free (SF) model of Barabasi and Albert \cite{barabasi99}, the small world-scale free (HK) model of Holme and Kim \cite{holme02} and the Erd\H{o}s (RD) model for random graphs. The small world model can be tuned to the desired number of nodes and edges by initializing a regular graph where each node has a given degree. The scale free model can be tuned by the number of edges each new node brings to the network, where all nodes connect preferentially. Similarly, the model for small world-scale free networks can be tuned by the number of nodes each new node connects to, giving us a network with approximately the desired edge-node ratio. A random network is generated using $n$ nodes and $m$ edges, where the degree distribution $p_k$ of the network follows a Poisson distribution $p_k = e^{-\lambda} \frac{\lambda^k}{k!}$. The networks we generated all had $\lambda > 1$, which signifies that most nodes in the network have a degree close to the mean degree of the network. \begin{table} \centering \begin{tabular}{|l|c|c|c|} \hline Network & Nodes & Edges & Edge-Node Ratio \\ \hline \hline Blog & 1222 & 16714 & 13.6 \\ \hline Twitter & 2492 & 17658 & 7.0 \\ \hline Epinions & 2000 & 48720 & 24.3 \\ \hline Author & 3621 & 9461 & 2.6 \\ \hline \end{tabular} \caption{Network statistics for the different social networks} \label{tbl::networks} \end{table} For the purpose of experimentation and empirical analysis, we generated 5 artificial networks each for the small world, scale free, small world-scale free and random models, equivalent to the 4 different social networks, giving us a total of 80 networks. We averaged the readings obtained for these networks, although the networks showed very little variation, with standard deviations of less than 1 in all cases.
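As a concrete illustration of the RD construction (this is our own pure-Python sketch, not the authors' code; the function name \texttt{random\_graph} and the seed are our choices), a simple undirected graph with $n$ nodes and $m$ uniformly drawn edges can be generated as follows:

```python
import random

def random_graph(n, m, seed=42):
    """Random (RD) graph: n nodes and m distinct edges drawn
    uniformly at random; simple and undirected, as in the text."""
    rng = random.Random(seed)
    edges = set()
    while len(edges) < m:
        i, j = rng.randrange(n), rng.randrange(n)
        if i != j:                       # no self-loops
            edges.add((min(i, j), max(i, j)))
    adj = {v: set() for v in range(n)}   # adjacency-set representation
    for i, j in edges:
        adj[i].add(j)
        adj[j].add(i)
    return adj

# Blog-sized instance (Table 1: 1222 nodes, 16714 edges)
G = random_graph(1222, 16714)
mean_degree = sum(len(nb) for nb in G.values()) / len(G)
# mean degree = 2m/n, i.e. about 27, so lambda > 1 as stated
```

For the other models (SW, SF, HK), library generators such as those in networkx could be used in the same adjacency-set representation.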
Table \ref{tbl::metrics} shows the degree of the most connected nodes, the clustering coefficients and the average path lengths of the generated networks in comparison to the real networks. A clear similarity among all these networks is the low average path length, which indicates that, on average, nodes in all these networks lie close to each other, following the famous `six degrees of separation' rule. All the real networks are both small world and scale free in nature; the generated scale free networks have a low clustering coefficient, and the degree distributions of the small world and random networks follow a Poisson distribution with $\lambda >1$. \begin{table} \centering \begin{tabular}{|l|c|c|c|c|c|} \hline Data Set & {\ Real Network \ } & {\ RD \ } & {\ SW \ } & {\ SF \ } & {\ HK \ } \\ \hline \hline & \multicolumn{5}{|c|}{Highest Degree of a Node\ } \\ \hline \hline Blog & 351 & 46 & 47 & 211 & 321 \\ \hline Twitter & 237 & 27 & 27 & 253 & 319 \\ \hline Epinions & 1192 & 77 & 72 & 373 & 560\\ \hline Author & 102 & 15 & 16 & 201 & 183 \\ \hline \hline & \multicolumn{5}{|c|}{Clustering Coefficient\ } \\ \hline \hline Blog & 0.32 & 0.02 & 0.56 & 0.07 & 0.24 \\ \hline Twitter & 0.13 & 0.005 & 0.49 & 0.03 & 0.27 \\ \hline Epinions & 0.27 & 0.02 & 0.58 & 0.08 & 0.22\\ \hline Author & 0.53 & 0.001 & 0.31 & 0.01 & 0.42 \\ \hline \hline & \multicolumn{5}{|c|}{Average Path Length\ } \\ \hline \hline Blog & 2.7 & 2.5 & 3.2 & 2.4 & 2.2 \\ \hline Twitter & 3.4 & 3.2 & 4.2 & 2.9 & 2.8 \\ \hline Epinions & 2.2 & 2.2 & 3.0 & 2.2 & 2.0\\ \hline Author & 5.31 & 5.07 & 6.41 & 3.4 & 4.0 \\ \hline \multicolumn{6}{|c|}{\ } \\ \hline \end{tabular} \caption{RD=Random Network, SW=Small World, SF=Scale Free, HK=Holme and Kim model for small world-scale free networks.
The table shows different metrics calculated for the real and artificially generated networks for comparison.} \label{tbl::metrics} \end{table} \section{Experimentation}\label{sec::experimentation} As described above, we studied resilience considering six attack strategies, three of which are for nodes and three for edges. These are Targeted attack on Nodes, Random failure of Nodes, Almost Random failure of Nodes, Targeted attack on Edges, Random failure of Edges and Almost Random failure of Edges. Each of these strategies is described below: \textbf{Targeted attacks on nodes and edges:} The attack strategy for targeted removal of nodes removes nodes in decreasing order of their degree (connectivity). This strategy has been used by many other researchers \cite{guillaume05} for such studies. To determine targeted edges, we propose a slightly different version from the one used by \cite{guillaume05}. Those authors removed edges connected to high degree nodes, which works well for networks such as scale free networks. Our method is inspired by the concept of funneling in social networks \cite{newman01b}, where most connections of a person to other people are usually routed through a small set of people, and connections with one or two famous personalities reduce the distance to all other people in the social network. Thus the important edges linking many people are the ones between high degree people. We assign a weight $\mathit{W}(e_{i,j})$ to all $m$ edges, where $e_{i,j}$ represents the edge between nodes $i$ and $j$, based on the degree of each node using the equation: $ \mathit{W}(e_{i,j})= deg(i) + deg(j)$. Edges are removed in decreasing order of $\mathit{W}$ in an attempt to remove the edges that connect the most connected people in the network. \textbf{Random failure of nodes and edges:} Random removal of nodes and edges is modeled by a series of failures of nodes or edges selected randomly from the network with equal probability.
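The targeted edge strategy above can be sketched as follows (a minimal implementation of our own on an adjacency-set representation; \texttt{targeted\_edge\_order} is a name we introduce, not from the paper):

```python
def targeted_edge_order(adj):
    """Sort edges by decreasing W(e_ij) = deg(i) + deg(j),
    so links between the most connected nodes fall first."""
    deg = {v: len(nb) for v, nb in adj.items()}
    edges = [(i, j) for i in adj for j in adj[i] if i < j]
    return sorted(edges, key=lambda e: deg[e[0]] + deg[e[1]],
                  reverse=True)

# Toy example: path 0-1-2-3. The middle edge (1,2) has W = 4,
# the outer edges have W = 3, so (1,2) is attacked first.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
order = targeted_edge_order(path)
# order[0] == (1, 2)
```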
\textbf{Almost random failure of nodes and edges:} These attack strategies were described by \cite{guillaume05} as more efficient attack strategies in the case of scale free networks. Almost random failure of nodes removes randomly selected nodes with degree at least 2, and almost random failure of edges removes edges between vertices where the degree of each vertex is at least 2. \textbf{Quantifying the resilience of a network:} In order to quantify the resilience of a network, we use the two most commonly applied methods: one measures the number of nodes and the other the average path length of the biggest connected component in the network after each attack. The percentage of nodes still connected after an attack provides an estimation of how resilient the networks are. Similarly, the increase in the average distance from any one node to another also provides an estimation of how resilient the networks are after each attack. We have studied the effects after every $10\%$ removal of either nodes or edges against the percentage of nodes remaining in the biggest connected component and the average path length of this component. \section{Results and Discussion}\label{sec::results} \begin{figure} \begin{center} \includegraphics[width=0.99\textwidth]{blog.jpg} \end{center} \vspace{-20pt} \caption{RN=Blog, HK=Small world-scale free, RD=Random, SF=Scale free, SW=Small world. X-axis: $\%$ of nodes (a,c,e) and edges (b,d,f) removed from the network. Y-axis: $\%$ of nodes (left) and APL (right) of the biggest connected component.} \label{fig::blog} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.99\textwidth]{epinions.jpg} \end{center} \vspace{-20pt} \caption{RN=Epinions, HK=Small world-scale free, RD=Random, SF=Scale free, SW=Small world.
X-axis: $\%$ of nodes (a,c,e) and edges (b,d,f) removed from the network. Y-axis: $\%$ of nodes (left) and APL (right) of the biggest connected component.} \label{fig::epinions} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.99\textwidth]{author.jpg} \end{center} \vspace{-20pt} \caption{RN=Author, HK=Small world-scale free, RD=Random, SF=Scale free, SW=Small world. X-axis: $\%$ of nodes (a,c,e) and edges (b,d,f) removed from the network. Y-axis: $\%$ of nodes (left) and APL (right) of the biggest connected component.} \label{fig::author} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.99\textwidth]{twitter.jpg} \end{center} \vspace{-20pt} \caption{RN=Twitter, HK=Small world-scale free, RD=Random, SF=Scale free, SW=Small world. X-axis: $\%$ of nodes (a,c,e) and edges (b,d,f) removed from the network. Y-axis: $\%$ of nodes (left) and APL (right) of the biggest connected component.} \label{fig::twitter} \end{figure} Figures \ref{fig::blog}, \ref{fig::epinions}, \ref{fig::author} and \ref{fig::twitter} show the results for all four datasets with the different attack strategies on nodes and edges. The first findings are for the cases where we studied targeted attacks on edges, random attacks on nodes, random attacks on edges, almost random attacks on nodes and almost random attacks on edges. For all these cases, we find that the real networks behave similarly to all four classes of generated networks (small world, scale free, small world-scale free and random) if the same fraction of nodes or edges is removed, as shown in the figures. We justify these results based on the idea, also discussed by \cite{liu05a}, that increasing the minimum mean connectivity of nodes increases the robustness of networks to targeted and random attacks. All the social networks under consideration have very high average connectivity, as shown in Table \ref{tbl::networks}.
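The two resilience metrics used throughout these figures (the size of the biggest connected component and its average path length) can be computed with plain breadth-first search. This is a minimal sketch with helper names of our own, not the authors' code:

```python
from collections import deque

def biggest_component(adj):
    """Return the node set of the largest connected component."""
    seen, best = set(), set()
    for s in adj:
        if s in seen:
            continue
        comp, queue = {s}, deque([s])
        while queue:                      # BFS flood-fill from s
            v = queue.popleft()
            for u in adj[v]:
                if u not in comp:
                    comp.add(u)
                    queue.append(u)
        seen |= comp
        if len(comp) > len(best):
            best = comp
    return best

def average_path_length(adj, nodes):
    """Mean BFS distance over all node pairs of one component."""
    total, pairs = 0, 0
    for s in nodes:
        dist, queue = {s: 0}, deque([s])
        while queue:
            v = queue.popleft()
            for u in adj[v]:
                if u not in dist:
                    dist[u] = dist[v] + 1
                    queue.append(u)
        total += sum(dist[u] for u in nodes if u != s)
        pairs += len(nodes) - 1
    return total / pairs

# Path 0-1-2-3: one component of 4 nodes, APL = 5/3
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
comp = biggest_component(path)
apl = average_path_length(path, comp)
```

An attack simulation then alternates removals (in targeted or random order) with calls to these two functions every $10\%$ of removed nodes or edges.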
Even for the case of the author network, which has an average connectivity of $2.6$, the value is still high compared to the internet networks previously studied in the literature. Another generic finding concerns the clustering coefficients of the different networks. Although there are extreme differences between random and social networks, with values as low as $0.001$ and as high as around $0.5$ respectively (see Table \ref{tbl::metrics}), the behavior in terms of resilience remains the same for all these networks. This indicates that the presence or absence of triads does not reflect on the robustness of a network. The analysis of scale free and small world-scale free networks, which are fragile to targeted attacks when compared to small world and random networks, is also very interesting. This result is a direct implication of the large number of edges removed from scale free and small world-scale free networks as a result of targeted attacks on high degree nodes. Since nodes with very high degree are absent from small world and random networks, the same fraction of edges is not removed upon removal of high degree nodes, and these networks give the impression that they are more resilient to targeted attacks on nodes. The experiment on targeted attacks on edges contradicts this impression: an equal number of edges is removed from the real, scale free, small world, small world-scale free and random networks, and the results show that the behavior of all these networks is almost the same. This claim is further justified by our results for random attacks on edges as, again, they reveal similar behavior for all these classes of networks, both in terms of the size of the biggest connected component and the APL. Another important result is the behavior of the network generation models against the real networks for targeted attacks on nodes.
All the real world networks disintegrate more quickly than the artificially generated networks for all four datasets used for experimentation. This observation highlights the fact that network generation models fail to accurately capture all the structural properties of real world networks. This is due to the structural organization of real social networks as compared to the artificially generated networks. In real networks, there is a high percentage of low degree nodes that are connected through high degree nodes only; when these high degree nodes are removed in targeted attacks, the low degree nodes immediately become disconnected. On the other hand, artificially generated networks are all based on random connectivity among nodes, which are not necessarily connected only through high degree nodes, making them more resilient when high degree nodes are removed from the network. We discuss the results of the first experiment for each attack strategy below: \textbf{Targeted attacks on Nodes:} As a general trend, both random and small world networks behave almost similarly for all the datasets. Furthermore, they are more resilient than the small world-scale free (HK) and scale free (SF) networks. Another important observation is the behavior of all the real datasets in comparison to the artificially generated networks. Real datasets disintegrate faster than any other model, as shown in figures \ref{fig::blog}(left), \ref{fig::epinions}(left), \ref{fig::author}(left) and \ref{fig::twitter}(left) for the case of targeted attacks. The least resilient network is the Author network, which disintegrates after the $10\%$ highest degree nodes are removed. For the Blog and Twitter networks, removal of around $40\%$ of the high degree nodes is sufficient to break the entire network, as the size of the biggest component falls below $10\%$, whereas the Epinions network falls below $10\%$ only after around $50\%$ removal of nodes, making it more resilient to targeted attacks.
Again the edge-node ratio plays an important role, as the Epinions network clearly has the highest value of $24.3$. This, when compared with the Author network with an edge-node ratio of $2.6$, indicates how having more edges nullifies the effects of targeted attacks on networks. The above similarity in the behavior of the networks is further reinforced by the behavior of the APL in figures \ref{fig::blog}(right), \ref{fig::epinions}(right), \ref{fig::author}(right) and \ref{fig::twitter}(right), where variations can only be observed in the case of targeted attacks on nodes. \textbf{Random failure of Nodes:} The behavior under random removal of nodes reveals an interesting similarity, especially between the generated scale free, small world and random networks and the real datasets. Particularly for the Blog and Epinions data, almost identical behavior is evident from figure \ref{fig::blog}(left) and figure \ref{fig::epinions}(left). The Twitter and Author networks also show high similarity, as shown in figures \ref{fig::author}(left) and \ref{fig::twitter}(left). A linear decay is observed in the number of nodes present in the biggest component against linear removal of nodes; the remaining nodes stay connected even after $90\%$ of the nodes are removed, demonstrating very high resilience of all these networks against random node failures. The APL of the small world networks has a slightly higher value for all datasets, indicating minor differences in the empirical values, but the overall behavior and decay pattern are the same for all networks. \textbf{Almost Random failure of Nodes:} Just as for random failures, almost random failure of nodes demonstrates a high similarity between the different classes of generated networks and the real networks. Differences can be observed only for the author network, which has a much lower edge-node ratio. The behavior of the real author network deviates slightly from the other classes of networks.
This is in contrast to the results of \cite{guillaume05}, in which the authors showed that this strategy is more efficient than random failures. The networks used to show these results in \cite{guillaume05} had an edge-node ratio of less than 3, whereas the networks we use here have a much higher edge-node ratio, with the exception of the author network, which has an edge-node ratio of 2.6; accordingly, we can see differences between the results of random failures and almost random failures in the author network. \textbf{Targeted attacks on Edges:} All the networks show a resilience against targeted attacks on edges equivalent to that against random removal of edges. The author network in Figure \ref{fig::author}(left) again shows an early breakdown of the biggest component, further supporting our claim that high mean connectivity is an important reason for resilient structures. \textbf{Random failure of Edges:} A slight variation in resilience can be observed for all the networks. All real networks show a tendency to disintegrate more than the generated networks, especially after the removal of $60\%$ of the edges. The author network is the least resilient case, where all the generated and real networks disintegrate into smaller components after the removal of around $60\%$ of the edges. Since the author network has the lowest edge-node ratio (see Table \ref{tbl::networks}), this behavior further supports the view that the other networks are resilient because of the high mean connectivity of their nodes. \textbf{Almost Random failure of Edges:} All the datasets behave exactly the same except for the Epinions network, where the removal of more than $60\%$ of the edges results in a different pattern. The small world and random networks behave similarly and are the least resilient. The scale free and small world-scale free networks behave similarly, being more resilient, and the Epinions network lies between these two behaviors.
The second experiment compares different attack strategies on nodes using the four real networks, as shown in figure \ref{fig::attackresults}. The previous experiment revealed that targeted attack on nodes is the most efficient attack strategy across the different classes of networks. The second experiment compares the different attack strategies on nodes for the different social networks. The first finding from this experiment is that there are only minor differences between random attacks and almost random attacks when the edge-node ratio of a network is high. Slight differences can be observed for the author network in figure \ref{fig::attackresults}(b), which has a comparatively low edge-node ratio. This is in contradiction to the results of \cite{guillaume05}, who studied internet graphs with a much lower edge-node ratio. Internet graphs are known to have star-like structures where a single node (known as a hub) sits between many other nodes, providing efficient connectivity among them. Removing nodes with degree 2 or more unintentionally targets these hubs, which in turn results in breakdown of the network. In contrast, social networks do not have such hubs. Removing nodes with degree 2 or more does not break the network, especially for networks with a high edge-node ratio, because many paths connect each node, making the network more resilient to this type of attack. The second finding is, as expected, the effectiveness of targeted attacks on nodes compared to random and almost random failures. The author network has a low percolation threshold and breaks immediately into relatively large connected components. The Blog, Epinions and Twitter networks demonstrate a more graceful degradation with a high percolation threshold, as most of the nodes remain connected in a single connected component even after the removal of $40\%$ to $60\%$ of the high degree nodes.
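The hub argument above can be illustrated on a toy star graph (our own sketch; \texttt{remove\_node} is a name we introduce): removing a leaf, as a random failure almost surely does, leaves the star connected, whereas removing the single hub isolates every other node.

```python
def remove_node(adj, v):
    """Delete node v and all its incident edges in place."""
    for u in adj.pop(v, set()):
        adj[u].discard(v)

# Star: hub 0 connected to leaves 1..10
star = {0: set(range(1, 11)), **{i: {0} for i in range(1, 11)}}

remove_node(star, 3)   # random-like failure: a leaf
# hub 0 still links the remaining 9 leaves in one component

remove_node(star, 0)   # targeted attack: the hub
# every surviving node is now isolated
```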
\begin{figure} \begin{center} \includegraphics[width=0.7\textwidth]{attackresults.jpg} \end{center} \vspace{-18pt} \caption{Comparative analysis of different attack strategies on nodes for the 4 semantically different social networks.} \label{fig::attackresults} \end{figure} \section{Conclusion}\label{sec::conclusion} In this paper, we have studied the behavior of small world, scale free and small world-scale free networks in comparison to random networks and four semantically different social networks. Our results show that the behavior of all these classes of networks remains the same under targeted attacks on edges, random attacks on nodes and edges, and almost random attacks on nodes and edges, both in terms of the size of the biggest component and the average path length. The behavior of these networks changes under targeted attacks on nodes. Interesting behavior was observed with respect to the clustering coefficient and targeted attacks on edges. Furthermore, structural differences were observed between real social networks and all the network generation models. Insignificant differences were observed between random failures of nodes and edges when compared with almost random failures. We intend to extend this study by incorporating larger social networks. The networks studied here are unweighted and undirected, and we intend to analyze weighted and directed networks as well. Another important characteristic of social networks is the temporal dimension, which plays an important role in dictating many social processes such as information diffusion and epidemics, and we would also like to study resilience for temporal social networks. \bibliographystyle{abbrv}
\section{Introduction} Understanding the effective equilibrium interactions between two charged mesoscopic bodies immersed in a solution is essential in various fields of colloid science, from physics \cite{Ben95} to biochemistry \cite{Jonsson99}. References \cite{Gelbart00,HL00,Belloni00,Levin02,Grosberg02,Messina09,NKNP10} offer a general overview. A breakthrough in the field was achieved when it was realized in the 1980s, from numerical evidence, that equivalently charged surfaces may effectively attract each other under strong enough Coulombic couplings. Such couplings can be realized in practice by increasing the valency of the counter-ions involved \cite{Guldbrand84}. This ``anomalous'' like-charge attraction explains the formation of DNA condensates \cite{Bloomfield96} or aggregates of colloidal particles \cite{Linse99}. A complementary, and simpler to rationalize, problem is the possibility of an effective repulsion between two plates with opposite uniform surface charges. The weak-coupling limit is described by the Poisson-Boltzmann (PB) mean-field approach. Formulating the Coulomb problem as a field theory, the PB equation can be viewed as the first-order term of a systematic expansion in loops \cite{Attard88}. While like-charge attraction is not predicted by the PB theory \cite{Neu99,Sader,T00,Andelman06}, opposite-charge repulsion can occur already in the mean-field treatment \cite{Parsegian72,Paillusson11}, since it is merely an entropic effect with a large cost for confining particles in a small volume. A remarkable theoretical progress has been made during the past decade in the opposite strong-coupling (SC) limit, formulated initially for a single wall or two parallel walls at small separation. The topic was pioneered by Rouzina and Bloomfield \cite{Rouzina96} and developed further by Shklovskii and by Levin with collaborators \cite{Shklovskii,Levin02}.
An essential aspect is that counter-ions form two-dimensional (2D) highly correlated layers at charged walls at temperature $T=0$. For small but non-vanishing temperatures, the structure of the interfacial counter-ions remains close to its ground-state counterpart. Within the field-theoretical formulation, which was put forward by Netz and collaborators in \cite{Moreira00,Netz01}, the leading SC behavior is a single-particle theory in the potential of the charged wall(s). Next correction orders are obtained as a virial or fugacity expansion in inverse powers of the coupling constant $\Xi$, defined below; we refer to this approach as the virial strong-coupling (VSC) theory. The method requires a renormalization of infrared divergences via the electroneutrality condition. A comparison with Monte Carlo (MC) simulations \cite{Moreira00} indicated the adequacy of the VSC approach to capture the leading large-$\Xi$ behavior of the density profile, which was an important achievement in the field. The first correction has the right functional form in space but an incorrect prefactor, whose value departs further from the MC one as the coupling constant $\Xi$ grows. This deficiency was attributed by the authors to the existence of an infinite sequence of higher-order logarithmic terms in the fugacity, which have to be resummed to recover the correct value of the prefactor. The {\em leading} order of the VSC theory was generalized to non-symmetrically charged plates \cite{Kanduc08,Paillusson11}, image charge effects \cite{Kanduc07}, the presence of salt \cite{Kanduc10} and various curved (spherical and cylindrical) geometries; for a review, see \cite{Naji05}. Beyond Ref. \cite{Moreira00}, several investigations assessed numerically the adequacy of the leading-order VSC approach \cite{Moreira00,Najicyl05,Kanduc07,Kanduc08,Dean09,Kanduc10}. Since the coupling constant $\Xi\propto 1/T^2$, the zero-temperature limit is contained in the VSC approach as the limit $\Xi\to\infty$.
This question requires some care though, since a natural rescaled distance $\widetilde{z}=z/\mu$ in the direction perpendicular to the plate(s) is set by the Gouy-Chapman length $\mu\propto T$, which tends to zero as $T\to 0$. From this point of view, the VSC method can be seen as a low-temperature theory approaching $T=0$ under a special spatial scaling of the particle coordinates. One of its restrictions is that the theory applies only to small (rescaled) distances between the charged plates. There exist other possibilities to approach the zero-temperature limit. One of them is to construct an expansion in $\Xi$ around the limit $\Xi\to\infty$, at a fixed ratio of the distance $d$ (in the two-plate problem) to the lattice spacing $a$ of the Wigner crystal formed at $T=0$. The low-temperature theory proposed by Lau et al. \cite{Lau00} can be considered in some respects as being of this kind. The considered model consists of two staggered hexagonal Wigner crystals of counter-ions condensed on the plates; the particles are not allowed to move in the slab between the plates. The attraction between the plates at zero and non-zero temperatures, which results from the interaction of the staggered Wigner crystals and from the particle fluctuations, can be computed. Since the particles are not allowed to leave their Wigner plane, the counter-ion profile between the two plates is trivial and there is no need for a spatial scaling. Such a model is interesting on its own, but has a restricted applicability to realistic systems of counter-ions because the particles are assumed to stick to the plates. This assumption may perhaps be acceptable at large distances between the plates, but it discards from the outset the excitations that are relevant at small distances, where the counter-ions unbind from the interfaces (see e.g. \cite{Netz01,Moreira00} and the analysis below).
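For orientation, the characteristic lengths and the coupling constant invoked here are usually defined as follows. This is a sketch in the conventions of the field-theoretic SC literature \cite{Moreira00,Netz01} (Gaussian units, counter-ions of valence $q$); the precise prefactors are our assumptions and should be checked against the definitions given later in the paper:

```latex
% Bjerrum length, Gouy--Chapman length and coupling constant
\[
  \ell_{\rm B} = \frac{e^2}{\varepsilon k_{\rm B} T}, \qquad
  \mu = \frac{1}{2\pi q \ell_{\rm B} \sigma}, \qquad
  \Xi = 2\pi q^3 \ell_{\rm B}^2 \sigma = \frac{q^2 \ell_{\rm B}}{\mu}.
\]
% Since \ell_B is proportional to 1/T, it follows that \mu ~ T and
% \Xi ~ 1/T^2, consistent with the scalings quoted in the text.
```

In these rescaled units, the two well-known limiting one-wall density profiles read $\widetilde{\rho}_{\rm PB}(\widetilde{z})=1/(1+\widetilde{z})^2$ (mean field) and $\widetilde{\rho}_{\rm SC}(\widetilde{z})=e^{-\widetilde{z}}$ (leading SC order), with $\widetilde{z}=z/\mu$.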
An interpolation between the Poisson-Boltzmann (low $\Xi$) and SC regimes (high $\Xi$), based on the idea of a ``correlation hole'', was the subject of a series of works \cite{Nordholm84,Chen06,Hatlo10}. The correlation hole was specified empirically in Refs. \cite{Chen06} and self-consistently, as an optimization condition for the grand partition function, in \cite{Hatlo10}. An interesting observation in \cite{Hatlo10}, corroborated by a comparison with MC simulations, was that the first correction in the SC expansion is proportional to $1/\sqrt{\Xi}$, and not to $1/\Xi$ as suggested by the VSC theory. Our exact expansion below shows that, indeed, the first correction scales like $1/\sqrt{\Xi}$. Recently, for the geometries of one plate and of two equivalently charged plates with counter-ions only, we proposed another type of SC theory \cite{Samaj11a}. It is based on a low-temperature expansion in particle deviations around the ground state formed by the 2D Wigner crystal of counter-ions at the plate(s). The approach points to the primary importance of the structure of the ground state, a point emphasized by some authors, see e.g. \cite{Levin99}. Our starting point therefore resembles that of Ref. \cite{Lau00}, but in the subsequent analysis, the particle vibrations around their Wigner-lattice positions are allowed along all directions, including the direction perpendicular to the crystal plane, along which the particle density varies in a non-trivial way. The theory is formulated in the set-up of the original VSC approach: an SC expansion around the same limit $\Xi\to\infty$ is made, together with the same scaling of the coordinate in the direction perpendicular to the plate(s), $\widetilde{z}=z/\mu$. Since the formation of the Wigner crystal is the basic ingredient from which the method starts, we refer to it as the WSC theory. Its leading order stems from a single-particle theory and is identical to the leading order obtained in the VSC approach.
In the present planar geometry, the WSC and VSC approaches differ beyond the leading order, when the first correction is considered. In this respect, in assessing the physical relevance of WSC and VSC, comparison to ``exact'' numerical data is essential. Remarkably, the first WSC correction has the same spatial functional form as the VSC prediction, but a different prefactor: its $1/\sqrt{\Xi}$ dependence on the coupling parameter and the value of the corresponding prefactor are in excellent agreement with available data of MC simulations, while the VSC prediction is off by several orders of magnitude under strong Coulombic couplings \cite{Moreira00}. Unlike the VSC theory, the WSC expansion is free of divergences, without any need for a renormalization of parameters. The WSC expansion turns out to be in inverse powers of $\sqrt{\Xi}$, and not of $\Xi$ as in the case of the VSC expansion. Due to its relatively simple derivation and algebraic structure, the WSC method has a potential applicability to a large variety of SC phenomena. In particular, the WSC expansion can be worked out beyond the leading order for asymmetric plates, which, to our knowledge, was not done at the VSC level, possibly due to the technical difficulties involved. The specific 2D Coulomb systems with logarithmic pair interactions were treated at the WSC level in Ref. \cite{Samaj11b}. In this paper, we aim at laying solid foundations for the WSC method. We develop the mathematical formalism initiated in Ref. \cite{Samaj11a}, which is based on a cumulant expansion, to systematically capture the vibrations of counter-ions around their Wigner-crystal positions. This formalism enables us to deal, in the leading order plus the first correction, also with asymmetric, like- or oppositely charged plates. The results obtained are in remarkable agreement with MC data, for large as well as intermediate values of the coupling parameter $\Xi$. The paper is organized as follows. The one-plate geometry is studied in Sec.
\ref{sec:oneplate}. An analysis is made of counter-ion vibrations around their ground-state positions in the Wigner crystal, along both transversal and longitudinal directions with respect to the plate surface. The cumulant technique, providing us with the WSC expansions of the particle density profile in powers of $1/\sqrt{\Xi}$, is explained in detail. Sec. \ref{sec:twoplates} deals with the geometry of two parallel plates at small separation. The cumulant technique is first implemented for equivalently charged plates and afterwards for asymmetrically charged plates. In the case of oppositely charged plates, the WSC results for the pressure are in agreement with MC simulations for small plate separations and lead to the correct (nonzero) large-distance asymptotics. In the case of like-charged plates, the accurate WSC results for the pressure are limited to small plate separations. All obtained results represent an essential improvement over the VSC estimates. Concluding remarks are given in Sec. \ref{sec:concl}. Before we embark on our study, a semantic point is in order. Some authors refer to the VSC approach as the ``SC theory''. Clearly, the VSC route is not the only theory that can be put forward to describe the strong-coupling regime. In what follows, the SC limit refers to $\Xi\to \infty$, and we carefully discriminate between the VSC and WSC predictions, which will both be tested against Monte Carlo data. \section{One-plate geometry} \label{sec:oneplate} \subsection{Definitions and notations} We start with the one-plate problem in the 3D Euclidean space of points ${\bf r}=(x,y,z)$ pictured in Fig. \ref{fig:geometry}a. In the half-space $\Lambda'=\{ {\bf r},z<0\}$, there is a hard wall of dielectric constant $\varepsilon$ which is impenetrable to particles. A uniform surface-charge density $\sigma e$, $e$ being the elementary charge and $\sigma>0$, is fixed at the wall surface $\Sigma$ localized at $z=0$.
The $q$-valent counter-ions (classical point-like particles) of charge $-q e$, immersed in a solution of dielectric constant $\varepsilon$, are confined to the complementary half-space $\Lambda=\{ {\bf r},z\ge 0\}$. In this work, we consider the homogeneous dielectric case only, without electrostatic image forces. The system is in thermal equilibrium at the inverse temperature $\beta = 1/(k_{\rm B}T)$. \begin{figure}[htb] \begin{center} \includegraphics[width=0.45\textwidth]{Fig1.eps} \caption{The two geometries considered: a) one plate; b) two parallel plates at distance $d$. The neutralizing counter-ions have charge $-q e$.} \label{fig:geometry} \end{center} \end{figure} The potential energy of an isolated counter-ion at distance $z$ from the wall is, up to an irrelevant constant, given by \begin{equation} \label{eq:potentialenergy} E(z) = \frac{2\pi q e^2 \sigma}{\varepsilon} z . \end{equation} The system as a whole is electroneutral; denoting the (infinite) number of counter-ions by $N$ and the (infinite) area of the wall surface by $\vert\Sigma\vert$, the electroneutrality condition reads \begin{equation} \label{eq:electroneutrality} q N = \sigma \vert\Sigma\vert . \end{equation} There are two relevant length scales describing, in Gaussian units, the interaction of counter-ions with each other and with the charged surface. The Bjerrum length \begin{equation} \ell_{\rm B} = \frac{\beta e^2}{\varepsilon} \end{equation} is the distance at which two unit charges interact with thermal energy $k_{\rm B}T$. The Gouy-Chapman length \begin{equation} \mu = \frac{1}{2\pi q \ell_{\rm B}\sigma} \end{equation} is the distance from the charged wall at which an isolated counter-ion has potential energy (\ref{eq:potentialenergy}) equal to thermal energy $k_{\rm B}T$. The $z$ coordinate of particles will usually be expressed in units of $\mu$, \begin{equation} \widetilde{z} = \frac{z}{\mu} .
\end{equation} The dimensionless coupling parameter $\Xi$, quantifying the strength of electrostatic correlations, is defined as the ratio \begin{equation} \Xi = \frac{q^2 \ell_{\rm B}}{\mu} = 2\pi q^3 \ell_{\rm B}^2 \sigma . \end{equation} The strong-coupling regime $\Xi\gg 1$ corresponds to either low temperatures, or large valency $q$ or surface charge $\sigma e$. The counter-ion averaged density profile $\rho(z)$ depends on the distance $z$ from the wall. It will be considered in the rescaled form \begin{equation} \widetilde{\rho}(\widetilde{z}) \equiv \frac{\rho(\mu\widetilde{z})}{ 2\pi\ell_{\rm B}\sigma^2} . \end{equation} The electroneutrality condition (\ref{eq:electroneutrality}) then takes two equivalent expressions \begin{equation} \label{eq:rhoelectroneutrality} q \int_0^{\infty} dz \rho(z) = \sigma , \qquad \int_0^{\infty} d\widetilde{z} \widetilde{\rho}(\widetilde{z}) = 1 . \end{equation} The contact-value theorem for planar wall surfaces \cite{Henderson78} relates the total contact density of particles to the surface charge density on the wall and the bulk pressure of the fluid $P$. For 3D systems of identical particles, it reads \begin{equation} \label{eq:contacttheorem1} \beta P = \rho(0) - 2\pi \ell_{\rm B}\sigma^2 . \end{equation} Since in the present case of a single isolated double layer the pressure vanishes, \begin{equation} \label{eq:rhocontacttheorem} \rho(0) = 2\pi \ell_{\rm B}\sigma^2 , \qquad \widetilde{\rho}(0) = 1 , \end{equation} which can be viewed as a constraint that any reasonable theory should fulfill. \subsection{The Virial Strong Coupling approach} With our choice of reduced units, the exact density profile is a function of two variables only: $\widetilde{\rho}(\widetilde{z},\Xi)$. It is well behaved when $\Xi \to \infty$, which is nevertheless a limit where, in unscaled variables, all counter-ions stick to the plate, forming the Wigner crystal ($\rho(z,\Xi) \propto \delta(z)$ for $\Xi\to\infty$).
The purpose of the present discussion is to resolve the structure of the double-layer at large but finite $\Xi$. According to the VSC method \cite{Moreira00,Netz01}, the density profile of counter-ions can be formally expanded in the SC regime as a power series in $1/\Xi$: \begin{equation} \label{eq:virialSC1} \widetilde{\rho}(\widetilde{z},\Xi) = \widetilde{\rho}_0(\widetilde{z}) + \frac{1}{\Xi} \widetilde{\rho}_1(\widetilde{z}) + {\cal O}\left( \frac{1}{\Xi^2} \right) , \end{equation} where \begin{equation} \label{eq:virialSC2} \widetilde{\rho}_0(\widetilde{z}) = e^{-\widetilde{z}} , \qquad \widetilde{\rho}_1(\widetilde{z}) = e^{-\widetilde{z}} \left( \frac{\widetilde{z}^2}{2} - \widetilde{z} \right) . \end{equation} The leading term $\widetilde{\rho}_0(\widetilde{z})$, which comes from the single-particle picture of counter-ions in the linear surface-charge potential, is in agreement with the MC simulations \cite{Moreira00}. Indeed, for large $\Xi$, the particles' excursion perpendicular to the plane, which is always quantified by $\mu$, is much smaller than the lateral spacing between ions (denoted $a$ below) \cite{Netz01}. As a consequence, these ions experience the potential of the bare plate, while the interactions with other ions become negligible by symmetry. On the other hand, the MC simulations indicate that the sub-leading term $\widetilde{\rho}_1(\widetilde{z})$ has the expected functional form (for sufficiently large coupling $\Xi>10$), but the prefactor $1/\Xi$ is incorrect. On the basis of the prediction (\ref{eq:virialSC1}), the MC data were fitted in \cite{Moreira00} by using the formula \begin{equation} \label{eq:theta1} \widetilde{\rho}(\widetilde{z},\Xi) - \widetilde{\rho}_0(\widetilde{z}) = \frac{1}{\theta} \widetilde{\rho}_1(\widetilde{z}) , \end{equation} where $\widetilde{\rho}(\widetilde{z},\Xi)$ is the density profile obtained from MC simulations and $\theta$ is treated as a fitting parameter. 
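Both terms in (\ref{eq:virialSC2}) respect the global constraints of the previous subsection: $\widetilde{\rho}_0$ alone carries the electroneutrality (\ref{eq:rhoelectroneutrality}) and the contact value (\ref{eq:rhocontacttheorem}), while $\widetilde{\rho}_1$ integrates to zero and vanishes at contact. A minimal numerical check (a Python sketch, not part of the original derivation):

```python
import numpy as np

# Leading order and first correction of the VSC profile, Eq. (eq:virialSC2)
def rho0(z):
    return np.exp(-z)

def rho1(z):
    return np.exp(-z) * (z**2 / 2 - z)

def integrate(f, zmax=50.0, n=200000):
    # simple trapezoidal rule; the exponential tail beyond zmax is negligible
    z = np.linspace(0.0, zmax, n + 1)
    y = f(z)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(z)))

I0 = integrate(rho0)   # electroneutrality is carried entirely by the leading term
I1 = integrate(rho1)   # the first correction carries no net charge
print(I0, I1, rho0(0.0), rho1(0.0))
```

The contact values $\widetilde{\rho}_0(0)=1$ and $\widetilde{\rho}_1(0)=0$ hold exactly, so the contact theorem is satisfied order by order.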
According to the VSC result (\ref{eq:virialSC1}), $\theta$ should be given by $\theta=\Xi$ plus next-to-leading corrections. As is seen in the log-log plot of Fig. \ref{fig:prof_theta}, the numerically obtained values of $\theta$ are much smaller than $\Xi$, and the difference between $\theta$ and $\Xi$ even grows with increasing coupling constant. \begin{figure}[htb] \begin{center} \includegraphics[width=0.45\textwidth,clip]{Fig2.eps} \caption{The fitting parameter $\theta$, defined by Eq. (\ref{eq:theta1}), vs. the coupling constant $\Xi$ for the one-plate geometry. The MC values reported in Ref. \cite{Moreira00} are shown with filled diamonds, the original prediction $\theta=\Xi$ of the VSC theory with the dashed line; the solid curve is our WSC prediction, given by Eq. (\ref{eq:prof_theta}).} \label{fig:prof_theta} \end{center} \end{figure} \subsection{The Wigner Strong Coupling expansion} Our approach is based on the fact that in the asymptotic ground-state limit $\Xi\to\infty$, all counter-ions collapse on the charged surface $z=0$, forming a 2D Wigner crystal \cite{Shklovskii,Levin02}. It is well known \cite{Bonsall77} that the lowest ground-state energy for the 2D Wigner crystal is provided by the hexagonal (equilateral triangular) lattice. Each point of this lattice has 6 nearest neighbors forming a hexagon, see Fig. \ref{fig:hexagonal}. The 2D lattice points are indexed by $\{ j=(j_1,j_2) \}$, where $j_1$ and $j_2$ are any two integers (positive, negative or zero): \begin{equation} {\bf R}_j = (R_j^x,R_j^y) = j_1 \bm{a}_1 + j_2 \bm{a}_2 , \end{equation} where \begin{equation} \bm{a}_1 = a (1,0) , \qquad \bm{a}_2 = a \left( \frac{1}{2},\frac{\sqrt{3}}{2} \right) \end{equation} are the primitive translation vectors of the Bravais lattice and $a$ is the lattice spacing. Since at each vertex there is just one particle, we can identify $j$ with particle labels, $j=1,\ldots,N$ ($N\to\infty$).
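The geometric statements above (six nearest neighbors at distance $a$, primitive vectors of equal length at angle $\pi/3$) can be verified by direct enumeration; the following Python sketch, with $a=1$, is purely illustrative:

```python
import numpy as np

# Primitive translation vectors of the hexagonal Wigner lattice (a = 1)
a1 = np.array([1.0, 0.0])
a2 = np.array([0.5, np.sqrt(3.0) / 2.0])

# Enumerate sites R_j = j1*a1 + j2*a2 in a neighborhood of the origin
J = range(-3, 4)
sites = np.array([j1 * a1 + j2 * a2
                  for j1 in J for j2 in J if (j1, j2) != (0, 0)])
dist = np.linalg.norm(sites, axis=1)

d_min = float(dist.min())                          # nearest-neighbor distance
n_nearest = int(np.sum(np.isclose(dist, d_min)))   # coordination number
print(d_min, n_nearest)
```

The angle between $\bm{a}_1$ and $\bm{a}_2$ follows from $\bm{a}_1\cdot\bm{a}_2 = 1/2$, i.e. $\pi/3$.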
There are two triangles per vertex, so the condition of global electroneutrality (\ref{eq:electroneutrality}) requires that \begin{equation} \label{eq:defa} \frac{q}{\sigma} = \frac{\sqrt{3}}{2} a^2. \end{equation} Note that in the large-$\Xi$ limit, the lateral distance between the nearest-neighbor counter-ions in the Wigner crystal $a$ is much larger than the characteristic length $\mu$ in the perpendicular $z$-direction, $a/\mu\propto \sqrt{\Xi}\gg 1$. As invoked above, this very feature explains why a single particle picture provides the leading order term in a SC expansion, so that the two different approaches discussed here (VSC and WSC) coincide to leading order. The same remark holds for the two plates problem that will be addressed in section \ref{sec:twoplates}. It should be emphasized though that this coincidence of leading orders is specific to the planar geometry. The $z$-coordinate of each particle in the ground state is zero, $Z_j=0$. \begin{figure}[htb] \begin{center} \includegraphics[width=0.25\textwidth,clip]{Fig3.eps} \caption{Hexagonal structure of the 2D Wigner crystal: $\bm{a}_1$ and $\bm{a}_2$ are the primitive translation vectors.} \label{fig:hexagonal} \end{center} \end{figure} We denote the ground-state energy of the counter-ions on the Wigner lattice plus the homogeneous surface-charge density $\sigma e$ by $E_0$. For $\Xi$ large but not infinite, the fluctuations of ions around their lattice positions, in all three spatial directions, begin to play a role. Let us first shift one of the particles, say $j=1$, from its Wigner lattice position $({\bf R}_1,Z_1=0)$ by a small vector $\delta{\bf r} = (x,y,z)$ ($\vert \delta{\bf r}\vert\ll a$) and look for the corresponding change in the total energy $\delta E = E - E_0\ge 0$. 
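The proportionality $a/\mu\propto\sqrt{\Xi}$ invoked above follows by combining (\ref{eq:defa}) with the definitions of $\mu$ and $\Xi$: the ratio $(a/\mu)/\sqrt{\Xi}$ equals the constant $2\sqrt{\pi}/3^{1/4}\simeq 2.69$, independently of $q$, $\ell_{\rm B}$ and $\sigma$. A quick numerical check with arbitrary (purely illustrative) parameter values:

```python
import math

def ratio(q, lB, sigma):
    a  = math.sqrt(2 * q / (math.sqrt(3) * sigma))  # lattice spacing, Eq. (eq:defa)
    mu = 1 / (2 * math.pi * q * lB * sigma)          # Gouy-Chapman length
    Xi = 2 * math.pi * q**3 * lB**2 * sigma          # coupling parameter
    return (a / mu) / math.sqrt(Xi)

r1 = ratio(2.0, 0.7, 1.3)
r2 = ratio(5.0, 1.1, 0.4)
print(r1, r2, 2 * math.sqrt(math.pi) / 3**0.25)     # all three coincide
```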
The first contribution to $\delta E$ comes from the interaction of the shifted counter-ion with the potential induced by the homogeneous surface charge density: \begin{equation} \label{eq:firstcontribution} \delta E^{(1)}(z) = \frac{2\pi q e^2 \sigma}{\varepsilon} z . \end{equation} The second contribution to $\delta E$ comes from the interaction of the shifted particle $1$ with all other particles $j\ne 1$ on the 2D hexagonal lattice: \begin{eqnarray} \delta E^{(2)}(x,y,z) = \phantom{aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa} \nonumber \\ \frac{(qe)^2}{\varepsilon} \sum_{j\ne 1} \left[ \frac{1}{\sqrt{(R_{1j}^x+x)^2+(R_{1j}^y+y)^2+z^2}} - \frac{1}{R_{1j}} \right] , \nonumber \\ \end{eqnarray} where ${\bf R}_{1j} = (R_{1j}^x,R_{1j}^y) = {\bf R}_1 - {\bf R}_j$ and $R_{1j} = \vert {\bf R}_{1j} \vert$. Rescaling the lattice positions by $a$ and taking into account the inequalities $x/a,y/a,z/a\ll 1$, this expression can be expanded as an infinite series in powers of $x/a$, $y/a$ and $z/a$ by using the formula \begin{equation} \label{eq:texpansion} \frac{1}{\sqrt{1+t}} = 1 - \frac{1}{2} t + \frac{3}{8} t^2 - \frac{5}{16} t^3 + \cdots , \quad t\ll 1 . \end{equation} Up to harmonic terms, the expansion reads \begin{equation} \label{eq:harmonic} \delta E^{(2)}(x,y,z) = \frac{(qe)^2}{2 \varepsilon a^3} C_3 \left[ \frac{1}{2}(x^2+y^2) - z^2 \right] . 
\end{equation} Here, $C_3$ is the special $s=3$ case of dimensionless hexagonal lattice sums \begin{equation} C_s = \sum_{j\ne 1} \frac{1}{(R_{1j}/a)^s} , \end{equation} which can be expressed from the general theory \cite{Zucker} as \begin{eqnarray} C_3 & = & \sum_{j,k=-\infty\atop (j,k)\ne (0,0)}^{\infty} \frac{1}{(j^2 + j k + k^2)^{3/2}} \nonumber \\ & = & \frac{2}{\sqrt{3}} \zeta\left(\frac{3}{2}\right) \left[ \zeta\left( \frac{3}{2},\frac{1}{3}\right) - \zeta\left( \frac{3}{2},\frac{2}{3}\right) \right] \label{eq.C3} \end{eqnarray} with $\zeta(z,q) = \sum_{n=0}^{\infty} 1/(q+n)^z$ the generalized Riemann zeta function and $\zeta(z) \equiv \zeta(z,1)$ (this function should not be confused with the parameter $\zeta$, appearing without arguments below after Eq. (\ref{eq:defzeta}), that will measure the asymmetry between two charged plates). Explicitly, $C_3=11.034\ldots$. The absence of the linear $x$, $y$ terms and of the mixed $xy$ term in (\ref{eq:harmonic}) is caused by the fact that every lattice point is at a center of inversion. The invariance of the hexagonal lattice with respect to the rotation around any point by the angle $\pi/3$ implies the lattice sum equalities \begin{eqnarray} \sum_{j\ne 1} f(R_{1j}) \left( R_{1j}^x \right)^2 & = & \sum_{j\ne 1} f(R_{1j}) \left( R_{1j}^y \right)^2 \nonumber \\ & = & \frac{1}{2} \sum_{j\ne 1} f(R_{1j}) R_{1j}^2 , \end{eqnarray} which were also used in the derivation of (\ref{eq:harmonic}). Note that the $x^2$ and $y^2$ harmonic terms in Eq. (\ref{eq:harmonic}) have positive signs which is consistent with the stability of the Wigner crystal in the $(x,y)$ plane. On the other hand, the minus sign of the $z^2$ term does not represent any stability problem due to the presence of the positive linear contribution in (\ref{eq:firstcontribution}), which is dominant for small $z$-distances. The total energy change is given by $\delta E(x,y,z) = \delta E^{(1)}(z) + \delta E^{(2)}(x,y,z)$. 
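The quoted value of $C_3$ can be checked independently; the Python sketch below (not part of the derivation) compares a brute-force lattice sum, truncated at a large cutoff, with the Hurwitz zeta closed form of Eq. (\ref{eq.C3}):

```python
import numpy as np
from scipy.special import zeta

# Direct summation of C3 over hexagonal-lattice indices (j,k) != (0,0)
M = 1200
j, k = np.meshgrid(np.arange(-M, M + 1), np.arange(-M, M + 1), indexing="ij")
Q = (j**2 + j * k + k**2).astype(float)
Q[M, M] = np.inf                       # exclude the origin from the sum
C3_direct = float(np.sum(Q**-1.5))     # truncation error is O(1/M)

# Closed form, Eq. (eq.C3), via Riemann and Hurwitz zeta functions
C3_exact = (2 / np.sqrt(3)) * zeta(1.5) * (zeta(1.5, 1/3) - zeta(1.5, 2/3))
print(C3_direct, C3_exact)             # both close to 11.034...
```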
Finally, let us write down the $z$-dependent part of the dimensionless energy shift $-\beta \delta E$, with $z$ expressed in units of $\mu$: \begin{equation} -\beta \delta E(0,0,\mu\widetilde{z}) \sim -\widetilde{z} + \frac{\alpha^3}{2} \frac{C_3}{\sqrt{\Xi}} \widetilde{z}^2 , \quad \alpha = \frac{3^{1/4}}{2\sqrt{\pi}} . \end{equation} We see that in the limit $\Xi\to\infty$, as advocated above, the two-body interaction term of the shifted ion with all other ions on the Wigner crystal is of order $1/\sqrt{\Xi}$ and therefore negligible in comparison with the one-body potential term $-\widetilde{z}$ due to the surface charge density. This leading single-particle picture is common to both VSC and WSC approaches. As concerns the two-body interaction terms $\widetilde{z}^p$ of higher orders ($p=3,4,\ldots$), their coefficients are proportional to $q^2 \ell_{\rm B} \mu^p/a^{p+1} \propto 1/\Xi^{(p-1)/2}$. The present scheme thus represents a systematic basis for an expansion in powers of $1/\sqrt{\Xi}$. The generalization of the above formalism to independent shifts of all particles from their lattice positions is straightforward. Let us shift every particle $j=1,2,\ldots,N$ from its lattice position $({\bf R}_j,Z_j=0)$ by a small vector $\delta {\bf r}_j=(x_j,y_j,z_j)$ ($\vert \delta {\bf r}_j \vert\ll a$) and study the corresponding energy change $\delta E$. As before, the first (one-body) contribution to $\delta E$ is given by \begin{equation} -\beta \delta E^{(1)}(\{\mu\widetilde{z}_j\}) = - \sum_{j=1}^N \widetilde{z}_j . 
\end{equation} The second (two-body) contribution to $\delta E$ is expressible as \begin{eqnarray} \delta E^{(2)}(\{x_j\},\{y_j\},\{z_j\}) = \phantom{aaaaaaaaaaaaaaaaaa} \nonumber \\ \frac{(qe)^2}{2 \varepsilon} \sum_{j,k=1\atop (j\ne k)}^N \frac{1}{R_{jk}} \left[ \frac{1}{\sqrt{1+\mu_{jk}+\nu_{jk}}} - 1 \right] , \end{eqnarray} where the dimensionless $\mu_{jk}$ and $\nu_{jk}$ involve the particle coordinates along and perpendicular to the Wigner crystal, respectively: \begin{eqnarray} \mu_{jk} & = & 2(x_j-x_k) \frac{R_{jk}^x}{R_{jk}^2} + 2(y_j-y_k) \frac{R_{jk}^y}{R_{jk}^2} \nonumber \\ & & + \frac{1}{R_{jk}^2} \left[ (x_j-x_k)^2 + (y_j-y_k)^2 \right] , \\ \nu_{jk} & = & \frac{1}{R_{jk}^2} (z_j-z_k)^2 . \end{eqnarray} Performing the expansion of type (\ref{eq:texpansion}) in small $\mu_{jk}$ and $\nu_{jk}$, we end up with \begin{equation} -\beta \delta E^{(2)}(\{x_j\},\{y_j\},\{z_j\}) = S_z + S_W + S_{z,W} , \end{equation} where \begin{equation} \label{Sz} S_z = \frac{q^2 \ell_{\rm B}}{2} \sum_{j,k=1\atop (j\ne k)}^N \frac{1}{R_{jk}} \left( \frac{1}{2} \nu_{jk} - \frac{3}{8} \nu_{jk}^2 + \cdots \right) \end{equation} contains particle shifts exclusively in the $z$ direction, \begin{equation} \label{Sw} S_W = \frac{q^2 \ell_{\rm B}}{2} \sum_{j,k=1\atop (j\ne k)}^N \frac{1}{R_{jk}} \left( \frac{1}{2} \mu_{jk} - \frac{3}{8} \mu_{jk}^2 + \cdots \right) \end{equation} contains particle shifts exclusively in the $(x,y)$ Wigner plane and \begin{eqnarray} \label{Szw} S_{z,W} & = & \frac{q^2 \ell_{\rm B}}{2} \sum_{j,k=1\atop (j\ne k)}^N \frac{1}{R_{jk}} \left[ - \frac{3}{4} \mu_{jk} \nu_{jk} \right. \nonumber \\ & & \left. + \frac{15}{16} \left( \mu_{jk}^2 \nu_{jk} + \mu_{jk} \nu_{jk}^2 \right) + \cdots \right] \end{eqnarray} mixes particle shifts along the $z$ direction with those along the $(x,y)$ plane. 
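The variables $\mu_{jk}$ and $\nu_{jk}$ parametrize the pair distance exactly, $\vert{\bf r}_j-{\bf r}_k\vert = R_{jk}\sqrt{1+\mu_{jk}+\nu_{jk}}$, and the expansion (\ref{eq:texpansion}) then controls the error systematically. A single-pair numerical check (with illustrative small displacements):

```python
import math

# One pair: in-plane lattice separation R_jk plus small 3D displacements
Rx, Ry = 1.0, 0.0
xj, yj, zj = 0.02, -0.01, 0.03
xk, yk, zk = -0.015, 0.025, -0.01

R2 = Rx**2 + Ry**2
mu = (2*(xj - xk)*Rx + 2*(yj - yk)*Ry + (xj - xk)**2 + (yj - yk)**2) / R2
nu = (zj - zk)**2 / R2

exact = 1 / math.sqrt((Rx + xj - xk)**2 + (Ry + yj - yk)**2 + (zj - zk)**2)
parametrized = (1 / math.sqrt(R2)) / math.sqrt(1 + mu + nu)   # exact identity
series = (1 / math.sqrt(R2)) * (1 - (mu + nu)/2 + 3*(mu + nu)**2/8)
print(exact, parametrized, series)
```

The truncated series deviates only at the cubic order $(5/16)(\mu_{jk}+\nu_{jk})^3$.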
We are interested in the particle density profile defined by $\rho({\bf r}) = \langle \sum_{j=1}^N \delta({\bf r}-{\bf r}_j) \rangle$, where $\langle \cdots \rangle$ means thermal equilibrium average over the Boltzmann weight $\exp(-\beta\delta E)$ with \begin{eqnarray} \label{dE} -\beta \delta E & = & -\beta \delta E^{(1)} -\beta \delta E^{(2)} \nonumber \\ & = & -\sum_{j=1}^N \widetilde{z}_j + S_z + S_W +S_{z,W} . \end{eqnarray} The ground-state energy $E_0$ is a quantity which is independent of the particle coordinate shifts and as such drops out of the statistical averages. The system is translationally invariant in the $(x,y)$ plane, so that the particle density is only $z$-dependent, $\rho({\bf r})=\rho(z)$. We shall consider separately in (\ref{dE}) the terms containing exclusively particle shifts in the $z$ direction, transversal to the wall, and those which involve longitudinal particle shifts along the Wigner $(x,y)$ plane. \subsection{Contribution of transversal particle shifts} Let us, for the moment, disregard the terms $S_W$ and $S_{z,W}$ in (\ref{dE}) and consider only the particle $z$-shifts in the ``most relevant'' $S_z$, \begin{equation} \label{eq:deltae} -\beta\delta E = - \sum_{j=1}^N \widetilde{z}_j + S_z . \end{equation} Expressing $z$ in units of $\mu$, $S_z$ in Eq. (\ref{Sz}) can be written as an infinite series in powers of $1/\sqrt{\Xi}$, the first terms of which read \begin{eqnarray} \label{eq:Sz} S_z & = & \frac{\alpha^3}{4\sqrt{\Xi}} \sum_{j,k=1\atop (j\ne k)}^N \frac{1}{(R_{jk}/a)^3} (\widetilde{z}_j-\widetilde{z}_k)^2 \nonumber \\ & & - \frac{3 \alpha^5}{16 \,\Xi^{3/2}} \sum_{j,k=1\atop (j\ne k)}^N \frac{1}{(R_{jk}/a)^5} (\widetilde{z}_j-\widetilde{z}_k)^4 + \cdots . \end{eqnarray} In the limit $\Xi\to\infty$, $S_z$ is a perturbation with respect to the one-body part in (\ref{eq:deltae}).
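The prefactor $\alpha^3/(4\sqrt{\Xi})$ of the first term in (\ref{eq:Sz}) traces back to the combination $q^2\ell_{\rm B}\mu^2/(4a^3)$ obtained from (\ref{Sz}) after rescaling; the identity between the two holds for any parameter values, as a short numerical check confirms (illustrative parameters):

```python
import math

alpha = 3**0.25 / (2 * math.sqrt(math.pi))

def coeffs(q, lB, sigma):
    a  = math.sqrt(2 * q / (math.sqrt(3) * sigma))   # Eq. (eq:defa)
    mu = 1 / (2 * math.pi * q * lB * sigma)
    Xi = 2 * math.pi * q**3 * lB**2 * sigma
    # prefactor of (z~_j - z~_k)^2 / (R_jk/a)^3 from Eq. (Sz) after rescaling
    direct  = q**2 * lB * mu**2 / (4 * a**3)
    claimed = alpha**3 / (4 * math.sqrt(Xi))
    return direct, claimed

d, c = coeffs(3.0, 0.9, 0.8)
print(d, c)   # the two coincide
```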
To obtain the particle density, we add to the one-body potential $\widetilde{z}$ an auxiliary (generating or source) potential $\beta u({\bf r})$, which will be set to $0$ at the end of calculations. The partition function of our $N$-particle system \begin{equation} \label{eq:partition} Z_N[w] = \frac{1}{N!} \int_{\Lambda} \prod_{i=1}^N \left[ d{\bf r}_i w({\bf r}_i) e^{-\widetilde{z}_i} \right] \exp(S_z) \end{equation} thereby becomes a functional of the generating Boltzmann weight $w({\bf r})=\exp[-\beta u({\bf r})]$. The particle density at point ${\bf r}$ is obtained as the functional derivative \begin{equation} \label{eq:density} \rho({\bf r}) = \frac{\delta}{\delta w({\bf r})} \ln Z_N[w] \Big\vert_{w({\bf r})=1} , \end{equation} which is of course a function of $\Xi$, in addition to ${\bf r}$. To treat $S_z$ as the perturbation, we define the $S_z=0$ counterpart of the partition function (\ref{eq:partition}) \begin{eqnarray} Z_N^{(0)}[w] & = & \frac{1}{N!} \int_{\Lambda} \prod_{i=1}^N \left[ d{\bf r}_i w({\bf r}_i) e^{-\widetilde{z}_i} \right] \nonumber \\ & = & \frac{1}{N!} \left[ \int_{\Lambda} d{\bf r} w({\bf r}) e^{-\widetilde{z}} \right]^N , \end{eqnarray} which corresponds to non-interacting particles in an external potential. It is clear that \begin{equation} \ln \left( \frac{Z_N[w]}{Z_N^{(0)}[w]} \right) = \ln \langle \exp(S_z) \rangle_0 , \end{equation} where $\langle \cdots \rangle_0$ denotes the averaging over the system of non-interacting particles defined by $Z_N^{(0)}$. We are left with the cumulant expansion of $\ln\langle\exp(S_z)\rangle_0$: \begin{eqnarray} \ln \langle \exp(S_z) \rangle_0 & = & \sum_{n=1}^{\infty} \frac{1}{n!} \langle S_z^n \rangle_0^{(c)} \nonumber \\ & = & \langle S_z \rangle_0 + \frac{1}{2} \left( \langle S_z^2 \rangle_0 - \langle S_z \rangle_0^2 \right) + \cdots . 
\end{eqnarray} An important property of the cumulant expansion is that if $\langle S_z \rangle_0$ is an extensive (proportional to $N$) quantity, the higher-order terms will also be. In other words, the contributions of order $N^2$, $N^3$, etc. cancel each other. We conclude that \begin{equation} \label{eq:lnZ} \ln Z_N[w] = \ln Z_N^{(0)}[w] + \langle S_z\rangle_0 + \frac{1}{2} \left( \langle S_z^2 \rangle_0 - \langle S_z \rangle_0^2 \right) + \cdots . \end{equation} The particle density results from the substitution of this expansion into (\ref{eq:density}), and the subsequent application of the functional derivative with respect to $w({\bf r})$, taken at $w({\bf r})=1$. The leading SC behavior of the particle density stems from $\ln Z_N^{(0)}[w]$. Since \begin{eqnarray} \frac{\delta}{\delta w({\bf r})} \ln Z_N^{(0)}[w] \Big\vert_{w({\bf r})=1} & = & \frac{N e^{-\widetilde{z}}}{\int_{\Lambda} d{\bf r} e^{-\widetilde{z}}} = \frac{N}{\vert\Sigma\vert\mu} e^{-\widetilde{z}} \nonumber \\ & = & (2\pi\ell_{\rm B}\sigma^2) e^{-\widetilde{z}} \end{eqnarray} we have $\widetilde{\rho}_0(\widetilde{z}) \sim e^{-\widetilde{z}}$, which coincides with the leading VSC term presented in (\ref{eq:virialSC2}). The first correction to the density profile stems from $\langle S_z \rangle_0$, namely from the first term in the series representation of $S_z$ (\ref{eq:Sz}): \begin{equation} \label{eq:firstcorrection} \langle S_z \rangle_0 \sim \frac{\alpha^3}{4\sqrt{\Xi}} \sum_{j,k=1\atop (j\ne k)}^N \frac{1}{(R_{jk}/a)^3} \langle \left( \widetilde{z}_j^2+\widetilde{z}_k^2-2\widetilde{z}_j\widetilde{z}_k \right) \rangle_0 . \end{equation} A useful property of the averaging $\langle \cdots \rangle_0$ is its independence of the particle (lattice site) index, e.g.
for $p=1,2,\ldots$ we have \begin{eqnarray} \langle \widetilde{z}_j^p \rangle_0 & = & \frac{\int_{\Lambda} \prod_{i=1}^N \left[ d{\bf r}_i w({\bf r}_i) e^{-\widetilde{z}_i} \right] \widetilde{z}_j^p}{\int_{\Lambda} \prod_{i=1}^N \left[ d{\bf r}_i w({\bf r}_i) e^{-\widetilde{z}_i} \right]} \nonumber \\ & = & \frac{\int_{\Lambda}d{\bf r} w({\bf r}) e^{-\widetilde{z}} \widetilde{z}^p}{ \int_{\Lambda}d{\bf r} w({\bf r}) e^{-\widetilde{z}}} \equiv [\widetilde{z}^p]_0 . \end{eqnarray} Simultaneously, due to the absence of interactions in $\langle \cdots \rangle_0$, correlation functions of different particles decouple, e.g. $\langle \widetilde{z}_j \widetilde{z}_k \rangle_0 = [\widetilde{z}]_0^2$ for $j\ne k$. Thus, the relation (\ref{eq:firstcorrection}) becomes \begin{equation} \langle S_z \rangle_0 \sim \frac{\alpha^3}{2\sqrt{\Xi}} N C_3 \left( [\widetilde{z}^2]_0 - [\widetilde{z}]_0^2 \right) . \end{equation} It is easy to show that \begin{equation} \frac{\delta}{\delta w({\bf r})} [\widetilde{z}^p]_0 \Big\vert_{w({\bf r})=1} = \frac{1}{\vert\Sigma\vert\mu} e^{-\widetilde{z}} \left( \widetilde{z}^p - p! \right) , \end{equation} where we used the equality $[\widetilde{z}^p]_0\vert_{w({\bf r})=1} = p!$. The formula for the density profile, in the leading order plus the first correction, then reads \begin{equation} \label{eq:leadingpluscorrection} \widetilde{\rho}(\widetilde{z},\Xi) = e^{-\widetilde{z}} + \frac{3^{3/4}}{8\pi^{3/2}} \frac{C_3}{\sqrt{\Xi}} e^{-\widetilde{z}} \left( \frac{\widetilde{z}^2}{2} - \widetilde{z} \right) + {\cal O}\left( \frac{1}{\Xi} \right) . \end{equation} Note that the electroneutrality (\ref{eq:rhoelectroneutrality}) and the contact theorem (\ref{eq:rhocontacttheorem}) are satisfied by this density profile. In Fig. \ref{fig:prof}, we compare the appropriately rescaled first correction to the leading SC profile obtained in (\ref{eq:leadingpluscorrection}) (solid curve) with MC data \cite{Moreira00} at $\Xi=10^3$ (filled squares).
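That the profile (\ref{eq:leadingpluscorrection}) satisfies (\ref{eq:rhoelectroneutrality}) and (\ref{eq:rhocontacttheorem}) at any $\Xi$ can be checked numerically; a Python sketch (using the truncated value $C_3\simeq 11.034$):

```python
import numpy as np

C3 = 11.034                          # hexagonal lattice sum, Eq. (eq.C3)
pref = 3**0.75 / (8 * np.pi**1.5)    # prefactor in Eq. (eq:leadingpluscorrection)

def rho_wsc(z, Xi):
    # leading order plus first WSC correction
    return np.exp(-z) * (1 + pref * C3 / np.sqrt(Xi) * (z**2 / 2 - z))

def norm_and_contact(Xi):
    z = np.linspace(0.0, 60.0, 600001)
    y = rho_wsc(z, Xi)
    norm = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(z)))   # trapezoid rule
    return norm, float(rho_wsc(0.0, Xi))

n100, c100 = norm_and_contact(100.0)
n1000, c1000 = norm_and_contact(1000.0)
print(n100, c100, n1000, c1000)   # electroneutrality and contact theorem
```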
The agreement is excellent. On the other hand, the VSC prediction is off by a factor $1000^{1/2}$. \begin{figure}[htb] \begin{center} \includegraphics[width=0.45\textwidth,clip]{Fig4.eps} \caption{Single charged wall: Comparison between the rescaled analytical first correction to the strong coupling profile from Eq. (\ref{eq:leadingpluscorrection}) (solid curve) and the MC results of Ref. \cite{Moreira00} (filled squares). Here, $\Xi=10^3$ and $\widetilde\rho_0(z)$ denotes the leading order term $\exp(-\widetilde z)$, that is subtracted from the numerical data to probe the correction.} \label{fig:prof} \end{center} \end{figure} Comparing our WSC result (\ref{eq:leadingpluscorrection}) with the VSC Eqs. (\ref{eq:virialSC1}) and (\ref{eq:virialSC2}) we see that the first corrections have the same functional dependence in $\widetilde{z}$, but different prefactors. In terms of the fitting parameter $\theta$ introduced in (\ref{eq:theta1}), the VSC estimate $\theta=\Xi$ is compared with the present value \begin{equation} \label{eq:prof_theta} \theta = \frac{8\pi^{3/2}}{3^{3/4}} \frac{1}{C_3} \sqrt{\Xi} = 1.771\ldots \sqrt{\Xi} . \end{equation} As is seen from Fig. \ref{fig:prof_theta}, this formula (solid curve) is in full agreement with the data of MC simulations (filled diamonds). In the series representation of $S_z$ (\ref{eq:Sz}), the first term is of order $\Xi^{-1/2}$ and the second one is of order $\Xi^{-3/2}$. In view of (\ref{eq:lnZ}), the second correction to the density profile stems from $(\langle S_z^2 \rangle_0-\langle S_z \rangle_0^2)/2$ with $S_z$ represented by its first term, and not from $\langle S_z \rangle_0$ with $S_z$ represented by its second term. 
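The numerical value of the prefactor in (\ref{eq:prof_theta}) follows directly from $C_3$; a minimal check (again with the truncated value $C_3\simeq 11.034$), together with the VSC estimate $\theta=\Xi$ for comparison:

```python
import math

C3 = 11.034                                # hexagonal lattice sum, Eq. (eq.C3)
pref = 8 * math.pi**1.5 / (3**0.75 * C3)   # WSC prefactor in Eq. (eq:prof_theta)
print(pref)                                # 1.771...

for Xi in (10, 100, 1000):
    # VSC estimate theta = Xi vs WSC estimate theta = 1.771... * sqrt(Xi)
    print(Xi, Xi, pref * math.sqrt(Xi))
```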
Let us analyze in detail the average \begin{eqnarray} \langle S_z^2 \rangle_0 & \sim & \left( \frac{\alpha^3}{4\sqrt{\Xi}} \right)^2 \sum_{(j\ne k)} \frac{1}{(R_{jk}/a)^3} \sum_{(m\ne n)} \frac{1}{(R_{mn}/a)^3} \nonumber \\ & & \times \frac{\int_{\Lambda} \prod_{i=1}^N \left[ d{\bf r}_i w({\bf r}_i) e^{-\widetilde{z}_i} \right] (\widetilde{z}_j-\widetilde{z}_k)^2 (\widetilde{z}_m-\widetilde{z}_n)^2}{ \int_{\Lambda} \prod_{i=1}^N \left[ d{\bf r}_i w({\bf r}_i) e^{-\widetilde{z}_i} \right]} . \nonumber \\ & & \end{eqnarray} For a fixed pair of site indices $(j\ne k)$, there exist seven topologically different possibilities for the pair $(m\ne n)$: $$ \left. \begin{array}{cc} m=j, & n=k; \cr n=j, & m=k; \end{array} \right\} \qquad \mbox{factor $2$} $$ $$ \left. \begin{array}{cc} m=j, & n\ne j,k; \cr n=j, & m\ne j,k; \cr m=k, & n\ne j,k; \cr n=k, & m\ne j,k; \cr \end{array} \right\} \qquad \mbox{factor $4$} $$ $$ m\ne j,k, \quad n\ne j,k,m. \} \qquad \mbox{factor $1$} $$ Here, respecting the properties of the averaging $\langle\cdots\rangle_0$, those possibilities which lead to the same result are grouped together. After simple algebra, we find that \begin{eqnarray} \langle S_z^2 \rangle_0 & \sim & \frac{\alpha^6}{4 \Xi} \Big\{ N C_3^2 \left( [\widetilde{z}^4]_0 - 4 [\widetilde{z}^3]_0 [\widetilde{z}]_0 + 3 [\widetilde{z}^2]_0^2 \right) \nonumber \\ & & + [(NC_3)^2-4NC_3^2+2NC_6] \left( [\widetilde{z}^2]_0 - [\widetilde{z}]_0^2 \right)^2 \Big\}. \nonumber \\ & & \end{eqnarray} The ``undesirable'' disconnected term of order $N^2$ is cancelled by the subtraction of $\langle S_z \rangle_0^2$. 
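The above bookkeeping can be confirmed by brute-force enumeration: the three groups of possibilities contain $2$, $4(N-2)$ and $(N-2)(N-3)$ ordered pairs, respectively, which together exhaust all $N(N-1)$ pairs $(m\ne n)$. An illustrative Python check for small $N$:

```python
from itertools import permutations

def class_counts(N, j=0, k=1):
    # classify ordered pairs (m, n), m != n, relative to a fixed pair (j, k)
    counts = {"same pair": 0, "one common index": 0, "disjoint": 0}
    for m, n in permutations(range(N), 2):
        common = len({m, n} & {j, k})
        if common == 2:
            counts["same pair"] += 1          # {m,n} = {j,k}: 2 orderings
        elif common == 1:
            counts["one common index"] += 1   # 4*(N-2) possibilities
        else:
            counts["disjoint"] += 1           # (N-2)*(N-3) possibilities
    return counts

c = class_counts(7)
print(c)
```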
After performing the functional derivatives with respect to $w({\bf r})$, taken at $w({\bf r})=1$, we end up with the next correction to the profile (\ref{eq:leadingpluscorrection}) of the form \begin{equation} \frac{3^{3/2}}{64 \pi^3} \frac{1}{\Xi} e^{-\widetilde{z}} \left[ C_3^2 \left( \frac{\widetilde{z}^4}{8} - \frac{\widetilde{z}^3}{2} + \frac{\widetilde{z}^2}{2} - \widetilde{z} \right) + C_6 \left( \frac{\widetilde{z}^2}{2} - \widetilde{z} \right) \right] . \end{equation} Note that this correction does not break the electroneutrality condition (\ref{eq:rhoelectroneutrality}) nor the contact theorem (\ref{eq:rhocontacttheorem}). \subsection{Contribution of longitudinal and mixed particle shifts} Now we consider in (\ref{dE}) also the term $S_W$ with purely longitudinal particle shifts in the Wigner plane and the term $S_{z,W}$ with mixed transversal and longitudinal shifts. Denoting particle shifts in the infinite Wigner plane as ${\bf u}_j=(x_j,y_j)$, these terms possess the important translational symmetry: \begin{eqnarray} S_W(\{ {\bf u}_j \}) & = & S_W(\{ {\bf u}_j+{\bf u} \}) , \nonumber \\ S_{z,W}(\{ {\bf u}_j,z_j \}) & = & S_{z,W}(\{ {\bf u}_j+{\bf u},z_j \}) , \label{eq:transl} \end{eqnarray} where ${\bf u}$ is any 2D vector. We first investigate the scaling properties of $S_W$ and $S_{z,W}$. Let us expand $S_W$ up to quadratic $x,y$-deviations: \begin{eqnarray} \label{Sw2} S_W & = & \frac{q^2\ell_{\rm B}}{4a} \sum_{j,k=1\atop (j\ne k)}^N \frac{(R_{jk}^y/a)^2-2(R_{jk}^x/a)^2}{(R_{jk}/a)^5} \left( \frac{x_j-x_k}{a} \right)^2 \nonumber \\ & & + \frac{q^2\ell_{\rm B}}{4a} \sum_{j,k=1\atop (j\ne k)}^N \frac{(R_{jk}^x/a)^2-2(R_{jk}^y/a)^2}{(R_{jk}/a)^5} \left( \frac{y_j-y_k}{a} \right)^2 \nonumber \\ & & - \frac{3q^2\ell_{\rm B}}{2a} \sum_{j,k=1\atop (j\ne k)}^N \frac{(R_{jk}^x R_{jk}^y)/a^2}{(R_{jk}/a)^5} \frac{(x_j-x_k)(y_j-y_k)}{a^2} \nonumber \\ & & + \cdots . 
\end{eqnarray} Terms linear in $(x_j-x_k)/a$ and $(y_j-y_k)/a$ vanish because every point of the hexagonal Wigner crystal is a center of inversion. We saw that in the $z$ direction the relevant length scale is determined by the Gouy-Chapman length $\mu$: Rescaling the $z$ coordinates by $\mu$, the (leading) linear potential term $\widetilde{z}$ is independent of the coupling constant $\Xi$ while the next terms are proportional to inverse powers of $\sqrt{\Xi}$ and therefore vanish in the SC limit. The natural length scale in the Wigner $(x,y)$ plane is the lattice spacing $a$, but this is not the relevant scale in statistical averages. The relevant length $\lambda$ is determined by the requirement that the rescaling of coordinates $x_j=\lambda X_j$ and $y_j=\lambda Y_j$ in (\ref{Sw2}) leads to a dimensionless and $\Xi$-independent (leading) quadratic term. Since $q^2\ell_{\rm B}/a \propto \sqrt{\Xi}$, we have \begin{equation} \frac{\lambda}{a} \propto \frac{1}{\Xi^{1/4}} , \qquad \frac{\lambda}{\mu} \propto \Xi^{1/4} \end{equation} (the numerical prefactors are unimportant), i.e. the relevant scale is ``in between'' $\mu$ and $a$. The higher-order terms in $S_W$, which contain the deviations $(x_j-x_k)$ and $(y_j-y_k)$ in powers $p=3,4,\ldots$, scale like $1/\Xi^{(p-2)/4}$ and therefore vanish in the limit $\Xi\to\infty$. 
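The power counting above can be checked symbolically, using only the relations quoted in the text, $q^2\ell_{\rm B}/a\propto\sqrt{\Xi}$ and $a/\mu\propto\sqrt{\Xi}$ (the latter follows from $q^2\ell_{\rm B}/\mu=\Xi$); a minimal sketch:

```python
import sympy as sp

Xi = sp.symbols('Xi', positive=True)

# S_W prefactor: q^2 l_B / a ~ sqrt(Xi); an order-p term in the lateral
# deviations carries (lambda/a)^p. Requiring the quadratic (p = 2) term
# to be Xi-independent fixes lambda/a = Xi^{-1/4}.
lam_over_a = Xi**sp.Rational(-1, 4)
assert sp.simplify(sp.sqrt(Xi) * lam_over_a**2) == 1

# With a/mu ~ sqrt(Xi), lambda/mu = (lambda/a)(a/mu) ~ Xi^{1/4}
lam_over_mu = lam_over_a * sp.sqrt(Xi)
assert lam_over_mu == Xi**sp.Rational(1, 4)

# Higher-order terms p = 3, 4, ... scale as 1/Xi^{(p-2)/4}, vanishing as Xi -> oo
for p in range(3, 7):
    term = sp.sqrt(Xi) * lam_over_a**p
    assert term == Xi**(-sp.Rational(p - 2, 4))
    assert sp.limit(term, Xi, sp.oo) == 0
```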
Let us now consider the leading expansion terms of the mixed quantity $S_{z,W}$: \begin{eqnarray} S_{z,W} & = & - \frac{3 q^2\ell_{\rm B}}{4a} \sum_{j,k=1\atop (j\ne k)}^N \frac{[(z_j-z_k)/a]^2}{(R_{jk}/a)^5} \nonumber \\ & & \times \left[ \frac{R_{jk}^x}{a} \left( \frac{x_j-x_k}{a} \right) + \frac{R_{jk}^y}{a} \left( \frac{y_j-y_k}{a} \right) \right] \nonumber \\ & & + \frac{3 q^2\ell_{\rm B}}{8a} \sum_{j,k=1\atop (j\ne k)}^N \frac{[(z_j-z_k)/a]^2}{(R_{jk}/a)^7} \nonumber \\ & & \times \Bigg\{ \left[ 4 \left( \frac{R_{jk}^x}{a} \right)^2 - \left( \frac{R_{jk}^y}{a} \right)^2 \right] \left( \frac{x_j-x_k}{a} \right)^2 \nonumber \\ & & + \left[ 4 \left( \frac{R_{jk}^y}{a} \right)^2 - \left( \frac{R_{jk}^x}{a} \right)^2 \right] \left( \frac{y_j-y_k}{a} \right)^2 \nonumber \\ & & + 10 \frac{R_{jk}^x}{a} \frac{R_{jk}^y}{a} \left( \frac{x_j-x_k}{a} \right) \left( \frac{y_j-y_k}{a} \right) \Bigg\} \nonumber \\ & & + \cdots . \end{eqnarray} Rescaling the particle coordinates as $z_j=\mu \widetilde{z}_j$, $x_j=\lambda X_j$, $y_j=\lambda Y_j$, the first term is of order $1/\Xi^{3/4}$ and the second one is of order $1/\Xi$. To obtain the density profile, one proceeds in analogy with the previous case of transversal vibrations. We introduce the partition function of our $N$-particle system \begin{equation} \label{eq:partition2} Z_N[w] = \frac{1}{N!} \int_{\Lambda} \prod_{i=1}^N \left[ d{\bf r}_i w({\bf r}_i) e^{-\widetilde{z}_i} \right] e^{S_W} e^{S_z+S_{z,W}} \end{equation} with the generating Boltzmann weight $w({\bf r})$. We take as the unperturbed system the one with one-body potentials $-\widetilde{z}_i$ in the $z$ direction and $S_W$ in the $(x,y)$ plane, and treat $S_z+S_{z,W}$ as the perturbation.
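The quoted orders $1/\Xi^{3/4}$ and $1/\Xi$ follow from the same power counting as for $S_W$; a minimal symbolic sketch, assuming (as above) $\mu/a\propto\Xi^{-1/2}$ and $\lambda/a\propto\Xi^{-1/4}$:

```python
import sympy as sp

Xi = sp.symbols('Xi', positive=True)

# Power counting for S_{z,W} under z = mu ztil, (x, y) = lambda (X, Y):
# prefactor q^2 l_B / a ~ Xi^{1/2}, (mu/a)^2 ~ 1/Xi, lambda/a ~ Xi^{-1/4}
prefactor = sp.sqrt(Xi)
mu_over_a_sq = 1 / Xi
lam_over_a = Xi**sp.Rational(-1, 4)

first_term = prefactor * mu_over_a_sq * lam_over_a      # one lateral deviation
second_term = prefactor * mu_over_a_sq * lam_over_a**2  # two lateral deviations

assert first_term == Xi**sp.Rational(-3, 4)   # ~ 1/Xi^{3/4}
assert second_term == 1 / Xi                  # ~ 1/Xi
```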
Using the cumulant method, we obtain \begin{equation} \ln Z_N[w] = \ln Z_N^{(0)}[w] + \langle S_z\rangle_0 + \langle S_{z,W}\rangle_0 + \cdots , \end{equation} where $\langle \cdots \rangle_0$ denotes the averaging over the unperturbed system with the partition function \begin{equation} Z_N^{(0)}[w] = \frac{1}{N!} \int_{\Lambda} \prod_{i=1}^N \left[ d{\bf r}_i w({\bf r}_i) e^{-\widetilde{z}_i} \right] \exp(S_W) . \end{equation} The particle density is given by Eq. (\ref{eq:density}). The additional appearance of $\exp(S_W)$ in the averaging over the unperturbed system is a complication which can be sometimes removed trivially by using the translational invariance of $S_W$ (\ref{eq:transl}). We shall document this fact on the leading SC behavior of the particle density at point ${\bf r}=({\bf u},z)$ which stems from $\ln Z_N^{(0)}[w]$: \begin{eqnarray} \frac{\delta}{\delta w({\bf r})} \ln Z_N^{(0)}[w] \Big\vert_{w({\bf r})=1} \phantom{aaaaaaaaaaaaaaaaaa} \nonumber \\ = \frac{N e^{-\widetilde{z}}}{\mu} \frac{\int_{\Sigma}\prod_{i=2}^N d^2 u_i e^{S_W({\bf u}_1={\bf u})}}{ \int_{\Sigma}\prod_{i=1}^N d^2 u_i e^{S_W}} . \label{eq:Z0} \end{eqnarray} Since the surface of the plate $\Sigma$ is infinite, we shift in the denominator the integral variables $i\ne 1$ as follows ${\bf u}_i\to {\bf u}_i+{\bf u}_1-{\bf u}$ which transforms $S_W\to S_W({\bf u}_1={\bf u})$. Integrating over ${\bf u}_1$, the ratio of integrals in (\ref{eq:Z0}) is ${\bf u}$-independent, and reads $1/\vert\Sigma\vert$. By this simple technique, it can be shown that the contribution to the density profile coming from the functional derivative of $\langle S_z\rangle_0$ is not affected by $S_W$, which decouples from the $z$-variables. We remember from the previous part about transversal deviations that $\langle S_z \rangle_0$ is of order $1/\sqrt{\Xi}$. 
The description is a bit more complicated in the case of \begin{equation} \langle S_{z,W} \rangle_0 = \frac{\int_{\Lambda} \prod_{i=1}^N \left[ d{\bf r}_i w({\bf r}_i) e^{-\widetilde{z}_i} \right] \exp(S_W) S_{z,W}}{ \int_{\Lambda} \prod_{i=1}^N \left[ d{\bf r}_i w({\bf r}_i) e^{-\widetilde{z}_i} \right] \exp(S_W)} . \end{equation} In the corresponding contribution to the density profile, obtained as the functional derivative with respect to $w({\bf r})$ at $w({\bf r})=1$, the $z$ and $(x,y)$ subspaces decouple from one another. The $z$ variables are considered in the rescaled form $\widetilde{z}=z/\mu$. To perform the integration over the Wigner plane, we rescale the $(x,y)$ variables to the ones $\lambda(X,Y)$; this ensures that the quadratic part of $S_W$ is $\Xi$-independent and all higher-order terms $p=3,4,\ldots$, proportional to $1/\Xi^{(p-2)/4}$, vanish in the SC limit $\Xi\to\infty$. Thus the leading dependence on $\Xi$ is given by the scaling factor of $S_{z,W}$ under the coordinate transformations $z=\mu\widetilde{z}$ and $(x,y)=\lambda(X,Y)$, which was found to be of order $1/\Xi^{3/4}$. This contribution does not alter the first correction $\propto 1/\sqrt{\Xi}$. To calculate explicitly the second correction is a complicated task, because the quadratic part of $S_W$ in the exponential $\exp(S_W)$ involves all interactions of particles on the Wigner crystal. The explicit diagonalization of $S_W$ can be done e.g. in the small wave vector limit \cite{Bonsall77}. The fact that the longitudinal vibrations in the plane of the Wigner crystal have no effect on the leading term and the first correction of the particle density profile is a general feature of the WSC theory. In what follows, we shall ignore these degrees of freedom, restricting ourselves to the leading term and the first correction, proportional to $1/\sqrt{\Xi}$. 
\section{Parallel plates at small separation} \label{sec:twoplates} Next we study the geometry of two parallel plates $\Sigma_1\equiv 1$ and $\Sigma_2\equiv 2$ of the same (infinite) surface $\vert \Sigma_1\vert = \vert \Sigma_2\vert = \vert\Sigma\vert$, separated by a distance $d$, see Fig. \ref{fig:geometry}b. The $z=0$ plate 1 carries the constant surface charge density $\sigma_1 e$, while the other plate 2 at $z=d$ is charged by $\sigma_2 e$. The electric potential between the plates is, up to an irrelevant constant, given by \begin{equation} \label{eq:one-body} \phi(z) = -\frac{2\pi (\sigma_1-\sigma_2)e}{\varepsilon} z. \end{equation} $N$ mobile counter-ions of charge $- q e$ (the valency $q>0$), which are in the region between the walls $\Lambda = \left\{ {\bf r},0\le z\le d \right\}$, compensate exactly the fixed charge on the plates: \begin{equation} \label{eq:electro1} q N = (\sigma_1+\sigma_2) \vert\Sigma\vert . \end{equation} Without any loss of generality we can assume $\sigma_1>0$, so that the asymmetry parameter \begin{equation} \zeta = \frac{\sigma_2}{\sigma_1} \ge -1 . \label{eq:defzeta} \end{equation} This parameter should not be confused with the Riemann function introduced in Eq. (\ref{eq.C3}). By rescaling appropriately model's parameters, it is sufficient to consider the interval $-1\le \zeta \le 1$. The limiting value $\zeta=-1$ corresponds to the trivial case $\sigma_2=-\sigma_1$ with no counter-ions between the plates. The symmetric case $\zeta=1$ corresponds to equivalently charged plates $\sigma_2=\sigma_1$. Note that in all cases considered, there is only one type of mobile ion in the interstitial space $0\leq z \leq d$. Because of the asymmetry between the surface charges, there exist two Gouy-Chapman lengths \begin{equation} \mu_1 = \frac{1}{2\pi\ell_{\rm B}q \sigma_1} \equiv \mu , \quad \mu_2 = \frac{1}{2\pi\ell_{\rm B}q \vert\sigma_2\vert} = \frac{\mu}{\vert\zeta\vert}. 
\end{equation} Similarly, we can define two different coupling parameters \begin{equation} \Xi_1 = \frac{q^2 \ell_{\rm B}}{\mu_1} \equiv \Xi , \quad \Xi_2 = \frac{q^2 \ell_{\rm B}}{\mu_2} = \vert\zeta\vert \Xi . \end{equation} Here, for the ease of comparison, we follow the convention of Ref. \cite{Kanduc08}: all quantities will be rescaled by their plate 1 counterparts, i.e. $\widetilde{z}=z/\mu_1$, and \begin{equation} \widetilde{\rho}(\widetilde{z}) = \frac{\rho(\mu\widetilde{z})}{2\pi\ell_{\rm B}\sigma_1^2} , \quad \widetilde{P} = \frac{\beta P}{2\pi\ell_{\rm B}\sigma_1^2} . \end{equation} The reduced density is a function of three arguments: $\widetilde z$, $\widetilde d$ and $\Xi$ while the reduced pressure depends on two: $\widetilde d$ and $\Xi$. For notational simplicity, the dependence on $\widetilde d$ and $\Xi$ will often be implicit in what follows. Note also that $\widetilde P = \epsilon P / (2 \pi e^2 \sigma_1^2)$, so that the rescaling factor required to defined the dimensionless pressure is temperature independent. This is not the case of the rescaling factor applied to distances, since the Gouy-Chapman lengths scale as $T$. The electroneutrality condition (\ref{eq:electro1}) can be written in two equivalent ways \begin{equation} \label{eq:electro2} \int_0^d dz \rho(z) = \frac{\sigma_1+\sigma_2}{q} , \quad \int_0^{\widetilde{d}} d\widetilde{z} \, \widetilde{\rho}(\widetilde{z}) \, =\, 1+\zeta . \end{equation} The contact-value theorem (\ref{eq:contacttheorem1}), considered at $z=0$ and $z=d$ boundaries, takes two equivalent forms \begin{equation} \label{eq:contacttheorem2} \widetilde{P} = \widetilde{\rho}(0) - 1 = \widetilde{\rho}(\widetilde{d}) - \zeta^2 , \end{equation} which provides a strong $d$ and $\Xi$ independent constraint for $\widetilde{\rho}(0) -\widetilde{\rho}(d)$. In the case of oppositely charged surfaces $-1<\zeta\le 0$, the ground state of the counter-ion system is the same as for the isolated plate 1, i.e. 
all $N$ counter-ions collapse on the surface, and create the hexagonal Wigner crystal. For this region of $\zeta$ values, one can easily adapt the WSC technique from the one-plate geometry for {\it a priori} any distance $d$ between the plates. The case of like-charged plates $0<\zeta\le 1$ is more subtle. The ground state of the counter-ion system corresponds to a bilayer Wigner crystal, as a consequence of Earnshaw theorem \cite{Earnshaw}. The lattice spacings of each layer are denoted $b_1$ and $b_2$; they are the direct counterpart of the length scale $a$ introduced in section \ref{sec:oneplate}. The bilayer structure is, in general, complicated and depends on the distance $d$ \cite{Goldoni96,Messina03,Lobaskin07}. For this region of $\zeta$ values, the WSC technique cannot be adapted directly from the one-plate geometry, except for small distances between the plates such that $d\ll b$, where $b=\min\{b_1,b_2\}$. The point is that each particle experiences, besides the direct linear one-body potential (\ref{eq:one-body}) induced by homogeneously charged plates, an additional perturbation due to the repulsive interactions with other $q$-valent ions. This additional potential is, for $d\ll b$, small compared to (\ref{eq:one-body}). This opens the way to a perturbative treatment along similar lines as in section \ref{sec:oneplate}, in which the leading one-body description is then fully equivalent to the one derived within the VSC method. First we shall address the symmetric $\zeta=1$ case which ground state was studied extensively in the past. The symmetric configuration is of special importance in the VSC method: Although the leading SC result for the density profile and the pressure was derived for all values of the asymmetry parameter $-1\le \zeta \le 1$ \cite{Kanduc08}, the first SC correction (inconsistent with MC simulations) is available up to now only for $\zeta=1$ \cite{Moreira00,Netz01}. 
After solving the SC limit for the symmetric case, we shall pass to asymmetric, oppositely and likely charged, surfaces and solve the problem in the leading SC order plus the first correction. \subsection{Equivalently charged plates} For $\sigma_1=\sigma_2=\sigma$, the electric field between the walls vanishes. At $T=0$, the classical system is defined furthermore by the dimensionless separation \begin{equation} \label{eq:eta} \eta = d \sqrt{\frac{\sigma}{q}} = \frac{1}{\sqrt{2\pi}} \frac{\widetilde{d}}{\sqrt{\Xi}} . \end{equation} A complication comes from the fact that counter-ions form, on the opposite surfaces, a bilayer Wigner crystal, the structure of which depends on $\eta$ \cite{Goldoni96,Messina03,Lobaskin07}. Two limiting cases are clear. At the smallest separation $\eta=0$, a single hexagonal Wigner crystal is formed. Due to global neutrality, its lattice spacing $b$ is given by \begin{equation} \label{eq:spacing} \frac{q}{2\sigma} = \frac{\sqrt{3}}{2} b^2 . \end{equation} The lattice spacing is simply related to that of the one plate problem by $b=a/\sqrt{2}$. At large separations $\eta\to \infty$, each of the plates has its own Wigner hexagonal structure and these structures are shifted with respect to one another. The transition between these limiting phases corresponds to the following sequence of structures (in the order of increasing $\eta$ \cite{Goldoni96}): a mono-layer hexagonal lattice (I, $0\le\eta\le \eta_0$), a staggered rectangular lattice (II, $\eta_0<\eta\le 0.26$), a staggered square lattice (III, $0.26<\eta\le 0.62$), a staggered rhombic lattice (IV, $0.62<\eta\le 0.73$) and a staggered hexagonal lattice (V, $0.73<\eta$). The three ``rigid'' structures I, III and V, which do not change within their stability regions, are shown in Fig. \ref{fig:Structures}. The primary cells of intermediate ``soft'' II and IV lattices are changing with $\eta$ within their stability regions. 
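The identity (\ref{eq:eta}) and the relation $b=a/\sqrt{2}$ can be verified symbolically from the definitions $\mu=1/(2\pi\ell_{\rm B}q\sigma)$ and $\Xi=q^2\ell_{\rm B}/\mu$, assuming the one-plate neutrality condition $q/\sigma=(\sqrt{3}/2)a^2$ (one ion per hexagonal cell); the sketch compares squared quantities to avoid square-root ambiguities:

```python
import sympy as sp

d, q, sigma, lB = sp.symbols('d q sigma ell_B', positive=True)

mu = 1 / (2*sp.pi*lB*q*sigma)   # Gouy-Chapman length
Xi = q**2 * lB / mu             # coupling parameter
dtil = d / mu                   # rescaled separation

# Eq. (eq:eta): eta = d sqrt(sigma/q) = dtil / (sqrt(2 pi) sqrt(Xi))
eta = d * sp.sqrt(sigma / q)
assert sp.simplify(eta**2 - dtil**2 / (2*sp.pi*Xi)) == 0

# Eq. (eq:spacing) versus the assumed one-plate condition q/sigma = (sqrt(3)/2) a^2
a_sq = 2*q / (sp.sqrt(3)*sigma)   # a^2
b_sq = q / (sp.sqrt(3)*sigma)     # b^2 from q/(2 sigma) = (sqrt(3)/2) b^2
assert sp.simplify(a_sq / b_sq - 2) == 0   # hence b = a/sqrt(2)
```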
The existence of phase I in a small, but finite, interval of $\eta$ is a controversial issue \cite{Goldoni96,Messina03,Lobaskin07}, and so, therefore, is the precise value of the threshold $\eta_0$. Whether $\eta_0$ vanishes or is merely a very small number remains an open problem. Here, we perform expansions of thermodynamic quantities in powers of $d/b\ll 1$ (or, equivalently, $\eta \propto\widetilde{d}/\sqrt{\Xi}\ll 1$, since the scale $\widetilde{d}$ is fixed while $\Xi$ becomes large). We therefore need to know the ground state structure for $d/b\propto \eta=0$, which is clearly structure I, irrespective of the ``$\eta_0$ controversy'', with a lattice spacing given by (\ref{eq:spacing}). We shall thus develop our WSC expansion around structure I. \begin{figure}[htb] \begin{center} \includegraphics[width=0.35\textwidth]{Fig5.eps} \caption{Rigid ground-state structures I, III and V of counter-ions on two parallel charged plates; open and filled symbols correspond to particle positions on the opposite surfaces.} \label{fig:Structures} \end{center} \end{figure} Let ${\bf R}_j=(R_j^x,R_j^y)$ be the position vector of the particle localized on the shared hexagonal Wigner lattice of type I; $Z_j=0$ if the particle $j=1,\ldots,N/2$ belongs to the plate $\Sigma_1$ (say filled symbols of Structure I in Fig. \ref{fig:Structures}) and $Z_j=d$ if the particle $j=N/2+1,\ldots,N$ belongs to the plate $\Sigma_2$ (open symbols of Structure I in Fig. \ref{fig:Structures}). Let us shift all particles from their lattice positions $\{ {\bf R}_j,Z_j=0\vee d \}$ to $\{ (x_j,y_j,z_j) \}$ and look for the corresponding energy change $\delta E$ from the ground state. Since the potential induced by the surface charge on the walls is constant between the walls and the contribution of the Wigner crystals linear in $z$ is negligible if $d/b\ll 1$, the corresponding $\delta E^{(1)}=0$.
The $z$-coordinates of particles, constrained by the distance $d$ between the plates, are much smaller than the Wigner lattice spacing $b$, i.e. both $d^2$ and $(z_j-z_k)^2$ are much smaller than $\vert {\bf R}_j-{\bf R}_k\vert^2$ for $j\ne k$. The part of the energy change harmonic in $z$ thus reads \begin{eqnarray} \label{eq:deltainteraction} \delta E^{(2)}_z & = & - \frac{(qe)^2}{4\varepsilon} \sum_{j,k=1\atop (j\ne k)}^N \frac{(z_j- z_k)^2}{ \vert {\bf R}_j-{\bf R}_k \vert^3} \nonumber \\ & & + \frac{(qe)^2}{2\varepsilon} \sum_{j\in\Sigma_1} \sum_{k\in\Sigma_2} \frac{d^2}{ \vert {\bf R}_j-{\bf R}_k\vert^3} . \end{eqnarray} Note that the first (quadratic in $z$) term carries only the information about the single Wigner crystal of lattice spacing $b$. The information on how the lattice sites are distributed between the two plates within structure I is contained in the second, constant term (irrelevant from the point of view of thermal averages), which compensates the first one if the counter-ions are in their ground-state configuration. The harmonic terms in the $(x,y)$ plane prove immaterial for our purposes. The total energy change is given, as far as the $z$-dependent contribution is concerned, by $-\beta\delta E = S_z$ with \begin{eqnarray} S_z \sim \frac{\left( \sqrt{2}\alpha \right)^3}{4\sqrt{\Xi}} \sum_{j,k=1\atop (j\ne k)}^N \frac{(\widetilde{z}_j-\widetilde{z}_k)^2}{(R_{jk}/b)^3} . \end{eqnarray} The only difference between this two-plate $S_z$ and the one-plate $S_z$ (\ref{eq:Sz}) consists in the factor $2^{3/2}$ due to the different lattice spacing of the corresponding Wigner crystals, $b=a/\sqrt{2}$. To derive the density profile, we use the cumulant technique with the one-body Boltzmann factor equal to 1 (no external potential). The leading SC behavior stems from $Z_N^{(0)}[w] = \left[ \int_{\Lambda}d{\bf r}w({\bf r})\right]^N/N!$.
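The factor $2^{3/2}$ can be traced explicitly. The sketch below assumes the one-plate convention $a/\mu=\sqrt{\Xi}/\alpha$, which we infer from the quoted prefactor $\alpha^3/(4\sqrt{\Xi})$ of the one-plate $S_z$; under this assumption the two-plate prefactor $(\sqrt{2}\alpha)^3/(4\sqrt{\Xi})$ follows from $b=a/\sqrt{2}$ alone:

```python
import sympy as sp

Xi, alpha, mu = sp.symbols('Xi alpha mu', positive=True)

# Assumed one-plate convention: a/mu = sqrt(Xi)/alpha
a = mu * sp.sqrt(Xi) / alpha
b = a / sp.sqrt(2)              # two-plate spacing at zero separation

# Coefficient of (ztil_j - ztil_k)^2 / (R_jk/spacing)^3,
# obtained from (q^2 l_B / 4) mu^2 / spacing^3 with q^2 l_B = Xi mu
one_plate = Xi * mu**3 / (4 * a**3)
two_plate = Xi * mu**3 / (4 * b**3)

assert sp.simplify(one_plate - alpha**3 / (4*sp.sqrt(Xi))) == 0
assert sp.simplify(two_plate - (sp.sqrt(2)*alpha)**3 / (4*sp.sqrt(Xi))) == 0
assert sp.simplify(two_plate / one_plate) == 2**sp.Rational(3, 2)
```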
Since \begin{equation} \frac{\delta}{\delta w({\bf r})} \ln Z_N^{(0)}[w] \Big\vert_{w({\bf r})=1} = \frac{N}{\vert\Sigma\vert d} = (2\pi\ell_{\rm B}\sigma^2) \frac{2}{\widetilde{d}} \end{equation} we have in the leading SC order the constant density $\widetilde{\rho}_0(\widetilde{z})\sim 2/\widetilde{d}$. This is the one-particle result in zero potential, respecting the electroneutrality condition (\ref{eq:electro2}) with $\zeta=1$. The same leading form was obtained by the VSC method \cite{Moreira00,Netz01}. The physical meaning is simple: due to their strong mutual repulsion, the counter-ions form a strongly modulated structure along the plate and consequently decouple in the transverse direction, where they only experience the electric field due to the two plates. In the symmetric case $\zeta=1$, this field vanishes and the resulting ionic density is uniform along $z$: from electroneutrality, it reads $\widetilde \rho_0 = 2/\widetilde d$. The situation changes in the asymmetric case, where one can anticipate $\widetilde \rho_0$, again driven by the nonvanishing but uniform bare-plate field, to be exponential in $z$. The first correction to the density profile stems from \begin{equation} \langle S_z \rangle_0 \sim \frac{\sqrt{2}\alpha^3}{\sqrt{\Xi}} N C_3 \left( [\widetilde{z}^2]_0 - [\widetilde{z}]_0^2 \right) , \end{equation} where \begin{equation} [\widetilde{z}^p]_0 \equiv \frac{\int_{\Lambda}d{\bf r} w({\bf r}) \widetilde{z}^p}{ \int_{\Lambda}d{\bf r} w({\bf r})} , \quad p=1,2,\ldots. \end{equation} Simple algebra yields \begin{equation} \frac{\delta}{\delta w({\bf r})} [\widetilde{z}^p]_0 \Big\vert_{w({\bf r})=1} = \frac{1}{\vert\Sigma\vert d} \left( \widetilde{z}^p - \frac{\widetilde{d}^p}{p+1} \right) , \end{equation} where we used that $[\widetilde{z}^p]_0\vert_{w({\bf r})=1} = \widetilde{d}^p/(p+1)$.
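The flat-weight moments used here, and the fact that the parabolic correction they generate in the resulting density profile integrates to zero across the slab (so that electroneutrality is preserved), can be checked directly:

```python
import sympy as sp

z, dt = sp.symbols('z d', positive=True)

# Flat-weight moments between the plates: [ztil^p]_0 = dtil^p / (p+1)
for p in range(1, 5):
    avg = sp.integrate(z**p, (z, 0, dt)) / dt
    assert sp.simplify(avg - dt**p / (p + 1)) == 0

# The parabolic correction (ztil - dtil/2)^2 - dtil^2/12 integrates to zero
# over [0, dtil], so the first correction respects electroneutrality
corr = (z - dt/2)**2 - dt**2/12
assert sp.simplify(sp.integrate(corr, (z, 0, dt))) == 0
```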
The density profile $\widetilde{\rho}(\widetilde{z})$ is thus obtained in the form \begin{equation} \label{eq:profile} \widetilde{\rho}(\widetilde{z}) = \frac{2}{\widetilde{d}} + \frac{1}{\theta} \frac{2}{\widetilde{d}} \left[ \left( \widetilde{z}-\frac{\widetilde{d}}{2} \right)^2 - \frac{\widetilde{d}^2}{12} \right] + {\cal O}\left( \frac{1}{\Xi} \right) , \end{equation} where \begin{equation} \label{eq:theta2} \theta(\zeta=1) = \frac{(4\pi)^{3/2}}{3^{3/4}} \frac{1}{C_3} \frac{1}{\sqrt{2}} \sqrt{\Xi} = 1.252\ldots \sqrt{\Xi} . \end{equation} This density profile respects the electroneutrality condition (\ref{eq:electro2}) with $\zeta=1$. The functional form of (\ref{eq:profile}) coincides with that of Moreira and Netz \cite{Moreira00,Netz01}. For (not yet asymptotic) $\Xi=100$, the previous VSC result $\theta=\Xi$ is far away from the MC estimate $\theta\simeq 11.2$ \cite{Moreira00}, while our formula (\ref{eq:theta2}) gives a reasonable value $\theta\simeq 12.5$. In the evaluation of the $\theta$ factor in Eq. (\ref{eq:theta2}), we use the exact result (\ref{eq.C3}) for the lattice sum $C_3$ of the mono-layer hexagonal structure I, which was the starting point of our expansion. It is instructive to compare (\ref{eq:theta2}) with the corresponding $\theta$ factors calculated for the structures III and V presented in Fig. \ref{fig:Structures}. Using a representation of the lattice sums in terms of quickly convergent integrals over products of Jacobi theta functions, we find that $\theta=1.232\ldots \sqrt{\Xi}$ for the structure III and $\theta=1.143\ldots \sqrt{\Xi}$ for the structure V. These values show only a slight dependence of $\theta$ on the structure of the ground state. \begin{figure}[htb] \begin{center} \includegraphics[width=0.45\textwidth,clip]{Fig6.eps} \caption{Phase diagram following from the WSC equation of state (\ref{eq:eos}), for symmetric like-charged plates ($\zeta=1$). 
The solid curve, which shows the points where $P=0$, divides the $(\Xi,\widetilde{d})$ plane into its attractive $(P<0)$ and repulsive $(P>0)$ parts. The dashed line is the original VSC prediction \cite{Netz01}. The filled squares are the MC data from Ref. \cite{Moreira00} with $\Xi>20$. The filled circle indicates the terminal point of the attraction/repulsion separatrix, obtained within WSC. The question mark is a reminder that the upper branch of the isobaric curve $P=0$ is such that $\widetilde d \propto \sqrt{\Xi}$, whereas our results are meaningful under the proviso that $\widetilde d \ll \sqrt{\Xi}$.} \label{fig:phases} \end{center} \end{figure} Applying the contact-value theorem (\ref{eq:contacttheorem2}) to the density profile (\ref{eq:profile}), the pressure $P$ between the plates is given by \begin{equation} \label{eq:eos} \widetilde{P} = - 1 + \frac{2}{\widetilde{d}} + \frac{\widetilde{d}}{3\theta} + {\cal O}\left(\frac{1}{\Xi}\right) . \end{equation} A similar result was obtained within the approximate approach of Ref. \cite{Hatlo10}, with the underestimated ratio $\theta/\sqrt{\Xi}=3\sqrt{3}/2=0.866\ldots$. Equation (\ref{eq:eos}) provides insight into the like-charge attraction phenomenon. The attractive ($P<0$) and repulsive ($P>0$) regimes are shown in Fig. \ref{fig:phases}. Although our results hold for $\widetilde{d} \ll \sqrt{\Xi}$ and for large $\Xi$, the shape of the phase boundary where $P=0$ (solid curve) shows a striking similarity with its counterpart obtained numerically \cite{Moreira00,Chen06}. For instance, the terminal point of the attraction region, shown by the filled circle in Fig. \ref{fig:phases}, is located at $\widetilde{d}=4$, a value close to that which can be extracted from \cite{Moreira00,Chen06}.
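The location of the terminal point follows from the equation of state (\ref{eq:eos}) alone: imposing $\widetilde{P}=0$ together with $\partial\widetilde{P}/\partial\widetilde{d}=0$ gives $\theta=8/3$ and $\widetilde{d}=4$. A symbolic sketch:

```python
import sympy as sp

dt, theta = sp.symbols('d theta', positive=True)

# WSC equation of state (eq:eos), truncated at the first correction
P = -1 + 2/dt + dt/(3*theta)

# Distance of maximal attraction: dP/dd = 0 gives d = sqrt(6 theta)
dmax = [s for s in sp.solve(sp.diff(P, dt), dt) if s.is_positive][0]
assert sp.simplify(dmax**2 - 6*theta) == 0

# Terminal point of the attraction region: P = 0 and dP/dd = 0 simultaneously
sols = sp.solve([sp.Eq(P, 0), sp.Eq(sp.diff(P, dt), 0)], [dt, theta], dict=True)
assert any(s[dt] == 4 and s[theta] == sp.Rational(8, 3) for s in sols)

# With theta = 1.252 sqrt(Xi) (structure I), theta = 8/3 gives Xi_term ~ 4.53;
# with the VSC choice theta = Xi, Xi_term = 8/3
Xi_term = float((sp.Rational(8, 3) / 1.252)**2)
assert abs(Xi_term - 4.53) < 0.01
```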
However, for $\Xi<20$, our results depart from the MC data, and in particular, WSC underestimates the value of $\Xi$ at the terminal point: we find $\Xi_{term} \simeq 4.53$ (corresponding to a critical value $\theta_{term}=8/3$), whereas the numerical data reported in \cite{Moreira00} yield $\Xi_{term} \simeq 12$. The previous results apply to the VSC approach as well, where the functional form of the equation of state is the same as in WSC. Since we have $\theta=\Xi$ in VSC, we conclude that $\Xi_{term} = 8/3 \simeq 2.66$ within VSC, which is indeed the value that can be seen in Fig. \ref{fig:phases}. Clearly, accounting correctly for the behaviour of the counter-ion mediated pressure for $\Xi\leq 20$ requires going beyond the strong-coupling analysis. In addition, one has to be cautious as far as the location of the upper branch of the attraction/repulsion boundary is concerned: It is such that $\widetilde d/\sqrt{\Xi}$ is of order unity and hence lies at the border of validity of our expansion. \begin{figure}[htb] \begin{center} \includegraphics[width=0.45\textwidth,clip]{Fig7.eps} \caption{The symmetric case $\zeta=1$: The maximum attraction distance $\widetilde{d}_{\text{max}}$ (dashed line) is defined by $\partial\widetilde{P}/\partial\widetilde{d}=0$. The solid curve $\widetilde{d}^*$ is the boundary between attractive and repulsive regimes.} \label{fig:dmax} \end{center} \end{figure} There is another feature of the equation of state under strong coupling that can be captured by our analysis: The distance of maximal attraction, where the pressure is most negative. We predict the maximum attraction, following from $\partial\widetilde{P}/\partial\widetilde{d}=0$, to be reached at $\widetilde{d}_{\text{max}} = \sqrt{6 \theta} \propto \Xi^{1/4}$. Since $\widetilde{d}_{\text{max}}/\sqrt{\Xi} \propto \Xi^{-1/4} \to 0$ in the asymptotic limit $\Xi\to\infty$, we can consider the latter prediction, shown by the dashed line in Fig.
\ref{fig:dmax}, as asymptotically exact. We note that it is fully corroborated by the scaling laws reported in \cite{Chen06}, while VSC yields the scaling behaviour $\widetilde{d}_{\text{max}} \propto \Xi^{1/2}$. \begin{figure}[htb] \begin{center} \includegraphics[width=0.45\textwidth,clip]{Fig8.eps} \caption{The dependence of $\widetilde{P}-2/\widetilde{d}$ on the plate separation $\widetilde{d}$ for three values of the coupling constant $\Xi=100$, $10$ and $0.5$. Here $\zeta=1$ (symmetric case). The plots yielded by the WSC equation of state (\ref{eq:eos}) are represented by dashed lines. Monte Carlo data \cite{Moreira00} are shown with symbols: open circles for $\Xi=100$, filled diamonds for $\Xi=10$ and open diamonds for $\Xi=0.5$. For completeness, the Poisson-Boltzmann prediction is provided (dotted line in the upper part of the graph).} \label{fig:Moreira16} \end{center} \end{figure} We now analyze in more detail the short distance behaviour of the pressure. The difference $\widetilde{P}-2/\widetilde{d}$, which is equal to $-1$ in the leading SC order and is linear in $\widetilde{d}$ as concerns the first correction, is plotted in Fig. \ref{fig:Moreira16} as a function of the (dimensionless) plate separation $\widetilde{d}$. Three values of the coupling constant were considered: $\Xi=100$, $10$ and $0.5$. The plots obtained from the equation of state (\ref{eq:eos}) are shown by dashed lines and the MC data \cite{Moreira00} are represented by symbols. The accuracy of the WSC method is good, surprisingly even for the small values $\Xi=10$ and $0.5$, where the approach is not supposed to hold. As concerns the (leading term plus the first correction) VSC equation of state \cite{Netz01}, corresponding to our Eq. (\ref{eq:eos}) with $\theta=\Xi$, the plots for $\Xi=10$ and $100$ are close to the $\widetilde{d}$ axis, and far from the Monte Carlo data; we consequently do not present them in the figure.
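The numerical coefficient $1.252$ in Eq. (\ref{eq:theta2}) can be reproduced by evaluating the structure-I lattice sum directly. The sketch below assumes $C_3=\sum_{j\ne 0}(a/R_j)^3$ over the hexagonal lattice (as used for the one-plate geometry), truncates the sum at a radius $R_c$ and estimates the tail in the continuum approximation:

```python
import math

# Hexagonal lattice sum C_3 = sum_{j != 0} (a/R_j)^3, with sites
# R = m a_1 + n a_2 and |R|^2 = (m^2 + m n + n^2) a^2. The sum is cut at
# radius r_cut (units of a); the tail is approximated by the continuum
# integral 2 pi rho_2D / r_cut, with rho_2D = 2/sqrt(3) sites per a^2.
def hexagonal_C3(r_cut=200.0):
    M = int(2 * r_cut / math.sqrt(3)) + 2   # |m|, |n| range covering the disk
    total = 0.0
    for m in range(-M, M + 1):
        for n in range(-M, M + 1):
            r2 = m*m + m*n + n*n
            if r2 == 0 or r2 > r_cut*r_cut:
                continue
            total += r2**-1.5
    return total + 2*math.pi*(2/math.sqrt(3))/r_cut

C3 = hexagonal_C3()
assert abs(C3 - 11.034) < 0.01   # hexagonal lattice sum, C_3 ~ 11.034

# Coefficient of sqrt(Xi) in Eq. (eq:theta2): (4 pi)^{3/2}/(3^{3/4} sqrt(2) C_3)
theta_coeff = (4*math.pi)**1.5 / (3**0.75 * math.sqrt(2) * C3)
assert abs(theta_coeff - 1.252) < 0.002
```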
For $\Xi=0.5$, the VSC prediction is in good agreement with the MC simulations \cite{Moreira00}. It is interesting to note that in the distance range $\widetilde d < 2$, the $\Xi=0.5$ data depart from the mean-field (Poisson-Boltzmann) results \cite{Moreira00}, see Fig. \ref{fig:Moreira16}: there, the inter-plate distance becomes comparable to or smaller than $b$, which means that the discrete nature of the particles can no longer be ignored; only at larger distances does the continuum mean-field description hold. For small inter-plate distances, we expect the single-particle picture to take over, no matter how small $\Xi$ is. This explains why $\widetilde{P}-2/\widetilde{d}\to -1$, but there is then no reason that WSC or VSC would provide the relevant $\widetilde d$ correction at small $\Xi$. The fact that WSC and VSC agree with each other here at $\Xi=0.5$ is a hint that such a correspondence with MC is incidental (and indeed, in this range of couplings, $\Xi$ and $\Xi^{1/2}$ are of the same order). It would be interesting to have MC results at very small $\Xi$ values, and to concomitantly develop a theory for the first pressure correction to the leading term $2/\widetilde{d}-1$. \subsection{Asymmetrically charged plates} The sequence of ground states for asymmetric like-charged plates $(0<\zeta\le 1)$ may be even more complex than the one for the symmetric $\zeta=1$ case; depending on the distance $d$, the bilayer Wigner crystal can involve commensurate as well as incommensurate structures of counter-ions. In addition, related work in spherical geometry \cite{Messina09,Messina00} has shown that the ground state in general breaks local neutrality (the two partners acquire an electrical charge, necessarily opposite). The possibility of, in principle, an infinite number of irregular structures might complicate numerical calculations; we are not aware of any work dealing with this subject.
Fortunately, the same simplification as for the equivalently charged plates arises at small separations between the plates $d/b\ll 1$, where the lateral lattice spacing $b$ of the single Wigner crystal is now given by the requirement of the global electroneutrality, as follows: \begin{equation} \label{eq:spacingb} \frac{q}{\sigma_1+\sigma_2} = \frac{\sqrt{3}}{2} b^2 . \end{equation} Since the $z$-coordinates of particles between the plates are much smaller than $b$, we can use the harmonic $z$-expansion of the interaction energy of type (\ref{eq:deltainteraction}), where only the (irrelevant) constant term reflects the formation of some nontrivial bilayer structure. Our task is to derive the particle density profile for the energy change from the ground state of the form \begin{equation} - \beta \delta E = - \kappa \sum_{j=1}^N \widetilde{z}_j + S_z , \end{equation} where $\kappa = 1-\zeta=1-\sigma_2/\sigma_1$ and \begin{eqnarray} S_z & \sim & \frac{q^2 \ell_{\rm B}}{4} \sum_{j,k=1\atop (j\ne k)}^N \frac{(z_j-z_k)^2}{\vert {\bf R}_j-{\bf R}_k\vert^3} \nonumber \\ & = & \frac{\left( \sqrt{1+\zeta}\alpha \right)^3}{4\sqrt{\Xi}} \sum_{j,k=1\atop (j\ne k)}^N \frac{(\widetilde{z}_j-\widetilde{z}_k)^2}{(R_{jk}/b)^3} . \end{eqnarray} We use the cumulant technique with the one-body Boltzmann factor $\exp(-\kappa\widetilde{z})$. 
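The one-body coefficient $\kappa=1-\zeta$ and the spacing ratio implied by (\ref{eq:spacingb}) can be checked from the definitions $\ell_{\rm B}=\beta e^2/\varepsilon$ and $\mu=1/(2\pi\ell_{\rm B}q\sigma_1)$; the sketch assumes, as before, the one-plate condition $q/\sigma_1=(\sqrt{3}/2)a^2$:

```python
import sympy as sp

z, q, lB, sig1, eps, e = sp.symbols('z q ell_B sigma_1 varepsilon e', positive=True)
zeta = sp.symbols('zeta', real=True)

sig2 = zeta * sig1
mu = 1 / (2*sp.pi*lB*q*sig1)   # Gouy-Chapman length of plate 1
beta = lB * eps / e**2         # from ell_B = beta e^2 / varepsilon

# One-body potential (eq:one-body) acting on a counter-ion of charge -q e
phi = -2*sp.pi*(sig1 - sig2)*e*z / eps
beta_E = beta * (-q*e) * phi
assert sp.simplify(beta_E - (1 - zeta)*z/mu) == 0   # i.e. kappa ztil with kappa = 1 - zeta

# Spacing (eq:spacingb) versus the assumed one-plate condition q/sigma_1 = (sqrt(3)/2) a^2
a_sq = 2*q / (sp.sqrt(3)*sig1)
b_sq = 2*q / (sp.sqrt(3)*(sig1 + sig2))
assert sp.simplify(a_sq / b_sq - 1 - zeta) == 0     # hence b^2 = a^2/(1 + zeta)
```

The last relation, $b^2=a^2/(1+\zeta)$, is what produces the $(\sqrt{1+\zeta}\,\alpha)^3$ prefactor of $S_z$ above, in complete analogy with the factor $2^{3/2}$ of the symmetric case.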
The final result for the density profile reads \begin{eqnarray} \label{eq:totalprof} \widetilde{\rho}(\widetilde{z}) & = & (1-\zeta^2) \frac{e^{-\kappa\widetilde{z}}}{1-e^{-\kappa\widetilde{d}}} \Bigg\{ 1 + \frac{\left( \sqrt{1+\zeta}\alpha \right)^3 C_3}{2\sqrt{\Xi}} \nonumber \\ & & \times \left[ \widetilde{z}^2 - t_2 - 2 t_1 (\widetilde{z}-t_1) \right] + {\cal O}\left( \frac{1}{\Xi} \right) \Bigg\} , \end{eqnarray} where \begin{eqnarray} \label{eq:t1t2} t_1(\kappa) & = & \frac{\int_0^{\widetilde{d}} d\widetilde{z} \widetilde{z} e^{-\kappa\widetilde{z}}}{\int_0^{\widetilde{d}} d\widetilde{z} e^{-\kappa\widetilde{z}}} = \frac{1}{\kappa} - \frac{\widetilde{d}}{e^{\kappa\widetilde{d}}-1} , \\ t_2(\kappa) & = & \frac{\int_0^{\widetilde{d}} d\widetilde{z} \widetilde{z}^2 e^{-\kappa\widetilde{z}}}{\int_0^{\widetilde{d}} d\widetilde{z} e^{-\kappa\widetilde{z}}} \nonumber \\ & = & \frac{2}{\kappa^2} - \frac{1}{e^{\kappa\widetilde{d}}-1} \left( \frac{2\widetilde{d}}{\kappa} + \widetilde{d}^2 \right) . \end{eqnarray} For example, the density profile $\widetilde{\rho}$ for $\zeta=0.5$, $\Xi=86$ and $\widetilde{d}=2.68$ is depicted in Fig. \ref{fig:n_Kanduc_dz0.5}. The dashed curve corresponds to the leading SC profile \begin{equation} \label{eq:leadingscprof} \widetilde{\rho}_0(\widetilde{z}) = (1-\zeta^2) \frac{e^{-\kappa\widetilde{z}}}{1-e^{-\kappa\widetilde{d}}} , \end{equation} which is the same in both VSC and WSC theories. For the parameters of Fig. \ref{fig:n_Kanduc_dz0.5}, the leading order profile reads \begin{equation} \widetilde{\rho}_0(\widetilde{z}) = \frac{3}{4} \frac{e^{-\widetilde{z}/2}}{1-e^{-1.34}} . \end{equation} The WSC profile (\ref{eq:totalprof}), involving also the first SC correction, is represented by the solid curve. The filled circles are the MC data of Ref. \cite{Kanduc08}. 
The ratio $\widetilde{\rho}/\widetilde{\rho}_0$, which is trivially equal to 1 in the leading SC order, is presented in the inset of the figure; we see that the first correction improves substantially the agreement with MC data. A similar conclusion is reached in the case where one plate is uncharged ($\zeta=0$), see Fig. \ref{fig:n_Kanduc_dz0}: for the highest coupling investigated numerically in Ref. \cite{Kanduc08} ($\Xi=86$), the agreement between the WSC approach and Monte Carlo data for the density profile is excellent, and subtle deviations from the leading order term $\rho_0$ are fully captured. It can be seen in the inset of Fig. \ref{fig:n_Kanduc_dz0} that the agreement is no longer quantitative when the coupling parameter is decreased by a factor of 10. As may have been anticipated, the density profile close to the highly charged plate located at $\widetilde z = 0$ is well accounted for by our treatment, while the agreement with MC deteriorates when approaching the uncharged plate located at $\widetilde z = \widetilde d$. We may anticipate that the WSC approach would fare better against Monte Carlo at smaller inter-plate separations. \begin{figure}[htb] \begin{center} \includegraphics[width=0.45\textwidth,clip]{Fig9.eps} \caption{The density profile $\widetilde{\rho}$ for $\zeta=0.5$, $\Xi=86$ and $\widetilde{d}=2.68$. The dashed curve corresponds to the leading SC profile $\widetilde{\rho}_0$ (\ref{eq:leadingscprof}), the solid curve also involves the first correction in (\ref{eq:totalprof}). MC data (filled circles) come from Ref. \cite{Kanduc08}. The inset shows the ratio $\widetilde\rho/\widetilde\rho_0$, for a finer test of the correction to leading order $\widetilde\rho_0$. } \label{fig:n_Kanduc_dz0.5} \end{center} \end{figure} \begin{figure}[htb] \begin{center} \includegraphics[width=0.45\textwidth,clip]{Fig10.eps} \caption{Same as in the inset of Fig. 
\ref{fig:n_Kanduc_dz0.5}, for $\zeta=0$, and two different values of the coupling parameter $\Xi$. The two plates are located at $\widetilde z=0$ and $\widetilde z = 2.68$. Here, $\zeta=0$ means that the plate at $\widetilde z = 2.68$ is uncharged. The symbols are for the Monte Carlo data of Ref. \cite{Kanduc08}.} \label{fig:n_Kanduc_dz0} \end{center} \end{figure} Either of the contact-value relations (\ref{eq:contacttheorem2}) implies the same pressure: \begin{equation} \label{eq:p} \widetilde{P} = \widetilde{P}_0 + \frac{1}{\sqrt{\Xi}} \widetilde{P}_1 + {\cal O}\left( \frac{1}{\Xi} \right) , \end{equation} where \begin{equation} \label{eq:p0} \widetilde{P}_0 = - \frac{1}{2} (1+\zeta^2) + \frac{1}{2} (1-\zeta^2) \coth\left( \frac{1-\zeta}{2}\widetilde{d} \right) \end{equation} is the leading SC contribution, already obtained within the VSC method in \cite{Kanduc08}, and \begin{eqnarray} \widetilde{P}_1 & = & \frac{3^{3/4} (1+\zeta)^{5/2} C_3}{4 (4\pi)^{3/2}} \frac{\widetilde{d}}{\sinh^2\left(\frac{1-\zeta}{2}\widetilde{d} \right)} \nonumber \\ & & \times \left[ \left( \frac{1-\zeta}{2}\widetilde{d} \right) \coth\left( \frac{1-\zeta}{2}\widetilde{d} \right) - 1 \right] \label{eq:p1} \end{eqnarray} is the coefficient of the first $1/\sqrt{\Xi}$ correction. \begin{figure}[htb] \begin{center} \includegraphics[width=0.45\textwidth,clip]{Fig11.eps} \caption{Oppositely charged plates: The phase boundary where $\widetilde{P}=0$, which discriminates the attractive regime (at large distances) from the repulsive one (at small distances). The MC data for $\zeta=-0.5$ (filled squares) come from Ref. 
\cite{Kanduc08}.} \label{fig:dzmoins} \end{center} \end{figure} \begin{figure}[htb] \begin{center} \includegraphics[width=0.45\textwidth,clip]{Fig12.eps} \caption{Rescaled pressure versus the plate distance for like-charged plates with the asymmetry parameter $\zeta=0.5$: The dashed curve corresponds to the leading term of the VSC theory, which is equivalent to the WSC one (\ref{eq:p0}). The small-$\widetilde{d}$ expansion of the WSC pressure (\ref{eq:stateasym}) is represented by solid curves. Filled symbols represent the MC data \cite{Kanduc08} for the couplings $\Xi=86$ (squares), $\Xi=8.6$ (diamonds) and $\Xi=0.32$ (circles in the inset). In the inset, which is a zoom on the small distance region, the mean-field Poisson-Boltzmann (PB) prediction is also displayed.} \label{fig:dzpluss} \end{center} \end{figure} While the first correction to the pressure $\widetilde{P}_1$ vanishes in both limits $\widetilde{d}\to 0$ and $\widetilde{d}\to\infty$, $\widetilde{P}_0$ is in general nonzero and therefore dominates in these asymptotic regions. Let us first consider the large-$\widetilde{d}$ limit: \begin{equation} \label{eq:infty} \lim_{\widetilde{d}\to\infty} \widetilde{P} = \lim_{\widetilde{d}\to\infty} \widetilde{P}_0 = - \zeta^2 . \end{equation} Such a result is correct for oppositely charged plates $-1<\zeta\le 0$. Indeed, in that case, for sufficiently distant plates, all counter-ions stay in the neighborhood of plate 1 and partially compensate its surface charge, which is reduced from the bare value $\sigma_1e$ to $\vert\sigma_2\vert e$. We are left with a capacitor of opposite surface charges $\pm \sigma_2 e$ whose dimensionless pressure is attractive and just equal to $-\zeta^2$. In other words, again for large distances, the negative counter-ions are expelled from the vicinity of the negatively charged plate 2, with a resulting vanishing charge density $\widetilde\rho(\widetilde d)$.
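For reference, the pressure (\ref{eq:p})-(\ref{eq:p1}) takes only a few lines of code. The Python sketch below (our own function name; the lattice-summation constant $C_3$ defined earlier in the paper enters as an input) reproduces the large-$\widetilde{d}$ limit $-\zeta^2$ and verifies that the correction $\widetilde{P}_1$ dies off at large separations.

```python
import math

def pressure_wsc(d, zeta, Xi, C3):
    """Rescaled pressure of Eqs. (p)-(p1): returns (P, P0, P1), where
    P = P0 + P1/sqrt(Xi); C3 is the lattice-summation constant of the paper."""
    u = (1.0 - zeta) * d / 2.0
    coth = math.cosh(u) / math.sinh(u)
    P0 = -0.5 * (1.0 + zeta**2) + 0.5 * (1.0 - zeta**2) * coth
    P1 = (3.0**0.75 * (1.0 + zeta)**2.5 * C3 / (4.0 * (4.0 * math.pi)**1.5)
          * d / math.sinh(u)**2 * (u * coth - 1.0))
    return P0 + P1 / math.sqrt(Xi), P0, P1
```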
From the contact theorem, this implies that the pressure reads $\widetilde P = -\zeta^2$. Hence, the leading SC order (common to VSC and WSC), {\it a priori} valid at short distances, yields the correct result at large distances also. This points to the adequacy of the WSC result (\ref{eq:p})-(\ref{eq:p1}) in the whole range of $\widetilde{d}$ values for oppositely charged plates, which is consistent with our previous analysis about the simple nature of the ground state (independent of the inter-plate distance, at variance with the $\zeta>0$ case). In addition, we emphasize that the effect of the first correction coefficient (\ref{eq:p1}) is very weak. This fact is documented in Fig. \ref{fig:dzmoins}: Each solid curve with a fixed asymmetry parameter $\zeta<0$ represents a phase boundary between the anomalous repulsion of oppositely charged plates at small distances and their ``natural'' attraction at large distances. At $\Xi\to\infty$, the condition $\widetilde{P}_0=0$ in (\ref{eq:p0}) yields the phase boundary \cite{Kanduc08} \begin{equation} \label{eq:dstar} \widetilde{d}^* = - 2 \frac{\ln\vert\zeta\vert}{1-\zeta} , \qquad \mbox{$\Xi\to\infty$ \quad ($-1<\zeta<1$).} \end{equation} Considering also the first correction (\ref{eq:p1}) in (\ref{eq:p}), we see in Fig. \ref{fig:dzmoins} that the phase boundary $\widetilde{P}=0$ is almost independent of $\Xi$, except for very small negative values of $\zeta$. Consequently, the first correction to the leading SC behaviour is generically negligible for oppositely charged plates. \begin{figure}[htb] \begin{center} \includegraphics[width=0.45\textwidth,clip]{Fig13.eps} \caption{Phase diagram for like-charged plates with asymmetry parameter $\zeta=0.5$. The phase boundary given by the leading VSC and WSC order \cite{Kanduc08} is represented by the dashed line.
The phase boundary following from our WSC result (\ref{eq:stateasym}) and (\ref{eq:theta3}) is represented by the solid curve; for comparison, the filled squares are MC data from Ref. \cite{Kanduc08}.} \label{fig:dtildedzplusKanduc} \end{center} \end{figure} \begin{figure}[htb] \begin{center} \includegraphics[width=0.45\textwidth,clip]{Fig14.eps} \caption{The WSC phase boundaries for like-charged plates, in the $(\Xi,\widetilde{d})$ plane and for various values of the asymmetry parameter $\zeta$.} \label{fig:dzplus} \end{center} \end{figure} On the other hand, the asymptotic result (\ref{eq:infty}) is apparently physically irrelevant for like-charged plates ($0<\zeta\le 1$). For sufficiently large distances $d$, the counter-ions stay in the neighborhood of both plates 1 and 2 and {\it a priori} neutralize their surface charges, so that the asymptotic pressure should vanish. Therefore, for $\zeta>0$, we cannot expect the same bonus as for $\zeta<0$, and our WSC results (\ref{eq:p})-(\ref{eq:p1}) hold provided that $\widetilde{d}\ll\sqrt{\Xi}$, as was already the case for $\zeta=1$. In addition, the small-$\widetilde{d}$ expansion of the pressure reads \begin{eqnarray} \label{eq:stateasym} \widetilde{P} & = & - \frac{1+\zeta^2}{2} + \frac{1+\zeta}{\widetilde{d}} + \left[ \frac{(1-\zeta)^2(1+\zeta)}{12} \right. \nonumber \\ & & \left. + \frac{1}{3\theta(\zeta)} + {\cal O}\left( \frac{1}{\Xi}\right) \right] \widetilde{d} + {\cal O}(\widetilde{d}^2) , \end{eqnarray} where \begin{equation} \label{eq:theta3} \theta(\zeta) = \frac{(4\pi)^{3/2}}{3^{3/4}} \frac{1}{C_3} \frac{4}{(1+\zeta)^{5/2}} \sqrt{\Xi}. \end{equation} As it should be, this is the generalization of the special $\zeta=1$ result (\ref{eq:theta2}) to all positive asymmetries. The plot of the rescaled pressure versus the plate distance for like-charged plates with the asymmetry parameter $\zeta=0.5$ is presented in Fig. \ref{fig:dzpluss}.
The dashed curve corresponds to the leading term of the VSC theory, which is equivalent to the leading WSC one (\ref{eq:p0}). The small-$\widetilde{d}$ expansion of the WSC pressure (\ref{eq:stateasym}) is represented by solid curves. The comparison with the MC data \cite{Kanduc08} (filled symbols) shows good agreement for the coupling constants $\Xi=86$ (squares), $\Xi=8.6$ (diamonds) and even for the relatively small $\Xi=0.32$ (circles in the inset). The agreement goes somewhat beyond the expected distance range of the validity of the expansion (\ref{eq:stateasym}), but is restricted to the small-$\widetilde d$ range. The phase diagram for $\zeta=0.5$ is pictured in Fig. \ref{fig:dtildedzplusKanduc}. The phase boundary given by the leading $\Xi\to\infty$ order of the VSC method \cite{Kanduc08} is represented by the dashed line. As repeatedly emphasized above, it corresponds to the leading WSC order as well. The phase boundary following from our leading plus first correction WSC result (\ref{eq:stateasym}) and (\ref{eq:theta3}) is represented by the solid curve; the agreement with MC data of Ref. \cite{Kanduc08} (filled squares) is very good. The phase boundaries for like-charged plates with various values of the asymmetry parameter $\zeta$, following from our WSC result (\ref{eq:stateasym}) and (\ref{eq:theta3}), are drawn in the $(\Xi,\widetilde{d})$ plane in Fig. \ref{fig:dzplus}. It is seen that by decreasing $\zeta$ the anomalous attraction region becomes smaller.
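The attractive ``pocket'' can be located directly from the small-$\widetilde{d}$ expansion (\ref{eq:stateasym}), which has the form $-a + b/\widetilde{d} + c\,\widetilde{d}$; its minimum is $-a + 2\sqrt{bc}$, so an attractive window exists whenever $4bc < a^2$, bounded by the two zeros of the truncated expansion. The Python sketch below implements this (our own function names; the value used for $C_3$ in the test is illustrative only, not the paper's fitted constant).

```python
import math

def theta(zeta, Xi, C3):
    # Eq. (theta3); C3 is the lattice-summation constant of the paper
    return (4.0 * math.pi)**1.5 / 3.0**0.75 / C3 * 4.0 / (1.0 + zeta)**2.5 * math.sqrt(Xi)

def pressure_small_d(d, zeta, Xi, C3):
    # small-d expansion of the WSC pressure, Eq. (stateasym), truncated at O(d^2)
    c = (1.0 - zeta)**2 * (1.0 + zeta) / 12.0 + 1.0 / (3.0 * theta(zeta, Xi, C3))
    return -(1.0 + zeta**2) / 2.0 + (1.0 + zeta) / d + c * d

def repulsion_attraction_boundary(zeta, Xi, C3):
    """Smaller zero of -a + b/d + c*d, i.e. the repulsion/attraction
    crossover of the truncated expansion; None if P never turns negative."""
    a = (1.0 + zeta**2) / 2.0
    b = 1.0 + zeta
    c = (1.0 - zeta)**2 * (1.0 + zeta) / 12.0 + 1.0 / (3.0 * theta(zeta, Xi, C3))
    disc = a * a - 4.0 * b * c
    if disc < 0.0:
        return None
    return (a - math.sqrt(disc)) / (2.0 * c)
```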
\begin{figure}[htb] \begin{center} \includegraphics[width=0.45\textwidth,clip]{Fig15.eps} \caption{The WSC phase boundaries for like-charged plates, in the $(\zeta,\widetilde{d})$ plane and for various values of the coupling constant $\Xi$.} \label{fig:dtildedzplus} \end{center} \end{figure} \begin{figure}[htb] \begin{center} \includegraphics[width=0.45\textwidth,clip]{Fig16.eps} \caption{The WSC phase diagram (solid curves) in the whole range of the asymmetry parameter $\zeta$, for the coupling constant $\Xi=10^3$. For comparison, the phase diagram in the leading SC order (\ref{eq:dstar}) is represented by dashed curves; for oppositely charged plates $-1<\zeta\le 0$, the difference between the solid and dashed curves is invisible, due to the already pointed out smallness of the first correction for $\zeta<0$.} \label{fig:dtildedz} \end{center} \end{figure} The WSC phase boundaries for like-charged plates, in the $(\zeta,\widetilde{d})$ plane and for various values of the coupling constant $\Xi$, are drawn in Fig. \ref{fig:dtildedzplus}. For small values of the asymmetry parameter $\zeta$, e.g. below $\zeta\sim 0.29$ for $\Xi=10^3$, we see that the attractive ``pocket'' disappears. This phenomenon is entirely driven by the first correction, as is revealed by Fig. \ref{fig:dtildedz}, which further shows the phase diagram in the whole range of the asymmetry parameter $\zeta$, for the coupling constant $\Xi=10^3$. For comparison, the phase boundaries between the repulsive and attractive regions in the leading SC order, given by (\ref{eq:dstar}), are pictured by dashed curves. With this leading contribution to the pressure alone, an attractive region always exists. \section{Conclusion} \label{sec:concl} In this paper, we have established the mathematical grounds for the Wigner Strong Coupling (WSC) theory which describes the strong-coupling regime of counter-ions at charged interfaces, starting from the Wigner structure formed at zero temperature.
The results for both like- and oppositely charged plates are in excellent agreement with Monte Carlo data, which represents an improvement over the previously proposed Virial SC approach. By construction, our expansion should be more reliable the larger the coupling parameter $\Xi$, but we found that it remains trustworthy for intermediate values of the coupling constant (say $\Xi=100$), and in some cases down to $\Xi=10$ or 20. The geometries studied are those of one or two {\em planar} interfaces. An important remark is that the leading results in the SC expansion follow from a single counter-ion picture because the dominant (linear) electric potential stems from the plate only; the contribution due to the interaction with other counter-ions on the same plate is harmonic and therefore sub-dominant. As a consequence, the leading terms of the VSC and WSC theories coincide. This fact has been outlined on several occasions, but can nevertheless not be considered as a general statement. Indeed, the situation changes for {\em curved} (say, cylindrical or spherical) wall surfaces, since then the interactions of an ion with other counter-ions contribute to the dominant field, no matter how close to the interface this ion may be. This is why the leading ion profile around a charged cylinder or sphere will in general differ from that obtained within the original VSC approach \cite{Naji05}. Inclusion of curvature effects in the WSC treatment is a task for the future. In the present work, we have also assumed that the charges on the plates are uniformly smeared, which opens the way to the powerful use of the contact theorem to obtain the pressure. As a consequence, the interesting case of discrete fixed charges on the plates \cite{MoreiraN02,Henle04,Trav06,Trav09} is beyond the scope of the present analysis.
A generalization of the formalism to quantum statistical systems of counter-ions is straightforward: vibrations of counter-ions around their Wigner-lattice positions possess the energy spectrum of quantized harmonic oscillators. Another perspective is to formulate a strong-coupling theory valid for an arbitrary distance between the plates. Indeed, both the original Virial SC and the present Wigner SC theories are so far limited, in the two-plate case, to the regime $\widetilde d \ll \Xi^{1/2}$, which means that the inter-plate distance should be smaller than the lattice spacing $a$ in the underlying Wigner crystal (up to an irrelevant prefactor, the quantities $a$ and $b$ introduced in this article refer to the same length). It is important to emphasize here that the limitation $\widetilde d \ll \Xi^{1/2}$ is not intrinsic to the strong-coupling limit, but is a technical requirement that should be enforced to allow for the validity of the single-particle picture, and subsequent higher-order corrections as worked out here. Performing the SC expansion for distances $\widetilde d \gg \Xi^{1/2}$ requires bypassing the single-particle picture, which is a challenging goal. Finally, in view of possible applications to real colloidal systems, it seems important to account for the low dielectric constant of colloidal particles, taking due account of image charge effects \cite{Kanduc07,Levin11}. Work along these lines is in progress. \begin{acknowledgments} We would like to thank C. Texier for useful discussions. L. \v{S}. is grateful to LPTMS for hospitality. The support received from the grants VEGA No. 2/0113/2009 and CE-SAS QUTE is acknowledged. \end{acknowledgments}
\section{Introduction} \label{sec:intro} Since spiral galaxies are rotationally supported systems, disc rotation curves generally serve as valuable tracers of the gravitational potential in the galactic plane. Through traditional mass-modelling, the observed curve is routinely used to infer the mass distribution of galaxies and hence their dark matter contents (e.g. Begeman 1987; Kent 1987; Geehan et al. 2006). In contrast, the thickness of the gas layer depends on the vertical gravitational force and thus traces the potential perpendicular to the plane (e.g., Narayan \& Jog 2002a). Recently, the rotation curve and the outer galactic {\mbox{H\,{\sc i}}} flaring data have been used together to probe the dark matter halos of a few galaxies. The rotation curve mainly determines the mass enclosed within a given radius, and therefore the power-law index of the density profile of the halo. The flaring curve, on the other hand, determines its shape uniquely. So, both constraints have to be used on an equal footing to correctly determine the parameters of the dark matter halo of any galaxy. The {\mbox{H\,{\sc i}}} scale height data coupled with the rotation curve have been used in the past to study the dark matter halos of NGC 4244 (Olling 1996), NGC 891 (Becquaert \& Combes 1997) and the Galaxy (Olling \& Merrifield 2000, 2001). Narayan et al. (2005) studied the Galactic dark matter halo by rigorously incorporating the self-gravity of the gas into their model for the Galaxy, unlike some of the previous studies given in the literature. They concluded that a steeper-than-isothermal, spherical halo best fits the observations, the scale height data at that time being available up to galactocentric distances of 24 $kpc$. These results were confirmed by Kalberla et al. (2007), who, however, included a dark matter ring in their model to explain their extended {\mbox{H\,{\sc i}}} scale height data available up to 40 $kpc$.
In our previous work (Banerjee \& Jog 2008), we studied the dark matter halo of M31, where we developed a model similar to that for the Galaxy (Narayan et al. 2005). However, in addition, we included the bulge into the model, and also varied the shape of the halo as a free parameter, unlike the Galaxy case. Further, we fitted the rotation curve over the entire radial range instead of pinning it at a single point as in the Galaxy case. We scanned the four-dimensional grid of the four free parameters characterizing the halo in a systematic manner, and found that an isothermal halo of an oblate shape of axis ratio $q$ = 0.4 gives the best fit to the available data. In this paper, we apply for the first time a similar approach to study the dark matter halo properties of a low surface brightness (LSB) ``superthin'' galaxy: UGC 7321. UGC 7321 is a bulgeless, pure disc galaxy of Hubble Type Sd, and has a highly flattened stellar disc with a planar-to-vertical axis ratio of 10.3. A few of its key properties are summarized in Table 1. The galaxy has an extended {\mbox{H\,{\sc i}}} disc, and the scale height data are available up to 6-7 disc scale lengths (Matthews \& Wood 2003). So it is highly suitable for the application of the above method to probe its dark matter halo properties. \begin{table*} \begin{minipage}{140mm} \caption{UGC 7321 Parameters}\footnote{All quantities are taken from Matthews et al.
(1999) and Uson \& Matthews (2003) which assume d = 10 $Mpc$} \label{tab:gmrt} \vskip 0.1in \begin{tabular}{ll} \hline Parameters& Value \\ \hline \hline $A_{opt}$($kpc$) & $16.3$ \footnote{Linear diameter at limiting observed $B$-band isophote of 25.5 $mag$ $arcsec^{-2}$} \\ $L_{B}$($L_{\odot}$) & $1.0$ $\times$ $10^{9}$ \footnote{Blue luminosity} \\ $M_{{\mbox{H\,{\sc i}}}}$($M_{\odot}$) & $1.1$ $\times$ $10^{9}$ \footnote{${\mbox{H\,{\sc i}}}$ mass}\\ $h_{R}$($kpc$) & $2.1$ \footnote{disc scale length measured from $R$-band image}\\ $z_{0}$($pc$) & $150$ \footnote{stellar scale height obtained from an exponential fit}\\ $\mu_{B,i}(0)$($mag$ $arcsec^{-2}$) & $23.5$ \footnote{Deprojected (face-on) central disc surface brightness in the $B$ band, corrected for internal and Galactic extinction} \\ $v_{rot}$($km$$s^{-1}$) & $105$ \\ Star formation rate ($M_{\odot}$ per year for massive stars $\ge 5 M_{\odot}$) & $\sim 0.02$ \\ \hline \end{tabular} \end{minipage} \end{table*} Based on traditional mass-modelling, which only uses the observed rotation curve as the constraint, it has been found that the late-type, low surface brightness galaxies are generally dark matter dominated, often within the inner portions of their stellar discs (de Blok \& McGaugh 1997). In the case of UGC 7321, other lines of evidence have already suggested that it, too, is a highly dark matter-dominated galaxy. It has large ratios of its dynamical mass to its {\mbox{H\,{\sc i}}} mass and blue luminosity ($M_{\rm dyn}/M_{HI}$ = 31 and $M_{\rm dyn}/L_{B}$ = 29, respectively; cf. Roberts \& Haynes 1994), and an extraordinarily small stellar disc scale height ($\sim$150~$pc$ for a distance of 10~$Mpc$ based on an exponential fit; Matthews 2000). These properties suggest the need for a massive dark halo to stabilize the disc against vertical bending instabilities (Zasov et al. 1991). UGC 7321 is devoid of a central bulge component (Matthews et al.
1999) and its molecular gas content appears to be dynamically insignificant (Matthews \& Gao 2001; Matthews \& Wood 2001). We, therefore, model the galaxy as a gravitationally coupled, two-component system of stars and atomic hydrogen gas with the dark matter halo acting as a source of external force to this system. We use a four-parameter density profile for the dark matter halo (de Zeeuw \& Pfenniger 1988; Becquaert \& Combes 1997): the core density, the core radius, the power-law density index and the axis ratio of the halo being the four free parameters characterizing it. We methodically vary the four parameters within their respective feasible ranges, and try to obtain an optimum fit to both the observed rotation curve and the vertical scale height data at the same time. As we shall see, this method predicts a spherical, isothermal halo with a core density of about 0.039 - 0.057 $M_{\odot}$ $pc$$^{-3}$ and core radius of 2.5 - 2.9 $kpc$ for this galaxy. The layout of the present paper is as follows. We briefly discuss the model in \S 2, and in \S 3 the method of solving the equations and the input parameters used are discussed. In \S 4, we present the results, followed by the discussion and conclusions in \S\S 5 and 6, respectively. \section{Description of the model used} \label{sec:des} \subsection{Gravitationally coupled, two-component, galactic disc model} \label{ssec:grav_coup} The galaxy is modelled as a gravitationally-coupled, two-component system of stars and atomic hydrogen gas embedded in the dark matter halo, which exerts an external force on the system while remaining rigid and non-responsive itself. This is a simplified version of the Galaxy case (Narayan \& Jog 2002b), where a gravitationally-coupled, three-component system of stars, atomic and molecular hydrogen was considered. Here, the two components, present in the form of discs, are assumed to be axisymmetric and coplanar with each other for the sake of simplicity.
Also, it is assumed that the components are in a hydrostatic equilibrium in the vertical direction. Therefore, the density distribution of each component will be jointly determined by the Poisson equation, and the corresponding equation for pressure equilibrium perpendicular to the midplane. In terms of the galactic cylindrical co-ordinates ($R$, $\phi$, $z$), the Poisson equation for an azimuthally symmetric system is given by \\ $$\frac{{\partial}^2{\Phi}_{total}}{{\partial}z^2} + \frac{1}{R}\frac{{\partial}}{{\partial}R}(R \frac{{\partial}\Phi_{total}}{{\partial}R}) = 4\pi G(\sum_{i=1}^{2} \rho_i + \rho_{h}) \eqno(1) $$ \\ where $\rho_i$ with i = 1 to 2 denotes the mass density for each disc component while $\rho_h$ denotes the mass density of the halo. $\Phi_{total}$ denotes the total potential due to the disc and the halo. For a nearly constant rotation curve as is the case here, the radial term can be neglected as its contribution to the determination of the {\mbox{H\,{\sc i}}} scale height is less than ten percent as was noted by earlier calculations Narayan et al. (2005). So, the above equation reduces to \\ $$\frac{{\partial}^2\Phi_{total}}{{\partial}z^2} = 4\pi G(\sum_{i=1}^{2} \rho_i + \rho_{h}) \eqno(2) $$ The equation for hydrostatic equilibrium in the z direction is given by Rohlfs (1977) $$ \frac{\partial}{{\partial}z}(\rho_{i}\langle(v_{z}^{2})_{i}\rangle) + \rho_{i}\frac{{\partial}\Phi_{total}}{{\partial}z} = 0 \eqno(3) $$ \\ \noindent where $\langle(v_{z}^{2})_{i}\rangle$ is the mean square random velocity along the $z$ direction for the $i^{th}$ component. Further we assume that each component is isothermal i.e., the random velocity $v_{z}$ remains constant with $z$. Combining eq. (2) and eq. 
(3), we get $$ \langle(v_{z}^{2})_{i}\rangle \frac{\partial}{{\partial}z}[\frac{1}{\rho_{i}}\frac{{\partial}\rho_{i}}{{\partial}z}] = -4\pi G(\sum_{i=1}^{2} \rho_i + \rho_{h}) \eqno(4)$$ \noindent This represents a set of two coupled, second-order, ordinary differential equations which needs to be solved to obtain the vertical density distribution of each of the two components. Although the net gravitational potential acting on each component is the same, the response will be different due to the different velocity dispersions of the two components. \subsection{ Dark Matter Halo } \label{ssec:DM halo} We use the four-parameter dark matter halo model (de Zeeuw \& Pfenniger 1988; Becquaert \& Combes 1997) with the density profile given by $$\rho(R,z) = \frac{\rho_0}{\left[ 1+\frac{m^{2}}{{R_c}^{2}}\right]^p} \eqno(5) $$ \\ where $ m^{2}$=$R^{2} + ({z^{2}}/{q^{2}})$, $\rho_0$ is the central core density of the halo, ${R_c}$ is the core radius, $p$ is the power-law density index, and $q$ is the vertical-to-planar axis ratio of the halo (spherical: $q$ = 1; oblate: $q$ $<$ 1; prolate: $q$ $>$ 1). \section {Numerical Solution of the Equations \& Input Parameters} \label{sec: Num} \subsection{Solution of equations} \label{ssec: Sol} For a given halo density profile, eq. (4) is solved in an iterative fashion, as an initial value problem, using the fourth-order, Runge-Kutta method of integration, with the following two initial conditions at the mid-plane (i.e., $z$ = 0) for each component: $$ \rho_i = (\rho_0)_i, \qquad \frac{d\rho_i}{dz} = 0 \eqno(6) $$ \\ As the modified mid-plane density $(\rho_0)_i $ for each component is not known a priori, the net surface density $\Sigma_i(R)$, given by twice the area under the curve of $\rho_i(z)$ versus $z$, is used as the secondary boundary condition, as this quantity is known from observations (see \S 3.2).
The required value of $(\rho_0)_i$ is thus determined by a trial and error method, which gives the required $\rho_i(z)$ distribution after four iterations with an accuracy to the second decimal place. Existing theoretical models suggest a sech$^2$ profile for an isothermal density distribution. But for a three-component disc, the vertical distribution is shown to be steeper than a sech$^2$ function close to the mid-plane (Banerjee \& Jog 2007). However, at large $z$ values, it is close to a sech$^2$ distribution. Hence we use the half-width-at-half-maximum of the resulting model vertical distribution to define the scale height, as was done in Narayan \& Jog (2002a,b). \subsection {Input Parameters} \label{ssec: Input} We require the vertical velocity dispersion and the surface density of each of the two galactic disc components to solve the coupled set of equations at a given radius. The central stellar surface density is derived directly from the optical surface photometry (Matthews et al. 1999) by assuming a reasonable stellar mass-to-light ratio. The deprojected $B$-band central surface brightness of UGC~7321 (corrected for extinction) translates to a central luminosity density of 26.4~$L_{\odot}$~pc$^{-2}$. Using the $B-R$ color of the central regions ($\sim$1.2; Matthews et al. 1999) and the ``formation epoch: bursts'' models from Bell \& de Jong (2001) predicts $(M/L)_{\star}$ = 1.9, which we adopt here. (Other models by Bell \& de Jong give values of $(M/L)_{\star}$ ranging from 1.7 to 2.1). This in turn yields a central stellar surface density of 50.2~$M_{\odot}$ pc$^{-2}$ for UGC~7321. The stellar velocity dispersion of this galaxy has been indirectly estimated to be 14.3 $km$$s^{-1}$ at the centre of the galaxy ($R$ = 0) (Matthews 2000).
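To make the scheme of \S 3.1 concrete, the following self-contained Python sketch (toy units with $G$ = 1 and function names of our own choosing; not the code used for this paper) integrates the potential form of eq. (4) with fourth-order Runge-Kutta, rescales the mid-plane densities until the model surface densities match their targets, and reads off the half-width-at-half-maximum. For a single self-gravitating isothermal component with no halo it should recover the classical sech$^2$ sheet, for which $\rho_0 = \pi G \Sigma^2 / (2 \langle v_z^2 \rangle)$.

```python
import math

def solve_vertical(G, sig_targets, v2s, rho_halo, zmax, n=2000, iters=40):
    """Coupled vertical structure: each isothermal component obeys
    rho_i(z) = (rho_0)_i exp(-Phi(z)/<v_z^2>_i), with Phi from Poisson's
    equation Phi'' = 4 pi G (sum_i rho_i + rho_halo); mid-plane densities
    are rescaled until 2 * integral of rho_i matches sig_targets."""
    dz = zmax / n
    rho0 = [s / (2.0 * math.sqrt(v2)) for s, v2 in zip(sig_targets, v2s)]  # crude seed
    phi = [0.0] * (n + 1)
    for _ in range(iters):
        def rhs(z, ph, dph):
            dens = sum(r * math.exp(-ph / v2) for r, v2 in zip(rho0, v2s))
            return dph, 4.0 * math.pi * G * (dens + rho_halo(z))
        ph, dph = 0.0, 0.0   # mid-plane conditions of eq. (6)
        phi[0] = 0.0
        for i in range(n):   # RK4 step for the pair (Phi, Phi')
            z = i * dz
            k1p, k1v = rhs(z, ph, dph)
            k2p, k2v = rhs(z + dz / 2, ph + dz / 2 * k1p, dph + dz / 2 * k1v)
            k3p, k3v = rhs(z + dz / 2, ph + dz / 2 * k2p, dph + dz / 2 * k2v)
            k4p, k4v = rhs(z + dz, ph + dz * k3p, dph + dz * k3v)
            ph += dz / 6 * (k1p + 2 * k2p + 2 * k3p + k4p)
            dph += dz / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
            phi[i + 1] = ph
        for j, (st, v2) in enumerate(zip(sig_targets, v2s)):
            prof = [rho0[j] * math.exp(-p / v2) for p in phi]
            sig = 2.0 * dz * (sum(prof) - 0.5 * (prof[0] + prof[-1]))
            rho0[j] *= st / sig  # enforce the observed surface density
    # half-width-at-half-maximum of component 0 defines the scale height
    prof = [math.exp(-p / v2s[0]) for p in phi]
    hwhm = None
    for i in range(n):
        if prof[i + 1] < 0.5:
            f = (0.5 - prof[i]) / (prof[i + 1] - prof[i])
            hwhm = (i + f) * dz
            break
    return rho0, hwhm
```

A single-component run with $\Sigma$ = 1 and $\langle v_z^2 \rangle$ = 1 converges to $\rho_0 = \pi/2$ and HWHM $= z_0 \ln(1+\sqrt{2})$ with $z_0 = 1/\pi$, the analytic isothermal-sheet values.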
This is very close to the value of the central (vertical) stellar velocity dispersion (16 $km$$s^{-1}$) for the dwarf spiral galaxy UGC 4325 measured by Swaters (1999), and to the value (20 $km$$s^{-1}$) estimated analytically for the superthin galaxy IC 5249 by van der Kruit et al. (2001). We assume the central value of the velocity dispersion to fall off exponentially with radius with a scale length of 2 $R_{d}$ (which is equal to 4.2 $kpc$ for UGC 7321), as is seen in the Galaxy (Lewis \& Freeman 1989). Uson \& Matthews (2003) give the deprojected {\mbox{H\,{\sc i}}} surface density for UGC 7321 as a function of radius. The velocity dispersion of {\mbox{H\,{\sc i}}} is obtained from the Gaussian fits to the edges of position-velocity cuts on the observed data. This gives a value between 7-9 $km$$s^{-1}$. The data are consistent with the typical value of the {\mbox{H\,{\sc i}}} dispersion in other galaxies (see \S 5.2 for a detailed discussion). The molecular hydrogen gas, H$_{2}$, has not been taken into account, as it appears to be dynamically insignificant compared to the other components of the disc. Matthews \& Gao (2001) detected a weak CO signal from the central $\sim$2.7~$kpc$ of UGC~7321, which translates to a total molecular hydrogen mass of $M_{H_2}\approx2.3\times10^{7}~M_{\odot}$ (although this value is uncertain by at least a factor of 2-3 as a result of uncertainties in optical depth effects and the appropriate value of the CO-to-H$_{2}$ conversion factor). This corresponds to a mean H$_{2}$ surface density of $\Sigma_{H_2}\approx 1~M_{\odot}$~pc$^{-2}$ in the inner galaxy, which agrees fairly well with independent estimates from the dust models of Matthews \& Wood (2001) and from a study of the distribution of dark clouds from {\it Hubble Space Telescope} images (J. S. Gallagher \& L. D. Matthews, unpublished). Therefore, the presence of H$_{2}$ has been ignored in subsequent calculations.
\section{Results and analysis} \label{sec: Results} We perform an exhaustive scanning of the grid of parameters characterizing the dark matter density profile to obtain an optimum fit to both the observed rotation curve and the scale height data. To start with, we consider a spherical halo ($q$ = 1) for simplicity. \begin{table*} \begin{minipage}{140mm} \caption{3D grid of dark halo parameters scanned} \label{tab:grid} \vskip 0.1in \begin{tabular}{lll} \hline Parameter & Range & Resolution\\ \hline $\rho_{0}$($M_{\odot}$ $pc^{-3}$) & $0.0001 - 0.1$ & $0.0001$\\ & $0.001 - 0.5$ & $0.001$ \\ $R_{c}$($kpc$) & $1.5 - 12$ & $0.1$ \\ $p$ & $1 - 2$ & $0.5$ \\ \hline \hline \end{tabular} \end{minipage} \end{table*} We vary the remaining three free parameters characterizing the density profile of the halo (see eq. (5)) within their respective feasible ranges (as summarized in Table 2), and obtain the contribution of the halo to the rotation curve for each such grid point in this three-dimensional grid. The power-law density index $p$ is allowed to take the values 1, 1.5 and 2 successively. Here, a value of $p$ = 1 corresponds to the standard isothermal case used routinely for simplicity and also because it corresponds to the flat rotation curve. The value of $p$ = 1.5 refers to the NFW profile (Navarro et al. 1996) at large radii, whereas $p$ = 2 gives an even steeper dark-matter halo profile, as was found for the Galaxy case (Narayan et al. 2005). For each value of $p$, the core density $\rho_{0}$ and the core radius $R_{c}$ are varied as given in Table 2 to ensure an exhaustive scanning for the dark matter halo parameters since we have little prior knowledge of the plausible values these parameters can take in a superthin galaxy.
\subsection{The rotation curve constraint} \label{sec: rotcurve} \noindent The total rotational velocity at each radius is obtained by adding the contributions from the stars, the gas and the halo in quadrature as $$ {v^{2}(R)} = v_{star}^{2}(R) + v_{gas}^{2}(R) + v_{halo}^{2}(R) \eqno(7) $$ \\ The way each of these terms is obtained is discussed below. This result is matched with the observed rotational velocity at all radii. The deprojected gas surface density versus radius data for UGC 7321 (Uson \& Matthews 2003) can be modelled as one which remains constant at 5 $M$$_{\odot}$ $pc$$^{-2}$ at galactocentric radii less than 4 $kpc$, and which then falls off exponentially with a scale length of 2.8 $kpc$. The gas surface density does not include a correction for He. For this radial distribution, we calculated the contribution of the gas to the rotation curve (using eq. (2-158) \& (2-160) of Binney \& Tremaine (1987)), and found it to be negligible compared to that of the stellar component. However, it was included in the calculations for the sake of completeness. The rotational velocity at any radius $R$ for a thin exponential stellar disc is given by (Binney \& Tremaine 1987) $$ v_{star}^{2}(R) = 4\pi G \Sigma_{0} R_{d} y^{2} [I_0(y)K_0(y) - I_1(y)K_1(y)] \eqno(8)$$ \\ where $\Sigma_{0}$ is the disc central surface density, $R_{d}$ the disc scale length and $y = R/2R_{d}$, $R$ being the galactocentric radius. The functions $I_{n}$ and $K_{n}$ (where $n$ = 0 and 1) are the modified Bessel functions of the first and second kind, respectively. For the spherical halo, the rotational velocity, $v_{halo}(R)$, is given by \\ $$ v_{halo}^{2}(R) = \frac{G M_{halo} (R)}{R} \eqno(9) $$ \\ where $M_{halo}(R)$ is the mass enclosed within a sphere of radius $R$ for the given halo density profile, and is obtained from the density on the right-hand side of eq. (5).
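Eq. (8) is straightforward to evaluate numerically. The sketch below is illustrative only: the central surface density $\Sigma_{0} = 50~M_{\odot}~pc^{-2}$ is a placeholder (not the value adopted in the paper), and the modified Bessel functions are computed from their standard integral representations so that no special-function library is needed:

```python
import math

def _simpson(f, a, b, n):
    """Composite Simpson quadrature (n even)."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3.0

def bessel_i(n, y):
    # I_n(y) = (1/pi) * integral_0^pi exp(y cos t) cos(n t) dt
    return _simpson(lambda t: math.exp(y * math.cos(t)) * math.cos(n * t),
                    0.0, math.pi, 400) / math.pi

def bessel_k(n, y):
    # K_n(y) = integral_0^inf exp(-y cosh t) cosh(n t) dt, truncated at t = 12
    return _simpson(lambda t: math.exp(-y * math.cosh(t)) * math.cosh(n * t),
                    0.0, 12.0, 4000)

G = 4.301e-3   # gravitational constant in pc M_sun^-1 (km/s)^2

def v_star(R_pc, sigma0=50.0, Rd_pc=2100.0):
    """Eq. (8): rotation speed (km/s) of a thin exponential stellar disc.
    sigma0 (central surface density, M_sun/pc^2) is an illustrative value;
    Rd_pc is the UGC 7321 disc scale length of 2.1 kpc."""
    y = R_pc / (2.0 * Rd_pc)
    f = bessel_i(0, y) * bessel_k(0, y) - bessel_i(1, y) * bessel_k(1, y)
    return math.sqrt(4.0 * math.pi * G * sigma0 * Rd_pc * y * y * f)
```

As a sanity check, the resulting curve peaks near $R \approx 2.2\,R_{d}$, the well-known property of the exponential-disc rotation curve.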
For an oblate halo of axis ratio $q$ and density index $p$, the circular speed $v_{halo}(R)$ is obtained by differentiating the expression for the potential from Sackett \& Sparke (1990) and Becquaert \& Combes (1997) to be: \\ $$ v_{halo}^{2}(R) = 4 \pi G \rho_{0} q \int_{0}^{1/q} \frac{R^2 x^2 [ 1 + \frac{R^2 x^2}{R_{c}^2 ( 1 + \epsilon^2 x^2)} ]^{-p}}{( 1 + \epsilon^2 x^2)^2} dx \eqno(10)$$ \\ \noindent where $\epsilon = (1- q^2)^{1/2}$. We obtain the value of the integral numerically in each case. Thus, upon obtaining the rotation curve corresponding to each grid point, we perform a ${\chi}^{2}$ analysis comparing the computed curve with the observed {\mbox{H\,{\sc i}}} rotation curve. The observed rotation curve is taken from Uson \& Matthews (2003) and has 30 data points with very small error-bars (typically a few percent of the observed velocity amplitudes even after accounting for systematic uncertainties). It was derived by implicitly assuming a constant (Gaussian) {\mbox{H\,{\sc i}}} velocity dispersion of 7 $km$$s^{-1}$. Ideally, we should have considered only those grid points which give ${\chi}^{2}$ values of the order of 30 (i.e., the number of data points) as those giving appreciably good fits to the observed curve (Bevington 1969). But we relax this criterion and choose a larger range of grid points around the minimum, i.e., grid points which give ${\chi}^{2}$ values less than 300, before applying the next constraint, i.e., the vertical {\mbox{H\,{\sc i}}} scale height data. This allows us to impose the simultaneous constraints (planar + vertical) on our model (see \S 4.3 for a discussion). We finally obtain 36 grid points for $p$ = 1, 80 for $p$ = 1.5 and 69 for the $p$ = 2 case. As we shall see later, the final set of best-fit parameters gives reasonably good fits to both the observed rotation curve and the scale height data.
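A hedged numerical sketch of eq. (10): for $q = 1$ and $p = 1$ the integral reduces analytically to the pseudo-isothermal sphere, $v^{2} = 4\pi G \rho_{0} R_{c}^{2}\,[1 - (R_{c}/R)\arctan(R/R_{c})]$, which provides a check on the quadrature. The parameter values below are the best-fit spherical-halo values quoted in \S 4.2, evaluated at an illustrative radius of 4 $R_{d}$:

```python
import math

G = 4.301e-3   # gravitational constant in pc M_sun^-1 (km/s)^2

def v_halo_sq(R, rho0, Rc, p=1.0, q=1.0, n=4000):
    """Eq. (10): squared circular speed, (km/s)^2, of an oblate halo of axis
    ratio q and density index p, by composite Simpson quadrature over x."""
    eps2 = 1.0 - q * q
    def integrand(x):
        u = 1.0 + eps2 * x * x
        core = 1.0 + (R * R * x * x) / (Rc * Rc * u)
        return (R * R * x * x) * core ** (-p) / (u * u)
    a, b = 0.0, 1.0 / q
    h = (b - a) / n
    s = integrand(a) + integrand(b) + sum(
        (4 if k % 2 else 2) * integrand(a + k * h) for k in range(1, n))
    return 4.0 * math.pi * G * rho0 * q * s * h / 3.0

# Spherical isothermal case, best-fit values of Sec. 4.2 (pc units):
rho0, Rc, R = 0.041, 2900.0, 8400.0
v_num = math.sqrt(v_halo_sq(R, rho0, Rc))
v_ana = math.sqrt(4 * math.pi * G * rho0 * Rc**2
                  * (1 - (Rc / R) * math.atan(R / Rc)))
print(round(v_num, 1), round(v_ana, 1))   # should agree to quadrature accuracy
```

The resulting $v_{halo} \sim 10^{2}$ $km$$s^{-1}$ is of the same order as the observed flat rotation speed, as expected for a dark-matter-dominated disc.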
\subsection{The {\mbox{H\,{\sc i}}} scale height constraint} \label{sec: HI scaleheight} For each value of $p$, we obtain the {\mbox{H\,{\sc i}}} scale height distribution beyond 3 disc scale lengths, for each of the grid points filtered out by the first constraint as discussed in the previous section. Next we perform a ${\chi}^{2}$ analysis of our model {\mbox{H\,{\sc i}}} scale height versus radius curves with respect to the observed one, and fit our model to the observed data only beyond 3 disc scale lengths, in keeping with earlier studies in the literature (Narayan \& Jog 2005; Banerjee \& Jog 2008). For M31, the surface density and therefore the vertical gravitational force due to the dark matter halo exceed those of the disc only in the outer regions (see Fig. 6 of Banerjee \& Jog 2008). As the disc dynamics in this region are controlled by the halo alone, the above method helps us in studying the effect of the halo on the scale height distribution, decoupled from that of the other components. For the case of UGC 7321, we at first take the gas velocity dispersion to be equal to 7 $km$$s^{-1}$. However, this fails to give a good fit to the observed data. Next we try both 8 $km$$s^{-1}$ and 9 $km$$s^{-1}$ successively, but choose the latter for subsequent calculations as it gives a much better fit to the observed data than the 8 $km$$s^{-1}$ case. For the choice of $v_{z}$ = 9 $km$$s^{-1}$, the best-fit core density is 0.041 $M$$_{\odot}$$pc$$^{-3}$ and the best-fit core radius is 2.9 $kpc$, as indicated by the smallest ${\chi}^{2}$ value. The small value of the best-fit halo core radius thus obtained indicates that the halo becomes important already at small radii. This suggests that the fitting of the theoretical curve to the observed one should not be restricted to regions beyond 3 $R_{d}$ for an LSB galaxy like UGC 7321, as the halo is already important at small radii.
Hence, we next fit the scale height data over the entire radial range (i.e., 2-12 $kpc$) with the same constant $v_{z}$ value of 9 $km$$s^{-1}$. The best-fit values change by less than a few percent compared to the above case where the fit was done only beyond 3 $R_{d}$. The best-fit core density now becomes 0.039 $M$$_{\odot}$$pc$$^{-3}$ whereas the best-fit core radius continues to be 2.9 $kpc$. Since the disagreement of the observed rotation curve with the predicted one is mostly in the inner galaxy, we check if the fit can be improved by reducing the central value of the stellar surface density by twenty percent or so, keeping the $v_{z}$ value constant at 9 $km$$s^{-1}$. This is reasonable, as there are uncertainties of at least that order in evaluating both the $M$/$L$ ratio and the deprojected surface brightness of the stellar disc. However, this variation fails to improve the results significantly. We then take a cue from the nature of the mismatch of our model curve with the observed one, which clearly shows the need for a higher value of the gas velocity dispersion in the inner parts, while a slightly lower value is required in the outer regions. The nature of the mismatch also rules out an oblate halo as a possible choice, as that would lower the scale heights throughout the entire radial range, thus making the fits worse in the inner regions. To account for this, we then repeat the whole procedure by imposing a small gradient in the gas velocity dispersion, letting it vary linearly between 9.5 $km$$s^{-1}$ at $R$ = 7 $kpc$ and 8 $km$$s^{-1}$ at $R$ = 12.6 $kpc$. Although such a variation is ad hoc, the observational constraints on this value are weak enough to allow for a small variation with galactocentric radius, with 9.5 $km$$s^{-1}$ approaching the upper limit allowed by the data. Using the same gradient in the inner regions, we get a gas velocity dispersion of 10.8 $km$$s^{-1}$ at $R$ = 2 $kpc$.
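The linear dispersion profile just described, and its inward extrapolation, can be written down directly (a trivial sketch; the two pinning points are those quoted above):

```python
# v_z pinned at 9.5 km/s (R = 7 kpc) and 8 km/s (R = 12.6 kpc),
# extrapolated inward with the same slope.
def v_z(R_kpc):
    slope = (8.0 - 9.5) / (12.6 - 7.0)   # about -0.27 km/s per kpc
    return 9.5 + slope * (R_kpc - 7.0)

print(round(v_z(2.0), 1))   # -> 10.8, the inner value quoted in the text
```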
We may note here that a similar gradient in the {\mbox{H\,{\sc i}}} velocity dispersion was obtained in the case of the Galaxy (Narayan \& Jog 2002b) and led to a better fit to the observed scale height in the inner Galaxy (see \S 5 for a detailed discussion). A fit to the whole range of observations (2 - 12 $kpc$) shows that a spherical isothermal halo with a core density of 0.043 $M$$_{\odot}$ $pc$$^{-3}$ and a core radius of 2.6 $kpc$ best fits the observations. These values differ only slightly (within 10 percent) from the values obtained with a constant velocity dispersion of 9 $km$$s^{-1}$. In Fig. 1, we show our best fit for the case of constant $v_{z}$ = 9 $km$$s^{-1}$ and for the case with $v_{z}$ falling slightly with radius, compared with the fit to the rotation curve alone and superimposed on the observed curve. Our model curves follow the trend of the observed data well throughout the entire radial range. \begin{figure}[h!] \begin{center} \rotatebox{0}{\epsfig{file=f1.eps,width=3.5in}} \end{center} \caption{Plot of the rotational velocity (in $km$$s^{-1}$) versus radius (in $kpc$) for the best-fit case of a spherical isothermal halo and a constant gas velocity dispersion of $v_{z}$ = 9 $km$$s^{-1}$ (dashed line), and for the case of $v_{z}$ falling slightly with radius (dotted line), superimposed on the best fit to the rotation curve alone. Overall, the model rotation curves follow the trend of the observed data. } \label{fig:rotcur} \end{figure} In Fig. 2, we compare the best-fit scale height distributions for the above two cases with the observed one.
Clearly, the case with a gradient in the gas velocity dispersion gives a markedly better fit (${\chi}^{2}$ value 2.8), although as far as ${\chi}^{2}$ values are concerned, the case of constant $v_{z}$ = 9 $km$$s^{-1}$ cannot be ruled out altogether (${\chi}^{2}$ value 14.7). (Basic statistics suggests that a fit is reasonably good if the ${\chi}^{2}$ value is of the order of the number of data points in the fit, as discussed at the end of \S 4.1; here the total number of data points in the {\mbox{H\,{\sc i}}} scale height data is 11.) \begin{figure}[h!] \begin{center} \rotatebox{0}{\epsfig{file=f2.eps,width=3.5in}} \end{center} \caption{Plot of the {\mbox{H\,{\sc i}}} scale height (in $pc$) versus radius (in $kpc$) for the best-fit case of a spherical isothermal halo with constant gas velocity dispersion ($v_{z}=9$~$km$s$^{-1}$; dotted line) and for $v_{z}$ declining slowly with radius (solid line). In this case, the model curves have been fitted over the entire radial range. The model with constant $v_{z}$ predicts an {\mbox{H\,{\sc i}}} scale height distribution that does not reproduce the observed values in the inner regions of the galaxy ($R<7$~$kpc$). Assuming a slight gradient in $v_{z}$ clearly gives a better fit.} \label{fig:scht} \end{figure} \subsection{Quality of individual fits as a result of imposing two simultaneous constraints} We reiterate that our method is aimed at obtaining an optimum fit to both of the observational constraints, namely the rotation curve and the {\mbox{H\,{\sc i}}} scale height data. This evidently results in a compromise in the quality of the individual fits to either of the observed curves (see Figs. 1 \& 2). Traditional mass-modelling techniques resort to the rotation curve constraint alone, and therefore the fit is much better.
However, imposing two simultaneous constraints on the theoretical model gives a more realistic picture than the case in which a best fit is sought to a single constraint alone. It is noteworthy that even when the fit is sought to the rotation curve alone, the best-fit $R_{c}$ continues to be of the order of $R_{d}$, which is the main result of this work. However, the $\rho_{0}$ value obtained is different in the two cases. \section{Discussion} \label{sec:dis} The dark halo properties and overall stability of superthin galaxies like UGC~7321 are of considerable interest in the context of galaxy formation and evolution. In particular, such galaxies seem to pose a significant challenge to hierarchical models of galaxy formation, whereby galaxies are built up through violent mergers of subgalactic clumps, since such mergers may result in significant disc heating and trigger instabilities (e.g., D'Onghia \& Burkert 2004, Eliche-Moral et al. 2006, Kormendy \& Fisher 2005). While theorists have predicted that the thinnest galaxy discs must require massive dark halos for stabilization (Zasov et al. 1991; Gerritsen \& de Blok 1999), little information has been available on the dark halo properties of individual superthin galaxies until now. UGC 7321 is the first superthin galaxy for which both a detailed rotation curve and the gas layer thickness were derived (Uson \& Matthews 2003). This has allowed us to use both these constraints simultaneously to characterize its dark halo properties, as well as to obtain new insight into the stability of its disc against star formation. Below we comment further on the implications of several of our key findings. \subsection {The small core radius of the dark matter halo} \label{ssec: core radii} The core radii of the dark matter halos of the massive high surface brightness galaxies studied so far are usually found to be comparable to their optical size, or equivalently, 3-4 times larger than the exponential stellar disc scale length (Gentile et al. 2004). The Galaxy has a core radius of 8-9.5 $kpc$, which is equal to 3$R_{d}$ (Narayan et al. 2005), while M31 has a core radius of 21 $kpc$, which is almost equal to 4$R_{d}$ (Banerjee \& Jog 2008). For UGC 7321, we find a very small core radius of 2.5-2.9 $kpc$, which is just slightly greater than its disc scale length ($R_{d}$ = 2.1 $kpc$). This shows that the dark matter becomes important at small radii, consistent with previous mass-modelling of LSB spirals based on other techniques (de Blok \& McGaugh 1997; de Blok et al. 2001). This is illustrated in another way in Fig. 3, which gives a comparative plot of the surface density of the stars, the gas and the halo with radius in this galaxy. The halo surface density was calculated within the total gas scale height, as was done for M31 (Banerjee \& Jog 2008). It clearly shows that the surface density, and hence the gravitational potential, of the halo becomes comparable to that of the disc already at $R$ = 2$R_{d}$. This behaviour is quite different from that of a high surface density galaxy like M31 (cf. Fig. 6 of Banerjee \& Jog 2008), where the halo contribution starts to dominate at much larger radii (5$R_{d}$). Our results support the idea that superthin discs like UGC 7321 are among the most dark matter-dominated of disc galaxies. \begin{figure}[h!] \begin{center} \rotatebox{0}{\epsfig{file=f3.eps,width=3.5in}} \end{center} \caption{Plot comparing the surface density (in $M_{\odot}$ $pc^{-2}$) of the stars, gas and the dark-matter halo as a function of radius (in $kpc$).
It clearly shows that the gravitational potential of the halo dominates over that of the disc as early as two disc scale lengths ($R$ = 4 $kpc$).} \label{fig:sdensity} \end{figure} \subsection{Dependence on gas parameters} $\bullet$ \textbf{Gradient in gas velocity dispersion} As noted earlier, if we impose a constant velocity dispersion, we require a value of 9 $km$$s^{-1}$ to get a reasonably good fit to the observed scale height data, while an even better fit requires a velocity gradient, implying an even larger dispersion in the inner region (Fig. 2). In the earlier work for the Galaxy (Narayan et al. 2005), a slope of -0.8 $km$$s^{-1}$ $kpc^{-1}$ for the gas velocity dispersion was obtained for the inner Galaxy between 2-12 $kpc$ (pinned at 8 $km$$s^{-1}$ at 8.5 $kpc$), as it gave the best fit to the nearly constant {\mbox{H\,{\sc i}}} scale heights. Oort (1962) had tried the same idea but had needed a higher gradient of -2 $km$$s^{-1}$ $kpc^{-1}$, since he did not include the gas gravity and therefore needed a larger variation to account for the constant {\mbox{H\,{\sc i}}} scale height in the inner Galaxy. Narayan et al. (2005) tried to constrain the halo properties using the outer-Galaxy {\mbox{H\,{\sc i}}} data, where they used a gas velocity dispersion gradient of -0.2 $km$$s^{-1}$ $kpc^{-1}$. This is similar to the value we have for UGC 7321. This was based on the fact that some galaxies show a falling velocity dispersion which then saturates at 7 $km$$s^{-1}$ (see Narayan et al. 2005 for a discussion). Recently, Petric \& Rupen (2007) have measured the {\mbox{H\,{\sc i}}} velocity dispersion across the disc of the face-on galaxy NGC 1058. The authors find the {\mbox{H\,{\sc i}}} velocity dispersion to have a fairly complex distribution, which nonetheless shows a clear fall-off with radius (see Fig. 8 of their paper).
Using this figure, one can estimate a gradient of roughly -0.1 $km$$s^{-1}$ $kpc^{-1}$ in the outer disc, which is consistent with values observed for other galaxies. A similar fall-off has also been seen in NGC 6946 (Boomsma et al. 2008) as well as in several other galaxies (Bottema et al. 1986; Dickey et al. 1990; Kamphuis 1993). This gives some observational support to our assumption.\\ $\bullet$ \textbf{Superposition of two {\mbox{H\,{\sc i}}} phases} A more realistic case would be to treat the {\mbox{H\,{\sc i}}} as consisting of two phases or components, characterized by a warm ($v_{z}$ = 11 $km$$s^{-1}$) and a cold ($v_{z}$ = 7 $km$$s^{-1}$) medium, respectively. These values match the range seen in the above fits and represent the two phases as observed in the Galaxy (Kulkarni \& Heiles 1988). However, observationally the fraction of mass in these two phases as a function of radius is not known. Assuming that this fraction is constant with radius, we let its value vary as a free parameter. The best-fit ${\chi}^{2}$ in this case is 13.7, as compared to 2.8 for the case with a velocity gradient treated earlier. Although we do not get as good a fit as was obtained in the case with a gradient in the velocity dispersion, the best-fit core radius $R_{c}$ still comes out to be 2.5 $kpc$, which is again of the order of $R_{d}$. That the dark matter dominates at small radii therefore remains a robust result, irrespective of the input gas parameters used. The best-fit case gives the fraction of {\mbox{H\,{\sc i}}} in the cold medium to be 0.2.\\ We had taken this ratio to be constant for simplicity. Interestingly, this assumption is justified by the recent detailed study by Dickey et al. (2009) involving absorption and emission spectra at 21 cm in the outer Galaxy. They use these to map the distribution of the cold and warm phases of the {\mbox{H\,{\sc i}}} medium, and surprisingly find this ratio to be a robust quantity in the radial range of $R_{sun}$ to $3$ $R_{sun}$.
They find this ratio to be $\sim 0.15 - 0.2$, which agrees well with the best-fit ratio of 0.2 that we obtain. It is interesting that this ratio, obtained by two different techniques, is similar in the two galaxies. The case with a gradient, i.e., a higher velocity dispersion within the optical radius, gives the lowest ${\chi}^{2}$ value (Fig. 2), and we adopt it as our best case. We note that this choice is not inconsistent with the constant phase ratio measured by Dickey et al. (2009), which was for the outer Galaxy. $\bullet$ \textbf{High value of the gas velocity dispersion} The high gas velocity dispersion required to get an improved fit to the scale height data is surprising given the superthin nature of the galaxy, whose small stellar scale height implies that it is among the dynamically coldest of galactic discs (e.g., Matthews 2000). The origin of this high gas velocity dispersion is beyond the scope of this paper. However, independent of its origin, this high value can partly explain why star formation is inefficient in UGC 7321. This is because, to first order, a higher gas dispersion will tend to suppress star formation, since the Toomre instability criterion ($Q < 1$) is less likely to be satisfied; hence the disc is less likely to be unstable to star formation. \\ \section{Conclusions} We have modelled the LSB superthin galaxy UGC 7321 as a gravitationally-coupled system of stars and {\mbox{H\,{\sc i}}} gas, responding to the gravitational potential of the dark-matter halo, and used the observed rotation curve and the {\mbox{H\,{\sc i}}} vertical scale heights as simultaneous constraints to determine the dark halo parameters. We find that the best fit gives a spherical, isothermal halo with a central density in the range of 0.039-0.057 $M$$_{\odot}$ $pc$$^{-3}$ and a core radius of 2.5-2.9 $kpc$. The value of the best-fit core density is comparable to values obtained for HSB galaxies.
The core radius is comparable to that of the disc scale length unlike HSB galaxies studied by this method, implying the importance of the dark-matter halo at small radii in UGC 7321. Thus we find that UGC 7321 is dark matter dominated at all radii, and the results of our analysis support the idea that the thinnest of the galaxies are the most dark matter dominated.\\
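As a quantitative footnote to the stability argument of \S 5.2, the first-order effect of the gas dispersion on the Toomre parameter $Q = \sigma \kappa / (\pi G \Sigma)$ can be sketched as follows. All inputs are illustrative rather than fitted values: a flat rotation curve of $\sim$100 $km$$s^{-1}$ (so that $\kappa = \sqrt{2}\,v/R$) evaluated at 4 $R_{d}$, and the inner-disc gas surface density of 5 $M_{\odot}$ $pc^{-2}$:

```python
import math

G = 4.301e-3    # gravitational constant in pc M_sun^-1 (km/s)^2

def toomre_Q_gas(sigma_v, Sigma_gas, v_flat=100.0, R_pc=8400.0):
    """Q = sigma_v * kappa / (pi G Sigma), with kappa = sqrt(2) v / R for a
    flat rotation curve. Illustrative inputs only (not the paper's fit)."""
    kappa = math.sqrt(2.0) * v_flat / R_pc     # epicyclic frequency, km/s/pc
    return sigma_v * kappa / (math.pi * G * Sigma_gas)

for sig in (7.0, 9.0, 10.8):
    print(sig, round(toomre_Q_gas(sig, 5.0), 2))
```

Since $Q$ scales linearly with $\sigma$, raising the dispersion from 7 to 9-11 $km$$s^{-1}$ pushes the gas disc further from the $Q < 1$ instability regime, in line with the argument above.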
\section{Introduction} The asymptotic Borel map sends a function, admitting an asymptotic expansion in a sectorial region, into the formal power series providing such expansion. In many instances it is important to decide about the injectivity and surjectivity of this map when considered between so-called Carleman-Roumieu ultraholomorphic classes and the corresponding class of formal series, defined by restricting the growth of some of the characteristic data of their elements (the derivatives of the functions, the remainders in the expansion, or the coefficients of the series) in terms of a given weight sequence $\M=(M_p)_{p\in\N_0}$ of positive real numbers (see Subsection~\ref{subsectCarlemanclasses} for the definition of such classes). While the injectivity has been fully characterized for sectorial regions and general weight sequences~\cite{Mandelbrojt,Salinas,JimenezSanzSchindlInjectSurject}, the surjectivity problem is still under study. The classical Borel-Ritt-Gevrey theorem of B.~Malgrange and J.-P.~Ramis~\cite{Ramis1}, solving the case of Gevrey asymptotics (for which $\M=(p!^{\alpha})_{p\in\N_0}$, $\alpha>0$), was partially extended to different more general situations by J.~Schmets and M.~Valdivia~\cite{SchmetsValdivia00}, V.~Thilliez~\cite{Thilliez95,Thilliez03} and the authors~\cite{SanzFlatProxOrder,JimenezSanzSchindlInjectSurject,JimenezSanzSchindlSurjectDC}. Summing up, it is known now that the strong nonquasianalyticity condition (snq) for $\M$, equivalent to the fact that the index $\gamma(\M)$ introduced by V. Thilliez is positive (see Subsection~\ref{subsectIndexGammaM}), is indeed necessary for surjectivity. 
Moreover, for an unbounded sector $S_{\gamma}$ of opening $\pi\gamma$ ($\gamma>0$) in the Riemann surface of the logarithm and for regular weight sequences in the sense of E.~M.~Dyn'kin~\cite{Dynkin80} (see Subsection~\ref{subsectstrregseq} for the definitions), the Borel map is surjective whenever $\gamma<\gamma(\M)$, while it is not for $\gamma>\gamma(\M)$ (the situation for $\gamma=\gamma(\M)$ is still unclear in general). It is important to note that the proof in this more general situation is not constructive, but rests on the characterization, by abstract functional-analytic techniques, of the surjectivity of the Stieltjes moment mapping in Gelfand-Shilov spaces defined by regular sequences, due to A. Debrouwere~\cite{momentsdebrouwere}. This information is transferred into the asymptotic framework in a halfplane by means of the Fourier transform, and in~\cite{JimenezSanzSchindlSurjectDC} Laplace and Borel transforms of arbitrary order allow one to conclude for general sectors. However, in the particular case of classes given by strongly regular sequences in the sense of V. Thilliez, the proof of surjectivity of the Borel map~\cite{Thilliez03} rests on the construction of optimal flat functions in suitable sectors and a double application of Whitney extension results. Subsequently, A. Lastra, S. Malek and the third author~\cite{LastraMalekSanzContinuousRightLaplace} reproved surjectivity in a more explicit way by means of formal Borel-like and truncated Laplace-like transforms, defined from suitable kernel functions obtained from those optimal flat functions. The aim of this paper is to study the connection between the existence of such optimal flat functions in a sector and the surjectivity of the Borel map in it, as well as to provide a constructive technique for such extension results.
For strongly regular sequences an equivalence is obtained between these two facts; however, for regular sequences we will get surjectivity whenever optimal flat functions are available, but no general procedure for their construction is currently known. Nonetheless, we will present a family of (non strongly) regular sequences for which such optimal flat functions can be provided in any sector of the Riemann surface of the logarithm, which agrees with the fact that the index $\gamma(\M)$ is in this case equal to $\infty$. We end by showing how optimal flat functions and extension results can be obtained for convolved sequences, in case the factor sequences admit such constructions separately. Some examples are discussed in connection with this technique. The paper is organized as follows. Section~\ref{sectPrelimin} contains the preliminary information concerning weight sequences and some indices and auxiliary functions associated with them, and the main facts about ultraholomorphic classes and the (asymptotic) Borel map defined on them. In Section~\ref{sectFlatFunctions} we define optimal flat functions and carefully detail how their existence entails the surjectivity of the Borel map in ultraholomorphic classes defined by regular sequences. In the particular case of sequences of moderate growth, different statements are presented relating the property of strong non-quasianalyticity to the existence of such flat functions. In Section~\ref{sectConstrOptFlatNon_SR_Seq} we give a family of sequences (among which the classical $q$-Gevrey sequences are found) for which optimal flat functions can be constructed. We need to work first in $\C\setminus(-\infty,0]$, and then apply a ramification in order to reason for arbitrary sectors in the Riemann surface of the logarithm. Finally, the last section is devoted to the work with convolved sequences.
\section{Preliminaries}\label{sectPrelimin} \subsection{Weight sequences and their properties}\label{subsectstrregseq} We set $\N:=\{1,2,...\}$, $\N_{0}:=\N\cup\{0\}$. In what follows, $\bM=(M_p)_{p\in\N_0}$ will always stand for a sequence of positive real numbers with $M_0=1$. We define its {\it sequence of quotients} $\m=(m_p)_{p\in\N_0}$ by $m_p:=M_{p+1}/M_p$, $p\in \N_0$; the knowledge of $\M$ amounts to that of $\m$, since $M_p=m_0\cdots m_{p-1}$, $p\in\N$. The following properties for a sequence will play a role in this paper:\par (i) $\M$ is \emph{logarithmically convex} (for short, (lc)) if $M_{p}^{2}\le M_{p-1}M_{p+1}$, $p\in\N$.\par (ii) $\M$ is \emph{stable under differential operators} or satisfies the \emph{derivation closedness condition} (briefly, (dc)) if there exists $D>0$ such that $M_{p+1}\leq D^{p+1} M_{p}$, $p\in\N_{0}$.\par (iii) $\M$ is of, or has, \emph{moderate growth} (for the sake of brevity, (mg)) if there exists $A>0$ such that $M_{p+q}\le A^{p+q}M_{p}M_{q}$, $p,q\in\N_0$.\par (iv) $\M$ satisfies the condition (nq) if $$\sum_{p=0}^{\infty}\frac{M_{p}}{(p+1)M_{p+1}}<+\infty. $$ \indent (v) Finally, $\M$ satisfies the condition (snq) if there exists $B>0$ such that $$ \sum^\infty_{q= p}\frac{M_{q}}{(q+1)M_{q+1}}\le B\frac{M_{p}}{M_{p+1}},\qquad p\in\N_0.$$ It is convenient to introduce the notation $\hM:=(p!M_p)_{p\in\N_0}$. All these properties are preserved when passing from $\M$ to $\hM$. In the classical work of H.~Komatsu~\cite{komatsu}, the properties (lc), (dc) and (mg) are denoted by $(M.1)$, $(M.2)'$ and $(M.2)$, respectively, while (snq) for $\M$ is the same as property $(M.3)$ for $\widehat{\M}$. Obviously, (mg) implies (dc). The sequence of quotients $\m$ is nondecreasing if and only if $\M$ is (lc). In this case, it is well-known that $(M_p)^{1/p}\leq m_{p-1}$ for every $p\in\N$, the sequence $((M_p)^{1/p})_{p\in\N}$ is nondecreasing, and $\lim_{p\to\infty} (M_p)^{1/p}= \infty$ if and only if $\lim_{p\to\infty} m_p= \infty$. 
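As a quick numerical illustration (a Python sketch, not part of the formal development), the defining inequalities above can be tested on logarithms of concrete sequences; the check below confirms that the sequence $M_p=q^{p^{2}}$ is (lc), while (mg) fails for it because $\log(M_{2p}/M_p^2)=2p^2\log q$ grows quadratically in $p$ and so cannot be bounded by $2p\log A$ for any fixed $A$:

```python
import math

def log_qgevrey(p, q=2.0, sigma=2.0):
    """log M_p for the sequence M_p = q^(p^sigma)."""
    return (p ** sigma) * math.log(q)

# (lc): 2 log M_p <= log M_{p-1} + log M_{p+1} for all p >= 1
# (here: 2 p^2 <= (p-1)^2 + (p+1)^2 = 2 p^2 + 2, always true).
lc_ok = all(2 * log_qgevrey(p) <= log_qgevrey(p - 1) + log_qgevrey(p + 1) + 1e-9
            for p in range(1, 60))

# (mg) would need log M_{2p} - 2 log M_p <= 2 p log A for some fixed A;
# for this sequence the left-hand side equals 2 p^2 log q.
excess = [log_qgevrey(2 * p) - 2 * log_qgevrey(p) for p in (5, 10, 20)]
print(lc_ok, [round(e, 2) for e in excess])   # quadratic growth of the excess
```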
In order to avoid trivial situations, we will restrict from now on to (lc) sequences $\M$ such that $\lim_{p\to\infty} m_p =\infty$, which will be called \emph{weight sequences}. Following E.~M.~Dyn'kin~\cite{Dynkin80}, if $\M$ is a weight sequence and satisfies (dc), we say $\hM$ is \emph{regular}. According to V.~Thilliez~\cite{Thilliez03}, if $\M$ satisfies (lc), (mg) and (snq), we say $\M$ is \emph{strongly regular}; in this case $\M$ is a weight sequence, and the corresponding $\hM$ is regular. We mention some interesting examples. In particular, those in (i) and (iii) appear in the applications of summability theory to the study of formal power series solutions for different kinds of equations. \begin{enumerate}[(i)] \item The sequences $\M_{\al,\be}:=\big(p!^{\al}\prod_{m=0}^p\log^{\be}(e+m)\big)_{p\in\N_0}$, where $\al>0$ and $\be\in\R$, are strongly regular (in case $\be<0$, the first terms of the sequence have to be suitably modified in order to ensure (lc)). In case $\be=0$, we have the best known example of a strongly regular sequence, $\M_{\al}:=\M_{\al,0}=(p!^{\al})_{p\in\N_{0}}$, called the \emph{Gevrey sequence of order $\al$}. \item The sequence $\M_{0,\be}:=(\prod_{m=0}^p\log^{\be}(e+m))_{p\in\N_0}$, with $\be>0$, satisfies (lc) and (mg), and $\m$ tends to infinity, but (snq) is not satisfied. \item For $q>1$ and $1<\sigma\le 2$, $\M_{q,\sigma}:=(q^{p^\sigma})_{p\in\N_0}$ satisfies (lc), (dc) and (snq), but not (mg). In case $\sigma=2$, we get the well-known \emph{$q$-Gevrey sequence}. \end{enumerate} Two sequences $\bM=(M_{p})_{p\in\N_0}$ and $\bL=(L_{p})_{p\in\N_0}$ of positive real numbers, with $M_0=L_0=1$ and with respective quotient sequences $\m$ and $\bl$, are said to be \emph{equivalent}, and we write $\M\approx\bL$, if there exist positive constants $A,B$ such that $A^pM_p\le L_p\le B^pM_p$, $p\in\N_0$. 
They are said to be \emph{strongly equivalent}, denoted by $\m\simeq\bl$, if there exist positive constants $a,b$ such that $am_p\le \ell_p\le bm_p$, $p\in\N_0$. Whenever $\m\simeq\bl$ we have $\M\approx\bL$, but not conversely. In case $M_0$ or $L_0$ is not equal to 1, the previous definitions of equivalence are meant to deal with the normalized sequences $(M_{p}/M_0)_{p\in\N_0}$ or $(L_{p}/L_0)_{p\in\N_0}$. \subsection{Index $\ga(\M)$ and auxiliary functions for weight sequences}\label{subsectIndexGammaM} The index $\gamma(\M)$, introduced by V.~Thilliez~\cite[Sect.\ 1.3]{Thilliez03} for strongly regular sequences $\M$, can be equally defined for (lc) sequences, and it may be equivalently expressed by different conditions: \begin{enumerate}[(i)] \item A sequence $(c_p)_{p\in\N_0}$ is \emph{almost increasing} if there exists $a>0$ such that for every $p\in\N_0$ we have that $c_p\leq a c_q $ for every $ q\geq p$. It was proved in~\cite{JimenezSanzSRSPO,JimenezSanzSchindlIndex} that for any weight sequence $\M$ one has \begin{equation}\label{equa.indice.gammaM.casicrec} \gamma(\bM)=\sup\{\gamma>0:(m_{p}/(p+1)^\gamma)_{p\in\N_0}\hbox{ is almost increasing} \}\in[0,\infty]. \end{equation} \item For any $\be>0$ we say that $\m$ satisfies the condition $(\gamma_{\be})$ if there exists $A>0$ such that \begin{equation* \sum^\infty_{\ell=p} \frac{1}{(m_\ell)^{1/\be}}\leq \frac{A (p+1) }{(m_p)^{1/\be}}, \qquad p\in\N_0.\tag{$\gamma_\be$} \end{equation*} In~\cite{PhDJimenez,JimenezSanzSchindlIndex} it is proved that for a weight sequence $\M$, \begin{equation}\label{equa.indice.gammaM.gamma_r} \gamma(\M)=\sup\{\be>0; \,\, \m \,\, \text{satisfies } (\gamma_{\be})\,\};\quad \gamma(\M)>\be\iff\m\text{ satisfies }(\gamma_\be). 
\end{equation} \end{enumerate} If we observe that the condition (snq) for $\M$ is precisely $(\gamma_1)$ for $\widehat{m}$, the sequence of quotients for $\hM$, and that $\ga(\hM)=\ga(\M)+1$ (this is clear from~\eqref{equa.indice.gammaM.casicrec}), we deduce from the second statement in~\eqref{equa.indice.gammaM.gamma_r} that \begin{equation}\label{eq.snqiffgammaMpositive} \textrm{$\M$ satisfies (snq) if, and only if, }\ga(\M)>0. \end{equation} Given a weight sequence $\M=(M_p)_{p\in\N_0}$, we write $$\o_{\M}(t):=\sup_{p\in\N_0} \ln\left(\frac{t^{p}}{M_{p}}\right),\qquad t>0,$$ and $\o_{\M}(0)=0$. This is the classical function associated with the sequence $\M$, see~\cite{komatsu}. Another associated function will play a key-role, namely \begin{equation* h_\M(t):=\inf_{p\in\N_0}M_p t^p, \quad t>0. \end{equation*} The functions $h_\M$ and $\omega_\M$ are related by \begin{equation}\label{functionhequ2} h_\M(t)=\exp(-\omega_\M(1/t)),\quad t>0. \end{equation} In~\cite[Prop.~3.2]{komatsu} we find that, for a weight sequence $\M$, \begin{equation}\label{eq.MpfromomegaM} M_p=\sup_{t>0}t^p\exp(-\o_{\M}(t))=\sup_{t>0}t^p h_{\M}(1/t),\quad p\in\N_0. \end{equation} By definition we immediately get the following. \begin{lemma}\label{functionhproperties} Let $\M=(M_p)_{p\in\N_0}$ be a weight sequence, then: \begin{itemize} \item[$(i)$] $t\in(0,\infty)\mapsto h_\M(t)$ is nondecreasing and continuous, \item[$(ii)$] $h_{\M}(t)\le 1$ for all $t>0$, $h_\M(t)=1$ for all $t$ sufficiently large and $\lim_{t\rightarrow 0}h_\M(t)=0$. \end{itemize} \end{lemma} Let us also introduce the counting function $\nu_{\m}$ by \begin{equation}\label{countinfunctionnu} \nu_{\m}(\lambda):=\#\{p\in\N_0: m_p\le\lambda\}. \end{equation} If $\M$ is a weight sequence, then the functions $\nu_{\m}$ and $\omega_{\M}$ are related by the following useful integral representation formula, e.g. 
see \cite[$(3.11)$]{komatsu} and \cite{Mandelbrojt}: \begin{equation}\label{assofuncintegral} \omega_\M(x)= \int_0^{x}\frac{\nu_{\m}(\lambda)}{\lambda}d\lambda= \int_{m_0}^{x}\frac{\nu_{\m}(\lambda)}{\lambda}d\lambda,\quad x>0. \end{equation} \subsection{Asymptotic expansions, ultraholomorphic classes and the asymptotic Borel map}\label{subsectCarlemanclasses} $\mathcal{R}$ stands for the Riemann surface of the logarithm. $\C[[z]]$ is the space of formal power series in $z$ with complex coefficients. For $\gamma>0$, we consider unbounded sectors bisected by direction 0, $$S_{\gamma}:=\{z\in\mathcal{R}:|\hbox{arg}(z)|<\frac{\gamma\,\pi}{2}\}$$ or, in general, unbounded sectors with bisecting direction $d\in\R$ and opening $\ga\,\pi$, $$S(d,\ga):=\{z\in\mathcal{R}:|\hbox{arg}(z)-d|<\frac{\ga\,\pi}{2}\}.$$ A sector $T$ is said to be a \emph{proper subsector} of a sector $S$ if $\overline{T}\subset S$ (where the closure of $T$ is taken in $\mathcal{R}$, and so the vertex of the sector is not under consideration). In this paragraph $S$ is an unbounded sector and $\M$ a sequence. We start by recalling the concept of uniform asymptotic expansion. We say a holomorphic function $f\colon S\to\C$ admits $\widehat{f}=\sum_{n=0}^{\infty}a_nz^n\in\C[[z]]$ as its \emph{uniform $\M$-asymptotic expansion in $S$ (of type $1/A$ for some $A>0$)} if there exists $C>0$ such that for every $p\in\N_0$, one has \begin{equation}\left|f(z)-\sum_{n=0}^{p-1}a_nz^n \right|\le CA^pM_{p}|z|^p,\qquad z\in S.\label{desarasintunifo} \end{equation} In this case we write $f\sim_{\M,A}^u\widehat{f}$ in $S$, and $\widetilde{\mathcal{A}}^u_{\M,A}(S)$ denotes the space of functions admitting uniform $\M$-asymptotic expansion of type $1/A$ in $S$, endowed with the norm \begin{equation}\label{eq.NormUniformAsymptFixedType} \left\|f\right\|_{\M,A,\overset{\sim}{u}}:=\sup_{z\in S,p\in\N_{0}}\frac{|f(z)-\sum_{k=0}^{p-1}a_kz^k|}{A^{p}M_{p}|z|^p}, \end{equation} which makes it a Banach space.
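As an orientation, the following classical computation (carried out for the Gevrey sequence $M_p=p!$, chosen here purely as an illustration and not used in the sequel) exhibits a function whose uniform asymptotic expansion is the null series. For $0<\ga<1$ and $z\in S_{\ga}$ one has $\Re(1/z)\ge c/|z|$ with $c:=\cos(\ga\pi/2)>0$, so for $f(z):=e^{-1/z}$ and every $p\in\N_0$:

```latex
|f(z)|=e^{-\Re(1/z)}\le e^{-c/|z|}
\le \Big(\sup_{x>0}x^{p}e^{-x}\Big)\Big(\frac{|z|}{c}\Big)^{p}
=\frac{p^{p}e^{-p}}{c^{p}}\,|z|^{p}
\le \frac{p!}{c^{p}}\,|z|^{p},\qquad z\in S_{\ga},
```

where the second inequality comes from evaluating $x^{p}e^{-x}$ at $x=c/|z|$, and the last one from $p!\ge p^{p}e^{-p}$. Thus $f\sim^u_{\M,1/c}\widehat{0}$ in $S_{\ga}$ for $\M=(p!)_{p\in\N_0}$, i.e., $f$ is a nontrivial flat element of the corresponding class $\widetilde{\mathcal{A}}^u_{\{\M\}}(S_{\ga})$.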
$\widetilde{\mathcal{A}}^u_{\{\M\}}(S)$ stands for the $(LB)$ space of functions admitting a uniform $\{\M\}$-asymptotic expansion in $S$, obtained as the union of the previous classes when $A$ runs over $(0,\infty)$. When the type need not be specified, we simply write $f\sim_{\{\M\}}^u\widehat{f}$ in $S$. Note that, taking $p=0$ in~\eqref{desarasintunifo}, we deduce that every function in $\widetilde{\mathcal{A}}^u_{\{\M\}}(S)$ is a bounded function. Finally, we define for every $A>0$ the class $\mathcal{A}_{\M,A}(S)$ consisting of the holomorphic functions $f$ in $S$ such that $$ \left\|f\right\|_{\M,A}:=\sup_{z\in S,p\in\N_{0}}\frac{|f^{(p)}(z)|}{A^{p}M_{p}}<\infty. $$ ($\mathcal{A}_{\M,A}(S),\left\|\,\cdot\, \right\| _{\M,A}$) is a Banach space, and $\mathcal{A}_{\{\M\}}(S):=\cup_{A>0}\mathcal{A}_{\M,A}(S)$ is called a \emph{Carleman-Roumieu ultraholomorphic class} in the sector $S$, whose natural inductive topology makes it an $(LB)$ space.\par We warn the reader that these notations, while the same as in the paper~\cite{JimenezSanzSchindlSurjectDC}, do not agree with the ones used in~\cite{SanzFlatProxOrder,JimenezSanzSchindlInjectSurject}, where $\widetilde{\mathcal{A}}^u_{\{\M\}}(S)$ was denoted by $\widetilde{\mathcal{A}}^u_{\M}(S)$, $\mathcal{A}_{\M,A}(S)$ by $\mathcal{A}_{\M/\bL_1,A}(S)$, and $\mathcal{A}_{\{\M\}}(S)$ by $\mathcal{A}_{\M/\bL_1}(S)$. If $\M$ is (lc), the spaces $\mathcal{A}_{\{\M\}}(S)$ and $\widetilde{\mathcal{A}}^u_{\{\M\}}(S)$ are algebras, and if $\M$ is (dc) they are stable under taking derivatives. Moreover, if $\M\approx\bL$ the corresponding classes coincide. Since the derivatives of $f\in\mathcal{A}_{\M,A}(S)$ are Lipschitz, for every $p\in\N_{0}$ one may define \begin{equation}\label{eq.deriv.at.0.def} f^{(p)}(0):=\lim_{z\in S,z\to0 }f^{(p)}(z)\in\C.
\end{equation} As a consequence of Taylor's formula and Cauchy's integral formula for the derivatives, there is a close relation between Carleman-Roumieu ultraholomorphic classes and the concept of asymptotic expansion (the proof may be easily adapted from~\cite{balserutx}). \begin{prop}\label{propcotaderidesaasin} Let $\M$ be a sequence and $S$ be a sector. Then, \begin{enumerate}[(i)] \item If $f\in\mathcal{A}_{\hM,A}(S)$ then $f$ admits $\widehat{f}:=\sum_{p\in\N_0}\frac{1}{p!}f^{(p)}(0)z^p$ as its uniform $\M$-asymptotic expansion in $S$ of type $1/A$, where $(f^{(p)}(0))_{p\in\N_0}$ is given by \eqref{eq.deriv.at.0.def}. Moreover, $\|f\|_{\M,A,\overset{\sim}{u}}\le \|f\|_{\hM,A}$, and so the identity map $\mathcal{A}_{\hM,A}(S)\hookrightarrow \widetilde{\mathcal{A}}^u_{\M,A}(S)$ is continuous. Consequently, we also have that $\mathcal{A}_{\{\hM\}}(S)\subseteq \widetilde{\mathcal{A}}^u_{\{\M\}}(S)$ and $\mathcal{A}_{\{\hM\}}(S)\hookrightarrow \widetilde{\mathcal{A}}^u_{\{\M\}}(S)$ is continuous. \item If $S$ is unbounded and $T$ is a proper subsector of $S$, then there exists a constant $c=c(T,S)>0$ such that the restriction to $T$, $f|_T$, of functions $f$ defined on $S$ and admitting a uniform $\M$-asymptotic expansion in $S$ of type $1/A>0$, belongs to $\mathcal{A}_{\hM,cA}(T)$, and $\|f|_T\|_{\hM,cA}\le \|f\|_{\M,A,\overset{\sim}{u}}$. So, the restriction map from $\widetilde{\mathcal{A}}^u_{\M,A}(S)$ to $\mathcal{A}_{\hM,cA}(T)$ is continuous, and it is also continuous from $\widetilde{\mathcal{A}}^u_{\{\M\}}(S)$ to $\mathcal{A}_{\{\hM\}}(T)$. \end{enumerate} \end{prop} One may accordingly define classes of formal power series \begin{equation}\label{eq.defBanachFormalPowerSeries} \C[[z]]_{\M,A}=\Big\{\widehat{f}=\sum_{p=0}^\infty a_pz^p\in\C[[z]]:\, \left|\,\widehat{f} \,\right|_{\M,A}:=\sup_{p\in\N_{0}}\displaystyle \frac{|a_{p}|}{A^{p}M_{p}}<\infty\Big\}. 
\end{equation} $(\C[[z]]_{\M,A},\left| \,\cdot\, \right|_{\M,A})$ is a Banach space and we put $\C[[z]]_{\{\M\}}:=\cup_{A>0}\C[[z]]_{\M,A}$, again an $(LB)$ space. It is natural to consider the \textit{asymptotic Borel map} $\widetilde{\mathcal{B}}$ sending a function $f\in\widetilde{\mathcal{A}}^u_{\M,A}(S)$ into its $\M$-asymptotic expansion $\widehat{f}\in\C[[z]]_{\M,A}$. By Proposition~\ref{propcotaderidesaasin}.(i) the asymptotic Borel map may be defined from $\widetilde{\mathcal{A}}^u_{\{\M\}}(S)$ or $\mathcal{A}_{\{\hM\}}(S)$ into $\C[[z]]_{\{\M\}}$, and from $\mathcal{A}_{\hM,A}(S)$ into $\C[[z]]_{\M,A}$. If $\M$ is (lc), $\widetilde{\mathcal{B}}$ is a homomorphism of algebras; if $\M$ is also (dc), differentiation commutes with $\widetilde{\mathcal{B}}$. Moreover, it is continuous when considered between the corresponding Banach or $(LB)$ spaces previously introduced. Finally, note that if $\M\approx\bL$, then $\C[[z]]_{\{\M\}}=\C[[z]]_{\{\bL\}}$, and the corresponding Borel maps are in all cases identical. Since the problem under study is invariant under rotation, we will focus on the surjectivity of the Borel map in unbounded sectors $S_{\gamma}$. So, we define \begin{align*} S_{\{\hM\}}:=&\{\gamma>0; \quad \widetilde{\mathcal{B}}:\mathcal{A}_{\{\hM\}}(S_\gamma)\longrightarrow \C[[z]]_{\{\M\}} \text{ is surjective}\} ,\\ \widetilde{S}^u_{\{\M\}}:=&\{\gamma>0; \quad\widetilde{\mathcal{B}}:\widetilde{\mathcal{A}}^u_{\{\M\}}(S_\gamma)\longrightarrow \C[[z]]_{\{\M\}} \text{ is surjective}\}. \end{align*} We again note that these intervals were respectively denoted by $S_{\M}$ and $\widetilde{S}^u_{\M}$ in~\cite{JimenezSanzSchindlInjectSurject}.\par It is clear that $S_{\{\hM\}}$ and $\widetilde{S}^u_{\{\M\}}$ are either empty or left-open intervals having $0$ as endpoint, called \emph{surjectivity intervals}.
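To fix ideas, consider the Gevrey sequence $\M_\alpha:=(p!^{\alpha})_{p\in\N_0}$, $\alpha>0$ (an illustration only, not needed later). Its quotients are $m_p=(p+1)^{\alpha}$, so $m_p/(p+1)^{\gamma}=(p+1)^{\alpha-\gamma}$ is nondecreasing for $\gamma\le\alpha$ and tends to $0$ (hence is not almost increasing) for $\gamma>\alpha$, and \eqref{equa.indice.gammaM.casicrec} gives

```latex
\gamma(\M_\alpha)=\sup\{\gamma>0:((p+1)^{\alpha-\gamma})_{p\in\N_0}\hbox{ is almost increasing}\}=\alpha.
```

Since $\hM_\alpha$ is a regular sequence, the results of the next section then locate both surjectivity intervals for $\M_\alpha$ between $(0,\alpha)$ and $(0,\alpha]$.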
Using~Proposition~\ref{propcotaderidesaasin}, we easily see that \begin{align} (\widetilde{S}^u_{\{\M\}})^{\circ}\subseteq S_{\{\hM\}}\subseteq \widetilde{S}^u_{\{\M\}}, \label{equaContentionSurjectIntervals} \end{align} where $I^{\circ}$ stands for the interior of the interval $I$. \section{Optimal flat functions in ultraholomorphic classes}\label{sectFlatFunctions} The following result appeared, in a slightly different form, in~\cite[Lemma~4.5]{JimenezSanzSchindlInjectSurject}. \begin{lemma}\label{lemma.SurjectImpliesgammaMpositive} Let $\M$ be a weight sequence. If $\widetilde{S}^u_{\{\M\}}\neq\emptyset$, then $\M$ satisfies (snq) or, equivalently, $\ga(\M) >0$. \end{lemma} Subsequently, a converse, more precise statement appeared in~\cite[Th.~3.7]{JimenezSanzSchindlSurjectDC} under the additional hypothesis of condition (dc). \begin{theo}\label{teorSurject.dc} Let $\hM$ be a regular sequence such that $\gamma(\M)>0$. Then, $$(0,\gamma(\M))\subseteq S_{\{\hM\}}\subseteq \widetilde{S}^u_{\{\M\}}\subseteq (0,\gamma(\M)]. $$ In particular, if $\gamma(\M)= \infty$, we have that $S_{\{\hM\}}= \widetilde{S}^u_{\{\M\}}= (0,\infty)$. \end{theo} So, the surjectivity of the Borel map for regular sequences is governed by the value of the index $\ga(\M)$. Our aim is to relate surjectivity in a sector to the existence of optimal flat functions in it, which we now define. \begin{defi}\label{optimalflatdef} Let $\M$ be a weight sequence, $S$ an unbounded sector bisected by direction $d=0$, i.e., by the positive real line $(0,+\infty)\subset\mathcal{R}$. A holomorphic function $G\colon S\to\C$ is called an \emph{optimal $\{\M\}$-flat function} in $S$ if: \begin{itemize} \item[$(i)$] $G(x)>0$ for all $x>0$, \item[$(ii)$] There exist $K_1,K_2,K_3,K_4>0$ such that for all $z\in S$, one has \begin{equation}\label{optimalflat} K_1h_{\M}(K_2|z|)\le|G(z)|\le K_3h_{\M}(K_4|z|).
\end{equation} \end{itemize} \end{defi} Besides the positivity imposed by condition $(i)$, we note that the estimates in the right-hand inequality from~\eqref{optimalflat} amount to the fact that $|G(z)|\le K_3K_4^pM_p|z|^p$ for every $p\in\N_0$ and every $z\in S$, which means exactly that $G\in\widetilde{\mathcal{A}}^u_{\{\M\}}(S)$ and is \emph{$\{\M\}$-flat}, i.e., its uniform $\{\M\}$-asymptotic expansion is given by the null series. The left-hand inequality imposed in~\eqref{optimalflat} makes the function optimal in a sense, as its rate of decrease at 0 is accurately specified by the function $h_{\M}$. If $G$ is an optimal $\{\M\}$-flat function in $\widetilde{\mathcal{A}}^u_{\{\M\}}(S)$, the kernel function $e\colon S\to\C$ given by \begin{equation*} e(z):=G\left(\frac{1}{z}\right),\quad z\in S, \end{equation*} is such that $e(x)>0$ for all $x>0$, and there exist $K_1,K_2,K_3,K_4>0$ such that \begin{equation}\label{eq.Bounds_e_sector} K_1h_{\M}\left(\frac{K_2}{|z|}\right)\le |e(z)|\le K_3h_{\M}\left(\frac{K_4}{|z|}\right),\quad z\in S. \end{equation} For every $p\in\mathbb{N}_0$ we define the $p$-th moment of the function $e$ by \begin{equation*} \mo(p):=\int_0^\infty t^{p}e(t)\,dt. \end{equation*} Note that the positive value $\mo(0)$ need not be equal to 1. The following result is crucial for our aim. \begin{prop}\label{prop.boundsMomentsSigmaEntre1y2} Suppose $\hM$ is a regular sequence and $G$ is an optimal $\{\M\}$-flat function in $\widetilde{\mathcal{A}}^u_{\{\M\}}(S)$. Consider the sequence of moments $\mo:=(\mo(p))_{p\in\N_0}$ associated with the kernel function $e(z)=G(1/z)$. Then, there exist $B_1,B_2>0$ such that \begin{equation}\label{eq.boundsMmomentsdc} \mo(0)B_1^{p}M_p\le \mo(p)\le \mo(0)B_2^{p}M_p,\quad p\in\mathbb{N}_0. \end{equation} In other words, $\M$ and $\mo$ are equivalent.
\end{prop} \begin{proof1} On the one hand, because of the right-hand inequalities in~\eqref{eq.Bounds_e_sector} and Lemma~\ref{functionhproperties}.(ii), for every $p\in\N_0$ and $s>0$ we may write \begin{align*} \mo(p)&=\int_0^s t^p e(t)\,dt +\int_s^\infty \frac{1}{t^2}t^{p+2}e(t)\,dt\\ &\le K_3 \int_0^s t^p\,dt + K_3\sup_{t>0}t^{p+2}h_\M\left(\frac{K_4}{t}\right)\int_s^\infty \frac{1}{t^2}\,dt \\ &= K_3 \frac{s^{p+1}}{p+1} +K_3\frac{1}{s}K_4^{p+2}M_{p+2} \le K_3 \left(\frac{s^{p+1}}{p+1} +\frac{(K_4D)^{p+2}M_{p}}{s}\right). \end{align*} Note that in the last equality we have used \eqref{eq.MpfromomegaM}, and then we have applied (dc) with a suitable constant $D>0$. Since $s>0$ was arbitrary, we finally get \begin{equation*} \mo(p)\le \inf_{s>0}K_3 \left(\frac{s^{p+1}}{p+1} +\frac{(K_4D)^{p+2}M_{p}}{s}\right)= K_3\frac{p+2}{p+1}(K_4D)^{p+1}(M_{p})^{(p+1)/(p+2)} \le \mo(0)B_2^pM_p \end{equation*} for a suitably enlarged constant $B_2>0$ (observe that, eventually, $M_p\ge 1$). On the other hand, by the left-hand inequalities in~\eqref{eq.Bounds_e_sector} and Lemma~\ref{functionhproperties}.(i), for every $p\in\N_0$ and $s>0$ we may estimate $$ \mo(p)\ge \int_0^s t^p e(t)\,dt \ge K_1 \int_0^s t^p h_{\M}\left(\frac{K_2}{t}\right)\,dt\ge K_1 h_{\M}\left(\frac{K_2}{s}\right)\frac{s^{p+1}}{p+1}. $$ Then, again by~\eqref{eq.MpfromomegaM}, we deduce that $$ \mo(p)\ge \frac{K_1}{p+1} \sup_{s>0}h_{\M}\left(\frac{K_2}{s}\right)s^{p+1}= \frac{K_1}{p+1} K_2^{p+1}M_{p+1}\ge \mo(0)B_1^p M_p $$ for a suitable constant $B_1>0$ (note that $\M$ is eventually nondecreasing). \end{proof1} We can already state the following main result. The forthcoming implication $(i)\Rightarrow(iv)$ for strongly regular sequences $\M$ was first obtained by V. Thilliez~\cite[Th.~3.2.1]{Thilliez03}, and the proof heavily rested on the moderate growth condition for $\M$ because of the use of Whitney extension results in the ultradifferentiable setting.
In~\cite{LastraMalekSanzContinuousRightLaplace} the implication $(i)\Rightarrow(ii)$ was proved again for strongly regular sequences, but with a completely different technique, and it is this approach which allows here for the weakening of condition (mg) into (dc). \begin{theo}\label{tpral} Let $\hM$ be a regular sequence and $\ga>0$. Each of the following statements implies the next one: \begin{itemize} \item[(i)] There exists an optimal $\{\M\}$-flat function in $\widetilde{\mathcal{A}}^u_{\{\M\}}(S_{\ga})$. \item[(ii)] There exists $c>0$ such that for every $A>0$ there exists a linear continuous map $T_{\M,A}\colon\C[[z]]_{\M,A}\to \widetilde{\mathcal{A}}^u_{\M,cA}(S_{\ga})$ such that $\widetilde{\mathcal{B}}\circ T_{\M,A}$ is the identity map in $\C[[z]]_{\M,A}$ (i.e., $T_{\M,A}$ is an extension operator, right inverse for $\widetilde{\mathcal{B}}$). \item[(iii)] The Borel map $\widetilde{\mathcal{B}}\colon \widetilde{\mathcal{A}}^u_{\{\M\}}(S_{\ga})\to\C[[z]]_{\{\M\}}$ is surjective. In other words, $(0,\ga]\subset\widetilde{S}_{\{\M\}}^u$. \item[(iv)] $(0,\ga)\subset S_{\{\hM\}}$. \item[(v)] $\ga\le\ga(\M)$. \end{itemize} \end{theo} \begin{proof1} $(i)\Rightarrow (ii)$ Let $A>0$ and $\widehat{f}=\sum_{p=0}^\infty a_pz^p\in\C[[z]]_{\M,A}$ be given. Let $(\mo(p))_{p\in\N_0}$ be the sequence of moments associated to the function $e(z)=G(1/z)$, where $G$ is an optimal $\{\M\}$-flat function in $\widetilde{\mathcal{A}}^u_{\{\M\}}(S_{\ga})$. By the definition of the norm in $\C[[z]]_{\M,A}$ (see~\eqref{eq.defBanachFormalPowerSeries}), we have \begin{equation*} |a_p|\le |\widehat{f}|_{\M,A}A^{p}M_p,\quad p\in\N_0. \end{equation*} From the left-hand inequalities in~\eqref{eq.boundsMmomentsdc}, we deduce that \begin{equation}\label{eq.boundsCoeffBorelTransf} \left|\frac{a_p}{\mo(p)}\right|\le \frac{|\widehat{f}|_{\M,A}}{\mo(0)}\left(\frac{A}{B_1}\right)^{p},\quad p\in\N_0. 
\end{equation} Hence, the formal Borel-like transform of $\widehat{f}$, $$ \widehat{g}=\sum_{p=0}^\infty\frac{a_p}{\mo(p)}z^p, $$ is convergent in the disc $D(0,R)$ for $R=B_1/A>0$, and it defines a holomorphic function $g$ there. Choose $R_0:=B_1/(2A)<R$, and define \begin{equation*} T_{\M,A}(\widehat{f}\,)(z):=\frac{1}{z}\int_{0}^{R_0}e\left(\frac{u}{z}\right)g(u)\,du,\qquad z\in S_{\ga}, \end{equation*} which is a truncated Laplace-like transform of $g$ with kernel $e$. By virtue of Leibniz's theorem for parametric integrals and the properties of $e$, we deduce that this function, denoted by $f$ for the sake of brevity, is holomorphic in $S_{\ga}$. We will prove that $f\sim^u_{\{\M\}}\widehat{f}$ uniformly in $S_{\ga}$, and that the map $\widehat{f}\mapsto f$, which is obviously linear, is also continuous from $\C[[z]]_{\M,A}$ into $\widetilde{\mathcal{A}}^u_{\M,cA}(S_{\ga})$ for suitable $c>0$ independent of $A$. Let $p\in\N_0$ and $z\in S_{\ga}$. We have \begin{align*} f(z)-\sum_{n=0}^{p-1}a_nz^n &= f(z)-\sum_{n=0}^{p-1}\frac{a_n}{\mo(n)}\mo(n)z^n\\ &= \frac{1}{z}\int_{0}^{R_0}e \left(\frac{u}{z}\right) \sum_{n=0}^{\infty}\frac{a_{n}}{\mo(n)}u^n\,du -\sum_{n=0}^{p-1}\frac{a_n}{\mo(n)}\int_{0}^{\infty}v^{n}e(v)\,dv\, z^n. \end{align*} After a change of variable $u=zv$ in the last integral, one may use Cauchy's residue theorem and the right-hand estimates in~(\ref{eq.Bounds_e_sector}) in order to rotate the path of integration and obtain $$ z^n\int_{0}^{\infty}v^{n}e(v)dv= \frac{1}{z}\int_{0}^{\infty}u^{n}e\left(\frac{u}{z}\right)\,du. $$ So, we can write the preceding difference as \begin{equation*} \frac{1}{z}\left(\int_{0}^{R_0}e\left(\frac{u}{z}\right) \sum_{n=p}^{\infty}\frac{a_{n}}{\mo(n)}u^n\,du -\int_{R_0}^{\infty}e\left(\frac{u}{z}\right) \sum_{n=0}^{p-1}\frac{a_n}{\mo(n)}u^{n}\,du\right).
\end{equation*} Then, we have \begin{equation}\label{eq.RemainderTwoSums} \left|f(z)-\sum_{n=0}^{p-1}a_nz^n\right|\le \frac{1}{|z|}(f_{1}(z)+f_2(z)), \end{equation} where $$ f_{1}(z)=\left|\int_{0}^{R_0}e \left(\frac{u}{z}\right) \sum_{n=p}^{\infty}\frac{a_{n}}{\mo(n)}u^n \,du\right|,\quad f_{2}(z)=\left|\int_{R_0}^{\infty}e \left(\frac{u}{z}\right) \sum_{n=0}^{p-1}\frac{a_n}{\mo(n)}u^n\,du\right|.$$ We now estimate $f_1(z)$ and $f_2(z)$. Observe that for every $u\in(0,R_0]$ we have $0<Au/B_1\le 1/2$. So, from~\eqref{eq.boundsCoeffBorelTransf} we get \begin{equation*} \sum_{n=p}^{\infty}\frac{|a_{n}|}{\mo(n)}u^n\le \frac{|\widehat{f}|_{\M,A}}{\mo(0)} \sum_{n=p}^{\infty}\left(\frac{Au}{B_1}\right)^n\le \frac{2|\widehat{f}|_{\M,A}}{\mo(0)}\left(\frac{A}{B_1}\right)^p u^p. \end{equation*} Hence, \begin{equation}\label{eq.Estimates_f1} f_{1}(z)\le \frac{2|\widehat{f}|_{\M,A}}{\mo(0)} \left(\frac{A}{B_1}\right)^p \int_{0}^{R_0}\left|e\left(\frac{u}{z}\right)\right| u^p\,du. \end{equation} Regarding $f_{2}(z)$, for $u\ge R_0$ and $0\le n\le p-1$ we have $(u/R_0)^n\le (u/R_0)^p$, so $u^n\le R_0^nu^p/R_0^p$. Again by~(\ref{eq.boundsCoeffBorelTransf}), and taking into account the value of $R_0$, we may write $$\sum_{n=0}^{p-1}\frac{|a_n|}{\mo(n)}u^n\le \frac{|\widehat{f}|_{\M,A}}{\mo(0)}\frac{u^p}{R_0^p} \sum_{n=0}^{p-1}\left(\frac{AR_0}{B_1}\right)^n\le \frac{|\widehat{f}|_{\M,A}}{\mo(0)}\left(\frac{2A}{B_1}\right)^p u^p.$$ Then, we get \begin{equation}\label{eq.Estimates_f2} f_2(z)\le \frac{|\widehat{f}|_{\M,A}}{\mo(0)} \left(\frac{2A}{B_1}\right)^p \int_{R_0}^{\infty}\left|e \left(\frac{u}{z}\right) \right|u^{p}\,du. \end{equation} In order to conclude, note that the second inequality in~\eqref{eq.Bounds_e_sector}, followed by the first one, and the fact that $e(x)>0$ for $x>0$, together imply that for every $z\in S_{\ga}$ and every $u>0$ we have $$ |e(u/z)|\le K_3h_{\M}\left(K_4\frac{|z|}{u}\right)\le \frac{K_3}{K_1}\,e\left(\frac{K_2u}{K_4|z|}\right). 
$$ We use this fact, a simple change of variable and the right-hand estimates in~\eqref{eq.boundsMmomentsdc}, and obtain that \begin{align*} \int_{0}^{\infty}\left|e \left(\frac{u}{z}\right) \right|u^{p}\,du &\le \int_{0}^{\infty}\frac{K_3}{K_1}\,e\left(\frac{K_2u}{K_4|z|}\right) u^{p}\,du\\ &=\frac{K_3}{K_1}\left(\frac{K_4|z|}{K_2}\right)^{p+1}\mo(p) \le \frac{\mo(0)K_3K_4}{K_1K_2}\left(\frac{K_4B_2}{K_2}\right)^{p}M_p|z|^{p+1}. \end{align*} This estimate can be taken into both~\eqref{eq.Estimates_f1} and~\eqref{eq.Estimates_f2}, and from~\eqref{eq.RemainderTwoSums} we easily get that for every $p\in\N_0$, $$ \left|f(z)-\sum_{n=0}^{p-1}a_nz^n\right|\le \frac{3K_3K_4}{K_1K_2}|\widehat{f}|_{\M,A} \left(\frac{2K_4B_2A}{K_2B_1}\right)^pM_p|z|^p,\quad z\in S_{\ga}, $$ and so $f$ admits $\widehat{f}$ as its uniform $\{\M\}$-asymptotic expansion in $S_{\ga}$. Moreover, recalling the definition~\eqref{eq.NormUniformAsymptFixedType} of the norm in these spaces with uniform asymptotics and fixed type, if we put $c:=2K_4B_2/(K_2B_1)>0$, we see that $f\in\widetilde{\mathcal{A}}^u_{\M,cA}(S_{\ga})$ and $$ \|f\|_{\M,cA,\overset{\sim}{u}}\le \frac{3K_3K_4}{K_1K_2}|\widehat{f}|_{\M,A}, $$ which proves the continuity of the linear map $T_{\M,A}$. $(ii)\Rightarrow (iii)$ Immediate for any weight sequence $\M$. $(iii)\Rightarrow (iv)$ It follows from~\eqref{equaContentionSurjectIntervals}, again valid for any weight sequence. $(iv)\Rightarrow (v)$ This statement is a consequence of Theorem~\ref{teorSurject.dc}. \end{proof1} \begin{rema} The crucial step taken by V. Thilliez~\cite[Th.~2.3.1]{Thilliez03} for strongly regular sequences was the explicit construction of optimal $\{\M\}$-flat functions in sectors $S_{\ga}$ for every $\ga>0$ such that $\ga<\ga(\M)$, which almost closes the circle of implications in Theorem~\ref{tpral}.
The facts in Theorem~\ref{tpral}.$(ii)$ and Proposition~\ref{propcotaderidesaasin}.$(ii)$ together guarantee that for every $\delta\in(0,\ga)$ there exists $c'>0$ such that for every $A>0$ there exists a linear and continuous extension operator from $\C[[z]]_{\M,A}$ into $\mathcal{A}_{\hM,c'A}(S_{\delta})$. In fact, V. Thilliez stated his main result in this regard~\cite[Th.~3.2.1]{Thilliez03} in terms of the existence of such extension operators for every $\delta<\ga(\M)$. \end{rema} The following three corollaries now become clear. \begin{coro} Let $\M$ be a weight sequence satisfying (mg), and $\ga>0$. The following are equivalent: \begin{itemize} \item[$(i)$] $\gamma(\M)>\gamma$, \item[$(ii)$] There exists $\gamma_1>\gamma$ such that the space $\widetilde{\mathcal{A}}^u_{\{\M\}}(S_{\gamma_1})$ contains optimal $\{\M\}$-flat functions. \item[$(iii)$] There exists $\gamma_1>\gamma$ such that the Borel map $\widetilde{\mathcal{B}}:\widetilde{\mathcal{A}}^u_{\{\M\}}(S_{\gamma_1})\to \C[[z]]_{\{\M\}}$ is surjective, i.e., $\gamma_1\in\widetilde{S}^u_{\{\M\}}$. \end{itemize} \end{coro} \begin{proof1} $(ii)\Rightarrow(iii)$ and $(iii)\Rightarrow(i)$ are respectively contained in Theorem~\ref{tpral} and Theorem~\ref{teorSurject.dc}, under weaker hypotheses. $(i)\Rightarrow(ii)$ follows from~\cite[Th.~2.3.1]{Thilliez03} and Proposition~\ref{propcotaderidesaasin}.\end{proof1} As a consequence of \eqref{eq.snqiffgammaMpositive} we deduce the following result. \begin{coro}\label{coro.optimalflat} Let $\M$ be a weight sequence satisfying (mg). The following are equivalent: \begin{itemize} \item[$(i)$] $\M$ is strongly regular. \item[$(ii)$] There exists $\gamma>0$ such that the space $\widetilde{\mathcal{A}}^u_{\{\M\}}(S_{\gamma})$ contains optimal $\{\M\}$-flat functions. \item[$(iii)$] There exists $\gamma>0$ such that the Borel map $\widetilde{\mathcal{B}}:\widetilde{\mathcal{A}}^u_{\{\M\}}(S_{\gamma})\to \C[[z]]_{\{\M\}}$ is surjective.
In other words, $\widetilde{S}^u_{\{\M\}}\neq\emptyset$. \end{itemize} \end{coro} Note that, according to Proposition~\ref{propcotaderidesaasin}, in the previous items $(ii)$ and $(iii)$ one could change $\widetilde{\mathcal{A}}^u_{\{\M\}}(S_{\gamma})$ and $\widetilde{S}^u_{\{\M\}}$ into $\mathcal{A}_{\{\hM\}}(S_{\gamma})$ and $S_{\{\hM\}}$, respectively. \begin{coro}\label{coro.optimalflat0} Let $\M$ be a weight sequence satisfying (mg), and $\ga>0$. The following are equivalent: \begin{itemize} \item[$(i)$] $\gamma(\M)>\gamma$, \item[$(ii)$] There exists $\gamma_1>\gamma$ such that the space $\mathcal{A}_{\{\hM\}}(S_{\gamma_1})$ contains optimal $\{\M\}$-flat functions, \item[$(iii)$] There exists $\gamma_1>\gamma$ such that $\widetilde{\mathcal{B}}:\mathcal{A}_{\{\hM\}}(S_{\gamma_1})\to \C[[z]]_{\{\M\}}$ is surjective, i.e., $\gamma_1\in S_{\{\hM\}}$. \end{itemize} \end{coro} The implication $(ii)\Rightarrow(i)$ in the version of Corollary~\ref{coro.optimalflat} for the space $\mathcal{A}_{\{\hM\}}(S_{\gamma})$ can be shown independently by using a result from J. Bruna~\cite{Brunaext80}, where a precise formula for nontrivial flat functions in Carleman-Roumieu ultradifferentiable classes, appearing in a work of T. Bang~\cite{Bang53}, is exploited. For the sake of completeness, we will present this proof below. \begin{theo}\label{teor.optimalflat1} Let $\M$ be a weight sequence satisfying (mg). If there exists $\gamma>0$ such that $\mathcal{A}_{\{\hM\}}(S_{\gamma})$ contains optimal $\{\M\}$-flat functions, then $\M$ is strongly regular. \end{theo} The proof requires two auxiliary results which we state and prove now. First, given a weight sequence $\M$, the sequence of quotients $\m=(m_p)_{p\in\N_0}$ is nondecreasing and tends to infinity, but it can happen that it remains constant on large intervals $[p_0,p_1]$ of indices, so that the counting function $\nu_{\m}$ defined in \eqref{countinfunctionnu} yields $\nu_{\m}(m_{p_0})=\nu_{\m}(m_{p_1})=p_1+1$. 
However, in some applications or proofs it would be convenient to have $\nu_{\m}(m_p)=p+1$ for all $p\ge 0$. This can be assumed without loss of generality by the following result. \begin{lemma}\label{strictincreasinglemma} Let $a=(a_p)_{p\ge 1}$ be a nondecreasing sequence of positive real numbers satisfying $\lim_{p\rightarrow+\infty}a_p=+\infty$ (it suffices that $a_{p-1}<a_{p}$ holds true for infinitely many indices $p$). Then there exists a sequence $b=(b_p)_{p\ge 1}$ of positive real numbers such that $p\mapsto b_p$ is strictly increasing and satisfies $$ 0<\inf_{p\ge 1}\frac{b_p}{a_p}\le\sup_{p\ge 1}\frac{b_p}{a_p}<+\infty. $$ \end{lemma} So, in the language of weight sequences, we prove that for any weight sequence $\M$ there exists a strongly equivalent weight sequence $\L$ (and so $\M\approx\L$) such that $\nu_{\boldsymbol{\ell}}(\ell_p)=p+1$ for all $p\in\N_0$. Note that equivalent weight sequences define the same Carleman-Roumieu ultraholomorphic classes and associated classes of sequences. \begin{proof1} Since $a$ is nondecreasing and $\lim_{p\rightarrow+\infty}a_p=+\infty$ there exists a sequence $(p_j)_{j\ge 1}$ of indices such that $a_{p_j-1}<a_{p_j}=\dots=a_{p_{j+1}-1}<a_{p_{j+1}}$ for all $j\ge 1$ (and so $p_1\ge 2$). For all $j\ge 1$ we have now $a_{p_j}/(a_{p_j-1})>1+\varepsilon_j$ for a sequence $(\varepsilon_j)_{j\ge 1}$ with possibly small strictly positive numbers $\varepsilon_j$. Finally we put $p_0:=1$.\vspace{6pt} We take some arbitrary $A>1$ and choose $\delta_j>0$ small enough so as to have $(1+\delta_j)^{p_{j+1}-p_j-1}\le\min\{A,1+\varepsilon_{j+1}\}$. Then the sequence $(\delta_j)_{j\ge 0}$ satisfies \begin{equation}\label{strict1} (1+\delta_j)^{p_{j+1}-p_j-1}\le 1+\varepsilon_{j+1}<\frac{a_{p_{j+1}}}{a_{p_{j+1}-1}},\quad (1+\delta_j)^{p_{j+1}-p_j-1}\le A,\quad j\ge 0. 
\end{equation} We define now $b$ as follows: \begin{equation}\label{strict2} b_q:=a_q\;\text{if}\;q=p_j,\;j\ge 0,\hspace{20pt}b_q:=(1+\delta_j)b_{q-1}\;\text{if}\;1+p_j\le q\le p_{j+1}-1,\;j\ge 0. \end{equation} So we have by iteration $b_q=(1+\delta_j)^{q-p_j}b_{p_j}=(1+\delta_j)^{q-p_j}a_{p_j}=(1+\delta_j)^{q-p_j}a_q>a_q$ for all $q$ with $1+p_j\le q\le p_{j+1}-1$, $j\ge 0$. On each such interval of indices the mapping $q\mapsto b_q$ is now clearly strictly increasing since $1+\delta_j>1$ for all $j$. Moreover, by the first half in \eqref{strict1}, we have $b_{p_{j+1}-1}=(1+\delta_j)^{p_{j+1}-p_j-1}a_{p_j}<b_{p_{j+1}}$. Hence the sequence $q\mapsto b_q$ is strictly increasing.\vspace{6pt} By definition \eqref{strict2} we have $b_q=a_q$ for all $q=p_j$, $j\ge 0$, and $b_q>a_q$ otherwise. We conclude if we show that $b_q\le A a_q$ for all $q$ with $1+p_j\le q\le p_{j+1}-1$, $j\ge 0$. For this, since $q\mapsto b_q$ is strictly increasing, it suffices to observe that, thanks to the second half in \eqref{strict1}, we have $b_{p_{j+1}-1}=(1+\delta_j)^{p_{j+1}-p_j-1}a_{p_j}\le Aa_{p_j}=Aa_{p_{j+1}-1}$. \end{proof1} The second result is the following. \begin{lemma}\label{equivalenceofdual} Let $\M$ be a weight sequence. Then $\M$ satisfies (mg) if and only if $\omega_\M(t)=O(\nu_{\m}(t))$ as $t\rightarrow+\infty$. \end{lemma} \begin{proof1} The condition (mg) for $\M$ is equivalent to $m_n\le A(M_n)^{1/n}$ for some $A\ge 1$ and all $n\in\N$ (e.g., see~\cite[Lemma 2.2]{RainerSchindlExtension17}). It is also known that $\omega_\M(m_n)=\log\left(m_{n}^n/M_n\right)$ for $n\in\N$ (see~\cite[Chapitre I]{Mandelbrojt}). So, if $m_{n-1}\le t<m_{n}$ for some $n\ge 1$, we get $$ \omega_\M(t)\le\omega_\M(m_n)=n\log\left(\frac{m_n}{M_n^{1/n}}\right)\le n\log(A)=\log(A)\nu_{\m}(t), $$ that is, $\omega_\M(t)=O(\nu_{\m}(t))$ as $t\rightarrow+\infty$.\vspace{6pt} Conversely, suppose that there exists $A\ge 1$ such that $\omega_\M(t)\le A\nu_{\m}(t)$ for all $t\ge m_0$. 
By \cite[Lemma 2.2]{RainerSchindlExtension17}, (mg) for $\M$ holds true if and only if there exists $H\geq 1$ such that for all $t$ large enough one has $2\nu_{\m}(t)\leq\nu_{\m}(H t)+H$, and this is what we will prove. Take $H\ge \exp(2A)$ and $t\ge m_0$. Using \eqref{assofuncintegral}, and since $\nu_{\m}$ is nondecreasing, we estimate \begin{align*} \nu_{\m}(Ht)&\ge A^{-1}\omega_\M(Ht)= A^{-1}\int_{m_0}^{Ht}\frac{\nu_{\m}(\lambda)}{\lambda}d\lambda\ge A^{-1}\int_t^{Ht}\frac{\nu_{\m}(\lambda)}{\lambda}d\lambda \\& \ge A^{-1}\nu_{\m}(t)\int_t^{Ht}\frac{1}{\lambda}d\lambda= A^{-1}\log(H)\nu_{\m}(t)\ge 2\nu_{\m}(t), \end{align*} as desired. We mention that an alternative, more abstract proof can be based on the theory of O-regular variation and Matuszewska indices for functions. By \cite[Th.~4.4]{JimenezSanzSchindlIndex} we have that the lower Matuszewska indices of $\nu_{\m}$ and $\omega_{\M}$ agree, that is, $\beta(\nu_{\m})= \beta(\omega_\M)$, and by \cite[Cor.~2.17 and Cor.~4.2]{JimenezSanzSchindlIndex} we know $\M$ has (mg) if and only if $\beta(\nu_{\m})>0$. So, if $\beta(\nu_{\m})>0$, by \cite[Th.~4.3]{JimenezSanzSchindlIndex} we have that $\liminf_{t\to\infty} \frac{\nu_{\m}(t)}{\omega_{\M}(t)}>0$, and we deduce that $\omega_\M(t)=O(\nu_{\m}(t))$ as $t\rightarrow+\infty$. Conversely, if $\omega_\M(t)=O(\nu_{\m}(t))$ as $t\rightarrow+\infty$, then $\liminf_{t\to\infty} \frac{\nu_{\m}(t)}{\omega_{\M}(t)}>0$, so by \cite[Th.~4.3]{JimenezSanzSchindlIndex} we have that $\beta(\omega_{\M})>0$, and we are done. \end{proof1} \noindent\textbf{Proof of Theorem~\ref{teor.optimalflat1}. } We follow the proof of necessity for \cite[Th.~2.2]{Brunaext80}. By Lemma \ref{strictincreasinglemma} and the remark following it, we can assume without loss of generality that $\m$ is strictly increasing. Let $G$ be an optimal $\{\M\}$-flat function in $\mathcal{A}_{\{\hM\}}(S_{\gamma})$ for some $\gamma>0$.
So, there exists some $A>0$ such that $$ p_{\M,A}(G):=\sup_{n\in\N_0,x\in(0,+\infty)}\frac{|G^{(n)}(x)|}{A^n n!M_n}<+\infty. $$ This shows that the Carleman-Roumieu ultradifferentiable class $\mathcal{E}_{\{\hM\}}((-\varepsilon,+\infty))$, consisting of all smooth complex-valued functions $g$ defined on the interval $(-\varepsilon,\infty)$ for some $\varepsilon>0$, and such that $$\sup_{n\in\N_0,x\in(-\varepsilon,+\infty)}\frac{|g^{(n)}(x)|}{D^n n!M_n}<+\infty $$ for suitable $D>0$, contains nontrivial flat functions (it suffices to extend $G$ by 0 for $x\in(-\varepsilon,0]$). Then, the well-known Denjoy-Carleman theorem (e.g., see \cite[Th.~1.3.8]{hoermander}) yields that $\M$ satisfies (nq). Let now $$ R_n:=\sum_{k\ge n}\frac{1}{(k+1)m_k}<+\infty,\quad n\in\N_0, $$ and let the function $\overline{h}$ be defined by $\overline{h}(t):=n$ if $R_{n+1}<t\le R_n$, $n\in\N_0$. By \cite[$(14)$, p. 142]{Bang53} we obtain that $$ G(x)=|G(x)|\le p_{\M,A}(G)\exp\left(-\overline{h}(Aex)\right),\quad x\in(0,+\infty). $$ Combining this with the left-hand side of \eqref{optimalflat}, with~\eqref{functionhequ2} and setting $C:=p_{\M,A}(G)$, we get $$ \exp\left(\overline{h}(Aex)\right)\le\frac{C}{|G(x)|}\le C K_1^{-1}\exp(\omega_{\M}(1/(K_2x))),\quad x\in(0,+\infty). $$ If we put $t=Aex$ and $B:=Ae/K_2$, we obtain that for every $t>0$, \begin{equation}\label{optimalflattheoremequ1} \overline{h}(t)\le \log(C K_1^{-1}) +\omega_{\M}(B/t). \end{equation} By Lemma \ref{equivalenceofdual}, there exists $C_1\ge 1$ such that $\omega_\M(s)\le C_1\nu_{\m}(s)+C_1$ for $s>0$. Choosing $t=B/m_n$ in~\eqref{optimalflattheoremequ1}, we see that $$ \overline{h}(B/m_n)\le \log(C K_1^{-1}) +\omega_{\M}(m_n)\le \log(C K_1^{-1}) +C_1\nu_{\m}(m_n)+C_1=\log(C K_1^{-1}) +C_1(n+1)+C_1, $$ since $\m$ is strictly increasing. Hence, $\overline{h}(B/m_n)\le C_2(n+1)$ for some $C_2\in\N$ and all $n\in\N_0$. 
By definition of $\overline{h}$, we get $R_{C_2(n+1)+1}\leq B/m_n$, i.e., $$ m_n\sum_{k\ge C_2(n+1)+1}\frac{1}{(k+1)m_k}\le B,\quad n\in\N_0. $$ Finally, \begin{align*} m_n\sum_{k\ge n}\frac{1}{(k+1)m_k}&=m_n\sum_{k\ge C_2(n+1)+1}\frac{1}{(k+1)m_k}+m_n\sum_{k=n}^{C_2(n+1)}\frac{1}{(k+1)m_k}\\ &\le B+m_n\frac{n(C_2-1)+C_2+1}{(n+1)m_{n}}\le B+2C_2, \end{align*} which is (snq) for $\M$.\hfill$\Box$ \section{Construction of optimal flat functions for a family of non-strongly regular sequences}\label{sectConstrOptFlatNon_SR_Seq} As deduced in Theorem~\ref{tpral}, the construction of optimal $\{\M\}$-flat functions in sectors within an ultraholomorphic class, given by a regular sequence $\hM$, provides extension operators and surjectivity results. Although such a construction is only available in general for strongly regular sequences, we wish to present here a family of (non-strongly) regular sequences for which this technique works. We recall that, for logarithmically convex sequences $(M_p)_{p\in\N_0}$, the condition (dc) is equivalent to the condition $\log(M_p)=O(p^2)$, $p\to\infty$ (see~\cite[Ch.~6]{Mandelbrojt}). On the other hand, the condition (mg) implies that the sequence is below some Gevrey order (there exists $\a>0$ such that $M_p=O(p!^{\a})$ as $p\to\infty$; see e.g.~\cite{Matsumoto84,Thilliez03}). We will work, for $q>1$ and $1<\sigma\le 2$, with the sequences $\M_{q,\sigma}:=(q^{p^\sigma})_{p\in\N_0}$. They are clearly weight sequences and, by~\eqref{equa.indice.gammaM.casicrec}, it is immediate that $\ga(\M_{q,\sigma})=\infty$, so they satisfy (snq) (see~\eqref{eq.snqiffgammaMpositive}). According to the previous comments, they satisfy (dc) but not (mg). So, $\hM_{q,\sigma}$ is regular, but $\M_{q,\sigma}$ is not strongly regular. The case $\sigma=2$ is well known, as it corresponds to the so-called \emph{$q$-Gevrey sequences}, appearing in the study of formal solutions of $q$-difference equations.
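The two growth claims just made can be checked numerically. Below is a minimal sketch (our own illustration with the sample values $q=2$, $\sigma=3/2$; it is not part of the argument): since $\log M_p=p^{\sigma}\log q$, the (dc) ratio $\log(M_p)/p^2$ stays bounded for $\sigma\le 2$, while (mg) would force the slopes $\log_q(M_{2p}/M_p^2)/(2p)=(2^{\sigma}-2)p^{\sigma-1}/2$ to remain bounded, and they diverge.

```python
import math

q, sigma, pmax = 2.0, 1.5, 200

# (dc) via log(M_p) = p^sigma * log(q) = O(p^2): the ratio is bounded for sigma <= 2
dc_ratios = [p**sigma * math.log(q) / p**2 for p in range(1, pmax)]
assert max(dc_ratios) <= math.log(q) + 1e-12

# (mg) would need M_{2p} <= C * H^{2p} * M_p^2 for some fixed C, H, i.e.
# (2^sigma - 2) * p^sigma <= 2p * log_q(H) + log_q(C); the per-step slope
# below would then have to stay bounded, but it grows like p^(sigma-1)
mg_slopes = [(2**sigma - 2) * p**sigma / (2 * p) for p in range(1, pmax)]
assert all(b > a for a, b in zip(mg_slopes, mg_slopes[1:]))  # strictly increasing
assert mg_slopes[-1] > 5 * mg_slopes[0]                      # and unbounded in practice
```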
First, we will construct a holomorphic function on $\C\setminus(-\infty,0]$ which will provide, by restriction, an optimal $\{\M_{q,\sigma}\}$-flat function in any unbounded sector $S_{\ga}$ with $0<\ga<2$. Subsequently, we will obtain such functions on general sectors of the Riemann surface $\mathcal{R}$ of the logarithm by ramification. This, according to Theorems~\ref{teorSurject.dc} and~\ref{tpral}, agrees with the fact that $\ga(\M_{q,\sigma})=\infty$. \subsection{Flatness in the class given by $\Ms$} It will be convenient to note that for a fixed $\sigma\in(1,2]$, there exists a unique $s\ge 2$ such that $\sigma=s/(s-1)$. We start by suitably estimating the function $$\o_{\Ms}(t)= \sup_{p\in\mathbb{N}_0} \ln\left(\frac{t^{p}}{q^{p^\sigma}}\right)=\sup_{p\in\mathbb{N}_0} (p\ln(t)-p^{s/(s-1)}\ln(q)),\quad t>0. $$ Due to the fact that $\o_{\Ms}(t)=0$ for $t\leq1$ (since $m_0=M_1/M_0=M_1=q>1$ and by~\eqref{assofuncintegral}), we will restrict our attention to the case $t>1$. Obviously, $\o_{\Ms}(t)$ is bounded above by the supremum of $x\ln(t)-x^{s/(s-1)}\ln(q)$ when $x$ runs over $(0,\infty)$, which is easily obtained by elementary calculus and occurs at the point $$ x_0=\left(\frac{(s-1)\ln(t)}{s\ln(q)}\right)^{s-1}. $$ If we put \begin{equation}\label{eq.def.b_qs} b_{q,s}:=\frac{1}{s}\left(\frac{s-1}{s\ln(q)}\right)^{s-1}, \end{equation} then \begin{equation}\label{o_Ms_upbound} \o_{\Ms}(t)\leq \left(\frac{(s-1)\ln(t)}{s\ln(q)}\right)^{s-1}\ln(t)- \left(\frac{(s-1)\ln(t)}{s\ln(q)}\right)^{s}\ln(q)=b_{q,s} \ln^s(t),\quad t>1. 
\end{equation} On the other hand, for $t> q^{s/(s-1)} $ (which amounts to $x_0>1$) we also have that $\o_{\Ms}(t)$ is at least the value of $x\ln(t)-x^{s/(s-1)}\ln(q)$ at $x=\flo{x_0}$, that is, \begin{align} \o_{\Ms}(t)&\ge \flo{\left(\frac{(s-1)\ln(t)}{s\ln(q)}\right)^{s-1}}\ln(t)- \flo{\left(\frac{(s-1)\ln(t)}{s\ln(q)}\right)^{s-1}}^{s/(s-1)} \ln(q)\nonumber\\ &\ge \left(\left(\frac{(s-1)\ln(t)}{s\ln(q)}\right)^{s-1}-1\right)\ln(t)- \left(\frac{(s-1)\ln(t)}{s\ln(q)}\right)^s\ln(q)\nonumber\\ &=b_{q,s} \ln^s(t)-\ln(t).\label{o_Ms_interbound} \end{align} \begin{lemma} For every $t\ge q^{2s/(s-1)}$ it holds that \begin{equation}\label{eq.EstimateOmegaQSigma} b_{q,s}\ln^s(t)-\ln(t)\ge b_{q,s}\ln^s\left(\frac{t}{q^{s/(s-1)}}\right)- \ln\left(q^{s/(s-1)}\right). \end{equation} \end{lemma} \begin{proof1} Observe that every $t\ge q^{2s/(s-1)}$ may be written as $t=q^{ys/(s-1)}$ for some $y\ge 2$. Then, we have that \begin{equation*} b_{q,s}\ln^s(t)-b_{q,s}\ln^s\left(\frac{t}{q^{s/(s-1)}}\right)= b_{q,s}\left(\frac{s\ln(q)}{s-1}\right)^s\big(y^s-(y-1)^s\big)= \frac{\ln(q)}{s-1}\big(y^s-(y-1)^s\big). \end{equation*} By the mean value theorem, $y^s-(y-1)^s>s(y-1)^{s-1}$, and since $s\ge 2$ and $y\ge 2$, we have $(y-1)^{s-1}\ge y-1$. So we deduce that $$ \frac{\ln(q)}{s-1}\big(y^s-(y-1)^s\big)>\frac{s\ln(q)}{s-1}(y-1)= \ln(t)-\ln\left(q^{s/(s-1)}\right), $$ as desired. \end{proof1} Combining~\eqref{o_Ms_upbound} with~\eqref{o_Ms_interbound} and~\eqref{eq.EstimateOmegaQSigma}, and using~\eqref{functionhequ2}, we get \begin{equation}\label{Ms_bound_h} \exp\left(-b_{q,s}\ln^s\left(\frac{1}{t}\right)\right) \leq h_{\Ms}(t) \leq q^{s/(s-1)}\exp\left(-b_{q,s}\ln^s\left(\frac{1}{q^{s/(s-1)}t}\right)\right), \qquad 0<t\leq q^{-2s/(s-1)}. \end{equation} We can say that these estimates express optimal $\{\Ms\}$-flatness.
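The two-sided control of $\o_{\Ms}$ can also be verified numerically. The following sketch (our illustration, with the sample values $q=2$, $\sigma=3/2$, hence $s=3$) evaluates $\o_{\Ms}(t)=\sup_{p}(p\ln t-p^{\sigma}\ln q)$ directly and tests it against \eqref{o_Ms_upbound} and \eqref{o_Ms_interbound}:

```python
import math

def omega(t, q, sigma, pmax=5000):
    # omega_{M_{q,sigma}}(t) = sup_{p >= 0} (p*ln(t) - p^sigma*ln(q));
    # for sigma > 1 the sup is attained at finite p, so a large cutoff suffices
    return max(p * math.log(t) - p**sigma * math.log(q) for p in range(pmax))

q, sigma = 2.0, 1.5
s = sigma / (sigma - 1)                                  # here s = 3
b = (1 / s) * ((s - 1) / (s * math.log(q))) ** (s - 1)   # the constant b_{q,s}

for t in [5.0, 50.0, 500.0, 5000.0]:                     # all above q^{s/(s-1)}
    w = omega(t, q, sigma)
    upper = b * math.log(t) ** s                         # (o_Ms_upbound)
    lower = upper - math.log(t)                          # (o_Ms_interbound)
    assert lower <= w <= upper + 1e-8
```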
\subsection{Optimal $\{\Ms\}$-flat function in $S_2$} The estimates in~\eqref{Ms_bound_h} suggest considering the function $\exp\big(-b_{q,s}\log^s\big(1/z\big)\big)$, with, say, principal branches, as a candidate for providing optimal flat functions. However, its analyticity in wide sectors is not guaranteed. Moreover, even in small sectors around the direction $d=0$, its behaviour at $\infty$ might not be as desired: for example, when $s=2$ it tends to 0 as $0<x\to\infty$, which excludes the possibility of proving the first inequality in~\eqref{optimalflat}. For these reasons, we will first define a suitably modified function in the sector $S_2=\C\setminus(-\infty,0]$, prove its flatness there, and then turn to general sectors by composing it with an appropriate ramification. We define \begin{equation}\label{eq.defG2} G_2^{q,s}(z):=\exp\left(-b_{q,s}\log^s\left(1+\frac{1}{z}\right)\right),\quad z\in S_2, \end{equation} where the principal branch of the logarithm is chosen for both $\log$ and the power $w\mapsto w^s=\exp(s\log(w))$ involved. Observe that if $z\in S_2$, then $1+1/z \in\mathbb{C}\setminus(-\infty,1]$, and so $\log(1+1/z)=\ln(|1+1/z|)+i\arg(1+1/z)\notin(-\infty,0]$. This ensures that the map \begin{align*} z&\mapsto \log^{s}\left(1+\frac{1}{z}\right)= \exp\left(s\log\left(\log(1+\frac{1}{z})\right)\right) \end{align*} is also holomorphic in $S_2$, and so is $G_2^{q,s}$. Moreover, $G_2^{q,s}(x)>0$ for $x>0$. To show that $G_2^{q,s}$ is an optimal $\{\Ms\}$-flat function in $S_2$, we are only left with proving the estimates~\eqref{optimalflat}. It turns out to be more convenient to work with the associated kernel $$ e_2(z):=G_2^{q,s}(1/z)=\exp(-b_{q,s}\log^s(1+z)),\quad z\in S_2, $$ and verify the following result. \begin{lemma}\label{e2-bounds} There exist positive constants $C_1,C_2,C_3,C_4$ such that $$ C_1 e_2(C_2 |z|)\leq |e_2(z)|\leq C_3 e_2(C_4 |z|),\quad z\in S_2. $$ \end{lemma} \begin{proof1} We prove first the left inequality.
Let $z\in S_2$ and $K>0$. Since $K|z|+1>1$, it is clear that $\log^s(K|z|+1)=\ln^s(K|z|+1)>0$. On the other hand, we get \begin{align} \Re\left(\log^s(z+1)\right)&\le |\log^s(z+1)|=|\log(z+1)|^s\nonumber\\ &= \left(\ln^2|z+1|+\arg^2(z+1)\right)^{\frac{s}{2}}\leq \left(\ln^2|z+1|+\pi^2\right)^{\frac{s}{2}}.\label{real_logs} \end{align} If we suppose that $|z|>2$, then $\ln|z+1|>0$ and $\ln^2|z+1|\leq \ln^2(|z|+1)$. Let us take $C_2>e^\pi$; then $\lim_{x\to\infty}(\ln(C_2 x+1)-\ln(x+1))=\ln(C_2)>\pi$, so there exists a constant $R>2$ such that $\ln(C_2 x+1)-\ln(x+1)\ge\pi$ for every $x\ge R$. So, if we take $z\in S_2$ with $|z|\geq R$, then \begin{equation*} \ln^2(C_2 |z|+1)-\ln^2(|z|+1)=\ln\left((C_2 |z|+1)(|z|+1) \right)\left(\ln(C_2 |z|+1)-\ln(|z|+1)\right)\geq \pi^2. \end{equation*} Consequently, \begin{equation*} \left( \ln^2|z+1|+ \pi^2\right)^{\frac{s}{2}}\leq \left( \ln^2(|z|+1)+ \pi^2\right)^{\frac{s}{2}}\leq \ln^s(C_2|z|+1). \end{equation*} Then, from \eqref{real_logs}, we obtain that $\Re\left(\log^s(z+1)\right)\leq \ln^s(C_2|z|+1)$, and we can deduce that $$ |e_2(z)|=\exp\big(-b_{q,s}\Re(\log^s(z+1))\big)\geq \exp\big(-b_{q,s}\ln^s(C_2|z|+1)\big)= e_2(C_2 |z|),\qquad z\in S_2,\,|z|\geq R. $$ Finally, since the function $|e_2(z)|$ stays bounded and bounded away from 0 for bounded $|z|$ (in particular, it tends to 1 when $z$ tends to 0 in $S_2$), there exists a constant $C_1>0$ such that $C_1 e_2(C_2 |z|)\leq |e_2(z)|$ for all $z\in S_2$. We now turn to the right inequality. In the first place, we observe that for every $z\in S_2$, \begin{equation}\label{eq.RealPartPowerLog} \Re(\log^s(z+1))= |\log^s(z+1)|\cos(\arg(\log^s(z+1)))= |\log(z+1)|^s\cos(s\arg(\log(z+1))). \end{equation} Now, \begin{equation}\label{arglog-inequality} s|\arg(\log(z+1))|= s\left|\arctan\left(\frac{\arg(z+1)}{\ln|z+1|}\right)\right|\leq s\left|\arctan\left(\frac{\pi}{\ln|z+1|}\right)\right|.
\end{equation} Hence, setting $$ R_0:=1+\exp\left(\frac{\pi}{\tan\left(\pi/(2s)\right)}\right)\ge 2, $$ we get that $|z|>R_0$ implies that $|z+1|>R_0-1\ge 1$, and therefore $\ln|z+1|> 0$ and $$ \frac{\pi}{\ln|z+1|}<\tan\left(\frac{\pi}{2s}\right). $$ From this and \eqref{arglog-inequality} we deduce that $\cos(s\arg(\log(z+1)))>0$. Then, continuing with~\eqref{eq.RealPartPowerLog}, \begin{align}\label{realpart_logs_R0} \Re(\log^s(z+1))&\ge |\Re(\log(z+1))|^s\cos(s\arg(\log(z+1))) \nonumber\\ &=\ln^s|z+1|- \ln^s|z+1|\frac{\sin^2(s\arg(\log(z+1)))}{1+\cos(s\arg(\log(z+1)))}. \end{align} Now, from the equality in~\eqref{arglog-inequality} we see that $s\arg(\log(z+1))\to 0$ as $z\to \infty$ in $S_2$, and moreover $$ \lim_{\substack{z\to \infty\\z\in S_2}}\left[ \left(\frac{\sin^2(s\arg(\log(z+1)))}{1+\cos(s\arg(\log(z+1)))} \right) \Big/\left(\frac{s^2\arg^2(z+1)}{2\ln^2|z+1|}\right)\right]=1. $$ Therefore, there exist $R_1\geq R_0$ and $C>0$ such that \begin{equation*} \frac{\sin^2(s\arg(\log(z+1)))}{1+\cos(s\arg(\log(z+1)))}\leq C \frac{1}{\ln^2|z+1|},\qquad |z|>R_1. \end{equation*} We deduce from \eqref{realpart_logs_R0} that for $z\in S_2$ with $|z|>R_1$, \begin{equation}\label{realpart_logs_R1} \Re(\log^s(z+1))\geq \ln^s|z+1|-C\ln^{s-2}|z+1| \geq \ln^s(|z|-1)-C\ln^{s-2}(|z|+1). \end{equation} We would be almost done if we obtain, for the right-hand side in~\eqref{realpart_logs_R1}, a lower bound in terms of, say, $\ln^s(1+|z|/2)$ for $|z|$ sufficiently large. This is easy in case $s=2$, for it suffices to take $|z|>4$ in order to have $3<1+|z|/2<|z|-1$, and so if $|z|\ge R_2:=\max\{R_1,4\}$ we have $$ \Re(\log^s(z+1))\geq \ln^s(|z|-1)-C\ge \ln^s\left(1+\frac{|z|}{2}\right)-C. 
$$ In case $s>2$, it is not difficult to check that $$ \lim_{x\to+\infty}\left(\ln^s(x-1)-C\ln^{s-2}(x+1)- \ln^s\left(1+\frac{x}{2}\right)\right)=+\infty, $$ so that, according to~\eqref{realpart_logs_R1}, there exists $R_2\ge R_1$ such that for $z\in S_2$ with $|z|\ge R_2$ one has \begin{equation*} \Re(\log^s(z+1))\geq \ln^s\left(1+\frac{|z|}{2}\right). \end{equation*} In any case, we can deduce an upper estimate of the form \begin{align*} |e_2(z)|&=\exp\big(-b_{q,s}\Re(\log^s(z+1))\big)\\ &\leq e^C\exp\left(-b_{q,s}\ln^s\left(1+\frac{|z|}{2}\right)\right)= e^Ce_2\left(\frac{|z|}{2}\right), \quad z\in S_2,\,|z|> R_2, \end{align*} which can be extended to the whole of $S_2$ by suitably enlarging the constant $C$, as before. \end{proof1} We are ready for the main objective of this section. \begin{theo}\label{teor.FlatFunctionS2} The function $G_2^{q,s}$ defined in~\eqref{eq.defG2} is an optimal $\{\Ms\}$-flat function in $S_2$. \end{theo} \begin{proof1} The previous lemma ensures that there exist positive constants $C_1$, $C_2$, $C_3$, $C_4$ such that \begin{equation}\label{G2-bound} C_1 \exp\left(-b_{q,s}\ln^s\left(1+\frac{C_2}{|z|}\right)\right)\leq |G_2^{q,s}(z)|\leq C_3 \exp\left(-b_{q,s}\ln^s\left(1+\frac{C_4}{|z|}\right)\right),\quad z\in S_2. \end{equation} Observe that these inequalities guarantee that $|G_2^{q,s}(z)|$ is bounded and bounded away from 0 as soon as $|z|\ge r$ for any fixed $r>0$, since then $$ C_1 \exp\left(-b_{q,s}\ln^s\left(1+C_2/r\right)\right)\leq |G_2^{q,s}(z)|\leq C_3. $$ As the same is true for $h_{\Ms}(t)$ for every $t\ge t_0$ and any fixed $t_0>0$ (see Lemma~\ref{functionhproperties}), we only need to check the estimates~\eqref{optimalflat} for small enough $|z|$. For $|z|\le C_4$ it is clear that $\ln(1+C_4/|z|)>\ln(C_4/|z|)\ge 0$. 
Then, from the first inequality in~\eqref{Ms_bound_h} we have that \begin{align*} |G_2^{q,s}(z)|&\leq C_3\exp\left(-b_{q,s}\ln^s\left(1+\frac{C_4}{|z|}\right)\right)\\ &\leq C_3 \exp\left(-b_{q,s}\ln^s\left(\frac{C_4}{|z|}\right)\right) \leq C_3 h_{\Ms}\left(\frac{|z|}{C_4}\right),\quad |z|\leq C_4q^{-2s/(s-1)}, \end{align*} and we have proved the right-hand part of~\eqref{optimalflat}. By using the left-hand side of~\eqref{G2-bound}, we have for $z\in S_2$ that \begin{equation*} |G_2^{q,s}(z)|\geq C_1 \exp\left(-b_{q,s}\ln^s\left(\frac{C_2}{|z|}\right)\right) \exp\left(-b_{q,s}\left[\ln^s\left(1+\frac{C_2}{|z|}\right)- \ln^s\left(\frac{C_2}{|z|}\right)\right]\right). \end{equation*} The mean value theorem gives that $\ln^s(1+C_2/|z|)-\ln^s(C_2/|z|)$ tends to zero if $|z|\to 0$, and we deduce that there exists $L$ such that \begin{equation*} |G_2^{q,s}(z)|\geq L\exp\left(-b_{q,s}\ln^s\left(\frac{C_2}{|z|}\right)\right), \quad |z|\leq 1. \end{equation*} The second inequality in~\eqref{Ms_bound_h} implies now that, as long as $|z|\le\min\{1,C_2q^{-s/(s-1)}\}$, we have \begin{equation*} |G_2^{q,s}(z)|\geq Lq^{-s/(s-1)} h_{\Ms}\left(\frac{|z|}{C_2q^{s/(s-1)}}\right), \end{equation*} and so the left-hand side of~\eqref{optimalflat} also holds. \end{proof1} \subsection{Optimal $\{\Ms\}$-flat function in arbitrary sectors} Let us consider a sector $S_\gamma\subset\mathcal{R}$ with $\gamma>2$, and define the function \begin{equation}\label{eq.defGgamma} G_{\ga}^{q,s}(z):=\exp\left(-b_{q,s}\left(\frac{\gamma}{2}\right)^s \log^s\left(1+z^{-2/\gamma}\right)\right) =\left(G_2^{q,s}(z^{2/\gamma})\right)^{(\gamma/2)^s},\quad z\in S_{\ga}. \end{equation} The map $z\mapsto z^{2/\gamma}$ is holomorphic from $S_{\ga}$ into $S_2$, and so $G_{\ga}^{q,s}$ is holomorphic in $S_{\ga}$. We will prove that this function is an optimal $\{\Ms\}$-flat function in this sector. It is clear that $G_{\ga}^{q,s}(x)>0$ for every $x>0$, so only the estimates in~\eqref{optimalflat} are pending. 
As before, we consider the kernel $$e_\gamma(z):=G_{\ga}^{q,s}(1/z)= \exp\left(-b_{q,s}\left(\frac{\gamma}{2}\right)^s \log^s\left(1+z^{2/\gamma}\right)\right) =\left(e_2(z^{2/\gamma})\right)^{(\gamma/2)^s},\quad z\in S_{\ga}.$$ \begin{lemma}\label{egamma-bounds} There exist constants $B_1,B_2,B_3,B_4>0$ such that \begin{equation}\label{eq.Bounds_e_gamma} B_1 e_2(B_2 |z|)\leq |e_\gamma(z)|\leq B_3 e_2(B_4 |z|),\quad z\in S_\gamma. \end{equation} \end{lemma} \begin{proof1} According to the definition of $e_\ga$ and by applying Lemma~\ref{e2-bounds}, there exist constants $C_1,C_2,C_3,C_4>0$ such that for every $z\in S_\ga$ one has $$ \left(C_1 e_2(C_2 |z|^{2/\gamma})\right)^{(\ga/2)^s}\leq |e_\gamma(z)|=\left|e_2(z^{2/\gamma})\right|^{(\gamma/2)^s}\leq \left(C_3 e_2(C_4 |z|^{2/\gamma})\right)^{(\ga/2)^s}.$$ We recall that the function $|e_2(z)|$ stays bounded and bounded away from 0 for bounded $|z|$; from the previous estimates, the same can be said about $|e_{\ga}(z)|$, and so we can prove~\eqref{eq.Bounds_e_gamma} by restricting our considerations to large enough values of $|z|$ and well chosen $B_2, B_4>0$, and then suitably enlarging the constants $B_1, B_3>0$ involved. Let us observe that \begin{align*} \left(e_2(C_4 |z|^{2/\gamma})\right)^{(\ga/2)^s}&= \exp\left(-b_{q,s}\ln^s\left[(1+C_4|z|^{2/\ga})^{\ga/2}\right]\right),\\ e_2(B_4|z|)&=\exp\left(-b_{q,s}\ln^s(1+B_4|z|)\right). \end{align*} So, we will be done with the upper estimates in~\eqref{eq.Bounds_e_gamma} if we see that $$ \ln^s(1+B_4|z|)-\ln^s[(1+C_4|z|^{2/\ga})^{\ga/2}] $$ admits an upper bound for large enough $|z|$ and suitably chosen $B_4>0$. But this follows from the clear fact that \begin{equation*} \ln^s(1+B_4|z|)-\ln^s\left[\left(1+C_4|z|^{2/\ga}\right)^{\ga/2}\right] \sim -s\ln\left(\frac{C_4^{\ga/2}}{B_4}\right)\ln^{s-1}(1+B_4|z|), \quad |z|\to\infty, \end{equation*} where $\sim$ means that the quotient of both expressions tends to 1. 
Indeed, in view of this equivalence it suffices to choose any $B_4<C_4^{\ga/2}$ in order to have the desired estimate for suitably large $B_3$ and $|z|$. In the same way, one can obtain the first estimate in~\eqref{eq.Bounds_e_gamma} by choosing any $B_2>C_2^{\ga/2}$ and considering large enough $B_1>0$ and $|z|$. \end{proof1} \begin{coro}\label{coro.FlatFunctionSgamma} The function $G_{\ga}^{q,s}$ defined in~\eqref{eq.defGgamma} is an optimal $\{\Ms\}$-flat function in $S_{\ga}$. \end{coro} \begin{proof1} By the previous lemma, there exist $B_1,B_2,B_3,B_4>0$ such that \begin{equation*} B_1\exp\left(-b_{q,s}\ln^s\left(1+\frac{B_2}{|z|}\right)\right)\leq |G_{\ga}^{q,s}(z)|\leq B_3 \exp\left(-b_{q,s}\ln^s\left(1+\frac{B_4}{|z|}\right)\right),\quad z\in S_\ga. \end{equation*} Note that these estimates are essentially those in~\eqref{G2-bound}, and so the conclusion follows in exactly the same way as in the proof of Theorem~\ref{teor.FlatFunctionS2}. \end{proof1} \begin{rema} We mention that a similar approach has been followed in the preprint~\cite{JimenezLastraSanzRapidGrowth}, by A. Lastra and the first and third authors, in order to construct extension operators for the ultraholomorphic classes associated with the sequences $\M^{\tau,\sigma}=(p^{\tau p^{\sigma}})_{p\in\N_0}$, for $\tau>0$ and $\sigma\in(1,2)$. These sequences have appeared in a series of papers by S. Pilipovi{\'c}, N. Teofanov and F. Tomi{\'c}~\cite{ptt15,ptt16,ptt,ptt21}, inducing ultradifferentiable spaces of so-called extended Gevrey regularity. However, in that case the construction of suitable kernels for our technique involves the Lambert function, whose handling is not so convenient. This fact has caused our results to be available only in sectors strictly contained in $S_2$, in spite of the fact that $\ga(\M_{\tau,\sigma})=\infty$, which would in principle allow for such extension operators to exist in sectors of arbitrary opening.
\end{rema} \section{Convolved sequences, flat functions and extension results} We show in this section that whenever two weight sequences are given and there exist optimal flat functions in the respectively associated classes, then optimal flat functions exist in the class defined by the so-called convolved sequence as well. Moreover, the extension technique works if one of the sequences being convolved satisfies (dc). \subsection{Convolved sequences} Let $\mathbb{M}^1=(M_p^1)_{p\in\N_0}$ and $\mathbb{M}^2=(M_p^2)_{p\in\N_0}$ be two sequences of positive real numbers. Then the \emph{convolved sequence} $\L:=\mathbb{M}^1\star\mathbb{M}^2$ is $(L_p)_{p\in\N_0}$ given by \begin{equation*} L_p:=\min_{0\le q\le p}M^1_qM^2_{p-q},\quad p\in\mathbb{N}_0, \end{equation*} see \cite[$(3.15)$]{komatsu}. Hence, obviously $\mathbb{M}^1\star\mathbb{M}^2=\mathbb{M}^2\star\mathbb{M}^1$. For all $p\in\mathbb{N}_0$ we have $L_p\le\min\{M^1_0M^2_p,M^2_0M^1_p\}$. So, if in addition $M^1_0=M^2_0=1$, then we get $L_0=1$ and \begin{equation}\label{convolvedbelow} L_p\le\min\{M^1_p,M^2_p\}, \quad p\in\N_0. \end{equation} Given $\M=(M_p)_{p\in\N_0}$ with $M_0=1$, put $\L=(L_p)_{p\in\N_0}=\M\star\M$. The condition (mg) states precisely that there exists $A>0$ such that $M_p\le A^p L_p$ for every $p\in\N_0$; according to~\eqref{convolvedbelow}, $\M$ satisfies (mg) if and only if $\M$ and $\M\star\M$ are equivalent. \begin{rema}\label{convolvesequrem0} Let $\M,\mathbb{M}^1,\mathbb{M}^2$ be weight sequences. \begin{itemize} \item[(i)] In \cite[Lemma 3.5]{komatsu} the following facts are shown: $\mathbb{M}^1\star\mathbb{M}^2$ is again a weight sequence, and its quotient sequence $\bm^1\star\bm^2$ is obtained by merging the sequences $\bm^1$ and $\bm^2$ and rearranging the terms in nondecreasing order.
This yields, by definition of the counting function (see~(6)), that for all $t\ge 0$ one has $$\nu_{\bm^1\star\bm^2}(t)= \nu_{\bm^1}(t)+\nu_{\bm^2}(t);$$ so, by~(7) we get $$\omega_{\mathbb{M}^1\star\mathbb{M}^2}(t)= \omega_{\mathbb{M}^1}(t)+\omega_{\mathbb{M}^2}(t),\quad t\ge 0,$$ and by~(4) we obtain \begin{equation}\label{functionhforconvolved} h_{\mathbb{M}^1\star\mathbb{M}^2}(t)= h_{\mathbb{M}^1}(t)h_{\mathbb{M}^2}(t),\quad t>0. \end{equation} \item[(ii)] If either $\mathbb{M}^1$ or $\mathbb{M}^2$ has (dc), then so does $\mathbb{M}^1\star\mathbb{M}^2$: as said before, for sequences $(M_p)_{p\in\N_0}$ satisfying (lc), the condition (dc) amounts to the condition $\log(M_p)=O(p^2)$, $p\to\infty$. Then, it suffices to apply \eqref{convolvedbelow}. \item[(iii)] As seen in item (i), for every $t\ge 0$ we have \begin{equation*} 2\omega_{\mathbb{M}}(t)=\omega_{\mathbb{M}\star\mathbb{M}}(t). \end{equation*} Since $\mathbb{M}$ satisfies (mg) if and only if there exists $H\ge 1$ such that \begin{equation*} 2\omega_{\mathbb{M}}(t)\le\omega_{\mathbb{M}}(Ht)+H,\quad t\ge 0 \end{equation*} (see \cite[Prop.~3.6]{komatsu}), it turns out that (mg) amounts to the fact that $$ \omega_{\mathbb{M}\star\mathbb{M}}(t)\le \omega_{\mathbb{M}}(Ht)+H,\quad t\ge 0, $$ for some $H\ge 1$, or in other words, \begin{equation*} h_{\mathbb{M}}(t)\le e^Hh_{\mathbb{M}\star\mathbb{M}}(Ht),\quad t>0. \end{equation*} \end{itemize} \end{rema} \subsection{Optimal flat functions and extension procedure}\label{subsectOptFlatConvolved} Let $\mathbb{M}^1$ and $\mathbb{M}^2$ be weight sequences such that optimal flat functions $G_{\mathbb{M}^1}$ and $G_{\mathbb{M}^2}$ exist in the corresponding classes with uniform asymptotic expansion in a given sector $S$. Then, we claim that $G_{\mathbb{M}^1\star\mathbb{M}^2}:=G_{\mathbb{M}^1}\cdot G_{\mathbb{M}^2}$ is an optimal flat function (on the same sector $S$) in the class associated with the sequence $\mathbb{M}^1\star\mathbb{M}^2$.
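Before verifying this claim, let us illustrate the convolution and the quotient-rearrangement property of Remark~\ref{convolvesequrem0}(i) on a small concrete example (our own sketch; the sequences $p!$ and $2^{p(p-1)/2}$, with quotients $p+1$ and $2^{p}$, are chosen only for concreteness): the convolved sequence is recovered by multiplying out the nondecreasing merge of the two quotient sequences, and it satisfies \eqref{convolvedbelow}.

```python
import math

def convolve(M1, M2):
    # Komatsu's convolved sequence: L_p = min_{0 <= q <= p} M1_q * M2_{p-q}
    n = min(len(M1), len(M2))
    return [min(M1[q] * M2[p - q] for q in range(p + 1)) for p in range(n)]

M1 = [math.factorial(p) for p in range(9)]        # quotient sequence m1_p = p + 1
M2 = [2 ** (p * (p - 1) // 2) for p in range(9)]  # quotient sequence m2_p = 2^p
L = convolve(M1, M2)

# the quotients of M1 * M2 are the nondecreasing rearrangement of m1 and m2
merged = sorted([p + 1 for p in range(8)] + [2 ** p for p in range(8)])
from_merge = [1]
for quotient in merged[:8]:
    from_merge.append(from_merge[-1] * quotient)

assert L == from_merge                                   # Remark (i)
assert all(L[p] <= min(M1[p], M2[p]) for p in range(9))  # (convolvedbelow)
```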
Obviously, $G_{\mathbb{M}^1\star\mathbb{M}^2}(x)>0$ for all $x>0$. Suppose $K_m$ and $J_m$, $m=1,2,3,4$, are the constants appearing in~\eqref{optimalflat} for $G_{\mathbb{M}^1}$ and $G_{\mathbb{M}^2}$, respectively. By \eqref{functionhforconvolved} we get that, for all $z\in S$, $$|G_{\mathbb{M}^1}(z)\cdot G_{\mathbb{M}^2}(z)|\le K_3 h_{\mathbb{M}^1}(K_4|z|) J_3h_{\mathbb{M}^2}(J_4|z|)\le K_3J_3 h_{\mathbb{M}^1}(D|z|)h_{\mathbb{M}^2}(D|z|)= Ch_{\mathbb{M}^1\star\mathbb{M}^2}(D|z|),$$ with $C:=K_3J_3$ and $D:=\max\{K_4,J_4\}$, since each function $h_{\mathbb{M}}$ is nondecreasing. Similarly, we can estimate \begin{align*} |G_{\mathbb{M}^1}(z)\cdot G_{\mathbb{M}^2}(z)|&\ge K_1 h_{\mathbb{M}^1}(K_2|z|) J_1h_{\mathbb{M}^2}(J_2|z|)\\ &\ge K_1J_1 h_{\mathbb{M}^1}(D_1|z|)h_{\mathbb{M}^2}(D_1|z|)= C_1h_{\mathbb{M}^1\star\mathbb{M}^2}(D_1|z|), \end{align*} with $C_1:=K_1J_1$ and $D_1:=\min\{K_2,J_2\}$, and the conclusion follows. In case at least one of the sequences $\M^1$ and $\M^2$ satisfies (dc), $\mathbb{M}^1\star\mathbb{M}^2$ does so, and the extension operators from Theorem~\ref{tpral} will be available for the convolved sequence. \subsection{Some examples} Fix $q>1$ and $\sigma\in(1,2]$. Let us put $\L_{q,\sigma}:=\Ms\star\Ms$, $\L_{q,\sigma}=(L_p)_{p\in\N_0}$. It is not difficult to check that $$ L_{2p}=q^{2p^{\sigma}},\quad L_{2p+1}=q^{{p^\sigma}+(p+1)^\sigma}, \qquad p\in\N_0. $$ Observe that $2p^\sigma=2^{1-\sigma}(2p)^\sigma$, so that $L_{2p}$ equals the $2p$-th term of the sequence $\M_{q^{2^{1-\sigma}},\sigma}$. Regarding the odd terms, it is a consequence of Taylor's formula at $x=0$ for the functions of the form $x\mapsto (1+x)^{\alpha}$, $\alpha>0$, that $$ p^{\sigma}+(p+1)^\sigma-2^{1-\sigma}(2p+1)^\sigma=O(p^{\sigma-2}),\quad p\to\infty. $$ Since $\sigma\in(1,2]$, we deduce that $\L_{q,\sigma}$ is equivalent to $\M_{q^{2^{1-\sigma}},\sigma}$. 
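The closed form of $\L_{q,\sigma}$ given above follows from the convexity of $x\mapsto x^{\sigma}$: the minimum of $r^{\sigma}+(p-r)^{\sigma}$ over $0\le r\le p$ is attained at the most balanced split. A short numerical check (ours, working with $\log_q L_p$ to avoid huge numbers, and with the illustrative value $\sigma=3/2$):

```python
import math

def logq_L(p, sigma):
    # log_q(L_p) for L = M_{q,sigma} star M_{q,sigma}:
    # log_q(min_r q^{r^sigma} * q^{(p-r)^sigma}) = min_r (r^sigma + (p-r)^sigma)
    return min(r**sigma + (p - r)**sigma for r in range(p + 1))

sigma = 1.5
for p in range(25):
    assert math.isclose(logq_L(2 * p, sigma), 2 * p**sigma)                   # even terms
    assert math.isclose(logq_L(2 * p + 1, sigma), p**sigma + (p + 1)**sigma)  # odd terms
```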
According to Subsection~\ref{subsectOptFlatConvolved}, an optimal flat function in the class associated with $\L_{q,\sigma}$ in, say, the sector $S_2$ is the function $$ G(z):=G_2^{q,s}(z)G_2^{q,s}(z)= \exp\left(-2b_{q,s}\log^s\left(1+\frac{1}{z}\right)\right),\quad z\in S_2. $$ It is no surprise that, from the definition~\eqref{eq.def.b_qs} of $b_{q,s}$ and the relation between $\sigma$ and $s$, one obtains $b_{q^{2^{1-\sigma}},s}=2b_{q,s}$, and so $G$ is precisely $G_2^{q^{2^{1-\sigma}},s}$, which agrees with the aforementioned equivalence of sequences. If we consider instead $1<\sigma<2$ and $\mathbb{J}:=\M_{q,\sigma}\star\M_{q,2}$, $\mathbb{J}=(J_p)_{p\in\N_0}$, the computation of the terms $J_p$ is no longer possible in closed form, since their values depend for general $p$ on the position of $\sigma$ within the interval $(1,2)$. However, the previous subsection shows that, for $s$ associated with $\sigma$ as usual, the function $$ G(z):=G_2^{q,s}(z)G_2^{q,2}(z)= \exp\left(-b_{q,s}\log^s\left(1+\frac{1}{z}\right)- b_{q,2}\log^2\left(1+\frac{1}{z}\right)\right),\quad z\in S_2, $$ is a flat function in the class associated with $\mathbb{J}$ in $S_2$. Since $s\neq 2$, the exponent in this function mixes two different powers of the logarithm; this, together with the fact that the restriction $G|_{(0,\infty)}$ is closely related to the function $h_{\mathbb{J}}$ (see Definition~\ref{optimalflatdef}), shows that $\mathbb{J}$ is not equivalent to any of the sequences $\Ms$. Since the sequence $\mathbb{J}$ does satisfy (dc), the extension procedure described in this paper is available for the classes associated with $\mathbb{J}$. Observe that these examples of optimal flat functions can also be provided in general sectors $S_\ga$, $\ga>2$, by using the functions $G_{\ga}^{q,s}$ introduced in~\eqref{eq.defGgamma}.
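The identity $b_{q^{2^{1-\sigma}},s}=2b_{q,s}$ used above reduces, via \eqref{eq.def.b_qs}, to the relation $(\sigma-1)(s-1)=1$, i.e., $\sigma=s/(s-1)$, since $b_{q,s}$ is proportional to $(\ln q)^{-(s-1)}$. It can also be confirmed numerically (our sketch; the values $q=2$, $\sigma=3/2$ are sample choices):

```python
import math

def b(q, s):
    # the constant b_{q,s} from (eq.def.b_qs)
    return (1 / s) * ((s - 1) / (s * math.log(q))) ** (s - 1)

q, sigma = 2.0, 1.5
s = sigma / (sigma - 1)
assert math.isclose(b(q ** (2 ** (1 - sigma)), s), 2 * b(q, s))
```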
\vskip.2cm \noindent\textbf{Acknowledgements}: The first three authors are partially supported by the Spanish Ministry of Science and Innovation under the project PID2019-105621GB-I00. The fourth author is supported by FWF-Project P33417-N.
\section{Introduction} String theory (ST) and Vasiliev's equations (VE) \cite{Vasiliev:1988sa,Vasiliev:2003ev} are the only known examples of consistent theories of interacting higher-spin (HS) particles.\footnote{ See \cite{Bekaert:2010hw,Sagnotti:2011qp} for some recent reviews.} Although their current formulations provide mathematically elegant descriptions involving infinitely many auxiliary fields, some important aspects, such as the number of derivatives involved in the cubic vertices or the possible (non-)local nature of higher-order interactions, are hidden. Indeed, the cubic vertices involving fields in the first Regge trajectory of the open bosonic string have been obtained only recently in \cite{Taronna:2010qq,Sagnotti:2010at}. Leaving aside Chan-Paton factors, their illuminating form is given by \be \left|V_{3}\right>= \exp\left\{\tfrac12\sum_{i\neq j} \sqrt{2\alpha'}\, a^{\dagger}_{i}\cdot p_{j} +a^{\dagger}_{i}\cdot a^{\dagger}_{j} \right\}\,\left| 0 \right>_{123} \qquad \left[(a^{\mu})^{\dagger}\equiv \alpha_{-1}^{\mu}\right], \ee where \mt{i,j=1,2,3} label the Fock spaces associated to the interacting particles. From the latter expression, the \mt{s_{1}\!-\!s_{2}\!-\!s_{3}} interactions, as well as the corresponding coupling constants, can be easily extracted via a Taylor expansion of the exponential function. Besides reflecting the world-sheet (Gaussian integral) origin of ST, the latter fulfills all requirements dictated by the compatibility with both the string spectrum and the corresponding global symmetries. Hence, many key properties can be deduced by investigating the consistent cubic vertices. Concerning VE, one might ask what is the form of the cubic vertices and what can one learn from them. Given the analogy to ST, we expect to understand how the global HS symmetry constrains the massless \mt{s_{1}\!-\!s_{2}\!-\!s_{3}} interactions, making the entire spectrum of VE a single immense multiplet.
This question has been partly addressed in \cite{Sezgin:2002ru,Sezgin:2003pt,Boulanger:2008tg}, where it has been shown that, starting from VE, the extraction of the cubic vertices requires infinitely many field redefinitions making the analysis very involved. On the other hand, moving from a top-down to a bottom-up viewpoint, one may ask oneself which HS cubic interactions lead to fully nonlinear HS theories and whether ST and VE are the only solutions or not. This question (called Gupta or Fronsdal program) can be tackled solving the Noether procedure for HS fields. The latter is a perturbative scheme (order by order in the number of fields) whose aim is to classify all consistent interactions starting from a given free theory. The first step of such procedure is to find out the most general couplings of three massive or massless HS particles in an arbitrary constant-curvature background. For the case of symmetric HS fields, this problem has been addressed in \cite{Joung:2011ww,Joung:2012rv}, where, making use of the ambient-space formalism, all possible \mt{s_{1}\!-\!s_{2}\!-\!s_{3}} interactions were provided.\footnote{ See \cite{Fradkin:1986qy,Vasilev:2011xf} for a frame-like approach to the problem of massless interactions, and \cite{Buchbinder:2006eq,Zinoviev:2008ck,Boulanger:2008tg,Zinoviev:2010cr,Fotopoulos:2010nj,Bekaert:2010hk,Polyakov:2012qj} for other works on HS cubic interactions in (A)dS.} Notice that the aforementioned program is closely related to the classification of all consistent CFTs in arbitrary dimensions. More precisely, in the context of AdS/CFT, the Noether procedure seems to share many analogies with the problem of classifying all possible consistent OPEs of HS operators $\mathcal O_{i}(x)$\,. 
In turn, this is tantamount to enumerating all possible tensor structures of three-point functions \mt{\left< \mathcal O_{i}(x_{1})\,\mathcal O_{j}(x_{2})\,\mathcal O_{k}(x_{3})\right>}, from which all conformal blocks can be computed.\footnote{ See \cite{Giombi:2011rz,Costa:2011mg,Costa:2011dw,SimmonsDuffin:2012uy, Stanev:2012nq,Zhiboedov:2012bm} for the correlation functions of three conserved currents.} In this respect, it would be interesting to understand the dictionary between the Noether procedure requirements on the bulk side and the conformal symmetry on the boundary side. This perspective can possibly clarify the role of Lagrangian locality, usually assumed in the bulk, or of possible alternatives, and may provide a new look into the AdS/CFT correspondence itself. Moreover, it is worth mentioning that the CFT results (3-pt functions) can be obtained \emph{a priori} from the AdS results (cubic vertices) by attaching the boundary-to-bulk propagators to the vertices. In the present paper we construct the HS cubic interactions in (A)dS along the lines of \cite{Joung:2011ww,Joung:2012rv}. We shall show that this problem is equivalent to finding polynomial solutions of a rather simple set of linear PDEs. Each solution is in one-to-one correspondence with a consistent cubic interaction. Let us stress that, since the solution space is linear, an arbitrary linear combination of these cubic vertices is also consistent, leaving their relative coupling constants unfixed (at this order). The latter are constrained within the Noether procedure either by compatibility with the global symmetries of the free theory and/or by consistency of quartic interactions. Let us stress once again the connection to the conformal bootstrap program, which may entail key (still unclear) requirements dictated by the Noether procedure.
The organization of the paper is the following: in Section \ref{sec: Noether} we introduce the Noether procedure which represents the main tool of our construction. In Section \ref{sec: flat} we briefly review how to apply such a scheme to derive cubic interactions in flat space. Section \ref{sec: Amb HS} is devoted to the formulation of the free theories in the ambient space. The ambient-space action at the cubic level is discussed in Section \ref{sec: Amb act}. Finally, in Sections \ref{sec: HS cubic} and \ref{sec: sol} we present the solution to the cubic order of the Noether procedure in (A)dS. \section{Noether procedure} \label{sec: Noether} The aim of the Noether procedure is to find all consistent (at least classically) interacting structures associated to a given set of particles, order by order in the number of fields.\footnote{ See \cite{Berends:1984rq} for a detailed discussion.} In the case of massless spin 1 or spin 2 particles ($A_{\mu}$ or $h_{\mu\nu}$), this corresponds to identifying the consistent interactions starting from Maxwell or Fierz-Pauli Lagrangians. Arbitrary vertices involving $A_{\mu}$ or $h_{\mu\nu}$ would mostly cause a propagation of \emph{unphysical} DOFs, which, at the free level, are removed by the linear gauge symmetries: \mt{\delta^{\sst (0)}A_{\mu}=\partial_{\mu}\alpha} and \mt{\delta^{\sst (0)}h_{\mu\nu}=\partial_{\mu}\xi_{\nu}+\partial_{\nu}\xi_{\mu}}\,. Hence, a key condition for the consistency of the interacting theories is the existence of gauge symmetries which are nonlinear deformations of the linear ones. Let us consider an arbitrary set of gauge fields $\varphi^{a}$ (where $a$ labels different fields) with free action $S^{\sst (2)}$ and linear gauge symmetries \mt{\delta^{\sst (0)}\varphi^{a}}\,. The problem is to find the corresponding nonlinear action $S$ together with the non-linear gauge symmetries $\delta\varphi^{a}$\,. 
For this purpose, one can consider the following perturbative expansions: \ba &&S=S^{\sst (2)}+S^{\sst (3)}+S^{\sst (4)}+\cdots\,, \label{S}\\ &&\delta\,\varphi^{a}=\delta^{\sst (0)}\varphi^{a}+\delta^{\sst (1)}\varphi^{a}+ \delta^{\sst (2)}\varphi^{a}+\cdots\,, \label{ddelta} \ea where the superscript $(n)$ denotes the number of fields involved. Taking the variation of the action \eqref{S} under the gauge transformation \eqref{ddelta}, one ends up with a system of gauge invariance conditions: \ba && \delta^{\sst (0)}S^{\sst (2)}=0\,, \label{N1} \\ && \delta^{\sst (0)} S^{\sst (3)}+\delta^{\sst (1)}S^{\sst (2)}=0\,, \label{N2} \\ && \delta^{\sst (0)} S^{\sst (4)}+\delta^{\sst (1)}S^{\sst (3)}+ \delta^{\sst (2)}S^{\sst (2)}=0\,, \label{N3} \\ && \hspace{40pt} \cdots \nonumber \ea The first equation implies the linear gauge invariance of the free theory. The second equation is a condition for both the cubic interactions $S^{\sst (3)}$ and the first-order gauge deformations $\delta^{\sst (1)}\varphi^{a}$\,, and so on. Since the second term in \eqref{N2} is proportional to the free EOM, condition \eqref{N2} implies \be\label{N2-} \delta^{\sst (0)}S^{\sst (3)}\approx 0\,, \ee where $\approx$ means equivalence up to the free EOM. Solving this equation one can identify all cubic interactions consistent with the linear gauge symmetries. In the case of massless spin 1, one finds two independent interaction terms which schematically read \be\label{s1 cubic} S^{\sst (3)}= \lambda^{\sst 1\!-\!1\!-\!1}_{1}\,{\textstyle\int}\,A\,A\,F\ +\ \lambda^{\sst 1\!-\!1\!-\!1}_{0}\,{\textstyle\int}\,F\,F\,F\,. \ee The first is the one-derivative YM vertex while the second is the three-derivative Born-Infeld one. 
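As an elementary illustration of the first condition \eqref{N1} (a check of ours, not part of the original discussion), one can verify symbolically that the Maxwell field strength, and hence the free spin-1 action, is invariant under \mt{\delta^{\sst (0)}A_{\mu}=\partial_{\mu}\alpha}:

```python
import sympy as sp

# Coordinates and an arbitrary gauge parameter alpha(x), here in d = 4
x = sp.symbols('x0:4')
alpha = sp.Function('alpha')(*x)

# Gauge variation of the potential: delta A_mu = d_mu alpha
dA = [sp.diff(alpha, x[mu]) for mu in range(4)]

# Induced variation of F_{mu nu} = d_mu A_nu - d_nu A_mu
dF = [[sp.diff(dA[nu], x[mu]) - sp.diff(dA[mu], x[nu])
       for nu in range(4)] for mu in range(4)]

# Partial derivatives commute, so delta F (and thus delta S^(2)) vanishes
assert all(sp.simplify(dF[mu][nu]) == 0
           for mu in range(4) for nu in range(4))
```

The same kind of symbolic bookkeeping underlies the higher-order conditions \eqref{N2} and \eqref{N3}, where the variations no longer cancel identically but only up to the free EOM.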
In the massless spin 2 case, there are three independent interactions: \be\label{s2 cubic} S^{\sst (3)}= \lambda^{\sst 2\!-\!2\!-\!2}_{2}\,{\textstyle\int}\,h\,\partial h\,\partial h\ +\ \lambda^{\sst 2\!-\!2\!-\!2}_{1}\,{\textstyle\int}\,R\,R\ +\ \lambda^{\sst 2\!-\!2\!-\!2}_{0}\,{\textstyle\int}\,R\,R\,R\,, \ee where the first is the two-derivative gravitational minimal coupling while the other two come from the expansions of (Riemann)$^{2}$ and (Riemann)$^{3}$ and involve four and six derivatives respectively. As one can see from these lower-spin examples, the general solutions to eq.~\eqref{N2-} are \mt{s_{1}\!-\!s_{2}\!-\!s_{3}} vertices with different numbers of derivatives associated with the coupling constants $\lambda_{n}^{\sst s_{1}\!-\!s_{2}\!-\!s_{3}}$\,. It is worth noticing that these coupling constants are independent at this level. The next step consists in solving \eqref{N2} for the first-order gauge transformations $\delta^{\sst (1)}\varphi^{a}$ associated with each solution $S^{\sst (3)}$ found at the previous step. In the lower-spin cases, only the first vertices in \eqref{s1 cubic} and \eqref{s2 cubic} lead to nontrivial deformations: \be\label{s1,2 tr} \delta^{\sst (1)}A=\lambda^{\sst 1\!-\!1\!-\!1}_{1}\,A\,\alpha\,, \qquad \delta^{\sst (1)}h=\lambda^{\sst 2\!-\!2\!-\!2}_{2}\, (h\,\partial \xi-\partial h\,\xi)\,, \ee of the linear gauge transformations. The latter correspond to the standard non-Abelian YM gauge transformations and to diffeomorphisms respectively. Although not deforming the gauge symmetries, the remaining vertices can be completed to the full nonlinear order while keeping consistency with the gauge transformations \eqref{s1,2 tr}. They form the first elements of a class of higher-derivative gauge or diffeomorphism invariants, where the remaining elements appear at higher orders $S^{\sst (n)}$\,. 
General (HS) gauge theories likewise present two types of cubic vertices: the ones deforming the linear gauge symmetries, and the ones giving rise to possible higher-derivative gauge invariants. Although the former define the deformed non-Abelian gauge algebra, the latter are also relevant since they provide possible (quantum) counter-terms. Hence, if no independent non-deforming vertices survive at higher orders, then no counter-terms would be available and the theory would be UV finite. This issue can be initially addressed solving eq.~\eqref{N3} which involves quartic interactions as well as the spectrum of the theory \cite{Taronna:2011kt,Dempster:2012vw}. So far we have only considered the case of gauge theories. Constructing interactions of massive HS fields also raises similar problems. Arbitrary interaction vertices would generically violate the Fierz conditions, resulting in the propagation of unphysical DOFs. One way to proceed is to introduce Stueckelberg symmetries into the massive theory and perform the Noether procedure similarly to the massless case.\footnote{ See e.g. \cite{Zinoviev:2008ck} for some explicit constructions.} \section{Flat-space case} \label{sec: flat} In this section we consider flat-space cubic vertices since their construction reveals some of the key ideas also used in the (A)dS case. See the review \cite{Bekaert:2010hw} for an exhaustive list of references on the cubic interactions, and \cite{Fotopoulos:2010ay,Buchbinder:2012iz,Metsaev:2012uy,Henneaux:2012wg} for more recent developments. In order to deal with arbitrary HS fields, it is useful to introduce auxiliary variables $u^\mu$, which are the analogue of the string oscillators (\mt{\alpha_{-1}^{\mu}\leftrightarrow u^{\mu}}), and define the generating function: \be \varphi^{\sst A}(x,u)= \tfrac1{s!}\,\varphi^{\sst A}_{\mu_{1}\cdots\mu_{s}}(x)\, u^{\mu_{1}}\cdots u^{\mu_{s}}\,. 
\ee Here, the superscript $\st A$ labels different HS fields, and in the following we use \mt{{\st A}=a} for massless fields and \mt{{\st A}=\alpha} for massive ones. In this notation, the most general form of the cubic vertices is \be\label{gen cub} S^{\sst (3)}=\int d^{d}x\,C_{\sst A_{1}A_{2}A_{3}} (\partial_{x_{i}},\partial_{u_{i}})\, \varphi^{\sst A_{1}}(x_{1},u_{1})\,\varphi^{\sst A_{2}}(x_{2},u_{2})\, \varphi^{\sst A_{3}}(x_{3},u_{3})\, \Big|_{{}^{x_{i}=x}_{u_{i}=0}}\,, \ee where \mt{i=1,2,3}. Different functions $C_{\sst A_{1}A_{2}A_{3}}$ describe different vertices, embodying the coupling constants of the theory. Notice that $C_{\sst A_{1}A_{2}A_{3}}$ plays the same role as the state $\left|V_{3}\right>$ in the BRST (string field theory) approach. Restricting attention to the parity-invariant interactions, the dependence of $C_{\sst A_{1}A_{2}A_{3}}$ on the six vectors $\partial_{x_{i}}$ and $\partial_{u_{i}}$ is through the 21 (6+9+6) Lorentz scalars \mt{\partial_{x_{i}}\!\cdot\partial_{x_{j}}}, \mt{\partial_{u_{i}}\!\cdot\partial_{x_{j}}} and \mt{\partial_{u_{i}}\!\cdot\partial_{u_{j}}}. For instance, a vertex of the form: \be \varphi_{\mu\nu\rho\lambda}\,\partial^\mu\,\partial^\nu\, \varphi^{\rho\sigma}_{\phantom{\rho\sigma}\sigma}\,\partial_\tau\,\varphi^{\tau\lambda}\,, \ee is encoded by \be C=(\partial_{u_{1}}\!\cdot\partial_{x_{2}})^2\,(\partial_{u_{1}}\!\cdot\partial_{u_{2}})\,(\partial_{u_{1}}\!\cdot\partial_{u_{3}})\,(\partial_{u_{2}}\!\cdot\partial_{u_{2}})\,(\partial_{u_{3}}\!\cdot\partial_{x_{3}})\,. \ee Notice that not all $C_{\sst A_{1}A_{2}A_{3}}$'s are physically distinguishable: there exist two kinds of ambiguities. The first is due to the triviality of total-derivative terms (or integrations by parts). 
This ambiguity can be fixed by removing $\partial_{u_{i}}\!\cdot\partial_{x_{i-1}}$ in terms of the other Lorentz scalar operators as \be\label {int by part} \partial_{u_{i}}\!\cdot\partial_{x_{i-1}}= -\,\partial_{u_{i}}\!\cdot\partial_{x_{i+1}}-\partial_{u_{i}}\!\cdot\partial_{x_{i}} +\partial_{u_{i}}\!\cdot\partial_{x}\qquad [i\simeq i+3]\,. \ee The second ambiguity is related to the possibility of performing nonlinear field redefinitions which can create \emph{fictitious} interaction terms. However, these vertices are all proportional to the linear EOM so that the corresponding ambiguity can be fixed by disregarding the on-shell vanishing vertices. This amounts to neglecting the dependence of the function $C_{\sst A_{1}A_{2}A_{3}}$ on the $\partial_{x_{i}}\!\cdot\partial_{x_{j}}$'s, since the latter can be expressed as \be \label{field redef} \partial_{x_{i}}\!\cdot\partial_{x_{i-1}}=\tfrac12 \left(\partial_{x_{i+1}}^{2}\!\!- \partial_{x_{i}}^{2}-\partial_{x_{i-1}}^{2}\right) +\tfrac12\,\partial_{x}\cdot\! \left(\partial_{x_{i}}+\partial_{x_{i-1}}-\partial_{x_{i+1}}\right), \ee and, up to EOM, the $\partial_{x_{i}}^{2}$'s can be replaced by the $\partial_{u_{i}}\!\cdot\partial_{x_{i}}$'s and the $\partial_{u_{i}}^{2}$'s. For instance, the Fronsdal massless HS EOM reads \be\label{Fronsdal} \left[\partial_{x}^{2}-u\cdot\partial_{x}\,\partial_{u}\cdot\partial_{x} +\tfrac12\,(u\cdot\partial_{x})^{2}\,\partial_{u}^{\,2}\right] \varphi^{a}(x,u)\approx0\,. \ee Taking these ambiguities into account, the vertex function $C_{\sst A_{1}A_{2}A_{3}}$ can only depend on 12 (3+3+6) Lorentz scalars: $\partial_{u_{i}}\!\cdot\partial_{x_{i+1}}$\,, $\partial_{u_{i}}\!\cdot\partial_{x_{i}}$ and $\partial_{u_{i}}\!\cdot\partial_{u_{j}}$\,. 
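Both identities \eqref{int by part} and \eqref{field redef} follow from the fact that, on the product of the three fields, the total derivative acts as \mt{\partial_{x}=\partial_{x_{1}}+\partial_{x_{2}}+\partial_{x_{3}}}. As a sanity check (ours), they can be verified symbolically by treating the derivative operators as commuting vectors:

```python
import sympy as sp

dim = 5
# Treat the (commuting) derivative operators d_{x_i} as vectors p_i,
# with p = p1 + p2 + p3 playing the role of the total derivative d_x,
# and a vector q standing for any of the d_{u_i}
p = [sp.Matrix(sp.symbols(f'p{i}_0:{dim}')) for i in (1, 2, 3)]
q = sp.Matrix(sp.symbols(f'q_0:{dim}'))
ptot = p[0] + p[1] + p[2]
dot = lambda a, b: (a.T*b)[0]

for i in range(3):
    ip, im = (i + 1) % 3, (i - 1) % 3   # cyclic indices, [i ~ i+3]
    # eq. (int by part): q.p_{i-1} = -q.p_{i+1} - q.p_i + q.p
    assert sp.expand(dot(q, p[im])
                     + dot(q, p[ip]) + dot(q, p[i]) - dot(q, ptot)) == 0
    # eq. (field redef): p_i.p_{i-1} rewritten via squares and the total p
    rhs = (sp.Rational(1, 2)*(dot(p[ip], p[ip]) - dot(p[i], p[i])
                              - dot(p[im], p[im]))
           + sp.Rational(1, 2)*dot(ptot, p[i] + p[im] - p[ip]))
    assert sp.expand(dot(p[i], p[im]) - rhs) == 0
```

The choice `dim = 5` is immaterial: the identities are purely algebraic and hold in any dimension.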
It is worth noticing that at the cubic order, there are no non-local vertices since the $\partial_{x_{i}}\!\cdot\partial_{x_{j}}$'s have been removed while the other scalar operators can only enter with positive powers (otherwise the tensor contractions do not make sense). When a gauge field, say $\varphi^{a_{1}}$, enters the interaction, the function $C_{\sst a_{1}A_{2}A_{3}}$ is further constrained by the condition \eqref{N2-}. In this notation, the linear HS gauge transformations, \mt{\delta^{\sst (0)}\varphi^{a}_{\mu_{1}\cdots\mu_{s}}=\partial^{}_{(\mu_{1}}\varepsilon^{a}_{\mu_{2}\cdots \mu_{s})}}\,, read \be\label{free gt} \delta^{\sst (0)}\varphi^{a}(x,u)=u\cdot\partial_{x}\,\varepsilon^{a}(x,u)\,. \ee Hence the cubic-order gauge consistency condition \eqref{N2-} gives \be\label{gauge inv C} \left[\,C_{a_{1}{\sst A_{2}A_{3}}}(\partial_{u_{i}}\!\cdot\partial_{x_{i+1}}\,, \partial_{u_{i}}\!\cdot\partial_{x_{i}}\,, \partial_{u_{i}}\!\cdot\partial_{u_{j}})\,,\, u_{1}\cdot\partial_{x_{1}}\,\right]\approx 0\,, \ee where $\approx$ means equivalence modulo the Fronsdal equation \eqref{Fronsdal}. In order to tackle the above equation, it is convenient to split the function $C_{a_{1}{\sst A_{2}A_{3}}}$ into two parts: the one which does not involve any divergence, $\partial_{u_{i}}\!\cdot\partial_{x_{i}}$\,, trace, $\partial_{u_{i}}^{2}$ or auxiliary fields (we call it the \emph{transverse and traceless} (TT) part), and the one which does (the \emph{DTA} part, involving divergences, traces and auxiliary fields). Let us notice that the TT part is precisely what survives after eliminating unphysical DOFs. 
Indeed, besides the mass-shell condition, the Fierz system: \be\label{Fierz} {\rm Fierz\ system}:\quad (\partial_{x}^{2}-m^{2})\,\varphi^{\sst A}=0\,, \quad \partial_{u}\cdot\partial_{x}\,\varphi^{\sst A}=0\,, \quad \partial_{u}^{2}\,\varphi^{\sst A}=0\,, \ee also involves the transverse and traceless conditions.\footnote{ When \mt{m=0}, one has to quotient the system by the gauge symmetries \eqref{free gt} with parameters $\varepsilon^{a}$ satisfying the same conditions \eqref{Fierz}.} Therefore, the TT part of the action plays a key role, encoding the on-shell content of the theory. On the other hand, the part containing divergences, traces or auxiliary fields vanishes after gauge fixing.\footnote{ Indeed the light-cone interaction vertices \cite{Metsaev:2005ar} can be obtained solely from the TT part by going to the light-cone gauge.} The next question is whether it is possible to determine the TT part of the vertex without using any information about the other part. From a physical point of view, this ought to be possible since the physical (on-shell) interactions cannot depend on the unphysical ones.\footnote{ The light-cone gauge approach is consistent on its own without calling for additional conditions on its corresponding covariant off-shell description.} Concerning massive fields, the TT conditions already ensure the propagation of the correct DOFs and no further constraints have to be imposed on the TT parts of the cubic interactions. Let us also comment on the remaining parts of the cubic interactions involving divergences and traces. For massless fields, the latter turn out to be completely determined by their TT part upon enforcing gauge invariance \cite{Manvelyan:2010jr,Sagnotti:2010at}. Similarly, when massive fields are involved, after introducing Stueckelberg fields into the TT part (see \cite{Joung:2012rv} for details), one may in principle determine the remaining parts of the action by requiring consistency with the Stueckelberg gauge symmetries. 
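For reference, the DOF counting selected by the Fierz system \eqref{Fierz} can be made explicit (a standard check, ours): the physical polarizations of a spin-$s$ field fill a symmetric traceless tensor of the little group, $SO(d-2)$ in the massless case and $SO(d-1)$ in the massive one:

```python
from math import comb

# Number of independent components of a symmetric traceless rank-s tensor of SO(n)
def sym_traceless(s, n):
    c1 = comb(s + n - 1, s)                       # symmetric rank-s tensors
    c2 = comb(s + n - 3, s - 2) if s >= 2 else 0  # trace constraints
    return c1 - c2

# Massless spin-s: little group SO(d-2); massive spin-s: SO(d-1)
assert sym_traceless(1, 2) == 2   # photon in d = 4
assert sym_traceless(2, 2) == 2   # graviton in d = 4
# massive spin-s in d = 4: the familiar 2s+1 states
assert all(sym_traceless(s, 3) == 2*s + 1 for s in range(1, 10))
```

This is the counting that the TT parts of the vertices effectively see after gauge fixing.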
In the following, we show how to determine the TT parts of $C_{\sst A_{1}A_{2}A_{3}}$ from eq.~\eqref{gauge inv C}. First, after removing all the ambiguities through eqs.~(\ref{int by part}\,,\,\ref{field redef}), any functional $F$ can be \emph{unambiguously} written in terms of its TT part and the remaining part as \mt{F=[F]_{\rm\sst TT}+[F]_{\rm\sst DTA}}\,. Hence, eq.~\eqref{N2-} can be split into two equations: \be\label{N2--} \left[\delta^{\sst (0)}S^{\sst (3)}\right]_{\rm\sst TT} \approx 0\,, \qquad \left[\delta^{\sst (0)}S^{\sst (3)}\right]_{\rm\sst DTA} \approx 0\,, \ee where, henceforth, $\approx$ means equivalence modulo the Fierz system \eqref{Fierz}. Second, as the gauge variations of divergences, traces or auxiliary fields are proportional to themselves up to $\partial_{x_{i}}^{2}$-terms: \mt{\left[\delta^{\sst (0)}[S^{\sst (3)}]_{\rm\sst DTA}\, \right]_{\rm\sst TT}\approx 0}\,, the first equation in \eqref{N2--} gives an independent condition for the TT parts, $[S^{\sst (3)}]_{\rm\sst TT}$\,, of the interactions: \be\label{N2---} \left[\delta^{\sst (0)}S^{\sst (3)}\right]_{\rm\sst TT} =\left[\delta^{\sst (0)}\big\{ \left[S^{\sst (3)}\right]_{\rm\sst TT} +\left[S^{\sst (3)}\right]_{\rm\sst DTA}\big\}\right]_{\rm\sst TT} \approx \left[\delta^{\sst (0)}\left[S^{\sst (3)}\right]_{\rm\sst TT}\right]_{\rm\sst TT} \approx 0\,. \ee At this point, $[S^{\sst (3)}]_{\rm\sst TT}$ can be expressed as in eq.~\eqref{gen cub} through a function $C^{\sst\rm TT}_{\sst A_{1}A_{2}A_{3}}(y_{i},z_{i})$ of six variables: \be\label{y and z} y_{i}=\partial_{u_{i}}\!\cdot\partial_{x_{i+1}}\,,\qquad z_{i}=\partial_{u_{i+1}}\!\!\cdot\partial_{u_{i-1}}\,. \ee Then, assuming the first field to be massless, \mt{{\st A_{1}}=a_{1}}, eq.~\eqref{N2---} gives a condition for $C^{\sst\rm TT}_{a_{1}{\sst A_{2}A_{3}}}$ analogous to \eqref{gauge inv C}. 
Using the Leibniz rule, we obtain a rather simple differential equation: \be\label{flat PDE} \left[y_{2}\,\partial_{z_{3}}-y_{3}\,\partial_{z_{2}} +\tfrac12\,(m_{2}^{\,2}-m_{3}^{\,2})\,\partial_{y_{1}}\right] C^{\sst\rm TT}_{a_{1}{\sst A_{2}A_{3}}}=0\,, \ee where $m_{i}$ is the mass of the $i$-th field. When two or three massless fields are involved in the interactions, one has respectively two or three differential equations given by the cyclic permutations of eq.~\eqref{flat PDE}. Depending on the case, the corresponding solutions $C^{\sst\rm TT}_{\sst A_{1}A_{2}A_{3}}$ are constrained to depend on some of the $y_{i}$'s and the $z_{i}$'s only through the \emph{building blocks}: \be g=y_{1}\,z_{1}+y_{2}\,z_{2}+y_{3}\,z_{3}\,, \ee \be \label{flath} h_{i}=y_{i+1}\,y_{i-1} +\tfrac12\left[m_{i}^{\,2}- (m_{i+1}+m_{i-1})^{2}\right] z_{i}\,. \ee As an example, we consider the interactions of three massless HS fields where the consistent cubic interactions are encoded in an arbitrary function: \be \label{3flat} C^{\sst\rm TT}_{a_{1}a_{2}a_{3}}=\mathcal K_{a_{1}a_{2}a_{3}}(y_1,y_2,y_3,g)\,. \ee Leaving aside Chan-Paton factors, the latter can be expanded as \be \label{Kexpan} \mathcal K= \sum_{n=0}^{\min\{s_1,s_2,s_3\}}\,\lambda_{n}^{s_1\!-\!s_2\!-\!s_3}\,g^n\,y_1^{s_1-n}\,y_2^{s_2-n}\,y_3^{s_3-n}\,, \ee where the \mt{\lambda_{n}^{s_1\!-\!s_2\!-\!s_3}}'s are independent coupling constants that ought to be fixed by the quest for consistency of higher-order interactions. From the latter expression it is straightforward to see that the number of consistent couplings is \mt{\min\{s_1,s_2,s_3\}+1}\,, while the number of derivatives contained in each vertex is \mt{s_1+s_2+s_3-2\,n}\,. In particular, focussing on the \mt{2\!-\!2\!-\!2} case, eq.~\eqref{Kexpan} gives \be {\cal K}=\lambda_2^{\sst 2\!-\!2\!-\!2}\,g^2+\lambda_1^{\sst 2\!-\!2\!-\!2}\,g\,y_1\,y_2\,y_3+\lambda_0^{\sst 2\!-\!2\!-\!2}\,y_1^2\,y_2^2\,y_3^2\,, \ee that exactly reproduces eq.~\eqref{s2 cubic}. 
\section{Ambient-space formalism for HS} \label{sec: Amb HS} In order to address the HS interaction problem around an arbitrary constant-curvature background (\emph{i.e.} (A)dS space), one can still rely on the Noether procedure introduced in Section \ref{sec: Noether}. However, in this case the starting point is given by the free HS theories in (A)dS, where, besides massive and massless particles, new types of particles (called partially-massless) appear \cite{Deser:2001us}.\footnote{ In the case of mixed-symmetry HS fields in AdS, even the notion of massless-ness changes with respect to the flat-space case \cite{Metsaev:1995re}.} Moreover, the cubic interactions built on top of the free theories would involve (A)dS covariant derivatives whose non-commuting nature makes the construction cumbersome. Their commutators give rise to lower-derivative pieces proportional to the cosmological constant, making the vertices inhomogeneous in the number of derivatives. The ambient-space formalism proves to be a convenient tool in dealing with free (A)dS HS fields, and hence, it represents a natural framework in order to construct (A)dS cubic interactions. Furthermore, recently it has been intensively used in the context of \emph{Mellin amplitudes} in the computation of Witten diagrams \cite{Penedones:2010ue,Paulos:2011ie}. The ambient-space formalism \cite{Fronsdal:1978vb,Biswas:2002nk} consists in regarding the $d$-dimensional (A)dS space as the hyper-surface \mt{X^{2}=\epsilon\,L^{2}} embedded into a $(d+1)$-dimensional flat space. In our convention the ambient metric is \mt{\eta=(-,+,\,\ldots,+)}, so that AdS (\mt{\epsilon=-1}) is Euclidean while dS (\mt{\epsilon=1}) is Lorentzian. 
Focussing on the region $\epsilon\,X^{2}>0$\,, there exists an isomorphism between symmetric tensor fields in (A)dS, $\varphi_{\mu_{1}\cdots\mu_{s}}$\,, and those in ambient space, $\Phi_{\sst M_{1}\cdots M_{s}}$\,, satisfying the \emph{homogeneity} and \emph{tangentiality} (HT) conditions: \ba &{\rm Homogeneity}: \qquad &(X\cdot\partial_{X}-U\cdot\partial_{U}+2+\mu)\,\Phi(X,U)=0\,, \nn &{\rm Tangentiality}: \qquad &X\cdot \partial_{U}\,\Phi(X,U)=0\,. \label{HT} \ea Here we have used the auxiliary-variable notation for the ambient-space fields: \be \Phi(X,U)=\tfrac1{s!}\, \Phi_{\sst M_{1}\cdots M_{s}}(X)\,U^{\sst M_{1}}\cdots U^{\sst M_{s}}\,. \ee The degree of homogeneity $\mu$ parametrizes the (A)dS mass-squared term appearing in the (A)dS Lagrangian: \be m^{2}=\tfrac{(-\epsilon)}{L^{2}}\,\big[\,(\mu-s+2) (\mu-s-d+3)-s\,\big]\,, \label{mass} \ee so that \mt{\mu=0} corresponds to the massless case. In the ambient-space formalism, the EOM of both massless and massive HS fields are given by the Fronsdal ones \eqref{Fronsdal} after replacing $(x,u)$ by $(X,U)$\,. Let us remind the reader that the concept of \emph{massless-ness} in (A)dS is not related to the vanishing of the mass term but rather to the appearance of gauge symmetries. In fact, if one postulates the latter to be of the form: \be\label{amb gauge tr} \delta^{\sst (0)}\Phi(X,U)=U\cdot\partial_{X}\,E(X,U)\,, \ee then the compatibility with the HT conditions \eqref{HT} alone restricts both the possible values of $\mu$ and the normal(radial) components of $E$\,. In particular, when \mt{\mu=0,1, \ldots, s-1}, then there exist compatible higher-derivative gauge symmetries: \be \delta^{\sst (0)}\,\Phi(X,U)=(U\cdot\partial_X)^{\mu+1}\,\Omega(X,U) \qquad [\,E=(U\cdot\partial_{X})^{\mu}\,\Omega\,]\,, \label{pm gt} \ee with the gauge parameters $\Omega$ satisfying \be (X\cdot\partial_X-U\cdot\partial_U-\mu)\,\Omega(X,U)=0\,, \qquad X\cdot\partial_U\,\Omega(X,U)=0\,. 
\ee On the other hand, for other values of $\mu$, no gauge symmetries (in the absence of auxiliary fields) are allowed, implying that the corresponding fields are massive. Notice that the massless field, \mt{\mu=0}, is the first member of a class of representations where the other members, with \mt{\mu=1,2,\ldots,s-1}, are called partially-massless. However, partially-massless fields describe unitary representations only in dS. Before closing this section, let us discuss the flat limit from the ambient-space viewpoint. The latter consists first in translating the coordinate system as \mt{X^{\sst M}\to X^{M}+L\,N^{\sst M}}\,, where $N$ is a constant vector satisfying $N^{2}=\epsilon$\,, and second, in taking the \mt{L\to \infty} limit. As a result, the HT conditions \eqref{HT} reduce to \be \left(N\cdot\partial_{X}-\sqrt{-\epsilon}\,M\right) \Phi(X,U)=0\,, \qquad N\cdot\partial_{U}\,\Phi(X,U)=0\,, \ee where the flat mass $M$ is related to the (A)dS \emph{mass} $\mu$ as \be \sqrt{-\epsilon}\,M=-\lim_{L\to \infty}\,\frac{\mu}L\,. \ee Notice that in this limit, all (A)dS representations become massless, while, in order to recover massive representations in flat space, one should consider the $\mu\rightarrow\infty$ limit. \section{Ambient-space action} \label{sec: Amb act} In the previous section we have shown how to describe HS fields in (A)dS making use of the ambient-space language. In Section \ref{sec: HS cubic} we shall use the latter framework in order to solve the Noether procedure. For this purpose, one needs to know first of all how to express the (A)dS action in terms of ambient-space quantities. As far as the Lagrangian is concerned, no subtleties arise since, together with the isomorphism between (A)dS and ambient-space fields, there is an analogous one between (A)dS-covariant derivatives $\nabla_{\mu}$ and ambient-space ones \mt{\partial_{X^{\sst M}}-(X_{\sst M}/X^{2})\,X\cdot\partial_X}. 
Hence, any Lagrangian ${\cal L}_{\sst\rm (A)dS}$ written in terms of (A)dS intrinsic fields is in one-to-one correspondence with the ambient-space one ${\cal L}_{\sst\rm Amb}$\,. More precisely, considering a single term in the Lagrangian, the two descriptions are related by \be {\cal L}_{\sst\rm Amb}(\,\Phi,\partial\,\Phi, \partial\,\partial\,\Phi, \ldots\,)= \left(\tfrac RL\right)^{\Delta}\, {\cal L}_{\sst\rm (A)dS}(\,\varphi,\nabla\varphi,\nabla\nabla\varphi, \ldots\,)\,, \ee where $\Delta$ is a constant depending on the spins and the $\mu$-values of the fields as well as on the number of derivatives entering ${\cal L}_{\rm\sst Amb}$\,. Regarding the action, the first attempt would be to consider \be \int d^{d+1}X\,{\cal L}_{\sst\rm Amb} =\left(\int_{0}^{\infty} d R \left(\tfrac{R}{L}\right)^{d+\Delta}\right)\times \left(\int_{{\rm\sst (A)dS}} d^{d}x\sqrt{-\epsilon\,g}\,{\cal L}_{\rm\sst (A)dS}\right). \ee However, the latter contains a diverging radial integral so that controlling its gauge invariance becomes ambiguous. A way of solving this problem would be to introduce a cut-off in order to regulate the radial integral, or similarly, a boundary for the ambient space. Then, the presence of the boundary breaks gauge invariance, which can be restored only by adding boundary (total-derivative) terms in the action. The latter are the analogue of the Gibbons-Hawking-York boundary term needed in order to amend the Einstein-Hilbert action on manifolds with boundary. Another equivalent way is suggested by the fact that the ambient space can be considered as a tool to rewrite intrinsic $d$-dimensional (A)dS quantities in a manifestly $SO(1,d)$-covariant form. In this respect, with the aid of a delta function, one can simply rewrite the (A)dS action in the ambient-space language as \be S=\int d^{d}x\,\sqrt{-\epsilon\, g}\,{\cal L}_{\rm\sst (A)dS}= \int d^{d+1}X\,\delta\!\left(\sqrt{\epsilon\, X^{2}}-L\right)\,{\cal L}_{\rm\sst Amb}\,. 
\ee As a candidate for the Lagrangian ${\cal L}_{\rm\sst Amb}$\,, one may think of using the flat $d$-dimensional one with all the $d$-dimensional quantities replaced by $(d+1)$-dimensional ones. However, in general this does not lead to a consistent (A)dS action. The reason is that, because of the delta function, total-derivative terms in ${\cal L}_{\sst\rm Amb}$ no longer vanish but contribute as \be \label{intbyparts} \delta\!\left(\sqrt{\epsilon\, X^{2}}-L\right) \partial_{X^M}\left(\,\cdots\right) =-\,\delta^{\prime}\!\left(\sqrt{\epsilon \,X^{2}}-L\right) \tfrac{\epsilon\,X_M}{\sqrt{\epsilon\,X^{2}}}\,\left(\,\cdots\right)\neq 0\,. \ee Therefore, in order to compensate for these terms, the Lagrangian ${\cal L}_{\sst \rm Amb}$ has to be amended by additional total-derivative contributions. It is worth noticing that the latter vertices contain a lower number of derivatives compared to the initial vertices in ${\cal L}_{\sst \rm Amb}$\,. Actually, this is the ambient-space analogue of what happens in the intrinsic formulation: the replacement of ordinary derivatives by covariant ones requires the inclusion of additional lower-(covariant)derivative vertices in the Lagrangian. As previously discussed, a consistent (A)dS action consists of vertices containing terms with different numbers of derivatives weighted by appropriate powers of $L^{-2}$\,: \be {\cal L}_{\sst\rm Amb}={\cal L}_{\sst\rm Amb}(L^{-2})\,. \ee In the ambient-space formalism, it is convenient to weight such contributions by different derivatives of the delta function: \be \delta^{\sst [n]}\!\left(\sqrt{\epsilon \,X^{2}}-L\right) \qquad \left[\,\delta^{\sst [n]}(R-L)=\left(\tfrac{1}{R}\,\tfrac{d}{dR}\right)^n\,\delta(R-L)\,\right], \ee since the latter naturally appear in the terms \eqref{intbyparts} that need to be compensated. 
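The action of the distributions $\delta^{\sst [n]}$ on powers of $R$ can be worked out by iterated integration by parts, dropping boundary terms. As a check (ours, reading $[a]_{n}$ as the descending Pochhammer, \emph{i.e.} the falling factorial), one can verify symbolically that the coefficient appearing in the identity \eqref{deltafunc} emerges:

```python
import sympy as sp

lam, L = sp.symbols('lam L', positive=True)

# Integrating delta^{[n]}(R-L) R^lam over R, each factor (1/R d/dR) is
# removed by one integration by parts (boundary terms dropped):
#   int (1/R d/dR) g(R) * R^lam dR = -(lam-1) * int g(R) * R^(lam-2) dR
def I(n, power):
    if n == 0:
        return L**power
    return -(power - 1)*I(n - 1, power - 2)

# Compare with eq. (deltafunc): (-2)^n [(lam-1)/2]_n / L^(2n) * L^lam,
# with [a]_n taken as the falling factorial (our reading of the notation)
for n in range(5):
    rhs = (-2)**n * sp.ff((lam - 1)/2, n) / L**(2*n) * L**lam
    assert sp.simplify(I(n, lam) - rhs) == 0
```

For instance, \mt{n=1} gives \mt{-(\lambda-1)\,L^{\lambda-2}}, the lowest case of the general coefficient.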
Indeed, thanks to the following identity: \be \label{deltafunc} \delta^{\sst [n]}(R-L)\,\,R^{\lambda} =\frac{(-2)^{n}\,[(\lambda-1)/2]_{n}}{(L^{2})^{n}}\,\delta(R-L)\,R^{\lambda}\,, \ee arbitrary powers of $L^{-2}$ can always be absorbed into derivatives of the delta function. Therefore, the ambient-space Lagrangian can be expanded as \be\label{L series} \delta\!\left(\sqrt{\epsilon \,X^{2}}-L\right)\,{\cal L}_{\sst \rm Amb}(L^{-2}) =\sum_{n\ge0} \delta^{\sst [n]}\!\left(\sqrt{\epsilon \,X^{2}}-L\right)\,{\cal L}_{\sst\rm Amb}^{\sst [n]}\,, \ee where the ${\cal L}_{\sst\rm Amb}^{\sst [n]}$'s do not involve any power of $L^{-2}$. In order to conveniently handle the above series, it is useful to express $\delta^{\sst [n]}$ by means of an auxiliary variable $\hat{\delta}$ as \be \delta^{\sst [n]}(R-L)=\exp\left(\tfrac{\epsilon\,L}{R}\,\tfrac{d}{dR} \,\tfrac{d}{d\hat\delta}\right)\,\delta(R-L)\, \left(\epsilon\,\tfrac{\hat\delta}L\right)^{n}\,\Big|_{\hat\delta=0}\,. \ee For simplicity, in the following we work with the rule \mt{\delta^{\sst [n]}(R-L)=\delta(R-L)\,(\epsilon\,\hat{\delta}/L)^n}.\footnote{ The $1/L$ in the definition of $\hat\delta$ has been introduced to provide a well-defined flat limit: the corresponding rule in flat space becomes \mt{\delta(N\cdot X)\,\partial_{X^{\sst M}}\,(\,\cdots) =-\,\delta(N\cdot X)\ \hat\delta\ N_{\sst M}\,(\,\cdots)}\,.} The advantage of introducing the auxiliary variable $\hat\delta$ lies in the simple rule for dealing with total derivatives: \be \label{intdeltahat} \delta\Big(\sqrt{\epsilon\,X^2}-L\Big)\,\partial_{X^M}\, (\,\cdots)= -\,\delta\Big(\sqrt{\epsilon\,X^2}-L\Big)\,\tfrac{\hat\delta}{L}\,X_{\sst M}\, (\,\cdots)\,. \ee Moreover, it also allows one to factorize the delta function in the series \eqref{L series} and rewrite the Lagrangian as a polynomial function in $\hat\delta/L$\,: \be {\cal L}_{\sst\rm Amb}={\cal L}_{\sst\rm Amb}(\tfrac{\hat\delta}L)\,. 
\ee \section{Construction of HS cubic interactions in (A)dS} \label{sec: HS cubic} In this section we present the solution to the Noether procedure at the cubic level for arbitrary symmetric HS fields in (A)dS. The logic is the same as in the flat-space case discussed in Section \ref{sec: flat}, thus in the following we mainly focus on those points wherein the peculiarities of (A)dS arise. Apart from the presence of the delta function, the discussions which led to the most general form of the TT parts of the cubic vertices still hold. The only subtleties are related to the total-derivative terms in \eqref{int by part} and \eqref{field redef} which no longer vanish. However, as we shall explain below, their contributions can be reabsorbed into redefinitions of the cubic vertices. Hence, the most general expression for the TT parts of the cubic vertices reads \ba \label{cubicact1} [S^{\sst {(3)}}]_{\sst\rm TT} \eq \int d^{d+1}X\ \delta\Big(\sqrt{\epsilon\,X^2}-L\Big)\, C^{\sst\rm TT}_{\sst {A_{1}A_{2}A_{3}}} \!\left(\tfrac{\hat\delta}L; Y_{i},Z_{i}\right)\times\nn && \qquad \times\, \Phi^{\sst A_{1}}(X_1,U_1)\, \Phi^{\sst A_{2}}(X_2,U_2)\, \Phi^{\sst A_{3}}(X_3,U_3)\, \Big|_{^{ X_i=X}_{ U_i=0}}\,, \ea where the $Y_i$'s and the $Z_i$'s are defined analogously to \eqref{y and z}. Let us remind the reader that, as we have discussed in the previous section, the inhomogeneity of the vertices in the number of derivatives is encoded in the $\hat\delta/L$-dependence of the function $C_{\sst A_{1}A_{2}A_{3}}^{\sst\rm TT}$\,. Whenever a gauge field enters the interactions, the cubic vertices are constrained to satisfy the gauge compatibility condition \eqref{N2---} associated with that field. 
Assuming the first field to be (partially-)massless (\emph{i.e.} \mt{\mu_{1}\in\{0,1,\ldots,s_{1}-1\}}), one gets \be \label{gaugeconscond1} \left[\,C^{\sst\rm TT}_{{a}_{1}{\sst A_{2}A_{3}}}\!\left(\tfrac{\hat\delta}L;Y,Z\right)\,, \, (U_1\!\cdot\partial_{X_1})^{\mu_1+1}\,\right]\Big|_{U_1=0}\approx 0\,, \ee where $\approx$ means again equivalence modulo the $\partial_{X_{i}}^{\,2}$'s\,, $\partial_{U_{i}}\!\cdot\partial_{X_{i}}$'s and $\partial_{U_{i}}^{\,2}$'s\,, \emph{i.e.} modulo the ambient-space Fierz system. Aside from the higher-derivative nature of the gauge transformations, the key difference with respect to the flat case is the non-triviality of the total-derivative terms arising from the commutations of \mt{U_{1}\!\cdot\partial_{X_{1}}} with the $Y_{i}$'s and the $Z_{i}$'s. Let us sketch how these total-derivative terms can be dealt with: \begin{itemize} \item Because of the identity \eqref{intdeltahat}, the total-derivative terms give rise to contributions of order $\hat\delta/L$ and proportional to the operators \mt{X\cdot\partial_{X_{i}}} or \mt{X\cdot\partial_{U_{i}}}\,. \item Appearing right after the delta function, the latter can be replaced by \mt{X_{i}\cdot\partial_{X_{i}}} and \mt{X_{i}\cdot\partial_{U_{i}}} respectively. \item Pushing these operators to the right and making them act on the fields, one can use the HT conditions \eqref{HT} to replace \mt{X_{i}\!\cdot\partial_{X_{i}}} by the corresponding homogeneity degrees and \mt{X_{i}\!\cdot\partial_{U_{i}}} by zero. \end{itemize} All in all, one can recast the condition \eqref{gaugeconscond1} into a higher-order partial differential equation of the form: \be \label{pmdiff} \prod_{n=0}^{\mu_1}\Big[\,Y_2\,\partial_{Z_3}-Y_3\,\partial_{Z_2} +\tfrac{\hat\delta}{L}\left(Y_2\,\partial_{Y_2}-Y_3\,\partial_{Y_3} -\tfrac{\mu_1+\mu_2-\mu_3-2\,n}2 \right)\partial_{Y_{1}}\Big]\, C^{\sst\rm TT}_{a_{1}\sst A_{2}A_{3}}\!\left(\tfrac{\hat\delta}L;Y,Z\right)=0\,. 
\ee It is worth noticing that the \emph{masses} of the other two fields, $\mu_{2}$ and $\mu_{3}$\,, enter the equation as \emph{effective masses}, \mt{\mu_{2}-2\,Y_2\,\partial_{Y_2}} and \mt{\mu_{3}-2\,Y_3\,\partial_{Y_3}}\,, dressed by number operators. Therefore, even in the massless case (\mt{\mu_{2}=\mu_{3}=0}) a mass-like term survives. Again, depending on the number of (partially-)massless fields involved in the interactions, one can have a system of (up to three) differential equations given by the cyclic permutations of eq.~\eqref{pmdiff}. \section{Solutions of HS cubic interactions in (A)dS} \label{sec: sol} In this section we discuss the polynomial solutions of the system of PDEs given by eq.~\eqref{pmdiff} and possible cyclic permutations thereof. Indeed, since the generating function $\Phi(X,U)$ is a formal series in $U^{\sst M}$\,, the latter are the only relevant ones. Our discussion mainly focuses on the interactions involving three massless fields which are of capital importance due to their connections to VE. \subsection{Three massless case} In the three massless case (\mt{\mu_{i}=0}\,, \mt{i=1,2,3}), one has a system of three second order PDEs of the form: \be\label{3massAdS} \Big[\,Y_{{i+1}}\,\partial_{Z_{{i-1}}}\!-Y_{i-1}\,\partial_{Z_{{i+1}}} +\tfrac{\hat\delta}{L}\left(Y_{{i+1}}\,\partial_{Y_{{i+1}}}-Y_{{i-1}}\,\partial_{Y_{{i-1}}} \right)\partial_{Y_{i}}\Big]\, C^{\sst\rm TT}_{a_{1} a_{2}a_{3}}\!\left(\tfrac{\hat\delta}L;Y,Z\right)=0\,, \ee where \mt{[i\simeq i+3]}\,. 
The latter can be solved via standard techniques (the Laplace transform and the method of characteristics), and its solutions are given by \ba \label{solansatz} C^{\sst\rm TT}_{ a_{1}a_{2}a_{3}}\eq \exp\left\{-\tfrac{\hat\delta}{L}\,\left[Z_1\,\partial_{Y_2}\,\partial_{Y_3}+Z_1\,Z_2\,\partial_{Y_3}\,\partial_{G}+\mbox{cyclic}+Z_1\,Z_2\,Z_3\,\partial_G^2\right]\right\} \nn && \ \times\,{\cal K}_{a_{1}a_{2}a_{3}}(Y_1,Y_2,Y_3,G)\,, \ea where ${\cal K}_{a_{1}a_{2}a_{3}}$ is an arbitrary polynomial function of the $Y_i$'s and \mt{G=Y_1\,Z_1+Y_2\,Z_2+Y_3\,Z_3}\,. Notice that in the flat limit, one recovers the coupling \eqref{3flat}. The exponential function provides the correct lower-derivative tails needed for the consistency of the corresponding (A)dS interactions. For instance, considering the lowest-derivative $4\!-\!4\!-\!4$ interaction, \mt{{\cal K}=\lambda_{\sst 4}^{\sst 4\!-\!4\!-\!4}\,G^{4}}\,, one gets \ba\label{444} C^{\sst\rm TT}\eq \lambda_{\sst 4}^{\sst 4\!-\!4\!-\!4}\left[ G^{4}-12\,\tfrac{\hat\delta}L\,Z_{1}\,Z_{2}\,Z_{3}\,G^{2} +12 \left(\tfrac{\hat\delta}L\right)^{\!2}\,Z_{1}^{2}\,Z_{2}^{2}\,Z_{3}^{2} \right]\nn \eq \lambda_{\sst 4}^{\sst 4\!-\!4\!-\!4}\left[G^{4}+\tfrac{12\,\epsilon\,(d+3)}{L^{2}}\,Z_{1}\,Z_{2}\,Z_{3}\,G^{2} +\tfrac{12\,(d+3)(d+5)}{L^{4}}\,Z_{1}^{2}\,Z_{2}^{2}\,Z_{3}^{2}\right]\,, \ea where, in the second line we have used the identity \eqref{deltafunc} in order to replace the powers of $\hat\delta/L$ by those of $L^{-2}$. It is worth mentioning another way of presenting the solution \eqref{solansatz}. 
It consists of encoding all the $\hat{\delta}$ contributions into total derivatives as \be C^{\sst \rm TT}_{a_{1}a_{2}a_{3}}= {\cal K}_{a_{1}a_{2}a_{3}}\big(\tilde{Y}_{1},\tilde{Y}_{2},\tilde{Y}_{3},\tilde{G}\big)\,, \label{amb K} \ee where the $\tilde{Y}_i$'s and $\tilde{G}$ are the (A)dS deformations: \ba \label{amb G} && \tilde{Y}_{i} = Y_{i}+\alpha_{i}\,\partial_{U_{i}}\cdot\partial_{X}\,, \ \qquad\tilde{G} = \sum_{i=1}^{3} (Y_{i} + \beta_{i}\,\partial_{U_{i}}\cdot\partial_{X})\,Z_{i}\,,\nn && (\alpha_{1},\alpha_{2},\alpha_{3}\,;\,\beta_{1},\beta_{2},\beta_{3}) =(\alpha,-\tfrac{1}{\alpha+1},-\tfrac{\alpha+1}{\alpha}\,;\, \beta,-\,\tfrac{\beta+1}{\alpha+1},-\,\tfrac{\alpha-\beta}{\alpha})\,, \ea of the flat-space building blocks $Y_i$'s and $G$\,. The equivalence between the two representations \eqref{solansatz} and \eqref{amb K} of the cubic interactions can be shown by carrying out the integration by parts of all the total-derivative terms present in \eqref{amb K}. Notice that the freedom in $\alpha$ and $\beta$ reflects a redundancy in expressing the building blocks in terms of total derivatives.
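As a cross-check of eq.~\eqref{444}, the action of the exponential operator of eq.~\eqref{solansatz} and the system \eqref{3massAdS} can be verified symbolically, treating $\hat\delta/L$ as a formal parameter. The following \texttt{sympy} sketch is ours (not part of the original derivation): it reproduces the lower-derivative tails of the $4\!-\!4\!-\!4$ coupling and checks that the resulting vertex is annihilated by all three operators of eq.~\eqref{3massAdS}.

```python
import sympy as sp

Y = sp.symbols('Y1 Y2 Y3')
Zs = sp.symbols('Z1 Z2 Z3')
Z1, Z2, Z3 = Zs
y1, y2, y3, g, lam = sp.symbols('y1 y2 y3 g lam')  # lam stands for hat-delta/L

def D(f):
    # one power of the exponent of eq. (solansatz), acting on functions
    # of the independent variables (y1, y2, y3, g)
    return -lam*(Z1*sp.diff(f, y2, y3) + Z2*sp.diff(f, y3, y1)
                 + Z3*sp.diff(f, y1, y2)
                 + Z1*Z2*sp.diff(f, y3, g) + Z2*Z3*sp.diff(f, y1, g)
                 + Z3*Z1*sp.diff(f, y2, g)
                 + Z1*Z2*Z3*sp.diff(f, g, 2))

K = g**4                      # lowest-derivative 4-4-4 coupling (coupling set to 1)
C, term, n = sp.S(0), K, 0
while term != 0:              # exponential series: exp(D) K = sum_n D^n K / n!
    C += term / sp.factorial(n)
    term, n = D(term), n + 1

# the tails of eq. (444)
expected = g**4 - 12*lam*Z1*Z2*Z3*g**2 + 12*lam**2*Z1**2*Z2**2*Z3**2
assert sp.simplify(C - expected) == 0

# substitute G = Y1 Z1 + Y2 Z2 + Y3 Z3 and check the three PDEs (3massAdS)
G = sum(Yi*Zi for Yi, Zi in zip(Y, Zs))
Cfull = C.subs(g, G)
for i in range(3):
    Yi, Yp, Ym = Y[i], Y[(i+1) % 3], Y[(i-1) % 3]
    Zp, Zm = Zs[(i+1) % 3], Zs[(i-1) % 3]
    pde = (Yp*sp.diff(Cfull, Zm) - Ym*sp.diff(Cfull, Zp)
           + lam*(Yp*sp.diff(Cfull, Yi, Yp) - Ym*sp.diff(Cfull, Yi, Ym)))
    assert sp.expand(pde) == 0
```

Both checks pass because the $Z$-derivatives acting on $G$ inside the tails cancel against the number-operator terms, as in the integration-by-parts argument above.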
Finally, let us conclude the discussion on the interactions of massless HS fields by providing the example \eqref{444} in terms of ambient-space tensor contractions: \ba &&[S^{\sst (3)}]_{\sst\rm TT}=\lambda_{\sst 4}^{\sst 4\!-\!4\!-\!4}\,\int d^{d+1}X\ \delta\Big(\sqrt{\epsilon\,X^2}-L\Big)\,\nn &&\times\Big[\Phi_{\sst MNPQ}\,\partial^{\sst M}\,\partial^{\sst N}\,\partial^{\sst P}\,\partial^{\sst Q}\,\Phi_{\sst RSTV}\,\Phi^{\sst RSTV}+8\,\Phi_{\sst MNPQ}\,\partial^{\sst M}\,\partial^{\sst N}\,\partial^{\sst P}\,\Phi_{\sst RSTV}\,\partial^{\sst V}\,\Phi^{\sst RSTQ}\nn &&\quad\ +\,6\,\Phi_{\sst MNTV}\,\partial^{\sst M}\,\partial^{\sst N}\,\Phi_{\sst PQRS}\,\partial^{\sst R}\,\partial^{\sst S}\,\Phi^{\sst PQTV}+12\,\partial_{\sst S}\,\Phi_{\sst MNR}^{\phantom{\sst MNR}\sst T}\,\partial^{\sst M}\,\partial^{\sst N}\,\Phi_{\sst PQRT}\,\partial^{\sst R}\,\Phi^{\sst PQRS}\nn &&\quad\ +\,\tfrac{12\,\epsilon\,(d+3)}{L^{2}}\,\Big(\Phi_{\sst MNS}^{\phantom{\sst MNS}\sst T}\,\partial^{\sst M}\,\partial^{\sst N}\,\Phi_{\sst PQRT}\,\Phi^{\sst PQRS}+2\,\Phi_{\sst MRS}^{\phantom{\sst MRS}\sst T}\,\partial^{\sst M}\,\Phi_{\sst NPQT}\,\partial^{\sst N}\,\Phi^{\sst PQRS}\Big)\nn &&\quad\ +\,\tfrac{4\,(d+3)(d+5)}{L^{4}}\,\Phi_{\sst MNPQ}\,\Phi^{\sst MNRS}\,\Phi^{\sst PQ}_{\phantom{\sst PQ}\sst RS}\Big]\,. \ea \subsection{General cases} The interactions of three massless fields represent a subclass of the interactions one can envisage depending on the values of the $\mu_{i}$'s\,. Let us notice that in the general cases the solutions are given by intersections of the solution spaces of the PDE \eqref{pmdiff} and its cyclic permutations. Therefore, we start our discussion from the solutions of one PDE, for which it is instructive to first analyze the corresponding equation in flat space \eqref{flat PDE}. The latter exhibits a singular point at the value \mt{m_2=m_3}\,. Indeed, away from this value, a rescaling of \mt{m_2^2-m_3^2} is tantamount to a rescaling of $y_1$\,.
Therefore, any polynomial solution with \mt{m_2\neq m_3} can be smoothly deformed to a solution with \mt{m_2=m_3}\,, while the opposite is not true. Consequently, the solution space with \mt{m_2=m_3} is always larger than (or equal to) the one with \mt{m_2\neq m_3}\,. Indeed, an explicit analysis shows that the \mt{m_{2}\neq m_{3}} solutions \mt{{\cal K}(y_{2},y_{3},h_{2},h_{3},z_{1})} can always be expressed in terms of the \mt{m_{2}=m_{3}} solutions \mt{{\cal K}(y_{1},y_{2},y_{3},g,z_{1})}, while the opposite is not true. The latter phenomenon has a richer counterpart in (A)dS, where the role of \mt{m_{2}^{2}-m_{3}^{2}} in eq.~\eqref{flat PDE} is played by the combinations: \be\label{mu value} \mu_1+\mu_2-\mu_3-2\,(Y_2\,\partial_{Y_2}-Y_3\,\partial_{Y_3}+n) \qquad [n=0,\ldots,\mu_{1}]\,, \ee in eq.~\eqref{pmdiff}. Indeed, because of the number operator \mt{Y_2\,\partial_{Y_2}-Y_3\,\partial_{Y_3}}, eq.~\eqref{mu value} may have several vanishing points for \mt{\mu_1+\mu_2-\mu_3\in 2\,\mathbb{Z}}\,. More precisely, at these values, one can consider an ansatz of the form: \be \label{Y3ans} C^{\sst\rm TT}_{a_{1}\sst A_{2}A_{3}}\!\left(\tfrac{\hat\delta}L;Y,Z\right)=Y_{2,3}^{|M|}\,\bar{C}^{\sst\rm TT}_{a_{1}\sst A_{2}A_{3}}\!\left(\tfrac{\hat\delta}L;Y,Z\right)\qquad \left[M=\tfrac{\mu_1+\mu_2-\mu_3-2\,n}2\right], \ee where we use $Y_{2}$ for \mt{M>0}\, and $Y_{3}$ for \mt{M<0}\,. Plugging this ansatz into the original equation \eqref{pmdiff}, one ends up with an analogous equation for $\bar{C}^{\sst\rm TT}_{a_{1}\sst A_{2}A_{3}}$\,, whose $n$-th factor coincides with the operator appearing in the massless case \eqref{3massAdS}. Therefore, the solutions of the massless equation provide solutions of the original equation through the ansatz \eqref{Y3ans}. Notice that, when $\mu_1+\mu_2-\mu_3\notin 2\,\mathbb{Z}$\,, the aforementioned solutions are no longer available since they become non-polynomial.
In all the cases which cannot be covered by the ansatz \eqref{Y3ans}, the solutions can be expressed as arbitrary functions of the building blocks: \be \tilde H_i=\partial_{U_{i-1}}\!\cdot\partial_{X_{i+1}}\,\partial_{U_{i+1}}\!\cdot\partial_{X_{i-1}}\!-\partial_{X_{i+1}}\!\cdot\partial_{X_{i-1}}\,Z_i\,, \ee which are the (A)dS deformations of the flat-space building blocks $h_{i}$ \eqref{flath}. It is worth noticing that this pattern is similar to what happens in flat space, where the $h_{i}$-type solutions exist independently of the mass values, while the massless-type ones (involving $g$) only appear for particular values of the $m_{i}$'s. Moving to the cases in which more than one equation is involved, one has to consider intersections of the corresponding solution spaces. Since in flat space the only \emph{enhancement point} arises for \mt{m_{i}=m_{i+1}}\,, one is led to five different cases: (1) three massless (\mt{m_{1}=m_{2}=m_{3}=0}), (2) two massless and one massive (\mt{m_{1}=m_{2}=0\,, m_{3}\neq 0}), (3) one massless and two massive with different masses (\mt{m_{1}=0\,, m_{2}\neq m_{3}}), (4) one massless and two massive with equal masses (\mt{m_{1}=0\,, m_{2}=m_{3}}), (5) three massive. On the other hand, due to the presence of a richer pattern of enhancement points (\mt{\mu_i+\mu_{i+1}-\mu_{i-1}\in 2\,\mathbb{Z}}), more combinations appear in (A)dS. The analysis of the above cases goes beyond the scope of the present letter, and we refer to the forthcoming paper \cite{Joung:2012} for the detailed discussion. \acknowledgments{ We are grateful to D. Francia, K. Mkrtchyan and A. Sagnotti for helpful discussions. The present research was supported in part by Scuola Normale Superiore, by INFN (I.S. TV12) and by the MIUR-PRIN contract 2009-KHZKRX.}
\section{Introduction} An important advance in the investigation of quantum fluids was recently achieved with the experimental observation of high quantum degeneracy and off-diagonal long-range coherence in a gas of exciton-polaritons in a two-dimensional semiconductor microcavity \cite{kasprzak06,balili07}. High quantum degeneracy has also been observed in long-living polariton systems close to thermal equilibrium \cite{deng06}. Bose-Einstein condensation (BEC) is the most natural way of describing these findings. However, due to the peculiarity of the polariton system, in particular the finite polariton lifetime, the intrinsic 2-D nature and the presence of interface disorder, the existing theoretical frameworks rather interpret the phenomenon in strict analogy either with laser physics \cite{laussy04,schwendimann06} or with the BCS transition of Fermi particles \cite{keeling04,marchetti06,szymanska06}. In particular, the problem of the polariton kinetics and of the non-equilibrium effects, due to the finite polariton lifetime and the relaxation bottleneck, has been investigated in many works \cite{schwendimann06,szymanska06,doan05,sarchi06,wouters07}. From these studies, the main effect of non-equilibrium seems to be a significant depletion of the condensate, with a corresponding loss of long-range coherence \cite{sarchi07}, and the appearance of a diffusive excitation spectrum at low momenta \cite{szymanska06,wouters07}. In spite of the high relevance of these theoretical descriptions, two basic questions remain unanswered. Are the experimental findings correctly interpreted in terms of a quantum field theory of interacting bosons? Could the achievement of polariton BEC give new insights into the fundamental physics of interacting Bose systems? In this work we tackle these two questions, by showing that polaritons can be modeled borrowing from the theory of interacting Bose particles.
In particular, we describe self-consistently the linear exciton-photon coupling and the exciton nonlinearities, by generalizing the Hartree-Fock-Popov (HFP) description of BEC to the case of two coupled Bose fields at thermal equilibrium. In this way, we compute the density-dependent energy shifts and the phase diagram, and we find a very good agreement with the recent experimental findings. Then, we apply the present theory to derive the full set of equations describing the density-density response of the polariton condensate to an external perturbation. Focusing on the photon density response, which is directly related to photoluminescence, we predict a different response of the collective modes to optical (light) or mechanical (coherent phonons) perturbations. Since this behavior is driven by the presence of a coherent exciton field, we suggest that an experiment investigating these features could possibly solve the tricky problem of assessing the nature of the polariton condensate. \section{Theory} \label{sec:theory} The physics of the polariton system is basically that of two linearly coupled oscillators, the exciton and the cavity photon fields \cite{savona95,savona99b}. Considering the limit of low density, the exciton field can be treated as a Bose field, subject to two kinds of interactions, the mutual exciton-exciton interaction and the effective exciton-photon interaction, originating from the saturation of the exciton oscillator strength \cite{laikhtman07}. Therefore, to describe polariton BEC we extend to the case of two coupled interacting Bose fields the formalism adopted in describing the BEC of a single Bose field \cite{shi98,pita03}.
We express the exciton and photon field operators in the Heisenberg representation via the notation \begin{equation} \hat{\Psi}_x({\bf r},t)=\frac{1}{\sqrt{A}}\sum_{\bf k} e^{i{\bf k}\cdot {\bf r}} \hat{b}_{\bf k}(t)\,, \label{eq:fieldopb} \end{equation} and \begin{equation} \hat{\Psi}_c({\bf r},t)=\frac{1}{\sqrt{A}}\sum_{\bf k} e^{i{\bf k}\cdot {\bf r}} \hat{c}_{\bf k}(t)\,, \label{eq:fieldopc} \end{equation} where $A$ is the system area, while $\hat{b}_{\bf k}$ and $\hat{c}_{\bf k}$ are independent Bose operators $([\hat{b}_{\bf k},\hat{c}_{\bf k'}^{\dagger}]=0)$. Notice that in this work we assume scalar exciton and photon fields. However, the theory can be generalized to include their vector nature, accounting for light polarization and exciton spin\footnote{Shelykh {\em et al.} \cite{shelykh06} have recently studied the effects of polarization and spin at $T=0$, within the Gross-Pitaevskii limit restricted to the lower polariton field}. We adopt a finite system area $A$ in order to model the effect of confinement, due both to the intrinsic disorder \cite{langbein02,richard05b} and to the finite size of the excitation spot \cite{deng03,richard05}. While in 2-D, in the thermodynamic limit, the occurrence of BEC would be prevented by the divergence of thermal fluctuations \cite{hohenberg67}, the finite size modifies the density of states, resulting in a finite amount of thermal fluctuations \cite{lauwers03}. The dependence of the results on $A$ is discussed in Section \ref{sec:discuss}. The exciton-photon Hamiltonian, including the exciton non-linearities, reads \begin{equation} \hat{H}=\hat{H}_{0}+\hat{H}_{R}+\hat{H}_{x}+\hat{H}_{s}\,.
\label{eq:Hcomp} \end{equation} The non-interacting exciton-photon Hamiltonian is \begin{equation} \hat{H}_{0}=\sum_{\bf k}\left(\epsilon^{x}_{\bf k} \hat{b}^{\dagger}_{\bf k}\hat{b}_{\bf k}+\epsilon^{c}_{\bf k} \hat{c}^{\dagger}_{\bf k}\hat{c}_{\bf k}\right)\,, \label{eq:nonintH} \end{equation} where $\epsilon^{x}_{\bf k}=\hbar^2k^2/2m_x$ is the free exciton energy dispersion, $m_x$ is the exciton effective mass, $\epsilon^{c}_{\bf k}=\epsilon^{c}_{\bf 0}\sqrt{1+(k/k_z)^2}$ is the free photon dispersion, $\epsilon^{c}_{\bf 0}=\hbar (c/n_c) k_z$, $c$ is the velocity of light, $n_c$ is the refractive index, $k_z=\pi/L_c$, and $L_c$ is the cavity length. The term \begin{equation} \hat{H}_{R}=\hbar\Omega_{R}\sum_{\bf k}(\hat{b}^{\dagger}_{\bf k}\hat{c}_{\bf k}+h.c.)\, \label{eq:lincoupH} \end{equation} describes the linear exciton-photon coupling. The term \begin{equation} \hat{H}_{x}=\frac{1}{2A}\sum_{{\bf k}, {\bf k'}, {\bf q}}v_{x}({\bf k}, {\bf k'}, {\bf q})\hat{b}^{\dag}_{{\bf k}+{\bf q}}\hat{b}^{\dag}_{{\bf k'}-{\bf q}}\hat{b}_{{\bf k'}}\hat{b}_{\bf k}\, \label{eq:XXintH} \end{equation} is the effective exciton-exciton scattering Hamiltonian, modeling both Coulomb interaction and the non-linearity due to the Pauli exclusion principle for electrons and holes forming the exciton. The remaining term \begin{equation} \hat{H}_{s}=\frac{1}{A}\sum_{{\bf k}, {\bf k'}, {\bf q}}v_{s}({\bf k}, {\bf k'}, {\bf q})(\hat{c}^{\dag}_{{\bf k}+{\bf q}}\hat{b}^{\dag}_{{\bf k'}-{\bf q}}\hat{b}_{{\bf k'}}\hat{b}_{\bf k}+h.c.)\, \label{eq:satintH} \end{equation} models the effect of Pauli exclusion on the exciton oscillator strength \cite{laikhtman07}, that is reduced for increasing exciton density \cite{schmitt-rink85}. In this work we account for the full momentum dependence of $v_{x}({\bf k},{\bf k'},{\bf q})$ and $v_{s}({\bf k},{\bf k'},{\bf q})$ \cite{rochat00,okumura01}.
In particular, these potentials vanish at large momentum, thus preventing the ultraviolet divergence typical of a contact potential \cite{pita03}, without introducing an arbitrary cutoff. \subsection{Bogoliubov ansatz} To describe the condensed system, we extend the Bogoliubov ansatz \cite{pita03} to both the exciton and photon Bose fields, \begin{equation} \hat{\Psi}_{x(c)}({\bf r},t)=\Phi_{x(c)}({\bf r},t)+\tilde{\psi}_{x(c)}({\bf r},t)\,, \label{eq:bogans} \end{equation} i.e. the total field is expressed as the sum of a classical symmetry-breaking term $\Phi_{x(c)}$ for the condensate wave function, and of a quantum fluctuation field $\tilde{\psi}_{x(c)}$. The Bogoliubov ansatz requires the introduction of anomalous propagators for the excited particles, describing processes where a pair of particles is scattered into or out of the condensate \cite{shi98,fetter71}. The resulting $16$ thermal propagators, arranged in matrix form (in the energy-momentum representation, assuming a spatially uniform system), are \begin{equation} G({\bf k},i\omega_n)=\left(\begin{array}{c c} g^{xx}({\bf k},i\omega_n) & g^{xc}({\bf k},i\omega_n) \\ g^{cx}({\bf k},i\omega_n) & g^{cc}({\bf k},i\omega_n) \end{array}\right), \label{eq:defgreen} \end{equation} where the elements of each 2 $\times$ 2 matrix block are ($j,l=1,2$; $\chi,\xi=x,c$) \begin{equation} g_{jl}^{\chi\xi}({\bf k},i\omega_n)=-\int_{0}^{\beta}d\tau e^{i\omega_{n}\tau}\langle \hat{O}_{\chi}^{j}\left({\bf k},\tau\right)\hat{O}_{\xi}^{l}\left({\bf k},0\right)^{\dagger}\rangle_{\tau,\beta}\,, \label{eq:gij} \end{equation} $\hbar\omega_n=2\pi n/\beta,n=0,\pm 1,...$ are the Matsubara energies for bosons, $\beta=1/k_BT$ and the symbol $\langle ...\rangle_{\tau,\beta}$ indicates the thermal average of the time ordered product.
Here, to represent the exciton and the photon fields, we adopt the compact notation $\hat{O}_{\xi}^{1}({\bf k})=\hat{O}_{\xi}({\bf k})$, $\hat{O}_{\xi}^{2}({\bf k})=\hat{O}_{\xi}^{\dagger}(-{\bf k})$ and $\hat{O}_{x}({\bf k})=\hat{b}_{\bf k}$, $\hat{O}_{c}({\bf k})=\hat{c}_{\bf k}$. Correspondingly, the generalized one-particle density \begin{equation} n^{\chi\xi}=n_0^{\chi\xi}+\tilde{n}^{\chi\xi}\,, \label{eq:densgener} \end{equation} with $\chi,\xi=x,c$, is separated into the contribution of the condensate $n^{\chi\xi}_0=\Phi^{*}_{\chi}\Phi_{\xi}$ and of the excited particles \begin{equation} \tilde{n}^{\chi\xi}=\sum_{\bf k\neq 0}n^{\chi\xi}_{\bf k}= \sum_{\bf k\neq 0}\langle \hat{O}_{\chi}^{2}({\bf k}) \hat{O}_{\xi}^{1}({\bf k})\rangle\,. \label{eq:excdensgen} \end{equation} This latter quantity represents the excited-state density matrix, expressed in the exciton-photon basis, and it is directly related to the corresponding normal propagator via the well known relation \cite{shi98} \begin{equation} \tilde{n}^{\chi\xi}_{\bf k}=-\int \frac{d\omega}{\pi} \mbox{Im}\{(g^{\chi\xi}_{11})^{ret}({\bf k},\omega)\}n_B(\omega)\,, \label{eq:densgr} \end{equation} where the retarded Green's function \begin{equation} (g^{\chi\xi}_{11})^{ret}({\bf k},\omega)=g^{\chi\xi}_{11}({\bf k},i\omega_n\rightarrow \omega+i0^+) \end{equation} is the analytical continuation to the real axis of the imaginary-frequency Green's function \cite{shi98}. 
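The spectral relation \eqref{eq:densgr} can be illustrated numerically. The sketch below is ours and not part of the text: for a single free bosonic mode with a small Lorentzian broadening $\gamma$ (standing in for the infinitesimal $i0^{+}$), the spectral integral of the retarded propagator returns the Bose occupation $n_{B}(\epsilon)$ of that mode, as it must.

```python
import numpy as np

eps = 1.0        # meV, mode energy (assumed illustration value)
gamma = 0.01     # meV, artificial broadening replacing i0+
kB_T = 0.862     # meV, i.e. T = 10 K

# frequency grid centered on the mode, wide compared to gamma
w = np.linspace(eps - 50*gamma, eps + 50*gamma, 20001)
dw = w[1] - w[0]

g_ret = 1.0 / (w - eps + 1j*gamma)      # retarded propagator of the mode
n_B = 1.0 / np.expm1(w / kB_T)          # Bose distribution

# eq. (eq:densgr): n = -(1/pi) * Integral Im{g_ret} n_B(w) dw
n = -(g_ret.imag * n_B).sum() * dw / np.pi

# should reproduce n_B(eps) up to small broadening/truncation corrections
assert abs(n - 1.0/np.expm1(eps/kB_T)) < 0.05 * n
```

The same integral, applied to the interacting $g^{\chi\xi}_{11}$, picks up one Bose factor per pole, which is how the populations of Section~\ref{sec:discuss} are computed.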
\subsection{Condensate wave function} Within the Popov approximation, for a uniform system, the two coupled equations for the condensate amplitudes are \begin{eqnarray} & i\hbar\dot{\Phi}_{x}&=[\epsilon^{x}_0 -2 \mbox{Re}\{\bar{v}_{s}({\bf 0,0})n^{xc}_{\bf 0}+ 2\left.\sum_{\bf k}\right.^{\prime} \bar{v}_{s}({\bf k,0})\tilde{n}^{xc}_{\bf k}\} \nonumber \\ &&+(\bar{v}_x({\bf 0,0})n^{xx}_{\bf 0}+2\left.\sum_{\bf k}\right.^{\prime} \bar{v}_x({\bf k,0})\tilde{n}^{xx}_{\bf k})]\Phi_{x}\nonumber \\ &&+(\hbar\Omega_{R}-\sum_{\bf k} \bar{v}_s({\bf k,0})n^{xx}_{\bf k})\Phi_{c}, \nonumber \\ &i\hbar\dot{\Phi}_{c}&=\epsilon^{c}_0 \Phi_c +\hbar\Omega_{R}\Phi_{x}\nonumber \\ &&-[\bar{v}_{s}({\bf 0,0})n^{xx}_{\bf 0}+2\left.\sum_{\bf k}\right.^{\prime} \bar{v}_{s}({\bf k,0})\tilde{n}^{xx}_{\bf k}]\Phi_{x} \label{eq:GPeq} \end{eqnarray} where $\left.\sum_{\bf k}\right.^{\prime}=\sum_{{\bf k}\neq 0}$ and \begin{equation} \bar{v}_{x(s)}({\bf k,q})=\frac{1}{2}\left[v_{x(s)}({\bf k,q,0})+v_{x(s)}({\bf k,q,k-q})\right]\,. \end{equation} We assume that both the condensate fields evolve with the same characteristic frequency $E/\hbar$, i.e. \begin{equation} \Phi_{x(c)}(t)=e^{-i \frac{E}{\hbar}t}\Phi_{x(c)}(0)\,. \end{equation} Inserting this time evolution into Eq.
(\ref{eq:GPeq}), we obtain a generalized set of two coupled time-independent Gross-Pitaevskii equations, which can be formally written in the matrix form \begin{equation} E\left(\begin{array} {c} X_0 \\ C_0 \end{array}\right)=\hat{L}^{GP}\left(\begin{array} {c} X_0 \\ C_0 \end{array}\right)\,,\label{eq:GPeq_matr} \end{equation} where \begin{eqnarray} \hat{L}^{GP}_{11}&=&\epsilon^{x}_0-2 \mbox{Re}\{\bar{v}_{s}({\bf 0,0})n^{xc}_{\bf 0}+ 2\left.\sum_{\bf k}\right.^{\prime} \bar{v}_{s}({\bf k,0})\tilde{n}^{xc}_{\bf k}\} \\ &&+(\bar{v}_x({\bf 0,0})n^{xx}_{\bf 0}+2\left.\sum_{\bf k}\right.^{\prime} \bar{v}_x({\bf k,0})\tilde{n}^{xx}_{\bf k}) \nonumber \\ \hat{L}^{GP}_{12}&=&\hbar\Omega_{R}-\sum_{\bf k} \bar{v}_s({\bf k,0})n^{xx}_{\bf k} \nonumber \\ \hat{L}^{GP}_{21}&=&\hbar\Omega_{R}-\bar{v}_{s}({\bf 0,0})n^{xx}_{\bf 0}-2\left.\sum_{\bf k}\right.^{\prime} \bar{v}_{s}({\bf k,0})\tilde{n}^{xx}_{\bf k} \nonumber \\ \hat{L}^{GP}_{22}&=&\epsilon^{c}_0\,, \label{eq:GPeq_matr2} \end{eqnarray} we have defined the normalized Hopfield coefficients of the condensate state as $\Phi_{x}=X_0 \Phi$ and $\Phi_{c}=C_0 \Phi$, satisfying $|X_0|^2+|C_0|^2=1$, and $n_0=|\Phi|^2$ is the actual density of the polariton condensate. The two solutions $E=E^{lp(up)}$ of Eq. (\ref{eq:GPeq_matr}) define the lower and upper polariton condensate modes \begin{equation} \Phi_{lp(up)}=X^{lp(up)*}_0{\Phi}_{x}+C^{lp(up)*}_0{\Phi}_{c}\,. \end{equation} The condensate energy is given by the lower energy solution $E_0^{lp}$, which corresponds to the minimal energy of the polariton states. In the present U(1) symmetry-breaking approach, we can identify the condensate energy with the chemical potential of the polariton system, i.e. $E_0^{lp}=\mu$. The grand-canonical thermal average has to be taken accordingly. 
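In the low-density limit all the self-energy corrections in $\hat{L}^{GP}$ drop out and eq.~\eqref{eq:GPeq_matr} reduces to the usual two-oscillator exciton-photon problem. The following minimal numerical sketch (ours, not from the text) diagonalizes this limit with the parameters quoted in Section~\ref{sec:discuss} ($\hbar\Omega_{R}=7$~meV, detuning $\delta=3$~meV) and recovers the analytic polariton energies and the normalized Hopfield coefficients $(X_{0},C_{0})$:

```python
import numpy as np

hbar_omega_R = 7.0   # meV (value used in Sec. "Results and discussion")
eps_x = 0.0          # meV, energies measured from the bare exciton
eps_c = 3.0          # meV, i.e. detuning delta = 3 meV

# low-density limit of L^GP: interaction self-energies neglected
L_GP = np.array([[eps_x,        hbar_omega_R],
                 [hbar_omega_R, eps_c       ]])

E, vecs = np.linalg.eigh(L_GP)      # ascending order: E[0]=E_lp, E[1]=E_up
E_lp, E_up = E

# analytic two-oscillator result
delta = eps_c - eps_x
root = np.hypot(delta / 2, hbar_omega_R)
assert np.isclose(E_lp, (eps_x + eps_c) / 2 - root)
assert np.isclose(E_up, (eps_x + eps_c) / 2 + root)

# condensate Hopfield coefficients, |X0|^2 + |C0|^2 = 1; the positive
# detuning makes the lower-polariton condensate exciton-like
X0, C0 = vecs[:, 0]
assert np.isclose(X0**2 + C0**2, 1.0)
assert X0**2 > 0.5
```

In the full theory the matrix entries of eq.~\eqref{eq:GPeq_matr2} depend on the populations, so this diagonalization is the innermost step of the self-consistent loop described below.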
\subsection{Beliaev equations} In analogy with the standard field theory for a single Bose field \cite{shi98}, the $4\times 4$ matrix propagator $G({\bf k},i\omega_n)$ obeys the Dyson-Beliaev equation \begin{equation} G\left({\bf k},i\omega_n\right)=G^{0}\left( {\bf k},i\omega_n\right)\left[{\bf 1}+\Sigma\left({\bf k},i\omega_n\right) G\left({\bf k},i\omega_n\right)\right], \label{eq:dyson_kdep} \end{equation} where we have introduced the matrix of the non-interacting propagators \begin{equation} G^{0}\equiv \{g^{0}_{jl}({\bf k},i\omega_n)\}_{jl}^{\chi\xi}=\delta_{\chi\xi}\delta_{jl}[(-)^{j}i\omega_n-\epsilon_{\bf k}^{(\xi)}+\mu]^{-1} \end{equation} and the $4\times 4$ self-energy matrix \begin{equation} \Sigma({\bf k},i\omega_n)=\left(\begin{array}{c c} \Sigma^{xx}( {\bf k},i\omega_n) & \Sigma^{xc}( {\bf k},i\omega_n) \\ \Sigma^{cx}({\bf k},i\omega_n) & \Sigma^{cc}({\bf k},i\omega_n) \end{array}\right)\,. \label{eq:selfen_kdep} \end{equation} Within the HFP limit, the self-energy elements are independent of frequency and read \begin{eqnarray} &&\Sigma_{jj}^{xx}({\bf k})=2\sum_{\bf q}\left[\bar{v}_x({\bf k,q}) n^{xx}_{\bf q}-\bar{v}_{s}({\bf k,q})\left(n^{cx}_{\bf q}+n^{xc}_{\bf q}\right)\right], \nonumber \\ &&\Sigma_{12}^{xx}({\bf k})=\left(\Sigma_{21}^{xx}\right)^{*}=\bar{v}_x({\bf k,0}) \Phi_{x}^{2}-2\bar{v}_{s}({\bf k,0})\Phi_{x}\Phi_{c}, \nonumber \\ &&\Sigma_{11}^{xc}({\bf k})=\Sigma_{22}^{xc}({\bf k})=\hbar\Omega_{R}\left(1-2\sum_{\bf q} \bar{v}_s({\bf k,q})n^{xx}_{\bf q}\right), \nonumber \\ &&\Sigma_{12}^{xc}({\bf k})=\left(\Sigma_{21}^{xc}({\bf k})\right)^{*}=-\bar{v}_{s}\Phi_{x}^{2}, \label{eq:sigmapopov_kdep} \end{eqnarray} while $\Sigma_{jl}^{cx}({\bf k})=\Sigma_{jl}^{xc}(-{\bf k})$ and $\Sigma_{jl}^{cc}({\bf k})=0$. The solutions of Eq.~(\ref{eq:dyson_kdep}) can be written analytically in terms of the self-energy elements and the unperturbed propagators.
For example we obtain \begin{equation} g^{xx}_{11}({\bf p})=\frac{g^{x}_{0}({\bf p}) \left[ 1 - g^{x}_{0}(-{\bf p}) N_D^*({\bf p}) \right]}{\left| 1 - g^{x}_{0}({\bf p}) N_D({\bf p}) \right|^2 - \left| g^{x}_{0}({\bf p}) N_B({\bf p}) \right|^2}\,, \label{eq:sol_dys} \end{equation} where ${\bf p}\equiv {\bf k},i\omega_n$, \begin{eqnarray} N_D({\bf p})&=&\Sigma_{11}^{xx}({\bf k}) + g_{0}^{c}({\bf p}) |\Sigma_{11}^{xc}({\bf k})|^2 + g_{0}^{c}(-{\bf p}) |\Sigma_{12}^{xc}({\bf k})|^{2}\,, \nonumber \\ N_B({\bf p})&=&\Sigma_{12}^{xx}({\bf k}) + \left[g_{0}^{c}({\bf p}) + g_{0}^{c}(-{\bf p})\right]\Sigma_{11}^{xc}({\bf k}) \Sigma_{12}^{xc}({\bf k})\,, \nonumber \end{eqnarray} and \begin{equation} g^{xx}_{21}({\bf p})=\frac{g^{x}_{0}(-{\bf p}) N^*_B({\bf p})}{\left[ 1 - g^{x}_{0}(-{\bf p}) N^*_D({\bf p}) \right]}g^{xx}_{11}({\bf p})\,. \label{eq:sol_dys2} \end{equation} For each value of ${\bf k}$, the analytic continuation of each Green's function $g_{jl}^{\chi\xi}({\bf k},z)$ shares the same four simple poles at $z=\pm E^{lp(up)}_{{\bf k}}$, i.e. \begin{eqnarray} g^{xx}_{11}({\bf k},z)&=&\sum_{j=lp,up}\frac{|X^{j}_u({\bf k})|^2}{z-E^{j}({\bf k})} + \frac{|X^{j}_v({\bf k})|^2}{z+E^{j}({\bf k})^*} \nonumber \\ g^{xx}_{12}({\bf k},z)&=&\sum_{j=lp,up}\frac{X^{j}_u({\bf k})^*X^{j}_v({\bf k})}{z-E^{j}({\bf k})} + \frac{X^{j}_v({\bf k})^*X^{j}_u({\bf k})}{z+E^{j}({\bf k})^*} \nonumber \\ g^{cc}_{11}({\bf k},z)&=&\sum_{j=lp,up}\frac{|C^{j}_u({\bf k})|^2}{z-E^{j}({\bf k})} + \frac{|C^{j}_v({\bf k})|^2}{z+E^{j}({\bf k})^*} \nonumber \\ g^{xc}_{11}({\bf k},z)&=&\sum_{j=lp,up}\frac{X^{j}_u({\bf k})^*C^{j}_u({\bf k})}{z-E^{j}({\bf k})} + \frac{X^{j}_v({\bf k})^*C^{j}_v({\bf k})}{z+E^{j}({\bf k})^*} \nonumber \,, \label{eq:sol_Hopf} \end{eqnarray} and so on.\footnote{Here we write the general expression with complex energies $E^{lp(up)}({\bf k})$. 
Within such a notation, the formulas can be in principle extended to include a phenomenological imaginary part to the energies, in order to account for the finite radiative lifetime of polaritons.} The poles of the propagators represent the positive and negative Bogoliubov-Beliaev eigen-energies of the lower- and upper-polariton modes. The residue at each pole depends on the corresponding generalized Hopfield coefficients. We point out that the polariton excitation modes for a given ${\bf k}$ can also be obtained by directly diagonalizing the problem \begin{equation} E({\bf k})\left(\begin{array}{c}X_u \\ X_v \\ C_u \\ C_v \end{array}\right)({\bf k})=\hat{L}_{HFP}({\bf k}) \left(\begin{array}{c} X_u \\ X_v \\ C_u \\ C_v \end{array}\right)({\bf k})\,, \end{equation} with \begin{equation} \hat{L}_{HFP}=\left(\begin{array}{cccc} \tilde{\epsilon}_{\bf k}^x + \Sigma_{11}^{xx} & \Sigma_{12}^{xx} & \Sigma^{xc}_{11} & \Sigma_{12}^{xc} \\ -\Sigma_{21}^{xx} & -(\tilde{\epsilon}_{\bf k}^x +\Sigma_{22}^{xx})^* & -\Sigma^{xc}_{21} & -\Sigma_{22}^{xc} \\ \Sigma^{cx}_{11} & \Sigma_{12}^{cx} & \tilde{\epsilon}_{\bf k}^c - \mu & 0 \\ -\Sigma_{21}^{xc} & -\Sigma^{cx}_{11} & 0 & -\tilde{\epsilon}_{\bf k}^{c*}\end{array}\right)\,, \end{equation} and $\tilde{\epsilon}_{\bf k}^{x(c)}=\epsilon_{\bf k}^{x(c)} -\mu$. The components of the 4 eigenvectors ${\bf h}_j(k)\equiv (X_u, X_v, C_u, C_v)_j(k)$ ($j=1,...,4$) are again the generalized Hopfield coefficients corresponding to the normal $(X_u, C_u)$ and anomalous $(X_v, C_v)$ components of the polariton field, in analogy with the one-field HFP theory.
They obey the normalization relation \begin{equation} |X^{j}_u|^2-|X^{j}_v|^2+|C^{j}_u|^2-|C^{j}_v|^2=1\,, \label{eq:genHopfnorm} \end{equation} a condition which guarantees that the operator destroying the lower (upper) polariton excitation with wave vector ${\bf k}$, \begin{eqnarray} \hat{\pi}^{j}_{\bf k} &=& X^{j}_u ({\bf k}) \hat{b}_{\bf k} + X^{j}_v({\bf k}) \hat{b}^{\dagger}_{-\bf k} \nonumber \\ &&+ C^{j}_u({\bf k}) \hat{c}_{\bf k} + C^{j}_v({\bf k}) \hat{c}^{\dagger}_{-\bf k} \,, \label{eq:piHopf} \end{eqnarray} $j=lp,up$, obeys Bose commutation rules. The lower (upper) polariton one-particle operators $\hat{p}^{j}_{\bf k}$ are then defined by \begin{equation} \hat{\pi}^{j}_{\bf k} = u^{j}({\bf k}) \hat{p}_{\bf k}+ v^{j}(-{\bf k})^* \hat{p}_{-\bf k}^{\dagger} \,, \end{equation} where the normal and anomalous polariton coefficients are given by \begin{eqnarray} &&u^{j}({\bf k})=\left[X^{j}_u ({\bf k}) + C^{j}_u ({\bf k})\right]^{1/2}\,, \nonumber \\ &&v^{j}({\bf k})=\left[X^{j}_v ({\bf k}) + C^{j}_v ({\bf k})\right]^{1/2}\,, \nonumber \\ &&|u^{j}({\bf k})|^2-|v^{j}({\bf k})|^2=1\,. \label{eq:quantumflpol} \end{eqnarray} The normal modes of excitation are thermally populated via the Bose distribution \begin{equation} \bar{N}^{j}_{\bf k}\equiv \langle \hat{\pi}^{j\dagger}_{\bf k} \hat{\pi}^{j}_{\bf k} \rangle =\frac{1}{e^{\beta E^{j}_{\bf k} }-1}\,, \end{equation} while the lower- and upper-polariton one-particle densities are given by \begin{equation} \tilde{n}^{j}_{\bf k}\equiv \frac{1}{A} \left[\left(|u^{j}({\bf k})|^2+|v^{j}({\bf k})|^2\right) \bar{N}^j_{\bf k} + |v^{j}({\bf k})|^2\right]\,. \label{eq:quantumflpol2} \end{equation} The first and the second term of the sum represent the thermal and quantum fluctuations, respectively.
Therefore, for a fixed total polariton one-particle density $n_{p}$, the density of the polariton condensate is given by \begin{equation} n_0 \equiv |\Phi|^2=n_{p}-\sum_{{\bf k}\neq 0} [\tilde{n}^{lp}_{\bf k}+\tilde{n}^{up}_{\bf k}]\,. \label{eq:condfr} \end{equation} From $n_0$, the exciton and the photon condensed densities are finally obtained via Eq.~(\ref{eq:GPeq_matr2}). Hence, for a given polariton density $n_p$ and temperature $T$, a self-consistent solution can be obtained by solving iteratively Eqs. (\ref{eq:GPeq_matr}), (\ref{eq:dyson_kdep}), (\ref{eq:densgr}) and (\ref{eq:condfr}), until convergence of the chemical potential $\mu$ and the density matrix ${n}_{\chi\xi}({\bf k})$ is reached. From this self-consistent solution, we obtain the exciton and photon components of the condensate fraction as well as the spectrum of collective excitations and the one-particle populations. We point out that the self-consistent solution must be independent of the initial condition used in Eq.~(\ref{eq:dyson_kdep}) and (\ref{eq:GPeq_matr}). Fast convergence of the iterative procedure in the numerical calculations is obtained by starting from the ideal gas solution, i.e. the solution obtained by neglecting the two-body interactions, and considering the resulting polariton states occupied according to the Bose distribution. \section{Results~and~discussion} \label{sec:discuss} For the calculations we adopt parameters modeling typical GaAs-based microcavity samples \cite{balili07,deng06,deng03}; in particular we assume a linear coupling strength $\hbar\Omega_{R}=7~\mbox{meV}$, corresponding to 12 embedded GaAs quantum wells, and the photon-exciton detuning $\delta=\epsilon^{c}_0-\epsilon^{x}_0=3$~meV. Unless otherwise specified, we consider a system area $A = 1000~\mu \mbox{m}^2$ and a polariton temperature $T=10$~K. For the interaction potentials $v_x$ and $v_s$, we use momentum-dependent values following Rochat et al.~\cite{rochat00}.
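Before turning to the full self-consistent results, a rough order-of-magnitude estimate of the finite-size condensation threshold can be obtained in the ideal-gas starting point of the iteration, by populating the discrete lower-polariton modes on the area $A$ with the Bose distribution. The sketch below is ours; the branch masses are assumed values (exciton mass $\sim 0.2\,m_e$, an order-one photon kinetic coefficient), so only the order of magnitude should be compared with the phase diagram discussed next.

```python
import numpy as np

hbar_omega_R = 7.0     # meV (text)
delta = 3.0            # meV, photon-exciton detuning (text)
c_x = 1.9e-4           # meV*um^2, assumed hbar^2/(2 m_x) for m_x ~ 0.2 m_e
c_c = 1.0              # meV*um^2, assumed hbar^2/(2 m_c) for the cavity photon
A = 1000.0             # um^2 (text)

def n_crit(T_K, n_side=400):
    """Ideal-gas estimate of the critical density (um^-2) on area A."""
    kB_T = 0.08617 * T_K                     # meV
    dk = 2.0 * np.pi / np.sqrt(A)            # um^-1, momentum spacing
    n = np.arange(-n_side, n_side + 1)
    kx, ky = np.meshgrid(n * dk, n * dk)
    k2 = kx**2 + ky**2
    eps_x = c_x * k2                         # nearly flat exciton branch
    eps_c = delta + c_c * k2                 # cavity-photon branch
    # lower eigenvalue of the two-oscillator matrix at each k
    E_lp = 0.5 * (eps_x + eps_c) - np.sqrt(0.25 * (eps_c - eps_x)**2
                                           + hbar_omega_R**2)
    E = E_lp - E_lp.min()                    # measure from the band bottom
    occ = 1.0 / np.expm1(np.clip(E, 1e-12, None) / kB_T)
    occ[E <= 0.0] = 0.0                      # exclude the condensate mode
    return occ.sum() / A

print(n_crit(10.0))  # order-of-magnitude estimate at T = 10 K
```

The flat exciton-like tail of the lower branch dominates the sum at higher temperatures, anticipating the sharp rise of the transition density discussed below.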
\subsection{Spectral and thermodynamic properties} In Fig. \ref{fig:disp} we show the energy-momentum dispersion of the collective excitations, $\pm E^{lp}_{{\bf k}}$ and $\pm E^{up}_{{\bf k}}$, as obtained at the critical polariton density $n_p=5~\mu\mbox{m}^{-2}$ and far above the critical density, i.e. $n_p=50~\mu\mbox{m}^{-2}$. The curves correspond to the positive- and negative-weight resonances for the lower- and the upper-polariton. We notice that, for the largest value of $n_p$, the polariton splitting decreases, due to both the exciton saturation, decreasing the effective exciton-photon coupling $\Sigma^{xc}_{11}$, and the change of the exciton-photon detuning produced by the exciton blueshift, given by $\Sigma_{11}^{xx}$. However this variation is quantitatively small, suggesting that, at equilibrium, the polariton structure should be robust even far above the condensation threshold. We also mention that, close to zero momentum, the dispersion of the lower polariton branch, above threshold, becomes linear, giving rise to phonon-like Bogoliubov modes, as in the standard equilibrium single-field theory \cite{pita03}. \begin{figure} \includegraphics*[width=\linewidth,height=.595\linewidth]{fig1.eps} \caption{The dispersion of the normal modes of the system for polariton density $n_p=5 \mu\mbox{m}^{-2}$ (solid) and $n_p=50 \mu\mbox{m}^{-2}$ (dashed). The uncoupled photon (dotted) and exciton (dash-dotted) modes are also shown.}\label{fig:disp} \end{figure} The modification of the energy splitting between the lower and the upper polariton branch is accurately characterized in Fig. \ref{fig:shifts}, where the energy shifts of the two polariton modes at ${\bf k}=0$ are plotted as a function of the density. Exciton saturation and interactions result in a global blue-shift of the lower polariton and a red-shift of the upper polariton. The shifts are linear as a function of the density, but their slope varies close to threshold.
As highlighted in the inset, the slope of the lower polariton shift changes by a factor of two across the threshold, because the contribution of the condensed populations ($n^{0}_{xx}$, $n^0_{xc}$) is one half of the contribution of the thermal populations ($\tilde{n}_{xx}$, $\tilde{n}_{xc}$), as seen in Eq.~(\ref{eq:GPeq}). \begin{figure} \includegraphics[width=\linewidth,height=.6\linewidth]{fig2.eps} \caption{Lower (solid) and upper (dashed) polariton energies at $k=0$ vs polariton density $n_p$. Inset: Double logarithmic plot of the lower polariton energy. The thin dotted lines highlight the two different slopes below and above the density threshold.}\label{fig:shifts} \end{figure} We now turn to the thermodynamic properties of the system. In Fig. \ref{fig:phdiagr}(a), we report the density-temperature BEC phase diagram, as computed for $A=1000~\mu \mbox{m}^2$. In the calculations, the phase boundary has been defined by a polariton condensate fraction exceeding $1\%$. In the plot, a few values of the quantity $|X_0|^2$, the exciton fraction of the polariton condensate, are indicated along the phase boundary. This fraction decreases for increasing density, owing to the exciton saturation and the change in detuning. For very large densities it eventually vanishes, corresponding to the crossover to a photon-laser regime. However, for the studied GaAs model microcavity, the variation of the exciton fraction of the condensate field remains very small up to densities far above the experimentally estimated polariton density \cite{deng06,deng03}. This is basically due to the positive cavity detuning, sufficiently large to be robust against the exciton nonlinear energy shift. 
On the other hand, due to the same feature and to the flat energy dispersion of the exciton-like lower polariton states, a large population can be accommodated in the excited states when the temperature exceeds $25$~K, thus dramatically increasing the BEC transition density and eventually leading to the direct occurrence of photon lasing. In particular, for this system, we predict that equilibrium polariton BEC is impossible for temperatures larger than $30$~K. In Fig. \ref{fig:phdiagr}(b), we show a detail of the low-$T$ region of the phase diagram, computed for two different system areas, $A=100~\mu \mbox{m}^2$ and $A=1000~\mu \mbox{m}^2$. In a homogeneous two-dimensional system, in the limit of infinite size, a true condensate cannot exist due to the divergence of low-energy thermal fluctuations. A transition to a superfluid state is instead expected, giving rise to the Berezinskii-Kosterlitz-Thouless crossover with spontaneous unbinding of vortices. The divergence of the condensate fluctuations has, however, a logarithmic dependence on the system size. Fig. \ref{fig:phdiagr}(b) shows this behaviour as a slow increase of the critical density for increasing $A$. Quantitatively, the critical density varies by no more than a factor of 2 at $T=1$~K for the two considered values of the system area. This difference becomes even smaller at larger temperatures. The predicted dependence on the system size could be experimentally verified only in samples with improved interface quality, manifesting thermalization at very low polariton temperature \cite{sarchi07b}. \begin{figure} \includegraphics*[width=\linewidth,height=1.0\linewidth]{fig3.eps} \caption{(a) Phase diagram of the polariton condensation, computed for $A=1000~\mu \mbox{m}^2$. The exciton fraction in the condensate $|X_0|^2$, along the phase boundary, is indicated in boxes. 
(b) Detail of the low-$T$ region where we also display the transition boundary computed for $A=100~\mu \mbox{m}^2$.}\label{fig:phdiagr} \end{figure} \section{Linear response in the HFP limit} Within the present theory, it is easy to compute the density fluctuation of the exciton or the photon field produced by a perturbation acting on either of them. This quantity is particularly interesting because, as we suggest later, it might be used to study the nature of the polariton condensate via fully optical or mechanical perturbations. We consider the time-dependent perturbation \begin{equation} \hat{H}_{pert}(t)=\int d{\bf r} \hat{n}^{\chi\chi}({\bf r},t)V_{ext}({\bf r},t)\,, \label{eq:hpert} \end{equation} driven by the external potential $V_{ext}({\bf r},t)$ affecting the field $\hat{\Psi}_{\chi}({\bf r},t)$ ($\chi=x,c$). This perturbation results in a density fluctuation \begin{equation} \delta n^{\xi\xi}({\bf r},t) \equiv n^{\xi\xi}({\bf r},t) - n_{eq}^{\xi\xi}({\bf r},t)\,, \end{equation} ($\xi=x,c$) around the equilibrium value $n_{eq}^{\xi\xi}({\bf r},t)$. Within the linear response limit, this fluctuation is given by \cite{pita03,fetter71} \footnote{We only consider the response of the exciton and/or the photon density to a perturbation affecting one of the two species. For this reason we do not consider expressions involving the non-diagonal terms of the density operator $\hat{n}^{xc}$.} \begin{equation} \delta n^{\xi\chi}({\bf r},t)=\frac{i}{\hbar}\int_{-\infty}^t dt' \int d{\bf r'} C_{\xi\chi}({\bf r'},t';{\bf r},t) V_{ext}({\bf r'},t')\,, \label{eq:densresp} \end{equation} where \begin{equation} C_{\xi\chi}({\bf r'},t';{\bf r},t)=\langle\left[\hat{n}^{\chi\chi}({\bf r'},t'),\hat{n}^{\xi\xi}({\bf r},t)\right]\rangle\,.\label{eq:denscomm} \end{equation} For a spatially uniform system in steady state, the density commutator Eq. (\ref{eq:denscomm}) only depends on ${\bf r-r'}$ and $t-t'$. Then, by Fourier transforming Eq. 
(\ref{eq:densresp}), we get the expression \begin{eqnarray} \delta n^{\xi\chi}({\bf k},\omega)=D^{r}_{\xi\chi}({\bf k},\omega)V_{ext}({\bf k},\omega)\,, \label{eq:densresp2} \end{eqnarray} where \begin{equation} D^{r}_{\xi\chi}({\bf k},\omega)=-\frac{1}{\hbar}\int \frac{d\omega'}{2\pi}\frac{C_{\xi\chi}({\bf k},\omega-\omega')}{\omega'+i0^+} \end{equation} is the retarded density-density correlation \cite{fetter71}, and $C_{\xi\chi}({\bf k},\omega)$ is the Fourier transform of $C_{\xi\chi}({\bf r},t)$. Using the framework developed in Section \ref{sec:theory}, we write the real-time density operators $\hat{n}^{\chi\chi}({\bf r},t)$ in the polariton basis, via Eqs.~(\ref{eq:fieldopb}-\ref{eq:fieldopc}) and by inverting the transformation Eq. (\ref{eq:piHopf}) \begin{equation} \hat{O}_{\chi}({\bf k},t)=\sum_{j=lp,up}\Pi^j_{\chi}({\bf k})e^{-i\frac{E_{\bf k}^{j}}{\hbar} t} \hat{\pi}^{j}_{\bf k}+\Upsilon^j_{\chi}({\bf k}) e^{i\frac{E_{\bf k}^{j}}{\hbar} t} \hat{\pi}^{j\dagger}_{-\bf k}\,. \end{equation} Here, the factors $\Pi^j_{\chi}({\bf k})$ and $\Upsilon^j_{\chi}({\bf k})$ define the components of the exciton (photon) field on the forward and backward propagating lower and upper polariton eigenmodes. Within the HFP limit, the density commutator consists of three contributions \cite{minguzzi97}\footnote{In the HFP approximation, the coupling between the density fluctuations of the condensate and the non condensate is neglected, because the HFP ground state is defined as the vacuum of quasiparticles. For a trapped gas of atoms, this approximation is found to be in qualitative agreement with experiments, although, close to $T_c$, deviations are reported. A better quantitative agreement has been obtained beyond the HFP limit, by means of Random Phase Approximation techniques \cite{minguzzi97,minguzzi04}. 
For our present purposes, however, the HFP limit is an adequate approximation.} \begin{eqnarray} C_{\xi\chi}({\bf r},t)&=&^0C_{\xi\chi}^{I}({\bf r},t)+^0C_{\xi\chi}^{II}({\bf r},t)+\tilde{C}_{\xi\chi}({\bf r},t) \\ &=&\left(\Phi^*_{\xi}(0)\Phi_{\chi}(t)\langle \left[\tilde{\psi}_{\xi}({\bf 0},0),\tilde{\psi}_{\chi}^{\dagger}({\bf r},t)\right]\rangle-h.c.\right)\nonumber \\ &+&\left(\Phi^*_{\xi}(0)\Phi^*_{\chi}(t)\langle \left[\tilde{\psi}_{\xi}({\bf 0},0),\tilde{\psi}_{\chi}({\bf r},t)\right]\rangle-h.c.\right)\nonumber \\ &+&\langle \left[\tilde{\psi}^{\dagger}_{\xi}({\bf 0},0)\tilde{\psi}_{\xi}({\bf 0},0),\tilde{\psi}_{\chi}^{\dagger}({\bf r},t)\tilde{\psi}_{\chi}({\bf r},t)\right]\rangle \nonumber\,, \end{eqnarray} the first two terms arising from the presence of the condensate fields. Correspondingly, the retarded density-density correlation can be written as the sum of three terms \begin{equation} D^{r}_{\xi\chi}({\bf k},\omega)= \hspace{.05cm}^{0}D^{I}_{\xi\chi}({\bf k},\omega)+\hspace{.05cm}^0D^{II}_{\xi\chi}({\bf k},\omega)+\tilde{D}_{\xi\chi}({\bf k},\omega)\,. \end{equation} The first two terms survive only in the presence of a condensate while the third one only depends on the thermal population (however it is affected by the modification of the one-particle spectrum induced by the condensate). In detail, the first term describes the excitation of particles out of the condensate and it is given by \begin{equation} ^0D^{I}_{\xi\chi}({\bf k},\omega)=\sum_{j=lp,up}\left[N^j_{\xi\chi}({\bf k},\omega)+N^j_{\xi\chi}({\bf k},-\omega)^*\right]\,, \end{equation} with \begin{equation} N^j_{\xi\chi}({\bf k},\omega)=\frac{\Phi_{\xi}\Phi^*_{\chi}\Pi_{\xi}^{j*}({\bf k})\Pi_{\chi}^j({\bf k})+\Phi_{\xi}^*\Phi_{\chi}\Upsilon_{\xi}^{j*}({\bf k})\Upsilon_{\chi}^j({\bf k})}{\hbar\omega-E_{\bf k}^j+i0^+}\,. 
\label{eq:respterm1} \end{equation} The second term describes the de-excitation of the thermal population into the condensate and it is given by \begin{equation} ^0D^{II}_{\xi\chi}({\bf k},\omega)=\sum_{j=lp,up}\left[A^j_{\xi\chi}({\bf k},\omega)+A^j_{\xi\chi}({\bf k},-\omega)^*\right]\,, \end{equation} with \begin{equation} A^j_{\xi\chi}({\bf k},\omega)=\frac{\Phi_{\xi}\Phi_{\chi}\Pi_{\xi}^{j*}({\bf k})\Upsilon^j_{\chi}({\bf k})+\Phi_{\xi}^*\Phi^*_{\chi}\Upsilon_{\xi}^{j*}({\bf k})\Pi^j_{\chi}({\bf k})}{\hbar\omega-E_{\bf k}^j+i0^+}\,.\label{eq:respterm2} \end{equation} The third term describes the oscillations of the thermal population and it is given by \begin{equation} \tilde{D}_{\xi\chi}({\bf k},\omega)=\sum_{j=lp,up} \left[T^j_{\xi\chi}({\bf k},\omega)+T^j_{\xi\chi}({\bf k},-\omega)^*\right]\,, \end{equation} with \begin{eqnarray} T^j_{\xi\chi}({\bf k},\omega)&=&\sum_{l,{\bf q}}\left[\frac{F^{jl}_{\xi\chi}({\bf k,q})\left(\bar{N}^l_{\bf q}-\bar{N}^j_{\bf q-k}\right)}{\hbar\omega+E_{\bf q-k}^j-E_{\bf q}^l+i0^+}\right. \nonumber \\ &&+\left.\frac{R^{jl}_{\xi\chi}({\bf k,q})\left(1+\bar{N}^l_{\bf q}+\bar{N}^j_{\bf q-k}\right)}{\hbar\omega+E_{\bf q-k}^j+E_{\bf q}^l+i0^+}\right]\, \end{eqnarray} and \begin{eqnarray} F^{jl}_{\xi\chi}({\bf k,q})&=&\Pi^j_{\xi}({\bf q-k})\Pi_{\chi}^{j*}({\bf q-k})\Pi_{\xi}^{l*}({\bf q})\Pi_{\chi}^l({\bf q})\nonumber \\ &+&\Pi_{\xi}^j({\bf q-k})\Upsilon_{\chi}^{j*}({\bf q-k})\Pi_{\xi}^{l*}({\bf q})\Upsilon_{\chi}^{l}({\bf q})\,, \\ R^{jl}_{\xi\chi}({\bf k,q})&=&\Pi^j_{\xi}({\bf q-k})\Pi_{\chi}^{j*}({\bf q-k})\Upsilon_{\xi}^{l}({\bf q})\Upsilon_{\chi}^{l*}({\bf q})\nonumber \\ &+&\Pi_{\xi}^j({\bf q-k})\Upsilon_{\chi}^{j*}({\bf q-k})\Upsilon_{\xi}^l({\bf q})\Pi_{\chi}^{l*}({\bf q})\,. 
\end{eqnarray} We use these equations to study the fluctuation $\delta n_{cc}$ of the photon density, directly related to the photoluminescence measured in experiments, produced by an optical (affecting the photon field) or mechanical (affecting the exciton field) perturbation. We take, as external potential, a plane wave with wave vector ${\bf k_{ext}}$, delta-pulsed in time, i.e. $V_{ext}({\bf r},t)=V_0 e^{i{\bf k}_{ext} \cdot {\bf r}} \delta(t-t_0)$. In this case, from Eq. (\ref{eq:densresp2}), we see that $\delta n^{\xi\chi}({\bf k},\omega)=V_0 D^{r}_{\xi\chi}({\bf k},\omega)\delta({\bf k-k}_{ext})$, i.e. the response is diagonal in ${\bf k}$ and is simply proportional to the correlation $D^r$. The imaginary part of $D^r$ describes the energy transfer to the system and thus it is the most relevant function. \begin{widetext} \begin{center} \begin{figure} \includegraphics[width=.7\textwidth]{fig4.eps}% \caption[]{Imaginary part of the retarded density-density correlations $D^r_{cc}({\bf k},\omega)$ (panel a and c) and $D^r_{cx}({\bf k},\omega)$ (b and d), as computed for $T=10$~K and $n_p=5~\mu\mbox{m}^{-2}$. In panels (a) and (b) we display the $k-\omega$ dependence of the correlation in grey tones. In panels (c) and (d) the same quantities are plotted as a function of the energy $\hbar\omega$ and at the wave vector $\bar{k}=0.5~\mu\mbox{m}^{-1}$. In panels (c)-(d), the dashed line represents the contribution $\tilde{D}^r_{\xi\chi}({\bf k},\omega)$ arising from the oscillations of the non condensate. Delta peaks arise from the oscillation of the condensate. At high energy (i.e. at the upper polariton energy), the photon-photon response corresponds to absorption at positive energy and gain at negative energy, while the photon-exciton response has the opposite behavior.}\label{fig:linresp} \end{figure} \end{center} \end{widetext} The resulting quantities $-\mbox{Im}\{D^{r}_{c\chi}({\bf k},\omega)\}$ with $\chi=c,x$ are shown in Fig.~\ref{fig:linresp}. 
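Numerically, the $i0^+$ prescription in the pole terms of $D^r$ is handled by a small finite broadening $\eta$, which turns each condensate delta peak into a Lorentzian of half-width $\eta$, since $-\mbox{Im}\{1/(x+i\eta)\}=\eta/(x^2+\eta^2)$. A minimal sketch with toy pole energies and residues (not the full HFP sums):

```python
# -Im{ w / (hw - E + i*eta) } = w * eta / ((hw - E)^2 + eta^2):
# each simple pole of the retarded correlation becomes a Lorentzian
# once i0+ is regularized by a finite eta.

def neg_im_response(omega, poles, weights, eta=0.05):
    """-Im D(omega) for a sum of simple poles w_j / (omega - E_j + i*eta)."""
    total = 0.0
    for energy, weight in zip(poles, weights):
        x = omega - energy
        total += weight * eta / (x * x + eta * eta)
    return total

# Toy spectrum: lower- and upper-polariton-like pole energies and
# residues (arbitrary illustrative values, in units of hbar*omega).
poles = [1.0, 3.0]
weights = [0.8, 0.2]

on_peak = neg_im_response(1.0, poles, weights)   # probe at the lp pole
off_peak = neg_im_response(2.0, poles, weights)  # probe between poles
```

The thermal contribution $\tilde{D}$ adds a dense set of such poles, which is why it produces a finite linewidth rather than isolated delta peaks.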
We display the results for the photon-photon (panels (a)-(c)) and photon-exciton (panels (b)-(d)) correlations in the condensed regime. Poles at negative energy are present for both quantities, due to the spectrum modification induced by condensation. While the condensate contribution defines collective modes with infinite lifetime (delta peaks), the contribution from the thermal population is responsible for a finite linewidth, as can be argued from the equations above. The surprising result of this analysis is, however, the unusual behavior manifested by the photon-exciton response at high energy. In this case, the collective mode corresponds to gain at positive energy and to absorption at negative energy. Conversely, for the photon-photon response, we expect to observe the opposite behavior. This feature is due to the phases of the Hopfield coefficients. As shown by Eqs. (\ref{eq:respterm1},\ref{eq:respterm2}), the opposite nature of the collective modes generated by optical or mechanical perturbations would be observed in experiments only if an exciton coherent field $\Phi_x$ is present. Furthermore, the relative amplitude of the two responses is proportional to the coherent fraction of each field, giving direct access to the amount of exciton condensate. In this respect, we point out that, although any kind of perturbation would affect both the exciton and the photon field simultaneously \cite{hvam06}, the geometry of the system can be chosen in such a way that the effect of the perturbation on one of the two fields is dominant (see for example the static perturbation affecting the exciton field used by Balili \emph{et al.} \cite{balili07}). We thus suggest that an ideal tool to observe these features would be a pump-and-probe experiment in which the probe is either optical or mechanical (for example, produced by coherent acoustic waves \cite{hvam06}). \section{Conclusions} We generalized the HFP theory to the case of two coupled Bose fields at equilibrium. 
The theory allows modeling the BEC of microcavity polaritons in very close analogy with the BEC of a weakly interacting gas \cite{fetter71,pita03}. In particular, we treat simultaneously both the linear exciton-photon coupling and the two-body interactions, and we account for the presence of a non-condensed population as well. Within this description we are able to predict the modification of the spectrum and of the thermodynamic properties for increasing density. Since the theory simultaneously describes the properties of the polariton, photon and exciton fields, it enables the interpretation of typical optical measurements. In particular, our analysis supports the interpretation of the recent experimental findings \cite{kasprzak06,balili07,deng06} in terms of BEC of a trapped gas. We have applied the theory to compute the density-density response of a polariton gas to an external perturbation. This quantity can be characterized in pump-and-probe experiments realized with an optical or mechanical probe. In particular, we predict that the upper polariton energy collective modes generated in the two cases have responses of opposite sign. We suggest that the observation of this feature would be a proof of the presence of an exciton condensate and would answer the long-standing question about the connection between polariton BEC and a laser phenomenon. We acknowledge financial support from the Swiss National Foundation through project N. PP002-110640.
\section{Introduction} \label{sec:intro} The most luminous stars in low metallicity galaxies are of special interest. These may have masses exceeding $200~\textrm{M}_\odot$, defining the upper mass limit of stars. Until recently, such massive objects were only found in the cores of young massive clusters \citep{crowther2010}, but the first such object has now been found in apparent isolation \citep{bestenlehner2011}. This motivates a search for very massive stars in Local Group dwarf galaxies, or even more distant systems, where current instrumentation does not yet allow stellar clusters to be spatially resolved. Depending on mass and metallicity, the mass-loss rates of the brightest stars may be so high that their winds become optically thick, resulting in hydrogen-rich WN spectra \citep{dekoter1997}. Such targets provide important tests for the theory of line-driven winds \citep{grafener2011,vink2011}. Massive stars up to about $80~\textrm{M}_\odot$ in metal-poor environments receive special attention as well, as those that have a rapidly rotating core at the end of their lives may produce broad-lined Type Ic supernovae or hypernovae, which are perhaps connected to long duration gamma-ray bursts \citep{moriya2010}.\\ In this context, massive stars in the Magellanic Clouds have been extensively studied \citep[see e.g.][]{evans2004,evans2011}. Extending such studies to more distant galaxies requires sensitive spectrographs mounted at the largest telescopes and, so far, has only been attempted at low resolution \citep{bresolin2006,bresolin2007,castro2008}. Such low resolution studies can be hampered by the presence of nebular emission, as, due to their strong UV flux, massive stars ionise the ambient environment, creating H\,{\sc ii} regions. 
Through observations at higher spectral resolution, nebular emission, rather than being a complicating factor, may help to further constrain the physical properties of the ionising source \citep[see e.g.][]{kudritzki1990}.\\ In this paper we take the first step towards quantitative spectroscopy of massive stars outside the Local Group. We present the first medium-resolution spectrum of a luminous early-type source in NGC\,55 ($\sim$2.0\,Mpc) and its surrounding region, obtained with the new X-shooter spectrograph at the ESO {\it Very Large Telescope} (VLT). The medium spectral resolution ($R \sim 6000$) and unique spectral coverage of X-shooter allow for a detailed analysis of the stellar spectrum, and, importantly, for an improved correction of the nebular emission lines, which can be distinguished from the underlying stellar spectrum. \subsection*{NGC~55} \begin{figure} \includegraphics[width=8.5cm]{pictures/NGC55_eso_inset_good_2.eps} \caption{$B-V-\mathrm{H}\alpha$ image of the host galaxy NGC\,55 (MPG/ESO, Wide Field Imager) with a zoomed-in region (FORS H$\alpha$ image) that indicates the location of our source. The rectangles show the projections of the VIS entrance slit ($0.9''\times11''$) at the different observing epochs: (1) is 13/08/09, (2) is 27/09/09 and (3) is 30/09/09. C1\_31 is located where the slit projections cross. C2\_35 is another massive star candidate, of which we see the surrounding nebular emission in our slit (see Section~\ref{sec:C235}).} \label{fig:regionslit} \end{figure} NGC\,55 (see Fig.~\ref{fig:regionslit}) is located in the foreground of the Sculptor Group, at a distance of approximately 2.0~Mpc \citep[][and references therein]{gieren2008}. Though it is difficult to determine its morphological type due to its high inclination of $\sim$$80^{\circ}$ \citep{hummel1986}, the galaxy is likely of type SB(s)m: a barred spiral with an irregular appearance, very similar to the Large Magellanic Cloud \citep{devaucouleurs1972}. 
Metallicity measurements of NGC\,55 show a range of values between $0.23$ and $0.7\,\mathrm{ Z}_{\odot}$, all determined by analysis of forbidden oxygen line emission \citep[see e.g.][]{webster1983,stasinska1986,zaritsky1994,tuelmann2003}.\\ The blue massive star population of NGC\,55 has been studied in the context of the Araucaria project \citep{gieren2005}. As part of this project, \citet{castro2008} have presented low resolution ($R=780$) optical ($390-490$ nm) VLT/FORS2 spectra of approximately 200 blue massive stars in NGC\,55. In the search for the most massive star in this galaxy, we selected NGC\,55\,C1\_31 (hereafter C1\_31; RA 0:15:00.01, DEC -39:12:41.39, indicated in Fig.~\ref{fig:regionslit}) because of its brightness and its classification as an early O-type supergiant. \\ In the following section we describe the observations and data reduction for both the stellar spectrum (\S \ref{sec:redphotspec}) and the nebular spectra as a function of location along the slit (\S \ref{sec:rednebspec}). In Section~\ref{sec:photospec}, we constrain the overall properties of the central source by comparing the observed hydrogen and helium line profiles of C1\_31 with simulated profiles. In Section~\ref{sec:surrh2} we analyse the nebular spectra. In Section~\ref{sec:modelhii} we explore the effect of the luminosity and temperature of the ionizing source on the properties of the surrounding nebula, resulting in a consistent picture of both the properties of the central ionizing source and the surrounding nebula (\S \ref{sec:consistentpic}). We summarise the main conclusions in Section~\ref{sec:conclusion}. \section{Observations and Data Reduction} \label{sec:obsanddatared} \begin{table} \caption{Overview of the observations. 
Date and mid-exposure time of the observations; Seeing measured from $R$ band acquisition images; Angular distance from the source to the Moon and fraction of lunar illumination (FLI); Exposure time per arm; Position angle (North to East) of the slit on the sky (along the parallactic angle at the time of the observations).\label{tab:observations}} \begin{tabular}{ cccccc } \hline & Date &Seeing & Moon dist. &Exp. T. & Pos. Ang.\\ & Time (UT) & ($R$-band) & FLI & (s) & \\ \hline 1& 13/08/09 & $0.69-0.88''$ & $69^{\circ}$ & $2 \times 900$ & $53.4^{\circ}$\\ & 08:52 & & 56\% & &\\ 2& 27/09/09 & $0.87-0.93''$ & $66^{\circ}$ & $4 \times 900$ &$-52.3^{\circ}$\\ & 03:43 & & 60\% & &\\ 3& 30/09/09 & $0.96-1.50''$ & $45^{\circ}$ & $4 \times 900$ &$-56.4^{\circ}$\\ & 03:26 & & 85\% & &\\ \hline \end{tabular} \end{table} \begin{table} \centering \caption{Overview of the X-shooter instrument properties. Instrument arm; Wavelength range; Projected slit size; Measured resolving power $R=\lambda/\Delta\lambda$; Resolving power according to the X-Shooter User Manual.\label{tab:arms}} \begin{tabular}{ ccccc } \hline Arm & Range (nm) &Slit dimensions & $R$ & $R_\mathrm{th}$ \\ \hline UVB & $300-560$ & $0.8''\times 11''$ &$6268\pm179$ & 6200 \\ VIS & $550-1020$ & $0.9''\times 11''$ &$7778\pm264$ & 8800 \\ NIR & $1020-2480$ & $0.6''\times 11''$ &$7650\pm258$ & 8100 \\ \hline \end{tabular} \end{table} The observations of C1\_31 were obtained as part of X-shooter Science Verification (SV) Runs 1 and 2. Spread over three nights, the total exposure time is 2.5h. The observations were carried out in nodding mode using a nod throw of $5''$. We refer to Table~\ref{tab:observations} for the details of the observations and the observing conditions. \\ Light that enters X-shooter is split in three arms using dichroics: UV-Blue (UVB), VISual (VIS) and Near-IR (NIR). Each instrument arm is a fixed format cross-dispersed \'echelle spectrograph \citep{dodorico2006,vernet2011}. 
Table~\ref{tab:arms} gives for each arm the wavelength range, the projected dimensions of the slit and the resolving power. The data are reduced with the X-shooter pipeline version 1.2.2 \citep{modigliani2010,goldoni2011}. Although our source is observed in nodding mode, we have reduced the UVB and VIS science frames separately, using the staring mode reduction recipe. We follow the full cascade of X-shooter pipeline steps ('physical model mode'), up to obtaining two-dimensional (2D) straightened spectra, without sky subtraction. See Section~\ref{sec:redphotspec} for the steps through which we obtain the stellar spectrum of C1\_31, and Section~\ref{sec:rednebspec} for the extraction procedure of the nebular spectra along the spatial direction of the slit.\ \subsection{The sky-corrected stellar spectrum} \label{sec:redphotspec} \subsubsection*{UVB and VIS arms} One-dimensional object and sky spectra are extracted from the 2D spectra. The sky spectrum is extracted from regions with the lowest continuum and nebular line emission. In the nights with four consecutive exposures, cosmic rays have been removed by taking the median value of each pixel in the four exposures. The sky spectra are subtracted from the object spectra, thereby correcting for the contamination by moonlight as well. After applying the barycentric correction, the sky-corrected object spectra of the three nights have been combined. Table~\ref{tab:snr} gives the signal-to-noise ratio (SNR) per resolution element of the result of each night and of the combined spectrum. For the stellar line analysis we normalised the spectrum. Independently, we also calibrated the flux of the UVB and VIS spectra with standard star BD$+17^{\circ}\,4708$ (sdF8) taken on 27/09/09, since there were no appropriate flux standard observations for each individual night. We did not correct the spectrum for slit losses. 
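The per-pixel median combination used for cosmic-ray removal, together with the sky subtraction, can be sketched as follows (toy array sizes; the actual reduction operates on the pipeline's 2D straightened spectra, and the sky region is chosen where continuum and nebular emission are lowest):

```python
import numpy as np

# Four consecutive exposures of the same 2D spectrum (tiny toy arrays
# here). A cosmic-ray hit appears in only one exposure, so taking the
# per-pixel median across the stack rejects it.
rng = np.random.default_rng(0)
exposures = np.full((4, 5, 5), 100.0) + rng.normal(0.0, 1.0, (4, 5, 5))
exposures[2, 3, 3] += 5000.0          # cosmic-ray hit in the third exposure

combined = np.median(exposures, axis=0)   # cosmic-ray rejection

# Sky subtraction: estimate the sky spectrum from slit regions with
# the lowest emission (here the first spatial row as a stand-in) and
# subtract it from every row of the object frame.
sky = combined[0, :]
sky_subtracted = combined - sky[np.newaxis, :]
```

This also removes the moonlight contamination mentioned above, since scattered moonlight enters the sky rows in the same way as the object rows.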
\subsubsection*{NIR arm} We reduced the NIR spectra following the pipeline cascade for nodding mode up to extracted 1D spectra, and we combined the three nights. A telluric standard star (HD\,4670, B9 V) is used to correct for the telluric absorption features in the combined NIR spectrum, and to calibrate the relative flux. The NIR spectrum is scaled to match the absolutely calibrated VIS spectrum. The SNR of the combined NIR spectrum of all three nights is $\sim$3 in the $J$ band, and $<1$ in the $H$ and $K$ bands. This is too low to detect stellar or nebular features in the spectrum; however, the level of the continuum can be retrieved by binning the flux in the atmospheric bands. The result is shown in Fig.~\ref{fig:fullspec_dered}. \begin{table} \caption{Signal-to-noise ratio (SNR) per resolution element of different parts of the UVB and VIS spectra per night, and of the final combined spectra.\label{tab:snr}} \begin{tabular}{ l @{ }c@{ } r r r r } \multicolumn{6}{l}{}\\ \multicolumn{6}{l}{\textit{UVB arm}}\\ && \multicolumn{3}{c}{SNR [range (nm)]}& \\ Night && [424:428] & [460:465] & [505:510] & \\ \hline 13/08/09 && 16.9& 21.3 & 21.4 & \\ 27/09/09 && 14.0& 21.2 & 13.7 &\\ 30/09/09 && 7.2& 9.8 & 9.8 &\\ \hline combined && 21.2 & 29.3 & 27.4 & \\ \multicolumn{6}{l}{}\\ \multicolumn{6}{l}{\textit{VIS arm}}\\ && \multicolumn{3}{c}{SNR [range (nm)]}&\\ Night && [604:609]& [675:680]& [811:816]& [975:978] \\ \hline 13/08/09 && 12.1& 15.5& 22.0& 6.3 \\ 27/09/09 && 10.3& 12.0& 19.8& 5.2 \\ 30/09/09 && 6.2 & 9.8 & 13.3& 4.8 \\ \hline combined && 16.2 & 20.9 & 30.0 & 8.7 \\ \end{tabular} \end{table} \subsection{The nebular emission spectra} \label{sec:rednebspec} The 2D spectra of each night are combined using the median value. We flux calibrate the combined 2D spectra with the same photometric standard star as we used for the object spectrum. 
Following the trace of C1\_31, we extract 20 sub-apertures from each night's combined 2D spectrum, resulting in a set of 1D nebular spectra that are spatially separated by $0.75''$ each\footnote{The sub-aperture size of $0.75''$ is just below the average FWHM of the seeing.}. Independently, we also apply this combining and sub-aperture extraction procedure to the non-flat-fielded 2D spectra, now using the sum. This allows us to determine the number of photons $N$, and thus the photon noise error $\sqrt{N}$, for every emission line in the nebular spectra. These errors are propagated into the values of the nebular properties described in Section~\ref{sec:surrh2}. \\ \section{Analysis: The C1\_31 stellar spectrum} \label{sec:photospec} \begin{figure*} \includegraphics[width=17cm]{pictures/ngc55spectrumUVBVIS_nirpoints_newphot_c.eps} \caption{Flux-calibrated spectrum of NGC55\,C1\_31 (bottom), and the same spectrum (top) corrected for extinction with $R_V=3.24$ and $A_{V,\mathrm{star}}=2.3$ (see text). The dashed line shows the FASTWIND spectral energy distribution for the best fitting combination of models (see Section~\ref{sec:consistentpic}); the solid line is a Kurucz model of a B0 star \citep{kurucz1979,kurucz1993}. The ranges of X-shooter's instrument arms are indicated at the top of the graph. The open squares point to the updated $V$ and $I$ magnitudes (priv. comm. Pietrzy\'nski). For clarity, the NIR spectrum is shown here with a smoothing of 7 points. The triangles show the integrated values of the observed flux in the $J$ and $H$ bands; the diamonds show their extinction corrected values.} \label{fig:fullspec_dered} \end{figure*} Fig.~\ref{fig:fullspec_dered} shows the combined flux-calibrated stellar spectrum of C1\_31. The extinction corrected flux should show a Rayleigh-Jeans wavelength dependence ($F_{\lambda} \propto \lambda^{-4}$), because we expect an early-type star based on the classification in \citet{castro2008}. 
To lift the observed spectrum to the slope of the scaled Kurucz model of a B0 star, we need to de-redden our spectrum with $A_{V,\mathrm{star}}=2.3\pm0.1$, adopting $R_V=3.24$ \citep{gieren2008}. We apply the parametrized extinction law by \citet{cardelli1989}. As a check, we repeat the exercise, varying both $R_V$ and $A_{V,\mathrm{star}}$. With $3.0 \lesssim R_V \lesssim 3.5$, the spectrum can be de-reddened to the intrinsic $F_{\lambda} \propto \lambda^{-4}$ slope with $A_{V,\mathrm{star}}= 2.3\pm0.1$. With $R_V$ outside this range, the de-reddened spectrum does not match the models for any value of $A_{V,\mathrm{star}}$.\\ \citet{castro2008} report magnitudes $V=18.523$ and $I=19.239$ for NGC55\,C1\_31, which were obtained as part of the Araucaria Cepheid search project \citep{pietrzynski2006}. The $I$ magnitude is in good agreement with our flux-calibrated spectrum; $V$ is not. The reported color $V-I=-0.716$ is even bluer than a theoretical Rayleigh-Jeans tail, suggesting a problem with the photometry\footnote{This has been confirmed by the authors. Corrected photometric values are $V=19.87\pm0.05$, $I=19.25\pm0.05$ (priv. comm. Pietrzy\'nski), in agreement with our findings.}. From our flux-calibrated spectrum we obtain $V\simeq20.1\pm0.1$. Using $A_{V,\mathrm{star}}=2.3$ and $d=2.0$\,Mpc, we derive an absolute magnitude of $M_V=-8.7\pm0.4$ for the source. \\ \par Fig.~\ref{fig:compare_obj} compares the observed UVB spectrum of C1\_31 between 380 and 490 nm with that of spectral standard stars \citep{walborn1990}. In the C1\_31 spectrum the Balmer and He\,{\sc i} lines show artifacts of the nebular emission correction. However, most of the line wings are left unaffected thanks to the relatively high spectral resolution of X-shooter. Based on the non-detection of He\,{\sc ii}\,$\lambda$4541 and the presence of He\,{\sc i}\,$\lambda$4471 in the C1\_31 spectrum, one cannot classify this source as an early O-star. 
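The absolute magnitude quoted above follows from the standard distance modulus, $M_V = V - A_{V} - 5\log_{10}(d/10\,\mathrm{pc})$; a quick arithmetic check with the adopted values:

```python
from math import log10

def absolute_magnitude(m_app, a_v, d_pc):
    """M = m - A - 5*log10(d / 10 pc), for apparent magnitude m,
    extinction A (mag) and distance d in parsec."""
    return m_app - a_v - 5.0 * log10(d_pc / 10.0)

# V ~ 20.1, A_V = 2.3 and d = 2.0 Mpc, as adopted in the text.
M_V = absolute_magnitude(20.1, 2.3, 2.0e6)   # ~ -8.7
```

The quoted $\pm0.4$ uncertainty is dominated by the errors on $A_{V,\mathrm{star}}$ and on the flux calibration, not by the distance.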
By adding artificial noise to the standard star spectra, matching the SNR of our observations in this range, we estimate that the He\,{\sc ii}\,$\lambda$4541 line of a star with spectral type later than O7.5 would not be detectable, thus suggesting a late spectral subtype. In Section~\ref{sec:modeltemp} we will use stellar atmosphere models to constrain the effective temperature more quantitatively. Supergiants may have strong emission in He\,{\sc ii}\,$\lambda$4686 due to their stellar wind, but none of the standard stars show a feature as broad as that in our observations. Broad He\,{\sc ii}\,$\lambda$4686 emission lines are the strongest features in Wolf-Rayet stars of type WN. This line profile will be analysed in more detail in Section~\ref{sec:heii4686}. \begin{figure} \includegraphics[width=8.5cm]{pictures/compare_obj_sg_new_c.eps} \caption{The normalised sky-corrected object spectrum (top) between 380 and 490\,nm compared with the spectra of standard O supergiants from O3 to O8.5 \citep{walborn1990}. Note the very broad He\,{\sc ii}\,$\lambda$4686 emission.} \label{fig:compare_obj} \end{figure} When we compare the X-shooter spectrum to the FORS2 spectrum of C1\_31 in \citet{castro2008}, we conclude that the general appearance is similar, including the shape of He\,{\sc ii}\,$\lambda$4686. \citeauthor{castro2008} observed He\,{\sc ii}\,$\lambda$4200 weakly in emission as well, which we cannot confirm. The FORS2 observations will have suffered from similar nebular contamination in the Balmer lines as well as in some He\,{\sc i} lines, which may have hampered earlier classification, but correcting for this is even more difficult at lower spectral resolution. \subsection{Stellar line profile modelling} \label{sec:lineprofilemodel} \begin{table*} \caption{Parameters used for the described FASTWIND models, which are a subset of a larger grid. $R_*$, $\log g$ and $v_{\infty}=2.6 \times v_{\mathrm{esc}}$ depend on the other variables. 
The first four models have the same $M$, $L_*$ and $\dot{M}$, but different $T_{\mathrm{eff}}$, causing the remaining parameters to be modified. The next three models are variations on MOD31, but with different values for $\dot{M}$. MOD69 is like MOD31, but with an artificially high wind acceleration parameter $\beta$. The last two models, MOD89 and MOD90, provide the best-fitting multiple-model configuration (in a visual flux ratio $\sim4:1$). The bolometric luminosities $L_*$ are chosen such that they reproduce the measured $M_V$ with ten times MOD89 and two times MOD90. \label{tab:modparam}} \begin{tabular}{ ccccccccccc} \hline ID &$M$ & $T_{\mathrm{eff}}$ &$L_*$ &$\log g$ & $R_*$& $\dot{M}$& $v_{\infty}$ & $\beta$ & $N_{\mathrm{He}}/N_{\mathrm{H}}$ &$Z$\\ &$(\mathrm{M}_{\odot})$ & (K) &$(\mathrm{L}_{\odot})$& (cm s$^{-2}$) &$(\mathrm{R}_{\odot})$ & $(\mathrm{M}_{\odot}\mathrm{~yr}^{-1})$ & (km~s$^{-1}$) & & & ($\mathrm{Z}_{\odot}$)\\ \hline MOD39 & 40 & 27\,500 & $10^{ 5.6 }$ & 2.99 & 33.69 & $6.00\times 10^{-6}$ & 1750.33 & 1.0 & 0.1 & 0.3\\ MOD31 & 40 & 30\,000 & $10^{ 5.6 }$ & 3.30 & 23.40 & $6.00\times 10^{-6}$ & 2100.40 & 1.0 & 0.1 & 0.3\\ MOD32 & 40 & 32\,500 & $10^{ 5.6 }$ & 3.44 & 19.94 & $6.00\times 10^{-6}$ & 2275.43 & 1.0 & 0.1 & 0.3\\ MOD33 & 40 & 35\,000 & $10^{ 5.6 }$ & 3.57 & 17.19 & $6.00\times 10^{-6}$ & 2450.47 & 1.0 & 0.1 & 0.3\\ \hline MOD10 & 40 & 30\,000 & $10^{ 5.6 }$ & 3.30 & 23.40 & $1.00\times 10^{-6}$ & 2100.40 & 1.0 & 0.1 & 0.3\\ MOD11 & 40 & 30\,000 & $10^{ 5.6 }$ & 3.30 & 23.40 & $3.00\times 10^{-6}$ & 2100.40 & 1.0 & 0.1 & 0.3\\ MOD12 & 40 & 30\,000 & $10^{ 5.6 }$ & 3.30 & 23.40 & $1.00\times 10^{-5}$ & 2100.40 & 1.0 & 0.1 & 0.3\\ \hline MOD69 & 40 & 30\,000 & $10^{ 5.6 }$ & 3.30 & 23.40 & $6.00\times 10^{-6}$ & 2100.40 & 3.0 & 0.1 & 0.3\\ \hline MOD89 & 30 & 30\,000 & $10^{ 5.24 }$ & 3.54 & 15.46 & $2.00\times 10^{-6}$ & 2237.86 & 1.0 & 0.1 & 0.3\\ MOD90 & 80 & 50\,000 & $10^{ 5.83 }$ & 4.26 & 10.98 & $3.00\times 10^{-5}$ & 4336.76
& 1.0 & 0.8 & 0.3\\ \hline \end{tabular} \end{table*} Although we will make clear in this paper that C1\_31 is very likely a composite source, we first approach the spectrum as if it were produced by a single star. The derived physical parameters then represent flux-weighted averages of the components contributing to the spectrum. We note, though, that the integrated light from clusters, in particular the hydrogen ionising radiation, is often dominated by only a few of the most massive components.\\ The profiles of spectral lines are sensitive to various stellar parameters such as effective temperature, mass-loss rate, surface gravity, chemical abundances and rotation speed. We have used FASTWIND \citep{puls2005} to model stellar atmospheres and to compute profiles of spectral lines. FASTWIND calculates non-LTE, line-blanketed stellar atmospheres and is suited to model stars with strong winds. We first tried to apply a genetic fitting algorithm with FASTWIND models \citep[see][]{mokiem2005} to the observed spectrum, in order to fit a large number of parameters at the same time. This did not result in well-constrained parameters, because the shape and width of He\,{\sc ii}\,$\lambda$4686 could not be fitted at the same time as the other lines. \\ Instead we constructed a grid of FASTWIND models, and constrained the parameters by comparing the observed profiles with the models. The main grid varies the effective temperature $T_{\mathrm{eff}}=27\,500-35\,000$\,K in steps of $2\,500$\,K, and covers values for the mass-loss rate $\dot{M}=1,3,6$ and $10 \times 10^{-6}\mathrm{~M}_{\odot}\mathrm{~yr}^{-1}$ and luminosity $\log(L_*/L_{\odot})=5.4,5.6$ and $5.8$. All models have a mass $M=40\mathrm{~M}_{\odot}$, wind acceleration parameter $\beta=1.0$, and helium-to-hydrogen number density $N_{\mathrm{He}}/N_{\mathrm{H}}=0.1$. The radius $R_*$ and surface gravity $g$ are fixed by the other parameters.
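The dependent columns of Table~\ref{tab:modparam} can be recovered from $M$, $R_*$ and $\dot{M}$. The sketch below is a cross-check under stated assumptions (a Newtonian escape velocity with no Eddington correction, and the $v_{\infty}=2.6\,v_{\mathrm{esc}}$ scaling quoted in the table caption); it is not the FASTWIND calculation itself.

```python
import math

# Assumed physical constants (cgs)
G = 6.674e-8        # gravitational constant, cm^3 g^-1 s^-2
M_SUN = 1.989e33    # solar mass, g
R_SUN = 6.957e10    # solar radius, cm

def dependent_params(m_msun, r_rsun, mdot=None):
    """log g, and v_inf = 2.6 v_esc, for a star of mass M and radius R;
    optionally also the mass-loss flux Mdot / (4 pi R^2)."""
    m, r = m_msun * M_SUN, r_rsun * R_SUN
    log_g = math.log10(G * m / r**2)            # surface gravity (cgs)
    v_esc = math.sqrt(2 * G * m / r) / 1e5      # escape speed, km/s
    out = {"log_g": log_g, "v_inf": 2.6 * v_esc}
    if mdot is not None:
        # mass-loss flux in Msun / yr / Rsun^2
        out["mdot_flux"] = mdot / (4 * math.pi * r_rsun**2)
    return out

p = dependent_params(40, 23.40, mdot=6e-6)   # MOD31
print(p)   # log_g ~ 3.30, v_inf ~ 2100 km/s, mdot_flux ~ 9e-10
```

For MOD31 this reproduces the tabulated $\log g\simeq3.30$ and $v_\infty\simeq2100$\,km\,s$^{-1}$, and a mass-loss flux of $\simeq9\times10^{-10}\,\mathrm{M}_\odot\,\mathrm{yr}^{-1}\,\mathrm{R}_\odot^{-2}$.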
The terminal wind velocity $v_{\infty}$ is assumed to be 2.6 times the surface escape velocity \citep[see][]{lamers1995}. In Table~\ref{tab:modparam} we show the parameter values of a subset of the grid, i.e. the models discussed in more detail in this paper. Unless stated otherwise, the synthesized line profiles are produced using a microturbulent velocity $v_\mathrm{turb}=10$\,km\,s$^{-1}$ and projected rotational velocity $v_{\mathrm{rot}} \sin({i})=150$\,km\,s$^{-1}$, with $i$ the inclination of the stellar rotation axis with respect to the line of sight. An instrumental profile matching the resolution of X-shooter is applied as well.\\ We first investigate the overall impact of the effective temperature (\S\ref{sec:modeltemp}), rotational velocity (\S\ref{sec:vrot}), and mass-loss rate (\S\ref{sec:massloss}) by comparing with models involving a single star. In \S\ref{sec:heii4686} we show that the observed profile of He\,{\sc ii}\,$\lambda$4686 cannot be reproduced with a single star. In \S\ref{sec:luminosity} we discuss the luminosity constraint that follows from $M_V$. \begin{figure} \includegraphics[width=8.5cm]{pictures/eatprofiles_060411_all_helium_final.eps} \caption{Observed line profiles compared with model line profiles computed for various effective surface temperatures (see text). \label{fig:plotslineprofiles}} \end{figure} \subsubsection{Effective temperature} \label{sec:modeltemp} The He\,{\sc i} and He\,{\sc ii} lines can be used to determine the characteristic effective temperature of the source. Fig.~\ref{fig:plotslineprofiles} shows the observed profiles of He\,{\sc i}\,$\lambda$4387, He\,{\sc i}\,$\lambda$4471, He\,{\sc ii}\,$\lambda$4200 and He\,{\sc ii}\,$\lambda$4541 along with profiles from atmosphere models that only differ in effective temperature.
A visual comparison of the profiles shows that models with $T_{\mathrm{eff}}< 35\,000$ K best reproduce the spectrum: at higher temperatures the He\,{\sc ii} lines would have been detected, and He\,{\sc i}\,$\lambda$4387 would not have been as deep. \subsubsection{Rotational velocity} \label{sec:vrot} Since the observed profile of He\,{\sc i}\,$\lambda$4387 is not affected by nebular emission, we can use it to constrain the characteristic rotational broadening. Fig.~\ref{fig:vrot} shows the He\,{\sc i}\,$\lambda$4387 profile of MOD31 ($T_{\mathrm{eff}}=30\,000$ K) convolved with rotational profiles to simulate three different rotational velocities: $v_{\mathrm{rot}} \sin({i})=$ 50, 150 and 250~km~s$^{-1}$. The line with $v_{\mathrm{rot}} \sin({i})= 50~\mathrm{km~s}^{-1}$ is clearly not broad enough to fit the observed profile, while the line with $v_{\mathrm{rot}} \sin({i})= 250~\mathrm{km~s}^{-1}$ appears to be too broad. We conclude that the width of the lines is best reproduced by models with $v_{\mathrm{rot}} \sin({i})=150 \pm 50$~km~s$^{-1}$. \subsubsection{Mass-loss rate} \label{sec:massloss} In the nebular spectrum H$\alpha$ is very strongly in emission. This is mostly due to the surrounding H\,{\sc ii} region. In the sky-corrected object spectrum (Fig.~\ref{fig:halpahe}), the H$\alpha$ line wings are clearly visible, and reveal that H$\alpha$ is in emission in the spectrum of C1\_31 as well. \\ The profile of H$\alpha$ is very sensitive to the mass-loss flux $\dot{M}/(4\pi R_*^{2})$. In the left panel of Fig.~\ref{fig:halpahe} we plot the observed spectrum together with the line profiles from single-star models with $T_{\mathrm{eff}}=30\,000$~K and $R_*=23.4\mathrm{~R}_{\odot}$, and four different mass-loss rates: $1\times 10^{-5}$, $6\times 10^{-6}$, $3\times 10^{-6}$, and $1\times 10^{-6}\mathrm{~M}_{\odot}\mathrm{~yr}^{-1}$. Depending on the mass-loss rate, the line is either in emission or in absorption.
The computed profile for MOD31 reproduces the observed profile best, and corresponds to a mass-loss flux close to $\sim$$9 \times10^{-10}\mathrm{~ M}_{\odot}$~yr$^{-1}\mathrm{~ R}_{\odot}^{-2}$ (i.e. $\dot{M}=6\times 10^{-6}\mathrm{~M}_{\odot}\mathrm{~yr}^{-1}$ for $R_*=23.4\mathrm{~R}_{\odot}$). \subsubsection{He\,{\sc ii}\,$\lambda$4686} \label{sec:heii4686} In the spectrum of C1\_31, He\,{\sc ii}\,$\lambda$4686 is in emission, which is a common feature in spectra of O-type supergiants (see Fig.~\ref{fig:compare_obj}). However, in none of the comparison spectra is this line as broad as in our spectrum ($\sim$$3000$ km s$^{-1}$). The equivalent width is only $-3.6\pm0.04$\,\AA, i.e. much weaker than in typical WN star spectra. The feature is of stellar origin, as no nebular counterpart is detected. \\ The strength of He\,{\sc ii}\,$\lambda$4686 depends on various parameters in the FASTWIND models. The modeled line can be made stronger by (1) increasing the effective temperature (a larger fraction of the helium will be ionised), (2) increasing the mass-loss rate, (3) increasing the helium abundance, (4) increasing the wind acceleration parameter $\beta$, or (5) decreasing the terminal wind velocity. But these modifications only make the line stronger, not much broader. To reproduce the profile of the line in both strength and width using rotational broadening, a rotational velocity of $v_{\mathrm{rot}} \sin({i})=$ 1200~km~s$^{-1}$ is needed: Fig.~\ref{fig:halpahe} shows one of the models (MOD69), for which this line has an equivalent width of $-3.58$\,\AA, convolved with rotational profiles simulating rotational velocities of $v_{\mathrm{rot}}\sin({i})=150$, 600, 900 and 1200\,km\,s$^{-1}$. Only for $v_{\mathrm{rot}}\sin({i})\sim1200$\,km s$^{-1}$ is the line broad enough to reproduce the observed profile. This $v_{\mathrm{rot}}$ is, for acceptable radii and masses, higher than the escape velocity.
He\,{\sc ii}\,$\lambda$4686 is formed in the wind, so its width can greatly exceed the surface rotational velocity. But with such a dense and fast wind, H$\alpha$ would have been much broader as well, and the photospheric He\,{\sc i} and He\,{\sc ii} absorption lines would not be visible. Therefore we conclude that the He\,{\sc ii}\,$\lambda$4686 emission line has a different origin: it is a diluted WN feature (see Section~\ref{sec:consistentpic}). \begin{figure} \center \includegraphics[width=6cm]{pictures/HeI4387_varying_vrot_only_060411_c.eps} \caption{The observed line profile of He\,{\sc i}\,$\lambda$4387 compared with the same model profile computed for various projected rotational velocities.\label{fig:vrot}} \end{figure} \begin{figure} \includegraphics[width=8.5cm]{pictures/eatprofiles_060411_all_halphahe_3_.eps} \caption{Observed line profiles compared with model line profiles for various mass-loss rates (left panel) and projected rotational velocities (right panel). \label{fig:halpahe}} \end{figure} \subsubsection{Luminosity} \label{sec:luminosity} In the calibration of O-stars by \citet{martins2005}, the visually brightest supergiant has $M_V=-6.35$; with $M_V=-8.7$, C1\_31 is almost an order of magnitude brighter. It is therefore likely that C1\_31 is a composite object such as a cluster containing several luminous stars. The slit width ($0.8''$ for UVB) at a distance of 2.0~Mpc corresponds to a physical size of 7.8~pc, which is large enough to contain, for example, the Orion Trapezium cluster, or even more massive open clusters such as Tr~14 \citep{sana2010} or NGC~6231 \citep{sana2008}. \\ In summary, the normalised line profiles of hydrogen and helium, except He\,{\sc ii}\,$\lambda$4686, can be reproduced by a single-star model with parameters $T_{\mathrm{eff}}\sim 30\,000$ K, $\dot{M} \sim 6 \times 10^{-6}\mathrm{~ M}_{\odot}$~yr$^{-1}$, $R_*\sim23.4\,\mathrm{R}_{\odot}$ and $v_{\mathrm{rot}}\sin({i})=150 \pm 50$~km~s$^{-1}$, i.e.
a late O supergiant star (MOD31 in Table~\ref{tab:modparam}). A star with this temperature would need a luminosity of $\log(L_*/L_{\odot})\sim6.3$ to reproduce $M_V$, which is too high for a single O-type supergiant. The considerations regarding the He\,{\sc ii}\,$\lambda$4686 line also point in the direction of the spectrum being a composite of different sources, weighed by their visual brightness. This scenario will be explored in Section~\ref{sec:consistentpic}. \section{Analysis: The nebular emission line spectrum} \label{sec:surrh2} \begin{table} \caption{Reddening corrected (see text) line flux ratios with respect to H$\beta$ for the strongest unblended emission lines in the nebular spectrum close to our central source. \label{tab:lineratio}} \begin{tabular}{l@{$\,\lambda$}r l@{\,$\pm$\,}l l@{\,$\pm$\,}l l@{\,$\pm$\,}l} \multicolumn{2}{l}{Line}&\multicolumn{2}{c}{13/08/09} & \multicolumn{2}{c}{27/09/09} & \multicolumn{2}{c}{Average} \\ \multicolumn{2}{l}{}&\multicolumn{2}{c}{Ratio} & \multicolumn{2}{c}{Ratio} & \multicolumn{2}{c}{Ratio} \\ \hline [O\,{\sc ii}] & 3726.0 & 1.554 & 0.050 & 1.476 & 0.033 & 1.500 & 0.028 \\ [O\,{\sc ii}] & 3728.8 & 2.271 & 0.065 & 2.188 & 0.044 & 2.214 & 0.036 \\ H-9 & 3835.4 & 0.062 & 0.011 & 0.060 & 0.007 & 0.061 & 0.006 \\ [Ne\,{\sc iii}] & 3868.8 & 0.325 & 0.018 & 0.306 & 0.012 & 0.312 & 0.010 \\ H-8 & 3889.1 & 0.201 & 0.014 & 0.198 & 0.009 & 0.199 & 0.008 \\ H$\delta$ & 4102.0 & 0.275 & 0.014 & 0.271 & 0.010 & 0.272 & 0.008 \\ H$\gamma$ & 4340.5 & 0.517 & 0.019 & 0.505 & 0.013 & 0.509 & 0.011 \\ [O\,{\sc iii}] & 4363.2 & 0.039 & 0.004 & 0.037 & 0.003 & 0.037 & 0.002 \\ He\,{\sc i} & 4471.0 & 0.034 & 0.005 & 0.034 & 0.004 & 0.034 & 0.003 \\ He\,{\sc ii} & 4686.0 & 0.006 & 0.002 & 0.003 & 0.001 & 0.003 & 0.001 \\ H$\beta$ & 4861.3 & 1.000 & 0.027 & 1.000 & 0.019 & 1.000 & 0.016 \\ [O\,{\sc iii}] & 4958.9 & 1.171 & 0.029 & 1.144 & 0.020 & 1.153 & 0.016 \\ [O\,{\sc iii}] & 5006.7 & 3.477 & 0.074 & 3.426 & 0.052 & 3.443 & 0.043 \\ 
He\,{\sc i} & 5876.0 & 0.110 & 0.007 & 0.113 & 0.005 & 0.112 & 0.004 \\ [S\,{\sc iii}] & 6312.0 & 0.015 & 0.002 & 0.017 & 0.001 & 0.016 & 0.001 \\ [N\,{\sc ii}] & 6548.0 & 0.060 & 0.004 & 0.062 & 0.003 & 0.061 & 0.002 \\ H$\alpha$ & 6562.8 & 3.040 & 0.063 & 3.052 & 0.045 & 3.048 & 0.037 \\ [N\,{\sc ii}] & 6583.4 & 0.179 & 0.007 & 0.193 & 0.005 & 0.188 & 0.004 \\ He\,{\sc i} & 6678.0 & 0.029 & 0.003 & 0.028 & 0.002 & 0.028 & 0.002 \\ [S\,{\sc ii}] & 6716.5 & 0.269 & 0.009 & 0.292 & 0.007 & 0.283 & 0.005 \\ [S\,{\sc ii}] & 6730.8 & 0.189 & 0.007 & 0.206 & 0.005 & 0.200 & 0.004 \\ [Ar\,{\sc v}]& 7005.9 & 0.009 & 0.001 & 0.006 & 0.001 & 0.007 & 0.001 \\ [Ar\,{\sc iii}]& 7135.8 & 0.087 & 0.004 & 0.086 & 0.003 & 0.087 & 0.002 \\ [Ar\,{\sc iii}]& 7751.1 & 0.022 & 0.001 & 0.022 & 0.001 & 0.022 & 0.001 \\ Pa-10 & 9015.0 & 0.017 & 0.001 & 0.018 & 0.001 & 0.018 & 0.001 \\ [S\,{\sc iii}] & 9068.9 & 0.200 & 0.005 & 0.203 & 0.004 & 0.202 & 0.003 \\ Pa-9 & 9229.0 & 0.025 & 0.002 & 0.024 & 0.001 & 0.024 & 0.001 \\ [S\,{\sc iii}] & 9531.0 & 0.475 & 0.011 & 0.437 & 0.008 & 0.449 & 0.006 \\ Pa-7 & 10049.4 & 0.045 & 0.004 & 0.046 & 0.003 & 0.046 & 0.002 \\ \hline \end{tabular} \end{table} The nebular emission spectra show hydrogen recombination lines of the Balmer and Paschen series, He\,{\sc i} lines, and forbidden lines of O\,{\sc ii}, O\,{\sc iii}, S\,{\sc ii}, S\,{\sc iii}, N\,{\sc ii}, Ne\,{\sc iii}, Ar\,{\sc iii}, Ar\,{\sc iv} and Ar\,{\sc v}. To these spectra, no sky-correction could be applied, because nebular emission covers the full slit. Therefore, every nebular line we measure might have a contribution from the sky continuum. We minimize this error by subtracting the local continuum next to the line in wavelength. The ratios with respect to H$\beta$ of the nebular lines at the position of our source are listed in Table~\ref{tab:lineratio}. 
The extinction-corrected integrated specific intensity of H$\beta$ is $5.8\times 10^{-15}\,\mathrm{erg~s}^{-1} \mathrm{~cm}^{-2}\mathrm{~arcsec}^{-2}$. \\ We use the diagnostics for electron temperature and oxygen abundance from \citet{pagel1992}: \begin{equation} T = \frac{1.432}{ \log R - 0.85 +0.03 \log T + \log \left(1+0.0433xT^{0.06}\right) } \,\,,\label{eq:T} \end{equation} where $T \equiv T \left( \textrm{O\,{\sc iii}} \right)$ is the electron temperature in the region where oxygen is doubly ionized, in units of $10^4$\,K. $R$ and $x$ are given by \begin{eqnarray} R &= &\frac{I_{\lambda4959}+I_{\lambda 5007}}{I_{\lambda 4363}} \, , \label{eq:R}\\ x &= &10^{-4} n_{\mathrm{e}} T_2^{-1/2} \, , \label{eq:x} \end{eqnarray} where $I$ is the integrated specific intensity of the indicated emission line and $n_{\mathrm{e}}$ is the electron density in cm$^{-3}$. $T_2$ is the electron temperature in units of $10^4$\,K in the singly ionized region, and follows from model calculations by \citet{stasinska1990}: \begin{eqnarray} T_2^{-1} &\equiv & \left[ T \left( \textrm{O\,{\sc ii},\,N\,{\sc ii},\,S\,{\sc ii}} \right) \right]^{-1} = 0.5\left( T^{-1}+0.8\right) \, . \label{eq:T2} \end{eqnarray} \noindent The mean ionic abundance ratios for O{\,\sc ii} and O{\,\sc iii} along the line of sight are calculated as follows: \begin{eqnarray} 12+\log \left( \textrm{O{\,\sc ii}\,/\,H{\,\sc ii}} \right)&=& \log \frac{I_{\lambda 3726}+I_{\lambda 3729}}{I_{\mathrm{H}\beta}} +5.890 +\frac{1.676}{T_2}\nonumber \\ & & -0.40 \log T_2 + \log \left( 1+1.35 x \right); \label{eq:ox1}\\ 12+\log \left( \textrm{O{\,\sc iii}\,/\,H{\,\sc ii}} \right)&=& \log \frac{I_{\lambda4959}+I_{\lambda 5007}}{I_{\mathrm{H}\beta}} +6.174 +\frac{1.251}{T}\nonumber \\ & & -0.55 \log T ; \label{eq:ox2} \end{eqnarray} \noindent The total oxygen abundance is obtained by summing the ionic abundances of Eqs.~\ref{eq:ox1} and \ref{eq:ox2}, i.e. by assuming these two ionisation stages to be dominant in the H\,{\sc ii} region.
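Eqs.~\ref{eq:T}--\ref{eq:ox2} can be evaluated with a simple fixed-point iteration. The sketch below applies them to the averaged line ratios of Table~\ref{tab:lineratio}; the helper names are our own, but the coefficients are those of the equations above.

```python
import math

def t_oiii(R, ne=20.0, tol=1e-6):
    """Iterate the Pagel et al. (1992) relation for T(O III),
    in units of 1e4 K, given R = (I4959 + I5007) / I4363."""
    t = 1.0
    for _ in range(100):
        t2 = 1.0 / (0.5 * (1.0 / t + 0.8))       # Eq. for T_2
        x = 1e-4 * ne / math.sqrt(t2)            # Eq. for x
        t_new = 1.432 / (math.log10(R) - 0.85 + 0.03 * math.log10(t)
                         + math.log10(1 + 0.0433 * x * t**0.06))
        if abs(t_new - t) < tol:
            break
        t = t_new
    return t

# Averaged line ratios relative to H beta (Table of nebular lines)
R = (1.153 + 3.443) / 0.037
t = t_oiii(R)
t2 = 1.0 / (0.5 * (1.0 / t + 0.8))
x = 1e-4 * 20.0 / math.sqrt(t2)

o2 = (math.log10(1.500 + 2.214) + 5.890 + 1.676 / t2
      - 0.40 * math.log10(t2) + math.log10(1 + 1.35 * x))
o3 = math.log10(1.153 + 3.443) + 6.174 + 1.251 / t - 0.55 * math.log10(t)
oh = 12 + math.log10(10 ** (o2 - 12) + 10 ** (o3 - 12))
z = 10 ** (oh - 8.69)   # Z/Z_sun, adopting the solar 8.69 (Asplund et al.)
print(round(t * 1e4), round(oh, 2), round(z, 2))
```

With $n_{\mathrm{e}}=20$\,cm$^{-3}$ this converges to $T(\textrm{O\,{\sc iii}})\simeq11\,500$\,K and a total oxygen abundance of $\simeq8.16$, consistent with the slit-averaged values derived below.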
\\ \subsection{Nebular emission properties along the slit} \label{sec:nebprop} \par Fig.~\ref{fig:spatial130809} shows the nebular properties along the slit for the spectra obtained on 13/08/09 and 27/09/09. The orientation of the slit differs between these observations: positive offset is $\sim$North-East for 13/08/09 and $\sim$North-West for 27/09/09 (see Fig.~\ref{fig:regionslit}). We do not show the results for the 30/09/09 observation, which has a similar position angle and gives a similar result to 27/09/09, though with larger error bars. As mentioned in Section~\ref{sec:rednebspec}, the errors are obtained by propagating the photon noise on the intensity of the lines used. We choose $n_{\mathrm{e}}= 20\,\mathrm{cm}^{-3}$ arbitrarily, but in agreement with the low-density limit of the density-sensitive [O\,{\sc ii}] and [S\,{\sc ii}] ratios we measure (see panel (d) in Fig.~\ref{fig:spatial130809}, and the analysis below). The error on the density is not propagated into the errors on the other parameters, because they all depend very weakly on the density (see Eqs.~\ref{eq:T} and \ref{eq:x}). In the following sections we discuss the panels in Fig.~\ref{fig:spatial130809}. \begin{figure*} \includegraphics[width=8.7cm]{pictures/profile_properties_130809_subset_merge_c.eps} \includegraphics[width=8.7cm]{pictures/profile_properties_270909_subset_merge_c.eps} \caption{Spatial profiles of emission lines and properties measured from the emission line spectrum in 20 sub-apertures along the slit. On the x-axis is the distance in arcsec from C1\_31. The left and right frames, labeled by their observing dates, correspond to observations with different slit orientations, see Fig.~\ref{fig:regionslit} and Table~\ref{tab:observations}. The image at the top is a part of the 2D spectrum around 468.6\,nm. Panel (a) shows the average level of the continuum between $411.0-433.5$\,nm, without taking into account the difference in $A_{V,\mathrm{neb}}$ along the slit.
It also shows the intensity of the two strong nebular lines H$\beta$ and [O\,{\sc iii}] $\lambda5007$, with respect to the value at the location of the central source. Panel (b) shows the flux of the He\,{\sc ii}\,$\lambda$4686 nebular emission line with respect to H$\beta$. Panel (c) shows the value of $A_{V,\mathrm{neb}}$, derived from a number of Balmer and Paschen emission lines, assuming Case B recombination. Panel (d) shows [O\,{\sc ii}] and [S\,{\sc ii}] ratios that are sensitive to electron density. Panel (e) shows $T\left( \textrm{O\,{\sc iii}} \right)$ calculated from the ratio $R$ of the [O\,{\sc iii}] lines (Eq.~\ref{eq:T}). Panel (f) shows the total oxygen abundance $12+\log \left( \mathrm{O} / \mathrm{H} \right)$. } \label{fig:spatial130809} \end{figure*} \subsubsection*{Stellar continuum and nebular emission} Panel (a) shows the shape of the continuum between $411.0-433.5$\,nm; here we see the trace of our central source. We do not take into account the difference in $A_V$ along the slit (see panel c). The nebular emission in H$\beta$ and [O\,{\sc iii}] $\lambda$5007, two of the strongest emission lines in the spectra, is also shown. [O\,{\sc iii}] $\lambda$5007 is a forbidden transition and thus only emitted by low density nebulae. The 27/09/09 observation suggests that the nebular emission is built up from various discrete peaks, of which one is centered on C1\_31. This part of the nebula, with a radius of $\sim$$20-30$\,pc, is likely ionized by C1\_31. Therefore, the nebular properties at the central location in the slit are related to the properties of C1\_31 (See Section~\ref{sec:modelhii}). \subsubsection*{He\,{\sc ii}\,$\lambda$4686\,/\,H$\beta$} Panel (b) shows the ratio of the He\,{\sc ii}\,$\lambda$4686 and H$\beta$ nebular emission line. Around C1\_31, this nebular line is not present, but it is very pronounced in the 13/08/09 spectrum around offset $-4.5''$. 
This is also clearly visible in the 2D spectrum around this line (top left image of Fig.~\ref{fig:spatial130809}). This feature will be discussed in more detail in Section~\ref{sec:C235}. This nebular feature is much narrower than the wind He\,{\sc ii}\,$\lambda$4686 line we discussed in Section~\ref{sec:heii4686}. The nebular He\,{\sc ii}\,$\lambda$4686 line has a Gaussian FWHM of $\sim$1\,\AA, like the other nebular lines, slightly larger than the resolution element at this wavelength ($\sim$0.75\,\AA). \subsubsection*{Extinction} Panel (c) shows the values of $A_{V,\mathrm{neb}}$, which is the extinction derived from the ISM hydrogen line ratios. Adopting $R_V=3.24$ \citep{gieren2008}, $A_{V,\mathrm{neb}}$ is found for each aperture by minimizing the following expression: \begin{equation} \chi_{\mathrm{red}}^2=\frac{\chi^2}{\nu}=\frac{1}{\nu} \sum_{\mathrm{lines}} \frac{(q-q_0)^2}{\sigma_{q}^2} \end{equation} with $q_0$ the theoretical ratio for Case B recombination in the low-density limit at $T=10\,000$\,K \citep[see e.g.][]{osterbrock2006}, $q$ the intensity of a line with respect to H$\beta$ after applying $A_{V,\mathrm{neb}}$ (following \citealt{cardelli1989}), $\sigma_{q}^2$ the variance on $q$, and $\nu$ the number of degrees of freedom. We use H-9, H$\gamma$, H$\delta$, H$\alpha$, Pa-10, Pa-9 and Pa-7. The confidence interval on $A_{V,\mathrm{neb}}$ is given by the value for which $\chi_{\mathrm{red}}^2$ rises by 1. $A_{V,\mathrm{neb}}$ is used in the derivation of the properties per aperture shown in panels (d) to (f) of Fig.~\ref{fig:spatial130809}; the error on $A_{V,\mathrm{neb}}$ is not propagated, as its influence on the other parameters is small.\\ $A_{V,\mathrm{neb}}$ at the location of C1\_31 is derived to be $1.30\pm0.15$ and $1.15\pm0.10$ in the 13/08/09 and 27/09/09 spectra, respectively. This is lower than $A_{V,\mathrm{star}}=2.3\pm0.1$ (see Section~\ref{sec:photospec} and Fig.~\ref{fig:fullspec_dered}).
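The $\chi^2$ minimization above can be sketched in a few lines. The example below is schematic: it reddens a synthetic set of Balmer ratios by a known $A_V$ and then recovers it by a grid search, using approximate $A_\lambda/A_V$ values for $R_V=3.24$ (computed from the \citealt{cardelli1989} law) and standard Case B ratios at $10^4$\,K; the real fit uses more lines and measured uncertainties.

```python
# Approximate A_lambda/A_V for R_V = 3.24 (from the CCM law; illustrative)
K = {"Halpha": 0.82, "Hgamma": 1.33, "Hdelta": 1.41}
K_HBETA = 1.16

# Case B ratios to H beta at T = 1e4 K, low density (Osterbrock)
Q0 = {"Halpha": 2.86, "Hgamma": 0.468, "Hdelta": 0.259}

def deredden_ratio(q_obs, line, a_v):
    """Line/Hbeta ratio after removing a_v magnitudes of extinction."""
    return q_obs * 10 ** (0.4 * a_v * (K[line] - K_HBETA))

def fit_av(q_obs, sigma, grid=None):
    """Grid-search the A_V that minimizes chi^2 against Case B."""
    grid = grid or [i * 0.01 for i in range(0, 301)]
    def chi2(a_v):
        return sum((deredden_ratio(q_obs[l], l, a_v) - Q0[l]) ** 2
                   / sigma[l] ** 2 for l in q_obs)
    return min(grid, key=chi2)

# Synthetic "observed" ratios reddened by A_V = 1.2, then recovered:
true_av = 1.2
q_obs = {l: Q0[l] * 10 ** (-0.4 * true_av * (K[l] - K_HBETA)) for l in Q0}
sigma = {l: 0.05 * Q0[l] for l in Q0}
print(round(fit_av(q_obs, sigma), 2))   # -> 1.2
```

With real data, the shape of $\chi^2_{\mathrm{red}}$ around the minimum also yields the quoted confidence interval.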
Furthermore, we note that both $A_{V,\mathrm{neb}}$ and $A_{V,\mathrm{star}}$ are larger than $A_V=0.45$ from \citet{gieren2008}. However, this latter value is an average for NGC\,55 as a whole. This range of values reflects local variations, which are to be expected, especially in an almost edge-on galaxy.\\ \subsubsection*{Electron density} Panel (d) shows [O\,{\sc ii}] and [S\,{\sc ii}] ratios that are sensitive to electron density \citep[see e.g.][]{osterbrock2006}. For a value $>1.4$, both ratios are in the low-density ($<10^2$\,cm$^{-3}$) limit, so this measurement only provides an upper limit. Because these line pairs are very close in wavelength, their ratios are not affected by $A_{V,\mathrm{neb}}$. \subsubsection*{Electron temperature} Panel (e) gives the electron temperature calculated from the ratio $R$ of the [O\,{\sc iii}] lines (Equation~\ref{eq:T}). In the 27/09/09 observation (Fig.~\ref{fig:spatial130809}, right), in the apertures with $\mathrm{offset}>2''$, [O\,{\sc iii}] $\lambda 4363$ was hardly detected, resulting in an underestimation of the error in $T \left( \textrm{O\,{\sc iii}} \right)$. Taking this into account, and weighting the better-quality spectra more strongly, we conclude that around the location of C1\_31, $T \left( \textrm{O\,{\sc iii}} \right) = 11\,500\,\pm\,600$\,K, and that there are no significant gradients in the two spatial directions indicated in Fig.~\ref{fig:regionslit}. \subsubsection*{Oxygen abundance} Panel (f) shows the total oxygen abundance $[\mathrm{O} / \mathrm{H}]=12+\log \left( \mathrm{O} / \mathrm{H} \right)$. The slight drop in [O/H] that we see at $\mathrm{offset}>2''$ in the 27/09/09 observation is a propagated effect from the uncertain determination of $T \left( \textrm{O\,{\sc iii}} \right)$. Excluding this region, we find an average of $[\mathrm{O} / \mathrm{H}] = 8.18 \pm 0.03$, which corresponds to $Z=0.31 \pm 0.04\,Z_{\odot}$ adopting $[\mathrm{O}/ \mathrm{H}]_{\odot} = 8.69 \pm 0.05$ \citep{asplund2009}.
Though lower oxygen abundances are reported for NGC\,55 (\citealp[$8.08 \pm 0.10$,][]{tuelmann2003}), on average slightly higher values are measured (\citealp[$8.23-8.39$,][]{webster1983}; \citealp[8.53,][]{stasinska1986}; \citealp[$8.35\pm0.07$,][]{zaritsky1994}). \subsection{C2\_35} \label{sec:C235} C2\_35 (RA 0:14:59.68, DEC -39:12:42.84) is located $4.5''$ West South-West of C1\_31 (see Fig.~\ref{fig:regionslit}). Its strongly ionised surrounding nebula is visible in the 13/08/09 spectrum (Fig.~\ref{fig:spatial130809}, left, panel a). At offset $-4.5''$ we see a weak continuum, as the point-spread function of C2\_35 is mostly outside the slit. We detect strong (forbidden) nebular line emission (panel a), but the most striking feature is the high He\,{\sc ii}\,$\lambda$4686\,/\,H$\beta=0.03$ ratio (left, panel~b and top image). The FORS2 spectrum of C2\_35, classified as an early OI by \citet{castro2008}, is similar to C1\_31. The broad He\,{\sc ii}\,$\lambda$4686 wind feature is indicative of the presence of a hot WR star. On top of the broad wind feature, there is a narrow nebular emission line. He\,{\sc ii} emission lines from nebulae are only rarely seen and often associated with strong X-ray sources \citep[see e.g][]{pakull1986,kaaret2004}, but see \citet{shirazi2012}. No obvious X-ray source is detected in archival XMM-Newton and Chandra observations at the location of C2\_35 (priv. comm. R. Wij\-nands). The exposure times of these images, however, would not be sufficient to detect the X-ray emission of, for example, a stellar mass black hole at this distance. \\ \section{Model H\,{\sc ii} region} \label{sec:modelhii} \begin{figure} \includegraphics[width=8.5cm]{pictures/tlustygrid_TO3_new_c.eps} \caption{Upper panel: the predicted $T$(O\,{\sc iii}) from the synthesized nebular spectrum as a function of $T_{\mathrm{eff}}$ and $L_*$ of a central ionizing O-star \citep{lanz2003} in a CLOUDY model with an inner radius of 0.1~pc. 
The solid and dashed horizontal lines are the observed $T$(O\,{\sc iii}) and error estimate for the direct environment of C1\_31. Lower panel: the predicted nebular line ratio He\,{\sc ii}\,$\lambda\,4686$/H$\beta$ as a function of $T_{\mathrm{eff}}$ and $L_*$ of a central ionising star. The black dots correspond to realistic dwarf and supergiant O-type stars in the $T_{\mathrm{eff}}$ and $L_*$ grid \citep{martins2005}.} \label{fig:CLOUDY_TO3} \end{figure} If the stellar spectrum is a composite of different sources, various solutions are possible. However, the properties $T$(O\,{\sc iii}), $n_{\mathrm{e}}$, and [O/H] of the surrounding nebula are constrained (see Section~\ref{sec:surrh2}), and the nebular emission profiles along the slit give some idea of the size of the ionised region. The ionising source, whose constituents are constrained by the stellar spectrum, must be able to produce a region with the properties we derive from the nebular spectrum. In this section we use the spectral synthesis code CLOUDY (version 08.00, \citealt{ferland1998}) to investigate which properties of the nebula and of the central ionising source have a strong influence on the observables that we measured. This will allow us to constrain $T_\mathrm{eff}$ and $L_*$ of the ionising source.\\ CLOUDY is designed to simulate gaseous interstellar media. From a given set of conditions, such as the luminosity and spectral shape of the ionising source, the program computes the thermal, ionisation and chemical structure of a region as well as the emitted spectrum. In order to compare our model results to what has been observed, we mainly use the simulated emitted spectrum. Since we only know the total abundance of oxygen, we use the abundance pattern corresponding to the Orion Nebula provided by CLOUDY \citep{baldwin1991,rubin1991,osterbrock1992,savage1996}, and scale the metallicity such that the oxygen abundance matches our observed value.
The total hydrogen density $n_\mathrm{H}$ is set to $20$~cm$^{-3}$, consistent with the low electron density limit derived from [O\,{\sc ii}] and [S\,{\sc ii}], assuming that all hydrogen is ionised. We do not include dust grains\footnote{We exclude dust grains to keep the model simple. The extinction we measure both in the stellar spectrum and the hydrogen emission lines could as well be due to dust that is outside the surrounding ionised region.}. We use a spherical geometry: the inner radius of the cloud is 0.1 pc and the outer radius is set where the temperature drops below 4\,000\,K. \\ We examined the ratio $R$ (Eq.~\ref{eq:R}) of the modeled output spectra, and computed $T$(O\,{\sc iii}), as a function of $T_{\mathrm{eff}}$ and $L_*$ of a synthetic O-star spectrum \citep{lanz2003} as central ionising source; see the upper panel of Fig.~\ref{fig:CLOUDY_TO3}. A grid of $T_{\mathrm{eff}}=30\,000-55\,000$\,K and $\log(L/L_{\odot})=3-6$ has been examined; the black dots indicate realistic stars in this grid according to the calibration by \citet{martins2005}. We see that a higher $T_{\mathrm{eff}}$ of the central star leads to a higher $T$(O\,{\sc iii}), while the effect of $L_*$ on the `measured' $T$(O\,{\sc iii}) is smaller. $L_*$ does however influence the size of the cloud: a higher $L_*$ leads to a larger cloud. A cloud with metallicity $Z=0.3\,\mathrm{ Z}_{\odot}$ needs a $T_{\mathrm{eff}}\sim44\,000-55\,000$ K star, depending on the luminosity, to produce the measured $T$(O\,{\sc iii}) of $11\,500\pm600$\,K (horizontal line). Furthermore, we find that the metallicity of the cloud has an even stronger influence on $T$(O\,{\sc iii}); lower metallicities result in hotter clouds because they are less efficiently cooled. Changing $n_\mathrm{H}$ only influences the spatial scale of the cloud: a density ten times as low results in a region four times as large. 
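The quoted density scaling is what one expects from the classical Str\"omgren radius, $R_{\mathrm{S}} = \left[3 Q_{\mathrm{H}}/(4\pi \alpha_{\mathrm{B}} n^2)\right]^{1/3}$, which varies as $n^{-2/3}$ at fixed ionising photon rate (a textbook relation, noted here as a consistency check rather than something CLOUDY outputs directly):

```python
# Stromgren scaling: R_S goes as n^(-2/3) at fixed ionising photon rate Q_H
scale = 10 ** (2.0 / 3.0)    # density lowered by a factor of 10
print(round(scale, 1))       # -> 4.6
```

A factor of ten in density thus gives a factor $10^{2/3}\simeq4.6$ in radius, in line with the roughly fourfold size change seen in the models.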
\\ We also analysed the nebular line ratio He\,{\sc ii}\,$\lambda\,4686$/H$\beta$ as a function of $T_{\mathrm{eff}}$ and $L_*$ of a central ionising source; see the lower panel of Fig.~\ref{fig:CLOUDY_TO3}. Below $T_{\mathrm{eff}}\simeq40\,000$\,K, no significant He\,{\sc ii}\,$\lambda\,4686$ line is predicted for any of the luminosities in our grid. For $T_{\mathrm{eff}}>40\,000$\,K, the line can be produced, and is stronger with respect to H$\beta$ for more luminous sources. The ratio He\,{\sc ii}\,$\lambda\,4686$/H$\beta$ decreases strongly with increasing inner radius of the model cloud. The nebula directly around C1\_31 does not show He\,{\sc ii}\,$\lambda\,4686$; therefore the inner radius is likely to be 1~pc or more. $T$(O\,{\sc iii}) is not affected significantly by changing the inner radius. \section{Discussion: a consistent picture} \label{sec:consistentpic} \begin{figure} \includegraphics[width=8.3cm]{pictures/eatprofiles_8990_allnew2.eps} \caption{The observed normalised spectrum (solid lines) together with the combined line profiles (dotted lines) from the models described in Table~\ref{tab:modparam}, representing the late O giant component MOD89 (dash-dot-dot lines) and the WN-like component MOD90 (dashed lines), in the ratio $4:1$.} \label{fig:combinedmodelspec} \end{figure} \begin{figure} \includegraphics[width=8.3cm]{pictures/fullspec_CLOUDY_8990_c.eps} \caption{The spectral energy distribution at the inner radius of the model cloud ($r_{\mathrm{in}}=0.1$\,pc), split into its different components. It is clear that in the visual, the 30\,000\,K component (MOD89, dot-dashed lines) dominates the light, while the hydrogen-ionising photons (left of the vertical dashed line, with $h\nu>13.6$\,eV) are provided mainly by the hot (50\,000\,K) WN component (MOD90, dashed lines). The dotted profile shows the total input spectrum.
\label{fig:fullspec_cloudy}} \end{figure}
In the previous sections, we have put constraints on the stellar parameters of C1\_31 and its surrounding region. We suggest that C1\_31 is not a single object, but rather a stellar cluster. We summarise the main arguments below.
\begin{enumerate}
\item[(a)] the line profiles cannot be reproduced by one single stellar atmosphere model, especially not He\,{\sc ii}\,$\lambda$4686 (Section~\ref{sec:heii4686});
\item[(b)] the visual absolute magnitude $M_V=-8.7$ of this source is very high for a single object (Section~\ref{sec:photospec});
\item[(c)] a very hot central object ($\sim$50\,000\,K) is necessary to produce a $T$(O\,{\sc iii}) of $\sim$11\,500\,K in the surrounding nebula (Section~\ref{sec:modelhii}), but the ``average'' spectral type suggests that the majority of the luminosity in the visual is produced by $T_{\mathrm{eff}}\lesssim35\,000$ K stars (Section~\ref{sec:modeltemp}).
\end{enumerate}
The spectrum of a composite object is a superposition of all components, weighted by their relative brightness at the wavelength considered. All lines that we analysed are close in wavelength, so we adopt a single brightness ratio, namely the ratio in $V$. Given a universal initial mass function (IMF), a cluster will consist of many cool low-mass stars, and only a few very luminous and hot ones. In this analysis we will focus on the most massive and luminous members, because these dominate the cluster spectrum, as well as the flux of ionising photons.
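This weighting can be sketched in a few lines (illustrative profile values only; the actual profiles come from the FASTWIND models):

```python
def combine_normalised_profiles(profiles, flux_fractions):
    """Combine continuum-normalised line profiles of a composite source.

    profiles: list of equal-length lists (normalised flux per wavelength bin)
    flux_fractions: each component's fraction of the local continuum flux
    """
    assert abs(sum(flux_fractions) - 1.0) < 1e-9
    n_bins = len(profiles[0])
    return [sum(w * p[i] for w, p in zip(flux_fractions, profiles))
            for i in range(n_bins)]

# A WN-like emission point (1.5) diluted by an O-giant absorption point (0.8)
# in a 1:4 continuum flux ratio yields a weak net emission of ~0.94.
print(combine_normalised_profiles([[1.5], [0.8]], [0.2, 0.8]))
```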
To this end we present a simple combination of FASTWIND models that (a) reproduces all observed line profiles, (b) has an absolute magnitude $M_V=-8.7$, and (c) when put into a CLOUDY model of an H\,{\sc ii} region, produces an electron temperature $T$(O\,{\sc iii})\,$\sim11\,500$\,K, for an adopted metallicity $Z=0.3\,\mathrm{Z}_{\odot}$ and hydrogen density $n_\mathrm{H}=20$\,cm$^{-3}$.\\ \par Model MOD90 (see Table~\ref{tab:modparam}) has a high temperature ($T_{\mathrm{eff}}=50\,000$~K), a high mass-loss rate ($3\times10^{-5}\,\mathrm{M}_{\odot}\mathrm{~yr}^{-1}$) and an enhanced helium abundance $N_{\mathrm{He}}/N_{\mathrm{H}}=0.8$. It mimics a Wolf-Rayet WN star \citep[see e.g.][]{crowther2008}. These properties result in a strong and broad He\,{\sc ii}\,$\lambda$4686 emission feature (Fig.~\ref{fig:combinedmodelspec}, upper right). To reproduce the observed shape of He\,{\sc ii}\,$\lambda$4686, we combine this profile with a model with a weak He\,{\sc ii}\,$\lambda$4686 absorption profile: model MOD89, with $T_{\mathrm{eff}}=30\,000$~K. This model resembles a late-O/early-B giant or bright giant. We create a combined profile of 20\% MOD90 and 80\% MOD89. With this flux ratio, the He\,{\sc i} and He\,{\sc ii} absorption lines resemble a 30\,000~K star, because in this respect the MOD89 model is dominant. The H$\alpha$ wings are well reproduced (Fig.~\ref{fig:combinedmodelspec}, upper right). The other Balmer lines in the WN component MOD90 are also affected by the strong wind, which results in a shallower line or even a P-Cygni profile in the case of H$\beta$ and H$\gamma$. However, the strong absorption profile of the late O giant component MOD89 dominates, and the combined profiles match the observed ones.\\ If the cluster consists of ten stars like MOD89 and two like MOD90, the ensemble would have a visual magnitude of $M_V=-8.63$, in agreement with the observed value $M_V=-8.7\pm0.4$. 
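The quoted ensemble magnitude is a direct flux sum; a minimal check, using the per-component magnitudes of Table~\ref{tab:mv}:

```python
import math

def combined_magnitude(components):
    """Total absolute magnitude of a set of (M_V, number of stars) pairs:
    fluxes add, so M_tot = -2.5 log10( sum_i N_i * 10**(-0.4 M_i) )."""
    total_flux = sum(n * 10 ** (-0.4 * m) for m, n in components)
    return -2.5 * math.log10(total_flux)

# Ten MOD89-like stars (M_V = -5.93) and two MOD90-like stars (M_V = -5.95)
print(round(combined_magnitude([(-5.93, 10), (-5.95, 2)]), 2))  # -8.63
```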
The flux ratio in the visual would be $\sim4:1$, due to the different bolometric corrections (see Table~\ref{tab:mv}). The visual flux is dominated by the late O-type component, while further to the UV, the hot Wolf-Rayet component would dominate. The latter is required to reproduce the observed electron temperature in the cloud. \\
\begin{table} \caption{Properties of the final composition of the cluster. Our model cluster has two different components: a late O giant component (MOD89) and a Wolf-Rayet WN-type component (MOD90), see also Table~\ref{tab:modparam}. The columns give the model ID, effective temperature, bolometric luminosity, bolometric correction, absolute visual magnitude, number of stars of this type in the cluster, total absolute magnitude of this component, and fraction of the total visual flux provided by this component. \label{tab:mv}} \begin{tabular}{lccc@{ } ccc@{ }c} \hline ID & $T_{\mathrm{eff}}$ &$L_*$ & BC & $M_V$ & \# & $M_{V}$ & Fraction \\ &(K) &$(\mathrm{L}_{\odot})$ & & & & total & of $F_{\mathrm{vis}}$\\ \hline MOD89 & 30\,000 &$10^{5.24}$ & $-2.42$ & $-5.93$ & 10 & $-8.43$ & 0.8 \\ MOD90 & 50\,000 &$10^{5.83}$ & $-3.90$ & $-5.95$ & 2 & $-6.70$ & 0.2 \\ \hline \end{tabular} \end{table}
We have used the combined spectrum of 10 times MOD89 and 2 times MOD90 as ionising source in a CLOUDY model. The SED at the inner radius of the cloud is shown in Fig.~\ref{fig:fullspec_cloudy}, where we see that the ionising flux is indeed dominated by the WN component MOD90. We use the same configuration as we did in Section~\ref{sec:modelhii}, with $Z=0.3\,\mathrm{Z}_{\odot}$. We choose $n_{\mathrm{H}}=20$~cm$^{-3}$, such that CLOUDY produces an ionised region with a radius of $\sim$20\,pc, see Section~\ref{sec:nebprop}. From the synthesized nebular spectrum we infer $T$(O\,{\sc iii}) $\sim10\,800$\,K, slightly lower than the measured $T$(O\,{\sc iii}) $\sim11\,500\pm600$\,K, but in reasonable agreement.
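The per-component magnitudes in Table~\ref{tab:mv} follow from $M_V = M_{\mathrm{bol}} - \mathrm{BC}$ with $M_{\mathrm{bol}} = M_{\mathrm{bol},\odot} - 2.5\log(L_*/\mathrm{L}_{\odot})$. A quick check, adopting $M_{\mathrm{bol},\odot}=4.74$ (the small differences with the table reflect rounding and the exact solar zero point):

```python
M_BOL_SUN = 4.74  # adopted solar bolometric magnitude

def absolute_visual_magnitude(log_lum, bc):
    """M_V from log10(L/L_sun) and the bolometric correction BC."""
    m_bol = M_BOL_SUN - 2.5 * log_lum
    return m_bol - bc

print(absolute_visual_magnitude(5.24, -2.42))  # MOD89: ~ -5.94 (table: -5.93)
print(absolute_visual_magnitude(5.83, -3.90))  # MOD90: ~ -5.94 (table: -5.95)
```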
According to the model in Section~\ref{sec:modelhii} and its results in Figure~\ref{fig:CLOUDY_TO3}, a 50\,000~K star would be able to heat the cloud to $\sim$11\,500\,K. However, adding more late O stars decreases $T$(O\,{\sc iii}). The predicted nebular line ratio He\,{\sc ii}\,$\lambda\,4686$/H$\beta$ for this cluster composition is 0.06, which is in tension with the non-detection of the nebular He\,{\sc ii}\,$\lambda\,4686$ line around C1\_31. This disagreement can be reconciled by increasing the inner radius of the model cloud to 1~pc or more.\\
In principle, information on the stellar content may also be derived from considering the mass-loss rates of the contributing stars. For our late O\,II/III source (MOD89) the adopted mass-loss rate of $2\times10^{-6}\mathrm{~M}_{\odot}$~yr$^{-1}$ is rather large compared to theoretical expectations for such a star at a metallicity of $0.3\,\mathrm{Z}_{\odot}$, being an order of magnitude higher than predicted by \citet{vink2001}, after correcting the empirical mass-loss rate for wind inhomogeneities \citep{mokiem2007}. Using the Vink et al. prescription for the WN star (MOD90) yields a much smaller discrepancy of a factor $\sim$2. The discrepancy in the O star mass-loss rate can be partly reconciled by taking a smaller number of brighter stars, for instance supergiants, in the following way: a brighter star has a larger radius. In order to preserve the H$\alpha$ profile shape, the quantity $Q \propto \dot{M} / (R_*^{3/2} v_{\infty})$ needs to remain invariant \citep{dekoter1998}. This implies that for fixed temperature $\dot{M} \propto L_*^{3/4}$. However, the expected mass-loss rate scales as $\dot{M} \propto L_*^{2.2}$; therefore a smaller number of brighter O stars may still match the strong H$\alpha$ emission line wings and better reconcile observed and theoretical mass-loss rates.
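Spelled out, the scaling follows from keeping the wind-strength parameter fixed at constant $T_{\mathrm{eff}}$ and $v_{\infty}$:
\begin{displaymath}
Q \propto \frac{\dot{M}}{R_*^{3/2}\,v_{\infty}} = \mathrm{const.} \;\Rightarrow\; \dot{M} \propto R_*^{3/2}; \qquad
L_* \propto R_*^{2}\,T_{\mathrm{eff}}^{4} \;\Rightarrow\; R_* \propto L_*^{1/2}; \qquad
\mathrm{hence}\quad \dot{M} \propto L_*^{3/4}.
\end{displaymath}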
We did not pursue this strategy in view of the uncertainties that are involved, for instance those relating to the stellar mass (which enters the problem as mass-loss is expected to scale with mass as $\dot{M} \propto M^{-1.3}$). Moreover, we remark that higher than expected mass-loss rates have been reported for O stars in low-metallicity galaxies \citep{tramper2011}. \section{Summary and conclusions} \label{sec:conclusion} We have analysed the VLT/X-shooter spectrum of C1\_31, one of the most luminous sources in NGC\,55, and its surroundings. We conclude that NGC\,55\,C1\_31 is a cluster consisting of several massive stars, including at least one WN star, of which we observe the integrated spectrum. \\ The H, He\,{\sc i} and He\,{\sc ii} lines in the stellar spectrum have been compared to synthesized spectra from a grid of FASTWIND non-LTE stellar atmosphere models. All normalised lines except He\,{\sc ii}\,$\lambda\,4686$ can be reproduced by a single-star model with $T_{\mathrm{eff}} \lesssim 35\,000$\,K, $\dot{M} \sim2\times10^{-6}\mathrm{~ M}_{\odot}$~yr$^{-1}$, and $v_{\mathrm{rot}}\sin(i)=150 \pm 50$~km\,s$^{-1}$. He\,{\sc ii}\,$\lambda\,4686$ has an equivalent width of $-3.6\pm0.4$\,\AA, but is $\sim$3000\,km\,s$^{-1}$ wide. No single star model is able to produce matching profiles for all lines simultaneously.\\ Analysis of the nebular emission spectrum along the slit yields an electron density $n_{\mathrm{e}}\leq10^2$\,cm$^{-3}$, electron temperature $T \left( \textrm{O\,{\sc iii}} \right) = 11\,500\,\pm\,600$\,K, and oxygen abundance $[\mathrm{O}/\mathrm{H}]=8.18 \pm 0.03$, which corresponds to a metallicity $Z=0.31\pm0.04\,\mathrm{Z}_{\odot}$. 
A grid of CLOUDY models suggests that a hot ($\sim$50\,000\,K) ionising source is necessary to reproduce the observed $T \left( \textrm{O\,{\sc iii}} \right)$ in an H\,{\sc ii} region with comparable density and metallicity.\\
We have also presented an illustrative cluster composition that reproduces all observed spectral features and the visual brightness of the target, and which is able to maintain an H\,{\sc ii} region with properties similar to those derived from the nebular spectrum. In our model, the cluster contains several blue (super)giants and one or more WN stars. While the proposed composition might not be unique, the presence of at least one very hot, helium-rich star with a high mass-loss rate is a robust conclusion. High angular resolution imaging reaching a resolution of 0.05\arcsec\ (corresponding to a physical distance of about 0.5~pc) would provide an improvement of a factor of 10 to 20 compared to our seeing-limited observations and would help to constrain the composition of the cluster. This makes NGC\,55\,C1\_31 a prime target for ELT-class telescopes combining high angular resolution and integral field or multi-object spectroscopy.
\section*{Acknowledgments}
We acknowledge the X-shooter Science Verification team. We thank Andrea Modigliani and Paolo Goldoni for their support in data reduction. We also thank Christophe Martayan and Rudy Wij\-nands for helpful discussions as well as Norberto Castro and Grzegorz Pietrzy\'nski for communicating updated photometry. We thank the referee for constructive comments.
\bibliographystyle{mn2e_fix}
\usepackage{times}
\usepackage{latexsym}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{microtype}
\usepackage{multirow}
\usepackage{tcolorbox}
\usepackage{tabularx}
\usepackage{graphicx, wrapfig, subcaption, setspace, booktabs}
\usepackage{url}
\usepackage{pifont}
\usepackage{adjustbox}
\usepackage{flushend}
\usepackage{verbatim}
\usepackage{courier}
\usepackage{xcolor}
\newcommand{\cmark}{\ding{51}}
\newcommand{\xmark}{\ding{55}}
\newcommand{\rougeval}[1]{\textcolor{red}{\textbf{{#1}}}}
\newcommand{\vertval}[1]{\textcolor{olive}{\textbf{{#1}}}}
\newcommand{\corrval}[2]{{#2}}
\newcommand{\addval}[1]{{#1}}
\newcommand{\delval}[1]{\textcolor{red}{[{#1}]}}
\iscramset{
  title={How does a Pre-Trained Transformer Integrate Contextual Keywords? Application to Humanitarian Computing},
  short title={How does a Pre-Trained Transformer Integrate Contextual Keywords},
  author={
    short name={Valentin Barriere},
    full name=Valentin Barriere\thanks{corresponding author},
    affiliation={European Commission \\ Joint Research Centre (JRC) - Ispra},
  },
  author={
    full name=Guillaume Jacquet,
    affiliation={European Commission \\ Joint Research Centre (JRC) - Ispra \\ \href{mailto:[email protected]}{\url{[email protected]}}},
  },
}
\begin{document}
\maketitle
\abstract{In a classification task, dealing with text snippets and metadata usually requires multimodal approaches. When those metadata are textual, it is tempting to use them intrinsically with a pre-trained transformer, in order to leverage the semantic information encoded inside the model. This paper describes how to improve a humanitarian classification task by adding the crisis event type to each tweet to be classified. Based on additional experiments on the model weights and behavior, it identifies how the proposed neural network approach partially over-fits the particularities of the Crisis Benchmark, to better highlight how the model is still undoubtedly learning to use and take advantage of the metadata's textual semantics.}
\keywords{Transformers, Contextual keywords, Humanitarian Computing}
\section{Introduction}
It is frequent to fuse information from different channels in order to help the classifier disambiguate some examples \cite{Xu2013}.
Nonetheless, systems that mix together the different information fluxes inside the model are mainly used in multimodal data-processing \cite{Baltrusaitis} and are less frequent for a single modality. The rise of Transformer models \cite{Vaswani2017} created a new set of possibilities in text processing. Nowadays, the new architectures are pushing the boundaries of the state-of-the-art \cite{Raffel2019,Brown2020} in general tasks and no longer only in Machine Translation. Those architectures allow us to efficiently process an input composed of 2 sentences, as in Natural Language Inference tasks \cite{Williams2017}, so that they can interact together and the fusion happens inside the model. This is interesting since the model is able to infer the meanings contained in both sentences in context. In this paper, we propose the use of textual metadata semantics, by encoding it textually in the input of the classifier, in a way that the model can detect the metadata as separate information and yet is able to use cross-input attention between the metadata and the text content. This can be seen as related to some work done on zero-shot learning using Natural Language Inference (NLI), where the semantic content of the label is encoded inside the classifier \cite{Yin2020,Halder2020}. \cite{Clark2019} studied the attention mechanism of a BERT model and clustered the attention heads. \cite{Rogers2020a} reviewed the understanding of BERT-based transformers in the literature, highlighting that semantic information is spread across the entire model. To the best of our knowledge, no work has been done to analyse the semantics between a contextual keyword and its associated sentence by retrieving the most important words with the help of the attention weights, then clustering the retrieved word embeddings.
In the context of a crisis caused by a natural or human source, it is important to react quickly and to be aware of the population's needs \cite{Alam2020}. The resources created by social media users can contain crucial information \cite{Olteanu2015, Imran2016b} because the poster may be witnessing the situation directly \cite{Zahra2020}. The type of event is highly variable, and the system needs to be relevant in the context of a natural disaster like a fire or a hurricane \cite{Waqas2019, Alam2018a}, a train crash or a pandemic \cite{Qazi2020}. Moreover, it is known that out-of-domain prediction is harder \cite{Alam2018b}. We chose to work on the Crisis Benchmark 14-event-types dataset \cite{Alam2021benchmark} in an 11-humanitarian-class supervised setting, in order to use the event type as metadata to enhance the quality of the predictions, comparing the results using BERT \cite{Devlin2018}, RoBERTa \cite{Liu2019} and T5 \cite{Raffel2019}. It is important to note that in this type of task, the need to retrieve information from social media arises after the beginning of the disaster; hence, the type of disaster is always known and is available for use. We found that in every configuration the event-aware models obtain better results than their Vanilla counterparts. We then investigated this improvement by an intensive study of the dataset, the learned model behaviors and predictions. In addition to the experiments using the official partition, we used a Leave-One-Event-Type-Out setting (LOETO) in order to prevent the model from overfitting some particularities of the dataset events.
This work directly continues the work of \cite{Alam2021benchmark}, in which the event type is added to the tweet content using a simple concatenation, leading to no improvement over the Vanilla model.
\vspace*{-.1cm} \section{Method} \vspace*{-.1cm}
The proposed approach consists of the integration of metadata information inside the input text representation. We chose a dataset where this information was available and where the authors of the original dataset claimed that their method was not working. We decided to integrate the event type into the learning algorithm using an NLI configuration for 3 transformer models. After running several experiments, we observed an increase in all the models' performances, independently of the architecture. We ran some analyses in order to investigate whether this gain came from the textual content of the metadata, or from mechanical memorization of the label distribution conditioned by the metadata. We chose to focus on the Crisis Benchmark's humanitarian classification task for space reasons. The reader should note that we also ran experiments on the informativeness binary classification task described in \citet{Alam2021benchmark} that led to positive results.
\vspace*{-.1cm} \subsection{Incorporating Event Type} \vspace*{-.1cm}
Instead of naively concatenating the event type and the tweet together, we used the patterns offered by the different models to separate the two types of information. In this way, we are not breaking the syntax and the rhythm of the tweet sentence by adding a piece of text that does not belong in the initial sentence. For BERT and RoBERTa, we used the special tokens in order to separate the 2 sentences. For T5, we used a text pattern, with a new task name and the fields \texttt{sentence} and \texttt{context}. Examples of the different pre-processing configurations we used are visible in Table \ref{tab:config_preproc}.
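As a sketch, the event-aware inputs of Table~\ref{tab:config_preproc} can be assembled as follows (plain strings for illustration; in practice each tokenizer inserts its own special tokens when given a sentence pair):

```python
def build_event_aware_input(model_family, event_type, tweet):
    """Assemble the event-aware input string, following the per-model
    patterns of Table tab:config_preproc."""
    if model_family == "bert":
        return f"[CLS] {event_type} [SEP] {tweet} [SEP]"
    if model_family == "roberta":
        return f"<s> {event_type} </s> {tweet} </s>"
    if model_family == "t5":
        return f"cbmk context: {event_type} sentence: {tweet}"
    raise ValueError(f"unknown model family: {model_family}")

print(build_event_aware_input(
    "t5", "fire", "After deadly Brazil nightclub fire, safety questions emerge."))
```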
\begin{table*}[] \centering \caption{Examples of text pre-processing for each model} \resizebox{.9\textwidth}{!}{% \begin{tabular}{l|l} Model & Example \\ \hline BERT & \footnotesize{\texttt{{[}CLS{]} fire {[}SEP{]} After deadly Brazil nightclub fire, safety questions emerge. {[}SEP{]}}} \\ RoBERTa & \footnotesize{\texttt{\textless s\textgreater fire \textless/s\textgreater After deadly Brazil nightclub fire, safety questions emerge. \textless/s\textgreater}} \\ T5 & \footnotesize{\texttt{cbmk context: fire sentence: After deadly Brazil nightclub fire, safety questions emerge.}} \end{tabular} } \label{tab:config_preproc} \vspace*{-.5cm} \end{table*}
By calling $y$ the humanitarian label, $\mathbf{x}$ the observed tweet and $m$ the metadata, we can reformulate the prediction of the Vanilla model as $P(y|\mathbf{x})$. Instead of modeling $P(y|\mathbf{x})$, the event-aware configuration models a conditioned probability $P(y|\mathbf{x}, m)$. Another way to integrate the metadata would be to model the joint probability of the class $y$ and the metadata $m$ in $P(y, m|\mathbf{x})$. We chose the conditioned model in order to easily use the semantic information contained inside the keyword, similarly to the way some zero-shot NLI models handle the labels \cite{Yin2020}.
\vspace*{-.1cm} \subsection{Model and dataset analysis} \vspace*{-.1cm}
We ran some analyses of the event-aware model, in order to verify whether the model was integrating semantic information regarding the textual content of the event. Transformers, and more generally Neural Network models, are known to be black boxes, but some work has been done in the direction of interpreting them \cite{Clark2019}. In combination with the analysis of the model and its behavior, we also analyze the dataset and its label distribution, allowing us to detect some patterns on which the neural network could rely.
We seek to answer the following questions:
\begin{tcolorbox}
\textbf{-} \textit{Dataset label distribution}: What does the label distribution look like for each event?
\textbf{-} \textit{Predicted label distribution}: What is the impact of conditioning over an event on the distribution of predictions?
\textbf{-} \textit{Out-of-domain learning}: Is the event-aware model still better in a Leave-One-Event-Type-Out setting?
\textbf{-} \textit{Attention weights}: Which words are influenced by the metadata event type token?
\end{tcolorbox}
We ran the models with the \texttt{transformers} library \cite{Wolf2019}. For BERT and RoBERTa, we used a learning rate of 1e-6 with the Adam algorithm, for a maximum of 20 epochs, while we ran the T5 for 3 epochs, using a learning rate of 3e-4. For RoBERTa, we manually added and trained a new layer\footnote{Layer managing the \texttt{token\_type\_ids}.} allowing us to specify the token types, which is normally not used during RoBERTa pre-training. In the LOETO setting, we created the dev set by taking 25\% of the test set; otherwise we used the official train/dev/test partition. We fixed the length of every sequence to at most 128 tokens in total.
\vspace*{-.2cm} \section{Experiments and Analysis} \vspace*{-.1cm}
\subsection{Dataset} \vspace*{-.1cm}
For our experiments, we used the Crisis Benchmark dataset \cite{Alam2021benchmark} composed of 87,557 tweets collected during several crisis events, which can be separated into 14 event types: bombing, collapse, crash, disease, earthquake, explosion, fire, flood, hazard, hurricane, landslide, shooting, volcano, or none. This dataset has been labeled into 11 humanitarian classes: \textit{Affected individuals, Caution and advice, Displaced and evacuations, Donation and volunteering, Infrastructure and utilities damage, Injured or dead people, Missing and found people, Not humanitarian, Requests or needs, Response efforts, Sympathy and support}.
We refer the readers to the original paper for more details on the dataset \cite{Alam2021benchmark}.
\vspace*{-.2cm} \subsection{Classifier results} \vspace*{-.1cm}
The results of the experiments comparing the event-aware and event-unaware models are shown in Table \ref{tab:results}. We used unweighted means of the Precision, Recall and F1, as well as global Accuracy and weighted F1. The best results among the transformers are, unsurprisingly, obtained with T5, which has 220M parameters, more than BERT and RoBERTa (110 and 125M). For each transformer, the event-aware model reaches higher performances than the Vanilla model. The results per event are available in Table \ref{tab:per_event}; the only event type that is worse in the event-aware setting is '\textit{fire}'. Finally, we believe that the reason our Vanilla models obtain significantly better results than \citet{Alam2021benchmark} lies in a better fine-tuning of the transformers, with a lower learning rate and a higher number of epochs.
\begin{table}[] \centering \caption{Results on the humanitarian classification task.
} \begin{tabular}{c|c|ccc|cc} Model & Event & Prec & Rec & u-F1 & w-F1 & Acc \\ \hline \hline BERT \cite{Alam2021benchmark} & \cmark & 70.1 & 71.3 & 70.7 & 86.5 & 86.5 \\ RoBERTa \cite{Alam2021benchmark} & \cmark & 70.2 & 72.3 & 71.1 & 87.0 & 87.0 \\ \hline \multirow{2}{*}{BERT} & \xmark & 73.5 & 71.9 & 72.5 & 87.5 & 87.5 \\ & \cmark & 75.3 & 72.5 & 73.7 & 88.3 & 88.1 \\ \hline \multirow{2}{*}{RoBERTa} & \xmark & 74.2 & 73.6 & 73.7 & 87.9 & 88.0 \\ & \cmark & 74.1 & 74.5 & 74.1 & 88.5 & 88.5 \\ \hline \multirow{2}{*}{T5} & \xmark & 75.0 & 74.4 & 74.6 & 88.3 & 88.4 \\ & \cmark & 76.7 & 73.8 & \textbf{75.1} & \textbf{88.8} & \textbf{88.9} \end{tabular} \label{tab:results} \end{table}
\vspace*{-.2cm} \subsection{Analysis} \vspace*{-.1cm}
We ran analyses on the dataset as well as on the BERT weights and behavior in order to interpret the good results of the event-aware model compared to the Vanilla model.
\vspace*{-.1cm} \subsubsection*{Dataset label distribution}
It is interesting to look at the distribution of the labels with respect to the event type, available in Figure \ref{fig:dist_labels_per_event} for the training set\footnote{The distributions are comparable on the test set.}. We can see that the distribution of labels within each event type is highly imbalanced: for some events, like \textit{Landslide} and \textit{Volcano}, there is even only one label, which is \textit{Not humanitarian}. These events, which are highly unrepresentative in terms of label distribution, only represent a small portion of the dataset (Figure \ref{fig:dist_labels_per_event}).
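The per-event label distributions behind Figure~\ref{fig:dist_labels_per_event} amount to a simple conditional tally; an illustrative sketch with made-up samples:

```python
from collections import Counter, defaultdict

def label_distribution_per_event(samples):
    """samples: iterable of (event_type, label) pairs.
    Returns {event_type: {label: proportion}}."""
    counts = defaultdict(Counter)
    for event, label in samples:
        counts[event][label] += 1
    return {event: {lab: n / sum(c.values()) for lab, n in c.items()}
            for event, c in counts.items()}

# Toy samples mimicking the dataset structure (not actual dataset content)
toy = [("volcano", "Not humanitarian"), ("volcano", "Not humanitarian"),
       ("fire", "Caution and advice"), ("fire", "Not humanitarian")]
print(label_distribution_per_event(toy))
```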
\begin{figure*} \centering \hspace*{-1.cm} \includegraphics[width=1.15\textwidth]{images/hist_dist_labels_per_event_type_proportion_and_all_events_fontsize12.png} \caption{Distributions of labels regarding the event type in the train set, with the proportion of each event type} \label{fig:dist_labels_per_event} \end{figure*}
By comparing the distributions with the model's performances, we can see that the only event type where the event-aware model is worse is '\textit{fire}', where the distribution of labels is closer to uniform. With such diverse distributions of labels across events, it is obvious that memorizing the label distribution of each event will be highly beneficial for the model, but this amounts to overfitting the model on the dataset.
\subsubsection*{Predicted Label Distribution}
In order to check whether conditioning over an event was distorting the predictions, we calculated the Kullback-Leibler divergences between the test set distribution $\mathcal{D}_{t}$, the distributions of predictions over the test set conditioned by an event $E$, $\mathcal{D}_{t}(E)$, and the event's label distributions $\mathcal{D}_{e}(E)$. We found that, when conditioned over an event, the model predicts, on average over the test set, a distribution that is closer to its respective event distribution (0.62) than to the test set distribution (0.69) (Eq. \ref{eq:kl}). This seems to confirm the impact of the event type token in overfitting some dataset particularities.
\begin{equation}
\sum_{E}KL(\mathcal{D}_{e}(E)||\mathcal{D}_{t}(E)) < \sum_{E}KL(\mathcal{D}_{t}||\mathcal{D}_{t}(E)) \label{eq:kl}
\end{equation}
\normalsize
\subsubsection*{Leave One Event Type Out}
We ran a LOETO cross-validation in addition to the experiments using the official partition of the dataset. This setting allows the comparison of results in a situation where the event-aware model is not able to infer the label distribution of the event type.
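Each term in Eq.~\ref{eq:kl} is a discrete KL divergence between label distributions; a minimal sketch with toy distributions (not the actual dataset values):

```python
import math

def kl_divergence(p, q):
    """Discrete Kullback-Leibler divergence KL(p || q), natural log.
    p and q are lists of probabilities over the same label set."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

uniform = [0.5, 0.5]
skewed = [0.9, 0.1]  # e.g. an event dominated by one label
print(kl_divergence(uniform, skewed))  # note that KL is asymmetric
```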
This method necessarily obtains worse results than when testing on samples from an event type that was used during learning; the goal is to compare the Vanilla BERT and the event-aware BERT. If the event-aware model reaches higher performances than the Vanilla model, then adding textual metadata has a positive impact on the performances.
\begin{table}[] \centering \caption{Results of the BERT model on LOETO} \begin{tabular}{l|lll|l} Model type & Prec & Rec & F1 & Acc \\ \hline \hline Vanilla & 40.0 & 54.9 & 44.1 & 65.4 \\ Event-aware & 47.0 & 55.2 & 45.2 & \textbf{67.6} \\ \end{tabular} \label{tab:loete} \end{table}
\begin{table}[] \centering \caption{Accuracies (differences with Vanilla) for each event type of the event-aware BERT on the humanitarian classification task, for the official partition and LOETO} \begin{tabular}{llllllll} \multicolumn{1}{l|}{Partition} & None & Bombing & Collapse & Crash & Disease & Earthquake & Explosion \\ \hline \hline \multicolumn{1}{l|}{Official} & 91.2 (\vertval{1.2}) & 96.7 (\vertval{0.4}) & 88.8 (0.0) & 89.3 (\vertval{1.1}) & 98.6 (\vertval{2.9}) & 77.0 (\vertval{1.2}) & 96.6 (\vertval{0.3}) \\ \multicolumn{1}{l|}{LOETO} & 34.3 (\vertval{5.0}) & 89.7 (\rougeval{-4.3}) & 44.1 (\vertval{19.7}) & 81.5 (\rougeval{-0.3}) & 59.4 (\rougeval{-11.3}) & 49.4 (\rougeval{-1.6}) & 93.1 (\vertval{1.4}) \\ & & & & & & & \\ Fire & Flood & Hazard & Hurricane & Landslide & Shooting & Volcano & \textbf{Average} \\ \hline \hline 81.5 (\rougeval{-1.2}) & 90.7 (\vertval{0.7}) & 52.8 (0.0) & 88.0 (\vertval{0.6}) & 100 (\vertval{1.6}) & 87.5 (0.0) & 97.1 (0.0) & 88.3 (\vertval{0.8}) \\ 67.6 (\rougeval{-4.2}) & 85.3 (\vertval{1.7}) & 49.8 (\vertval{1.4}) & 71.7 (\vertval{5.0}) & 92.6 (\rougeval{-0.6}) & 77.8 (\vertval{7.1}) & 72.0 (\rougeval{-2.8}) & 67.6 (\vertval{2.2}) \\ \end{tabular} \label{tab:per_event} \end{table}
The results shown in Table \ref{tab:loete} are clear: adding the event type also improves the results when the system is faced with a new
event it had not seen before. Table \ref{tab:per_event} shows that the event-aware model does not perform homogeneously across event types. \subsubsection*{Attention weights} In order to understand how the event type influences the decision, we look at the interaction mechanism between the event type and the tweet by studying the attention weights of the BERT model. To avoid any model overfitting on specific words corresponding to specific events, we ran this study in a LOETO configuration, so that the model has never seen the event type token before and cannot make a correlation between this token and words appearing for this event type. For each event, we counted the number of times every token from the tweet was linked to the event token with an attention weight greater than an arbitrary threshold of 0.5. We discarded the weights between the punctuation and the stop-word\footnote{from the NLTK toolbox} tokens. Then we took the 50 tokens with the highest tf-idf, extracted their embeddings with a BERT model, and performed clustering. A visualization of the words and clusters for \textit{hurricane} is shown in Figure \ref{fig:cloud}. We found that the tokens interacting directly with the event type were related to the type of disaster\footnote{\textit{hurricane}, \textit{cyclone}, \textit{storm}, \textit{tornado}, \textit{typhoon}...}, proper names\footnote{\textit{Irma}, \textit{Sandy}, \textit{Harvey}, \textit{Vanuatu},…}, and the classes of the task.\footnote{sympathy, material damages, human damages, warnings, evacuations,...} This means that even for an event type not contained in the training set, the model is capable of using the semantics of this event in order to better infer the class of the tweet. \begin{figure} \centering \includegraphics[width=.8\textwidth]{images/hurricane_2_clusters_italique.png} \caption{Tokens interacting the most with the event type '\textit{hurricane}'.
Clusters of the top-50 tokens.} \label{fig:cloud} \end{figure} \section{Conclusion} In this article, we studied the effect of adding textual metadata information to three transformer models, using the classical ways of separating different sentences in an input. We ran experiments on a Humanitarian Computing dataset, adding to each tweet its respective event type\addval{, which is contextual information always available,} in order to better classify it into 11 classes. We discovered that this method improves the results,\addval{ even when the event type has never been seen during the training phase. It} can be applied to different transformers, and we obtained a new state of the art on this dataset. Finally, we carried out an analysis of the dataset and of the event-aware BERT model's weights and behavior in order to shed light on the reasons for this increase in performance. We show that the event-aware model is not only memorizing the unbalanced label distribution of each event type, but also learning semantic relations between the text and the event type token. We discovered that the tokens locally interacting with the event token were related to the lexical fields of the event type and the classes of the classification task. \newpage
\section{Introduction} The reliability of modern applications, running either on the cloud or on mobile devices, is paramount for their success. Despite heavy investment in software quality processes, including testing, static analysis, and code reviews, bugs are still propagated to production-level systems, hampering user experience. Bugs may manifest as application crashes, whose triaging, root causing and fixing demand strong expertise on how the application is structured and how it works. As such crashes happen in environments not fully controlled by the application developers, debugging engineers often can only rely on telemetry to root cause the crashes. The problem we focus on is the debugging of Facebook iOS application crashes that happen as a result of the application running out of memory (OOM). An OOM crash happens when the Facebook mobile application consumes memory above a certain specified threshold allocated by the operating system, which then causes the operating system to kill the application. These issues, while usually relatively infrequent, can occasionally affect a significant portion of the user base as a result of buggy code or configuration changes. In some cases, a code change causes a new type of OOM crash, which we term an emerging crash. Particularly when these crashes occur in popular portions of the application, engineers wish to debug them as quickly as possible; every hour saved during the debugging process can result in significantly fewer user crashes. Compared to other types of crashes, OOM crashes do not usually produce actionable stack traces, which would normally help developers localize and quickly debug the crash. Consequently, OOM crashes are some of the hardest to debug. To alleviate the lack of signal, the Facebook suite of mobile applications contains telemetry modules that provide useful information collected from applications before they crash.
Telemetry works by sampling internal application object metrics at regular intervals, or at developer-specified points in the application lifetime. When a crash is detected and the user gives permission, those samples are uploaded to Facebook's internal systems for further analysis. Debug engineers can then stitch together individual samples from each mobile application to form a \emph{multivariate timeseries} per session, representing its object allocation counts and overall memory leading up to the time of the crash. Currently, engineers tasked with debugging OOM crashes manually comb through dozens to hundreds of crashing sessions, for each session examining a visual plot of the object metric allocation counts over time. This data is hard to comprehend, as each session can have hundreds of object allocation timeseries with distinct behavior. With luck, engineers are able to rely on previous knowledge and heuristics to notice correlations between when an application hits the maximum memory limit and anomalous behavior in one or a few object allocations across many different sessions. In turn, such correlations may give hints as to the root cause of the OOM error. However, combing through hundreds of crashes containing hundreds of allocation timeseries in order to spot such patterns is taxing for engineers, which is why OOM crashes can take a long time to root cause. In this work, we present \kaboom{}, a method and a corresponding implementation to automatically cluster application crashes using timeseries fingerprinting. The inputs to the \kaboom{} model are multivariate timeseries, where each univariate timeseries is the count of a particular object's allocations over a crashing session's lifetime. \kaboom{} uses an autoencoder model to embed the timeseries into a clustering space, where similar crash embeddings are close to each other and unrelated crash embeddings are far away from each other.
For training, \kaboom{} assumes that a relatively small (currently, under 20) number of different crash types exist in the input space. To ensure maximum cluster separability, it conditions its encoder on crash types found in early samples immediately after deployment of a new version. In production, the first output of \kaboom{} is the cluster to which an input session belongs. The second output is a list of likely ``important'' object metrics per cluster, which is obtained by using our model's cluster label assignments to run object-level comparisons between the clusters. We apply the \kaboom{} model on a real-world use case, specifically i{\sc os}{} {\sc oom} crashes from the Facebook application over the course of five weeks. Since \kaboom{} is an unsupervised model for a novel application, our evaluation is constrained by the lack of both ground truth data and competing solutions. We thus proceed to demonstrate the effectiveness of \kaboom{} through a series of analyses of its ability to capture important aspects of the problem at hand. Initially, we compare the clustering ability of different embedding model configurations, with and without the clustering module. Then, we evaluate \kaboom{}'s ability to identify emerging types of crashes, i.e., crash types that are new between successive application versions or successive time periods. Finally, we demonstrate how \kaboom{} can help engineers in root causing {\sc oom}{} crashes. This work makes the following concrete contributions: \begin{itemize} \item The concept of \emph{timeseries fingerprinting} for analyzing crashing application sessions. \item \kaboom{}, an end-to-end unsupervised pipeline that uses timeseries fingerprinting to cluster crashes. \item A method for explaining assignments of incoming application traces to particular clusters.
\end{itemize} \begin{figure*}[t] \centering \includegraphics[width=1\textwidth]{overall_flow.pdf} \caption{System Diagram \textmd{KabOOM takes in crashing traces. The embedding model (the encoder portion of an autoencoder) is the part of the model that is trained, and learns to embed OOM traces. \kaboom{} then runs K-Means clustering on embeddings of a validation dataset, which assigns each crash to one of $k$ (user-chosen) clusters. Finally, \kaboom{} runs cluster contrastive comparison between the $k$ generated cluster labels and outputs important memory objects per cluster.}} \label{fig:overall} \end{figure*} \section{Clustering Crash Sessions} As applications evolve, so do their failures. New application versions introduce new paths to failure, and it is precisely those new paths that are interesting for engineers who debug crashes. From a quality perspective, it is important that new crashes which have a significant impact on application stability or are experienced by many users are caught as early as possible. We therefore model crash categorization as an unsupervised learning problem, as we do not know a priori the crash types or their number. In a nutshell, our approach, \kaboom{}, clusters similar crashing application sessions based on object allocation timeseries and, per cluster, surfaces those allocation timeseries that led to the crash represented by a cluster. \kaboom{} works in three phases: \begin{itemize} \item In the \emph{training} phase, it learns a model to embed (fingerprint) multivariate timeseries. Assuming a single platform (e.g., i{\sc os}{}), and to account for changes in application behaviours due to updates, \kaboom{} models need to be trained and deployed for each new application version. \item In the \emph{calibration} phase, \kaboom{} uses the trained models to process a sufficiently large sample of current traces in order to instantiate representative crash clusters (triggered nightly).
\item In the \emph{production} phase, it continuously processes incoming traces and assigns them to the pre-instantiated clusters. \end{itemize} \subsection{Input data} \label{sec:inputdata} An incoming application trace is a $t \times m$ matrix, where $t$ represents the sampling time and $m$ represents the available metrics. Depending on how long an application had been running prior to the crash, $m$ and $t$ can be in the order of 100s or 1000s of metrics and samples, respectively. It is important to note that as traces are a result of user activity, not all traces have the same metrics. We therefore introduce a homogenization step that removes all timeseries that are not present in more than a configurable threshold of traces. In case a timeseries is missing from a trace, we add it as a zero-filled vector. Moreover, we trim all timeseries to a configurable number of timesteps, starting from the most recent measurement; we assume that the root cause of a crash probably manifests a few timesteps before the crash happened. If an object is present but for fewer than $t$ timesteps, we zero-pad the readings from the left of the matrix, i.e., the oldest readings. Finally, for each input trace, all object timeseries are individually scaled to the feature range $[0,1]$. \begin{figure*} \subfigure[Stacking Autoencoder] { \includegraphics[scale=0.45]{ae.pdf} } \unskip\ \vrule\ \subfigure[Stacking Variational Autoencoder] { \includegraphics[scale=0.45]{vae.pdf} } \caption{Architecture of the \kaboom{} autoencoders. Lengths are indicative, but proportionally correct. In both cases, the encoder part is further conditioned by a cluster learning module.} \label{fig:model} \end{figure*} \subsection{Embedding Model - Training} Clustering on high-dimensional data is impractical. First, the data is sparse, which makes identification of appropriate clusters difficult.
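As an illustration (not the production implementation), the trimming, padding, and scaling steps of Section~\ref{sec:inputdata} can be sketched in Python; the sizes below are illustrative:

```python
import numpy as np

def preprocess_trace(trace, t=100):
    """Trim/left-pad a (timesteps x metrics) trace to its last t steps and
    min-max scale each metric's timeseries to [0, 1]."""
    n, m = trace.shape
    if n >= t:
        out = trace[-t:].astype(float)      # keep the most recent t samples
    else:
        out = np.zeros((t, m))              # zero-pad the oldest readings
        out[t - n:] = trace
    lo, hi = out.min(axis=0), out.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)  # avoid division by zero for flat series
    return (out - lo) / span

trace = np.arange(12, dtype=float).reshape(6, 2)  # 6 timesteps, 2 metrics
x = preprocess_trace(trace, t=4)
assert x.shape == (4, 2) and x.min() == 0.0 and x.max() == 1.0
```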
The goal of the embedding step is therefore to reduce the input dimensionality to a very compact representation, while maintaining enough information for good cluster separation. To do so, the input trace is fed into an autoencoder. Depending on how the data is fed to the model, we can identify two types of models: \begin{itemize} \item Stacking models: Stacking models stack all $m$ dimension vectors (columns) on top of each other. The input to the model is a one-dimensional vector whose length is $t \times m$. Effectively, stacking models process the whole trace in one step. \item Window models: Window models slide a window of length $s$ along the $t$ dimension, resulting in $t - s$ inputs of size $s \times m$. Window models need multiple steps to process a trace, but can process longer traces. \end{itemize} \begin{table}[t] \caption{Typical sizes in the \kaboom{} setting} \label{tab:sizes} \begin{tabular}{llr} \toprule \multicolumn{3}{l}{\emph{Input data}} \\ & Timesteps & 1,000 -- 10,000 \\ & Metrics & 2,500 -- 3,500 \\ & Timesteps (after reduction -- $t$) & 100 \\ & Metrics (after reduction -- $m$) & 600 -- 700 \\ & \# training samples & 5,000 \\ \midrule \multicolumn{3}{l}{\emph{Stacking Model}} \\ & Input length & $t \times m$ \\ & AE $z$ length & $\dfrac{t \times m}{64}$ \\ & VAE $z$ length & 256 \\ \bottomrule \end{tabular} \end{table} After experimentation with both types of models, we decided to use stacking models exclusively, as they are both faster to train and lead to better cluster separation for our main task. Our description therefore focuses on stacking models. \paragraph{Basic Autoencoder} Autoencoders (AEs) are a class of unsupervised representation learning models developed in deep learning. Generally, autoencoder models learn compact representations of supplied input examples by reconstructing them in a much smaller dimensional space compared to the input space.
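The two input layouts can be illustrated with numpy (shapes are illustrative, not the production values):

```python
import numpy as np

t, m, s = 100, 6, 10
trace = np.random.rand(t, m)  # one preprocessed (timesteps x metrics) trace

# Stacking model input: all m columns stacked into one vector of length t*m.
stacked = trace.T.reshape(-1)
assert stacked.shape == (t * m,)

# Window model inputs: a window of length s slid along the time axis,
# yielding t - s inputs of shape (s, m).
windows = np.stack([trace[i:i + s] for i in range(t - s)])
assert windows.shape == (t - s, s, m)
```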
Autoencoders comprise two separate networks: the encoder network takes inputs $x$ from some original input space and maps them to a (usually lower-dimensional) latent space specified by the researcher, to produce $z = f_{\theta_f}(x)$. The decoder network then takes $z$ as input and maps that back to the original input space, to produce $\hat{x} = g_{\theta_g}(z)$, where $\hat{x}$ is hopefully similar to $x$. Choices for the structure of the networks represented by $f$ and $g$ vary, with the simplest choices being traditional feed-forward neural networks, and more complex networks employing recurrent and convolutional layers, as well as any other techniques seen across deep learning models. Both the encoder and decoder networks are trained using stochastic gradient descent to minimize some notion of distance between $\hat{x}$ and $x$, commonly the mean squared error (MSE) between each input $x$ and its reconstructed output $\hat{x}$, where MSE is calculated as~\cite{bengio2013representation, goodfellow2016deep}: \[ MSE=\frac{1}{n}\sum_{i=1}^n(x_i-\hat{x_i})^2 \] \kaboom{}'s stacking AE is a straightforward fully-connected AE. It comprises 2 fully connected layers (see Figure \ref{fig:model}). The input and the output layer have $t \times m$ dimensions. The input dimensions are first progressively reduced until the embedding layer $z$ and then expanded. \paragraph{Variational Autoencoder} In the \kaboom{} setting, simple stacking AEs face a specific problem: the input dimensionality is too high ($\sim$70,000 dimensions) to effectively compress with 1 - 2 intermediate layers. As the data is very sparse, more layers give the AE the opportunity to overfit and make the model too big to fit on a GPU with reasonable VRAM. Also, as the $z$ dimension is a function of the input size, the resulting embeddings are still too long for efficient clustering at scale. For those reasons, we also introduced VAEs as an alternative representation learning technique. VAEs, proposed by Kingma et al.~\cite{kingma2014auto}, combine variational Bayesian inference methods with the autoencoder architecture to better capture stochasticity within input data. As opposed to learning a static mapping from the input space to the latent space, VAEs learn a probabilistic distribution over the training data. Underlying VAEs is the assumption that the input data are distributed according to a known prior distribution; for continuous data, the Gaussian distribution is often assumed by default. The encoder portion of a VAE, often denoted $q(\cdot)$ in the literature, approximates $q_\theta(z|x) \approx P(z | x)$. The decoder portion of a VAE, often denoted $p(\cdot)$ in the literature, approximates $p_\phi(x|z) \approx P(x | z)$. In this notation, $\theta$ and $\phi$ parameterize the encoder and decoder respectively, and can be thought of as the neural network weights and biases which training optimizes over. $P(x | z)$ is usually assumed to also take on a normal form, and is sampled from a normal distribution parameterized by $z$. VAEs are trained by minimizing the negative evidence lower bound (ELBO). When the prior distribution is assumed to be Gaussian, the negative ELBO can be calculated as follows: \[ -ELBO(\phi, \theta) = - \mathbb{E}[\text{log } p_\phi(x|z)] + D_{\text{KL}}(q_\theta(z|x) || P(z)) \] Intuitively, the first term represents how likely the observed data is given our choice of latent variable $z$, and the second term penalizes our model if its choice for the latent distribution ($q_\theta(z|x)$) strays too far from our assumed functional prior ($P(z)$), where distance here is measured by the Kullback–Leibler divergence between the two distributions. Just like traditional autoencoders, VAEs use stochastic gradient descent to minimize this loss during training. \kaboom{}'s VAE is shallow: the input layer downscales the input by a factor of 8 and then feeds two vectors which, when combined, learn the parameters for a Gaussian approximation of the input distribution.
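As an illustration (not the production model), the negative ELBO for a diagonal-Gaussian posterior and a standard normal prior can be sketched with its closed-form KL term; values below are illustrative:

```python
import numpy as np

def gaussian_kl(mu, log_var):
    """Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) )."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def neg_elbo(x, x_hat, mu, log_var):
    """Negative ELBO: reconstruction term plus KL regularizer.
    With a Gaussian likelihood, -E[log p(x|z)] reduces to a squared
    error up to additive/multiplicative constants."""
    recon = 0.5 * np.sum((x - x_hat) ** 2)
    return recon + gaussian_kl(mu, log_var)

# A perfect reconstruction with a posterior equal to the prior has zero loss.
x = np.array([0.2, 0.8, 0.5])
assert neg_elbo(x, x, np.zeros(2), np.zeros(2)) == 0.0
```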
We chose a Gaussian prior to simplify the calculation of the ELBO loss, using the reparametrization trick as described in~\cite{kingma2014auto}. The variational nature of the VAE model enables us to compress the input to a high degree, keeping similar inputs close in the embedding space; in turn, clustering on those should produce better cluster separation. \subsection{Clustering Model - Training} In addition to the embedding models described above, \kaboom{} uses an additional model to fine-tune the encoders, so that the generated embeddings are more amenable to clustering. The way that \kaboom{} improves the embeddings is based on the Deep Embedded Clustering (DEC) model~\cite{Xie2016DEC}. DEC expects an encoder model and an initial set of cluster centroids (which also implies that the number of clusters $k$ must be fixed before training the model). DEC then proceeds as follows. For every iteration, DEC calculates the similarity (which we call $Q_{ij}$) of each embedded point $z_i$ (crashing session in our case) to each of the provided cluster centroids $\mu_j$, using a Student's $t$-distribution kernel. This quantity represents how well the model-produced embeddings fit with the currently chosen cluster centroids. Then, DEC calculates a chosen ``target distribution'' (which we call $P_{ij}$), which encodes how close we want each point $z_i$ to be to a cluster centroid $\mu_j$, and represents what we want embeddings to look like. The choice of the distribution $P$ is crucial and intuitively should accomplish the following two things: have each point be close to a centroid, and have clusters be relatively similar in size. DEC's objective is to minimize the Kullback–Leibler divergence between the distributions $Q$ and $P$, i.e., $KL(P\|Q)=\sum_i\sum_j P_{ij}\log\frac{P_{ij}}{Q_{ij}}$. Backpropagating the KL loss updates to both the cluster centroids and the encoder concludes one iteration of the algorithm.
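The soft assignment $Q$, target distribution $P$, and KL loss of one DEC iteration can be sketched as follows (a numpy illustration with toy embeddings, not the production implementation):

```python
import numpy as np

def soft_assign(z, mu, alpha=1.0):
    """Q_ij: Student's t-kernel similarity of embedding z_i to centroid mu_j."""
    d2 = ((z[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
    q = (1.0 + d2 / alpha) ** (-(alpha + 1.0) / 2.0)
    return q / q.sum(axis=1, keepdims=True)

def target_distribution(q):
    """P_ij: sharpen Q while normalizing by per-cluster frequency."""
    w = q**2 / q.sum(axis=0)
    return w / w.sum(axis=1, keepdims=True)

z = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])  # 3 toy embeddings
mu = np.array([[0.0, 0.0], [5.0, 5.0]])             # 2 centroids
q = soft_assign(z, mu)
p = target_distribution(q)
kl = np.sum(p * np.log(p / q))  # the loss that is backpropagated
assert np.allclose(q.sum(1), 1) and np.allclose(p.sum(1), 1) and kl >= 0
```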
Effectively, DEC simultaneously conditions the encoder to produce embeddings that are easier to cluster, and moves the centroids of each cluster to produce better clusters. The cluster centroids calculated by DEC can then be used to initialize K-Means. The K-Means algorithm, and hence the DEC model, requires the number of clusters $k$ to which to assign incoming data points. A heuristic method to select a $k$ appropriate for a given dataset is the so-called ``elbow'' method: K-Means is run for a range of values of $k$, from which we select the one at which a metric (usually the Silhouette coefficient, see~\Cref{sec:metrics}) stops improving at a significant rate. In our case, we cannot apply the elbow method on the non-embedded datapoints as this would be computationally expensive, but we still need to provide a $k$ for calculating the initial cluster centroids we provide as input to DEC. To solve this problem, we set this initial $k$ to a high number (currently 20), so we effectively train DEC to learn at least $k$ different cluster centroids. \subsection{Initializing Clusters - Calibration} During the calibration phase, after the neural networks have been trained with both the autoencoder/variational autoencoder reconstruction loss and the DEC clustering loss, we fix their weights. We then initialize clusters against which we can compare production requests (i.e., live crash data). To determine the number of clusters our model will be using, we employ the elbow method: we run the K-Means clustering algorithm on the embeddings of a held-out validation dataset generated using our AE model and compare clustering metrics across different choices of cluster numbers to find an optimal $k'$ for the particular deployment (see Figure \ref{fig:elbow}). Note that usually $k > k'$, where $k$ is fixed (usually at 20) during neural network training time, as mentioned previously.
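The Silhouette coefficient underlying the elbow analysis can be computed from scratch as follows (a self-contained numpy sketch on toy data; in practice standard library implementations are used):

```python
import numpy as np

def silhouette(X, labels):
    """Mean silhouette coefficient: (b - a) / max(a, b) per point, where a
    is the mean intra-cluster distance and b the mean distance to the
    closest other cluster."""
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    scores = []
    for i, li in enumerate(labels):
        same = labels == li
        if same.sum() < 2:
            scores.append(0.0)
            continue
        a = D[i, same].sum() / (same.sum() - 1)        # exclude the point itself
        b = min(D[i, labels == lj].mean() for lj in set(labels) if lj != li)
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

# Two well-separated blobs: the correct split scores higher than a random one.
X = np.vstack([np.zeros((5, 2)), np.full((5, 2), 10.0)])
good = np.array([0] * 5 + [1] * 5)
bad = np.array([0, 1] * 5)
assert silhouette(X, good) > silhouette(X, bad)
```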
\subsection{Explaining cluster assignments - Production} There is no straightforward way to provide explanations for our model's cluster assignment decisions, a problem commonly seen across many deep learning approaches. We investigated providing interpretations for the clusters by treating our VAE model as a black box and applying methods accordingly. The simplest way we examined the resulting clusters was to run pattern mining techniques across the data from different clusters. We did this by taking each cluster in turn and comparing the presence of certain features in that cluster versus all other clusters combined. We examined three types of features for all the clusters: the presence of objects (i.e., whether the objects had any non-zero readings in a cluster), the absence of objects, and the average values of objects across the clusters. \subsection{Deployment - Production} A new model is trained and deployed for each new application version, to account for internal changes in the application code. The deployment of a \kaboom{} model for a particular version is delayed until enough data for that version has been collected. After deployment for a particular app version, \kaboom{} accepts a crashing session and outputs a label indicating the cluster the crash belongs to. Those labels are integrated in a crash analysis tool that helps engineers query crashes according to those labels. \section{Evaluation} The problem \kaboom{} is trying to solve lies in the field of unsupervised learning. We can know neither whether the clusters that \kaboom{} proposes are optimal for a given dataset, nor whether placing a particular crash in a particular cluster is correct. Therefore, to evaluate \kaboom{}, we cannot rely on existing labeled data, as none exists. Moreover, the focus of our work is not to develop a novel multivariate timeseries clustering method, but to provide a working solution for our debugging engineers.
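The presence-based comparison used for explaining cluster assignments can be sketched as follows (object names and values are illustrative, not real telemetry):

```python
import numpy as np

def presence_report(presence, labels, cluster, names):
    """Rank objects by how much more often they have any non-zero reading
    in `cluster` than in all other clusters combined."""
    mask = labels == cluster
    in_pct = presence[mask].mean(axis=0)    # presence % inside the cluster
    out_pct = presence[~mask].mean(axis=0)  # presence % in the other clusters
    order = np.argsort(out_pct - in_pct)    # most over-represented first
    return [(names[j], in_pct[j], out_pct[j]) for j in order]

# Toy data: 6 sessions x 3 objects; "ObjA" appears only in cluster 0.
presence = np.array([[1, 1, 0], [1, 0, 1], [1, 1, 1],
                     [0, 1, 0], [0, 0, 1], [0, 1, 1]])
labels = np.array([0, 0, 0, 1, 1, 1])
report = presence_report(presence, labels, cluster=0,
                         names=["ObjA", "ObjB", "ObjC"])
assert report[0][0] == "ObjA"  # ObjA is the most distinguishing object
```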
We therefore only evaluate \kaboom{} with variations of itself and refrain from performing a full-blown evaluation with competing timeseries clustering models.\footnote{Note that our literature review did not reveal any method for clustering multivariate timeseries.} Our evaluation strategy is therefore mostly qualitative in nature. Our evaluation is guided by the following research questions: \begin{itemize} \item \textbf{RQ1} Which model best separates crash clusters? \item \textbf{RQ2} Can \kaboom{} identify emerging types of crashes? \item \textbf{RQ3} How can \kaboom{} explain crash clusters? \end{itemize} Our evaluation is performed on a machine with 56 vCPUs, 256GB RAM and 2 Nvidia Tesla P100-SXM2 GPUs with 16GB VRAM. \subsection{Dataset} Our dataset includes {\sc oom}{} error related crashes from the Facebook i{\sc os}{} application collected over a period of 3 weeks. The crashes correspond to a single major application version. From this period we sample over 2,500 crashes.\footnote{We cannot reveal the exact number of crashes due to company policy.} Each crashing session contains object allocation counts at different points in time, with sizes on the order of several hundred timeseries. The original data is split into two sets: a training set ($\approx$2,500 crashes) and a hold-out validation set ($\approx$200 crashes). We use the training set to train the embedding and clustering models, and the validation set for evaluation. Within a major version, the Facebook i{\sc os}{} application receives frequent updates, which may introduce new types of crashes. During one of the three weeks from which we sampled the training data, a memory related bug caused a set of crashes, which was quickly fixed. We denote crashes related to this bug as emerging regression traces.
Since the \kaboom{} deployment servicing the release had not been trained to detect this particular crash, it should have difficulty assigning those crashes to its existing clusters. Consequently, emerging regression traces help us test whether our model can flag emerging types of crashes. Finally, we collect crashing sessions from a period of 2 weeks right after the train set (and subsequently the regression traces), which we call ``later crashes''. These sessions serve as a comparison point against which we can assess how well our model differentiates regression data from ``normal'' looking data, and whether there are natural drifts in the timeseries data over time or whether the data is relatively stationary. All sessions are trimmed to 100 timesteps and 592 object timeseries, using the process outlined in \Cref{sec:inputdata}. \subsection{Metrics} \label{sec:metrics} As we are tackling an unsupervised task with no ground truth labels, we resort to similarity and participation based metrics. Specifically, for each clustering assignment, we compute the following metrics: \begin{description} \item[Silhouette Coefficient] measures the mean distance of a single data point to all other points in the cluster and compares it to the mean distance of the same point to all items of the closest cluster~\cite{rousseeuw1987silhouettes}. It ranges from -1 (incorrect clustering) to 1 (very dense clustering). \item[Calinski-Harabasz Index] measures the ratio of the sum of between-clusters dispersion and of within-cluster dispersion~\cite{calinski1974dendrite}. A higher value indicates well-separated clusters. \item[Davies-Bouldin Index] is based on the ratio between within-cluster and between-cluster distances~\cite{davies1979cluster}. If two clusters are close together, but have a large spread, then this ratio will be large, indicating that these clusters are not very distinct. A low value indicates a better clustering.
The Davies-Bouldin metric can be used in addition to the Silhouette Coefficient when performing elbow analysis. \end{description} It should be noted that all metrics above are known not to perform well in certain scenarios involving particular data distributions or clustering algorithms~\cite{halkidi2001clustering}. Consequently, their interpretation is tied to the dataset under evaluation, so they cannot be used to compare approaches across datasets. We accompany the analysis of all metrics above with visual inspection of t-{\sc sne}{} plots. t-{\sc sne}{} is a dimensionality reduction technique~\cite{van2008visualizing} whose core property is strong separation of unrelated data points in a (usually) two-dimensional space. \subsection{Method} To measure which of the \kaboom{} model variants creates the best cluster assignments (\textbf{RQ1}), we train them using default settings on our training set. Specifically, we test 4 variants: i) AE, ii) VAE, iii) AE + DEC, and iv) VAE + DEC. We fix the number of epochs to 1000 to ensure a reasonable training time (less than 40 minutes in all cases, using a single GPU) and set the embedding size $z$ to 256 for VAE models and 925 for AE ones. We train each of the four proposed models in turn and then embed the held-out validation dataset using the trained models. We then directly compare cluster metrics and the t-{\sc sne}{} visualizations. To determine whether \kaboom{} can identify emerging types of crashes (\textbf{RQ2}), we select the best-performing model, including the optimal number of clusters, and use it to embed the emerging crashes dataset. We compare the results to the data in the validation set, using the same t-{\sc sne}{} plot.
If there is a noticeable difference in the visualization, i.e., t-{\sc sne}{} plots the emerging crashes away from existing crash clusters, we can conclude that \kaboom{} embeddings are useful for distinguishing ``normal'' crashes (which engineers already know of) from new ``abnormal'' crashes (which engineers should investigate). Finally, we examine the output of running various tools on the output of the clusters, as detailed in Section~\ref{explainability} (\textbf{RQ3}). It is rather hard to determine whether cluster explanations generated by tools make sense, even for those with significant domain knowledge about t-{\sc sne}{}, {\sc oom}{}s and the Facebook i{\sc os}{} application code. We use some prior knowledge about the Facebook i{\sc os}{} application code to assess how reasonable the generated explanations are. \section{Results} \subsection{RQ1: Cluster representation learning} \begin{figure} \centering \includegraphics[width=0.48\textwidth]{cluster-score-analysis.pdf} \caption{Elbow Analysis. \textmd{Plot of silhouette score and Davies-Bouldin Index for different numbers of clusters supplied to K-Means for the iOS OOM dataset. }} \label{fig:elbow} \end{figure} Before performing clustering with an algorithm like K-Means, we need to determine the optimal number of clusters through elbow analysis. The results for our i{\sc os}{} dataset can be seen in~\Cref{fig:elbow}. We can immediately observe that the two variations of the models that were trained with the cluster learning add-on (DEC) have significantly higher Silhouette scores and significantly lower Davies-Bouldin scores. This indicates that our strategy to condition our embedding generation for clustering has proved worthwhile.
It is interesting to observe that before the application of DEC, the VAE method would have to be discarded based on its terrible scores (we omitted the VAE line from the Davies-Bouldin plot for clarity --- it was consistently over 6); after applying DEC though, the VAE+DEC model is significantly better than all other variations. Both metrics are optimized across the board at about 10--12 clusters. For the second part of RQ1, we visually compare the embeddings generated by each clustering method (see~\Cref{fig:RQ1}). A good embedding should enable K-Means to put similar crashes together, which would be depicted on a t-{\sc sne}{} plot as a single colored ``bubble'' with no overlaps. Without DEC, the AE embeddings seem to work better than the VAE ones; we see in~\Cref{fig:RQ1}(a) that the AE embeddings form relatively consistent clusters, whereas the VAE ones, in~\Cref{fig:RQ1}(d), do not. The application of DEC makes the clusters tighter in both cases. The corresponding scores (as indicated at the top of the figures) are significantly in favour of the VAE+DEC solution, especially in the case of 11 clusters. This is despite the fact that the VAE embeddings have about 75\% fewer dimensions (256 vs.\ 925), which is desirable for computing those clusterings efficiently. This could be due to the fact that the DEC loss function conditions the parameters in the VAE's sampling distribution to produce embeddings much closer to the learned cluster centroids. It should be noted here that those results may be different in other datasets (e.g., other application versions), but in practice we have observed that the VAE+DEC solution performs the best for our use case.
\begin{figure*} \subfigure[AE (5 clusters)] { \includegraphics[width=.32\textwidth]{ae-c5-h925-e1001-322.pdf} } \subfigure[AE + DEC (5 clusters)] { \includegraphics[width=.32\textwidth]{ae-dec-c5-h925-e1001-322.pdf} } \subfigure[AE + DEC (11 clusters)] { \includegraphics[width=.32\textwidth]{ae-dec-c11-h925-e1001-322.pdf} } \subfigure[VAE] { \includegraphics[width=.32\textwidth]{vae-c5-h256-e1001-322.pdf} } \subfigure[VAE + DEC (5 clusters)] { \includegraphics[width=.32\textwidth]{vae-dec-c5-h256-e1001-322.pdf} } \subfigure[VAE + DEC (11 clusters)] { \includegraphics[width=.32\textwidth]{vae-dec-c11-h256-e1001-322.pdf} } \caption{Cluster separation for various versions of the \kaboom{} autoencoder.} \label{fig:RQ1} \end{figure*} \subsection{RQ2: Identifying emerging crash types} \label{RQ2} With this research question, we evaluate \kaboom{}'s ability to identify and flag new crash types. \Cref{fig:RQ2} presents the results of applying an already trained VAE+DEC model (a) to an application regression in the middle of the version's lifecycle that has been identified and fixed by engineers (b), and to a set of crashes late in the examined version's lifecycle (c). The initial model was trained with data from the first week of the application's lifecycle. Note that the K-Means was not re-run with the new regression-related data, so these sessions are not part of a cluster in the diagram. The \kaboom{} embedding model is able to produce embeddings that set the regression data apart. The distance of the regression sessions from the existing clusters is large in both the t-{\sc sne}{} space and in the original embedding space (not shown in the figure). Debug engineers determined that the root cause for this regression was related to a particular application view that was allocating memory incorrectly. \kaboom{} would have been able to flag this emerging regression simply by examining whether the distance of the embedding vector to existing cluster centroids is over a pre-determined threshold.
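The centroid-distance check described above can be sketched as follows; the centroids, points, and threshold value are toy illustrations, not \kaboom{}'s actual configuration.

```python
import numpy as np

def flag_emerging(embedding, centroids, threshold):
    """Return True if an embedded crash lies farther than `threshold`
    from every known cluster centroid, i.e. looks like a new crash type."""
    dists = np.linalg.norm(centroids - embedding, axis=1)
    return bool(dists.min() > threshold)

# Known centroids from a previously fitted K-Means model (toy values).
centroids = np.array([[0.0, 0.0], [5.0, 5.0]])
known = np.array([0.2, -0.1])   # close to the first centroid: not flagged
novel = np.array([10.0, -8.0])  # far from all centroids: flagged
```

In production, the threshold could be calibrated from the distribution of within-cluster distances observed on the training data.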
\Cref{fig:RQ2}(c) presents a t-{\sc sne}{} visualization of the original validation clusters, the aforementioned regression, and additional crashes collected over a month after the regression (but still in the same application version). The new data from this time period also lies far from both the validation data and the regression-related data. Intuitively, it is extremely likely (and indeed happened in this case) that various memory-related bugs have been fixed and thus our samples of new crashes should shift in distribution. From a practical perspective, this means that \kaboom{} should be periodically retrained on incoming data, especially since newer crashes are virtually indistinguishable from regression-related crashes. \begin{figure*} \subfigure[Initial clusters] { \includegraphics[width=.32\textwidth]{322.clusters.pdf} } \subfigure[Initial clusters and an emerging regression] { \includegraphics[width=.32\textwidth]{322_sev.pdf} } \subfigure[Initial clusters, the previous regression and newer crashes] { \includegraphics[width=.32\textwidth]{322_sev_pos_control.pdf} } \caption{\kaboom{} crash clustering for version 3xx of the Facebook i{\sc os}{} app. We observe that \kaboom{} can identify crashes it has not seen before by clustering them in different parts of the embedding space.} \label{fig:RQ2} \end{figure*} \subsection{RQ3: Explaining cluster membership} \label{explainability} \begin{table} \begin{tabular}{||c || c c || c c c||} \hline Cluster & Rank & Object Name & Presence \% & Other \% & F1 \\ \hline 1 & 1 & Obj A & 0.50 & 0.12 & 0.62 \\ & 2 & Obj B & 0.38 & 0.17 & 0.58 \\ & 3 & Obj C & 0.42 & 0.30 & 0.53 \\ 2 & 1 & Obj D & 0.72 & 0.08 & 0.64 \\ & 2 & Obj E & 0.41 & 0.17 & 0.59 \\ & 3 & Obj F & 0.42 & 0.53 & 0.53 \\ 3 & 1 & Obj G & 0.12 & 0.01 & 0.64 \\ & 2 & Obj H & 0.35 & 0.25 & 0.54\\ & 3 & Obj I & 0.12 & 0.13 & 0.49\\ \hline \end{tabular} \caption{Example output of presence-based cluster comparison of objects.
For each cluster, the top 3 objects are ranked by their F1 score.} \label{tab:explain} \end{table} To explain cluster assignments and provide actionable feedback, we add a post-processing step of comparing session contents between sessions in clusters. The most direct way of assessing object importance is comparing the frequency at which an object is present in the sessions with respect to clusters. We define an object to be present in a session if it has any non-zero readings throughout the session. We aggregate the results and rank the top several results per cluster. \Cref{tab:explain} is a depiction of how the results look when presented to the engineer. In \Cref{tab:explain}, for each object, given a cluster, we calculate the percentage of sessions in the cluster with the object present (presence \%). For the same object, we also calculate the percentage of sessions for which it is present across all other clusters combined (other \%). We then calculate an F1-score, given by $F = \frac{\text{presence \%}}{\text{presence \%} + \frac{1}{2}(\text{presence \%} + \text{other \%})}$, which we use to rank the most significant objects per cluster in descending order. In practice, we found that presence-based cluster comparison was a reasonable way of characterizing the important time series in a cluster, for two reasons. The first is that the top-ranked results per group are often related to each other in a clear way, often because they are allocated from the same code files/portions of the application. The second is that, when applied to the emerging crash data, as compared to the original five clusters, the top several objects revealed are all related to the root cause of the emerging crashes. Likewise, we run the same analysis for the lack of presence. We define an object to be not present in a session if it has all 0 readings for the session.
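The presence-based ranking described above can be sketched directly from its definition. The session and object names below are toy placeholders; the F-score is the one defined in the text.

```python
from collections import defaultdict

def rank_objects_by_presence(sessions, labels, top_k=3):
    """Rank objects within each cluster by F = p / (p + (p + o)/2),
    where p is the fraction of the cluster's sessions containing the
    object and o is the fraction of all other sessions containing it.
    `sessions` maps session id -> set of objects with non-zero readings,
    `labels` maps session id -> cluster id."""
    clusters = defaultdict(list)
    for sid, lab in labels.items():
        clusters[lab].append(sid)
    all_objects = set().union(*sessions.values())
    ranking = {}
    for lab, members in clusters.items():
        others = [sid for sid in sessions if sid not in members]
        scored = []
        for obj in all_objects:
            p = sum(obj in sessions[s] for s in members) / len(members)
            o = sum(obj in sessions[s] for s in others) / max(len(others), 1)
            f = p / (p + 0.5 * (p + o)) if p > 0 else 0.0
            scored.append((obj, f))
        scored.sort(key=lambda t: t[1], reverse=True)
        ranking[lab] = scored[:top_k]
    return ranking

# Toy example: ObjA characterizes cluster 0, ObjC characterizes cluster 1.
sessions = {
    "s1": {"ObjA", "ObjB"}, "s2": {"ObjA"},   # cluster 0
    "s3": {"ObjC"}, "s4": {"ObjC", "ObjB"},   # cluster 1
}
labels = {"s1": 0, "s2": 0, "s3": 1, "s4": 1}
ranking = rank_objects_by_presence(sessions, labels)
```

An object present in every session of its cluster and nowhere else (p = 1, o = 0) scores F = 2/3, the maximum attainable under this definition.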
The results for lack of presence do not look as conclusive as the straightforward presence comparisons, mainly due to the sparsity of the {\sc oom}{} data --- only a handful of objects are present in all $\approx200$ sessions in the validation set, and a significant number of objects are present in only one session. Thus, compared to presence comparison, lack of presence comparisons suffer from many more objects to consider and much more noise. We also analyze the average values of objects when they are present across the sessions. We first filter objects by a minimum presence threshold of 5\%, so that objects present in less than 5\% of a given cluster do not skew average results. Then, for each cluster, we compare the average value of each object against its average value in other clusters. We calculate the mean value of an object for a cluster as the mean value of the object over the timesteps it is present, across the sessions in which it is present in that cluster. Many of the average-value results are different from the presence-based analysis, but they are not particularly meaningful. The top-ranked results contain common objects that are shared across all sessions across clusters, and upon manual inspection did not relate very closely to any previous {\sc oom}{} issues. Finally, we also run ``mutation tests'' on the inputs. For every object in each cluster, we mutate its values as follows: if all values of an object are zero, we set the timeseries values to be the average value of the object across all 200 traces (where the average is calculated as described in the average value analysis above). If instead the object is present, then we set all values to 0 for that object in the session. We compare the model's clustering of this mutated session against the model's clustering of the non-mutated session, and if it changes, we consider the mutated object important to cluster assignment.
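The mutation test described above can be sketched as follows. The toy assignment rule stands in for \kaboom{}'s encode-then-nearest-centroid step, which is not reproduced here.

```python
import numpy as np

def mutation_importance(session, assign, global_means):
    """For each object's time series in a session (objects x timesteps),
    zero it out if present, or fill it with its global mean if absent,
    and report the objects whose mutation changes the cluster assignment.
    `assign` maps a session matrix to a cluster id."""
    base = assign(session)
    important = []
    for i in range(session.shape[0]):
        mutated = session.copy()
        if np.any(mutated[i] != 0):
            mutated[i] = 0.0                 # present -> zero it out
        else:
            mutated[i] = global_means[i]     # absent -> fill with global mean
        if assign(mutated) != base:
            important.append(i)
    return important

# Toy assignment rule: cluster 1 iff the first object is present.
def assign(m):
    return 1 if m[0].sum() > 0 else 0

session = np.array([[1.0, 1.0, 1.0],    # object 0: present
                    [0.0, 0.0, 0.0]])   # object 1: absent
global_means = np.array([0.5, 2.0])
important = mutation_importance(session, assign, global_means)
```

Here, zeroing out object 0 flips the assignment, so it is reported as important, while filling in object 1 does not.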
In practice, many of the objects that are important to certain clusters overlap with those deemed important by the presence-based analysis. Very rarely does a mutation to average values cause the model to change clusters, but very often the zeroing of objects leads to a change in cluster. Zeroing out objects intuitively should be related to object presence, since the act of zeroing out effectively changes an object from being present to not being present. In practice, lack-of-presence and average-value comparisons across clusters tend not to be helpful. Of the presence comparisons and mutation tests, results from mutation tests seem to be slightly less direct than presence-based comparison, so we prefer presence-based comparison and display only its results to our engineers. \section{Discussion and Future Work} In this section, we revisit our design decisions and propose paths to interesting future work. \paragraph{What is \kaboom{} good for?} While the \kaboom{} model proved useful in the {\sc oom}{} use case, its embedding model can be useful for several other use cases, including the following: \begin{itemize} \item Cluster participation counting to prioritize handling: when in production, \kaboom{} can help engineers prioritize their debugging efforts by simply counting how many crashes belong to each cluster. \item Check for crash types between versions: Engineers can use multiple versions of the \kaboom{} model to check whether bugs were fixed in newer versions or check whether known crash types were re-introduced. \item Real-time identification of emerging crashes: when a minor application version is deployed to clients, \kaboom{} can flag emerging regressions within a few hours. \kaboom{} can thus act as an effective alarm to quickly roll back a new release if several new application crashes occur. \item Non-{\sc oom}{} crashes: \kaboom{}'s current focus has been {\sc oom}{} crashes, due to an existing use case.
However, there is nothing that prevents \kaboom{} from being trained on other types of regressions, for example performance-related ones. \end{itemize} \paragraph{Input trace sizes} To tackle incoming data sizes, \kaboom{} is currently restricted to processing the last 100 timesteps before a crash. The actual root cause of a crash, however, may not manifest in the last 100 timesteps. For example, a misbehaving class might have allocated thousands of objects in the past, resulting in increasing memory pressure. While the class timeseries will be part of the processed trace, the model may be learning patterns relating to increasing values in other timeseries. \kaboom{} must come up with a compact 1-dimensional representation of large, sparse 2-dimensional matrices, while also exploiting the temporal properties of the input. To solve this problem, we have experimented with sliding a window of size $n$ along the input time $t$ axis (producing $t - n$ inputs), and training a recurrent autoencoder to reconstruct the last timestep of each window, to no avail. The recently introduced Interfusion model~\cite{li2021multivariate} attempts to solve this problem by coming up with two embeddings, one for the time axis and one for the metrics axis, using a sliding window over the input. Perhaps combining those two embeddings using a further recurrent layer may prove useful for \kaboom{}. \paragraph{Model architecture} There are various other architectural designs that may improve \kaboom{} performance. At the top of our list are real-versus-fake discrimination and attention mechanisms. The first extension adds a discrimination task while training the encoder portion of the network, as in references~\cite{ranjan2018fake, ma2019learning}. The discrimination task used by these approaches is similar to the discriminator module employed by generative adversarial networks.
The task would introduce ``fake'' examples generated by sampling non-crashing sessions, which are encoded through the encoder; a separate decoder network then predicts whether that input was a ``real'' sample from the dataset or a ``fake'' one. This approach could also be seen as a way to further regularize the hidden representation via multi-task learning~\cite{evgeniou2004regularized}. Though attention mechanisms rose to prominence for sequence-to-sequence tasks~\cite{vaswani2017attention}, different time series models have also begun incorporating attention mechanisms for forecasting~\cite{shih2019temporal, liang2018geoman, huang2019dsanet} and classification~\cite{song2018attend, karim2017lstm}. Beyond potentially improving the quality of representations learned, the addition of attention could provide more interpretability directly into the decisions the model is making, aiding the visual results that \kaboom{} presents to the debugging engineers. \paragraph{What is a good clustering?} \kaboom{} optimizes clustering for incoming data distributions, but how can we know that the resulting clustering is good for engineers? Before involving end users, one way to explore this is by checking intra-cluster consistency by analyzing the most popular objects within all sessions in a cluster. Our experience shows that presence-based cluster comparison can help both engineers and researchers working on the clustering model to gain confidence in the model's outputs. \paragraph{Efficient deployment} \kaboom{} models need to be re-trained and deployed periodically, as a result of object value drift as bugs are resolved in new application versions (discussed in \Cref{RQ2}). This results in overhead in training and maintaining models. More investigation is needed into how metrics drift over time as bugs are fixed and new code is pushed.
Depending on exactly how object time series drift over time, further research could focus on the feasibility of online model training as crash data is collected, which would eliminate the need for multiple models and save manual effort in retraining models. \section{Related Work} There has been relatively little work done in clustering multivariate timeseries to perform incident analysis. From a model point-of-view, there exists similar work that leverages neural networks to generate embeddings to detect anomalies and catch incidents before they happen or address them after they happen. From a problem point-of-view, there have been several works that employ time-series clustering for software incident analysis, but they rely on traditional notions of clustering. Recent research in incident anomaly detection has incorporated recurrent neural networks~\cite{islam2020anomaly} and convolutional neural networks~\cite{munir2018deepant}, achieving superior performance on problems with plentiful data and little prior knowledge compared to traditional methods of time series clustering. Additionally, modifications to these types of networks allow for the addition of stochastic units in the model to capture some of the randomness in input noise, with a common application being in variational auto-encoders. The most recent algorithms, such as OmniAnomaly~\cite{su2019robust} and Interfusion~\cite{li2021multivariate}, combine variational auto-encoder models with recurrent networks and convolutional networks to achieve state-of-the-art performance across a plethora of datasets, including a server machine dataset~\cite{su2019robust}. While several of these models also use neural networks to generate embeddings, because the embeddings are trained to optimize for the anomaly detection problem, simply appending a clustering step to those architectures is not enough to produce \kaboom{}-like results.
In particular, several of these models collapse their learned representations into a single, dense space, such that they would only produce one cluster across the whole dataset. For the relatively few works where timeseries clustering is used for incident analysis, clustering appears primarily as a prior step to anomaly detection. Both Li et al.~\cite{li2018robust} and Qian et al.~\cite{qian2020large} cluster key performance indicator (KPI) time series to identify common patterns, as input to their anomaly detection algorithms. Both papers explore correlation-based methods to compute distance between KPI time series, and then apply traditional clustering algorithms such as DBSCAN. We are not aware of any further research that directly applies clustering for service monitoring or regression detection to directly learn and reason about the underlying data. Thus, \kaboom{} is the first work to combine models from the traditional timeseries clustering space and apply them to a service monitoring setting with multivariate data. \section{Conclusion} In this paper, we present a new technique, timeseries fingerprinting, for tackling the problem of unsupervised crash categorization at scale. \kaboom{} trains an autoencoder model to learn to embed incoming crashes, represented by multivariate timeseries of object allocations, into a much lower-dimensional space, where traditional distances between points are easy to compute. The embeddings are learned in such a way that crashes with related objects and thus behavior lie close together, and unrelated crashes lie far apart. Moreover, \kaboom{} learns cluster centroids tuned to the incoming data distributions, which enables it to both separate different types of crashes and to identify when new types of crashes occur.
Finally, \kaboom{} offers intuitive explanations for what types of patterns are common to groups of crashes, based on contrastive examination of the objects that comprise them. Based on our current experiences and developer feedback, in future iterations, we plan to train \kaboom{} to compress arbitrarily sized input traces, to enhance the cluster explanations using causal learning, and to apply \kaboom{} for post-mortem analysis of services. \bibliographystyle{ACM-Reference-Format}
\section{Introduction} Intergalactic magnetic fields are among the most interesting discoveries in modern cosmology. Recently, lower bounds of the order of $B \sim 10^{-15}G$ have been established observationally \cite{Ando,Neronov} and the search for the origin of these fields has intensified. One of the best candidates is clearly primordial fluctuations. But there are also a number of other candidates (for a review, see \cite{Kunze}). In this paper we would like to discuss a different mechanism, based on non-Abelian magnetic fields. As was shown recently, a spontaneous magnetization appears in non-Abelian gauge theories at high temperature. This was found by analytic methods in \cite{Starinets:1994vi}--\cite{Skalozub:1999bf} and it was confirmed by lattice simulations in \cite{Demchik:2008zz}. The basic idea rests on the known observation that in non-Abelian gauge theories at high temperature a spontaneous vacuum magnetization occurs. This is a consequence of the spectrum of a color charged gluon, \begin{equation} \label{spectrum} p^2_{0} = p^2_{||} + (2 n + 1) g B\qquad(n = - 1, 0, 1,... ), \end{equation} in a homogeneous magnetic background $B$, where $p_{||}$ is the momentum directed along the field. Here, a tachyonic mode is present in the ground state ($n=-1$). In fact, one observes that $p_0^2<0$, resulting from the interaction of the magnetic moment of the spin-1 field with the magnetic field. This phenomenon was first observed by Savvidy \cite{Savvidy:1977as} at zero temperature, $T=0$, and became known as the Savvidy vacuum. However, at zero and low temperature this state is not stable. It decays under emission of gluons until the magnetic field $B$ disappears. The picture changes with increasing temperature, where a stabilization sets in. This stabilization is due to vacuum polarization and it depends on two dynamical parameters.
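The tachyonic mode in the spectrum \Ref{spectrum} can be made explicit with a one-line check; units with $gB = 1$ are our convention for illustration.

```python
def p0_squared(n, p_parallel, gB):
    """Landau-level spectrum of a color charged gluon in a constant
    chromomagnetic background: p0^2 = p_par^2 + (2n + 1) gB."""
    return p_parallel**2 + (2 * n + 1) * gB

# Lowest levels at zero longitudinal momentum, in units gB = 1.
levels = {n: p0_squared(n, 0.0, 1.0) for n in (-1, 0, 1)}
```

Only the $n = -1$ level gives $p_0^2 < 0$; all higher Landau levels remain positive.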
These are the magnetic mass of the color charged gluon and an $A_0$-condensate, which is proportional to the Polyakov loop \cite{Ebert:1996tj}. This configuration is perfectly stable, since its energy is below the perturbative one and the minimum is reached for a field of order $g B \sim g^4 T^2$. Although the phenomenon was discovered in $SU(2)$ gluodynamics, it is common to other $SU(N)$ gauge fields, which can be used to extend the standard $(SU(2) \times U(1))_{ew} \times SU(3)_c$ model of elementary particles. An important property of such temperature dependent magnetic fields is the vanishing of their magnetic mass, $m_{magn} = 0$. This was found both in one-loop analytic calculations \cite{Bordag:2006pr} and in lattice simulations \cite{Antropov:2010}. The mass parameter describes the inverse spatial scales of the transverse field components, similarly to the Debye mass $m_D$, related to the inverse space scale for the electric (Coulomb) component. The absence of a screening mass means that the spontaneously generated Abelian chromomagnetic fields are long range at high temperature, as is common for the $U(1)$ magnetic field. Hence, it is reasonable to believe that at each stage of the evolution of the hot universe, spontaneously created, strong, long-range magnetic fields of different types have been present. Owing to the property of being unscreened, they have influenced various processes and phase transitions. The temperature dependence of these fields differs from that of the typical $U(1)$ magnetic fields. Recall that, in the latter case, the magnetic (in fact, hypermagnetic) field, created by some specific mechanism, is embedded in a hot plasma and decreases according to the law $B \sim T^2$, which is a consequence of magnetic flux conservation (see, for instance, \cite{Kunze}). However, in the case of spontaneous vacuum magnetization the magnetic flux is {\it not} conserved. Instead, a specific flux value is generated at each temperature.
This fact has to be taken into consideration when the cooling pattern of the hot non-Abelian plasma is investigated. This also concerns the $SU(2)_{ew}$ component of the electromagnetic field. In the present paper we estimate the strength of the magnetic field at the electroweak phase transition temperature $T_c^{ew}$, assuming that this field was spontaneously generated by a mechanism as described above. Although this phenomenon is nonperturbative, we carry out an actual calculation in the framework of a consistent effective potential (EP) accounting for the one-loop, $V^{(1)}$, and the daisy (or ring), $V^{ring}$, diagram contributions of the standard model. In Sect.~2 we qualitatively describe, in more detail, some important aspects of the investigated phenomena. In Sect.~3 the EP of an Abelian constant electromagnetic $B$ field at finite temperature is obtained. It is then used, in Sect.~4, to estimate the magnetic field strength at the electroweak phase transition temperatures. A discussion of the results obtained in the paper, together with some prospects for further work, is provided in Sect.~5. \section{Qualitative consideration} In this section we describe, in a qualitative manner, the most relevant aspects of the phenomena considered. All of them follow from very basic asymptotic freedom and spontaneous symmetry breaking considerations at finite temperature. Our main assumption is that the intergalactic magnetic field has been spontaneously created at high temperature. We believe this to be a quite reasonable idea because, physically, the magnetization is the consequence of a large magnetic moment for charged non-Abelian gauge fields (let us just recall the gyromagnetic ratio $\gamma = 2$ for $W$-bosons). This property eventually results in the asymptotic freedom of the model in the presence of external fields.
We will discuss the procedure to relate the present value of the intergalactic magnetic field with the one generated in the restored phase. First, we note that in non-Abelian gauge theories magnetic flux conservation does not hold. This is due to spontaneous vacuum magnetization, which depends on the temperature. The vacuum acts as a specific source generating classical fields. Second, the magnetization is strongly dependent on the scalar field condensate present in the vacuum at low temperature. This point was investigated at zero temperature by Goroku \cite{Goroku}. For finite temperature, it is considered in the present paper for the first time. The observation is that, in both cases, the spontaneous vacuum magnetization takes place only for a small scalar field $\phi \not = 0$. For values of $\phi$ corresponding to a first-order phase transition it does not happen. This means that, after the electroweak phase transition occurs, the vacuum polarization ceases to generate magnetic fields and magnetic flux conservation holds again. As a result, the familiar temperature dependence $B \sim T^2$ is restored. Another aspect of the problem is the composite structure of the electromagnetic field $A_{\mu}$. The potentials read \begin{eqnarray} \label{AZ} A_\mu &=& \frac{1}{\sqrt{g^2 + g'^2}} ( g' A^3_\mu + g b_\mu ), \nonumber\\ Z_\mu &=& \frac{1}{\sqrt{g^2 + g'^2}} ( g A^3_\mu - g' b_\mu ), \end{eqnarray} where $Z_\mu$ is the $Z$-boson potential, $A^3_\mu$ and $b_\mu$ are the third projection of the Yang-Mills gauge field in weak isospin space and the potential of the hypercharge gauge field, respectively, and $g$ and $g'$ are the $SU(2)$ and $U(1)_Y$ couplings, respectively. After the electroweak phase transition, the $Z$-boson acquires mass and the field is screened. Since the hypermagnetic field is not spontaneously generated, at high temperature only the component $A_\mu = \frac{1}{\sqrt{g^2 + g'^2}} g' A^3_\mu = \sin \theta_w A^3_\mu$ is present.
Here $\theta_w$ is the Weinberg angle, $\tan \theta_w = \frac{g'}{g}$. This is the only component responsible for the intergalactic magnetic field at low temperature. In the restored phase the field $b_\mu = 0$, and the complete weak-isospin chromomagnetic field $A^{(3)}_\mu$ is unscreened. This is because the magnetic mass of this field is zero \cite{Antropov:2010}. Thus, the field is long range, which ensures a sufficiently large coherence length. After the phase transition, part of the field is screened by the scalar condensate. In the restored phase, the constituent of the weak isospin field corresponding to the magnetic one is given by the expression \begin{equation} \label{fieldT} B(T) = \sin \theta_w (T) B^{(3)}(T),\end{equation} where $B^{(3)}(T)$ is the strength of the field generated spontaneously. To relate the present value of the intergalactic magnetic field with the field which existed before the electroweak phase transition, we take into consideration that, after the phase transition, the spontaneous vacuum magnetization does not occur. Therefore, for the electroweak transition temperature $T_{ew}$ we can write: \begin{equation} \label{relation} \frac{B(T_{ew})}{B_0} = \frac{T^2_{ew}}{T^2_0} = \frac{\sin \theta_w (T_{ew}) B^{(3)}(T_{ew})}{B_0}. \end{equation} Here $B_0$ is the present value of the intergalactic magnetic field strength $B_0 \sim 10^{- 15} G$. The left-hand side relates the value $B(T_{ew})$ with $B_0$. The right-hand side allows one to express the weak isospin magnetic field in the restored phase through $B_0$, knowing the temperature dependence of the Weinberg angle $\theta_w(T)$. This relation contains an arbitrary temperature normalization parameter $\tau$. It can be fixed for a given temperature and $B_0$. After that, the field strength values at various temperatures can be calculated. In particular, the total weak isospin field strength is given by the sum $ \cos \theta_w (T_{ew}) B^{(3)}(T_{ew}) + B(T_{ew})$.
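For orientation, the first equality in Eq.~\Ref{relation} can be evaluated numerically. The input values below are our illustrative assumptions (present CMB temperature for $T_0$ and a $\sim 100$~GeV electroweak scale), not fitted numbers from this paper.

```python
# Order-of-magnitude evaluation of B(T_ew) = B_0 * (T_ew / T_0)^2.
B0 = 1e-15       # present intergalactic field strength, Gauss
T0_eV = 2.35e-4  # present CMB temperature ~2.725 K in eV (assumed)
Tew_eV = 100e9   # electroweak transition scale ~100 GeV (assumed)

B_ew = B0 * (Tew_eV / T0_eV) ** 2  # ~1.8e14 Gauss under these assumptions
```

Under these assumptions the field at the electroweak epoch comes out of order $10^{14}$~G, far below the critical scale $M_w^2/e$, consistent with treating it as a weak field there.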
An important aspect of this scenario is that the precise nature of the as yet unknown theory extending the standard model is not very important for estimating the field strength $B$ at temperatures close to $T_{ew}$. This is because any new gauge field of the extended model in question will be screened at the relevant higher temperatures corresponding to the spontaneous symmetry breaking of some basic symmetries. At high temperatures, when these symmetries are restored, related magnetic fields emerge. Using these ideas, the value of the field strength at the Planck era has been estimated by Pollock \cite{Pollock}. Summing up, we conclude that our estimate here gives a lower bound on the magnetic field strength for the hot universe. \section{Effective potential at high temperature} As we noted above, spontaneous vacuum magnetization and the absence of a magnetic mass for the Abelian magnetic fields are nonperturbative effects to be precisely determined, in particular, in lattice simulations \cite{Demchik:2008zz,Antropov:2010}. The main conclusions of these investigations are that a stable magnetized vacuum does exist at high temperature and that the magnetic mass of the created field is zero. Concerning the actual value of the field strength, it is close to the one calculated within the consistent effective potential, which takes into account one-loop plus daisy diagrams. Thus, in the present investigation we restrict ourselves to that approximation. The main purpose of doing this is to enable analytic calculations that clarify the results obtained. The complete EP for the standard model is given in the review \cite{Demchik:1999}. In the present investigation we are interested in two different limits: \begin{enumerate}\item A weak magnetic field and a large scalar field condensate, $h = eB/M_w^2 < \phi^2, \phi = \phi_c/\phi_0, \beta = 1/T$. \item The case of restored symmetry, $\phi = 0, g B \not = 0, T \not = 0$.
\end{enumerate}For the first case we show the absence of spontaneous vacuum magnetization at finite temperature. For the second one we estimate the field strength at high temperature. Here $M_w$ is the $W$-boson mass at zero temperature, $\phi_c$ is a scalar field condensate, and $\phi_0$ its value at zero temperature. To demonstrate the first property we consider the one-loop contribution of $W$-bosons (see also Eq.~(27) of Ref.~\cite{Skalozub:1996ax}), \begin{eqnarray} \label{L2t} V^{(1)}_w(T,h,\phi) &=& \frac{h}{\pi^2 \beta^2} \sum\limits_{n = 1}^{\infty} \Bigl[ \frac{(\phi^2 - h)^{1/2}\beta}{n} K_1(n \beta (\phi^2 - h)^{1/2}) \nonumber\\ &-& \frac{(\phi^2 + h)^{1/2}\beta}{n} K_1(n \beta (\phi^2 + h)^{1/2})\Bigr]. \end{eqnarray} Here the sum over $n$ arises from the finite temperature expansion and $K_1(z)$ is the MacDonald function. The main goal of our investigation is the restored phase of the standard model. To this end, we deduce the high temperature contribution of the complete effective potential relevant for this case using the results in Ref.~\cite{Demchik:1999}. First, we write down the one-loop $W$-boson contribution as the sum of the pure Yang-Mills weak-isospin part ($\tilde{B}\equiv B^{(3)}$), \begin{eqnarray} \label{VW} V^{(1)}_w(\tilde{B}, T) &=& \frac{\tilde{B}^2}{2} + \frac{11}{48} \frac{g^2}{\pi^2} \tilde{B}^2 \log \frac{T^2}{\tau^2} - \frac{1}{3} \frac{( g \tilde{B})^{3/2} T }{\pi} \nonumber\\ &-& i \frac{( g \tilde{B})^{3/2} T }{2 \pi} + O (g^2 \tilde{B}^2), \end{eqnarray} where $\tau$ is a temperature normalization point, and the charged scalars \cite{Skalozub:1996ax}, \begin{equation} \label{Vscalar} V^{(1)}_{sc}(\tilde{B}, T) = - \frac{1}{96} \frac{g^2}{\pi^2} \tilde{B}^2 \log \frac{T^2}{\tau^2} + \frac{1}{12} \frac{( g \tilde{B})^{3/2} T }{\pi} + O (g^2 \tilde{B}^2), \end{equation} describing the contribution of longitudinal vector components. The first term in Eq.~\Ref{VW} is the tree-level energy of the field.
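The claim for the first case, that $V^{(1)}_w > 0$ for $0 < h < \phi^2$ (so the field only raises the free energy and no spontaneous magnetization occurs), can be checked numerically from the Bessel sum in Eq.~\Ref{L2t}. This is a sketch in units where masses are measured in $M_w$ and we set $T = \phi = 1$; the quadrature and truncation choices are ours.

```python
import math

def k1(z, tmax=12.0, steps=1200):
    """Modified Bessel function K1 via its integral representation
    K1(z) = \\int_0^inf exp(-z cosh t) cosh t dt (trapezoidal rule;
    adequate for the arguments z >~ 0.3 used below)."""
    h = tmax / steps
    total = 0.0
    for i in range(steps + 1):
        t = i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * math.exp(-z * math.cosh(t)) * math.cosh(t)
    return total * h

def V1_w(T, h_field, phi, nmax=60):
    """Truncated Bessel sum of Eq. (L2t) for the one-loop W contribution."""
    beta = 1.0 / T
    m_minus = math.sqrt(phi**2 - h_field)
    m_plus = math.sqrt(phi**2 + h_field)
    s = 0.0
    for n in range(1, nmax + 1):
        s += (m_minus * beta / n) * k1(n * beta * m_minus) \
           - (m_plus * beta / n) * k1(n * beta * m_plus)
    return h_field / (math.pi**2 * beta**2) * s

# Since z*K1(z) is strictly decreasing, every bracket is positive for
# 0 < h < phi^2, so V1_w > 0: the field costs energy in this regime.
values = {hv: V1_w(1.0, hv, 1.0) for hv in (0.2, 0.5, 0.9)}
```

The positivity is robust to the truncation: it holds term by term in the sum, mirroring the analytic argument.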
This representation is convenient for the case of extended models including other gauge and scalar fields. Depending on the specific case, one can take into consideration the parts \Ref{VW} or \Ref{Vscalar}, respectively. In the standard model, the contribution of Eq.~\Ref{Vscalar} has to be included with a factor 2, due to two charged scalar fields entering the scalar doublet of the model. In the case of the Two-Higgs-Doublet standard model, this factor must be 4, etc. The imaginary part is generated because of the unstable mode in the spectrum \Ref{spectrum}. It is canceled by the term appearing in the contribution of the daisy diagrams for the unstable mode \cite{Skalozub:1999bf}, \begin{equation} \label{ringdaisy} V_{unstable} = \frac{g \tilde{B} T}{2 \pi} [\Pi(\tilde{B}, T, n = - 1) - g \tilde{B} ]^{1/2} + i \frac{(g \tilde{B})^{3/2} T}{2 \pi}. \end{equation} Here $\Pi(\tilde{B}, T, n = - 1)$ is the mean value for the charged gluon polarization tensor taken in the ground state $ n = - 1$ of the spectrum \Ref{spectrum}. If this value is sufficiently large, spectrum stabilization due to the radiation correction takes place. This possibility formally follows from the temperature and field dependences of the polarization tensor in the high temperature limit $T \to \infty $ \cite{Bordag08}: $\Pi(\tilde{B}, T, n = - 1)= c~ g^2 T \sqrt{g \tilde{B}} $, where $c > 0$ is a constant that must be calculated explicitly. At high temperature the first term can be larger than $g \tilde{B}$. From Eqs.~\Ref{VW} and \Ref{ringdaisy} it follows that the imaginary part cancels. Hence, we see that taking rings into account leads to vacuum stabilization even if $ \Pi(\tilde{B}, T, n = - 1)$ is smaller than $g \tilde{B}$. Actually, in the latter case, the imaginary part will be smaller than in Eq.~\Ref{VW}.
The high temperature limit of the fermion contribution looks as follows, \begin{equation} \label{fermionEP} V_{fermion} = - \frac{\alpha}{\pi} \sum\limits_{f} \frac{1}{6} q^2_f \tilde{B}^2 \log\frac{ T}{\tau} , \end{equation} where the sum is extended to all leptons and quarks, and $q_{f}$ is the fermion electric charge in positron units. Hence, it follows that in the restored phase all the fermions give the same contribution. Now, let us present the EP for ring diagrams describing the long range correlation corrections at finite temperature \cite{Carrington:1992,Demchik:2003}, \begin{eqnarray} \label{Vring} V_{ring} &=& \frac{1}{24 \beta^2} \Pi_{00}(0) - \frac{1}{12 \pi \beta} Tr [\Pi_{00}(0)]^{3/2}\nonumber\\ &+& \frac{(\Pi_{00}(0))^2}{32 \pi^2} \left[\log\bigl(\frac{4 \pi}{\beta (\Pi_{00}(0))^{1/2}}\bigr) + \frac{3}{4} - \gamma \right],~ \end{eqnarray} where the trace means summation over all the contributing states, $\Pi_{00} = \Pi_{\phi}(k = 0, T, B)$ for the Higgs particle; $m_D^2 = \Pi_{00} = \Pi_{00}(k = 0, T, B)$ are the zero-zero components of the polarization functions of gauge fields in the magnetic field taken at zero momenta, called the Debye mass squared, $\gamma$ is Euler's gamma. These terms are of order $\sim g^3 (\lambda^{3/2})$ in the coupling constants. The detailed calculation of these functions is given in Ref.~\cite{Demchik:1999}. We give the results for completeness, \begin{eqnarray} \label{Piscalar} \Pi_{\phi}(0) &=& \frac{1}{24 \beta^2} \left[ 6 \lambda + \frac{6 e^2}{\sin^2 (2\theta_w)} + \frac{3 e^3}{\sin^2 \theta_w} \right] \nonumber\\ &+& \frac{2 \alpha}{\pi} \sum\limits_f \left( \frac{\pi^2 K_f}{3 \beta^2} - |q_f B| K_f \right) \nonumber\\ &+& \frac{(e B)^{1/2}}{8 \pi \sin^2 \theta_w \beta} e^2 3 \sqrt{2} \zeta\left(-\frac{1}{2}, \frac{1}{2}\right). \end{eqnarray} Here $K_f = \frac{m_f^2}{m_w^2} = \frac{G^2_{Yukawa}}{g^2}$ and $ \lambda $ is the scalar field coupling. 
The terms \mbox{$\sim \!T^2$} give standard contributions to the temperature mass squared coming from the boson and fermion sectors. The $B$-dependent terms are negative (since $3 \sqrt{2}\, \zeta(-\frac{1}{2}, \frac{1}{2}) = - 0.39$). They decrease the value of the screening mass at high temperature. The Debye masses squared for the photons, $Z$-bosons and neutral current contributions are, correspondingly, \begin{eqnarray} \label{mAZ} m^2_{D, \gamma} &=& g^2 \sin^2\theta_w \left[\frac{1}{3 \beta^2} + O(e B \beta^2)\right], \nonumber\\ m^2_{D, Z} &=& g^2 \left(\tan^2\theta_w + \frac{1}{4 \cos^2\theta_w} \right) \left[\frac{1}{3 \beta^2} + O(e B \beta^2)\right], \nonumber\\ m^2_{D,neutral} &=& \frac{g^2}{ 8 \cos^2\theta_w \beta^2} (1 + 4 \sin^4 \theta_w) + O(e B \beta^2). \end{eqnarray} As one can see, the dependence on $B$ appears at order $O(T^{-2})$. The $W$-boson contribution to the Debye mass of the photon is \begin{equation} \label{mW} m^2_{D, W} = 3 g^2 \sin^2\theta_w \left[\frac{1}{3 \beta^2} - \frac{(g \sin \theta_w B)^{1/2}}{2 \pi \beta}\right]. \end{equation} An interesting feature of this expression is the negative sign of the next-to-leading terms dependent on the field strength. Finally, we give the high temperature limit of $\Pi(\tilde{B}, T, n = - 1)$ entering Eq.~\Ref{ringdaisy} \cite{Demchik:1999}, \begin{eqnarray} \label{Piunstable} \Pi(\tilde{B}, T, n = - 1) &=& \alpha \left[12.33 \frac{(g \sin \theta_w B)^{1/2}}{\beta} \right.\nonumber\\ && \left. + 4i \frac{(g \sin \theta_w B)^{1/2}}{\beta} \right]. \end{eqnarray} This expression has been calculated from the one-loop $W$-boson polarization tensor in the external field at high temperature. It contains the imaginary part which comes from the unstable mode in the spectrum \Ref{spectrum}. Its value is small compared to the real one. It is of the order of the usual damping constants in plasma at high temperature. Thus, it will be ignored in the actual calculations that follow.
In fact, this part should be calculated in a more consistent scheme which starts with a regularized stable spectrum. On the other hand, as we noted above, the stability problem is a non-perturbative one. The stabilization can be realized not only by the radiation corrections but also by some other mechanisms, for example by $A_0$ condensation \cite{Starinets:1994vi} at high temperature. We observed the stable vacuum state in the lattice simulations \cite{Demchik:2008zz}. Therefore, we do believe that this problem has a positive solution. Summing up, we now have all that is necessary to investigate in depth the problem of interest. \section{Magnetic field strength at $T_{ew}$ } We will now show that spontaneous vacuum magnetization does not occur at finite temperature and for non-small values of the scalar field condensate $\phi \not = 0$. To this end we notice that the magnetization is produced by the gauge field contribution, given by Eq.~\Ref{L2t}. So, we consider the limit $\frac{g B}{T^2} \ll 1$ and $\phi^2 > h$. For this case we use the asymptotic expansion of $K_1(z)$, \begin{equation} \label{K1asympt} K_1(z) \sim \sqrt{\frac{\pi}{2 z}} e^{- z} \left( 1 + \frac{3}{8 z} - \frac{15}{128 z^2} + \ldots \right), \end{equation} where $ z = n \beta (\phi^2 \pm h)^{1/2}$. We now investigate the limit $\beta \to \infty, \frac{T}{\phi} \ll 1$. The leading contribution is then given by the first term of the temperature sum in Eq.~\Ref{L2t}. We can also substitute $(\phi^2 \pm h)^{1/2} = \phi ( 1 \pm \frac{ h}{2 \phi^2})$. In this approximation, the sum of the tree level energy and the contribution \Ref{L2t} reads \begin{equation} \label{L2tasympt} {V} = \frac{h^2}{2} - \frac{h^2}{\pi^{3/2}} \frac{T^{1/2}}{\phi^{1/2}} \left( 1 - \frac{T}{2 \phi} \right) e^{- {\phi}/{T}}. \end{equation} The second term is exponentially small and the stationary equation $\frac{\partial{V}}{\partial{h}} = 0$ admits the trivial solution $h = 0$.
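The quality of the truncated expansion \Ref{K1asympt} is easy to check against the integral representation $K_1(z)=\int_0^\infty e^{-z\cosh t}\cosh t\,dt$; the sketch below (an illustration, with $z=10$ chosen arbitrarily) needs only the standard library:

```python
import math

def k1_exact(z, steps=4000, t_max=12.0):
    # K_1(z) = \int_0^infty exp(-z cosh t) cosh t dt, trapezoidal rule;
    # the integrand decays double-exponentially, so t_max = 12 is plenty.
    dt = t_max / steps
    s = 0.0
    for i in range(steps + 1):
        t = i * dt
        w = 0.5 if i in (0, steps) else 1.0
        s += w * math.exp(-z * math.cosh(t)) * math.cosh(t)
    return s * dt

def k1_asympt(z):
    # First three terms of Eq. (K1asympt)
    return math.sqrt(math.pi / (2 * z)) * math.exp(-z) * (
        1 + 3 / (8 * z) - 15 / (128 * z * z))

z = 10.0
rel_err = abs(k1_asympt(z) - k1_exact(z)) / k1_exact(z)
```

Already at $z=10$ the three-term expansion reproduces $K_1$ to a few parts in $10^4$, which is why the exponential suppression in Eq.~\Ref{L2tasympt} is reliable for $T/\phi \ll 1$.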
This estimate can easily be verified by a numerical calculation of the total effective potential. Hence, we conclude that, as at zero temperature \cite{Goroku}, after symmetry breaking spontaneous vacuum magnetization does not take place. To estimate the magnetic field strength in the restored phase at the electroweak phase transition temperature, the total effective potential obtained in the previous section must be used and the parameters entering Eq.~\Ref{relation} need to be calculated. This can be best done numerically. Specifically, we consider here the contribution to this potential accounting for the one-loop $W$-boson terms. The high temperature expansion for the EP coming from charged vector fields is given in Eq.~\Ref{VW}. Assuming stability of the vacuum state, we calculate the value of the chromomagnetic weak isospin field spontaneously generated at high temperature from Eqs.~\Ref{VW} and \Ref{Vscalar}: \begin{equation} \label{fieldT1} \tilde{B}(T) = \frac{1}{16} \frac{g^3}{\pi^2} \frac{T^2}{\displaystyle\left(1 + \frac{5}{12} \frac{g^2}{ \pi^2} \log \frac{T}{\tau}\right)^2 }. \end{equation} We relate this expression to the intergalactic magnetic field $B_0$. Let us introduce the standard parameters and definitions, $\frac{g^2}{4 \pi} = \alpha_s, \alpha = \alpha_s \sin^2 \theta_w, \frac{(g')^2}{4 \pi} = \alpha_Y$ and $\tan^2 \theta_w(T) = \frac{ \alpha_Y(T)}{ \alpha_s(T)}$, where $\alpha$ is the fine structure constant. To find the temperature dependence of the Weinberg angle, the dependence of the hypercharge coupling $g'$ on the temperature has to be computed. From Eq.~\Ref{Vscalar} it follows that this behavior is nontrivial. The logarithmic temperature-dependent term is negative. But, as is well known, in asymptotically free models this sign will unavoidably be changed to a positive value due to the contributions of other fields. This particular value is model dependent and we will not calculate it in this paper.
Instead, for a rough estimate, we replace it with the zero temperature number: $\sin^2 \theta_w(T) = \sin^2 \theta_w(0)= 0.23$. For the given temperature of the electroweak phase transition, $T_{ew}$, the magnetic field is \begin{equation} \label{B3T} B(T_{ew}) = B_0 \frac{T^2_{ew}}{T^2_0} = \sin \theta_w (T_{ew}) \tilde{B}(T_{ew}).\end{equation} Assuming $T_{ew} = 100~\mathrm{GeV} = 10^{11}~\mathrm{eV}$ and $T_0 = 2.7~\mathrm{K} = 2.3267 \cdot 10^{- 4}~\mathrm{eV}$, we obtain \begin{equation} \label{Bew} B(T_{ew}) \sim 1.85 \cdot 10^{14}~\mathrm{G}. \end{equation} This value can be considered as a lower bound on the magnetic field strength at the electroweak phase transition temperature. Hence, for the value of $X = \log \frac{T_{ew}}{\tau}$, we have the equation \begin{equation} \label{Xew} B_0 = \frac{1}{2} \frac{\alpha^{3/2} }{\pi^{1/2} \sin^2 \theta_w } \frac{T^2_0}{\displaystyle\left(1 + \frac{5 \alpha}{3 \pi \sin^2 \theta_w } X\right)^{2}}. \end{equation} Since all the values here are known, $\log \tau $ can be estimated. After that the field strengths at different higher temperatures can be found. Of course, our estimate is a rough one because we have ignored the temperature dependence of the Weinberg angle. To estimate the value of the parameter $\tau$ we take the field strength $B_0 \sim 10^{- 9}~\mathrm{G}$, usually used in cosmology (see, for example, \cite{Pollock}). In this case, from Eq.~\Ref{Xew} we obtain $\tau \sim 300~\mathrm{eV}$. For the lower bound value $B \sim 10^{- 15}~\mathrm{G}$ this parameter is much smaller. The strong suppression of the field strength is difficult to explain within the standard model. This point will be discussed below. To take into consideration the fermion contribution Eq.~\Ref{fermionEP}, we have to substitute the expression $\frac{5}{12} \frac{g^2}{ \pi^2} \log \frac{T}{\tau}$ in Eqs.~\Ref{fieldT1} and \Ref{Xew} with the value \begin{equation} \label{plusferm} \left(\frac{5}{3} - \sum\limits_{f} \frac{1}{6} q^2_f \right)\frac{\alpha_s}{ \pi} \log \frac{T}{\tau}.
\end{equation} In the above estimate, we have taken into account the one-loop part of the EP of order $\sim g^2$ in the coupling constant. The ring diagrams are of order $\sim g^3$ and provide a small numeric correction to this result in the high temperature approximation. As mentioned before, had we taken into account all the terms listed in the previous section, the results would not have changed essentially. The field strength at higher temperatures depends on the particular model extending the standard one. Spontaneous vacuum magnetization in the minimal supersymmetric standard model has been investigated in Ref.~\cite{Demchik:2003}. The field strength generated in this model is smaller than in the situation considered here. Pollock \cite{Pollock} has investigated this problem for the case of the Planck era, where magnetic fields of order $B \sim 10^{52} G$ have been estimated. We will further discuss these results in the concluding section. \section{Discussion} Here we summarize our main results. The key issue in the problem under investigation is the spontaneous magnetization of the vacuum, which eliminates the magnetic flux conservation principle at high temperatures. This vacuum polarization is responsible for the value of the field strength $B(T)$ at each temperature and serves as a source for it. We have also shown that, at finite temperature and after the symmetry breaking, a scalar field condensate suppresses the magnetization. Hence it follows that the actual nature of the particular extended model is not essential at sufficiently low temperatures, when the decoupling of heavy gauge fields has already happened. These statements are new and come as an interesting surprise, compared with the standard notions based on the ubiquitous scenario with magnetic flux conservation. In the latter case one assumes that the magnetic field is created by some mechanism at different stages of the universe evolution.
Then the temperature dependence ($B \sim T^2$) is regulated by magnetic flux conservation only. The present value of the intergalactic magnetic field is related in our model to the field strengths at high temperatures in the restored phase. Because of the zero magnetic mass for Abelian magnetic fields, as discovered recently \cite{Antropov:2010}, there is no problem in generating fields with a large coherence length. Knowing the particular properties of the extended model, it is possible to estimate the field strengths at any temperature. This can be done for different schemes of spontaneous symmetry breaking (restoration) by taking into account the fact that, after the decoupling of some massive gauge fields, the corresponding magnetic fields are screened. Thus, the higher the temperature, the larger the number of strong long-range magnetic fields of different types generated in the early universe. Now, let us compare our results with those of Ref.~\cite{Pollock}, where spontaneous vacuum magnetization at high temperature was applied to estimate the field strength at the Planck era. In that paper, in order to estimate the field strength, heterotic superstring theory $E_8 \times E_8^{'}$ was considered as a basic ingredient. At the Planck era, the magnetic field strength has been estimated to be of order $\sim 10^{52} G$. In contrast to our considerations, it was assumed there that the magnetic field approximately scales as $B \sim T^2$. That is, vacuum magnetization was taken into account only at the very first moments of the universe evolution. Further, recent results implying a zero magnetic mass for the Abelian chromomagnetic fields also change the picture of the magnetized early universe substantially. According to those, the created magnetic fields existed already on the horizon scales.
They were switched off at certain mass scales, because of spontaneous symmetry breaking as the temperature lowered and heavy gauge fields decoupled. As a result, at the electroweak phase transition only the component $B^{(3)}$ of the $SU(2)$ weak isospin group remains unscreened and eventually results in the present day intergalactic magnetic field. The processes of decoupling were also not taken into consideration in Ref.~\cite{Pollock}. Thus, it was impossible there to relate the electromagnetic field $B_0$ with the magnetic fields generated at high temperatures. Our analysis has shown that, at the electroweak phase transition temperature, magnetic fields of the order $B(T_{ew}) \sim 10^{14} G$ were present. To estimate the field strengths at high temperatures, one needs to invoke a number of characteristic features of the standard model and its particular extension. First, we note that quarks possess both electric and color charges. Therefore, there is a mixing between the color and usual magnetic fields owing to the quark loops. Second, there are peculiarities related to the particular content of the extended model considered. For example, in the Two-Higgs-Doublet standard model the contribution $\sim (g B)^{3/2} T$ in Eq.~\Ref{VW} is exactly canceled by the corresponding term in Eq.~\Ref{Vscalar}, because of the four charged scalar fields entering the model. They interact with gauge fields with the same coupling constant. However, in this model the doublets interact differently with fermions. This changes the effective couplings of the doublets with gauge fields and results in incomplete cancellations. As a result, a strong suppression of the spontaneously created magnetic field is expected in this model. This, in principle, could explain the very small value of the intergalactic magnetic field at low temperature. There are other peculiarities which influence the high temperature phase of the universe.
They require further investigations, which we leave for a future publication. \begin{acknowledgments} The authors are indebted to Michael Bordag for numerous discussions and a reading of the manuscript. One of us (VS) was supported by the European Science Foundation activity ``New Trends and Applications of the Casimir Effect'', and also thanks the Group of Theoretical Physics and Cosmology at the Institute for Space Science, UAB, Barcelona, for kind hospitality. EE has been partly supported by MICINN (Spain), projects FIS2006-02842 and FIS2010-15640, by the CPAN Consolider Ingenio Project, and by AGAUR (Generalitat de Ca\-ta\-lu\-nya), contract 2009SGR-994. \end{acknowledgments}
\section{Introduction} Throughout this paper, let $p$ be a prime number, and let $k$ be an algebraically closed field of characteristic $p$. An abelian variety $X$ over $k$ is said to be \emph{supersingular} if it is isogenous to a product of supersingular elliptic curves; it is called \emph{superspecial} if it is isomorphic to a product of supersingular elliptic curves. To each polarised supersingular abelian variety $x=(X_0,\lambda_0)$ of $p$-power polarisation degree, we associate a set $\Lambda_x$ of isomorphism classes of $p$-power degree polarised abelian varieties $(X,\lambda)$ over $k$, consisting of those whose associated quasi-polarised $p$-divisible groups satisfy $(X,\lambda)[p^\infty]\simeq (X_0,\lambda_0)[p^\infty]$. It is known that $\Lambda_x$ is a finite set, and the \emph{mass} of $\Lambda_x$ is defined to be the weighted sum \begin{equation}\label{eq:intromass} \mathrm{Mass}(\Lambda_x):=\sum_{(X,\lambda)\in \Lambda_x} \frac{1}{\vert \mathrm{Aut}(X,\lambda)\vert }. \end{equation} Let $\calA_g$ be the moduli space over $\overline{\bbF}_p$ of $g$-dimensional principally polarised abelian varieties. If $x=(X_0,\lambda_0)$ is a superspecial point in $\calA_g(k)$, that is, $X_0$ is superspecial, then $\Lambda_x$ coincides with the superspecial locus $\Lambda_{g,1}$ of $\calA_g$, which consists of all superspecial points in $\calA_g$, called the \emph{principal genus}. The classical mass formula (see Hashimoto--Ibukiyama \cite[Proposition 9]{hashimotoibukiyama} and Ekedahl \cite[p.~159]{ekedahl}) states that \begin{equation}\label{eq:introsspmass} \mathrm{Mass}(\Lambda_{g,1})=\frac{(-1)^{g(g+1)/2}}{2^g} \left \{ \prod_{i=1}^g \zeta(1-2i) \right \}\cdot \prod_{i=1}^{g}\left\{p^i+(-1)^i\right \}, \end{equation} where $\zeta(s)$ denotes the Riemann zeta function.
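The right-hand side of \eqref{eq:introsspmass} can be evaluated with exact rational arithmetic; the sketch below (an illustration, not part of the original text) checks that $g=1$ recovers the classical mass $(p-1)/24$ of supersingular elliptic curves and that for $g=3$ the prefactor $1/(2^{10}\cdot 3^4\cdot 5\cdot 7)$ emerges:

```python
from fractions import Fraction

# zeta(1 - 2i) for i = 1, 2, 3: zeta(-1), zeta(-3), zeta(-5)
ZETA = {1: Fraction(-1, 12), 2: Fraction(1, 120), 3: Fraction(-1, 252)}

def ssp_mass(g, p):
    # Mass(Lambda_{g,1}) as in the displayed formula, for g <= 3
    sign = (-1) ** (g * (g + 1) // 2)
    value = Fraction(sign, 2 ** g)
    for i in range(1, g + 1):
        value *= ZETA[i] * (p ** i + (-1) ** i)
    return value

# g = 1 gives (p - 1)/24; for g = 3 the zeta product contributes
# exactly 1 / (2^10 * 3^4 * 5 * 7).
```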
More generally, for any integer $c$ with $0 \leq c \leq \lfloor g/2 \rfloor$, let $\Lambda_{g,p^c}$ denote the finite set of isomorphism classes of $g$-dimensional polarised superspecial abelian varieties $(X,\lambda)$ such that $\ker(\lambda) \simeq \alpha_p^{2c}$, where $\alpha_p$ is the kernel of the Frobenius morphism on the additive group $\mathbb{G}_a$. Then one also has $\Lambda_{g,p^c}=\Lambda_x$ for any member $x$ in $\Lambda_{g,p^c}$. The case $c=\lfloor g/2 \rfloor$ is called the \emph{non-principal genus}. As shown by Li-Oort \cite{lioort}, both the principal and non-principal genera describe the irreducible components of the supersingular locus $\mathcal{S}_{g,1}$ of $\mathcal{A}_g$. Similarly, the sets $\Lambda_{g,p^c}$ describe the irreducible components of supersingular Ekedahl-Oort (EO) strata in $\mathcal{A}_g$ cf.~\cite{harashita}. The explicit determination of the class number $\vert \Lambda_{g,p^c}\vert$, i.e., the class number problem, is a very difficult task for large $g$, and is still open for $g=3$ and $c=1$. Nevertheless, an explicit calculation of the mass $\mathrm{Mass}(\Lambda_{g,p^c})$ is more accessible and provides a good estimate for the class number. This mass was calculated explicitly by the third author \cite[Theorem 1.4]{yu2} when $g=2c$ and extended to arbitrary $g$ and~$c$ by Harashita \cite[Proposition 3.5.2]{harashita}. In \cite{yuyu}, J.-D. Yu and the third author explicitly calculated the mass formula for $\Mass(\Lambda_x)$ for an arbitrary principally polarised supersingular abelian surface $x=(X_0,\lambda_0)$. In \cite{ibukiyama}, Ibukiyama investigated principal polarisations of a given supersingular non-superspecial abelian surface $X_0$. He explicitly computed the number of polarisations and the mass of the corresponding principally polarised abelian surfaces. 
He also showed agreement with $\vert \Lambda_x \vert$ and $\mathrm{Mass}(\Lambda_x)$ (cf.~\cite[Proposition 3.3 and Theorem 3.6]{ibukiyama}, respectively) for a member $x=(X_0,\lambda_0)$ in $\calS_{2,1}$. As an important arithmetic application, Ibukiyama proved Oort's conjecture that the automorphism group of any generic member is $\{\pm 1\}$ for $p\geq 3$, and he gave a counterexample for $p=2$.\\ Inspired by Ibukiyama's work \cite{ibukiyama}, and as a continuation of \cite{yuyu}, in this paper we completely determine the mass formula for $\mathrm{Mass}(\Lambda_x)$ when $g=3$, and prove Oort's conjecture for $p>2$ as an arithmetic application. To describe our results, we introduce some notation; more details will be given in Sections~\ref{sec:formulae} and~\ref{sec:sslocus}. For any abelian variety $X$ over $k$, the \emph{$a$-number} of $X$ is $a(X):=\mathrm{dim}_k \mathrm{Hom}(\alpha_p, X)$. For abelian threefolds $X$ we have $a(X) \in \{1,2,3\}$; when computing the mass, we will separate into cases based on the $a$-number. Further let $E$ be a supersingular elliptic curve over $\mathbb{F}_{p^2}$ with Frobenius endomorphism $\pi_E=-p$, and let $E_k=E\otimes_{\mathbb{F}_{p^2}} k$. For each integer $c$ with $0\leq c \leq \lfloor g/2 \rfloor$, we denote by $P_{p^c}({E^g_k})$ the set of polarisations $\mu$ on ${E^g_k}$ such that $\mathrm{ker} \mu \simeq \alpha_p^{2c}$; one has $P_{p^c}({E^g_k})=P_{p^c}(E^g)$. As superspecial abelian threefolds are unique up to isomorphism, there is a natural bijection $P_{p^c}({E^g_k}) \simeq \Lambda_{g,p^c}$. Let $\mu$ be a polarisation in $P_1(E_k^3)$. As alluded to above, Li and Oort \cite{lioort} show there is a one-to-one natural correspondence between the set $P_1({E^3_k})$ and the set $\Sigma(\mathcal{S}_{3,1})$ of (geometrically) irreducible components of $\mathcal{S}_{3,1}$.
More precisely, they consider the moduli space $\mathcal{P}_{\mu}$ (resp.~$\mathcal{P}'_{\mu}$) over $\mathbb{F}_{p^2}$ of three-dimensional (resp.~rigid) polarised flag type quotients with respect to $\mu$. This space is an irreducible scheme which comes with a proper projection morphism $\mathrm{pr}_0: \mathcal{P}_{\mu} \to \mathcal{S}_{3,1}$, such that for each principally polarised supersingular abelian threefold $(X,\lambda)$ there exist a $\mu \in P_1(E_k^3)$ and a $y \in \mathcal{P}_{\mu}$ such that $\mathrm{pr}_0(y) = [(X,\lambda)] \in \mathcal{S}_{3,1}$. Let $C\subseteq \mathbb{P}^2$ be the Fermat curve of degree $p+1$ defined by the equation $X_1^{p+1}+X_2^{p+1}+X_3^{p+1}=0$. There exists a natural proper morphism $\pi: \mathcal{P}_{\mu} \to C$ with $\bbP^1$-fibers, and it is shown (cf.~\cite[Section 9.4]{lioort} and Proposition~\ref{prop:explicitmoduli}) that $\mathcal{P}_{\mu}$ is isomorphic to the $\bbP^1$-bundle $\mathbb{P}_{C}(\mathcal{O}(-1)\oplus \mathcal{O}(1))$ over the Fermat curve $C$. Moreover, the morphism $\pi$ has a section $s~:~C\stackrel{\sim}{\longrightarrow} T\subseteq \calP_{\mu}$, cf.~Definition~\ref{def:T}. In particular, for each $k$-point $(X,\lambda)$ in the component $\pr_0(\calP_\mu)$ of $\calS_{3,1}$ and a point $y \in \mathcal{P}_{\mu}(k)$ lying over $(X,\lambda)$, there exists a unique pair $(t,u)$ where $t = (t_1:t_2:t_3) \in C(k)$ and $u = (u_1:u_2) \in \pi^{-1}(t) \simeq \mathbb{P}^1_t(k)$ that characterises $y$. Moreover, we have (cf.~Proposition~\ref{prop:sections}): \begin{enumerate} \item If $y \in T$ then $a(X) = 3$; \item For any $t \in C(k)$, we have $t \in C(\mathbb{F}_{p^2})$ if and only if for any $y \in \pi ^{-1}(t)$ the corresponding threefold $X$ has $a(X) \geq 2$. \item We have $a(X) = 1$ if and only if $y \notin T$ and $\pi (y) \notin C(\mathbb{F}_{p^2})$. \end{enumerate} We are now ready to state our first two main results, computing the mass for any principally polarised supersingular abelian threefold. 
\begin{introtheorem}\label{introthm:a2} (Theorem~\ref{thm:massa2}) Let $x = (X,\lambda)\in \mathcal{S}_{3,1}(k)$ with $a(X)\ge 2$, let $\mu\in P_1(E^3)$, and let $y\in \calP'_\mu(k)$ be such that $\mathrm{pr}_0(y) = [(X,\lambda)]$. Write $y=(t,u)$ where $t=\pi(y)\in C(\mathbb{F}_{p^2})$ and $u\in \pi^{-1}(t) \simeq \mathbb{P}^1_t(k)$. Then \[ \mathrm{Mass}(\Lambda_x)=\frac{L_p}{2^{10}\cdot 3^4\cdot 5\cdot 7}, \] where \[ L_p= \begin{cases} (p-1)(p^2+1)(p^3-1) & \text{if } u\in \mathbb{P}_t^1(\mathbb{F}_{p^2}); \\ (p-1)(p^3+1)(p^3-1)(p^4-p^2) & \text{if } u\in\mathbb{P}_t^1(\mathbb{F}_{p^4})\setminus \mathbb{P}_t^1(\mathbb{F}_{p^2}); \\ 2^{-e(p)}(p-1)(p^3+1)(p^3-1) p^2(p^4-1) & \text{if } u \not\in \mathbb{P}_t^1(\mathbb{F}_{p^4}); \end{cases} \] and $e(p)=0$ if $p=2$, $e(p)=1$ if $p>2$. \end{introtheorem} \begin{introtheorem}\label{introthm:a1} (Theorem~\ref{thm:anumber1}) Let $x = (X,\lambda) \in \mathcal{S}_{3,1}(k)$ be such that $a(X)=1$ and $x\in \pr_0(\calP_\mu)$ for some $\mu \in P_1(E^3)$. Consider an element $y \in \mathcal{P}_{\mu}(k)$ over $x$, which is characterised by the pair $(t,u)$ with $t \in C(k)\setminus C(\mathbb{F}_{p^2})$ and $u \in \mathbb{P}^1_t(k)$. Let $\calD_t$ be as in Definition \ref{def:D}, and let $d(t)$ be as in Definition~\ref{def:dx}. Then \[ \mathrm{Mass}(\Lambda_x)=\frac{p^3 L_p}{2^{10}\cdot 3^4\cdot 5\cdot 7}, \] where \[ \begin{split} L_p = \begin{cases} 2^{-e(p)}p^{2d(t)}(p^2-1)(p^4-1)(p^6-1) & \text{ if } u \notin \calD_t; \\ p^{2d(t)}(p-1)(p^4-1)(p^6-1) & \text{ if } t \notin C(\mathbb{F}_{p^6}) \text{ and } u \in \calD_t; \\ p^6(p^2-1)(p^3-1)(p^4-1) & \text{ if } t \in C(\mathbb{F}_{p^6}) \text{ and } u \in \calD_t. \end{cases} \end{split} \] \end{introtheorem} The mass function on $\calS_{3,1}$ induces a stratification such that the mass function becomes constant on each stratum.
By Theorem~\ref{introthm:a2}, the locus of $\calS_{3,1}$ with $a$-number $\ge 2$ decomposes into three strata: one stratum with $a$-number $3$ and two strata with $a$-number $2$. On the locus with $a$-number $1$, the stratification depends on $p$. When $p\ne 2$, the $d$-invariant takes values in $\{3,4,5,6\}$ and $d(t)=3$ if and only if $t\in C(\mathbb{F}_{p^6})$. In this case, Theorem~\ref{introthm:a1} says that the mass function depends only on the $d$-invariant and whether $u\in \calD_t$ or not, and hence there are eight strata. When $p=2$, the $d$-value $d(t)$ is always $3$ and Theorem~\ref{introthm:a1} gives three strata. Our computations of the automorphism groups can be summarised as follows. \begin{introtheorem}\label{introthm:Oort} Let $x = (X,\lambda) \in \mathcal{S}_{3,1}(k)$ and $\mu \in P_1(E^3)$ so that $x\in \pr_0(\calP_\mu)$. Consider an element $y \in \mathcal{P}_{\mu}$ over~$x$, which is characterised by the pair $(t,u)$ with $t \in C(k)$ and $u \in \mathbb{P}^1_t(k)$. Let $\calD_t$ be as in Definition~\ref{def:D} and let $d(t)$ be as in Definition~\ref{def:dx}. \begin{enumerate} \item (Theorem~\ref{thm:gen_autgp}) Suppose that $a(X) =1$, so that $t \in C(k)\setminus C(\mathbb{F}_{p^2})$. Assume that $(t,u)\not \in \calD$, that is, $u\not\in \calD_t$. \begin{enumerate} \item If $p=2$, then $\Aut(X,\lambda)\simeq C_2^3$. \item If $p\ge 5$, or $p=3$ and $d(t)=6$, then $\Aut(X,\lambda)\simeq C_2$, \end{enumerate} where $C_n$ denotes the cyclic group of order $n$. \item (Theorem~\ref{thm:inD}) Suppose that $a(X) = 1$ and that $(t,u)\in \calD$ with $t\not \in C(\mathbb{F}_{p^6})$. \begin{enumerate} \item If $p=2$, then $\Aut(X,\lambda)\simeq C_2^3 \times C_3$. \item If $p=3$ and $d(t)=6$, then $\Aut(X,\lambda)\in \{C_2,C_4\}$. \item For $p\ge 5$, we have the following cases: \begin{itemize} \item [(i)] If $p\equiv -1 \pmod {4}$, then $\Aut(X,\lambda)\in \{C_2,C_4 \}$. \item [(ii)] If $p\equiv -1 \pmod {3}$, then $\Aut(X,\lambda)\in \{C_2,C_6\}$. 
\item [(iii)] If $p\equiv 1 \pmod {12}$, then $\Aut(X,\lambda)\simeq C_2$. \end{itemize} \end{enumerate} \item (Proposition~\ref{prop:asympt}) Let $\Lambda_{3,1}(C_2):=\{(X,\lambda)\in \Lambda_{3,1}: \Aut(X,\lambda)\simeq C_2\}$ be the set of superspecial principally polarised abelian threefolds satisfying Oort's conjecture. Then \[ \frac{\vert \Lambda_{3,1}(C_2)\vert }{\vert \Lambda_{3,1}\vert }\to 1 \quad \text{as\ $p\to \infty$}. \] \end{enumerate} \end{introtheorem} In particular, Part (1) of Theorem~\ref{introthm:Oort} shows that Oort's conjecture is true precisely for $p \neq 2$. That is, every generic principally polarised supersingular abelian threefold over $k$ of characteristic $\neq 2$ has automorphism group~$C_2$. Schemes in this paper are assumed to be locally Noetherian unless stated otherwise. \\ The organisation of the paper is as follows. Sections~\ref{sec:formulae} and~\ref{sec:sslocus} contain preliminaries, respectively on mass formulae and the structure of the supersingular locus $\mathcal{S}_{3,1}$. In particular, the strategy we will follow in later sections to obtain mass formulae is outlined at the end of Section~\ref{sec:formulae}. Sections~\ref{sec:a2} and~\ref{sec:a1} determine the mass formulae for supersingular abelian threefolds $X$, respectively with $a(X) = 2$ (cf.~Theorem~\ref{introthm:a2}) and $a(X)=1$ (cf.~Theorem~\ref{introthm:a1}). The automorphism groups, as well as the implications for Oort's conjecture, are studied in Section~\ref{sec:Aut} (cf.~Theorem~\ref{introthm:Oort}). The Appendix contains results of independent interest, concerning a set-theoretic intersection arising in Section~\ref{sec:a1}. \section*{Acknowledgements} Parts of this work were carried out when the first author visited the Academia Sinica, and when the first and third authors visited RIMS and Kyoto University. They would like to thank these institutes for their hospitality and excellent working conditions. 
A part of this paper is contained in the second author's master's thesis written at Tohoku University; he thanks his advisor Nobuo Tsuzuki for enlightening comments, advice and encouragement. The authors are grateful to Ming-Lun Hsieh and Akio Tamagawa for useful discussions, and for proving Propositions~\ref{prop:ML} and \ref{prop:akio}, respectively. They would like to thank Tomoyoshi Ibukiyama and Jiangwei Xue for useful discussions and helpful comments on an earlier manuscript, and the anonymous referee for their comments which improved the exposition. The second author is supported by JSPS grants 15J05073 and 19K14501. The third author is partially supported by MoST grants 107-2115-M-001-001-MY2 and 109-2115-M-001-002-MY3. \section{Mass formulae for supersingular abelian varieties}\label{sec:formulae} \subsection{Set-up and notation}\label{ssec:not}\ Throughout the paper, let $p$ be a prime number, let $g$ be a positive integer, and let $k$ be an algebraically closed field of characteristic $p$. The ground field for objects studied is $k$, unless stated otherwise. For a finite set $S$, write $\vert S\vert $ for the cardinality of $S$. Let $\alpha_p$ be the unique $\alpha$-group of order $p$ over ${\bbF}_p$; it is defined to be the kernel of the Frobenius morphism on the additive group $\mathbb G_a$ over ${\bbF}_p$. For a matrix $A=(a_{ij}) \in \Mat_{m\times n}(k)$ and integer $r$, write $A^{(p^r)}:=(a_{ij}^{p^r})$ for the image of $A$ under the $r$th Frobenius map. Denote by $\widehat \mathbb{Z}=\prod_{\ell} \mathbb{Z}_\ell$ the profinite completion of $\mathbb{Z}$ and by $\A_f=\widehat \mathbb{Z}\otimes_{\mathbb{Z}} \mathbb{Q}$ the finite adele ring of $\mathbb{Q}$. \begin{definition}\label{def:Sgpm} For any integer $d\ge 1$, let $\calA_{g,d}$ denote the (coarse) moduli space over $\overline{\bbF}_p$ of $g$-dimensional polarised abelian varieties $(X,\lambda)$ with polarisation degree $\deg \lambda=d^2$. 
For any $m \geq 1$, let $\calS_{g,p^m}$ be the supersingular locus of $\calA_{g,p^m}$, which consists of all polarised supersingular abelian varieties in $\calA_{g,p^m}$. Then $\mathcal{S}_{g,1}$ is the moduli space of $g$-dimensional principally polarised supersingular abelian varieties. Denote $\mathcal{S}_{g,p^*} = \cup_{m\geq 1}\mathcal{S}_{g,p^m}$. \end{definition} \begin{definition}\label{def:Lambdax} (1) If $S$ is a finite set of objects with finite automorphism groups in a specified category, then we define the \emph{mass} of $S$ to be the weighted sum \[ \Mass(S):=\sum_{s\in S} \frac{1}{\vert \Aut(s)\vert }. \] (2) For any $x = (X_0, \lambda_0) \in \mathcal{S}_{g,p^*}(k)$, we define \begin{equation}\label{eq:Lambdaxo} \Lambda_{x} = \{ (X,\lambda) \in \mathcal{S}_{g,p^*}(k) : (X,\lambda)[p^{\infty}] \simeq (X_0, \lambda_0)[p^{\infty}] \}, \end{equation} where $(X,\lambda)[p^{\infty}]$ denotes the polarised $p$-divisible group associated to $(X,\lambda)$. Then $\Lambda_x$ is a finite set; see \cite[Theorem 2.1]{yu}. The \emph{mass} of $\Lambda_x$ is defined as \[ \mathrm{Mass}(\Lambda_{x}) = \sum_{(X,\lambda) \in \Lambda_{x}} \frac{1}{\vert \mathrm{Aut}(X,\lambda)\vert}. \] \end{definition} \subsection{Superspecial mass formulae}\label{ssec:sspmass}\ Recall that a superspecial abelian variety over $k$ is an abelian variety isomorphic to a product of supersingular elliptic curves. \begin{definition}\label{def:Lambda} Let $0 \leq c \leq \lfloor g/2 \rfloor$ be an integer. We define $\Lambda_{g,p^c}$ to be the set of isomorphism classes of $g$-dimensional superspecial polarised abelian varieties $(X, \lambda)$ whose polarisation $\lambda$ satisfies $\ker(\lambda) \simeq \alpha_p^{2c}$. Its mass is \[ \mathrm{Mass}(\Lambda_{g,p^c}) = \sum_{(X,\lambda)\in \Lambda_{g,p^c}} \frac{1}{\vert \mathrm{Aut}(X,\lambda) \vert}. \] \end{definition} In particular, $\Mass(\Lambda_{g,p^c})$ is a special case of $\Mass(\Lambda_x)$, cf.~Definition~\ref{def:Lambdax}. 
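As a toy illustration of the weighted sum in Definition~\ref{def:Lambdax} (the automorphism group orders below are invented for the example), exact rational arithmetic is the natural tool:

```python
from fractions import Fraction

def mass(aut_orders):
    # Mass(S) = sum over s in S of 1 / |Aut(s)|
    return sum(Fraction(1, n) for n in aut_orders)

# e.g. three isomorphism classes with |Aut| = 2, 2, 12
toy = mass([2, 2, 12])   # 13/12
```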
Note that the $p$-divisible group of a superspecial abelian variety of given dimension is unique up to isomorphism. Furthermore, the polarised $p$-divisible group associated to any member in $\Lambda_{g,p^c}$ is unique up to isomorphism, cf.~\cite[Proposition 6.1]{lioort}. Thus, if $x = (X, \lambda)$ is any member in~$\Lambda_{g,p^c}$, then we have $\Lambda_x = \Lambda_{g,p^c}$. \begin{theorem}\label{thm:sspmass} \begin{enumerate} \item For any $g \ge 1$, we have \[ \mathrm{Mass}(\Lambda_{g,1}) = \frac{(-1)^{g(g+1)/2}}{2^g} \prod_{i=1}^{g} \zeta(1-2i) \cdot \prod_{i=1}^g (p^i + (-1)^i). \] \item For any $g \ge 1$ and $0 \leq c \leq \lfloor g/2 \rfloor$, we have \[ \begin{split} \mathrm{Mass}(\Lambda_{g,p^c}) = & \frac{(-1)^{g(g+1)/2}}{2^g} \prod_{i=1}^{g} \zeta(1-2i) \cdot \prod_{i=1}^{g-2c} (p^i + (-1)^i) \cdot \prod_{i=1}^c (p^{4i-2}-1) \\ & \cdot \frac{\prod_{i=1}^g (p^{2i}-1)}{\prod_{i=1}^{2c}(p^{2i}-1)\prod_{i=1}^{g-2c} (p^{2i}-1)}. \end{split} \] \end{enumerate} \end{theorem} \begin{proof} (1) See \cite[p.~159]{ekedahl} and \cite[Proposition 9]{hashimotoibukiyama}. (2) This follows from \cite[Proposition 3.5.2]{harashita} by the functional equation for $\zeta(s)$. See also \cite{yu2} for a geometric proof in the case where $g=2c$. \end{proof} Using the fact that $\zeta(-1)=-1/12, \zeta(-3)=1/120$ and $\zeta(-5)=-1/(42\cdot 6)$, we obtain the following corollary. \begin{corollary}\label{cor:sspmassg3} Let $g=3$. \begin{enumerate} \item If $c=0$, then $\Lambda_{g,p^c} = \Lambda_{3,1}$ consists of all principally polarised superspecial abelian threefolds, and \begin{equation}\label{eq:ppg3ssp} \mathrm{Mass}(\Lambda_{3,1}) = \frac{(p-1)(p^2+1)(p^3-1)}{2^{10} \cdot 3^4 \cdot 5 \cdot 7}. 
\end{equation} \item If $c=1$, then $\Lambda_{g,p^c} = \Lambda_{3,p}$ consists of all polarised superspecial abelian threefolds whose polarisation $\lambda$ has $\ker(\lambda) \simeq \alpha_p \times \alpha_p$, and \begin{equation}\label{eq:npg3ssp} \mathrm{Mass}(\Lambda_{3,p}) = \frac{(p-1)(p^3+1)(p^3-1)}{2^{10} \cdot 3^4 \cdot 5 \cdot 7}. \end{equation} \end{enumerate} \end{corollary} \subsection{From superspecial to supersingular mass formulae}\ For a (not necessarily principally) polarised supersingular abelian variety $x = (X_0, \lambda_0)$ over $k$, let $G_{x}$ be the automorphism group scheme over $\mathbb{Z}$ associated to $x$; for any commutative ring $R$, the group of its $R$-valued points is defined by \begin{equation}\label{eq:aut} G_{x}(R) = \{ g \in (\text{End}(X_0)\otimes _{\mathbb{Z}}R)^{\times} : g^T \lambda_0 g = \lambda_0\}. \end{equation} \begin{definition}\label{def:arithmass} For a connected reductive group $G$ over $\mathbb{Q}$ with finite arithmetic subgroups and an open compact subgroup $U \subseteq G(\mathbb{A}_{f})$, we define its (arithmetic) mass $\mathrm{Mass}(G, U)$ by \begin{align*} \mathrm{Mass}(G, U) = \sum _{i=1}^h\frac{1}{\vert \Gamma_i \vert}, \quad \Gamma_i:=G(\mathbb{Q}) \cap c_iUc_i^{-1}, \end{align*} where $\{c_1, \cdots , c_h\}$ is a set of representatives for the double coset space $G(\mathbb{Q}) \backslash G(\mathbb{A}_{f}) \slash U$. \end{definition} \begin{proposition}\label{prop:geomarithmass} For any object $x = (X_0, \lambda_0) \in \mathcal{S}_{g,p^*}(k)$, there is a natural bijection of pointed sets \begin{align*} \Lambda_x \simeq G_{x}(\mathbb{Q}) \backslash G_{x}(\mathbb{A}_{f}) \slash G_{x}(\widehat{\mathbb{Z}}). \end{align*} Moreover, if $(X,\lambda)$ is a member of $\Lambda_x$ which corresponds to the class $[c]$ under the bijection, then $\mathrm{Aut}(X,\lambda) \simeq G_{x}(\mathbb{Q}) \cap cG_{x}(\widehat{\mathbb{Z}})c^{-1}$.
In particular, we have \begin{align*} {\rm Mass}(\Lambda_{x}) = {\rm Mass}(G_{x}, G_{x}(\widehat{\mathbb{Z}})), \end{align*} cf. Definition \ref{def:Lambdax}. \end{proposition} \begin{proof} See \cite[Theorems 2.2 and 4.6]{yu3}. Also see \cite[Proposition 2.1]{yuyu} for a proof sketch. \end{proof} \begin{definition}\label{def:mu} Let $U_1,U_2$ be two open compact subgroups of $G_{x}(\mathbb{A}_{f})$. Then we define \[ \mu(U_1/U_2) = \frac{[U_1 : U_1 \cap U_2]}{[U_2 : U_1 \cap U_2]}. \] \end{definition} Interpreting the mass from Definition \ref{def:arithmass} as the volume of a fundamental domain, with notation as above, we have the following lemma. \begin{lemma}\label{lem:massesU1U2} Let $U_1,U_2$ be two open compact subgroups of $G_{x}(\mathbb{A}_{f})$. Then their (arithmetic) masses compare as \[ \mathrm{Mass}(G_{x}, U_2) = \mu (U_1 / U_2){\rm Mass}(G_{x}, U_1). \] \end{lemma} \begin{lemma}\label{lem:minisog} Let $X$ be a supersingular abelian variety over $k$. Then there exists a pair $(Y,\varphi)$, where $Y$ is a superspecial abelian variety and $\varphi: Y\to X$ is an isogeny, such that for any pair $(Y',\varphi')$ as above there exists a unique isogeny $\rho: Y'\to Y$ such that $\varphi'=\varphi\circ \rho$. Dually, there exists a pair $(Z,\gamma)$, where $Z$ is a superspecial abelian variety and $\gamma: X\to Z$ is an isogeny, such that for any pair $(Z',\gamma')$ as above there exists a unique isogeny $\rho: Z\to Z'$ such that $\gamma'=\rho\circ \gamma$. \end{lemma} \begin{proof} See \cite[Lemma 1.8]{lioort}; also see \cite[Corollary 4.3]{yu:mrl2010} for an independent proof. The proof of \cite[Lemma 1.8]{lioort} contains a gap; see Remark~\ref{countexample:miniso} for a counterexample to the argument. \end{proof} \begin{definition}\label{def:minisog} Let $X$ be a supersingular abelian variety over $k$. We call the pair $(Y,\varphi:Y\to X)$ or the pair $(Z,\gamma:X\to Z)$ as in Lemma~\ref{lem:minisog} \emph{the minimal isogeny} of $X$.
\end{definition} \begin{proposition}\label{prop:compmass} Let $x = (X,\lambda)\in \mathcal{S}_{g,p^*}(k)$ and let $\varphi: \tilde{X} \to X$ be the minimal isogeny of $X$. Put $\tilde{x} = (\tilde{X},\tilde{\lambda})$, where $\tilde{\lambda}:=\varphi^* \lambda$. Let $(M,\langle\, , \rangle), (\tilde{M}, \langle\, , \rangle)$ denote the quasi-polarised (contravariant) Dieudonn{\'e} module of $X, \tilde{X}$, respectively. Then $\varphi$ induces an injective map $\varphi^*: \End(X[p^\infty])\hookrightarrow \End(\tilde{X}[p^\infty])$, or equivalently $\varphi^*: \End(M)\hookrightarrow \End(\tilde{M})$, and we have \begin{equation}\label{eq:compmassaut} \begin{split} \mathrm{Mass}(\Lambda_x) &= [\mathrm{Aut}((\tilde{X},\tilde{\lambda})[p^{\infty}]): \mathrm{Aut}((X,\lambda)[p^{\infty}])] \cdot \mathrm{Mass}(\Lambda_{\tilde{x}}) \\ &= [\mathrm{Aut}(\tilde{M}, \langle\, , \rangle): \mathrm{Aut}(M,\langle\, , \rangle)] \cdot \mathrm{Mass}(\Lambda_{\tilde{x}}). \end{split} \end{equation} Here the injective map $\varphi^*$ yields the inclusion map $\mathrm{Aut}(M,\langle\, , \rangle) \subseteq \mathrm{Aut}(\tilde{M}, \langle\, , \rangle)$. \end{proposition} \begin{proof} This may be regarded as a refinement of \cite[Theorem 2.7]{yu}. Through the isogeny $\varphi$, we may view $G_{\tilde{x}}(\widehat{\mathbb{Z}})$ and $\varphi^*G_x(\widehat{\mathbb{Z}})$ as open compact subgroups of the same group $G_{\tilde{x}}(\mathbb{A}_f)$. Using Proposition \ref{prop:geomarithmass} and Lemma \ref{lem:massesU1U2}, we see that \[ \begin{split} \mathrm{Mass}(\Lambda_x) &= \mu(G_{\tilde{x}}(\widehat{\mathbb{Z}}) / \varphi^* G_{x}(\widehat{\mathbb{Z}})) \mathrm{Mass}(\Lambda_{\tilde{x}}) \\ &= \frac{[G_{\tilde{x}}(\widehat{\mathbb{Z}}):G_{\tilde{x}}(\widehat{\mathbb{Z}}) \cap \varphi^* G_{x}(\widehat{\mathbb{Z}})]}{[\varphi^* G_{x}(\widehat{\mathbb{Z}}):G_{\tilde{x}}(\widehat{\mathbb{Z}}) \cap \varphi^* G_{x}(\widehat{\mathbb{Z}})]} \mathrm{Mass}(\Lambda_{\tilde{x}}). 
\end{split} \] Note that $G_{\tilde{x}}(\widehat{\mathbb{Z}})$ and $\varphi^* G_{x}(\widehat{\mathbb{Z}})$ differ only at $p$. By \cite[Proposition 4.8]{yu:mrl2010}, every endomorphism of $X[p^\infty]$ lifts uniquely to an endomorphism of $\tilde{X}[p^\infty]$. This shows the injectivity of the map $\varphi^*: \End(X[p^\infty]) \to \End(\tilde{X}[p^\infty])$. Therefore, we have the inclusion $G_x(\mathbb{Z}_p) = \mathrm{Aut}((X,\lambda)[p^{\infty}])\hookrightarrow G_{\tilde{x}}(\mathbb{Z}_p)= \mathrm{Aut}((\tilde{X},\tilde{\lambda})[p^{\infty}])$ via $\varphi^*$ and obtain the first equality in Equation \eqref{eq:compmassaut}. By Dieudonn{\'e} module theory, for any polarised supersingular abelian variety $(X,\lambda)$ with quasi-polarised Dieudonn{\'e} module $(M,\langle\, , \rangle)$, we may identify $\mathrm{Aut}((X,\lambda)[p^{\infty}])$ with $\mathrm{Aut}(M,\langle\, , \rangle)$. This yields the second equality in Equation \eqref{eq:compmassaut}. \end{proof} To summarise, the results of this section provide the following strategy for obtaining a mass formula for any principally polarised supersingular abelian variety: \begin{itemize} \item[(a)] For any supersingular abelian variety $x = (X,\lambda)$, construct the minimal isogeny \newline $\varphi:(\tilde{X},\tilde{\lambda}) \to (X,\lambda)$ from a suitable superspecial abelian variety $\tilde{x} = (\tilde{X},\tilde{\lambda})$. \item[(b)] Use Theorem~\ref{thm:sspmass} (or Corollary~\ref{cor:sspmassg3} if $g=3$) to compute $\mathrm{Mass}(\Lambda_{\tilde{x}})$. \item[(c)] Compute the local index $[\mathrm{Aut}(\tilde{M},\langle\, , \rangle) : \mathrm{Aut}(M,\langle\, , \rangle)]$, cf.~\eqref{eq:compmassaut}. \item[(d)] Compute $\mathrm{Mass}(\Lambda_x)$, i.e., compare $\Mass(\Lambda_{\tilde x})$ and $\Mass(\Lambda_x)$ by applying Proposition~\ref{prop:compmass}. \end{itemize} We will carry out these steps, in particular Step~(c), in the next sections in the case where $g=3$.
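Step~(b) can be double-checked numerically. The following snippet (an illustrative sanity check, not part of the paper's arguments; the function name is ours) verifies with exact rational arithmetic that the general formula of Theorem~\ref{thm:sspmass} for $g=3$ reproduces the closed forms of Corollary~\ref{cor:sspmassg3} for small primes, using the special values $\zeta(-1)=-1/12$, $\zeta(-3)=1/120$, $\zeta(-5)=-1/252$:

```python
from fractions import Fraction as F

# Special zeta values zeta(1 - 2i) for i = 1, 2, 3.
ZETA = {1: F(-1, 12), 2: F(1, 120), 3: F(-1, 252)}

def superspecial_mass(p, g=3, c=0):
    """Superspecial mass formula of Theorem (2), coded here for g = 3:
    mass of Lambda_{g, p^c}, polarisation kernel alpha_p^{2c}."""
    val = F((-1) ** (g * (g + 1) // 2), 2 ** g)
    for i in range(1, g + 1):
        val *= ZETA[i]
    for i in range(1, g - 2 * c + 1):
        val *= p ** i + (-1) ** i
    for i in range(1, c + 1):
        val *= p ** (4 * i - 2) - 1
    for i in range(1, g + 1):
        val *= p ** (2 * i) - 1
    for i in range(1, 2 * c + 1):
        val /= p ** (2 * i) - 1
    for i in range(1, g - 2 * c + 1):
        val /= p ** (2 * i) - 1
    return val

# Common denominator of the two closed forms in the corollary (g = 3).
D = 2 ** 10 * 3 ** 4 * 5 * 7  # = 2903040

for p in (2, 3, 5, 7, 11, 13):
    assert superspecial_mass(p, c=0) == F((p - 1) * (p**2 + 1) * (p**3 - 1), D)
    assert superspecial_mass(p, c=1) == F((p - 1) * (p**3 + 1) * (p**3 - 1), D)
```

For $c=1$ the agreement rests on the factorisation $p^6-1=(p^3+1)(p^3-1)$, which is how the ratio of the $\prod(p^{2i}-1)$ factors collapses.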
In the next section, we start by studying in detail the moduli space $\mathcal{S}_{3,1}$ of supersingular principally polarised abelian threefolds and the minimal isogenies (cf. Definition \ref{def:minisog}) between threefolds. \section{Structure of the supersingular locus $\mathcal{S}_{3,1}$} \label{sec:sslocus} In this section we describe the supersingular locus $\mathcal{S}_{3,1}$. Its structure will be used to determine minimal isogenies, cf.~Proposition~\ref{prop:miniso}. Finer structures will be introduced in order to compute the local index in Step (c) in the previous section. \subsection{The supersingular locus $\boldsymbol{\mathcal{S}_{g,1}}$ and the mass function}\label{ssec:mod} \ To describe the moduli space $\mathcal{S}_{3,1}$ of supersingular principally polarised abelian threefolds, we will use the framework of polarised flag type quotients (for $g=3$) as developed by Li and Oort \cite{lioort}, which we will briefly describe below (for any $g\ge 1$). Then we will introduce the stratification of $\calS_{g,1}$ induced by the mass values and its local analogue. For any abelian variety $X$, denote by $P(X)$ the set of isomorphism classes of principal polarisations on $X$. Let $E/\mathbb{F}_{p^2}$ be a supersingular elliptic curve whose Frobenius endomorphism is $\pi_E = -p$ and denote $E_k = E \otimes_{\mathbb{F}_{p^2}} k$. Since every polarisation on ${E^g_k}$ is defined over $\mathbb{F}_{p^2}$, we may identify $P({E_k^g})$ with $P(E^g)$. Recall that an $\alpha$-group of rank $r$ over an ${\bbF}_p$-scheme $S$ is a finite flat group scheme over $S$ which is Zariski-locally isomorphic to $\alpha_p^r$. For a scheme $X$ over $S$, put $X^{(p)}:=X\times_{S,F_S} S$, where $F_S:S\to S$ denotes the absolute Frobenius morphism on $S$, and denote by $F_{X/S}:X\to X^{(p)}$ the relative Frobenius morphism. 
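To fix ideas, recall the standard affine example of this notation (a classical fact, recorded only for concreteness): if $k_0$ is an ${\bbF}_p$-algebra, $S={\rm Spec}\, k_0$, and $X={\rm Spec}\, k_0[x,y]/(f)$ with $f=\sum a_{ij}x^i y^j$, then \[ X^{(p)}={\rm Spec}\, k_0[x,y]/(f^{(p)}), \qquad f^{(p)}=\sum a_{ij}^p\, x^i y^j, \] and the relative Frobenius $F_{X/S}$ is induced by $x\mapsto x^p$, $y\mapsto y^p$; this is well defined since $f^{(p)}(x^p,y^p)=f(x,y)^p$ in characteristic $p$. In particular, if $X$ is defined over ${\bbF}_p$, then $X^{(p)}=X$.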
For each integer $i\ge 0$, let $P(E^g,i)$ be the set of isomorphism classes of polarisations $\lambda$ on $E^g$ such that $\ker \lambda=E^g[\mathsf{F}^{i}]$ with $\mathsf{F}=F_{E/\mathbb{F}_{p^2}}$ and set $P^*(E^g):=P(E^g, g-1)$. The map $\lambda\mapsto p^{\lfloor (g-1)/2\rfloor} \lambda$ gives a bijection $P(E^g)\stackrel{\sim}{\longrightarrow} P^*(E^g)$ if $g$ is odd and $P(E^g,1)\stackrel{\sim}{\longrightarrow} P^*(E^g)$ otherwise. Moreover, the map $\lambda\mapsto (E^g_k, \lambda)$ gives a bijection $P(E^g)\stackrel{\sim}{\longrightarrow} \Lambda_{g,1}$ when $g$ is odd and $P(E^g,1)\stackrel{\sim}{\longrightarrow} \Lambda_{g,p^c}$ when $g=2c$ is even. Thus, \begin{equation} \label{eq:P*Eg} P^*(E^g)\simeq \begin{cases} \Lambda_{g,1}, & \text{if $g$ is odd}; \\ \Lambda_{g,p^c}, & \text{if $g=2c$ is even.} \end{cases} \end{equation} It is known that $|\Lambda_{g,1}|=H_g(p,1)$ for any positive integer $g$ and $|\Lambda_{g,p^c}|=H_g(1,p)$ for any even positive integer $g=2c$, where $H_{g}(p,1)$ (resp.~$H_g(1,p)$) is the class number of the principal genus (resp.~the non-principal genus); see \cite{lioort} for details. \begin{definition} (cf.~\cite[Section 3]{lioort}) \begin{enumerate} \item Let $g\ge 1$ be an integer.
For any $\mu \in P^*(E^g)$, a \emph{$g$-dimensional polarised flag type quotient (PFTQ)} with respect to $\mu$ is a chain of $g$-dimensional polarised abelian schemes over a base $\mathbb{F}_{p^2}$-scheme $S$ \[ (Y_\bullet,\rho_\bullet):(Y_{g-1},\lambda_{g-1}) \xrightarrow{\rho_{g-1}} (Y_{g-2},\lambda_{g-2}) \xrightarrow{\rho_{g-2}} \cdots \xrightarrow{\rho_2} (Y_{1},\lambda_{1})\xrightarrow{\rho_1} (Y_0, \lambda_0),\] such that: \begin{itemize} \item [(i)] $(Y_{g-1},\lambda_{g-1}) = ({E^g}, \mu)\times_{{\rm Spec}\, \mathbb{F}_{p^2}} S$; \item [(ii)] $\ker(\rho_i)$ is an $\alpha$-group of rank $i$ for $1\le i\le g-1$; \item [(iii)] $\ker(\lambda_i) \subseteq \ker (\mathsf{V}^j \circ \mathsf{F}^{i-j})$ for $0\le i\le g-1$ and $0\le j\le \lfloor i/2 \rfloor$, where $\mathsf{F}=F_{Y_i/S}: Y_i\to Y_i^{(p)}$ and $\mathsf{V}=V_{Y_i/S}:Y_i^{(p)}\to Y_i$ are the relative Frobenius and Verschiebung morphisms, respectively. \end{itemize} In particular, $\lambda_0$ is a principal polarisation on $Y_0$. An isomorphism of $g$-dimensional polarised flag type quotients is a chain of isomorphisms $(\alpha_i)_{0\le i \le g-1}$ of polarised abelian varieties such that $\alpha_{g-1}={\rm id}_{Y_{g-1}}$. \item A $g$-dimensional polarised flag type quotient $(Y_\bullet,\rho_\bullet)$ is said to be \emph{rigid} if \[ \ker(Y_{g-1}\to Y_i)=\ker (Y_{g-1}\to Y_0)\cap Y_{g-1}[\mathsf{F}^{g-1-i}],\quad \text{for $1\le i \le g-1$}, \] where $Y_{g-1}[\mathsf{F}^{g-1-i}]:=\ker (\mathsf{F}^{g-1-i}:{Y_{g-1}}\to Y_{g-1}^{(p^{g-1-i})})$. \item Let $\mathcal{P}_{g,\mu}$ (resp.~$\calP'_{g,\mu}$) denote the moduli space over $\mathbb{F}_{p^2}$ of $g$-dimensional (resp.~rigid) polarised flag type quotients with respect to $\mu$. \end{enumerate} \end{definition} Clearly, each member $Y_i$ of $(Y_\bullet,\rho_\bullet)$ is a supersingular abelian variety. \begin{definition}\label{def:anumber} For an abelian variety $X$ over $k$, its $a$-number is defined as \[ a(X) := \dim_k \mathrm{Hom}(\alpha_p,X). 
\] The $a$-number of a Dieudonn\'{e} module $M$ over $k$ is defined as $a(M) := \dim(M/(\mathsf{F},\mathsf{V})M)$. If $M$ is the Dieudonn{\'e} module of $X$, then $a(M) = a(X)$. When $x \in \calP_{g,\mu}$ corresponds to a polarised flag type quotient $(Y_{g-1},\lambda_{g-1}) \to \cdots \to (Y_1,\lambda_1) \to (Y_0, \lambda_0)$, we say that its $a$-number is $a(x) = a(Y_0)$. \end{definition} According to \cite[Lemma 3.7]{lioort}, $\mathcal{P}_{g,\mu}$ is a projective scheme over $\mathbb{F}_{p^2}$ and $\calP'_{g,\mu}\subset \calP_{g,\mu}$ is an open subscheme. Thus, $\calP'_{g,\mu}$ is a quasi-projective scheme over $\mathbb{F}_{p^2}$. The projection to the last member gives a proper $\overline{\mathbb{F}}_p$-morphism \begin{align*} \mathrm{pr}_0 : \mathcal{P}_{g,\mu,\overline{\bbF}_p} & \to \mathcal{S}_{g,1}, \\ (Y_\bullet,\rho_\bullet) & \mapsto (Y_0, \lambda_0). \end{align*} \begin{theorem}[Li-Oort]\ \begin{enumerate} \item The natural morphism \begin{equation}\label{eq:moduli} \mathrm{pr}_0: \coprod _{\mu \in P^*(E^g)}\mathcal{P'}_{g,\mu, \overline{\bbF}_p} \rightarrow \mathcal{S}_{g,1} \end{equation} is quasi-finite and surjective. \item For every $\mu\in P^*(E^g)$, the scheme $\calP'_{g,\mu}$ is non-singular and geometrically irreducible of dimension $\lfloor g^2/4\rfloor$. Moreover, the $a$-number $1$ locus $\calP'_{g,\mu}(a=1)$ is open and dense in $\calP'_{g,\mu}$. \item The morphism $\mathrm{pr}_0$ induces a surjective birational morphism \begin{equation}\label{eq:birational} \mathrm{pr}_0: \coprod _{\mu \in P^*(E^g)}\mathcal{P'}_{g,\mu, \overline{\bbF}_p}/G_\mu \rightarrow \mathcal{S}_{g,1}, \end{equation} where $G_\mu:=\Aut(E^g, \mu)$ is the automorphism group of $(E^g, \mu)$. Moreover, it induces an isomorphism on the $a$-number $1$ loci: \begin{equation}\label{eq:a=1loci} \mathrm{pr}_0: \coprod _{\mu \in P^*(E^g)}\mathcal{P'}_{g,\mu, \overline{\bbF}_p}(a=1) /G_\mu \stackrel{\sim}{\longrightarrow} \mathcal{S}_{g,1}(a=1).
\end{equation} \item The supersingular locus $\calS_{g,1}$ is equidimensional of dimension $\lfloor g^2/4\rfloor$. The $a$-number $1$ locus $\calS_{g,1}(a=1)$ is open and dense in $\calS_{g,1}$. It has \begin{equation} \label{eq:classnumber} \begin{cases} H_g(p,1), & \text{for odd integer $g$}; \\ H_g(1,p), & \text{for even integer $g$} \end{cases} \end{equation} geometrically irreducible components. \end{enumerate} \end{theorem} \begin{proof} See \cite[Section 4]{lioort}. \end{proof} Note that $\calP_{3,\mu}'\subset \calP_{3,\mu}$ is dense, while for general $g$ the open subscheme $\calP_{g,\mu}'\subset \calP_{g,\mu}$ is no longer dense, cf.~\cite[Section 9.6]{lioort}. \begin{definition}\ \begin{enumerate} \item Let $k$ be an algebraically closed field of characteristic\ $p>0$ and let \[ \Mass: \calS_{g,1}(k)\to \mathbb{Q}, \quad x\mapsto \Mass(x):=\Mass(\Lambda_x) \] be the mass function. For each mass value $r\in \mathbb{Q}$, i.e. $r=\Mass(x)$ for some point $x\in \calS_{g,1}(k)$, define a subset \begin{equation} \label{eq:massstratum} \calS_{g,1,r}:=\{x\in \calS_{g,1}(k): \Mass(x)=r \}. \end{equation} Then we have a decomposition of the supersingular locus into subsets \begin{equation} \label{eq:massstratification} \calS_{g,1}(k)=\coprod_{r} \calS_{g,1,r}, \end{equation} where $r$ runs through all mass values. Each subset $\calS_{g,1,r}$ is called \emph{the mass stratum with mass value $r$}, and the decomposition \eqref{eq:massstratification} is called the \emph{mass stratification} of $\calS_{g,1}(k)$. \item For each $\mu\in P^*(E^g)$, consider the pull-back of the mass function on $\calS_{g,1}(k)$ by $\mathrm{pr}_0$. We obtain the mass function on $\calP_{g,\mu}(k)$: \[ \Mass: \calP_{g,\mu}(k)\to \mathbb{Q}, \quad y\mapsto \Mass(y):=\Mass(\Lambda_{\mathrm{pr_0}(y)}). 
\] Similarly, we define the mass stratum $\calP_{g,\mu,r}$ for each mass value $r\in \mathbb{Q}$ as in \eqref{eq:massstratum} and obtain a decomposition of $\calP_{g,\mu}(k)$ into mass strata: \begin{equation} \label{eq:massstratification_calP} \calP_{g,\mu}(k)=\coprod_{r} \calP_{g,\mu,r}, \end{equation} called the \emph{mass stratification} of $\calP_{g,\mu}(k)$. \end{enumerate} \end{definition} When $g=1$, the supersingular locus $\calS_{1,1}$ consists of one mass stratum. When $g=2$, there are three mass strata: one stratum with $a$-number $2$ and two strata with $a$-number $1$. Each mass stratum is a locally closed subset and the collection of mass strata satisfies the stratification property, namely, the closure of each stratum is the union of some strata, cf.~\cite{yuyu}. When $g=3$, we will see again from our computation that each mass stratum is a locally closed subset on both $\calP_{3,\mu}$ and $\calS_{3,1}$. However, the collection of mass strata does not satisfy the stratification property on $\calP_{3,\mu}$ (because the structure morphism $\pi: \mathcal{P}_{3,\mu} \to C$ constructed in Proposition~\ref{prop:explicitmoduli} admits a section $T$, which will be formally introduced in Definition~\ref{def:T}) but it does on its open dense subscheme $\calP'_{3,\mu}=\calP_{3,\mu}-T$. We expect that every mass stratum is a locally closed subset for general $g$. The mass stratification encodes arithmetic information (automorphism groups and endomorphism rings) of supersingular abelian varieties. For example, we will see in Section~\ref{sec:Aut} that the automorphism groups of supersingular abelian threefolds jump only when the objects cross different mass strata.
Since arithmetic properties generally do not respect geometric properties, we are less optimistic that the collection of mass strata of $\calP'_{g,\mu}$ satisfies the stratification property.\\ Now we introduce a local analogue of the mass stratification where the underlying space $\calS_{g,1}$ is replaced with the moduli space of supersingular $p$-divisible groups, namely, the supersingular Rapoport-Zink space. Fix a $g$-dimensional principally polarised superspecial abelian variety $x_0=(X_0,\lambda_{X_0})$ over $\overline{\bbF}_p$, and let $\underline \bfX_0= (\bfX_0,\lambda_{\bfX_0})=(X_0,\lambda_{X_0})[p^\infty]$ be the associated principally polarised $p$-divisible group. Let $\calM_{\overline{\bbF}_p}^0$ be the Rapoport-Zink space over $\overline{\bbF}_p$ classifying principally polarised quasi-isogenies of $(\bfX_0,\lambda_{\bfX_0})$ of height $0$. For each $\overline{\bbF}_p$-scheme $S$, $\calM^0_{\overline{\bbF}_p}(S)$ is the set of isomorphism classes of pairs $(\underline \bfX, \rho)_S$, where \begin{itemize} \item [(i)] $\underline \bfX=(\bfX,\lambda_{\bfX})$ is a principally polarised $p$-divisible group over $S$; \item [(ii)] $\rho:\bfX_0 \to \bfX$ is a quasi-isogeny over $S$ such that $\rho^* \lambda_{\bfX}=\lambda_{\bfX_0}$. \end{itemize} Two pairs $(\underline \bfX_1, \rho_1)$ and $(\underline \bfX_2, \rho_2)$ are isomorphic if there exists an isomorphism $\alpha:\bfX_1\stackrel{\sim}{\longrightarrow} \bfX_2$ such that $\alpha\circ \rho_1=\rho_2$. One easily sees $\alpha^* \lambda_{\bfX_2}=\lambda_{\bfX_1}$. The Rapoport-Zink space $\calM^0_{\overline{\bbF}_p}$ is a scheme locally of finite type over $\overline{\bbF}_p$, cf.~\cite[Theorem 3.25 and Corollary 2.29]{rapoport-zink}. Let $G_{\underline \bfX_0}$ be the automorphism group scheme of $\underline \bfX_0$ over ${\bbZ}_p$. 
The group $G_{\underline \bfX_0}({\bbQ}_p)$ of ${\bbQ}_p$-valued points consists of polarised quasi-self-isogenies of $\underline \bfX_0$ over $k$; it is a locally compact topological group. Choose a Haar measure on $G_{\underline \bfX_0}({\bbQ}_p)$ with volume one on the maximal open compact subgroup $G_{\underline \bfX_0}({\bbZ}_p)=\Aut(\underline \bfX_0)$. For each $k$-valued point $\bfx=(\underline \bfX, \rho)\in \calM^0_{\overline{\bbF}_p}(k)$, we may regard its automorphism group $\Aut(\underline \bfX)$ as an open compact subgroup of $G_{\underline \bfX_0}({\bbQ}_p)$ by inclusion: \[ \rho^*:\Aut(\underline \bfX)\hookrightarrow G_{\underline \bfX_0}({\bbQ}_p), \quad h\mapsto \rho^{-1}\circ h\circ \rho. \] \begin{definition} Let the notation be as above. Define a function on $\calM^0_{\overline{\bbF}_p}(k)$ by \begin{equation} \label{eq:vfunction} v: \calM^0_{\overline{\bbF}_p}(k) \to \mathbb{Q}, \quad \bfx=(\underline \bfX,\rho)\mapsto v(\bfx):=\mathrm{vol}(\rho^*(\Aut(\underline \bfX)))^{-1}. \end{equation} For each $v$-value $r\in \mathbb{Q}$, that is, $r=v(\bfx)$ for some $\bfx\in \calM^0_{\overline{\bbF}_p}(k)$, consider the subset \[ \calM^0_r:=\{\bfx\in \calM^0_{\overline{\bbF}_p}(k): v(\bfx)=r\}, \] on which the function $v$ takes the value $r$, called \emph{the $v$-stratum with $v$-value $r$}. The Rapoport-Zink space then decomposes into subsets: \[ \calM^0_{\overline{\bbF}_p}(k)=\coprod_{r} \calM^0_r, \] where $r$ runs through all $v$-values in $\mathbb{Q}$, called the $v$-stratification of $\calM^0_{\overline{\bbF}_p}(k)$. Observe that the collection of $v$-strata is independent of the choice of the Haar measure on $G_{\underline \bfX_0}({\bbQ}_p)$, as the function $v'$ associated to a different Haar measure is just a scalar multiple of $v$. \end{definition} Let \[ \widetilde \pi: \calM^0_{\overline{\bbF}_p} \to \calS_{g,1} \] be the Rapoport-Zink uniformisation morphism, cf.~\cite[6.13]{rapoport-zink}.
\begin{proposition}\label{massandv} The stratification of $\calM^0_{\overline{\bbF}_p}(k)$ obtained by the pull-back of the mass stratification of $\calS_{g,1}(k)$ by $\widetilde \pi$ coincides with the $v$-stratification. \end{proposition} \begin{proof} We compare the functions $v$ and $\widetilde \pi^* \Mass=\Mass \circ \widetilde \pi$. Let $\bfx=(\underline \bfX, \rho)$ be a $k$-valued point in $\calM^0_{\overline{\bbF}_p}(k)$. Then $\bfx$ lifts to a pair $((X,\lambda_X),\widetilde \rho)$ of a principally polarised supersingular abelian variety $(X,\lambda_X)$ and a polarised quasi-isogeny $\widetilde \rho:(X_0,\lambda_{X_0})\to (X,\lambda_X)$. By the construction of \cite[6.13]{rapoport-zink}, the map $\widetilde \pi$ sends $\bfx$ to $x:=(X,\lambda_X)$. Using Proposition \ref{prop:geomarithmass} and Lemma \ref{lem:massesU1U2}, we see that \[ \begin{split} \Mass(x)=\mathrm{Mass}(\Lambda_x) &= \frac{\mathrm{vol}(G_{x_0}({\bbZ}_p))}{\mathrm{vol}(\widetilde \rho^*(G_{x}({\bbZ}_p)))} \mathrm{Mass}(\Lambda_{x_0}) \\ &= \frac{\mathrm{Mass}(\Lambda_{x_0})}{\mathrm{vol}(\rho^*(\Aut(\bfX, \lambda_{\bfX})))}=\Mass(x_0)\cdot v(\bfx). \end{split} \] Thus, $\widetilde \pi^* \Mass (\bfx)=\Mass(x_0) \cdot v(\bfx)$ for $\bfx\in \calM^0_{\overline{\bbF}_p}(k)$ and the assertion follows. \end{proof} \subsection{The structure of $\calS_{3,1}$}\ Hereafter we will only treat the case where $g=3$. For brevity, we write $\calP_{\mu}$ and $\calP'_{\mu}$ for $\calP_{3,\mu}$ and $\calP'_{3,\mu}$, respectively. Roughly speaking, Equation \eqref{eq:moduli} says that each $\mathcal{P}_{\mu }$ approximates an irreducible component of the supersingular locus $\mathcal{S}_{3,1}$. More precisely, one can show the following structure results; for more details, we refer to \cite[Sections 9.3-9.4]{lioort}. Let $C \subseteq \mathbb{P}^2$ be the Fermat curve defined by the equation $X_{1}^{p+1}+X_{2}^{p+1}+X_{3}^{p+1} = 0$. 
\begin{proposition}\label{prop:explicitmoduli} The Fermat curve $C$ can be interpreted as the classifying space of isogenies $(Y_2, \lambda_2) \to (Y_1,\lambda_1)$ whose kernel is locally isomorphic to $\alpha_p^2$. Moreover, there is an isomorphism $\mathcal{P}_{\mu} \simeq \mathbb{P}_{C}(\mathcal{O}(-1)\oplus \mathcal{O}(1))$ for which the structure morphism $\pi : \mathbb{P}_{C}(\mathcal{O}(-1)\oplus \mathcal{O}(1)) \to C$ corresponds to the forgetful map $((Y_2,\lambda_2) \to (Y_1,\lambda_1) \to (Y_0, \lambda_0)) \mapsto ((Y_2,\lambda_2) \to (Y_1,\lambda_1))$. \end{proposition} \begin{proof} Let $M_2$ be the polarised contravariant Dieudonn{\'e} module of $Y_2$. Choosing an isogeny $\rho_2$ from ${E^3_k}$ such that $\ker (\rho_2) \simeq \alpha_p^2$ is equivalent to choosing a surjection of Dieudonn{\'e} modules $M_2 \to k^2$. Since Frobenius $\mathsf{F}$ and Verschiebung $\mathsf{V}$ act as zero on $k^2$, this is further equivalent to choosing a one-dimensional subspace of the three-dimensional (since $a(Y_2)=3$) $k$-vector space $M_2/(\mathsf{F}, \mathsf{V})M_2$ which corresponds to a point $(t_1:t_2:t_3) \in \mathbb{P}^2 = \mathbb{P}((M_2/(\mathsf{F}, \mathsf{V})M_2)^{\ast})$. The polarisation $\lambda_2=p\mu$ descends to a polarisation $\lambda_1$ on $Y_1$ through such $\rho_2$, and the condition $\ker (\lambda_1) \subseteq Y_1[\mathsf{F}]$ is equivalent to the condition \begin{align*} t_1^{p+1}+t_2^{p+1}+t_3^{p+1} = 0, \end{align*} which describes the Fermat curve $C$ of degree $p+1$ in $\mathbb{P}^2$. For precise computations, we refer to \cite{katsuraoort}. Let $M_1$ be the polarised Dieudonn{\'e} module of $Y_1$: the polarisation $\lambda_1$ induces a quasi-polarisation $D(\lambda_1) \colon M_1^{\vee} \to M_1$, and we regard $M_1^{\vee}$ as a submodule of $M_1$ under this injection.
One has the inclusions $M_1^\vee \subset \mathsf{V} M_2 \subset M_1$ as $\mathsf{V} M_2$ is self-dual with respect to the quasi-polarisation induced by $\lambda_1$ and $\mathsf{V} M_2=(\mathsf{F},\mathsf{V})M_2 \subset M_1$. Choosing a second isogeny $(Y_1,\lambda_1) \to (Y_0, \lambda_0)$ is equivalent to choosing a one-dimensional subspace of the two-dimensional vector space $M_1/M_1^{\vee}$. Thus each fibre of the structure morphism $\pi \colon \mathcal{P}_{\mu} \to C $ is isomorphic to $\mathbb{P}((M_1/M_1^{\vee})^{\ast}) \simeq \mathbb{P}^1$ and this fibration corresponds to a rank two vector bundle $\calV$ on $C$. The canonical one-dimensional space $(\mathsf{F}, \mathsf{V})M_2/M_1^{\vee} \subseteq M_1/M_1^{\vee}$ defines a section $s$ of $\pi \colon \mathcal{P}_{\mu} \to C $ and corresponds to a surjection $\mathcal{V} \to \mathcal{O}(-1)$. By the duality of polarisations, we see that $\mathcal{V}$ is an extension of $\mathcal{O}(-1)$ by $\mathcal{O}(1)$ and this extension splits. \end{proof} Since the Fermat curve $C$ is a smooth plane curve of degree $p+1$, its genus is equal to $p(p-1)/2$. Let $U_3(\mathbb{F}_p)\subseteq \GL_3(\mathbb{F}_{p^2})$ denote the unitary subgroup consisting of matrices $A$ such that $A^T A^{(p)}=\mathbb{I}_3$. We see that for each $A\in U_3(\mathbb{F}_p)$ and $t\in C$, the matrix multiplication $A\cdot t^T$ lies in $C$. This gives a left action of $U_3(\mathbb{F}_p)$ on the curve $C$. It is known that $\vert U_3(\mathbb{F}_p)\vert =p^3(p+1)(p^2-1)(p^3+1)$. A curve is $\mathbb{F}_{p^{2k}}$-maximal (resp.~minimal) if its Frobenius eigenvalues over $\mathbb{F}_{p^{2k}}$ all equal $-p^k$ (resp.~$p^k$). From the well-understood behaviour of Frobenius eigenvalues under field extensions we then derive the following lemma. \begin{lemma}\label{lem:Cmaxmim} We have $\vert C(\mathbb{F}_{p^2}) \vert = p^3 + 1$. Thus, it is $\mathbb{F}_{p^2}$-maximal and hence $\mathbb{F}_{p^4}$-minimal. 
Moreover, we have $C(\mathbb{F}_{p^2}) = C(\mathbb{F}_{p^4})$. Furthermore, we have \begin{equation} \label{eq:Cpoints} \vert C(\mathbb{F}_{p^{2i}})\vert = \begin{cases} p^{2i}+p^{i+2}-p^{i+1}+1 & \text{if $i$ is odd;}\\ p^{2i}-p^{i+2}+p^{i+1}+1 & \text{if $i$ is even.}\\ \end{cases} \end{equation} \end{lemma} \begin{proof} For each $t=(t_i)\in C(\mathbb{F}_{p^2})$, let $s_i=t_i^{p+1}$. Then $s_i\in {\bbF}_p$ and $s_1+s_2+s_3=0$. So there are $p+1$ points $(s_i)$ in $\mathbb{P}^1({\bbF}_p)$. For each point $(s_i)$, there are $p+1$ (resp.~$(p+1)^2$) points $(t_i)$ over $(s_i)$ if some of the $s_i$ are zero (resp.~otherwise); there are 3 points $(s_i)$ with $s_i=0$ for some $i$. Thus, \[ \vert C(\mathbb{F}_{p^2}) \vert=(p+1-3)(p+1)^2+3(p+1)=p^3+1. \] One checks that this means $C$ is $\mathbb{F}_{p^2}$-maximal. Hence, $C$ is $\mathbb{F}_{p^4}$-minimal and satisfies $\vert C(\mathbb{F}_{p^4}) \vert =p^3+1$. Since $C$ is $\mathbb{F}_{p^{2i}}$-maximal (resp.~$\mathbb{F}_{p^{2i}}$-minimal) if $i$ is odd (resp.~even), the formula \eqref{eq:Cpoints} follows immediately. \end{proof} \begin{lemma}\label{lem:CFp2} Let $t = (t_1:t_2:t_3) \in C(k)$. Then $t \in C(\mathbb{F}_{p^2})$ if and only if $t_1, t_2, t_3$ are linearly dependent over $\mathbb{F}_{p^2}$. \end{lemma} \begin{proof} See \cite[Lemma 2.1]{oort2}. Alternatively, we give the following independent proof: The forward implication is immediate, so we will only show the reverse implication. Assume $t_1, t_2, t_3$ are linearly dependent over $\mathbb{F}_{p^2}$. Then the vectors $(t_i,t_i^{p^2},t_i^{p^4})$ for $i=1,2,3$ are $k$-linearly dependent. If $(t_i,t_i^{p^2},t_i^{p^4})$ for $i=2,3$ are linearly independent, then there exist $a, b \in k$ such that $t_i = at_i^{p^2} + bt_i^{p^4}$ for $i=1,2,3$. If they are linearly dependent, then there exists $a'\in k$ such that $t_i^{p^2} = a' t_i^{p^4}$ for $i=1,2,3$ and hence $t_i = at_i^{p^2}$ with $a^{p^2}=a'$. 
Therefore, there exist $a, b \in k$ such that $t_i = at_i^{p^2} + bt_i^{p^4}$ for $i=1,2,3$ in either case. Substituting this into the defining equation of $C$, we obtain \[ a^{p+1}\sum_{i=1}^3 t_i^{p^2+p^3} + ab^p \sum_{i=1}^3 t_i^{p^2+p^5} + a^pb \sum_{i=1}^3 t_i^{p^3+p^4} + b^{p+1} \sum_{i=1}^3 t_i^{p^4+p^5} =0. \] Again using the defining equation of $C$, we see that the first, third, and fourth terms vanish, so that also $ab^p \sum_{i=1}^3 t_i^{p^2+p^5}= ab^p(\sum_{i=1}^3 t_i^{p^3+1})^{p^2} =0$. If $a=0$ then the point $t = (t_1 : t_2: t_3)$ is defined over $\mathbb{F}_{p^4}$ and hence, by Lemma \ref{lem:Cmaxmim}, it is defined over $\mathbb{F}_{p^2}$. If $b=0$, then $t$ is defined over $\mathbb{F}_{p^2}$ as well. So we may assume that $\sum_{i=1}^3 t_i^{p^3+1} = 0$. Let $Z:=V(X_1^{p^3+1} + X_2^{p^3+1} + X_3^{p^3+1})$ be the Fermat curve of degree $p^3+1$. Then $t \in C \cap Z$. The intersection number of $C$ and $Z$ is $(p+1)(p^3+1)$ and each point of $C(\mathbb{F}_{p^2})$ is in $C \cap Z$. Since $\vert C(\mathbb{F}_{p^2}) \vert=p^3+1$ by Lemma \ref{lem:Cmaxmim}, it is enough to show that for each point $s \in C(\mathbb{F}_{p^2})$, the local multiplicity of $C$ and $Z$ at $s$ is $p+1$. Since the unitary group $U_3(\mathbb{F}_p)$ acts transitively on $C(\mathbb{F}_{p^2})$, we may assume that $s = (\zeta : 0 :1)$ where $\zeta^{p+1}=-1$. With local coordinates $v = X_1 - \zeta$ and $w = X_2$, the respective equations for $C$ and $Z$ at $s$ become $v^{p+1} + \zeta v^p + \zeta^p v + w^{p+1}$ and $v^{p^3+1} + \zeta v^{p^3} + \zeta^p v + w^{p^3+1}$. Now we may read off that the local multiplicity, i.e., the valuation of $v$ at $s$, is $p+1$, as required. \end{proof} We will denote $C^0 := C\setminus C(\mathbb{F}_{p^2})$. Slightly abusively, we will tacitly switch between the notations $(t_1, t_2, t_3)$ and $(t_1:t_2:t_3)$.
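The count $\vert C(\mathbb{F}_{p^2})\vert = p^3+1$ in Lemma~\ref{lem:Cmaxmim} can also be confirmed by brute force for small primes. The sketch below (illustrative only, with names of our choosing; it models $\mathbb{F}_{p^2}$ naively as $\mathbb{F}_p[x]/(x^2+c_1x+c_0)$ for an irreducible quadratic) enumerates the projective points of $X_1^{p+1}+X_2^{p+1}+X_3^{p+1}=0$:

```python
from itertools import product

def fermat_point_count(p):
    """Brute-force count of the projective points over F_{p^2} on the
    Fermat (Hermitian) curve X1^{p+1} + X2^{p+1} + X3^{p+1} = 0."""
    # Find a monic irreducible quadratic x^2 + c1*x + c0 over F_p.
    c1, c0 = next((c1, c0) for c1 in range(p) for c0 in range(p)
                  if all((x * x + c1 * x + c0) % p for x in range(p)))

    def mul(u, v):
        # (a1 + b1*s)(a2 + b2*s) with s^2 = -c1*s - c0
        (a1, b1), (a2, b2) = u, v
        return ((a1 * a2 - c0 * b1 * b2) % p,
                (a1 * b2 + a2 * b1 - c1 * b1 * b2) % p)

    elems = [(a, b) for a in range(p) for b in range(p)]
    # t -> t^{p+1} (the norm map), computed by repeated multiplication.
    norm = {}
    for t in elems:
        r = (1, 0)
        for _ in range(p + 1):
            r = mul(r, t)
        norm[t] = r

    affine = sum(
        1
        for t in product(elems, repeat=3)
        if any(ti != (0, 0) for ti in t)
        and sum(norm[ti][0] for ti in t) % p == 0
        and sum(norm[ti][1] for ti in t) % p == 0
    )
    # Each projective point has exactly p^2 - 1 affine representatives.
    return affine // (p * p - 1)

for p in (2, 3, 5):
    assert fermat_point_count(p) == p ** 3 + 1
```

For larger $p$ the enumeration over all of $\mathbb{F}_{p^2}^3$ quickly becomes expensive; the counting argument in the proof of the lemma is of course the efficient route.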
For later use, we define the following: \begin{definition}\label{def:Endt} For $t = (t_1,t_2,t_3) \in k^3$ (viewed as a column vector), let \[ \mathrm{End}(t) = \{ A \in \mathrm{Mat}_3(\mathbb{F}_{p^2}) : A \cdot t \in k\cdot t \}. \] \end{definition} \begin{lemma}\label{lem:Endt} For any $t \in C^0(k)$, the $\mathbb{F}_{p^2}$-algebra $\mathrm{End}(t)$ is isomorphic to either $\mathbb{F}_{p^2}$ or $\mathbb{F}_{p^6}$. \end{lemma} \begin{proof} For any $A \in \mathrm{End}(t)$, we have $A \cdot t = \alpha_A t$ for some $\alpha_A \in k$. The map \[ \begin{split} \mathrm{End}(t) & \to k \\ A & \mapsto \alpha_A \end{split} \] is an $\mathbb{F}_{p^2}$-algebra homomorphism. It is injective: indeed, $A \cdot t = 0$ forces every row of $A$ to vanish, since the $t_i$ are linearly independent over $\mathbb{F}_{p^2}$ by Lemma \ref{lem:CFp2}. Hence, $\mathrm{End}(t)$ is a finite field extension of $\mathbb{F}_{p^2}$. Since $\mathrm{End}(t) \subseteq \Mat_3(\mathbb{F}_{p^2}) = \mathrm{End}((\mathbb{F}_{p^2})^3)$, we may regard $(\mathbb{F}_{p^2})^3$ as a vector space over $\mathrm{End}(t)$. It follows that $[\mathrm{End}(t) : \mathbb{F}_{p^2}] \mid 3$, as required. \end{proof} \begin{lemma}\label{lem:CM} We have \begin{equation} \label{eq:CM} CM:=\{ t \in C^0(k) : \mathrm{End}(t) \simeq \mathbb{F}_{p^6} \} = C^0(\mathbb{F}_{p^6}). \end{equation} \end{lemma} \begin{proof} The containment $\{ t \in C^0(k) : \mathrm{End}(t) \simeq \mathbb{F}_{p^6} \} \subseteq C^0(\mathbb{F}_{p^6})$ is immediate: if $\mathrm{End}(t) \simeq \mathbb{F}_{p^6}$, then $t$ spans the eigenspace of a matrix in $\Mat_3(\mathbb{F}_{p^2})$ for an eigenvalue lying in $\mathbb{F}_{p^6}$, and such an eigenspace is defined over $\mathbb{F}_{p^6}$. We will now prove the reverse containment. For each $t \in C^0(\mathbb{F}_{p^6})$, we construct for each element $\alpha\in \mathbb{F}_{p^6}$ a matrix $A \in \mathrm{Mat}_3(\mathbb{F}_{p^6})$ as follows: \[ A=A_\alpha:= (t, t^{(p^2)}, t^{(p^4)})\cdot {\rm diag}(\alpha ,\alpha^{p^2}, \alpha^{p^4} ) \cdot (t, t^{(p^2)}, t^{(p^4)})^{-1}.
\] Since the $t_i$ are linearly independent over $\mathbb{F}_{p^2}$ by Lemma \ref{lem:CFp2}, the matrix $(t, t^{(p^2)}, t^{(p^4)})$ is invertible. We check that \begin{equation*} \label{eq:A_rational} \begin{split} A^{(p^2)}&=(t^{(p^2)}, t^{(p^4)}, t)\cdot {\rm diag}(\alpha^{p^2} ,\alpha^{p^4},\alpha ) \cdot (t^{(p^2)}, t^{(p^4)}, t)^{-1} \\ &= (t, t^{(p^2)}, t^{(p^4)})\cdot \begin{pmatrix} 0 & 0 & 1\\1 & 0 & 0\\0 & 1 & 0 \end{pmatrix} \cdot {\rm diag}(\alpha^{p^2} ,\alpha^{p^4},\alpha) \cdot \begin{pmatrix} 0 & 1 & 0\\0 & 0 & 1\\1 & 0 & 0 \end{pmatrix}\cdot (t, t^{(p^2)}, t^{(p^4)})^{-1} \\ &=A, \end{split} \end{equation*} and hence $A\in \mathrm{Mat}_3(\mathbb{F}_{p^2})$. We also have that $A_\alpha \cdot t = \alpha t$. Thus, the map $\alpha\in \mathbb{F}_{p^6} \mapsto A_\alpha$ gives an isomorphism $\mathbb{F}_{p^6}\simeq \End(t)$, as required. \end{proof} \begin{remark} \ \begin{enumerate} \item We can also show that $U_3({\bbF}_p)$ acts transitively on $C^0(\mathbb{F}_{p^6}) = CM$. The action on $C(\mathbb{F}_{p^2})$ is also transitive, with stabilisers of size $p^3(p+1)(p^2-1)$; this gives another proof of the result $\vert C(\mathbb{F}_{p^2}) \vert =p^3+1$. \item The proof of Lemma~\ref{lem:Endt} proves the following more general result. Let $F$ be any field contained in a field $K$ and $t_1, t_2,\dots, t_n$ be a set of $F$-linearly independent elements in $K$. Put $t=(t_1,\dots, t_n)^T$ and $\End(t):=\{A\in \Mat_n(F) : A\cdot t\subseteq K\!\cdot t\}$. Then $\End(t)$ is a finite field extension of $F$ of degree dividing $n$. Furthermore, suppose that $t_1,\dots, t_n$ are contained in a degree $n$ subextension $E$ of $F$ in $K$. Then the $F$-basis $t_1,\dots, t_n$ of $E$ determines an $F$-algebra embedding $r: E \to \Mat_n(F)$ which is characterised by $r(a)\cdot t=at$ for every $a\in E$. Thus, $E\simeq \End(t)$ and $t$ is an eigenvector of a matrix in $\Mat_n(F)$.
This is an abstract way of doing what is done explicitly in the second part of the proof of Lemma~\ref{lem:CM}. \end{enumerate} \end{remark} \begin{definition}\label{def:T} The morphism $\pi:\calP_\mu\to C$ admits a section $s$ defined as follows. For a base scheme $S$, let $\rho_2: (Y_2,p\mu)\to (Y_1,\lambda_1)$ be an object in $C(S)$. Put $(Y_2^{(p)},\mu^{(p)}):=(Y_2,\mu)\times_{S,F_S} S$, where $F_S:S\to S$ is the absolute Frobenius map. The relative Frobenius morphism $\mathsf{F}:Y_2 \to Y_2^{(p)}$ gives rise to a morphism of polarised abelian schemes $\mathsf{F}: (Y_2,p \mu)\to (Y_2^{(p)},\mu^{(p)})$. Since $\ker(\rho_2)\subseteq \ker(\mathsf{F})$, the morphism factors through an isogeny $\rho_1:Y_1 \to Y_2^{(p)}$. As $\rho_2^* \rho_1^* \mu^{(p)}=\mathsf{F}^* \mu^{(p)}=p\mu=\rho_2^* \lambda_1$, we see that $\rho_1^* \mu^{(p)}=\lambda_1$ and thus obtain a polarised flag type quotient \[ \begin{CD} (Y_2,p\mu) @>\rho_2>> (Y_1,\lambda_1) @>{\rho_1}>> (Y_2^{(p)},\mu^{(p)}). \end{CD} \] This defines the section $s$, whose image will be denoted by $T$. \end{definition} Recall the definition of the $a$-number from Definition~\ref{def:anumber}. For a supersingular abelian threefold $X$ over~$k$, we have $a(X) \in \{1,2,3\}$. \begin{proposition}\label{prop:sections} Let the notation be as above. \begin{enumerate} \item We have $\calP_\mu'=\calP_\mu-T$. \item If $x \in T$ then we have $a(x) =3$. \item For any $t \in C(k)$, we have $t \in C(\mathbb{F}_{p^2})$ if and only if $a(x) \geq 2$ for any $x \in \pi ^{-1}(t)$. \item For any $x \in \calP_\mu(k)$, we have $a(x) = 1$ if and only if $x \notin T$ and $\pi (x) \notin C(\mathbb{F}_{p^2})$. \end{enumerate} \end{proposition} \begin{proof} See \cite[Section 9.4]{lioort}.
\end{proof} \subsection{Minimal isogenies}\label{ssec:miniso}\ Given a polarised flag type quotient $Y_2 = E_k^3 \xrightarrow{\rho_2} Y_1 \xrightarrow{\rho_1} Y_0 = X$, the composite map $\rho_1\circ \rho_2: (Y_2,\lambda_2) \to (Y_0,\lambda_0) = (X, \lambda)$ is an isogeny from a superspecial abelian variety $Y_2$. Thus, this isogeny factors through the minimal isogeny $\varphi$ of $(X,\lambda)$: \[ (Y_2,\lambda_2) \longrightarrow (\tilde{X},\tilde{\lambda}) \xrightarrow{\varphi} (X, \lambda). \] Since every member $(X,\lambda)\in \mathcal{S}_{3,1}(k)$ can be constructed from a polarised flag type quotient $(Y_\bullet,\rho_\bullet)$, we can construct the minimal isogeny of $(X,\lambda)$ from $(Y_\bullet,\rho_\bullet)$. To describe the minimal isogenies for supersingular abelian threefolds in more detail, in the following proposition we separate into three cases, based on the $a$-number of the threefold. \begin{proposition}\label{prop:miniso} Let $(X,\lambda)$ be a supersingular principally polarised abelian threefold over~$k$. Suppose that $(X,\lambda)$ lies in the image of $\calP_\mu'$ under the map $\calP'_\mu\to \mathcal{S}_{3,1}$ for some $\mu\in P(E^3)$, so that there is a unique PFTQ over $(X,\lambda)$. \begin{enumerate} \item If $a(X) = 1$, then the associated polarised flag type quotient $(Y_2,\lambda_2) \xrightarrow{\rho_2} (Y_1, \lambda_1) \xrightarrow{\rho_1} (Y_0, \lambda_0) = (X, \lambda)$ gives the minimal isogeny $\varphi := \rho_1 \circ \rho_2$ of degree~$p^3$. \item If $a(X) = 2$, then in the associated polarised flag type quotient $Y_2 = E_k^3 \to Y_1 \to Y_0 = X$ we have $a(Y_1) = 3$, so $Y_1$ is superspecial. Thus, the minimal isogeny is $\rho_1: (Y_1, \lambda_1) \to (X, \lambda)$ of degree~$p$, where $\rho_1^* \lambda = \lambda_1$ satisfies $\ker(\lambda_1) \simeq \alpha_p \times \alpha_p$. \item If $a(X) = 3$, then $X$ is superspecial. Thus, $X$ is $k$-isomorphic to $E_k^3$ and the minimal isogeny is the identity map.
\end{enumerate} \end{proposition} \begin{proof} \begin{enumerate} \item Let $M_2, M_1, M_0$ denote the Dieudonn{\'e} modules of $Y_2, Y_1, Y_0 = X$, respectively. Then $a(M_2) = 3$. Suppose that $a(M_0) = 1$. By Proposition \ref{prop:sections}, this corresponds to a point $t = (t_1:t_2:t_3) \not\in C(\mathbb{F}_{p^2})$. We claim that $a(M_1) = 2$, which implies the statement. The Dieudonn{\'e} modules satisfy the following inclusions: \begin{align*} \begin{matrix} M_2 &\supseteq &M_1 &\supseteq &M_0 &&&\\ &\rotatebox{-30}{$\supseteq$} &\rotatebox{90}{$\subseteq$} & &\rotatebox{90}{$\subseteq$} &\rotatebox{-30}{$\supseteq$} &&\\ &&(\mathsf{F}, \mathsf{V})M_2 &\supset &(\mathsf{F}, \mathsf{V})M_1&= &(\mathsf{F}, \mathsf{V})M_0&\\ &&&\rotatebox{-30}{$\supseteq$} &\rotatebox{90}{$\subseteq$} &\rotatebox{-30}{$\supseteq$} &\rotatebox{90}{$\subseteq$} &\rotatebox{-30}{$\supseteq$}\\ &&&&(\mathsf{F},\mathsf{V})^2M_2 &= &(\mathsf{F},\mathsf{V})^2M_1 &= &(\mathsf{F},\mathsf{V})^2M_0. \end{matrix} \end{align*} All inclusions follow from the construction of flag type quotients. For the equalities, we note the following: Since $M_2$ is superspecial of genus three, we have $(\mathsf{F}, \mathsf{V})M_2 = \mathsf{F} M_2, (\mathsf{F}, \mathsf{V})^2M_2=pM_2$, and \[ \dim (M_2/\mathsf{F} M_2) = \dim (\mathsf{F} M_2/pM_2) = 3. \] It follows from the definition of flag type quotients that $\dim(M_1/\mathsf{F} M_2) = 1$, so $M_1/\mathsf{F} M_2$ is generated by one element, namely the image of $t$ (abusively again denoted $t$). So $(\mathsf{F}, \mathsf{V}) M_1/pM_2$ is two-dimensional and generated by the two elements $\mathsf{F} t$ and $\mathsf{V} t$, which are $k$-linearly independent since $t \not\in C(\mathbb{F}_{p^2})$, by Lemma \ref{lem:CFp2}. Using this, we see that \[ \dim(\mathsf{F} M_2/(\mathsf{F}, \mathsf{V})M_1) = \dim(\mathsf{F} M_2/pM_2) - \dim((\mathsf{F}, \mathsf{V})M_1/pM_2) = 1 \] and $a(M_1) = \dim (M_1/(\mathsf{F}, \mathsf{V})M_1) = 2$, as claimed. 
It follows from $\dim (M_1/M_0)=1$ and $a(M_1)=2$ that $\dim (M_0/(\mathsf{F},\mathsf{V})M_1)=1$. As we have assumed that $a(M_0) =\dim(M_0/(\mathsf{F},\mathsf{V})M_0)=1$, the latter implies the equality $(\mathsf{F}, \mathsf{V})M_1= (\mathsf{F}, \mathsf{V})M_0$. Since $\dim (M_0/(\mathsf{F},\mathsf{V})M_1)=1$ and $\dim(M_0/pM_2)=3$, one has $\dim ((\mathsf{F},\mathsf{V})M_1/pM_2)=2$. Since $t_1,t_2,t_3$ are $\mathbb{F}_{p^2}$-linearly independent by Lemma~\ref{lem:CFp2}, the vectors $\mathsf{F}^2 t, p t$ and $\mathsf{V}^2 t$ in $\mathsf{F} M_2/p\mathsf{F} M_2$ span a $3$-dimensional subspace and hence $\dim((\mathsf{F},\mathsf{V})^2M_1/p\mathsf{F} M_2)=3$. This shows the equality $pM_2 = (\mathsf{F}, \mathsf{V})^2M_1 = (\mathsf{F},\mathsf{V})^2M_0$. Now put $\Phi:=1+\mathsf{F} \mathsf{V}^{-1}$. We have shown that $\mathsf{V} \Phi M_0=(\mathsf{F},\mathsf{V})M_1$ is not superspecial and that $\Phi^2 M_0=M_2$ is superspecial. Therefore, $M_2$ is the smallest superspecial Dieudonn\'{e} module containing $M_0$. This proves that $\rho_1\circ \rho_2: Y_2\to X$ is the minimal isogeny. \item When $a(M_0)=2$, this corresponds to a point $t = (t_1:t_2:t_3) \in C(\mathbb{F}_{p^2})$. Using the notation from the previous item, we still have that $(\mathsf{F},\mathsf{V})M_1/pM_2$ is generated by $\mathsf{F} t$ and $\mathsf{V} t$, but since the $t_i$ are $\mathbb{F}_{p^2}$-linearly dependent, we have $\dim((\mathsf{F}, \mathsf{V})M_1/pM_2)=1$, so $a(M_1) = 3$. Since $\lambda_1 = \rho_1^*\lambda$ has degree $p^2$ and $\ker(\lambda_1) \subseteq Y_1[F]\simeq \alpha_p^3$, we have $\ker(\lambda_1) \simeq \alpha_p^2$, as claimed. \item The fact that $a(X) = 3$ if and only if $X$ is superspecial is due to Oort, \cite[Theorem 2]{oort}. \end{enumerate} \end{proof} \begin{remark}\label{countexample:miniso} The proof of \cite[Lemma 1.8]{lioort} uses the claim: If $X$ is a $g$-dimensional supersingular abelian variety with $a(X)<g$, and $X':=X/A(X)$, where $A(X)$ is the maximal $\alpha$-subgroup of $X$, then $a(X')>a(X)$.
Now take $Y_1$ the abelian threefold as in Proposition~\ref{prop:miniso}(1). We have computed $a(Y_1)=2$ and \[ \begin{split} a(Y_1/A(Y_1))&=a((\mathsf{F},\mathsf{V})M_1)=\dim\, (\mathsf{F},\mathsf{V})M_0/(\mathsf{F},\mathsf{V})^2M_1 \\ &=\dim\, M_0/(\mathsf{F},\mathsf{V})^2M_1-\dim\, M_0/(\mathsf{F},\mathsf{V})M_0 =2. \end{split} \] This gives a counterexample to the claim. \end{remark} \section{The case $a(X) \ge 2$}\label{sec:a2} Let $x=(X,\lambda)\in \mathcal{S}_{3,1}(k)$ with $a(X)=2$ and let $y\in \calP_\mu\simeq \bbP^1_C(\calO(-1)\oplus \calO(1))$ be the point corresponding to the PFTQ over it: \[ (Y_2,\lambda_2)\xrightarrow{\rho_2} (Y_1,\lambda_1) \xrightarrow{\rho_1} (Y_0,\lambda_0)=(X,\lambda). \] By Propositions~\ref{prop:sections} and \ref{prop:miniso}, $(Y_1,\lambda_1)$ corresponds to a point $t=(t_1,t_2,t_3)\in C(\mathbb{F}_{p^2})$ and $u \in \bbP^1_t(k):=\pi^{-1}(t)$. Moreover, $\rho_1:(Y_1,\lambda_1)\to (X,\lambda)$ is the minimal isogeny. Put $x_1~=~(Y_1,\lambda_1)$. Then $\Lambda_{x_1}=\Lambda_{3,p}$ and by Corollary~\ref{cor:sspmassg3} and Proposition~\ref{prop:compmass} we have \begin{equation} \label{eq:a2formula} \Mass(\Lambda_x)=\frac{(p-1)(p^3+1)(p^3-1)}{2^{10}\cdot 3^4 \cdot 5\cdot 7}\cdot [\Aut(M_1,\<\,,\>): \Aut(M,\<\,,\>)], \end{equation} where $(M,\<\,,\>)\subseteq (M_1,\<\,,\>)$ are the quasi-polarised Dieudonn\'{e} modules associated to $(Y_1,\lambda_1)\to (X,\lambda)$. Let $M_1^{\vee}$ denote the dual lattice of $M_1$ with respect to $\<\,,\>$. Then one has $M_1^{\vee}\subseteq M\subseteq M_1$ and $M/M_1^{\vee} \in \bbP(M_1/M_1^{\vee})=\bbP^1_t(k)$ is a one-dimensional $k$-subspace in $M_1/M_1^{\vee}$. Since the morphism $\rho_2$ is defined over $\mathbb{F}_{p^2}$, the threefold $Y_1$ is endowed with the $\mathbb{F}_{p^2}$-structure $Y_1'$ with Frobenius $\pi_{Y_1'}=-p$.
The induced $\mathbb{F}_{p^2}$-structure on $\bbP^1_t$ is defined by the $\mathbb{F}_{p^2}$-vector space $V_0:=M_1^\diamond/M_1^{t,\diamond}$, where $M_1^\diamond:=\{m\in M_1 : \mathsf{F} m+\mathsf{V} m=0\}$ is the skeleton of $M_1$, cf. \cite[Section 5.7]{lioort}. Since $\ker(\lambda_1) \simeq \alpha_p\times \alpha_p$, the quasi-polarised superspecial Dieudonn\'{e} module $(M_1,\<\,,\>)$ decomposes into a product of a two-dimensional indecomposable superspecial Dieudonn\'{e} module and a one-dimensional such module. By \cite[Proposition 6.1]{lioort}, there is a $W$-basis $e_1$, $e_2$, $e_3$, $f_1$, $f_2$, $f_3$ for $M_1$ such that $\mathsf{F} e_i=-\mathsf{V} e_i=f_i$ and $\mathsf{F} f_i=-\mathsf{V} f_i=-p e_i$ for $i=1,2,3$, \[ \<e_1,e_2\>=p^{-1}, \quad \<f_1,f_2\>=1, \quad \<e_3,f_3\>=1, \] and other pairings are zero. Then $M_1^{\vee}$ is spanned by $p e_1, p e_2, e_3, f_1,f_2,f_3$ and $M_1/M_1^{\vee}=\Span_k\{e_1,e_2\}$. Let $u=(u_1:u_2)\in \bbP^1_t(k)$ be the projective coordinates of the point corresponding to $M/M_1^{\vee}$. That is, $M/M_1^{\vee}$ is the one-dimensional subspace spanned by $u=u_1 \bar e_1+u_2 \bar e_2$, where $\bar e_i$ denotes the image of $e_i$ in $M_1/M_1^{\vee}$. If $u\in \bbP^1_t(\mathbb{F}_{p^2})$, then $a(M)=3$ and $\Mass(\Lambda_x)$ is already computed in Corollary~\ref{cor:sspmassg3}. Suppose then that $u\not\in \bbP^1_t(\mathbb{F}_{p^2})$. In this case, $M_1$ (resp.~$M_1^{\vee}$) is the smallest (resp.~largest) superspecial Dieudonn\'{e} module containing (resp.~contained in) $M$. Thus, \[ \End(M)=\{g\in \End(M_1): g(M_1^{\vee})\subseteq M_1^{\vee},\ g(M) \subseteq M\}. \] Consider the reduction map \[ m: \End(M_1)=\End(M_1^\diamond) \twoheadrightarrow \End(M_1^\diamond/M_1^{t,\diamond})=\End_{\mathbb{F}_{p^2}}(V_0)=\Mat_2(\mathbb{F}_{p^2}).
\] It is clear that $\End(M)$ contains $\ker(m)$ and that $m$ induces a surjective map \[ m: \End(M) \twoheadrightarrow m(\End(M))=\{g\in \Mat_2(\mathbb{F}_{p^2}): g \cdot u\subseteq k \cdot u\}. \] Write $\End(u):=\{ g\in \Mat_2(\mathbb{F}_{p^2}): g \cdot u\subseteq k \cdot u\}$. \begin{lemma}\label{lem:endu} \begin{enumerate} \item If $u\in \bbP^1_t(\mathbb{F}_{p^4})-\bbP^1_t(\mathbb{F}_{p^2})$, then $\End(u)\subseteq \Mat_2(\mathbb{F}_{p^2})$ is an $\mathbb{F}_{p^2}$-subalgebra which is isomorphic to $\mathbb{F}_{p^4}$. \item If $u\in \bbP^1_t(k)-\bbP^1_t(\mathbb{F}_{p^4})$, then $\End(u)=\mathbb{F}_{p^2}$. \end{enumerate} \end{lemma} \begin{proof} This is a simpler version of Lemmas~\ref{lem:Endt} and \ref{lem:CM} so we omit the proof; cf. also \cite[Section 3]{yuyu}. \end{proof} Put $\<\,,\>_1:=p\<\,, \>$. Then $\<\,,\>_1$ induces a non-degenerate alternating pairing, again denoted $\<\,,\>_1:V_0\times V_0\to \mathbb{F}_{p^2}$. The reduction map $m$ then gives rise to the following map \begin{equation} \label{eq:redmapautM1} m: \Aut(M_1,\<\,,\>)=\Aut(M_1,\<\,,\>_1) \to \Aut(V_0,\<\,,\>_1)\simeq \SL_2(\mathbb{F}_{p^2}). \end{equation} \begin{lemma}\label{lem:m} The map $m: \Aut(M_1,\<\,,\>)\to \Aut(V_0,\<\,,\>_1)$ is surjective. \end{lemma} \begin{proof} Since $Y_1$ is supersingular, we have that $\End(Y_1)\otimes {\bbZ}_p\simeq \End(M_1)$ and that $G_{x_1}({\bbZ}_p)\simeq \Aut(M_1,\<\,, \>)$; recall the notation from \eqref{eq:aut}. The group scheme $G_{x_1}\otimes {\bbZ}_p$ is a parahoric group scheme and in particular is smooth over ${\bbZ}_p$. Thus, the map $G_{x_1}({\bbZ}_p)\to G_{x_1}({\bbF}_p)$ is surjective. Now $\Aut(V_0,\<\,,\>_1)=\Res_{\mathbb{F}_{p^2}/{\bbF}_p} \SL_2$ viewed as an algebraic group over ${\bbF}_p$ is a reductive quotient of the special fibre $G_{x_1}\otimes {\bbF}_p$. Therefore, the map $G_{x_1}({\bbF}_p)\to \Aut(V_0,\<\,,\>_1)=\SL_2(\mathbb{F}_{p^2})$ is also surjective. This proves the lemma. 
\end{proof} We now prove the main result of this section. \begin{theorem}\label{thm:massa2} Let $x=(X,\lambda)\in \mathcal{S}_{3,1}(k)$ with $a(X)\ge 2$ and let $y\in \calP'_\mu(k)$ be a lift of $x$ for some $\mu\in P(E^3)$. Write $y=(t,u)$ where $t=\pi(y)\in C(\mathbb{F}_{p^2})$ and $u\in \pi^{-1}(t)=\bbP^1_t(k)$. Then \begin{equation} \label{eq:massa2} \mathrm{Mass}(\Lambda_x)=\frac{L_p}{2^{10}\cdot 3^4\cdot 5\cdot 7}, \end{equation} where \begin{equation} \label{eq:massa2_1} L_p= \begin{cases} (p-1)(p^2+1)(p^3-1) & \text{if } u\in \mathbb{P}^1_t(\mathbb{F}_{p^2}); \\ (p-1)(p^3+1)(p^3-1)(p^4-p^2) & \text{if } u\in\mathbb{P}^1_t(\mathbb{F}_{p^4})\setminus \mathbb{P}^1_t(\mathbb{F}_{p^2}); \\ 2^{-e(p)}(p-1)(p^3+1)(p^3-1) p^2(p^4-1) & \text{if } u \not\in \mathbb{P}^1_t(\mathbb{F}_{p^4}), \end{cases} \end{equation} and where $e(p)=0$ if $p=2$ and $e(p)=1$ if $p>2$. \end{theorem} \begin{proof} If $u\in \bbP^1_t(\mathbb{F}_{p^2})$, then $a(M)=3$ and the formula was established in Corollary~\ref{cor:sspmassg3}; suppose then that $u\not\in \bbP^1_t(\mathbb{F}_{p^2})$. By Lemma~\ref{lem:m}, \[ [\Aut(M_1,\<\,,\>): \Aut(M,\<\,,\>)]=[\SL_2(\mathbb{F}_{p^2}): \SL_2(\mathbb{F}_{p^2})\cap \End(u)^\times]. \] By Lemma~\ref{lem:endu}, \[ \SL_2(\mathbb{F}_{p^2})\cap \End(u)^\times = \begin{cases} \mathbb{F}_{p^4}^1 & \text{if $u\in\bbP^1_t(\mathbb{F}_{p^4})\setminus \mathbb{P}^1_t(\mathbb{F}_{p^2})$;}\\ \{\pm 1\} & \text{if $u\not \in\mathbb{P}^1_t(\mathbb{F}_{p^4})$.} \end{cases} \] It follows that \[ [\Aut(M_1,\<\,,\>): \Aut(M,\<\,,\>)]= \begin{cases} p^2(p^2-1) & \text{if $u\in\bbP^1_t(\mathbb{F}_{p^4})\setminus \mathbb{P}^1_t(\mathbb{F}_{p^2})$;}\\ \vert \mathrm{PSL}_2(\mathbb{F}_{p^2}) \vert & \text{if $u\not \in\mathbb{P}^1_t(\mathbb{F}_{p^4})$,} \end{cases} \] so the remaining two cases follow from \eqref{eq:a2formula}. \end{proof} \section{The case $a(X) = 1$}\label{sec:a1} Suppose that $(X,\lambda)$ is a supersingular principally polarised abelian threefold over $k$ with $a(X)=1$.
By Proposition~\ref{prop:miniso}(1), there is a minimal isogeny $\varphi: (Y_2,\mu) \to (X,\lambda)$, where $Y_2 = E_k^3$, and where $\varphi^*\lambda = p\mu$ for $\mu \in P(E^3)$ a principal polarisation. In this section we will compute the local index \begin{equation}\label{eq:indexaut} [\mathrm{Aut}((Y_2,\mu)[p^{\infty}]): \mathrm{Aut}((X,\lambda)[p^{\infty}])]. \end{equation} Let $M$ and $M_2$ be the Dieudonn{\'e} modules of $X$ and $Y_2$, respectively. Together with the induced (quasi-)polarisations, we have $(M, \langle , \rangle)$ and $(M_2,\langle , \rangle_2)$, where $\langle , \rangle_2 = p\langle , \rangle$ is again a principal polarisation. (Note that $(M_2, \langle , \rangle_2)$ is the quasi-polarised Dieudonn{\'e} module associated to $(Y_2,\mu)$ and not to $(Y_2, p\mu)$, and that $pM_2 \subseteq M$ by the proof of Proposition \ref{prop:miniso}(1).) The proof of Proposition \ref{prop:miniso}(1) also shows that every automorphism of $M$ can be lifted to an automorphism of $M_2$, i.e., that $\mathrm{Aut}((M,\langle , \rangle)) \subseteq \mathrm{Aut}((M_2,\langle, \rangle_2))$. By Proposition \ref{prop:compmass}, computing \eqref{eq:indexaut} is then equivalent to computing \begin{equation}\label{eq:indexsimple} [\mathrm{Aut}((M_2,\langle, \rangle_2)):\mathrm{Aut}((M,\langle , \rangle))]. \end{equation} \subsection{Determining $\boldsymbol{\mathrm{Aut}((M_2,\langle, \rangle_2))}$}\ Let $W = W(k)$ denote the ring of Witt vectors over $k$. Choose a $W$-basis $e_1, e_2, e_3, f_1, f_2, f_3$ for $M_2$ such that \begin{equation} \label{eq:basis} \mathsf{F} e_i =-\mathsf{V} e_i=f_i, \quad \mathsf{F} f_i =-\mathsf{V} f_i= -pe_i, \quad \langle e_i, f_j \rangle_2 = \delta_{ij}, \quad \langle e_i,e_j \rangle_2 = \langle f_i, f_j \rangle_2 = 0, \end{equation} for all $i,j \in \{1,2,3\}$. Let $D_p$ be the division quaternion algebra over $\mathbb{Q}_p$ and let $\mathcal{O}_{D_p}$ denote its maximal order.
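We will presently write $D_p=\mathbb{Q}_{p^2}[\Pi]$ with $\Pi^2=-p$ and $\Pi a=\overline a\,\Pi$. These multiplication rules and the canonical involution can be exercised in a toy model; in the illustrative Python sketch below (not part of the argument) the unramified extension $\mathbb{Q}_{p^2}/\mathbb{Q}_p$ is replaced, as a formal stand-in, by $\mathbb{Q}(\sqrt2)/\mathbb{Q}$ with its conjugation.

```python
# Toy model of D_p = Q_{p^2}[Pi] with Pi^2 = -p and Pi*a = conj(a)*Pi
# (illustrative: Q(sqrt(2)) with its conjugation stands in for Q_{p^2}).
from fractions import Fraction as Fr

p, D = 5, 2  # the prime entering Pi^2 = -p; Q(sqrt(D)) replaces Q_{p^2}

# quadratic-field elements: (a, b) = a + b*sqrt(D)
def qmul(u, v): return (u[0]*v[0] + D*u[1]*v[1], u[0]*v[1] + u[1]*v[0])
def qadd(u, v): return (u[0]+v[0], u[1]+v[1])
def qconj(u):   return (u[0], -u[1])
def qneg(u):    return (-u[0], -u[1])

# quaternion elements: (x, y) = x + y*Pi
def dmul(s, t):
    (x1, y1), (x2, y2) = s, t
    # (x1 + y1 Pi)(x2 + y2 Pi) = (x1 x2 - p y1 conj(y2)) + (x1 y2 + y1 conj(x2)) Pi
    a = qadd(qmul(x1, x2), qneg(qmul((Fr(p), Fr(0)), qmul(y1, qconj(y2)))))
    return (a, qadd(qmul(x1, y2), qmul(y1, qconj(x2))))

def dstar(s):
    # canonical involution: (x + y Pi)^* = conj(x) - y Pi
    return (qconj(s[0]), qneg(s[1]))

Z, I = (Fr(0), Fr(0)), (Fr(1), Fr(0))
Pi = (Z, I)
a  = ((Fr(2), Fr(3)), Z)                         # a = 2 + 3*sqrt(D) inside D_p

assert dmul(Pi, Pi) == ((Fr(-p), Fr(0)), Z)      # Pi^2 = -p
assert dmul(Pi, a) == dmul((qconj(a[0]), Z), Pi)  # Pi a = conj(a) Pi
s = ((Fr(2), Fr(3)), (Fr(1), Fr(-4)))
t = ((Fr(7), Fr(-1)), (Fr(5), Fr(2)))
assert dstar(dmul(s, t)) == dmul(dstar(t), dstar(s))  # (st)^* = t^* s^*
n = dmul(s, dstar(s))                            # s s^* is a central scalar
assert n[1] == Z and n[0][1] == Fr(0)
```

The final assertion reflects that $ss^*$ is the reduced norm of $s$, a central (here: rational) scalar.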
We also write $D_p = \mathbb{Q}_{p^2}[\Pi]$ and $\mathcal{O}_{D_p} = \mathbb{Z}_{p^2}[\Pi]$, where $\mathbb{Z}_{p^2} = W(\mathbb{F}_{p^2})$ and $\mathbb{Q}_{p^2} = \mathrm{Frac}\,W(\mathbb{F}_{p^2})$, and where $\Pi^2 = -p$ and $\Pi a = \overline{a} \Pi$ for any $a \in \mathbb{Q}_{p^2}$. Here $a \mapsto \overline{a}$ denotes the non-trivial automorphism of $\mathbb{Q}_{p^2}/\mathbb{Q}_p$. If we let $*$ denote the canonical involution of $D_p$, then $a^* = \overline{a}$ for any $a \in \mathbb{Q}_{p^2}$, and $\Pi^* = -\Pi$. \begin{lemma}\label{lem:endMtilde} We have $\mathrm{End}(M_2) \simeq \mathrm{Mat}_3(\mathcal{O}_{D_p})$ and hence $\mathrm{Aut}(M_2) \simeq \mathrm{GL}_3(\mathcal{O}_{D_p})$ (not taking the polarisation into account). \end{lemma} \begin{proof} We have $\mathrm{End}(M_2) = \mathrm{End}_{\mathcal{O}_{D_p}}(M_2^{\Diamond})$, where $M_2^{\Diamond} := \{ m \in M_2 : \mathsf{F} m + \mathsf{V} m = 0\}$ denotes the skeleton of $M_2$; this is an $\mathcal{O}_{D_p}$-module where $\Pi$ acts by $\mathsf{F}$ and $\Pi^*$ acts by $\mathsf{V}$. Now the result follows: using the basis $e_1,e_2,e_3$ of $M_2^{\Diamond}$ over $\mathcal{O}_{D_p}$, we obtain $\mathrm{End}_{\mathcal{O}_{D_p}}(M_2^{\Diamond})\simeq \mathrm{Mat}_3(\mathcal{O}_{D_p})^{\mathrm{op}}$ (the opposite algebra); we choose a convention where the matrices act on the left. We fix the isomorphism $\mathrm{Mat}_3(\mathcal{O}_{D_p})^{\mathrm{op}}\simeq \Mat_3(\mathcal{O}_{D_p})$ by sending $A$ to $A^*$. \end{proof} We fix the identification $\End(M_2)=\Mat_3(\mathcal{O}_{D_p})$ by the isomorphism chosen in Lemma~\ref{lem:endMtilde} with respect to the basis in \eqref{eq:basis}. \begin{lemma}\label{lem:autMtilde} We have $\mathrm{Aut}(M_2,\langle, \rangle_2) \simeq \{ A \in \mathrm{GL}_3(\mathcal{O}_{D_p}) : A^*A = \mathbb{I}_3 \}$. \end{lemma} \begin{proof} It suffices to check that $\langle A \cdot e_i, e_j\rangle_2 = \langle e_i, A^* \cdot e_j \rangle_2$ for any $A \in \mathrm{Mat}_3(\mathcal{O}_{D_p})$ and any $i,j \in \{1,2,3\}$.
Write $A = (a_{ij})$ and $A^* = (a'_{ij})$ with $a_{ij} = c_{ij} + d_{ij}\Pi$ for $c_{ij},d_{ij} \in \mathbb{Z}_{p^2}$, and with $a'_{ij} = a^*_{ji}$. Then \[ \langle A \cdot e_i, e_j\rangle_2 = \langle \sum_k a_{ik} e_k, e_j\rangle_2 = \langle d_{ij} f_j , e_j\rangle_2 = -d_{ij} \] coincides with \[ \langle e_i, A^* \cdot e_j \rangle_2 = \langle e_i, \sum_k a'_{jk}e_k \rangle_2 = \langle e_i, a'_{ji} e_i\rangle_2 = \langle e_i, \overline{c}_{ij}e_i - d_{ij}f_i\rangle_2 = -d_{ij}, \] as required. \end{proof} \subsection{Endomorphisms and automorphisms modulo $\boldsymbol{pM_2}$}\ As was pointed out earlier, the proof of Proposition \ref{prop:miniso}(1) contains the important observation that $pM_2 \subseteq M$. This allows us to consider the endomorphisms and automorphisms of both $M_2$ and $M$ modulo $p$ (i.e., reducing modulo $pM_2$) and modulo $\Pi$. In Definitions~\ref{def:AutEndM2mod} and~\ref{def:AutEndMmod}, we first define, and introduce notation for, all the endomorphism rings and automorphism groups we are considering. \begin{definition}\label{def:AutEndM2mod} Let $m_p$ denote the reduction-modulo-$p$ map and $m_{\Pi}$ the reduction-modulo-$\Pi$ map. By Lemma~\ref{lem:endMtilde}, for $M_2$ we have \begin{equation}\label{eq:redM2} \mathrm{End}(M_2) \simeq \mathrm{Mat}_3(\mathcal{O}_{D_p}) \xrightarrow{m_p} \mathrm{Mat}_3(\mathbb{F}_{p^2}[\Pi]) \xrightarrow{m_{\Pi}} \mathrm{Mat}_3(\mathbb{F}_{p^2}). 
\end{equation} On the level of automorphisms (respecting the polarisation) we get \begin{equation}\label{eq:redAutM2} \mathrm{Aut}(M_2,\langle, \rangle_2) \xrightarrow{m_p} G_{(M_2,\langle, \rangle_2)} \xrightarrow{m_{\Pi}} \overline{G}_{(M_2,\langle, \rangle_2)}, \end{equation} where \begin{equation}\label{eq:defGM2} G_{(M_2,\langle, \rangle_2)} := \{ A + B \Pi \in \mathrm{GL}_3(\mathbb{F}_{p^2}[\Pi]) : A\overline{A}^T = \mathbb{I}_3, B^T \overline{A} = \overline{A}^T B \}, \end{equation} (here, $B^T$ denotes the transpose of the matrix $B$), and where \begin{equation}\label{eq:defbarGM2} \overline{G}_{(M_2,\langle, \rangle_2)} := \{ A \in \mathrm{GL}_3(\mathbb{F}_{p^2}) : A^*A = \mathbb{I}_3 \}. \end{equation} \end{definition} \begin{definition}\label{def:AutEndMmod} For $M$ we have $\mathrm{End}(M) = \{ g \in \mathrm{End}(M_2) : g(M) \subseteq M \}$ and $\mathrm{Aut}(M) = \{ g \in \mathrm{Aut}(M_2) : g(M) = M \}$, and \begin{equation}\label{eq:AutM} \mathrm{Aut}(M,\langle, \rangle) = \{ g \in \mathrm{Aut}(M_2,\langle, \rangle_2) : g(M) = M \}. \end{equation} Under the same maps $m_p$ and $m_{\Pi}$, we find \begin{equation}\label{eq:redMp} E_M := m_p(\mathrm{End}(M)) = \{ A \in \mathrm{Mat}_3(\mathbb{F}_{p^2}[\Pi]) : A \cdot M/pM_2\subseteq M/pM_2 \} \end{equation} and $\overline{E}_M := m_{\Pi}(E_M) \subseteq \mathrm{Mat}_3(\mathbb{F}_{p^2})$. These fit in the diagram \begin{equation}\label{eq:diagram} \begin{tikzcd} \ \mathrm{End}(M) \rar{}\dar{m_p} & \mathrm{End}(M_2) = \mathrm{Mat}_3(\mathcal{O}_{D_p}) \dar{m_p} \\ E_M \rar{}\dar{m_{\Pi}} & \mathrm{Mat}_3(\mathbb{F}_{p^2}[\Pi]) \dar{m_{\Pi}} \\ \overline{E}_M \rar{} & \mathrm{Mat}_3(\mathbb{F}_{p^2}) \end{tikzcd} \end{equation} in which all horizontal maps are inclusion maps and the left vertical maps are the surjective reduction maps.
On the level of automorphisms, we let \begin{equation}\label{eq:defGM} G_{M} := m_p(\mathrm{Aut}(M)) = \{ A \in \mathrm{GL}_3(\mathbb{F}_{p^2}[\Pi]) : A \cdot M/pM_2 \subseteq M/pM_2 \} \end{equation} and $\overline{G}_M := m_{\Pi}(G_{M})$. For the polarised versions, since $\varphi^* \lambda = p\mu$, we obtain \begin{equation}\label{eq:defGMpol} G_{(M, \langle, \rangle)} := \{ g \in G_{(M_2,\langle, \rangle_2)} : g(M/pM_2) \subseteq M/pM_2 \} \end{equation} and \begin{equation}\label{eq:defGbarMpol} \overline{G}_{(M, \langle, \rangle)} := \{ g \in \overline{G}_{(M_2,\langle, \rangle_2)} : g(M/\Pi M_2) \subseteq M/\Pi M_2 \}. \end{equation} \end{definition} Denote the group of three-by-three symmetric matrices over $\mathbb{F}_{p^2}$ by $S_3(\mathbb{F}_{p^2})$; this group has cardinality $p^{12}$ (since it is a six-dimensional $\mathbb{F}_{p^2}$-vector space). Also recall that the group $U_3(\mathbb{F}_p)$ of three-by-three unitary matrices with entries in $\mathbb{F}_{p^2}$ has cardinality $p^3(p+1)(p^2-1)(p^3+1)$. \begin{lemma}\label{lem:sizeGM2pol} In Equation \eqref{eq:defGM2} we have $A \in U_3(\mathbb{F}_p)$ and $B^T\overline{A} \in S_3(\mathbb{F}_{p^2})$. Hence, \begin{equation}\label{eq:sizeGM2pol} \vert G_{(M_2,\langle, \rangle_2)} \vert = \vert U_3(\mathbb{F}_p) \vert \cdot \vert S_3(\mathbb{F}_{p^2}) \vert = p^{15}(p+1)(p^2-1)(p^3+1). \end{equation} \end{lemma} \begin{remark}\label{rem:indexmodp} Now we note, cf. \eqref{eq:indexsimple}, that \begin{equation}\label{eq:indexmodp} [\mathrm{Aut}((M_2,\langle, \rangle_2)):\mathrm{Aut}((M,\langle , \rangle))] = [G_{(M_2,\langle, \rangle_2)} : G_{(M, \langle , \rangle)}]. \end{equation} In light of Lemma \ref{lem:sizeGM2pol}, it now suffices to compute $[G_{(M_2,\langle, \rangle_2)} : G_{(M, \langle , \rangle)}]$. This will take up the remainder of this section. \end{remark} We start by studying the unpolarised automorphisms of $M_2/pM_2$.
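For $p=2$ the cardinalities entering Lemma~\ref{lem:sizeGM2pol} can be verified by direct enumeration. The Python sketch below (illustrative only) counts the matrices $A\in\Mat_3(\mathbb{F}_4)$ with $A\overline A^T=\mathbb{I}_3$ as triples of rows that are orthonormal for the Hermitian form $\langle u,v\rangle=\sum_i u_i v_i^2$, an equivalent description of the unitarity condition.

```python
# Enumerate U_3(F_2) inside Mat_3(F_4) and check |U_3(F_p)| and |S_3(F_{p^2})|
# for p = 2 (illustrative only).  F_4 = {0, 1, x, x+1} is encoded as 0..3 with
# x^2 = x + 1; addition is bitwise XOR, multiplication and conjugation
# (a -> a^2) are tabulated.
from itertools import product

MUL = [[0, 0, 0, 0],
       [0, 1, 2, 3],
       [0, 2, 3, 1],
       [0, 3, 1, 2]]
CONJ = [0, 1, 3, 2]

def herm(u, v):  # Hermitian form <u, v> = sum_i u_i * conj(v_i) on F_4^3
    return MUL[u[0]][CONJ[v[0]]] ^ MUL[u[1]][CONJ[v[1]]] ^ MUL[u[2]][CONJ[v[2]]]

vectors = list(product(range(4), repeat=3))
norm1 = [v for v in vectors if herm(v, v) == 1]

# A * conj(A)^T = I  <=>  the rows of A are orthonormal for herm
count = 0
for r1 in norm1:
    for r2 in norm1:
        if herm(r1, r2):  # <r2, r1> = conj(<r1, r2>), so one check suffices
            continue
        count += sum(1 for r3 in norm1
                     if herm(r1, r3) == 0 and herm(r2, r3) == 0)

p = 2
assert count == p**3 * (p + 1) * (p**2 - 1) * (p**3 + 1)  # |U_3(F_2)| = 648
assert (p**2) ** 6 == p**12                               # |S_3(F_4)| = p^12
assert count * (p**2) ** 6 == p**15 * (p + 1) * (p**2 - 1) * (p**3 + 1)
```

The last assertion reproduces \eqref{eq:sizeGM2pol} for $p=2$.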
Thus, let $g=(a_{ij} + b_{ij}\Pi )_{1 \leq i,j \leq 3} \in {\rm GL}_{3}(\mathbb{F}_{p^2}[\Pi])$ be an (unpolarised) automorphism of $M_{2}/pM_{2}$. If we take $\bar{e}_{1}, \bar{e}_{2}, \bar{e}_{3}, \bar{f}_{1}, \bar{f}_{2}, \bar{f}_{3}$ (i.e., the reductions of $e_1, \ldots, f_3$ in the previous subsection) as a basis of $M_{2}/pM_{2}$ in this order, $g$ can be expressed by a matrix of the form \begin{align}\label{eq:matrixg} g = \begin{pmatrix} A & 0 \\ B & A^{(p)} \end{pmatrix}, \end{align} where $A=(a_{ij})_{1 \leq i,j \leq 3}$, $B=(b_{ij})_{1 \leq i,j \leq 3}$, and $A^{(p)} = (a_{ij}^p)_{1 \leq i,j \leq 3}$. Recall from Propositions~\ref{prop:sections} and~\ref{prop:miniso}(1) that the polarised flag type quotient $Y_2 \to Y_1 \to X$ corresponds to a point $t = (t_1:t_2:t_3) \in C^0(k)$ such that $M_1/\mathsf{F} M_2$ is generated by $t_1\bar{e}_1 + t_2 \bar{e}_2 + t_3 \bar{e}_3$, where $M_1$ is the Dieudonn{\'e} module of $Y_1$, and a point $u = (u_1: u_2) \in \bbP^1_t(k):=\pi^{-1}(t)$. We choose a new basis for $M_2/pM_2$ as follows: \begin{align*} \bar{E}_{1}:= \sum _{i=1,2,3}t_{i}\bar{e}_{i}, ~ \bar{E}_{2}:= \sum _{i=1,2,3}t_{i}^{p}\bar{e}_{i}, ~ \bar{E}_{3}:= \sum _{i=1,2,3}t_{i}^{p^{-1}}\bar{e}_{i}, \\ \bar{F}_{1}:= \sum _{i=1,2,3}t_{i}\bar{f}_{i}, ~ \bar{F}_{2}:= \sum _{i=1,2,3}t_{i}^{p}\bar{f}_{i}, ~ \bar{F}_{3}:= \sum _{i=1,2,3}t_{i}^{p^{-1}}\bar{f}_{i}. \end{align*} (This is a basis by Lemma \ref{lem:CFp2}.) Using this basis, $g$ is expressed as \begin{align}\label{eq:newmatrixg} g = \begin{pmatrix} \mathbb{T}^{-1} A \mathbb{T} & 0 \\ \mathbb{T}^{-1} B \mathbb{T} & \mathbb{T}^{-1} A^{(p)} \mathbb{T} \end{pmatrix}, \end{align} where \begin{align}\label{eq:T} \mathbb{T} := \begin{pmatrix} t_{1} & t_{1}^p & t_{1}^{p^{-1}} \\ t_{2} & t_{2}^p & t_{2}^{p^{-1}} \\ t_{3} & t_{3}^p & t_{3}^{p^{-1}} \end{pmatrix}. \end{align} Now we determine the group $G_M \subseteq \mathrm{GL}_3(\mathbb{F}_{p^2}[\Pi])$ of elements preserving $M/pM_2$.
Any such element will also preserve $M_1/pM_2$. We prove the following proposition. \begin{proposition}\label{prop:GM} Let $g \in \mathrm{GL}_3(\mathbb{F}_{p^2}[\Pi])$ be an automorphism of $M_{2}/pM_{2}$, expressed as in \eqref{eq:matrixg}. Then $g \in G_M$ (i.e., $g$ preserves $M/pM_2$) if and only if the following hold: \begin{itemize} \item[(a)] We have $A \cdot t = \alpha t$ for some $\alpha \in k$, i.e., $A \in \mathrm{End}(t)$. \item[(b)] The $(1,1)$-component of the matrix $\mathbb{T}^{-1} B \mathbb{T}$ is $u_2u_1^{-1}(\alpha - \alpha^{p^3})$. \end{itemize} \end{proposition} \begin{proof} For an $A \in \mathrm{End}(t)$ (see Definition \ref{def:Endt}) with eigenvalue $\alpha$, it holds by definition that \begin{align}\label{eq:comp} \mathbb{T}^{-1} A \mathbb{T} = \begin{pmatrix} \alpha & \ast & \ast \\ & \ast & \ast \\ & \ast & \ast \end{pmatrix}, \mathbb{T}^{-1} A^{(p)} \mathbb{T} = \begin{pmatrix} \ast & & \\ \ast & \alpha^{p} & \\ \ast & & \alpha^{p^{-1}} \end{pmatrix}. \end{align} As ${\rm det}(A) = \alpha^{1+p^2+p^{-2}}$ and ${\rm det}(A^{(p)}) = {\rm det}(A)^p$, we see that \begin{align}\label{eq:comp2} \mathbb{T}^{-1} A^{(p)} \mathbb{T} = \begin{pmatrix} \alpha^{p^3} & & \\ \ast & \alpha^{p} & \\ \ast & & \alpha^{p^{-1}} \end{pmatrix}. \end{align} By the proof of Proposition \ref{prop:miniso}(1), the quotient $M_{1}/(\mathsf{F},\mathsf{V})M_{1}$ is a two-dimensional $k$-vector space generated by the images of $\bar{E}_{1}$ and $\bar{F}_{1}$. As $(\mathsf{F},\mathsf{V})M_{1} = (\mathsf{F},\mathsf{V})M \subseteq M$ and $\dim(M_1/M) = 1$, we find that $M/(\mathsf{F},\mathsf{V})M_{1} \subseteq M_{1}/(\mathsf{F},\mathsf{V})M_{1}$ is a one-dimensional $k$-vector space. Take $u_1, u_2 \in k$ so that $M/(\mathsf{F},\mathsf{V})M_{1}$ is generated by the image of $u_{1}\bar{E}_{1} + u_{2}\bar{F}_{1}$. As $a(X) = 1$ while $\mathsf{V} M_2$ is superspecial, we have $M \neq \mathsf{V} M_{2}$; since $u_{1} = 0$ would force $M = \mathsf{V} M_{2}$, we see that $u_{1} \neq 0$.
We see that if $g \in \mathrm{GL}_3(\mathbb{F}_{p^2}[\Pi])$ preserves $M_{1}/(\mathsf{F},\mathsf{V})M_{2}$, then it induces an automorphism of $M_{1}/(\mathsf{F},\mathsf{V})M_{1}$ which is expressed as $\left(\begin{smallmatrix} \alpha & \\ \ast & \alpha^{p^3} \end{smallmatrix}\right)$ by \eqref{eq:newmatrixg}, \eqref{eq:comp}, and \eqref{eq:comp2}. Moreover, $g$ preserves $M/pM_2$ if and only if it preserves $M/(\mathsf{F},\mathsf{V})M_{1}$, if and only if the column vector $\left( \begin{smallmatrix} \alpha & \\ \ast & \alpha^{p^3} \end{smallmatrix}\right) \left(\begin{smallmatrix} u_1 \\ u_2\end{smallmatrix}\right)$ is in the subspace spanned by $\left(\begin{smallmatrix} u_1 \\ u_2 \end{smallmatrix}\right)$. This is equivalent to the entry $\ast$ being equal to $u_2u_1^{-1}(\alpha - \alpha^{p^3})$. \end{proof} \begin{remark}\label{rem:choiceoftandu} \begin{enumerate} \item It follows from the construction of polarised flag type quotients that for $(X,\lambda)$ with $a(X) =1$ and a choice $\mu \in P(E^3)$ together with an identification $(\widetilde X,\widetilde \lambda)= ( E^3_k,p\mu)$, there exists a unique pair $(t,u)$ where $t = (t_1:t_2:t_3) \in C^0(k)$ and $u= (u_1:u_2) \in \mathbb{P}^1(k)$ as in the proof of Proposition \ref{prop:GM}. For the rest of the section, we will work with these $(t,u)$. \item The coordinates $(t,u)$ in (1) also give rise to a trivialisation $C^0\times \bbP^1\simeq \calP_{C^0}$, where $\calP_{C^0}:=\calP_\mu\times_C C^0$, as follows.
By Proposition~\ref{prop:explicitmoduli}, points in $\calP_{C^0}$ correspond to pairs $(\overline M_1,\overline M)$: here $\overline M_1\subseteq \overline M_2$ is a four-dimensional subspace generated by the subspace $\mathsf{V} \overline M_2$ and $\overline E_1=t_1 \bar e_1+t_2 \bar e_2+t_3 \bar e_3$ with $(t_1:t_2:t_3)\in C^0$, and $\overline M\subseteq \overline M_2$ is a three-dimensional subspace with $\overline M_1^{\bot} \subseteq \overline M \subseteq \overline M_1$, where $\overline M_1^{\bot}$ is the orthogonal complement of $\overline M_1$ with respect to $\<\,, \>_2$. The two-dimensional vector spaces $\overline M_1/\overline M_1^{\bot}$ for $t\in C^0$ form a rank two vector bundle $\calV=\calO(1)\oplus \calO(-1)|_{C^0}$ over $C^0$. As shown in the proof of Proposition \ref{prop:GM}, the images of $\overline E_1$ and $\overline F_1$ in $\overline M_1/\overline M_1^\bot$ (again denoted by $\overline E_1$ and $\overline F_1$ for simplicity) form a basis, and give rise to two global sections $\widetilde E_1$ and $\widetilde F_1$ of $\calV$ respectively (note that both $\overline E_1$ and $\overline F_1$ are vector-valued functions in $t_1$, $t_2$, and $t_3$). Then the desired trivialisation $C^0\times \bbP \stackrel{\sim}{\longrightarrow} \calP_{C^0}\simeq \bbP(\calV)$ is given by $(t,(u_1:u_2))\mapsto [u_1 \widetilde E_1(t)+u_2 \widetilde F_1(t)]$. Since $M_2$ is the Dieudonn\'{e} module of $E^3_k$, the vector space $\overline M_2$ has an $\mathbb{F}_{p^2}$-structure, so we see that this trivialisation is defined over $\mathbb{F}_{p^2}$. Now let $t\in C^0(k)$ and $u=(0:1)$. The corresponding subspace $\overline M$ is generated by $\overline F_1$ and $\overline M_1^{\bot}=(\mathsf{F},\mathsf{V})\overline M_1$. Therefore, we have $\overline M=\mathsf{V} \overline M_2$, which corresponds to a point in $T$. It follows that under the above trivialisation, $T|_{C^0} \simeq C^0\times \{\infty\}$. 
\end{enumerate} \end{remark} The following lemma follows from Lemma \ref{lem:Endt}, Lemma \ref{lem:CM}, and Proposition \ref{prop:GM}. It describes the \emph{polarised} elements $g \in G_{(M_2,\langle, \rangle_2)}$ that preserve $M_1/pM_2$: for such $g$ of the form \eqref{eq:matrixg}, Proposition \ref{prop:GM}(1) implies that $A \in \mathrm{End}(t)$, while Definition \ref{def:AutEndM2mod}\eqref{eq:defGM2} implies that $A$ is unitary. \begin{lemma}\label{lem:EndtintU} Let $t = (t_{1}:t_{2}:t_{3}) \in C^0(k)$. \begin{enumerate} \item When $t \notin C(\mathbb{F}_{p^6})$, we have \begin{align*} \mathrm{End}(t) \cap U_3(\mathbb{F}_p) \simeq \{ \alpha \in \mathbb{F}_{p^2} : \alpha^{p+1} = 1\}. \end{align*} \item When $t \in C(\mathbb{F}_{p^6})$, we have \begin{align*} \mathrm{End}(t) \cap U_3(\mathbb{F}_p) \simeq \{ \alpha \in \mathbb{F}_{p^6} : \alpha^{p^3+1} = 1 \} . \end{align*} \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate} \item This follows since a diagonal matrix $\alpha \mathbb{I}_3$ with $\alpha \in \mathbb{F}_{p^2}$ is unitary if and only if $\alpha^{p+1} = 1$. \item Take any $A \in \mathrm{End}(t) \cap U_3(\mathbb{F}_p)$. The eigenvalues of $A^{(p)T}$ are $\alpha^{p}, \alpha^{p^3}, \alpha^{p^5}$ where $\alpha$ is the eigenvalue of $A$. As $A$ is unitary, $\alpha^{-1}$ is also an eigenvalue, so we have $\alpha^{-1} \in \{ \alpha^{p}, \alpha^{p^3}, \alpha^{p^5} \}$. In each case, we have $\alpha^{p^3+1} = 1$. For the converse, choose any $\alpha \in \mathbb{F}_{p^6}$ such that $\alpha^{p^3+1} = 1$. By the proof of Lemma~\ref{lem:Endt}, the corresponding $A \in \mathrm{End}(t)$ is given by \begin{align*} A = (t, t^{(p^2)}, t^{(p^4)}){\rm diag}(\alpha, \alpha^{p^2}, \alpha^{p^4})(t, t^{(p^2)}, t^{(p^4)})^{-1}. 
\end{align*} We compute that \begin{align*} AA^{(p)T} = (t, t^{(p^2)}, t^{(p^4)}) \begin{pmatrix} & s^{-1} & \\ & & s^{-p^2} \\ s^{-p} & & \end{pmatrix} (t^{(p)}, t^{(p^3)}, t^{(p^5)})^T \end{align*} where $s = t_{1}^{p^3+1}+t_{2}^{p^3+1}+t_{3}^{p^3+1}$. That is, $AA^{(p)T}$ is independent of $\alpha$. By the case $\alpha = 1$, we have $AA^{(p)T} = 1$. \end{enumerate}\end{proof} Suppose now that we have $g \in G_{(M_2,\langle, \rangle_2)}$ of the form \eqref{eq:matrixg} preserving $M_1/pM_2$, i.e., we have $A \in \mathrm{End}(t) \cap U_3(\mathbb{F}_p)$ by Lemma \ref{lem:EndtintU}. We now determine the conditions on $B$ so that $g$ also preserves $M/pM_2$, i.e., so that $g \in G_{(M,\langle , \rangle)}$. By \eqref{eq:defGM2}, $B$ satisfies a symmetric condition. Let $S_3(\mathbb{F}_{p^2})A$ (for $A \in \mathrm{End}(t) \cap U_3(\mathbb{F}_p)$ as above) be the $\mathbb{F}_{p^2}$-vector space consisting of matrices of the form $SA$ for some $S \in S_3(\mathbb{F}_{p^2})$. Define a homomorphism of $\mathbb{F}_{p^2}$-vector spaces \begin{equation}\label{eq:psi} \begin{split} \psi _{t, A} : & S_3(\mathbb{F}_{p^2})A \to k \\ & SA \mapsto \text{ the $(1,1)$-component of } \mathbb{T}^{-1} SA \mathbb{T}. \end{split} \end{equation} Similarly define a homomorphism \begin{equation}\label{eq:psix} \begin{split} \psi _{t} : & S_3(\mathbb{F}_{p^2}) \to k \\ & S \mapsto \text{ the $(1,1)$-component of } \mathbb{T}^{-1} S \mathbb{T}. \end{split} \end{equation} Using these notations, we have the following proposition. \begin{proposition}\label{prop:GM2intGM} The group $G_{(M, \langle , \rangle)}$ consists of the matrices of the form \[ \begin{pmatrix} A & 0 \\ SA & A^{(p)} \end{pmatrix} \] satisfying the following conditions: \begin{enumerate} \item $A \in \mathrm{End}(t) \cap U_3(\mathbb{F}_p)$ with eigenvalue $\alpha$; \item $S \in S_3(\mathbb{F}_{p^2})$ is a symmetric matrix; and \item $\psi _{t, A}(SA) = u_{2}u_{1}^{-1}(\alpha - \alpha^{p^3})$. 
\end{enumerate} The third condition is equivalent to \begin{itemize} \item[(3')] $\psi _{t} (S) = u_{2}u_{1}^{-1}(1 - \alpha^{p^3 -1})$. \end{itemize} \end{proposition} \begin{proof} It follows from \eqref{eq:defGMpol} and Proposition \ref{prop:GM} that for $A \in \mathrm{End}(t) \cap U_3(\mathbb{F}_p)$ with eigenvalue $\alpha$, the matrix $ \begin{pmatrix} A & 0 \\ B & A^{(p)} \end{pmatrix} $ is an element of $G_{(M_2,\langle,\rangle_2)} \cap G_{(M,\langle, \rangle)}$ if and only if $BA^{-1}$ is a symmetric matrix and the $(1,1)$-component of the matrix $\mathbb{T}^{-1} B \mathbb{T}$ is $u_2u_1^{-1}(\alpha - \alpha^{p^3})$. The latter condition amounts to Condition (3) (and (3')) by noticing that since $\mathbb{T}^{-1} A \mathbb{T}$ is of the form \[ \begin{pmatrix} \alpha & \ast & \ast \\ & \ast & \ast \\ & \ast & \ast \end{pmatrix} \] where $\alpha$ is the eigenvalue of $A$, we have a commutative diagram \begin{equation}\label{eq:diagpsi} \begin{tikzcd} \ S_3(\mathbb{F}_{p^2}) \rar{\psi_t}\dar{\cdot A} & k \dar{\cdot \alpha} \\ S_3(\mathbb{F}_{p^2})A \rar{\psi_{t,A}} & k \ \end{tikzcd}, \end{equation} where the left vertical arrow is multiplication by $A$ on the right and the right vertical arrow is multiplication by $\alpha$. \end{proof} The following corollary follows immediately from Proposition \ref{prop:GM2intGM} and summarises the results in this subsection. \begin{corollary}\label{cor:GM2intGMsize} We have \begin{equation}\label{eq:GM2intGMsize} \vert G_{(M, \langle , \rangle)} \vert = \vert \{ A \in \mathrm{End}(t) \cap U_3(\mathbb{F}_p) : u_{2}u_{1}^{-1}(1 - \alpha^{p^3 -1}) \in \mathrm{Im}(\psi_t) \} \vert \cdot \vert \ker(\psi_t) \vert. \end{equation} \end{corollary} \subsection{Analysing $\boldsymbol{\mathrm{Im}(\psi_t)}$ and $\boldsymbol{\ker(\psi_t)}$}\ In this subsection, we will make Corollary \ref{cor:GM2intGMsize} more explicit by analysing the image and kernel of the homomorphism $\psi_t$. 
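The counting principle behind this analysis is rank--nullity over $\mathbb{F}_{p^2}$: for any $\mathbb{F}_{p^2}$-linear map on the six-dimensional space $S_3(\mathbb{F}_{p^2})$, the sizes of the image and of the kernel multiply to $p^{12}$. The following toy sketch illustrates this count in Python, with an arbitrary stand-in for $\psi_t$ (not the actual map) over $\mathbb{F}_4\subseteq\mathbb{F}_{64}$, where $\mathbb{F}_{64}$ is modelled as $\mathbb{F}_2[x]/(x^6+x+1)$:

```python
import itertools

MOD = 0b1000011  # x^6 + x + 1, irreducible over F_2, so ints 0..63 model F_64

def gf_mul(a, b):
    """Multiply two elements of F_64 in the polynomial basis."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b1000000:
            a ^= MOD
        b >>= 1
    return r

def gf_pow(a, n):
    r = 1
    while n:
        if n & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        n >>= 1
    return r

# F_4 sits inside F_64 as the fixed field of x -> x^4 (four elements).
F4 = [a for a in range(64) if gf_pow(a, 4) == a]
assert len(F4) == 4

def image_and_kernel(w):
    """For the F_4-linear map (c_1,...,c_6) -> sum c_i*w_i, return (|image|, |kernel|)."""
    counts = {}
    for c in itertools.product(F4, repeat=6):
        s = 0
        for ci, wi in zip(c, w):
            s ^= gf_mul(ci, wi)
        counts[s] = counts.get(s, 0) + 1
    return len(counts), counts[0]

for w in ([1, 2, 4, 8, 16, 32], [1, 1, 1, 1, 1, 1]):
    im, ker = image_and_kernel(w)
    assert im * ker == 4 ** 6  # rank-nullity: |image| * |kernel| = |domain|
```

The first choice of images spans all of $\mathbb{F}_{64}$ (image and kernel both of size $4^3$), while the second has one-dimensional image; in the notation of the text, $\vert\ker(\psi_t)\vert = p^{2(6-d(t))}$ is exactly this count with $q=p^2$ in place of $4$.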
\begin{definition}\label{def:dx} In the notation as above, we set \begin{equation}\label{eq:defdx} d(t) := \dim_{\mathbb{F}_{p^2}}({\rm Im}(\psi _{t})). \end{equation} \end{definition} As $\dim_{\mathbb{F}_{p^2}}(S_3(\mathbb{F}_{p^2})) = 6$, we see that $d(t) \leq 6$, and that \begin{equation}\label{eq:sizeker} \vert \ker(\psi _{t}) \vert = p^{2(6-d(t))}. \end{equation} We prove the following precise result about the values of $d(t)$. \begin{proposition}\label{prop:dx} We have $3 \leq d(t) \leq 6$. When $p = 2$, we have $d(t) = 3$. Let $v = (t_1^2,t_2^2, t_3^2,t_1t_2, t_1t_3, t_2t_3)$ and let \[ \Delta = \left\{ \det\left(v^T, (v^{(p^2)})^T, (v^{(p^4)})^T, \ldots, (v^{(p^{10})})^T\right) = 0 \right\}. \] When $p \neq 2$, we have: \begin{equation}\label{eq:dtvalues} \begin{split} d(t) = 3 \qquad \text{ if and only if } & \qquad t \in C^0(\mathbb{F}_{p^6}); \\ d(t) = 4 \qquad \text{ if and only if } & \qquad t \in C^0(\mathbb{F}_{p^8}); \\ d(t) = 5 \qquad\text{ if and only if } & \qquad t \in \Delta \cap C^0 \setminus \left( C^0(\mathbb{F}_{p^6}) \amalg C^0(\mathbb{F}_{p^8}) \right);\\ d(t) = 6 \qquad \text{ if and only if } & \qquad t \not\in \Delta \cap C^0.\\ \end{split} \end{equation} \end{proposition} \begin{proof} Since $t \in C^0(k)$, we see that $t_i \neq 0$, and without loss of generality we assume that $t_3 = 1$. For $1 \leq i, j \leq 3$, let $I_{ij} $ be the three-by-three matrix whose $(i, j)$-component is one and where all other entries are zero. Then $I_{11}, I_{22}, I_{33}, I_{12}+ I_{21}, I_{13} + I_{31}, I_{23} + I_{32} $ is a basis for $S_3(\mathbb{F}_{p^2})$ over $\mathbb{F}_{p^2}$. We set \begin{equation}\label{eq:wi} \begin{split} &w_1 = \psi_t(I_{11}), w_2 = \psi_t(I_{22}), w_3 = \psi_t(I_{33}), \\ &w_4 = \psi_t(I_{12}+I_{21}), w_5 = \psi_t(I_{13}+I_{31}), w_6 = \psi_t(I_{23}+I_{32}). 
\end{split} \end{equation} \begin{lemma}\label{lem:wi} The $w_i$ in \eqref{eq:wi} satisfy the following relations: \begin{flalign*} &w_1=t_1^2w_3 , \qquad w_2=t_2^2w_3 , \\ &w_4 = 2t_1t_2w_3, \\ &w_5 = 2t_1w_3, \qquad w_6 = 2t_2w_3, \end{flalign*} and $w_3$ is not zero. \end{lemma} \begin{proof}[Proof of lemma] The inverse matrix of $\mathbb{T}$ is \[ \mathbb{T}^{-1}=\det(\mathbb{T})^{-1} \begin{pmatrix} t_2^p-t_2^{p^{-1}} & t_1^{p^{-1}}-t_1^p & t_1^pt_2^{p^{-1}}-t_1^{p^{-1}}t_2^p \\ t_2^{p^{-1}}-t_2 & t_1-t_1^{p^{-1}} & t_1^{p^{-1}}t_2-t_1t_2^{p^{-1}} \\ t_2^p-t_2 & t_1-t_1^p & t_1^pt_2-t_1t_2^p \end{pmatrix}. \] Since for any matrices $M=(m_{ij})$, $N=(n_{ij})$ and $L=(l_{ij})$ the $(1, 1)$-component of $MNL$ is given by $\sum_{i, j}m_{1i}n_{ij}l_{j1}$, we have \begin{align*} w_1=\det(\mathbb{T})^{-1}(t_2^p-t_2^{p^{-1}})t_1; \\ w_2=\det(\mathbb{T})^{-1}(t_1^{p^{-1}}-t_1^p)t_2. \end{align*} Furthermore, $w_3$ is given by \begin{align*} w_3 &= \det(\mathbb{T})^{-1}(t_1^pt_2^{p^{-1}}-t_1^{p^{-1}}t_2^p) \\ &=\det(\mathbb{T})^{-1}t_1^{-1}(t_1^{p+1}t_2^{p^{-1}}-t_1^{p^{-1}+1}t_2^p) \\ &=\det(\mathbb{T})^{-1}t_1^{-1}(t_2^p-t_2^{p^{-1}}). \end{align*} For the last equality, we used equations $t_1^{p+1}+t_2^{p+1}+1=0$ and $t_1^{p^{-1}+1}+t_2^{p^{-1}+1}+1=0$. Similarly, we see that $w_3=\det(\mathbb{T})^{-1}t_2^{-1}(t_1^{p^{-1}}-t_1^p)$. These computations imply the first two relations of the assertion, and since $t_1, t_2 \not \in \mathbb{F}_{p^2}$, we see that $w_3$ is not zero. Furthermore, we compute that \begin{align*} w_4 &=\det(\mathbb{T})^{-1}((t_2^p-t_2^{p^{-1}})t_2+(t_1^{p^{-1}}-t_1^p)t_1) \\ &=\det(\mathbb{T})^{-1}(t_2^{p+1}-t_2^{p^{-1}+1}+t_1^{p^{-1}+1}-t_1^{p+1}) \\ &=2\det(\mathbb{T})^{-1}t_2(t_2^p-t_2^{p^{-1}}); \\ w_5 &=\det(\mathbb{T})^{-1} ((t_2^p-t_2^{p^{-1}})+(t_1^pt_2^{p^{-1}}-t_1^{p^{-1}}t_2^p)t_1) \\ &=\det(\mathbb{T})^{-1}(t_2^p-t_2^{p^{-1}}+t_1^{p+1}t_2^{p^{-1}}-t_1^{p^{-1}+1}t_2^p) \\ &=2\det(\mathbb{T})^{-1}(t_2^p-t_2^{p^{-1}}). 
\end{align*} Similarly, we see that $w_6=2\det(\mathbb{T})^{-1}(t_1^{p^{-1}}-t_1^p)$, so we obtain the remaining relations. \end{proof} When $p \neq 2$, we see from Lemma \ref{lem:wi} that \begin{align*} d(t) = \dim_{\mathbb{F}_{p^2}}\langle w_1, w_2, w_3, w_4, w_5, w_6 \rangle = \dim _{\mathbb{F}_{p^2}}\langle 1, t_1 , t_2, t_1t_2, t_1^2, t_2^2 \rangle. \end{align*} In particular, this implies that \begin{align*} d(t) \geq \dim _{\mathbb{F}_{p^2}}\langle w_3, w_5, w_6 \rangle = \dim _{\mathbb{F}_{p^2}}\langle 1, t_1 , t_2 \rangle =3. \end{align*} When $p=2$, by Lemma \ref{lem:CFp2} and Lemma \ref{lem:wi}, we see that $d(t) =3$. So assume $p \neq 2$, and consider \eqref{eq:dtvalues}. By construction (since $t_3 = 1$), we have $t \in \Delta$ if and only if $\dim _{\mathbb{F}_{p^2}}\langle 1, t_1 , t_2, t_1t_2, t_1^2, t_2^2 \rangle \leq 5$. Hence we see that $t \in \Delta \cap C^0$ if and only if $d(t) \leq 5$, which gives the required statement for $d(t) =6$. Also note that if $d(t) \leq 5$ then there exists some conic $Q/\mathbb{F}_{p^2}$ with equation $a_1+a_2t_1+a_3t_2 + a_4t_1t_2 + a_5t_1^2 + a_6t_2^2 = 0$ such that $t \in C^0 \cap Q$. Similarly if $d(t) \leq 4$ then there exist two independent conics $Q_1, Q_2$ such that $t \in C^0 \cap Q_1 \cap Q_2$. In this case, $Q_1$ and $Q_2$ do not have a common component (even defined over $\overline{\bbF}_p$). Otherwise, the common component must be a line $L$ defined over $\mathbb{F}_{p^2}$ (because we require $Q_1\neq Q_2$), and then $Q_1=L\cup L_1$ for another line $L_1$ defined over $\mathbb{F}_{p^2}$. This implies that $t\in L$ or $t\in L_1$, a contradiction by Lemma~\ref{lem:CFp2}. If $d(t) \leq 3$ there exist three independent conics $Q_1, Q_2, Q_3$ such that $t \in C^0 \cap Q_1 \cap Q_2 \cap Q_3$. If $t \in C^0(\mathbb{F}_{p^{2a}})$ then $d(t) \leq a$, i.e., if $2 \leq \deg_{\mathbb{F}_{p^2}}(t) \leq a$ then $d(t) \leq a$, for any value of $a$. 
This shows in particular that if $t\in C^0(\mathbb{F}_{p^6})$, then $d(t)=3$, cf.~Lemma~\ref{lem:CFp2}. Conversely, since $\vert Q_1 \cap Q_2 \vert \le 4$ by B{\'e}zout's theorem, we see that if $d(t) \leq 4$ then $\deg_{\mathbb{F}_{p^2}}(t) \leq 4$. That is, $t \in C^0(\mathbb{F}_{p^8}) \cup C^0(\mathbb{F}_{p^6})$; note that by Lemma~\ref{lem:Cmaxmim} we have $C^0(\mathbb{F}_{p^4}) = \emptyset$. If $d(t)=3$, then the $\mathbb{F}_{p^2}$-subspace $\<1,t_1,t_2,t_1^2,t_2^2,t_1t_2\>$ is equal to the $\mathbb{F}_{p^2}$-subspace $U$ spanned by $1,t_1,t_2$. Since $t_1U\subseteq U$ and $t_2 U\subseteq U$, the algebra $\mathbb{F}_{p^2}[t_1,t_2]=U$ has dimension three and $\deg_{\mathbb{F}_{p^2}}(t)=3$. This implies that $d(t) = 3$ if and only if $t \in C^0(\mathbb{F}_{p^6})$ and hence $d(t) = 4$ if and only if $t \in C^0(\mathbb{F}_{p^8})$. The statement for $d(t) =5$ now follows. \end{proof} \begin{remark}\label{rem:d3deg3} We provide another proof of the implication $d(t)=3\implies \deg_{\mathbb{F}_{p^2}} (t)=3$, since this information may also be useful. Suppose $P_1,P_2,P_3,P_4\in \bbP^2(K)$, where $K$ is a field, are four distinct points not on the same line. Then the conics passing through them form a $\bbP^1$-family. To see this, suppose $Q$ is represented by $F(t)=0$, where $F(t)=a_1t_1^2+a_2t_2^2+a_3t_3^2 + a_4t_1t_2 + a_5t_1 t_3 + a_6t_2 t_3$. By assumption $P_1, P_2, P_3$ are not on the same line. Choose coordinates for $\bbP^2$ over $K$ such that $P_1=(1:0:0), P_2=(0:1:0)$ and $P_3=(0:0:1)$. Then $a_1=a_2=a_3=0$. The point $P_4=(\alpha_1:\alpha_2:\alpha_3)$ satisfies $(\alpha_1\alpha_2,\alpha_1\alpha_3,\alpha_2 \alpha_3)\neq (0,0,0)$. Thus, $F(P_4)=0$ gives a non-trivial linear relation among $a_4,a_5$, and $a_6$. Suppose now $t\in C^0 \cap Q_1\cap Q_2 \cap Q_3$ with $\mathbb{F}_{p^2}$-linearly independent conics $Q_1, Q_2, Q_3$. It suffices to prove $\vert Q_1\cap Q_2\cap Q_3\vert \le 3$. If $\vert Q_1\cap Q_2\vert \le 3$, then we are done. 
So suppose that $Q_1\cap Q_2=\{P_1,P_2,P_3,P_4\}$. If $Q_3$ contains these four points, then $Q_3$ is a linear combination of $Q_1$ and $Q_2$ over some extension of $\mathbb{F}_{p^2}$ and by descent an $\mathbb{F}_{p^2}$-linear combination of $Q_1$ and $Q_2$, contradiction. Thus, we have shown that $\vert Q_1\cap Q_2\cap Q_3\vert \le 3$. \end{remark} \begin{definition}\label{def:D} Let $\mathcal{P}_{C^0} \simeq C^0\times \mathbb{P}^1$ be the base change $\mathbb{P}_{C}(\mathcal{O}(-1)\oplus \mathcal{O}(1))\times_{C} C^0$ to $C^0$, cf. Remark~\ref{rem:choiceoftandu}. For each $S \in S_3(\mathbb{F}_{p^2})$, we define a morphism $f_S: C^0 \to \mathcal{P}_{C^0}$ via the map $C^0 \ni t=(t_1:t_2:t_3) \mapsto (t^{(p)}, (1:\psi_t(S)^p)) \in C^0\times \mathbb{P}^1$. Observe from the computation in the proof of Proposition~\ref{prop:dx} that $\psi_t(S)$ is a polynomial function in $t_1^{p^{-1}},t_2^{p^{-1}}, t_3^{p^{-1}}$, and hence that $\psi_t(S)^p$ is a polynomial function in $t_1,t_2, t_3$. The image of $f_S$ defines a Cartier divisor $\calD_S\subseteq \calP_{C^0}$, and we let $\calD$ be the horizontal divisor \begin{align*} \calD = \sum _{S \in S_3(\mathbb{F}_{p^2})}\calD_S. \end{align*} For $t \in C^0(k)$, let $\calD_t = \pi ^{-1}(t) \cap \calD$. That is, $(u_1:u_2) \in \calD_t$ if and only if $u_2u_1^{-1} \in \mathrm{Im}(\psi_t)$. \end{definition} \begin{lemma}\label{lem:imDt} Let $t = (t_1:t_2:t_3) \in C^0(k)$. 
\begin{enumerate} \item If $t \not\in C^0(\mathbb{F}_{p^6})$, then \begin{align*} \{ \alpha \in \mathbb{F}_{p^2}^{\times} : u_2u_1^{-1}(1-\alpha^{p^3-1}) \in \mathrm{Im}(\psi_t) \} = \begin{cases} \mathbb{F}_{p^2}^{\times} & \text{ if } (u_1:u_2) \in \calD_t; \\ \mathbb{F}_{p}^{\times} & \text{ otherwise.} \end{cases} \end{align*} \item If $t \in C^0(\mathbb{F}_{p^6})$, then \begin{align*} \{ \alpha \in \mathbb{F}_{p^6}^{\times} : u_2 u_1^{-1}(1-\alpha^{p^3-1}) \in \mathrm{Im}(\psi_t) \} = \begin{cases} \mathbb{F}_{p^6}^{\times} & \text{ if } (u_1:u_2) \in \calD_t; \\ \mathbb{F}_{p^3}^{\times} & \text{ otherwise}. \end{cases} \end{align*} \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate} \item First we note that $\mathbb{F}_{p}^{\times} \subseteq \{ \alpha \in \mathbb{F}_{p^2}^{\times} : u_2u_1^{-1}(1-\alpha^{p^3-1}) \in \mathrm{Im}(\psi_t) \}$. Since $\mathrm{Im}(\psi_t)$ is an $\mathbb{F}_{p^2}$-vector space, we have that if $(u_1:u_2) \in \calD_t$, i.e., if $u_2u_1^{-1} \in \mathrm{Im}(\psi_t)$, then $u_2u_1^{-1}(1-\alpha^{p^3-1}) \in \mathrm{Im}(\psi_t)$ for any $\alpha \in \mathbb{F}_{p^2}^{\times}$. Conversely if $u_2u_1^{-1}(1-\alpha^{p^3-1}) \in \mathrm{Im}(\psi_t)$ for some $\alpha \in \mathbb{F}_{p^2} \setminus \mathbb{F}_p$, then $u_2u_1^{-1} \in \mathrm{Im}(\psi_t)$. \item If $t \in C^0(\mathbb{F}_{p^6})$, then $\mathrm{Im}(\psi_t) \subseteq \mathbb{F}_{p^6}$. Since $\dim_{\mathbb{F}_{p^2}}(\mathbb{F}_{p^6}) = 3$ and $d(t) \geq 3$ by Proposition~\ref{prop:dx}, we must have that $\mathrm{Im}(\psi_t) = \mathbb{F}_{p^6}$. The proof now follows from a similar argument as in (1). 
\end{enumerate} \end{proof} \begin{corollary}\label{cor:GbarMpol} We have \begin{equation*} \begin{split} & \{ A \in \mathrm{End}(t) \cap U_3(\mathbb{F}_p) : u_{2}u_{1}^{-1}(1 - \alpha^{p^3 -1}) \in \mathrm{Im}(\psi_t) \} \simeq \\ & \begin{cases} \{ \alpha \in \mathbb{F}_{p} : \alpha^{p+1} = 1\} & \text{ if } t \notin C^0(\mathbb{F}_{p^6}) \text{ and } u \notin \calD_t; \\ \{ \alpha \in \mathbb{F}_{p^2} : \alpha^{p+1} = 1\} & \text{ if } t \notin C^0(\mathbb{F}_{p^6}) \text{ and } u \in \calD_t; \\ \{ \alpha \in \mathbb{F}_{p^3} : \alpha^{p^3+1} = 1\} & \text{ if } t \in C^0(\mathbb{F}_{p^6}) \text{ and } u \notin \calD_t; \\ \{ \alpha \in \mathbb{F}_{p^6} : \alpha^{p^3+1} = 1\} & \text{ if } t \in C^0(\mathbb{F}_{p^6}) \text{ and } u \in \calD_t. \end{cases} \end{split} \end{equation*} \end{corollary} \begin{proof} This follows from combining Lemma \ref{lem:EndtintU} with Lemma \ref{lem:imDt}. \end{proof} \subsection{Determining $\boldsymbol{[\mathrm{Aut}((M_2,\langle, \rangle_2)):\mathrm{Aut}((M,\langle , \rangle))]}$}\ By Corollary \ref{cor:GM2intGMsize}, Equation \eqref{eq:sizeker}, and the results in the previous subsection, in particular Corollary \ref{cor:GbarMpol}, we immediately obtain the following result. \begin{lemma}\label{lem:GM2intGMexplicit} Define $e(p) = 0$ if $p=2$ and $e(p) = 1$ if $p > 2$. Then \begin{equation}\label{eq:GM2intGMexplicit} \vert G_{(M, \langle , \rangle)} \vert = \begin{cases} 2^{e(p)}p^{2(6-d(t))} & \text{ if } u \notin \calD_t; \\ (p+1)p^{2(6-d(t))} & \text{ if } t \notin C^0(\mathbb{F}_{p^6}) \text{ and } u \in \calD_t; \\ (p^3+1)p^{6} & \text{ if } t \in C^0(\mathbb{F}_{p^6}) \text{ and } u \in \calD_t. \end{cases} \end{equation} \end{lemma} Recall that $d(t) = 3$ when $t \in C^0(\mathbb{F}_{p^6})$. Combining Lemma \ref{lem:GM2intGMexplicit} with Lemma \ref{lem:sizeGM2pol}, and using Remark \ref{rem:indexmodp}, we conclude the following. 
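As a mechanical sanity check (not needed for the argument): in each of the three cases, the order of $G_{(M,\langle,\rangle)}$ from \eqref{eq:GM2intGMexplicit} times the corresponding index $[G_{(M_2,\langle, \rangle_2)} : G_{(M, \langle , \rangle)}]$ of \eqref{eq:muanumber1} must be independent of the case, since the product is simply $\vert G_{(M_2,\langle, \rangle_2)}\vert$. The Python sketch below confirms that all cases give the common value $p^{15}(p+1)(p^2-1)(p^3+1)$ (that this is the common value is read off from the two displays, cf.\ Lemma \ref{lem:sizeGM2pol}):

```python
from fractions import Fraction

def order_GM(p, d, case):
    """|G_{(M,<,>)}| in the three cases of eq. (GM2intGMexplicit)."""
    e = 0 if p == 2 else 1
    if case == 1:                          # u not in D_t
        return 2**e * p**(2 * (6 - d))
    if case == 2:                          # t not in C^0(F_{p^6}), u in D_t
        return (p + 1) * p**(2 * (6 - d))
    return (p**3 + 1) * p**6               # t in C^0(F_{p^6}) (so d = 3), u in D_t

def index_GM2_GM(p, d, case):
    """[G_{M_2} : G_M] in the three cases of eq. (muanumber1)."""
    e = 0 if p == 2 else 1
    if case == 1:
        return Fraction(p**(3 + 2*d) * (p + 1) * (p**2 - 1) * (p**3 + 1), 2**e)
    if case == 2:
        return p**(3 + 2*d) * (p**2 - 1) * (p**3 + 1)
    return p**9 * (p + 1) * (p**2 - 1)

for p in (3, 5, 7):
    expected = p**15 * (p + 1) * (p**2 - 1) * (p**3 + 1)
    # case 2 requires t outside C^0(F_{p^6}), hence d(t) >= 4; case 3 forces d(t) = 3
    for case, d in [(1, 3), (1, 4), (1, 5), (1, 6), (2, 4), (2, 5), (2, 6), (3, 3)]:
        assert order_GM(p, d, case) * index_GM2_GM(p, d, case) == expected
```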
\begin{corollary}\label{cor:muanumber1} We have \begin{equation}\label{eq:muanumber1} \begin{split} [\mathrm{Aut}((M_2,\langle, \rangle_2)):\mathrm{Aut}((M,\langle , \rangle))] = [G_{(M_2,\langle, \rangle_2)} : G_{(M, \langle , \rangle)}] = \\ \begin{cases} 2^{-e(p)}p^{3+2d(t)}(p+1)(p^2-1)(p^3+1) & \text{ if } u \notin \calD_t; \\ p^{3+2d(t)}(p^2-1)(p^3+1) & \text{ if } t \notin C^0(\mathbb{F}_{p^6}) \text{ and } u \in \calD_t; \\ p^{9}(p+1)(p^2-1) & \text{ if } t \in C^0(\mathbb{F}_{p^6}) \text{ and } u \in \calD_t. \end{cases} \end{split} \end{equation} \end{corollary} Now Corollary \ref{cor:sspmassg3}(1) and Corollary \ref{cor:muanumber1} yield the main result of this section, i.e., the mass formula for a supersingular principally polarised abelian threefold $x = (X, \lambda)$ of $a$-number $1$, cf. Theorem \ref{introthm:a1}. \begin{theorem}\label{thm:anumber1} Let $x = (X,\lambda) \in \mathcal{S}_{3,1}$ such that $a(X)=1$. For $\mu \in P(E^3)$, consider the associated polarised flag type quotient $(Y_2,p\mu) \to (Y_1, \lambda_1) \to (X, \lambda)$ which is characterised by the pair $(t,u)$ with $t = (t_1:t_2:t_3)\in C^0(k)$ and $u = (u_1:u_2) \in \mathbb{P}^1(k)$. Let $(M_2, \langle, \rangle_2)$ and $(M, \langle, \rangle)$ be the respective polarised Dieudonn{\'e} modules of $Y_2$ and $X$, let $\calD_t$ be as in Definition \ref{def:D}, and let $d(t)$ be as in Definition \ref{def:dx}. Then \begin{equation}\label{eq:anumber1} \begin{split} & \mathrm{Mass}(\Lambda_x) = \mathrm{Mass}(\Lambda_{3,1}) \cdot [\mathrm{Aut}((M_2,\langle, \rangle_2)):\mathrm{Aut}((M,\langle , \rangle))] =\\ \frac{p^{3}}{2^{10}\cdot 3^4 \cdot 5 \cdot 7} & \begin{cases} 2^{-e(p)}p^{2d(t)}(p^2-1)(p^4-1)(p^6-1) & \text{ if } u \notin \calD_t; \\ p^{2d(t)}(p-1)(p^4-1)(p^6-1) & \text{ if } t \notin C^0(\mathbb{F}_{p^6}) \text{ and } u \in \calD_t; \\ p^6(p^2-1)(p^3-1)(p^4-1) & \text{ if } t \in C^0(\mathbb{F}_{p^6}) \text{ and } u \in \calD_t. 
\end{cases} \end{split} \end{equation} \end{theorem} \section{The automorphism groups} \label{sec:Aut} In this section we discuss the automorphism groups of principally polarised abelian threefolds $(X,\lambda)$ over an algebraically closed field $k\supseteq {\bbF}_p$ with $a(X)=1$. We shall first focus on an open dense locus in $\calP_\mu(a=1)$ (the $a$-number one locus in $\calP_\mu$) in Subsection \ref{sec:notinD} and then discuss a few other cases in Subsections \ref{sec:outsideC6} and \ref{sec:ssp}. To get started, we record some preliminaries in the next subsection. \subsection{Arithmetic properties of definite quaternion algebras over $\boldsymbol{\mathbb{Q}}$}\label{ssec:prelim}\ Let $C_n$ denote the cyclic group of order $n\ge 1$. Let $B_{p,\infty}$ denote the definite quaternion $\mathbb{Q}$-algebra ramified exactly at $\{\infty, p\}$. The class number $h(B_{p,\infty})$ of $B_{p,\infty}$ was determined by Deuring, Eichler and Igusa (cf.~\cite{igusa}) as follows: \begin{equation} \label{eq:hB} h(B_{p,\infty})=\frac{p-1}{12}+\frac{1}{3}\left (1-\left (\frac{-3}{p}\right) \right ) +\frac{1}{4}\left (1-\left (\frac{-4}{p}\right) \right ), \end{equation} where $(\cdot/p)$ is the Legendre symbol. If $h(B_{p,\infty})=1$, then the type number of $B_{p,\infty}$ is one and hence all maximal orders are conjugate. It follows from \eqref{eq:hB} that \begin{equation} \label{eq:h=1} h(B_{p,\infty})=1 \iff p\in \{2,3,5,7,13\}. \end{equation} If $p=2$, the quaternion algebra $B_{2,\infty}\simeq \left(\frac{-1,-1}{\mathbb{Q}}\right)$ is generated by $i,j$ with relations $i^2=j^2=-1$ and $k:=ij=-ji$, and the $\mathbb{Z}$-lattice \begin{equation} \label{eq:O2inf} O_{2,\infty}:=\Span_{\mathbb{Z}} \left \{ 1,i,j, \frac{1+i+j+k}{2} \right \} \end{equation} is a maximal order of $B_{2,\infty}$. 
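The values produced by \eqref{eq:hB} are easy to tabulate; the following Python sketch (assuming the standard Kronecker-symbol conventions $(-3/3)=0$ and $(-4/2)=0$ in \eqref{eq:hB}) reproduces \eqref{eq:h=1}:

```python
from fractions import Fraction

def kronecker(a, p):
    """Kronecker symbol (a/p) for a prime p, including p = 2."""
    if p == 2:
        if a % 2 == 0:
            return 0
        return 1 if a % 8 in (1, 7) else -1
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1  # Euler's criterion

def h(p):
    """Class number of B_{p,infinity} by the Deuring-Eichler-Igusa formula (eq:hB)."""
    return (Fraction(p - 1, 12)
            + Fraction(1, 3) * (1 - kronecker(-3, p))
            + Fraction(1, 4) * (1 - kronecker(-4, p)))

primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
assert all(h(p).denominator == 1 for p in primes)        # h(B_{p,infty}) is an integer
assert [p for p in primes if h(p) == 1] == [2, 3, 5, 7, 13]  # eq:h=1
```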
Moreover, \begin{equation} \label{eq:E24} O^\times_{2,\infty}=\left \{\pm 1, \pm i, \pm j, \pm k, \frac{\pm 1 \pm i \pm j \pm k}{2}\right \}=:E_{24}, \end{equation} and one has $E_{24}\simeq \SL_2(\mathbb{F}_{3})$ and $E_{24}/\{\pm 1\}\simeq A_4$. If $p=3$, the quaternion algebra $B_{3,\infty}\simeq \left(\frac{-1,-3}{\mathbb{Q}}\right)$ is generated by $i,j$ with relations $i^2=-1, j^2=-3$ and $k:=ij=-ji$, and the $\mathbb{Z}$-lattice \begin{equation} \label{eq:O3inf} O_{3,\infty}:=\Span_{\mathbb{Z}} \left \{ 1,i,\frac{1+j}{2}, \frac{i(1+j)}{2} \right \} \end{equation} is a maximal order of $B_{3,\infty}$. Moreover, \begin{equation} \label{eq:T12} O^\times_{3,\infty}=\<i,\zeta_6\>=:T_{12}, \quad \zeta_6=(1+j)/2, \end{equation} and one has $T_{12}\simeq C_3 \rtimes C_4$ and $T_{12}/\{\pm 1\}\simeq D_3$, the dihedral group of order six. If $p \ge 5$, then $O^\times \in \{C_2,C_4,C_6\}$ for any maximal order $O$ in $B_{p,\infty}$ \cite[V Proposition 3.1, p.~145]{vigneras}. Fix a maximal order $O$ in $B_{p,\infty}$ and let $h(O,C_{2n})$ be the number of right $O$-ideal classes $[I]$ with $O_\ell(I)^\times \simeq C_{2n}$, where $O_\ell(I)$ is the left order of $I$. Then (see \cite{igusa}) \begin{equation} \label{eq:hC46} h(O,C_4)=\frac{1}{2}\left (1-\left (\frac{-4}{p}\right) \right ) \quad \text{and}\quad h(O,C_6)=\frac{1}{2}\left (1-\left (\frac{-3}{p}\right) \right ). \end{equation} \begin{lemma}\label{lm:UnO} \ \begin{enumerate} \item Let $Q$ be a definite quaternion $\mathbb{Q}$-algebra and $O$ a $\mathbb{Z}$-order in $Q$, and let $n\ge 1$ be a positive integer. Then the integral quaternion hermitian group $U(n,O)=\{A\in \Mat_n(O) : A\cdot A^*=\bbI_n\}$ is equal to the permutation unit group $\diag(O^\times, \dots, O^\times) \cdot S_n$. \item Let $O$ be a maximal order in $B_{2,\infty}$. Let $m_2:U(n,O)\to \GL_n(O)\to \GL_n(O/2O)$ be the reduction-modulo-$2$ map. Then $\ker(m_2)=\diag(\{\pm 1\},\dots, \{\pm 1\})\simeq C_2^n$. 
\end{enumerate} \end{lemma} \begin{proof} \begin{enumerate} \item Note that $O$ is stable under the involution $*$ since $x^*=\Tr x-x$ and $\Tr x\in \mathbb{Z}$ for any $x$ in $O$. Let $A=(a_{ij})\in U(n,O)$. Then since $A A^*=\bbI_n$, we have $\sum_k a_{ik} a_{ik}^*=1$ for any $1\le i\le n$. Since each $a_{ik} a_{ik}^*$ is a non-negative integer, hence equal to $0$ or $1$, for any $1 \leq i \leq n$ there is only one integer $1\le k\le n$ such that $a_{ik}\neq 0$ and $a_{ik}\in O^\times$. On the other hand, since $A^* A=\bbI_n$, for any $1\le k\leq n$, there is only one integer $1\le i\le n$ such that $a_{ik}\neq 0$ and $a_{ik}\in O^\times$. Thus, $A\in \diag(O^\times, \dots, O^\times) \cdot S_n$. Checking the reverse containment $\diag(O^\times, \dots, O^\times) \cdot S_n \subseteq U(n,O)$ is straightforward. \item By \eqref{eq:h=1}, we may assume that $O=O_{2,\infty}$. Since the diagonal entries of elements in $\ker(m_2)$ are all nonzero, by part (1) we find $\ker(m_2) \subseteq \diag (O^\times, \dots, O^\times)$. Therefore, it suffices to show that the kernel of the reduction-modulo-$2$ map $m_2: O^\times \to (O/2O)^\times$ is isomorphic to $C_2$. Using \eqref{eq:E24} and $2O=\{a_1+a_2 i+a_3 j+a_4 k : a_i\in \mathbb{Z},\ a_1\equiv a_2 \equiv a_3\equiv a_4 \pmod 2 \}$, one checks that indeed $\ker(m_2)=\{\pm 1\} \subseteq O^\times$. \end{enumerate} \end{proof} \begin{lemma}\label{lm:Vp} Let $D_p$ be the quaternion division ${\bbQ}_p$-algebra and $O_p$ its maximal order. Let $n \geq 1$ be a positive integer. Let $\Pi$ be a uniformiser of $O_p$, and put $V_p:=1+\Pi \Mat_n(O_p)\subseteq \GL_n(O_p)$. If $p\ge 5$, then the torsion subgroup $(V_p)_{\mathrm{tors}}$ of $V_p$ is trivial. \end{lemma} \begin{remark} Before giving the proof, let us note that $p\geq 5$ is best possible. Indeed, when $p=3$, we have \[ D_3=\left(\frac{-1,-3}{\mathbb{Q}_3} \right ), \qquad O_3=\mathbb{Z}_3[i, (1+j)/2]=\mathbb{Z}_3[i, j], \qquad \Pi=j.\] Thus, we find the torsion element $-(1+j)/2\in 1+\Pi O_p$. 
\end{remark} \begin{proof}[Proof of Lemma~\ref{lm:Vp}] For simplicity, write $(\Pi)$ for the two-sided ideal in $\Mat_n(O_p)$ generated by $\Pi$. We must show that any $\alpha\in (V_p)_{\mathrm{tor}}$ must equal $1$. Since $V_p$ is a pro-$p$ group, we have $\alpha^{p^r}=1$ for some $r\geq 1$. By induction, we may assume that $\alpha^p=1$. Suppose that $\alpha\neq 1$ and write $\alpha=1+\Pi\beta$ for some nonzero $\beta\in\Mat_n(O_p) $. Necessarily, $\beta\not\in (\Pi)$, for otherwise $\alpha\equiv 1\pmod{p}$, which implies that $\alpha=1$ by a lemma of Serre \cite[p.~207]{mumford:av}. Since $p\geq 5$ and $p\mid \binom{p}{i}$ for all $1\leq i\leq p-1$, we find \begin{equation} \label{eq:1} 1=\sum_{i=0}^p \binom{p}{i}(\Pi\beta)^i\equiv 1+p\Pi\beta \pmod{\Pi^4}. \end{equation} This implies that $\beta\in (\Pi)$, which leads to a contradiction. \end{proof} \subsection{The region outside the divisor $\boldsymbol{\calD}$} \label{sec:notinD}\ Recall from Subsection \ref{ssec:mod} that $E$ is a supersingular elliptic curve over $\mathbb{F}_{p^2}$ such that $\pi_E = -p$. Let $\mu_{\mathrm{can}}\in P(E^3)$ be the threefold self-product of the canonical principal polarisation on $E$; this is also called the canonical polarisation on $E^3$. \begin{theorem}\label{thm:gen_autgp} Let $x = (X,\lambda) \in \mathcal{S}_{3,1}(k)$ with $a(X)=1$. For $\mu \in P(E^3)$, consider the associated polarised flag type quotient $(Y_2,p \mu) \to (Y_1, \lambda_1) \to (X, \lambda)$ which is characterised by the pair $(t,u)$ with $t = (t_1:t_2:t_3)\in C^0(k)$ and $u = (u_1:u_2) \in \mathbb{P}^1(k)$. Let $(M_2, \langle, \rangle_2)$ and $(M, \langle, \rangle)$ be the respective polarised Dieudonn{\'e} modules of $(Y_2,\mu)$ and $(X,\lambda)$, let $\calD_t$ be as in Definition \ref{def:D} and let $d(t)$ be as in Definition \ref{def:dx}. Assume that $(t,u)\not \in \calD$, that is, $u\not\in \calD_t$. \begin{enumerate} \item If $p=2$, then $\Aut(X,\lambda)\simeq C_2^3$. 
\item If $p\ge 5$, or $p=3$ and $d(t)=6$, then $\Aut(X,\lambda)\simeq C_2$. \end{enumerate} \end{theorem} \begin{proof} By Proposition~\ref{prop:miniso}, $(Y_2,p\mu)\to (X,\lambda)$ is the minimal isogeny. Therefore, \begin{equation} \label{eq:AutXpol} \Aut(X,\lambda)=\{h\in \Aut(Y_2,\mu): m_p(h)\in G_{(M,\<\,,\>)} \}. \end{equation} By Proposition~\ref{prop:GM2intGM}, we have an exact sequence \begin{equation}\label{eq:GMpolmodPi} 1\to \ker(\psi_t) \xrightarrow{} G_{(M,\langle, \rangle)} \xrightarrow{m_{\Pi}} \overline{G}_{(M,\langle, \rangle)} \to 1. \end{equation} \begin{enumerate} \item A direct calculation using the mass formula (cf. Corollary~\ref{cor:sspmassg3} and Lemma~\ref{lm:UnO}) shows \[ \Mass(\Lambda_{3,1})=\frac{1}{2^{10}\cdot 3^4} =\frac{1}{24^3\cdot 3!}=\frac{1}{\vert \Aut(E^3,\mu_{\rm can})\vert },\] and hence $\vert \Lambda_{3,1}\vert =1$. Thus, we may assume that $(Y_2,\mu)=(E^3,\mu_{\mathrm{can}})$, and we have $\Aut(Y_2,\mu)=\diag(O^\times,O^\times,O^\times)\cdot S_3$ by Lemma~\ref{lm:UnO} with $O=\End(E)$. As $u\not\in \calD_t$, Corollary~\ref{cor:GbarMpol} yields $\overline G_{(M,\<\,,\>)}=\{\pm 1\}=1$. We see from the proof of Proposition~\ref{prop:dx} that $\ker(\psi_t)$ is the $\mathbb{F}_{p^2}$-subspace generated by $I_{12}+I_{21}$, $I_{13}+I_{31}$ and $I_{23}+I_{32}$ (in the notation of that proof). Therefore, \begin{equation} \label{eq:GMp2} G_{(M,\<\,,\>)}=\left\{ \begin{pmatrix} \bbI_3 & 0 \\ S & \bbI_3 \end{pmatrix}: S=(s_{ij})\in S_3(\mathbb{F}_{p^2}), s_{ii}=0 \ \forall 1\le i\le 3\right\}. \end{equation} Let $h\in \Aut(X,\lambda)\subseteq \diag(O^\times,O^\times,O^\times)\cdot S_3$. Since $m_2(h)$ has non-zero diagonal entries, $h\in \diag(O^\times,O^\times,O^\times)$. One deduces $m_2(h)=1$ from \eqref{eq:GMp2}. Thus, $h\in \ker(m_2)=C_2^3$, by Lemma~\ref{lm:UnO}. On the other hand, $\ker(m_2)\subseteq \Aut(X,\lambda)$ from \eqref{eq:AutXpol}. This proves (1). \item Assume $p\ge 5$. 
As $u\not\in \calD_t$, Corollary~\ref{cor:GbarMpol} implies that $\overline G_{(M,\<\,,\>)}=\{\pm 1\}$. Lemma~\ref{lm:Vp} implies that the map $m_\Pi:\Aut(X,\lambda)\to \overline G_{(M,\<\,,\>)}$ is injective, because $\ker(m_{\Pi})$ is contained in $(V_p)_{\rm tors}$. Thus, $\Aut(X,\lambda)\simeq C_2$. Now assume $p=3$ and $d(t)=6$. In this case $G_{(M,\<\,,\>)}=\{\pm 1\}$ follows from \eqref{eq:GMpolmodPi} and Corollary~\ref{cor:GbarMpol}. By a lemma of Serre \cite[p.~207]{mumford:av}, the map $m_3:\Aut(X,\lambda)\to G_{(M,\<\,,\>)}$ is injective and hence $\Aut(X,\lambda)\simeq C_2$. \end{enumerate} \end{proof} \begin{corollary}\label{cor:size_Lx} Let the notation and assumptions be as in Theorem~\ref{thm:gen_autgp}. \begin{enumerate} \item If $p=2$, then $\vert \Lambda_x\vert =4$. \item If $p=3$ and $d(t)=6$, then $\vert \Lambda_x\vert =3^{11}\cdot 13$. \item If $p\ge 5$, then \begin{equation} \label{eq:Lx_p>3} \vert \Lambda_x\vert =\frac{p^{3+2d(t)} (p^2-1)(p^4-1)(p^6-1)} {2^{10}\cdot 3^4 \cdot 5 \cdot 7}. \end{equation} \end{enumerate} \end{corollary} \begin{proof} All statements follow from Theorems \ref{thm:anumber1} and \ref{thm:gen_autgp}. For $p=2$, we have $\Aut(X,\lambda)\simeq C_2^3$ for each $(X,\lambda)\in \Lambda_x$ and hence \begin{equation} \label{eq:Lx_p=2} \vert \Lambda_x\vert =\frac{2^3\cdot 2^9 \cdot 3\cdot (3\cdot 5)\cdot (3^2\cdot 7)} {2^{10}\cdot 3^4 \cdot 5 \cdot 7}=4. \end{equation} For $p=3$ and $d(t)=6$, we have $\Aut(X,\lambda)\simeq C_2$ for each $(X,\lambda)\in \Lambda_x$ and hence \begin{equation} \label{eq:Lx_p=3} \vert \Lambda_x\vert =\frac{3^{3+2d(t)} \cdot 2^3\cdot (2^4\cdot 5)\cdot (2^3\cdot 7\cdot 13)}{2^{10}\cdot 3^4 \cdot 5 \cdot 7}= 3^{2d(t)-1}\cdot 13=3^{11}\cdot 13. \end{equation} The same argument gives \eqref{eq:Lx_p>3} for $p\ge 5$. 
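The two numerical evaluations \eqref{eq:Lx_p=2} and \eqref{eq:Lx_p=3} are easy to re-check mechanically; the sketch below simply re-evaluates the first case of \eqref{eq:anumber1} with exact fractions and multiplies by the automorphism group orders $8$ and $2$ established above:

```python
from fractions import Fraction

def mass_case1(p, d):
    """Mass(Lambda_x) in the case u not in D_t, first line of eq. (anumber1)."""
    e = 0 if p == 2 else 1
    pref = Fraction(p**3, 2**10 * 3**4 * 5 * 7)
    return pref * Fraction(p**(2*d) * (p**2 - 1) * (p**4 - 1) * (p**6 - 1), 2**e)

# p = 2: every class has automorphism group C_2^3 of order 8, and d(t) = 3.
assert mass_case1(2, 3) * 8 == 4
# p = 3, d(t) = 6: every class has automorphism group C_2 of order 2.
assert mass_case1(3, 6) * 2 == 3**11 * 13
```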
\end{proof} A $g$-dimensional principally polarised supersingular abelian variety $(X,\lambda)$ over $k$ is said to be \emph{generic} if the moduli point ${\rm Spec}\, k \to \mathcal{S}_{g,1}$ factors through a generic point of $\mathcal{S}_{g,1}$. Recall that the supersingular locus $\mathcal{S}_{g,1}\subseteq \calA_{g,1}\otimes \overline{\bbF}_p$ is a scheme of finite type over $\overline{\bbF}_p$ which is defined over ${\bbF}_p$. Moreover, every geometrically irreducible component of $\mathcal{S}_{g,1}$ is defined over $\mathbb{F}_{p^2}$, cf.~\cite[Section 2.2]{yu:fod_ss}. Oort's conjecture \cite[Problem 4]{edixhoven-moonen-oort} asserts that for any integer $g\ge 2$ and any prime number $p$, every generic $g$-dimensional principally polarised supersingular abelian variety $(X,\lambda)$ over $k$ of characteristic $p$ has automorphism group $\{\pm 1\}$. Oort's conjecture fails with counterexamples in $(g,p)=(2,2)$ or $(g,p)=(3,2)$; see \cite{oort2,ibukiyama}. For fixed $g\ge 2$ and prime number $p$, consider the refined Oort conjecture: \begin{itemize} \item[$(\mathrm{O})_{g,p}$:] Every generic $g$-dimensional principally polarised supersingular abelian variety $(X,\lambda)$ over $k$ of characteristic $p$ has automorphism group $\{\pm 1\}$. \end{itemize} \begin{corollary}\label{cor:Oortconj} Let $(X,\lambda)$ be a generic principally polarised supersingular abelian threefold over $k$ of characteristic $p>0$. Then \[ \Aut(X,\lambda)\simeq \begin{cases} C_2^3 & \text{for $p=2$;} \\ C_2 & \text{for $p\ge 3$.} \end{cases} \] \end{corollary} \begin{proof} This follows immediately from Theorem~\ref{thm:gen_autgp}. \end{proof} In other words, Oort's Conjecture $(\mathrm{O})_{3,p}$ holds precisely when $p\neq 2$. \begin{remark}\label{rem:Autp=23} \begin{enumerate} \item It is shown \cite[Theorem 5.6, p.~270]{oort2} that if $(X,\lambda)$ is a principally polarised supersingular abelian threefold over $k$ of characteristic $2$, then $\Aut(X,\lambda)\supseteq C_2^3$. 
By Corollary~\ref{cor:Oortconj}, the smallest group $C_2^3$ also appears as $\Aut(X,\lambda)$ for some $(X,\lambda)$. We have seen that the unique member $(E^3,\mu_{\rm can})$ in $\Lambda_{3,1}$ has automorphism group $E_{24}^3\rtimes S_3$ (of order $2^{10}\cdot 3^4$). We expect that $2^{10}\cdot 3^4$ is the maximal order of automorphism groups of \emph{all} principally polarised abelian threefolds over $k$ of any characteristic (including zero). \item According to Hashimoto's result \cite{hashimoto:g=3}, we have $\vert \Lambda_{3,1}\vert =2$ for $p=3$. In this case, we have two isomorphism classes, represented by $(E^3,\mu_{\mathrm{can}})$ and $(E^3,\mu)$. Using Lemma~\ref{lm:UnO}, we compute $\vert\Aut(E^3,\mu_{\mathrm{can}})\vert=2^7\cdot 3^4$ and conclude $\vert\Aut(E^3,\mu)\vert=2^7\cdot 3^4$ from the mass formula $\Mass(\Lambda_{3,1})=1/(2^6\cdot 3^4)$. \end{enumerate} \end{remark} \subsection{The region where $\boldsymbol{t\not\in C(\mathbb{F}_{p^6})}$ and $\boldsymbol{(t,u)\in \calD}$. }\label{sec:outsideC6}\ In this subsection we consider the region $(t,u)\in \calD$ and assume that $t\not\in C(\mathbb{F}_{p^6})$. This extends the region considered in Subsection~\ref{sec:notinD}. \begin{lemma}\label{lm:C_p+1} Let $(X,\lambda)\in \mathcal{S}_{3,1}(k)$ with $a(X)=1$. If $p\ge 3$ and $\Aut(X,\lambda)\subseteq C_{p+1}$, then $\Aut(X,\lambda)\subseteq \{C_2,C_4,C_6\}$. \end{lemma} \begin{proof} Suppose that $\Aut(X,\lambda)=C_{2d}$ with $2d\vert (p+1)$. Then we have a ring homomorphism $\mathbb{Z}[C_{2d}]\to \End(X)$ which maps $C_{2d}$ bijectively to $\Aut(X,\lambda)$. The $\mathbb{Q}$-algebra homomorphism \[ \mathbb{Q}[C_{2d}]=\prod_{d'\vert 2d} \mathbb{Q}[\zeta_{d'}] \to \End^0(X)=\Mat_3(B_{p,\infty}) \] factors through an injective $\mathbb{Q}$-algebra homomorphism \[ \prod_{i=1}^r \mathbb{Q}[\zeta_{d_i}]\hookrightarrow \End^0(X)=\Mat_3(B_{p,\infty}), \] where $\{d_i \vert 2d\} \subseteq \{d' \vert 2d\}$. 
Since the composition gives an embedding $C_{2d}\hookrightarrow \Aut(X)$, the integers $\{d_i\}$ satisfy $\mathrm{lcm}(d_1,\dots, d_r)=2d$. Since $p\nmid 2d$, the algebra ${\bbZ}_p[C_{2d}]$ is {\'e}tale over ${\bbZ}_p$ and is the maximal order in ${\bbQ}_p[C_{2d}]$. This gives rise to an embedding $\prod_{i=1}^r \mathbb{Z}[\zeta_{d_i}]\otimes {\bbZ}_p \hookrightarrow \End(X)\otimes {\bbZ}_p \simeq \End(X[p^\infty])$. Thus, the decomposition $X[p^\infty]=H_1\times \dots \times H_r$ into a product of supersingular $p$-divisible groups shows $a(X)\ge r$ and hence $r=1$. Therefore, there is a $\mathbb{Q}$-algebra embedding of $\mathbb{Q}(\zeta_{2d})$ into $\Mat_{3}(B_{p,\infty})$. This implies that $\varphi(2d)\vert 6$ (where $\varphi$ denotes Euler's totient function) and hence $2d\in \{2,4,6,14,18\}$. If $2d=14$, then $p\equiv -1 \pmod 7$ and $\ord(p)=2$ in $(\mathbb{Z}/7\mathbb{Z})^\times$. This gives rise to an embedding $\mathbb{Z}[\zeta_{14}]\otimes {\bbZ}_p=\mathbb{Z}_{p^2}\times \mathbb{Z}_{p^2}\times \mathbb{Z}_{p^2} \hookrightarrow \End(X[p^\infty])$ and hence $a(X)=3$, a contradiction. If $2d=18$, then $p\equiv -1 \pmod 9$ and $\ord(p)=2$ in $(\mathbb{Z}/9\mathbb{Z})^\times$. Similarly, we get an embedding $\mathbb{Z}[\zeta_{18}]\otimes {\bbZ}_p=\mathbb{Z}_{p^2}\times \mathbb{Z}_{p^2}\times \mathbb{Z}_{p^2}\hookrightarrow \End(X[p^\infty])$ and $a(X)=3$, again a contradiction. \end{proof} Recall that $\mathbb{F}_{p^2}^1:=\{\alpha\in \mathbb{F}_{p^2}^\times: \alpha^{p+1}=1\}\simeq C_{p+1}$ denotes the group of norm one elements in~$\mathbb{F}_{p^2}^\times$. \begin{theorem}\label{thm:inD} Let the notation be as in Theorem~\ref{thm:gen_autgp}. Assume that $(t,u)\in \calD$ and $t\not \in C(\mathbb{F}_{p^6})$. \begin{enumerate} \item If $p=2$, then $\Aut(X,\lambda)\simeq C_2^3 \times C_3$. \item If $p=3$ and $d(t)=6$, then $\Aut(X,\lambda)\in \{C_2,C_4\}$.
\item For $p\ge 5$, we have the following cases: \begin{itemize} \item [(i)] If $p\equiv -1 \pmod {4}$, then $\Aut(X,\lambda)\in \{C_2,C_4\}$. \item [(ii)] If $p\equiv -1 \pmod {3}$, then $\Aut(X,\lambda)\in \{C_2,C_6\}$. \item [(iii)] If $p\equiv 1 \pmod {12}$, then $\Aut(X,\lambda)\simeq C_2$. \end{itemize} \end{enumerate} \end{theorem} \begin{proof} \begin{enumerate} \item As in Theorem~\ref{thm:gen_autgp}(1), we may assume that $(Y_2,\mu)=(E^3,\mu_{\mathrm{can}})$, and by Lemma~\ref{lm:UnO} we have $\Aut(Y_2,\mu)=\diag(O^\times,O^\times,O^\times)\cdot S_3$. Then \[ \begin{split} \Aut(X,\lambda)& =\left \{h\in \Aut(Y_2, \mu): m_2(h)= \begin{pmatrix} a & & \\ & a & \\ & & a \end{pmatrix}, a\in \mathbb{F}_4^1\right \} \\ &=\left \{h\in \diag(O^\times,O^\times,O^\times): m_2(h)= \begin{pmatrix} a & & \\ & a & \\ & & a \end{pmatrix}, a\in \mathbb{F}_4^1 \right \} \\ &=\left \{ \begin{pmatrix} \pm w^j & & \\ & \pm w^j & \\ & & \pm w^j \end{pmatrix}: 0\le j \le 5 \right \}\simeq C_2^3\times C_3, \end{split} \] where $w=(1+i+j+k)/2$ satisfies $w^6=1$. \item In this case, $\overline G_{(M,\<\,,\>)}=\mathbb{F}_{9}^1\simeq C_4$ by Corollary~\ref{cor:GbarMpol}. The proof then follows from the fact that the reduction-modulo-$3$ map is injective. \item In this case, $\overline G_{(M,\<\,,\>)}=\mathbb{F}_{p^2}^1\simeq C_{p+1}$ by Corollary~\ref{cor:GbarMpol}. It follows from Lemma~\ref{lm:Vp} that $\Aut(X,\lambda)$ can be identified with a subgroup of $\overline G_{(M,\<\,,\>)}\simeq C_{p+1}$ as $p\ge 5$. By Lemma~\ref{lm:C_p+1}, $\Aut(X,\lambda)\in \{C_2,C_4,C_6\}$. The assertions (i), (ii) and (iii) follow from this. \end{enumerate} \end{proof} Write $\calD_\mu$ for $\calD\subseteq \calP_{\mu}(a=1)$ to emphasise its dependence on $\mu\in P(E^3)$. Recall that $\Psi_\mu: \calP_{\mu}\to \mathcal{S}_{3,1}$ is the map $(Y_\bullet,\rho_\bullet)\mapsto (Y_0,\lambda_0)$.
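The case analysis in the proof of Lemma~\ref{lm:C_p+1} relied on the enumeration of the even integers $2d$ with $\varphi(2d)\mid 6$. As an illustrative brute-force check (not part of the argument), this list can be confirmed numerically:

```python
from math import gcd

def phi(n):
    # Euler's totient by direct count; adequate for this small range.
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

# Even n with phi(n) dividing 6.  Since phi(n) >= sqrt(n/2), any n with
# phi(n) <= 6 satisfies n <= 72, so the search range below is exhaustive.
even_orders = [n for n in range(2, 73, 2) if 6 % phi(n) == 0]
print(even_orders)  # → [2, 4, 6, 14, 18]
```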
Put $\calD_{\mu, C(\mathbb{F}_{p^6})^c}:=\{(t,u)\in \calD_\mu: t\not\in C(\mathbb{F}_{p^6})\}$. Let $\Lambda_1$ denote the set of $\mathbb{F}_{p^2}$-isomorphism classes of supersingular elliptic curves $E'$ over $\mathbb{F}_{p^2}$ with Frobenius endomorphism $\pi_{E'}=-p$. This set is in bijection with the set $\Cl(B_{p,\infty})$ of right $O$-ideal classes for a fixed maximal order $O$ in $B_{p,\infty}$; see~\cite{deuring} (also cf.\cite[Theorem 2.1]{xueyu}). \begin{proposition}\label{prop:C246}\ \begin{enumerate} \item If $p=3$ and $d(t)=6$, then for all $(X,\lambda)\in \Psi_{\mu}(\calD_{\mu, C(\mathbb{F}_{p^6})^c})$ with $\mu = \mu_{\mathrm{can}}$, one has $\Aut(X,\lambda)\simeq C_4$. \item If $p\ge 5$ and $p\equiv 3 \pmod 4$, then there exists $\mu\in P(E^3)$ such that for all $(X,\lambda)\in \Psi_\mu(\calD_{\mu, C(\mathbb{F}_{p^6})^c})$ one has $\Aut(X,\lambda)\simeq C_4$. \item If $p\ge 5$ and $p\equiv 2 \pmod 3$, then there exists $\mu\in P(E^3)$ such that for all $(X,\lambda)\in \Psi_\mu(\calD_{\mu, C(\mathbb{F}_{p^6})^c})$ one has $\Aut(X,\lambda)\simeq C_6$. \item If $p\ge 11$, then there exists $\mu\in P(E^3)$ such that for all $(X,\lambda)\in \Psi_\mu(\calD_{\mu, C(\mathbb{F}_{p^6})^c})$ one has $\Aut(X,\lambda)\simeq C_2$. \end{enumerate} \end{proposition} \begin{proof} We use the results from Subsection~\ref{ssec:prelim}. If $p=3$, then $O^\times=\Aut(E)=\<i,\zeta_6\>$. If $p\ge 5$ and $p\equiv 2 \pmod 3$ (resp. $p\equiv 3 \pmod 4$), there exists a unique supersingular elliptic curve $E'$ in $\Lambda_1$ such that $O^\times:=\Aut(E')\simeq C_6$ (resp.~$C_4$). If $p\ge 11$, then there exists a supersingular elliptic curve $E'$ in $\Lambda_1$ such that $O^\times:=\Aut(E')\simeq C_2$. Note that if $p\ge 11$ then either $h(B_{p,\infty})\ge 2$ or $p\equiv 1 \pmod {12}$. 
For cases (2), (3), and (4) we choose a polarisation $\mu\in P(E^3)$ such that $(E^3,\mu)\simeq (E'^3,\mu'_{\mathrm{can}})$, where $\mu'_{\mathrm{can}}$ is the canonical polarisation on $E'^3$ as before. (In case (1) $\mu = \mu_{\mathrm{can}}$ is the unique choice of polarisation.) Then using the same argument as in Theorem~\ref{thm:inD}, the automorphism group $\Aut (X,\lambda)$ for $(X,\lambda)\in \Psi_\mu(\calD_{\mu, C(\mathbb{F}_{p^6})^c})$ consists of elements of the form $\diag(a,a,a)$ with $a\in O^\times$ satisfying $m_3(a)\in \mathbb{F}_9^1$ if $p=3$ (resp. $m_\Pi(a)\in \mathbb{F}_{p^2}^1$ if $p\ge 5$). If $p=3$, we have $m_3(\<i\>)=C_4$. If $p\equiv 3 \pmod 4$, we have $m_\Pi(\<i\>)=C_4$. If $p\equiv 2 \pmod 3$, we have $m_\Pi(\<\zeta_6\>)=C_6$. Thus, $\Aut(X,\lambda)\simeq C_4$ for $p\equiv 3 \pmod 4$ and $\Aut(X,\lambda)\simeq C_6$ for $p\equiv 2 \pmod 3$. In case (4), we have $\Aut(X,\lambda)\simeq C_2$. \end{proof} \begin{remark} \begin{enumerate} \item Given Proposition~\ref{prop:C246}, it remains to check whether the group $C_2$ also appears as $\Aut(X,\lambda)$ in the region $\Psi_\mu(\calD_{\mu, C(\mathbb{F}_{p^6})^c})$ for some $\mu\in P(E^3)$ when $p~=~3,5,7$. \item We assume the condition $d(t)=6$ when $p=3$ in Theorems \ref{thm:gen_autgp} and \ref{thm:inD}. It remains to determine which other automorphism groups occur if this condition is dropped. \end{enumerate} \end{remark} \subsection{The superspecial case} \label{sec:ssp}\ As we have seen in the previous subsection, to investigate the automorphism groups in some special region of $\calP_{\mu}(a=1)$, the knowledge of automorphism groups arising from the superspecial locus $\Lambda_{3,1}$ also plays an important role. In this subsection, we discuss only preliminary results on the automorphism groups of members in $\Lambda_{3,1}$. A complete list of all possible automorphism groups requires much more work; see Question (2) below. We briefly recall some results.
For $p=2$, we have $\vert\Lambda_{3,1}\vert=1$ and the unique isomorphism class represented by $(X,\lambda)$ has automorphism group $E_{24}^3\rtimes S_3$. For $p=3$, we have $\vert\Lambda_{3,1}\vert=2$ by Hashimoto's result. In this case, the two isomorphism classes are represented by $(E^3,\mu_{\rm can})$ and $(E^3,\mu)$, respectively, and we have $\Aut(E^3,\mu_{\rm can})=T_{12}^3\rtimes S_3$ (of order $2^7\cdot 3^4$) and $\vert\Aut(E^3,\mu)\vert=2^7\cdot 3^4$, cf. Remark~\ref{rem:Autp=23}. For $p\ge 5$, the following non-abelian groups occur: \[ \begin{cases} C_2^3 \rtimes S_3 & \text{for $p\equiv 1 \pmod {12}$};\\ C_4^3 \rtimes S_3 & \text{for $p\equiv 3\pmod {4}$};\\ C_6^3 \rtimes S_3 & \text{for $p\equiv 2\pmod {3}$}, \end{cases} \] cf.~Lemma~\ref{lm:UnO}. Unlike the $a$-number one case, it is more difficult to construct a member $(X,\lambda)$ in $\Lambda_{3,1}$ such that $\Aut(X,\lambda)\simeq C_2$. However, it is expected that as $p$ tends to infinity, most members of $\Lambda_{g,1}$ have automorphism group $C_2$. The following result confirms this expectation for $g=3$, based on Hashimoto's result~\cite{hashimoto:g=3}. \begin{proposition}\label{prop:asympt} Let $\Lambda_{3,1}(C_2):=\{(X,\lambda)\in \Lambda_{3,1}: \Aut(X,\lambda)\simeq C_2\}$. Then \begin{equation} \label{eq:asympt} \frac{\vert \Lambda_{3,1}(C_2)\vert }{\vert \Lambda_{3,1}\vert }\to 1 \quad \text{as\ $p\to \infty$}. \end{equation} \end{proposition} \begin{proof} Put $h_2(p):=\vert\Lambda_{3,1}(C_2)\vert$. By \cite[Main Theorem]{hashimoto:g=3}, the main term of $h(p):=\vert\Lambda_{3,1}\vert$ is $H_1(p):=(p-1)(p^2+1)(p^3-1)/(2^9\cdot 3^4\cdot 5\cdot 7)$ and the error term $\varepsilon(p)$ is $O(p^5)$. Observe that $\Mass(\Lambda_{3,1})=H_1(p)/2$. If $(X,\lambda)\not\in \Lambda_{3,1}(C_2)$, then $\vert\Aut(X,\lambda)\vert\ge 4$. This gives the inequality \[ \Mass(\Lambda_{3,1})\le \frac{h_2(p)}{2}+\frac{h(p)-h_2(p)}{4}= \frac{h_2(p)}{4}+\frac{H_1(p)+\varepsilon(p)}{4}.
\] From $\Mass(\Lambda_{3,1})=H_1(p)/2$ one deduces that $h_2(p)\ge H_1(p)-\varepsilon(p)$. Since \[ \frac{H_1(p)-\varepsilon(p)}{H_1(p)+\varepsilon(p)}\le {\frac{\vert\Lambda_{3,1}(C_2)\vert}{\vert\Lambda_{3,1}\vert}}\le 1 \quad \text{and} \quad \frac{H_1(p)-\varepsilon(p)}{H_1(p)+\varepsilon(p)}\to 1 \quad \text{as $p\to \infty$}, \] we get the assertion \eqref{eq:asympt}. \end{proof} We end the paper with some open problems. \begin{questions*} \begin{enumerate} \item Let $X$ be a principally polarisable supersingular abelian variety over $k$, and let $P(X)$ be the set of isomorphism classes of principal polarisations on $X$. The mass of $P(X)$ is defined as \begin{equation} \label{eq:massPX} \Mass(P(X)):=\sum_{\lambda\in P(X)} \frac{1}{\vert \Aut(X,\lambda)\vert }. \end{equation} One would like to find a mass formula for $\Mass(P(X))$ and understand the relationship between the sets $P(X)$ and $\Lambda_{(X,\lambda)}$ for a polarisation $\lambda\in P(X)$ when $\dim(X)=3$. Ibukiyama \cite{ibukiyama} studied $P(X)$ for $\dim(X)=2$. He gave a mass formula for $\Mass(P(X))$ and also showed that $P(X)$ is in bijection with the set $\Lambda_{(X,\lambda)}$ for any principal polarisation $\lambda$ on $X$. Note that not every supersingular abelian threefold is principally polarisable: by \cite[Theorem 10.5, p.~71]{lioort} we see that the supersingular locus $\calS_{3,d}\subseteq \calA_{3,d}\otimes \overline{\bbF}_p$ is three-dimensional if $d$ is divisible by a high power of $p$, while $\dim(\mathcal{S}_{3,1})=2$. \item In order to study the automorphism groups of $(X,\lambda)$ with $a(X)=2$, we also need to study the automorphism groups arising from the non-principal genus $\Lambda_{3,p}$; see Proposition~\ref{prop:miniso}. Do we have an asymptotic result similar to Proposition~\ref{prop:asympt} for $\Lambda_{3,p}$? What are the possible automorphism groups arising from $\Lambda_{3,1}$ or from $\Lambda_{3,p}$?
We refer to Ibukiyama-Katsura-Oort~\cite{Ibukiyama-Katsura-Oort-1986}, Katsura-Oort~\cite{katsuraoort:compos87} and Ibukiyama~\cite{ibukiyama:autgp1989} for detailed investigations of the principal genus case $\Lambda_{2,1}$ and the non-principal genus case $\Lambda_{2,p}$. Observe that there are natural maps $\Lambda_{2,1}\times \Lambda_{1,1}\to \Lambda_{3,1}$ and $\Lambda_{2,p}\times \Lambda_{1,1}\to \Lambda_{3,p}$. By the results in the references mentioned above, these maps already produce many automorphism groups of members of $\Lambda_{3,1}$ and $\Lambda_{3,p}$. \item We say two polarised abelian varieties $(X_1,\lambda_1)$ and $(X_2,\lambda_2)$ are isogenous, denoted $(X_1,\lambda_1)\sim (X_2,\lambda_2)$, if there exists a quasi-isogeny $\varphi:X_1\to X_2$ such that $\varphi^* \lambda_2=\lambda_1$. Let $x=(X_0,\lambda_0)\in \calA_{g,1}(k)$ be a geometric point. Define \begin{equation} \label{eq:newLambda_x} \Lambda_x:=\{(X,\lambda)\in \calA_{g,1}(k): (X,\lambda) \sim (X_0,\lambda_0) \text{\ and \ } (X,\lambda)[p^\infty]\simeq (X_0,\lambda_0)[p^\infty] \}. \end{equation} Using the foliation structure on Newton strata due to Oort~\cite{oort:foliation}, one can show that the set $\Lambda_x$ is finite. Note that any two principally polarised supersingular abelian varieties over~$k$ are isogenous, cf.~\cite[Corollary 10.3]{yu:thesis}. Thus, the definition of $\Lambda_x$ in \eqref{eq:newLambda_x} coincides with that of $\Lambda_x$ in \eqref{eq:Lambdaxo} when $x\in \mathcal{S}_{g,1}$. That is, a mass function \begin{equation} \label{eq:massfcn} \Mass: \calA_{g,1}(k) \to \mathbb{Q}, \quad x\mapsto \Mass(\Lambda_x) \end{equation} extends the mass function $\Mass(x):=\Mass(\Lambda_x)$ defined on $\mathcal{S}_{g,1}(k)$ as before. One would like to compute or study the properties of such a mass function on $\calA_{g,1}(k)$, starting in low genus $g$.
This problem may require developing more explicit descriptions of the foliation structure on Newton strata, or employing analogues of the Rapoport-Zink space which was introduced in Subsection~\ref{ssec:mod}. \end{enumerate} \end{questions*} \section{Appendix: The intersection $C\cap \Delta$} \label{sec:CcapD} Let $C\subseteq \bbP^2$ be the Fermat curve defined by the equation $X_1^{p+1}+X_2^{p+1}+X_3^{p+1}=0$ and $\Delta\subseteq \bbP^2$ the curve defined in Proposition~\ref{prop:dx}. In Section~\ref{sec:a1} we have seen the inclusion \[ C(\mathbb{F}_{p^2})\coprod C^0(\mathbb{F}_{p^6}) \coprod C^0(\mathbb{F}_{p^8})\coprod C^0(\mathbb{F}_{p^{10}}) \subseteq C\cap \Delta \] for $p>2$. In this (independent) section we study the complement of this inclusion. \subsection{Bounds for the degrees}\label{sec:bound}\ Let $\calQ$ denote the set of all conics (including degenerate ones) $Q\subseteq \bbP^2$ defined over $\mathbb{F}_{p^2}$. Then $\Delta=\cup_{Q\in \calQ} Q$. If $t\in C\cap \Delta$, then $t\in C\cap Q$ for some $Q\in \calQ$ and hence $\deg_{\mathbb{F}_{p^2}}(t):=[\mathbb{F}_{p^2}(t):\mathbb{F}_{p^2}]\le 2(p+1)$. We need the following well-known result. \begin{theorem}[Kummer's Theorem]\label{thm:Kummer} Let $K$ be a field, $n\ge 1$ an integer, and $a\in K^\times$. If $(n, {\rm char\,} K)=1$, $\mu_n(K^\mathrm{sep})\subseteq K$, and the element $a \pmod{(K^\times)^n}$ in $K^\times/(K^\times)^n$ has order $n$, then $[K(a^{1/n}):K]=n$. \end{theorem} The authors are grateful to Ming-Lun Hsieh for providing the following proposition. \begin{proposition}\label{prop:ML} There exist a conic $Q\in \calQ$ and a point $t\in C\cap Q$ such that $\deg_{\mathbb{F}_{p^2}}(t)=p+1$. \end{proposition} \begin{proof} Choose a generator $u_1$ of $\mathbb{F}_{p^2}^\times$ such that $u_1^p+u_1=-a\neq 0$. Put $u:=a^{-1} u_1$ and let $\alpha$ be a $(p+1)$-th root of $u$. As $a\in {\bbF}_p^\times$, we have $u^p+u=-1$.
Since the element $u \pmod{(\mathbb{F}_{p^2}^\times)^{p+1}}$ in $\mathbb{F}_{p^2}^\times/(\mathbb{F}_{p^2}^\times)^{p+1}=\mathbb{F}_{p^2}^\times/(\mathbb{F}_{p}^\times)$ has order $p+1$, one has $[\mathbb{F}_{p^2}(\alpha):\mathbb{F}_{p^2}]=p+1$ by Kummer's Theorem. Let \[ Q: X_1 X_2=u X_3^2 \quad \text{ and } \quad t:=(\alpha:u \alpha^{-1}:1). \] One sees $t\in C$ as $\alpha^{p+1}+(u \alpha^{-1})^{p+1}+1=u+u^{p+1}\cdot u^{-1}+1=0$. So $t\in C\cap Q$ and $\deg_{\mathbb{F}_{p^2}}(t)=p+1$. \end{proof} The following result, due to Akio Tamagawa, says that the upper bound $2(p+1)$ for $\deg_{\mathbb{F}_{p^2}}(t)$ in $C\cap \Delta$ can be realised. \begin{proposition}\label{prop:akio} There exist a conic $Q\in \calQ$ and a point $t\in C\cap Q$ such that $\deg_{\mathbb{F}_{p^2}}(t)=2(p+1)$. \end{proposition} \begin{proof}[Construction] We first consider the case $p=2$. Let $\zeta$ be a primitive fifth root of unity in $\overline \mathbb{F}_2$. Since $(\mathbb{Z}/5\mathbb{Z})^\times\simeq \< 2\!\!\mod 5\>$, we have $\mathbb{F}_2(\zeta)=\mathbb{F}_{2^4}$. One computes that $(1+\zeta)^3=1+\zeta+\zeta^2+\zeta^3\neq 1$ and $(1+\zeta)^5=\zeta+\zeta^4 \neq 1$. Therefore $1+\zeta$ generates the cyclic group $\mathbb{F}_{2^4}^\times \simeq C_{15}$. Choose $x,y,z\in \overline \mathbb{F}_2$ such that $x=1$, $y^3=\zeta$ and $z^3=1+\zeta$, and put $t:=(x:y:z)$; then $x^3+y^3+z^3=1+\zeta+(1+\zeta)=0$, so $t\in C$. Since $\mathbb{F}_2(z)$ contains $\mathbb{F}_2(\zeta)=\mathbb{F}_{2^4}$, we have $\mathbb{F}_2(z)=\mathbb{F}_{2^4}(z)$. Since $\<1+\zeta\>=\mathbb{F}_{2^4}^\times$, by Kummer's Theorem, $\mathbb{F}_2(z)=\mathbb{F}_{2^4}(z)=\mathbb{F}_{2^{12}}$ and hence $\deg_{\mathbb{F}_{4}}(t)=6=2(p+1)$. Since $x,y\in \mathbb{F}_{2^4}$, there exist $a,b,c\in \mathbb{F}_{2^2}$ such that $a x^2+bxy+c y^2=0$. Let $Q\subseteq \bbP^2$ be the (degenerate) conic defined by the equation $a X_1^2+b X_1 X_2+c X_2^2=0$. Then the point $t\in C\cap Q$ satisfies the desired property. Assume now that $p>2$.
We would like to find solutions $t=(x:y:z)$ with $x\in \mathbb{F}_{p^{4(p+1)}}^\times$, $y\in \mathbb{F}_{p^4}^\times\setminus \mathbb{F}_{p^2}^\times$, and $z\in \mathbb{F}_{p^2}^\times$ satisfying the desired properties. Let \[ f:\mathbb{F}_{p^4}^\times\to \mathbb{F}_{p^4}^\times/ (\mathbb{F}_{p^4}^\times)^{2(p+1)} \] be the natural projection; one has $\mathbb{F}_{p^4}^\times/ (\mathbb{F}_{p^4}^\times)^{2(p+1)} \simeq C_{2(p+1)}$ as $p\neq 2$. Consider the following three sets: \begin{equation} \label{eq:XYZ} \begin{split} &Z:=\{z^{p+1}: z\in \mathbb{F}_{p^2}^\times \}\simeq \mathbb{F}_{p}^\times; \\ &Y:=\{y^{p+1}: y\in \mathbb{F}_{p^4}^\times \}\setminus Z; \\ &X:=\{\xi\in \mathbb{F}_{p^4}^\times: \text{$f(\xi)$ generates the cyclic group $C_{2(p+1)}$}\, \}. \end{split} \end{equation} The sets $Y$ and $Z$ are equipped with an $\mathbb{F}_p^\times$-action and we have \begin{equation} \label{eq:sizeXYZ} \vert Z\vert =p-1, \quad \vert Y\vert =p^2(p-1), \quad \vert X\vert =(p^4-1)\cdot \frac{\varphi(2(p+1))}{2(p+1)}. \end{equation} Let $g$ be the composition \[ \begin{CD} g: \mathbb{F}_{p^4}^\times @>{N}>> \mathbb{F}_{p^2}^\times @>{\rm proj.}>> \mathbb{F}_{p^2}^\times/(\mathbb{F}_{p}^\times)^2\simeq C_{2(p+1)}, \end{CD} \] where $N(\alpha)=\alpha^{p^2+1}$ is the norm map. The map $f$ can be identified with $g$ by a suitable choice of the generators. Since the image $g(\mathbb{F}_{p}^\times)$ is trivial, the image $f(\mathbb{F}_{p}^\times)$ is also trivial. Thus, $X$ is also equipped with an $\mathbb{F}_{p}^\times$-action and hence $-X=X$. We would like to find \begin{equation} \label{eq:etazetaxi} \eta + \zeta = \xi \end{equation} for some $\eta\in Y$, $\zeta\in Z$ and $\xi\in -X=X$. Note that $X$, $Y$ and $Z$ are mutually disjoint: that $Y\cap Z=\emptyset$ follows by definition, and $X\cap Z=\emptyset$ follows from the fact that ${\bbF}_p^\times \subseteq \ker(f)$. 
Since $f((\mathbb{F}_{p^4}^\times)^{p+1})$ is the $2$-torsion subgroup of $\mathbb{F}_{p^4}^\times/ (\mathbb{F}_{p^4}^\times)^{2(p+1)}\simeq C_{2(p+1)}$ and $f(Y)\subseteq f((\mathbb{F}_{p^4}^\times)^{p+1})$, the image $f(Y)$ contains no generator of $C_{2(p+1)}$. Therefore, we also have $Y\cap X=\emptyset$. We are working on the space $\bbP:=\mathbb{F}_{p^4}^\times/ \mathbb{F}_{p}^\times \simeq \bbP^3(\mathbb{F}_{p})$. The images of $X,Y$ and $Z$ in $\bbP$ are written as $\overline X$, $\overline Y$ and $\overline Z$, respectively. So $\overline Z=\{\bar \zeta\}$ and \[ \vert \overline Z\vert =1, \quad \vert \overline Y\vert =p^2, \quad \vert \overline X\vert =(p^2+1)\cdot \frac{\varphi(2(p+1))}{2}. \] For each point $\bar \eta\in \overline Y$ ($\bar \eta\neq \bar \zeta$), denote by $L_{\bar \eta}\subseteq \bbP$ the line joining the points $\bar \eta$ and $\bar \zeta$. To solve \eqref{eq:etazetaxi}, it suffices to prove that \begin{equation} \label{eq:LintX} \left(\bigcup_{\bar \eta\in \overline Y} L_{\bar \eta} \right )\cap \overline X\neq \emptyset. \end{equation} This is because if $\bar \xi\in L_{\bar \eta}\cap \overline X$ for some $\bar \eta\in \overline Y$, then we have $a \eta+b\zeta=c\xi$ with $a,b,c\in \mathbb{F}_p^\times$ and hence $\eta'+\zeta'=\xi'$ with $\eta'\in Y, \zeta'\in Z$ and $\xi'\in X$. \begin{lemma}\label{lem:Lbareta} For any two distinct points $\bar \eta_1$ and $\bar \eta_2$ of\, $\overline Y$, one has $L_{\bar \eta_1}\cap L_{\bar \eta_2}=\{\bar \zeta\}.$ \end{lemma} \begin{proof} Suppose that $L_{\bar \eta_1}\cap L_{\bar \eta_2}\supsetneq \{\bar \zeta\}$. Then $L_{\bar \eta_1}= L_{\bar \eta_2}$ and $\bar \eta_2\in L_{\bar \eta_1}$. Therefore, $-\eta_2=a \eta_1+b \zeta$ for $a,b\in \mathbb{F}_p^\times$ and hence we have \[ \eta_2 + \eta_1'+ \zeta' =0 \] for some $ \eta_1'\in Y$ and $\zeta'\in Z$. 
Now write \[ \eta_2=(y_2)^{p+1}, \quad \eta_1'=(y_1')^{p+1}, \quad \zeta'=(z')^{p+1}, \] with $y_2,y_1'\in \mathbb{F}_{p^4}^\times\setminus \mathbb{F}_{p^2}^\times$ and $z'\in \mathbb{F}_{p^2}^\times$. That is, we get a point $(y_2:y_1':z')\in C(\mathbb{F}_{p^4})$. Since $C(\mathbb{F}_{p^4}) =C(\mathbb{F}_{p^2})$ by Lemma~\ref{lem:Cmaxmim}, we have $y_2,y_1'\in \mathbb{F}_{p^2}$, contradiction. \end{proof} By Lemma~\ref{lem:Lbareta}, \[ \bigcup_{\bar \eta\in \overline Y} L_{\bar \eta}=\{\bar \zeta\} \amalg \coprod_{\bar \eta\in \overline Y} L_{\bar \eta}-\{\bar \zeta\}, \] and hence \[ \vert \bigcup_{\bar \eta\in \overline Y} L_{\bar \eta}\vert =1+\vert \overline Y \vert \cdot p=p^3+1,\quad \text{and} \quad \vert \bbP-\bigcup_{\bar \eta\in \overline Y} L_{\bar \eta}\vert =p^2+p. \] To show \eqref{eq:etazetaxi}, we check the inequality \begin{equation} \label{eq:olX} \vert \overline X \vert=(p^2+1)\cdot \frac{\varphi(2(p+1))}{2}> p^2+p \end{equation} for all $p\neq 2$. If $p=3$, then $\vert \overline X\vert =20>12$ holds. For $p\ge 5$, by the inequality $\varphi(n)\ge \sqrt{n/2}$, it suffices to show \[ (p^2+1)\cdot \frac{\sqrt{p+1}}{2}> p^2+p. \] This follows from \[ (p^2+1)^2 (p+1)-4(p^2+p)^2=(p+1)(p^4-4p^3-2p^2+1)>0 \] for $p\ge 5$. Therefore, the inequality \eqref{eq:olX} holds and we have found $\eta, \zeta, \xi$ as in \eqref{eq:etazetaxi}. Now write \[ \zeta=z^{p+1} \ \ (\text{ for } z\in \mathbb{F}_{p^2}^\times), \quad \eta=y^{p+1}\ \ (\text{ for } y\in \mathbb{F}_{p^4}^\times\setminus \mathbb{F}_{p^2}^\times). \] Choose an element $x\in\overline{\bbF}_p$ such that $x^{p+1}=-\xi\in \mathbb{F}_{p^4}^\times$. Since the element $\xi \pmod{(\mathbb{F}_{p^4}^\times)^{p+1}}$ is a generator in $\mathbb{F}_{p^4}^\times/(\mathbb{F}_{p^4}^\times)^{p+1}$, by Kummer's Theorem we have \begin{equation} \label{eq:p+1} [\mathbb{F}_{p^4}(x):\mathbb{F}_{p^4}]=p+1. \end{equation} We claim that $\xi\not\in \mathbb{F}_{p^2}^\times$. 
Suppose for contradiction that $\xi\in \mathbb{F}_{p^2}^\times$. Then \[ f(\xi)=g(\xi)\in g(\mathbb{F}_{p^2}^\times)=(\mathbb{F}_{p^2}^\times)^2/(\mathbb{F}_{p}^\times)^2\subsetneq \mathbb{F}_{p^2}^\times/(\mathbb{F}_{p}^\times)^2\simeq C_{2(p+1)}. \] Therefore, $f(\xi)$ cannot be a generator of $C_{2(p+1)}$, a contradiction. Thus $\xi\in \mathbb{F}_{p^4}^\times\setminus \mathbb{F}_{p^2}^\times$, and hence $\mathbb{F}_{p^2}(x)\supset \mathbb{F}_{p^2}(\xi)=\mathbb{F}_{p^4}$. This shows that \[ \mathbb{F}_{p^2}(x)=\mathbb{F}_{p^4}(x), \quad \text{and}\quad [\mathbb{F}_{p^2}(x):\mathbb{F}_{p^2}]=2(p+1) \] by \eqref{eq:p+1}. Put $t:=(x:y:z)=(x/z:y/z:1)\in C(\overline{\bbF}_p)$. Then we get \begin{equation} \label{eq:2(p+1)} [\mathbb{F}_{p^2}(t):\mathbb{F}_{p^2}]=2(p+1). \end{equation} Since $y/z\in \mathbb{F}_{p^4}^\times\setminus \mathbb{F}_{p^2}^\times$, there exist $b,c\in \mathbb{F}_{p^2}$ such that \[ \left ( \frac{y}{z} \right )^2+ b\left ( \frac{y}{z} \right )+c=0, \quad \text{or} \quad y^2+byz+c z^2=0. \] Let $Q\in \calQ$ be the (degenerate) conic defined by the equation $X_2^2+b X_2 X_3+cX_3^2=0$. Then $t\in C\cap Q$ and $\deg_{\mathbb{F}_{p^2}}(t)=2(p+1)$. This completes the construction. \end{proof} \subsection{Estimate of $\vert C\cap \Delta\vert $}\label{sec:sizeCD}\ In this subsection, points in $C$ will mean geometric points and $C\cap \Delta$ will mean the set-theoretic intersection. Define \[ \calZ:=\{(t,Q)\in C \times \calQ: t\in Q\} \] and consider the following natural maps: \[ \begin{tikzcd} \ & \mathcal{Z} \arrow{dl}{\pi}\arrow{dr}{q} & \ \\ C & \ & \mathcal{Q} \end{tikzcd} \] The degree of the map $q$ is $2(p+1)$. For each $Q\in \calQ$, the fibre over $Q$ has size \[ 2(p+1)-\varepsilon_Q, \] where $\varepsilon_Q=\sum_{r\ge 2}\varepsilon_{Q,r}$ with \[ \varepsilon_{Q,r}=\#\{t\in C\cap Q : \mathrm{mult}_{C\cap Q}(t)=r \}\cdot (r-1).
\] Thus, $\vert \calZ \vert =2(p+1)(p^{10}+p^{8}+p^6+p^4+p^2+1)-\varepsilon$, where \begin{equation} \label{eq:error} \varepsilon:=\sum_{Q\in \calQ} \varepsilon_Q \end{equation} is the error term coming from intersection multiplicities. \begin{proposition}\label{prop:CcapD} We have $\vert C\cap \Delta\vert =p^{11}+o(p^{11})-\varepsilon$ as $p \to \infty$, where $\varepsilon$ is defined in \eqref{eq:error}. \end{proposition} \begin{remark} We expect that $\varepsilon=o(p^{11})$. Then we would have $\vert C\cap \Delta\vert =p^{11}+o(p^{11})$ as $p \to \infty$. \end{remark} \begin{proof} For any integer $i\ge 1$, define \[ C_i:=\{t \in C(\overline{\bbF}_p): \deg_{\mathbb{F}_{p^2}}(t)=i \}. \] By Lemma~\ref{lem:Cmaxmim}, we have \[ \vert C_1\vert =\vert C(\mathbb{F}_{p^2})\vert =p^3+1, \quad \vert C_3\vert =\vert C^0(\mathbb{F}_{p^6})\vert =p^6+p^5-p^4-p^3, \] \[ \vert C_4\vert =\vert C^0(\mathbb{F}_{p^8})\vert =p^8-p^6+p^5-p^3, \quad \vert C_5\vert =\vert C^0(\mathbb{F}_{p^{10}})\vert =p^{10}+p^7-p^6-p^3. \] Let $\mathbb{F}_{p^2}[X_1,X_2,X_3]_2\subseteq \mathbb{F}_{p^2}[X_1,X_2,X_3]$ denote the subspace of homogeneous polynomials of degree two. For each point $t=(t_1:t_2:t_3)\in C$, the fibre $\pi^{-1}(t)$ is the set $\left(W_t-\{0\}\right)/\mathbb{F}_{p^2}^\times$, where \[ W_t:=\{F\in \mathbb{F}_{p^2}[X_1,X_2,X_3]_2: F(t)=0\}. \] These fit into the exact sequence \[ \begin{CD} 0 @>>> W_t @>>> \mathbb{F}_{p^2}[X_1,X_2,X_3]_2 @>{\mathrm{ev}_t}>> \mathbb{F}_{p^2}\<t_1^2,t_2^2,t_3^2, t_1t_2,t_1t_3, t_2t_3\> @>>> 0. \end{CD} \] It follows that $\dim(W_t)=6-d(t)$ and $\pi^{-1}(t)\simeq \bbP^{5-d(t)}(\mathbb{F}_{p^2})$, where we redefine $d(t)$ as the dimension of $\mathbb{F}_{p^2}\<t_1^2,t_2^2,t_3^2, t_1t_2,t_1t_3, t_2t_3\>$ -- even for $p=2$. Therefore, the sizes of the fibres over points of $C_i$ for $i=1,3,4,5$ are \[ (p^{8}+p^6+p^4+p^2+1), \quad (p^4+p^2+1), \quad (p^2+1), \quad 1, \] respectively.
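As an aside, the count $\vert C_1\vert =\vert C(\mathbb{F}_{p^2})\vert =p^3+1$ used above can be spot-checked by brute force for $p=3$, where $C$ is the curve $X_1^4+X_2^4+X_3^4=0$ over $\mathbb{F}_9$ (an illustrative computation, not part of the proof; the model $\mathbb{F}_9=\mathbb{F}_3[i]/(i^2+1)$ below is one convenient choice):

```python
# Brute-force count of projective points on X1^4 + X2^4 + X3^4 = 0 over F_9.
# F_9 is modelled as F_3[i]/(i^2 + 1); an element (a, b) stands for a + b*i.
def mul(u, v):
    a, b = u
    c, d = v
    return ((a*c - b*d) % 3, (a*d + b*c) % 3)

def add(u, v):
    return ((u[0] + v[0]) % 3, (u[1] + v[1]) % 3)

def pow4(u):
    u2 = mul(u, u)
    return mul(u2, u2)

F9 = [(a, b) for a in range(3) for b in range(3)]
zero = (0, 0)

# Count non-zero affine solutions; each projective point has 9 - 1 = 8
# affine representatives, so divide by 8 at the end.
N = sum(1 for x in F9 for y in F9 for z in F9
        if (x, y, z) != (zero, zero, zero)
        and add(add(pow4(x), pow4(y)), pow4(z)) == zero)
assert N % 8 == 0
print(N // 8)  # → 28 = 3^3 + 1
```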
Then the number of points in $\calZ$ over the union of $C_i$ for $i=1,3,4,5$ is given by \[ \begin{split} A&:=(p^3+1)(p^{8}+p^6+p^4+p^2+1)+ (p^6+p^5-p^4-p^3)(p^4+p^2+1) \\ &\quad \ +(p^8-p^6+p^5-p^3)(p^2+1)+(p^{10}+p^7-p^6-p^3) \\ &=p^{11}+3p^{10}+2p^9+p^8+3p^7-p^6+p^{5}-2p^3+p^2+1. \end{split}\] Thus, \[ \begin{split} B&:=\#\{(t,Q)\in \calZ: \deg_{\mathbb{F}_{p^2}}(t)>5\}=\vert \calZ\vert -A \\ &=p^{11}-p^{10}+p^8-p^7+3p^6+p^5+2p^{4}+4p^3+p^2+2p+1-\varepsilon. \end{split} \] Finally, \begin{equation}\label{eq:formulaCcapD} \begin{split} \vert C\cap \Delta\vert &=\vert \mathrm{Im}(\pi)\vert =\vert C_1\vert +\vert C_3\vert + \vert C_4\vert +\vert C_5\vert +B \\ &=p^{11}+2p^8+2p^6+3p^5+p^4+2p^3+p^2+2p+2-\varepsilon. \end{split} \end{equation} \end{proof}
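The polynomial identities behind the expressions for $A$, $B$ and \eqref{eq:formulaCcapD} can be double-checked numerically. The following sketch (an illustrative verification, not part of the text) compares both sides at sample values of $p$; since all expressions are polynomials of degree at most $11$, agreement at more than $12$ points proves the identities:

```python
# Illustrative check of the polynomial identities in the proof above.
def A_sum(p):
    # |C_i| times the fibre size, summed over i = 1, 3, 4, 5.
    return ((p**3 + 1) * (p**8 + p**6 + p**4 + p**2 + 1)
            + (p**6 + p**5 - p**4 - p**3) * (p**4 + p**2 + 1)
            + (p**8 - p**6 + p**5 - p**3) * (p**2 + 1)
            + (p**10 + p**7 - p**6 - p**3))

def A_closed(p):
    return (p**11 + 3*p**10 + 2*p**9 + p**8 + 3*p**7 - p**6 + p**5
            - 2*p**3 + p**2 + 1)

def CD_direct(p):
    # |C_1|+|C_3|+|C_4|+|C_5| + (|Z| - A), with the error term eps dropped.
    Zsize = 2 * (p + 1) * (p**10 + p**8 + p**6 + p**4 + p**2 + 1)
    Ci = ((p**3 + 1) + (p**6 + p**5 - p**4 - p**3)
          + (p**8 - p**6 + p**5 - p**3) + (p**10 + p**7 - p**6 - p**3))
    return Ci + Zsize - A_sum(p)

def CD_closed(p):
    return (p**11 + 2*p**8 + 2*p**6 + 3*p**5 + p**4 + 2*p**3 + p**2
            + 2*p + 2)

# Degree <= 11, so agreement at 13 points proves the identities.
ok = all(A_sum(p) == A_closed(p) and CD_direct(p) == CD_closed(p)
         for p in range(1, 14))
print(ok)  # → True
```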
\section{Introduction} Recently, a very unusual waveguide was proposed by R. Marques in \cite{Marqueswaveguide} and then extensively studied by S. Hrabar in \cite{Hrabarwaveguide}. It is a rectangular metallic waveguide periodically loaded by resonant magnetic scatterers, so-called split-ring resonators (SRR:s) \cite{PendrySRR,MarquesSRR}, which are also used as components of a realization of the left-handed medium (LHM) \cite{SmithWSRR}, a composite with negative permittivity and permeability \cite{Veselago,Metaspecial}. The geometry of the Marques waveguide (MW) is presented in Fig.~\ref{geometry}. The SRR:s in the MW are oriented so that their magnetic moments are orthogonal to the waveguide axis and to one of the walls. \begin{figure}[h] \centering \epsfig{file=geometry.eps, width=8.5cm} \caption{Geometry of a subwavelength Split-Ring-Resonator-loaded metallic waveguide} \label{geometry} \end{figure} The MW supports a propagating mode within a frequency band near the resonance of the SRR:s, even if this band lies below the cutoff frequency of the hollow waveguide \cite{Marqueswaveguide,Hrabarwaveguide}. The transversal dimensions of the waveguide then happen to be much smaller than the wavelength in free space. Thus, loading by SRR:s makes the waveguide subwavelength and provides a unique method for the miniaturization of guiding structures. The mode of the MW is a backward wave (its group velocity is negative). This effect was interpreted in \cite{Marqueswaveguide} in terms of an effective LHM to which the loaded waveguide is apparently equivalent. The empty waveguide was considered as an artificial electric plasma with negative permittivity, and the array of magnetic scatterers as a magnetic material with negative permeability. This interpretation is not completely adequate because it cannot explain why the effect disappears in the case of loading by an isotropic magnetic material with negative permeability.
Indeed, it is clear that a hollow waveguide filled with an isotropic magnetic material with negative permeability does not support guided modes. Meanwhile, a doubly-negative medium with isotropic negative permittivity and permeability would support propagating backward waves \cite{Lind}. Also, the LHM interpretation is, in our opinion, not instructive, since it does not reveal the possibility of obtaining propagation below the cutoff frequency of the hollow waveguide with loadings other than SRR:s. The goal of the present study is to give an adequate explanation for the extraordinary propagation effect in the MW and to suggest other loadings that lead to similar effects. In this paper it is shown that propagation below the cutoff frequency of the hollow waveguide can be achieved with magnetic scatterers oriented longitudinally with respect to the waveguide axis, as well as with electric scatterers oriented either longitudinally or transversally. This demonstrates that the miniaturization of a rectangular waveguide at a fixed frequency by loading with resonant scatterers is not restricted to the case when the scatterers are magnetic and transversally oriented. The miniaturization is possible with either magnetic or electric resonant scatterers, with either transversal or longitudinal orientation with respect to the waveguide axis. Of course, the miniaturization can also be reached using periodically located capacitive posts; however, the loading by resonant scatterers is a qualitatively different effect. In \cite{Hrabarwaveguide} it was pointed out that the miniaturization obtained in this way refers also to the longitudinal size of the waveguide, since the period of the loads is incomparably smaller than the wavelength in free space, unlike the propagation in a capacitively loaded waveguide, where the period of the posts is of the order of $\lambda/2$.
The mini pass band below the cutoff frequency of a rectangular waveguide loaded by resonant scatterers is caused by the properties of the periodic one-dimensional array (chain) of resonant dipoles and has nothing to do with doubly-negative media. The backward wave appears in the special case of transverse orientation of magnetic scatterers and is not a necessary attribute of such a mini-band. It is known that a chain of resonant scatterers in a homogeneous matrix supports guided modes. In the optical frequency range this refers to chains of metallic nanoparticles \cite{Brongersma,Maier1,Maier2,Weber}. At microwaves it refers to so-called magneto-inductive waveguides (chains of SRR:s) \cite{Shamonina1,Shamonina2,Shamonina3,Shamonina4} or to chains of inductively loaded electric dipoles \cite{Tretlines}. The metallic walls of the loaded waveguide perturb the dispersion of the guided mode of the chain but do not cancel the propagation. This is because the wavelength of the guided mode in a chain of resonant scatterers is dramatically shortened compared to that in the matrix. As a result, the energy of a guided mode is concentrated in a narrow domain around the chain, and the interaction between the chain and the waveguide walls turns out not to be critical for the existence of the guided mode. The paper is organized as follows. In Section II the dispersion properties of chains of resonant scatterers located in free space are considered. The known results, obtained in \cite{Weber} by numerical simulations, are reproduced by an analytical theory based on the local field method. This is a necessary part of the work in view of the comparison with the case of the loaded waveguide. The agreement with the known results can be considered as a validation of our approach.
In Section III the dispersion properties of chains located inside a rectangular waveguide are considered using two approaches: an accurate local field method (as in Section II) and an effective medium filling approximation. Section IV is devoted to the comparison between the properties of the chains and of the loaded waveguides. Section V contains concluding remarks. The details of the local field theory are given in the Appendix. In the present paper we consider both magnetic and electric resonant uniaxial scatterers. As an example of a magnetic scatterer we have chosen the SRR \cite{PendrySRR,SmithWSRR,Shelbyscience} (see Fig. \ref{scat}.a). The electric dipoles are represented in our work by short inductively loaded wires (ILW:s) \cite{LWD} (see Fig. \ref{scat}.b). Any individual scatterer can be characterized by a polarizability relating the dipole moment (magnetic or electric) to the local field (the magnetic or electric external field acting on the scatterer). This polarizability is scalar, since only one direction of the induced dipole moment is possible for a scatterer with a given orientation. The details concerning the calculation of the polarizabilities of SRR:s and ILW:s are presented in Appendix A. \begin{figure}[h] \centering \epsfig{file=scat.eps, width=7cm} \caption{Geometries of resonant scatterers: a) Split-Ring-Resonator, b) inductively loaded wire dipole.} \label{scat} \end{figure} The inverse values of the polarizabilities $\alpha(\omega)$ and $\alpha_e(\omega)$ (see Appendix A, formulae \r{alpha} and \r{alpe}) of SRR:s and ILW:s have the same frequency dependence within the resonant band: \begin{equation} {\rm Re} \{\alpha^{-1}(\omega)\}=A^{-1}\left(\frac{\omega_0^2}{\omega^2}-1\right). \l{inva} \end{equation} Here $A$ is the amplitude and $\omega_0$ the resonant frequency; both parameters are determined by the geometry of the scatterer.
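As a simple numerical illustration of the resonant form \r{inva}, the sketch below evaluates the inverse polarizability in normalized units. The values $A=0.1$ and $\omega_0=1$ are assumptions chosen for illustration only, not the parameters of any particular scatterer.

```python
# Illustrative sketch: the common resonant form of the inverse
# polarizability, Eq. (inva), in normalized units.  A and omega_0 are
# assumed (hypothetical) parameters fixed by the scatterer geometry.

def inv_alpha_re(omega, omega_0=1.0, A=0.1):
    """Re{alpha^{-1}} = A^{-1} (omega_0^2/omega^2 - 1)."""
    return (omega_0**2 / omega**2 - 1.0) / A

# The function passes through zero exactly at the resonance omega_0 and
# decreases monotonically with frequency; this rapid decrease is why
# guided modes can exist only in a narrow band around omega_0.
print(inv_alpha_re(0.99), inv_alpha_re(1.0), inv_alpha_re(1.01))
```

The steep, monotonic passage through zero at $\omega_0$ is the feature exploited throughout the rest of the analysis.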
Notice that the result \r{inva} is also valid for a silver nanosphere in the vicinity of its plasmon resonance \cite{Weber}. Thus, it is clear that there is no principal difference between the dispersion properties of chains of SRR:s or ILW:s (at microwaves) and chains of silver nanospheres (in the optical range). \section{Chains of uniaxial resonant scatterers} Let us study the dispersion properties of linear chains with period $a$ formed by resonant scatterers. We will consider only two typical orientations of the scatterers: longitudinal and transverse. The geometries of the structures are presented in Fig. \ref{chains}. The case of longitudinal orientation was analyzed in \cite{Tretlines}, and both the longitudinal and transverse orientations were studied in \cite{Weber}. In the present section we reproduce the main results of these works by means of the local field approach. \begin{figure}[h] \centering \epsfig{file=chains.eps, width=8cm} \caption{Chains of resonant scatterers. a) Longitudinal orientation. b) Transverse orientation.} \label{chains} \end{figure} \subsection{Basic theory} Without loss of generality we can restrict our consideration to the case of a chain of magnetic scatterers (SRR:s). The chains of electric scatterers are dual structures to the considered ones and have exactly the same dispersion properties. The spatial distribution of the dipole moments of the SRR:s corresponding to an eigenmode of a chain is determined by a propagation constant $q$: $M_n=M e^{-jqan}$. Following the local field approach, the dipole moment $M$ of a reference (zeroth) scatterer can be expressed in terms of the magnetic field $\-H_{\rm loc.}$ acting on it: $M=\alpha H^d_{\rm loc.}$, where $H^d_{\rm loc.}=(\-H_{\rm loc.}\.\-d)$ is the projection of the field on the direction of the scatterer ($\vec d=\vec x_0$ for longitudinal orientation of the scatterers and $\vec d=\vec y_0$ for the transverse one).
This local field is a sum of the partial magnetic fields $\-H_{m}$ produced at the coordinate origin by all other scatterers with indices $m\ne 0$: $ \-H_{\rm loc.}=\sum\limits_{m \ne 0} \-H_{m}$. The magnetic field produced by a single scatterer with dipole moment $\-M_m$ at a point with radius vector $\vec R$ is given by the dyadic Green's function $\=G(\-R)$: \begin{equation} \-H_m(\vec R)=\mu_0^{-1}\=G(\-R)\-M_m, \l{Ggen} \end{equation} where \begin{equation} \=G(\-R)=\left(k^2\=I+\nabla\nabla\right)\frac{e^{-jkR}}{4\pi R}. \l{G} \end{equation} Since all dipole moments of the chain are oriented along $\-d$, it is enough to use only the $\-d\-d$ component of the dyadic Green's function. So, we replace \r{Ggen} by the scalar expression: \begin{equation} H_m^d(\vec R)=\mu_0^{-1}G_{dd}(\-R)M_m, \l{Green} \end{equation} where \begin{equation} G_{dd}(\-R)=\left(k^2+\frac{\partial^2}{\partial d^2}\right)\frac{e^{-jkR}}{4\pi R}, \l{gre}\end{equation} and $d$ means $x$ for the longitudinal case and $y$ for the transverse case. Finally, we obtain the expression for the field acting on the reference scatterer in the form: \begin{equation} H^d_{\rm loc.}=\mu_0^{-1}\sum\limits_{m\ne 0} G_{dd}(am\vec x_0) e^{-jqam} M. \end{equation} This yields the dispersion equation for the chains under consideration: \begin{equation} \mu_0\alpha(\omega)^{-1}=C_{d}(\omega,q,a), \l{displine} \end{equation} where $$ C_{d}(\omega,q,a)=\sum\limits_{m\ne 0} G_{dd}(am\vec x_0) e^{-jqam}. $$ In Appendix B we provide expressions \r{Cx} and \r{Cy}, which we use for efficient numerical calculation of the interaction constants $C_x$ and $C_y$, corresponding to the longitudinal and transverse orientations of the scatterers, respectively. \subsection{Analysis of dispersion properties} The dispersion diagram for guided modes can be obtained by solving the transcendental dispersion equation \r{displine} with the interaction constants given by expressions \r{Cx} and \r{Cy}.
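The interaction constant $C_d$ can also be evaluated by a brute-force truncated sum, which is a useful cross-check of the accelerated Appendix B expressions. The sketch below (a hypothetical normalized example with $a=1$; the truncation order $M$ is an assumption) uses the closed forms of $G_{dd}$ on the chain axis that follow from \r{gre}.

```python
# Illustrative sketch (not the paper's accelerated Appendix B formulas):
# the chain interaction constant C_d of Eq. (displine) by a brute-force
# truncated sum.  On the chain axis, Eq. (gre) gives the closed forms
#   longitudinal (d = x): G_xx(R) = e^{-jkR} (2/R^3 + 2jk/R^2)/(4 pi)
#   transverse   (d = y): G_yy(R) = e^{-jkR} (k^2/R - jk/R^2 - 1/R^3)/(4 pi)
import cmath
from math import pi, cos

def G_dd(k, R, longitudinal=True):
    phase = cmath.exp(-1j * k * R)
    if longitudinal:
        return phase * (2.0 / R**3 + 2j * k / R**2) / (4 * pi)
    return phase * (k**2 / R - 1j * k / R**2 - 1.0 / R**3) / (4 * pi)

def C_d(k, q, a=1.0, longitudinal=True, M=20000):
    """Truncated sum over m != 0 of G_dd(a|m|) exp(-j q a m).

    Terms m and -m are paired: exp(-jqam) + exp(+jqam) = 2 cos(qam)."""
    s = 0.0 + 0.0j
    for m in range(1, M + 1):
        s += 2.0 * cos(q * a * m) * G_dd(k, a * m, longitudinal)
    return s

# Two sanity checks: in the quasi-static limit (k -> 0, q = pi/a) the
# longitudinal sum reduces to the alternating dipole series
# -(3 zeta(3)/4)/(pi a^3), and for a guided mode (k < q < 2 pi/a - k)
# Im{C_d} equals k^3/(6 pi), matching the radiation part of alpha^{-1}.
print(C_d(1e-9, pi).real, C_d(1.0, 2.0).imag)
```

The second check is the chain analogue of the lattice identity quoted later for the loaded waveguides: it is what makes the real-valued dispersion equation possible for guided modes.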
Geometrically, the dispersion curves correspond to the level lines at which the surface plot of the function ${\rm Re} \{C_{x,y}(\omega,q)\}$ is crossed by $\mu_0 {\rm Re}\{\alpha^{-1}(\omega)\}$. Note that only the solutions with $k<q<2\pi/a-k$ for $k<\pi/a$ correspond to guided modes: for $|q|<k$, ${\rm Im} \{ C_{x,y}(\omega,q)- \mu_0\alpha^{-1}\} \ne 0$ and the dispersion equation \r{displine} has complex solutions corresponding to leaky modes (see details in Appendix B). \begin{figure}[h] \centering \epsfig{file=cxbw.eps, width=8.5cm} \caption{Dependence of ${\rm Re}\{C_x a^3\}$ on normalized frequency $ka/\pi$ and propagation factor $qa/\pi$} \label{cx} \end{figure} \begin{figure}[h] \centering \epsfig{file=cybw.eps, width=8.5cm} \caption{Dependence of ${\rm Re}\{C_y a^3\}$ on normalized frequency $ka/\pi$ and propagation factor $qa/\pi$} \label{cy} \end{figure} The dependencies of $C_x$ and $C_y$ on the normalized frequency $ka/\pi$ and the propagation factor $qa/\pi$ are shown in Figs. \ref{cx} and \ref{cy}, respectively. The interaction constants vary within the $[-1,1]$ range, except for $C_y$ with $q$ close to $k$, which has a logarithmic singularity at the light line $q=k$. The function $\mu_0 {\rm Re}\{\alpha^{-1}(\omega)\}$ decreases very rapidly through the $[-1,1]$ range of values near the resonant frequency $\omega_0$. This means that guided modes exist only within narrow bands near the resonant frequency of the scatterers. The behavior of the dispersion curves can be easily predicted from the plots in Figs. \ref{cx} and \ref{cy}. If, at a fixed frequency, the interaction constant decays as the propagation factor increases, then the dispersion curve grows and the eigenmode is a forward wave (the group velocity $v_g=\frac{d\omega}{dq}$ is positive); if the interaction constant grows, then the dispersion curve decays and the eigenmode is a backward wave (the group velocity $v_g=\frac{d\omega}{dq}$ is negative). From Fig.
\ref{cx} it is clear that for any resonant frequency satisfying the evident condition $\omega_0<\pi/(a\sqrt{\varepsilon_0\mu_0})$ (corresponding to propagation below the cutoff of the hollow waveguide) the longitudinal mode is a forward wave, because $C_x$ decays with $q$. In the case of transverse modes the situation is different. While $k_0a<0.5\pi$ ($k_0=\omega_0\sqrt{\varepsilon_0\mu_0}$), a two-mode regime holds. The interaction constant $C_y$ decays while $q$ is close to $k$, but from a certain $q$ it starts to grow. This means that one of the transverse eigenmodes is forward (with $q\approx k$) and the other one is backward. If the resonant frequency is high enough ($0.5\pi<k_0a<\pi$), the two-mode regime disappears and only the forward wave remains. \begin{figure}[h] \centering \epsfig{file=displine2.eps, width=8.5cm} \caption{Dispersion diagram for chains of resonant scatterers: transverse (thick line) and longitudinal (thin line) orientations} \label{displine} \end{figure} The typical dispersion diagram for both longitudinal and transverse modes is presented in Fig. \ref{displine} for the case of scatterers with $A=0.1\mu_0a^3$ and $\omega_0 a=1/\sqrt{\varepsilon_0\mu_0}$. A similar dispersion diagram was obtained in \cite{Weber} (see Fig. 3 of that work) by numerical simulation. Though electric scatterers in the optical range were considered in \cite{Weber}, while we consider magnetic scatterers in the microwave range, the use of the duality principle and of the normalized frequency $ka/\pi$ and wave vector $qa/\pi$ eliminates this difference. The polarizability of the silver nanospheres, for which Fig. 3 of \cite{Weber} was obtained, obeys expression (8) of \cite{Weber}, which is identical to our formula \r{inva}. As predicted, the longitudinal mode is a forward wave and there is a two-mode regime for the transverse modes.
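A point on the longitudinal branch of such a diagram can be sketched numerically by bisecting the dispersion equation \r{displine} in frequency. The example below is a hypothetical normalized calculation ($a=1$, $\mu_0=1$, $k=\omega\sqrt{\varepsilon_0\mu_0}$) with the same assumed parameters, $A=0.1\mu_0a^3$ and $k_0a=1$; the truncation order of the chain sum and the bisection bracket are assumptions.

```python
# Sketch: one point of the longitudinal chain dispersion curve, found by
# bisection of Eq. (displine) in normalized units (a = 1, mu_0 = 1, so
# k = omega).  Assumed parameters: A = 0.1, k_0 a = 1 (as in the text).
# Only real parts enter for guided modes (k < q < 2 pi/a - k).
from math import pi, cos, sin

def re_Cx(k, q, M=5000):
    # Re of sum over m != 0 of G_xx(m) e^{-jqm}:
    # 2 cos(qm) [2 cos(km)/m^3 + 2 k sin(km)/m^2] / (4 pi)
    s = 0.0
    for m in range(1, M + 1):
        s += 2.0 * cos(q * m) * (2.0 * cos(k * m) / m**3
                                 + 2.0 * k * sin(k * m) / m**2) / (4 * pi)
    return s

def residual(k, q, A=0.1, k0=1.0):
    # mu_0 Re{alpha^{-1}} - Re{C_x}: Eq. (inva) minus the chain sum
    return (k0**2 / k**2 - 1.0) / A - re_Cx(k, q)

def solve_k(q, lo=0.9, hi=1.1, iters=60):
    """Bisect in frequency; alpha^{-1} dominates away from resonance,
    so the residual changes sign across the narrow mini-band."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if residual(mid, q) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# the longitudinal guided mode sits in a narrow band around k0 a = 1
print(solve_k(0.5 * pi))
```

Because $|{\rm Re}\,C_x|a^3$ stays of order one while $\mu_0{\rm Re}\{\alpha^{-1}\}$ sweeps rapidly through that range, the root is guaranteed to lie close to $k_0a=1$, which is exactly the narrow-band behavior described above.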
The dispersion curve for transverse waves has the asymptote $q=k$, and both leaky ($q<k$) and guided ($q>k$) modes exist at very low frequencies, where they have almost equal wave vectors. This fact indicates good matching between the radiated wave and the guided mode. Within the band $0.995<ka<1$ there are two guided modes at every frequency. The solution corresponding to the backward wave is close to the Bragg mode ($qa\approx \pi$), whose group velocity is close to zero. The field of this mode is concentrated near the chain within the spatial region $r=\sqrt{z^2+y^2}<1/\sqrt{q^2-k^2}\sim a$. The same concerns the longitudinal mode within the band $1.015<ka<1.020$. If the period of the chain is much smaller than the wavelength in free space, the waveguide is subwavelength (the field of the guided mode is concentrated within a cylindrical domain whose diameter is much smaller than $\lambda$). Thus, the chains of resonant scatterers (electric or magnetic, it does not matter) form subwavelength waveguides which can support either forward or backward waves \cite{Brongersma,Maier1,Maier2,Weber,Tretlines,Shamonina1,Shamonina2,Shamonina3,Shamonina4}. \section{Loaded waveguides} \subsection{Basic theory} Let us study the eigenmodes of rectangular metallic waveguides periodically loaded by resonant uniaxial scatterers. Such structures can be effectively considered as linear chains located inside metallic waveguides. The geometries of the four waveguides considered in the present paper are shown in Fig. \ref{image} (left sides of the subplots). The chains, with period $c$ along the waveguide axis, are located at the center of rectangular metallic waveguides with dimensions $a\times b$. The structures in Figs. \ref{image}.a-d differ by the orientation of the scatterers (longitudinal or transverse) and by their type (electric or magnetic). The first structure (with transversely oriented magnetic scatterers) is the subwavelength waveguide (see Fig.
\ref{geometry}) suggested by R. Marques \cite{Marqueswaveguide,Hrabarwaveguide}. The other ones are considered in order to show three other possible ways to obtain miniaturization of rectangular waveguides. Note that the chains of electric scatterers are no longer dual to the chains of magnetic scatterers (in contrast to the chains in free space), due to the different interaction of electric and magnetic dipoles with the metallic walls. \begin{figure}[h] \centering \epsfig{file=image.eps, width=8.5cm} \caption{Transformation of the waveguide problem to the lattice one with the use of the image principle} \label{image} \end{figure} The dispersion equation for the chains keeps the same form as \r{displine}, but the free-space dyadic Green's function $\=G(\vec R)$ \r{G} should be replaced by the Green's function of the waveguide, which takes into account the metallic walls. This Green's function can be determined by means of the image principle. This approach transforms the eigenmode problem for the loaded waveguide into the problem of eigenwave propagation in a three-dimensional electromagnetic lattice formed by the same scatterers. The details of the transformation are illustrated by Fig. \ref{image} (right parts of all subplots). The electromagnetic crystals obtained in such a way have an orthorhombic elementary cell $a\times b\times c$, and their dispersion properties were studied in \cite{Belovhomo} using the local field approach. Thus, we can apply the theory of the electromagnetic interaction in dipole crystals presented in \cite{Belovhomo} in order to study the dispersion properties of the waveguides under consideration. In the coordinate system associated with the axes of the crystal, the center of a scatterer with indices $(m,n,l)$ has coordinates $\vec R_{m,n,l}=(am,bn,cl)^T$. From Fig.
\ref{image} it is clear that the distribution of dipole moments in the lattice has the following form: $$ \vec M_{m,n,l}=(-1)^mMe^{-jqcl}\vec x_0, $$ for the case of transverse orientation of magnetic scatterers (Fig. \ref{image}.a); $$ \vec M_{m,n,l}=Me^{-jqcl}\vec z_0, $$ for the case of longitudinal orientation of magnetic scatterers (Fig. \ref{image}.b); $$ \vec P_{m,n,l}=(-1)^nPe^{-jqcl}\vec x_0, $$ for the case of transverse orientation of electric scatterers (Fig. \ref{image}.c); and $$ \vec P_{m,n,l}=(-1)^{m+n}Pe^{-jqcl}\vec z_0, $$ for the case of longitudinal orientation of electric scatterers (Fig. \ref{image}.d). Any of these distributions can be rewritten in terms of a wavevector $\vec q$ as $e^{-j(\vec q\. \vec R_{m,n,l})}$, where the wavevector $\vec q$ for the four cases considered above has the form $(\pi/a,0,q)^T$, $(0,0,q)^T$, $(0,\pi/b,q)^T$ and $(\pi/a,\pi/b,q)^T$, respectively. This notation finally makes clear that the waveguide dispersion problems reduce to those of three-dimensional lattices in the special cases of certain propagation directions. The dispersion equation for a three-dimensional electromagnetic crystal formed by magnetic scatterers oriented along the $x$-axis has the form \cite{Belovhomo}: \begin{equation} \mu_0\alpha^{-1}(\omega)-C(k,\-q)=0, \l{disper} \end{equation} where \begin{equation} C(k,\-q,a,b,c)=\sum\limits_{(m,n,l)\ne(0,0,0)} G(\-R_{m,n,l}) e^{-j(q_xam+q_ybn+q_zcl)}. \l{C} \end{equation} We call $C(k,\-q,a,b,c)$ the dynamic interaction constant of the lattice, by analogy with the classical interaction constant from the theory of artificial dielectrics and magnetics \cite{Collin}. The explicit expression for $C$ in the general case was derived in \cite{Belovhomo} and is given in Appendix C by formula \r{cfinal}.
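The rewriting of the image distributions as plane waves can be checked directly: each sign alternation is exactly a transversal wavenumber $\pi/a$ or $\pi/b$. The tiny sketch below (normalized cell $a=b=c=1$ and an arbitrary assumed $q_z$) verifies this correspondence numerically.

```python
# Small sketch: each image-lattice dipole distribution above is a plane
# wave exp(-j q . R_{m,n,l}) with R_{m,n,l} = (am, bn, cl)^T.  E.g.
# q = (pi/a, 0, q_z) reproduces the (-1)^m alternation of transversely
# oriented magnetic scatterers, and q = (pi/a, pi/b, q_z) the (-1)^{m+n}
# alternation of longitudinally oriented electric ones.
import cmath
from math import pi

a, b, c = 1.0, 1.0, 1.0     # normalized elementary cell (assumption)
qz = 0.4 * pi / c           # arbitrary axial propagation constant

def plane_wave(qvec, m, n, l):
    qx, qy, qz_ = qvec
    return cmath.exp(-1j * (qx * a * m + qy * b * n + qz_ * c * l))

axial = cmath.exp(-1j * qz * c * 3)      # common factor e^{-jqcl}, l = 3
for m in range(4):
    w = plane_wave((pi / a, 0.0, qz), m, 2, 3)
    print(m, abs(w - (-1) ** m * axial) < 1e-9)
```

This is the whole content of the reduction: fixing the transversal components of $\vec q$ at $0$ or $\pi/a$, $\pi/b$ selects the image pattern enforced by the metallic walls.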
The dispersion equations for the waveguides with transverse orientation of the scatterers can be directly obtained from equation \r{disper} by the substitution $\vec q=(\pi/a,0,q)^T$ and $\vec q=(0,\pi/b,q)^T$ for magnetic and electric scatterers, respectively. Also, in the case of electric scatterers, following the duality principle, $\mu_0\alpha^{-1}(\omega)$ should be replaced by $\varepsilon_0\alpha_e^{-1}(\omega)$. A similar operation for the case of longitudinal orientation is possible only after a rotation of the coordinate axes, $z \rightarrow x'$, $x \rightarrow y'$, $y \rightarrow z'$, since equation \r{disper} requires the scatterers to be directed along the $x$-axis, whereas for longitudinal orientation they are directed along the $z$-axis. After such a manipulation, the substitution of $\vec q=(q,0,0)^T$ and $\vec q=(q,\pi/a,\pi/b)^T$ (in the new coordinate axes $(x',y',z')$) into equation \r{disper} provides the desired dispersion equations for the waveguides with longitudinal orientation of magnetic and electric scatterers, respectively. In this way we obtain the following dispersion equations for all loaded waveguides under consideration: \begin{equation} \mu_0\alpha^{-1}(\omega)-C(k,(\pi/a,0,q)^T,a,b,c)=0, \l{dispmt} \end{equation} for transverse orientation of magnetic scatterers; \begin{equation} \mu_0\alpha^{-1}(\omega)-C(k,(q,0,0)^T,c,a,b)=0, \l{dispml} \end{equation} for longitudinal orientation of magnetic scatterers; \begin{equation} \varepsilon_0\alpha_e^{-1}(\omega)-C(k,(0,\pi/b,q)^T,a,b,c)=0, \l{disppt} \end{equation} for transverse orientation of electric scatterers; and \begin{equation} \varepsilon_0\alpha_e^{-1}(\omega)-C(k,(q,\pi/a,\pi/b)^T,c,a,b)=0, \l{disppl} \end{equation} for longitudinal orientation of electric scatterers. The obtained dispersion equations are real-valued, because the imaginary parts of their components cancel out.
This can be clearly seen from the Sipe-Kronendonk condition \r{sipe} and the following expression proved in \cite{Belovhomo}: \begin{equation} {\rm Im}(C)=\frac{k^3}{6\pi}. \l{imc} \end{equation} \subsection{Effective medium filling approximation} The chain of SRR:s with transverse orientation located in the waveguide has been interpreted in the literature as a piece of a uniaxial magnetic medium \cite{Marqueswaveguide,Hrabarwaveguide}. We call this approach the effective medium filling approximation. It can be applied to practically every waveguide considered in this paper except the case of longitudinal orientation of magnetic scatterers, since the uniaxial magnetic model does not describe longitudinal modes. This approach provides qualitatively acceptable results, which are compared with the exact ones in the next subsection. Let us start from the case of transversely oriented magnetic scatterers (Fig. \ref{image}.a) and consider a chain of parallel uniaxial magnetic scatterers as a piece of an infinite resonant uniaxial magnetic medium. The permeability of such a magnetic medium is a tensor (dyadic) of the form: $$ \=\mu=\mu \-x_0\-x_0+\mu_0(\-y_0\-y_0+\-z_0\-z_0). $$ The permeability $\mu$ along the anisotropy axis $x$ can be calculated through the individual polarizability of a single scatterer using the Clausius-Mossotti formula: \begin{equation} \mu=\mu_0\left(1+\frac{\alpha(\omega)/(\mu_0V)}{1-C_s(a,b,c) \alpha (\omega)/\mu_0}\right), \l{CM} \end{equation} where $V=abc$ is the volume of the elementary cell of the infinite three-dimensional lattice and $C_s(a,b,c)$ is the known static interaction constant of the lattice \cite{Collin,Belovhomo}. In the case of a simple cubic lattice $a=b=c$ the interaction constant is equal to the classical value $C_s=1/(3V)$. Notice that we should skip the radiation losses contribution in expression \r{invalph} while substituting into formula \r{CM}. This makes the permeability a purely real number, as it should be for a lossless material.
This manipulation is based on the fact that the far-field radiation of a single scatterer is compensated by the electromagnetic interaction in a regular three-dimensional array, so that there are no radiation losses for a wave propagating in the lattice \cite{Sipe,Belovhomo}. The dispersion equation for the uniaxial magnetic medium has the following form (see e.g. \cite{BelovMOTL}): \begin{equation} \mu_0 (q_y^2+q_z^2)=\mu (k^2-q_x^2). \l{dispunis} \end{equation} Solving the waveguide dispersion problem amounts to solving the dispersion problem \r{dispunis} for the special case $\vec q=(\pi/a,0,q)^T$. The substitution of $\vec q=(\pi/a,0,q)^T$ into \r{dispunis} gives: \begin{equation} q^2=\frac{\mu}{\mu_0}\left[k^2-\left(\frac{\pi}{a}\right)^2\right]. \l{qq} \end{equation} For frequencies below the cutoff of the hollow waveguide the expression in the brackets of \r{qq} is negative, and for positive $\mu$ there is no propagation. However, if $\mu$ is negative (which happens, in accordance with \r{alpha} and \r{CM}, within a narrow frequency range near the resonance of the scatterers, just above the resonant frequency of the medium), $q$ becomes real. This mini pass-band can be located much lower than the cutoff frequency of the empty waveguide by reducing the resonant frequency of the scatterers. It is easy to see from \r{qq} that the mode is a backward wave, i.e. $\frac{dq}{d\omega}<0$. This follows from the basic inequality $\frac{d\mu}{d\omega}>0$ (Foster's theorem). So, for transverse magnetic scatterers the effective medium filling model gives (at least qualitatively) a correct result. Namely, it predicts a mini-band within the resonance band of the SRR:s and the backward wave propagating within it. However, this model is not accurate. The reason for this inaccuracy is simple.
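The qualitative picture given by \r{CM} and \r{qq} can be reproduced with a few lines of arithmetic. The sketch below is a hypothetical normalized example ($a=b=c=1$, $\mu_0=\varepsilon_0=1$, so $k=\omega$ and the hollow-waveguide cutoff is $k=\pi$); the parameters $A=0.1$ and $\omega_0=1$ are assumptions mimicking those used elsewhere in the text, and the lossless (purely real) polarizability is used, as discussed above.

```python
# Numerical sketch of the effective medium filling model for the
# transverse magnetic case, Eqs. (CM) and (qq), in normalized units
# a = b = c = 1, mu_0 = eps_0 = 1 (so k = omega; hollow-waveguide cutoff
# at k = pi).  A = 0.1 and omega_0 = 1 are assumed parameters;
# C_s = 1/(3V) is the classical cubic-lattice static value.
from math import pi

A, omega_0, V = 0.1, 1.0, 1.0
C_s = 1.0 / (3.0 * V)

def alpha(omega):
    # lossless real polarizability, inverse of Eq. (inva)
    return A / (omega_0**2 / omega**2 - 1.0)

def mu(omega):
    # Clausius-Mossotti, Eq. (CM)
    a_ = alpha(omega)
    return 1.0 + (a_ / V) / (1.0 - C_s * a_)

def q_squared(omega):
    # q^2 = (mu/mu_0) [k^2 - (pi/a)^2], Eq. (qq), with k = omega
    return mu(omega) * (omega**2 - pi**2)

# just above the resonance mu < 0 while the bracket is negative (below
# cutoff), so q^2 > 0: a mini pass-band in which q decreases with
# omega, i.e. a backward wave
for w in (1.005, 1.02, 1.03):
    print(w, mu(w), q_squared(w))
```

In this toy example $\mu$ is indeed negative only in a narrow band just above $\omega_0$, and $q^2$ decreases across that band, confirming the backward-wave character predicted by Foster's theorem.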
In spite of the low frequency of operation (the waveguide dimensions are small compared to the wavelength {\it in free space}), the effective magnetic medium operates in a regime where its period is comparable with the wavelength {\it in the effective medium}, because $q_x=\pi/a$. Such a regime of an electromagnetic crystal cannot be described by homogenization and requires taking into account the spatial resonances of the lattice \cite{Belovhomo}. In the case of transversely oriented electric scatterers (Fig. \ref{image}.c), the effective medium filling model implies that the wave propagates in a uniaxial dielectric with the permittivity tensor: $$ \=\varepsilon=\varepsilon \-x_0\-x_0+\varepsilon_0(\-y_0\-y_0+\-z_0\-z_0). $$ The permittivity $\varepsilon$ along the anisotropy axis $x$ is given by the Clausius-Mossotti formula: \begin{equation} \varepsilon=\varepsilon_0\left(1+\frac{\alpha_e(\omega)/(\varepsilon_0V)}{1-C_s(a,b,c) \alpha_e (\omega)/\varepsilon_0}\right). \l{CMe} \end{equation} The dispersion equation for such a uniaxial dielectric reads \cite{BelovMOTL}: \begin{equation} \varepsilon_0 (q_y^2+q_z^2)=\varepsilon (k^2-q_x^2). \l{dispunise} \end{equation} The solution of the waveguide dispersion problem corresponds to the case $\vec q=(0,\pi/b,q)^T$: \begin{equation} q^2=\frac{\varepsilon}{\varepsilon_0}k^2-\left(\frac{\pi}{b}\right)^2. \l{qqe} \end{equation} This mode propagates only at frequencies where the permittivity takes high positive values, $\varepsilon>\varepsilon_0[\pi/(kb)]^2$. In our case of a resonant dielectric this happens within a mini-band just {\it below} the resonance of the medium. It is clear from \r{qqe} that the mode is a forward wave, i.e. $\frac{dq}{d\omega}>0$. This follows from Foster's theorem, $\frac{d\varepsilon}{d\omega}>0$. In the case of longitudinally oriented electric scatterers (Fig.
\ref{image}.d) the solution of the waveguide dispersion problem can be obtained from the dispersion equation \r{dispunise} with $\vec q=(q,\pi/a,\pi/b)^T$ (the axes were transformed so that the $x$-axis is along the dipoles) in the following form: \begin{equation} q^2=k^2-\frac{\varepsilon_0}{\varepsilon} \left[\left(\frac{\pi}{a}\right)^2 + \left(\frac{\pi}{b}\right)^2\right]. \l{qqe2} \end{equation} The mode propagates at frequencies where the permittivity is either positive and rather high, or negative. This mode is a forward wave, in the same manner as \r{qqe}, since $\frac{d\varepsilon}{d\omega}>0$. \subsection{Analysis of dispersion properties} For the numerical calculation of the dispersion curves using \r{dispmt}, \r{dispml}, \r{disppt}, \r{disppl} and \r{cfinal} we have chosen square waveguides ($a=b=c$) loaded by scatterers with the same parameters that were used for the study of the chains: $\omega_0=1/(a\sqrt{\varepsilon_0\mu_0})$, $A=0.1\mu_0a^3$ for magnetic scatterers, and $A_e=0.1\varepsilon_0a^3$ for electric ones. \begin{figure}[h] \centering \epsfig{file=surfm01bw.eps, width=8.5cm} \caption{Dependence of the real part of normalized interaction constant $C(k,\vec q)a^3$ with $\vec q=(\pi/a,0,q)^T$ (corresponding to transverse orientation of magnetic scatterers) on normalized frequency $ka/\pi$ and propagation constant $qa/\pi$. } \label{surfm01} \end{figure} The dependence of the real part of the normalized interaction constant $C(k,\vec q)$ with $\vec q=(\pi/a,0,q)^T$ on the normalized frequency $ka/\pi$ and on the propagation constant $qa/\pi$ is presented in Fig. \ref{surfm01}. This interaction constant corresponds to the dispersion equation \r{dispmt} for the transverse orientation of magnetic scatterers. The value of $\mbox{Re} (C)a^3$ varies within the $[-2,0.5]$ interval while the normalized frequency $ka/\pi$ is bounded by unity (which corresponds to the cutoff of the hollow waveguide).
If the value of the normalized frequency is fixed, then the real part of the interaction constant is a monotonically growing function of $qa/\pi$. The dependence of the real part of the interaction constant on frequency is quite weak compared with the rapidly decreasing $\alpha^{-1}(\omega)$, as follows from \r{invalph}. The function $\mu_0 \alpha^{-1}(\omega)$ takes values within the $[-2,0.5]$ interval at frequencies close to the resonant frequency $\omega_0$. Therefore, the dispersion equation \r{dispmt} has a real solution for $qa/\pi$ within a mini-band of frequencies near the resonant frequency of the scatterers $\omega_0$, and this solution is a decaying function of frequency, which corresponds to a backward wave (the group velocity $v_g=\frac{d\omega}{dq}$ is negative). The obtained result, of course, confirms the existence of the backward wave below the cutoff of the hollow waveguide predicted and experimentally demonstrated in \cite{Marqueswaveguide,Hrabarwaveguide}. \begin{figure}[h] \centering \epsfig{file=surfm00bw.eps, width=8.5cm} \caption{Dependence of the real part of normalized interaction constant $C(k,\vec q)a^3$ with $\vec q=(q,0,0)^T$ (corresponding to longitudinal orientation of magnetic scatterers) on normalized frequency $ka/\pi$ and propagation constant $qa/\pi$. } \label{surfm00} \end{figure} In contrast to Fig. \ref{surfm01}, the dependencies of the normalized interaction constants $C(k,\vec q)$ with $\vec q=(q,0,0)^T$, $\vec q=(0,\pi/a,q)^T$ and $\vec q=(q,\pi/a,\pi/a)^T$ are decaying functions of $q$ for fixed values of $k<\pi/a$. This means that the solutions of the dispersion equations \r{dispml}, \r{disppt} and \r{disppl} are forward waves for any resonant frequency of the scatterer below $\pi/(a\sqrt{\varepsilon_0\mu_0})$. Fig. \ref{surfm00} shows the dependence of the real part of the normalized interaction constant $C(k,\vec q)$ with $\vec q=(q,0,0)^T$ on the normalized frequency $ka/\pi$ and on the propagation constant $qa/\pi$.
The dependencies for the cases $\vec q=(0,\pi/a,q)^T$ and $\vec q=(q,\pi/a,\pi/a)^T$ are not shown in order to reduce the size of the paper. \begin{figure}[h] \centering \epsfig{file=dispcompm01.eps, width=8.5cm} \caption{Dispersion curves for metallic waveguides loaded by magnetic scatterers: exact solution (thick line) and effective medium filling approximation (dashed line) for transverse orientation, and exact solution (thin line) for longitudinal orientation. The effective medium filling model for the longitudinal case is not applicable.} \label{dispm01} \end{figure} The dispersion curves for the case of magnetic scatterers are presented in Fig. \ref{dispm01}. The thick solid line represents the dispersion curve for the transverse mode. It is obtained by the numerical solution of the transcendental dispersion equation \r{dispmt}. The dashed line shows the result predicted by the model of effective medium filling \r{qq}. A significant frequency shift between the exact and approximate solutions is observed. Also, the effective medium model gives evidently wrong results with $q>\pi/a$ in the region $ka<1.0055$, and incorrectly describes the group velocity for $q>\pi/(2a)$ (for example, it does not describe the Bragg mode with zero group velocity at the point $qa=\pi$). The dispersion curve for the longitudinal mode, obtained by the numerical solution of equation \r{dispml}, is represented by the thin line in Fig. \ref{dispm01}. As mentioned above, the effective medium filling model cannot be applied to describe this mode. \begin{figure}[h] \centering \epsfig{file=dispcompp01.eps, width=8.5cm} \caption{Dispersion curves for metallic waveguides loaded by electric scatterers: exact solution (thick line) and effective medium filling approximation (dashed line) for transverse orientation, and exact solution (thin line) and effective medium filling approximation (thin dashed line) for longitudinal orientation.
} \label{dispp01} \end{figure} The dispersion curves for the case of electric scatterers are presented in Fig. \ref{dispp01}. The thick and thin solid lines show the dispersion curves for the transverse and longitudinal modes, obtained by numerical solution of dispersion equations \r{disppt} and \r{disppl}, respectively. The dispersion curves provided by the effective medium models for the transverse and longitudinal modes (formulae \r{qqe} and \r{qqe2}) are plotted by thick and thin dashed lines, respectively. The comparison of the exact and approximate solutions shows that the effective medium filling model gives a qualitatively correct prediction of the dispersion curve behavior in the case of electric scatterers, as well as in the case of transverse magnetic scatterers. The drawbacks are also the same: an incorrect group velocity for $q>\pi/(2a)$ and clearly unphysical results with $q>\pi/a$ at some frequencies. Figs. \ref{dispm01} and \ref{dispp01} demonstrate that the waveguides loaded by electric and magnetic resonant scatterers support modes within the mini-bands below the cutoff frequency of the hollow waveguide. The modes are forward waves, except in the case of transverse magnetic scatterers, for which the mode is a backward wave. The bandwidth in the case of transverse electric scatterers is of the same order as the bandwidth for magnetic scatterers with the same parameters obtained with the help of the duality principle, but the bandwidths for the longitudinal modes are significantly narrower than for the transverse ones. \section{Discussion} In the papers \cite{Marqueswaveguide,Hrabarwaveguide} the term `subwavelength waveguide' was applied to a rectangular waveguide with small transversal dimensions as compared to the wavelength in free space. However, there are many other works \cite{Brongersma,Maier1,Maier2,Weber,Tretlines,Shamonina1,Shamonina2,Shamonina3,Shamonina4}
in which the term `subwavelength waveguiding' means the propagation of a wave along a chain of electrically small nearly-resonant particles below the diffraction limit. In this case the transversal size of the spatial domain in which the field of the guided mode is concentrated is much smaller than the wavelength in free space. Therefore, this mechanism of wave transmission is considered promising for subwavelength imaging. Since the field of the mode guided along the chain of resonant scatterers is concentrated within a subwavelength cross section, the presence of the metal walls, even at a rather small distance, turns out not to be crucial for the existence of the guided wave. Thus, any waveguide periodically loaded by the scatterers can be considered as a subwavelength waveguide formed by the chain of nearly resonant scatterers, whose dispersive properties are perturbed by the metal walls. These walls can be described in terms of the image chains forming an infinite lattice. However, the wave propagates along the same direction in every image chain, and the transversal wave numbers $q_x=\pi/a$ or $q_y=\pi/b$ describe the transversal phase distribution of the wave propagating along $z$, but not energy transport across $z$. The main question is how the image chains of scatterers interact with the real chain, and how this interaction influences its dispersive properties. To answer it, let us compare Fig. \ref{displine} with Figs. \ref{dispm01} and \ref{dispp01}. We can conclude that the presence of metal walls around the chain of resonant scatterers produces the following effects: \begin{itemize} \item It decreases the group velocity and the frequency band of the guided mode which corresponds to the longitudinal orientation of dipoles. \item It cancels the two-mode regime for the transverse orientation of dipoles, so that the dispersion branch becomes backward for magnetic scatterers, and forward for electric ones.
\end{itemize} Finally, we would like to emphasize that the width of the pass band for the waveguide loaded by transversal electric scatterers is of the same order as in the case of transversal magnetic scatterers (the Marques waveguide \cite{Marqueswaveguide,Hrabarwaveguide}) if the loading scatterers have parameters obtained from each other using the duality principle. Thus, loading by electric scatterers could be an alternative and even more appropriate solution for waveguide miniaturization than the design suggested in \cite{Hrabarwaveguide} for this purpose. \section{Conclusion} The dispersion properties of rectangular waveguides loaded by resonant scatterers (magnetic and electric ones) have been studied. The waveguide problem has been transformed, using image theory, into the eigenmode problem of an auxiliary three-dimensional electromagnetic crystal. The dispersion properties of this electromagnetic crystal have been modelled using the local field approach. It has been revealed that not only magnetic but also electric resonant scatterers make it possible to obtain a mini pass band below the cutoff frequency of the hollow waveguide. The corresponding mini-band turns out to be of the same order as for magnetic scatterers with the same individual frequency dispersion. Thus, electric scatterers (inductively loaded short wires) could be as promising for waveguide miniaturization as split-ring resonators. It has been shown that loading by scatterers with longitudinal orientation of dipole moments also makes it possible to obtain a mini-band of propagation below the cutoff frequency of the hollow waveguide, but the width of this band is significantly narrower than in the case of transverse orientation. The observed effects are explained in terms of the subwavelength guiding properties of single chains of scatterers.
This explanation is supported by a comparison of the dispersion properties of the loaded waveguides and of the chains of resonant scatterers in free space. The results of our theory are in good agreement with known literature data. For the chains of resonant dipoles the results of \cite{Weber} are reproduced. For the rectangular waveguide loaded by split-ring resonators the same result as in \cite{Marqueswaveguide} has been obtained.
\section{Introduction} Water is ubiquitous on Earth, and a detailed understanding of its properties and behavior under different conditions is of crucial importance in several scientific fields.\cite{ei69,pe99,fr00,ro96} This refers not only to thermodynamically stable water phases, but also to various metastable phases obtained at ambient and extreme conditions of temperature and pressure. In particular, several forms of amorphous ice have been found and studied by both experimental\cite{mi84,mi85,he89,tu02b,ne06,st07} and theoretical\cite{ts99,gi05,ts05,lo06} methods, but some of their properties still lack a complete understanding. This is mainly due to the peculiar structure of condensed phases of water, where hydrogen bonds between adjacent molecules give rise to rather unusual properties of these phases. The atomic dynamics in amorphous solids give rise to localized low-energy excitations, which display appreciable deviations from the picture of atomic nuclei vibrating harmonically around their potential minima.\cite{cu87,el90} Such deviations from harmonicity, combined with the quantum character of atomic dynamics, are of great importance in the characterization of amorphous materials. In this context, the classical papers by Phillips\cite{ph72} and Anderson {\em et al.}\cite{an72} opened a research line by modeling low-energy excitations in amorphous solids as two-level systems. More recently, several detailed descriptions of the low-energy motion in this type of material, going beyond the standard tunneling model, have appeared.\cite{ra98a} When one studies average structural properties of amorphous materials, an interesting question is whether quantum effects can be noticeable in the presence of structural disorder.
To be concrete, one may ask whether the radial distribution function (RDF) obtained in classical simulations is appreciably modified by considering quantum atomic delocalization, which can be important mainly at low temperatures. Two factors compete in the broadening of the RDF peaks at low temperature: structural disorder and quantum delocalization.\cite{he98} As a first approach, one expects that for amorphous materials with heavy atoms (small zero-point vibrational amplitudes), structural disorder will broaden the peaks more than zero-point motion, and that the opposite may occur for disordered materials with light atoms. For amorphous ice, one may suspect that the presence of hydrogen will make quantum effects appreciable, even in the presence of strong structural disorder. Computer modeling of amorphous ice has been employed in recent years to gain insight into its structural and dynamical properties.\cite{ga96,ok96,gi04,gi05,ts05,se09} The beginning of computer simulations of condensed phases of water at an atomic level dates back more than 40 years,\cite{ba69,ra71} and nowadays a large variety of empirical interatomic potentials can be found in the literature.\cite{ma01,ko04,jo05,ab05,pa06,mc09,go11} Many of them assume a rigid geometry for the water molecule, whereas others allow for molecular flexibility with either harmonic or anharmonic OH stretches.
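The distinction between harmonic and anharmonic OH stretches can be made concrete with a Morse potential, the usual anharmonic choice, compared against a harmonic fit of the same curvature. The parameters below are generic illustrative values, not those of any specific water model such as q-TIP4P/F:

```python
import math

# Generic Morse potential vs. a harmonic fit with the same curvature at
# the minimum.  Parameters are illustrative only (arbitrary units).
D, a, r0 = 0.18, 2.3, 0.96   # well depth, inverse width, equilibrium length

def morse(r):
    """Anharmonic stretch: D * (1 - exp(-a*(r - r0)))^2."""
    return D * (1.0 - math.exp(-a * (r - r0))) ** 2

def harmonic(r):
    """Harmonic stretch with the same curvature at r0 (k = 2*D*a^2)."""
    return 0.5 * (2.0 * D * a ** 2) * (r - r0) ** 2

# The Morse bond is stiffer under compression and softer under stretching
# than its harmonic counterpart with the same force constant:
stiff_asymmetry = morse(r0 - 0.1) > harmonic(r0 - 0.1) > morse(r0 + 0.1)
```

This asymmetry is what lets anharmonic models shift mean bond lengths away from the harmonic equilibrium value once zero-point motion is included.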
In recent years, simulations of water using \textit{ab initio} density functional theory (DFT) have also been carried out.\cite{ch03,fe06,mo08} However, hydrogen bonds in condensed phases of water are difficult to describe with currently available energy functionals, so that some properties are not accurately reproduced by DFT calculations.\cite{yo09} Some progress in the description of van der Waals interactions in water within the DFT formalism has recently been made.\cite{wa11,ko11,ak11,pa12} A shortcoming of {\em ab-initio} electronic-structure calculations is that they usually deal with atomic nuclei as classical particles, disregarding quantum effects like zero-point motion. These effects may be accounted for using harmonic or quasiharmonic approximations for the nuclear motion, but the precision of these approaches is not readily estimated when large anharmonicities are present, as can be the case for light atoms like hydrogen in disordered materials. To take into account the quantum character of atomic nuclei, the path-integral molecular dynamics (PIMD) approach has proved to be very useful, since in this procedure the nuclear degrees of freedom can be quantized in an efficient manner, thus including both quantum and thermal fluctuations in many-body systems at finite temperatures.\cite{gi88} This computational technique is now well established as a tool to study problems in which anharmonic effects can be important.\cite{ce95} Thus, a powerful approach could consist of combining DFT to determine the electronic structure with path integrals to describe the quantum motion of the atomic nuclei.\cite{ch03,mo08} However, this procedure requires computer resources that would severely restrict the number of state points that can be considered in actual calculations.
Several forms of amorphous ice have been detected in recent years, corresponding to different densities.\cite{gi05,lo06,ma05,bo06,lo11} In the present paper we study high-density amorphous (HDA) ice by PIMD simulations at different pressures and temperatures, in order to analyze its structural and thermodynamic properties. Interatomic interactions are described by the flexible q-TIP4P/F model, which was recently developed and has been employed to carry out PIMD simulations of liquid\cite{ha09,ha11a,ra11} and solid\cite{ra10,he11,ha11b} water. Here we pose the question of how the quantum motion of the lightest atom can influence the structural properties of an amorphous water phase, and in particular whether this quantum motion is appreciable for the solid at different densities, i.e. under different external pressures. This refers to the volume of the solid and to interatomic distances, but also to the mechanical stability of the solid. These questions have been addressed earlier for high- and low-density amorphous ice.\cite{ga96,ur03} In this context, it is usually assumed that increasing quantum fluctuations enhances the exploration of the configuration space, but in certain regimes an increase in quantum fluctuations can lead to dynamical arrest, as found for glass formation.\cite{ma11} The paper is organized as follows. In Sec.\,II, we describe the computational method and the model employed in our calculations. In Sec.~III we discuss the method employed to generate the simulation cells of amorphous ice. Our results are presented in Sec.~IV, dealing with the molar volume, interatomic distances, kinetic energy, and bulk modulus of HDA ice. Sec.~V gives a comparison with earlier work, and Sec.~VI includes a summary of the main results. \section{Computational Method} We employ PIMD simulations to obtain several properties of amorphous ice at different temperatures and pressures.
This kind of simulation is based on an isomorphism between a quantum system and a classical one, obtained after a discretization of the quantum density matrix along cyclic paths.\cite{fe72,kl90} This isomorphism corresponds to replacing each quantum particle by a ring polymer consisting of $L$ (Trotter number) classical particles, joined by harmonic springs with a temperature- and mass-dependent force constant. Details on this simulation procedure can be found elsewhere.\cite{gi88,ce95} The dynamics used in this method is fictitious and does not correspond to the real quantum dynamics of the particles considered, but it helps to effectively sample the many-body configuration space, yielding precise results for the properties of the actual quantum system. Another way to derive such properties could be the use of Monte Carlo sampling, but we have found that, for the present problem, this procedure requires more computer resources than PIMD simulations. An important advantage of the latter is that the computing codes can be more readily parallelized, which turns out to be a relevant factor for the efficient use of modern computer architectures. Simulations of crystalline and amorphous ice were carried out here in the isothermal-isobaric $NPT$ ensemble ($N$, number of particles; $P$, pressure; $T$, temperature), which allows one to find the equilibrium volume of a solid at a given pressure and temperature. We have used effective algorithms for carrying out PIMD simulations in this statistical ensemble, similar to those described in the literature,\cite{ma96,tu98,tu02} and employed earlier in the study of solid and liquid water by PIMD simulations. We have considered temperatures between 50 K and 300 K, and pressures up to 8 GPa. Both negative (tension) and positive (compression) pressures have been employed in the simulations. For negative $P$ we considered amorphous ice in the region of mechanical stability of the solid, down to $P \sim -0.5$ GPa.
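The classical isomorphism mentioned above can be sketched explicitly: with $L$ beads per nucleus, adjacent beads are coupled by harmonic springs whose force constant, in the common $e^{-\beta H}$ convention, is $k = mL/(\beta\hbar)^2$, growing with both temperature and mass. The following is a minimal illustration in SI units, not the production PIMD code used in this work; the example bead number follows the $LT = 6000$~K rule described below:

```python
# Ring-polymer sketch of the path-integral isomorphism (SI units).
kB   = 1.380649e-23      # Boltzmann constant, J/K
hbar = 1.054571817e-34   # reduced Planck constant, J*s
m_H  = 1.6735575e-27     # mass of a hydrogen atom, kg

def ring_polymer_spring(m, T, L):
    """Inter-bead spring constant k = m*L/(beta*hbar)^2 (N/m)."""
    beta = 1.0 / (kB * T)
    return m * L / (beta * hbar) ** 2

def spring_energy(m, T, L, x):
    """Harmonic spring energy of one cyclic path x[0..L-1] (1D, metres)."""
    k = ring_polymer_spring(m, T, L)
    return 0.5 * k * sum((x[i] - x[(i + 1) % L]) ** 2 for i in range(L))

# Example: hydrogen at T = 75 K with L = 80 beads (so that L*T = 6000 K)
k75 = ring_polymer_spring(m_H, 75.0, 80)
```

Because $k$ grows with $m$ and $T$, heavy atoms and high temperatures produce tightly collapsed polymers (the classical limit), while light atoms at low temperature spread out, which is the quantum delocalization probed in this work.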
For comparison with the results of PIMD simulations, some simulations of HDA ice have also been performed in the classical limit, which is obtained in our path-integral procedure by setting the Trotter number $L = 1$. Our PIMD simulations were carried out on cells including 96 or 216 water molecules, which were generated by pressure treatment of ice Ih and Ic supercells, respectively. The former (96 molecules) corresponds to a $(3a, 2 \sqrt{3} a, 2c)$ supercell, where $a$ and $c$ are the standard hexagonal lattice parameters of ice Ih, whereas the latter (216 molecules) corresponds to a $3 \times 3 \times 3$ supercell of the cubic unit cell of ice Ic. The main purpose of taking these two ice types was to check the influence of the starting crystalline ice on the properties of the amorphous ice resulting under pressure. For all considered variables (structural and thermodynamic), we found in both cases results that coincided with each other within the statistical error bars of the simulation procedure. To generate proton-disordered ice supercells (of Ih and Ic types) prior to amorphization, we employed a Monte Carlo procedure to impose that each oxygen atom had two chemically bonded and two H-bonded hydrogen atoms, with a cell dipole moment close to zero.\cite{bu98,he11} For some particular conditions, we checked that neither the starting proton disorder nor the history of the amorphous supercell (amorphization procedure) significantly affects the results presented below. In particular, once the amorphous material was formed from crystalline ice, we checked that its properties are reversible upon increasing and decreasing temperature and/or pressure.
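The Monte Carlo imposition of the ice rules can be illustrated on a toy model. The sketch below uses a periodic 2D square lattice as a stand-in for the real four-coordinated hydrogen-bond network of ice Ih/Ic; it is a schematic simplification, not the procedure actually applied to the supercells here. Each bond carries one proton owned by one of its two endpoint "oxygens", and zero-temperature moves are accepted whenever they do not increase the number of ice-rule violations:

```python
import random

# Toy 2D "square ice" analogue: every site is 4-coordinated, like oxygen
# in ice; the ice rule demands each site own exactly two of its four bonds.
random.seed(3)
N = 4                                        # N x N periodic lattice

# bond[(i, j, d)] is True if site (i, j) owns the bond to its neighbour
# in direction d (d = 0: right neighbour, d = 1: lower neighbour).
bond = {(i, j, d): random.random() < 0.5
        for i in range(N) for j in range(N) for d in (0, 1)}
keys = list(bond)

def owned(i, j):
    """Number of protons owned by site (i, j) (ice rule: exactly 2)."""
    return (bond[(i, j, 0)] + bond[(i, j, 1)]
            + (not bond[((i - 1) % N, j, 0)])    # bond shared with left site
            + (not bond[(i, (j - 1) % N, 1)]))   # bond shared with upper site

def cost():
    """Total squared deviation from the ice rule."""
    return sum((owned(i, j) - 2) ** 2 for i in range(N) for j in range(N))

# Zero-temperature Monte Carlo: flip a random bond ownership and keep the
# flip only if the local ice-rule violation count does not increase.
for step in range(500000):
    if step % 1000 == 0 and cost() == 0:
        break
    i, j, d = random.choice(keys)
    i2, j2 = ((i + 1) % N, j) if d == 0 else (i, (j + 1) % N)
    before = (owned(i, j) - 2) ** 2 + (owned(i2, j2) - 2) ** 2
    bond[(i, j, d)] = not bond[(i, j, d)]
    if (owned(i, j) - 2) ** 2 + (owned(i2, j2) - 2) ** 2 > before:
        bond[(i, j, d)] = not bond[(i, j, d)]    # reject the move
```

Accepting cost-neutral moves lets ice-rule defects diffuse until they annihilate; on the real lattice an additional constraint on the cell dipole moment is imposed in the same Monte Carlo spirit.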
Interatomic interactions were modeled by the point-charge, flexible q-TIP4P/F model, developed to study liquid water,\cite{ha09} and later employed to study various properties of ice\cite{ra10,he11} and water clusters.\cite{go10} Many of the empirical potentials used earlier for quantum simulations of condensed phases of water treat H$_2$O molecules as rigid bodies.\cite{he05,mi05,he06b} This turns out to be convenient for computational efficiency, but neglects the role of intramolecular flexibility in the structure, dynamics, and thermodynamics of the condensed water phases.\cite{ha09} Moreover, the q-TIP4P/F potential takes into account the significant anharmonicity of the O--H vibration in a water molecule by considering anharmonic stretches, in contrast to the harmonic potentials employed in most simulations that have considered quantum effects in these water phases. Technical details of the PIMD simulations presented here are the same as those described and used in Refs.~\onlinecite{ra10,he11}. The Trotter number $L$ has been taken proportional to the inverse temperature ($L \propto 1/T$), with $L T$ = 6000~K, which allows us to keep a roughly constant precision in the PIMD results at the different temperatures under consideration. The time step $\Delta t$ employed for the calculation of interatomic forces in the molecular dynamics procedure was taken in the range between 0.1 and 0.3 fs, which was found to give adequate convergence for the variables studied here. For given conditions of pressure and temperature, a typical simulation run consisted of $2 \times 10^5$ PIMD steps for system equilibration, followed by $10^6$ steps for the calculation of ensemble-averaged properties. \section{Preparation of amorphous ice} In this Section we give details on how we obtain the disordered structures of HDA ice that are subsequently employed to characterize and study this amorphous phase by PIMD simulations.
In particular, we have obtained simulation cells of HDA ice by applying a hydrostatic pressure to ice Ih and ice Ic at several temperatures. This procedure also allows us to check the pressure at which amorphization occurs, and to compare it with data (both experimental and theoretical) reported in the literature. We note that HDA ice has recently been obtained at room temperature from ice VII under rapid compression.\cite{ch11} \begin{figure} \vspace{-1.1cm} \hspace{-0.5cm} \includegraphics[width= 9cm]{fig1.ps} \vspace{-0.8cm} \caption{ Molar volume of ice as a function of pressure, as derived from PIMD simulations at $T$ = 75 K. Open circles represent results derived from simulations starting from ice Ih. Solid circles are data points obtained from simulations starting from amorphous ice. Error bars are of the order of the symbol size. The dashed line is a guide to the eye. An open triangle at $P$ = 0 indicates the volume measured by Mishima {\em et al.}\cite{mi84} The solid line was obtained from the pressure-density data displayed in the review by Loerting and Giovambattista\cite{lo06}, taken from Ref.~\onlinecite{ma04}. } \label{f1} \end{figure} In Fig.~1 we present the pressure dependence of the molar volume of ice at a temperature of 75 K, as derived from our PIMD simulations. Results shown as open circles correspond to simulations starting from ice Ih, for both negative and positive pressures. We observe that in the region between --1 GPa and 1 GPa the volume decreases smoothly as pressure is increased, and at about 1.2 GPa it undergoes a sudden decrease, which corresponds to ice amorphization. This value is close to the spinodal pressure (limit of mechanical stability) obtained for ice Ih at this temperature in Ref.~\onlinecite{he11b} ($P_s = 1.19 \pm 0.05$ GPa), and to the amorphization pressure obtained in Ref.~\onlinecite{ok96} from classical molecular dynamics simulations.
From 1 to 2 GPa the volume decreases by 27\%, and at higher pressures it continues to decrease, reaching a value of 10.8 cm$^3$/mol at $P$ = 8 GPa, to be compared with $v$ = 19.36 cm$^3$/mol found for ice Ih at atmospheric pressure. We have carried out this procedure at several temperatures, and the results are similar to those shown in Fig.~1. In particular, the amorphization pressure changes slightly with temperature, and at 250 K we find 0.95 GPa. This was discussed in Ref.~\onlinecite{he11b} in connection with the stability of ice Ih under pressure, and will not be repeated here. Apart from the pressure at which the solid amorphizes in the simulations, more precise values for the amorphization pressure can be obtained from the spinodal line of ice Ih, which gives the limit of mechanical stability of this solid phase. This pressure corresponds to the vanishing of the bulk modulus (divergence of the compressibility), and can be approached in computer simulations at low temperatures ($T \lesssim 50$ K). At higher $T$, ice Ih amorphizes in the PIMD simulations before reaching the corresponding spinodal pressure, due to nucleation effects leading to the breakdown of the ice Ih structure.\cite{he11b} We note that the formation of HDA ice from ice Ih has not been observed in the laboratory at temperatures as high as 250 K, but we obtain this transition at such temperatures and find an amorphous phase that remains metastable along our simulations (which can in fact only cover time scales much shorter than those of actual experiments). HDA ice has, however, been obtained from ice VII at room temperature,\cite{ch11} as mentioned above. Solid water amorphizes in an irreversible way, so that new PIMD simulations at pressures lower than 1 GPa, starting from the amorphous phase, do not recover the crystalline phase. This is shown in Fig.~1 by solid symbols.
In these simulations the pressure was gradually reduced down to negative pressures, until reaching the limit of mechanical stability of the material. At 75 K we could reach a pressure of --0.4 GPa; at still more negative pressures the amorphous solid broke down along the simulations, transforming into the gas phase. This point will be discussed below in connection with the bulk modulus of HDA ice. For comparison with our results we present in Fig.~1 the molar volume obtained by Mishima {\em et al.}\cite{mi84} for HDA ice from x-ray diffraction experiments at 80 K and atmospheric pressure. Also shown is the volume-pressure curve derived from the data given in the review by Loerting and Giovambattista,\cite{lo06} taken from Ref.~\onlinecite{ma04} (solid line). We note that between $P$ = 0 and 1 GPa the molar volume of the amorphous material is clearly smaller than that of the crystal, but at negative pressure the volume of the amorph increases rapidly, due to the proximity of its metastability limit, and therefore approaches the volume of ice Ih. \begin{figure} \vspace{-1.1cm} \hspace{-0.5cm} \includegraphics[width= 9cm]{fig2.ps} \vspace{-0.8cm} \caption{ Oxygen-oxygen radial distribution function of ice Ih (dashed line) and HDA ice (solid line) at $T$ = 100 K and $P$ = 1 atm, as derived from PIMD simulations. } \label{f2} \end{figure} For additional confirmation of the amorphous character of the solid obtained after the cycle indicated by arrows in Fig.~1 (first a pressure increase up to 8 GPa, and then a pressure reduction), we display in Fig.~2 the O--O RDF of ice Ih and amorphous ice, as derived from PIMD simulations at $T$ = 100 K and atmospheric pressure. Upon amorphization, the prominent peaks corresponding to the second and third coordination spheres merge into one broad feature with a maximum at about 4.1 \AA.
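The RDFs discussed here are computed by normalizing a pair-distance histogram by the mean density. A minimal sketch for an orthorhombic periodic cell with the minimum-image convention follows; this is a generic implementation, not the analysis code actually used for the figures:

```python
import numpy as np

def rdf(pos, box, dr=0.05, r_max=None):
    """Pair radial distribution function g(r) = d(r) / D, where
    d(r) = N(r) / (4*pi*r^2*dr) is the local density of neighbours at
    distance r from a reference atom and D = N/V is the mean density.
    Assumes an orthorhombic periodic cell and the minimum-image
    convention (valid for r_max <= min(box)/2)."""
    n = len(pos)
    if r_max is None:
        r_max = 0.5 * box.min()
    edges = np.arange(0.0, r_max + dr, dr)
    counts = np.zeros(len(edges) - 1)
    for i in range(n - 1):
        d = pos[i + 1:] - pos[i]
        d -= box * np.round(d / box)                    # minimum image
        r = np.linalg.norm(d, axis=1)
        counts += 2.0 * np.histogram(r, bins=edges)[0]  # both orderings
    r_mid = 0.5 * (edges[1:] + edges[:-1])
    shell = 4.0 * np.pi * r_mid ** 2 * dr               # shell volume
    density = n / box.prod()
    return r_mid, counts / (n * shell * density)        # average over atoms

# Ideal-gas check: uniformly random points should give g(r) close to 1
rng = np.random.default_rng(0)
box = np.array([10.0, 10.0, 10.0])
pos = rng.random((400, 3)) * box
r, g = rdf(pos, box, dr=0.5)
```

The division by the density is exactly what makes peak heights sensitive to the volume change upon amorphization, which is relevant when comparing first-peak heights between Ih and HDA ice.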
The maximum of the first peak in the amorphous solid remains close to that of ice Ih, in spite of the appreciable volume reduction suffered by the solid in the amorphization process. Note that the difference in height of the first peaks in the crystalline and amorphous phases is mainly due to the definition of the radial distribution function, which is normalized by the density.\cite{ch87} In fact, we calculate $g(r)$ as \begin{equation} g(r) = \frac{d(r)}{D} \; , \end{equation} where $D$ is the mean density (atoms per unit volume) and $d(r)$ is the ``local density'' at distance $r$ from a reference atom: \begin{equation} d(r) = \frac{N(r)}{4 \pi r^2 \Delta r} \; , \end{equation} $N(r)$ being the number of atoms at distances between $r$ and $r + \Delta r$. \section{Properties of high-density amorphous ice} \subsection{Volume} As shown above in Fig.~1 and discussed in Sect.~III, ice Ih suffers an important reduction in volume upon amorphization at about 1.2 GPa. This volume decrease is associated with a softening of the intermolecular O--H bridges, accompanied by a reduction in the mean distance to oxygen atoms in the second and third coordination shells (see Fig.~2). After amorphization, the molar volume continues to decrease as pressure is raised, and at 75 K we find a reduction from 13.11 cm$^3$/mol at $P$ = 2 GPa to 10.81 cm$^3$/mol at 8 GPa, which means a decrease of 18\% with respect to the volume at 2 GPa. We emphasize that the volume of the amorph decreases smoothly in the whole pressure region considered here, and we did not detect in this region any other phase change such as those reported in the literature. In fact, Hemley {\em et al.}\cite{he89} observed a pressure-induced re-crystallization of HDA ice at about 4 GPa. Also, a slow transformation of HDA to cubic ice on slow depressurization has been observed.\cite{jo07} In our simulations the amorph remains in its metastable state in the whole range of temperature and pressure studied here.
\begin{figure} \vspace{-1.1cm} \hspace{-0.5cm} \includegraphics[width= 9cm]{fig3.ps} \vspace{-0.8cm} \caption{ Temperature dependence of the molar volume of ice at atmospheric pressure as derived from PIMD simulations: Ih (circles) and HDA ice (squares). Results of classical simulations for HDA ice are displayed as triangles. A solid line represents data obtained from classical molecular dynamics simulations by Tse {\em et al.}\cite{ts05} For ice Ih the error bars are smaller than the symbol size. Lines are guides to the eye. } \label{f3} \end{figure} In Fig.~3 we show the molar volume of ice Ih (circles) and the HDA phase (squares) as a function of temperature at atmospheric pressure, as derived from our PIMD simulations. At $P$ = 1 atm and $T$ = 75 K, after releasing the pressure applied for amorphization, we find a molar volume $v$ = 15.57 cm$^3$/mol, which corresponds to a density $\rho$ = 1.16 g/cm$^3$, in good agreement with the values given by Mishima {\em et al.} at zero pressure: $\rho$ = 1.17 g/cm$^3$ in Ref.~\onlinecite{mi84} and 1.19 g/cm$^3$ in Ref.~\onlinecite{mi85}. For comparison with the PIMD results, we also display in Fig.~3 classical data derived here with the q-TIP4P/F potential (triangles), as well as those obtained by Tse {\em et al.}\cite{ts05} using the SPC/E potential. At low temperature, the molar volume found by these authors is similar to that found here in the classical simulations, and it becomes closer to the PIMD results as the temperature rises. Comparing our quantum and classical results for the q-TIP4P/F force field, we find at 50 K an increase in molar volume of about 0.85 cm$^3$/mol due to nuclear quantum effects, which amounts to 6\% of the classical value.
In the temperature region up to 250 K we did not observe any transition from HDA ice to a low-density amorphous phase with density $\rho$ = 0.94 g/cm$^3$ (molar volume: 19.1 cm$^3$/mol), as experimentally observed at about 120--130 K,\cite{mi85,ts99,ne06,tu02b} and HDA ice remained a metastable phase along our PIMD simulation runs. Something similar happens with the classical simulations reported in Ref.~\onlinecite{ts05}. We believe that such a transition between amorphous phases is not captured by the simulations due to the short time window that can in fact be observed in the calculations, similar to the difficulty of directly obtaining transitions between different crystalline phases in this kind of simulation. This question could possibly be solved by performing direct coexistence simulations such as those reported in the literature for liquid-solid transitions.\cite{ha09} It is clear that the amorphous material has a higher density, i.e. a smaller molar volume, than ice Ih, but it expands with temperature much faster than the crystalline solid. At atmospheric pressure we find for the amorph a volume increase of 2.6 cm$^3$/mol in the temperature range from 50 to 250 K. In the same temperature region, the volume of ice Ih is found to rise by less than 0.3 cm$^3$/mol, in part due to the negative thermal expansion around 100 K (not clearly observable at the scale of Fig.~3). Thus, we find for the thermal expansion coefficient of HDA ice at 100 K: $\alpha = 8.1 \times 10^{-4}$ K$^{-1}$. This large volume expansion of the amorph, as compared with ice Ih, is indeed related to the lower bulk modulus of the amorphous material at $P$ = 1 atm (see below). \subsection{Interatomic distances} \begin{figure} \vspace{-1.1cm} \hspace{-0.5cm} \includegraphics[width= 9cm]{fig4.ps} \vspace{-0.8cm} \caption{ Mean intramolecular O--H distance as a function of pressure for amorphous ice at 75 K, as derived from PIMD (squares) and classical (circles) simulations.
Error bars are of the order of the symbol size. Lines are guides to the eye. } \label{f4} \end{figure} In this section we present results for interatomic distances in amorphous ice, both between atoms in the same molecule and between atoms in adjacent molecules. This can shed light on the structural changes suffered by the material when temperature and/or pressure are modified. We first show in Fig.~4 the mean intramolecular O--H distance as a function of pressure at a temperature $T$ = 75 K. In connection with this figure, two results should be emphasized. First, at a given pressure, we find that the O--H bond distance increases appreciably due to nuclear quantum effects. In fact, a classical simulation at $T$ = 75 K and ambient pressure yields a mean O--H distance of 0.966 \AA, to be compared with 0.980 \AA\ derived from PIMD simulations, which means an increase of 1.4\% in the bond length due to nuclear quantum motion. This difference is much larger than the temperature-induced change in $d$(O--H) at atmospheric pressure, which amounts to about 0.002 \AA\ in the range from 50 to 250 K. The increase due to quantum motion is rather constant over the whole pressure range studied here. Another important result observed in Fig.~4 is that at a given temperature the O--H distance increases as pressure is raised, contrary to the usual contraction of atomic bonds with increasing $P$. This somewhat anomalous behavior is due to the characteristic structure of ice, with hydrogen bonds connecting contiguous water molecules, which gives rise to an anticorrelation between the strength of the molecular O--H bonds and that of the intermolecular H bridges.\cite{li99} In fact, increasing the pressure causes a hardening of the intermolecular H bridges, with an associated weakening of the intramolecular O--H bonds and a concomitant increase in the bond length.
This weakening of the intramolecular O--H bonds in ice with rising pressure has been observed experimentally and reported in the literature.\cite{ny06} It is interesting to compare the O--H bond distance in amorphous ice with that obtained for ice Ih in the same type of PIMD simulations. At atmospheric pressure and 75 K, we find for ice Ih a mean distance $d$(O--H) = 0.984 \AA, i.e. about 0.4\% longer than in HDA ice under the same conditions.\cite{he11} At the same temperature and $P$ = 1 GPa, close to the amorphization pressure of ice Ih, we found a distance difference of 0.6\%. This means that the average O--H distance decreases upon amorphization of the solid, which is consistent with a weakening of the H bridges between adjacent molecules, causing a strengthening of the intramolecular bonds and therefore a shortening of the corresponding O--H distance. These differences between the O--H distances in Ih and HDA ice are consistent with changes in the stretching vibrational frequencies, as observed in Raman scattering experiments. In fact, for HDA ice one observes at 80 K a broad Raman band with a maximum at $\approx$ 3200 cm$^{-1}$,\cite{sa06} to be compared with the largest feature appearing in the O--H stretching region of ice Ih at about 3090 cm$^{-1}$.\cite{wa77} This hardening of the stretching vibrations upon amorphization is consistent with the general trend found for water molecules when the intramolecular O--H distance decreases in different crystal surroundings:\cite{oj92} $\Delta \omega$(O-H)/$\Delta d$(O-H) = --2.4 $\times 10^4$ cm$^{-1}$/\AA. We note, for comparison, that Bellissent-Funel {\em et al.}\cite{be87} found an intramolecular O--D distance of 0.97 \AA\ from neutron scattering experiments on high-density amorphous D$_2$O. Another related aspect of the O--H distance is its temperature dependence.
For ice Ih at atmospheric pressure, this distance is known to decrease as temperature is raised, as a consequence of the hardening of the bond for increasing temperature. This is in line with an enhancement of molecular motion for rising $T$, which causes a weakening of the H bridges and an associated enhancement of intramolecular bond strength. Something similar is found for amorphous ice in our PIMD simulations, where the average O--H distance decreases from 0.9804(2) to 0.9788(2) \AA\ when $T$ rises from 75 to 250 K. \begin{figure} \vspace{-1.1cm} \hspace{-0.5cm} \includegraphics[width= 9cm]{fig5.ps} \vspace{-0.8cm} \caption{ Oxygen-hydrogen radial distribution function at 75 K and $P$ = 1 atm, as derived from quantum PIMD simulations for H$_2$O (solid line) and D$_2$O (dashed line) amorphous ice, as well as from classical molecular dynamics simulations (dashed-dotted line). Inset: RDF in the region around 1 \AA, showing the peak corresponding to intramolecular O--H bonds. } \label{f5} \end{figure} Given that nuclear quantum motion appreciably affects the mean O--H distance of water molecules in amorphous ice, isotopic effects are expected to be observable in the RDF. In Fig.~5 we display the O--H radial distribution functions for HDA ice, as derived from PIMD simulations for normal and deuterated water, as well as from classical molecular dynamics simulations. The RDF derived from classical simulations is similar to that found earlier in this kind of simulation for HDA ice.\cite{gi05} For interatomic distances $r$ from 1.5 to 5 \AA, we observe in Fig.~5 that quantum effects cause a broadening of the peaks. This is particularly observable in the peaks at about 1.8 \AA\ and 3.2 \AA. The first peak in this RDF, corresponding to the intramolecular O--H bonds, is much higher, and is displayed in the inset. For the classical model, it is about 15 times larger than the peak at 1.8 \AA.
All peaks are found to broaden due to quantum effects, and their widths are larger for smaller isotopic mass. In fact, for the peak corresponding to intramolecular O--H bonds, we obtained a full width at half maximum of 0.05, 0.14, and 0.16 \AA, for classical ice, quantum D$_2$O, and quantum H$_2$O, respectively. Note that in this respect the classical limit behaves as the large-mass limit. \begin{figure} \vspace{-1.1cm} \hspace{-0.5cm} \includegraphics[width= 9cm]{fig6.ps} \vspace{-0.8cm} \caption{ Hydrogen-hydrogen radial distribution function for HDA ice at $P$ = 1 atm. Solid and dashed lines show results obtained from PIMD and classical simulations, respectively, at $T$ = 75 K. The dashed-dotted line was derived from neutron diffraction experiments at 80 K.\cite{fi02b} } \vspace{0.6cm} \label{f6} \end{figure} In Fig.~6 we show the hydrogen-hydrogen RDF of HDA ice as derived from classical (dashed line) and PIMD simulations (solid line). As expected, quantum motion of H broadens the peaks in the RDF with respect to the classical result. The peak corresponding to H--H pairs inside water molecules, at 1.53 \AA, has a height of 5.4 in the classical RDF, almost four times larger than the quantum result of 1.45. In Fig.~6 we also present the H--H RDF obtained by Finney~{\em et al.}\cite{fi02} from neutron diffraction experiments at 80 K. Note that, in the original publication, this curve does not include the intramolecular H--H pairs. The next peak, corresponding to hydrogen pairs in adjacent (H-bonded) molecules, appears at nearly the same position in the RDF derived from experiment and in that obtained from PIMD simulations. However, the former is higher than the latter. This is the main difference between the two results, which are otherwise rather similar. \begin{figure} \vspace{-1.1cm} \hspace{-0.5cm} \includegraphics[width= 9cm]{fig7.ps} \vspace{-0.8cm} \caption{ Oxygen-oxygen radial distribution function for HDA ice at $P$ = 1 atm.
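The full widths at half maximum quoted above can be extracted from a tabulated $g(r)$ by locating the two half-maximum crossings of the intramolecular peak. A minimal sketch follows, with a synthetic Gaussian peak standing in for the simulation data (which are not reproduced here):

```python
import numpy as np

def peak_fwhm(r, g):
    """Full width at half maximum of the dominant peak of a tabulated
    radial distribution function, via linear interpolation of the two
    half-maximum crossings."""
    i_max = np.argmax(g)
    half = g[i_max] / 2.0
    # left crossing: g is increasing up to the maximum
    left = np.interp(half, g[:i_max + 1], r[:i_max + 1])
    # right crossing: reverse so g is increasing for np.interp
    right = np.interp(half, g[i_max:][::-1], r[i_max:][::-1])
    return right - left

# Synthetic check: Gaussian peak of known width centered at 0.98 AA.
r = np.linspace(0.8, 1.2, 4001)
sigma = 0.16 / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM = 0.16 AA
g = np.exp(-(r - 0.98) ** 2 / (2.0 * sigma ** 2))
print(round(peak_fwhm(r, g), 3))  # 0.16
```

Applied to the simulated O--H peak, the same routine yields the 0.05/0.14/0.16 \AA\ sequence quoted above for the classical, D$_2$O, and H$_2$O cases.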
Solid and dashed lines represent results derived from PIMD and classical simulations, respectively, at $T$ = 75 K. The dashed-dotted line was derived from neutron-scattering experiments at 80 K.\cite{bo06} } \label{f7} \end{figure} Quantum effects are not only associated with hydrogen, due to its small mass, but are also observable in the oxygen-oxygen RDF of amorphous ice, as displayed in Fig.~7. In this figure, the solid line represents $g(r)$ derived from PIMD simulations at $T$ = 75 K and atmospheric pressure, and the dashed line corresponds to the RDF derived from classical molecular dynamics simulations under the same conditions of pressure and temperature. The classical result is similar to the oxygen-oxygen RDF derived in Ref.~\onlinecite{se09} for several interatomic potentials. For comparison, we also show in Fig.~7 the O--O RDF of HDA ice derived by Bowron {\em et al.} from neutron scattering experiments\cite{bo06} (dashed-dotted line). As in the case of the O--H RDF, we observe a broadening of the peaks when nuclear quantum motion is considered, as a consequence of the associated atomic delocalization. This is particularly observable for the first peak at about 2.8 \AA, whose height decreases when quantum effects are taken into account. The result derived from PIMD simulations is closer to the experimental data than the classical one, but the peak derived from the quantum simulations appears at $r$ = 2.77 \AA, a distance slightly larger than that corresponding to the maximum of the RDF derived from experiment ($r$ = 2.72 \AA). We note, however, that the position of the peak derived from another neutron diffraction work\cite{fi02} is closer to that obtained in our simulations, but its height is somewhat smaller than that found here.
RDFs of HDA ice have also been derived from x-ray diffraction measurements.\cite{bo86} It is also interesting to analyze the effect of pressure on the shape of the O--O RDF of amorphous ice, as it can give information on the molecular reorganization on the way to amorphous phases of even higher density. To this end, in Fig.~8 we present the O--O RDF at 75 K for different pressures: atmospheric pressure along with $P$ = 1, 3, and 6 GPa. We first observe that the peak at about 2.8 \AA, corresponding to the first coordination shell, moves to shorter distances as the pressure is raised, in line with a decrease in the O--O distance associated with the corresponding volume reduction ($d V / d P < 0$). It is also remarkable that the broad feature appearing in the RDF around 4 \AA\ (at atmospheric pressure) sharpens and moves to smaller distances for increasing pressure, indicating that water molecules in the second coordination shell come closer to those in the first shell. This feature in the O--O RDF coincides with that observed in very-high-density amorphous (VHDA) ice.\cite{fi02b,lo06} Other features of the RDF appearing at larger $r$ also move to shorter distances, as observed in the region between 5 and 6 \AA\ in Fig.~8. As shown above, we do not observe any clear discontinuity in the volume as pressure is increased, so that according to our results VHDA seems to appear as a high-pressure regime of HDA. Although in principle it is not evident that the phase obtained by applying pressure to HDA ice is the same as the VHDA obtained by isobaric heating,\cite{fi02b} our results are in agreement with earlier simulations,\cite{ma05} favoring a continuous transition from HDA to VHDA ice. From an experimental point of view, there are indications\cite{lo06} that structural differences between HDA and VHDA become less prominent as pressure increases.
In any case, there is some evidence against a first-order-like nature of the transition between HDA and VHDA, suggesting instead a continuous character.\cite{lo11} \begin{figure} \vspace{-1.1cm} \hspace{-0.5cm} \includegraphics[width= 9cm]{fig8.ps} \vspace{-0.8cm} \caption{ Oxygen-oxygen radial distribution function for amorphous ice at $T$ = 75 K and several pressures, as derived from PIMD simulations. Different lines represent results for $P$ = 1 atm, and 1, 3, 6 GPa, as indicated in the labels. } \label{f8} \end{figure} \subsection{Kinetic energy} In this section we present the kinetic energy of atomic nuclei in HDA ice, as derived from our PIMD simulations. The kinetic energy, $E_k$, of atomic nuclei depends on their mass and spatial delocalization, so that it can give information on the environment and interatomic interactions seen by the considered nuclei. We note that this does not occur in classical simulations, since in that case each degree of freedom contributes to the kinetic energy an amount that depends only on temperature, i.e., $k_B T / 2$. A typical quantum effect associated with the atomic motion in solids is that the kinetic energy converges at low temperature to a value related to zero-point motion, contrary to the classical result where $E_k$ vanishes in the limit $T \to$ 0 K. Path integral simulations allow us to obtain the kinetic energy of the considered quantum particles.
For a particle of given mass at a given temperature, the larger the spread of the quantum paths, the smaller the kinetic energy, in line with the expectation that a larger quantum delocalization is associated with a reduction in the kinetic energy.\cite{gi88,he11} We have calculated here the kinetic energy $E_k$ by using the so-called virial estimator, which has an associated statistical uncertainty appreciably lower than that of the potential energy of the system.\cite{he82,tu98} \begin{figure} \vspace{-1.1cm} \hspace{-0.5cm} \includegraphics[width= 9cm]{fig9.ps} \vspace{-0.8cm} \caption{ Kinetic energy of hydrogen in ice Ih and amorphous ice as a function of pressure at two temperatures: 75 K (squares) and 250 K (circles). Open and solid symbols correspond to ice Ih and amorphous ice, respectively. Error bars for the crystalline phase are less than the symbol size. Lines are guides to the eye. } \label{f9} \end{figure} In Fig.~9 we display $E_k$ for hydrogen as a function of pressure at two temperatures, 75 and 250 K, as derived from our PIMD simulations of amorphous ice (solid symbols). At each temperature, $E_k$ increases slowly as pressure rises, corresponding to an overall increase of the vibrational frequencies. For comparison, we also present results for hydrogen in ice Ih at the same temperatures (open symbols). In this case, the kinetic energy increases with rising pressure faster than for amorphous ice, in the whole region where ice Ih is found to be mechanically stable. This is a consequence of the larger quantum delocalization of hydrogen in amorphous ice, as compared with ice Ih at the same temperature, or equivalently, of the presence in the amorphous solid of modes with lower vibrational frequency.
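For reference, a generic (centroid-virial) form of such an estimator is sketched below. This is a schematic illustration in reduced units ($k_B = 1$), not the specific implementation used in our simulations; the array shapes and function names are assumptions made for the example.

```python
import numpy as np

K_B = 1.0  # reduced units with k_B = 1

def virial_kinetic_energy(paths, grad_V, T):
    """Centroid-virial estimator of the quantum kinetic energy for one
    path-integral configuration (schematic):
        E_k = (3N/2) k_B T + (1/2P) sum_{j,i} (r_i^j - r_i^c) . dV/dr_i^j

    paths  : array (P, N, 3) of bead positions (P beads, N atoms)
    grad_V : function mapping one (N, 3) bead slice to dV/dr, shape (N, 3)
    T      : temperature
    """
    P, N, _ = paths.shape
    centroid = paths.mean(axis=0)                     # (N, 3) path centroids
    virial = 0.0
    for bead in paths:
        virial += np.sum((bead - centroid) * grad_V(bead))
    return 1.5 * N * K_B * T + virial / (2.0 * P)

# Sanity check: for a free particle (dV/dr = 0) the estimator reduces
# to the classical equipartition value 3NkT/2.
paths = np.random.default_rng(0).normal(size=(16, 2, 3))
ek = virial_kinetic_energy(paths, lambda x: np.zeros_like(x), T=1.0)
print(ek)  # 3.0 for N = 2, T = 1
```

The advantage over the primitive estimator is that the statistical variance of this form does not grow with the number of beads $P$.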
At atmospheric pressure, $E_k$ for HDA ice increases only by about 1.5\% from 75 to 250 K, reflecting the fact that at these temperatures most vibrational modes with large hydrogen contribution (i.e., libration and stretching modes) are nearly in their ground state. Moreover, changing the external pressure modifies the kinetic energy of hydrogen even less. In fact, at 75 K it rises by about 0.15\% in the range from atmospheric pressure to $P$ = 8 GPa. \begin{figure} \vspace{-1.1cm} \hspace{-0.5cm} \includegraphics[width= 9cm]{fig10.ps} \vspace{-0.8cm} \caption{ Kinetic energy of oxygen in ice Ih and amorphous ice as a function of pressure at two temperatures: 75 K (squares) and 250 K (circles). Open and solid symbols correspond to ice Ih and amorphous ice, respectively. Error bars for the crystalline and amorphous solids are less than the symbol size. Lines are guides to the eye. } \label{f10} \end{figure} In Fig.~10 we show the pressure dependence of the kinetic energy of oxygen in ice (HDA and Ih) at the same temperatures as those presented in Fig.~9 for hydrogen. We observe again that the kinetic energy rises with increasing pressure, and that at a given pressure, $E_k$ for oxygen in amorphous ice is smaller than in ice Ih. At 75 K and atmospheric pressure the kinetic energy of oxygen turns out to be about 4 times smaller than that of hydrogen, as could be expected from the larger mass of the former atom. At this temperature, $E_k$ for oxygen rises by about 22\% when a pressure of 8 GPa is applied. This relative change is much larger than that found for hydrogen under the same conditions (0.15\%). For increasing temperature, we also find an important rise in $E_k$ for oxygen, e.g., at $P$ = 1 atm we have a change of 44\% from 75 to 150~K, to be compared with a rise of 1.5\% in the case of hydrogen.
This is indeed a consequence of the much larger mass of oxygen, which contributes mainly to vibrational modes with low frequency, whose excited states are non-negligibly populated at these temperatures. We note that the vertical scales in Figs.~9 and 10 are different, and what seems to be a very large change in the kinetic energy of hydrogen when comparing Ih and HDA ices is, in relative terms, smaller than the change in $E_k$ for oxygen. In fact, at $P$ = 1 GPa and $T$ = 75 K (close to the amorphization pressure of ice Ih), the kinetic energy of oxygen in HDA ice is 167 J/mol lower than that in ice Ih, to be compared with a decrease of 270 J/mol in $E_k$ for hydrogen. In relative terms these differences amount to a decrease of 4.5\% for oxygen vs. 1.9\% for hydrogen. This is in agreement with the fact that the structural changes associated with ice amorphization mainly affect low-frequency vibrational modes (translational modes of the whole water molecule), with a major relative contribution of oxygen to the kinetic energy. The relative difference between kinetic energies in ice Ih and HDA ice decreases as temperature is raised, as observed in Figs.~9 and 10 at $T$ = 250 K, mainly because the nuclear motion becomes ``more classical,'' and therefore $E_k$ is less sensitive to the environment and actual motion of the atomic nuclei under consideration. The kinetic energy of hydrogen and oxygen in liquid water and ice Ih has been studied in detail earlier from PIMD simulations, and compared with data derived from deep inelastic neutron scattering in the case of hydrogen.\cite{ra11} In that paper, a study of the contribution of different vibrational modes to the kinetic energy was presented. For HDA ice we find here that, at a given temperature, $E_k$ increases as pressure is raised for both hydrogen and oxygen.
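The absolute and relative decreases quoted above can be cross-checked against each other: from each pair one can back out $E_k$ in ice Ih for the corresponding nucleus, and their ratio should reproduce the roughly fourfold difference between hydrogen and oxygen kinetic energies. An illustrative check:

```python
# Cross-check of the kinetic-energy differences quoted in the text
# at P = 1 GPa and T = 75 K (Ih vs HDA).
dE_O, rel_O = 167.0, 0.045   # J/mol and fractional decrease, oxygen
dE_H, rel_H = 270.0, 0.019   # J/mol and fractional decrease, hydrogen

Ek_O = dE_O / rel_O          # implied E_k of oxygen in ice Ih, ~3.7 kJ/mol
Ek_H = dE_H / rel_H          # implied E_k of hydrogen in ice Ih, ~14.2 kJ/mol

print(round(Ek_H / Ek_O, 1))  # 3.8
# Consistent with E_k for oxygen being about 4 times smaller than
# for hydrogen, as stated for 75 K in the text.
```

The implied ratio of about 3.8 agrees with the mass-driven factor of ``about 4'' quoted earlier for these temperatures.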
This result is not obvious, since the O--H stretching frequencies are known to decrease for increasing pressure (their mode Gr\"uneisen parameter is negative\cite{ra12,pa12}). However, the overall contribution of the vibrational modes to $E_k$ increases with pressure, since the contribution of modes with positive Gr\"uneisen parameter dominates, as analyzed elsewhere from a quasi-harmonic approximation for crystalline ice phases.\cite{ra12} \subsection{Bulk modulus} The compressibility of ice displays peculiar properties associated with the hydrogen-bond network. For the crystalline phases of ice, and ice Ih in particular, the compressibility is smaller than what one could expect from the large cavities present in its structure, which might be expected to collapse under pressure before the molecules could approach close enough to repel each other. This is in fact not the case, and for ice Ih the H bonds holding the structure are known to be rather stable, as manifested by the relatively high pressure necessary to break down the crystal.\cite{mi84} Here we present results of PIMD simulations for the isothermal bulk modulus of HDA ice at different temperatures and pressures, and compare them with those derived for ice Ih. The isothermal compressibility $\kappa$ of ice, or its inverse the bulk modulus [$B = 1/\kappa = - V ( {\partial P} / {\partial V} )_T$], can be directly derived from PIMD simulations in the isothermal-isobaric ensemble. In this ensemble, $B$ can be obtained from the mean-square fluctuations of the volume, $\sigma_V^2 = \langle V^2 \rangle - \langle V \rangle^2$, by using the expression\cite{la80,he08} \begin{equation} B = \frac{k_B T \langle V \rangle}{\sigma_V^2} \; , \label{bulkm} \end{equation} $k_B$ being Boltzmann's constant.
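This fluctuation formula translates directly into a few lines of analysis code. The sketch below uses synthetic Gaussian volume samples in place of actual simulation output (the cell volume and target modulus are hypothetical) and recovers the prescribed bulk modulus:

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def bulk_modulus(volumes, T):
    """Isothermal bulk modulus from NPT volume fluctuations,
    B = k_B T <V> / (<V^2> - <V>^2)."""
    v = np.asarray(volumes)
    return K_B * T * v.mean() / v.var()

# Synthetic check: Gaussian volume samples with the variance that
# Eq. above assigns to a target modulus of 10 GPa.
rng = np.random.default_rng(1)
T, v_mean = 75.0, 1.0e-26                      # K; hypothetical cell volume, m^3
sigma_v = np.sqrt(K_B * T * v_mean / 1.0e10)   # variance for B = 10 GPa
v = rng.normal(v_mean, sigma_v, 200_000)

print(bulk_modulus(v, T) / 1e9)  # ~10 GPa, within sampling noise
```

In practice the statistical quality of $B$ obtained this way is limited by how well the tails of the volume distribution are sampled, which is why long NPT trajectories are needed.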
This expression has been employed earlier to obtain the bulk modulus of different types of solids from path-integral simulations.\cite{he00c,he08,he11} \begin{figure} \vspace{-1.1cm} \hspace{-0.5cm} \includegraphics[width= 9cm]{fig11.ps} \vspace{-0.8cm} \caption{ Pressure dependence of the bulk modulus of amorphous ice at $T$ = 75 K (solid circles) and 250 K (solid squares), as derived from PIMD simulations. Open circles correspond to ice Ih at 75 K. Error bars for the crystalline phase are on the order of the symbol size. Dashed lines are guides to the eye. The solid line was obtained by numerical differentiation from the pressure-density data displayed in the review by Loerting and Giovambattista,\cite{lo06} adapted from Ref.~\onlinecite{ma04}. } \label{f11} \end{figure} In Fig.~11 we present the bulk modulus of amorphous ice as a function of pressure, as derived from our PIMD simulations at 75 K (solid circles) and 250 K (solid squares). For comparison, we also present results for ice Ih at 75 K (open circles). The bulk modulus of amorphous ice is found to increase linearly as a function of pressure in the range considered here. From linear fits to the data shown for amorphous ice in Fig.~11, we find a slope $\partial B /\partial P$ = 7.1(2) and 7.0(2) at 75 and 250 K, respectively, i.e. both values coincide within the precision of our results. This contrasts clearly with the pressure dependence of the bulk modulus found for ice Ih (open circles in Fig.~11). In fact, for this crystalline phase of water one finds that at $P >$ 0.3~GPa, $B$ decreases as pressure is raised, until eventually reaching the limit of mechanical stability of the phase (spinodal line) and the associated amorphization of the material. Thus, at pressures close to 1 GPa, the bulk modulus of ice Ih is lower than that of the amorphous phase, contrary to the result obtained at lower pressures.
For comparison, we also show in Fig.~11 the pressure dependence of the bulk modulus of HDA ice at 80 K (solid line), derived by numerical differentiation of the pressure-density data given in Ref.~\onlinecite{lo06} (adapted from \onlinecite{ma04}). At atmospheric pressure and 75 K, we find for amorphous ice $B$ = 9.6(4) GPa, to be compared with a value of $B$ = 14.0(3) GPa derived for ice Ih under the same conditions. This reflects an important increase in the compressibility of ice upon amorphization, i.e. a lowering of the bulk modulus of the material by about 30\%. Note that this reduction in the bulk modulus is similar to the relative volume change upon amorphization, as discussed in Sect.~III. From our data in the temperature region between 75 and 250 K, $B$ extrapolates to zero at negative pressures on the order of --0.5 to --1 GPa, which gives the limit of mechanical stability of this amorphous material, where the solid breaks down giving rise to the gas phase. As discussed elsewhere,\cite{sc95,he11b} the possibility of studying a solid in metastable conditions, close to a spinodal line, is limited by the appearance of nucleation events, which cause the breakdown of the solid. For atomistic simulations such as those employed here, the probability of those nucleation events at low temperatures is relatively low, and the metastable range of the solid that can be explored is rather large. In fact, we can reach here conditions near the limit of mechanical stability at negative pressures (limit $B \to 0$). As $T$ increases the probability of nucleation becomes higher, and the accessible pressure range in the simulations is reduced. \section{Comparison with earlier work} Data of earlier simulations of HDA ice have already been presented in the previous section along with the results of our PIMD simulations. Here we discuss and summarize the main similarities and discrepancies between our results and those given in some earlier works.
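The extrapolation of $B$ to its zero at negative pressure amounts to a linear fit. A sketch using only the 75 K values quoted above (slope 7.1, $B \approx 9.6$ GPa near atmospheric pressure) is given below; the pressure grid is hypothetical, and the exact crossing depends on temperature and on the fitted pressure window, which is why a range of spinodal pressures is quoted in the text.

```python
import numpy as np

# Illustrative linear extrapolation of B(P) to the spinodal (B -> 0),
# from the 75 K values quoted in the text for HDA ice.
slope, b0 = 7.1, 9.6                 # dB/dP (dimensionless) and B(0) in GPa
p = np.linspace(0.0, 4.0, 9)         # hypothetical pressure grid (GPa)
b = b0 + slope * p                   # idealized linear data

coef = np.polyfit(p, b, 1)           # recover slope and intercept
p_spinodal = -coef[1] / coef[0]      # pressure where B extrapolates to 0
print(round(p_spinodal, 2))          # about -1.35 GPa for these inputs
```

With real data the fit would of course use the simulated $B(P)$ points, and the 250 K data extrapolate to a less negative crossing.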
Simulations of the amorphization of ice Ih using classical molecular dynamics were carried out by Tse and Klein,\cite{ts87,ts90} who employed the TIP4P intermolecular potential and found ice amorphization at pressures around 1.2--1.3 GPa at temperatures between 80 and 100 K. These values are close to the limit for mechanical stability (spinodal pressure, $P_s$) of ice Ih obtained from PIMD simulations, i.e., $P_s$ = 1.12 and 1.26 GPa at 50 and 100 K, respectively.\cite{he11b} Seidl {\em et al.}\cite{se09} carried out detailed classical molecular dynamics simulations of HDA ice in the isothermal-isobaric ensemble using several force fields. In particular, they studied the glass transition at a pressure of 0.3 GPa, and found some indications of a glass-to-liquid transition at a temperature around 200 K, which could suggest that HDA ice is a proxy for an ultraviscous high-density liquid. From our present results we cannot find any evidence of such a transition, and a detailed study of this point with PIMD simulations would at present require enormous computational resources. In connection with our work, Tse {\em et al.}\cite{ts05} studied amorphous ice by classical molecular dynamics simulations with the SPC/E potential. They presented O--O RDFs very similar to those obtained in our classical simulations (shown in Fig.~7). These authors found a continuous and smooth increase in the molar volume of HDA ice as temperature is raised, without any sharp change indicating a transformation from high-density to low-density amorphous phases. This is in line with the results of our PIMD simulations shown in Fig.~3. In particular, they found at 100 K a molar volume of 15.4 cm$^3$/mol, close to our classical result (triangles in Fig.~3) and smaller than the value derived from our PIMD simulations with the q-TIP4P/F potential ($v$ = 16.2 cm$^3$/mol).
At higher temperatures the results by Tse {\em et al.} become closer to our PIMD results, and are therefore somewhat higher than our classical data. This can be a consequence of the differences between the force fields employed in the two works. Earlier path-integral simulations of amorphous ice are scarce. Gai {\em et al.}\cite{ga96} studied structural properties of HDA ice at 77 K by path-integral Monte Carlo simulations, using the SPC/E potential model, which treats the water molecules as rigid bodies. Their simulations were carried out in the constant-volume, canonical ($NVT$) ensemble, and dealt with both H$_2$O and D$_2$O ice. The O--H RDF obtained by these authors for D$_2$O amorphous ice is similar to that obtained here and presented in Fig.~5. In particular, they found well-defined peaks for the first and second coordination shells at about 1.8 and 3.3 \AA, respectively. However, for H$_2$O ice they found an RDF very different from that found here, in which the peaks at 1.8 \AA\ (intermolecular O--H bridges) and 3.3 \AA\ were missing, and had been replaced by a broad feature extending from about 2.5 to 4 \AA. These authors argued that the hydrogen-bonding network that is present for D$_2$O either disappears or is totally mixed with the second-nearest-neighbor shell. Apart from the constant volume employed by Gai {\em et al.}\cite{ga96} in their simulations vs the constant pressure employed in ours, the main difference between the two kinds of calculations seems to be the interatomic potential: rigid molecules in Ref.~\onlinecite{ga96} vs flexible molecules in our calculations. We do not find, however, a direct explanation of why the use of rigid molecules should cause such a strong difference between the structures of H$_2$O and D$_2$O amorphous ice, as suggested by the results of Gai {\em et al.} \section{Summary} PIMD simulations provide us with a suitable tool to analyze effects of nuclear quantum motion in amorphous ice at finite temperatures.
Here, we have presented results of PIMD simulations of HDA ice in the isothermal-isobaric ensemble at different pressures and temperatures. These simulations have allowed us to obtain structural and thermodynamic properties of this metastable material over a large region of pressures, including tensile stresses ($P < 0$). The HDA ice studied here was obtained computationally by pressure-induced amorphization of cubic and hexagonal ice (Ih and Ic). We observe an important reduction of the volume upon amorphization at a pressure of about 1.2 GPa. The resulting ice at atmospheric pressure and 100 K is 19\% denser than its crystalline precursor, but it is found to be softer, in the sense that its compressibility and thermal expansion coefficient are clearly larger. At $P$ = 1 atm and $T$ = 75 K, the compressibility $\kappa$ of HDA ice is about 50\% larger than that of ice Ih, and the thermal expansion coefficient $\alpha$ for the amorphous solid is found to be $9 \times 10^{-4}$ K$^{-1}$, whereas it is negative for ice Ih at 75 K. We have assessed the importance of quantum effects by comparing results obtained from PIMD simulations with those obtained from classical simulations. Structural variables are found to change when nuclear quantum motion is considered, especially at low temperatures. Thus, the volume, interatomic distances, and radial distribution functions undergo appreciable modifications in the range of temperature and pressure considered here. At 50 K the molar volume of HDA ice is found to rise by 0.85 cm$^3$/mol (6\% of the classical value), and the intramolecular O--H distance increases by 1.4\% due to quantum motion. The zero-point vibrational motion of atomic nuclei is large enough to also change appreciably structural observables of the amorphous solid, such as the radial distribution function at low temperatures.
In fact, from PIMD simulations we observe a broadening of the peaks in the RDFs, as compared with classical molecular dynamics simulations. For different isotopes we also observe a change in the RDFs of HDA ice. In particular, the width of the peaks in the O--H RDF is found to depend on the hydrogen isotope under consideration, but the general features of this RDF are basically the same for both H$_2$O and D$_2$O, contrary to earlier path-integral Monte Carlo results.\cite{ga96} At a given temperature, the kinetic energy of both hydrogen and oxygen is found to increase for rising pressure. This increase is, however, slower than that obtained in ice Ih. Such an increase is associated with the rise in vibrational zero-point energy and an overall increase in the vibrational frequencies of the solid. However, the intramolecular O--H distance is found to increase as pressure is raised, with a decrease in the frequency of the corresponding stretching modes. In HDA ice the bulk modulus is found to increase linearly as a function of pressure over the whole region studied here. At 75 K we find $\partial B / \partial P$ = 7.1. Although the quantitative values found by using the q-TIP4P/F potential may change when employing other interatomic potentials, the main conclusions obtained here are unlikely to depend on the potential employed in the simulations. Quantum simulations similar to those presented here can give information on the atomic delocalization and anharmonic effects in other kinds of amorphous ice. An extension of this work could be to study amorphous ice at still higher pressures, where very-high-density amorphous ice could be characterized, including nuclear quantum effects. \begin{acknowledgments} This work was supported by Ministerio de Ciencia e Innovaci\'on (Spain) through Grant FIS2009-12721-C04-04 and by Comunidad Aut\'onoma de Madrid through Program MODELICO-CM/S2009ESP-1691. \end{acknowledgments}
\section{Introduction} \label{sec:intro} Electric fields are widely used to manipulate particles and fluids. For example, separation of emulsified water from crude oil in the petroleum refining process is achieved by the application of electric fields, which facilitate drop coalescence \citep{Waterman,Eow:2002}. An important question pertains to the influence of surface-active substances (surfactants, compounds that lower the surface tension between liquids), which are naturally present in the crude oil (asphaltenes, resins, acids), on the droplet attraction and coalescence process. The effect of surfactants and electric fields on drop dynamics has been largely studied in two dimensions, whilst in three dimensions the literature is limited. The effect of surfactant (no electric field) has been studied using simulations based on the boundary integral method \citep{Li-Pozrikidis:1997,Pozrikidis:2004,Stone-Leal:1990,Yon-Pozrikidis:1998,Eggleton-Tsai-Stebe:2001,Bazhlekov:2006,Feigl:2007, Vlahovska:2005,Rother:2006}, the diffuse-interface method \citep{Teigen:2011}, a front-tracking method \citep{Muradoglu-Tryggvason:2008} or a conserving volume-of-fluid method \citep{James-Lowengrub:2004}. The effect of electric fields on clean drops (no surfactant) has been studied theoretically, numerically and experimentally, both for single and multiple drops \citep{Lac-Homsy, Karyappa-Thaokar:2014, Lanauze:2015, Ha:2000a, DavidS:2017b, Fernandez:2008a, Casas:2019, Baygents:1998, Lin:2012, Mhatre:2015, Salipante-Vlahovska:2010, sozou_1975, Zabarankin}, and we refer the interested reader to our recent work \citep{Chiara:2020} for a more extensive bibliography.
In that paper we presented a detailed analysis of the three-dimensional interaction of a drop pair in a uniform electric field; we showed that the pair dynamics are not simple attraction or repulsion: depending on the angle between the center-to-center line and the undisturbed electric field, the relative motion of the two particles can be quite complex. For example, they can attract in the direction of the field and move towards each other, pair up, and then separate in the transverse direction. The combined effect of surfactants and electric fields is a virtually unexplored problem in terms of numerical experiments, especially when considering multiple drops. This is due to the numerous computational challenges associated with the complex moving geometries and the multi-physics nature of the problem. Teigen and Munkejord used a level-set method in an axisymmetric, cylindrical coordinate system to investigate the interaction of a single surfactant-covered drop with a uniform electric field and, more recently, Poddar et al. studied theoretically the electrorheology of a dilute emulsion of surfactant-covered drops \citep{poddar_mandal_bandopadhyay_chakraborty_2019}. Other theoretical studies developed asymptotic analyses \citep{Ha:1995,Herve:2013,PhysRevE.99.063104} to investigate the effects of surfactant transport on the deformation of a single viscous drop under a DC electric field. In this note we build upon our previous work \citep{Chiara:2019, Chiara:2020} and explore the effect of an insoluble surfactant on the electrohydrodynamics of a drop pair.
\section{Problem formulation} \begin{figure} \centerline{\includegraphics[width=.70\linewidth]{fig1.pdf}} \begin{picture}(0,0)(0,0) \put(-130,100){\rotatebox{90}{{\Large$\rightarrow$}}${\bf E}^\infty$} \put(0,20){$x$} \put(-100,25){$y$} \put(-100,150){$z$} \put(-25,70){$1$} \put(15,180){$2$} \end{picture} \caption{\footnotesize{ Two initially spherical identical drops with radius $a$, permittivity ${\varepsilon}_{\mathrm{d}}$ and conductivity $\sigma_{\mathrm{d}}$ suspended in a fluid with permittivity ${\varepsilon}_{\mathrm{s}}$ and conductivity $\sigma_{\mathrm{s}}$ and subjected to a uniform DC electric field ${\bf E}^\infty=E_0{\bf \hat z}$. {The angle between the line-of-centers vector and the field direction is $\Theta=\arccos({\bf \hat z}\cdot{\bf \hat d})$.}}} \label{fig1} \end{figure} Let us consider two identical neutrally-buoyant and charge-free drops with radius $a$, viscosity $\eta_{\mathrm{d}}$, conductivity $\sigma_{\mathrm{d}}$, and permittivity ${\varepsilon}_{\mathrm{d}}$ suspended in a fluid with viscosity $\eta_{\mathrm{s}}$, conductivity $\sigma_{\mathrm{s}}$, and permittivity ${\varepsilon}_{\mathrm{s}}$. The mismatch in drop and suspending fluid properties is characterized by the conductivity, permittivity, and viscosity ratios \begin{equation} \mbox{\it R}=\frac{\sigma_{\mathrm{d}}}{\sigma_{\mathrm{s}}}\,,\quad \mbox{\it S}=\frac{{\varepsilon}_{\mathrm{d}}}{{\varepsilon}_{\mathrm{s}}}\,,\quad \lambda=\frac{\eta_{\mathrm{d}}}{\eta_{\mathrm{s}}}\,. \end{equation} A monolayer of insoluble surfactant is adsorbed on the drop interfaces. At rest, the surfactant distribution is uniform and the equilibrium surfactant concentration is $\Gamma_{\mathrm{eq}}$; the corresponding interfacial tension is $\gamma_{\mathrm{eq}}$. The distance between the drops' centroids is $d$ and the angle between the drops' line-of-centers with the applied field direction is $\Theta$.
The unit separation vector between the drops is defined by the difference between the position vectors of the drops' centers of mass, ${\bf \hat d}=({\bf x}_2^c-{\bf x}^c_1)/d$. The unit vector normal to the drops' line-of-centers, i.e., orthogonal to ${\bf \hat d}$, is ${\bf{\hat{t}}}$. The problem geometry is sketched in Figure \ref{fig1}. We adopt the leaky dielectric model~\citep{Melcher-Taylor:1969}, which assumes creeping flow and charge-free bulk fluids acting as Ohmic conductors. The assumption of charge-free fluids decouples the electric and hydrodynamic fields in the bulk. Accordingly, \begin{equation} \label{stress_bulk} \eta \nabla^2{\bs u}-\nabla p=0\,,\quad \nabla\cdot {\bf E}=0\,, \end{equation} where ${\bs u}$ and $p$ are the fluid velocity and pressure, and ${\bf E}$ is the electric field. Far away from the drops, ${\bf E}^{\mathrm{s}}\rightarrow {\bf E}^\infty=E_0{\bf \hat z}$ and ${\bs u}\rightarrow 0$. The coupling of the electric field and the fluid flow occurs at the drop interfaces $\cal{D}$, where the charges brought by conduction accumulate. Gauss' law dictates that while the electric field in the electroneutral bulk fluids is solenoidal, at the drop interface the electric displacement field, ${\varepsilon} {\bf E}$, is discontinuous and its jump corresponds to the surface charge density \begin{equation} {\varepsilon}\left(E_n^{\mathrm{s}}-\mbox{\it S} E_n^{\mathrm{d}}\right)=q\,, \quad {\bf x}\in \cal{D} \end{equation} where $E_n={\bf E}\cdot{\bf n}$, and ${\bf n}$ is the outward pointing normal vector to the drop interface. The surface charge density adjusts to satisfy the current balance \begin{equation} \label{currencond1} \frac{\partial q}{\partial t}+\nabla_s\cdot\left({\bs u} q\right) =\sigma_{\mathrm{s}} \left(E_n^{\mathrm{s}}-\mbox{\it R} E_n^{\mathrm{d}}\right)\,, \quad {\bf x}\in \cal{D}\,. 
\end{equation} {In this study, we neglect charge relaxation and convection, thereby reducing the charge conservation equation to continuity of the electrical current across the interface, as originally proposed by \cite{Taylor:1966}} \begin{equation} \label{currencond} E_n^{\mathrm{s}}=\mbox{\it R} E_n^{\mathrm{d}}\,. \end{equation} This simplification implies ${\varepsilon}^2_{\mathrm{s}} E_0^2/(\eta_{\mathrm{s}}\sigma_{\mathrm{s}})\ll1$. This condition is satisfied for the typical fluids used in experiments, such as castor oil (conductivity $\sim 10^{-11}$ S/m, viscosity $\sim 1$ Pa.s), and low field strengths $E_0\sim 10^4$ V/m. The electric field acting on the induced surface charge gives rise to electric shear stress at the interface. The tangential stress balance yields \begin{equation} \label{stress balanceT} \left({\bf I}-{\bf n}\bn\right)\cdot \left( {\bf T}^{\mathrm{s}}- {\bf T}^{\mathrm{d}}\right)\cdot{\bf n}+q{\bf E}_t=-{\bm \nabla}_s\gamma \,, \quad {\bf x}\in \cal{D}\,, \end{equation} where $T_{ij}=-p\delta_{ij}+\eta (\partial_j u_i+\partial_i u_j)$ is the hydrodynamic stress and $\delta_{ij}$ is the Kronecker delta function. The electric traction is calculated from the Maxwell stress tensor $T^{\mathrm{el}}_{ij}={\varepsilon} \left(E_iE_j-E_kE_k\delta_{ij}/2\right)$. $\gamma$ is the interfacial tension, which depends on the local surfactant concentration $\Gamma$. ${\bf E}_t={\bf E}-E_n{\bf n}$ is the tangential component of the electric field, which is continuous across the interface, and ${\bf I}$ is the idemfactor. The normal stress balance is \begin{equation} \label{stress balance} {\bf n} \cdot\left( {\bf T}^{\mathrm{s}}- {\bf T}^{\mathrm{d}}\right)\cdot{\bf n}+\frac{1}{2}\left(\left(E_n^{{\mathrm{s}}}\right)^2-\mbox{\it S} \left(E_n^{{\mathrm{d}}}\right)^2-(1-\mbox{\it S})E_t^2\right)=\gamma\,\nabla_s\cdot {\bf n} \,, \quad {\bf x}\in \cal{D}\,. \end{equation} 
The evolution of the distribution of an insoluble, diffusing surfactant is governed by a time-dependent convection--diffusion equation \citep{Stone:1990, Wong-Maldarelli} \begin{equation} \label{eq:surfactant evolution} \frac{\partial \Gamma}{\partial t}+{\bm \nabla}_{\mathrm{s}}\cdot \left({\bs u}_{\mathrm{s}}\Gamma\right)+\Gamma \left({\bs u}\cdot{\bf n}\right)\nabla_{\mathrm{s}}\cdot {\bf n}-D\nabla^2_s\Gamma=0\,,\quad {\bf x}\in \cal{D}\,, \end{equation} where ${\bm \nabla}_{\mathrm{s}}$ is the surface gradient operator, ${\bm \nabla}_{\mathrm{s}}=\left({\bf I}-{\bf n}\bn\right)\cdot{\bm \nabla} $. We adopt a linear equation of state for the interfacial tension \begin{equation} \label{surfactant equation of state:3} \gamma(\Gamma)=\gamma_{\mathrm{eq}}-\left.\frac{\partial \gamma}{\partial \Gamma}\right|_{\mathrm{eq}}\,\left(\Gamma-\Gamma_{\mathrm{eq}}\right). \end{equation} Henceforth, all variables are nondimensionalized using the radius of the undeformed drops $a$, the undisturbed field strength $E_0$, a characteristic applied stress $\tau_c={\varepsilon}_{\mathrm{s}} E_0^2$, and the properties of the suspending fluid. Accordingly, the time scale is $t_c=\eta_{\mathrm{s}}/\tau_c$ and the velocity scale is $u_c=a \tau_c/\eta_{\mathrm{s}}$. The surfactant concentration is normalized by $\Gamma_{\mathrm{eq}}$ and the interfacial tension by $\gamma_{\mathrm{eq}}$. The ratio of the magnitude of the electric stresses and surface tension defines the electric capillary number; the relative strength of the distorting viscous and restoring Marangoni stresses is reflected by the Marangoni number; and the importance of surfactant diffusion is given by the P\'eclet number \begin{equation} \mbox{\it Ca}=\frac{{\varepsilon}_{\mathrm{s}} E_0^2 a}{\gamma_{\mathrm{eq}}}\,,\quad\mbox{\it Ma\,}^{-1}=\frac{{\varepsilon}_{\mathrm{s}} E_0^2 a}{\Delta\gamma}\,,\quad \mbox{\it Pe}=\frac{{\varepsilon}_{\mathrm{s}} E_0^2 a^2}{\eta_{\mathrm{s}} D}\,. 
\end{equation} The characteristic magnitude of the surface-tension variations that result from perturbations of the local surfactant concentration $\Gamma$ about the equilibrium value $\Gamma_{\rm eq}$ is \begin{equation*} \Delta\gamma=-\Gamma_{\rm eq}\left( \frac{\partial\gamma}{\partial \Gamma} \right)_{\Gamma = \Gamma_{\rm eq}}\,. \end{equation*} It is convenient to define the elasticity number, which is independent of the externally applied stresses \begin{equation} \label{elasticity} \mbox{\it E}=\frac{\gamma_0-\gamma_{\mathrm{eq}}}{\gamma_{\mathrm{eq}}}=\mbox{\it Ca}\mbox{\it Ma\,}\,. \end{equation} \section{Numerical method} We utilize the boundary integral method to solve for the flow and electric fields. Details of our three-dimensional formulation can be found in \citep{Chiara:2019}. In brief, the electric field is computed following \citep{Lac-Homsy,Baygents:1998}: \begin{equation} \label{eq:BIE01} {\bf E}^\infty+\sum_{j=1}^2 \int_{{\cal{D}}_j} \frac{\hat{{\bf x}}}{4\pi r^3} {\left({\bf E}^{\mathrm{s}}-{\bf E}^{\mathrm{d}}\right)\cdot{\bf n}}dS({\bf y})= \begin{cases} {\bf E}^{\mathrm{d}}({\bf x})&\mbox{if } {\bf x} \mbox{ inside } \cal{D}, \\ \frac{1}{2} \left({\bf E}^{\mathrm{d}}({\bf x})+{\bf E}^{\mathrm{s}}({\bf x})\right) &\mbox{if } {\bf x} \in\cal{D}, \\ {\bf E}^{\mathrm{s}}({\bf x})&\mbox{if } {\bf x} \mbox{ outside } \cal{D}, \\ \end{cases} \end{equation} where ${\bf \hat x}={\bf x}-{\bf y}$ and $r=|{\bf \hat x}|$. The normal and tangential components of the electric field are calculated from the above equation \begin{equation} \label{eq:E_n} \begin{split} E_n({\bf x})&=\frac{2\mbox{\it R}}{\mbox{\it R}+1}{\bf E}^\infty \cdot {\bf n}+\frac{\mbox{\it R}-1}{\mbox{\it R}+1} \sum_{j=1}^2 {\bf n}({\bf x})\cdot\int_{{\cal{D}}_j} \frac{{\bf \hat x} }{2\pi r^3} E_n({\bf y})dS({\bf y})\,,\\ {\bf E}_t({\bf x})&=\frac{{\bf E}^{\mathrm{s}}+{\bf E}^{\mathrm{d}}}{2}-\frac{1+\mbox{\it R}}{2\mbox{\it R}}E_n {\bf n}\,. 
\end{split} \end{equation} For the flow field, we have developed the method for fluids of arbitrary viscosity, but for the sake of brevity here we list the equation in the case of equiviscous drops and suspending fluids. The velocity is given by \begin{equation} \label{eq:main_eq} 2{\bs u}({\bf x})=-\sum_{j=1}^2 \left( \frac{1}{4\pi}\int_{{\cal{D}}_j} \left(\frac{{\bf f}({\bf y})}{\mbox{\it Ca}}-{\bf f}^E({\bf y})\right)\cdot \left(\frac{{\bf I}}{r}+\frac{{\bf \hat x}\xhat}{r^3} \right)dS({\bf y})\right)\,, \end{equation} where ${\bf f}$ and ${\bf f}^E$ are the interfacial stresses due to surface tension and electric field \begin{equation} \label{eq:interfacial_force} {\bf f}=\gamma({\bf x}) {\bf n}\nabla \cdot {\bf n} -{\bm \nabla}_{\mathrm{s}} \gamma\,, \end{equation} \begin{equation} \label{eq:el_force} {\bf f}^E=\left({\bf E}^{\mathrm{s}}\cdot {\bf n}\right){\bf E}^{\mathrm{s}}-\frac{1}{2} \left({\bf E}^{\mathrm{s}}\cdot{\bf E}^{\mathrm{s}}\right){\bf n}-\mbox{\it S}\left(\left({\bf E}^{\mathrm{d}}\cdot {\bf n}\right){\bf E}^{\mathrm{d}}-\frac{1}{2} \left({\bf E}^{\mathrm{d}}\cdot{\bf E}^{\mathrm{d}}\right){\bf n}\right)\,. \end{equation} For a clean drop, the surface tension coefficient $\gamma({\bf x})$ will be constant, and the second term in (\ref{eq:interfacial_force}), the so-called Marangoni force, will vanish.\\ Drop velocity and centroid are computed from the volume averages \begin{equation} {\bf U}_j=\frac{1}{V}\int_{V_j}{\bs u} dV=\frac{1}{V}\int_{{\cal{D}}_j}{\bf n}\cdot\left({\bs u}{\bf x}\right) dS\,,\quad {\bf x}^c_j=\frac{1}{V}\int_{V_j}{\bf x} dV=\frac{1}{2V}\int_{{\cal{D}}_j}{\bf n}\left({\bf x}\cdot {\bf x}\right) dS\,. \end{equation} To solve the system of equations \refeq{eq:E_n}, \refeq{eq:main_eq}, \refeq{eq:surfactant evolution} we use the Galerkin formulation based on a spherical harmonics representation presented in \cite{Chiara:2019}. 
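To make the interfacial forcing concrete, the electric traction jump \refeq{eq:el_force} can be evaluated pointwise from the field values on the two sides of the interface and the local normal. The following Python sketch is illustrative only (it is not part of our boundary-integral code, and the field values used in the example are arbitrary):

```python
def maxwell_traction(Es, Ed, n, S):
    """Electric traction jump across the interface, cf. eq. (f^E):
    f^E = (Es.n)Es - (Es.Es)n/2 - S*[(Ed.n)Ed - (Ed.Ed)n/2].
    Es, Ed, n are 3-component sequences; n is the unit outward normal."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))

    def one_side(E):
        En, EE = dot(E, n), dot(E, E)
        # Maxwell traction (E.n)E - (E.E)n/2 for one side of the interface
        return [En * Ei - 0.5 * EE * ni for Ei, ni in zip(E, n)]

    fs, fd = one_side(Es), one_side(Ed)
    return [a - S * b for a, b in zip(fs, fd)]
```

For a purely normal field the traction reduces to $\tfrac12\left((E_n^{\mathrm{s}})^2-\mbox{\it S}(E_n^{\mathrm{d}})^2\right){\bf n}$, consistent with the normal stress balance.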
In the current study, we update the time scheme to the adaptive fourth-order Runge-Kutta scheme introduced in \cite{kennedy2003}. This choice allows us to treat the convective term that appears in the surfactant evolution equation \refeq{eq:surfactant evolution} explicitly, and the diffusive term implicitly. To make the implicit part of the solver efficient also for large diffusion coefficients (i.e., small P\'eclet numbers), the preconditioner designed in \citep{pallson} proves essential to reduce the number of iterations required for convergence. All variables (position vector, velocities, electric field, surfactant concentration, etc.) are expanded in spherical harmonics, which provides an accurate representation even for relatively low expansion orders. To make sure that all the geometrical quantities of interest (e.g., the mean curvature) are computed with high accuracy as well, we use the adaptive upsampling procedure proposed by \cite{rahimian2015}. A specialized quadrature method for the singular and nearly singular integrals that appear in the formulation and a reparametrization procedure that ensures a high-quality representation of the drops also under deformation guarantee the spectral accuracy of the method \citep{Sorgentone:2018}. Our numerical method and the asymptotic theory for clean drops were presented and validated in \citep{Chiara:2020}. Here we extend the small-deformation theory and the numerical method to include the effect of the insoluble surfactant.\\ \section{Theory: Far-field interactions} \label{theory} We first analyze the electrostatic interaction of two widely separated spherical drops. In this case, the drops can be approximated by point-dipoles. 
The disturbance field ${\bf E}_1$ of the drop dipole ${\bf P}_1$ induces a dielectrophoretic (DEP) force on the dipole ${\bf P}_2$ located at ${\bf x}^c_2=d{\bf \hat d}$, given by ${\bf F}(d)=\left({\bf P}_2\cdot \nabla {\bf E}_1\right)|_{r=d}$. The drop velocity under the action of this force can be estimated from Stokes' law, ${\bf U}={\bf F} /\zeta$, where $\zeta$ is the friction coefficient. For a surfactant-covered drop, $\zeta=6\pi (3\lambda+2+\chi)/(3(\lambda+1)+\chi)$, where $\chi=\mbox{\it Pe} \mbox{\it Ma\,}$. Thus, \begin{equation} \label{DEPF} {\bf U}_2^{\mathrm{dep}}=2 \frac{\beta_D}{d^4}\left(\frac{\chi+3(1+{\lambda})}{\chi+2+3{\lambda}}\right)\left[\left(1-3\cos^2\Theta\right){\bf \hat d}-\sin\left(2\Theta\right){\bf{\hat{t}}}\right]\,,\quad \beta_D= \left(\frac{\mbox{\it R}-1}{\mbox{\it R}+2}\right)^2\,. \end{equation} The velocity reduces to the result for clean drops if $\chi=0$ \cite{Chiara:2020}, and to that for solid spheres if $\chi\rightarrow\infty$. \begin{figure}[h] \centering \includegraphics[width=0.5\linewidth]{fig4.pdf} \caption{\footnotesize{Phase diagram of drop deformation and alignment with the field for viscosity ratio ${\lambda}=1$ and different values of the parameter $\chi=\mbox{\it Pe} \mbox{\it Ma\,}$. The solid lines correspond to $\Phi_s({\lambda}, \mbox{\it R}, \mbox{\it S}, \chi)=0$ given by \refeq{eq:Phi}; in the parameter space above them, the line of centers of the two drops rotates away from the applied field direction ($\Phi_s<0$). The dashed lines correspond to the modified Taylor discriminating function \refeq{FT}; in the parameter space above it, drop deformation is oblate and below it prolate. Above the dot-dashed magenta line $\mbox{\it S}=\mbox{\it R}$, the surface flow is pole-to-equator ($\beta<0$), while below this line the surface flow is equator-to-pole ($\beta>0$).}} \label{figT} \end{figure} In addition to the dipole-dipole interaction, drops interact hydrodynamically. 
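As a quick numerical illustration of \refeq{DEPF} (a Python sketch, not part of our production code; the separation and parameter values below are arbitrary), the DEP-induced velocity components along ${\bf \hat d}$ and ${\bf{\hat{t}}}$ can be evaluated as:

```python
import math

def dep_velocity(d, Theta, R, lam, chi):
    """DEP-induced drop velocity, eq. (DEPF), in the paper's
    dimensionless units; returns (radial, tangential) components."""
    beta_D = ((R - 1.0) / (R + 2.0)) ** 2
    # surfactant-modified mobility factor (chi = Pe*Ma)
    mobility = (chi + 3.0 * (1.0 + lam)) / (chi + 2.0 + 3.0 * lam)
    pref = 2.0 * beta_D / d ** 4 * mobility
    u_d = pref * (1.0 - 3.0 * math.cos(Theta) ** 2)   # along d-hat
    u_t = -pref * math.sin(2.0 * Theta)               # along t-hat
    return u_d, u_t
```

For a pair aligned with the field ($\Theta=0$) the radial component is negative, i.e., DEP attraction, and it vanishes identically for $\mbox{\it R}=1$, when the drop dipole moment relative to the suspending fluid disappears.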
Assuming a spherical drop, the electric shear drives a flow, which is a combination of a stresslet and a quadrupole \cite{Taylor:1966} \begin{equation} \label{ehdU} {\bs u}= \frac{\beta}{ r^2}\left(-1+3 \cos^2\theta\right){\bf \hat r}-\frac{\beta}{r^4}\left(\left(-1+3 \cos^2\theta\right){\bf \hat r}+\sin (2\theta){\bm \hat \theta}\right)\,. \end{equation} The strength of the stresslet is \begin{equation} \beta=\beta_T-\frac{3\mbox{\it Ma\,}}{5(1+{\lambda})}g\,,\quad \beta_T=\frac{9}{10}\frac{\mbox{\it R}-\mbox{\it S}}{\left(1+\lambda\right)\left(\mbox{\it R}+2\right)^2}\,, \end{equation} where $g$ is a parameter describing the surfactant redistribution, $\Gamma=1+g\left(-1+3\cos^2\theta\right)$. The surfactant weakens the EHD flow, because the Marangoni stresses due to the nonuniform surfactant concentration oppose the shearing electric traction. At steady state, the surfactant distribution at leading order is given by the balance of surfactant convection by the electrohydrodynamic (EHD) flow and surfactant diffusion, $\nabla_s\cdot{\bs u}=\mbox{\it Pe}^{-1} \nabla^2_s\Gamma$, which leads to \begin{equation} g=\mbox{\it Pe}\frac{5(1+{\lambda})}{3\left(5(1+{\lambda})+\chi\right)}\beta_T\,, \end{equation} and thus \begin{equation} \beta=\frac{9\left(\mbox{\it R}-\mbox{\it S}\right)}{2\left(\mbox{\it R}+2\right)^2}\frac{1}{ 5(1+{\lambda})+\chi}\,,\quad \chi=\mbox{\it Pe} \mbox{\it Ma\,}\,. \end{equation} The parameter $\chi$ characterizes the magnitude of the surfactant effect on the EHD flow. In the limit $\chi=0$ the result reduces to the clean drop solution. In the case of nondiffusing surfactant, $\mbox{\it Pe}\rightarrow\infty$ ($\chi\rightarrow\infty$), the surfactant completely immobilizes the interface and suppresses the EHD flow. In this case, the theory predicts that the drops will interact only electrostatically. Moreover, if $\mbox{\it R}=1$ even the DEP interaction vanishes. 
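Eliminating $g$ between the two expressions above gives the compact form $\beta = \beta_T\, 5(1+\lambda)/\left(5(1+\lambda)+\chi\right)$, which the following Python sketch (illustrative only; parameter values are arbitrary) evaluates:

```python
def stresslet_strength(R, S, lam, chi):
    """EHD stresslet strength with surfactant:
    beta = beta_T * 5(1+lam) / (5(1+lam) + chi),
    obtained by eliminating g between the expressions for beta and g."""
    beta_T = 0.9 * (R - S) / ((1.0 + lam) * (R + 2.0) ** 2)  # Taylor value
    return beta_T * 5.0 * (1.0 + lam) / (5.0 * (1.0 + lam) + chi)
```

The clean-drop Taylor value $\beta_T$ is recovered at $\chi=0$, the strength decays monotonically with $\chi$, and the sign is set by $\mbox{\it R}-\mbox{\it S}$, consistent with the pole-to-equator/equator-to-pole criterion in Figure \ref{figT}.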
Thus a pair of spherical droplets covered with insoluble, nondiffusing surfactant and conductivity ratio $\mbox{\it R}=1$ will not interact in a uniform electric field. The drop translational velocity due to a neighboring drop is found from Fax\'en's law { \citep{Kim-Karrila:1991, pak_feng_stone_2014} }\begin{equation} {{\bf U}}^{\mathrm{ehd}}_{2} = \left(1 + \frac{\lambda}{2(3\lambda+2)}\nabla^2\right){\bs u}|_{{\bf x}=d{\bf \hat d}}\,. \end{equation} Inserting \refeq{ehdU} in the above equation leads to \begin{equation} \label{U2ehd} {{\bf U}}^{\mathrm{ehd}}_{2} = \beta\left(\frac{1}{ d^2}-\frac{2}{d^4}\left(\frac{1+3{\lambda}}{2+3{\lambda}}\right)\right)\left(-1+3 \cos^2\Theta\right){\bf \hat d}-\frac{2\beta}{d^4}\left(\frac{1+3{\lambda}}{2+3{\lambda}}\right)\sin(2\Theta){\bf{\hat{t}}}+O(d^{-5})\,. \end{equation} Combining the electrohydrodynamic and the dielectrophoretic velocities yields \begin{equation} \label{U2} {\bf U}_2=\frac{\beta}{ d^2}\left(-1+3 \cos^2\Theta\right){\bf \hat d}-\Phi_s\left({\lambda},\mbox{\it R}, \mbox{\it S}, \chi\right)\frac{2}{d^4}\left(\left(-1+3 \cos^2\Theta\right){\bf \hat d}+\sin(2\Theta){\bf{\hat{t}}}\right)\,, \end{equation} where \begin{equation} \Phi_s=\frac{1+3{\lambda}}{2+3{\lambda}}\beta+\beta_D\frac{3(1+{\lambda})+\chi}{2+3{\lambda}+\chi}\,. \label{eq:Phi} \end{equation} The discriminant ${\Phi_s}$ quantifies the drop pair alignment with the field and the interplay of EHD and DEP interactions in drop attraction or repulsion. Drops with $\Phi_s >0$ move to align their line-of-centers with the applied electric field, since $\dot\Theta={\bf U}_2\cdot {\bf{\hat{t}}}\sim -\Phi_s$. If $\Phi_s < 0$ (which occurs only for $\mbox{\it R}/\mbox{\it S}<1$ drops), the line of centers between the drops rotates towards a perpendicular orientation with respect to the applied electric field. The presence of surfactant reduces the parameter range where misalignment is predicted. 
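The sign of the discriminant \refeq{eq:Phi} can be checked numerically. The Python sketch below (illustrative only; it uses the compact form $\beta=\beta_T\,5(1+\lambda)/(5(1+\lambda)+\chi)$ derived above) shows, for instance, that $\Phi_s$ changes sign for $\mbox{\it R}=0.1$, $\mbox{\it S}=5$, ${\lambda}=1$ as $\chi$ grows, so surfactant can convert a misaligning pair into an aligning one:

```python
def alignment_discriminant(R, S, lam, chi):
    """Phi_s from eq. (Phi): positive -> the pair rotates toward
    alignment with the applied field, negative -> misalignment."""
    beta_D = ((R - 1.0) / (R + 2.0)) ** 2
    beta_T = 0.9 * (R - S) / ((1.0 + lam) * (R + 2.0) ** 2)
    beta = beta_T * 5.0 * (1.0 + lam) / (5.0 * (1.0 + lam) + chi)
    ehd = ((1.0 + 3.0 * lam) / (2.0 + 3.0 * lam)) * beta
    dep = beta_D * (3.0 * (1.0 + lam) + chi) / (2.0 + 3.0 * lam + chi)
    return ehd + dep
```

The DEP contribution is always non-negative, so misalignment requires a sufficiently negative $\beta$, i.e., $\mbox{\it R}/\mbox{\it S}<1$, and is suppressed as $\chi$ weakens the EHD term.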
Figure \ref{figT} summarizes the regimes of alignment and deformation. The relative radial motion of the two drops at a given separation depends on $\Phi_s$ and $\beta_T$. There is a critical separation $d_c$, corresponding to ${\bf U}_2(d_c)\cdot{\bf \hat d}=0$, at which the drops' relative radial motion can change sign \begin{equation} \label{dc} d_c^2=\frac{2(1+3{\lambda})}{2+3{\lambda}}+\frac{\left(\mbox{\it R}-1\right)^2}{\mbox{\it R}-\mbox{\it S}}\left(\frac{4(3(1+{\lambda})+\chi)(5(1+{\lambda})+\chi)}{9(2+3{\lambda}+\chi)}\right). \end{equation} For $\Phi_s > 0$ and $\mbox{\it R}/\mbox{\it S}<1$ ($\beta<0$), $d_c$ does not exist and the EHD and DEP interactions are cooperative and act in the same direction (note that a system with $\Phi_s<0$ and $\mbox{\it R}/\mbox{\it S}>1$ cannot exist). For $\Phi_s> 0$ and $\mbox{\it R}/\mbox{\it S}>1$, or $\Phi_s<0$ and $\mbox{\it R}/\mbox{\it S}<1$, there is competition between EHD and DEP, with the quadrupolar DEP winning out closer to the drops and the EHD taking over via the stresslet flow in the far field. The critical distance is affected by the presence of surfactant. It increases with $\chi$, since the surfactant weakens the EHD flow and expands the region of dominance of DEP. In the limit of nondiffusing surfactant, $\chi\rightarrow\infty$, the drop interactions are entirely dominated by DEP. \section{Results and discussion} \label{sec:results} We consider two identical drops with viscosity ratio ${\lambda}=1$ and focus on the effect of surfactant on the drop dynamics under variable $\mbox{\it R}$, $\mbox{\it S}$, and initial configuration. First we compare the drop steady velocity obtained from simulations and the asymptotic theory for a drop pair aligned with the field. Figure \ref{figB} shows that theory and simulations are in excellent agreement, especially at large separations, and the theory is able to capture the steady velocity even for a relatively high $\mbox{\it Ca}=1$. 
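The critical separation \refeq{dc} is straightforward to evaluate. The Python sketch below (illustrative only; it applies when EHD and DEP compete, e.g., $\Phi_s>0$ and $\mbox{\it R}/\mbox{\it S}>1$) reproduces, for $\mbox{\it R}=2$, $\mbox{\it S}=1$, ${\lambda}=1$, $\chi=100$, the value $d_c\approx 7.14$ quoted in the results section:

```python
import math

def critical_distance(R, S, lam, chi):
    """Critical separation d_c of eq. (dc), where the relative radial
    velocity changes sign (meaningful only when EHD and DEP compete)."""
    term1 = 2.0 * (1.0 + 3.0 * lam) / (2.0 + 3.0 * lam)
    term2 = (R - 1.0) ** 2 / (R - S) * (
        4.0 * (3.0 * (1.0 + lam) + chi) * (5.0 * (1.0 + lam) + chi)
        / (9.0 * (2.0 + 3.0 * lam + chi)))
    return math.sqrt(term1 + term2)
```

Evaluating at increasing $\chi$ confirms that the critical distance grows as the surfactant weakens the EHD flow.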
As the surfactant effect strengthens and $\chi$ increases, either by increasing the surfactant elasticity or decreasing the diffusivity, the drops' relative velocity switches from EHD to DEP dominated at the critical distance \refeq{dc}. Accordingly, the slope of the velocity dependence on distance changes from $d^{-2}$ to $d^{-4}$. This is most obvious for the $\chi=100$ case, where $d_c=7.14$. In the limit $\chi\rightarrow\infty$, the drop motion is entirely due to DEP. \\ \begin{figure}[h] \centering \includegraphics[width=\linewidth]{fig2.pdf} \caption{\footnotesize{ Steady relative velocity of a pair of leaky dielectric drops aligned with the field ($\Theta=0$). $\mbox{\it R}=2$, $\mbox{\it S}=1$, $\mbox{\it Ca}=1$. (left) $\mbox{\it E}=1$, $\mbox{\it Pe}=1$ (blue), $\mbox{\it Pe}=10$ (black), $\mbox{\it Pe}=100$ (red), and $\mbox{\it Pe}\rightarrow\infty$ (magenta). The symbols are from our fully 3D code and the solid line is the theory \refeq{U2}. In the case of nondiffusing surfactant the interaction is dominated by DEP and the velocity shows $1/d^4$ dependence. (right) $\mbox{\it Pe}=1$ and $\mbox{\it E}=0,1,10,100,1000$ (green, blue, black, red, magenta). As $\chi=\mbox{\it Ma\,} \mbox{\it Pe}$ increases, the critical distance below which the DEP dominates increases. Note that the $\chi=100$ curve shows a change of slope from $-4$ to $-2$, while the $\chi=1000$ curve has slope $-4$ in the studied range. }} \label{figB} \end{figure} However, even in this limit, where at steady state the interface is immobilized by the surfactant, there is EHD-affected drop motion due to the transient drop deformation and surfactant redistribution until the steady DEP-dominated state is reached. As a result, the drops can initially repel and then attract once the steady drop shape and surfactant distribution are reached. 
This scenario is illustrated in Figure \ref{fig4}, which shows that the radial relative velocity in the case of a drop covered with non-diffusing surfactant can change sign from positive (indicating drop repulsion) to negative (attraction). The small-deformation theory which predicts this phenomenon is presented in the Appendix. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{fig4drops.pdf} \\ \caption{\footnotesize{Effect of surfactant on the interaction of two identical drops with $\mbox{\it R}=2$, $\mbox{\it S}=1$, $\mbox{\it Ca}=1$, $\mbox{\it E}=1$, initially aligned with the field ($\Theta=0$). Black dots correspond to $\mbox{\it Pe}=1$ and red dots correspond to the limit of non-diffusing surfactant, $\mbox{\it Pe}=10^6$. The surfactant suppresses the electrohydrodynamic repulsion, and after an initial transient due to shape deformation and surfactant redistribution the interaction can reverse sign. }} \label{fig4} \end{figure} Our previous study of clean drops \citep{Chiara:2020} found that drops initially misaligned with the field may not experience monotonic attraction or repulsion; instead, their three-dimensional trajectories follow three scenarios: motion in the direction of the field accompanied by either attraction followed by separation or vice versa (repulsion followed by attraction), and attraction followed by separation in a direction transverse to the field. Next we address the question of how the surfactant influences these intricate dynamics. The theory presented in Figure \ref{figT} highlighted that the surfactant has two main effects: first, it increases the range of distances where DEP dominates over EHD, and second, it decreases the range of $\mbox{\it S}$ and $\mbox{\it R}$ parameters where the drops' line-of-centers rotates away from the direction of the applied field. 
Accordingly, clean and surfactant-covered drops with the same $\mbox{\it S}$ and $\mbox{\it R}$, initial configuration, and $\mbox{\it Ca}$ may display opposite aligning behavior. Figure \ref{fig:fig_al_mis} illustrates such a case. While the clean drops attract in the direction of the field and move towards each other, pair up, and then separate in the transverse direction, the surfactant-covered drops only attract and move to align their line-of-centers parallel to the field. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{figAM.pdf} \caption{$\mbox{\it R}=0.1, \mbox{\it S}=5, \Theta=45^\circ$. Initial distance $d=4$. (a) Clean drops misaligning. (b) Non-diffusing surfactant-covered drops with $\mbox{\it E}=10$ aligning with the field. (c) Center-of-mass trajectory in the $x-z$ plane. Arrows correspond to the velocity for the clean drops (black) and for the non-diffusing surfactant-covered drops (red). Movies are provided in the supplementary material.} \label{fig:fig_al_mis} \end{figure} \section{Conclusions} The effect of surfactant on the three-dimensional interactions of a drop pair in an applied electric field is studied using numerical simulations and a small-deformation theory based on the leaky dielectric model. We present results for the case of a uniform electric field and an arbitrary angle between the drops' line-of-centers and the applied field direction, where the non-axisymmetric geometry necessitates three-dimensional simulations. The surfactant's main effect is to decrease the electrohydrodynamic flow, due to the Marangoni stresses compensating the electric shear. As a result, the drops' interactions are more strongly affected by DEP: the surfactant-covered drops tend to align with the applied field direction and attract. The surfactant influence is quantified by the parameter $\chi=\mbox{\it Pe}\mbox{\it Ma\,}$. The surfactant effect is most pronounced for nondiffusing surfactant ($\mbox{\it Pe}\gg 1$) or high elasticity ($\mbox{\it Ma\,}\gg1$). 
The critical separation at which the DEP overcomes the EHD interaction increases with $\chi$. The interaction is much weaker than for clean drops, because DEP decays with the drops' separation as $1/d^4$, compared to $1/d^2$ for EHD. The DEP also causes the drops to align with the field, and the range of $\mbox{\it R}$ and $\mbox{\it S}$ where the drops attract, move in the direction of the field, and then separate in the transverse direction is greatly diminished. \section{Acknowledgments} PV has been supported in part by NSF award CBET-1704996. \\
\section{Introduction} Recently, the reconfigurable intelligent surface~(RIS) has been proposed as a novel and cost-effective solution to achieve high spectral and energy efficiency for wireless communications via only low-cost reflecting elements~\cite{M-2019}. With a large number of elements whose electromagnetic response (e.g., phase shifts) can be controlled by simple programmable PIN diodes~\cite{TDR-2010}, the RIS can reflect the incident signal and generate a directional beam, thus enhancing the link quality and coverage. In the literature, some initial works have studied phase shift optimization in RIS assisted wireless communications. Since the phase shifts of the RIS significantly influence the received energy, they provide another dimension to optimize for further quality-of-service~(QoS) improvement. In \cite{XDR-2020}, beamforming and continuous phase shifts of the RIS were optimized jointly to maximize the sum rate for an RIS assisted point-to-point communication system. For multi-user cases, the authors in \cite{BHLYZH} proposed a hybrid beamforming scheme for a multi-user RIS assisted multi-input multi-output (MIMO) system, together with a limited phase shift optimization algorithm to maximize the user data rate. In \cite{CAGMC-2019}, a joint power allocation and continuous phase shift design was developed to maximize the system energy efficiency. However, most works assume continuous phase shifts, which are hard to implement in practical systems~\cite{TDR-2010}. Therefore, it is worthwhile to study the impact of limited phase shifts on the achievable data rate. In this paper, we consider an uplink cellular network where the direct link between the base station (BS) and the user suffers from deep fading. To improve the QoS at the BS, we utilize a practical RIS with limited phase shifts to reflect the signal from the user to the BS. 
To evaluate the performance limits of RIS assisted communications, we provide an analysis of the achievable data rate with continuous phase shifts of the RIS, and then discuss how limited phase shifts influence the data rate based on the derived expression. The rest of this paper is organized as follows. In Section \ref{System}, we introduce the system model for the RIS assisted communications. In Section \ref{Rate}, the achievable data rate is derived. The impact of limited phase shifts is discussed in Section \ref{analysis}. Numerical results in Section \ref{simulation} validate our analysis. Finally, conclusions are drawn in Section \ref{sec:conclusion}. \section{System Model}% \label{System} Consider a narrow-band uplink cellular network\footnote{In the downlink case, although the working frequency and transmission power might be different, a similar method can be adopted since the channel model is the same due to channel reciprocity.} consisting of one BS and one cellular user. Due to the dynamic wireless environment involving unexpected fading and potential obstacles, the Line-of-Sight~(LoS) link between the cellular user and the BS may not be stable, or may even fall into complete outage. To tackle this problem, we adopt an RIS to reflect the signal from the cellular user towards the BS to enhance the QoS. In the following, we first introduce the RIS assisted communication model, and then present the reflection-dominant channel model. \begin{figure}[!t] \centering \includegraphics[width=2.8in]{Systemmodel.pdf} \vspace{-2mm} \caption{System model for the RIS assisted uplink cellular network.} \vspace{-4mm} \label{scenario} \end{figure} \subsection{RIS assisted Communication Model} The RIS is composed of $M \times N$ electrically controlled RIS elements. Each element can adjust the phase shift by leveraging a positive-intrinsic-negative~(PIN) diode. In Fig.~\ref{scenario}, we give an example of the element's structure. 
The PIN diode can be switched between ``ON'' and ``OFF'' states by controlling its biasing voltage, based on which the metal plate can add a different phase shift to the reflected signal. It is worthwhile to point out that the phase shifts are limited rather than continuous in practical systems~\cite{TDR-2010}. In this paper, we assume that the RIS is $K$ bit coded, that is, we can control the PIN diodes to generate $2^K$ patterns of phase shifts with a uniform interval $\Delta \theta = \frac{2\pi}{2^K}$. Therefore, the possible phase shift values can be given by $s_{m,n} \Delta \theta$, where $s_{m,n}$ is an integer satisfying $0 \leq s_{m,n} \leq 2^K - 1$. Without loss of generality, the reflection factor of RIS element $(m,n)$ at the $m$-th row and the $n$-th column is denoted by $\Gamma_{m,n}$, i.e., \begin{equation}\label{response} \Gamma_{m,n} = \Gamma e^{-j \theta_{m,n}}, \end{equation} where the reflection amplitude $\Gamma \in [0,1]$ is a constant. \subsection{Reflection Dominant Channel Model} \begin{figure}[!t] \centering \includegraphics[width=2.6in]{Channelmodel.pdf} \vspace{-2mm} \caption{Channel model for the RIS-based uplink cellular network.} \vspace{-4mm} \label{Channel} \end{figure} In this subsection, we introduce the channel modeling between the BS and the user. Benefiting from the directional reflections of the RIS, the received power from the BS-RIS-user links is much stronger than that from the multipath components as well as the degraded direct link between the BS and the user. For this reason, we use the Rician model to describe the channel. Here, the BS-RIS-user links act as the dominant LoS component and all the other paths contribute to the non-LoS~(NLoS) component. 
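The $K$-bit coding described above can be sketched as a simple phase quantizer: a desired continuous phase shift is mapped to the nearest available level $s_{m,n}\Delta\theta$. The Python function below is illustrative only (its name and rounding rule are our own; the paper only specifies the level set):

```python
import math

def quantize_phase(theta, K):
    """Map a desired phase shift (radians) to the nearest of the 2^K
    coded levels s * dtheta with dtheta = 2*pi / 2^K.
    Returns (level index s, quantized phase)."""
    dtheta = 2.0 * math.pi / 2 ** K
    s = round(theta / dtheta) % 2 ** K   # wrap to 0 <= s <= 2^K - 1
    return s, s * dtheta
```

With nearest-level rounding the quantization error is at most $\Delta\theta/2 = \pi/2^K$, which is the source of the rate loss analyzed later for limited phase shifts.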
Therefore, the channel model between the BS and the user via RIS element $(m,n)$ can be written as \begin{equation} \tilde{h}_{m,n} = \sqrt{\frac{\kappa}{\kappa + 1}} h_{m,n} + \sqrt{\frac{1}{\kappa + 1}}\hat{h}_{m,n}, \end{equation} where $h_{m,n}$ is the LoS component, $\hat{h}_{m,n}$ is the NLoS component, and $\kappa$ is the Rician factor indicating the ratio of the LoS component to the NLoS one. As shown in Fig.~\ref{Channel}, let $D_{m,n}$ and $d_{m,n}$ be the distance between the BS and RIS element $(m,n)$, and that between element $(m,n)$ and the user, respectively. Define the transmission distance through element $(m,n)$ as $L_{m,n}$, where $L_{m,n} = D_{m,n} + d_{m,n}$. According to~\cite{A-2005}, the reflected LoS component of the channel between the BS and the user via RIS element $(m,n)$ can then be given by \begin{equation} \begin{array}{ll} h_{m,n} & = \sqrt{G D_{m,n}^{-\alpha}d_{m,n}^{-\alpha}} e^{-j\frac{2 \pi}{\lambda} L_{m,n}}\\ & = \sqrt{G} \left[\sqrt{D_{m,n}^{-\alpha}}e^{-j\frac{2 \pi}{\lambda} D_{m,n}}\right]\cdot\left[\sqrt{d_{m,n}^{-\alpha}}e^{-j\frac{2 \pi}{\lambda} d_{m,n}}\right]\\ &= \sqrt{G}h^{t}_{m,n}h^r_{m,n}, \end{array} \end{equation} where $\alpha$ is the channel gain parameter, $G$ is the antenna gain, and $\lambda$ is the wavelength of the signal. Here, $h_{m,n}^t$ and $h_{m,n}^r$ are the channels between the BS and RIS element $(m,n)$, and between RIS element $(m,n)$ and the user, respectively. Similarly, the NLoS component can be written as \begin{equation} \hat{h}_{m,n} = PL(D_{m,n})PL(d_{m,n})g_{m,n}, \end{equation} where $PL(\cdot)$ is the channel gain for the NLoS component and $g_{m,n} \sim \mathcal{CN}(0,1)$ denotes the small-scale NLoS components. Using the geometric information, we can rewrite the channel gains via the following proposition, whose proof can be found in Appendix \ref{proof_appro}. 
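The per-element channel construction above can be sketched directly from the formulas. The following Python code is illustrative only (function names are our own, and the numeric parameters in the test are arbitrary):

```python
import cmath
import math
import random

def los_component(G, alpha, D, d, wavelength):
    """Reflected LoS channel via one RIS element:
    amplitude sqrt(G * D^-alpha * d^-alpha), phase -2*pi*(D+d)/wavelength."""
    amp = math.sqrt(G * D ** (-alpha) * d ** (-alpha))
    return amp * cmath.exp(-2j * math.pi * (D + d) / wavelength)

def rician_channel(G, alpha, D, d, wavelength, kappa, pl_nlos, rng):
    """h_tilde = sqrt(kappa/(kappa+1)) h_LoS + sqrt(1/(kappa+1)) h_NLoS,
    with h_NLoS = pl_nlos * g and g ~ CN(0, 1)."""
    g = complex(rng.gauss(0.0, math.sqrt(0.5)), rng.gauss(0.0, math.sqrt(0.5)))
    return (math.sqrt(kappa / (kappa + 1.0)) * los_component(G, alpha, D, d, wavelength)
            + math.sqrt(1.0 / (kappa + 1.0)) * pl_nlos * g)
```

Note that the LoS magnitude depends only on the path-loss factors, while the phase $\phi_{m,n}=\frac{2\pi}{\lambda}L_{m,n}$ is set by the total path length, which is what the RIS phase shifts later compensate.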
\begin{proposition}\label{appro} When the distance $D_{m,n}$ between RIS element $(m,n)$ and the BS and the distance $d_{m,n}$ between element $(m,n)$ and the user are much larger than the horizontal and vertical distances between two adjacent elements, $d_h$ and $d_v$, i.e., $D_{m,n}, d_{m,n} \gg d_h, d_v$ for all $m,n$, we have \begin{equation}\label{channel} G D_{m,n}^{-\alpha}d_{m,n}^{-\alpha} \triangleq PL_{LoS},~PL(D_{m,n})PL(d_{m,n}) \triangleq PL_{NLoS}, \end{equation} where $PL_{LoS}$ and $PL_{NLoS}$ are constants. \end{proposition} In addition, the following remark shows how the location of the RIS influences the channel gain. \begin{remark} \label{remark1} Given the transmission distance, i.e., $D_{m,n} + d_{m,n} = L$, where $L$ is a constant, the channel gain first decreases and then increases as the RIS moves farther from the BS. \end{remark} \begin{IEEEproof} For the LoS component, according to (\ref{channel}), we have \begin{equation} \begin{array}{ll} PL_{LoS} &= G((L - D_{m,n})D_{m,n})^{-\alpha} \\ &= G(-(D_{m,n} - L/2)^2 + L^2/4)^{-\alpha}. \end{array} \end{equation} Therefore, when $D_{m,n}$ increases, the LoS channel gain first decreases and then increases. The trend is the same for the NLoS component, but since the LoS component is typically dominant, the NLoS one can be neglected. This ends the proof. \end{IEEEproof} \section{Achievable Data Rate Analysis}\label{Rate} After traveling through the reflection-dominated channel, the received signal at the user can be expressed as \begin{equation} y = \sum\limits_{m,n} \Gamma_{m,n} \tilde{h}_{m,n} \sqrt{P} s + w, \end{equation} where $w \sim \mathcal{CN}(0,\sigma^2)$ is the additive white Gaussian noise, $P$ is the transmit power, and $s$ is the transmitted signal with $|s|^2 = 1$.
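As an illustration of the model above, the following Python sketch builds per-element Rician channels and evaluates the received power for a given phase-shift matrix. All parameter values (RIS size, Rician factor, antenna gain, path-loss exponent, wavelength, distances, and the placeholder NLoS gain) are illustrative assumptions, not the simulation settings used later in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not the paper's values)
M = N = 8                         # RIS size
kappa = 4.0                       # Rician factor
G, alpha, lam = 1.0, 2.0, 0.05    # antenna gain, path-loss exponent, wavelength [m]
D = 95.0 + 0.03 * rng.random((M, N))   # BS-to-element distances [m]
d = 65.0 + 0.03 * rng.random((M, N))   # element-to-user distances [m]
L = D + d                              # total path length per element

# LoS component: h = sqrt(G D^-alpha d^-alpha) exp(-j 2 pi L / lambda)
h_los = np.sqrt(G * D**-alpha * d**-alpha) * np.exp(-2j * np.pi * L / lam)
# NLoS component: constant gain (placeholder) times CN(0,1) fading
PL_nlos = np.sqrt(G * D**-alpha * d**-alpha).mean()
g = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
h = np.sqrt(kappa / (kappa + 1)) * h_los + np.sqrt(1 / (kappa + 1)) * PL_nlos * g

def snr(theta, P_over_sigma2=1e9):
    """Received SNR for phase-shift matrix theta (Gamma = 1)."""
    return P_over_sigma2 * abs(np.sum(np.exp(-1j * theta) * h))**2

# Co-phasing each element against its path phase 2 pi L / lambda
theta_opt = (-2 * np.pi * L / lam) % (2 * np.pi)
assert snr(theta_opt) >= snr(np.zeros((M, N)))  # co-phasing beats no control
```

With the co-phased setting, the LoS terms add coherently across all $MN$ elements, which is the mechanism behind the $M^2N^2$ power scaling derived below.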
Therefore, the received Signal-to-Noise Ratio (SNR) can be expressed as \begin{equation} \gamma = \frac{P}{\sigma^2} \left(\sum\limits_{m,n} \Gamma_{m,n} \tilde{h}_{m,n}\sum\limits_{{m'},{n'}}\Gamma^{*}_{{m'},{n'}}\tilde{h}^{*}_{{m'},{n'}}\right), \end{equation} where $s^{*}$ denotes the conjugate of a complex number $s$. \setcounter{equation}{17} \begin{figure*}[hb] \hrulefill \begin{equation}\label{required} K_{req} = \log_2 \pi - \log_2 \arccos \sqrt{\frac{\kappa + 1}{\kappa \eta_{LoS}M^2N^2}\left(\left(1 + \frac{\eta_{NLoS}}{\kappa + 1}MN + \frac{\kappa \eta_{LoS}}{\kappa + 1}M^2N^2\right)^{\epsilon_0} -1 - \frac{\eta_{NLoS}}{\kappa + 1}MN\right)}, \end{equation} \end{figure*} The received SNR can be maximized by optimizing the response of each RIS element, and thus the achievable data rate can be expressed as \setcounter{equation}{8} \begin{equation}\label{achievable} R = \max\limits_{\{\theta_{m,n}\}} \mathbb{E}\left[ \log_2(1 +\gamma)\right], \end{equation} where \begin{equation}\label{datarate} \begin{array}{ll} &\hspace{-5mm}\mathbb{E}\left[ \log_2(1 +\gamma)\right] \approx \\ &~~\hspace{-8mm}\log_2 \left(1 \hspace{-1mm}+\hspace{-1mm} \frac{\eta_{NLoS}}{\kappa + 1}MN\hspace{-1mm}+\hspace{-1mm}\frac{\kappa\eta_{LoS}}{\kappa + 1}\hspace{-5mm}\sum\limits_{m,m',n, n'}\hspace{-5mm}e^{-j[\phi_{m,n} - \phi_{m',n'} + \theta_{m,n} - \theta_{m',n'}]}\right). \end{array} \end{equation} Here, $\eta_{NLoS} = \frac{P\Gamma^2}{\sigma^2}PL_{NLoS}$, $\eta_{LoS} = \frac{P\Gamma^2}{\sigma^2} PL_{LoS}$, and $\phi_{m,n} = \frac{2 \pi}{\lambda}L_{m,n}$. Derivations are given in Appendix \ref{proof_datarate}. To maximize the data rate, we need to let $\phi_{m,n} - \phi_{m',n'} + \theta_{m,n} - \theta_{m',n'} = 0$ for any $(m,n)$ and $(m',n')$. Thus, we have the following proposition.
\begin{proposition}\label{optimal} The optimal phase shifts with continuous value $\theta_{m,n}^{*}$ should satisfy the following equation: \begin{equation}\label{phase} \theta_{m,n}^{*} + \phi_{m,n} = C, \end{equation} where $C$ is an arbitrary constant. The achievable data rate is \begin{equation}\label{AD} R = \log_2\left(1 + \frac{\eta_{NLoS}}{\kappa + 1}MN + \frac{\kappa \eta_{LoS} }{\kappa + 1}M^2N^2 \right). \end{equation} \end{proposition} Based on this expression of the achievable data rate, the following remarks give the upper and lower bounds of the data rate. \begin{remark}\label{R1} The upper bound of the data rate is achieved for a pure LoS channel, i.e., $\kappa \rightarrow \infty$, where an asymptotic received power gain of $O(M^2N^2)$ can be obtained. \end{remark} \begin{remark}\label{R2} The lower bound of the data rate is achieved for a Rayleigh channel, i.e., $\kappa \rightarrow 0$, where an asymptotic received power gain of $O(MN)$ can be obtained. \end{remark} These two remarks show that the data rate grows with $\kappa$, since the received SNR increases from the order of $O(MN)$ to that of $O(M^2N^2)$. \section{Analysis on the Number of Phase Shifts} \label{analysis} In this section, we discuss the influence of limited phase shifts on the data rate. Since the number of phase shifts is finite in practice, we select the available phase shift closest to the optimal one $\theta_{m,n}^{*}$ given in (\ref{phase}), and denote it by $\hat{\theta}_{m,n}$. Define the phase shift error caused by limited phase shifts as \begin{equation} \delta_{m,n} = \theta_{m,n}^{*} - \hat{\theta}_{m,n}. \end{equation} With $K$ coding bits, we have $- \frac{2\pi}{2^{K + 1}} \leq \delta_{m,n} < \frac{2\pi}{2^{K + 1}}$.
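As a sanity check on the quantization error bound above, the following Python sketch rounds continuous phases to the nearest of the $2^K$ coded levels and verifies that $|\delta_{m,n}| \leq \frac{2\pi}{2^{K+1}}$. The function name and test values are illustrative.

```python
import numpy as np

def quantize_phase(theta_opt, K):
    """Map a continuous phase to the nearest RIS level s * (2 pi / 2^K)."""
    step = 2 * np.pi / 2**K
    s = np.round(theta_opt / step) % 2**K   # nearest coded integer level
    return s * step

rng = np.random.default_rng(1)
K = 2
theta = rng.uniform(0, 2 * np.pi, 1000)
theta_hat = quantize_phase(theta, K)
# Wrap the error into (-pi, pi] before checking the bound
delta = (theta - theta_hat + np.pi) % (2 * np.pi) - np.pi
assert np.all(np.abs(delta) <= 2 * np.pi / 2**(K + 1) + 1e-12)
```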
The expectation of the SNR $\hat{\gamma}$ with limited phase shifts can be written as \begin{equation}\label{bound} \begin{array}{ll} \mathbb{E}[\hat{\gamma}] & = \frac{\eta_{NLoS}}{\kappa + 1}MN + \frac{\kappa \eta_{LoS}}{\kappa + 1} \hspace{-4mm}\sum\limits_{m,n,m',n'}\hspace{-4mm} e^{-j(C+\delta_{m,n} -C - \delta_{m',n'})}\\ & = \frac{\eta_{NLoS}}{\kappa + 1}MN + \frac{\kappa \eta_{LoS}}{\kappa + 1}\left|\sum\limits_{m,n} e^{-j \delta_{m,n}}\right|^2. \end{array} \end{equation} Since $K \geq 1$, we have $\frac{2\pi}{2^{K+1}} \leq \frac{\pi}{2}$, so that $\cos \delta_{m,n} \geq \cos\left(\frac{2\pi}{2^{K+1}}\right) \geq 0$. Thus, the following inequality holds: \begin{equation} \left|\sum\limits_{m,n} e^{-j \delta_{m,n}}\right|^2 \geq \left(\sum\limits_{m,n} \cos \delta_{m,n}\right)^2 \geq M^2N^2\cos^2\left(\frac{2\pi}{2^{K+1}}\right). \end{equation} To quantify the data rate degradation, we define the error $\epsilon$ brought by limited phase shifts as the ratio of the data rate with limited phase shifts to that with continuous ones. To guarantee the system performance, $\epsilon$ should be no smaller than a threshold $\epsilon_0 < 1$, i.e., \begin{equation} \epsilon = \log_2(1 + \mathbb{E}[\hat{\gamma}])/\log_2(1 + \mathbb{E}[\gamma]) \geq \epsilon_0. \end{equation} Recalling (\ref{bound}), we can obtain the requirement on the coding bits as \begin{equation}\label{requirement} \frac{1 + \frac{\eta_{NLoS}}{\kappa + 1}MN + \frac{\kappa \eta_{LoS}}{\kappa + 1}M^2N^2\cos^2\left(\frac{2\pi}{2^{K+1}}\right)}{\left(1 + \frac{\eta_{NLoS}}{\kappa + 1}MN + \frac{\kappa \eta_{LoS}}{\kappa + 1}M^2N^2\right)^{\epsilon_0}} \geq 1. \end{equation} Therefore, the required number of coding bits can be written as in (\ref{required}). In the following, we provide a proposition discussing the impact of the RIS size on the required number of coding bits, i.e., the number of phase shifts. \begin{proposition}\label{bit-size} Given the performance threshold $\epsilon_0$, the required number of coding bits is a decreasing function of the RIS size.
Specifically, when the RIS size is sufficiently large, i.e., $MN \rightarrow \infty$, 1 bit is sufficient to satisfy the performance threshold. \end{proposition} \begin{IEEEproof} We first discuss how the RIS size influences the required number of coding bits. Define the RIS size $x = MN \geq 1$, and \setcounter{equation}{18} \begin{equation} f(x) = \frac{\kappa + 1}{\kappa \eta_{LoS}x^2}\left(\left(1 \hspace{-1mm}+\hspace{-1mm} \frac{\eta_{NLoS}}{\kappa + 1}x \hspace{-1mm}+\hspace{-1mm} \frac{\kappa \eta_{LoS}}{\kappa + 1}x^2\right)^{\epsilon_0} \hspace{-2mm}-1 \hspace{-1mm}-\hspace{-1mm} \frac{\eta_{NLoS}}{\kappa + 1}x\right). \end{equation} According to (\ref{required}), the required number of coding bits $K$ has the same monotonicity as $f(x)$. Therefore, we only need to investigate how $f(x)$ changes when $x$ increases. Let $a = \frac{\kappa\eta_{LoS}}{\kappa + 1}$ and $b = \frac{\eta_{NLoS}}{\kappa + 1}$. We have \begin{equation} f'(x) = \frac{1}{ax^{3}}\left(2 +\hspace{-1mm}bx \hspace{-1mm}+\hspace{-1mm} \frac{(\epsilon_0 \hspace{-1mm}- \hspace{-1mm}1)2ax^2 \hspace{-1mm}+\hspace{-1mm}(\epsilon_0 \hspace{-1mm}-\hspace{-1mm} 2) bx\hspace{-1mm} -\hspace{-1mm} 2}{\left(1 \hspace{-1mm}+\hspace{-1mm} bx \hspace{-1mm}+ \hspace{-1mm} ax^2\right)^{1 -\epsilon_0}}\right). \end{equation} Noting that $x \geq 1$ and $\epsilon_0 \leq 1$, we have \begin{equation} \begin{array}{ll} f'(x) &\leq \frac{x^{-3}}{a}\left(2 \hspace{-1mm}+\hspace{-1mm}bx \hspace{-1mm}+\hspace{-1mm} \left((\epsilon_0 \hspace{-1mm}- \hspace{-1mm}1)2ax^2 \hspace{-1mm}+\hspace{-1mm}(\epsilon_0 \hspace{-1mm}-\hspace{-1mm} 2) bx\hspace{-1mm} -\hspace{-1mm} 2\right) \right)\\ & = \frac{x^{-3}}{a}\left((\epsilon_0 \hspace{-1mm}- \hspace{-1mm}1)2ax^2 \hspace{-1mm}+\hspace{-1mm}(\epsilon_0 \hspace{-1mm}-\hspace{-1mm} 1) bx\right) \leq 0. \end{array} \end{equation} This implies that $f(x)$ decreases as $x$ grows, i.e., the required number of coding bits decreases as the RIS size grows.
When the size of the RIS is sufficiently large, i.e., $MN \rightarrow \infty$, since $\epsilon_0 < 1$, the term $\frac{1}{M^2N^2}\left(\left(1 + \frac{\eta_{NLoS}}{\kappa + 1}MN + \frac{\kappa \eta_{LoS}}{\kappa + 1}M^2N^2\right)^{\epsilon_0} \hspace{-2mm}-1 - \frac{\eta_{NLoS}}{\kappa + 1}MN\right)$ tends to $0$. Therefore, the required number of coding bits tends to $\log_2 \pi - \log_2 (\pi/2) = 1$. \end{IEEEproof} \section{Simulation Results} \label{simulation} \begin{figure}[!t] \centering \includegraphics[width=2.6in]{layout.pdf} \vspace{-2mm} \caption{Simulation layout for the RIS-based cellular network (top view).} \vspace{-4mm} \label{layout} \end{figure} In this section, we verify the derivation of the achievable data rate and evaluate the optimal phase shift design for the RIS-assisted cellular network, the layout of which is given in Fig.~\ref{layout}. The parameters are selected according to the 3GPP standard~\cite{3GPP-2018} and existing works \cite{TDR-2010,BHLYZH}. The distances from the BS and from the user to the center of the RIS are $D_0 = 95$ m and $d_0 = 65$ m, and the heights of the BS and the RIS are 25 m and 10 m, respectively. The RIS element separation is set as $d_h = d_v = 0.03$ m and the reflection amplitude is assumed to be ideal, i.e., $\Gamma = 1$. The transmit power is $P = 20$ dBm, the noise power is $\sigma^2 = -96$ dBm, and the carrier frequency $f$ is 5.9 GHz. The UMa model in \cite{3GPP-2018} is utilized to describe the path loss for both LoS and NLoS components. For simplicity, we assume that $M = N$ in this simulation. All numerical results are obtained by 3000 Monte Carlo simulations. \begin{figure}[!t] \centering \includegraphics[width=2.8in]{datarate.pdf} \vspace{-2mm} \caption{Achievable data rate vs. RIS size $N$ with continuous phase shifts.} \vspace{-4mm} \label{data} \end{figure} In Fig.~\ref{data}, we plot the achievable data rate vs. the RIS size $N$ with continuous phase shifts.
From this figure, we can observe that our theoretical results closely match the simulated ones. We can also observe that the data rate increases with the RIS size $N$, as more energy is reflected. In addition, when the RIS size is sufficiently large, the slope of the curve with the pure LoS channel is 4, which implies that the received SNR is proportional to the squared number of RIS elements\footnote{From (\ref{AD}), when $N$ is sufficiently large in the pure LoS scenario, the data rate can be written as $R = 4 \log_2(N) + z$, where $z$ is a constant. Since we use logarithmic coordinates for the RIS size $N$, a slope of 4 corresponds to a received SNR of order $O(N^4)$.}. This result is consistent with Remark \ref{R1}. Similarly, when the RIS size is sufficiently large, the slope of the curve with the Rayleigh channel is 2, which corresponds to Remark \ref{R2}. In addition, we can observe that the data rate increases with the Rician factor $\kappa$. \begin{figure}[!t] \centering \includegraphics[width=2.8in]{code.pdf} \vspace{-2mm} \caption{Data rate degradation $\epsilon$ vs. coding bit $K$ with $\kappa = 4$.} \vspace{-4mm} \label{quantization} \end{figure} In Fig.~\ref{quantization}, we plot the performance degradation $\epsilon$ vs. the number of coding bits $K$ with $\kappa = 4$ in the Rician channel model \cite{BHLYZH} and threshold $\epsilon_0 = 0.9$. From this figure, we find that the required number of coding bits with the Rician channel model is: 1) 3 bits when the RIS size is small, e.g., $N = 3$; 2) 2 bits when the RIS size is moderate, e.g., $N = 300$; 3) 1 bit when the RIS size approaches infinity, e.g., $N = 3\times10^{70}$. These observations imply that the required number of coding bits decreases as the number of RIS elements grows, and 1 bit is enough when the RIS size goes to infinity, which verifies Proposition \ref{bit-size}.
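The trend above can also be checked directly from (\ref{required}). The following Python sketch evaluates $K_{req}$ as a function of the RIS size for illustrative values of $\kappa$, $\eta_{LoS}$, $\eta_{NLoS}$, and $\epsilon_0$ (assumed here, not the paper's simulation settings), confirming that the required number of coding bits decreases with the RIS size and approaches 1 bit.

```python
import numpy as np

def k_req(MN, kappa=4.0, eta_los=1e3, eta_nlos=1.0, eps0=0.9):
    """Required coding bits from Eq. (18); parameter values are illustrative."""
    a = kappa * eta_los / (kappa + 1)
    b = eta_nlos / (kappa + 1)
    # Argument of arccos in Eq. (18)
    inner = ((1 + b * MN + a * MN**2)**eps0 - 1 - b * MN) / (a * MN**2)
    return np.log2(np.pi) - np.log2(np.arccos(np.sqrt(inner)))

sizes = np.array([10.0, 1e2, 1e4, 1e8])
bits = k_req(sizes)
assert np.all(np.diff(bits) <= 1e-9)   # fewer bits needed as the RIS grows
assert abs(k_req(1e14) - 1.0) < 0.05   # approaches 1 bit as MN grows large
```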
\begin{figure}[!t] \centering \includegraphics[width=2.8in]{location.pdf} \vspace{-2mm} \caption{Data rate degradation $\epsilon$ vs. distance between the BS and the RIS $D_0$ with $\kappa = 4$ and $D_0 + d_0 = 160$.} \vspace{-4mm} \label{location} \end{figure} In Fig.~\ref{location}, we plot the performance degradation $\epsilon$ vs. the distance $D_0$ between the BS and the RIS with $\kappa = 4$ to show how the location of the RIS influences the data rate degradation. For fairness, we assume that the transmission distance remains the same, i.e., $D_0 + d_0 = 160$. We can observe that, for a given RIS size $N$ and number of coding bits $K$, the data rate degradation first decreases and then increases as $D_0$ increases. This is because the channel gain first decreases and then increases as $D_0$ increases, as proved in Remark \ref{remark1}, and the performance degradation $\epsilon$ is an increasing function of the channel gain\footnote{When the coding bits $K$ and the RIS size are given, the performance degradation $\epsilon$ can be written as $\epsilon = \frac{\log_2(1 + g'x)}{\log_2(1 + gx)}$, where $g' < g$ due to the limited phase shifts and $x$ is the channel gain. It is easy to check that this function increases with $x$ by calculating its first-order derivative.}. In addition, we can also see that the variation caused by the location change of the RIS becomes smaller when the RIS size grows. This implies that the location of the RIS may influence the number of required coding bits when the RIS size is small, while the number of required coding bits remains unchanged when the RIS size is sufficiently large. \vspace{-2mm} \section{Conclusions and Future Works} \label{sec:conclusion} In this paper, we have derived the achievable data rate of the RIS-assisted uplink cellular network and have discussed the impact of limited phase shift design based on this expression.
Particularly, we have proposed an optimal phase shift design scheme to maximize the data rate and have obtained the requirement on the coding bits to ensure that the data rate degradation is lower than a predefined threshold. From the analysis and simulation, given the location of the RIS, we can draw the following conclusions: 1) We can achieve an asymptotic SNR proportional to the squared number of RIS elements with a pure LoS channel model, and an asymptotic SNR proportional to the number of RIS elements when the channel is Rayleigh faded; 2) The required number of phase shifts decreases as the RIS size grows, given the data rate degradation threshold; 3) A large number of phase shifts is necessary when the RIS size is small, while 2 phase shifts are enough when the RIS size tends to infinity. For future works, we can extend RIS-assisted communications to Device-to-Device~(D2D) based heterogeneous networks~\cite{ZMKGL-2016} for higher data rates, since the interference among different D2D users can be alleviated by proper phase shift design; to energy cooperation~\cite{HKM-2018}, as the RIS can generate directional beams for more effective energy harvesting; and to RF sensing~\cite{JHBLLYZH} for accurate object tracking, by providing more information about the propagation environment. \vspace{-2mm} \begin{appendices} \section{Proof of Proposition \ref{appro}}\label{proof_appro} Assume that the BS is located at the local origin, and denote the angle between the orientation of the RIS and the $x$-$y$ plane by $\theta_R$. Defining the principal directions of the RIS as $\bm{n}_h$ and $\bm{n}_v$, we have $\bm{n}_h = \bm{n}_x \cos\theta_R + \bm{n}_y \sin\theta_R$ and $\bm{n}_v = \bm{n}_z$, where $\bm{n}_x$, $\bm{n}_y$, and $\bm{n}_z$ are the directions of the $x$, $y$, and $z$ axes. Denote by $\bm{c}_{m,n}$ the position of RIS element $(m,n)$, where \begin{equation} \bm{c}_{m,n} = md_h\bm{n}_h + nd_v\bm{n}_v + D_{0,0}\bm{n}_y.
\end{equation} Here, $D_{0,0}$ is the projected distance on the $y$-axis between the BS and the RIS. Therefore, we have \begin{equation} \begin{array}{ll} D_{m,n}\hspace{-3mm} &= \left[(md_h\cos\theta_R)^2 + (md_h\sin\theta_R + D_{0,0})^2 +(nd_v)^2\right]^{\frac{1}{2}}\\ & \approx (md_h\sin\theta_R + D_{0,0}) + \frac{(md_h\cos\theta_R)^2 +(nd_v)^2}{2 D_{0,0}}, \end{array} \end{equation} which follows from $\sqrt{1 + a} \approx 1 + a/2$ when $a \ll 1$. We can obtain the expression for the distance between the RIS and the user using a similar method. Since the distance between two RIS elements is much smaller than that between the BS and the user, the path loss between the BS and the user via different RIS elements can be regarded as constant. \section{Derivations of Equation (\ref{datarate})}\label{proof_datarate} Due to the property of the logarithmic function \cite{YWSCX-2019}, we have \begin{equation} \mathbb{E}[\log_2(1 + \gamma)] \approx \log_2(1 + \mathbb{E}[\gamma]). \end{equation} Since $\frac{P}{\sigma^2}$ is constant, we derive $\mathbb{E}[\gamma]$ in the following. \begin{equation}\label{expectation} \begin{array}{ll} \mathbb{E}[\gamma] \hspace{-3mm} & \hspace{-1mm}= \frac{P\Gamma^2}{\sigma^2} \mathbb{E}\left[\sum\limits_{m,n} e^{-j\theta_{m,n}} \tilde{h}_{m,n}\sum\limits_{{m'},{n'}}e^{j\theta_{m',n'}}\tilde{h}^{*}_{{m'},{n'}}\right]\\ &\hspace{-1mm}= \frac{P\Gamma^2}{\sigma^2}\hspace{-5mm}\sum\limits_{m,n,{m'},{n'}}\hspace{-5mm}e^{-j(\theta_{m,n}\hspace{-1mm} -\theta_{m',n'})}\left(\frac{1}{\kappa + 1} \mathbb{E}[\hat{h}_{m,n}\hat{h}^{*}_{{m'},{n'}}] + \right.\\ &\hspace{-1mm}~\left.\frac{\kappa}{\kappa + 1} PL_{LoS} e^{-j[\phi_{m,n} - \phi_{{m'},{n'}}]} +\right. \\ &\hspace{-1mm}~\left.2\frac{\sqrt{\kappa}}{\kappa + 1}\sqrt{PL_{LoS}}\mbox{Re}\left\{ e^{j\phi_{m,n}}\mathbb{E}[\hat{h}_{m,n}]\right\} \right). \end{array} \end{equation} Since $\hat{h}_{m,n}$ has a zero mean, the final term in (\ref{expectation}) equals 0.
Moreover, since $\hat{h}_{m,n}$ is independent for different elements $(m,n)$ and $(m',n')$, the following equation holds: \begin{equation} \mathbb{E}[\hat{h}_{m,n}\hat{h}^{*}_{m',n'}] = \left\{ \begin{array}{l} PL_{NLoS}, \mbox{if}~m = m', n = n',\\ 0, \mbox{otherwise}. \end{array} \right. \end{equation} Therefore, we have \begin{equation} \vspace{-1mm} \mathbb{E}[\gamma]=\frac{\eta_{NLoS}}{\kappa + 1}MN+\frac{\kappa \eta_{LoS}}{\kappa + 1}\hspace{-4mm}\sum\limits_{m,m',n, n'}\hspace{-4mm}e^{-j[\phi_{m,n} - \phi_{m',n'} + \theta_{m,n} - \theta_{m',n'}]}. \end{equation} This ends the proof. \end{appendices}
\section{INTRODUCTION} \label{intro} The spectral energy distribution of the blazar subclass of active galactic nuclei -- which includes flat-spectrum radio quasars (FSRQs) and BL~Lac-type objects -- is dominated by highly variable nonthermal emission from relativistic jets that are viewed within several degrees of the jet axis. The FSRQ 3C~279 \citep[redshift $z=0.538$,][]{Burbidge1965}, now commonly accepted to be at $z=0.5362$ \citep{Marziani1996}, is considered a prototypical blazar: it exhibits pronounced variations of flux from radio to $\gamma$-ray frequencies (by $>5$ mag in optical bands) and strong, variable optical polarization, as high as 45.5 per cent observed in the $U$ band \citep{Mead1990}. It has been intensely observed in many multiwavelength campaigns designed to probe the physics of the high-energy plasma responsible for the radiation. In one such campaign in 1996, 3C~279 was observed to vary rapidly during a high flux state by the EGRET detector of the {\it Compton} Gamma Ray Observatory and at longer wavelengths \citep{Wehrle1998}. The $\gamma$-ray maximum coincided with an X-ray flare, with no time lag to within 1 day. Since the launch of \emph{Fermi}, numerous observational campaigns have been undertaken in order to trace the variability of 3C~279 on different time scales and over as wide a wavelength range as possible \citep[e.g.,][]{Bottcher2007, Larionov2008, Hayashida2012, Pittori2018}. Although such multi-epoch campaigns, each extending over a relatively brief time interval, have led to improvements in our knowledge of the blazar, they have proven insufficient to discover patterns in the complex behaviour of 3C~279 that relate to consistent physical aspects of the jet. Polarimetric studies of 3C~279 are still not as numerous as photometric ones.
During a dedicated observational campaign in 2007 \citep{Larionov2008}, the optical and 7-mm EVPAs (electric vector position angles) rotated simultaneously, suggesting co-spatiality of the optical and radio emission sites. The same study found that the slope of the variable source of the optical (synchrotron) spectrum did not change, despite large ($\sim 3$ mag) variations in brightness. In a recent work by \citet{Rani2018}, an anticorrelation was found between the optical flux density and the degree of optical polarization. \citet{Kiehlmann2016} carried out a detailed analysis of the polarimetric behaviour of 3C~279 in order to distinguish between `real' and `random walk' rotations of the EVPA, finding that a smooth optical EVPA rotation by $\sim 360\degr$ during a flare was inconsistent with a purely stochastic process. \citet{Punsly2013} has reported a prominent asymmetry (strong `red wing') of the \ion{Mg}{ii} line based on examination of three spectra of 3C~279 obtained at Steward Observatory. The origin of the red wing is unknown, with explanations ranging from gravitational and transverse redshifts within 200 gravitational radii of the central black hole~\citep[see, e.g.,][]{Corbin1997} to reflection off optically thick, out-flowing clouds on the far side of the accretion disc, or transmission through inflowing gas on the near side of the accretion disc. High-resolution observations with the Very Long Baseline Array (VLBA) have demonstrated that $\gamma$-bright blazars possess the most highly relativistic jets among compact flat-spectrum radio sources \citep[e.g.,][]{Jorstad2001b}. This is inferred from a comparison of the kinematics of disturbances (knots) in the parsec-scale jets of AGN that are strong $\gamma$-ray emitters versus those that are weak or undetected. Reported apparent speeds in 3C~279 have ranged from 4$c$ to 22$c$ \citep[e.g.,][]{Jorstad2004, Rani2018, Lister2019}.
Based on a detailed study of knot trajectories, \citet{Qian2019} have suggested that the jet precesses, as could be caused by a binary black hole system in the nucleus. From an analysis of both the apparent speeds and the time scales of flux decline of individual knots in 3C~279, \citet{Rani2018} derived a Lorentz factor of the jet flow $\Gamma \ge 22.4$, and a viewing angle $\Theta \le 2\fdg6$. An analysis of times of high $\gamma$-ray flux and superluminal ejections of $\gamma$-ray blazars indicates a statistical connection between the two types of events \citep{Jorstad2001a,Jorstad2016,Rani2018}. Given the complexity of the time variability of non-thermal emission in blazars, a more complete observational dataset than customarily obtained might extend the range of conclusions that can be drawn concerning jet physics. \citet{Chatterjee2008} analysed a decade-long (1996--2007) dataset containing radio (14.5 GHz), single-colour ($R$-band) optical, and X-ray light curves, as well as VLBA images at 43 GHz, of 3C~279. They found strong correlations between the X-ray and optical fluxes, with time delays that change from X-ray leading optical variations to vice-versa, with simultaneous variations in between. Although the radio variations lag behind those in these wavebands by more than 100 days on average, changes in the jet direction on time scales of years are correlated with -- and actually lead -- long-term variations at X-ray energies. The current paper extends these observations to include multi-colour optical, ultraviolet (UV), and near-infrared (NIR) light curves, as well as linear polarization in the optical range, radio (350~GHz to 1~GHz) single-dish data, and 43~GHz VLBA images. We analyse optical spectroscopic observations performed at the Discovery Channel Telescope (DCT) of Lowell Observatory. We also use open-access data from the \emph{Swift} (UVOT and XRT) and \emph{Fermi} satellites.
The radio-to-optical data presented in this paper have been acquired during a multifrequency campaign organized by the Whole Earth Blazar Telescope (WEBT).\footnote{{\tt http://www.oato.inaf.it/blazars/webt/} \citep[see e.g.][]{Bottcher2005, Villata2006, Raiteri2007}.} For completeness, our study includes data from the 2006--2007 WEBT campaigns presented by \citet{Bottcher2007, Larionov2008,Hayashida2012}, and \citet{Pittori2018}. Our paper is organized as follows: Section~\ref{sect:observ} outlines the procedures that we used to process and analyse the data. Section~\ref{color_evolution} displays and analyses the multifrequency light curves, while optical spectra are presented and discussed in \S\ref{spectra}. In \S\ref{sect:sed} we present the spectral energy distribution from radio to $\gamma$-ray frequencies constructed at six different flux states. In \S\ref{correlations} we derive and discuss the time lags between variations in different colour bands. Section \ref{vlba} describes the kinematics of the radio jet as revealed by the VLBA images, and \S\ref{interpretation} relates changes in the structure of the radio jet to the multiwavelength flux variability. Section~\ref{pol_behaviour} gives the results of the optical and radio polarimetry. In \S\ref{conclusions} we discuss the implications of our observational results with respect to the physics of the jet in 3C~279. The abbreviation $TJD$ stands for $JD-2400000.0$. We adopt the cosmological parameters $H_0 = 73.0$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_{\mathrm{matter}} = 0.27$, and $\Omega_{\mathrm{vacuum}} = 0.73$.
\section{OBSERVATIONS AND DATA REDUCTION} \label{sect:observ} \subsection{Optical and near-infrared photometry} \begin{table} \begin{minipage}[t]{0.9\columnwidth} \caption{Ground-based observatories participating in this work.} \label{obs} \begin{centering} \renewcommand{\footnoterule}{} \begin{tabular}{l r c} \hline\hline Observatory & Country & Bands\\ \hline \multicolumn{3}{c}{\it Optical}\\ Abastumani & Georgia & $R$ \\ Belogradchik & Bulgaria & $B, V, R, I$ \\ Calar Alto$^a$ & Spain & $R$ \\ Campo Imperatore (Schmidt) & Italy & $B, V, R$ \\ Catania & Italy & $R$ \\ Crimean AZT-8 (AP7 \& ST-7$^b$) & Russia & $B, V, R, I$ \\ Lowell (Perkins$^b$ \& DCT$^c$) & USA & $B, V, R, I$ \\ Lulin & Taiwan & $R$ \\ Mt.~Maidanak & Uzbekistan & $B, V, R, I$ \\ New Mexico Skies (iTelescope) & USA & $V, R$ \\ Osaka Kyoiku & Japan & $B, V, R, I$ \\ Roque (KVA \& LT) & Spain & $R$ \\ Rozhen (200 cm \& 50/70 cm) & Bulgaria & $U, B, V, R, I$ \\ San Pedro Martir & Mexico & $R$ \\ Siena & Italy & $R$ \\ Sirio & Italy & $R$ \\ St.\ Petersburg$^b$ & Russia & $B, V, R, I$ \\ Steward (Kuiper, Bok \& MMT)$^d$ & USA & $R$ \\ Teide (IAC80 \& STELLA-I) & Spain & $R$ \\ Tijarafe & Spain & $R$ \\ Valle d'Aosta & Italy & $R, I$ \\ Vidojevica (140 \& 60 cm) & Serbia & $B, V, R, I$ \\ \hline \multicolumn{3}{c}{\it Near-infrared}\\ Campo Imperatore (AZT-8) & Italy & $J, H, K$ \\ Teide (TCS) & Spain & $J, H, K$ \\ \hline \multicolumn{3}{c}{\it Radio}\\ Mauna Kea (SMA) & USA & 230, 345 GHz \\ Medicina & Italy & 5, 8 GHz \\ Mets\"ahovi & Finland & 37 GHz \\ Noto & Italy & 5, 8, 43 GHz \\ OVRO & USA & 15 GHz \\ Pico Veleta (IRAM) & Spain & 86, 229 GHz \\ RATAN-600 & Russia & 1, 2, 5, 8, 11, 22 GHz \\ Svetloe & Russia & 5, 8 GHz \\ VLBA & USA & 43 GHz \\ UMRAO & USA & 4.8, 8, 14.5 GHz \\ \hline \end{tabular} \end{centering} $^a$ -- MAPCAT project: http://www.iaa.es/$\sim$iagudo/research/MAPCAT\\ $^b$ -- photometry and polarimetry \\ $^c$ -- spectroscopy \\ $^d$ -- spectropolarimetry \end{minipage} \end{table}
Observations of 3C~279 under the GASP-WEBT program ~\citep[see e.g.,][]{2008A&A...481L..79V, 2009A&A...504L...9V} were performed in 2008--2018 at optical, near-infrared (NIR), and radio bands at 32 observatories, listed in Table~\ref{obs}. We performed photometry in optical bands, as described in \cite{Raiteri1998}, and in NIR bands, as detailed in \cite{Gonzalez-Perez2001}. Following standard WEBT procedures, we compiled and `cleaned' the optical light curves \citep[see, e.g.,][]{Villata2002}. As needed, we applied systematic corrections (mostly from effective wavelengths deviating from those of standard $BVR_CI_C$ bandpasses) to the data from some of the telescopes, in order to adjust the calibration so that the data are consistent with those obtained via standard instrumentation and procedures. The resulting offsets are generally less than 0.01--0.03 mag in $R$ band. We have corrected the optical and NIR data for Galactic extinction using values for each filter reported in the NASA Extragalactic Database (NED)\footnote{\url{http://ned.ipac.caltech.edu/}} \citep{Schlafly2011}. The magnitude to flux conversion adopted the coefficients of \cite{Mead1990}. \subsection{\textit{Swift} observations} \subsubsection {Optical and ultraviolet data} The quasar 3C~279 was observed with the $UVOT$ instrument of the Neil Gehrels \emph{Swift} Observatory in all 6 available filters, $UVW2$, $UVM2$, $UVW1$, $U$, $B$, and $V$. The $UVOT$ data of 3C~279 were downloaded from the \emph{Swift} archive and reduced using the HEAsoft version 6.24 \emph{Swift} sub-package and corresponding HEASARC calibration database (CALDB). If necessary, aspect correction was performed using the \textsc{uvotunicorr} task and all exposures within an observation were summed using the \textsc{uvotimsum} task. 
A magnitude was derived using \textsc{uvotsource} with a 5 arcsec radius aperture centred on the object and a 20 arcsec radius aperture located in a source-free area away from the object for the background. The result was tested for a large coincidence-loss correction factor. Observations with a large magnitude error or an exposure time of less than 40 seconds were discarded. Galactic extinction correction in the ultraviolet (UV) bands was performed using the interstellar extinction curve of \cite{Fitzpatrick1999} with $R_V = 3.1$. The corrections of the $U$, $B$, and $V$ band data were made according to the \cite{Schlafly2011} values, as listed in the NASA Extragalactic Database ($A_U=0.124, A_B=0.104, A_V=0.078$). All of the derived parameters are given in Table~\ref{swift_calib}. \begin{table} \caption{\bf Swift calibrations used for 3C~279 analysis.} \label{swift_calib} \begin{tabular}{c | c | c | c | c | c | c |} \hline Bandpass & v & b & u & uw1 & um2 & uw2 \\ \hline $\lambda$, \AA & 5427 & 4353 & 3470 & 2595 & 2250 & 2066 \\ \hline $A_\lambda$, mag & 0.078 & 0.104 & 0.124 & 0.175 & 0.265 & 0.251 \\ \hline conv. factors & 2.603 & 1.468 & 1.649 & 4.420 & 8.372 & 5.997 \\ \hline \end{tabular}\\ \flushleft{Note -- Units of count rate to flux conversion factors are $10^{-16}{\rm erg}\,{\rm cm^{-2}s^{-1}}$\AA$^{-1}$.} \end{table} \subsubsection{X-ray data} We downloaded data obtained for 3C~279 with the X-Ray Telescope (XRT) of the Neil Gehrels \emph{Swift} Observatory from the \emph{Swift} archive, which covers the period from 2006 January to 2018 June. All observations were performed in Photon Counting (PC) mode, with an exposure time of 1--3~ksec, except for two cases when the quasar was in an active state and several exposures as short as 200 seconds were carried out during a given day. The XRT data were reduced using the HEAsoft version 6.24 \emph{Swift} sub-package and the corresponding HEASARC CALDB.
The FTOOLS task \textsc{xrtpipeline} was run to clean the data and to create an exposure file used in the task \textsc{xrtmkarf}, which generates an Ancillary Response Function file for input into the spectral fitting program \textsc{Xspec}. The source and background images and spectra were extracted using the task \textsc{Xselect}. For the source, photons were counted over a circular region of 20-pixel radius centred on the object's coordinates, while for the background region a larger annulus was used, with inner and outer radii of 75 and 100 arcsec, respectively, centred on the source and selected to avoid any contaminating sources. In preparation for running \textsc{Xspec}, we grouped the energy channels via the \textsc{grppha} tool such that each energy bin contained at least one photon, as recommended in the \textsc{Xspec} manual \citep{Arnaud1996} for Cash statistics, which we used to evaluate the goodness of fit \citep{Cash1979}. Spectra of 3C~279 from 0.3 to 10 keV were fit within \textsc{Xspec} using an absorbed power-law model, with the $n_{\mathrm H}$ parameter fixed at $2.21\times 10^{20}$ cm$^{-2}$ \citep{Dickey1990}. Each epoch that suffered from pile-up ($>0.5$ counts~s$^{-1}$ for PC data) was re-examined as outlined at \url{http://www.swift.ac.uk/analysis/xrt/pileup.php}. This resulted in the creation of a new annular source region, with the size of the inner ring determined by the King function \citep{Moretti2005}. \subsection{\emph{Fermi} LAT observations} \label{sec:LAT} The $\gamma$-ray data were obtained with the \emph{Fermi} Large Area Telescope (LAT), which observes the entire sky every 3 hours at energies of 20~MeV--300~GeV \citep{Atwood2009}. The construction of the $\gamma$-ray light curve and the time-dependent spectral energy distribution (SED) was based on \emph{Fermi} LAT observations at 0.1--200~GeV, obtained via the open-access mission website\footnote{\url{https://fermi.gsfc.nasa.gov/}}.
To obtain the light curve, we used the standard unbinned likelihood analysis pipeline of the \emph{Fermi} Science Tools {\sc v10r0p5} package with the instrument response function {\sc p8r2\_v6}, Galactic diffuse emission model {\sc gll\_iem\_v06}, and isotropic background model {\sc iso\_pr2\_source\_v6\_v06}. We used the adaptive binning technique described in \cite{Larionov2017} to break the entire data time range into integration bins of variable length, such that high-flux periods were split into shorter bins than were other time spans. This approach allows us to have very fine (down to $0\fd25$) binning for active states while maintaining a significant signal level (maximum-likelihood test statistic $TS \ge 10$) during low-flux periods. To study $\gamma$-ray SED variations over time, we performed a binned likelihood analysis, which allows one to find a flux value over a set of energy bins, in contrast to an unbinned analysis, which can infer only the total photon flux over all energies. We split the entire time range into 50 time bins of roughly the same total flux. To do so, we integrated the light curve over time using fixed values of spectral parameters to estimate the total energy received from the object in $\gamma$-rays. We then divided the light curve into 50 sub-sections, each containing 2.0 per cent of the total energy. The number 50 was adopted empirically, such that the flux integrated over the resulting bins had a high enough significance level to run the binned likelihood analysis: the number of logarithmically distributed energy bins with detected signal was at least 12 in every time bin. The shortest time bin we obtained was 2.75 days, while the longest was almost 312 days. For running the binned analysis, we used the {\sc fermipy} Python wrapper~\citep{Wood2017} around the standard \emph{Fermi}\,ST package. 
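The equal-fluence partitioning described above can be written compactly (an illustrative sketch, not the actual analysis code; the function name and interface are ours):

```python
import numpy as np

def equal_fluence_edges(t, f, n_bins):
    """Split a light curve into n_bins time intervals, each containing an
    (approximately) equal share of the time-integrated flux.

    t, f : arrays of observation times (sorted ascending) and fluxes.
    Returns an array of n_bins + 1 bin-edge times.
    """
    t = np.asarray(t, dtype=float)
    f = np.asarray(f, dtype=float)
    # Cumulative integral of the light curve (trapezoidal rule).
    cum = np.concatenate(([0.0],
                          np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(t))))
    # Target cumulative-fluence levels: 0, 1/n, 2/n, ..., 1 of the total.
    targets = np.linspace(0.0, cum[-1], n_bins + 1)
    # Invert the monotonic cumulative curve to obtain the edge times.
    return np.interp(targets, cum, t)
```

For a flat light curve this reduces to equal-width bins, while a flare concentrates several short bins around the peak, mirroring the adaptive behaviour described above.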
\subsection{Radio observations} The University of Michigan Radio Astronomy Observatory (UMRAO) total flux density observations presented here were obtained at frequencies of 4.8, 8.0, and 14.5 GHz ($\lambda =$ 6.2, 3.8, and 2.0~cm) with the UMRAO 26-m paraboloid antenna as part of a long-term extragalactic variable source monitoring program \citep{1985ApJS...59..513A}. The adopted flux density scale is based on \citet{1977A&A....61...99B} and uses Cassiopeia A (3C 461) as the primary standard. In addition to observations of Cas A, observations of nearby secondary flux density calibrators were interleaved with observations of the target sources, typically every 1.5 to 2 hours, in order to verify the stability of the antenna gain and the accuracy of the telescope pointing. Each daily-averaged observation of 3C~279 consisted of a series of 8--16 individual measurements obtained over 25--45 minutes, and observations were carried out when the source was within 3 hours of the meridian. Observations with the RATAN-600 radio telescope, operated by the Special Astrophysical Observatory (SAO) of the Russian Academy of Science (RAS), were carried out in transit mode \citep{1972IzPul.188....3K,1979S&T....57..324K,1993IAPM...35....7P}. The flux density measurements were obtained at 6 frequencies -- 22, 11, 7.7, 4.8, 2.3, and 1.2 (or 0.97)~GHz ($\lambda =$ 1.38, 2.7, 3.9, 6.2, 13 and 24 (or 31)~cm) -- over several minutes per object. All continuum receivers are total-power radiometers with square-law detection. The data are registered using a regular universal registration system based on the hardware-software subsystem ER-DAS (Embedded Radiometric Data Acquisition System) \citep{2011AstBu..66..109T}. The data reduction procedures and main parameters of the antenna and radiometers are described in \cite{1999A&AS..139..545K,2014A&A...572A..59M} and \cite{2016AstBu..71..496U}. 
Observations at 5, 8 and 43 GHz were performed with the Medicina and Noto radio telescopes; a description of data reduction and analysis can be found in \citet{DAmmando2012}. 3C~279 was also observed at 15\,GHz as part of a high-cadence blazar monitoring program using the Owens Valley Radio Observatory (OVRO) 40-m Telescope \citep{Richards2011}. Observations were performed approximately twice per week. 3C~286 was used as the primary flux density calibrator with the scale set to 3.44 Jy following \citet{Baars1977}. In the period from March 8, 2018 to January 2, 2019, radio observations at 4.8 and 8.5~GHz ($\lambda =$ 6.2 and 3.5~cm) were obtained with the 32~m diameter, fully-steerable radio telescope RT-32 at the Svetloe Observatory of the Institute of Applied Astronomy of the RAS. The source 3C~295 was used as the primary calibration standard, and the secondary standard was DR21. The average measurement errors are estimated as 2 per cent (4.8~GHz, 61 points) and 1 per cent (8.5~GHz, 23 points). The short millimetre wavelength data presented in this paper were obtained under the POLAMI (Polarimetric Monitoring of AGN at Millimetre Wavelengths) Program\footnote{\url{http://polami.iaa.es}} \citep{Agudo2018a,Agudo2018b,Thum2018}. POLAMI is a long-term program to monitor the polarimetric properties (Stokes $I$, $Q$, $U$, and $V$) of a sample of about 40 bright AGN at 3.5 and 1.3 millimetre wavelengths with the IRAM 30m Telescope near Granada, Spain. The program has been running since October 2006, and it currently has a time sampling of $\sim$2 weeks. The XPOL polarimetric observing setup, described in \citet{Thum2008}, has been routinely used since the start of the program. The reduction and calibration of the POLAMI data presented here are described in detail in \citet{Agudo2010,Agudo2014,Agudo2018a}.
Observations at 230 and 350 GHz were obtained as part of the long-term, ongoing monitoring program of mm-wave calibrator sources using the Submillimetre Array, near the summit of Mauna Kea, Hawaii (see \url{http://sma1.sma.hawaii.edu/callist/callist.html}). \subsection{Very long baseline interferometry}\label{vlba} Since 2007 June, the quasar 3C~279 has been monitored approximately monthly by the Boston University (BU) group with the Very Long Baseline Array (VLBA) at 43 GHz within a sample of gamma-ray and radio bright blazars (the VLBA-BU-BLAZAR program\footnote{\url{http://www.bu.edu/blazars/VLBAproject.html}}). The data are calibrated and imaged as presented in \cite{Jorstad2017}.\footnote{Images and calibrated data of 3C~279 at all epochs can be found at \url{http://www.bu.edu/blazars/VLBA_GLAST/3c279.html}.} We have analysed the total and polarized intensity images during the period from 2007 June to 2018 August. This results in 111 images in each Stokes parameter, $I$, $Q$, and $U$. The total intensity images were modelled by circular components with Gaussian brightness distributions in the same manner as described by \cite{Jorstad2017} using the routine \texttt{modelfit} in the \texttt{Difmap} software package \citep{Shepherd1997}. Each $I$ image is modelled by a number of components (knots), with parameters that result in the best fit to the $(u,v)$ data according to a $\chi^2$ test. Each knot is characterized by its flux density, $F$, distance from the core, $R$, position angle with respect to the core, $\Theta$, and size (FWHM). Uncertainties in the parameters are determined according to the formalism given in \cite{Jorstad2017}. The core, A0, is considered a stationary feature, located at the northeast (narrow) end of the jet structure. We have also modelled Stokes $Q$ and $U$ visibility data for components detected in the total intensity images, following the technique described in \cite{Jorstad2007}.
For comparison with the multi-wavelength light curves and polarization curves analysed in this paper, we have calculated, at each epoch, the total and linearly polarized flux densities and the electric vector position angle (EVPA) by summing the $I_i$, $Q_i$, and $U_i$ flux densities of all components $i$ of the source. We have constructed a total flux density light curve at 43~GHz, $F_{\rm total}$, representing the sum of the flux densities of all components, along with curves of the degree of polarization, $P=\sqrt{F_{Q,{\rm total}}^2 + F_{U,{\rm total}}^2}/F_{\rm total}\times 100$ per cent, and of the EVPA in degrees, ${\rm EVPA}=0.5\arctan(F_{U,{\rm total}}/F_{Q,{\rm total}})$. The light curve at 43~GHz is presented in Figure~\ref{lc_radio}, and the polarization parameters at 43~GHz versus epoch are given in Fig.~\ref{optical_radio_polar}. \subsection{Optical polarimetry} In this study, we include optical polarimetric data obtained with telescopes at the Crimean Astrophysical Observatory, St.~Petersburg University, Lowell Observatory (Perkins Telescope), Steward Observatory, and Calar Alto Observatory. The Galactic latitude of 3C~279 is 57\degr and $A_V$ = 0.078 mag, so that interstellar polarization (ISP) in its direction is $<0.3$ per cent. To correct for ISP, the mean relative Stokes parameters of nearby stars were subtracted from the relative Stokes parameters of the object. This also removes the instrumental polarization, under the assumption that the stellar radiation is intrinsically unpolarized. The fractional polarization has been corrected for statistical bias, according to \citet{Wardle1974}. For some of our analysis (see \S\ref{pol_behaviour}), we resolve the $\pm180\degr$ ambiguity of the EVPA by adding/subtracting $180\degr$ each time that the subsequent value of the EVPA is $>90\degr$ less/more than the preceding one.
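The conversion from summed Stokes flux densities to polarization degree and EVPA, together with the $\pm180\degr$ unwrapping rule used here, can be sketched as follows (illustrative Python; we use the quadrant-aware arctan2 form of the EVPA relation, and the function names are ours):

```python
import numpy as np

def pol_params(I, Q, U):
    """Degree of polarization (per cent) and EVPA (degrees) from Stokes
    flux densities."""
    P = 100.0 * np.hypot(Q, U) / I
    evpa = 0.5 * np.degrees(np.arctan2(U, Q))   # in (-90, 90]
    return P, evpa

def unwrap_evpa(evpa):
    """Resolve the +/-180 deg ambiguity: shift each value by a multiple
    of 180 deg so that consecutive points differ by at most 90 deg."""
    out = np.array(evpa, dtype=float)
    for i in range(1, len(out)):
        d = out[i] - out[i - 1]
        out[i] -= 180.0 * np.round(d / 180.0)
    return out
```

The unwrapping step simply minimizes the jump between consecutive EVPA values, which is the algorithmic form of the add/subtract-$180\degr$ prescription above.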
\subsection{Spectroscopic observations} \label{sec:dct} We performed spectral observations of 3C~279 with the 4.3~m Discovery Channel Telescope (DCT, Lowell Observatory, located in Happy Jack, AZ, USA) using the DeVeny spectrograph\footnote{\url{https://jumar.lowell.edu/confluence/pages/viewpage.action?pageId=23234141\#DCTInstrumentationCurrent\&Future-DeVeny}} and Large Monolithic Imager (LMI)\footnote{\url{https://jumar.lowell.edu/confluence/pages/viewpage.action?pageId=23234141\#DCTInstrumentationCurrent\&Future-LMI}}. The DeVeny spectrograph, with 300 grooves per mm and a grating tilt of 22\fdg35, was employed to produce a spectrum from 3500\AA\, to 7000\AA, centred at 5000\AA, with a dispersion of 2.2~\AA/pixel and a pixel size of 0.253 arcsec. We used a slit width of 2.5 arcsec. The technique for the spectral observations and data reduction was adapted from that developed at the NASA Infrared Telescope Facility for observations with SpeX \citep{Vacca2003}, which is based on an observation of a `telluric standard' whose intrinsic spectrum is known. This allowed derivation of a system throughput curve, which was then applied to the spectrum of the source. Each spectral observation of 3C~279 included 2--3 exposures of 900~s each. A calibration star, HD~112587, of spectral type A0 and located $<1\degr$ from 3C~279, was observed just before and after the quasar, with two exposures of 60~s each. The exposure lengths of the target and telluric standard varied slightly, depending on the brightness of 3C~279 and weather conditions. Since the DCT is capable of switching between instruments within 2--3~minutes, the spectral observations were followed by photometric observations of the quasar in $R$ and $V$ bands with the LMI in order to determine the flux calibration. Bias and flat-field images were obtained for the corresponding corrections for both instruments.
\section{RESULTS AND DISCUSSION}\label{results} \subsection{Flux and colour evolution}\label{color_evolution} The optical light curves of 3C~279, collected by the WEBT participating teams during the 2007--2018 time interval, are shown in Fig.~\ref{lc_UV_NIR} (\emph{a--k}); horizontal lines in panel (\emph{e}) mark the data published in earlier WEBT-GASP papers \citep{Bottcher2007, Larionov2008, Hayashida2012, Pittori2018}. In panels (\emph{f--k}) of the same figure we show \emph{Swift} UVOT data (open circles). As is common for the WEBT campaigns, the data coverage in the optical range is dense (nearly 5000 data points in $R$ band, 1000--1500 in $B$, $V$ and $I$) throughout each observational season. The most intensive observations were performed during high-activity states of the blazar. \begin{figure} \begin{center} \includegraphics[width=\columnwidth,clip]{lc_UV_NIR.pdf} \caption{Optical, near-infrared, and ultraviolet light curves of 3C~279 over the time interval of the WEBT campaign. Horizontal lines in panel (\emph{e}) mark the data published in earlier WEBT-GASP papers \protect\citep{Bottcher2007, Larionov2008, Hayashida2012, Pittori2018}. Open circles in panels (\emph{f}) through (\emph{k}) refer to the \emph{Swift} data.} \label{lc_UV_NIR} \end{center} \end{figure} Radio flux densities versus time are plotted in Fig.~\ref{lc_radio}. The 43~GHz light curve is constructed based on the VLBA images as described in \S \ref{vlba}. Slanted dashed lines connect minima and maxima in the different light curves, which may correspond to the same emission events of 3C~279 at different frequencies. \begin{figure} \begin{center} \includegraphics[width=\columnwidth,clip]{lc_radio.pdf} \caption{Radio light curves of 3C~279 over the time interval 2007--2018. For comparison, the $R$ band light curve is reproduced in the bottom panel.
Dashed slanted lines connect positions of the main extrema of the light curves (see \S~\protect\ref{radio_corr}).} \label{lc_radio} \end{center} \end{figure} We plot the high-energy (\emph{Fermi} LAT and \emph{Swift} XRT) light curves in Fig.~\ref{lc_gamma_X-ray}. Visual inspection reveals that the general patterns of the $\gamma$-ray and X-ray light curves are quite similar. \begin{figure} \begin{center} \includegraphics[width=\columnwidth,clip]{lc_gamma_X-ray.pdf} \caption{High-energy light curves of 3C~279 over the time interval 2006--2018. For comparison, the $R$ band light curve is reproduced in the bottom panel. Dashed vertical lines mark epochs when SEDs were constructed (see \S~\protect\ref{sect:sed}).} \label{lc_gamma_X-ray} \end{center} \end{figure} The question of whether a blazar's optical radiation becomes redder or bluer when it brightens has been the topic of numerous papers. It is commonly agreed that the relative contributions of the big blue bump (BBB) and Doppler-boosted synchrotron radiation from the jet differ between quiescence and outbursts, and that this is one factor that leads to variability of the SED. In a recent paper, \citet{Isler2017} parametrized the optical--NIR variability of 3C~279 in terms of the combined contributions of the accretion disc and the jet. However, most previous studies are qualitative rather than quantitative, owing to the difficulty of evaluating the contributions of constant and slowly varying components, such as starlight from the host galaxy, the accretion disc, and the broad emission-line region. A straightforward way to isolate the contribution of the component of radiation that is variable on the shortest time scales (presumably, synchrotron radiation) has been suggested by Hagen-Thorn \citep[see, e.g.,][and references therein]{Hagen-Thorn2008}.
The method is based on plots of (quasi)simultaneous flux densities in different colour bands and the construction of the relative continuum spectrum from the slopes of the sets of flux-flux relations thus obtained. This method has been successfully applied to the quantitative analysis of the synchrotron radiation of several blazars, for example, 0235+164 \citep{Hagen-Thorn2008}; 3C~279, BL~Lac, and CTA~102 \citep{Larionov2008, Larionov2010,Larionov2016a}; 3C~454.3 \citep{Jorstad2010}; and 3C~66a, S4~0954+65, and BL~Lac \citep{Gaur2019}. We adopt the same approach, as displayed in Fig.~\ref{SED_variable}, where the flux densities of 3C~279 in $B$, $V$ and $I$ bands are plotted against the $R$-band flux density. The three plots in the top panels, obtained during different intervals of enhanced activity, together with analogous dependencies in the UV and NIR bands, allow a derivation of the relative spectrum of the variable component, plotted in the bottom panel of the same figure. With this, we are able to trace the seasonal changes of the spectral index $\alpha$ (in the sense $F_\nu \propto \nu^{-\alpha}$) over the entire UV--NIR range. We split the time range into three parts, determined via visual inspection of Fig.~\ref{lc_UV_NIR}: 2008--2010 (period of general decline of the flux, down to the unprecedented low level of 2010), 2011--2016 (modest level of activity over the optical range), and 2017--2018 (enhanced mean flux and short time-scale variations in the optical bands). We obtain $\alpha= 1.65\pm0.02$ for 2008--2010, $\alpha= 1.47\pm0.02$ for 2011--2016, and $\alpha= 1.60\pm0.02$ for 2017--2018. The values of the slopes refer to the central frequency, corresponding to $R$ band. We emphasize that these values are for the \textit{variable} component only, not for the total flux density.
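In the flux-flux method, if the variable component has a power-law spectrum $F_\nu \propto \nu^{-\alpha}$, the slope of the linear $F_{\nu_1}$-versus-$F_{\nu_2}$ relation equals $(\nu_1/\nu_2)^{-\alpha}$, so the spectral index follows directly from the fitted slope. A minimal sketch (illustrative only; not the code used for the values quoted above):

```python
import numpy as np

def variable_component_alpha(f_band, f_ref, nu_band, nu_ref):
    """Spectral index of the variable component from a flux-flux plot.

    For a variable component with F_nu ~ nu**(-alpha), the slope of the
    linear F_band-versus-F_ref relation is (nu_band/nu_ref)**(-alpha);
    any constant (host, disc, line) contribution only shifts the intercept.
    """
    # Least-squares slope of F_band against F_ref.
    slope = np.polyfit(f_ref, f_band, 1)[0]
    return -np.log(slope) / np.log(nu_band / nu_ref)
```

Because slowly varying components enter only through the intercept, the slope, and hence $\alpha$, refers to the variable (synchrotron) component alone.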
We also find mild curvature (convexity) of the spectrum of the variable (synchrotron) component over the interval 2017--2018, i.e., a softening of the spectrum, despite elevated flux levels across the entire UV-optical frequency range (see Figures~\ref{lc_UV_NIR},~\ref{lc_gamma_X-ray}). We note that a very similar value of $\alpha= 1.58\pm0.01$ in 2006--2007 was reported by \cite{Larionov2008}. \begin{figure} \begin{center} \includegraphics[width=\columnwidth,clip]{SED_variable.pdf} \caption{Flux-flux dependencies of $F_B$, $F_V$ and $F_I$ vs $F_R$ (upper row of panels). Relative continuum spectrum of variable component of radiation from UV through NIR bands (bottom panel).} \label{SED_variable} \end{center} \end{figure} \subsection{Optical spectra and variability of broad emission-line clouds} \label{spectra} We analyse the optical spectroscopic behaviour of 3C~279 using data obtained at the 4.3~m Discovery Channel Telescope (DCT). Figure~\ref{3c279spectra} displays our spectra of 3C~279 from the 2017 and 2018 observing seasons. All of these spectra contain a prominent \ion{Mg}{ii} $\lambda 2800$\AA\, broad emission line redshifted to $\lambda 4280$\AA. In addition to this prominent feature, we also mark in the figure the low-redshift absorption line of \ion{Mg}{ii} at $\lambda 3906$\AA\, arising in a foreground galaxy at $z=0.395$ \citep[see][]{Stocke1998}. We also mark an emission line of \ion{O}{ii}, intrinsic to 3C~279. We note that there are a number of broad spectral features between $\lambda 3600$ and 5300{\AA} (rest wavelengths in the 2350-3450{\AA} range), often called the `little blue bump' and attributed to many blended Fe lines \citep[e.g.,][]{Vestergaard2001}. Visual inspection of the spectra displayed in Figure \ref{3c279spectra} reveals that (1) the line flux of \ion{Mg}{ii} correlates with the continuum flux and (2) there is marked asymmetry (`red wing') to this line, as previously noted by \citet{Punsly2013}. 
We deblend the observed line profiles, fitting them with two Gaussian functions superposed on a featureless continuum. Examples of the fit are given in Figure~\ref{MgIIgaussfit}. The wide range of continuum flux densities, seen in Figure~\ref{3c279spectra}, allows us to plot the dependencies of the fluxes in the `blue' and `red' components of \ion{Mg}{ii} on the continuum flux; see Figure~\ref{MgIIblue_red}. This clear correlation is similar to that found in the \ion{Mg}{ii} line of the quasar 3C~454.3 by \citet{Leon-Tavares2013}. Less pronounced line flux variability in the spectra of several blazars has also been reported by \citet{Isler2015}. In contrast, stability of emission-line fluxes has been reported in several previous studies: 3C~454.3 \citep{Raiteri2008}, PKS~1222+216 \citep{Smith2011}, 4C~38.41 \citep{Raiteri2012}, OJ~248 \citep{Carnerero2015}, and CTA~102 \citep{Larionov2016a}. \begin{figure} \centering \includegraphics[width=\columnwidth,clip]{3c279spectra.pdf} \caption{Spectra of 3C~279 during the 2017 and 2018 observing seasons.} \label{3c279spectra} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth,clip]{MgIIgaussfit.pdf} \caption{Examples of Gaussian fitting of the \ion{Mg}{ii} line profile for different levels of the continuum flux.} \label{MgIIgaussfit} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth,clip]{MgIIblue_red.pdf} \caption{Dependencies of \ion{Mg}{ii} line fluxes vs.\ continuum flux density. The 95\% confidence intervals are also shown. For comparison, the dependence of [\ion{O}{ii}] $\lambda_\mathrm{rest}$ 3727\AA\, flux vs.\ the adjacent continuum flux density is plotted near the bottom.} \label{MgIIblue_red} \end{figure} The two-component profile of the \ion{Mg}{ii} line in 3C~279 could be explained by radial infall of matter onto the accretion disc surrounding the central black hole.
The speed of the `red' component is then $\sim $3500 km~s$^{-1}$, corresponding to a $\sim$50~\AA\, wavelength shift of that component. The free-fall velocity of matter at a distance $R_{\rm pc}$ pc from a black hole of mass $M_9=10^9 $ M\sun\, is $v_{\rm in}= 2940 (M_9/R_{\rm pc})^{1/2}$ km~s$^{-1}$. As reported in \citet{Nilsson2009}, the most reliable estimate of the black hole mass in 3C~279 is $10^{8.9}$ M\sun, hence the free-fall velocity is $\sim 3500$~km~s$^{-1}$ at $R_{\rm pc}\sim 0.6$~pc. Another approach \citep{Corbin1997} suggests that the redward profile asymmetries can be produced by the effect of gravitational redshift on the emission from a `very broad line region,' provided that this region takes the form of a flattened ensemble of clouds viewed nearly face-on and with a mean distance of a few tens of gravitational radii from the black hole. The effect of gravitational redshift of the line emitted from a source at a distance $R_\mathrm{source}$ from the black hole is given by \begin{equation} \frac{\lambda_\mathrm{obs}}{\lambda_\mathrm{source}}= \left(1- \frac{R_\mathrm{Sch}}{R_\mathrm{source}}\right)^{-1/2}. \label{eqn_grav} \end{equation} Here $R_\mathrm{Sch}=2GM/c^2$ is the Schwarzschild radius of the central black hole with mass $M$. The displacement of the red component of \ion{Mg}{ii} of $\sim$50\AA\, (see Fig.~\ref{MgIIgaussfit}) corresponds to a position of the emitting cloud of $\sim$43$R_\mathrm{Sch}\approx3\times10^{-3}$~pc, or about 4 light-days, from the black hole. While this is closer to the black hole than expected for a cloud emitting \ion{Mg}{ii} lines, which do not require high ionization parameters, we note that such small distances of emission-line regions from black holes have been inferred from microlensing studies of quasars \citep{Guerras2013}.
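The distance estimate from equation (\ref{eqn_grav}) can be checked numerically: inverting the equation gives $R_\mathrm{source}=R_\mathrm{Sch}/[1-(\lambda_\mathrm{obs}/\lambda_\mathrm{source})^{-2}]$. A sketch of the arithmetic (illustrative; cgs constants, function name ours):

```python
import numpy as np

G = 6.674e-8        # gravitational constant, cm^3 g^-1 s^-2
C = 2.998e10        # speed of light, cm s^-1
M_SUN = 1.989e33    # solar mass, g
PC = 3.086e18       # parsec, cm

def grav_redshift_distance(dlam, lam, m_bh_msun):
    """Distance (in Schwarzschild radii and in pc) at which gravitational
    redshift shifts a line by dlam at wavelength lam, for a black hole of
    mass m_bh_msun solar masses."""
    r_sch = 2.0 * G * m_bh_msun * M_SUN / C**2          # cm
    ratio = 1.0 + dlam / lam                            # lambda_obs / lambda_source
    r_source = r_sch / (1.0 - ratio**-2)                # inverted eq. (1)
    return r_source / r_sch, r_source / PC
```

With the $\sim$50\AA\, shift of the redshifted \ion{Mg}{ii} line at $\lambda4280$\AA\, and $M=10^{8.9}$~M\sun, this reproduces $R_\mathrm{source}\sim43\,R_\mathrm{Sch}\approx3\times10^{-3}$~pc.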
We measure the \ion{Mg}{ii} line FWHM for both the `blue' and `red' components, from which one can derive the velocity dispersion of the gas clouds in the broad-line region (BLR), and obtain $v_\mathrm{blue}= 3450\pm200\:\mathrm{km\: s^{-1}}$ and $v_\mathrm{red}= 3800\pm200\:\mathrm{km\: s^{-1}}$, respectively. These values are lower limits to the actual velocity range of the clouds, since the expected velocity dispersion depends on the geometry and orientation of the BLR~\citep[see, e.g.,][]{Wills1995}. In fact, because the line of sight to a blazar is probably nearly perpendicular to the accretion disc, the de-projected velocity range is likely to be a factor $\gtrsim 2$ higher than the FWHM given above if the BLR is in the disc. The strong relationship between the optical (and presumably UV) continuum and the emission-line flux suggests that the radiation responsible for the excitation of the broad \ion{Mg}{ii} lines comes mostly from the jet. This is difficult to reconcile with the gravitational redshift hypothesis for the displacement of the red wing, since the optical emission from the jet is unlikely to be highly beamed only several light-days from the black hole \citep[see][]{Marscher2010}. If the dominant exciting radiation instead were to arise in the accretion disc, we would not expect to see such a strong correlation between the synchrotron continuum and line flux: the only direct association of flare and superluminal knot production in the jet with an event in the accretion disc corresponds to a \emph{decrease} in the optical-ultraviolet flux of the disc (in the radio galaxy 3C~120; \citealt{Marscher2018}). Since the jet emission is expected to be highly beamed by a Doppler factor $\sim 20$ \citep{Jorstad2017}, the association of variations in emission-line flux with beamed synchrotron radiation from the jet implies that the clouds responsible for these lines lie within $\sim 10\degr$ of the jet axis, as proposed by \citet{Leon-Tavares2015}.
The deblending of the \ion{Mg}{ii} line discussed above does not take into account a possible contribution from time-variable \ion{Fe}{ii} emission lines. \ion{Fe}{ii} lines occur over a wide range of wavelengths near the \ion{Mg}{ii} line and are expected under the physical conditions that produce strong \ion{Mg}{ii} emission. We note that \citet{Patino-Alvarez2018} conclude that \ion{Fe}{ii} emission is negligible in the vicinity of the \ion{Mg}{ii} line in the spectra of 3C~279 that they have studied. Nevertheless, given the presence of a strongly time-variable red wing to the \ion{Mg}{ii} line, which is difficult to explain physically \citep[see above and][]{Punsly2013}, we plan to carry out a more thorough analysis of Mg and Fe line emission in a future study. Based on the above considerations, we tentatively conclude that the variable, displaced (by 3500 km~s$^{-1}$) component of the \ion{Mg}{ii} emission line arises from infalling clouds located $\sim0.6$~pc from the black hole and outside the jet, but within $\sim10\degr$ of the jet axis. \subsection{Spectral energy distributions}\label{sect:sed} Figure \ref{3c279sed} presents the SED ($\log \nu F_\nu$ vs.\ $\log \nu$) of 3C~279 at 6 epochs with different levels of activity, displaying the usual double-hump shape. We follow the common interpretation that the humps correspond to synchrotron radiation at lower frequencies and inverse Compton (IC) scattering at high energies. The epochs, which are also marked in Figure \ref{lc_gamma_X-ray}, include TJD~58120, when the $\gamma$-ray to optical--IR flux ratio at the corresponding peaks was $\sim60$, as well as other epochs when it was closer to unity.
As has been discussed by \citet{Sikora2009}, the flux ratio should not greatly exceed unity if the $\gamma$-ray emission occurs via the synchrotron self-Compton (SSC) mechanism, corresponding to IC scattering of synchrotron photons from the jet, with the same population of electrons responsible for both processes. Otherwise, the seed photons for the scattering are likely generated outside the jet (external Compton, or EC, radiation). Among the six SEDs displayed in Figure \ref{3c279sed}, only two epochs (TJD~57189 and 58120) are inconsistent with SSC $\gamma$-ray emission solely on this basis. A third epoch (TJD~57820) includes the highest optical flux found in our study; the flux of the peak of the $\gamma$-ray SED was only a factor of $\lesssim4$ higher than that of the synchrotron peak. The SED should generally rise with frequency up to a point $\nu_{\rm peak}$ where the spectral index steepens to unity. This corresponds to the critical frequency of electrons with energy per unit rest mass $\gamma_{\rm peak}$, at which the energy distribution steepens to $N(\gamma)\propto \gamma^{-3}$. In some models, this occurs at a break in the injected electron energy distribution \citep[e.g.,][]{Sikora2009}, while in others there is a more gradual downturn of a log-parabolic energy distribution \citep[e.g.,][]{Massaro2006}. In the model of \citet{Marscher2014}, the volume filling factor of the highest-energy electrons is inversely proportional to energy owing to variations in magnetic field direction relative to shock fronts. This causes a steepening of the energy distribution when averaged over the entire volume. The shock model of \citet{Marscher1985} produces a break from radiative energy losses at an energy that evolves with time as the shock moves down the jet.
If we tentatively assume that the $\gamma$-ray portion of the SED on TJD~57820 is produced by SSC emission, we can estimate the value of $\gamma_{\rm peak}$ as \begin{equation} \gamma_{\rm peak} \sim [\nu^{\rm SSC}_{\rm peak}/\nu^{\rm S}_{\rm peak}]^{1/2} \sim 4\times 10^4, \label{eq:gamma} \end{equation} where we have taken $\nu^{\rm S}_{\rm peak}\sim 5\times10^{13}$~Hz (see Fig.\ \ref{3c279sed}). The corresponding magnetic field strength is \begin{equation} B \sim [\nu^{\rm S}_{\rm peak}(1+z)]/[(2.8\times 10^6~{\rm Hz})\gamma_{\rm peak}^2\delta]~{\rm G} \sim 4\times10^{-4}~{\rm G} \label{eq:b} \end{equation} \citep[e.g.,][]{Rybicki1979}, where $\delta$ ($\sim40$ in 2017; see \S\ref{vlba}) is the Doppler beaming factor and $z=0.538$ is the redshift of 3C~279. This is implausibly low for a compact emission region on parsec scales; hence we conclude that even for the flare on TJD~57820, with such a high synchrotron amplitude, the $\gamma$-ray emission was produced by the EC process. SSC emission could have dominated on TJD~54900 and 55280, when the $\gamma$-ray to synchrotron flux ratio was $\lesssim1$ and the value of $\nu^{\gamma}_{\rm peak}$ was relatively low, but the absence of flux measurements at the SED peaks at these epochs precludes an accurate assessment of this possibility. In the case of EC emission, which we infer to apply to the outbursts of 3C~279 with the highest fluxes \citep[as concluded earlier by][]{Sikora2009,Hayashida2015, Ackermann2016}, the frequency of the peak in the $\gamma$-ray SED is given by \begin{equation} \nu^{\rm EC}_{\rm peak} \approx \nu'_{\rm seed} \gamma_{\rm peak}^2 \delta (1+z)^{-1} \propto \nu_{\rm seed} \Gamma \delta (1+z)^{-1}, \label{ECfreqpeak} \end{equation} where $\nu_{\rm seed}\approx \nu'_{\rm seed}\Gamma^{-1}$ is the frequency of the peak in the SED of the seed photons in the host galaxy rest frame, and $\Gamma$ is the bulk Lorentz factor of the emitting plasma.
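The SSC consistency check of equations (\ref{eq:gamma}) and (\ref{eq:b}) can be reproduced numerically (an illustrative sketch; the adopted $\nu^{\rm SSC}_{\rm peak}\approx8\times10^{22}$~Hz is our assumed value, chosen to be consistent with an IC-to-synchrotron peak-frequency ratio of $\sim10^9$, and the function name is ours):

```python
def ssc_check(nu_s_peak, nu_ssc_peak, z=0.538, delta=40.0):
    """Electron Lorentz factor at the SED peaks (eq. 2) and the magnetic
    field implied if the gamma-ray hump is SSC emission (eq. 3).

    nu_s_peak, nu_ssc_peak : observed synchrotron and SSC peak frequencies, Hz.
    Returns (gamma_peak, B in gauss).
    """
    gamma_peak = (nu_ssc_peak / nu_s_peak) ** 0.5
    b_gauss = nu_s_peak * (1.0 + z) / (2.8e6 * gamma_peak**2 * delta)
    return gamma_peak, b_gauss
```

With $\nu^{\rm S}_{\rm peak}=5\times10^{13}$~Hz and $\delta\sim40$ this yields $\gamma_{\rm peak}\sim4\times10^4$ and $B\sim4\times10^{-4}$~G, the implausibly low field that rules out SSC for this flare.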
Note that $\nu^{\rm EC}_{\rm peak}$ is approximately proportional to $\delta^2$ if $\Gamma\sim \delta$, as is expected to be the case for blazars. An increase in the Doppler factor should increase both the flux and the frequency of the SED peak. While an increase in the latter is not evident in the SEDs displayed in Figure \ref{3c279sed}, the SEDs of very sharp flares on time scales of minutes to hours in 3C~279 have exhibited an increase in $\nu^{\rm EC}_{\rm peak}$ \citep{Hayashida2015,Ackermann2016} during maximum flux. In general, a rise in flux involves some combination of an increase in the number of radiating particles, the magnetic field strength, the Doppler beaming factor, and the energy density of seed photons, $u_{\rm seed}$. Radiative energy losses of the highest-energy electrons tend to be severe during outbursts in 3C~279 \citep{Sikora2009,Hayashida2015,Ackermann2016}, so that essentially all of the energy that the electrons obtain from particle acceleration is transferred to electromagnetic radiation. Increases in apparent luminosity require an increase in either the number of these electrons or the Doppler beaming factor. \begin{figure} \centering \includegraphics[width=\columnwidth,clip]{3c279SED.pdf} \caption{Quasi-simultaneous spectral energy distributions of 3C~279. TJD is truncated Julian date, JD$-$2400000.} \label{3c279sed} \end{figure} Although the peak frequency of the synchrotron SED is determined only to within about an order of magnitude, this is sufficient to specify that the ratio of the IC to synchrotron peak frequencies is $10^{9\pm0.5}$.
The Lorentz factor (energy in rest-mass units) of the electrons radiating at the EC and synchrotron SED peaks should be the same and given by \begin{equation} \gamma_{\rm peak} \sim [\nu^{\rm EC}_{\rm peak}(1+z)/(\delta\nu'_{\rm seed})]^{1/2} \label{gampeakEC}, \end{equation} where $\nu'_{\rm seed}$ is the frequency of the maximum of the seed photon SED as measured in the frame of the radiating plasma. Equation \ref{eq:b} then allows us to derive the magnetic field strength. Based on the TJD 57189 SED, when the peak IC flux was $\sim40$ times the peak synchrotron flux, indicating dominance by the EC process, we express the parameters as $\nu^{\rm S}_{\rm peak}= 10^{14}\nu^S_{\rm peak,14}$~Hz, $\nu^{\rm EC}_{\rm peak}= 10^{23}\nu^\gamma_{\rm peak,23}$~Hz, $\nu'_{\rm seed}= 10^{16}\nu'_{\rm seed,16}$~Hz, and, from apparent superluminal motions, $\delta= 20\delta_{20}$ and $\Gamma= 20\Gamma_{20}$ \citep{Jorstad2017}. Equation \ref{gampeakEC} then becomes \begin{equation} \gamma_{\rm peak}\sim900 [\nu^\gamma_{\rm peak,23}(1+z)/(\delta_{20}\nu'_{\rm seed,16})]^{1/2}. \label{gampeakvalues} \end{equation} If the seed photons are primarily from the little blue bump \citep[the Ly$\alpha$ emission line is not very strong in 3C~279;][]{Stocke1998}, so that $\nu_{\rm seed}\sim10^{15}$ Hz, then $\nu'_{\rm seed,16}\sim 2\Gamma_{20}$, $\gamma_{\rm peak} \sim 800$, and $B\sim 4$~G. Even with such a low value of $\gamma_{\rm peak}$, the radiative cooling time of the electrons in the plasma frame is only $7.8\times10^8\,[B^2+8\pi u'_{\rm seed}]^{-1}\gamma_{\rm peak}^{-1} \sim 40$~s (dominated by EC losses that are $\sim 40$ times stronger than synchrotron losses, given the flux ratios of the two SED humps), allowing for rapid variability (although limited by the light-crossing time across the region).
If, on the other hand, the seed photons correspond to blackbody radiation from dust with a temperature $\sim 1200$~K \citep[as in 4C21.35;][]{Malmrose2011}, $\nu'_{\rm seed,16}\sim 0.05\Gamma_{20}$, $\gamma_{\rm peak} \sim 5000$, and $B\sim 0.1$~G. In this case, the radiative cooling time is $\sim 10^4$~s in the plasma frame and $\sim 10^3$~s in the observer's frame. Intra-day flux variations are therefore possible in either case if the size of the emission region is $\lesssim0.01\delta_{\rm 20}$ pc. Another possible source of seed photons is synchrotron radiation from a relatively slow (and, correspondingly, less beamed) Mach disc \citep{Marscher2014} or sheath of the jet \citep{MacDonald2015}. As suggested by \citet{Marscher2010}, such slowly moving plasma could produce the requisite number of seed photons while being too poorly beamed to contribute substantially to the observed optical flux. Stacked multi-epoch VLBA images have confirmed the existence of sheaths in the jets of several blazars, including 3C~279 \citep{MacDonald2017}. The peak of the SED of such synchrotron photons is likely to be in the far-IR range, in which case the above estimate of the magnetic field for seed photons from hot dust should be applicable. A major difference between EC from polar emission-line clouds or slowly moving jet plasma and EC from dust is the possibility of variations in the former case, while dust is expected to provide a steady source of seed photons. Variability of the seed photons can explain the general absence of repeated episodes of variability with nearly identical patterns of temporal behaviour. \subsection{Inter-band correlations} \label{correlations} \subsubsection{$\gamma$-ray -- X-ray correlations} We calculate the discrete correlation function (DCF) \citep{Edelson1988, Hufnagel1992} between the $\gamma$-ray and X-ray flux variations of 3C~279 during 2008--2018. 
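The two seed-photon cases above can be checked by evaluating equations \ref{gampeakEC} and \ref{eq:b} directly; this sketch uses the fiducial normalisations quoted for TJD 57189 ($\nu^{\rm S}_{\rm peak}=10^{14}$~Hz, $\nu^{\rm EC}_{\rm peak}=10^{23}$~Hz, $\delta=\Gamma=20$). The direct evaluation returns $\gamma_{\rm peak}\approx600$ and $B\approx7$~G for the little-blue-bump case and $\gamma_{\rm peak}\approx4000$, $B\approx0.2$~G for the dust case, agreeing with the quoted values (800 and 4~G; 5000 and 0.1~G) at the order-of-magnitude precision intended, the differences reflecting rounding in the normalised equation.

```python
import math

# Hedged numerical sketch of the EC estimates for TJD 57189.
z, delta = 0.538, 20.0
nu_S, nu_EC = 1e14, 1e23      # synchrotron and EC SED peak frequencies, Hz

def ec_estimates(nu_seed_prime):
    """gamma_peak from Eq. (gampeakEC) and B from Eq. (eq:b)."""
    gamma_peak = math.sqrt(nu_EC * (1 + z) / (delta * nu_seed_prime))
    B = nu_S * (1 + z) / (2.8e6 * gamma_peak**2 * delta)
    return gamma_peak, B

# Little blue bump: nu_seed ~ 1e15 Hz, so nu'_seed = Gamma * nu_seed = 2e16 Hz
g_bb, B_bb = ec_estimates(2e16)
# 1200 K dust: nu'_seed ~ 0.05e16 Hz = 5e14 Hz
g_dust, B_dust = ec_estimates(5e14)
```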
We determine the significance of the correlation via the flux redistribution -- random sub-sample selection method \citep[see, e.g.][]{Peterson1998}. The results are presented in Figure \ref{dcf_gamma-X}, which reveals a strong correlation with a peak DCF of $0.75 \pm 0.03$ and a delay of the $\gamma$-ray behind the X-ray variations of $0.24\pm0.42$ days, statistically consistent with zero lag. The correlation is significant at the $p<0.001$ per cent level \citep[e.g.,][]{Bowker1972} and is consistent with Figure~\ref{lc_gamma_X-ray}, which shows that every X-ray flare has a $\gamma$-ray counterpart and \textit{vice versa}. \begin{figure} \centering \includegraphics[width=\columnwidth,clip]{DCF_gamma-X.pdf} \caption{Upper panel: DCF between $\gamma$-ray and X-ray light curves of 3C~279 in 2008--2018. Positive delays correspond to X-ray leading $\gamma$-ray variations. Bottom panel: Distribution of lags of maxima of the DCF from 200 Monte Carlo simulations with flux redistribution and 67\% bootstrapping. The grey area in the upper panel corresponds to the $\pm 1 \sigma$ spread of simulated DCFs. } \label{dcf_gamma-X} \end{figure} \subsubsection{$\gamma$-ray -- optical correlations} \label{gamma-optical} In a similar way, we calculate the DCF between the optical and $\gamma$-ray flux variations. The resulting correlation, shown in Fig.~\ref{dcf_R-gamma}, is rather weak, with a maximum DCF of 0.42. There is a delay at the $2\sigma$ level between the variations in the two energy bands, with the $\gamma$-ray leading the optical variability by $1.06\pm 0.47$ days. The optical and $\gamma$-ray light curves of 3C~279 are quite complex, with `sterile' (optical without $\gamma$-ray counterpart) and `orphan' ($\gamma$-ray without optical counterpart) flares occurring in both energy ranges. To estimate the statistical significance of the correlation between $\gamma$-ray and optical variations, we have performed a Monte Carlo simulation in the following way.
First, we approximated both the $\gamma$-ray and optical light curves with a set of double-exponential functions, in a manner similar to \citet{Abdo2010}. After obtaining the statistical distributions of the peak parameters $(f_{\mathrm{max}}, t_{\mathrm{rise}}, t_{\mathrm{decay}})$, we generated a set of synthetic light curves by co-adding random peaks drawn from these distributions and placed at random positions. Any correlation between these synthetic light curves arises by chance, so by computing the correlation between them we can estimate the probability of a spurious correlation at a given level. Among $10^4$ artificial light curves generated in this manner, none gave a correlation coefficient equal to or higher than the observed value of 0.42. We therefore consider this correlation statistically significant at the 0.01 per cent level. \begin{figure} \centering \includegraphics[width=\columnwidth,clip]{DCF_R-gamma.pdf} \caption{Upper panel: DCF between optical and $\gamma$-ray light curves of 3C~279 in 2008--2018. Positive delays correspond to $\gamma$-ray leading optical variations. Bottom panel: Distribution of lags of maxima of the DCF after 200 Monte Carlo simulations with flux redistribution and 67\% bootstrapping. The grey area in the upper panel corresponds to the $\pm 1 \sigma$ spread of simulated DCFs. } \label{dcf_R-gamma} \end{figure} This short delay between $R$-band and $\gamma$-ray variations allows us to compare the optical and $\gamma$-ray light curves directly. To do this, we bin the $R$-band optical data so that the mid-point and size (in time) of each optical bin correspond to those of the respective $\gamma$-ray bin. Figure~\ref{opt_gamma}, where we plot the optical (upper panel) and $\gamma$-ray (middle panel) light curves, and the $\gamma$-ray flux versus $R$-band flux (bottom panel), demonstrates clear differences in the optical/$\gamma$-ray relationship during the various stages of activity of 3C~279.
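The DCF used in this section can be illustrated with a minimal pure-Python version of the \citet{Edelson1988} estimator; measurement errors and the per-bin normalisation refinements are omitted, so this is a sketch of the method rather than the code used for the analysis (the function name and demo series are ours).

```python
import math

def discrete_correlation(t_a, a, t_b, b, lag_edges):
    """Minimal discrete correlation function (after Edelson & Krolik 1988):
    pairwise products of mean-subtracted, variance-normalised fluxes,
    binned by time lag t_b - t_a. Measurement errors are neglected."""
    n_a, n_b = len(a), len(b)
    mean_a, mean_b = sum(a) / n_a, sum(b) / n_b
    var_a = sum((x - mean_a) ** 2 for x in a) / n_a
    var_b = sum((x - mean_b) ** 2 for x in b) / n_b
    norm = (var_a * var_b) ** 0.5
    sums = [0.0] * (len(lag_edges) - 1)
    counts = [0] * (len(lag_edges) - 1)
    for i in range(n_a):
        for j in range(n_b):
            lag = t_b[j] - t_a[i]
            for k in range(len(lag_edges) - 1):
                if lag_edges[k] <= lag < lag_edges[k + 1]:
                    sums[k] += (a[i] - mean_a) * (b[j] - mean_b) / norm
                    counts[k] += 1
                    break
    centers = [0.5 * (lag_edges[k] + lag_edges[k + 1])
               for k in range(len(lag_edges) - 1)]
    dcf = [s / c if c else float("nan") for s, c in zip(sums, counts)]
    return centers, dcf

# demo: two sinusoids, the second lagging the first by 5 days
t = list(range(100))
a = [math.sin(2 * math.pi * x / 20) for x in t]
b = [math.sin(2 * math.pi * (x - 5) / 20) for x in t]
edges = [x - 10.5 for x in range(22)]   # 1-day lag bins centred on -10..10
centers, dcf = discrete_correlation(t, a, t, b, edges)
best = max(range(len(dcf)), key=lambda k: dcf[k] if dcf[k] == dcf[k] else -2.0)
# the DCF peaks at a lag of +5 days (positive lag = second series lagging)
```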
The slopes of $\log F_\gamma - \log F_R$ dependencies, for time intervals (1) through (4) (as marked in the upper panel of that figure), are correspondingly: (1) $0.97\pm0.04$, (2) $7.7\pm1.2$, (3) $0.82\pm0.06$ and (4) $1.90\pm0.09$. The slope of the dependence therefore changes dramatically, starting from $\sim1$ (2008--2010), then switching to $\sim8$ (2011--2016). The onset of optical and $\gamma$-ray activity that occurred in late 2016 altered the slope again to $\sim1$. Shortly after the maximum of the optical outburst, the slope changed again, to $\sim2$. Remarkably, unlike transitions from the slope of 1 to 8 and back to 1, where the changes seem to have happened during seasonal gaps, the last change appears to have occurred over a few days, TJD~57837$\pm2$. It is tempting to find some event(s) that accompanied this switch from linear to quadratic slope. The only change in other observed parameters that is simultaneous with this transition is a short-term drop of the optical polarization degree to 1.5\% during the night of optical maximum. However, as can be seen from Fig.~\ref{optical_radio_polar}, there are several other incidents of low polarization, with no obvious physical reason to connect these two events. The $\gamma$-ray flux in Figure~\ref{opt_gamma} is derived by multiplying the photon flux (see \S~\ref{sec:LAT}) by an integral over a fixed log-parabolic energy distribution. It is therefore directly proportional to the photon flux, which in turn is proportional to the flux density divided by frequency. The synchrotron flux is the flux density times the fixed observed R-band frequency (which is above the frequency of the peak in the synchrotron SED). From \emph{relativistic beaming alone}, we then expect the dependencies on Doppler factor to be $F_\gamma\propto \delta^{2+\alpha_{\gamma}}$ and $F_{\rm opt}\propto \delta^{3+\alpha_{\rm opt}}$ \citep[see][]{Dermer1995}. 
Since the peak of the $\gamma$-ray SED is usually in the LAT range except during low states (cf. Fig.\ \ref{3c279sed}), we adopt $\alpha_\gamma=1.0$ for the $\gamma$-ray spectral index, while in \S\ref{color_evolution} we derived the spectral index of the variable optical component to be $\alpha_{\rm opt}=1.56\pm0.11$. As found by \citet{Sikora2009}, \citet{Hayashida2015}, and \citet{Ackermann2016}, the $\gamma$-ray and optical emission is so luminous that the electrons lose nearly all of their energy to EC radiation on time scales shorter than the light-travel time across the emitting region. In this fast-cooling case, the ratio of $\gamma$-ray to optical flux (as defined above) is proportional to the ratio of EC radiative losses to synchrotron losses, which in turn is proportional to $u'_{\rm seed}/B^2 \propto u_{\rm seed}\Gamma^2/B^2$. An increase in the number of radiating electrons $N_{\rm re}$ increases proportionately the luminosity of the dominant emission process, which is EC $\gamma$-ray production during the higher-flux states in 3C~279. The ratio of EC to synchrotron luminosity is affected by changes in the magnetic field $B$ or energy density of seed photons (as measured in the emitting plasma frame), which can occur if (i) the plasma moves toward or away from the source of the seed photons, (ii) more seed photons are produced near the jet, or (iii) the bulk Lorentz factor $\Gamma$ changes (since the energy density of the seed photons in the plasma frame $u'_{\rm seed} \propto \Gamma^2$). We can consider a number of cases of different physical parameters changing, each with its own dependence between $\gamma$-ray and optical flux. We parametrize the optical/$\gamma$-ray flux relationship as $F_\gamma \propto F_{\rm opt}^{\zeta}$. Based on arguments given in \S\ref{sect:sed}, here we assume that the EC process dominates the $\gamma$-ray production. 
\begin{enumerate} \item{} An increase solely in the number of radiating electrons $N_{\rm re}$ should cause the synchrotron and EC flux to rise by the same factor, so that $\zeta\approx 1$. The frequencies of the SED peaks should remain the same unless there is a change in the energy at which the slope of the electron energy distribution becomes steeper than $-3$. \item{} An outburst can result from a higher Doppler factor owing to bending toward the line of sight of the emitting region. This can occur if the entire jet changes its direction (wobbles or precesses), or if different parts of the jet cross-section with various velocity vectors relative to the mean become periodically or sporadically bright as time passes. A possible realisation of such behaviour in the case of CTA~102 was suggested by \citet{Larionov2017}. The flux during such events should follow the beaming-only relationships given above, with $F_\gamma\propto \delta^3$ and $F_{\rm opt}\propto \delta^{4.56}$, hence $\zeta \approx 0.7$. The frequency of the SED peaks of both the synchrotron and EC emission should obey $\nu_{\rm peak}\propto \delta \propto F_\gamma^{1/3}$. \item{} An increase in bulk Lorentz factor $\Gamma$, and therefore Doppler factor, raises both the beaming and $u'_{\rm seed}$, as well as $\nu_{\rm peak}$ of both the synchrotron and EC SEDs. The increase in the seed photon field actually decreases the synchrotron luminosity in the plasma frame, since EC scattering then consumes a higher fraction of the electron energies. If the plasma-frame EC luminosity is already dominant, then it will not increase much, since the electrons were already expending nearly all of their energy on EC emission. We then derive the rough dependence $F_{\rm opt}\propto \delta^{4.6}{u'_{\rm seed}}^{-1} \propto \delta^{4.6}\Gamma^{-2} \propto \delta^{2.6}$ if we approximate that $\delta\propto \Gamma$. Since $F_\gamma\propto \delta^3$, we obtain $\zeta\approx 1.2$. 
\item{} An increase solely in the magnetic field strength $B$ by a factor $f_B$ would cause the synchrotron flux to rise by a factor $f_B^{1+\alpha_{\rm opt}}$ while the frequency of the peak of the synchrotron SED increases by a factor of $f_B$. The EC flux would decrease slightly owing to the higher fraction of electron energy that goes into synchrotron radiation. In this case, $\zeta$ would be a small negative number. \item{} An increase only in the energy density of seed photons $u_{\rm seed}$ by a factor $f_{\rm seed}$ -- either because of changes within the source of the photons or because of a shift in position of the radiating plasma toward the source of seed photons -- would cause the ratio of EC to synchrotron flux to rise in proportion to $f_{\rm seed}$, while the frequencies of the synchrotron and SSC peaks would remain constant. Without a change in the number of radiating electrons, the $\gamma$-ray flux would increase only slightly, but the synchrotron flux would decrease by a factor of $f_{\rm seed}$. This would result in $\zeta\ll -1$, with the exact value changing with the level of dominance of the EC luminosity compared with the synchrotron power. If the number of electrons also increases to create a flare, then a high positive value of $\zeta$ is possible. \end{enumerate} The value of $\zeta\approx 1$ during time interval (1) of Figure \ref{opt_gamma} agrees with scenario (1), but the higher frequency of the peak in the $\gamma$-ray SED (see Fig.\ \ref{3c279sed}) at higher flux levels suggests that cases (2) or (3) might apply despite their respectively lower (0.7) and higher (1.2) values of $\zeta$. The very high value of $\zeta$ and pronounced variations during interval (2) agree with the expectation of a combination of scenarios (1) and (5). This suggests that the sites of the high-amplitude flares during this period also involved steep gradients in the energy density of seed photons.
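The slopes quoted for the two beaming-driven scenarios follow directly from the flux scalings and spectral indices adopted above; a quick check:

```python
# Slope zeta = dlog(F_gamma)/dlog(F_opt) for the beaming-driven scenarios,
# using the spectral indices adopted in the text.
alpha_g, alpha_opt = 1.0, 1.56

# Scenario (2): pure change in Doppler factor,
# F_gamma ∝ delta^(2+alpha_g), F_opt ∝ delta^(3+alpha_opt).
zeta_2 = (2 + alpha_g) / (3 + alpha_opt)      # ~0.66, i.e. the quoted ~0.7

# Scenario (3): Gamma (hence delta and u'_seed ∝ Gamma^2) increases, so
# F_opt ∝ delta^(3+alpha_opt)/u'_seed ∝ delta^(1+alpha_opt) for delta ∝ Gamma.
zeta_3 = (2 + alpha_g) / (1 + alpha_opt)      # ~1.17, i.e. the quoted ~1.2
```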
In concert with our finding that emission-line fluxes in 3C~279 are proportional to the optical (and presumably UV) continuum fluxes, with a time delay $<2$ months, this might be possible if the seed photons are stimulated by the synchrotron flares, providing more seed photons for inverse Compton scattering during subsequent flares. During time interval (3) of Figure \ref{opt_gamma}, which is limited to the highest-amplitude optical outburst, the value of $\zeta\approx 0.8$ is closest to that expected for scenario (3), but this is contradicted by the relatively small $\gamma$-ray to optical flux ratio, $\lesssim4$. The latter suggests either a rise in the magnetic field strength or a decrease in $u'_{\rm seed}$. These considerations imply that an increase in both the magnetic field and the number of radiating electrons, probably combined with a decrease in the seed photon field, occurred during interval (3), such that $\zeta$ adopted an intermediate value between cases (1), (4), and (5). The value $\zeta \approx 2$ during time interval (4) corresponds to a time of very high EC dominance (see Fig.\ \ref{3c279sed}). Perhaps a combination of scenarios (3) and (5) -- changes in both the bulk Lorentz factor and the seed photon energy density -- could give the observed value of $\zeta$. \begin{figure} \begin{center} \includegraphics[width=\columnwidth,clip]{opt_gamma.pdf} \caption{From top to bottom: $R$-band log flux density light curve, with different colours and numerical marks corresponding to different stages of activity; logarithmic $\gamma$-ray flux light curve; optical -- $\gamma$-ray flux-flux diagram.
Slopes of the linear (on a logarithmic scale) fits are (1) $0.97\pm0.04$, (2) $7.7\pm1.2$, (3) $0.82\pm0.06$ and (4) $1.90\pm0.09$.} \label{opt_gamma} \end{center} \end{figure} \subsubsection{Radio-band correlations}\label{radio_corr} Visual inspection of Figures \ref{lc_UV_NIR} and \ref{lc_radio} reveals that there are few common details in the optical and radio light curves that would allow one to test the possible existence of correlations and time lags between the two. If we compare the optical $R$-band and radio 250~GHz light curves, both of which have good sampling, we find only two intervals of common elevated flux level, TJD~54000--54300 and TJD~55800--56100, and a dip over TJD$\sim$55000--55300. However, close correspondences between the light curves at progressively lower radio frequencies, with time lags between them, are clearly visible. Before evaluating the time lags, we consider what the results of our analysis would be if we did not possess data at frequencies from 22 to 8~GHz. In this case, we would associate the maxima observed at 350--37~GHz close to TJD~55100 with the maximum observed at 5~GHz on TJD$\sim$56800, deriving a delay between variations in these bands of $\sim700$ days. This is certainly a mis-identification, as we readily see when including the 22--8~GHz data. The dashed slanted lines in Fig.~\ref{lc_radio} give a hint of the `real' values of the delays, which are (across the widest span of frequencies) no more than 300 days. The progressive smoothing with decreasing frequency and, eventually, the disappearance of the most prominent maximum at TJD~56100 are naturally explained by the increased volume of the jet at lower frequencies, which smooths out the fine structure of variability. However, this does not explain the appearance and progressive growth of the emission feature first seen at 100~GHz close to TJD~56600 and culminating at 5~GHz near TJD~56800.
A possible explanation of these two markedly different kinds of behaviour is the same as suggested in \citet{Raiteri2017}: that of a twisted inhomogeneous jet. Within this approach, magnetohydrodynamic instabilities or rotation of the twisted jet cause different jet regions to change their orientation and hence their relative Doppler factors. \subsection{Jet kinematics}\label{vlba} Figure~\ref{vlbaimages} shows the total and polarized intensity VLBA images of 3C~279, convolved with an elliptical Gaussian beam that approximates the angular resolution at epochs when all 10 VLBA antennas operated. We follow the historical designation of moving components at 43 GHz started by \citet{Unwin1989, Wehrle2001} and continued by \citet{Jorstad2004, Jorstad2005, Chatterjee2008, Larionov2008}, and \citet{Jorstad2017}. During 2007--2018 we identify 14 moving knots in the jet, C24--C37 (see Fig. \ref{movements}). Knots C24--C32 correspond to those identified by \cite{Jorstad2017}. The apparent speed of the moving components, $\beta_{\rm app}$, has been determined using the same procedure as defined in \cite{Jorstad2005}. Since almost all motions in 3C~279 are non-ballistic, in order to derive the most accurate values of the time of passage of a knot through the VLBI core, $T_0$\footnote{$T_0$ is the time when the centroids of the knot and the core coincide, also called the `ejection' time.}, we use only those epochs when a component is within 1 mas of the core, inside of which its motion is assumed to be ballistic. Knots C30, C31, C32, C33, C34, and C35 can be associated with features C1, C2, C3, NC1, NC2, and NC3, respectively, in \cite{Rani2018}.
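The conversion between the measured proper motions $\mu$ and the apparent speeds $\beta_{\rm app}$ in Table~\ref{kinematics} can be sketched as follows. The cosmology is an assumption here (flat $\Lambda$CDM with $H_0=70$~km~s$^{-1}$~Mpc$^{-1}$ and $\Omega_{\rm m}=0.3$, giving $D_L\approx3.1$~Gpc at $z=0.538$); it reproduces the $\mu$-to-$\beta_{\rm app}$ ratios in the table but is not restated in this section.

```python
# Convert proper motion mu (mas/yr) to apparent superluminal speed beta_app
# (units of c) for 3C 279 at z = 0.538, via beta_app = mu * D_L / (c (1+z)).
# D_L is an assumed value from flat LambdaCDM (H0 = 70, Om = 0.3).
Z = 0.538
D_L_CM = 3.1e3 * 3.086e24      # luminosity distance ~3.1 Gpc, in cm
MAS_PER_RAD = 2.0626e8         # milliarcseconds per radian
SEC_PER_YR = 3.156e7
C_CGS = 2.998e10               # speed of light, cm/s

def beta_app(mu_mas_per_yr):
    mu_rad_per_s = mu_mas_per_yr / MAS_PER_RAD / SEC_PER_YR
    return mu_rad_per_s * D_L_CM / (C_CGS * (1 + Z))

# e.g. knot C24 (mu = 0.503 mas/yr) gives ~16c; C37 (1.161 mas/yr) gives ~37c
```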
Table~\ref{kinematics} shows the angular, $\mu$ (in mas/yr), and apparent, $\beta_{\rm app}$ (in units of $c$), velocities, the ejection time, $T_0$, the average projected position angle with respect to the core, $\langle \Theta \rangle$, the average distance from the core when the knot is observed, $\langle R \rangle$, and the average flux density, $\langle F \rangle$, of knots ejected between 2007 and 2018 (partly adopted from \cite{Jorstad2017}). The derived apparent velocities are in the range from 5 to 37$c$. In addition to the core A0, we find three quasi-stationary components, A1, A2, and A3, located at $\sim$0.1, 0.5, and 0.7 mas from the core, respectively. These knots appear in the jet after the ejections of the very bright knots C31 and C32. Knots C30--C37 exhibit especially rapid motions, with apparent velocities ranging from 20 to 37$c$. Components C36 and C37 decelerate significantly after reaching a distance of about 1~mas from the core. Knots C30 and C31 possess velocities up to $\sim20c$ at distances $>$1.5~mas, factors of $\sim$2 and 3 higher than those near the core, respectively. Table~\ref{crossing} lists the times at which the moving knots C35, C36, and C37 cross the stationary components A1, A2, and A3, according to their kinematics. \begin{figure*} \begin{center} \includegraphics[width=\linewidth,clip]{3C279VLBA.pdf} \end{center} \caption{A set of VLBA images at 43~GHz in total (contours) and polarized (colour scale) intensity ($S$ and $Sp$, correspondingly) overlaid with fitted Gaussian circular components; the images are convolved with a beam of 0.36$\times$0.15~mas$^2$ at PA=-10$\degr$; linear segments within the colour scale show the direction of the EVPA; four different epochs are chosen to show significant changes in the jet structure, as discussed in \S\,\protect\ref{vlba}.
} \label{vlbaimages} \end{figure*} \begin{figure} \begin{center} \includegraphics[width=\columnwidth,clip]{3c279_knots_separation.pdf} \caption{Separation of jet features from the core, A0, during 2007-2018, partly adopted from \protect\citet{Jorstad2017}. Each line or curve represents a polynomial fit to the motion of the respective knot. The diameter of each symbol is proportional to the logarithm of the flux density of the knot, as determined by model fitting of the VLBA data.} \label{movements} \end{center} \end{figure} \begin{table*} \caption{\bf Jet structure and kinematics} \label{kinematics} \begin{tabular}{c | c | c | c | c | c | c | c | c |} \hline Knot & N & $\mu$(mas/yr) & $\beta_\mathrm{app} $ & $T_0$(TJD) & $T_0$(year) & $\langle\Theta\rangle(\degr)$ & $\langle R \rangle$(mas) & $\langle F \rangle$(Jy)\\ \hline C24 & 7 & $0.503\pm0.031 $ & $16.01\pm0.99 $ & $54064\pm 40 $ & $2006.90\pm0.11 $ & $-125.4\pm2.8$ & $0.560\pm0.210 $ & $0.43\pm0.22$\\ \hline C25 & 17 & $0.386\pm0.014 $ & $12.30\pm0.43 $ & $54133\pm 33 $ & $2007.09\pm0.09 $ & $-124.9\pm13.1$ & $0.410\pm0.250 $ & $1.13\pm0.65$\\ \hline C26 & 13 & $0.153\pm0.015 $ & $4.87\pm0.48 $ & $54137\pm 51 $ & $2007.10\pm0.14 $ & $-139.6\pm6.1$ & $0.290\pm0.100 $ & $1.09\pm0.77$\\ \hline C27 & 14 & $0.358\pm0.021 $ & $11.39\pm0.66 $ & $54754\pm 29 $ & $2008.79\pm0.08 $ & $-150.6\pm12.1$ & $ 0.310\pm0.170$ & $ 1.77\pm1.65$\\ \hline C28 & 30 & $0.309\pm0.008 $ & $9.83\pm0.26 $ & $54933\pm 47 $ & $2009.28\pm0.13 $ & $-131.5\pm6.8$ & $0.600\pm0.240 $ & $0.81\pm0.37$\\ \hline C29 & 8 & $0.413\pm0.063 $ & $13.15\pm2.00 $ & $55149\pm 55 $ & $2009.87\pm0.15 $ & $-122.4\pm5.1 $ & $0.250\pm0.100$ & $0.91\pm1.17 $\\ \hline C30 & 21(12) & $0.394\pm0.018 $ & $12.56\pm0.57 $ & $55452\pm 22 $ & $2010.70\pm0.06 $ & $-139.3\pm10.3$ &$0.400\pm0.230 $ & $0.42\pm0.14$\\ \hline C31 & 55(23) & $0.227\pm0.012 $ & $7.23\pm0.40 $ & $55539\pm 77 $ & $2010.94\pm0.21 $ & $-181.2\pm19.3$ & $0.320\pm0.200 $ & $11.29\pm5.61$\\ \hline 
C32 & 48(10) & $0.168\pm0.016 $ & $5.35\pm0.53 $ & $55368\pm 84 $ & $2010.47\pm0.23 $ & $-175.9\pm18.3$ & $0.320\pm0.140 $ & $4.61\pm2.24$\\ \hline C33 &9(8) & $0.655\pm0.009 $ & $20.86\pm0.29 $ & $56507\pm29 $ & $2013.59\pm0.08 $ & $-163.8\pm6.5$ & $ 0.446\pm0.186$ &$2.01\pm0.49$ \\ \hline C34 &16(10) & $0.83\pm0.03 $ & $26.1\pm0.9 $ & $56711\pm6 $ & $2014.15\pm0.02 $ & $-157.3\pm6.1$ & $0.77\pm0.33$ & $1.39\pm0.78 $\\ \hline C35 &38(19) & $0.924\pm0.006 $ & $29.43\pm0.18 $ & $56953\pm22 $ & $2014.81\pm0.06 $ & $-151.7\pm6.5 $ & $1.347\pm0.604$ & $0.68\pm0.93$\\ \hline C36 &19(9) & $0.807\pm0.018 $ & $25.71\pm0.57 $ & $57511\pm11 $ & $2016.34\pm0.03 $ & $-150.5\pm2.8 $ & $0.859\pm0.266 $ & $0.56\pm0.34 $\\ \hline C37 & 8(6) & $1.161\pm0.034 $ & $36.97\pm1.08 $ & $ 57937\pm7$ & $2017.51\pm0.02 $ & $-148.3\pm2.0$ & $0.788\pm0.234$ & $0.51\pm0.13$\\ \hline A1 & 34 & - & - & - & - & $-164\pm7$ & $0.10\pm0.02$ &$4.09\pm1.38$ \\ \hline A2 & 13 & - & - & - & - & $-158\pm2$ & $0.46\pm0.03$ &$1.17\pm0.16$ \\ \hline A3 & 18 & - & - & - & - & $-156\pm3$ & $0.71\pm0.04$ &$0.63\pm0.23$ \\ \hline \end{tabular} \end{table*} The ejection of each component appears to be associated with a flare in the core (see Figure \ref{lc_knots}). Components ejected after the extremely bright knot C31, with initial projected position angle $\Theta\sim-210\degr$ \citep{Jorstad2017}, follow a new jet direction of $\sim-160\degr$, in contrast to the usual direction of the parsec-scale jet of $\sim-130\degr$ \citep{Jorstad2005,Lister2016}. Figure~\ref{optical_radio_polar} (bottom panel) shows the behaviour of the position angle (PA) of the inner jet (within 0.3~mas from A0). The PA undergoes significant changes during 2008--2018 (from $\sim-100\degr$ to $-200\degr$), with especially dramatic variations between 2009 and 2012.
According to Table~\ref{kinematics}, this period is associated with the ejection of 5 knots, including the brightest features, C31 and C32, and extreme variability of the flux density of A0 (Figure \ref{lc_knots}). After 2013 the core has a lower average flux density at 43 GHz than before, which might be related to changes in the inner jet direction. \begin{figure} \begin{center} \includegraphics[width=\columnwidth,clip]{3c279_lc_knots1.pdf} \caption{{\it Top}: 43 GHz light curves of the core, A0, stationary feature, A1, and the brightest moving knots in the jet, C31 \& C32. {\it Middle}: 43 GHz light curves of other knots detected in the jet during 2007--2018. {\it Bottom}: $\gamma$-ray light curve. The vertical lines show the times of ejection of moving knots.} \label{lc_knots} \end{center} \end{figure} \begin{table} \caption{\bf Stationary components crossing times and nearby $\gamma$-ray flares.} \label{crossing} \begin{tabular}{c | c | c | c | c | c|} \hline (1) & (2) & (3) & (4) & (5) & (6)\\ \hline C35& A1 &$0.10\pm0.02$ & $56992\pm8 $& 56998 & 38\\ \hline & A2 &$0.46\pm0.03$ & $57134\pm13 $ & 57153 & 30\\ \hline & A3 &$0.71\pm0.04$ & $57233\pm18 $ & 57214 & 45\\ \hline C36& A1 &$0.10\pm0.02$ & $57556\pm11$ & 57552 & 30\\ \hline & A2 &$0.46\pm0.03$ & $57719\pm18$ & -- & --\\ \hline & A3 &$0.71\pm0.04$ & $57832\pm26$ & 57842 & 56\\ \hline C37& A1 &$0.10\pm0.02$ & $57968\pm7 $ & 57928 & 68\\ \hline & A2 &$0.46\pm0.03$ & $58082\pm15$ & 58118 & 69\\ \hline & A3 &$0.71\pm0.04$ & $58162\pm19$ & 58136 & 249\\ \hline \end{tabular}\\ (1) -- Moving knot\\ (2) -- Stationary component\\ (3) -- Distance of the stationary component from the core, $\langle R \rangle$(mas)\\ (4) -- $T_\mathrm{cross}$(TJD)\\ (5) -- $T_{\gamma}$(TJD) within $\pm 40$ days from $T_\mathrm{cross}$ \\ and $F_\gamma > 20 \cdot 10^{-7}$ ph cm$^{-2}$ s$^{-1}$\\ (6) -- $\gamma$-ray flux ($10^{-7}$ ph cm$^{-2}$ s$^{-1}$)\\ \end{table} Based on the properties of the components, we can distinguish four different periods of
jet activity that can be associated with changes in the jet structure: \\ \begin{itemize} \item[(a)] 2008--mid-2010: four knots (C27--C30) with comparable apparent speeds of 12--13$c$ are ejected,\\$\langle \beta_{app} \rangle=11.7\pm1.5$, $\langle \Theta \rangle=-136\degr\pm12\degr$, $\langle F \rangle=0.98\pm0.57$ Jy; \item[(b)] late 2010--2013: two extremely bright knots (C31 and C32) with slower apparent velocities are ejected, \\ $\langle \beta_{app} \rangle=6.29\pm0.94$, $\langle \Theta \rangle=-179\degr\pm3\degr$, $\langle F \rangle=7.95\pm3.34$ Jy, with trajectories that bend from south-southeast to south-southwest; \item[(c)] 2013.6--2015: four knots (C33--C36) with similarly high apparent speeds, $\sim20$--$30c$, are ejected, \\ $\langle \beta_{app} \rangle=25.5\pm3.5$, $\langle \Theta \rangle=-154\degr\pm4\degr$, $\langle F \rangle=1.02\pm0.47$ Jy; and \item[(d)] late 2015--2018: three stationary knots (A1--A3) appear, and the fast knot C37 is ejected in mid-2017, \\ $\beta_{app}=37.0\pm1.1$, $\langle \Theta \rangle=-148\degr\pm2\degr$, $\langle F \rangle=0.51\pm0.13$ Jy. \end{itemize} Figure\,\ref{vlbaimages} illustrates the aforementioned structural changes in the jet. Four epochs were selected to highlight the corresponding periods (a), (b), (c), and (d). The trajectories of the knots reveal changes in the PA of the inner jet (see also Figure~\ref{optical_radio_polar}), or at least the PA of the portion of the jet cross-section that is involved in the main emission. Particularly striking is the shift in direction that occurs in late 2010, based on the south-southeastern motion of knots C31 and C32 (see also \citet{Lu2013} and \citet{Jorstad2017}), and then a swing back toward $-154\degr$, $-18\degr$ from the previous position angle. The observed degree of polarization at 43 GHz is quite high during 2007--2018, above 10\% nearly the entire time.
The optical EVPA is in good agreement with the 43 GHz EVPA, and both are roughly parallel to the direction of the jet (see Figure~\ref{optical_radio_polar}). Figure~\ref{optical_radio_polar} also shows a remarkable swing of the EVPA at radio and optical wavelengths, which occurs simultaneously with the dramatic change in the inner jet direction mentioned above. \subsection{Relationship between multiwavelength variability and changes in jet structure} \label{interpretation} Although the variations that we observe in the multiwavelength flux, polarisation, and structure of the jet are quite complex, we offer the following approximate interpretation of the changes in the physical processes that dominate at different epochs: \begin{enumerate} \item[(1)] TJD~54700--55400 (late 2008 -- mid-2010): As discussed in \S\ref{gamma-optical}, the frequency of the EC SED peak (and probably of the synchrotron peak) is higher when the flux is greater (see Fig.\ \ref{3c279sed}). This trend, plus the near-unity slope of the $F_\gamma$ vs.\ $F_R$ relation (Fig.\ \ref{opt_gamma}), can be explained if the variations are dominated by changes in the Doppler factor because of a shift in viewing angle (see scenario (2) of \S\ref{sect:sed}). The factors of $\sim 10$ and 20 decline in the $\gamma$-ray and $R$-band fluxes, respectively, over the time period are consistent with a Doppler factor decrease by a factor of $\sim2$ for the flux scalings adopted above. This qualitatively conforms with the relatively moderate apparent speeds of 10--13$c$ of radio knots during this time span (see Table \ref{kinematics}) if the viewing angle was greater than the value of $\sim\Gamma^{-1}$ that maximizes superluminal motion. In fact, detailed analysis of knots C27--C29 by \citet{Jorstad2017} determines that the viewing angle indeed increased from $\lesssim 2\degr$ to $\sim 6\degr$ from late 2008 to late 2009 as the optical and $\gamma$-ray fluxes dropped to very low values.
\item [(2)] TJD~55500--57600 (late 2010 -- mid-2016): As discussed in \S\ref{gamma-optical}, the high value of the slope of the $F_\gamma$ vs.\ $F_R$ relation (Fig.\ \ref{opt_gamma}) plus the high-amplitude variations imply that the seed photon density changes across the region where the electrons are accelerated [a combination of scenarios (1) and (5) of \S\ref{sect:sed}]. (We note that during the first portion of this interval, from late 2010 to mid-2011, the optical and $\gamma$-ray fluxes rise by a similar factor of $\sim5$, which is consistent with an increase solely in the number of radiating electrons.) This is the same time period when two very bright knots are ejected in a very different direction than at earlier and later epochs. From late 2013 until late 2015, the X-ray and $\gamma$-ray flux variations have very high amplitudes compared with earlier epochs, with $\gamma$-ray doubling times as short as 5 min \citep{Hayashida2015,Ackermann2016}. The beginning of this behaviour coincides with the resumption of ejections of new knots after a $\sim2$-yr hiatus as the optical, X-ray, and $\gamma$-ray activity becomes strong again. These knots have quite high apparent speeds, in the $20$--$30c$ range. The establishment of stationary features A1, A2, and A3 in late 2015 might create structures (e.g., standing shocks) that both accelerate electrons and provide sources of (synchrotron) seed photons for IC scattering. \item [(3)] TJD~57700--57850 (late 2016 -- early 2017): During this relatively brief period, the slope of the $F_\gamma$ vs.\ $F_R$ relation (Fig.\ \ref{opt_gamma}) reverts to a value near unity. The synchrotron flux rises by a factor $\sim20$ to its highest level of the 10 years of our study, while the Compton to synchrotron flux ratio is modest, $\sim4$. Knot C37, which crosses the millimetre-wave core shortly (0.25 yr) after this time interval ended, moves at the highest apparent speed observed in our study, $37c$.
We can explain the behaviour of the optical and $\gamma$-ray flux qualitatively during this period by an increase in both the magnetic field strength and number of radiating electrons, perhaps accompanied by a decrease in the energy density of seed photons. \item [(4)] TJD~57850--58350 (early 2017 -- mid-2018): During this time interval, the slope of the $F_\gamma$ vs.\ $F_R$ relation (Fig.\ \ref{opt_gamma}) switches to $\zeta\sim2$. The extremely high apparent speed of knot C37 indicates that the Lorentz factor and/or direction of motion indeed changes during this time span. Variations in the external seed photon energy density caused by the optical outburst in 2017 -- scenario (4) in \S\ref{sect:sed} -- might steepen the slope $\zeta$ from 1.2 to the observed value of 1.9. \end{enumerate} If the stationary features A1--A3 supply seed photons for EC emission by electrons in the superluminal knots, flares of high-energy photons should occur contemporaneously with the crossing of knots through the features. Comparison of the times of the crossings listed in Table \ref{crossing} with times of $\gamma$-ray flares (none of which were missed, since there are no gaps in coverage) reveals a $\gamma$-ray flare within 40 days of the epoch of knot crossing in: all three passages of knots through A1; two out of three through A2; and all three through A3, as seen in Figure~\ref{lc_knots}. However, while this suggests consistency with the hypothesis that stationary emission features in the jet supply seed photons for many of the flares, all but two of the above crossing-flare pairs occur during a period when there were so many closely-spaced flares that a coincidence is guaranteed.
\citet{Ackermann2016} find that a standard EC model matching the observed $\gamma$-ray flux doubling time scale of 5 min during a flare in June 2015 requires a Lorentz factor $\Gamma>50$, with $\Gamma\sim 120$ needed to raise the derived energy density of the magnetic field to equipartition with that in relativistic electrons. Our measurement of an apparent speed of $37c$ in 2017 requires $\Gamma>37$, and the value required to explain the proper motion of knot C35 in 2015 is only 20\% lower. If turbulent motions are superposed on this systemic velocity, then local values of $\Gamma\sim70$ would be possible if the maximum turbulent velocity were the relativistic sound speed of $c/\sqrt{3}$, and even higher for supersonic turbulence or `mini-jets' formed via magnetic reconnections \citep{Giannios2009}. \subsection{Polarimetric behaviour at optical and radio wavelengths}\label{pol_behaviour} Figure \ref{optical_radio_polar} shows the time dependencies of optical $R$ band magnitude, degree of polarization, and electric vector position angle (EVPA) of 3C~279. The evolution of polarization parameters is shown for optical data and for three radio bands: 230, 100 and 43~GHz. We resolve the $n\times180\degr$ ambiguity between the optical and less well-sampled radio data by minimizing the difference between the radio and optical EVPA values. We note the occurrence of exceptionally high-amplitude changes of the polarization degree (PD), from nearly 0\% to $>30\%$. \begin{figure} \centering \includegraphics[width=\columnwidth,clip]{optical_radio_polar2b.pdf} \caption{ (a) $R$-band light curve of 3C~279; (b) fractional polarization at optical and radio bands; (c) electric vector position angle (EVPA) at optical and radio bands, with the 180\degr\ ambiguity resolved; (d) raw values of optical EVPA (black) and position angle of the 43 GHz jet within 0.3~mas of the core (red).
} \label{optical_radio_polar} \end{figure} This high level of scatter of the optical PD is in striking contrast to the relatively smooth behaviour of the PD at radio frequencies; in fact, the radio PD mostly represents the lower envelope of the optical PD, as is seen in Figure \ref{optical_radio_polar}(b). We propose that this contrast between optical and radio PD scatter is a consequence of very different volumes radiating at the two wavelength ranges in a plasma with a strong turbulent component of magnetic field: at radio frequencies the number of radiating cells with different magnetic field orientations is substantially larger than at optical bands, and the effect of each cell is diminished after vector co-addition of polarized intensity from all of the cells in the radiating volume. However, as seen in Figure \ref{optical_radio_polar}(c), there is almost no difference in the direction of the electric vector position angle (EVPA) between optical and radio frequencies. At most epochs, this direction is parallel to the jet axis, strongly implying that a common process, such as shocks or a helical field component, partially orders the magnetic field across the entire wavelength range. This ordering, superposed with the more chaotic field structure indicated by variability of the PD and sometimes the EVPA, agrees well with scenarios like the turbulent, extreme multi-zone model (TEMZ) developed by \citet{Marscher2014}. In Figure \ref{optical_radio_polar}\,(d) we see a persistent offset between the mean optical EVPA and inner ($\lesssim0.3$~mas) jet direction of $\sim30^\circ$ starting in 2013. We note that the EVPA during this time interval is similar to the downstream (0.5--1.5~mas) jet direction of knots ejected prior to 2010 (see Table \ref{kinematics}). This implies that the inner jet direction after mid-2013 might correspond to an elongated emission feature (e.g., a site of magnetic reconnections) oriented at an angle to the general jet flow.
Given the strong projection effects, this angle can be as small as the angle at which we view the jet axis, $1.4$--$6^\circ$. The large number of polarimetric measurements both in optical and radio bands allows us to look for minor differences between the EVPA behaviour at different wavelengths. To do this, in Fig.~\ref{histogram} we plot histograms of the EVPA distributions in optical $R$ band and radio frequencies of 229~GHz, 86~GHz, and 43~GHz. Unlike the plot in Fig.~\ref{optical_radio_polar}\,(c), we constrain the range of EVPAs to $[-90, 90]$ degrees. We immediately see a monotonic shift of the positions of maxima in the histograms with wavelength. We interpret this shift as a sign of Faraday rotation, and plot the values of these maxima against the square of the wavelength in the bottom panel of the same figure. The values of the rotation measure (RM) obtained in this way for the pairs of frequencies $10^{5.6}$--229~GHz, 229--86~GHz, and 86--43~GHz are, correspondingly, $-35100\pm3000, -7900\pm1000$, and $-1900\pm500$~rad$\cdot$m$^{-2}$. Neither the signs nor the frequency dependence of the RM contradict earlier determinations \citep[see, e.g.,][]{Kang2015, Park2018}. \begin{figure} \centering \includegraphics[width=\columnwidth,clip]{histogram.pdf} \caption{ Top: Histograms of the EVPA distribution in optical and radio bands. Bottom: Positions of maxima of the 2014--2018 distributions vs. square of wavelength. } \label{histogram} \end{figure} Figure~\ref{polar_degree} shows the dependencies of the optical PD on $R$-band flux density for different stages of activity (1)--(4), as defined in \S~\ref{interpretation}. The very different types of behaviour of PD vs. flux density may reflect different physical and geometrical conditions in the jet, as discussed above. Examples include changes in the Doppler factor due to shifts in viewing angle or variations in magnetic field strength and/or number of radiating electrons.
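The rotation-measure estimates given above follow from the linear dependence of the polarization angle on wavelength squared, $\chi(\lambda)=\chi_0+{\rm RM}\,\lambda^2$. A minimal sketch of the computation (with illustrative, not measured, EVPA values, and assuming the $n\times180\degr$ ambiguity has already been resolved):

```python
import math

C = 299792458.0  # speed of light, m/s

def rotation_measure(evpa1_deg, evpa2_deg, nu1_ghz, nu2_ghz):
    """Faraday rotation measure (rad/m^2) from EVPAs (degrees) at two
    frequencies (GHz), assuming chi(lambda) = chi_0 + RM * lambda^2."""
    lam1 = C / (nu1_ghz * 1e9)
    lam2 = C / (nu2_ghz * 1e9)
    dchi = math.radians(evpa1_deg - evpa2_deg)
    return dchi / (lam1**2 - lam2**2)

# Illustrative check: inject RM = -1900 rad/m^2 between 86 and 43 GHz
lam86 = C / 86e9
lam43 = C / 43e9
chi86 = 40.0  # arbitrary reference EVPA at 86 GHz, degrees
chi43 = chi86 + math.degrees(-1900.0 * (lam43**2 - lam86**2))
print(round(rotation_measure(chi43, chi86, 43.0, 86.0)))  # -1900
```

With more than two frequencies, a least-squares fit of $\chi$ against $\lambda^2$ (as in the bottom panel of the figure) is preferable to pairwise differences.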
Given the complexity of the observed behaviour, it is not surprising that time-limited campaigns may yield conclusions that are (or seem to be) contradictory. \begin{figure} \centering \includegraphics[width=\columnwidth,clip]{3c279_PD3.pdf} \caption{Behaviour of fractional polarization of 3C~279 at different stages of activity.} \label{polar_degree} \end{figure} Further important information that can be obtained from polarimetric data concerns possible spiral structure of the jet or helical structure of the magnetic field. During the past decade, several robust cases have been reported of optical EVPA rotations that mostly occurred close to optical, X-ray, and/or $\gamma$-ray outbursts, e.g., in BL~Lac \citep{Marscher2008}, 3C~279 \citep{Larionov2008}, PKS~1510-089 \citep{Marscher2010}, S5~0716+71 \citep{Larionov2013}, and CTA~102 \citep{Larionov2016a}. However, regular patterns in the behaviour of the polarization vector could also occur during quiescent states. \citet{Larionov2016b} suggested using a DCF analysis between normalized Stokes parameters $q$ and $u$ to uncover possible regular rotations that might be hidden by stochastic variability. If monotonic rotation of the EVPA is present during the temporal evolution of the polarization parameters, the curves of $q(t)$ and $u(t)$ must have a systematic shift relative to each other. In the opposite case, when only stochastic variability is present, no systematic shift is expected. Moreover, the direction of any rotation can be obtained from the sign of the DCF slope at zero lag: negative slope corresponds to counter-clockwise rotation and \textit{vice versa}. In Fig.~\ref{dcfqu} we plot the DCF between $q$ and $u$ in optical $R$ band, which shows that during 2008--2018 there is a clear indication of counter-clockwise rotation of the EVPA.
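The $q$--$u$ cross-correlation test can be illustrated with a minimal discrete-correlation sketch for unevenly sampled series (a simplified DCF in the spirit of the analysis, not the code used for this paper; the input series here are synthetic):

```python
import numpy as np

def dcf(t1, a, t2, b, lags, width):
    """Discrete correlation function between series a (at times t1) and
    b (at times t2), evaluated at the given lag centres in bins of the
    given width. Pairs are binned by their time separation."""
    ua = (a - a.mean()) / a.std()
    ub = (b - b.mean()) / b.std()
    dt = t2[None, :] - t1[:, None]      # all pairwise time separations
    udcf = ua[:, None] * ub[None, :]    # all pairwise products
    out = []
    for lag in lags:
        sel = np.abs(dt - lag) <= width / 2
        out.append(udcf[sel].mean() if sel.any() else np.nan)
    return np.array(out)

# A monotonically rotating EVPA gives q ~ cos(2*chi), u ~ sin(2*chi),
# so q and u trace the same oscillation with a systematic phase shift:
t = np.sort(np.random.default_rng(0).uniform(0, 100, 300))
q, u = np.cos(0.2 * t), np.sin(0.2 * t)
lags = np.arange(-10, 11)
corr = dcf(t, q, t, u, lags, 1.0)
# The DCF peak lands away from zero lag, and the sign of the slope at
# zero lag encodes the sense of rotation.
```

For a real data set the uncertainties of the individual measurements should also enter the normalization, as in the standard Edelson \& Krolik formulation.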
To estimate the significance of the correlation between Stokes $q$ and $u$ parameters, we have performed a Monte-Carlo simulation in a way similar to the one we used to estimate the significance of the $\gamma$-ray -- optical correlation. The only difference is the method we used to make a set of synthetic curves. The time dependence of the Stokes parameters does not show well defined peaks, and cannot be adequately fit by a set of double exponential peaks, as is possible for the $\gamma$-ray and optical curves. To make synthetic light curves of the Stokes parameters, we approximated them using a damped random walk (DRW) model \citep{Kelly2009, Zu2013}. We used the JAVELIN code \footnote{\url{https://bitbucket.org/nye17/javelin/}} to obtain parameters of the DRW models for both $q$ and $u$ data sets and to generate $10^4$ random light curves with the derived parameters. By correlating these synthetic light curves, we find that the maximum difference of 0.4 between the highest-magnitude positive and negative correlations of the observed $q$ and $u$ data sets is reached in only a fraction $3\cdot 10^{-4}$ of the cases. Therefore, we can consider this correlation to be statistically significant at the 0.03 per cent level. \begin{figure} \centering \includegraphics[width=\columnwidth,clip]{DCF_q-u_11-18.pdf} \caption{Discrete correlation function between normalized Stokes parameters $q$ and $u$ in optical $R$ band for the time interval 2008--2018 (top) and statistical distribution of the lag of the peak from Monte Carlo simulations (bottom). The grey area in the upper panel corresponds to $\pm 1 \sigma$ spread of simulated DCFs.} \label{dcfqu} \end{figure} \section{CONCLUSIONS}\label{conclusions} We have reported the results of decade-long (2008--2018) monitoring of the blazar 3C~279 from $\gamma$-rays to 1~GHz radio frequencies.
Our data set includes \emph{Fermi} and \emph{Swift} data, obtained alongside an intensive GASP-WEBT collaboration campaign, polarimetric and spectroscopic data collected during the same time interval, and roughly monthly sequences of VLBA images at 43 GHz. By analysing the multi-colour flux dependencies, we have isolated the spectra of the variable optical to near-IR continuum, which follows a power law with slope ranging from $-1.47$ to $-1.65$. Our multi-epoch optical spectra reveal changes in \ion{Mg}{ii} and \ion{Fe}{ii} emission-line flux. The \ion{Mg}{ii} line consists of a component at the systemic redshift of 3C~279, as well as a second component shifted by 3500 km~s$^{-1}$ toward the long-wavelength (red) side. The fluxes of both components are proportional to the optical continuum flux, although the redder component has a stronger dependence. If the redder feature is free falling toward the central black hole, we estimate that the line-emitting clouds lie $\sim0.6$ pc from the black hole. The reverberation response to the changing continuum -- which is synchrotron emission from the jet -- occurs in less than two months, restricting the location of the clouds to within $\sim10\degr$ of the jet axis as viewed by the black hole. This polar line emission is likely to be a significant, variable source of seed photons for `external' Compton scattering creating X-ray and $\gamma$-ray emission. The SED contains the two-hump shape typical of blazars. The ratio of the higher peak, which falls in the $\gamma$-ray range, to the synchrotron peak in the IR exceeds 100 during some outbursts. An analysis of the peak frequencies of the IR and $\gamma$-ray humps eliminates synchrotron self-Compton as the dominant $\gamma$-ray emission process during outbursts unless the magnetic field is extremely low, $\sim0.4$ mG, and the plasma extremely far from equipartition between the magnetic and particle energy densities.
Instead, EC emission appears to dominate at most epochs, which is consistent with the high $\gamma$-ray to IR-optical flux ratios. The X-ray and $\gamma$-ray light curves of 3C~279 are remarkably similar, with a time lag between them of $\la 3$ hours, indicating co-spatiality of the X-ray and $\gamma$-ray emission regions. The relation between the $\gamma$-ray and optical flux is quite complex, with a variety of slopes during different stages of activity. The dependence changes from linear to quadratic to a period when the $\gamma$-ray flux varied greatly during much more modest optical variations. At radio bands we have found progressive shifts of the most prominent light curve features with decreasing frequency, consistent with the emission peaking as it emerges from the $\tau=1$ surface. In addition, some fluctuations that emerge in the radio light curve disappear with decreasing frequency, which suggests different Doppler boosting of different radio-emitting zones in the jet. From our series of VLBA images at 43 GHz, we have identified 14 bright knots that appear to move at various superluminal speeds. Some of the trajectories curve, with the direction of `ejection' close to the `core' changing with time. In 2010, two extremely bright knots initially emerged to the south-east of the core before veering toward the usual south-western direction of the $\ga0.5$ mas-scale jet. Periods of different multi-wavelength behaviour of the light curves correspond to changes in the behaviour of the compact jet. The formation of three stationary emission features, which occurred after the direction of ejection became more consistent, and an increase in the apparent speed of moving knots up to $37c$, coincide with the highest-amplitude high-energy and optical outbursts.
The extremely high apparent velocity requires a bulk Lorentz factor exceeding 37, which is close to the value of $\sim50$ needed to explain extremely rapid variations during the highest-amplitude $\gamma$-ray flares in 2013--2015. Turbulent motions at the relativistic sound speed could increase this up to $\sim70$ when a portion of the plasma moves toward the line of sight relative to the systemic flow. The degree and position angle of linear polarization at optical and millimetre wavelengths (single-telescope and VLBA) changes with time, with the most rapid variability occurring at optical wavelengths. The correlation of variations in the optical $q$ and $u$ Stokes parameters corresponds to that expected in the case of a persistent helical magnetic field component or spiral motion of the radiating plasma. The patterns of multi-wavelength variability of the flux, polarization, and structure of the relativistic jet of 3C~279 change over time. The variations in the ratios of the peak fluxes of the high-energy and optical-IR portions of the SED can be ascribed to changes in the ratio of the seed-photon to magnetic energy densities in the plasma frame. Such changes can occur via (1) variations in the bulk Lorentz factor of the plasma, (2) shifts in location of the site of acceleration of relativistic electrons relative to the emission-line clouds, especially those lying within $\sim10^\circ$ of the jet axis, or (3) time-dependent reverberation of emission lines following variations in the UV continuum from either the jet or accretion disk. General variations of the flux can correspond to fluctuations in the number of radiating electrons, magnetic field (for synchrotron and SSC emission only), or Doppler factor, with the last of these caused by a variation in either the bulk Lorentz factor or direction of the velocity vector of the emitting plasma. The complexity of the variations in 3C~279 implies that most, or all, of these factors play a major role at different times.
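The kinematic figures quoted in these conclusions follow from the standard superluminal-motion relations $\beta_{\rm app}=\beta\sin\theta/(1-\beta\cos\theta)$ and $\Gamma_{\rm min}=\sqrt{1+\beta_{\rm app}^2}$. A short numerical check (a sketch, not the analysis code used for this paper):

```python
import math

def beta_app(gamma, theta_deg):
    """Apparent transverse speed (in units of c) for bulk Lorentz
    factor gamma at viewing angle theta (degrees)."""
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    th = math.radians(theta_deg)
    return beta * math.sin(th) / (1.0 - beta * math.cos(th))

def gamma_min(b_app):
    """Minimum Lorentz factor that can produce apparent speed b_app,
    attained at the angle where sin(theta) = 1/gamma."""
    return math.sqrt(1.0 + b_app**2)

print(round(gamma_min(37.0), 2))  # 37.01: beta_app = 37c requires Gamma > 37
# At the angle that maximizes beta_app, the apparent speed approaches gamma:
bmax = beta_app(37.0, math.degrees(math.asin(1.0 / 37.0)))
```

This is the sense in which $\beta_{\rm app}=37$ translates directly into the lower limit $\Gamma>37$ quoted above.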
\section*{ACKNOWLEDGEMENTS} We thank the referee for attentive reading and comments that helped to improve presentation of the manuscript. The data collected by the WEBT collaboration are stored in the WEBT archive at the Osservatorio Astrofisico di Torino - INAF (http://www.oato.inaf.it/blazars/webt/); for questions regarding their availability, please contact the WEBT President Massimo Villata ({\tt [email protected]}). The St. Petersburg University team acknowledges support from Russian Scientific Foundation grant 17-12-01029. The research at Boston University was supported in part by National Science Foundation grant AST-1615796 and NASA Fermi Guest Investigator grants 80NSSC17K0649, 80NSSC19K1504, and 80NSSC19K1505. The PRISM camera at Lowell Observatory was developed by K. Janes et al. at BU and Lowell Observatory, with funding from the NSF, BU, and Lowell Observatory. The emission-line observations made use of the Discovery Channel Telescope at Lowell Observatory, supported by Discovery Communications, Inc., Boston University, the University of Maryland, the University of Toledo, and Northern Arizona University. The VLBA is an instrument of the National Radio Astronomy Observatory. The National Radio Astronomy Observatory is a facility of the US National Science Foundation (NSF), operated under cooperative agreement by Associated Universities, Inc. This research has used data from the University of Michigan Radio Astronomy Observatory which was supported by the University of Michigan; research at this facility was supported by NASA under awards NNX09AU16G, NNX10AP16G, NNX11AO13G, and NNX13AP18G, and by the NSF under award AST-0607523. The Steward Observatory spectropolarimetric monitoring project was supported by NASA Fermi Guest Investigator grants NNX08AW56G, NNX09AU10G, NNX12AO93G, and NNX15AU81G. The Torino group acknowledges financial contribution from agreement ASI-INAF n.2017-14-H.0 and from contract PRIN-SKA-CTA-INAF 2016. I.A. 
acknowledges support by a Ram\'on y Cajal grant (RYC-2013-14511) of the ``Ministerio de Ciencia, Innovaci\'on, y Universidades (MICIU)'' of Spain and from MCIU through the ``Center of Excellence Severo Ochoa'' award for the Instituto de Astrof\'isica de Andaluc\'ia-CSIC (SEV-2017-0709). Acquisition and reduction of the POLAMI and MAPCAT data were supported by MICIU through grant AYA2016-80889-P. The POLAMI observations were carried out at the IRAM 30m Telescope, supported by INSU/CNRS (France), MPG (Germany) and IGN (Spain). The MAPCAT observations were carried out at the German-Spanish Calar Alto Observatory, jointly operated by the Max-Planck-Institut f\"ur Astronomie and the Instituto de Astrof\'isica de Andaluc\'ia-CSIC. The study is based partly on data obtained with the STELLA robotic telescopes in Tenerife, an AIP facility jointly operated by AIP and IAC. The OVRO 40-m monitoring program is supported in part by NASA grants NNX08AW31G, NNX11A043G and NNX14AQ89G, and NSF grants AST-0808050 and AST-1109911. T.H. was supported by the Academy of Finland projects 317383 and 320085. AZT-24 observations were made within an agreement between Pulkovo, Rome and Teramo observatories. The Submillimeter Array is a joint project between the Smithsonian Astrophysical Observatory and the Academia Sinica Institute of Astronomy and Astrophysics and is funded by the Smithsonian Institution and the Academia Sinica. The Abastumani team acknowledges financial support by the Shota Rustaveli National Science Foundation under contract FR/217950/16.
This research was partially supported by the Bulgarian National Science Fund of the Ministry of Education and Science under grants DN 08-1/2016, DN 18-13/2017, KP-06-H28/3 (2018) and KP-06-PN38/1 (2019), Bulgarian National Science Programme ``Young Scientists and Postdoctoral Students 2019'', Bulgarian National Science Fund under grant DN18-10/2017 and National RI Roadmap Projects DO1-157/28.08.2018 and DO1-153/28.08.2018 of the Ministry of Education and Science of the Republic of Bulgaria. GD and OV gratefully acknowledge observing grant support from the Institute of Astronomy and Rozhen National Astronomical Observatory via bilateral joint research project ``Study of ICRF radio-sources and fast variable astronomical objects'' (head - G. Damljanovic). This work was partly supported by the National Science Fund of the Ministry of Education and Science of Bulgaria under grant DN 08-20/2016, and by project RD-08-37/2019 of the University of Shumen. This work is a part of Projects No. 176011, No. 176004, and No. 176021, supported by the Ministry of Education, Science and Technological Development of the Republic of Serbia. M.G.\ Mingaliev acknowledges support through the Russian Government Program of Competitive Growth of Kazan Federal University. The Astronomical Observatory of the Autonomous Region of the Aosta Valley (OAVdA) is managed by the Fondazione Cl\'ement Fillietroz-ONLUS, which is supported by the Regional Government of the Aosta Valley, the Town Municipality of Nus and the ``Unit\'e des Communes vald\^otaines Mont-\'Emilius''. The research at the OAVdA was partially funded by several ``Research and Education'' annual grants from Fondazione CRT. This article is partly based on observations made with the IAC80 and TCS telescopes operated by the Instituto de Astrof\'isica de Canarias in the Spanish Observatorio del Teide on the island of Tenerife.
A part of the observations were carried out using the RATAN-600 scientific equipment (Special Astrophysical Observatory of the Russian Academy of Sciences). \bibliographystyle{mnras}
\section{Discussion} We have demonstrated interactive storytelling, a combination of interactive topic modeling and constrained search wherein documents are connected obeying user constraints on paths. User feedback is pushed deep into the computational pipeline and used to refine the topic model. Through experiments we have demonstrated the ability of our approach to provide meaningful alternative stories while satisfying user constraints. In future work, we aim to generalize our framework to a multimodal network representation where entities of various kinds are linked through a document corpus, so that constraints can be more expressively communicated. \section{Motivating Example} We present an illustrative example of how a storytelling algorithm can be steered toward desired lines of analysis based on user input. For our purposes, assume a vanilla storytelling algorithm (akin to~\cite{sto1,sto2}) based on heuristic search to prioritize the exploration of adjacent documents in order to reach a desired destination document. Adjacency here can be assessed in many ways. One approach is to use local representations such as a tf-idf representation and define similarity measures (e.g., Jaccard coefficient) over such local representations. A second approach is to utilize the normalized topic distribution generated using, e.g., LDA~\cite{lda}, to induce a distance between every pair of documents. Let us construct a toy corpus of $50$ documents wherein the terms are drawn from $9$ predefined \textit{themes} and some random \textit{noise} terms. Each theme is assumed to be represented by a collection of $4$ terms. An example of a theme is:\\ \textit{Theme 1}: \textbf{nation, terror, avert, orange}\\ \noindent Each document is generated by a single theme or by mixing two themes. In addition to the terms sampled from the themes, each document is assumed to also contain $2$ noise terms. 
(The noise terms are document-specific meaning two documents do not share the same terms.) Thus, we obtain $4$ terms for each of the $9$ themes and $2$ noise terms for each of the $50$ documents, so that the total number of terms is $9 \times 4 + 50 \times 2=136$. A pair of documents has an edge between them if they have at least one common term. (Since noise terms are not common between the documents, they are not responsible for edge formation.) We use the notation $d_n(p\cdots q)$ to denote a document. Here $n$ denotes the document index and $p,q$ are the two themes represented by the document. For example $d_1(5\cdots 6)$ is the first document in the corpus and contains terms from themes $5$ and $6$. \begin{figure}[!ht] \centering \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=0.8\textwidth]{fig/story1.pdf} \caption{} \label{fig:sto1} \end{subfigure} \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=0.8\textwidth]{fig/story2.pdf} \caption{} \label{fig:sto2} \end{subfigure} \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=0.8\textwidth]{fig/story3.pdf} \caption{} \label{fig:sto3} \end{subfigure} \caption{An illustration of the interactive storytelling algorithm.} \label{fig:story} \end{figure} Now consider the storytelling scenario from Fig.~\ref{fig:story}. The user desires to make a story from document $d_{43}(5\cdots 7)$ to document $d_{23}(1\cdots 3)$. $d_{43}(5\cdots 7)$ describes a bank robbery and $d_{23}(1\cdots 3)$ mentions a possible chemical attack. The constructed story is as follows: $d_{43}(5\cdots 7)\rightarrow d_{27}(1\cdots 7)\rightarrow d_{23}(1\cdots 3)$ using heuristic search (Fig.~\ref{fig:story} (a)). The first two documents are connected using (\textit{Theme 7}), involving the terms \textbf{bank, red, truck, aspen}. 
As can be seen, this story is not desirable, since the algorithm has conflated a bank robbery in Aspen using a red truck with the bankruptcy of the Red Trucking company (due to insufficient orange production in Aspen). Thus, although the connection between the two documents is established by the same set of terms, the contexts are very different. In this case the user realizes that the story does not make very good sense, and thus uses her domain knowledge to steer the story in the right direction. She aims to incorporate a story segment $<d_4(5\cdots 8), d_{22}(1\cdots 8)>$ into the construction. Here, $d_4(5\cdots 8)$ reports the closing of a chemical factory and $d_{22}(1\cdots 8)$ mentions a sweet odor emanating from an abandoned chemical factory (see Fig.~\ref{fig:story} (b)). The user believes that these two documents could play an important role in the final story. Incorporating this feedback, a story from $d_{43}$ to $d_{23}$ could potentially be $d_{43}(5\cdots 7) \rightarrow d_4(5\cdots 8) \rightarrow d_{22}(1\cdots 8) \rightarrow d_{23}(1\cdots 3)$ (i.e., the shortest path from $d_{43}(5\cdots 7)$ to $d_{23}(1\cdots 3)$ via $d_4(5\cdots 8)$ and $d_{22}(1\cdots 8)$). Note that there could be other documents necessary to be included in the path that are not explicitly provided in the user's feedback. Incorporating this feedback, the algorithm introduced in this paper will infer new topic definitions over the dictionary of terms, and subsequently new topic distributions for each document. In this case, a new story is generated: $d_{43}(5\cdots 7) \rightarrow d_4(5\cdots 8) \rightarrow d_{22}(1\cdots 8) \rightarrow d_{23}(1\cdots 3)$.
In this story (see Fig.~\ref{fig:story} (c)), the first two documents are connected by the terms \textbf{ski, tourist, destination, winter} (\textit{Theme 5}); the second and the third are linked via the terms \textbf{chemical, factory, recently, hiring} (\textit{Theme 8}) and the last two documents are connected by \textbf{nation, terror, avert, orange} (\textit{Theme 1}). This story thus suggests an alternative hypothesis for the user's scenario. \begin{figure}[hbtp] \centering \includegraphics[width = 0.8\textwidth]{fig/exmTDist.png} \caption{Probability of weights of terms before (green) and after (blue) feedback. The inferred topic distributions are shifted to induce proximity between documents so that the story is consistent with user feedback.} \label{fig:mass} \end{figure} After incorporating the user's feedback using our proposed algorithm, we see that \textbf{ski, tourist, destination, winter} has some mass for document $d_{22}(1\cdots 8)$ so that it is inferred closer to document $d_4(5\cdots 8)$ (see Fig.~\ref{fig:mass}). Similarly, the algorithm estimates positive probabilities for the terms \textbf{chemical, factory, recently, hiring} in document $d_{23}(1\cdots 3)$ which brings it closer to document $d_{22}(1\cdots 8)$. \section{Related Work} Related work pertaining to storytelling has been covered in the introduction. We survey topic modeling related work here. To the best of our knowledge, no existing work supports the incorporation of path-based constraints to refine topic models, as done here. \paragraph{Expressive topic models} The author-topic model~\cite{atm} is one of the popular extensions of topic models that aims to model how multiple authors contributed to a document collection. Works such as~\cite{genSpcePTM,combSem} extend basic topic modeling to include specific words or semantic concepts by incorporating notions of proximity between documents. 
In~\cite{tmbbow}, the authors move beyond bag-of-words assumptions and accommodate the ordering of words in topic modeling. Domain knowledge is incorporated in~\cite{domTM} in the form of Dirichlet forest priors. Finally, in~\cite{ctm}, correlated topic models are introduced to model correlations between topics. \paragraph{Incorporating external information} Supervised topic models are introduced in~\cite{slda}. Lu and Zhai~\cite{opInTM} propose a semi-supervised topic model to incorporate expert opinions into modeling. In~\cite{llda}, the authors incorporate user tags accorded to documents to place constraints on topic inference. The timestamps of documents are used in~\cite{dtm,tot} to model the evolution of topics in a large corpus. \paragraph{Visualizing topics} Wei et al.~\cite{tiara} propose TIARA, a visual exploratory text analytics system to observe the evolution of topics over time in a corpus. Crossno et al.~\cite{tv} develop a framework to visually compare document contents based on different topic modeling approaches. In~\cite{tpNet}, the authors present documents in topic space and depict inter-document connectivity as a network in a visual interface, simultaneously displaying community clustering. \paragraph{Interactive topic modeling} User feedback is incorporated in~\cite{itm}, wherein users can provide constraints about specific words that must appear in topics. An active learning framework to incorporate user feedback and improve topic quality is introduced in~\cite{alCTM}. \section{Experimental Results} We evaluate our interactive storytelling approach over a range of text datasets from intelligence analysis, such as {\it Atlantic Storm}, {\it Crescent}, {\it Manpad}, and the {\it VAST11} dataset from the IEEE Visual Analytics Science \& Technology Conference. Please see~\cite{hao-paper} for details of these datasets.
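Before turning to the results, the search that these experiments exercise can be made concrete with a minimal, self-contained sketch: documents are linked when they share terms, and a story is a shortest path forced through user-specified waypoints (toy data adapted from the motivating example; the helper names are illustrative, not the system's actual implementation):

```python
from collections import deque

def build_graph(docs):
    """Adjacency by shared terms: docs maps id -> set of terms."""
    g = {d: set() for d in docs}
    ids = list(docs)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            if docs[a] & docs[b]:  # at least one common term
                g[a].add(b)
                g[b].add(a)
    return g

def shortest_path(g, src, dst):
    """Breadth-first shortest path (fewest hops), or None."""
    prev, frontier = {src: None}, deque([src])
    while frontier:
        v = frontier.popleft()
        if v == dst:
            path = []
            while v is not None:
                path.append(v)
                v = prev[v]
            return path[::-1]
        for w in g[v]:
            if w not in prev:
                prev[w] = v
                frontier.append(w)
    return None

def story(g, src, dst, waypoints=()):
    """Chain shortest paths through the user-specified waypoints."""
    path, stops = [src], [src, *waypoints, dst]
    for a, b in zip(stops, stops[1:]):
        seg = shortest_path(g, a, b)
        if seg is None:
            return None
        path += seg[1:]
    return path

docs = {
    "d43": {"bank", "ski"}, "d4": {"ski", "chemical"},
    "d22": {"chemical", "terror"}, "d27": {"bank", "terror"},
    "d23": {"terror"},
}
g = build_graph(docs)
print(story(g, "d43", "d23"))                 # ['d43', 'd27', 'd23']
print(story(g, "d43", "d23", ("d4", "d22")))  # ['d43', 'd4', 'd22', 'd23']
```

The full approach differs in that distances come from the (re-inferred) topic space rather than hop counts, and the waypoint constraints feed back into the topic model itself rather than only into the path search.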
The questions we seek to answer are: \begin{enumerate} \item Can we effectively visualize the operations of the interactive storytelling as user feedback is incorporated? (Section \ref{sec:visIn}) \item Does the interactive storytelling framework provide better alternatives for stories than a vanilla topic model? (Section \ref{sec:ks}) \item Are topic reorganizations obtained from interactive storytelling significantly different from a vanilla topic model? (Section \ref{sec:proT}) \item Does our method scale to large datasets? (Section \ref{sec:clusS}) \item How effectively does the interactive storytelling approach improve over uninformed search (e.g., uniform cost search or breadth-first search)? (Section \ref{sec:com}) \end{enumerate} In what follows, unless otherwise stated, we fix the number of topics to be $T=20$ and set $\alpha=0.05/T$ and $\beta=0.01$. We also use the Gini index to remove the top 10\% of the terms as a pre-processing step for our text collections. \subsection{Visualizing interactive storytelling} \label{sec:visIn} We apply multidimensional scaling (MDS) over the normalized topic space as an aid to visualize the operations of the storytelling algorithm. For instance, the Manpad dataset is visualized as shown in Fig.~\ref{fig:mds_mand}. Consider a story from document $Doc$-$29$ to document $Doc$-$26$. Here $Doc$-$29$ reports that a member of an infamous terrorist organization has a meeting with a notorious arms dealer. $Doc$-$26$ reports that a team of suicide bombers plans to set off bombs in trains carrying tens of thousands of commuters under the Hudson River. The storytelling algorithm generates a story as: $Doc$-$29\rightarrow Doc$-$32\rightarrow Doc$-$26$. Here, $Doc$-$32$ identifies a person belonging to a terrorist organization. The user is not satisfied with this story and provides a constraint that the story should involve documents $Doc$-$44$ and $Doc$-$49$.
Here, $Doc$-$44$ describes that libraries in Georgia and Colorado have some connections to a web site. $Doc$-$49$ reports that a code number is found in the website linked to a charitable organization. Using this feedback a new story is generated: $Doc$-$29\rightarrow Doc$-$44\rightarrow Doc$-$49\rightarrow Doc$-$16\rightarrow Doc$-$26$. In addition to being consistent with the user's feedback, note that the algorithm has introduced a new document ($Doc$-$16$) which contains a report of police seizing documents involving specific names and dates. \subsection{Evaluating story options}\label{sec:ks} In this experiment, we seek to generate multiple stories using our interactive storytelling approach as well as a vanilla topic model, with a view to comparative evaluation. In this run over the Atlantic Storm dataset, the user specifies $CIA06$ as the starting document and $NSA16$ as the ending document. The default story is: $CIA06\rightarrow CIA37\rightarrow NSA19\rightarrow NSA16$. The user's feedback specifies $CIA08$ and $NSA09$ to be included in the final story. Incorporating this feedback yields: $CIA06\rightarrow CIA08\rightarrow DIA01\rightarrow NSA09\rightarrow NSA16$. We next use Yen's $k$-shortest path algorithm~\cite{kShortest} to generate a set of the top $10$ (alternative) stories. As shown in Table~\ref{table:story}, the top-ranked path in the interactive setting is indeed the shortest path in the new topic space that satisfies the given constraints. \begin{table*}[!t] \centering \scriptsize \caption{Top $10$ stories (shortest paths) generated from $CIA06$ to $NSA16$ using both a vanilla topic model and the interactive storytelling algorithm (using the Atlantic Storm dataset). The user's feedback requires that both $CIA08$ and $NSA09$ be included in the story.
The interactive storytelling algorithm updates the topic model wherein the shortest path indeed contains these documents.} \label{table:story} \begin{tabular}{r|c|r|c} \hline \multicolumn{1}{c|}{Top $10$ stories generated using vanilla topic model} & Path Length & \multicolumn{1}{c|}{Top $10$ stories generated using interactive storytelling} & Path Length \\ \hline CIA06, CIA37, NSA19, NSA16 & 2.84 & CIA06, CIA08, DIA01, NSA09, NSA16 & 1.39 \\ CIA06, CIA20, CIA21, NSA16 & 3.16 & CIA06, CIA12, NSA09, NSA16 & 1.93 \\ CIA06, CIA22, CIA21, NSA16 & 3.16 & CIA06, CIA33, DIA01, NSA09, NSA16 & 2.13 \\ CIA06, CIA20, CIA22, CIA21, NSA16 & 3.16 & CIA06, CIA22, NSA09, NSA16 & 2.13 \\ CIA06, CIA22, CIA20, CIA21, NSA16 & 3.16 & CIA06, CIA08, DIA01, FBI07, NSA16 & 2.20 \\ CIA06, CIA08, NSA21, NSA16 & 3.23 & CIA06, CIA33, FBI07, NSA16 & 2.22 \\ CIA06, CIA08, NSA21, NSA12, NSA16 & 3.23 & CIA06, CIA33, CIA08, DIA01, NSA09, NSA16 & 2.31 \\ CIA06, CIA08, NSA21, NSA13, NSA16 & 3.23 & CIA06, CIA11, FBI13, DIA01, NSA09, NSA16 & 2.33 \\ CIA06, CIA08, NSA21, NSA12, NSA13, NSA16 & 3.23 & CIA06, DIA02, DIA01, NSA09, NSA16 & 2.33 \\ CIA06, CIA08, NSA21, NSA18, NSA16 & 3.23 & CIA06, CIA08, CIA23, NSA16 & 2.34 \\ \hline \end{tabular} \end{table*} \begin{figure*}[!t] \centering \includegraphics[width=0.95\textwidth]{fig/mds_manpad.pdf} \caption{Visualizing documents using multidimensional scaling (Manpad dataset) before and after user feedback. Many documents are omitted for better visualization. The starting and the ending documents are shown in green. The documents in the initial story are shown in blue (and the story by solid lines). The story generated by the interactive storytelling algorithm is shown in the dotted line through the grey documents. 
Each document is represented by its top five terms having the highest posterior probability.} \label{fig:mds_mand} \end{figure*} \subsection{Proximity between topics}\label{sec:proT} We investigate topic proximity in terms of Manhattan distance in Fig.~\ref{fig:heat}. Here, rows denote topics from a vanilla topic model, and the columns correspond to topics inferred by the interactive storytelling algorithm. As shown in Fig.~\ref{fig:heat}, the diagonally dominant nature of the matrix is destroyed due to the introduction of user feedback, illustrating that the distributions of words underlying the topics are quite dissimilar. \iffalse shows the distance between a pair of topics from original LDA while the column shows the distance from LDA \textit{Interactive Storytelling}. The closest \textit{Interactive Storytelling} based topic for each LDA is shown in the diagonal. We can see that topics redefinition after incoporting feedback are quite different from the initial topics from vanilla LDA. Since topics are represented in terms of distribution the highest distance between two topics is 2 units. We can see that some of the distances along the diagonal are closer to 2. We also show top four terms from each computed by the both topic models in Table \ref{table:topLDA} and \ref{table:topILDA} for Atlantic Storm dataset. These terms also show that the topic redefinitions of two topic models are quite different from each other. \fi \begin{figure*}[!t] \centering \begin{tabular}{@{}c@{}c@{}c@{}} \includegraphics[width=0.3\textwidth,height=1.8in]{fig/heat_map_atlantic.pdf} & \includegraphics[width=0.3\textwidth,height=1.8in]{fig/heat_map_crescent.pdf} & \includegraphics[width=0.3\textwidth,height=1.8in]{fig/heat_map_manpad.pdf} \\ Atlantic Storm & Crescent & Manpad \\ \end{tabular} \caption{Manhattan distance between topic distributions before and after user feedback. Blue color denotes topics closest to each other.
As can be seen, the incorporation of feedback destroys the diagonal dominance of the matrix.} \label{fig:heat} \end{figure*} \iffalse \begin{table*} \centering \scriptsize \caption{Top terms from each topic computed by LDA} \label{table:topLDA} \begin{tabular}{cccccccccc} \hline \hline \textit{Topic 1} & \textit{Topic 2} & \textit{Topic 3} & \textit{Topic 4} & \textit{Topic 5} & \textit{Topic 6} & \textit{Topic 47} & \textit{Topic 8} & \textit{Topic 9} & \textit{Topic 10} \\ \hline safrygin & cooper & licens & montreal & octob & omari & shamrani & karim & ojinaga & arz \\ bugarov & raid & salah & apart & regist & bomb & zinedin & fund & shipment & moral \\ kolokov & occup & quso & french & moroccan & qaeda & militia & scholarship & list & bueno \\ institut & ralph & motel & rent & salman & attempt & rafiki & donat & just & name \\ \hline \textit{Topic 11} & \textit{Topic 12} & \textit{Topic 13} & \textit{Topic 14} & \textit{Topic 15} & \textit{Topic 16} & \textit{Topic 17} & \textit{Topic 18} & \textit{Topic 19} & \textit{Topic 20} \\ \hline diamond & derwish & odeh & doha & charlott & nami & bugarov & atmani & miami & shakur \\ doppl & somad & muslih & ayyash & blake & scada & sizov & sufaat & tour & cairo \\ ortiz & bafaba & island & insur & qasim & system & arabia & letter & book & stai \\ polish & amsterdam & unit & compani & rifai & usa & embassi & diseas & left & father \\ \hline \hline \end{tabular} \end{table*} \begin{table*} \centering \scriptsize \caption{Top terms from each topic computed by LDA from \textit{Interactive Storytelling}} \label{table:topILDA} \begin{tabular}{cccccccccc} \hline \textit{Topic 1} & \textit{Topic 2} & \textit{Topic 3} & \textit{Topic 4} & \textit{Topic 5} & \textit{Topic 6} & \textit{Topic 7} & \textit{Topic 8} & \textit{Topic 9} & \textit{Topic 10} \\ \hline moral & nami & omari & atmani & karim & licens & octob & qasim & shamrani & zinedin \\ fund & miami & qaeda & usa & scholarship & amsterdam & special & blake & group & car 
\\ name & system & book & orang & fund & somad & moroccan & charlott & militia & montreal \\ went & safrygin & baltimor & sizov & donat & bafaba & regist & cooper & unit & chicago \\ \hline \textit{Topic 11} & \textit{Topic 12} & \textit{Topic 13} & \textit{Topic 14} & \textit{Topic 15} & \textit{Topic 16} & \textit{Topic 17} & \textit{Topic 18} & \textit{Topic 19} & \textit{Topic 20} \\ \hline arz & sufaat & tour & bomb & doha & odeh & diamond & letter & derwish & shakur \\ salah & explos & motel & scada & insur & muslih & ortiz & post & rcmp & father \\ photograph & kansa & present & jehani & ayyash & british & polish & island & canadian & busi \\ technician & casino & car & laptop & central & citizen & doppl & diseas & raid & loan \\ \hline \end{tabular} \end{table*} \fi \subsection{Scalability to large corpora}\label{sec:clusS} With large datasets, such as the VAST11 dataset, we can fruitfully combine clustering with our framework to navigate the document collection (see Fig.~\ref{fig:mds_vast_clu}). Given a document collection, an initial clustering (e.g., k-means) can be utilized to identify broad groups of documents that can be discarded during the initial story construction. Here, assume that the user specifies $00795.txt$ and $00004.txt$ as the starting and ending document, respectively. The storytelling algorithm generates $00795.txt \rightarrow 014171.txt \rightarrow 00004.txt$ as the initial story (solid line). Note that this story ignores documents from the cluster displayed in red. Assume that the user now requires that documents from the red cluster also participate in the story. Based on an initial exploratory analysis, the user specifies that documents $02247.txt$ and $00082.txt$ should participate in the story. 
Based on this feedback, the interactive storytelling algorithm generates: $00795.txt\rightarrow 01486.txt\rightarrow 02247.txt\rightarrow 00082.txt\rightarrow 04134.txt\rightarrow 00004.txt$ (note the introduction of $04134.txt$ into the story). \begin{figure*}[!t] \centering \includegraphics[width=0.95\textwidth]{fig/mds_clu_vast.pdf} \caption{Scaling the storytelling methodology by integrating clustering. The initial story (solid line) from $00795.txt$ to $00004.txt$ avoided documents in the red cluster. After incorporating user feedback, the new story (dotted line) navigates through the red cluster.} \label{fig:mds_vast_clu} \end{figure*} \subsection{Comparing interactive storytelling vs uniform cost search}\label{sec:com} We now assess the performance of the constrained search process underlying interactive storytelling versus that of an uninformed search (e.g., uniform cost search). The comparison is shown in Fig.~\ref{fig:comp}. We use different distance thresholds $\xi$ to compute the effective branching factor, path length, and execution time. Fig.~\ref{fig:comp}(a, b, c) shows that the average effective branching factor increases with $\xi$: a higher $\xi$ means a node has more neighbors, so the branching factor increases. However, in the case of \textit{Interactive Storytelling}, path finding is more guided, so the average effective branching factor does not vary much. We can also see that using a heuristic decreases the average effective branching factor. The average path length, however, decreases with increasing $\xi$ (Fig.~\ref{fig:comp}(d, e, f)): increasing $\xi$ yields a larger neighborhood for each node, so the chance of reaching the goal becomes higher, resulting in a shorter average path length. For \textit{Interactive Storytelling}, the average path length is higher because the search has to visit the nodes specified by the user while searching for the shortest path.
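The effective branching factor reported in these comparisons can be recovered numerically: if the search generates $N$ nodes to find a solution at depth $d$, the effective branching factor $b^*$ is the positive root of $N + 1 = 1 + b^* + (b^*)^2 + \cdots + (b^*)^d$. A minimal sketch (the function name and bisection tolerance are our own, not from the paper):

```python
def effective_branching_factor(n_expanded, depth, tol=1e-9):
    """Numerically recover b* from N + 1 = 1 + b* + b*^2 + ... + b*^d,
    where N is the number of nodes generated by the search and depth is
    the number of edges on the solution path."""
    target = n_expanded + 1

    def total(b):
        # Geometric series 1 + b + b^2 + ... + b^depth.
        return sum(b ** i for i in range(depth + 1))

    # total(b) is increasing in b, so bisect on [1, N + 1].
    lo, hi = 1.0, float(n_expanded) + 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if total(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

For example, a search that generated $14$ nodes for a depth-$3$ solution has $b^* = 2$, since $1 + 2 + 4 + 8 = 15$.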
The execution times for both heuristic search and the uninformed search are almost the same (Fig.~\ref{fig:comp}(g, h, i)); however, for \textit{Interactive Storytelling} the execution time is much longer. Since it has to visit the nodes provided by the user, it traverses the search space in greater depth and thus takes more time on average to finish the search. \renewcommand{\arraystretch}{0.1} \begin{figure*}[!t] \centering \begin{tabular}{@{}c@{}c@{}c@{}} \includegraphics[width=0.3\textwidth,height=1.7in]{fig/bf_as.pdf} & \includegraphics[width=0.3\textwidth,height=1.7in]{fig/bf_cr.pdf} & \includegraphics[width=0.3\textwidth,height=1.7in]{fig/bf_mp.pdf} \\ (a) & (b) & (c) \\ \includegraphics[width=0.3\textwidth,height=1.7in]{fig/pl_as.pdf} & \includegraphics[width=0.3\textwidth,height=1.7in]{fig/pl_cr.pdf} & \includegraphics[width=0.3\textwidth,height=1.7in]{fig/pl_mp.pdf} \\ (d) & (e) & (f) \\ \includegraphics[width=0.3\textwidth,height=1.7in]{fig/t_as.pdf} & \includegraphics[width=0.3\textwidth,height=1.7in]{fig/t_cr.pdf} & \includegraphics[width=0.3\textwidth,height=1.7in]{fig/t_mp.pdf} \\ (g) & (h) & (i) \\ \end{tabular} \caption{Comparison of interactive storytelling, heuristic search and uniform cost search in terms of average effective branching factor (top), average path length (middle) and execution time (bottom). (left) Atlantic Storm. (middle) Crescent. (right) Manpad.} \label{fig:comp} \end{figure*} \section{Introduction} Faced with a constant deluge of unstructured (text) data and an ever increasing sophistication of our information needs, a significant research front has opened up in the space of what has been referred to as {\it information cartography}~\cite{metromaps1}. The basic objective of this space is to pictorially help users make sense of information through inference of visual constructs such as stories~\cite{kdd-storytelling2,sto1,dafna1,dafna2}, threads~\cite{threading2,themedelta,threading1}, and maps~\cite{metromaps2,metromaps3}.
By supporting interactions over such constructs, information cartography systems aim to go beyond traditional information retrieval systems in supporting users' information exploration needs. Arguably the key concept underlying such cartography is the notion of storytelling, which aims to `connect the dots' between disparate documents by linking starting and ending documents through a series of intermediate documents. There are two broad classes of storytelling algorithms, motivated by their different lineages. Algorithms focused on news articles~\cite{dafna1,dafna2} aim for {\it coherence} of stories wherein every document in the story shares an underlying common theme. Algorithms focused on domains such as intelligence analysis~\cite{anWo} and bioinformatics~\cite{shahriar-plosone} must often work with sparse information wherein a common theme is typically absent or at best tenuous. Such algorithms must leverage weak links to bridge diverse clusters of documents, and thus emphasize the construction and traversal of similarity networks. Irrespective of the motivations behind storytelling, all such algorithms provide limited abilities for the user to steer the story construction process. There is typically no mechanism to interactively steer the story construction toward desired story lines and avoid specific aspects that are not of interest. In this paper, we present an alternative approach to storytelling wherein the user can interactively provide `must use' constraints to preferentially support the construction of some stories over others. At each stage of our approach, the user can inspect the given story and the overall document collection, and express preferences to adjust the storyline, either in part or in whole. Such feedback is then incorporated into the story construction iteratively.
Our key contributions are: \begin{enumerate} \item Our interactive storytelling approach can be viewed as a form of `visual to parametric interaction' (V2PI~\cite{v2pi}) wherein users' natural interactions with documents in a workspace are translated into parameter-level interactions in terms of the underlying machine learning models (here, topic models). In particular, we demonstrate how high-level user feedback at the level of paths is translated down to redefine topic distributions. \item The underlying mathematical framework for interactive storytelling is a novel combination of hitherto uncombined components: distance measures based on (inferred) topic distributions, the use of constraints to define sets of linear inequalities over paths, and the introduction of slack and surplus variables to condition the topic distribution to preferentially emphasize desired terms over others. The proposed framework thus brings together notions from heuristic search, linear systems of inequalities, and topic models. \item We illustrate how just a modicum of user feedback can be fruitfully employed to redefine topic distributions and at the same time severely curtail the search process in navigating large document collections. Through experimental studies, we demonstrate the effectiveness of our interactive storytelling approach over multiple text datasets. \end{enumerate} \section{Framework} \iffalse \subsection{General Framework} The preceding example shows that finding patterns in document network is complicated. The conclusion about the data is as vital as the path to reach the conclusion. Due to the inherent limitation of human cognition when it comes to making sense of large corpus, the short term memory can process only as many as distinct pieces of information. A probable recourse is to devise visualization method that mine important piece of information from large corpus to allow user to delve into the parts of data in depth by interacting with the visualization.
Analyst's Workspace \cite{anWo}, Interactive Principal Component Analysis \cite{inPCA}, Interactive Multi-dimensional Scaling \cite{inMDS} are all examples of such visualization systems. This type of interactions can be called \textit{parameter level interaction} where user modifies the parameter of the model to update the visualization. The new visualization provides user a tool to update her existig set of hypotheses. One of the crucial point of this type of system is that user must have a deep knowledge about the parameter of the system to interact with it. Our \textit{Interactive Storytelling} algorithm is inspired by the $V2PI$ framework \cite{v2pi} in that sense that we want users to focus on interacting with documents while preventing them from parameter level interaction. This should have a broader appeal to the user because it allows them to concentrate on data visualization and hypothesis validation rather than having to understand the nuances of the model. The Interactive Storytelling involves the following steps: \begin{enumerate} \item Algorithm provides a visualization based on initial latent topics. In \textit{Interactive Storytelling}, the visualization is generated based on the shortest path in the document network between the starting ($s$) document and the ending document ($t$) given by the user. \item User evaluates the visualization to provide feedback based on her semantic reasoning of the data. This feedback comes as a sequence of documents which user expects to be in the story. This feedback is known as \textit{cognitive feedback}. Suppose the user defined story is called $P^*$. Here use is completely shielded from path searching and visualization methods. \item $P^*$ is not the shortest path from $s$ to $t$ in the initial topic space. Now we consider user's cognitive feedback $P^*$ being the shortest path over all the paths from $s$ to $t$ in some topic space yet to be discovered. This is known as \textit{parametric feedback}. 
We devise a system of inequities or relationships denoted by $\Re$ where the length or cost of the story is smaller than other alternate stories. The parameters are re-computed under these constraints. \item The system updates its visualization based on the new topic space which is consistent with the relationships $\Re$. This process continues for the duration of the analytic process. \end{enumerate} \fi A summary of notation as used in this paper is given in Table~\ref{table:sym}. We utilize the terms \textit{nodes} and \textit{documents} interchangeably in this paper. As described earlier, we impute the notion of distance between documents based on vector representations inferred from probabilistic topic models (here, LDA). Specifically, we use the topic distributions $\theta^{(d_i)}$ and $\theta^{(d_j)}$ for documents $d_i$ and $d_j$ (resp.) to calculate the distance or edge cost between $d_i$ and $d_j$. We posit an edge between two documents if they share any terms and the edge cost is lower than a fixed cost $\xi$. While a number of probabilistic measures of distance can be utilized, in this paper we adopt the Manhattan distance metric. The heuristic distance for a node $m$ is given by the straight-line distance to the ending (target) document $t$. Since the Manhattan distance obeys the triangle inequality, it is well known that it is an admissible heuristic for $A^*$ search. As is customary, we define a node evaluation function $fScore(l)$ as the sum of $gScore(l)$ and $hScore(l)$.
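The edge-cost computation described above can be sketched as follows. This is a minimal illustration assuming the topic distributions have already been inferred; the function names and the `shared_terms` predicate are our own, not part of the paper's implementation:

```python
def manhattan(theta_i, theta_j):
    """Edge cost c(e_ij): Manhattan (L1) distance between the
    normalized topic distributions of two documents."""
    return sum(abs(a - b) for a, b in zip(theta_i, theta_j))

def build_document_graph(theta, shared_terms, xi):
    """Posit an edge (i, j) when two documents share at least one
    term and the L1 cost falls below the threshold xi.

    theta: dict doc_id -> topic-distribution vector
    shared_terms: predicate (i, j) -> bool, True when the documents
                  have any term in common (hypothetical helper)
    xi: distance threshold
    """
    edges = {}
    docs = list(theta)
    for a in range(len(docs)):
        for b in range(a + 1, len(docs)):
            i, j = docs[a], docs[b]
            if shared_terms(i, j):
                cost = manhattan(theta[i], theta[j])
                if cost < xi:
                    edges[(i, j)] = cost
    return edges
```

Because each $\theta^{(d)}$ is normalized, the cost of any edge lies in $[0, 2]$, which is why a threshold $\xi$ on this scale prunes the graph.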
\begin{table}[!t] \small \centering \caption{Notation overview.} \label{table:sym} \begin{tabular}{>{\raggedleft\arraybackslash}p{5cm}|p{11cm}} \hline \hline Notation & Explanation \\ \hline $d_i$ & $i^{th}$ document in the corpus \\ \hline $T$ & total number of topics \\ \hline $s$ & starting document \\ \hline $t$ & ending/goal document \\ \hline $\xi$ & distance threshold \\ \hline $\theta^{(d_i)}=(\theta_1^{(d_i)},\cdots,\theta_T^{(d_i)})$ & $T$ dimensional vector of normalized topic distribution of document $d_i$ \\ \hline $e_{ij}$ & edge between $d_i$ and $d_j$ if they have any term in common \\ \hline $c(e_{ij})$ & cost between $d_i$ and $d_j$, $c(e_{ij}) = c_{ij}=\sum_{t=1}^T\Delta_{(ij)t}$, where $\Delta_{(ij)t}=\vert \theta_t^{d_i}-\theta_t^{d_j} \vert$ \\ \hline $P=<s,d_{P(1)},d_{P(2)},\cdots,d_{P(L-1)},t>$ & path $P$ from $s$ to $t$ with $L$ edges, $d_{P(i)}$ is the $i^{th}$ document after $s$ \\ \hline $c(P)$ & $c(P)=\sum_{e_{ij}\in P} c(e_{ij})$ \\ \hline $P^{*}$ & shortest path from $s$ to $t$ \\ \hline $d(i,j)$ & cost of the shortest path from $i$ to $j$ \\ \hline $gScore(l)$ & cost of the shortest path from $s$ to $l$ using $A^*$ search \\ \hline $hScore(m)$ & the heuristic distance (Manhattan distance) between the node $m$ and the goal node $t$ \\ \hline $\alpha_{e}$ & minimum cost any $e\in E-P^*$ is bounded by such that $P^*$ is the shortest path from $s$ to $t$ \\ \hline $\beta_{e^*}$ & maximum cost any $e^*\in P^*$ is bounded by such that $P^*$ is the shortest path from $s$ to $t$ \\ \hline $d^{e,k}(s,t)$ & cost of the shortest path from $s$ to $t$ with $c(e)=k$ \\ \hline $c^{e,k}(P)$ & cost of an arbitrary path $P$ with $c(e)=k$ \\ \hline $d(s,e,t)$ & cost of the shortest path from $s$ to $t$ including an edge $e\in E$ \\ \hline \hline \end{tabular} \end{table} \subsection{Obtaining User Feedback} After an initial story is generated by heuristic search, the user provides a sequence of documents that ought to be included in the story (i.e.,
between the documents $s$ and $t$). Suppose this sequence is $\mathcal{C}=<C_1, \cdots, C_K>$. The order of the documents is important, since the sequence is a reflection of desired story progression. We define the path $P^*$ as a concatenation of the shortest path between $s$ and $C_1$, followed by the nodes in $\mathcal{C}$, and finally the shortest path between $C_K$ and $t$. This process is done in the original LDA-inferred topic space. We will now undertake a constrained $A^*$ search incorporating the user feedback. \iffalse The user can generate the sequence in a number of ways. She can be motivated by reading the documents in $\mathcal{C}$ or looking for specific term in the documents which may be pertinent to connecting $s$ and $t$. There could be several reason to include a document into the story. However we are ignoring these scenarios. We assume that there is visual analytic platform which can be used by the user to generate the sequence $\mathcal{C}$. We incorporate the feedback in an effort to search for parameters that will most likely represent a document as a mixture of topics and which will be consistent with the provided feedback feedback in $\Re$. . \fi \begin{figure}[!htbp] \centering \includegraphics[width=\textwidth]{fig/conAStar.pdf} \caption{(a) shows the heuristic distance $h(D_2,t)$ from original $A^*$ search. (b) depicts $h^*(D_2,t)$ based on constrained $A^*$ search. (c) depicts $h^*(D_2,t)$ when feedback nodes are $\mathcal{C}=<C_1,C_2>$ where ancestry of $D$ is given by $\mathcal{A}(D)=C_1$. Dashed line shows the shortest path from $s$ to $D$.} \label{fig:conAStar} \end{figure} \subsection{Constrained {\large $A^*$} search}\label{sec:cas} Now we discuss the incorporation of the user's feedback into the story. Consider the case where the user insists that a document $C$ (not in the initial story) should be included in the story. This case can be easily extended to a sequence of documents $\mathcal{C}=<C_1, \cdots, C_K>$. 
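The construction of $P^*$ described above can be sketched with a plain Dijkstra search in the original topic space. This is a hedged illustration: the function names are ours, and the edge costs are assumed precomputed as in Table~\ref{table:sym}:

```python
import heapq

def dijkstra_path(edges, s, t):
    """Shortest path from s to t over an undirected weighted graph.
    edges: dict (u, v) -> cost."""
    adj = {}
    for (u, v), c in edges.items():
        adj.setdefault(u, []).append((v, c))
        adj.setdefault(v, []).append((u, c))
    dist, prev = {s: 0.0}, {}
    heap = [(0.0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == t:
            break
        if d > dist.get(u, float('inf')):
            continue  # stale heap entry
        for v, c in adj.get(u, []):
            nd = d + c
            if nd < dist.get(v, float('inf')):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    # Walk predecessors back from t to recover the path.
    path, node = [t], t
    while node != s:
        node = prev[node]
        path.append(node)
    return path[::-1]

def feedback_path(edges, s, t, constraints):
    """P*: shortest path from s to C_1, followed by the ordered
    feedback documents C_1 ... C_K themselves, and finally the
    shortest path from C_K to t."""
    first = dijkstra_path(edges, s, constraints[0])
    last = dijkstra_path(edges, constraints[-1], t)
    return first + list(constraints[1:]) + last[1:]
```

The concatenation mirrors the definition in the text: only the segments before $C_1$ and after $C_K$ are filled in by shortest-path search; the feedback documents themselves appear in the user-specified order.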
Suppose the adjacent nodes of a document $D$ are denoted by $\mathcal{N}(D)$. There are five adjacent nodes to $D$ in Fig.~\ref{fig:conAStar}. The heuristic distance between a neighbor (say, $D_2$) and the ending document $t$ is given by $h(D_2,t)$ in the original $A^*$ search. Our redefined heuristic distance for constrained $A^*$ search is given by $h^*(D_2,t)=h(D_2,C)+h(C,t)$. If the feedback is a sequence of documents $\mathcal{C}=<C_1, \cdots, C_K>$ then $h^*(D_2,t)=h(D_2,C_1)+h(C_1,C_2)+\cdots +h(C_K,t)$. However, while $h^*$ ensures that the $fScore$ of a document depends on the path via the sequence of feedback nodes $\mathcal{C}$, it must also consider the subset of $\mathcal{C}$ that already belongs to the shortest path from $s$ to $D$ to estimate the heuristic distance $h^*(D,t)$. We define a property named \textit{Ancestry} that keeps track of the subset of the feedback nodes that already exists in the shortest path from $s$ to the said node. The ancestry $\mathcal{A}(D_i)$ of an arbitrary neighbor $D_i$ of $D$ (so that $predecessor(D_i)=D$) is defined as $\mathcal{A}(D_i)=\mathcal{A}(D)$ if $D_i$ is not a feedback node. If $D_i$ is the feedback node immediately after the subsequence $\mathcal{A}(D)$ in $\mathcal{C}$, then $\mathcal{A}(D_i)=\mathcal{A}(D)\cup\{D_i\}$. The starting node $s$ has an empty ancestry. A node having a longer subsequence of $\mathcal{C}$ in its ancestry compared to another is said to have a \textit{richer} ancestry. A node with richer ancestry is always preferred. If ancestries are comparable, for an open node the predecessor with the smaller $gScore$ is chosen, while for a closed node the predecessor with the smaller $fScore$ is chosen. \begin{figure}[!t] \centering \includegraphics[width=\textwidth]{fig/feedback.pdf} \caption{(left) The path with green nodes is the initial story generated by the storytelling algorithm and hence the shortest path from $s$ to $t$ before incorporating feedback.
The gray paths (dashed and solid) are alternate stories abandoned by the $A^*$ search. (right) Story after user feedback where the user-preferred story $P^*$ is shown in blue. This is not the shortest path in the current topic space. The documents that the user desires to be in the story are shown in large circles. We intend to estimate the topic space where the blue path ($P^*$) is shorter than all the other alternate paths from $s$ to $t$.} \label{fig:feedback} \end{figure} \subsection{Alternate/Candidate Stories} The nodes explored by $A^*$ search in the initial topic space (the set of open and closed nodes) induce an acyclic graph $G(V, E)$. The orange nodes in Fig.~\ref{fig:feedback} are open nodes in such a graph. Denote the set of open nodes by $\mathcal{O}$. Any path from $s$ to $t$ via $o \in \mathcal{O}$ is a candidate story generated by $A^*$ search. Let us denote the path via $o$ by $P^{(o)}$. Now assume $\mathcal{O}$ has $O$ open nodes. To enforce the user feedback that $P^*$ be the shortest path over all paths from $s$ to $t$ we define the following system of inequalities: \small \begin{align}\label{eqn:inq} c(P^*) &\leq c(P^{(o_1)}) \nonumber \\ &\vdots \nonumber \\ c(P^*) &\leq c(P^{(o_O)}) \end{align} \normalsize If we break each inequality in terms of topics then we obtain: \begin{align} \sum_{t=1}^T(\Delta_t^*-\Delta_t^{(o_1)}) &\leq 0 \nonumber \\ &\vdots \nonumber \\ \sum_{t=1}^T(\Delta_t^*-\Delta_t^{(o_O)}) &\leq 0 \label{eqn:inqT} \end{align} In addition to this set of inequalities, we also add another set of inequalities imposing that the cost of an edge in the new topic space, $c(e)$, is at least as much as the cost of the edge in the initial topic space, $c_0(e)$: \small \begin{equation}\label{eqn:e} c(e) \geq c_0(e), e \in E \end{equation} \normalsize This constraint is imposed so that the proximity of documents does not change drastically, as otherwise this might disorient users.
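Viewed over edge costs, each relationship $c(P^*) \leq c(P^{(o)})$ in Eqn.~\ref{eqn:inq} is linear: edges on $P^*$ contribute with coefficient $+1$, edges on $P^{(o)}$ with $-1$, and edges shared by both paths cancel. A minimal sketch of assembling this system (function names are ours; paths are given as node sequences):

```python
from collections import Counter

def path_edges(path):
    """Undirected edge multiset of a path given as a node sequence."""
    return Counter(tuple(sorted(e)) for e in zip(path, path[1:]))

def inequality_row(p_star, p_alt):
    """Coefficients of c(P*) - c(P^(o)) <= 0 as a sparse vector over
    edge costs: +1 for edges on P*, -1 for edges on the alternative;
    edges common to both paths cancel out."""
    row = path_edges(p_star)
    row.subtract(path_edges(p_alt))
    return {e: k for e, k in row.items() if k != 0}

def inequality_system(p_star, alternates):
    """One row per candidate path P^(o), mirroring Eqn. 1."""
    return [inequality_row(p_star, p) for p in alternates]
```

Each resulting row, evaluated against topic-space edge costs $c(e_{ij})=\sum_t \Delta_{(ij)t}$, is exactly one constraint of the topic-wise system in Eqn.~\ref{eqn:inqT}.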
\subsection{Deriving Systems of Inequalities} $A^*$ is a heuristic algorithm to find the shortest path between two nodes. Given the shortest path, finding the edge costs, or upper and lower limits thereof, is thus an \textit{inverse shortest path problem}. Our goal is to find a normalized topic distribution $\theta^{(d_i)}$ so that $P^*$ is actually the shortest path in the new topic space. In our approach, we obtain the inequalities in Eqn. \ref{eqn:inqT} by using the following observation: if the cost of an edge $e^*\in P^*$ crosses the upper threshold $\beta_{e^*}$ or the cost of an edge $e \not\in P^*$ falls below the lower threshold $\alpha_e$, all other edge costs being fixed, $P^*$ is no longer the shortest path from $s$ to $t$. Therefore the condition for $P^*$ being the shortest path is \small \begin{align} c(e^*)\leq \beta_{e^*}, \forall e^*\in P^* \nonumber\\ c(e)\geq \alpha_e, \forall e \in E-P^* \end{align} \normalsize Upper and lower shortest path tolerances are presented in~\cite{tolSP} as: \small \begin{align}\label{eqn:ab} \beta_{e^*}&=d^{e^*,\infty}(s,t)-c(P^*)+c(e^*) \nonumber\\ \alpha_e&=c(P^*)-d^{e,0}(s,t) \end{align} \normalsize Therefore the inequalities for the edges become: \small \begin{align}\label{eqn:pstar} c(P^*)&\leq d^{e^*,\infty}(s,t), \forall e^* \in P^* \\ c(e)&\geq c(P^*)-d^{e,0}(s,t), \forall e \in E-P^* \label{eqn:eP} \end{align} \normalsize Note that for the first equation in Eqn.~\ref{eqn:ab}, $\beta_{e^*}$ is the difference of two path costs: the cost of the shortest path from $s$ to $t$ that avoids $e^*$ (imposing an infinite cost for $e^*$), $d^{e^*,\infty}(s,t)$, and the minimum cost of $P^*$ with $e^*$ in the path (imposing a zero cost for $e^*$), so that $c(P^*)-c(e^*)=c^{e^*,0}(P^*)$. Notice also that if $e=(l,m)$, then $d^{e,0}(s,t)=\min(c(P^*),d(s,l)+d(m,t))$. For the second equation, if the shortest path from $s$ to $t$ does not change even with $c(e)=0$, i.e. $d^{e,0}(s,t)=c(P^*)$, then the lower tolerance for $c(e)$ is zero.
However, if the constraint $c(e)=0$ favors a different path through $e$ (i.e., not $P^*$), the lower tolerance for $e$ is given by the drop in path cost which this alternate path allows over $P^*$. We use the fact that our choice of $hScore$ is an admissible heuristic in $A^*$ search to simplify our formulation of inequalities. Due to admissibility, $hScore(m)\leq d(m,t)$, and consequently $gScore(l)+hScore(m)\leq d(s,l)+d(m,t)$. Replacing $d^{e,0}(s,t)$ with the lower heuristic estimate $gScore(l)+hScore(m)$ in Eqn. \ref{eqn:eP}, we obtain a stricter inequality: \small \begin{equation} \left.\begin{matrix} c(e)\geq c(P^*)-gScore(l)-hScore(m)\\ c(e)\geq 0 \end{matrix}\right\} \forall e \in E-P^* \end{equation} \normalsize \begin{figure}[!t] \centering \includegraphics[width=0.5\textwidth]{fig/open.pdf} \caption{Dashed line shows the subtree $\tau(e^*)$ and the solid line shows the subtree $\tau^C(e^*)$. The candidate open nodes in $\tau^C(e^*)$ for Eqn. \ref{eqn:pstar} are shown in green. Red nodes are open nodes in $\tau(e^*)$ and do not contribute in Eqn. \ref{eqn:pstar}. The shortest path from $s$ to $t$ avoiding $e^*$ is the shortest path from $s$ to $t$ via any of the green nodes.} \label{fig:open} \end{figure} The cost of the shortest path avoiding $e^*\in P^*$ is given by $d^{e^*,\infty}(s,t)=\min_{e\in E-P^*}(d(s,e,t)|e^*\not\in d(s,e,t))$. In Fig. \ref{fig:open}, suppose the red edge is one such $e^*\in P^*$. Let the subtree induced by the $A^*$ search following $e^*$ be $\tau(e^*)$ (shown in dashed lines) and the remainder of the tree be $\tau^C(e^*)$ (shown in solid lines). Based on the search process, we would expect the shortest path from $s$ to $t$ via any edge in $\tau(e^*)$ to have $e^*$ in it. Therefore $d^{e^*,\infty}(s,t)$ should be based on paths via edges in $\tau^C(e^*)$. Since we have path costs estimated heuristically by the $A^*$ search ($fScores$), we can use these for the open nodes in $\tau^C(e^*)$.
These open nodes are shown in green in Fig.~\ref{fig:open}. Hence in this setting, the inequality $c(P^*)\leq \min_{e\in E-P^*}(d(s,e,t)|e^*\not\in d(s,e,t))$ is replaced by the following set of inequalities: \small \begin{equation}\label{eqn:o} c(P^*) \leq fScore(o), \forall o\text{ in the set of open nodes in }\tau^C(e^*) \end{equation} \normalsize Due to the admissibility of $hScore$, $fScore$ also underestimates the true distance, so we are using a stricter inequality in Eqn. \ref{eqn:o}. If this process is repeated for all $e^*\in P^*$, our set of inequalities consists of the user-defined path $P^*$ being compared against all paths defined by the open nodes in the original $A^*$ search, as given in Eqn \ref{eqn:inqT}. \subsection{Modeling Relationships by Auxiliary Variables} In the previous section we formulated the user feedback as a set of relationships, where each relationship is an inequality in terms of path lengths. Since the distance metric is based on the normalized topic distribution, we explicitly show the dependence of an individual relationship on $\pmb{\theta}$. For an inequality $r_o\equiv c(P^*)\leq c(P^{(o)})$ in Eqn. \ref{eqn:inqT}, we introduce a slack random variable $\lambda_o$ (i.e. $\lambda_o\leq\epsilon$ for some $\epsilon\leq 0$) as an auxiliary variable with expectation $\pmb{E}(\lambda_o)=\mu_o(\pmb{\theta})=c(P^*)-c(P^{(o)})$, so that $\mu_o(\pmb{\theta})=\sum_{t=1}^T(\Delta_t^*(\pmb{\theta})-\Delta_t^{(o)}(\pmb{\theta}))$. Similarly, for a relationship $r_e\equiv c(e)\geq c_0(e)$ in Eqn \ref{eqn:e} we define a surplus random variable $\lambda_e$, where $\lambda_e$ is positive with expectation given by $\pmb{E}(\lambda_e)=\mu_e(\pmb{\theta})=c(e)-c_0(e)$. Suppose the distribution of the auxiliary variable is given by $\lambda_o \sim f(\cdot\vert\pmb{\theta})$. The random variable $\lambda_o$ measures the difference in path lengths between the user-defined path $P^*$ and an alternate $P^{(o)}$.
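A slack variable of this kind, a normal draw truncated to $\lambda_o\leq\epsilon$, can be sampled by simple rejection. This is a sketch under our own naming and is only efficient when $\mu_o$ is not far above $\epsilon$; a production sampler would use an inverse-CDF or tail-specific method:

```python
import numpy as np

def sample_slack(mu, eps=-0.1, rng=None, max_tries=10000):
    """Draw lambda_o ~ N(mu, 1) truncated to lambda_o <= eps by rejection."""
    rng = np.random.default_rng() if rng is None else rng
    for _ in range(max_tries):
        x = rng.normal(mu, 1.0)
        if x <= eps:            # accept only draws in the truncation region
            return x
    raise RuntimeError("acceptance region too unlikely for naive rejection")

rng = np.random.default_rng(0)
draws = [sample_slack(-1.0, rng=rng) for _ in range(100)]
```

Every accepted draw lies in the truncation region by construction.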
If $\mu_o(\pmb{\theta})$ is zero, the relationship enforces that $P^*$ is exactly as costly as the alternate path $P^{(o)}$. The more negative its mean $\mu_o(\pmb{\theta})$, the larger we expect the cost of $P^{(o)}$ to be compared to that of $P^*$. This ensures that the topic space $\pmb{\theta}$ satisfies the relationship $c(P^*) \leq c(P^{(o)})$. Now conditional on a known $\pmb{\theta}$, the joint distribution of the auxiliary variables (both slack and surplus) and the observed feedback $\Re$ is given below: \begin{equation}\label{eqn:jd} \begin{split} f(\Re,\pmb{\lambda}|\pmb{\theta}) &\propto \prod_{o\in \mathcal{O}}\{ \mathbbm{1}_{c(P^*) \leq c(P^{(o)})}\mathbbm{1}_{\lambda_o\leq\epsilon} +\mathbbm{1}_{c(e)\geq c_0(e)}\mathbbm{1}_{\lambda_o\geq 0} \}f(\lambda_o|\pmb{\theta}) \end{split} \end{equation} Here, $\mathbbm{1}_x$ is an indicator variable which is one if condition $x$ holds and zero otherwise. Our goal is to find a set of surplus and slack variables $\pmb{\lambda}$ that maximizes the probability in Eqn \ref{eqn:jd}. Now let $f(\lambda_o|\pmb{\theta})$ be normally distributed with mean $\mu_o(\pmb{\theta})$ and variance 1. By marginalizing over the auxiliary variables $\lambda_o$, our formulation is the same as modeling the probability of satisfying a relationship using the cumulative normal distribution: \small \begin{align} P(c(P^*) \leq c(P^{(o)})|\pmb{\theta}) &= 1 - \Phi(\mu_o(\pmb{\theta})-\epsilon), \text{for Eqn \ref{eqn:inqT}} \nonumber \\ P(c(e)\geq c_0(e)|\pmb{\theta}) &= \Phi(\mu_e(\pmb{\theta})), \text{for Eqn \ref{eqn:e}} \end{align} \normalsize Here, for a standard normal variable $Z$, $\Phi(z)=P(Z\leq z)$. This approach is very similar to the use of auxiliary variables in probit regression \cite{probit}. In probit regression the mean of the auxiliary variable is modeled by a linear predictor to maximize the discrimination between the successes and failures in the data.
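These two marginal satisfaction probabilities can be evaluated directly with the standard normal CDF; the short sketch below (our own naming, using only the standard library) makes the probit link explicit:

```python
from math import erf, sqrt

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def p_path_satisfied(mu_o, eps=-0.1):
    """P(c(P*) <= c(P^(o)) | theta) = 1 - Phi(mu_o - eps)."""
    return 1.0 - Phi(mu_o - eps)

def p_edge_satisfied(mu_e):
    """P(c(e) >= c_0(e) | theta) = Phi(mu_e)."""
    return Phi(mu_e)
```

As expected, the more negative $\mu_o$, the closer the path relationship's satisfaction probability is to one.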
In our case, the satisfiability of a user-defined relationship is a success, and the probability of satisfying the relationship is modeled by the mean of the auxiliary variable. The mean of the auxiliary variable is a function of the topic space $\pmb{\theta}$ on which the distances are defined. Our goal is to search for a topic space $\pmb{\theta}$ which explains the term distribution of the documents and satisfies as many of the relationships in $\Re$ as possible. Truncating a slack variable $\lambda_o$ to a negative region specified by $\epsilon$ allows us to search for a $\pmb{\theta}$ that shrinks the mean $\mu_o(\pmb{\theta})$ to a negative value. The complete hierarchical model using the term-document data $\pmb{\eta}$ and the relationship data $\Re$ is presented below: \begin{align} \begin{split} f(\Re,\pmb{\lambda}|\pmb{\theta}) &\propto \prod_{o\in \mathcal{O}}\{\mathbbm{1}_{c(P^*) \leq c(P^{(o)})}\mathbbm{1}_{\lambda_o\leq\epsilon} +\mathbbm{1}_{c(e)\geq c_0(e)}\mathbbm{1}_{\lambda_o\geq 0} \}N(\lambda_o|\mu_o(\pmb{\theta}),1) \end{split} \nonumber \\ \eta_i|z_i,\phi^{(z_i)} &\sim Discrete(\phi^{(z_i)}) \nonumber \\ \phi &\sim Dirichlet(\beta) \nonumber \\ z_i|\theta^{(d_i)} &\sim Discrete(\theta^{(d_i)}) \nonumber \\ \theta &\sim Dirichlet(\alpha) \end{align} \subsection{Inference} We use Gibbs sampling to compute the posterior distributions for $\mathbf{z}, \pmb{\lambda}$ and $\pmb{\theta}$. The conditional posterior distribution for $z_i$ is given below: \small \begin{equation}\label{eqn:sampT} p(z_i=j|\mathbf{z}_{(-i)},\pmb{\eta})\propto p(\eta_i|z_i=j,\mathbf{z}_{(-i)},\pmb{\eta}_{(-i)})p(z_i = j|\mathbf{z}_{(-i)},\pmb{\eta}_{(-i)}) \end{equation} \normalsize The sampling of topics for the terms $\pmb{\eta}$ is the same as in vanilla LDA~\cite{pnastm}.
\small \begin{equation} p(z_i = j | \mathbf{z}_{(-i)},\pmb{\eta}) \propto \dfrac{\beta+n^{(\eta_i)}_{(-i,j)}}{M\beta+n^{(\cdot)}_{(-i,j)}} \times \dfrac{\alpha+n_{(-i,j)}^{(d_i)}}{T\alpha + n_{(-i,\cdot)}^{(d_i)}} \end{equation} \normalsize The full conditional distribution for $\lambda_o$ is given below: \small \begin{equation} p(\lambda_o | \pmb{\theta},\Re) = \begin{cases} N(\cdot|\mu_o(\pmb{\theta}),1),\lambda_o\leq \epsilon, \text{if } r_o \text{ is } \leq \text{ type}\\ N(\cdot|\mu_o(\pmb{\theta}),1),\lambda_o>0, \text{if } r_o \text{ is } > \text{ type} \end{cases} \end{equation} \normalsize The full conditional distribution for the topic distribution of document $d_j$ is given below: \small \begin{align} \begin{split} p(\theta^{(d_j)}|\pmb{\theta}^{(-d_j)},\pmb{\lambda},\pmb{z}) &\propto \prod_{z_i\in d_j}p(z_i|\theta^{(d_j)})p(\theta^{(d_j)}|\alpha)\times \prod_{o\in \mathcal{O}}N(\lambda_o|\mu_o(\pmb{\theta}),1) \end{split}\nonumber \\ &\propto p(\theta^{(d_j)}|\pmb{z},\alpha)\prod_{o\in \mathcal{O}}N(\lambda_o|\mu_o(\pmb{\theta}),1) \nonumber \\ &\propto \prod_{t=1}^T \left( \theta_t^{(d_j)} \right)^{(n_t^{(d_j)}+\alpha)-1}\prod_{o\in\mathcal{O}}N(\lambda_o|\mu_o(\pmb{\theta}),1) \end{align} \normalsize since $p(\theta^{(d_j)}|\pmb{z},\alpha)=Dirichlet(n_t^{(d_j)}+\alpha)$. Here $n_t^{(d_j)}$ denotes the number of terms in document $d_j$ assigned to topic $t$ based on $\pmb{z}$. If $d_j$ does not belong to $\Re$, then $\theta^{(d_j)}$ is sampled from $Dirichlet(n_t^{(d_j)}+\alpha)$; otherwise, we sample from $p(\theta^{(d_j)}|\pmb{\theta}^{(-d_j)},\pmb{\lambda},\pmb{z})$ by a Metropolis-Hastings step. We use a proposal strategy based on a stick-breaking process to allow better mixing. The stick-breaking process bounds each component of the topic distribution of document $d_j$ between zero and one and constrains the components to sum to one.
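The collapsed Gibbs update for $z_i$ above translates directly into a few lines of array arithmetic. The following is an illustrative sketch with our own naming (count arrays exclude the current token, as in the equation):

```python
import numpy as np

def topic_posterior(w, d, n_wt, n_t, n_dt, n_d, alpha=0.1, beta=0.1):
    """Normalized p(z_i = j | z_-i, eta) over topics j.

    Count arrays exclude the current token i:
      n_wt[w, j]: term w assigned to topic j     n_t[j]: tokens in topic j
      n_dt[d, j]: tokens of doc d in topic j     n_d[d]: tokens in doc d
    """
    M, T = n_wt.shape
    p = (beta + n_wt[w]) / (M * beta + n_t) \
        * (alpha + n_dt[d]) / (T * alpha + n_d[d])
    return p / p.sum()

# toy counts: M = 3 terms, T = 2 topics, one document
n_wt = np.array([[1, 0], [0, 2], [1, 1]], dtype=float)
probs = topic_posterior(w=0, d=0, n_wt=n_wt, n_t=n_wt.sum(axis=0),
                        n_dt=np.array([[1.0, 2.0]]), n_d=np.array([3.0]))
```

A topic is then drawn with, e.g., `np.random.choice(len(probs), p=probs)`.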
We first sample random variables $u_1,\cdots,u_{T-1}$ truncated between zero and one and centered by $\pmb{\theta}^{(d_j)}$ using a proposal distribution $q(\cdot)$: \small \begin{align} u_1&\sim q\left(\cdot|\theta_1^{(d_j)}\right), 0<u_1<1 \nonumber \\ u_2&\sim q\left(\cdot|\dfrac{\theta_2^{(d_j)}}{1-u_1}\right), 0<u_2<1 \nonumber \\ u_3&\sim q\left(\cdot|\dfrac{\theta_3^{(d_j)}}{(1-u_1)(1-u_2)}\right), 0<u_3<1 \nonumber \\ \vdots \nonumber \\ u_{T-1}&\sim q\left(\cdot|\dfrac{\theta_{T-1}^{(d_j)}}{(1-u_1)(1-u_2)\cdots (1-u_{T-2})}\right), 0<u_{T-1}<1 \end{align} \normalsize This is followed by the mapping $S:\mathbf{u}\rightarrow \pmb{\theta}_{1:T-1}^{*(d_j)}$, \small \begin{align} \theta_1^{*(d_j)} &= u_1 \nonumber \\ \theta_2^{*(d_j)} &= u_2(1-u_1) \nonumber \\ \theta_3^{*(d_j)} &= u_3(1-u_2)(1-u_1) \nonumber \\ \vdots \nonumber \\ \theta_{T-1}^{*(d_j)} &= u_{T-1}(1-u_{T-2})\cdots (1-u_2)(1-u_1) \end{align} \normalsize The inverse mapping $S^{-1}:\pmb{\theta}_{1:T-1}^{*(d_j)}\rightarrow \mathbf{u}$ is given by: \small \begin{align} u_1&=\theta_1^{*(d_j)} \nonumber \\ u_t &= \dfrac{\theta_t^{*(d_j)}}{1-\sum_{i<t}\theta_i^{*(d_j)}}, t=2,\cdots T-1 \end{align} \normalsize The Metropolis-Hastings acceptance probability for such a proposed move is given by \small \begin{equation} \begin{split} p_{MH}&=\min \left(1, \dfrac{p(\theta^{*(d_j)}|\mathbf{z})\prod_{o\in \mathcal{O}}N(\lambda_o|\mu_o(\pmb{\theta}^*),1)}{p(\theta^{(d_j)}|\mathbf{z})\prod_{o\in \mathcal{O}}N(\lambda_o|\mu_o(\pmb{\theta}),1)} \times \right. \left.
\dfrac{q(\pmb{\theta}^{*(d_j)}_{1:T-1})}{q(\mathbf{u})} \left| \dfrac{\delta(\pmb{\theta}^{*(d_j)}_{1:T-1})}{\delta(\mathbf{u})} \right| \right) \end{split} \end{equation} \normalsize where \small $\left| \dfrac{\delta(\pmb{\theta}^{*(d_j)}_{1:T-1})}{\delta(\mathbf{u})} \right|=\left| \dfrac{\delta(\mathbf{u})}{\delta(\pmb{\theta}^{*(d_j)}_{1:T-1})} \right|^{-1}=\prod_{t=2}^{T-1}\left( 1-\sum_{i<t}\theta_i^{*(d_j)} \right)$ \normalsize The variables $\mathbf{z},\pmb{\lambda}$ and $\pmb{\theta}$ are iteratively sampled to generate the joint posterior distribution of all the unknown parameters using Gibbs sampling. This procedure completes the interactivity loop in the storytelling algorithm. The newly inferred topic distributions will induce a new similarity network over which we can again conduct a search, followed by (potentially) additional user feedback.
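The stick-breaking maps can be sketched and round-trip checked in a few vectorized lines. This follows the standard construction $\theta_t = u_t\prod_{i<t}(1-u_i)$, which is the inverse of $u_t=\theta_t/(1-\sum_{i<t}\theta_i)$; the function names are ours:

```python
import numpy as np

def stick_forward(u):
    """S: u -> theta*_{1:T-1}, with theta_t = u_t * prod_{i<t}(1 - u_i)."""
    u = np.asarray(u, dtype=float)
    return u * np.concatenate(([1.0], np.cumprod(1.0 - u[:-1])))

def stick_inverse(theta):
    """S^{-1}: theta*_{1:T-1} -> u, with u_t = theta_t / (1 - sum_{i<t} theta_i)."""
    theta = np.asarray(theta, dtype=float)
    return theta / (1.0 - np.concatenate(([0.0], np.cumsum(theta[:-1]))))

theta = np.array([0.3, 0.2, 0.4])   # first T-1 coordinates of a simplex point, T = 4
u = stick_inverse(theta)            # every u_t lies in (0, 1)
```

Applying $S$ to $S^{-1}(\pmb{\theta})$ recovers $\pmb{\theta}$ exactly, which is a convenient sanity check on any implementation of the proposal.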
\section{Introduction} \IEEEPARstart{M}{id-air} haptics provide a new mode of sensory feedback for humans, creating ``virtual touch'' that allows people to feel tactile sensations in the space above a transducer. The primary method to produce mid-air haptics is with ultrasonic arrays, which focus acoustic radiation pressure to induce tactile sensation by microscopically deflecting the skin~\cite{carterUltraHapticsMultipointMidair2013}. This technology is opening up new possibilities for contactless interactions: since no wearables are needed, unlike other haptic feedback devices, mid-air haptics can facilitate spontaneous interaction with haptic displays. Removing the need for physical contact can also be beneficial in situations needing sterile conditions, such as medical applications. Mid-air haptics can also provide an extra mode of interaction in mixed-reality interfaces, in addition to the commonly-used visual and auditory forms of feedback \cite{rakkolainenSurveyMidAirUltrasound2021}. When developing haptic feedback using ultrasonic devices, one needs to evaluate whether the desired sensations are produced as intended. Therefore, there is a need to understand how focal points of pressure interact with compliant skin to cause it to deform. The focal points generated by the ultrasonic array can be modulated in intensity and position to create different sensations. For example, by moving a focal point along the path of a shape, such as a circle, the illusion of a static shape is produced that can be felt by a human hand~(Fig. \ref{fig_intro}). Alternatively, by modulating the intensity of the focal point, a pulsing sensation can be created. These methods can be combined to generate a variety of sensations with both changing intensities and positions. These modulated focal points deform the viscoelastic skin in a non-linear process, to which a variety of modelling and experimental methods can be applied to test the sensation being produced.
\begin{figure}[t] \centering \begin{tabular}[b] {p{0.46\linewidth}p{0.46\linewidth}} \includegraphics[width=0.45\columnwidth,trim={0 0 0 0},clip]{images//intro_hand.pdf} & \includegraphics[width=0.45\columnwidth,trim={0 20 0 60},clip]{images//intro_tactip.pdf} \\ {\bf (a) Mid-air haptics as sensed by a hand} & {\bf (b) Mid-air haptics as sensed by the tactile robot}\\ \end{tabular} \caption{Mid-air haptics are generated by the ultrasonic array and can be felt with a human hand (a). Our robot uses a tactile sensor to test the sensations in place of a hand (b).} \label{fig_intro} \end{figure} In the present study, we focus on using tactile sensing to measure the low-frequency deformation caused by the sensations. Specifically, we present a low-cost tactile robotic platform to test mid-air haptics. We combine a lightweight, desktop robot arm with a 3D-printed soft biomimetic tactile sensor~\cite{leporaSoftBiomimeticOptical2021,ward-cherrierTacTipFamilySoft2018}, to develop a system that can map mid-air haptic sensations. This tactile robot is applied to mid-air haptics created by an array of ultrasonic transducers. Our contributions are as follows:\\ \noindent 1) We introduce a low-cost desktop robotic system ($\sim$£2,000) utilizing biomimetic tactile sensing to map and test the performance of ultrasonic mid-air haptics.\\ \noindent 2) We demonstrate that we can accurately map the sensations produced by the ultrasonic array with comparable results to a higher-cost method.\\ \noindent 3) We show that we can add real-time control for more efficient exploration of mid-air haptic shapes. This paper is structured as follows. First, we present the components of the testing platform, explaining the choice of the robot arm and the biomimetic tactile sensor that are used to test mid-air haptics. We then provide details of the methodology we have developed for \textit{systematic mapping} of mid-air haptics.
This involves using the robot arm to move the sensor in a grid pattern over the ultrasonic array, combining the data gathered into a map of the sensation. We map several distinct mid-air haptic stimuli and also provide a comparison with other mapping methods. Our methodology is then expanded by adding intelligence into the robot movements to instantiate \textit{autonomous haptic exploration}, which uses the robot arm to move the tactile sensor to explore the space and decide where to move next based on what it has felt. This can provide a more efficient method for data collection and make the process more similar to how people use ``exploratory procedures'' to interact dynamically with tactile stimuli when finding an object’s shape~\cite{ledermanHandMovementsWindow1987}. \section{Background} \begin{figure*}[ht] \centering \begin{tabular}[b] {ccc} \includegraphics[width=0.4\linewidth,trim={0 0 0 0},clip]{images//methods_system_robot.pdf} & \includegraphics[width=0.22\linewidth,trim={0 0 10 0},clip]{images//methods_system_tactip.pdf} & \includegraphics[width=0.27\linewidth,trim={0 0 0 0},clip]{images//methods_system_mapping.pdf} \\ {\bf (a) Tactile robotic setup} & {\bf (b) Tactile sensor components} & {\bf (c) Mapping of mid-air haptics}\\ \end{tabular} \caption{Tactile robotic system to test mid-air haptics. The system consists of a Dobot MG400 robot arm, a TacTip tactile sensor, and a UHEV1 ultrasonic array (a). The TacTip is an optical tactile sensor, with an internal camera which images the internal pins of the flat skin (b). The tactile image from the sensor is used to generate a representation of the deformation on the surface of the skin with a Voronoi diagram (c, upper). The system is used for mapping mid-air haptics (c, lower).} \label{fig_methods_overview} \end{figure*} \subsection{Testing mid-air haptics} Various methods have been used to measure the haptic output from phased ultrasonic arrays.
These methods can be quantitative, in which specific measurements are taken, such as the pressure of the ultrasound or the deflection imparted to skin, or qualitative, in which the overall shape of the stimulus is evaluated by creating a visual map. One method to measure the interaction of the ultrasound with skin-like materials is a Laser Doppler Vibrometer~(LDV), which is an established tool for non-contact vibration measurement. By generating focal points onto a skin-like silicone surface, LDV can measure the resulting surface deformation to test how ultrasonic waves propagate in human skin \cite{frierUsingSpatiotemporalModulation2018} and give insight into the shapes generated by modulating the ultrasound \cite{chillesLaserDopplerVibrometry2019}. An advantage of LDV is that it is sensitive to small displacements ($\lesssim$1 micron) of the material surface; however, it relies on costly, specialized equipment ($\sim$£250,000). An alternative is to use a pressure-field microphone to scan the ultrasound pressure of the generated focal point \cite{kappusSpatiotemporalModulationMidair2018}. This method directly measures the ultrasonic output, which can be useful if a systematic evaluation of the sound pressure is needed; however, it is not enough to test the interaction with other materials such as human skin. An oil bath is another useful qualitative testing tool, because acoustic pressure from an ultrasound haptic array deflects the oil surface, which can be imaged for visual inspection \cite{longRenderingVolumetricHaptic2014}. An advantage of this method is that the experimenter can directly see the haptic output without needing additional data processing; however, a disadvantage is that the method does not permit an immediate quantification of the haptic sensation. The technique also suffers from artifacts due to ultrasound resonance within the oil \cite{longRenderingVolumetricHaptic2014}.
In practice, the most reliable method to test haptic displays is to gather feedback from human users. Participants have been asked to identify how many focal points they feel~\cite{carterUltraHapticsMultipointMidair2013}, what shapes they identify \cite{longRenderingVolumetricHaptic2014, ruttenInvisibleTouchHow2019}, or whether they can feel anything at all to determine the detection threshold across various ultrasound frequencies~\cite{takahashiTactileStimulationRepetitive2020}. The main advantage of user studies is that they take input directly from humans, who are the intended users of the device; however, relying on users is time-consuming, subjective, and costly, making these studies inappropriate for testing involving quantitative measurements or early hardware development. All the above methods for sensing and evaluating mid-air haptics have their pros and cons. A systematic method of evaluation, such as the LDV, requires external setups with costly, specialized equipment. Similarly, microphones provide a means of systematic evaluation, but are not a biomimetic tool (measuring sound not touch) and do not consider the interaction with materials such as skin. While user studies are essential for evaluating the sensations, as the haptic displays are intended for human use, they are time-consuming, subjective and only appropriate for later stages of development. Together, these considerations highlight the need for an efficient, biomimetic testing method for use during the earlier stages of device development. \subsection{Tactile sensing for mid-air haptics} Developments in tactile sensing technology provide a promising tool for emulating human touch. Using a tactile sensor could address the need for a more biomimetic method for evaluating mid-air haptics, bringing in some of the advantages of quantitative methods with user studies.
A microphone-based tactile sensor array has been used to sense mid-air haptics generated with an ultrasonic array \cite{sakiyamaEvaluationMultiPointDynamic2019}, with contact on the sensor surface causing a pressure change in an underlying cavity containing the microphone. This has been tested with sensor skins that emulate skin, such as a grooved-pattern to imitate fingerprints \cite{sakiyamaMidairTactileReproduction2020}. Being an array-based tactile sensor, it can capture spatial data, but the spatial resolution of the sensor is limited by the size of each element in its array (currently 11\,mm), each of which needs to fit a separate microphone. Tactile sensing has been identified as an important modality for enabling robots to interact with their surroundings, leading to a developing area of research in interpreting tactile data. One task in common use is contour following, a haptic exploratory procedure used by humans to determine an object's shape \cite{ledermanHandMovementsWindow1987}. Similarly, a robotic system with a tactile sensor can be used to explore an object by maintaining contact with its contour \cite{martinez-hernandezActiveSensorimotorControl2017,leporaExploratoryTactileServoing2017, leporaPixelsPerceptsHighly2019, liControlFrameworkTactile2013,kappassovTouchDrivenController2020}. The availability of advanced tactile robotic systems, and their use to emulate human tactile exploratory procedures, indicates their potential for testing mid-air haptics in a way similar to how humans would interact with mid-air haptic sensations. \section{Methods} \subsection{Tactile sensor} In this work, we sense mid-air haptics by using a biomimetic tactile sensor mounted on a robot arm that moves it over the ultrasonic array (Fig. \ref{fig_methods_overview}a), similar to a setup used for probing physical stimuli~\cite{ward-cherrierTacTipFamilySoft2018, leporaSoftBiomimeticOptical2021}. 
The TacTip is a soft biomimetic optical tactile sensor~\cite{leporaSoftBiomimeticOptical2021,ward-cherrierTacTipFamilySoft2018}; here we will explore its suitability for systematic sensing and evaluation of mid-air haptics. The tactile sensor is biologically inspired by \textit{glabrous} (hairless) human skin, which has internal dermal papillae that focus strain from the skin surface down to mechanoreceptors. The TacTip mimics this structure with an outer rubber-like skin connecting to inner nodular pins that amplify surface strain into inner mechanical movements. An internal camera tracks the movement of these artificial papillae optically, enabling ready transduction of the deformation of the skin via internal shear. Recently, these signals have been found to resemble recordings from biological tactile neurons~\cite{pestell2022a,pestell2022b}. The design of the TacTip is modular, allowing for tips with different skin and pin properties to be used. The mid-air haptic display studied in this work emits tiny forces on the order of a few millinewtons \cite{frierSimulatingAirborneUltrasound2022}, which presents a major challenge to designing a sensor that can detect and map these small forces. We met this challenge by introducing a highly compliant artificial tactile skin to maximize its deformation under small forces. The pins in the TacTip design also act as levers to amplify the small sensations produced by surface indentation. To find the most appropriate skin, we tested sensing surfaces of various shapes, both flat and hemispherical, as well as changing the internal support for the skin by including or omitting an internal supporting gel. Our conclusion is that the most suitable TacTip skin has a flat 40\,mm diameter profile without internal gel (Fig. \ref{fig_methods_overview}) \cite{alakhawandSensingUltrasonicMidAir2020}: this skin deforms sufficiently to make the forces from mid-air haptic focal points detectable.
Note also that human skin and rubber have similar acoustic impedances of $1.6\times10^6$ and $1.9\times10^6$\,kg/m$^2$s respectively, compared with air at $4\times10^2$\,kg/m$^2$s. Hence, both skin and the TacTip will undergo a similar acoustic radiation force from the ultrasound haptic array. The skin indentation may differ depending on the mechanical properties, but previous work with the TacTip has attempted to approximately match human skin~\cite{leporaSoftBiomimeticOptical2021}. \subsection{Robotic arm} We use a low-cost, desktop robot arm: the Dobot~MG400 4-axis arm designed for affordable automation. The base and control unit has a footprint of 190\,mm$\times$190\,mm, a payload of 750\,g, a maximum reach of 440\,mm and a repeatability of $\pm0.05\,$mm. The main constraint is that only the $(x,y,z)$-position and rotation around the $z$-axis of the end effector are actuated, but that is not an issue for this work. Another benefit of the Dobot MG400 desktop robot is the accessibility of the API: it is open-source and written in Python. To accompany this paper, we have released a version of our CRI robot interface libraries that integrates this API and that can be updated to include other functionality as the robot firmware is improved. \subsection{Ultrasound phased array} To generate the mid-air haptic sensations for our experiments, we used the Ultrahaptics Evaluation Kit (UHEV1) developed by Ultraleap, which includes an array of 256 ultrasonic transducers operating at 40\,kHz to generate focal points in mid-air with an update rate of 16\,kHz. Using the device's accompanying software, the focal points generated by the array can be modulated to generate various sensations. With Spatiotemporal Modulation (STM), one focal point is moved rapidly along the path of the desired shape, producing the illusion of a static shape that can be felt by a human hand in mid-air (Fig. \ref{fig_intro}a).
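The geometry of such an STM path is easy to sketch: one period of a circular path sampled at the device's update rate. The function below is a generic illustration under our own naming (the real device is driven through Ultraleap's own SDK, not this code); the defaults mirror the 20\,mm circle at 70\,Hz described later.

```python
import numpy as np

def stm_circle_path(diameter=0.02, stm_hz=70.0, update_hz=16000.0, height=0.2):
    """Focal-point positions (metres) for one period of a circular STM path.

    Returns an (N, 3) array of (x, y, z) positions, one per device update.
    """
    n = int(update_hz / stm_hz)          # device updates per STM period
    phase = 2.0 * np.pi * np.arange(n) / n
    r = diameter / 2.0
    return np.column_stack((r * np.cos(phase),
                            r * np.sin(phase),
                            np.full(n, height)))

path = stm_circle_path()
```

At a 16\,kHz update rate, a 70\,Hz circle is traced with roughly 228 focal-point positions per revolution, which is why the moving point is perceived as a static shape.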
To verify that our system works, we test two main cases: (1) an unmodulated (UM) focal point and (2) spatiotemporal modulation (STM) of a focal point. In the first case (UM), the acoustic pressure and the position of the focal point are constant, with the focal point generated 20\,cm above the center of the array. In the second case, the acoustic pressure is constant while the position of the focal point is moved along the path of a circle of 20\,mm diameter, at a spatiotemporal modulation frequency of 70\,Hz. We chose these two cases as they have been tested using a Laser Doppler Vibrometer (LDV) \cite{chillesLaserDopplerVibrometry2019, frierSimulatingAirborneUltrasound2022}, and so we can directly compare our results. To extend our results, we test our methods on six more STM shapes: (1) line, (2) triangle, (3) square, (4) small cross, (5) large cross and (6) rose. This presents a variety of different shapes, which highlights that the method can sense corners, open curves and closed curves. The line, square, triangle, and small cross all have sides of 4\,cm. The large cross and rose are 6\,cm at their longest dimensions. All the shapes are generated at a spatiotemporal modulation frequency of 100\,Hz at a height of 15\,cm above the ultrasonic array with constant intensity. \subsection{Systematic mapping of mid-air haptics} Here, we present the methodology we developed to evaluate mid-air haptics with the tactile robotic system. Our method involves \textit{systematic mapping}, using a predefined sequence of motions by the sensor: here a 9$\times$9 grid equally spaced 10\,mm apart to cover an 80\,mm-square region. The sensor captures data which covers $\sim$20\,mm, with an overlap of $\sim$10\,mm in the sensor data between each position. The overlap in the data reduces the noise from the sensor and gives higher confidence in the measurements.
The robot arm moves the sensor across the grid points in a pre-defined sequence starting at the top-left and finishing at the bottom-right of the scans in Figs~4 and 5. The time taken to gather the data is $\sim$4\,min for each run across a haptic shape. At each grid point, the sensor waits for 2 seconds for its skin to reach close to steady-state deformation, then captures tactile images for 1 second at a rate of 30 frames per second. These images are then processed to find the positions of each marker on the pins. These marker positions are used to generate a bounded Voronoi diagram, from which the change in the areas of the cells gives a third dimension of the sensor data associated with indentation (Fig. \ref{fig_methods_overview}c, upper panel), accompanying the $x$- and $y$-shear dimensions~\cite{cramphornVoronoiFeaturesTactile2018} (using the spatial module from the SciPy library). The areas of the cells in the Voronoi diagram, $\Delta A$, increase as the skin is compressed due to a pressure on its surface (Fig.~2c, upper panel). The average of $\Delta A$ over the 30 frames collected is calculated as the deformation of the tactile skin at each pin. We combine the Voronoi area changes from individual sensor readings using Gaussian Process Regression (GPR) with a squared-exponential covariance function (using the GP module in the scikit-learn library). The output of this process allows us to plot a representation of the mid-air haptic sensations (Fig. \ref{fig_methods_overview}c). For further details, we refer to our paper in which we first introduced this process \cite{alakhawandSensingUltrasonicMidAir2020}. \subsection{Expanding the system: autonomous haptic exploration} \begin{figure}[t] \centerline{\includegraphics[width=\linewidth,trim={0 0 0cm 0},clip]{images//methods_exploration.pdf}} \caption{Autonomous haptic exploration algorithm with three levels: sensation, perception, and action selection.
First, the system senses the intensity of the stimulus felt over an array of pins from the sensor. Second, it detects the position and orientation of the stimulus. Third, it selects an action to take that will allow it to continue exploring the shape.} \label{fig_methods_exploration} \end{figure} \begin{algorithm}[b] \caption{Action selection}\label{alg:action} \begin{algorithmic} \If{centroid $(\bar{x},\bar{y}) > 5\,$mm from sensor position} \State Move towards centroid \State Find angle $\theta$ of vector from sensor position to centroid \Else \State Move along orientation angle, $\theta$, in degrees \State Choice of $\theta_1 = \theta,\ \theta_2 = \theta + 180^\circ$ \State Find angle $\theta_i$ with minimum change from past action \EndIf \State Choose discrete action $(x, y)$ closest to the angle from set: \State $\big\{(a,\!0),\!(a,\!a),\!(0,\!a),\!(\shortminus a,\!a),\!(\shortminus a,\!0),\!(\shortminus a,\!\shortminus a),\!(0,\!\shortminus a),\!(a,\!\shortminus a)\big\}$ with grid-size $a=10$\,mm \end{algorithmic} \end{algorithm} As an extension to our methodology, we show how additional testing modes can be added to this system, such as using real-time control to more intelligently map a stimulus. We present a method for autonomous exploration, in which the system utilizes a sensation-perception-action loop to decide where to move by aiming to stay along the path of the shape (Fig. \ref{fig_methods_exploration}): the robot first senses the stimulus using touch, then perceives the local nature of the stimulus, from which it selects an appropriate action (here a movement). This process is repeated to explore the haptic shape. 
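Algorithm \ref{alg:action} can be sketched in Python as follows. This is an illustrative re-implementation under our own naming (angles in degrees, grid size $a=10$\,mm), not the code used on the robot:

```python
import math

# discrete moves on a grid with a = 10 mm, as in Algorithm 1
MOVES = [(10, 0), (10, 10), (0, 10), (-10, 10),
         (-10, 0), (-10, -10), (0, -10), (10, -10)]

def nearest_move(angle_deg):
    """Discrete (x, y) move whose direction is closest to angle_deg."""
    return min(MOVES, key=lambda m: abs((math.degrees(math.atan2(m[1], m[0]))
                                         - angle_deg + 180) % 360 - 180))

def select_action(sensor_xy, centroid_xy, theta_deg, prev_deg):
    """Approach the centroid if it is > 5 mm away; else follow the shape axis,
    picking whichever of theta or theta + 180 deviates least from the last move."""
    dx = centroid_xy[0] - sensor_xy[0]
    dy = centroid_xy[1] - sensor_xy[1]
    if math.hypot(dx, dy) > 5.0:
        angle = math.degrees(math.atan2(dy, dx))
    else:
        cands = (theta_deg, theta_deg + 180.0)
        angle = min(cands, key=lambda a: abs((a - prev_deg + 180) % 360 - 180))
    return nearest_move(angle)
```

For example, with the centroid 20\,mm to the right, the robot steps right; when centred on a vertical feature it continues along the axis in the direction closest to its previous heading.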
\begin{figure*}[t] \centering \begin{tabular}[b]{@{}c@{\hspace{8pt}}c@{\hspace{2pt}}c@{\hspace{4pt}}c@{\hspace{2pt}}c@{\hspace{4pt}}c@{\hspace{2pt}}c@{}} \frame{\includegraphics[width=0.178\linewidth,trim={0 0 0 0},clip]{images/shapes/point.pdf}} & \includegraphics[width=0.178\linewidth,trim={0 0 0 0},clip]{images/results1/point_sim.pdf} & \includegraphics[width=0.055\linewidth,trim={0 0 0 0},clip]{images/cbar_only/cbar_sim_point.pdf} & \includegraphics[width=0.178\linewidth,trim={0 0 0 0},clip]{images/results1/point_ldv2.pdf} & \includegraphics[width=0.055\linewidth,trim={0 0 0 0},clip]{images/cbar_only/cbar_ldv_point.pdf} & \includegraphics[width=0.178\linewidth,trim={0 0 0 0},clip]{images/results1/point_tactile.pdf} & \includegraphics[width=0.055\linewidth,trim={0 0 0 0},clip]{images/cbar_only/cbar_tactile_point3.pdf} \\ \frame{\includegraphics[width=0.178\linewidth,trim={0 0 0 0},clip]{images/shapes/circle.pdf}} & \includegraphics[width=0.178\linewidth,trim={0 0 0 0},clip]{images/results1/circle_sim.pdf} & \includegraphics[width=0.055\linewidth,trim={0 0 0 0},clip]{images/cbar_only/cbar_sim_circle.pdf} & \includegraphics[width=0.178\linewidth,trim={0 0 0 0},clip]{images/results1/circle_ldv2.pdf} & \includegraphics[width=0.055\linewidth,trim={0 0 0 0},clip]{images/cbar_only/cbar_ldv_circle.pdf} & \includegraphics[width=0.178\linewidth,trim={0 0 0 0},clip]{images/results1/circle_tactile.pdf} & \includegraphics[width=0.055\linewidth,trim={0 0 0 0},clip]{images/cbar_only/cbar_tactile_circle4.pdf} \\ {\bf (a) Focal point path} & {\bf (b) Acoustic simulation} & {} &{\bf (c) LDV measurement} & {} & {\bf (d) Tactile measurement} & {} \end{tabular} \caption{Testing the mapping of mid-air haptics by the tactile robot. 
(a) The path of the focal point; (b) the resulting simulated acoustic field; (c) the measured displacement of skin-like material due to the haptic sensation with a Laser Doppler Vibrometer \cite{chillesLaserDopplerVibrometry2019}; and (d) our results from the tactile robot. All plots cover a 40\,mm$\times$40\,mm square area.} \label{fig_results_validation} \end{figure*} The sensation step involves capturing and processing a tactile image to generate a bounded Voronoi diagram with areas representing stimulus intensity (see Section~IIIC). For the perception process, we use an analytic model to interpret the stimulus intensities felt by the TacTip to locate and characterize the haptic sensation. First, we perform Gaussian Process Regression (GPR) over the data, then we binarize the regressed map using a fixed threshold to reduce background noise; finally, we calculate the image moments using a standard technique from computer vision: \begin{equation} M_{ij}= \sum_{x} \sum_{y} x^i y^j I(x,y),\label{eq_moments} \end{equation} where $I(x,y)$ is the intensity of the stimulus as measured by the Voronoi area changes at each pin. Using the calculated moments, we can find the centroid and orientation of the local haptic stimulus from the first-order moments ($M_{ij}$) and second-order central moments ($\mu^\prime_{i,j}$), respectively: \begin{equation} (\bar{x},\bar{y})= \Big(\frac{M_{10}}{M_{00}}, \frac{M_{01}}{M_{00}}\Big),\hspace{0.4em} \theta=\frac{1}{2}\tan^{-1}\Big(\frac{2\mu^\prime_{1,1}}{\mu ^\prime_{2,0} - \mu^\prime_{0,2}}\Big). \label{eq_centroid} \end{equation} The action selection step decides where to move next, based on the calculated centroid and orientation of the local haptic stimulus. A set of discrete allowed movements are predefined in a grid space. Algorithm \ref{alg:action} is used to determine which of those movements to take, based on an intuitive, heuristic action-selection process which does not require any training. 
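The perception step built on Eqs.~(1) and (2) can be illustrated with a short, self-contained sketch. The synthetic intensity map below is a stand-in for the binarized, regressed Voronoi data; `arctan2` is used in place of $\tan^{-1}$ so the degenerate case $\mu^\prime_{2,0}=\mu^\prime_{0,2}$ is handled.

```python
# Illustrative sketch of Eq. (1) (raw image moments) and Eq. (2)
# (centroid and orientation from second-order central moments).
import numpy as np

def raw_moment(I, i, j):
    y, x = np.mgrid[:I.shape[0], :I.shape[1]]   # row = y, column = x
    return float(np.sum((x ** i) * (y ** j) * I))

def centroid_and_orientation(I):
    M00, M10, M01 = raw_moment(I, 0, 0), raw_moment(I, 1, 0), raw_moment(I, 0, 1)
    xbar, ybar = M10 / M00, M01 / M00
    y, x = np.mgrid[:I.shape[0], :I.shape[1]]
    mu11 = np.sum((x - xbar) * (y - ybar) * I) / M00
    mu20 = np.sum((x - xbar) ** 2 * I) / M00
    mu02 = np.sum((y - ybar) ** 2 * I) / M00
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)  # arctan2 avoids div by 0
    return (xbar, ybar), theta

# A bright diagonal segment: centroid at its middle, orientation 45 degrees.
I = np.zeros((21, 21))
for k in range(5, 16):
    I[k, k] = 1.0
(cx, cy), theta = centroid_and_orientation(I)
```

For the diagonal test pattern this recovers the centroid $(10, 10)$ and orientation $\theta = \pi/4$, as expected from the geometry.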
The goal of the algorithm is to choose a movement to continue exploring the haptic shape. \section{Results} \subsection{Testing the tactile robot on mid-air haptics} To verify the capabilities of our tactile robot for mapping mid-air haptic stimuli, we initially consider two distinct stimuli: (1) a focal point, which is stationary and unmodulated (UM); and (2) a circle, which is spatiotemporally modulated (STM). (Details in Methods, Sec. IIIC.) The systematic mapping process (Sec. IIID) is used to form detailed maps of the haptic stimuli as sensed by the tactile robot. Fig. \ref{fig_results_validation} shows our results for mapping haptic stimuli alongside some comparator methods (with a comparison in Table \ref{table_results1}). The left panels show the input paths of the focal point. The middle panels show the simulated acoustic output of the array and the deformation it causes on a skin-like surface measured by a Laser Doppler Vibrometer (taken from an experiment carried out in \cite{chillesLaserDopplerVibrometry2019}). Comparing these results shows that our system accurately senses the mid-air haptic stimuli predicted by the simulation of the acoustic pressure output. In particular, by comparing our sensed results to the acoustic simulation, we find that we can sense acoustic pressure above a threshold of $\sim$0.4\,kPa. Compared with the measurements taken by the LDV, our results appear very similar: both show the variation in the sensations, with a higher intensity in the middle that decreases away from the center of the focal point and the path of the circle. \begin{table}[h] \caption{Comparison of results shown in Fig.
\ref{fig_results_validation}.} \centering \begin{tabular}{llccc} \toprule \textbf{} & \textbf{} & \textbf{Acoustic} & \textbf{LDV} & \textbf{Tactile} \\ & \textbf{} & \textbf{Simulation} & \textbf{Measurement} & \textbf{Measurement} \\ \hline \textbf{Peak} & Point & 1.0\,kPa & 0.35\,\micro m & 3.77\,mm\textsuperscript{2} \\ \textbf{Value} & Circle & 0.6\,kPa & 0.26\,\micro m & 0.98\,mm\textsuperscript{2} \\ \hline \multirow{2}{*}{\textbf{Size }} & Point & 13\,mm & 25\,mm & 19\,mm \\ & Circle & 40\,mm & 48\,mm & 34\,mm \\ \hline \multirow{2}{*}{\textbf{RMSE}} & Point & - & 19\% & 13\% \\ & Circle & - & 11\% & 10\% \\ \bottomrule \end{tabular} \label{table_results1} \end{table} Our measurements with the tactile sensor indicate that the focal point causes a round indentation of 19\,mm diameter (Table~\ref{table_results1}), measured from the distance at which the signal becomes $>$20\% of that of the central peak. In comparison, the LDV measurements show that the focal point creates an indentation on the skin-like material 25\,mm in diameter, also measured at $>$20\% of the peak value~\cite{chillesLaserDopplerVibrometry2019}. Both these values are larger than the 13\,mm diameter from the acoustic simulation, consistent with the acoustic pressure producing a broader indentation of the elastic skin surface. Likewise, the circular stimulus (Fig.~4, lower panels) causes a ring-like indentation. This is measured with an outer diameter of 34\,mm with the tactile sensor and 48\,mm with LDV, compared with 40\,mm from the acoustic simulation, all to $>$20\% of the peak value on the ring (Table \ref{table_results1}). All these values are significantly larger than the 20\,mm diameter of the focal point path, as expected.
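The size measurements above all use the same rule: the stimulus diameter is the extent of the region where the signal exceeds 20\% of its peak. A minimal sketch of that rule, using a synthetic Gaussian profile as a stand-in for a measured focal-point cross-section:

```python
# Illustrative sketch of the ">20% of peak" size measurement. The Gaussian
# profile (sigma = 5 mm) is a placeholder, not measured data.
import numpy as np

def diameter_at_fraction(profile, x, frac=0.2):
    """Width of the region where `profile` exceeds `frac` of its peak."""
    above = profile > frac * profile.max()
    return x[above].max() - x[above].min()

x = np.linspace(-20, 20, 4001)            # position across the stimulus, mm
profile = np.exp(-x**2 / (2 * 5.0**2))    # synthetic indentation profile
d = diameter_at_fraction(profile, x)
```

For a Gaussian with standard deviation $\sigma$, this gives $2\sigma\sqrt{2\ln 5}\approx 3.59\sigma$, which is how a smooth indentation profile maps to a single "size" number in the table.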
\begin{figure*}[t] \centering \begin{tabular}[b]{@{}c@{\hspace{6pt}}c@{\hspace{6pt}}c@{\hspace{6pt}}c@{\hspace{6pt}}c@{\hspace{6pt}}c@{}} \frame{\includegraphics[width=0.15\linewidth,trim={0 0 0 0},clip]{images/shapes/line.pdf}} & \frame{\includegraphics[width=0.15\linewidth,trim={0 0 0 0},clip]{images/shapes/triangle.pdf}} & \frame{\includegraphics[width=0.15\linewidth,trim={0 0 0 0},clip]{images/shapes/square.pdf}} & \frame{\includegraphics[width=0.15\linewidth,trim={0 0 0 0},clip]{images/shapes/cross2.pdf}} & \frame{\includegraphics[width=0.15\linewidth,trim={0 0 0 0},clip]{images/shapes/cross3.pdf}} & \frame{\includegraphics[width=0.15\linewidth,trim={0 0 0 0},clip]{images/shapes/rose.pdf}} \\ \frame{\includegraphics[width=0.15\linewidth,trim={0 0 0 0},clip]{images/shapes/gpr_line.pdf}} & \frame{\includegraphics[width=0.15\linewidth,trim={0 0 0 0},clip]{images/shapes/gpr_triangle.pdf}} & \frame{\includegraphics[width=0.15\linewidth,trim={0 0 0 0},clip]{images/shapes/gpr_square.pdf}} & \frame{\includegraphics[width=0.15\linewidth,trim={0 0 0 0},clip]{images/shapes/gpr_cross2.pdf}} & \frame{\includegraphics[width=0.15\linewidth,trim={0 0 0 0},clip]{images/shapes/gpr_cross3.pdf}} & \frame{\includegraphics[width=0.15\linewidth,trim={0 0 0 0},clip]{images/shapes/gpr_rose.pdf}} \end{tabular} \caption{Systematic mapping of six mid-air haptic shapes: line, triangle, square, small cross, large cross, and rose. The plotted regions cover an 80\,mm$\times$80\,mm square area. The top panel shows the spatiotemporal modulation path of the focal points, the lower panel shows the output of the system, normalized between 0-1 (using the values for max. $\Delta A$ in Table \ref{table_results2}).} \label{fig_results_shapes} \end{figure*} In these tests, the LDV consistently gave larger estimates than the tactile measurements, which we attribute to the lower spatial resolution of the LDV spreading the signal. 
Collecting data with the LDV is a relatively slow process compared to the tactile sensor, and each measurement yields a single point rather than many. Another issue with the LDV is that data is collected at an angle that needs correcting, which makes it difficult to accurately scale the output. To assess the comparative accuracy of the results, we calculated the ``pixel-to-pixel'' error of both the LDV and tactile data compared to the acoustic simulation, using a root mean square error (RMSE). After scaling the data between 0 and 1, we present the RMSE in Table \ref{table_results1} as a percentage. The RMSE for the tactile data is lower for both stimuli, which we attribute to the higher spatial resolution of the tactile map. \subsection{Mapping mid-air haptic shapes} To show our tactile robot can also evaluate a variety of mid-air haptic stimuli, we apply it to six distinct shapes generated by the ultrasonic array: (1)~line, (2)~triangle, (3)~square, (4)~small cross, (5)~large cross, and (6)~rose. (See~Methods Sec. IIIC for details of their generation.) Our systematic mapping method produces detailed visualizations of the mid-air haptic shapes, which visually appear to closely resemble the focal point paths (cf. paths in top row of Fig. \ref{fig_results_shapes} with the maps in the bottom row). We observe that the sensor signal is strongest on the corners for all shapes. Overall, the robotic system maps shapes similarly to how humans described them in user studies (tested on the square, triangle, and circle): participants interpreted the shapes as having the appropriate geometry, but commented on how blurry they felt~\cite{hajasMidAirHapticRendering2020}. \begin{table}[b] \caption{Testing mid-air haptic shapes.} \centering \begin{tabular}{lcc} \toprule & \textbf{Approx. path length} & \textbf{Max.
$\Delta A$} \\ \textbf{Shape} & \textbf{(mm)} & \textbf{(mm\textsuperscript{2})} \\ \hline Line & 4 & 1.54 \\ Triangle & 13 & 0.49 \\ Square & 16 & 0.22 \\ Small cross & 8 & 1.65 \\ Large cross & 12 & 0.73 \\ Rose & 28 & 0.74 \\ \bottomrule \end{tabular} \label{table_results2} \end{table} Table \ref{table_results2} presents the maximum Voronoi area change sensed by the tactile robot, $\Delta A$, for each shape, along with the approximate path length of the focal point. A larger $\Delta A$ indicates a larger indentation on the skin of the sensor, which would be felt as a stronger sensation. The largest values of $\Delta A$, 1.54 for the line and 1.65 for the small cross, correspond to the shapes with the shortest path lengths. The relationship is not inversely proportional, however: the rose has the longest path length but not the smallest $\Delta A$. Looking at the map of the rose (Fig. \ref{fig_results_shapes}, right), we can see that this highest $\Delta A$ occurs in the center, while the petals of the rose have a lower intensity. In contrast, the large cross appears to produce no sensations in the center, even though the focal point passes through that position. This highlights the importance of testing the mid-air haptic sensations, as they might not produce the desired effect for all paths.
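The pixel-to-pixel RMSE percentages reported in Table \ref{table_results1} follow a simple recipe: scale each map to $[0,1]$, then take the root mean square of the pointwise differences. A hedged sketch, with synthetic arrays standing in for the simulated and sensed maps:

```python
# Illustrative sketch of the scaled "pixel-to-pixel" RMSE comparison.
# The Gaussian map and noise level below are placeholders, not real data.
import numpy as np

def rmse_percent(reference, measured):
    def scale01(a):
        return (a - a.min()) / (a.max() - a.min())
    r, m = scale01(reference), scale01(measured)
    return 100.0 * np.sqrt(np.mean((r - m) ** 2))

rng = np.random.default_rng(1)
u = np.linspace(-1, 1, 50)
sim = np.exp(-(u[:, None] ** 2 + u[None, :] ** 2) / 0.1)  # "acoustic" map
tactile = sim + 0.05 * rng.standard_normal(sim.shape)     # "sensed" map
err = rmse_percent(sim, tactile)
```

Scaling both maps first makes the comparison insensitive to the different physical units of the three modalities (kPa, \micro m, mm\textsuperscript{2}).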
\begin{figure*}[ht] \centering \begin{tabular}[b]{@{}c@{\hspace{6pt}}c@{\hspace{6pt}}c@{\hspace{6pt}}c@{\hspace{6pt}}c@{\hspace{6pt}}c@{}} \frame{\includegraphics[width=0.13\linewidth,trim={45 40 35 40},clip]{images/exp_circle/results_exp_1_0.pdf}} & \frame{\includegraphics[width=0.13\linewidth,trim={45 40 35 40},clip]{images/exp_circle/results_exp_2_0.pdf}} & \frame{\includegraphics[width=0.13\linewidth,trim={45 40 35 40},clip]{images/exp_circle/results_exp_3_0.pdf}} & \frame{\includegraphics[width=0.13\linewidth,trim={45 40 35 40},clip]{images/exp_circle/results_exp_4_0.pdf}} & \frame{\includegraphics[width=0.13\linewidth,trim={45 40 35 40},clip]{images/exp_circle/results_exp_5_0.pdf}} & \frame{\includegraphics[width=0.13\linewidth,trim={45 40 35 40},clip]{images/exp_circle/results_exp_6_0.pdf}} \\ \frame{\includegraphics[width=0.13\linewidth,trim={45 40 35 40},clip]{images/exp_circle/results_exp_1_1.pdf}} & \frame{\includegraphics[width=0.13\linewidth,trim={45 40 35 40},clip]{images/exp_circle/results_exp_2_1.pdf}} & \frame{\includegraphics[width=0.13\linewidth,trim={45 40 35 40},clip]{images/exp_circle/results_exp_3_1.pdf}} & \frame{\includegraphics[width=0.13\linewidth,trim={45 40 35 40},clip]{images/exp_circle/results_exp_4_1.pdf}} & \frame{\includegraphics[width=0.13\linewidth,trim={45 40 35 40},clip]{images/exp_circle/results_exp_5_1.pdf}} & \frame{\includegraphics[width=0.13\linewidth,trim={45 40 35 40},clip]{images/exp_circle/results_exp_6_1.pdf}} \\ \\ \frame{\includegraphics[width=0.13\linewidth,trim={45 40 35 40},clip]{images/exp_triangle/results_exp_1_0.pdf}} & \frame{\includegraphics[width=0.13\linewidth,trim={45 40 35 40},clip]{images/exp_triangle/results_exp_2_0.pdf}} & \frame{\includegraphics[width=0.13\linewidth,trim={45 40 35 40},clip]{images/exp_triangle/results_exp_3_0.pdf}} & \frame{\includegraphics[width=0.13\linewidth,trim={45 40 35 40},clip]{images/exp_triangle/results_exp_4_0.pdf}} & \frame{\includegraphics[width=0.13\linewidth,trim={45 
40 35 40},clip]{images/exp_triangle/results_exp_5_0.pdf}} & \frame{\includegraphics[width=0.13\linewidth,trim={45 40 35 40},clip]{images/exp_triangle/results_exp_6_0.pdf}} \\ \frame{\includegraphics[width=0.13\linewidth,trim={45 40 35 40},clip]{images/exp_triangle/results_exp_1_1.pdf}} & \frame{\includegraphics[width=0.13\linewidth,trim={45 40 35 40},clip]{images/exp_triangle/results_exp_2_1.pdf}} & \frame{\includegraphics[width=0.13\linewidth,trim={45 40 35 40},clip]{images/exp_triangle/results_exp_3_1.pdf}} & \frame{\includegraphics[width=0.13\linewidth,trim={45 40 35 40},clip]{images/exp_triangle/results_exp_4_1.pdf}} & \frame{\includegraphics[width=0.13\linewidth,trim={45 40 35 40},clip]{images/exp_triangle/results_exp_5_1.pdf}} & \frame{\includegraphics[width=0.13\linewidth,trim={45 40 35 40},clip]{images/exp_triangle/results_exp_6_1.pdf}} \\ \\ \frame{\includegraphics[width=0.13\linewidth,trim={45 40 35 40},clip]{images/exp_square/results_exp_1_0.pdf}} & \frame{\includegraphics[width=0.13\linewidth,trim={45 40 35 40},clip]{images/exp_square/results_exp_2_0.pdf}} & \frame{\includegraphics[width=0.13\linewidth,trim={45 40 35 40},clip]{images/exp_square/results_exp_3_0.pdf}} & \frame{\includegraphics[width=0.13\linewidth,trim={45 40 35 40},clip]{images/exp_square/results_exp_4_0.pdf}} & \frame{\includegraphics[width=0.13\linewidth,trim={45 40 35 40},clip]{images/exp_square/results_exp_5_0.pdf}} & \frame{\includegraphics[width=0.13\linewidth,trim={45 40 35 40},clip]{images/exp_square/results_exp_6_0.pdf}} \\ \frame{\includegraphics[width=0.13\linewidth,trim={45 40 35 40},clip]{images/exp_square/results_exp_1_1.pdf}} & \frame{\includegraphics[width=0.13\linewidth,trim={45 40 35 40},clip]{images/exp_square/results_exp_2_1.pdf}} & \frame{\includegraphics[width=0.13\linewidth,trim={45 40 35 40},clip]{images/exp_square/results_exp_3_1.pdf}} & \frame{\includegraphics[width=0.13\linewidth,trim={45 40 35 40},clip]{images/exp_square/results_exp_4_1.pdf}} & 
\frame{\includegraphics[width=0.13\linewidth,trim={45 40 35 40},clip]{images/exp_square/results_exp_5_1.pdf}} & \frame{\includegraphics[width=0.13\linewidth,trim={45 40 35 40},clip]{images/exp_square/results_exp_6_1.pdf}} \end{tabular} \caption{An example of the autonomous exploration of circle, triangle, and square mid-air haptic shapes, showing a selection of steps taken by the system. The top rows show the data collected, and the bottom rows show the global mapping of the shapes at that stage. All plots cover a 60\,mm$\times$60\,mm square area.} \label{fig_results_exploration} \end{figure*} \subsection{Autonomous haptic exploration} Initial testing of autonomous haptic exploration with our tactile robot was carried out on the haptic circle, triangle, and square: regular shapes that, with some variation, typify the kinds of mid-air haptic stimuli one would want to explore. The autonomous tactile robot was able to efficiently explore the shapes and produce detailed visualizations (Fig. \ref{fig_results_exploration}). With the perception method used in the algorithm, it is able to identify enough local information about the stimulus to determine the direction it should move in. This lets the system collect data only at the positions in space where there is a stimulus. This method builds up a shape gradually as it explores the space. Fig. \ref{fig_results_exploration} shows this process, highlighting the sequence of motions the robot takes with the data it has gathered at each step, and the global perception of the shape it has built up with all of the data it has gathered so far. Overall, the tactile robot traces around the entire haptic shape and generates a representation of the stimulus with only 15 data samples for the circle and square and 13 data samples for the triangle, compared with 81 samples for the systematic mapping method.
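The action-selection heuristic (Algorithm 1) that drives this exploration can be sketched compactly. The 5\,mm centroid threshold and 10\,mm grid size follow the text; the geometry in the example call is illustrative.

```python
# Illustrative sketch of Algorithm 1: head for the centroid while it is
# far, otherwise follow the stimulus orientation, picking whichever of
# theta / theta + 180 deviates least from the previous heading.
import math

MOVES = [(10, 0), (10, 10), (0, 10), (-10, 10),
         (-10, 0), (-10, -10), (0, -10), (10, -10)]  # grid size a = 10 mm

def ang_diff(a, b):
    """Absolute angular difference in degrees, wrapped to [0, 180]."""
    return abs((a - b + 180.0) % 360.0 - 180.0)

def select_action(pos, centroid, theta_deg, last_heading_deg):
    dx, dy = centroid[0] - pos[0], centroid[1] - pos[1]
    if math.hypot(dx, dy) > 5.0:                 # far: move towards centroid
        target = math.degrees(math.atan2(dy, dx))
    else:                                        # near: follow the contour
        target = min((theta_deg, theta_deg + 180.0),
                     key=lambda t: ang_diff(t, last_heading_deg))
    return min(MOVES, key=lambda m: ang_diff(
        math.degrees(math.atan2(m[1], m[0])), target))

# Far from the centroid: the chosen discrete move heads towards it.
move = select_action(pos=(0, 0), centroid=(30, 0), theta_deg=90,
                     last_heading_deg=0)
```

Resolving the $180^\circ$ ambiguity of the orientation angle against the previous heading is what keeps the sketch (like the real loop) moving around the contour instead of oscillating back and forth.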
Sampling only where a stimulus is detected makes the autonomous exploration method more efficient than the first method for systematic mapping, in which every position in the grid space is sampled. This algorithm does rely on some conditions for efficient haptic exploration, such as a sufficiently strong tactile signal that can be used to localize the stimulus and detect its orientation. It also requires the shapes to be single, closed continuous curves. A more robust exploration algorithm that overcomes these limitations could be developed in the future using more sophisticated methods, as has been demonstrated with this tactile sensor for physical contour following \cite{leporaSoftBiomimeticOptical2021}. Overall, it shows the promise of real-time control as a component of the tactile robotic system to map haptic shapes more efficiently and more like how humans would explore the shapes. \section{Discussion} This paper presented a tactile robotic system for evaluating virtual touch: combining a low-cost, accessible desktop robot arm and a 3D-printed biomimetic tactile sensor with a systematic haptic mapping procedure. This platform allowed us to successfully map mid-air haptic stimuli of various shapes. Additional control modes can be included within this platform, such as autonomous haptic exploration to efficiently sample only those regions of space where there may be an appreciable tactile stimulation. Various methods have been used to test and characterize mid-air haptics. Quantitative methods, such as using a Laser Doppler Vibrometer (LDV) to measure the deflection of materials caused by the acoustic radiation force of the mid-air haptic stimuli, or microphones to measure the sound pressure of the stimulus, can give detailed information about the sensations. However, both these methods are limited to collecting data at single points with every measurement; a large number of data samples is needed to scan points in a two-dimensional plane to generate spatial data.
With our method, a single measurement can give spatial information about the sensation. Since the sensor we use, the TacTip, captures an image of 127 marker-tipped pins, each marker gives information about the deflection of the sensor's skin at that point. Qualitative methods, such as projecting the sensations onto an oil bath, identify the shape of the stimulus faster than our method, so they have an advantage when a quick assessment of the shape is needed. However, being qualitative, they do not give the detailed quantification of the stimulus intensity provided by our method, or the capability to explore and interact with the stimulus more similarly to humans. Since we use a robotic system, we can utilize different exploration and control methods to sample the data more intelligently, which we introduced with a method for autonomous haptic exploration. When humans interact with physical objects, they utilize contour following to move their hands along the most salient contact features of an object to determine the overall shape they are feeling. This has been identified as an exploratory procedure that can be used to determine various aspects such as shape and volume of handled objects \cite{ledermanHandMovementsWindow1987}. While it is not yet known which specific exploratory procedures are used by humans to interact with mid-air haptics, user studies have suggested that active touch could help people to distinguish between static mid-air haptic shapes more accurately \cite{hajasMidAirHapticRendering2020}. This opens up a way to compare the manner in which robots sample data with how humans interact with objects via their sense of touch, to understand better the nature of human haptic perception and interaction. {\em Acknowledgements:} We thank Andrew Stinchcombe for technical support and the rest of the Tactile Robotics group for their help. \bibliographystyle{IEEEtran}
\section{Introduction} \label{Intro} Circulant graphs form an important and very well-studied class of graphs \cite{monakhova2012survey}. They find applications in computer network design, telecommunication networking, distributed computation, parallel processing architectures, and VLSI design. For $n\in \mathbb{N}$ with $n\geq 4$, let $S=\{s_1, s_2, \ldots, s_k \}$ where the $s_i$ $(i=1,2, \ldots, k)$ are positive integers such that $1\leq s_1 < s_2 < \ldots < s_k \leq \lfloor \frac{n}{2}\rfloor$. The \textit{circulant graph} $C_n(S)=(V,E)$ has the set $V=\{0, 1, \ldots, n-1 \}$ of integers as a vertex set, and in it two distinct vertices $i,j\in \{0, 1, \ldots, n-1 \}$ are adjacent if and only if $|i-j|_n\in S$, where $|x|_n=\min(|x|, n-|x|)$. The elements of the generating set $S = \{s_1, s_2, \ldots , s_k\}$ are called generators (chords). A parametric description of the form $(n; S)$ completely specifies the circulant of order $n$ and dimension $k.$ Since circulants belong to the family of Cayley graphs, any undirected circulant graph $C_n(S)$ is vertex-transitive and $|S|$-regular. Circulant graphs are also known as star polygon graphs \cite{boesch1984circulants}, cyclic graphs \cite{david1972enumeration}, distributed loop networks \cite{bermond1995distributed}, chordal rings \cite{barriere2000fault}, multiple fixed step graphs \cite{fabrega1997fault}, point-symmetric graphs \cite{turner1967point}, and, in the Russian literature, as Diophantine structures \cite{monakhova1979synthesis}. The larger the cardinality of $S$, the smaller the diameter of $C_n(S)$. For example, when $S = \{1, 2, \ldots , \lfloor \frac{n}{2}\rfloor\}$, the circulant graph $C_n(S)$ is isomorphic to the complete graph $K_n$ of order $n$, with diameter $diam(K_n)=1$.
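The adjacency rule above is easy to realize directly. The following sketch (Python, purely for illustration) builds the edge set of $C_n(S)$ from the circular distance $|i-j|_n$:

```python
# Illustrative construction of the circulant graph C_n(S): vertices
# 0..n-1, with i ~ j iff |i-j|_n lies in the connection set S.
def circulant_edges(n, S):
    dist = lambda i, j: min(abs(i - j), n - abs(i - j))   # |i-j|_n
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if dist(i, j) in S}

# C_8(1,3) is 4-regular, so it has 8*4/2 = 16 edges.
E = circulant_edges(8, {1, 3})
```

As a check of the closing remark, taking $S=\{1,\ldots,\lfloor n/2\rfloor\}$ makes every pair adjacent, recovering $K_n$.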
Therefore, in this paper we consider the connection set $S=\{1, s\}$ where $s$ is an integer such that $2\leq s\leq \lfloor \frac{n-1}{2} \rfloor,$ and we focus on the $4$-regular circulant graph $C_n(1,s)$ on $n$ vertices. It can be defined as the graph with vertex-set $V=\{i : i\in \mathbb{Z}_n\}$ and edge-set $E=\{(i,i\pm r) : r\in \{1,s\}\}$. Since $\gcd(n,1,s)=1$, the graph $C_n(1,s)$ is connected. The distance $d(i,j)$ between two vertices $i$ and $j$ in $C_n(1,s)$ is the length of a shortest path joining $i$ and $j$. The diameter of $C_n(1,s)$, denoted $diam(C_n(1,s))$, is the maximum distance among all pairs of vertices in $C_n(1,s)$. Here we are interested in the following problem. \textit{\textbf{Problem.}} Given $n,s$, determine $diam(C_n(1,s))$. The problem has applications in the design of interconnection networks. The exact calculation of the diameter of circulant graphs is a well-studied problem, even for the case $|S|=2$. On the theoretical side, upper and lower bounds have been given (see \cite{monakhova2012survey, chen2005diameter}). As for algorithmic results, this problem was first stated by Wong and Coppersmith, who gave a heuristic algorithm for solving it \cite{wong1974combinatorial}. It was also studied by Zerovnik and Pisanski \cite{zerovnik1993computing}, who established an algorithm for computing the diameter of $C_n(1,s)$ with running time $O(\log(n))$. However, there were no formulas giving exact values of the diameter of $C_n(1,s)$ for all $n$ and $s$. Our approach, which makes it possible to determine distances in a circulant, allows us to give exact formulas for the diameter of $C_n(1,s)$ for various values of $n$ and $s$. The rest of this paper is structured as follows. In section \ref{Algo}, we focus on studying paths joining any two vertices in $C_n(1,s)$, and we determine the equivalence classes of paths existing in circulant graphs.
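For small $n$ the problem stated above can always be settled by brute force, which is useful for checking formulas. The sketch below (illustrative, not the paper's algorithm) runs a breadth-first search from vertex $0$; since $C_n(1,s)$ is vertex-transitive, the eccentricity of $0$ already equals the diameter:

```python
# Brute-force diameter of C_n(1, s) via BFS from vertex 0.
# Vertex-transitivity makes one BFS sufficient.
from collections import deque

def diameter_circulant(n, s):
    dist = [-1] * n
    dist[0] = 0
    q = deque([0])
    while q:
        u = q.popleft()
        for step in (1, -1, s, -s):   # the four neighbours i +/- 1, i +/- s
            v = (u + step) % n
            if dist[v] == -1:
                dist[v] = dist[u] + 1
                q.append(v)
    return max(dist)

d = diameter_circulant(10, 4)   # e.g. the graph C_10(1, 4)
```

This runs in $O(n)$ time per graph, which is far from the $O(\log n)$ algorithm of Zerovnik and Pisanski but entirely adequate for verification.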
Later, we present a formula that provides the exact value of the distance between any two vertices in $C_n(1,s)$; then, for given $n$ and $s$, we present an algorithm for computing the diameter of $C_n(1,s)$. For theoretical results, section \ref{diam} provides exact formulas for the diameter of circulant graphs $C_n(1,s)$ for almost all $n$ and $s$, and gives an upper bound for the remaining values of $n$ and $s$ (see the table below, where $\lambda=\lfloor \frac{n}{s}\rfloor,$ $\gamma$ is the remainder of dividing $n$ by $s,$ $a=\lfloor \frac{s}{\gamma}\rfloor,$ $b$ is the remainder of dividing $s$ by $\gamma,$ $p_0=\lfloor \frac{\lambda+\gamma}{2} \rfloor$, $p_1=\lfloor \frac{\gamma-b+(a+1)\lambda+1}{2} \rfloor$, $p_2=\lfloor \frac{\gamma+b+(a-1)\lambda+1}{2} \rfloor$, $p_3=\lfloor \frac{b+a\lambda+1}{2} \rfloor,$ and $e_1= \min\{\max\{p_1, p_3\},\max\{p_0, p_2\}\}$). \begin{table}[h!] \medskip \centering\renewcommand{\arraystretch}{1.2} \begin{tabular}{|c|c|c|l|} \cline{4-4} \multicolumn{2}{c}{} & & Diameter of $C_n(1,s)$ \\ \hline \multicolumn{2}{|c}{$\gamma =0$} & & $=\lfloor \frac{\lambda+s-1}{2}\rfloor$ \\ \hline \multirow{7}{*}{\rotatebox{90}{\centering $\lambda\geq\gamma$}}& \multirow{2}{*}{$n$ even}&$s$ odd&$=\lceil \frac{\lambda}{2} \rceil + \frac{s-1}{2} -( \min(\lceil \frac{\gamma}{2}\rceil,\lceil \frac{s-\gamma+1}{2}\rceil) -1)$\\ \cline{3-4} & &$s$ even&$=\begin{cases} \lceil \frac{\lambda}{2} \rceil + \frac{s-\gamma}{2} & \mbox{if $\gamma \leq 2\lceil \frac{s-2}{4} \rceil$,} \\ \lfloor \frac{\lambda}{2} \rfloor +\frac{\gamma}{2} & \mbox{otherwise.} \end{cases}$\\ \cline{2-4} & \multirow{2}{*}{$n$ odd}&$s$ odd&$=\lceil \frac{\lambda}{2} \rceil + \frac{s-1}{2} -( \min(\lceil \frac{\gamma+1}{2}\rceil,\lceil \frac{s-\gamma+2}{2}\rceil) -1)$\\ \cline{3-4} & &$s$ even&$=\begin{cases} \lceil \frac{\lambda}{2} \rceil + \frac{s-2}{2} & \mbox{if $\gamma =1$,} \\ \lfloor \frac{\lambda}{2} \rfloor + \frac{s-\gamma+1}{2} & \mbox{if $3\leq \gamma \leq 2\lceil \frac{s}{4} \rceil -1$,} \\ \lceil \frac{\lambda}{2} \rceil + \frac{\gamma-1}{2} & \mbox{otherwise.} \end{cases}$\\ \hline \multicolumn{2}{|c}{$\lambda\leq\gamma$ and $b\leq a\lambda+1$} & & $=\begin{cases} p_1 -1 & \mbox{if $p_1=p_2$ and $(\gamma+b)(a\lambda-\lambda+1) \equiv 1 \pmod 2$,} \\ e_1 & \mbox{otherwise.} \end{cases}$ \\ \hline \multicolumn{2}{|c}{All $n$ and $s$} & & $\leq \min( \max\{ \lfloor \frac{n}{s} \rfloor+1, n-\lfloor \frac{n}{s} \rfloor s-2, ( \lfloor \frac{n}{s} \rfloor+1)s-n-1\}, \lfloor \frac{n+2}{4}\rfloor, \lfloor \frac{\lfloor \frac{n}{2} \rfloor}{s} \rfloor + \lceil \frac{s}{2} \rceil)$ \\ \hline \end{tabular} \label{tab1} \end{table} \section{Algorithm computing $diam(C_n(1,s))$} \label{Algo} Several different approaches have been used to obtain diameter results for circulant graphs. Among the methods we wish to mention, those achieved by Wong and Coppersmith \cite{wong1974combinatorial} were obtained first.
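The closed forms in the table above can be spot-checked numerically. The self-contained sketch below (illustrative Python) computes distances as $\min\,|\alpha|+|\beta|$ over representations $\alpha+\beta s\equiv t \pmod n$ and verifies the first row, $diam = \lfloor \frac{\lambda+s-1}{2}\rfloor$ when $\gamma=0$, for a few divisible cases:

```python
# Distance in C_n(1, s) as the minimum number of unit steps (alpha) plus
# s-steps (beta) needed to reach residue t modulo n, then a numeric check
# of the gamma = 0 row of the table.
def dist(n, s, t):
    best = n
    for beta in range(-n, n + 1):
        r = (t - beta * s) % n            # remaining offset covered by +/-1 steps
        best = min(best, abs(beta) + min(r, n - r))
    return best

for n, s in [(20, 4), (30, 5), (100, 10)]:  # cases with s | n, i.e. gamma = 0
    lam = n // s
    diam = max(dist(n, s, t) for t in range(n))
    assert diam == (lam + s - 1) // 2        # first row of the table
```

The same exhaustive minimization can be pointed at any of the other rows for small $n$ and $s$; it is quadratic in $n$, so it serves only as a sanity check, not a competitor to the formulas.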
Although some of their techniques were later improved upon, their method is quite accessible. Before presenting our approach, let us introduce some definitions and notations related to circulant graphs. \begin{definition} We denote a path leading from a vertex $i$ to another vertex $j$ in $C_n(1,s)$ by $P(i,j)$. It is represented by a couple $(\alpha a^{\pm}, \beta c^{\pm})$ where \begin{itemize} \item $a$ (resp. $c$) indicates that $P(i,j)$ walks through outer (resp. inner) edges; \item $\alpha$ (resp. $\beta$) is the number of outer (resp. inner) edges; \item $+$ (resp. $-$) means that $P(i,j)$ takes the clockwise (resp. the counterclockwise) direction. \end{itemize} \end{definition} \begin{remark} Let $i,j$ be two vertices of $C_n(1,s)$. A path $P(i,j)$ can join the vertices $i$ and $j$ in several ways: \begin{itemize} \item $P(i,j)=(\alpha a^{\pm}, \beta c^{\pm})$ means that $P(i,j)$ walks through $\alpha$ outer edges in the clockwise $(+)$ or the counterclockwise $(-)$ direction \textbf{before} going through $\beta$ inner edges in the clockwise $(+)$ or the counterclockwise $(-)$ direction; \item $P(i,j)=(\beta c^{\pm}, \alpha a^{\pm})$ means that $P(i,j)$ walks through $\beta$ inner edges in the clockwise $(+)$ or the counterclockwise $(-)$ direction \textbf{before} going through $\alpha$ outer edges in the clockwise $(+)$ or the counterclockwise $(-)$ direction; \item $P(i,j)=(\alpha a^{\pm}, 0)$ means that $P(i,j)$ walks \textbf{only} through $\alpha$ outer edges in the clockwise $(+)$ or the counterclockwise $(-)$ direction; \item $P(i,j)=(0, \beta c^{\pm})$ means that $P(i,j)$ walks \textbf{only} through $\beta$ inner edges in the clockwise $(+)$ or the counterclockwise $(-)$ direction. \end{itemize} \end{remark} \begin{notation} In $C_n(1, s)$, we represent an outer (resp. inner) edge connecting the vertices $i$ and $j$ and taking the clockwise $(+)$ or the counterclockwise $(-)$ direction by $i \leadsto^{a^\pm} j$ (resp. $i \leadsto^{c^\pm} j$).
\end{notation}
\begin{notation}
Let $i$ and $j$ be two vertices of $C_n(1, s)$. We denote the length of $P(i,j)$ by $\ell(P(i,j))$, the number of outer edges of $P(i,j)$ by $\ell_a(P(i,j))$, and the number of inner edges of $P(i,j)$ by $\ell_c(P(i,j))$.
\end{notation}
\begin{example}
Let us focus on the graph $C_{10}(1,4)$ presented in Figure \ref{fig1}.
\begin{figure}[!h]
\centering
\begin{tikzpicture}[line cap=round,line join=round,x=0.86cm,y=0.86cm]
\begin{scriptsize}
% the ten vertices of C_{10}(1,4), on a circle of radius 2.3
\foreach \i in {0,...,9} { \coordinate (v\i) at ({72-36*\i}:2.3); }
% inner edges (step s=4)
\foreach \i in {0,...,9} { \pgfmathtruncatemacro{\j}{mod(\i+4,10)} \draw [line width=0.8pt] (v\i)-- (v\j); }
% outer edges (step 1)
\foreach \i in {0,...,9} { \pgfmathtruncatemacro{\j}{mod(\i+1,10)} \draw [line width=1.2pt] (v\i)-- (v\j); }
% vertices and labels
\foreach \i in {0,...,9} { \draw [fill=black] (v\i) circle (2.0pt); \draw ({72-36*\i}:2.7) node {\textbf{\i}}; }
\end{scriptsize}
\end{tikzpicture}
\caption{The circulant graph $C_{10}(1,4).$}
\label{fig1}
\end{figure}
If we choose an arbitrary vertex, for instance the vertex $6$, there exist various paths leading from $0$ to $6$. We give some of them as examples in Table \ref{tab2}.
\begin{table}[!h]
\centering
\begin{tabular}{|l|c|c|c|c|}
\hline
$P(0,6)$ & Representation of $P(0,6)$ & $\ell_a(P(0,6))$ & $\ell_c(P(0,6))$ & $\ell(P(0,6))$\\
\hline
$=(2a^+,1c^+)$ & $0 \leadsto^{a^+} 1 \leadsto^{a^+} 2 \leadsto^{c^+} 6$ & 2 & 1 & 3\\
\hline
$=(2a^-,2c^+)$ & $0 \leadsto^{a^-} 9 \leadsto^{a^-} 8 \leadsto^{c^+} 2 \leadsto^{c^+} 6$ & 2 & 2 & 4\\
\hline
$=(0,4c^+)$ & $0 \leadsto^{c^+} 4 \leadsto^{c^+} 8 \leadsto^{c^+} 2 \leadsto^{c^+} 6$ & 0 & 4 & 4\\
\hline
$=(0,1c^-)$ & $0 \leadsto^{c^-} 6$ & 0 & 1 & 1 \\
\hline
\end{tabular}
\caption{Examples of paths leading from the vertex $0$ to the vertex $6$ in $C_{10}(1,4).$}
\label{tab2}
\end{table}
\end{example}
\begin{definition}
Let $i,j \in V(C_n(1,s))$, and let $P(i,j)$ and $Q(i,j)$ be two paths in $C_n(1,s)$. We say that $P(i,j)$ is equivalent to $Q(i,j)$, denoted $P(i,j) \approx Q(i,j)$, if and only if $\ell(P(i,j))=\ell(Q(i,j))$, $\ell_a(P(i,j))=\ell_a(Q(i,j))$, $\ell_c(P(i,j))=\ell_c(Q(i,j))$, and $P(i,j)$ and $Q(i,j)$ take the same directions.
\end{definition}
\begin{lemma}
The relation $\approx$ is an equivalence relation on the set of paths in $C_n(1,s)$.
\end{lemma}
\begin{proof}
Let $\mathcal{P}$ be the set of paths in $C_n(1,s)$.
It is easy to verify that $\approx$ is an equivalence relation on $\mathcal{P}.$
\end{proof}
\begin{lemma}
Let $i$ and $j$ be two vertices of $C_n(1,s).$ We have
$$ P(i,j)\approx\begin{cases} P(0,j-i), & \text{if $ i< j$}\\ P(0,n-i+j), & \text{otherwise} \end{cases} $$
\end{lemma}
\begin{proof}
The circulant graph $C_n(1,s)$ is vertex-transitive. Thus, for any two vertices $i$ and $j$ of $C_n(1,s),$ the path $P(i,j)$ can be translated into the path $P(0,k)$ where
$$ k=\begin{cases} j-i, & \text{if $ i< j$}\\ n-i+j, & \text{otherwise} \end{cases} $$
\end{proof}
\begin{remark}
For the rest of this work, we consider paths leading from $0$ to a vertex $i$ of $C_n(1,s)$, denoted $P(i).$ We denote the distance between $0$ and $i$ by $d(i)$.
\end{remark}
\begin{example}
Let us take the graph $C_{10}(1,4)$ presented in Figure \ref{fig1}. Here are some examples of equivalent paths in $C_{10}(1,4)$:
\begin{itemize}
\item $P(6,9)=(1a^-,1c^+)$ is equivalent to $P(3)=(1a^-,1c^+);$
\item $P(6,2)=(0,1c^-)$ is equivalent to $P(6)=(0,1c^-).$
\end{itemize}
\end{example}
\begin{lemma} \label{Key}
For any vertex $i$ of $C_n(1,s)$, there exists a path leading from $0$ to $i$ walking through all its outer edges \textbf{before} its inner edges, or vice versa.
\end{lemma}
\begin{proof}
Let $i$ be a vertex of $C_n(1,s)$. Assume that $W$ is an arbitrary walk in $C_n(1,s)$ leading from $0$ to $i$ taking $\alpha$ edges $a^{+}$, $\beta$ edges $a^{-}$, $\gamma$ edges $c^{+}$, and $\lambda$ edges $c^{-}$. Then there exists a path $P(i)$, walking through all its outer edges \textbf{before} its inner edges or vice versa, represented as follows.
\begin{align*} P(i) &= \begin{cases} ((\alpha-\beta) a^+ , (\gamma -\lambda) c^+) & \qquad \mbox{if } \quad \alpha\geq\beta \ \mbox{ and } \ \gamma \geq\lambda, \\ ((\beta-\alpha) a^- , (\lambda-\gamma) c^-) & \qquad \mbox{if } \quad \alpha\leq\beta \ \mbox{ and } \ \gamma \leq\lambda, \\ ((\alpha-\beta) a^+ , (\lambda-\gamma) c^-) & \qquad \mbox{if } \quad \alpha\geq\beta \ \mbox{ and } \ \gamma \leq\lambda, \\ ((\beta-\alpha) a^- , (\gamma-\lambda) c^+) & \qquad \mbox{if } \quad \alpha\leq\beta \ \mbox{ and } \ \gamma \geq\lambda. \end{cases} \end{align*}
Thus, $\ell(W)=\alpha + \beta + \gamma + \lambda$ and
\begin{align*} \ell(P(i)) &= \begin{cases} (\alpha-\beta) + (\gamma -\lambda) & \qquad \mbox{if } \quad \alpha\geq\beta \ \mbox{ and } \ \gamma \geq\lambda, \\ (\beta-\alpha) + (\lambda-\gamma) & \qquad \mbox{if } \quad \alpha\leq\beta \ \mbox{ and } \ \gamma \leq\lambda, \\ (\alpha-\beta) + (\lambda-\gamma) & \qquad \mbox{if } \quad \alpha\geq\beta \ \mbox{ and } \ \gamma \leq\lambda, \\ (\beta-\alpha) + (\gamma-\lambda) & \qquad \mbox{if } \quad \alpha\leq\beta \ \mbox{ and } \ \gamma \geq\lambda. \end{cases} \end{align*}
It is clear that $\ell(P(i))\leq \ell(W).$ Thus, for all $i\in V(C_n(1,s))$, there exists a path leading from $0$ to $i$ walking through all its outer edges \textbf{before} its inner edges, or vice versa.
\end{proof}
\begin{example}
Let us take the graph $C_{10}(1,4)$ presented in Figure \ref{fig1}. Assume that $W$ is an arbitrary walk in $C_{10}(1,4)$ leading from $0$ to $6$ taking $1 a^{+}$, $2 c^{+}$, $1 a^{-}$, and $3 c^{-}$. Then $W$ is represented as follows.
$$0 \leadsto^{a^+} 1 \leadsto^{c^+} 5 \leadsto^{c^+} 9 \leadsto^{a^-} 8 \leadsto^{c^-} 4 \leadsto^{c^-} 0 \leadsto^{c^-} 6.$$
By Lemma \ref{Key}, there exists a path leading from $0$ to $6$ defined by $P(6)=((1-1)a^+ ,(3-2)c^-)=(0,1c^-).$ We have $\ell(P(6))=1<\ell(W)=7$.
Thus, for the vertex $6$, there exists a path $P(6)$ walking through all its outer edges \textbf{before} its inner edges.
\end{example}
\begin{notation}
Let $t$ be an integer such that $1\leq t\leq \frac{s}{gcd(n,s)}.$ We define the integers $q,q_t,\bar{q}_t,r,r_t,$ and $\bar{r}_t$ by
\begin{itemize}
\item $i\equiv r \pmod{s}$ and $q:=\lfloor \frac{i}{s}\rfloor$,
\item $tn+i\equiv r_t \pmod{s}$ and $q_t:=\lfloor \frac{tn+i}{s}\rfloor$,
\item $tn-i\equiv \bar{r}_t \pmod{s}$ and $\bar{q}_t:=\lfloor \frac{tn-i}{s}\rfloor$.
\end{itemize}
\end{notation}
\begin{remark}
In $C_n(1,s)$, the inner edges form $gcd(n, s)$ cycles of length $\frac{n}{gcd(n,s)}.$
\end{remark}
\begin{remark}
Let $i$ be a vertex of $C_n(1,s)$.
\begin{itemize}
\item If $gcd(n,s)=1,$ then the number of inner edges in $P(i)$ cannot be greater than $n-1$, i.e., $1\leq \bar{q}_{t} \leq n-1$ and $ 1 \leq q_{t} \leq n-1.$ Therefore, $1\leq t \leq s.$
\item Otherwise, if $gcd(n,s)\neq 1$, then $1\leq \bar{q}_{t} < \frac{n}{gcd(n,s)}$ and $1\leq q_{t} < \frac{n}{gcd(n,s)}.$ Therefore, $1\leq t \leq \frac{s}{gcd(n,s)}.$
\end{itemize}
\end{remark}
Our goal is to provide formulas giving exact values for the diameter of $C_n(1,s)$ for all values of $n$ and $s$. Since, by definition, the diameter is the maximum distance among all pairs of vertices and the distance is the length of a shortest path, we analyzed the behavior of paths in $C_n(1,s)$ and found that there are plenty of equivalent paths leading from $0$ to a vertex $i$ in $C_n(1,s)$. Our approach consists in determining these equivalence classes of paths in $C_n(1,s)$.
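The reduction behind Lemma \ref{Key} — cancelling opposite outer steps against each other and opposite inner steps against each other — can be sketched in a few lines of Python (a sketch of ours, not part of the paper; the function name is our choice):

```python
def reduce_walk(n, s, alpha, beta, gamma, lam):
    """Reduce a walk in C_n(1, s) taking alpha edges a+, beta edges a-,
    gamma edges c+, and lam edges c- to the canonical path of the key lemma:
    all net outer edges first, then all net inner edges.
    Returns (end vertex, (signed outer count, signed inner count), path length)."""
    a = alpha - beta          # net number of outer edges (sign = direction)
    c = gamma - lam           # net number of inner edges (sign = direction)
    end = (a + s * c) % n     # both the walk and the reduced path reach this vertex
    return end, (a, c), abs(a) + abs(c)

# The walk of the example: 1 a+, 2 c+, 1 a-, 3 c- in C_10(1, 4) has length 7,
# but it reduces to the path P(6) = (0, 1c^-) of length 1.
print(reduce_walk(10, 4, 1, 1, 2, 3))   # (6, (0, -1), 1)
```

The returned length $|\alpha-\beta|+|\gamma-\lambda|$ is exactly $\ell(P(i))$ from the proof of Lemma \ref{Key}, and is never larger than the walk length $\alpha+\beta+\gamma+\lambda$.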
\begin{lemma} \label{path} Let $t$ be an integer such that $1\leq t\leq \frac{s}{gcd(n,s)}.$ For every vertex $i$ of $C_n(1,s)$, there exists a class of pairwise non-equivalent paths $\mathcal{C}= \{P^1 (i) , P^2 (i), P^{1,t}(i),$ $ P^{2,t}(i), P^{3,t}(i), P^{4,t}(i) \}$ such that
\vskip 0.1cm
$P^1 (i)=(ra^+,qc^+) \qquad \qquad \quad \qquad \ \ \ P^2 (i)=((s-r)a^-,(q+1)c^+) \qquad \ \ \ P^{1,t}(i)=(r_{t}a^+,q_{t}c^+);$
\vskip 0.1cm
$ P^{2,t}(i)=((s-r_{t})a^-,(q_{t}+1)c^+) \qquad \ \ P^{3,t}(i)=(\bar{r}_{t}a^-,\bar{q}_{t}c^-) \qquad \qquad \qquad P^{4,t}(i)=((s-\bar{r}_{t})a^+,(\bar{q}_{t}+1)c^-). $ \\
The lengths of these paths are, respectively,
\vskip 0.1cm
$ \ell^1 (i)=r+ q \quad \quad \qquad \qquad \quad \ \ \ell^2 (i)=1+s-r+ q \quad \quad \quad \ell^{1,t} (i)=r_{t}+ q_{t} ;$
\vskip 0.1cm
$ \ell^{2,t} (i)=1+s-r_{t}+ q_{t} \quad \quad \quad \ell^{3,t} (i)=\bar{r}_{t}+ \bar{q}_{t} \quad \quad \qquad \ \ \ell^{4,t} (i)=1+s-\bar{r}_{t}+ \bar{q}_{t}$.
\end{lemma}
\begin{proof}
Let $i$ be a vertex of $C_n(1,s)$ and $t$ be an integer such that $1\leq t\leq \frac{s}{gcd(n,s)}.$ As $i=q s+r,$ we consider the path $P(i)$ represented as follows $0 \leadsto^{a^+} 1 \leadsto^{a^+} 2 \leadsto^{a^+} \dots \leadsto^{a^+} r \leadsto^{c^+} r+s \leadsto^{c^+} r+2s \leadsto^{c^+} \dots \leadsto^{c^+} r+q s=i.$ We have $P^1 (i)=(r a^+,q c^+)$ and $\ell^1 (i)= r+ q$. Moreover, $i$ can be written as $i=(q+1) s+(r-s).$ We consider, in this case, the path $P(i)$ represented as follows $0 \leadsto^{a^-} n-1 \leadsto^{a^-} n-2 \leadsto^{a^-} \dots \leadsto^{a^-} n-(s-r) \leadsto^{c^+} n-(s-r)+s \leadsto^{c^+} n-(s-r)+2s \leadsto^{c^+} \dots \leadsto^{c^+} n-(s-r)+(q+1) s=n+i \ (\equiv i \mod n).$ We have $P^2(i)=((s-r)a^-,(q+1)c^+)$ and $\ell^2 (i)=1+s-r+ q$. Modulo $n$, the vertex $i$ can also be written as $tn+i=q_{t} s+r_{t}$.
In this case, we consider the path $P(i)$ represented as follows $0 \leadsto^{a^+} 1 \leadsto^{a^+} 2 \leadsto^{a^+} \dots \leadsto^{a^+} r_{t} \leadsto^{c^+} r_{t}+s \leadsto^{c^+} r_{t}+2s \leadsto^{c^+} \dots \leadsto^{c^+} r_{t}+q_{t} s=tn+i \ (\equiv i \mod n).$ We have $P^{1,t}(i)=(r_{t}a^+,q_{t}c^+)$ and $\ell^{1,t} (i)=r_{t}+ q_{t}.$ Similarly, $tn+i$ can be written as $tn+i=(q_{t}+1) s+(r_{t}-s)$. We consider the path $P(i)$ represented as follows $0 \leadsto^{a^-} n-1 \leadsto^{a^-} n-2 \leadsto^{a^-} \dots \leadsto^{a^-} n-(s-r_{t}) \leadsto^{c^+} n-(s-r_{t})+s \leadsto^{c^+} n-(s-r_{t})+2s \leadsto^{c^+} \dots \leadsto^{c^+} n-(s-r_{t})+(q_{t}+1) s=(t+1)n+i \ (\equiv i \mod n).$ We have $P^{2,t}(i)=((s-r_{t})a^-,(q_{t}+1)c^+)$ and $\ell^{2,t} (i)=1+s-r_{t}+ q_{t}.$ Next, $i$ can be written as $tn-i=\bar{q}_{t} s+\bar{r}_{t}$. We consider the path $P(i)$ represented as follows $0 \leadsto^{a^-} n-1 \leadsto^{a^-} n-2 \leadsto^{a^-} \dots \leadsto^{a^-} -\bar{r}_{t} \leadsto^{c^-} -\bar{r}_{t}-s \leadsto^{c^-} -\bar{r}_{t}-2s \leadsto^{c^-} \dots \leadsto^{c^-} -\bar{r}_{t}-\bar{q}_{t} s=-tn+i \ (\equiv i \mod n).$ We have $P^{3,t}(i)=(\bar{r}_{t}a^-,\bar{q}_{t}c^-)$ and $\ell^{3,t} (i)=\bar{r}_{t}+ \bar{q}_{t}.$ Finally, $i$ can be written as $tn-i=(\bar{q}_{t}+1) s+(\bar{r}_{t}-s)$. We consider the path $P(i)$ represented as follows $0 \leadsto^{a^+} 1 \leadsto^{a^+} 2 \leadsto^{a^+} \dots \leadsto^{a^+} -(\bar{r}_{t}-s) \leadsto^{c^-} -(\bar{r}_{t}-s)-s \leadsto^{c^-} -(\bar{r}_{t}-s)-2s \leadsto^{c^-} \dots \leadsto^{c^-} -(\bar{r}_{t}-s)-(\bar{q}_{t}+1) s=-tn+i \ (\equiv i \mod n). $ We have $P^{4,t}(i)=((s-\bar{r}_{t})a^+,(\bar{q}_{t}+1)c^-)$ and $\ell^{4,t} (i)=1+s-\bar{r}_{t}+ \bar{q}_{t}.$
\end{proof}
\begin{remark}
For some values of $n$ and $s$, it is possible that $\mathcal{C}$ contains some \textit{walks}. If this is the case, then these walks can be replaced by other paths of $\mathcal{C}$ of smaller lengths.
\end{remark}
\begin{proposition}
Every path in the class $\mathcal{C}$ constitutes an equivalence class.
\end{proposition}
\begin{proof}
Let $i$ be a vertex of $C_n(1,s)$ and $t$ be an integer such that $1\leq t\leq \frac{s}{gcd(n,s)}.$ The set of paths $\mathcal{P}$ contains the following six equivalence classes:
\begin{itemize}
\item $[P^1 (i)]=\{P(i) \in \mathcal{P} : P^1 (i)\approx P(i) \}$, $[P^2 (i)]=\{P(i) \in \mathcal{P} : P^2 (i)\approx P(i) \}$;
\item $[P^{1,t}(i)]=\{P(i) \in \mathcal{P} : P^{1,t}(i)\approx P(i) \}$, $[P^{2,t}(i)]=\{P(i) \in \mathcal{P} : P^{2,t}(i)\approx P(i) \}$;
\item $[P^{3,t}(i)]=\{P(i) \in \mathcal{P} : P^{3,t}(i)\approx P(i) \}$, $[P^{4,t}(i)]=\{P(i) \in \mathcal{P} : P^{4,t}(i)\approx P(i) \}$.
\end{itemize}
Indeed, let $W$ be an arbitrary walk in $C_n(1,s)$ leading from $0$ to $i$. By Lemma \ref{Key}, there exists a path $Q(i)$ walking through all its outer edges \textbf{before} its inner edges or vice versa. Without loss of generality, assume that $Q(i)=(\alpha a^\pm , \beta c^\pm)$. In this case, there exists $ P(i) \in \mathcal{C}$ such that $\ell(P(i))=\ell(Q(i)),$ $\ell_a(P(i))=\ell_a(Q(i)),$ $ \ell_c(P(i))=\ell_c(Q(i))$, and $P(i)$ and $Q(i)$ take the same directions, i.e., there exists $ P(i) \in \mathcal{C}$ such that $ P(i) \approx Q(i)$.
\end{proof}
\begin{example}
Let us take the graph $C_{10}(1,4)$ presented in Figure \ref{fig1}. Assume that $W$ is an arbitrary walk in $C_{10}(1,4)$ leading from $0$ to $6$ taking $1 a^{+}$, $2 c^{+}$, $1 a^{-}$, and $3 c^{-}$. The walk $W$ is represented as follows.
$$0 \leadsto^{a^+} 1 \leadsto^{c^+} 5 \leadsto^{c^+} 9 \leadsto^{a^-} 8 \leadsto^{c^-} 4 \leadsto^{c^-} 0 \leadsto^{c^-} 6.$$
By Lemma \ref{Key}, there exists a path $Q(6)$ such that $Q(6)=((1-1)a^+ ,(3-2)c^-)=(0,1c^-).$ We remark that $\ell(Q(6))=1<\ell(W)=7.$ Furthermore, by Lemma \ref{path}, for $t=1$, we have $n-i=10-6=4=s$. Thus, $P^{3,1}(6)=(0,1c^-).$ Hence, $ Q(6) \approx P^{3,1}(6)$. Consequently, $ Q(6) \in [P^{3,1}(6)]$.
\end{example}
Since, by definition, the distance is the length of a shortest path, the next result provides the distance between $0$ and any vertex $i$ in $C_n(1,s)$.
\begin{lemma} \label{dis1}
Let $t$ be an integer such that $1\leq t\leq \frac{s}{gcd(n,s)}.$ For all $ i\in V(C_n(1,s))$, we have
$$d(i) =\min(\ell^1 (i), \ell^2 (i), \ell^{1,t} (i), \ell^{2,t} (i), \ell^{3,t} (i), \ell^{4,t} (i)).$$
\end{lemma}
As $diam(C_n(1,s))= \max\{d(i): \ 0\leq i \leq n-1\}$, we obtain the following result.
\begin{lemma}\label{dis2}
For all $n$ and $s$, we have
$$diam(C_n(1,s))= \max\{d(i): \ 2\leq i \leq \lfloor \frac{n}{2} \rfloor\}.$$
\end{lemma}
\begin{proof}
Let $t$ be an integer such that $1\leq t\leq \frac{s}{gcd(n,s)},$ and let $i$ be a vertex of $C_n(1,s)$. We need to prove that $d(i)=d(n-i)$ for $1\leq i \leq \lfloor \frac{n}{2} \rfloor$. Since, by Lemma \ref{dis1}, $d(i) =\min(\ell^1 (i), \ell^2 (i), \ell^{1,t} (i), \ell^{2,t} (i), \ell^{3,t} (i), \ell^{4,t} (i)),$ we have $d(i) \leq \ell(i)$ where $\ell(i) \in \{ \ell^1 (i), \ell^2 (i), \ell^{1,t} (i), \ell^{2,t} (i), \ell^{3,t} (i), \ell^{4,t} (i) \}$. First, we prove that $d(i)\leq d(n-i).$ For that we need to discuss the following cases:
\begin{itemize}
\item If $d(n-i)=\ell^1 (n-i)$, then we have $\ell^{3,1} (i) = \bar{r}_{1}+ \lfloor \frac{n-i}{s} \rfloor = \ell^1 (n-i)$ and $d(i)\leq \ell^{3,1} (i)$. Thus, $d(i)\leq d(n-i).$
\item If $d(n-i)=\ell^2 (n-i),$ then we have $\ell^{4,1} (i) = 1+s-\bar{r}_{1}+ \lfloor \frac{n-i}{s} \rfloor = \ell^2 (n-i)$ and $d(i)\leq \ell^{4,1} (i)$.
Thus, $d(i)\leq d(n-i).$ \item If $d(n-i)=\ell^{1,t} (n-i),$ with $1\leq t \leq \frac{s}{gcd(n,s)}$, then for $t=1,$ we have $\ell^{1,1} (n-i)=r_{1}+ \lfloor \frac{n+(n-i)}{s} \rfloor = r_{1}+ \lfloor \frac{2n-i}{s} \rfloor=\ell^{3,2} (i)$ and $d(i)\leq \ell^{3,2} (i)$. Thus, $d(i)\leq d(n-i).$ For $t\geq 2,$ we have $\ell^{3,t} (i) = \bar{r}_{t}+ \lfloor \frac{tn-i}{s} \rfloor = \bar{r}_{t}+ \lfloor \frac{(t-1)n+(n-i)}{s} \rfloor =\ell^{1,t-1} (n-i)$ and $d(i)\leq \ell^{3,t} (i)$. Thus, $d(i)\leq d(n-i).$ \item If $d(n-i)=\ell^{2,t} (n-i),$ with $1\leq t \leq \frac{s}{gcd(n,s)}$, then for $t=1,$ $\ell^{2,1} (n-i)=1+s-r_{1}+ \lfloor \frac{n+(n-i)}{s} \rfloor = 1+s- r_{1}+ \lfloor \frac{2n-i}{s} \rfloor=\ell^{4,2} (i)$ and $d(i)\leq \ell^{4,2} (i)$. Thus, $d(i)\leq d(n-i).$ For $t\geq 2,$ $\ell^{4,t} (i) =1+s- \bar{r}_{t}+ \lfloor \frac{tn-i}{s} \rfloor = 1+s-\bar{r}_{t}+ \lfloor \frac{(t-1)n+(n-i)}{s} \rfloor =\ell^{2,t-1} (n-i)$ and $d(i)\leq \ell^{4,t} (i)$. Thus, $d(i)\leq d(n-i).$ \item If $d(n-i)=\ell^{3,t} (n-i),$ with $1\leq t \leq \frac{s}{gcd(n,s)}$, then for $t=1,$ $\ell^{3,1} (n-i)=\bar{r}_{1}+ \lfloor \frac{n-(n-i)}{s} \rfloor = \bar{r}_{1}+ \lfloor \frac{i}{s} \rfloor=\ell^{1} (i)$ and $d(i)\leq \ell^{1} (i)$. Thus, $d(i)\leq d(n-i).$ For $t\geq 2,$ $\ell^{1,t-1} (i) = r_{t-1}+ \lfloor \frac{(t-1)n+i}{s} \rfloor = r_{t-1}+ \lfloor \frac{tn-(n-i)}{s} \rfloor =\ell^{3,t} (n-i)$ and $d(i)\leq \ell^{1,t-1} (i)$. Thus, $d(i)\leq d(n-i).$ \item If $d(n-i)=\ell^{4,t} (n-i),$ with $1\leq t \leq \frac{s}{gcd(n,s)}$, then for $t=1,$ $\ell^{4,1} (n-i)=1+s-\bar{r}_{1}+ \lfloor \frac{n-(n-i)}{s} \rfloor = 1+s- \bar{r}_{1}+ \lfloor \frac{i}{s} \rfloor=\ell^{2} (i)$ and $d(i)\leq \ell^{2} (i)$. Thus, $d(i)\leq d(n-i).$ For $t\geq 2,$ $\ell^{2,t-1} (i) = 1+s-r_{t-1}+ \lfloor \frac{(t-1)n+i}{s} \rfloor =1+s- r_{t-1}+ \lfloor \frac{tn-(n-i)}{s} \rfloor =\ell^{4,t} (n-i)$ and $d(i)\leq \ell^{2,t-1} (i)$. 
Thus, $d(i)\leq d(n-i).$
\end{itemize}
By the same method, we prove that $d(i)\geq d(n-i).$ Thus, $diam(C_n(1,s))= \max\{d(i): \ 1\leq i \leq \lfloor \frac{n}{2} \rfloor\}$. However, $d(1)=d(n-1)=1.$ Hence, $diam(C_n(1,s))= \max\{d(i): \ 2\leq i \leq \lfloor \frac{n}{2} \rfloor\}$.
\end{proof}
Next, we present an algorithm computing the diameter of $C_n(1,s)$.
\begin{algorithm}[H] \label{MyAlgo}
\SetAlgoLined
\KwResult{Value of $diam(C_n(1,s))$}
\vskip 0.2cm
\textbf{Step 1.} For given $n$ and $s$, we compute $gcd(n,s)$;
\vskip 0.2cm
\textbf{Step 2.} For $i\in \{2, \ldots, \lfloor \frac{n}{2} \rfloor\}$ and $1\leq t \leq \frac{s}{gcd(n,s)}$, we compute $$d(i) =\min(\ell^1 (i), \ell^2 (i), \ell^{1,t} (i), \ell^{2,t} (i), \ell^{3,t} (i), \ell^{4,t} (i));$$
\vskip 0.2cm
\textbf{Step 3.} $diam(C_n(1,s)) = \displaystyle \max_{2\leq i\leq \lfloor \frac{n}{2} \rfloor} d(i)$.
\caption{Diameter of $C_n(1,s)$}
\end{algorithm}
\begin{remark}
Algorithm \ref{MyAlgo} is a simple algorithm that gives exact values for the diameter of $C_n(1,s)$ for all $n$ and $s.$
\end{remark}
The next section provides formulas, established with the help of Algorithm \ref{MyAlgo}, giving exact values for the diameter of $C_n(1,s)$.
\section{Diameter formulas for $diam(C_n(1,s))$} \label{diam}
Our method makes it possible to find the vertices of $C_{n}(1,s)$ whose distance from the vertex $0$ coincides with the diameter of the graph, which yields an exact formula for the diameter. Let $n\geq 5$ and $2\leq s\leq \lfloor \frac{n-1}{2} \rfloor.$ For the rest of this section, let $n=\lambda s +\gamma$ where $\gamma$ is an integer such that $0\leq \gamma <s$.
\begin{theorem}\cite{chen2005diameter}\label{0}
If $n=\lambda s$, $$diam(C_{n}(1,s)) = \lfloor \frac{\lambda +s-1}{2}\rfloor.$$
\end{theorem}
Next, we discuss the case when $\gamma\neq 0$ and $\lambda > \gamma$.
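Algorithm \ref{MyAlgo}, together with the distance formula of Lemma \ref{dis1}, can be sketched in Python and cross-checked against an independent breadth-first search (a sketch of ours, not part of the paper; the function names are our choice):

```python
from math import gcd
from collections import deque

def d_formula(n, s, i):
    """Distance d(i) from vertex 0 to vertex i in C_n(1, s) via Lemma dis1:
    minimum of the six candidate lengths over 1 <= t <= s/gcd(n, s)."""
    g = gcd(n, s)
    q, r = divmod(i, s)
    lengths = [r + q, 1 + s - r + q]                    # ell^1(i), ell^2(i)
    for t in range(1, s // g + 1):
        q_t, r_t = divmod(t * n + i, s)
        lengths += [r_t + q_t, 1 + s - r_t + q_t]       # ell^{1,t}, ell^{2,t}
        qb_t, rb_t = divmod(t * n - i, s)
        lengths += [rb_t + qb_t, 1 + s - rb_t + qb_t]   # ell^{3,t}, ell^{4,t}
    return min(lengths)

def diam_formula(n, s):
    """Steps 2 and 3 of Algorithm 1: max of d(i) over 2 <= i <= floor(n/2)."""
    return max(d_formula(n, s, i) for i in range(2, n // 2 + 1))

def diam_bfs(n, s):
    """Independent check: diameter of C_n(1, s) by breadth-first search from 0."""
    dist = [-1] * n
    dist[0] = 0
    queue = deque([0])
    while queue:
        u = queue.popleft()
        for step in (1, -1, s, -s):
            v = (u + step) % n
            if dist[v] == -1:
                dist[v] = dist[u] + 1
                queue.append(v)
    return max(dist)

print(diam_formula(10, 4))   # 2  (the graph of Figure 1)
```

For $n=\lambda s$ the output also agrees with Theorem \ref{0}: for instance $C_{12}(1,4)$ has $\lambda=3$ and `diam_formula(12, 4)` returns $\lfloor(3+4-1)/2\rfloor=3$.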
\subsection{Diameter of $C_{n}(1,s)$ when $n$ is even, $s$ is odd, and $\lambda > \gamma > 0$}
\begin{lemma}\cite{chen2005diameter}\label{1.1}
Let $n$ be even and $s$ be odd. We have
$$diam(C_n(1,s))\leq \lceil \frac{\lambda}{2} \rceil + \frac{s-1}{2} -(\min( \lceil \frac{\gamma}{2} \rceil , \lceil \frac{s-\gamma+1}{2} \rceil) -1).$$
\end{lemma}
\begin{lemma}\label{1.2}
Let $n$ be even and $s$ be odd. There exists a vertex $i$ of $C_{n}(1,s)$ such that
$$d(i)= \lceil \frac{\lambda}{2} \rceil + \frac{s-1}{2} -(\min( \lceil \frac{\gamma}{2} \rceil , \lceil \frac{s-\gamma+1}{2} \rceil) -1).$$
\end{lemma}
\begin{proof}
Let $d=\lceil \frac{\lambda}{2} \rceil + \frac{s-1}{2} -(\min( \lceil \frac{\gamma}{2} \rceil , \lceil \frac{s-\gamma+1}{2} \rceil) -1)$. Let $R=\{1,2, \ldots, s-1\}$ be the set of all the possible values of $\gamma$. This set can be partitioned into: $R_1=\{2m-1: 1\leq m \leq \lfloor \frac{s+3}{4} \rfloor\}$, $R_2=\{2m: 1\leq m \leq \lfloor \frac{s+3}{4} \rfloor\}$, $R_3=\{s-2m+2: 2\leq m \leq \lceil \frac{s-1}{4} \rceil\}$, and $R_4=\{s-2m+1: 1\leq m \leq \lceil \frac{s-5}{4} \rceil\}$. Note that $R_3 = R_4= \varnothing $ when $s\leq 5$. Moreover, $R=R_1\cup R_2 \cup R_3 \cup R_4.$ In fact, when $\gamma$ is odd (i.e., $\gamma\in R_1 \cup R_3$), we have $R_1=\{1,3,5,\ldots, 2\lfloor \frac{s+3}{4} \rfloor -1\}$ and $R_3=\{s-2\lceil \frac{s-1}{4} \rceil +2, \ldots, s-2\}$. It is easy to verify that, when $s$ is odd, we obtain $2\lfloor \frac{s+3}{4} \rfloor -1+2=s-2\lceil \frac{s-1}{4} \rceil +2$. Similarly, when $\gamma$ is even, we have $2\lfloor \frac{s+3}{4} \rfloor +2=s-2\lceil \frac{s-5}{4} \rceil +1$.
\vskip 0.2cm
\quad\textbf{Case 1.} $\gamma \in R_1$
\\Since $\gamma$ is odd, we have $\min( \lceil \frac{\gamma}{2} \rceil , \lceil \frac{s-\gamma+1}{2} \rceil)=\min( \frac{\gamma+1}{2} , \frac{s-\gamma+2}{2})= \begin{cases} \frac{\gamma+1}{2} & \mbox{ if } \ \gamma \leq \frac{s+1}{2},\\ \frac{s-\gamma+2}{2} & \mbox{ otherwise.
} \end{cases}$ \\However, $\gamma=2m-1,$ where $1\leq m \leq \lfloor \frac{s+3}{4} \rfloor$. Thus, if $s\equiv 1 \pmod 4$, then $\gamma \leq \frac{s+1}{2}$. Otherwise, if $s\equiv 3 \pmod 4$, then $\gamma \leq \frac{s-1}{2}$. Hence, $\min( \frac{\gamma +1}{2} , \frac{s-\gamma+2}{2})= \frac{\gamma +1}{2}.$ Moreover, $\lambda$ is odd and $\lambda>\gamma \geq 1$. Thus, $\lambda\geq 3.$ Therefore, when $\gamma\in R_1$, $d= \frac{\lambda +1}{2} + \frac{s-1}{2} -\frac{\gamma +1}{2}+1.$ \vskip 0.2cm \qquad\quad\textbf{Case 1.1.} $\gamma=1$ \\In this case, $d= \frac{\lambda +s}{2}.$ Let $i=\frac{\lambda-1}{2} s+\frac{s+1}{2}$ be a vertex of $C_n(1,s).$ We have, \begin{itemize} \item $\ell^1(i)=\ell^2(i)=\frac{\lambda-1}{2} + \frac{s+1}{2}= d;$ \end{itemize} Let $1\leq t \leq \frac{s}{gcd(n,s)}$ be an integer. We have $tn+i=(t\lambda + \frac{\lambda-1}{2} )s +t+\frac{s+1}{2}.$ If $t+\frac{s+1}{2} < s$ then, \begin{itemize} \item $\ell^{1,t}(i)=t(\lambda + 1)+\frac{\lambda-1}{2} + \frac{s+1}{2} > d$; \item $\ell^{2,t}(i)=t(\lambda - 1)+\frac{\lambda-1}{2} + \frac{s+1}{2} > d$; \end{itemize} If $t+\frac{s+1}{2} \geq s,$ i.e., $t \geq \frac{s-1}{2}$, then $tn+i=(t\lambda + \frac{\lambda-1}{2}+\lambda')s +\gamma'$ where $\lambda'\geq 1$, $\gamma' \geq 0$ and $t+\frac{s+1}{2}=\lambda's+\gamma'$. 
\begin{itemize} \item $\ell^{1,t}(i)=t\lambda + \frac{\lambda-1}{2} + \lambda' + \gamma' \geq t\lambda+ \frac{\lambda+1}{2} > t+ \frac{\lambda+1}{2}.$ So, $\ell^{1,t}(i) > \frac{s-1}{2}+ \frac{\lambda+1}{2}.$ Thus, $\ell^{1,t}(i) > d;$ \item $\ell^{2,t}(i) =1+s-\gamma' + t\lambda + \frac{\lambda-1}{2}+\lambda' \geq t\lambda + \frac{\lambda+1}{2}+1.$ Thus, $\ell^{2,t}(i)> d;$ \end{itemize} We have $tn-i=((t-1)\lambda + \frac{\lambda+1}{2} )s +t-\frac{s+1}{2}.$ If $t-\frac{s+1}{2} < 0,$ then $tn-i=((t-1)\lambda + \frac{\lambda-1}{2})s +t+\frac{s-1}{2},$ \begin{itemize} \item $\ell^{3,t}(i)= (t-1)(\lambda+1)+ \frac{\lambda+1}{2} +\frac{s-1}{2} \geq \frac{\lambda+1}{2} +\frac{s-1}{2} \geq d$; \end{itemize} \begin{itemize} \item $\ell^{4,t}(i)=(t-1)(\lambda-1)+ \frac{\lambda-1}{2} +\frac{s+1}{2} \geq \frac{\lambda+1}{2} +\frac{s-1}{2}\geq d$; \end{itemize} If $0\leq t-\frac{s+1}{2} <s$, i.e., $t\geq \frac{s+1}{2}$, then $tn-i=((t-1)\lambda + \frac{\lambda+1}{2} )s +t-\frac{s+1}{2}.$ \begin{itemize} \item $\ell^{3,t}(i)=(t-1)\lambda + \frac{\lambda+1}{2} +t-\frac{s+1}{2}\geq (t-1)\lambda + \frac{\lambda+1}{2} >t + \frac{\lambda-1}{2}.$ So, $\ell^{3,t}(i) > d.$ \end{itemize} \begin{itemize} \item $\ell^{4,t}(i)=1+s-(t-\frac{s+1}{2})+(t-1)\lambda + \frac{\lambda+1}{2}\geq (t-1)\lambda + \frac{\lambda+1}{2}+1.$ Thus, $\ell^{4,t}(i) > d.$ \end{itemize} Thus, by Lemma \ref{dis1}, we have $d(i) =\ell^1 (i)=\ell^2 (i)=d$. \vskip 0.2cm \qquad\quad\textbf{Case 1.2.} $\gamma\geq 3$ If $s=5$ then, we have $\gamma=3,$ $\lambda\geq 5$, and $d= \frac{\lambda +1}{2} + 1.$ Let $i=(\frac{\lambda+1}{2}+1)s$ be a vertex of $C_n(1,s)$. By a simple calculation of the lengths $\ell^1(i)$ to $\ell^{4,t}(i)$ represented in Lemma \ref{path}, and by applying Lemma \ref{dis1}, we obtain $d(i) =\ell^1 (i)=d$. 
If $s>5$ then, $\gamma\geq 3$, $\lambda \geq 5$, and $d= \frac{\lambda +1}{2} + \frac{s+1}{2} -\frac{\gamma +1}{2}.$ Let $i=\frac{\lambda +1}{2}s+ \frac{\gamma +1}{2} + \frac{s+1}{2}$ be a vertex of $C_n(1,s)$. Since $s>5$ and $\gamma \leq \frac{s+1}{2}$, we obtain $\frac{\gamma +1}{2} + \frac{s+1}{2}<s$. Note that $\frac{\gamma-1}{2}-1\geq -\frac{\gamma+1}{2}+2$ because $\gamma\geq 3$. By a simple calculation of the lengths $\ell^1(i)$ to $\ell^{4,t}(i)$ represented in Lemma \ref{path}, and by applying Lemma \ref{dis1}, we obtain $d(i) =\ell^2 (i)=d$. \vskip 0.2cm \quad\textbf{Case 2.} $\gamma \in R_2$ \\ Since $\gamma$ is even, we have $\min( \lceil \frac{\gamma}{2} \rceil , \lceil \frac{s-\gamma+1}{2} \rceil)=\min( \frac{\gamma}{2} , \frac{s-\gamma+1}{2})= \begin{cases} \frac{\gamma}{2} & \mbox{ if } \ \gamma \leq \frac{s+1}{2},\\ \frac{s-\gamma+1}{2} & \mbox{ otherwise. } \end{cases}$ \\Moreover, $\gamma =2m$ where $1\leq m \leq \lfloor \frac{s+3}{4} \rfloor$. Thus, for $s$ odd, we obtain $\gamma \leq \frac{s+3}{2}$. Hence, we have $\gamma\geq 2$, $\lambda\geq 4$, $s\geq 3$, and $d= \begin{cases} \frac{\lambda}{2} + \frac{s+1}{2} -\frac{s-\gamma+1}{2} & \mbox{ if } \ \gamma = \frac{s+3}{2},\\ \frac{\lambda}{2} + \frac{s+1}{2}-\frac{\gamma}{2} & \mbox{ if } \ \gamma \leq \frac{s+1}{2}. \end{cases}$ \qquad\quad\textbf{Case 2.1.} $\gamma=\frac{s+3}{2}$ \\ In this case, $d=\frac{\lambda}{2}+\frac{\gamma}{2}$. Let $i=\frac{\lambda}{2}s+\frac{\gamma}{2}$ be a vertex of $C_n(1,s)$. Note that since $s>\gamma$, we obtain $s-\frac{\gamma}{2}>\frac{\gamma}{2} $. By a simple calculation of the lengths $\ell^1(i)$ to $\ell^{4,t}(i)$ represented in Lemma \ref{path}, and by applying Lemma \ref{dis1}, we obtain $d(i) =\ell^1 (i)=d$. \vskip 0.2cm \qquad\quad\textbf{Case 2.2.} $\gamma\leq \frac{s+1}{2}$ If $s=3$ then, $\gamma=2$, $\lambda \geq 4$, and $d=\frac{\lambda}{2}+1.$ Let $i=(\frac{\lambda}{2}+1)s$ be a vertex of $C_n(1,s)$. 
By a simple calculation of the lengths $\ell^1(i)$ to $\ell^{4,t}(i)$ represented in Lemma \ref{path}, and by applying Lemma \ref{dis1}, we obtain $d(i) =\ell^1 (i)=d$. If $s>3$ then, $\gamma\geq 2$, $\lambda\geq 4$, and $d= \frac{\lambda}{2} + \frac{s+1}{2} -\frac{\gamma}{2}$. Let $i=\frac{\lambda}{2}s+ \frac{s+1}{2}+\frac{\gamma}{2}$ be a vertex of $C_n(1,s)$. Since $s>3$ and $\gamma\leq \frac{s+1}{2}$, we have $\frac{s+1}{2}+\frac{\gamma}{2}<s.$ Note that $\frac{\gamma}{2}-1 \geq -\frac{\gamma}{2}+1$ because $\gamma\geq 2$. By a simple calculation of the lengths $\ell^1(i)$ to $\ell^{4,t}(i)$ represented in Lemma \ref{path}, and by applying Lemma \ref{dis1}, we obtain $d(i) =\ell^2 (i)=d$. \vskip 0.2cm \quad\textbf{Case 3.} $\gamma \in R_3$ \\ In this case, $\gamma=s-2m+2$ where $2\leq m \leq \lceil \frac{s-1}{4} \rceil$. If $s\equiv 1 \pmod 4$, then $\gamma\geq \frac{s+5}{2}.$ Otherwise, if $s\equiv 3 \pmod 4$, then $\gamma\geq \frac{s+3}{2}.$ Since $\gamma$ is odd and $\gamma \geq \frac{s+1}{2}$, we get $\min( \lceil \frac{\gamma}{2} \rceil , \lceil \frac{s-\gamma+1}{2} \rceil)=\min( \frac{\gamma+1}{2} , \frac{s-\gamma+2}{2} )=\frac{s-\gamma+2}{2}.$ Therefore, $\gamma\geq 3$, $\lambda \geq 5$, $s\geq 7$ (when $s\leq 5$, $R_3=\emptyset$), and $d=\frac{\lambda +1}{2} + \frac{s+1}{2} -\frac{s-\gamma+2}{2}.$ Let $i=(\frac{\lambda +1}{2}- \frac{s-\gamma+2}{2})s + \frac{s+1}{2}$ be a vertex of $C_n(1,s)$. By a simple calculation of the lengths $\ell^1(i)$ to $\ell^{4,t}(i)$ represented in Lemma \ref{path}, and by applying Lemma \ref{dis1}, we obtain $d(i) =\ell^1 (i)=\ell^2 (i)=d$. \vskip 0.2cm \quad\textbf{Case 4.} $\gamma \in R_4$ \\ Here we have $\gamma = s-2m+1$ where $1\leq m \leq \lceil \frac{s-5}{4} \rceil$.
If $s\equiv 1 \pmod 4$, then $\gamma\geq \frac{s+7}{2}.$ Otherwise, if $s\equiv 3 \pmod 4$, then $\gamma\geq \frac{s+5}{2}.$ Moreover, since $\gamma$ is even and $\gamma \geq \frac{s+1}{2}$, we have $\min( \lceil \frac{\gamma}{2} \rceil , \lceil \frac{s-\gamma+1}{2} \rceil)=\min( \frac{\gamma}{2} , \frac{s-\gamma+1}{2} )=\frac{s-\gamma+1}{2}.$ Therefore, $\gamma\geq 4$, $\lambda\geq 6$, $s\geq 7$ (when $s\leq 5$, $R_4=\emptyset$), and $d=\frac{\lambda }{2} + \frac{s+1}{2} -\frac{s-\gamma+1}{2}.$ Let $i=(\frac{\lambda }{2}- \frac{s-\gamma+1}{2}+1)s + \frac{s-1}{2}$ be a vertex of $C_n(1,s)$. By a simple calculation of the lengths $\ell^1(i)$ to $\ell^{4,t}(i)$ represented in Lemma \ref{path}, and by applying Lemma \ref{dis1}, we obtain $d(i) =\ell^1 (i)=d$. Consequently, for all $\gamma\in R$, there exists a vertex $i$ of $C_{n}(1,s)$ such that, $d(i)= \lceil \frac{\lambda}{2} \rceil + \frac{s-1}{2} -(\min( \lceil \frac{\gamma}{2} \rceil , \lceil \frac{s-\gamma+1}{2} \rceil) -1).$ \end{proof} Thus, the following theorem follows from Lemmas \ref{1.1} and \ref{1.2}. \begin{theorem} Let $n$ be even and $s$ be odd. We have, $$diam(C_{n}(1,s)) = \lceil \frac{\lambda}{2} \rceil + \frac{s-1}{2} -(\min( \lceil \frac{\gamma}{2} \rceil , \lceil \frac{s-\gamma+1}{2} \rceil) -1).$$ \end{theorem} \subsection{Diameter of $C_{n}(1,s)$ when $n$ is even, $s$ is even, and $\lambda > \gamma > 0$} \begin{lemma}\cite{chen2005diameter} \label{2.1} Let $n$ and $s$ be even. We have, $$diam(C_{n}(1,s))\leq\begin{cases} \lceil \frac{\lambda}{2} \rceil + \frac{s-\gamma}{2} & \mbox{ if } \ \gamma \leq 2\lceil \frac{s-2}{4} \rceil, \\ \lfloor \frac{\lambda}{2} \rfloor +\frac{\gamma}{2}& \mbox{ otherwise. } \end{cases}$$ \end{lemma} \begin{lemma} \label{2.2} Let $n$ and $s$ be even.
There exists a vertex $i$ of $C_{n}(1,s)$ such that, $$d(i)= \begin{cases} \lceil \frac{\lambda}{2} \rceil + \frac{s-\gamma}{2} & \mbox{ if } \ \gamma \leq 2\lceil \frac{s-2}{4} \rceil, \\ \lfloor \frac{\lambda}{2} \rfloor +\frac{\gamma}{2}& \mbox{ otherwise. } \end{cases}$$ \end{lemma} \begin{proof} Let $d=\begin{cases} \lceil \frac{\lambda}{2} \rceil + \frac{s-\gamma}{2} & \mbox{ if } \ \gamma \leq 2\lceil \frac{s-2}{4} \rceil, \\ \lfloor \frac{\lambda}{2} \rfloor +\frac{\gamma}{2} & \mbox{ otherwise. } \end{cases}$\\In this case, $\gamma$ is even and $s\geq 4$ because if $s=2$, then $\gamma=0$ (see Theorem \ref{0}). \textbf{Case 1.} $\gamma \leq 2\lceil \frac{s-2}{4} \rceil$ \qquad \textbf{Case 1.1.} $\lambda$ is even \\ In this case, $\gamma\geq 2$, $\lambda\geq 4$, $s\geq 4$, and $d=\frac{\lambda}{2}+\frac{s-\gamma}{2}.$ Let $i=(\frac{\lambda}{2}-\frac{\gamma}{2})s+\frac{s}{2}$ be a vertex of $C_n(1,s)$. Table \ref{tab10} represents a simple calculation of the lengths $\ell^1(i)$ to $\ell^{4,t}(i)$ provided in Lemma \ref{path}. \begin{table}[H] \centering {\begin{tabular}{ | c || l || r | } \hline $\ell^1(i)$ & $=\frac{\lambda}{2}+\frac{s-\gamma}{2}$ & $=d$ \\ \hline $\ell^2(i)$ & $=\frac{\lambda}{2}+\frac{s-\gamma}{2}+1$ & $>d$ \\ \hline $\ell^{1,t}(i), \ell^{2,t}(i)$ & $ >\begin{cases} \frac{\lambda}{2}+\frac{s-\gamma}{2} & \mbox{ if } \ t\gamma+\frac{s}{2}<s,\\ \frac{\lambda}{2}+\frac{s-\gamma}{2}+1 & \mbox{ if } \ t\gamma+\frac{s}{2}\geq s. \end{cases}$ & $>d$ \\ \hline $\ell^{3,t}(i)$ & $ >\begin{cases} \frac{\lambda}{2}+\frac{s+\gamma}{2} & \mbox{ if } \ t\gamma-\frac{s}{2}<0,\\ \frac{\lambda}{2}+\frac{s-\gamma}{2} & \mbox{ if } \ 0\leq t\gamma-\frac{s}{2}<s,\\ \frac{\lambda}{2}+\frac{s+\gamma}{2}+1 & \mbox{ if } \ t\gamma-\frac{s}{2} \geq s. 
\end{cases}$ & $> d$ \\ \hline $\ell^{4,t}(i)$ & $ \begin{cases} \geq \frac{\lambda}{2}+\frac{s-\gamma}{2} & \mbox{ if } \ t\gamma-\frac{s}{2}<0,\\ >\frac{\lambda}{2}+\frac{s-\gamma}{2}+1 & \mbox{ if } \ 0\leq t\gamma-\frac{s}{2}<s,\\ > \frac{\lambda}{2}+\frac{s+\gamma}{2}+2 & \mbox{ if } \ t\gamma-\frac{s}{2} \geq s. \end{cases}$ & $\geq d$ \\\hline \end{tabular}} \caption{Values of $\ell^1(i)$, $\ell^2(i)$, $\ell^{1,t}(i)$, \ldots, $\ell^{4,t}(i)$.} \label{tab10} \end{table}Thus, by Lemma \ref{dis1}, we have $d(i) =\ell^1 (i)=d$. \qquad \textbf{Case 1.2.} $\lambda$ is odd If $\gamma=2$ then, $\lambda\geq 3$, $s\geq 4$, and $d=\frac{\lambda+1}{2}+\frac{s-2}{2}.$ Let $i=\frac{\lambda-1}{2}s+\frac{s}{2}+1$ be a vertex of $C_n(1,s)$. Note that since $s\geq 4$, we have $\frac{s}{2}+1<s$. By a simple calculation of the lengths $\ell^1(i)$ to $\ell^{4,t}(i)$ represented in Lemma \ref{path}, and by applying Lemma \ref{dis1}, we obtain $d(i) =\ell^2 (i)=d$. If $\gamma\geq 4$ then, $\lambda\geq 5$, $s\geq 6$, and $d=\frac{\lambda+1}{2}+\frac{s-\gamma}{2}.$ Let $i=\frac{\lambda+1}{2}s+\frac{\gamma}{2}+ \frac{s}{2}+1$ be a vertex of $C_n(1,s)$. Since $s> 4$ and $\gamma\leq \lceil \frac{s-2}{2} \rceil$, we have $\frac{\gamma}{2}+ \frac{s}{2}+1<s$. Note that $\frac{\gamma}{2}-2 > -\frac{\gamma}{2}+1$ because $\gamma\geq 4$. By a simple calculation of the lengths $\ell^1(i)$ to $\ell^{4,t}(i)$ represented in Lemma \ref{path}, and by applying Lemma \ref{dis1}, we obtain $d(i) =\ell^2 (i)=d$. \textbf{Case 2.} $\gamma > 2\lceil \frac{s-2}{4} \rceil$ If $\lambda$ is even then, $\gamma\geq 2$, $\lambda\geq 4$, $s\geq 4$, and $d=\frac{\lambda}{2}+\frac{\gamma}{2}.$ Let $i=\frac{n}{2}$ be a vertex of $C_n(1,s)$. Note that since $s>\gamma$, we have $s-\frac{\gamma}{2}>\frac{\gamma}{2}$. By a simple calculation of the lengths $\ell^1(i)$ to $\ell^{4,t}(i)$ represented in Lemma \ref{path}, and by applying Lemma \ref{dis1}, we obtain $d(i) =\ell^1 (i)=d$. 
If $\lambda$ is odd then, $\gamma\geq 2$, $\lambda\geq 3$, $s\geq 4$, and $d=\frac{\lambda-1}{2}+\frac{\gamma}{2}.$ Let $i=\frac{\lambda-1}{2}s+\frac{\gamma}{2}$ be a vertex of $C_n(1,s)$. Since $s>\gamma$, we have $s-\frac{\gamma}{2}>\frac{\gamma}{2}$. By a simple calculation of the lengths $\ell^1(i)$ to $\ell^{4,t}(i)$ represented in Lemma \ref{path}, and by applying Lemma \ref{dis1}, we obtain $d(i) =\ell^1 (i)=d$. This completes the proof. \end{proof} Thus, the following theorem follows from Lemmas \ref{2.1} and \ref{2.2}. \begin{theorem} Let $n$ and $s$ be even. We have, $$diam(C_{n}(1,s))=\begin{cases} \lceil \frac{\lambda}{2} \rceil + \frac{s-\gamma}{2} & \mbox{ if } \ \gamma \leq 2\lceil \frac{s-2}{4} \rceil, \\ \lfloor \frac{\lambda}{2} \rfloor +\frac{\gamma}{2}& \mbox{ otherwise. } \end{cases}$$ \end{theorem} \subsection{Diameter of $C_{n}(1,s)$ when $n$ is odd, $s$ is odd, and $\lambda > \gamma > 0$} \begin{lemma}\cite{chen2005diameter} \label{3.1} Let $n$ and $s$ be odd. We have, $$diam(C_{n}(1,s))\leq \lceil \frac{\lambda}{2} \rceil + \frac{s-1}{2} -(\min( \lceil \frac{\gamma +1}{2} \rceil , \lceil \frac{s-\gamma+2}{2} \rceil) -1).$$ \end{lemma} \begin{lemma} \label{3.2} Let $n$ and $s$ be odd. There exists a vertex $i$ of $C_{n}(1,s)$ such that, $$d(i)= \lceil \frac{\lambda}{2} \rceil + \frac{s-1}{2} -(\min( \lceil \frac{\gamma +1}{2} \rceil , \lceil \frac{s-\gamma+2}{2} \rceil) -1).$$ \end{lemma} \begin{proof} Let $d=\lceil \frac{\lambda}{2} \rceil + \frac{s-1}{2} -(\min( \lceil \frac{\gamma +1}{2} \rceil , \lceil \frac{s-\gamma+2}{2} \rceil) -1).$ Let $R=\{1,2, \ldots, s-1\}$ be the set of all the possible values of $\gamma$. This set can be partitioned into: $R_1=\{2m-1: 1\leq m \leq \lfloor \frac{s+3}{4} \rfloor\}$, $R_2=\{2m-2: 2\leq m \leq \lceil \frac{s+5}{4} \rceil\}$, $R_3=\{s-2m+2: 2\leq m \leq \lceil \frac{s-1}{4} \rceil\}$, and $R_4=\{s-2m+3: 2\leq m \leq \lceil \frac{s-1}{4} \rceil\}$. 
Moreover, $R=R_1\cup R_2 \cup R_3 \cup R_4.$ In fact, when $\gamma$ is odd (i.e., $\gamma\in R_1\cup R_3$), we have $R_1=\{1,3,5,\ldots, 2\lfloor \frac{s+3}{4} \rfloor -1\}$ and $R_3=\{s-2\lceil \frac{s-1}{4} \rceil +2, \ldots, s-2\}$. It is easy to verify that, when $s$ is odd, we obtain $2\lfloor \frac{s+3}{4} \rfloor -1+2=s-2\lceil \frac{s-1}{4} \rceil +2$. Similarly, when $\gamma$ is even, we have $2\lceil \frac{s+5}{4} \rceil -2 +2=s-2\lceil \frac{s-1}{4} \rceil +3$. \vskip 0.2cm \quad\textbf{Case 1.} $\gamma \in R_1$ \\ Since $\gamma$ is odd, we have $\min( \lceil \frac{\gamma +1}{2} \rceil , \lceil \frac{s-\gamma+2}{2} \rceil)=\min( \frac{\gamma+1}{2} , \frac{s-\gamma+2}{2})= \begin{cases} \frac{\gamma+1}{2} & \mbox{ if } \ \gamma \leq \frac{s+1}{2},\\ \frac{s-\gamma+2}{2} & \mbox{ otherwise. } \end{cases}$ \\However, we have $\gamma =2m-1$ where $1\leq m \leq \lfloor \frac{s+3}{4} \rfloor$. Thus, if $s\equiv 1 \pmod 4$, then $\gamma \leq \frac{s+1}{2}$. Otherwise, if $s\equiv 3 \pmod 4$, then $\gamma \leq \frac{s-1}{2}$. Hence, $\min( \frac{\gamma +1}{2} , \frac{s-\gamma+2}{2})= \frac{\gamma +1}{2}.$ Moreover, we have $\gamma\geq 1$, $\lambda\geq 2$, $s\geq 3$, and $d= \frac{\lambda}{2} + \frac{s+1}{2} -\frac{\gamma +1}{2}.$ Let $i=(\frac{\lambda}{2}-1)s+\frac{s-1}{2}+\frac{\gamma +1}{2}$ be a vertex of $C_n(1,s).$ Since $\gamma \leq \frac{s+1}{2}$ and $s\geq 3$, we have $\frac{s-1}{2}+\frac{\gamma +1}{2}<s$. Note that $\frac{\gamma+1}{2}-1\geq -\frac{\gamma+1}{2}+1$ because $\gamma\geq 1$. Table \ref{tab15} represents a simple calculation of the lengths $\ell^1(i)$ to $\ell^{4,t}(i)$ provided in Lemma \ref{path}.
\begin{table}[H] \centering {\begin{tabular}{ | c || l || r | } \hline $\ell^1(i)$ & $=\frac{\lambda}{2}+\frac{s-1}{2}+\frac{\gamma +1}{2}-1$ & $\geq d$ \\ \hline $\ell^2(i)$ & $=\frac{\lambda}{2} + \frac{s+1}{2} -\frac{\gamma +1}{2}$ & $=d$ \\ \hline $\ell^{1,t}(i), \ell^{2,t}(i)$ & $ >\begin{cases} \frac{\lambda}{2} + \frac{s+1}{2} -\frac{\gamma +1}{2} & \mbox{ if } t\gamma+\frac{\gamma+1}{2}+\frac{s-1}{2}<s,\\ \frac{\lambda}{2} + \frac{s+1}{2} -\frac{\gamma +1}{2} & \mbox{ if } t\gamma+\frac{\gamma+1}{2}+\frac{s-1}{2}\geq s. \end{cases}$ & $>d$ \\ \hline $\ell^{3,t}(i), \ell^{4,t}(i)$ & $ >\begin{cases} \frac{\lambda}{2} + \frac{s+1}{2} -\frac{\gamma +1}{2} & \mbox{ if } \ t\gamma-\frac{\gamma+1}{2}-\frac{s-1}{2}<0,\\ \frac{\lambda}{2} + \frac{s+1}{2} -\frac{\gamma +1}{2} & \mbox{ if } \ t\gamma-\frac{\gamma+1}{2}-\frac{s-1}{2}<s,\\ \frac{\lambda}{2} + \frac{s+1}{2} -\frac{\gamma +1}{2} & \mbox{ if } t\gamma-\frac{\gamma+1}{2}-\frac{s-1}{2}\geq s. \end{cases}$ & $> d$ \\ \hline \end{tabular}} \caption{Values of $\ell^1(i)$, $\ell^2(i)$, $\ell^{1,t}(i)$, \ldots, $\ell^{4,t}(i)$.} \label{tab15} \end{table} Thus, by Lemma \ref{dis1}, we have $d(i) =\ell^2 (i)=d$. \vskip 0.1cm \quad\textbf{Case 2.} $\gamma \in R_2$ \\ Since $\gamma$ is even, we have $\min( \lceil \frac{\gamma +1}{2} \rceil , \lceil \frac{s-\gamma+2}{2} \rceil)=\min( \frac{\gamma+2}{2} , \frac{s-\gamma+3}{2})= \begin{cases} \frac{\gamma+2}{2} & \mbox{ if } \ \gamma \leq \frac{s+1}{2},\\ \frac{s-\gamma+3}{2} & \mbox{ if } \gamma \geq \frac{s+3}{2}. \end{cases}$ \\However, we have $\gamma=2m-2$ where $ 2\leq m \leq \lceil \frac{s+5}{4} \rceil.$ Thus, if $s\equiv 1 \pmod 4$, then $\gamma \leq \frac{s+3}{2}$. Otherwise, if $s\equiv 3 \pmod 4$, then $\gamma \leq \frac{s+1}{2}$. 
Hence, $\min( \frac{\gamma +2}{2} , \frac{s-\gamma+3}{2})= \frac{\gamma +2}{2}$ whenever $\gamma \leq \frac{s+1}{2}$. Therefore, $\gamma\geq 2$, $\lambda\geq 3$, $s\geq 3$, and $d= \begin{cases} \frac{\lambda+1}{2} + \frac{s+1}{2} -\frac{s-\gamma+3}{2} & \mbox{ if } \ \gamma = \frac{s+3}{2},\\ \frac{\lambda+1}{2} + \frac{s+1}{2}-\frac{\gamma+2}{2} & \mbox{ if } \ \gamma \leq \frac{s+1}{2}. \end{cases}$ \qquad\quad\textbf{Case 2.1.} $\gamma=\frac{s+3}{2}$ \\ In this case, $d=\frac{\lambda+1}{2}+\frac{s-1}{4}$. Let $i=\frac{\lambda+1}{2}s+\frac{s-1}{4}$ be a vertex of $C_n(1,s)$. Note that $s-\frac{s-1}{4}>\frac{s-1}{4} $. By a simple calculation of the lengths $\ell^1(i)$ to $\ell^{4,t}(i)$ represented in Lemma \ref{path}, and by applying Lemma \ref{dis1}, we obtain $d(i) =\ell^1 (i)=d$. \qquad\quad\textbf{Case 2.2.} $\gamma\leq \frac{s+1}{2}$ If $s=3$ then, $\gamma=2$, $\lambda \geq 3$, and $d=\frac{\lambda+1}{2}.$ Let $i=\frac{\lambda+1}{2}s$ be a vertex of $C_n(1,s)$. By a simple calculation of the lengths $\ell^1(i)$ to $\ell^{4,t}(i)$ represented in Lemma \ref{path}, and by applying Lemma \ref{dis1}, we obtain $d(i) =\ell^1 (i)=d$. If $s>3$ then, $\gamma\geq 2$, $\lambda\geq 3$, and $d= \frac{\lambda+1}{2} + \frac{s+1}{2} -\frac{\gamma+2}{2}$. Let $i=\frac{\lambda-1}{2}s+ \frac{s-1}{2}+\frac{\gamma+2}{2}$ be a vertex of $C_n(1,s)$. Since $s>3$ and $\gamma\leq \frac{s+1}{2}$, we get $\frac{s-1}{2}+\frac{\gamma+2}{2}<s.$ Note that $\frac{\gamma+2}{2}> -\frac{\gamma+2}{2}+2$ and $-\frac{\gamma-2}{2}> -\frac{\gamma+2}{2}+1$ because $\gamma\geq 2$. By a simple calculation of the lengths $\ell^1(i)$ to $\ell^{4,t}(i)$ represented in Lemma \ref{path}, and by applying Lemma \ref{dis1}, we obtain $d(i) =\ell^2 (i)=d$. \vskip 0.1cm \quad\textbf{Case 3.} $\gamma \in R_3$ \\ In this case, $\gamma\geq 3$, $\lambda \geq 4$, $s\geq 3$, and $d=\frac{\lambda }{2} + \frac{s+1}{2} -\frac{s-\gamma+2}{2}.$ Let $i=(\frac{\lambda}{2}- \frac{s-\gamma+2}{2})s + \frac{s+1}{2}$ be a vertex of $C_n(1,s)$.
By a simple calculation of the lengths $\ell^1(i)$ to $\ell^{4,t}(i)$ represented in Lemma \ref{path}, and by applying Lemma \ref{dis1}, we obtain $d(i) =\ell^1 (i)=\ell^2 (i)=d$. \vskip 0.1cm \quad\textbf{Case 4.} $\gamma \in R_4$ \\ We have $\gamma\geq 4$, $\lambda\geq 5$, $s\geq 3,$ and $d=\frac{\lambda +1}{2} + \frac{s+1}{2} -\frac{s-\gamma+3}{2}.$ Let $i=(\frac{\lambda+1}{2}- \frac{s-\gamma+3}{2})s + \frac{s-1}{2}$ be a vertex of $C_n(1,s)$. By a simple calculation of the lengths $\ell^1(i)$ to $\ell^{4,t}(i)$ represented in Lemma \ref{path}, and by applying Lemma \ref{dis1}, we obtain $d(i) =\ell^1 (i)=\ell^2 (i)=d$. \\Consequently, for all $\gamma\in R$, there exists a vertex $i$ of $C_{n}(1,s)$ such that, $d(i)= \lceil \frac{\lambda}{2} \rceil + \frac{s-1}{2} -(\min( \lceil \frac{\gamma +1}{2} \rceil , \lceil \frac{s-\gamma+2}{2} \rceil) -1).$ \end{proof} Thus, the following theorem follows from Lemmas \ref{3.1} and \ref{3.2}. \begin{theorem} Let $n$ and $s$ be odd. We have, $$diam(C_{n}(1,s))= \lceil \frac{\lambda}{2} \rceil + \frac{s-1}{2} -(\min( \lceil \frac{\gamma +1}{2} \rceil , \lceil \frac{s-\gamma+2}{2} \rceil) -1).$$ \end{theorem} \subsection{Diameter of $C_{n}(1,s)$ when $n$ is odd, $s$ is even, and $\lambda > \gamma > 0$} \begin{lemma}\cite{chen2005diameter} \label{4.1} Let $n$ be odd and $s$ be even. We have, $$diam(C_{n}(1,s))\leq \begin{cases} \lceil \frac{\lambda}{2} \rceil + \frac{s-2}{2} & \mbox{ if } \ \gamma = 1 \mbox{ or } \gamma = s-1, \\ \lfloor \frac{\lambda}{2} \rfloor + \frac{s-\gamma+1}{2} & \mbox{ if } \ 3\leq \gamma \leq 2\lceil \frac{s}{4} \rceil -1, \\ \lceil \frac{\lambda}{2} \rceil +\frac{\gamma -1}{2} & \mbox{ otherwise. } \end{cases}$$ \end{lemma} \begin{lemma} \label{4.2} Let $n$ be odd and $s$ be even.
There exists a vertex $i$ of $C_{n}(1,s)$ such that, $$d(i)= \begin{cases} \lceil \frac{\lambda}{2} \rceil + \frac{s-2}{2} & \mbox{ if } \ \gamma = 1 \mbox{ or } \gamma = s-1, \\ \lfloor \frac{\lambda}{2} \rfloor + \frac{s-\gamma+1}{2} & \mbox{ if } \ 3\leq \gamma \leq 2\lceil \frac{s}{4} \rceil -1, \\ \lceil \frac{\lambda}{2} \rceil +\frac{\gamma -1}{2} & \mbox{ otherwise. } \end{cases}$$ \end{lemma} \begin{proof} Let $d=\begin{cases} \lceil \frac{\lambda}{2} \rceil + \frac{s-2}{2} & \mbox{ if $\gamma \in \{1,s-1\}$,} \\ \lfloor \frac{\lambda}{2} \rfloor + \frac{s-\gamma+1}{2} & \mbox{ if } \ 3\leq \gamma \leq \lceil \frac{s}{2} \rceil -1, \\ \lceil \frac{\lambda}{2} \rceil +\frac{\gamma -1}{2} & \mbox{ otherwise. } \end{cases}$ \textbf{Case 1.} $\gamma \in \{1,s-1\}$\\ If $\gamma=1$ then, $\lambda\geq 2$, $s \geq 4$, and $d=\lceil\frac{\lambda}{2}\rceil+\frac{s-2}{2}.$ Let $i=(\lceil\frac{\lambda}{2}\rceil -1)s+\frac{s}{2}$ be a vertex of $C_n(1,s)$. Table \ref{tab21} represents a simple calculation of the lengths $\ell^1(i)$ to $\ell^{4,t}(i)$ provided in Lemma \ref{path}. \begin{table}[H] \centering {\begin{tabular}{ | c || l || r | } \hline $\ell^1(i)$ & $=\lceil\frac{\lambda}{2}\rceil+\frac{s-2}{2}$ & $=d$ \\ \hline $\ell^2(i)$ & $=\lceil\frac{\lambda}{2}\rceil+\frac{s}{2}$ & $>d$ \\ \hline $\ell^{1,t}(i), \ell^{2,t}(i)$ & $ >\begin{cases} \lceil\frac{\lambda}{2}\rceil+\frac{s-2}{2} & \mbox{ if } \ t+\frac{s}{2}<s,\\ \lceil\frac{\lambda}{2}\rceil+\frac{s-2}{2} & \mbox{ if } \ t+\frac{s}{2}\geq s. \end{cases}$ & $>d$ \\ \hline $\ell^{3,t}(i)$ & $ >\begin{cases} \lceil\frac{\lambda}{2}\rceil+\frac{s-2}{2} & \mbox{ if } \ t-\frac{s}{2}<0,\\ \lceil\frac{\lambda}{2}\rceil+\frac{s-2}{2} & \mbox{ if } \ 0\leq t-\frac{s}{2}<s. \end{cases}$ & $> d$ \\ \hline $\ell^{4,t}(i)$ & $ \begin{cases} \geq \lceil\frac{\lambda}{2}\rceil+\frac{s-2}{2} & \mbox{ if } \ t-\frac{s}{2}<0,\\ >\lceil\frac{\lambda}{2}\rceil+\frac{s-2}{2} & \mbox{ if } \ 0\leq t-\frac{s}{2}<s. 
\end{cases}$ & $\geq d$ \\ \hline \end{tabular}} \caption{Values of $\ell^1(i)$, $\ell^2(i)$, $\ell^{1,t}(i)$, \ldots, $\ell^{4,t}(i)$.} \label{tab21} \end{table}Thus, by Lemma \ref{dis1}, we have $d(i) =\ell^1 (i)=d$. If $\gamma=s-1$ then, $\lambda\geq s$, $s \geq 4$, and $d=\lceil\frac{\lambda}{2}\rceil+\frac{s-2}{2}.$ Let $i=(\lceil\frac{\lambda}{2}\rceil -1)s+\frac{s}{2}$ be a vertex of $C_n(1,s)$. By a simple calculation of the lengths $\ell^1(i)$ to $\ell^{4,t}(i)$ represented in Lemma \ref{path}, and by applying Lemma \ref{dis1}, we obtain $d(i) =\ell^1 (i)=d$. \textbf{Case 2.} $3\leq \gamma \leq \lceil \frac{s}{2}\rceil -1$ If $\lambda$ is even then, $\lambda\geq 4$, $s \geq 4$, and $d=\frac{\lambda}{2}+\frac{s}{2}-\frac{\gamma -1}{2}.$ Let $i=(\frac{\lambda}{2}-\frac{\gamma -1}{2})s+\frac{s}{2}+1$ be a vertex of $C_n(1,s)$. Note that since $s\geq 4$, we have $\frac{s}{2}+1<s$. By a simple calculation of the lengths $\ell^1(i)$ to $\ell^{4,t}(i)$ represented in Lemma \ref{path}, and by applying Lemma \ref{dis1}, we obtain $d(i) =\ell^2 (i)=d$. If $\lambda$ is odd then, $\gamma\geq 3$, $\lambda\geq 5$, $s \geq 4$, and $d=\frac{\lambda-1}{2}+\frac{s}{2}-\frac{\gamma -1}{2}.$ Let $i=\frac{\lambda-1}{2}s+\frac{\gamma -1}{2}+\frac{s}{2}+1$ be a vertex of $C_n(1,s)$. Note that since $\gamma \leq \lceil \frac{s}{2}\rceil -1$ and $s\geq 4$, we get $\frac{\gamma -1}{2}+\frac{s}{2}+1<s$. By a simple calculation of the lengths $\ell^1(i)$ to $\ell^{4,t}(i)$ represented in Lemma \ref{path}, and by applying Lemma \ref{dis1}, we obtain $d(i) =\ell^2 (i)=d$. \textbf{Case 3.} $\lceil \frac{s}{2}\rceil\leq \gamma <s-1 $ \\ We have $\gamma\geq 3$, $\lambda\geq 4$, $s \geq 4$, and $d=\lceil\frac{\lambda}{2}\rceil+\frac{\gamma -1}{2}.$ Let $i=\lceil\frac{\lambda}{2}\rceil s+\frac{\gamma -1}{2}$ be a vertex of $C_n(1,s)$. Note that since $s>\gamma$, we have $s-\frac{\gamma-1}{2}>\frac{\gamma+1}{2}$. 
By a simple calculation of the lengths $\ell^1(i)$ to $\ell^{4,t}(i)$ represented in Lemma \ref{path}, and by applying Lemma \ref{dis1}, we obtain $d(i) =\ell^1 (i)=d$. This completes the proof. \end{proof} Thus, the following theorem follows from Lemmas \ref{4.1} and \ref{4.2}. \begin{theorem} Let $n$ be odd and $s$ be even. We have, $$diam(C_{n}(1,s))=\begin{cases} \lceil \frac{\lambda}{2} \rceil + \frac{s-2}{2} & \mbox{ if } \ \gamma = 1 \mbox{ or } \gamma = s-1, \\ \lfloor \frac{\lambda}{2} \rfloor + \frac{s-\gamma+1}{2} & \mbox{ if } \ 3\leq \gamma \leq 2\lceil \frac{s}{4} \rceil -1, \\ \lceil \frac{\lambda}{2} \rceil +\frac{\gamma -1}{2} & \mbox{ otherwise. } \end{cases}$$ \end{theorem} Next, we discuss the case when $\gamma\neq 0$ and $\lambda \leq \gamma$. \subsection{Diameter of $C_{n}(1,s)$ when $\lambda \leq \gamma$} \begin{theorem}\cite{chen2005diameter} Let $s = a\gamma + b,$ where $b$ $(0 < b < \gamma)$ is the remainder of dividing $s$ by $\gamma$. Define $p_0=\lfloor \frac{\lambda+\gamma}{2} \rfloor$, $p_1=\lfloor \frac{\gamma-b+(a+1)\lambda+1}{2} \rfloor$, $p_2=\lfloor \frac{\gamma+b+(a-1)\lambda+1}{2} \rfloor$, and $p_3=\lfloor \frac{b+a\lambda+1}{2} \rfloor$. Let $e_1=\min\{\max\{p_1, p_3\},\max\{p_0, p_2\}\}.$ If $\lambda\leq \gamma$ and $b\leq a\lambda+1,$ then $$diam(C_{n}(1,s))=\begin{cases} p_1 -1 & \mbox{ if } p_1=p_2 \mbox{ and } (\gamma+b)(a\lambda-\lambda+1) \equiv 1 \pmod 2, \\ e_1 & \mbox{ otherwise. } \end{cases}$$ \end{theorem} For the rest of the values of $n$ and $s$, our algorithm gives an exact value for the diameter of $C_{n}(1,s)$, but does not provide an exact formula. Therefore, we present, in the next section, upper bounds on the diameter of $C_n(1,s)$ for all values of $n$ and $s$.
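The exact formulas above lend themselves to a mechanical check. The following Python sketch is purely illustrative and forms no part of the proofs; it assumes that $\lambda$ and $\gamma$ denote the quotient and the remainder of dividing $n$ by $s$, and it compares a breadth-first search computation of $diam(C_n(1,s))$ with the closed forms for the two even-$n$ cases.

```python
from collections import deque
from math import ceil

def bfs_diameter(n, s):
    """Diameter of the circulant graph C_n(1, s), via a single breadth-first
    search from vertex 0 (circulant graphs are vertex-transitive, so every
    vertex has the same eccentricity)."""
    dist = [-1] * n
    dist[0] = 0
    queue = deque([0])
    while queue:
        v = queue.popleft()
        for step in (1, -1, s, -s):
            w = (v + step) % n
            if dist[w] < 0:
                dist[w] = dist[v] + 1
                queue.append(w)
    return max(dist)

def diam_even_odd(n, s):
    """Closed form for n even and s odd (assumes lambda > gamma > 0)."""
    lam, gam = divmod(n, s)
    return ceil(lam / 2) + (s - 1) // 2 \
        - (min(ceil(gam / 2), ceil((s - gam + 1) / 2)) - 1)

def diam_even_even(n, s):
    """Closed form for n and s both even (assumes lambda > gamma > 0)."""
    lam, gam = divmod(n, s)
    if gam <= 2 * ceil((s - 2) / 4):
        return ceil(lam / 2) + (s - gam) // 2
    return lam // 2 + gam // 2
```

For instance, for $n=16$ and $s=5$ both the search and the formula give diameter $4$, and the comparison can be repeated over any range of admissible $n$ and $s$.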
\subsection{Upper bound for $diam(C_n(1,s))$} \label{Subsec:3} In 1990, Du \textit{et al.} \cite{du1990combinatorial} gave the following upper bound on the diameter of $C_n(1,s)$: \begin{equation} \label{1} diam(C_n(1,s)) \leq \max\{ \lfloor \frac{n}{s} \rfloor+1, n-\lfloor \frac{n}{s} \rfloor s-2, ( \lfloor \frac{n}{s} \rfloor+1)s-n-1\}. \end{equation} Another upper bound was given by G{\"o}bel and Neutel \cite{gobel2000cyclic}: \begin{equation} \label{2} diam(C_n(1,s)) \leq diam(C_n(1,2))=\lfloor \frac{n+2}{4}\rfloor. \end{equation} Next, we present our upper bound on the diameter of $C_n(1,s)$ for all values of $n$ and $s$. \begin{theorem} \label{prop10} For all $n$ and $s,$ we have \begin{equation} \label{3} diam(C_n(1,s)) \leq \lfloor \frac{\lfloor \frac{n}{2} \rfloor}{s}\rfloor + \lceil \frac{s}{2} \rceil. \end{equation} \end{theorem} \begin{proof} Let $i\leq \lfloor \frac{n}{2} \rfloor$ and write $i=qs+r$ with $0\leq r<s$. Then $q\leq \lfloor \frac{\lfloor \frac{n}{2} \rfloor}{s} \rfloor$, so $\ell^1(i) \leq r + \lfloor \frac{\lfloor \frac{n}{2} \rfloor}{s} \rfloor $ and $\ell^2(i) \leq 1+s-r+ \lfloor \frac{\lfloor \frac{n}{2} \rfloor}{s} \rfloor.$ By Lemma \ref{dis1}, $d(i) \leq \min(\ell^1(i), \ell^2(i)).$ If $r \leq \lceil \frac{s}{2} \rceil,$ then $\min(\ell^1(i), \ell^2(i))\leq\ell^1(i) \leq \lceil \frac{s}{2} \rceil+ \lfloor \frac{\lfloor \frac{n}{2} \rfloor}{s} \rfloor .$ If $r > \lceil \frac{s}{2} \rceil,$ then $\min(\ell^1(i), \ell^2(i))\leq\ell^2(i) \leq 1+s-r+ \lfloor \frac{\lfloor \frac{n}{2} \rfloor}{s}\rfloor \leq \lfloor \frac{s}{2} \rfloor +\lfloor \frac{\lfloor \frac{n}{2} \rfloor}{s}\rfloor \leq \lceil \frac{s}{2} \rceil +\lfloor \frac{\lfloor \frac{n}{2} \rfloor}{s}\rfloor.$ \\ Thus, for $i\leq \lfloor \frac{n}{2} \rfloor,$ $d(i) \leq \lfloor \frac{\lfloor \frac{n}{2} \rfloor}{s}\rfloor + \lceil \frac{s}{2} \rceil$.
By Lemma \ref{dis2}, we conclude that $diam(C_n(1,s)) \leq \lfloor \frac{\lfloor \frac{n}{2} \rfloor}{s}\rfloor + \lceil \frac{s}{2} \rceil$. \end{proof} By combining (\ref{1}), (\ref{2}) and (\ref{3}), we obtain the following result. \begin{theorem} For all $n$ and $s,$ we have $$diam(C_n(1,s)) \leq \min (\max\{ \lfloor \frac{n}{s} \rfloor+1, n-\lfloor \frac{n}{s} \rfloor s-2, ( \lfloor \frac{n}{s} \rfloor+1)s-n-1\}, \lfloor \frac{n+2}{4}\rfloor, \lfloor \frac{\lfloor \frac{n}{2} \rfloor}{s}\rfloor + \lceil \frac{s}{2} \rceil ).$$ \end{theorem} \paragraph*{\textbf{Conclusion}} To the best of our knowledge, despite the regularity of circulant graphs, there are no formulas giving exact values for the distance and the diameter of $C_n(1,s)$ for all $n$ and $s$. Our approach, which makes it possible to determine the distances in a circulant graph, allowed us to give exact formulas for the diameter of the circulant graphs $C_n(1,s)$ for almost all values of $n$ and $s$, and to provide upper bounds for the remaining cases. \bibliographystyle{unsrt}
\section*{References}} \usepackage{amsmath,amssymb} \usepackage{hyperref} \usepackage{tikz} \usetikzlibrary{intersections} \usetikzlibrary{decorations.pathmorphing} \usetikzlibrary{decorations.markings} \tikzset{ laser/.style={decorate, decoration={snake,segment length=5mm}, draw=black}, electron/.style={draw=black, postaction={decorate}, decoration={markings,mark=at position .55 with {\arrow[draw=black]{>}}}}, } \newcommand\zn[1]{Z_2^{(#1)}} \newcommand{\w}[2]{w_{#1\,#2}} \newcommand{\bra}[1]{\langle #1|} \newcommand{\ket}[1]{|#1\rangle} \newcommand{\braV}[1]{{_{_{\mathrm{V}}}}\!\bra{#1}} \newcommand{\ketV}[1]{\ket{#1}_{_{\mathrm{V}}}} \begin{document}
\title{Sideband Mixing in Intense Laser Backgrounds} \author{Martin~Lavelle and David~McMullan} \affiliation{School of Computing and Mathematics\\ University of Plymouth \\ Plymouth, PL4 8AA, UK} \date{\today} \begin{abstract} The electron propagator in a laser background has been shown to be made up of a series of sideband poles. In this paper we study this decomposition by analysing the impact of the residual gauge freedom in the Volkov solution on the sidebands. We show that the gauge transformations do not alter the location of the poles. The identification of the propagator from the two-point function is maintained but we show that the sideband structures mix under residual gauge transformations. \end{abstract} \pacs{11.15.Bt,12.20.Ds,13.40.Dk} \maketitle \section{Introduction} The recent rapid progress in laser technologies offers a timely testing ground for quantum field theory techniques associated with non-trivial backgrounds~\cite{DiPiazza:2011tq}. In this paper we are going to study charge propagation in such a background. A novel feature of a propagating charge in a laser is that it is indistinguishable from a charge which absorbs a given number of laser photons and also emits the same number of photons degenerate with the laser. Such laser induced degeneracies have a close parallel with the soft and collinear degeneracies associated with the infra-red regime in both QED and QCD~\cite{Bloch:1937pw}\cite{Kinoshita:1962ur}\cite{Lee:1964is}\cite{Lavelle:2005bt} while the induced mass effects in a laser background should help to refine our understanding of the current versus constituent mass distinction in QCD~\cite{Lavelle:1995ty}. Needless to say, the new generation of laser facilities that will soon be on line will help to both inform and refine our theoretical understanding of these vexed issues. 
In QED we usually expand around the free theory but in a laser background we use the interacting Volkov solution~\cite{Volkov:1935zz}\cite{Heinzl:2011ur} as our starting point. This solution is much richer than in the normal perturbative vacuum and, as we will summarise below, it alters the propagator, which becomes a sum of so-called sideband poles~\cite{Reiss:1962}\cite{Reiss:2009ed}. As this is not a free theory, we have to address the effects of local gauge transformations~\cite{Heinzl:2008rh} on this solution and the propagator. We recall~\cite{Volkov:1935zz} that the solutions for a scalar field in a plane wave background are distorted. For a linearly polarised background where the vector potential is given by \begin{equation}\label{pot} A_\mu(x)=a_\mu\cos (k{\cdot}x)\,, \end{equation} where the constant amplitude $a_\mu$ is space-like and the null vector $k^\mu$ is taken to be spatially aligned along the laser direction, the matter field is described by \begin{equation}\label{phiV} \phi_{_{\mathrm{V}}}(x)=\int\!\frac{{\bar{}\kern-0.45em d}^{\,3}{p}}{2E^*_p} \left( D(x,p)a_{_{\mathrm{V}}}\!(p)+D(x,-p)b^\dag_{_{\mathrm{V}}}\!(p)\right)\,, \end{equation} where \begin{equation} D(x,p)=\mathrm{e}^{-ip\cdot x}\mathrm{e}^{i\left(eu\sin(k{\cdot}x)+e^2v\sin(2k{\cdot}x)\right)}\,, \end{equation} and \begin{equation}\label{potbits} u=-\frac{p{\cdot} a}{p{\cdot} k}\qquad \mathrm{and}\qquad v=\dfrac{a^2}{8p\cdot k}\,. \end{equation} In this expression the momentum $p$ is on-shell at $m_{\star}$, where the laser-shifted mass~\cite{Reiss:1962}\cite{Brown:1964zz}\cite{Nikishov:1964zza} \cite{Harvey:2012ie}\cite{Kohlfurst:2013ura} is \begin{equation} p^2={m_{\star}}^2=m^2-\tfrac12 e^2 a^2\,. \end{equation} In this paper we do not explicitly distinguish between on-shell and off-shell momenta as this has no impact on our discussion of gauge dependence. See~\cite{Lavelle:2013wx} for a fuller discussion.
We recall further that (\ref{phiV}) may be written as a sum over modes \begin{equation}\label{summingmodes} \phi_{_{\mathrm{V}}}(x)=\sum_n \phi_n(x)\,, \end{equation} where \begin{equation}\label{174174} \phi_n(x)= \int\!\frac{{\bar{}\kern-0.45em d}^{\,3}{p}}{2E^*_p} \left( \mathrm{e}^{-ip{\cdot} x}\mathrm{e}^{ink{\cdot}x} J_n(eu,e^2v)a_{_{\mathrm{V}}}\!(p)+ \mathrm{e}^{ip{\cdot} x}\mathrm{e}^{ink{\cdot}x} J_n(eu,-e^2v) b^\dag_{_{\mathrm{V}}}\!(p)\right)\,, \end{equation} and the generalised Bessel function, $J_n(eu,e^2v)$, is defined in terms of ordinary Bessel functions via \begin{equation}\label{genBsfndef} J_n(eu,e^2v)=\sum_r J_{n-2r}(eu)J_r(e^2v) \,. \end{equation} The Volkov propagator contains not just the standard pole $i/(p^2-{m_*}^2)$ familiar from perturbation theory but also infinitely many sideband poles of the form $i/((p+nk)^2-{m_*}^2)$ where $n$ is any integer~\cite{Brown:1964zz}\cite{Eberly:1966b}\cite{Reiss:1966A}\cite{Ehlotzky:1967c}\cite{Ilderton:2012qe}. As we have previously seen~\cite{Lavelle:2013wx}, the propagator is not given by the two-point function of the full Volkov field but is identified as the diagonal part of the two-point function in the vacuum $\ketV 0$ picked out by the Volkov annihilation operators: \begin{equation}\label{propdiagdef} i D_V(x-y)=\sum_n\braV 0 T \phi_n(x)\phi^{\dag}_n(y)\ketV 0\,. \end{equation} This is to ensure that the propagator represents processes where there is a fixed momentum flow through the matter field. This can also be understood~\cite{Lavelle:2013wx} in terms of degenerate processes extending the Lee-Nauenberg~\cite{Lee:1964is} characterisation of the infra-red problem~\cite{Lavelle:2005bt}. The form of the vector potential chosen here requires that $k\cdot a=0$, which corresponds from~(\ref{pot}) to a Landau-like gauge as $\partial_\mu A^\mu=0$. In~\cite{Lavelle:2013wx} the propagator was constructed in this gauge.
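The mode decomposition above can be checked numerically. The following Python sketch (with arbitrary test values of the arguments, not physical parameters) verifies that the double-sum definition~(\ref{genBsfndef}) reproduces the Fourier coefficients of the generating function $\mathrm{e}^{i(u\sin\theta+v\sin 2\theta)}=\sum_n J_n(u,v)\,\mathrm{e}^{in\theta}$ that underlies the expansion of $D(x,p)$ into the modes~(\ref{174174}):

```python
# Numerical sanity check of the generalised Bessel functions used above.
# J_n(u, v) is computed both from the double sum and from the Fourier
# integral of the generating function; u, v, theta are arbitrary test values.
import numpy as np

def gen_bessel(n, u, v, N=512):
    # J_n(u, v) = (1/2pi) * integral of exp(i(u sin t + v sin 2t - n t)) over a period.
    # A uniform grid is spectrally accurate here because the integrand is periodic.
    t = 2 * np.pi * np.arange(N) / N
    return np.real(np.mean(np.exp(1j * (u * np.sin(t) + v * np.sin(2 * t) - n * t))))

def bessel_j(m, x):
    # Ordinary Bessel function J_m(x) as the v = 0 special case.
    return gen_bessel(m, x, 0.0)

u, v = 1.3, 0.4
# Double-sum definition agrees with the integral form:
for n in (-2, 0, 3):
    double_sum = sum(bessel_j(n - 2 * r, u) * bessel_j(r, v) for r in range(-20, 21))
    assert abs(double_sum - gen_bessel(n, u, v)) < 1e-10
# Generating function: exp(i(u sin t + v sin 2t)) = sum_n J_n(u, v) exp(i n t):
theta = 0.7
lhs = np.exp(1j * (u * np.sin(theta) + v * np.sin(2 * theta)))
rhs = sum(gen_bessel(n, u, v) * np.exp(1j * n * theta) for n in range(-25, 26))
assert abs(lhs - rhs) < 1e-8
```

The truncation limits are generous: for arguments of order unity the $J_n$ decay superexponentially in $|n|$, so the finite sums are effectively exact.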
The mass shift and wave function renormalisation were calculated to all orders in an operator formalism. This was further verified to the first few orders by explicit diagrammatic calculations. Each term in the sum~(\ref{propdiagdef}) generates a separate, so-called sideband structure: \begin{equation}\label{propdiagdef2} \int d^4x \, \mathrm{e}^{-ip\cdot (x-y)}\braV 0 T (\phi_n(x)\phi^{\dag}_n(y))\ketV 0 = \frac{Z_2^{(n)}(eu,e^2v)}{(p+nk)^2-{m_*}^2+i\epsilon}\,, \end{equation} showing the common mass shift and distinct wave function renormalisations of the sidebands. Here $p$ is off-shell. It has been argued that the central sideband, corresponding to $n=0$ and produced by the $\phi_0(x)$ mode, may dominate in some regimes~\cite{Eberly:1966b}. In this paper we want to address the issue of the residual gauge freedom which is opened up by the boundary conditions imposed on the plane wave laser background. Below we will show that, although the Volkov field transforms with the expected phase shift characteristic of a charged matter field under such a residual gauge transformation, the modes (\ref{summingmodes}) actually mix with each other in a non-trivial manner. This mixing of the modes raises a question about whether the above identification of the propagator~(\ref{propdiagdef}) is consistent with gauge transformations. We will demonstrate below that the construction of the propagator is robust under such transformations and that the overall effect of gauge transformations may be absorbed into shifts of the wave function renormalisation factors. We therefore now turn to the gauge freedom in the Volkov formalism and its effects on the various modes of the Volkov field, and then build up the diagonal sum~(\ref{propdiagdef}). Although our conclusions hold to all orders, we shall, for illustrative purposes, demonstrate them perturbatively.
\section{Residual Gauge Transformations} The gauge fixing discussed above leaves a residual gauge freedom, as we can make the replacement \begin{equation} A_\mu(x)\to A_\mu(x)+\partial_\mu \lambda(x)\,, \end{equation} where $\lambda(x)=\lambda\sin(k{\cdot}x)$ and $\lambda$ is a constant. This corresponds to the amplitude shift \begin{equation} a_\mu\to a_\mu+\lambda k_\mu\,, \end{equation} which still preserves our gauge choice due to the null nature of $k^\mu$. Under this transformation we have, from (\ref{potbits}), \begin{equation}\label{potbitstrans} u\to u-\lambda\qquad \mathrm{and}\qquad v\to v\,. \end{equation} Similarly, the distortion factor transforms as \begin{equation} D(x,p)\to \mathrm{e}^{-ie\lambda(x)}D(x,p)\,. \end{equation} From (\ref{phiV}) we see the phase shift \begin{equation}\label{131474} \phi_{_{\mathrm{V}}}(x)\to \mathrm{e}^{-ie\lambda(x)} \phi_{_{\mathrm{V}}}(x)\,, \end{equation} as would be expected of a charged matter field under gauge transformations. This is a local gauge transformation and, as the field extends to spatial infinity along the laser direction, the transformation does not vanish asymptotically along the laser. This residual gauge transformation is consistent both with our original Landau gauge condition and with the boundary conditions of the Volkov solution. We now want to analyse the impact of the gauge freedom on the various modes of the Volkov field. As the propagator is constructed from the diagonal sum over the modes (\ref{propdiagdef}), it is crucial that we know how they transform. In (\ref{174174}) the generalised Bessel functions, through their dependence on $u$, are responsible for the gauge dependence of the fields: \begin{equation}\label{Jchange} J_n(eu,e^2v)\to J_n(e(u-\lambda),e^2v)= \sum_m J_m(e\lambda) J_{n+m}(eu,e^2v) \,, \end{equation} where we used (\ref{potbitstrans}).
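The mixing identity~(\ref{Jchange}) is a Graf-type addition theorem for the generalised Bessel functions. As a quick numerical sanity check (a sketch with arbitrary test values, not physical parameters):

```python
# Numerical check of the addition theorem driving the sideband mixing:
# J_n(u - lam, v) = sum_m J_m(lam) * J_{n+m}(u, v).
import numpy as np

def gen_bessel(n, u, v, N=512):
    # Fourier-coefficient (integral) form of the generalised Bessel function.
    t = 2 * np.pi * np.arange(N) / N
    return np.real(np.mean(np.exp(1j * (u * np.sin(t) + v * np.sin(2 * t) - n * t))))

u, v, lam = 1.1, 0.3, 0.8
for n in (-1, 0, 2):
    lhs = gen_bessel(n, u - lam, v)
    # Ordinary Bessel J_m(lam) is the v = 0 special case of gen_bessel.
    rhs = sum(gen_bessel(m, lam, 0.0) * gen_bessel(n + m, u, v)
              for m in range(-20, 21))
    assert abs(lhs - rhs) < 1e-10
```

The identity follows from multiplying the generating functions of $\mathrm{e}^{-i\lambda\sin\theta}$ and $\mathrm{e}^{i(u\sin\theta+v\sin 2\theta)}$ and comparing Fourier coefficients, which is exactly the manipulation used in the text.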
This means that the Volkov modes mix under such a local gauge transformation as \begin{equation}\label{phinchange} \phi_n(x)\to \sum_s J_s(e\lambda) \phi_{n+s}(x) \mathrm{e}^{-isk{\cdot}x} \,, \end{equation} with a Bessel function dependent weighting. It is useful here to verify that this is consistent with the overall transformation of the Volkov field. From (\ref{summingmodes}) we have \begin{equation} \phi_{_{\mathrm{V}}}(x)\to \sum_n \sum_s J_s(e\lambda) \phi_{n+s}(x) \mathrm{e}^{-isk{\cdot}x} \,. \end{equation} Shifting the label $n$ and using the standard result \begin{equation} \mathrm{e}^{i\ell\sin(k{\cdot}x)}=\sum_r\mathrm{e}^{irk{\cdot}x}J_r(\ell)\,, \end{equation} we find that the Volkov field transforms as \begin{equation} \phi_{_{\mathrm{V}}}(x)\to \mathrm{e}^{-ie\lambda(x)} \sum_r\phi_{r}(x) \,, \end{equation} which shows that, as expected from (\ref{131474}), the phase may be extracted from the sum over modes. It is clear from (\ref{phinchange}) that there is a mixing of the modes and it is not obvious that the identification of the propagator as a diagonal sum is compatible with this mixing. To study this perturbatively, let us consider the lowest modes in the propagator. It will be apparent how this extends to higher orders. The first few terms in the diagonal sum around the central term ($n=0$) are: \begin{equation} \braV 0 T (\phi_0(x)\phi_0^{\dagger}(y)+\phi_1(x)\phi_1^{\dagger}(y)+\phi_{-1}(x)\phi_{-1}^{\dagger}(y)+\dots)\ketV 0 \,. \end{equation} Now from (\ref{phinchange}) and the standard series representation of the Bessel function we see that at order $e^2$ the only term that is gauge dependent is $\phi_0(x)\phi_0^{\dagger}(y)$. We find that \begin{equation}\label{173974} \phi_0(x)\to \left(1-\tfrac14{e^2\lambda^2} \right)\phi_0(x)+\tfrac{e\lambda}2\phi_1(x)\mathrm{e}^{-ik{\cdot} x}-\tfrac{e\lambda}2\phi_{-1}(x)\mathrm{e}^{ik{\cdot} x} \,, \end{equation} and similarly for $\phi_0^{\dagger}$.
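The weights in~(\ref{173974}) are just the small-argument expansions $J_0(e\lambda)\simeq 1-\tfrac14 e^2\lambda^2$ and $J_{\pm1}(e\lambda)\simeq\pm\tfrac12 e\lambda$, which can be confirmed numerically (a minimal sketch with an arbitrary small value standing in for $e\lambda$):

```python
# Small-coupling check of the mode-mixing weights: for small arguments,
# J_0(x) ~ 1 - x^2/4 and J_{+-1}(x) ~ +-x/2, the coefficients of
# phi_0 and phi_{+-1} in the transformation above.
import numpy as np

def bessel_j(m, x, N=256):
    # Integral representation of the ordinary Bessel function J_m(x).
    t = 2 * np.pi * np.arange(N) / N
    return np.real(np.mean(np.exp(1j * (x * np.sin(t) - m * t))))

el = 1e-3  # stands in for the small parameter e*lambda
assert abs(bessel_j(0, el) - (1 - el ** 2 / 4)) < 1e-12  # weight of phi_0
assert abs(bessel_j(1, el) - el / 2) < 1e-9              # weight of phi_{+1}
assert abs(bessel_j(-1, el) + el / 2) < 1e-9             # weight of phi_{-1}
```

The residuals are of order $(e\lambda)^3$ and $(e\lambda)^4$ respectively, which is why only $\phi_{\pm1}$ enter at order $e^2$ in the diagonal sum.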
We see that the modes mix under a gauge transformation and that, more generally, expanding up to order $e^{2n}$ mixes modes whose labels are separated by $n$. Furthermore, the factors of $\mathrm{e}^{\pm ik{\cdot} x}$ here might initially appear concerning, as such a $k$-dependence in the diagonal two-point function would be incompatible with its interpretation as a propagator. However, we will see below that although the mixing is real this $k$-dependence cancels in our result and the propagator interpretation of the diagonal sum~(\ref{propdiagdef}) holds. We now want to express the modes in terms of the Volkov creation and annihilation operators. Rewriting the generalised Bessel functions in (\ref{174174}) in terms of Bessel functions via (\ref{genBsfndef}) yields \begin{equation} J_0(eu,e^2v)= J_0(eu)J_0(e^2v)+J_2(eu)J_{-1}(e^2v)+J_{-2}(eu)J_{1}(e^2v)+\dots \,. \end{equation} The second and third terms on the right hand side here are of order $e^4$, so to order $e^2$ \begin{equation} J_0(eu,e^2v)= 1-\frac{e^2}4u^2 \,. \end{equation} We thus obtain to leading order in the coupling \begin{equation}\label{formphi0} \phi_0(x)= \int\!\frac{{\bar{}\kern-0.45em d}^{\,3}{p}}{2E^*_p}\left( \mathrm{e}^{-ip{\cdot} x}\left(1-\frac{e^2}4u^2\right)a_V(p) + \mathrm{e}^{ip{\cdot} x}\left(1-\frac{e^2}4 u^2\right)b^{{\dagger}}_V(p) \right) \,. \end{equation} A similar calculation leads, at leading order, to \begin{equation}\label{formofphipm1} \phi_{\pm1}(x)= \pm\frac e2\int\!\frac{{\bar{}\kern-0.45em d}^{\,3}{p}}{2E^*_p}\, u\left( \mathrm{e}^{-ip{\cdot} x}\mathrm{e}^{\pm ik{\cdot} x}a_V(p) + \mathrm{e}^{ip{\cdot} x}\mathrm{e}^{\pm ik{\cdot} x}b^{{\dagger}}_V(p) \right) \,, \end{equation} since, from (\ref{174174}), both terms in $\phi_{\pm1}$ carry the coefficient $J_{\pm1}\simeq\pm\tfrac12 eu$ irrespective of the sign of the second argument. It is important to notice how the factors of $\mathrm{e}^{\pm ik{\cdot} x}$ enter here. This means that such factors cancel on substitution into~(\ref{173974}).
We find from (\ref{173974}), (\ref{formphi0}) and (\ref{formofphipm1}) that the mode $\phi_0(x)$, expressed in terms of creation and annihilation operators, up to order $e^2$ transforms under the residual gauge transformation as \begin{equation}\label{1739742} \phi_0(x)\to \int\!\frac{{\bar{}\kern-0.45em d}^{\,3}{p}}{2E^*_p} \left( 1-\frac{e^2}4(u-\lambda)^2 \right)\left(\mathrm{e}^{-i p{\cdot} x} a_V(p)+\mathrm{e}^{ip{\cdot} x}b^{\dag}_V(p)\right) \,. \end{equation} This last equation shows the leading order, local gauge transformation of the mode $\phi_0$. However, it should be emphasised that this result includes terms from mixing with the modes $\phi_{\pm1}$ and will receive further contributions from $\phi_{\pm2}$ at order $e^4$, etc. It is clear that at all orders, all of the modes will mix under a gauge transformation. This demonstrates that restricting to specific modes is a gauge dependent, and hence unphysical, approximation. The $\mathrm{e}^{ik{\cdot}x}$ factors have cancelled in (\ref{1739742}), and so, writing the gauge transformed modes as $\phi_n\to\tilde{\phi}_n$, the diagonal term $\braV 0 T (\tilde{\phi}_n(x)\tilde{\phi}^{\dag}_n(y))\ketV 0$ continues to generate solely the sideband pole in $(p+nk)^2-{m_*}^2$, although $\tilde{\phi}_n$ itself contains contributions from all the other modes in the original gauge. We have thus seen that under gauge transformations, propagators defined through the diagonal sum over modes remain as propagators. The changes affect the wave function renormalisation, which acquires a $\lambda$-dependence. For the central sideband \begin{equation}\label{132484} Z_2^{(0)}(eu,e^2v)\to 1-\frac{e^2}2 (u-\lambda)^2+{\cal{O}}(e^4)\,.
\end{equation} If we recall~\cite{Lavelle:2013wx} that the all-orders wave function renormalisation constant for the $n$-th sideband in a linear background has the form \begin{equation}\label{130084} Z_2^{(n)}(eu,e^2v)=J^2_{n}(eu,e^2v) \,, \end{equation} then~(\ref{132484}) corresponds to the shift \begin{equation}\label{132684} Z_2^{(0)}(eu,e^2v)\to Z_2^{(0)}(e(u-\lambda),e^2v) \,. \end{equation} This exemplifies the gauge dependence of the wave function renormalisation and we reiterate that it is in part caused by a mixing of modes which previously generated other sideband states in the initial gauge. Note that although $Z_2^{(\pm1)}$ is gauge invariant at this order, it becomes gauge dependent at the next order, and so on. The mass shift, as a potentially measurable quantity~\cite{Harvey:2012ie}, is gauge invariant as under our residual gauge transformation \[ a^2\to (a_\mu+\lambda k_\mu)(a^\mu+\lambda k^\mu)=a^2 \,, \] where we have used that $k^2=0$ and the Landau gauge condition $k\cdot a=0$. This is reflected in the expression for the mass shift which is only generated by diagrams with the four-point vertex and not by the gauge dependent three-point vertex in scalar QED. This is also the case for circularly polarised backgrounds~\cite{Lavelle:2013wx}. It should be contrasted with fermionic QED where the gauge dependent vertex is proportional to $e\slash \!\!\!a$ and it is possible to generate the gauge invariant structure $a^2$ via $\slash \!\!\!a \slash \!\!\!a$ from the fermionic vertex. We have seen in this paper that the mass shift is gauge invariant and that the wave function renormalisation factors are gauge dependent. Furthermore, the different sidebands mix with each other under gauge transformations although, in the resulting gauge, each mode in the diagonal sum generates its corresponding sideband pole. This shows that in this theory we have a consistent identification of the propagator with its sideband structure.
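Although each $Z_2^{(n)}=J_n^2$ is gauge dependent, the total sideband weight is not: the generating function of the $J_n$ has unit modulus for real arguments, so Parseval's theorem gives $\sum_n J_n^2(u,v)=1$ for any $u$, and the sum $\sum_n Z_2^{(n)}$ is therefore unchanged by the shift $u\to u-\lambda$. A numerical sketch with arbitrary test values:

```python
# The individual Z_2^{(n)} = J_n^2 shift under the residual gauge
# transformation, but their sum is gauge invariant: sum_n J_n(u, v)^2 = 1.
import numpy as np

def gen_bessel(n, u, v, N=512):
    # Fourier-coefficient (integral) form of the generalised Bessel function.
    t = 2 * np.pi * np.arange(N) / N
    return np.real(np.mean(np.exp(1j * (u * np.sin(t) + v * np.sin(2 * t) - n * t))))

def total_weight(u, v, nmax=40):
    # Sum over sidebands of Z_2^{(n)} = J_n^2(u, v), truncated at |n| = nmax.
    return sum(gen_bessel(n, u, v) ** 2 for n in range(-nmax, nmax + 1))

u, v, lam = 1.4, 0.5, 0.9
assert abs(total_weight(u, v) - 1.0) < 1e-10          # original gauge
assert abs(total_weight(u - lam, v) - 1.0) < 1e-10    # shifted gauge, u -> u - lam
# ... while an individual sideband weight does shift:
assert abs(gen_bessel(0, u, v) ** 2 - gen_bessel(0, u - lam, v) ** 2) > 1e-3
```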
The next step in understanding the quantum field theory of the Volkov solution is to identify the vertex structures in scalar QED in laser backgrounds. We expect similar structures in fermionic QED; however, this introduces an additional complexity to the theory~\cite{Reiss:1966A} which needs to be revisited. Progress in these areas is essential for the development of the theory of scattering in laser backgrounds. \bigskip \noindent \textbf{Acknowledgements:} We thank Tom Heinzl and Ben King for helpful discussions. \newpage
\section{Introduction} \label{sec:1} As discussed in previous chapters (Vink, Owocki), two critical aspects in the evolution of very massive stars (VMSs) are that their high luminosities cause strong mass loss in radiation-driven winds, and that high luminosities can also cause severe instabilities in the stellar envelope and interior as the star approaches the Eddington limit. These features become increasingly important as the initial stellar mass increases, but especially so as the star evolves off the main sequence and approaches its death. Moreover, the two are interconnected, since mass loss will increase the star's luminosity/mass ratio, possibly leading to more intense instabilities over time. It should not be surprising, then, that VMSs show clear empirical evidence of this instability, and this chapter discusses various observational clues that we have. This is a particularly relevant topic, as time-domain astronomy is becoming an increasingly active field of observational research. Throughout, the reader should remember that we are focussed on observed phenomena, and that working backward to diagnose possible underlying physical causes is not always straightforward. Hence, this interpretation is where most of the current speculation and debate rests among researchers working in the field. Stellar evolution models make predictions for the appearance of single massive stars late in their lives, but the influence of binary interaction may be extremely important or even dominant (Langer 2012), and the assumptions about mass-loss that go into the single-star models are not very reliable (Smith 2014). In particular, the eruptive instabilities discussed in this chapter are not included in single-star evolution models, and as such, these models provide us with little perspective for understanding the very latest unstable phases of VMSs or their final fates. 
The loosely bound envelopes that result from a star being close to the Eddington limit may be an important factor in directly causing outbursts, but having a barely bound envelope may also make it easier for other mechanisms to be influential, such as energy injection from non-steady nuclear burning, precursor core explosions, or binary interactions (see e.g., Smith \& Arnett 2014 for a broader discussion of this point). In the sections to follow, we discuss the observed class of eruptive luminous blue variables (LBVs) that have been linked to the late evolutionary phases of VMSs, various types of very luminous supernovae (SNe) or other explosions that may come from VMSs, and direct detections of luminous progenitors of SNe (including a few actual detections of pre-SN eruptions) that provide a link between VMSs and their SNe. \section{LBVs and their Giant Eruptions} \label{sec:2} Perhaps the most recognizable manifestation of the instability that arises in the post-main-sequence evolution of VMSs is the class of objects known as luminous blue variables (LBVs). These were recognized early-on as the brightest blue irregular variables in nearby galaxies like M31, M33, and NGC~2403 (Hubble \& Sandage 1953; Tammann \& Sandage 1968), and these classic examples were referred to as the ``Hubble-Sandage variables''. Later, Conti (1984) recognized that many different classes of hot, irregular variable stars in the Milky Way and Magellanic clouds were probably related to these Hubble-Sandage variables, and probably occupy similar evolutionary stages in the lives of massive stars, so he suggested that they be grouped together and coined the term ``LBV'' to describe them collectively. The LBVs actually form a rather diverse class, consisting of a wide range of irregular variable phenomena associated with evolved massive stars (see reviews by Humphreys \& Davidson 1994; van Genderen 2001; Smith et al.\ 2004, 2011a; Van Dyk \& Matheson 2012; Clark et al.\ 2005). 
\subsection{Basic observed properties of LBVs} In addition to their high luminosities, some of the key observed characteristics of LBVs are as follows (although beware that not all LBVs exhibit all these properties): \begin{itemize} \item {\bf S Doradus eruptions.} Named after the prototype in the LMC, S Dor eruptions are seen as a brightening that occurs at visual wavelengths resulting from a change in apparent temperature of the star's photosphere; this causes the peak of the energy distribution to shift from the UV to visual wavelengths at approximately constant bolometric luminosity. The increase in visual brightness (typically 1--2 mag for the more luminous stars) corresponds roughly to the bolometric correction for the star, so that hotter stars exhibit larger amplitudes in their S Dor events. LBVs have different temperatures in their quiescent state, and this quiescent temperature increases with increasing luminosity. The visual maximum of S Dor eruptions, on the other hand, usually occurs at a temperature around 7500 K regardless of luminosity, causing the star to resemble a late F-type supergiant with zero bolometric correction (see Figure~\ref{fig:hrd}). While these events are defined to occur at constant bolometric luminosity (Humphreys \& Davidson 1994), in fact quantitative studies of classic examples like AG Car do reveal some small variation in $L_{Bol}$ through the S Dor cycle (Groh et al.\ 2009). Similarly, the traditional explanation for the origin of the temperature change was that the star increases its mass-loss rate, driving the wind to very high optical depth and the creation of a pseudo photosphere (Humphreys \& Davidson 1994; Davidson 1987).
Quantitative spectroscopy reveals, however, that the measured mass-loss rates do not increase enough to cause a pseudo photosphere in classic S Dor variables like AG Car (de Koter et al.\ 1996), and that the increasing photospheric radius is therefore more akin to a true expansion of the star's photosphere (i.e., a pulsation). Possible causes of this inflation of the star's outer layers are discussed elsewhere in this book (see Owocki's chapter). LBVs that experience these excursions are generally thought to be very massive stars, but their mass range is known to extend down to around 25 $M_{\odot}$ (Smith et al.\ 2004). \begin{figure}[b] \includegraphics[scale=0.62]{fig1.eps} \caption{The upper HR Diagram of LBVs and some LBV candidates (from Smith, Vink, \& de Koter 2004). The most massive LBVs and LBV candidates like $\eta$ Car and the Pistol star are off the top of this diagram. The diagonal strip where LBVs reside at quiescence is the S Dor instability strip discussed in the text. Note that LBVs are recognized by their characteristic photometric variability down to luminosities where the S Dor instability strip meets the eruptive temperature.} \label{fig:hrd} \end{figure} \item {\bf Quiescent LBVs reside on the S Dor instability strip.} As noted in the previous point, LBVs all show roughly the same apparent temperature in their cool/bright state during an outburst, but they have different apparent temperatures in their hot/quiescent states. These hot temperatures are not random. In quiescence, most LBVs reside on the so-called ``S Dor instability strip'' in the HR Diagram (Wolf 1989). This is a diagonal strip, with increasing temperature at higher luminosity (see Figure~\ref{fig:hrd}).
Notable examples that do not reside on this strip are the most luminous LBVs, like $\eta$ Car and the Pistol star, so the S Dor instability strip may not continue to the most massive and most luminous stars, for reasons that may be related to the strong winds in these VMSs (see Vink chapter). Many of the stars at the more luminous end of the S Dor instability strip are categorized as Ofpe/WN9 or WNH stars in their hot/quiescent phases, with AG Car and R127 being the classic examples where these stars are then observed to change their spectral type and suffer bona-fide LBV outbursts. There are also many Ofpe/WN9 stars in the same part of the HR Diagram that have not exhibited the characteristic photometric variability of LBVs in their recent history, but which have circumstellar shells that may point to previous episodes of eruptive mass loss (see below). Such objects with spectroscopic similarity to quiescent LBVs, but without detection of their photometric variability, are sometimes called ``LBV candidates''. \item {\bf Giant eruptions.} The most dramatic variability attributed to LBVs is the so-called ``giant eruptions'', in which stars are observed to increase their radiative luminosity for months to years, accompanied by severe mass loss (e.g., Humphreys et al.\ 1999; Smith et al.\ 2011a). The star survives the disruptive event. The best studied example is the Galactic object $\eta$ Carinae, providing us with its historically observed light curve (Smith \& Frew 2011), as well as its complex ejecta that contain 10-20 $M_{\odot}$ and $\sim$10$^{50}$ ergs of kinetic energy (Smith et al. 2003; Smith 2006). Besides the less well-documented case of P Cygni's 1600 AD eruption, our only other examples of LBV-like giant eruptions are in other nearby galaxies. A number of these have been identified, with peak luminosities similar to $\eta$ Car or less (Van Dyk \& Matheson 2012; Smith et al. 2011a). 
Typical expansion speeds in the ejecta are 100 - 1000 km s$^{-1}$ (Smith et al. 2011a). These events are discussed more below. \item {\bf Strong emission-line spectra.} Most, but not all, LBVs exhibit strong emission lines (especially Balmer lines) in their visual-wavelength spectra. This is a consequence of their very strong and dense stellar winds (see Vink chapter), combined with their high UV luminosity and moderately high temperature. The wind mass-loss rates implied by quantitative models of the spectra range from 10$^{-5}$ to 10$^{-3}$ $M_{\odot}$ yr$^{-1}$; this is enough to play an important role in the evolution of the star (see Smith 2014), and eruptions enhance the mass loss even more. The emission lines in LBVs are, typically, much stronger than the emission lines seen in main-sequence O-type stars of comparable luminosity, and all of the more luminous LBVs have strong emission lines. Other stars that exhibit similar spectra but are not necessarily LBVs include WNH stars, Ofpe/WN9 stars, and B[e] supergiants, some of which occupy similar parts of the HR Diagram. \item {\bf Circumstellar shells.} Many LBVs are surrounded by spatially resolved circumstellar shells. These fossil shells provide evidence of a previous eruption. Consequently, some stars that resemble LBVs spectroscopically and have massive circumstellar shells, but have not (yet) been observed to exhibit photometric variability characteristic of LBVs, are often called LBV candidates. Many authors prefer to group LBVs and LBV candidates together (the logic being that a volcano is still a volcano even when it is dormant). LBV circumstellar shells are extremely important, as they provide the only reliable way to estimate the amount of mass ejected in an LBV giant eruption. 
The most common technique for measuring the mass is by calculating a dust mass from thermal-IR radiation, and then converting this to a total gas mass with an assumed gas:dust mass ratio (usually taken as 100:1, although this value is uncertain\footnote{If this value is wrong, it is probably a conservative underestimate. This is because a gas:dust mass ratio of 100:1 assumes that all refractory elements at $Z_{\odot}$ are in grains, whereas in reality, the dust formation may be less efficient or UV and shocks may destroy some dust, leaving some of these elements in the gas phase (and thus raising the total mass). In general, nebular gas masses inferred from thermal-IR dust emission should be considered lower limits, especially at $Z < Z_{\odot}$.}). To calculate a dust mass from the IR luminosity, one must estimate the dust temperature from the spectral energy distribution (SED), and then adopt some wavelength-dependent grain opacities in order to calculate the emitting mass. The technique can be quite sensitive to multiple temperature components, and far-IR data have been shown to be very important because most of the mass can be hidden in the coolest dust, which is often not detectable at wavelengths shorter than 20 $\mu$m. One can also measure the gas mass directly by various methods, usually adopting a density diagnostic like line ratios of [Fe~{\sc ii}] or [S~{\sc ii}] and multiplying by the volume and filling factor, or calculating a model for the density needed to produce the observed ionization structure using codes such as CLOUDY (Ferland et al.\ 1998). The major source of uncertainty here is the assumed ionization fraction. Masses of LBV nebulae occupy a very large range from $\sim$20 $M_{\odot}$ at the upper end down to 0.1 $M_{\odot}$ (Smith \& Owocki 2006), although even smaller masses become difficult to detect around bright central stars. 
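As an illustration of the dust-based mass estimate described above, the following Python sketch evaluates $M_{\rm dust}=F_\nu D^2/(\kappa_\nu B_\nu(T_d))$ for a single optically thin temperature component and then applies the assumed 100:1 gas:dust ratio. All numerical inputs (flux, wavelength, distance, temperature, opacity) are illustrative placeholders rather than measurements of any particular LBV shell.

```python
# Sketch of the standard optically thin dust-mass estimate:
# M_dust = F_nu * D^2 / (kappa_nu * B_nu(T_dust)),
# followed by an assumed gas:dust mass ratio of 100:1.
import math

H = 6.626e-27     # Planck constant, erg s
C = 2.998e10      # speed of light, cm s^-1
KB = 1.381e-16    # Boltzmann constant, erg K^-1
MSUN = 1.989e33   # solar mass, g
KPC = 3.086e21    # kiloparsec, cm

def planck_nu(nu, T):
    """Planck function B_nu(T) in erg s^-1 cm^-2 Hz^-1 sr^-1."""
    return (2.0 * H * nu ** 3 / C ** 2) / math.expm1(H * nu / (KB * T))

def dust_mass(flux_jy, wav_um, dist_kpc, T_dust, kappa):
    """Optically thin dust mass in M_sun for one temperature component."""
    nu = C / (wav_um * 1e-4)       # observing frequency, Hz
    f_nu = flux_jy * 1e-23         # Jy -> erg s^-1 cm^-2 Hz^-1
    d = dist_kpc * KPC
    return f_nu * d ** 2 / (kappa * planck_nu(nu, T_dust)) / MSUN

# Illustrative inputs only: a bright shell at 25 microns with an assumed opacity.
m_d = dust_mass(flux_jy=1.0e4, wav_um=25.0, dist_kpc=2.3, T_dust=140.0, kappa=100.0)
m_gas = 100.0 * m_d                # assumed gas:dust mass ratio of 100:1
```

A real application would fit and sum several temperature components, since, as noted above, the coolest dust can dominate the mass; in that sense the single-temperature estimate is a lower limit.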
\item{\bf Wind speeds and nebular expansion speeds.} LBV winds and nebulae typically have expansion speeds of 50--500 km s$^{-1}$, because the escape speed of an evolved blue supergiant is lower than that of the more compact O-type and WR stars, whose winds reach faster speeds of order 1000 km s$^{-1}$. In many cases, the shell nebulae are expanding with an even slower speed than the underlying wind, but this is not always the case. The slower nebular speeds may suggest that the nebulae were ejected in a state when the star was close to the Eddington limit (lower effective gravity) or that the LBV eruption ejecta have decelerated after colliding with slow CSM or high-pressure ISM. \item {\bf N-rich ejecta.} Lastly, LBVs typically exhibit strong enhancements in their N abundance, measured in the circumstellar nebulae or in the wind spectrum. The most common measurement involves the analysis of visual-wavelength spectra: nebular [S~{\sc ii}] lines yield an electron density, the [N~{\sc ii}] ($\lambda$6583+$\lambda$6548)/$\lambda$5755 ratio yields an electron temperature, and the observed intensity of the [N~{\sc ii}] lines compared to H lines then gives a relative N$^+$/H ratio; a similar analysis of O and C lines provides estimates of the N/O and N/C ratios. One must make assumptions about the ionization levels of N and other elements, but if UV spectra are available, one can constrain the strength of a wide range of ionization levels of each atom. In the case of $\eta$ Carinae, for example, strong lines of N~{\sc i}, {\sc ii}, {\sc iii}, {\sc iv}, and {\sc v} are detected, but O lines of all ionization levels are extremely faint (Davidson et al.\ 1986). The observed levels of N enrichment in LBVs suggest that the outer layers of the stars include large quantities of material processed through the CNO cycle and mixed to the surface, requiring that LBVs are post-main-sequence stars.
\end{itemize} \subsection{The Evolutionary State of LBVs} While evidence for N enrichment and C+O depletion suggests that LBVs are massive post-main-sequence stars, their exact evolutionary status within that complex and possibly non-monotonic evolution has been controversial, more so in recent years. The traditional view of LBVs, which emerged in the 1980s and 1990s, is that they correspond to a very brief transitional phase of massive star evolution, as the star moves from core H burning when it is seen as a main sequence O-type star, to core He burning when it is seen as a Wolf-Rayet (WR) star. A typical monotonic evolutionary scheme for a VMS is as follows: \smallskip \noindent 100 $M_{\odot}$: O star $\rightarrow$ Of/WNH $\rightarrow$ LBV $\rightarrow$ WN $\rightarrow$ WC $\rightarrow$ SN Ibc \smallskip \noindent In this scenario, the strong mass-loss experienced by LBVs is important for removing what is left of the star's H envelope after the main sequence, leaving a hydrogen-poor WR star following the end of the LBV phase. The motivation for thinking that this is a very brief phase comes from the fact that LBVs are extremely rare, even for very massive stars: taking the relative numbers of LBVs and O-type stars at high luminosity, combined with the expected H-burning lifetime of massive O-type stars, would imply a duration for the LBV phase of only a few 10$^4$ yr or less. This view fits in nicely with a scenario where the observed population of massive stars is dominated by single-star evolution. However, a number of problems and inconsistencies have arisen with this standard view of LBVs. For one thing, the very short transitional lifetime depends on the assumption that the observed LBVs are representative of the whole transitional phase. In fact, there is a much larger number of blue supergiant stars that are not {\it bona fide} LBVs seen in eruption, but which are probably related --- these are the LBV candidates discussed earlier.
Examining populations in nearby galaxies, for example, Massey et al. (2007) find that there are more than {\it an order of magnitude} more LBV candidates than there are LBVs confirmed by their photometric variability. (For example, there are several hundred LBV candidates in M31 and M33, compared to the 8 LBVs known by their photometric variability.) If the LBV candidates are included with LBVs, then the average lifetime of the LBV phase must rise from a few 10$^4$ yr to several 10$^5$ yr. Now we have a problem, because this is a significant fraction of the whole He burning lifetime, making it impossible for LBVs to be fleeting {\it transitional} objects. There is not enough time in core-He burning to accommodate both such a long LBV phase and the subsequent WR phase. Should we include the LBV candidates and related stars? Are they dormant LBVs? If indeed LBVs go through dormant phases when they are not showing their instability (or when they have temporarily recovered from the instability after strong mass loss), then it would be a mistake not to include the duty cycle of instability in the statistics of LBVs. Massey (2006) has pointed to the case of P Cygni as a salient example: its 1600 A.D.\ giant LBV eruption was observed and so we consider it an archetypal LBV, but it has shown no eruptive LBV-like behavior since then. If the observational record had started in 1700, then we would have no idea that P Cygni was an LBV and we would be wrong. So how many of the other LBV candidates are dormant LBVs? The massive circumstellar shells seen around many LBV candidates imply that they have suffered LBV giant eruptions in the previous 10$^3$ yr or so. Another major issue is that we have growing evidence that LBVs or something like them (massive stars with high mass loss, N enrichment, H rich, slow $\sim$100 km s$^{-1}$ winds, massive shells) are exploding as core-collapse SNe while still in an LBV-like phase (see below).
This could not be true if LBVs are only in a brief transition to the WR phase, which should last another 0.5-1 Myr before core collapse to yield a SN Ibc. Pre-supernova eruptive stars that resemble LBVs are discussed in more detail in following sections. Last, the estimates for lifetimes in various evolutionary phases in the typical monotonic single-star scenario (see above) ignore empirical evidence that binary evolution dominates the evolution of a large fraction of massive stars. Many massive O-type stars (roughly 1/2 to 2/3) are in binary systems whose orbital separation is small enough that they should interact and exchange mass during their lifetime (Kobulnicky \& Fryer 2007; Kiminki \& Kobulnicky 2012; Kiminki et al.\ 2012; Chini et al.\ 2012; Sana et al.\ 2012). These binary systems {\it must} make a substantial contribution to the observed populations of evolved massive stars and SNe, so finding agreement between the predictions of single-star evolutionary models and observed populations would indicate that something is wrong with the models. Unfortunately, solutions to these problems are not yet readily apparent; some current effort is focused here, and these topics are still a matter of debate among massive star researchers. \subsection{A special case: Eta Carinae} The enigmatic massive star $\eta$ Carinae is perhaps the most famous and recognizable example of an evolved and unstable VMS. It is sometimes regarded as the prototype of eruptive LBVs, but at the same time it has a long list of peculiarities that make it seem unique and very atypical of LBVs. In any case, it is by far the {\it best studied} LBV, and (for better or worse) it has served as a benchmark for understanding LBVs and the physics of their eruptions. Several circumstances conspire to make $\eta$ Car such a fountain of information. It is nearby (about 2.3 kpc; Smith 2006) and bright with low interstellar extinction, so one is rarely photon-starved when observing this object at any wavelength.
It is one of the most luminous and massive stars known, with rough values of $L \simeq 5 \times 10^6$ $L_{\odot}$ and a present-day mass for the primary around 100 $M_{\odot}$ (its ZAMS mass is uncertain, but was probably a lot more than this). Its giant eruption in the 19th century was observed at visual wavelengths so that we have a detailed light curve of the event (Smith \& Frew 2011), and $\eta$ Car is now surrounded by the spectacular expanding Homunculus nebula that provides us with a fossil record of that mass loss. This nebula allows us to estimate the mass and kinetic energy of the event, which are $\sim$15 $M_{\odot}$ and $\sim$10$^{50}$ erg, and we can measure the geometry of the mass ejection because the Homunculus is still young and in free expansion (Smith et al.\ 2003; Smith 2006). \begin{figure}[b] \includegraphics[scale=0.59]{fig2.eps} \caption{The historical light curve of the 19th century Great Eruption of $\eta$ Carinae, from Smith \& Frew (2011).} \label{fig:etaLC} \end{figure} Davidson \& Humphreys (1997) provided a comprehensive review of the star and its nearby ejecta in the mid-1990s, but there have been many important advances in the subsequent 16 years. It has since been well established that $\eta$~Car is actually in a binary system with a period of 5.5 yr and $e \simeq 0.9$ (Damineli et al.\ 1997), which drastically alters most of our ideas about this object. Accordingly, much of the research in the past decade has been devoted to understanding the temporal variability in this colliding-wind binary system (see Madura et al.\ 2012, and references therein). Detailed studies of the Homunculus have constrained its 3D geometry and expansion speed to high precision (Smith 2006), and IR wavelengths established that the nebula contains almost an order of magnitude more mass than was previously thought (Smith et al.\ 2003; Morris et al.\ 1999; Gomez et al.\ 2010). 
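The quoted mass and kinetic energy of the Homunculus are mutually consistent, as a quick order-of-magnitude check shows (the $\sim$600 km s$^{-1}$ expansion speed used here is an assumed round value for illustration):

```python
# KE = (1/2) M v^2 for the Homunculus, in CGS units
MSUN = 1.989e33   # g
M = 15 * MSUN     # ejecta mass quoted in the text
v = 6.0e7         # assumed representative expansion speed (~600 km/s), in cm/s

ke_erg = 0.5 * M * v**2
print(f"KE ~ {ke_erg:.1e} erg")  # ~5e49 erg, i.e. of order 10^50 erg
```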
The larger mass and kinetic energy force a fundamental shift in our understanding of the physics of the Great Eruption (see below). Observations with {\it HST} have dissected the detailed ionization structure of the nebula and measured its expansion proper motion (e.g., Gull et al.\ 2005, Morse et al.\ 2001). Spectra have revealed that the Great Eruption also propelled extremely fast ejecta and a blast wave outside the Homunculus, moving at speeds of 5000 km s$^{-1}$ or more (Smith 2008). We have an improved record of the 19th century light curve from additional released historical documents (Smith \& Frew 2011; Figure~\ref{fig:etaLC}), and perhaps most exciting, we have now detected light echoes from the 19th century eruption, allowing us to obtain spectra of the outburst itself after a delay of 160 years (Rest et al.\ 2012). Altogether, the outstanding observational record of $\eta$ Car suggests a picture wherein a VMS suffered an extremely violent, $\sim$10$^{50}$ erg explosive event comparable to a weak supernova, which ejected much of the star's envelope --- but the star apparently survived this event. This gives us a solid example of the extreme events that can result from the instability in a VMS, but the underlying physics is still not certain. Interactions with a close companion star are critical for understanding its present-day variability; the binary probably played a critical role in the behavior of the 19th century Great Eruption as well, although the details are still unclear. While $\eta$ Car is the best observed LBV, it may not be very representative of the LBV phenomenon in general. In what ways is $\eta$ Car so unusual among LBVs?
Its 19th century Great Eruption reached a similar peak absolute magnitude ($-$14 mag) to those of other so-called ``SN impostor'' events in nearby galaxies (see Smith et al.\ 2011a), but unlike most extragalactic examples, its eruption persisted for a decade or more, whereas most extragalactic events of similar luminosity last only 100 days or less. Among well-studied LBVs in the Galaxy and Magellanic Clouds, only $\eta$ Car is known to be in a wide colliding-wind binary system that shows very pronounced, slow periodic modulation across many wavelengths (HD~5980 in the SMC is in a binary, but with a much shorter period). Its 500 km s$^{-1}$ and 10$^{-3}$ $M_{\odot}$ yr$^{-1}$ wind is unusually fast and dense compared to most LBVs, which are generally an order of magnitude less dense. Its Homunculus nebula is the youngest LBV nebula, and together with P Cygni these are the only sources for which we have both an observed eruption event and the nebula it created. Thus, it remains unclear if $\eta$ Car represents a very brief (and therefore rarely observed) violent eruption phase that most VMSs pass through at some time in their evolution, or if it really is so unusual because of its very high mass and binary system parameters. In any case, the physical parameters of $\eta$ Car's eruption are truly extreme, and they push physical models to limits that are sometimes hard to meet. The 19th century event has long been the prototype for a super-Eddington wind event, but detailed investigation of the physics involved shows that this is quite difficult to achieve (see Owocki's chapter in this volume).
At the same time, we now have mounting observational evidence of an explosive nature to the Great Eruption: (1) A very high ratio of kinetic energy to integrated radiated energy, exceeding unity; (2) Brief spikes in the light curve that occur at times of periastron; (3) Evidence for a small mass of very fast moving ($\sim$5000 km s$^{-1}$) ejecta and a blast wave outside the Homunculus, which requires a shock-powered component to the eruption; and (4) Behavior of the spectra seen in light echoes, which do not evolve as expected from an opaque wind. These hints suggest that some of the phenomena we associate with LBVs (and their extragalactic analogs) are driven by explosive physics (i.e. hydrodynamic events in the envelope) rather than (or in addition to) winds driven from the surface by high luminosity. This is discussed in more detail in the following subsection. \subsection{Giant Eruptions: Diversity, Explosions, and Winds} Giant eruptions are simultaneously the most poorly understood, most puzzling, and probably the physically most important of the observed phenomena associated with LBVs. They are potentially the most important aspect for massive stars because of the very large amounts of mass (as much as 10-20 $M_{\odot}$) that are ejected in a short amount of time, and consequently, because of their dramatic influence on immediate pre-SN evolution (next section). Although the giant eruptions themselves are rarely observed because they are infrequent and considerably fainter than SNe, a large number of LBVs and spectroscopically similar stars in the Milky Way and Magellanic Clouds are surrounded by massive shell nebulae, indicating previous eruptions with a range of ejecta masses from 1-20 $M_{\odot}$ (Clark et al.\ 2005; Smith \& Owocki 2006; Wachter et al.\ 2010; Gvaramadze et al.\ 2010). Thus, eruptive LBV mass loss is inferred to be an important effect in late evolution of massive stars, and perhaps especially so in VMSs.
Originally the class of LBV giant eruptions was quite exclusive, with only four approved members: $\eta$ Car's 1840s eruption, P Cygni's 1600 AD eruption, SN~1954J (V12 in NGC~2403), and SN~1961V (see Humphreys, Davidson, \& Smith 1999). Due to the advent of dedicated searches for extragalactic SNe from the late 1990s onward, the class of giant eruptions has grown to include a few dozen members (see recent summaries by Smith et al.\ 2011a; Van Dyk \& Matheson 2012). Because of their serendipitous discovery in SN searches, they are also referred to as ``SN impostors''. Other names include ``Type V'' supernovae (from F.\ Zwicky), ``$\eta$ Car analogs'', and various permutations of ``intermediate luminosity transients''. Although the total number of SN impostors is still quite small (dozens) compared to SNe (thousands), the actual rates of these events could potentially be comparable to or even exceed those of core-collapse SNe. The difference is due to the fact that by definition, SN impostors are considerably fainter than true SNe, and are therefore much harder to detect. Since they are $\sim$100 times less luminous than a typical Type Ia SN, their potential discovery space is limited to only 1/1000 of the volume in which SNe can be discovered with the same telescope. Their discovery is made even more difficult because of the fact that their contrast compared to the underlying host galaxy light is lower, and because in some cases they have considerably longer timescales and much smaller amplitudes of variability than SNe. Unfortunately, there has not yet been any detailed study of the rates of SN impostors corrected for the inherent detection bias in SN searches.
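The factor of 1/1000 in discovery space follows directly from the scaling of a flux-limited survey; a minimal sketch:

```python
# In a flux-limited survey the maximum detection distance scales as
# d_max ∝ L^(1/2), so the searchable volume scales as
#     V ∝ d_max^3 ∝ L^(3/2)
def volume_ratio(lum_ratio):
    """Searchable-volume ratio for two transients with luminosity ratio lum_ratio."""
    return lum_ratio ** 1.5

# SN impostors are ~100x less luminous than a typical Type Ia SN:
print(volume_ratio(1 / 100))  # ~0.001, i.e. 1/1000 of the volume
```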
We are limited to small numbers, but one can infer that the rates of LBV eruptions and core-collapse SNe are comparable based on a local guesstimate: in our nearby region of the Milky Way there have been 2 giant LBV eruptions (P Cyg \& $\eta$ Car) and 3 SNe (Tycho, Kepler, and Cas A; and only 1 of these was a core-collapse SN) in the past $\sim$400 yr. The increased number of SN impostors in the past decade has led to recognition of wide diversity among the group, and correspondingly, increased ambiguity about their true physical nature. It is quite possible that many objects that have been called ``SN impostors'' are not LBVs, but something else. The SN impostors have peak absolute magnitudes around $-$14 mag, but there is actually a fairly wide spread in peak luminosity, ranging from $-$15 mag down to around $-$10 mag. At higher luminosity, transients are assumed to be supernovae, and at lower luminosity we call them something else (novae, stellar mergers, S Dor eruptions, etc.) --- but these dividing lines are somewhat arbitrary. Most of their spectra are similar, the most salient characteristic being bright, narrow H emission lines (so they are all ``Type IIn'') atop either a smooth blue continuum or a cooler absorption-line spectrum. Since the outbursts all look very similar, many different types of objects might be getting grouped together by observers. When more detailed pre-eruption information about the progenitor stars is available, however, we find a range of cases. Some are indeed very luminous, blue, variable stars; but some are not so luminous ($<$10$^5$ $L_{\odot}$), and are sometimes found among somewhat older stellar populations than one expects for a VMS (Prieto et al.\ 2008a, 2008b; Thompson et al.\ 2009). 
Some well-studied extragalactic SN impostors that are clearly massive stars suffering LBV-like giant eruptions are SN~1997bs, SN~2009ip, UGC~2773-OT, SN~1954J, V1 in NGC~2366, SN~2000ch; some well-studied objects that appear to be lower-mass stars (around 6-10 $M_{\odot}$) are SN~2008S, NGC~300-OT, V838 Mon, and SN~2010U. There are many cases in between where the interpretation of observational data is less straightforward or where the data are less complete. In any case, it is interesting that even lower mass stars (8--15 $M_{\odot}$) may be suffering violent eruptive instabilities similar to those seen in the most massive stars. If the physical cause of the outbursts is at all related, it may point to a deep-seated core instability associated with nuclear burning or some binary collision/merger scenario, rather than an envelope instability associated with the quiescent star being near the Eddington limit. Physically, the difference between a ``SN impostor''/giant eruption and a true (but underluminous) SN is that the star does not survive the latter type of event. Observationally, it is not always so easy to distinguish between the two. Even if the star survives, it may form dust that obscures the star at visual wavelengths, while IR observations may not be available to detect it. On the other hand, even if the star dies, there may appear to be a ``surviving'' source at the correct position if it is a host cluster, a companion star, an unrelated star superposed at the same position, or ongoing CSM interaction from the young SN remnant. It is often difficult to find decisive evidence in the faint, noisy, unresolved smudges one is forced to interpret when dealing with extragalactic examples. Consider the extremely well-observed case of SN~1961V. This object was one of the original ``Type V'' SNe and a prototype of the class of LBV giant eruptions (Humphreys et al.\ 1999). 
However, two recent studies have concluded that it was most likely a true core-collapse Type IIn SN, and for two different reasons: Smith et al.\ (2011a) point out that all of the observed properties of the rather luminous outburst are fully consistent with the class of Type~IIn SNe, which did not exist in 1961 and was not understood until recently. If SN~1961V were discovered today, we would undoubtedly call it a true SN~IIn since its high peak luminosity ($-$18 mag) and other observed properties clearly make it an outlier among the SN impostors. On the other hand, Kochanek et al.\ (2011) analyzed IR images of the site of SN~1961V and did not find an IR source consistent with a surviving luminous star that is enshrouded by dust, like $\eta$ Car. Both studies conclude that since the source is now $\sim$6 mag fainter than the luminous blue progenitor star, it probably exploded as a core-collapse event. Although there is an H$\alpha$ emission line source at the correct position (Van Dyk et al.\ 2002), this could be due to ongoing CSM/shock interaction, since no continuum emission is detected. It is hard to prove definitively that the star is dead, however (for an alternative view, see Van Dyk \& Matheson 2012). This question is very important, though, because the progenitor of SN~1961V was undoubtedly a very luminous star with a likely initial mass well exceeding 100 $M_{\odot}$. If it was a true core-collapse SN, it would prove that some very massive stars do explode and make successful SNe. What is the driving mechanism of LBV giant eruptions? What is their source of luminosity and kinetic energy? Even questions as simple and fundamental as these have yet to find answers. Two broad classes of models have developed: super-Eddington winds, and explosive mass loss. Both may operate at some level in various objects.
Traditionally, LBV giant eruptions have been discussed as super-Eddington winds driven by a sudden and unexplained increase in the star's bolometric luminosity (Humphreys \& Davidson 1994; Shaviv 2000; Owocki et al.\ 2004; Smith \& Owocki 2006), but there is growing evidence that some of them are non-terminal hydrodynamic ejections (see Smith 2008, 2013). Part of the motivation for this is based on detailed study of $\eta$ Carinae, which as noted above, has shown several signs that the 1840s eruption had a shock-driven component to it. One normally expects sudden, hydrodynamic events to be brief (i.e., a dynamical time), which may seem incongruous with the 10 yr long Great Eruption of $\eta$ Car. However, as in some very long-lasting core-collapse SNe, it is possible to power the observed luminosity of the decade-long Great Eruption with a shock wave plowing through dense circumstellar gas (Smith 2013). In this model, the duration of the transient brightening event is determined by how long it takes for the shock to overrun the CSM (this, in turn, depends on the relative speeds of the shock and CSM, and the radial extent of the CSM). Since shock/CSM interaction is such an efficient way to convert explosion kinetic energy into radiated luminosity, it is likely that many of the SN impostors with narrow emission lines are in fact powered by this method. The catch is that even this method requires something to create the dense CSM into which the shock expands. This may be where super-Eddington winds play an important role. The physical benefit of this model is that the demands on the super-Eddington wind are relaxed to a point that is more easily achievable; instead of driving 10 $M_{\odot}$ in a few years (as for $\eta$ Car), the wind can provide roughly half the mass spread over several decades or a century. 
The required mass-loss rates are then of order 0.01-0.1 $M_{\odot}$ yr$^{-1}$, which is more reasonable and physically plausible than a few to several $M_{\odot}$ yr$^{-1}$. Also, the wind can be slow (as we might expect for super-Eddington winds; Owocki et al.\ 2004), whereas the kinetic energy in observed fast LBV ejecta can come from the explosion. In any case, the reason for the onset of the LBV eruption remains an unanswered question. In the super-Eddington wind model, even if the wind can be driven at the rates required, we have no underlying physical explanation for why the star's bolometric luminosity suddenly increases by factors of 5-10 or more. In the explosion model, the reason for an explosive event preceding core collapse is not known, and the cause of explosive mass loss at even earlier epochs is very unclear. It could either be caused by some instability in late nuclear burning stages (see e.g., Smith \& Arnett 2014), or perhaps by some violent binary interaction like a collision or merger (Smith 2011; Smith \& Arnett 2014; Podsiadlowski et al.\ 2010). Soker and collaborators have discussed an accretion model to power the luminosity in events like $\eta$ Car's Great Eruption, but these assume that an eruption occurs to provide the mass that is then accreted by a companion, and so there is no explanation for what triggers the mass loss from the primary in the first place. In any case, research on these eruptions is actively ongoing; it is a major unsolved problem in astrophysics, and in the study of VMSs in particular. \section{Very Luminous Supernovae \label{sec:3}} \subsection{Background} The recognition of a new regime of SN explosions has just occurred in the last few years --- this includes SNe that are observed to be substantially more luminous than a standard bright Type Ia SN (the brightest among ``normal'' SNe). Although this is still a young field, the implications for and connections to the evolution and fate of VMSs are exciting.
Here we discuss these luminous SNe as well as gamma ray bursts (GRBs), and their connection to the lives and deaths of the most massive stars. This field of research on the most luminous SNe took on a new dimension with the discovery of SN~2006gy (Smith et al.\ 2007; Ofek et al.\ 2007), which was the first of the so-called ``super-luminous SNe'' (SLSNe). The surprising thing about this object was that with its high peak luminosity ($-$21.5 mag) and long duration (70 days to rise to peak followed by a slow decline), the integrated luminous energy $E_{rad}$ was a few times 10$^{51}$ erg, more than any previous SN. A number of other SLSNe have been discovered since then (see below). Why were these SLSNe not recognized previously? There may be multiple reasons, but clearly one reason is that earlier systematic SN searches were geared mainly toward maximizing the number of Type Ia SN discoveries in order to use them for cosmology. This meant that these searches, which usually imaged one galaxy per pointing due to the relatively small field of view, mainly targeted large galaxies to maximize the chances of discovering SNe Ia each night. Since SLSNe appear to prefer dwarf galaxy hosts (either because of lower metallicity, or because dwarf galaxies have higher specific star-formation rates), these searches may have been biased against discovering SLSNe. More recent SN searches have used larger fields of view and therefore search large areas of the sky, rather than targeting individual large galaxies; this is probably the dominant factor that led to the increased discovery rate of SLSNe (see Quimby et al.\ 2011). Additionally, even if SLSNe were discovered in these earlier targeted searches, precious follow-up resources for spectroscopy on large telescopes are limited, and so SNe that were not Type Ia were given lower priority.
\begin{figure}[!ht] \includegraphics[scale=0.56]{fig3.eps} \caption{Example light curves of several Type IIn SNe, along with two non-IIn SLSNe (SN~2005ap, a Type Ic) and SN~2008es (Type II) for comparison. SN~1999em is also shown to illustrate a ``normal'' Type II-P light curve. The fading rate of radioactive decay from $^{56}$Co to $^{56}$Fe is indicated, although for most SNe~IIn this is not thought to be the power source despite a similar decline rate at late times in some objects. Note that SN~2002ic and SN~2005gl are thought to be examples of SNe Ia interacting with dense CSM, leading them to appear as Type IIn (see text).} \label{fig:lc2n} \end{figure} \subsection{Sources of unusually high luminosity} So what can make SLSNe 10--100 times more luminous than normal SNe? There are essentially two ways to get a very luminous explosion. One is by having a relatively large mass of $^{56}$Ni that can power the SN with radioactive decay; a higher luminosity generally requires a larger mass of synthesized $^{56}$Ni. While a typical bright Type Ia SN might have 0.5-1 $M_{\odot}$ of $^{56}$Ni, a super-luminous SN must have 1-10 $M_{\odot}$ of $^{56}$Ni to power the observed luminosities. Currently, the only proposed explosion mechanism that can do this is a pair instability SN (see Chapter 7 by Woosley \& Heger). It is interesting to note that most normal SNe are powered by radioactive decay -- were it not for the synthesis of $^{56}$Ni in these explosions, we wouldn't ever see most SNe. The synthesized mass of $^{56}$Ni needed to supply the luminosity of a PISN through radioactivity is estimated from observations the same way as for normal SNe: \begin{equation} L = 1.42 \times 10^{43} \, {\rm erg \ s^{-1}} \, e^{-t/111\,{\rm d}} \, (M_{\rm Ni}/M_{\odot}) \end{equation} \noindent (Sutherland \& Wheeler 1984) where $L$ is the bolometric luminosity at time $t$ after explosion (usually measured at later times when the SN is clearly on the radioactive decay tail).
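Inverting the Sutherland \& Wheeler relation gives the $^{56}$Ni mass implied by an observed tail luminosity; a sketch with hypothetical example values (the luminosities and epochs below are illustrative, not measurements from the text):

```python
import math

def ni_mass(l_bol, t_days):
    """Invert L = 1.42e43 erg/s * exp(-t / 111 d) * (M_Ni / M_sun).

    Assumes full gamma-ray trapping and a truly bolometric L; otherwise
    the result is only a lower limit. Returns M_Ni in solar masses.
    """
    return l_bol / (1.42e43 * math.exp(-t_days / 111.0))

# A normal SN II-P tail, L ~ 1e41 erg/s at day 200 -> ~0.04 M_sun of Ni
print(f"{ni_mass(1e41, 200):.2f} M_sun")

# A SLSN still radiating ~5e43 erg/s on day 100 needs close to 10 M_sun
print(f"{ni_mass(5e43, 100):.1f} M_sun")
```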
Important uncertainties here are that $L$ must be the {\it bolometric} luminosity, which is not always easily obtained without good multiwavelength data (otherwise this provides only a lower limit to the $^{56}$Ni mass), and the time of explosion $t$ must be known (this is often poorly constrained observationally, since most SNe have been discovered near maximum luminosity). An additional cause of ambiguity is that in very luminous SNe, it is often difficult to determine if the source of luminosity is indeed radioactivity, since other mechanisms (see below) may be at work. The other way to generate an extraordinarily high luminosity is to convert kinetic energy into heat, and to radiate away this energy before the ejecta can expand and cool adiabatically. This mechanism fails for many normal SNe, since the explosion of any progenitor star with a compact radius (a white dwarf, compact He star, blue supergiant) must expand to many times its initial radius before the photosphere is large enough to provide a luminous display. These SNe are powered primarily by radioactivity, as noted above. Red supergiants, on the other hand, have larger initial radii, and so their peak luminosity is powered to a much greater extent by radiation from shock-deposited thermal energy. However, even the bloated radii of red supergiants (a few AU) are far smaller than a SN photosphere at peak ($\sim$10$^{15}$ cm), and so the most common Type II-P SNe from standard red supergiants never achieve an extraordinarily high luminosity. Most of the thermal energy initially deposited in the envelope is converted to kinetic energy through adiabatic expansion.
This inefficiency (and relatively low $^{56}$Ni yields of only $\sim$0.1 $M_{\odot}$) is why the total radiated energy of a normal SN~II-P (typically 10$^{49}$ erg) is only about 1\% of the kinetic energy in the SN ejecta.\footnote{Of course, most of the energy from a core collapse SN escapes in the form of neutrinos ($\sim$10$^{53}$ erg).} Smith \& McCray (2007) pointed out that this shock-deposition mechanism could achieve the extremely high luminosities of SLSNe like SN~2006gy if the initial ``stellar radius'' was of order 100 AU, where this radius is not really the hydrostatic photospheric radius of the star, but is instead the radius of an opaque CSM shell ejected by the star before the SN. The key in CSM interaction is that something else (namely, pre-SN mass loss) has already done the work against gravity to put a large mass of dense and slow-moving material out at large radii ($\sim$10$^{15}$ cm) away from the star. When the SN blast wave crashes into this material, already located at a large radius, the fast SN ejecta are decelerated and so the material is heated far from the star, where it can radiate away its thermal energy before it expands by a substantial factor. By this mechanism, large fractions ($\sim$50\% or more) of the total ejecta kinetic energy can be converted to thermal energy that is radiated away. In a hydrogen-rich medium, the photosphere tends to an apparent temperature around 6000-7000 K, and so a large fraction of the radiated luminosity escapes as visual-wavelength photons. Since this mechanism of optically thick CSM interaction is very efficient at converting ejecta kinetic energy into radiation, this process can yield a SLSN without an extraordinarily high explosion energy or an exotic explosion mechanism. What makes this scenario extraordinary (and a challenge to understand) is the requirement of ejecting $\sim$10 $M_{\odot}$ in just the few years before core collapse. This is discussed more below. 
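The high efficiency of CSM interaction can be understood from momentum conservation: when fast ejecta sweep up a comparable mass of slow, pre-existing CSM, the collision is effectively inelastic, and the kinetic energy lost in the collision is available to be radiated. A minimal sketch with illustrative masses:

```python
def thermalized_fraction(m_ejecta, m_csm):
    """Fraction of ejecta kinetic energy converted to heat when ejecta
    of mass m_ejecta inelastically sweep up stationary CSM of mass m_csm.

    Momentum conservation gives v_final = m_ej * v_ej / (m_ej + m_csm),
    so the kinetic energy lost to heat is the fraction
    m_csm / (m_ej + m_csm) of the initial kinetic energy.
    """
    return m_csm / (m_ejecta + m_csm)

# Comparable ejecta and CSM masses -> half the kinetic energy thermalized
print(thermalized_fraction(10.0, 10.0))  # -> 0.5

# A CSM shell more massive than the ejecta pushes the fraction higher
print(thermalized_fraction(10.0, 20.0))  # ~0.67
```

This is why a massive pre-SN shell, rather than an unusually energetic explosion, can be sufficient to produce a SLSN.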
A variant of this conversion of kinetic energy into light is powering a SLSN with the birth of a magnetar (Woosley 2010; Kasen \& Bildsten 2010). In this scenario, a normal core-collapse SN explodes the star and sends its envelope (10s of $M_{\odot}$) expanding away from the star. For the SN itself, there is initially nothing unusual compared to normal SNe. But in this case a magnetar is born instead of a normal neutron star or black hole. The rapid spin-down of the magnetar subsequently injects $\sim$10$^{51}$ erg of energy into the SN ejecta (which have now expanded to a large radius of $\sim$100 AU). Similar to the opaque CSM interaction model mentioned previously, this mechanism reheats the ejected material at a large radius, so that it can radiate away the energy before the heat is lost to adiabatic expansion, providing an observer with a SLSN. It would be very difficult to tell the difference observationally between the magnetar model and the opaque shocked shell model during the early phases around peak when photons are diffusing out through the shell or ejecta. It may be possible to see the difference at late times if late-time data are able to see the signature of the magnetar (Inserra et al. 2013). In summary, there are three proposed physical mechanisms for powering SLSNe. For each, there are also reasons to suspect a link to VMSs. {\bf 1. Pair instability SNe.} This is a very powerful thermonuclear SN explosion. To produce the observed luminosity and radiated energy, one requires of order 10 $M_{\odot}$ of synthesized $^{56}$Ni. These explosions are only expected to occur in VMSs with initial masses of $>$150 $M_{\odot}$, because those stars are the only ones with a massive enough CO core to achieve the high temperatures needed for the pair-instability mechanism. The physics of these explosions is discussed more in the chapter by Woosley \& Heger.
So far, there is only one observed SN that has been suggested as a good example of a PISN, and this is SN~2007bi (Gal-Yam et al.\ 2009). However, this association with a PISN is controversial. Dessart et al.\ (2012) have argued that SN~2007bi does not match predictions for a PISN; it has a very blue color with a peak in the UV, whereas the very large mass of Fe-group elements in a PISN should cause severe line blanketing, leading to very red observed colors and deep absorption features. Thus, it is unclear if we have ever yet observed a PISN. {\bf 2. Opaque shocked shells.} Here we have a normal SN explosion that collides with a massive CSM shell, providing a very efficient way of converting the SN ejecta kinetic energy into radiated luminosity when the SN ejecta are decelerated. The reason that this mechanism would be linked to VMS progenitors is because one requires a very large mass of CSM (10-20 $M_{\odot}$) in order to stop the SN ejecta. Given expectations for the minimum mass of SN ejecta in models and the fact that stars also suffer strong mass loss during their lifetimes, a high mass progenitor star is needed for the mass budget. Also, sudden eruptive mass loss in non-terminal events that eject $\sim$10 $M_{\odot}$ is, so far, a phenomenon exclusively associated with VMSs like LBVs. Although lower-mass stars do appear to be suffering eruptions that look similar (see above), these do not involve the ejection of 10 $M_{\odot}$. {\bf 3. Magnetar-powered SNe.} In principle, the mechanism is quite similar to the opaque shocked shell model, in the sense that thermal energy is injected at a large radius, although here we have magnetar energy being dumped into a SN envelope, rather than SN ejecta colliding with CSM. Although the SN explosion that leads to this SLSN may be normal, the potential association with VMSs comes from the magnetar.
Some magnetars have been found in the Milky Way to be residing in massive young star clusters that appear to have a turnoff mass around 40 $M_{\odot}$, suggesting that the progenitor of the magnetar had an initial mass above 40 $M_{\odot}$. \subsection{Type IIn SLSNe} Since massive stars are subject to strong mass loss, it is common that there is CSM surrounding a massive star at the time of its death, into which the fast SN ejecta must expand. The collision between the SN blast wave and this CSM is referred to as ``CSM interaction'', which is commonly observed in core-collapse SNe in the form of X-ray or radio emission (Chevalier \& Fransson 1994). However, only about 8-9\% of core-collapse SNe (Smith et al.\ 2011b) have CSM that is dense enough to produce strong visual-wavelength emission lines and an optically thick continuum. In these cases, the SN usually exhibits a smooth blue continuum with strong narrow H emission lines, and is classified as a Type IIn SN. Intense CSM interaction can occur in two basic regimes: (1) an optically thick regime, in which photons must diffuse out through the material in a time that is comparable to the expansion time; or (2) an effectively optically thin regime, in which the luminosity generated by CSM interaction escapes quickly. This is equivalent to cases where the outer boundary of the CSM is smaller or larger, respectively, than the ``diffusion radius'' (see Chevalier \& Irwin 2011). The former case will yield an observed SN without narrow lines, resembling a normal broad-lined SN spectrum. The latter will exhibit strong narrow emission lines with widths comparable to the speed of the pre-shock CSM, emitted as the shock continues to plow through the extended CSM. In most cases, the SN will transition from the optically thick case to the optically thin case around the time of peak luminosity (see Smith et al.\ 2008). If the CSM is hydrogen rich, the narrow H lines earn the SN the designation of Type~IIn.
(If the CSM is H-poor and He-rich, it will be seen as a Type Ibn, but these are rare and no SLSNe have yet been seen of this type.) Although narrow H emission lines are the defining characteristic of the Type~IIn class, the line widths and line profiles can be complex with multiple components. They exhibit wide diversity, and they evolve with time during a given SN event as the optical depth drops and as the shock encounters density and speed variations in the CSM. These line profiles are therefore a powerful probe of the pre-SN mass loss from the SN progenitor star. Generally, the emission line profiles in SNe~IIn break down into three subcomponents: narrow, intermediate-width, and broad. \begin{figure}[b] \includegraphics[scale=0.6]{fig4.eps} \caption{Observations of pre-shock CSM speeds in the SLSN IIn SN~2006gy. The left panel shows several tracings of the narrow P Cygni feature. The right shows velocities measured for various radii, where dates have been converted to radii based on the observed expansion speed of the cold dense shell (upper points are for the blue edge of the absorption, while the lower points are for the velocity at the minimum of the absorption). The CSM velocity follows a Hubble-like law, indicating a single ejection date for the CSM about 8 yr prior to the SN. Both figures are from Smith et al.\ (2010a).} \label{fig:06gy} \end{figure} \begin{itemize} \item The narrow (few 10$^2$ km s$^{-1}$) emission lines arise from a photoionized shock precursor, when hard ionizing photons generated in the hot post-shock region propagate upstream and photoionize much slower pre-shock gas. The width of the narrow component, if it is resolved in spectra, gives an estimate of the wind speed of the progenitor star in the years leading up to core collapse.
Since these speeds are generally between about 200--600 km s$^{-1}$, this seems to suggest blue supergiant stars or LBVs for the progenitors of SNe~IIn, because the escape speeds are about right (see Smith et al.\ 2007, 2008, 2010a). Bloated red supergiants or compact WR stars have much slower or faster wind speeds, respectively. In some cases when relatively high spectral resolution is used, one can observe the narrow P Cygni absorption profile. This gives an even more precise probe of the wind speed of the pre-shock gas along the line-of-sight, which in some cases has multiple velocity components showing that the wind speed has been changing (see below, and Groh \& Vink 2011). Since the absorption occurs in the densest gas immediately ahead of the shock, one can potentially use the time variation in the P Cyg absorption to trace out the radial velocity law in the wind. A dramatic example of this was the case of SN~2006gy (Fig.~\ref{fig:06gy}; Smith et al.\ 2010a), where the velocity of the P~Cyg absorption increased with time as the shock expanded, indicating a Hubble-like flow in the CSM (i.e. $v \propto R$). In this case, the Hubble-like law in the pre-shock CSM indicated that the dense CSM was ejected only about 8 yr before the SN (Smith et al.\ 2010a). The rather close synchronization between the pre-SN eruptions and the SN has important implications, and is discussed more below. \item Intermediate-width ($\sim$10$^3$ km s$^{-1}$) components usually accompany the narrow emission-line cores. Generally these broader components exhibit a Lorentzian profile at early times and gradually transition to Gaussian, asymmetric, or irregular profiles at late times. This is thought to be a direct consequence of dropping optical depth (see Smith et al.\ 2008). At early times in very dense CSM, line photons emitted in the ionized pre-shock CSM must diffuse outward through optically thick material outside that region.
The multiple electron scatterings encountered as the photons escape produce the Lorentzian-shaped wings to the narrow line cores. For these phases, it would therefore be a mistake to fit multiple components to the H$\alpha$ line profile, for example, and to adopt the broader component as indicative of some characteristic expansion speed in the explosion. At later times when the pre-shock density is lower and we see deeper into the shock, the intermediate-width components can trace the kinematics of the post-shock region more directly. These generally indicate shock speeds of a few 10$^3$ km s$^{-1}$ or less. \item Sometimes, in special cases of lower-density CSM (or at late times), clumpy CSM, or CSM with non-spherical geometry, one can also observe the broad-line profiles from the underlying fast SN ejecta. In these cases one can estimate the speed of the SN ejecta directly. This usually does not occur in the most luminous SNe~IIn, however, simply because the lower-density CSM or the small solid angle for CSM interaction (i.e. a disk) needed to allow one to see the broad SN ejecta lines also limits the luminosity of the CSM interaction, making it hard to have both transparency and high luminosity in the same explosion. A recent case of this is the 2012 SN event of SN~2009ip (Smith et al.\ 2014). \end{itemize} \subsection{CSM Mass Estimates for SLSNe IIn} {\it Cold Dense Shell (CDS) Luminosity:} Armed with empirical estimates of the speed of the CSM and the speed of the advancing shock, one can then calculate a rough estimate for the density and mass-loss rate of the CSM required to power the observed luminosity of the SN. Dense CSM slows the shock, and the resulting high densities in the post-shock region allow the shock to become radiative. With high densities and optical depths, thermal energy is radiated away primarily as visual-wavelength continuum emission.
This loss of energy removes pressure support behind the forward shock, leading to a very thin, dense, and rapidly cooling shell at the contact discontinuity (usually referred to as the ``cold dense shell'', or CDS; see Chugai et al.\ 2004; Chugai \& Danziger 1994). This CDS is pushed by ejecta entering the reverse shock, and it expands into the CSM at a speed $V_{CDS}$. In this scenario, the maximum emergent continuum luminosity from CSM interaction is given by \begin{equation} L_{CSM} \ = \ \frac{1}{2} \, \dot{M} \, \frac{V_{CDS}^3}{V_W} \ = \ \frac{1}{2} \, w \, V_{CDS}^3 \end{equation} \noindent where $V_{CDS}$ is the outward expansion speed of the CDS derived from observations of the intermediate-width component, $V_W$ is the speed of the pre-shock wind derived from the narrow emission line widths or the speed of the P Cygni absorption trough, $\dot{M}$ is the mass-loss rate of the progenitor's wind, and $w$ = $\dot{M}/V_W$ is the so-called wind density parameter (see Chugai et al.\ 2004; Chugai \& Danziger 1994; Smith et al.\ 2008, 2010a). The wind density parameter is a convenient way to describe the CSM density, because it does not assume a constant speed (for the highest mass-loss rates, it may be a poor assumption to adopt a constant wind with a standard $R^{-2}$ density law, since the huge masses involved are more likely to be the result of eruptive/explosive mass loss). In general, this suggests that more luminous SNe require either higher density in the CSM, faster shocks, or both. Thus, a wide range of different CSM density (resulting from different pre-SN eruption parameters or different wind mass-loss rates) should produce a wide variety of luminosities in SNe IIn. This is, in fact, observed.
Figure~\ref{fig:lc2n} shows several examples of light curves for well-studied SNe IIn, which occupy a huge range in luminosity from the most luminous SNe down to the lower bound of core-collapse SNe (below peaks of about $-$15.5 mag, we would generally refer to a SN~IIn as a SN impostor). To derive a CSM mass, it is common to re-write the previous equation with an efficiency factor $\epsilon$ as: \begin{equation} L_{CSM} \ = \epsilon \ \frac{1}{2} \, \dot{M} \, \frac{V_{CDS}^3}{V_W} \ = \epsilon \ \frac{1}{2} \, w \, V_{CDS}^3. \end{equation} \noindent With representative values, this can be rewritten as: \begin{equation} \dot{M} = 0.3 \, M_{\odot} \, {\rm yr}^{-1} \times \frac{L_9}{\epsilon} \, \frac{V_w/200}{(V_{CDS}/2000)^3} \end{equation} \noindent where $L_9$ is the bolometric luminosity in units of 10$^9$ $L_{\odot}$, $V_W/200$ is the CSM expansion speed relative to 200 km s$^{-1}$, and $V_{CDS}/2000$ is the expansion speed of the post-shock gas in the CDS relative to 2000 km s$^{-1}$. These velocities are representative of those observed in SNe~IIn, although there is variation from one object to the next. $L_9$ corresponds roughly to an absolute magnitude of only $-$17.8 mag, which is relatively modest for SNe~IIn (Fig.\ 3). Thus, we see that even for relatively normal luminosity SNe~IIn, extremely high pre-SN mass-loss rates are required, much higher than is possible for any normal wind. For SLSNe that are $\sim$10 times more luminous, extreme mass-loss rates of order $\sim$1 $M_{\odot}$ yr$^{-1}$ are needed. Moreover, this mass-loss rate is really a lower limit, due to the efficiency factor $\epsilon$, which must be less than 100\%. In favorable cases (fast SN ejecta, slow and dense CSM) the efficiency can be quite high (above 50\%; see van Marle et al. 2010). However, for lower densities and especially non-spherical geometry in the CSM, the efficiency drops and CSM mass requirements rise.
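As a quick numerical illustration of this scaling, the sketch below evaluates the mass-loss-rate estimate $\dot{M} \propto (L/\epsilon)\,V_w/V_{CDS}^3$ for representative values; the 50\% default efficiency and the chosen velocities are assumptions for illustration, not measurements of any particular SN:

```python
# Hedged numerical sketch of the mass-loss-rate scaling above:
# Mdot ~ 0.3 Msun/yr * (L9/eps) * (Vw/200 km/s) / (Vcds/2000 km/s)^3.
# The 50% default efficiency is an assumed illustrative value.

def mdot_csm(L9, v_wind_kms, v_cds_kms, eps=0.5):
    """Progenitor mass-loss rate (Msun/yr) needed to power L = L9 * 1e9 Lsun."""
    return 0.3 * (L9 / eps) * (v_wind_kms / 200.0) / (v_cds_kms / 2000.0) ** 3

# A modest SN IIn (~ -17.8 mag) with typical IIn velocities:
print(mdot_csm(1.0, 200.0, 2000.0))   # ~0.6 Msun/yr
# A SLSN IIn roughly 10x more luminous:
print(mdot_csm(10.0, 200.0, 2000.0))  # ~6 Msun/yr
```

Even at 50\% efficiency, a modest SN~IIn already demands a pre-SN mass-loss rate far above any steady wind, and a SLSN~IIn demands an order of magnitude more.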
In cases where the post-shock H$\alpha$ emission is optically thin, one can, in principle, also estimate the CSM mass in a similar way, by replacing the bolometric luminosity with $L_{H\alpha}$, and the efficiency $\epsilon$ with the corresponding H$\alpha$ efficiency $\epsilon_{H\alpha}$. This is perhaps most appropriate at late times, as CSM interaction may continue for a decade after the SN. During this time the assumption of optically thin post-shock H$\alpha$ emission may be valid. In practice, however, there are large uncertainties in the value of $\epsilon_{H\alpha}$ (usually assumed to be of order 0.005 to 0.05; e.g.\ Salamanca et al.\ 2002), so this diagnostic provides only very rough order of magnitude estimates. {\it Light Curve Fits:} The rough estimate in the previous method provides a mass-loss rate corresponding only to the density overtaken at one moment by the shock (assuming the CDS radiation escapes without delay; see below). In reality, the values of $V_{CDS}$, $V_{w}$, and the CSM density can change with time as the shock decelerates while it expands into the CSM, as does the speed of the SN ejecta crashing into the reverse shock. Moreover, pre-SN mass loss is likely to be episodic, so it is unclear for how long that value of $\dot{M}$ was sustained. To get the total CSM mass ejected by the progenitor within some time frame before core collapse (and hence, an average value of $\dot{M}$), one must integrate over time. This means producing a model to fit the observed light curve. One can calculate a simple analytic model for the CSM mass needed to yield the light curve by demanding that momentum is conserved in the collision between the SN ejecta and the CSM, and that the change in kinetic energy resulting from the deceleration of the fast SN ejecta is lost to radiation.
Assuming an explosion energy, a density law for the SN ejecta, and a speed and density law of the CSM, one can calculate the resulting analytic light curve assuming that high densities and H-rich composition lead to a small bolometric correction (see Smith et al.\ 2008, 2010a; Smith 2013a, 2013b; Chatzopoulos et al.\ 2013; Moriya et al.\ 2013). One can also do the same from a numerical simulation (e.g., Woosley et al. 2007; van Marle et al. 2010). In general, very high CSM masses of order 10-20 $M_{\odot}$ are found for SLSNe like SN~2006gy and 2006tf, emitted in the decade or so preceding the explosion. Considering the CSM mass within the radius overtaken by the shock, the uncertainty in this mass estimate is roughly a factor of 2, but it should also be considered a lower limit to the total mass since more mass can reside at larger radii. When a very large mass of dense CSM is involved, this method is usually more reliable than other methods (emission lines, X-rays, radio) that may severely underestimate the mass due to high optical depths. {\it Diffusion Time:} In extreme cases where the CSM is very dense, the diffusion time $\tau_{diff} \simeq n \sigma R^2 / c$ may be long. If $\tau_{diff}$ becomes comparable to the expansion timescale of the shock moving through the CSM $\tau_{exp} \simeq R/V_s$, then the shock-deposited thermal energy can leak out after the shock has broken out of the CSM. Since the radius of the CSM may be very large (of order 10$^{15}$ cm), this may produce an extremely luminous SN display (Smith \& McCray 2007). This is essentially the same mechanism as the normal plateau luminosity of a SN~II-P (Falk \& Arnett 1977), but the radius here is the radius of the CSM, not the hydrostatic radius of the star.
This can be simplified to \begin{equation} M_{CSM}/M_{\odot} \simeq R_{15} (\tau_{diff} / 23 days) \end{equation} \noindent where $R_{15}$ is the assumed radius of the opaque CSM in units of 10$^{15}$ cm, and $\tau_{diff}$ can be estimated from observations of the characteristic fading time of the SN light curve. Applying this to SLSNe like SN~2006gy yields a CSM mass of order 10-20 $M_{\odot}$ (Smith \& McCray 2007; Chevalier \& Irwin 2011). This is comparable to the estimates through the previous method. The underlying physical mechanism is the same as normal CSM interaction discussed above, but the optical depths are assumed to be too high for the luminosity to escape quickly. In fact, even lower-luminosity SNe~IIn may have diffusion-powered light curves at early times as the shock breaks through the inner and denser parts of the wind; their lower luminosity compared to SLSNe reflects the smaller radius in the CSM where this breakout occurs (see Ofek et al.\ 2013a). {\it H$\alpha$ Emission from Unshocked CSM:} When high resolution spectra reveal a narrow P Cygni component to the H$\alpha$ line (widths of order 100-500 km s$^{-1}$), one can infer that this emission arises from the pre-shock CSM. (Note that if a narrow P Cyg profile is not seen, but rather a simple emission profile, it is uncertain if this narrow component arises from a distant circumstellar nebula or a nearby H~{\sc ii} region.) Following Smith et al.\ (2007), the mass of emitting ionized hydrogen in the CSM around a SN~IIn can be inferred from the total narrow-component H$\alpha$ luminosity $L_{H\alpha}$ from \begin{equation} M_{H\alpha} \simeq \frac{m_H L_{H\alpha}}{h \nu {\alpha}^{eff}_{H\alpha} n_e} \end{equation} \noindent where $h\nu$ is the energy of an H$\alpha$ photon, $\alpha^{eff}_{H\alpha}$ is the Case B recombination coefficient, and $n_e$ is the average electron density. 
This simplifies to \begin{equation} M_{H\alpha} \simeq 11.4 \, M_{\odot} (L_{H\alpha}/n_e) \end{equation} \noindent with $L_{H\alpha}$ in units of $L_{\odot}$ and $n_e$ in cm$^{-3}$ (see Smith et al.\ 2007). Note that this is only the mass of ionized H at high densities, so it is only a lower limit to the CSM mass if some of the CSM remains neutral. However, as with mass-loss rates of normal O-type stars, the H$\alpha$ emission depends on the degree of clumping in the wind (see review by Smith 2014), which can lower the total required mass. For more luminous SNe~IIn with very dense pre-shock CSM, the narrow H$\alpha$ component may arise from a relatively thin zone ahead of the shock, and it therefore provides a useful probe of the immediate pre-shock CSM in cases where a narrow P Cyg profile is observed. For a SLSN IIn like SN~2006gy, this method yields a CSM mass of order 10 $M_{\odot}$ or a mass-loss rate of order 1 $M_{\odot}$ yr$^{-1}$ (Smith et al. 2007). For a more moderate-luminosity SN~IIn like SN~2009ip, Ofek et al.\ (2013a) applied this same method and found a mass-loss rate of order 10$^{-2}$ $M_{\odot}$ yr$^{-1}$. {\it X-ray and radio emission:} For SLSNe~IIn the X-ray and radio emission is of limited utility in diagnosing the pre-SN mass-loss rate, since very high CSM densities cause the X-rays to be self absorbed (the reprocessing of X-rays and their thermalization to lower temperatures is what powers the high visual-wavelength continuum luminosity of SNe~IIn) and the CSM is optically thick to radio emission during the main portion of the visual light curve peak. When X-rays are detected, the X-ray luminosity $L_X$ can be used to infer a characteristic mass-loss rate (see Ofek et al.\ 2013a; Smith et al.\ 2007; Pooley et al.
2002): \begin{equation} L_X \simeq 3.8 \times 10^{41} \, {\rm erg \, s}^{-1} \, (\dot{M}/0.01)^2 \, (V_w/500)^{-2} \, R_{15}^{-1} \, e^{-(\tau + \tau_{bf})} \end{equation} \noindent where $\dot{M}$ is in units of 0.01 $M_{\odot}$ yr$^{-1}$, the wind speed is relative to 500 km s$^{-1}$, $R_{15}$ is the shock radius in units of 10$^{15}$ cm, $\tau$ is the Thomson optical depth in the wind, and the exponential term is due to wind absorption (see Ofek et al.\ 2013a for further detail). Caution must be used when inferring global properties, however. If the CSM is significantly asymmetric (as most nebulae around massive stars are), X-rays may indeed escape from less dense regions of the CSM/shock interaction, while much denser zones may yield high optical depths and a strong visual-wavelength continuum. Thus, one could infer both low and high densities simultaneously, which might seem contradictory at first glance. This was indeed the case in SN~2006gy, where the CSM density indicated by X-rays was not nearly enough to provide the observed visual luminosity (Smith et al.\ 2007). Radio synchrotron emission is quashed for progenitor mass-loss rates much higher than about 10$^{-5}$ $M_{\odot}$ yr$^{-1}$ in the first year or so after explosion, and as a result, radio emission is rarely seen from SNe~IIn at early times. (In order for the CSM interaction luminosity to compete with the normal SN photosphere luminosity, the mass-loss rate of a SN~IIn progenitor must generally be higher than 10$^{-4}$ $M_{\odot}$ yr$^{-1}$. Moreover, very massive stars almost always have normal winds in this range anyway, due to their high luminosity.) Radio emission can be detected at later times when the density drops, but this emission is then tracing the mass-loss rate that occurred centuries before the SN, rather than the eruptions in the last few years before explosion. For a discussion of how to use radio emission as a diagnostic of the progenitor's mass-loss rate, we refer the reader to Chevalier \& Fransson (1994).
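Two of the simpler order-of-magnitude estimators above, the narrow-H$\alpha$ mass and the diffusion-time mass, reduce to one-line formulas. The sketch below evaluates both; the input luminosity, density, radius, and diffusion time are illustrative placeholders rather than measurements of any specific SN:

```python
# Minimal sketch of two of the order-of-magnitude CSM mass estimators
# discussed above.  Input numbers are illustrative placeholders.

def m_halpha(L_halpha_lsun, n_e):
    """Ionized-H mass (Msun) from narrow Halpha: ~11.4 * (L/Lsun) / (n_e/cm^-3)."""
    return 11.4 * L_halpha_lsun / n_e

def m_csm_diffusion(R15, tau_diff_days):
    """Opaque-CSM mass (Msun) from the diffusion time: ~R15 * (tau_diff / 23 d)."""
    return R15 * tau_diff_days / 23.0

print(m_halpha(1e7, 1e7))           # 11.4 Msun of ionized hydrogen
print(m_csm_diffusion(2.0, 100.0))  # roughly 9 Msun
```

Both estimators return masses of order 10 $M_{\odot}$ for SLSN-IIn-like inputs, consistent with the numbers quoted above, and both are lower limits (neutral gas in the first case, mass at larger radii in the second).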
\subsection{Connecting SNe IIn and LBVs} There are several lines of evidence that suggest a possible connection between LBVs and the progenitors of SNe~IIn. While each one is not necessarily conclusive on its own, taken together they clearly favor LBVs as the most likely known type of observed stars that fit the bill. If the progenitors of SNe~IIn are not actually LBVs, they do a very good impersonation. Here is a list of the different lines of evidence that have been suggested: (1) Super-luminous SNe IIn, where the demands on the amount of CSM mass are so extreme (10-20 $M_{\odot}$ in some cases) that unstable massive stars are required for the mass budget, and the inferred radii and expansion speeds of the CSM require that it be ejected in an eruptive event within just a few years before core collapse (Smith et al.\ 2007, 2008, 2010a; Smith \& McCray 2007; Woosley et al.\ 2007; van Marle et al.\ 2010). So far, the only observed precedent for stars known to exhibit this type of extreme, eruptive mass loss is LBV giant eruptions. (In fact, one could argue that since LBV is an observational designation, if any such pre-SN event were to be observed, we would probably call it an LBV-like eruption.) (2) Direct detections of progenitors of SNe~IIn that are consistent with massive LBV-like stars (Gal-Yam \& Leonard 2009; Gal-Yam et al. 2007; Smith et al. 2010b, 2011a, 2012; Kochanek et al.\ 2011). This is discussed in the next section (Sect. 4). (3) Direct detections of non-terminal LBV-like eruptions preceding a SN explosion. This is seen by some as a smoking gun for an LBV/SN connection. So far there are only two clear cases of this, and two more with less complete observations, discussed later (Sect. 5). (4) The narrow emission-line components from the CSM indicate H-rich ejecta surrounding the star. H-rich CSM is obviously not exclusive to LBVs, but it argues against most WR stars as the progenitors. 
If SNe IIn (especially SLSNe IIn) indeed require very massive progenitors, this is a pretty severe problem for standard models of massive-star evolution. In any case, among massive stars with very strong mass loss, LBVs are the only ones with the combination of H-rich ejecta and high densities comparable to those required. (5) Wind speeds consistent with LBVs. As noted above, the observed line widths for narrow components in SNe~IIn suggest wind speeds of a few 10$^2$ km s$^{-1}$. This is consistent with the expected escape velocities of blue supergiants and LBVs (Salamanca et al. 2002; Smith 2006; Smith et al.\ 2007, 2008, 2010a; Trundle et al.\ 2008). While it doesn't prove that the progenitors are in fact LBVs, it is an argument against red supergiants or WR stars as the likely progenitors. Wind speeds alone are not conclusive, however, since radiation from the SN itself may accelerate pre-shock CSM to these speeds. (6) Wind variability that seems consistent with LBVs. Modulation in radio light curves indicates density variations that suggest a connection to the well-established variability of LBV winds (Kotak \& Vink 2006). Also, multiple velocity components along the line of sight seen in blueshifted P Cygni absorption components of some SNe IIn resemble similar multi-component absorption features seen in classic LBVs like AG Car (Trundle et al.\ 2008). This may hint that some SN IIn progenitors had winds that transitioned across the bistability jump, as do LBVs (see Vink chapter; Groh \& Vink 2011). As with the previous point (wind speed), this is not a conclusive connection to LBVs, since other stars do experience density and speed variations in their winds, and the sudden impulse of radiation driving from the SN luminosity itself might give the impression of multiple wind speeds seen in absorption along the line of sight. Nevertheless, the variability inferred does hint at a possible connection to LBVs, and is consistent with that interpretation. 
We must note, however, that not all SNe~IIn are necessarily tied to LBVs and the most massive stars. Some SNe~IIn may actually be Type Ia explosions with dense CSM (e.g., Silverman et al. 2013 and references therein), some may be electron-capture SN explosions of stars with initial masses around 8$-$10 $M_{\odot}$ (Smith 2013b; Mauerhan et al.\ 2013b; Chugai et al.\ 2004), and some may arise from extreme red supergiants like VY~CMa with very dense winds (Smith et al.\ 2009; Mauerhan \& Smith 2012; Chugai \& Danziger 1994). The argument for a connection to LBVs and VMSs is most compelling for the SLSNe IIn because of the required mass budget, which is hard to circumvent (Smith \& McCray 2007; Smith et al.\ 2007, 2008, 2010a; Woosley et al.\ 2007; Rest et al.\ 2011). \subsection{Requirements for Pre-SN Eruptions and Implications} In order for the characteristic Type~IIn spectrum to be observed, and to achieve a high luminosity from CSM interaction, the collision between the SN shock and the CSM must occur immediately after explosion. This places a strong constraint on the location of the CSM and the time before the SN when it must have been ejected. Given the luminosity of SLSNe, the photosphere must be at a radius of a few 10$^{15}$ cm, which must also be the location of the CSM if interaction drives the observed luminosity. Another way to arrive at this same number is to require that a SN shock front (the cold dense shell or CDS, as above) expands at a few 10$^3$ km s$^{-1}$ in order to overtake the CSM in the first $\sim$100 days. Then we have $D$ = $v \times t$ = (2,000 km s$^{-1}$) $\times$ (100 d) = 2$\times$10$^{15}$ cm. Note that the observed blueshifted P Cygni absorption profiles in narrow line components indicate that the CSM is {\it outflowing}. This observed expansion rules out possible scenarios where the CSM is primordial (i.e. disks left-over from star formation). How recently was this CSM ejected by the progenitor star?
From the widths of narrow lines observed in spectra we can derive the speed of the pre-SN wind, and these show speeds of typically 100$-$600 km s$^{-1}$ (Smith et al. 2008, 2010a; Kiewe et al. 2012), as noted earlier. In order to reach radii of 1-2$\times$10$^{15}$ cm, then, the mass ejection must have occurred only a few years before the SN. Since the lifetime of the star is several Myr and the time of He burning is 0.5-1 Myr, a timescale of only 2-3 yr is very closely synchronized with the time of core collapse. This is a strong hint that something violent (i.e., hydrodynamic) may be happening to these stars very shortly before core collapse, apparently as a {\it prelude} to the core collapse event. As noted earlier, the CSM mass must be substantial in order to provide enough inertia to decelerate the fast SN ejecta and extract the kinetic energy. This is especially true for SLSNe, where high CSM masses of order 10 $M_{\odot}$ are required. Combined with the expansion speeds of several 10$^2$ km s$^{-1}$ derived from narrow emission lines in SNe~IIn, we find that whatever ejected the CSM must have been provided with an energy of order 10$^{49}$ ergs. Since the mass loss occurred in only a few years before core collapse, it is necessarily an eruptive event that is short in duration. The H-rich composition, high mass, speed, and energy of these pre-SN eruptions are remarkably similar to the physical conditions derived for LBV giant eruptions. This is the primary basis for the connections between LBVs and SNe~IIn, as noted earlier. Consequently, we are left with the same ambiguity about the underlying physical mechanism of pre-SN outbursts as we have for LBVs. The SN precursors seem to be some sort of eruptive or explosive mass-loss event, but the underlying cause is not yet known. Unlike many of the LBVs, however, the pre-SN eruptions provide a telling clue --- i.e. for some reason they appear to be synchronized with the time of core collapse. 
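The radius, ejection-time, and energy arguments above amount to simple arithmetic, which the sketch below checks using representative (assumed) values for a SLSN IIn:

```python
# Back-of-the-envelope check of the timescale and energy arguments above,
# using representative (assumed) numbers for a SLSN IIn.

MSUN, YR, DAY = 1.989e33, 3.156e7, 8.64e4   # cgs units

v_cds = 2000e5        # cm/s; CDS expansion speed
t_peak = 100 * DAY    # s;    shock must overtake the CSM within ~100 d
R_csm = v_cds * t_peak
print(f"CSM radius ~ {R_csm:.1e} cm")               # ~2e15 cm

v_wind = 300e5        # cm/s; pre-shock CSM speed from narrow lines
t_eject = R_csm / v_wind / YR
print(f"ejected ~ {t_eject:.1f} yr before the SN")  # only a couple of years

M_csm = 10 * MSUN     # g;    CSM mass required for a SLSN IIn
E_eject = 0.5 * M_csm * v_wind**2
print(f"eruption energy ~ {E_eject:.1e} erg")       # ~1e49 erg
```

With these inputs the CSM sits at $\sim$2$\times$10$^{15}$ cm, was ejected only $\sim$2 yr before core collapse, and carries $\sim$10$^{49}$ erg of kinetic energy, reproducing the figures quoted in the text.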
This is interesting, since we do know that core evolution proceeds rapidly through several different burning stages as a massive star approaches core collapse. It is perhaps natural to associate these pre-SN eruptions with Ne and O burning, each of which lasts roughly a year (see Quataert \& Shiode 2011; Smith \& Arnett 2014). Carbon burning lasts at least several centuries (too long for the immediate SN precursors, but possibly important in some SNe~IIn), while Si burning lasts only a day or so (too short). A number of possible instabilities that may occur in massive stars during these phases are discussed in more detail by Smith \& Arnett (2014), as well as the specific case of wave-driven mass loss by Quataert \& Shiode (2011) and Shiode \& Quataert (2013). In extremely massive stars with initial masses above $\sim$100 $M_{\odot}$, a series of precursor outbursts can occur as a result of the pulsational pair instability (PPI; see chapter by Heger \& Woosley). These eruptions are thought to occur in a range of initial masses (roughly 100-150 $M_{\odot}$) where explosive O burning events are insufficient to completely disrupt the star as a final SN, but which can give rise to mass ejections with roughly the mass and energy required for conditions observed in luminous SNe IIn precursors. The PPI should occur far too rarely ($\sim$1\% or less of all core-collapse SNe) to explain all of the SNe IIn (which are about 8-9\% of ccSNe; Smith et al. 2011b). It may, however, provide a plausible explanation for the much more rare cases of SLSNe of Type IIn. \subsection{Type Ic SLSNe and GRBs} Not all SLSNe are Type~IIn, and not all SLSNe have H in their spectra. The progenitors of SNe~IIn are required to eject a large mass of H in just a few years before core collapse, so they must retain significant amounts of H until the very ends of their lives. This fact is in direct conflict with stellar evolution models, as noted above.
There are also, however, a number of SNe that may be associated with the deaths of VMSs which have shed all of their H envelopes and possibly their He envelopes as well before finally exploding. Recall that SNe with no visible sign of H, but which do show strong He lines are Type Ib, and those which show neither H nor He are Type Ic (see Filippenko 1997 for a review of SN classification). (Type IIb is an intermediate category that is basically a Type Ib, but with a small ($\sim$0.1 $M_{\odot}$) mass of residual H left, and so the SN is seen as a Type II in the first few weeks, but then transitions to look like a Type Ib.) Together, Types Ib, Ic, and IIb are sometimes referred to as ``stripped envelope'' SNe. The stripped envelope SNe most closely related to the deaths of VMSs are the SLSNe of Type Ic, and the broad-lined Type Ic supernovae that are observed to be associated with GRBs. {\bf SLSN Ic.} The most luminous SNe known to date turn out to be of spectral Type Ic. The prototypes for this class are objects like SN~2005ap (Quimby et al.\ 2007) and a number of other cases discussed by Quimby et al.\ (2011). Although these SNe were discovered around the same time as SN~2006gy, their true nature as the most luminous Type Ic SNe wasn't recognized until a few years later. This is because they were actually located at a fairly substantial redshift ($z \simeq $0.2 to 0.3), causing their visual-wavelength spectra to appear unfamiliar. It turns out that these objects are closest to Type Ic spectra, with no H and little if any He visible in their spectra.\footnote{There is so far only one exception to this, which is SN~2008es (Miller et al.\ 2010; Gezari et al.\ 2010), whose light curve is shown in Figure~\ref{fig:lc2n}. This object is a SLSN of Type II, with broad H lines in its spectra, and is not a Type IIn.
The total mass of H in its envelope is not well constrained, however.} Once their redshifts were recognized, it became clear that these SNe were the most luminous of any SNe known, having peak absolute magnitudes around $-$22 to $-$23. These SNe are also hotter than normal Type Ic SNe, however, with the peak of their spectral energy distribution residing in the near-UV; this enhances their visual-wavelength apparent brightness (and detectability) because of the redshifts at which they are found. The hotter photospheric temperatures are likely related to their lack of H and He. Unlike SNe IIn, these objects do not have narrow lines in their spectra; their spectra exhibit broad absorption lines that are more like normal SNe. More detailed information about these objects is available in two recent reviews (Quimby et al.\ 2011; Gal-Yam 2012). The three possible physical driving mechanisms for these explosions are the same as those mentioned above for all SLSNe: (1) Interaction between a SN shock and an opaque CSM shell, (2) Magnetar birth, or (3) Pair instability SN. Even though these objects do not have narrow lines in their spectra (and therefore lack tell-tale signatures of CSM interaction), the first is a possible power source if the opaque CSM shell has a sharp outer boundary that is smaller than the diffusion radius in the CSM. If this is the case, then the shock will break out of the CSM and photons will diffuse out afterward, producing a broad-lined spectrum (Smith \& McCray 2007; Chevalier \& Irwin 2011). Magnetar-driven SNe (Kasen \& Bildsten 2010; Woosley 2010) provide another possible power source for SLSNe Ic, and so far appear to be consistent with all available observations. Recently, Inserra et al.\ (2013) have presented evidence that favors the magnetar model for these SLSNe Ic, seen in the late-time data.
The third mechanism of a pair instability SN (PISN) is perhaps the oldest viable idea for making SLSNe from very massive stars (Barkat et al.\ 1967; Bond et al.\ 1984), but so far evidence for this type of explosion remains unclear. Most of the SLSNe Ic fade too quickly to be PISNe; for their observed peak luminosities they would require $\sim$10 $M_{\odot}$ of $^{56}$Ni in order to be powered by radioactive decay, but the rate at which they are observed to fade from peak is much faster than the $^{56}$Co decay rate (Quimby et al.\ 2011). So far only one object among the SLSNe Ic, SN~2007bi, has a fading rate that is consistent with radioactivity (Gal-Yam et al.\ 2009), but the suggestion that this is a true PISN is controversial, as noted earlier (Dessart et al.\ 2012). It remains unclear if any PISNe have yet been directly detected. Originally these SNe were predicted to occur only for extremely massive stars in the early universe (with little mass loss), as discussed more extensively in the chapter by Woosley \& Heger. {\bf SNe Ic-BL associated with GRBs.} Gamma-ray bursts (GRBs) represent another example of the possible deaths of VMSs. The detailed observed properties of GRBs, the variety of GRBs (short vs.\ long duration, etc.), and their history are too rich to discuss here (see Woosley \& Bloom 2006 for a review). Instead we focus on the observable SNe that are associated with long-duration GRBs, which are thought to result from core collapse to a black hole in the death of a massive star. So far, the only type of SN explosion seen to be associated with GRBs is the so-called ``broad-lined'' Type Ic, or SN~Ic-BL. Here we must be careful with terminology. While earlier in this chapter we referred to the fact that normal SNe have broad lines, at least compared to the narrow and intermediate-width lines seen in SNe IIn, the class of SN~Ic-BL have extremely broad absorption lines in their spectra.
A normal SN typically has lines that indicate outflow speeds of $\sim$10,000 km s$^{-1}$, but SNe Ic-BL exhibit expansion speeds closer to 30,000 km s$^{-1}$, or 0.1$c$. These trans-relativistic speeds are related to the fact that a GRB has a highly relativistic jet that is seen as the GRB if we happen to be observing it nearly pole-on. Since kinetic energy goes as velocity squared, these very fast expansion speeds in SNe Ic-BL imply large explosion energies, and have led them to be referred to as ``hypernovae'' by some researchers. The reason to associate these SNe Ic-BL and GRBs with the possible deaths of VMSs is that the favored scenario for producing the relativistic jet (the ``collapsar''; see the Heger \& Woosley chapter) involves a collapse to a black hole that is thought to occur in stars with initial masses above 30 $M_{\odot}$. Although the GRBs and their afterglows are extremely luminous, the SN explosions seen as SNe Ic-BL that follow GRBs are not extremely luminous (they are near the top end of the luminosity distribution for normal SNe, with peaks of $-$19 or $-$20 mag), and certainly not as luminous as the class of SLSNe Ic discussed above. {\bf Host Galaxies.} An interesting commonality is found between SLSNe~Ic and the class of SNe~Ic-BL associated with GRBs. In addition to sharing the Ic spectral type, indicating a progenitor stripped of both its H and He layers, the two groups seem to arise preferentially in similar environments. Namely, both classes of Ic occur preferentially in relatively low-mass host galaxies with low metallicity (Neill et al.\ 2011; Modjaz et al.\ 2008). This may hint that these two classes of SNe are the endpoints of similar evolution in massive stars at low metallicity, but that some additional property helps to determine whether the object makes a successful GRB or not.
Since one normally associates stronger mass loss and stripping of the H and He layers with stronger winds (and therefore higher metallicity), the low-metallicity hosts of these SNe may hint that binary evolution plays a key role in supplying the angular momentum that is needed (especially for the production of GRB jets), with an alternative explanation relying upon chemically homogeneous evolution of rapidly rotating stars (see Yoon \& Langer 2005). In this vein, it is perhaps interesting to note that magnetars have been suggested as another possible driving source for GRBs, while magnetar birth is also a likely explanation for SLSNe Ic as noted above. This is still an active topic of current research. \section{Detected Progenitors of Type IIn Supernovae} \label{sec:4} While the previous section described inferred connections between very luminous SNe and VMSs, these connections are, however, indirect, based primarily on circumstantial evidence. For example, they rely upon the large mass of CSM needed in SNe IIn, the observed wind speeds, the requirement of extreme eruptive variability only demonstrated (to our knowledge) by evolved massive stars like LBVs, and the possible association with magnetars or collapsars. However, our most direct way to draw a connection between a SN and the mass of the star that gave rise to it is to directly detect the progenitor star itself in archival images of the explosion site taken before the SN occurred. The increase in successful cases of this in recent years is thanks in large part to the existence of archival {\it HST} images of nearby galaxies, and this has now been done for a number of normal SNe and for a small collection of Type~IIn explosions (only one of which qualifies as a super-luminous SN).
For this technique to work in identifying the progenitor star, one must be lucky\footnote{Another somewhat less direct technique for estimating the mass of a SN progenitor star is to analyze the stellar population in the nearby SN environment. The age of the surrounding stellar population provides a likely (although not necessarily conclusive) estimate of the exploded star's lifetime and initial mass. While this information can only be obtained for the nearest SNe, it can be performed after the SN fades and therefore does not require the lucky circumstance of having a pre-existing high-quality archival image.} enough to have a high-quality, deep image of the explosion site in a public archive. The first cases of a direct detection of a SN progenitor star were the very nearby explosions of SN~1987A in the Large Magellanic Cloud and SN~1993J in M81, using archival ground-based data. With the advent of {\it HST}, this technique could be pushed to host galaxies at larger distances, and a number of such cases up until 2008 were reviewed by Smartt (2009). New examples continue to be added since the Smartt (2009) review, including the very nearby Type IIb SN~2011dh in M51 (Van Dyk et al.\ 2013). Most of the progenitor detections so far (and all those discussed in the Smartt 2009 review) are for SNe II-P and IIb, all with relatively low implied initial masses ($<$20 $M_{\odot}$). The technique for identifying SN progenitors requires very precise work. Once a nearby SN is discovered, one must determine if an archival image of sufficient quality exists (it is frustrating, for example, to find that your SN occurred at a position that is right at the very edge of a CCD chip in an archival image, or just past that edge).
Then one must obtain an {\it HST} image or high-quality ground-based image (with either excellent seeing or adaptive optics) of the SN itself, in order to perform very careful and precise astrometry to pinpoint the exact position of the SN (usually the precision is a few percent of an HST pixel). The exact position of the SN must then be identified on the pre-explosion archival image of the SN site, using reference stars in common to both images (preferably at the same wavelengths), and then finally one can determine if there is a detected point source at the SN's location. If not, one can derive an upper limit to the progenitor star's luminosity and mass, which is most useful in the nearest cases where the upper limit can be quite restrictive. If there is a source detected, then it becomes a ``candidate'' progenitor, because it could also be a chance alignment, a companion star in a binary or triple system, or a host cluster. The way to tell is to wait several years and verify that this candidate progenitor source has disappeared after the SN fades beyond detectability. Once a secure detection is made, one can then use the pre-explosion image to estimate the apparent and absolute magnitudes of the star, and to estimate colors if there are multiple filters. After correcting for the effects of extinction and reddening of the progenitor (which might include the effects of unknown amounts of CSM dust that was vaporized by the SN), one can place the progenitor star on an HR diagram. One can then use single-star evolution tracks to infer a rough value for the star's initial mass, by comparing the progenitor's position on the HR diagram to the expected luminosity and temperatures at the endpoints of evolution models (note, however, that trajectories of these evolution tracks are highly sensitive to assumptions about mass loss and mixing in the models, and the 1D models do not include possible instabilities in late burning phases; see Smith \& Arnett 2014). 
The technique favors types of progenitor stars that are luminous in the filters used for other purposes (usually nearby galaxy surveys, using $R$- and $I$-band filters), allowing them to be more easily detected. For example, WR stars are the expected progenitors of at least some SNe~Ibc, but while these stars are luminous, they are also hot and therefore emit most of their flux in the UV. Compared to a red supergiant at the same distance, they are therefore less easily detected in the red $I$-band filters that are often used in surveys of nearby galaxies that populate the {\it HST} archive. Similarly, very luminous progenitors that emit much of their luminosity at visual wavelengths, like LBVs, should be relatively easy to detect at a given distance. This probably explains why we have multiple cases of LBV-like progenitors, despite the relatively small numbers of very massive stars. A central issue for understanding VMSs is whether they make normal SNe when they die, rare and unusual types of SNe (like SLSNe or Type~IIn), or if instead they have weak/failed SNe as core material and $^{56}$Ni fall back into a black hole (making them difficult or impossible to observe). A common expectation from single-star evolution models combined with core collapse studies (e.g., Heger et al.\ 2003 and references therein; see also the chapter by Woosley \& Heger) is that stars with initial masses above some threshold (for example, 30 $M_{\odot}$, although the exact value differs from one study to the next) will collapse to a black hole and will fail to make a successful bright SN explosion, unless special conditions such as very rapid rotation and envelope stripping can lead to a collapsar and GRB.
Observationally, there are at least four cases where stars more massive than 30 $M_{\odot}$ do seem to have exploded successfully, and all of these are Type~IIn (recall that these cases may be biased, because LBV-like progenitors are very bright and easier to detect than hotter stars of the same bolometric luminosity). The four cases are listed individually below. {\bf SN~2005gl.} SN~2005gl was a moderately luminous SN~IIn (Gal-Yam et al.\ 2007). Pre-explosion images showed a source at the SN position that faded below detection limits after the SN had faded (Gal-Yam \& Leonard 2009). Its high luminosity suggested that the progenitor was a massive LBV similar to P Cygni, with an initial mass of order 60 $M_{\odot}$ and a mass-loss rate shortly before core collapse of $\sim$0.01 $M_{\odot}$ yr$^{-1}$ (Gal-Yam et al.\ 2007). {\bf SN~1961V.} Another example of a claimed detection of a SN~IIn progenitor, SN~1961V, has a more complicated history because it is much closer to us and more highly scrutinized. For decades SN~1961V was considered a prototype (although the most extreme case) of giant eruptions of LBVs, as noted above, and an analog of the 19th century eruption of $\eta$ Carinae (Goodrich et al.\ 1989; Filippenko et al.\ 1995; Van Dyk et al.\ 2002). However, two recent studies (Smith et al.\ 2011a; Kochanek et al.\ 2011) argue for different reasons that SN~1961V was probably a true core-collapse SN~IIn. Both studies point out that the pre-1961 photometry of this source includes the detection of a very luminous quiescent star, as well as a possible precursor LBV-like giant eruption in the few years before the supposed core collapse. While the explosion mechanism of SN~1961V is still debated (e.g., Van Dyk \& Matheson 2012), the clear detection and post-outburst fading of its LBV progenitor is at least as reliable as the case for SN~2005gl.
SN~2005gl was shown to have faded to be about 1.5 mag fainter than its progenitor star, whereas SN~1961V is now at least 6 mag fainter than its progenitor. In any case, the luminosity of the progenitor of SN~1961V suggests an initial mass of at least 100--200 $M_{\odot}$. In the previous two cases, the SN has now faded enough that it is fainter than its detected progenitor star. The implication is that the luminous progenitor stars detected in pre-explosion images are no longer there, and are likely dead. This provides the strongest available evidence that these detected sources were indeed the stars that exploded to make the SNe we saw, and not simply a chance alignment of another unrelated star, a star cluster, or a companion star in a binary. This is not true for the next two sources, which are still in the process of fading from their explosions. We will need to wait until they fade to be sure that the candidate sources are indeed the stars that exploded. {\bf SN~2010jl.} Of the four progenitor detections discussed here, SN~2010jl is the only explosion that qualifies as a SLSN, with a peak absolute magnitude brighter than $-$20 mag. Smith et al.\ (2011c) identified a source at the location of the SN in pre-explosion {\it HST} images. The high luminosity and blue colors of the candidate progenitor suggested either an extremely massive progenitor star or a very young and massive star cluster; in either case it seems likely that the progenitor had an initial mass well above 30 $M_{\odot}$. In this case, however, the SN has not yet faded (it is still bright after 3 yr), so we will need to wait to settle the issue of whether the source was the progenitor or a host cluster.
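The magnitude comparisons above translate directly into flux ratios through the standard relation $F_2/F_1 = 10^{-0.4\,\Delta m}$; the following minimal sketch (the magnitude differences are the ones quoted above for SN~2005gl and SN~1961V) illustrates how strongly a 6 mag drop constrains the surviving flux:

```python
def flux_ratio(delta_mag):
    """Flux ratio corresponding to a magnitude difference delta_mag.

    A source that is delta_mag magnitudes fainter has its flux reduced
    by a factor of 10**(-0.4 * delta_mag).
    """
    return 10 ** (-0.4 * delta_mag)

# SN 2005gl has faded to ~1.5 mag fainter than its progenitor:
print(f"1.5 mag fainter -> {flux_ratio(1.5):.2f} of the progenitor flux")

# SN 1961V is now at least 6 mag fainter than its progenitor:
print(f"6 mag fainter   -> {flux_ratio(6.0):.4f} of the progenitor flux")
```

A 6 mag difference therefore corresponds to less than half a percent of the progenitor's flux, which is why the post-fading non-detections are such strong evidence that the detected sources were the stars that exploded.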
{\bf SN~2009ip.} Although its name says ``2009'', SN~2009ip is the most recent addition to the class of direct SN~IIn progenitor detections, because while the 2009 discovery event was a SN impostor, the same object now appears to have suffered a true SN in 2012 (Mauerhan et al.\ 2013; Smith et al.\ 2014). SN~2009ip is an exceptional case, and is discussed in more detail below (Sect.\ 5). For now, the relevant point to mention is that archival {\it HST} images obtained a decade before the initial discovery revealed a luminous point source at the precise location of the transient. If this was the quiescent progenitor star, the implied initial mass is 50--80 $M_{\odot}$ (Smith et al.\ 2010b) or $>$60 $M_{\odot}$ (Foley et al.\ 2011), depending on the assumptions used to calculate the mass. Thus, the case seems quite solid that the progenitor was indeed a VMS. Altogether, all four of these cases of possible progenitors of SNe IIn suggest progenitor stars that are much more massive than the typical red supergiant progenitors of SNe II-P (Smartt 2009). \section{Direct Detections of Pre-SN Eruptions} \label{sec:5} SNe~IIn (and SNe Ibn) require eruptive or explosive mass loss in just the few years preceding core collapse in order to have the dense CSM needed for their narrow-line spectra and high luminosity from CSM interaction. As noted above, the timescale is constrained to be within a few years beforehand, based on the observed expansion speed of the pre-shock gas and the derived radius of the shock and photosphere. Until recently, these pre-SN eruptions were mostly hypothetical, limited to conjectures supported by the circumstantial evidence that {\it something} must deposit the outflowing CSM so close to the star. However, we now have examples of SN explosions where a violent outburst was detected in the few years before a SN, and in all cases the SN had bright narrow emission lines indicative of CSM interaction.
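The few-year constraint quoted above comes from dividing the radius reached by the SN shock by the measured speed of the pre-shock CSM. A rough illustration (the radius and wind speed below are representative round numbers chosen for the example, not measurements from any specific event):

```python
# Rough ejection lead-time estimate for SN IIn-like CSM: the time before
# core collapse at which material moving at speed v_csm was ejected,
# if the SN shock overtakes it at radius r_shock.

KM_PER_AU = 1.496e8          # kilometers per astronomical unit
SECONDS_PER_YEAR = 3.156e7

def ejection_lead_time_yr(r_shock_au, v_csm_km_s):
    """Years before the SN that CSM now at r_shock_au was ejected,
    assuming it coasted outward at constant speed v_csm_km_s."""
    r_km = r_shock_au * KM_PER_AU
    return r_km / v_csm_km_s / SECONDS_PER_YEAR

# Example: CSM overtaken at ~100 AU, moving at an LBV-like 500 km/s:
t = ejection_lead_time_yr(100.0, 500.0)
print(f"ejected ~{t:.1f} yr before core collapse")
```

Slower winds or larger interaction radii push the lead time to several years, but for LBV-like speeds and the radii probed in the first months of a SN IIn, the ejection must have occurred within years of core collapse.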
The two most conclusive detections of an outburst are SN~2006jc and SN~2009ip, and they deserve special mention. SN~1961V and SN~2010mc also had pre-peak detections, although the data are less complete and the interpretations are more controversial. {\bf SN~2006jc.} SN~2006jc was the first object clearly recognized to have a brief outburst 2 years before a SN. The precursor event was discovered in 2004 and noted as a possible LBV or SN impostor. It had a peak luminosity similar to that of $\eta$ Car (absolute magnitude of $-$14), but was fairly brief and faded after only a few weeks (Pastorello et al.\ 2007). No spectra were obtained for the precursor transient source, but the SN explosion 2 years later was a Type Ibn with strong narrow emission lines of He, indicating moderately slow (1000 km s$^{-1}$) and dense H-poor CSM (Pastorello et al.\ 2007; Foley et al.\ 2007). There is no detection of the quiescent progenitor, but the star is inferred to have been a WR star based on the H-poor composition of the CSM. \begin{figure}[b] \includegraphics[scale=0.58]{fig5.eps} \caption{The pre-SN light curve of SN~2009ip, from Mauerhan et al.\ (2013).} \label{fig:09ip} \end{figure} {\bf SN~2009ip.} A much more vivid and well-documented case was SN~2009ip, mentioned earlier. It was initially discovered and studied in detail as an LBV-like outburst in 2009, again with a peak absolute magnitude near $-$14. This time, however, several spectra of the pre-SN eruptions were obtained, and these spectra showed properties similar to LBVs (Smith et al.\ 2010b; Foley et al.\ 2011). Also, a quiescent progenitor star was detected in archival {\it HST} data taken 10 yr earlier, which, as noted above, indicated a VMS progenitor. In the 5 yr preceding its discovery as an LBV-like eruption, the progenitor also showed slow variability consistent with an S Dor-like episode without a major increase in bolometric luminosity, characteristic of LBVs.
The object then experienced several brief luminosity peaks over 3 yr that looked like additional LBV eruptions (unlike SN~2006jc, detailed spectra of these progenitor outbursts were obtained), culminating in a final SN explosion in 2012 (Mauerhan et al.\ 2013a; Smith et al.\ 2014). The SN light curve was double-peaked, with an initially fainter bump ($-$15 mag) that had very broad (8000 km s$^{-1}$) emission lines probably formed in the SN ejecta photosphere, and it rose quickly 40 days later to a peak of $-$18 mag, when it looked like a normal SN~IIn (caused by CSM interaction, as the SN crashed into the slow material ejected 1--3 years earlier; see Mauerhan et al.\ 2013a and Smith et al.\ 2014). A number of detailed studies of the bright 2012 transient have now been published, although there has been some controversy about whether the 2012 event was a true core-collapse SN (Mauerhan et al.\ 2013a; Prieto et al.\ 2013; Ofek et al.\ 2013a; Smith et al.\ 2013, 2014) or not (Pastorello et al.\ 2013; Fraser et al.\ 2013; Margutti et al.\ 2013). More recently, Smith et al.\ (2014) have shown that the object continues to fade and that its late-time emission is consistent with the late-time CSM interaction seen in normal Type IIn supernovae. In any case, SN~2009ip provides us with the most detailed information about any SN progenitor for a decade preceding the SN, with a detection of a quiescent progenitor, several LBV-like precursor eruptions of two different types, and detailed high-quality spectra of the star. This object paints a very detailed picture of the violent death throes in the final years in the life of a VMS. {\bf SN~2010mc.} Ofek et al.\ (2013b) reported the discovery of a precursor outburst in the $\sim$40 days before the peak of SN~2010mc, recognized after the SN by analyzing archival data. Smith et al.\ (2013a, 2013b) showed that the light curve of SN~2010mc was nearly identical to that of the 2012 supernova-like event of SN~2009ip, to a surprising degree.
Smith et al.\ (2014) proposed that the $\sim$40 day precursor events in both SN~2009ip and SN~2010mc were in fact the SN explosions, since this is when very broad P Cygni features were seen in the spectra, and that the following rise to peak was actually due to additional luminosity generated by intense CSM interaction. In that case, the $\sim$40 day precursor event of SN~2010mc was not actually a pre-SN eruption, but the SN itself. Nevertheless, the similarity in light curves and spectra between SN~2009ip and SN~2010mc would obviously suggest that SN~2010mc probably did have a series of pre-SN LBV-like eruptions too, although those preceding events were not detected. {\bf SN~1961V.} The remarkable object SN~1961V has extensive temporal coverage of its pre-SN phases and solid detections of a luminous and highly variable progenitor, more so than any other SN. The luminous ($-$12.2 mag absolute at blue/photographic wavelengths) progenitor is well detected in data reaching back more than 20 yr preceding the SN, which include some small ($\sim$0.5 mag) fluctuations in brightness that could be S Dor-like LBV episodes. In the year before the SN, there is one detection at an absolute magnitude of roughly $-$14.5, although since it is only one epoch, we do not know if this was an LBV giant eruption or the beginning of the SN. Then in 1961 there was a $\sim$100 day plateau at almost $-$17 mag followed by a brief peak at about $-$18 mag. After this, the SN faded rapidly and has been fading ever since, except for some plateaus or humps in the declining light curve within $\sim$5 yr after peak. Currently, the suggested source at the same position is about 6 mag fainter than the progenitor, and shows H$\alpha$ emission. In chronological order, SN~1961V was therefore the first direct detection of a pre-SN eruption.
In practice, however, the significance of this has been overlooked because the 1961 event was discussed in terms of LBV eruptions (it was considered a ``super-$\eta$ Car-like event''), and was not thought to be a true SN. It is only the much more recent recognition that SN~1961V could have been a true core-collapse Type~IIn supernova (Smith et al.\ 2011a; Kochanek et al.\ 2011) that underscores the implications of the pre-1961 photometric evidence. These direct discoveries of pre-SN transient events provide strong evidence that VMSs suffer violent instabilities associated with the latest phases in a massive star's life. The extremely short timescale of only a few years probably hints at severe instability in the final nuclear burning sequences, especially Ne and O burning (Smith \& Arnett 2014; Shiode \& Quataert 2013; Quataert \& Shiode 2012), each of which lasts about 1 yr. These instabilities may be exacerbated in the most massive stars, although much theoretical work remains to be done. The increased instability at very high initial masses is certainly true in cases where the pre-SN eruptions result from the pulsational pair instability (see the chapter by Woosley \& Heger), but it may extend to other unknown nuclear burning instabilities as well (Smith \& Arnett 2014). Although the events listed above are just a few very lucky cases, they may also be merely the tip of the iceberg. Undoubtedly, continued work on the flood of new transient discoveries will reveal more of these cases. Future cases will be interesting if high-quality data can place reliable constraints on the duration, number, or luminosity of the pre-SN outbursts that will allow for a meaningful comparison with LBV-like eruptions. The limitation will be the existence of high-quality archival data over timescales of years before the SNe, but these sorts of archives are always becoming more populated and improved. When LSST arrives, it will probably become routine to detect pre-SN outbursts.
\section{Looking Forward (or Backward, Actually)} \label{sec:6} Very massive stars are very bright, and their SLSNe are even brighter. Thus, we can see them at large distances, and there is hope that we may soon be able to see light from the explosions of some of the earliest stars in the Universe. The fact that VMSs appear to suffer pre-SN instability that leads to the ejection of large amounts of mass --- which in turn enhances the luminosity of the explosion --- helps our chances of seeing the first SNe. There is an expectation that the low-metallicity environments in the early Universe may favor the formation of very massive stars because of the difficulty in cooling and fragmentation during the star-formation process. We must then ask what happens to these stars and their explosions as we move to very low metallicity. How does the physics of eruptions and explosions in the local universe translate to the low-metallicity environments of the earlier universe? Traditional expectations for massive star evolution are that lower metallicity means lower mass-loss rates (e.g., Heger et al.\ 2003), since line-driven winds of hot stars have a strong metallicity dependence. It is somewhat ironic, then, that the SNe associated with VMSs have some of the most extreme mass-loss rates (SNe~IIn and SLSNe Ic), {\it but these appear to favor host galaxies with low metallicity}. This contradicts the simple expectation that lower metallicity means lower mass loss, and the implication is that eruptive mass loss and mass transfer in binary systems may play an extremely important role. It may, in fact, dominate the observed populations of different types of SNe (Smith et al.\ 2011b). In that case, extrapolating back to low-metallicity conditions in the early universe is not so easy. Binary evolution is not well understood even in the local universe, so extrapolating to a regime where there is no data remains rather adventurous.
The main theme throughout this chapter is that VMSs seem to suffer violent eruptions that impact their evolution and drastically modify the type of SN seen. These eruptions may be very important and may actually dominate the mass lost by VMSs in the local universe, and it is important to recognize that they are probably much less sensitive to changes in metallicity than line-driven winds. The two leading candidates for the physical mechanism driving this eruptive mass loss are continuum-driven super-Eddington winds and hydrodynamic explosions. While we are not yet certain of the triggering mechanism(s) for either type of event, which may turn out to depend somehow on metallicity, the {\it physical mechanisms} that drive the mass loss are not metallicity dependent. Super-Eddington continuum-driven winds rely on electron scattering opacity to transfer radiation momentum to the gas (see the chapter by Owocki; Owocki et al.\ 2004; Smith \& Owocki 2006), and this is independent of metallicity since it only requires electrons supplied by ionized H. This occurs because absorption lines are saturated at the high densities in winds with mass-loss rates much above 10$^{-4}$ $M_{\odot}$ yr$^{-1}$ (recall that LBV eruptions typically have mass-loss rates of 0.01 $M_{\odot}$ yr$^{-1}$ or more). Non-terminal hydrodynamic explosions are driven by a shock wave, and shock waves can obviously still accelerate gas even with zero metal content. If the shocks are driven by some sort of instability in advanced nuclear burning stages (using the ashes of previous burning stages as fuel), it seems unlikely that this would depend sensitively on the initial metallicity that the star was born with. Since these eruptive mechanisms appear to be important for heavy mass loss of VMSs in the local universe, there is a good chance that they will still operate or may even be enhanced at low metallicity (Smith \& Owocki 2006).
The recent recognition that SLSNe appear to favor low-metallicity hosts (see above) would seem to reinforce this suspicion. One of the key missions for the {\it James Webb Space Telescope} ({\it JWST}) will be to detect the light of the explosions from the first stars. Given the arguments above, we should perhaps be hopeful that {\it JWST} may be able to see extremely luminous SNe from very massive stars, if they suffer similar types of pre-SN eruptive mass loss. \input{refs.tex} \end{document}
\section{Introduction} \IEEEPARstart{I}{n} status update systems, sensor nodes are widely deployed to monitor different types of physical processes. They need to continuously sample data to obtain timely status updates about the targeted process, and the sampled data are transmitted over the network to support real-time monitoring and control applications, such as environmental surveillance, smart transport, industrial control, e-health and so on. Stale data can be problematic; therefore, the freshness of data plays an important role in such systems. The age of information (AoI) has been introduced in~\cite{2012infocom,FirstWorkMultiSource} as a new performance metric to characterise data freshness from the receiver's perspective. It is defined as the time elapsed since the latest received status update was sampled. Specifically, the AoI at time $t$ is given as \begin{equation} \label{eq:AoI-definition} \Delta(t)=t-u(t), \end{equation} where $u(t)$ is the generation time of the latest sample received at the destination before time $t$. The AoI has received much attention due to its novelty in characterising the timeliness of information, and it has been widely studied as a concept, a metric, and a tool in a variety of communication systems~\cite{BookConceptTool,MagazineIoT}. Many works focused on AoI and its variants in different queueing systems, studying statistical properties~\cite{2012infocom,estimationMultiSource,DistributionISITJournal}, and exploring the impact of queueing disciplines~\cite{Queue2012,QueueLCFS,QueueLCFSmultihop}, transmission priority~\cite{queuePriority}, packet deadlines~\cite{queuepacketddl}, and buffer sizes and packet replacement~\cite{QueueBufferMilcom} on the performance of AoI. In addition to the fundamental-level research works, AoI-oriented scheduling and optimisation problems have also been extensively studied in the design of freshness-aware applications.
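The sawtooth dynamics implied by \eqref{eq:AoI-definition} can be illustrated with a short simulation sketch (a minimal illustration of the definition, not code from any of the cited works):

```python
def aoi_trajectory(receive_times, generation_times, t_grid):
    """Evaluate the age of information Delta(t) = t - u(t) on a time grid,
    where u(t) is the generation time of the freshest update received by t."""
    ages = []
    for t in t_grid:
        # generation times of all updates received at or before t
        received = [g for r, g in zip(receive_times, generation_times) if r <= t]
        u_t = max(received) if received else 0.0  # assume an update at t = 0
        ages.append(t - u_t)
    return ages

# Updates generated at times 0, 2, 5 and received after a 1-second delay:
gen = [0.0, 2.0, 5.0]
rec = [1.0, 3.0, 6.0]
ages = aoi_trajectory(rec, gen, t_grid=[0.0, 1.0, 2.5, 4.0, 6.0])
print(ages)  # -> [0.0, 1.0, 2.5, 2.0, 1.0]
```

Between receptions the age grows linearly with $t$; each delivery resets it to the elapsed time since that sample was generated, producing the familiar sawtooth.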
Optimal link activation and sampling problems for AoI minimisation were investigated in single-hop~\cite{scheduleMinMaxJ} and multi-hop networks~\cite{mineWCL}. AoI-based scheduling policies were proposed in~\cite{EHBatterySize,EHAoIscheduling} to improve energy efficiency in energy harvesting networks. The joint optimisation of trajectory design and user scheduling was explored in unmanned aerial vehicle (UAV) networks~\cite{UAVavergeAgeTVT,mineUAV}. Furthermore, machine learning-based algorithms were applied to solve the above age-optimal problems more efficiently~\cite{UAVdeeplearning,UAVPacketLossQLearning,UAVlearning}. The age given in~\eqref{eq:AoI-definition} increases linearly with time until a new status update is received, which means that the concept of AoI is independent of the statistical variations inherent in the underlying source data. However, in some practical cases, old information with a large age may still have value, while new information with a low age may have less value. For example, some information (e.g., the node mobility) updates very frequently over time, so even fresh samples may hold little valuable information; other information (e.g., the temperature) updates slowly, so old samples may be sufficient for further analysis. This means that the age of information cannot fully capture the performance degradation in information quality caused by the time lapse between status updates, or the correlation properties the underlying random process might exhibit. In this regard, AoI may not be a perfect metric. Therefore, more systematic approaches should be investigated to quantify the information value. A general way to measure the information value is to utilise non-linear AoI functions~\cite{NonlinearSurveySY}.
The authors in~\cite{QueueSYUpdatWait} proposed the concept of the ``age penalty'', which maps the AoI to a non-linear and non-decreasing penalty function to evaluate the level of ``dissatisfaction'' related to the outdated information. Closed-form expressions of non-linear age under different queueing models were derived in~\cite{NonlinearEH} for energy harvesting networks. The authors in~\cite{NonlinearVoUDJ} considered the auto-correlation of the random process and investigated exponential and logarithmic AoI penalty functions. Furthermore, information-theoretic AoI research has also been widely discussed to provide the theoretical interpretation of non-linear age functions. The mean square error (MSE) in remote estimation avoids this linearity and has been extensively utilised to measure the information value~\cite{estimationMarkovSource,estimationWienerProcess,estimationOUprocess,estimationContext-aware,estimation+correlation,estimationAllerton}. In~\cite{estimationMarkovSource}, the authors defined a metric called the ``effective age'' which is increasing with the estimation error, and studied the optimal scheduling problem with the aim of minimising MSE for remote estimation of the Markov data source. The relationship between AoI and the estimation error was explored in the context of two Markov processes: the Wiener process~\cite{estimationWienerProcess} and the Ornstein-Uhlenbeck process~\cite{estimationOUprocess}. In~\cite{estimationContext-aware}, the authors defined a context-aware metric called the ``urgency of information'' which can be used to describe both the non-linear performance degradation and the context dependence of the Markov status update system. The timely updating strategy for two correlated information sources was investigated in~\cite{estimation+correlation} to minimise the estimation error. Moreover, the conditional entropy was used in~\cite{MIConditionalEntropy} to evaluate the staleness of data for estimation.
In~\cite{SPAWC}, the mutual information was utilised to characterise the timeliness of information, and the authors studied the optimal sampling policy for Markov models. Despite these contributions, hidden Markov models have not been explicitly treated in related works. In practical applications, the noise, interference, errors or other impairments can lead to severe performance degradation. This means that the status updates generated at the source can be negatively affected, and may be hidden from observation when they are delivered to the receiver. However, existing works only treat Markov models in which variables are assumed to be directly visible at the destination node, and the timeliness of the system only relates to the most recently received status update. Against this background, we are motivated to develop a general value of information (VoI) framework for hidden Markov models to characterise how valuable the status updates are at the receiver. In our previous work~\cite{mineGlobecom}, we defined the basic notion of the information value and started to look at the Ornstein-Uhlenbeck (OU) process. In this paper, we extend the basic model and go into more depth with regard to different sampling policies. The contributions of this paper are given as follows: \begin{itemize} \item A VoI framework is formalised for hidden Markov models. The VoI is defined as the mutual information between the current status and a dynamic sequence of past observations, which gives the theoretical interpretation of the reduction in uncertainty in the current (unobserved) status of a hidden process given that we have noisy measurements. \item The VoI is explored in the context of one of the most important hidden Markov models: the noisy Ornstein-Uhlenbeck process, and its closed-form expressions are derived. \item The VoI with different sampling policies is investigated. For uniform sampling, simplified VoI expressions are derived in both large and small noise regimes.
For random sampling, the simplified VoI expression is derived in the small noise regime, and the probability density and the cumulative distribution of the worst-case VoI are analysed in a particular case: the M/M/1 queueing model. \item Numerical results are provided to verify the theoretical analysis. The effects of noise, the number of observations, the sampling rate and correlation on the VoI and its statistical properties are discussed. The performance of VoI for Markov and hidden Markov models is also presented. \end{itemize} The remainder of this paper is organised as follows. The VoI formalism for hidden Markov models is given in Section II. The VoI for a specific hidden Markov model (the noisy OU process) is analysed in Section III. The VoI with uniform and random sampling policies is explored in Sections IV and V, respectively. Numerical results and analysis are provided in Section VI. Conclusions are drawn in Section VII. \section{Value of Information Formalism} \subsection{Definition} We consider a status update system where the source node continuously monitors a random process and samples data to get timely status updates of the targeted process, and these time-stamped messages will be transmitted via the communication system to the destination node for further analysis. As communication resources are limited, we assume that each status update experiences a transmission delay before it is received by the destination. We denote $\{X_t\}$ as the random process under observation at the source node. Here, the time variable $t$ can be either continuous or discrete. Denote $(t_i,X_{t_i})$ as the message which is generated at an arbitrary time $t_i$, and contains the corresponding value $X_{t_i}$ of the underlying random process. The status update is received by the destination node at time $t'_i$ with $t'_i > t_i$. The observations at the receiver are recorded in the observed random process $\{Y_t\}$ where $Y_{t'_i}$ is the observation corresponding to $X_{t_i}$.
For the given time period $(0,t)$, denote $n$ as the index of the most recent data received at time $t'_n$ with $t'_n<t \le t'_{n+1}$. In this paper, we define the value of information as the mutual information between the current status of the underlying random process at the source and a dynamic sequence of past observations captured by the receiver. For the given time instants, the general definition of VoI is given as \begin{equation} \label{eq:general definition} v(t) = I({X_{t}};{Y_{t'_{n}}}, \cdots ,{Y_{t'_{n-m+1}}}), \quad t>t'_n, \end{equation} which is conditioned on times $\{t'_i\}$. Here, $n$ is the total number of recorded observations during the time period $(0,t)$. We look back in time and use a dynamic time window containing the most recent $m$ of $n$ samples ($1 \le m \le n$) to measure the information value of the current status $X_t$ of a hidden process. \subsection{VoI for Hidden Markov Models} In the Markov model, the random process $\{X_t\}$ is directly visible, and the observations are also Markovian, i.e., $Y_{t_i'}=X_{t_i}$, for all $1 \le i \le n$. In this case, the VoI can be simplified to~\cite{SPAWC} \begin{equation} v(t) = I(X_t;X_{t_n}),\quad t>t'_n. \end{equation} The VoI in the Markov model is independent of the length of time window $m$ and only depends on the most recent single status update. For hidden Markov models (Fig.~\ref{fig:HMM}), the observations at the receiver may be different from the initial value, i.e., $Y_{t_i'} \not= X_{t_i}$, but where \begin{equation} \operatorname{P}[Y_{t_i'}\in A | X_{t_1},\ldots,X_{t_i}] = \operatorname{P}[Y_{t_i'}\in A | X_{t_i}] \end{equation} for all admissible $A$. Hence, the initial samples $\{X_{t_i}\}$ are invisible at the receiver. 
In this case, we have \begin{equation} I({X_{t}};{Y_{t'_{n}}}, \cdots ,{Y_{t'_{n-m+1}}}) \ge I({X_{t}};{Y_{t'_{n}}}, \cdots ,{Y_{t'_{n-m+2}}}), \end{equation} and \begin{equation} \label{eq:MM upper bound} \begin{aligned} v(t) &= h(X_t) - h(X_t|{Y_{t'_{n}}}, \cdots ,{Y_{t'_{n-m+1}}})\\ &\le h(X_t) - h(X_t|{Y_{t'_{n}}}, \cdots ,{Y_{t'_{n-m+1}}},X_{t_n})\\ &= h(X_t) - h(X_t|X_{t_n})\\ &= I(X_t;X_{t_n}) \end{aligned} \end{equation} for $2 \le m \le n$. We find that the VoI increases with the length of the time window $m$ and converges when more past observations are used. Moreover, the VoI in the Markov model can be regarded as the upper bound of the VoI in the hidden Markov model, which illustrates that the lack of a direct route to observe $\{X_t\}$ reduces the information value. The difference between the VoI in the Markov model and its counterpart in the hidden Markov model can be expressed as \begin{equation} \label{eq:correction} \begin{aligned} & {I}({X_t};{X_{{t_n}}}) - {I}({X_t};{Y_{t{'_n}}}, \cdots ,{Y_{t{'_{n - m + 1}}}})\\ = &h({X_t}|{Y_{t{'_n}}}, \cdots ,{Y_{t{'_{n - m + 1}}}}) - h({X_t}|{X_{{t_n}}})\\ =& h({X_t}|{Y_{t{'_n}}}, \cdots ,{Y_{t{'_{n - m + 1}}}}) - h({X_t}|{X_{{t_n}}},{Y_{t{'_n}}}, \cdots ,{Y_{t{'_{n - m + 1}}}})\\ = & I({X_t};{X_{{t_n}}}|{Y_{t{'_n}}}, \cdots ,{Y_{t{'_{n - m + 1}}}}). \end{aligned} \end{equation} This reduction can be interpreted as the ``correction'' which captures the VoI gap due to the indirect observation in the hidden Markov model. In other words, we can think of the VoI for the hidden Markov model as the VoI for the Markov model minus the correction. The ``correction'' can be quantified by the mutual information between the current status $X_t$ and the most recent (unobserved) status update $X_{t_n}$ conditioned on the knowledge of a sequence of past observations $\{{Y_{t{'_n}}}, \cdots ,{Y_{t{'_{n - m + 1}}}}\}$.
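The monotonicity in $m$ and the Markov upper bound above can be checked numerically on a jointly Gaussian example. The sketch below is illustrative only: the exponentially decaying correlation function, the noise variance \verb|sigma_n2| and all time instants are arbitrary choices, not taken from the paper.

```python
import numpy as np

# Illustrative jointly Gaussian example: stationary process with
# Cov(X_t, X_s) = var * exp(-lam |t - s|), observed through additive
# Gaussian noise of variance sigma_n2 (assumed values).
var, lam, sigma_n2 = 1.0, 0.7, 0.3
t_s = np.array([0.3, 1.1, 2.0, 2.6])      # past sampling times, t_n = 2.6
t = 3.0                                   # current time
Sigma = var * np.exp(-lam * np.abs(t_s[:, None] - t_s[None, :]))
c = var * np.exp(-lam * (t - t_s))        # Cov(X_t, X_{t_i})

def mi_with_last(m):
    # I(X_t; last m noisy observations) = 0.5 log(Var[X_t] / Var[X_t | Y])
    S = Sigma[-m:, -m:] + sigma_n2 * np.eye(m)
    cond = var - c[-m:] @ np.linalg.solve(S, c[-m:])
    return 0.5 * np.log(var / cond)

vals = [mi_with_last(m) for m in range(1, len(t_s) + 1)]
mi_markov = -0.5 * np.log(1 - np.exp(-2 * lam * (t - t_s[-1])))  # I(X_t; X_{t_n})
assert all(a <= b + 1e-12 for a, b in zip(vals, vals[1:]))  # more observations, more VoI
assert all(v < mi_markov for v in vals)                     # Markov upper bound
```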
\begin{figure} \centering \includegraphics[width=7cm]{HMM.eps} \caption{Temporal evolution of hidden Markov models.} \label{fig:HMM} \end{figure} \section{VoI for a Noisy OU Process} \subsection{Noisy OU Process Model} In this section, we consider a particular case of a noisy Ornstein–Uhlenbeck process to show how the proposed VoI framework can be applied in the hidden Markov model. The underlying OU process $\{X_t\}$ satisfies the following stochastic differential equation (SDE) \begin{equation} \label{eq:OU SDE} \operatorname{d}\!X_t = \kappa (\theta-X_t) \operatorname{d}\!t + \sigma\operatorname{d}\!W_t \end{equation} where $\{W_t\}$ is standard Brownian motion, $\kappa$ is the rate of mean reversion, $\theta$ is the long-term mean, and $\sigma$ is the volatility of the random fluctuation. We assume that the initial value $X_0$ is normally distributed with $\mathcal{N}({\theta},\frac{\sigma^2}{2\kappa})$. The OU process is a stationary Gauss–Markov process which can represent many practical applications. For example, it can be used to model the mobility of the node which is anchored to the point $\theta$ but experiences positional disturbances. For any $t$, the variable $X_t$ is normally distributed with mean and variance: \begin{equation} \operatorname{E}[{X_t}] = \theta, \quad \operatorname{Var}[X_t] = \frac{\sigma^2}{2\kappa}. \end{equation} $X_t$ conditioned on $X_s$ is also Gaussian with mean and variance: \begin{equation} \begin{aligned} &\operatorname{E}[{X_t}|{X_s}] = \theta + ({X_s} - \theta ){e^{ - \kappa (t - s)}},\\ &\operatorname{Var}[{X_t}|{X_s}] = \frac{\sigma^2}{2\kappa}\left(1-e^{-2\kappa (t-s)}\right). \end{aligned} \end{equation} The covariance of two variables is given by \begin{equation}\label{eq:cov xts} \operatorname{Cov}[{X_t},{X_s}] = \frac{{{\sigma ^2}}}{{2\kappa }}{e^{ - \kappa |t - s|}}. \end{equation} We assume that the underlying OU process $\{X_t\}$ is observed through an additive noise channel. 
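The stationary mean, variance and covariance stated above can be verified by simulation. The sketch below (illustrative parameter values, not from the paper) uses the exact Gaussian transition of the OU process rather than an Euler discretisation of the SDE.

```python
import numpy as np

# Simulate the OU process with its exact Gaussian transition:
# X_{t+dt} = theta + (X_t - theta) e^{-kappa dt} + Gaussian step,
# then check E[X] = theta, Var[X] = sigma^2/(2 kappa) and the
# exponentially decaying covariance. Parameters are illustrative.
rng = np.random.default_rng(0)
kappa, theta, sigma = 1.0, 2.0, 1.0
var = sigma**2 / (2 * kappa)                  # stationary variance
dt, n = 0.1, 200_000
decay = np.exp(-kappa * dt)
step_sd = np.sqrt(var * (1 - decay**2))       # conditional std over one step
noise = rng.standard_normal(n)
x = np.empty(n)
x[0] = theta + np.sqrt(var) * noise[0]        # start in stationarity
for i in range(1, n):
    x[i] = theta + (x[i-1] - theta) * decay + step_sd * noise[i]

assert abs(x.mean() - theta) < 0.05           # E[X_t] = theta
assert abs(x.var() - var) < 0.05              # Var[X_t] = sigma^2/(2 kappa)
lag = 5                                       # time lag 0.5
emp_cov = np.cov(x[:-lag], x[lag:])[0, 1]
assert abs(emp_cov - var * np.exp(-kappa * lag * dt)) < 0.05
```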
Therefore, this noisy OU model constitutes a hidden Markov model with observations defined as \begin{equation} Y_{t_i'}=X_{t_i}+N_{t'_i}. \end{equation} Here, $\{N_{t}\}$ is a noise process, and $N_{t'_i}$ is its value at time $t'_i$. In practice, it can be used to represent the measurement noise or error that corrupts the status update $X_{t_i}$. We assume that $\{N_{t'_{i}}\}$ are independent and identically distributed (i.i.d.) Gaussian variables with zero mean and constant variance ${{\sigma}_n^2}$. Let the $m$-dimensional vector $\bm{X} = {[{X_{t_{n-m+1}}},\cdots,{X_{t_n}}]^{\operatorname{T}}}$ denote the sequence of status updates sampled by the source node, and its covariance matrix is given by \begin{multline} \label{eq:cov x} {\mathbf{\Sigma_X} }= \\ {\left[ {\begin{array}{*{20}{c}} {\operatorname{Cov}[{X_{{t_{n-m+1}}}},{X_{{t_{n-m+1}}}}]}& \cdots &{\operatorname{Cov}[{X_{{t_{n-m+1}}}},{X_{{t_n}}}]}\\ \vdots & \ddots & \vdots \\ {\operatorname{Cov}[{X_{{t_n}}},{X_{{t_{n-m+1}}}}]}& \cdots &{\operatorname{Cov}[{X_{{t_n}}},{X_{{t_n}}}]} \end{array}} \right]}. \end{multline} Let vector $\bm{Y} = {[Y_{t'_{n-m+1}},\cdots,Y_{t'_n}]^{\operatorname{T}}}$ denote the corresponding set of observations recorded at the receiver. Similarly, the associated noise samples are captured in vector $\bm{N} = {[N_{t'_{n-m+1}},\cdots,{N_{t'_n}}]^{\operatorname{T}}}$ with the covariance matrix \begin{equation} \label{eq:cov n} \mathbf{\Sigma_N}=\sigma_n^2\mathbf{I}, \end{equation} where $\mathbf{I}$ is the identity matrix. Therefore, the observations of the noisy OU process can be collectively represented by \begin{equation} \label{eq:OU hidden mapping} \bm{Y}=\bm{X}+\bm{N}. \end{equation} \subsection{VoI for the Noisy OU Process} Based on the model given above, we can state the following main result of this section.
\begin{proposition} \label{prop:1} Define the $m$-dimensional matrix $\mathbf{A} = \sigma_n^2\mathbf{\Sigma}^{ - 1}_{\mathbf{X}} +\mathbf{I}$, and denote $\mathbf A_{ij}$ as the $(m-1)\times (m-1)$ matrix constructed by removing the $i$th row and the $j$th column of the matrix $\mathbf A$. The VoI for the noisy OU process defined above can be written as \begin{multline} \label{eq:prop. 1 general voi} v(t) = \frac{1}{2}\log \bigg(\frac{{1 }}{{1 - {e^{ - 2\kappa (t - {t_n})}}}}\bigg) \\ - \frac{1}{2}\log \bigg(1 + \frac{{{1 }}}{{ \left(e^{2\kappa (t-t_n)} - 1\right) }} \frac{\det(\mathbf{A}_{mm})}{\gamma\det(\mathbf{A})}\bigg). \end{multline} Here, $\gamma$ denotes the ratio of the variance of the OU process to the variance of the noise, i.e., \begin{equation} \label{eq:gamma snr} \gamma = \frac{\operatorname{Var}[X_{t_i}]}{\operatorname{Var}[N_{t'_i}]}=\frac{{\sigma ^2}}{{{2\kappa \sigma _n^2}}}. \end{equation} \end{proposition} \begin{IEEEproof} See appendix~\ref{appendix:voi}. \end{IEEEproof} It is easy to show that the first logarithmic term in~\eqref{eq:prop. 1 general voi} represents the VoI for the Markov OU model $X_t$. The remainder quantifies a ``correction'' to the VoI of the hidden process that arises due to the indirect observation of the process through the noisy channel, and it evaluates the result of~\eqref{eq:correction} in the example of the OU model. Note that both $\mathbf{A}$ and $\mathbf{A}_{mm}$ are positive semi-definite, thus the second logarithmic term in~\eqref{eq:prop. 1 general voi} is non-negative. The parameter $\gamma$ gives a comparison between the randomness in the underlying OU process and the noise process in the communication channel, and it is analogous to the signal-to-noise ratio (SNR) in communication systems. \subsection{Results for a Single Observation} The result given in Proposition~\ref{prop:1} is general.
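Proposition 1 can be checked numerically against the direct Gaussian identity $I(X_t;\bm{Y})=\frac{1}{2}\log\left(\operatorname{Var}[X_t]/\operatorname{Var}[X_t|\bm{Y}]\right)$. The sketch below is illustrative (arbitrary parameter values, not from the paper) and also confirms that the $m=1$ case collapses to the single-observation expression of the next subsection.

```python
import numpy as np

# Check Proposition 1 against a direct Gaussian mutual-information
# computation; all parameter values are illustrative.
kappa, sigma, sigma_n = 0.8, 1.0, 0.5
gamma = sigma**2 / (2 * kappa * sigma_n**2)      # ratio of process/noise variance
var = sigma**2 / (2 * kappa)                     # stationary OU variance
t_s = np.array([0.3, 1.1, 2.0, 2.6])             # sampling times, t_n = 2.6
t = 3.0                                          # current time, t > t_n

for m in range(1, len(t_s) + 1):
    ts = t_s[-m:]
    Sigma_X = var * np.exp(-kappa * np.abs(ts[:, None] - ts[None, :]))
    # Proposition 1: VoI via det(A_mm) / (gamma det(A))
    A = sigma_n**2 * np.linalg.inv(Sigma_X) + np.eye(m)
    det_Amm = np.linalg.det(A[:m-1, :m-1]) if m > 1 else 1.0
    ratio = det_Amm / (gamma * np.linalg.det(A))
    e2 = np.exp(2 * kappa * (t - ts[-1]))
    v_prop = 0.5 * np.log(1 / (1 - 1 / e2)) - 0.5 * np.log(1 + ratio / (e2 - 1))
    # Direct route through the Gaussian conditional variance
    c = var * np.exp(-kappa * (t - ts))          # Cov(X_t, X_{t_i})
    cond = var - c @ np.linalg.solve(Sigma_X + sigma_n**2 * np.eye(m), c)
    assert abs(v_prop - 0.5 * np.log(var / cond)) < 1e-10

# m = 1: the single-observation closed form agrees with the direct route
q = np.exp(-2 * kappa * (t - t_s[-1]))
v_cor = -0.5 * np.log(1 - gamma / (1 + gamma) * q)
c1 = var * np.exp(-kappa * (t - t_s[-1]))
v1_direct = 0.5 * np.log(var / (var - c1**2 / (var + sigma_n**2)))
assert abs(v_cor - v1_direct) < 1e-12
```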
In this subsection, we consider a special case ($m=1$) which gives the information about how much value the most recently received observation contains about the current status of a random process. In this case, the VoI can be calculated by replacing the $m$-dimensional vector $\bm{Y}$ with the single variable $Y_{t'_n}$ in~\eqref{eq:general definition}, which leads to the following corollary. \begin{corollary}\label{cor:1} The VoI for the noisy OU process with a single observation is given by \begin{equation} \label{eq:general voi 1 dimension} v(t) = - \frac{1}{2}\log \bigg(1 - \frac{\gamma }{{1 + \gamma }}{e^{ - 2\kappa (t - {t_n})}}\bigg). \end{equation} \end{corollary} \begin{IEEEproof} This result follows directly from Proposition~\ref{prop:1} where $\det(\mathbf{A}_{mm})\coloneqq 1$. \end{IEEEproof} This corollary shows that for fixed $t_n$, as time $t$ increases, the VoI will decrease and the newly received update can cause a corresponding reset of $v(t)$. This is somewhat similar to the concept of AoI, which is equal to $t'_n-t_n$ at the moment the $n$th update arrives and then increases with unit slope until the next update comes. However, the VoI will decrease like $O(e^{-2\kappa t})$ until a new status update is received. The parameter $\kappa$ can be used to represent how correlated the updates are. Therefore, compared with AoI, the proposed VoI framework not only reflects the time evolution of a random process, but also captures the correlation property of the underlying data source and the noise in the transmission channel. \section{Noisy OU Model with Uniform Sampling} \label{sec:uni} Corollary~\ref{cor:1} looks at the special case when the length of the time window $m=1$ to illustrate the VoI concept clearly. When $m>1$, the covariance matrix $\mathbf{\Sigma_X}$ given in Proposition~\ref{prop:1} is closely related to the sampling interval of the status updates. To explore this result further, we will study how the sampling policy affects the VoI. 
In this section, we first consider the case where the sampling intervals are uniform. We assume that status updates of the OU process under observation are generated at regular times $t_i=i\Delta t$, where the constant $\Delta t$ ($\Delta t>0$) denotes the fixed sampling interval. For the OU process with uniform sampling, the $m$ samples in $\bm{X}$ form a first-order autoregressive AR(1) process. Let $\rho=e^{ - \kappa \Delta t}$. The inverse covariance matrix of $\bm{X}$ is a tridiagonal matrix, which is written as~\cite{irregularAR1} \begin{equation} \label{eq:matrix x^-1} \mathbf{\Sigma}^{ - 1}_{\mathbf{X}} = \frac{{2\kappa }}{{{\sigma ^2}(1 - {\rho ^2})}}\left[ {\begin{array}{*{20}{c}} 1&{ - \rho }&{}&{}&{}\\ { - \rho }&{1 + {\rho ^2}}&{ - \rho }&{}&{}\\ {}&{ - \rho }& \ddots & \ddots &{}\\ {}&{}& \ddots &{1 + {\rho ^2}}&{ - \rho }\\ {}&{}&{}&{ - \rho }&1 \end{array}} \right]. \end{equation} Then the matrix $\mathbf{A}$ in Proposition~\ref{prop:1} is given by \begin{equation} \mathbf{A} = \sigma_n^2\mathbf{\Sigma}^{ - 1}_{\mathbf{X}} +\mathbf{I}= \left[ {\begin{array}{*{20}{c}} a&b&{}&{}&{}\\ b&c&b&{}&{}\\ {}&b& \ddots & \ddots &{}\\ {}&{}& \ddots &c&b\\ {}&{}&{}&b&a \end{array}} \right], \end{equation} where \begin{equation} \label{eq:uni paprameter abc} a = \frac{{1}}{{{\gamma}(1 - {\rho ^2})}} + 1,\quad b = \frac{{-\rho }}{{{\gamma}(1 - {\rho ^2})}},\quad c = \frac{{1 + {\rho ^2}}}{{{\gamma}(1 - {\rho ^2})}} + 1. \end{equation} It is clear that the matrix $\mathbf{A}$ is also tridiagonal, so its determinant can be calculated by cofactor expansion and expressed by a recurrence relation~\cite{TridiagnalPossion}.
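The tridiagonal inverse covariance above is a standard AR(1) fact and can be verified numerically; the sketch below uses illustrative parameter values.

```python
import numpy as np

# Verify that, for uniform sampling t_i = i*dt, the inverse of the OU
# sample covariance matches the stated tridiagonal form. Illustrative values.
kappa, sigma, dt, m = 0.8, 1.0, 0.5, 5
var = sigma**2 / (2 * kappa)
rho = np.exp(-kappa * dt)
t_s = dt * np.arange(m)
Sigma_X = var * np.exp(-kappa * np.abs(t_s[:, None] - t_s[None, :]))

# Claimed tridiagonal inverse: diag (1, 1+rho^2, ..., 1+rho^2, 1),
# off-diagonals -rho, scaled by 2 kappa / (sigma^2 (1 - rho^2))
P = np.diag([1.0] + [1 + rho**2] * (m - 2) + [1.0])
P += np.diag([-rho] * (m - 1), 1) + np.diag([-rho] * (m - 1), -1)
P *= 2 * kappa / (sigma**2 * (1 - rho**2))

assert np.allclose(np.linalg.inv(Sigma_X), P)
```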
Therefore, in this case, we have \begin{multline} \label{eq: uni A} \det (\mathbf{A}) = \frac{{{{( - 1)}^m}{b^{m - 1}}}}{{\sqrt {{c^2} - 4{b^2}} }}\bigg({a^2}(\lambda _1^{m - 1} - \lambda _2^{m - 1}) +\\ 2ab(\lambda _1^{m - 2} - \lambda _2^{m - 2}) + {b^2}(\lambda _1^{m - 3} - \lambda _2^{m - 3})\bigg) \end{multline} \begin{multline} \label{eq: uni Amm} \det (\mathbf{A}_{mm}) = \frac{{{{( - 1)}^{m - 1}}{b^{m - 2}}}}{{\sqrt {{c^2} - 4{b^2}} }}\bigg(ac(\lambda _1^{m - 2} - \lambda _2^{m - 2}) + \\ (ab + bc)(\lambda _1^{m - 3} - \lambda _2^{m - 3}) + {b^2}(\lambda _1^{m - 4} - \lambda _2^{m - 4})\bigg) \end{multline} where \begin{equation} \label{eq:uni lambda12} {\lambda _1} = \frac{{ - c + \sqrt {{c^2} - 4{b^2}} }}{{2b}},\quad {\lambda _2} = \frac{{ - c - \sqrt {{c^2} - 4{b^2}} }}{{2b}}. \end{equation} \begin{IEEEproof} See appendix~\ref{appendix:characteristic equation}. \end{IEEEproof} Thus, we have derived the closed-form expression of the determinant ratio in~\eqref{eq:uni B determiant ratio}, which can help further explore how the VoI relates to the sampling interval $\Delta t$, correlation parameter $\kappa$ and channel condition parameter $\sigma_n^2$. \begin{figure*} \begin{equation} \begin{aligned} \label{eq:uni B determiant ratio} \frac{{\det ({\mathbf{A}_{mm}})}}{{\gamma\det (\mathbf{A})}} &= \frac{1-\rho^2}{\rho}\cdot\frac{{ac(\lambda _1^{m - 2} - \lambda _2^{m - 2}) + (ab + bc)(\lambda _1^{m - 3} - \lambda _2^{m - 3}) + {b^2}(\lambda _1^{m - 4} - \lambda _2^{m - 4})}}{{{a^2}(\lambda _1^{m - 1} - \lambda _2^{m - 1}) + 2ab(\lambda _1^{m - 2} - \lambda _2^{m - 2}) + {b^2}(\lambda _1^{m - 3} - \lambda _2^{m - 3})}}\\ \end{aligned} \end{equation} \end{figure*} \subsection{High SNR Regime} The parameter $\gamma$ given in~\eqref{eq:gamma snr} can largely affect the VoI in the hidden Markov model. If $\gamma$ is large, the underlying latent process is dominant; otherwise, the noise process is dominant. 
Therefore, it is interesting to explore the VoI with different levels of noise. In this subsection, we consider the high SNR regime in which the small variance of noise leads to large $\gamma$, i.e., $\frac{1}{\gamma} \to 0$. In this case, we can state the following result. \begin{corollary} \label{cor: uni high snr} For the noisy OU process with uniform sampling, the VoI in the high SNR regime can be given as \begin{multline} \label{eq:uni appro. high snr} v(t)= \frac{1}{2}\log \bigg(\frac{1}{{1 - {e^{ - 2\kappa (t - {t_n})}}}}\bigg) \\ - \frac{1}{2}\log \bigg[1 + \frac{1}{{{e^{2\kappa (t - {t_n})}} - 1}} \bigg(\frac{1}{\gamma } - \frac{1}{{(1 - {\rho ^2}){\gamma ^2}}}\bigg)\bigg]+ O(\frac{1}{{{\gamma^3 }}}). \end{multline} \end{corollary} \begin{IEEEproof} We substitute~\eqref{eq:uni paprameter abc} and~\eqref{eq:uni lambda12} into~\eqref{eq:uni B determiant ratio}, and expand this expression at the point $\frac{1}{\gamma}=0$. Hence, the series expansion of~\eqref{eq:uni B determiant ratio} in the high SNR regime can be given as \begin{equation} \label{eq:uni appro ratio high snr} \frac{\det(\mathbf{A}_{mm})}{\gamma\det(\mathbf{A})}= \frac{1}{\gamma} - \frac{{{1}}}{{{{(1 - {\rho ^2})}}{\gamma^2}}} + O(\frac{1}{{{\gamma^3 }}}). 
\end{equation} In this case, the VoI ``correction'' can be written as \begin{equation} \label{eq:uni high snr correction} \begin{aligned} &\frac{1}{2}\log \bigg(1 + \frac{{{1 }}}{{ \left(e^{2\kappa (t-t_n)} - 1\right) }} \frac{\det(\mathbf{A}_{mm})}{\gamma\det(\mathbf{A})}\bigg)\\ =&\frac{1}{2}\log \bigg[1 + \frac{1}{{{e^{2\kappa (t - {t_n})}} - 1}} \bigg(\frac{1}{\gamma } - \frac{1}{{(1 - {\rho ^2}){\gamma ^2}}}+ O(\frac{1}{{{\gamma^3 }}})\bigg)\bigg]\\ = &\frac{1}{2}\log \bigg[1 + \frac{1}{{{e^{2\kappa (t - {t_n})}} - 1}} \bigg(\frac{1}{\gamma } - \frac{1}{{(1 - {\rho ^2}){\gamma ^2}}}\bigg)\bigg] \\ &+ \frac{1}{2}\log \bigg[1 + \frac{O(\frac{1}{{{\gamma^3 }}})}{1 + \frac{1}{{{e^{2\kappa (t - {t_n})}} - 1}} \bigg(\frac{1}{\gamma } - \frac{1}{{(1 - {\rho ^2}){\gamma ^2}}}\bigg)}\bigg]\\ =&\frac{1}{2}\log \bigg[1 +\frac{1}{{{e^{2\kappa (t - {t_n})}} - 1}} \bigg(\frac{1}{\gamma } - \frac{1}{{(1 - {\rho ^2}){\gamma^2}}}\bigg)\bigg]+O(\frac{1}{\gamma^3}). \end{aligned} \end{equation} Therefore, the result of this corollary is obtained directly by substituting~\eqref{eq:uni high snr correction} into~\eqref{eq:prop. 1 general voi}. \end{IEEEproof} Similar to Proposition~\ref{prop:1}, the first logarithmic term in Corollary~\ref{cor: uni high snr} represents the VoI for the non-noisy Markov OU process $\{X_t\}$, and the remainder quantifies the ``correction'', which is expressed as a power series in $\frac{1}{\gamma}$. We find that the expression of $v(t)$ does not depend on $m$ (the number of samples used). This is because when $\gamma$ is large, the Markov OU randomness is dominant, and the noisy channel is not expected to play a vital role in the calculation of VoI. Therefore, the VoI in the high SNR regime approaches its Markov counterpart, which does not depend on $m$. Furthermore, if the VoI given in~\eqref{eq:uni appro. high snr} is truncated to the second-order term $\frac{1}{\gamma^2}$, the approximated VoI will first decrease and then increase with $\frac{1}{\gamma}$. The turning point is at $\frac{1}{\gamma}=\frac{1-\rho^2}{2}$. Therefore, the valid region of the approximated VoI is \begin{equation} \label{eq:valid region uni high} \gamma \ge \frac{2}{1-\rho^2}. \end{equation} \subsection{Low SNR Regime} The VoI in the low SNR regime (i.e., $\gamma \to 0$) can be obtained similarly. We have the following result of this subsection. \begin{corollary} \label{cor: uni low snr} For the noisy OU process with uniform sampling, the VoI in the low SNR regime can be given as \begin{equation} \label{eq:uni appro. low snr} \begin{aligned} v(t) &= - \frac{1}{2}\log \bigg[1 - \frac{{{{e ^{{-2\kappa(t - t_n)}}}}(1 - {\rho ^{2m}})}}{{1 - {\rho ^2}}}\gamma+ {e ^{{-2\kappa(t - t_n)}}}\\ &\times \bigg(\frac{{(1 - {\rho ^{2m}})(1 + {\rho ^2})}}{{{{(1 - {\rho ^2})}^2}}} - \frac{{2m{\rho ^{2m}}}}{{1 - {\rho ^2}}}\bigg){\gamma ^2}\bigg]+ O({\gamma ^3}). \end{aligned} \end{equation} \end{corollary} \begin{IEEEproof} The series expansion of~\eqref{eq:uni B determiant ratio} at the point $\gamma = 0$ can be written as \begin{multline} \label{eq:uni appro low snr linear term} \frac{\det(\mathbf{A}_{mm})}{\gamma\det(\mathbf{A})}= 1 - \frac{{1 - {\rho ^{2m}}}}{{1 - {\rho ^2}}}\gamma \\ +\bigg (\frac{{(1 - {\rho ^{2m}})(1 + {\rho ^2})}}{{{{(1 - {\rho ^2})}^2}}} - \frac{{2m{\rho ^{2m}}}}{{1 - {\rho ^2}}}\bigg){\gamma ^2} + O({\gamma ^3}). \end{multline} Similar to the proof given in Corollary~\ref{cor: uni high snr}, the result of this corollary given in~\eqref{eq:uni appro. low snr} is obtained by substituting~\eqref{eq:uni appro low snr linear term} into~\eqref{eq:prop. 1 general voi}. \end{IEEEproof} The VoI in the low SNR regime is expressed as a power series in ${\gamma}$.
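The low-SNR series for the determinant ratio can be checked against an exact matrix evaluation at a small $\gamma$; the sketch below uses illustrative parameter values.

```python
import numpy as np

# Compare the exact determinant ratio det(A_mm)/(gamma det(A)) with its
# low-SNR (small gamma) expansion for uniform sampling. Illustrative values.
kappa, sigma, dt, m = 0.8, 1.0, 0.5, 4
var = sigma**2 / (2 * kappa)
rho = np.exp(-kappa * dt)
t_s = dt * np.arange(m)
Sigma_X = var * np.exp(-kappa * np.abs(t_s[:, None] - t_s[None, :]))

def exact_ratio(gamma):
    # A = sigma_n^2 Sigma_X^{-1} + I, with sigma_n^2 = var / gamma
    A = (var / gamma) * np.linalg.inv(Sigma_X) + np.eye(m)
    return np.linalg.det(A[:m-1, :m-1]) / (gamma * np.linalg.det(A))

gamma = 1e-3
approx = (1 - (1 - rho**(2*m)) / (1 - rho**2) * gamma
          + ((1 - rho**(2*m)) * (1 + rho**2) / (1 - rho**2)**2
             - 2 * m * rho**(2*m) / (1 - rho**2)) * gamma**2)
# the neglected remainder is O(gamma^3)
assert abs(exact_ratio(gamma) - approx) < 1e-6
```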
Compared with the high SNR regime, the randomness of the noise dominates in the low SNR regime, thus the VoI relates to the length of the time window $m$. As $0<\rho<1$, the result in this corollary further shows that the VoI increases with $m$ and converges when $m$ grows large. Moreover, if we omit the term $O(\gamma^3)$ in~\eqref{eq:uni appro. low snr}, the valid region of the approximated VoI in the low SNR regime can be given by \begin{equation} \gamma \le \frac{(1-\rho^2)(1-\rho^{2m})}{2(1-\rho^{2m})(1+\rho^2)-4m\rho^2(1-\rho^2)}. \end{equation} \section{Noisy OU Model with Random Sampling} In some cases, the status update may not be generated with uniform time intervals. In such applications, it may still be of interest to have a clear representation of VoI with irregular sampling intervals. In this section, we consider a more general case for the VoI in the noisy OU process with a random sampling policy. We assume that status updates are generated as a rate $\lambda$ Poisson process and let the i.i.d. exponential random variables \begin{equation} \label{eq:T_i} {T_i} = {t_{n-m+i}} - {t_{n-m+i-1}}, \quad 2 \le i \le m \end{equation} be the sampling interval of two packets. In this case, the covariance matrix of $\bm{X}$ can be written as \begin{equation} \mathbf{\Sigma _X} = \frac{{{\sigma ^2}}}{{2\kappa }}\left[ {\begin{array}{*{20}{c}} 1&{{e^{ - \kappa {T_2}}}}& \cdots &{{e^{ - \kappa \sum\limits_{i = 2}^m {{T_i}} }}}\\ {{e^{ - \kappa {T_2}}}}&1& \cdots &{{e^{ - \kappa \sum\limits_{i = 3}^m {{T_i}} }}}\\ \vdots & \vdots & \ddots & \vdots \\ {{e^{ - \kappa \sum\limits_{i = 2}^m {{T_i}} }}}&{{e^{ - \kappa \sum\limits_{i = 3}^m {{T_i}} }}}& \cdots &1 \end{array}} \right]. \end{equation} For simplicity, let the random variable \begin{equation} {R_i} = \frac{1}{{1 - {e^{ - 2\kappa {T_i}}}}}, \quad 2 \le i \le m. 
\end{equation} The inverse of the covariance matrix of $\bm{X}$ is tridiagonal which can be written as~\cite{irregularAR1,TridiPoissonIndepen} \begin{equation} \mathbf{\Sigma}^{ - 1}_{\mathbf{X}} = \frac{{2\kappa }}{{{\sigma ^2}}}\left[ {\begin{array}{*{20}{c}} {{a_1}}&{{b_1}}&{}&{}&{}\\ {{b_1}}&{{a_2}}&{{b_2}}&{}&{}\\ {}&{{b_2}}& \ddots & \ddots &{}\\ {}&{}& \ddots &{{a_{m - 1}}}&{{b_{m - 1}}}\\ {}&{}&{}&{{b_{m - 1}}}&{{a_m}} \end{array}} \right], \end{equation} where \begin{equation} {a_i} = \left\{ {\begin{array}{*{20}{l}} {{R_2}}&{i = 1}\\ {{R_m}}&{i = m}\\ {{R_i} + {R_{i + 1}} - 1}&{\text{others}} \end{array}} \right. \end{equation} and \begin{equation} b_i =-\sqrt{ {R_{i + 1}}({R_{i + 1}} - 1)}, \quad 1 \le i \le m-1. \end{equation} Then, the matrix $\mathbf{A}$ in Proposition~\ref{prop:1} can be written as \begin{multline} \label{eq: poi A} \mathbf{A} = \sigma _n^2\mathbf{\Sigma}^{ - 1}_{\mathbf{X}} + \mathbf{I}=\\ \left[ {\begin{array}{*{20}{c}} {{{\frac{1}{\gamma}}a_1+1}}&{{{\frac{1}{\gamma}}b_1}}&{}&{}&{}\\ {{{\frac{1}{\gamma}}b_1}}&{{{\frac{1}{\gamma}}a_2+1}}&{{{\frac{1}{\gamma}}b_2}}&{}&{}\\ {}&{{{\frac{1}{\gamma}}b_2}}& \ddots & \ddots &{}\\ {}&{}& \ddots &{{{\frac{1}{\gamma}}a_{m - 1}+1}}&{{{\frac{1}{\gamma}}b_{m - 1}}}\\ {}&{}&{}&{{{\frac{1}{\gamma}}b_{m - 1}}}&{{{\frac{1}{\gamma}}a_m+1}} \end{array}} \right]. \end{multline} \subsection{VoI in the High SNR Regime} Similar to our analysis in Sec.~\ref{sec:uni}, we consider the high SNR regime (i.e., $\frac{1}{\gamma} \to 0$) to simplify the expression of the VoI with random sampling. \begin{lemma} \label{lemma: induction} Let $f_i$ denote the determinant of the $i$-dimensional matrix constructed from the first $i$ columns and rows of matrix $\mathbf{A}$, i.e., $f_{m-1}=\det(\mathbf{A}_{mm})$ and $f_{m}=\det(\mathbf{A})$. 
In the high SNR regime, $f_k$ can be calculated as \begin{equation} \label{eq:f_m} {f_k} = 1+\frac{1}{\gamma}\sum\limits_{i = 1}^k {{a_i}} + \frac{1}{\gamma^2}\bigg(\sum\limits_{1 \le i < j \le k} {{a_i}{a_j}} - \sum\limits_{i = 1}^{k - 1} {b_i^2} \bigg) + O\bigg(\frac{1}{\gamma^3}\bigg) \end{equation} for $1 < k \le m$. \end{lemma} \begin{IEEEproof} See appendix~\ref{appendix:induction}. \end{IEEEproof} Then, we have the following corollary. \begin{corollary} \label{cor: random high snr} For the noisy OU process with random sampling, the VoI in the high SNR regime can be written as \begin{multline} \label{eq:poi approx VoI high snr} v(t) = \frac{1}{2}\log \bigg(\frac{1}{{1 - {e^{ - 2\kappa (t - {t_n})}}}}\bigg) - \frac{1}{2}\log \bigg[1 + \frac{1}{{{e^{2\kappa (t - {t_n})}} - 1}}\\ \times \bigg( \frac{1}{\gamma } - \frac{1}{{1 - {e^{ - 2\kappa ({t_n} - {t_{n - 1}})}}}}\frac{1}{{{\gamma ^2}}} \bigg)\bigg]+ O(\frac{1}{{{\gamma ^3}}}). \end{multline} \end{corollary} \begin{IEEEproof} For simplicity, we temporarily denote the coefficient of the second-order term $\frac{1}{\gamma^2}$ in~\eqref{eq:f_m} as $c_k$, i.e., \begin{equation} c_k=\sum\limits_{1 \le i < j \le k} {{a_i}{a_j}} - \sum\limits_{i = 1}^{k - 1} {b_i^2}.
\end{equation} Based on Lemma~\ref{lemma: induction}, we have \begin{equation} \begin{aligned} \label{eq:poi high snr ratio} \frac{\det(\mathbf{A}_{mm})}{\gamma\det(\mathbf{A})}&=\frac{1}{\gamma }\cdot\frac{{1 + \sum\limits_{i = 1}^{m - 1} {{a_i}\frac{1}{\gamma }} + {c_{m - 1}}\frac{1}{{{\gamma ^2}}} + O(\frac{1}{{{\gamma ^3}}})}}{{1 + \sum\limits_{i = 1}^m {{a_i}\frac{1}{\gamma }} +{c_m}\frac{1}{{{\gamma ^2}}} + O(\frac{1}{{{\gamma ^3}}})}}\\ &= \frac{1}{\gamma } - {a_m}\frac{1}{{{\gamma ^2}}} + O(\frac{1}{{{\gamma ^3}}})\\ &= \frac{1}{\gamma } - {R_m}\frac{1}{{{\gamma ^2}}} + O(\frac{1}{{{\gamma ^3}}})\\ &= \frac{1}{\gamma } - \frac{1}{1-e^{-2\kappa(t_n-t_{n-1})}}\frac{1}{{{\gamma ^2}}} + O(\frac{1}{{{\gamma ^3}}}). \end{aligned} \end{equation} Similar to the proof given in Corollary~\ref{cor: uni high snr}, the result of this corollary given in~\eqref{eq:poi approx VoI high snr} is obtained by substituting~\eqref{eq:poi high snr ratio} into~\eqref{eq:prop. 1 general voi}. \end{IEEEproof} In the high SNR regime, the VoI with random sampling shows similar behaviour to its counterpart with uniform sampling in Corollary~\ref{cor: uni high snr}. The expression of $v(t)$ does not depend on the number of samples used, due to the dominant Markov OU randomness. For random sampling, the ``correction'' term is also expressed as a power series in $\frac{1}{\gamma}$, and it relates to the time difference $t_n-t_{n-1}$, which is a random variable. Furthermore, if the VoI given in~\eqref{eq:poi approx VoI high snr} is truncated to the second-order term $\frac{1}{\gamma^2}$, the approximated VoI in Corollary~\ref{cor: random high snr} is valid when \begin{equation} \label{eq:valid region poi high} \gamma \ge \frac{2}{1-e^{-2\kappa(t_n-t_{n-1})}}.
\end{equation} \subsection{An Application of VoI in the M/M/1 System} In this subsection, we consider a first-come-first-serve (FCFS) M/M/1 queueing system to explore the statistical properties of the VoI, which illustrates the potential applicability of the proposed framework. We assume that status updates are sampled according to a rate-$\lambda$ Poisson process and that the service rate is $\mu$. Let the random variables \begin{equation} {T_i} = {t_{i}} - {t_{i-1}}, \quad n-m+2 \le i \le n \end{equation} be the sampling intervals between consecutive packets, which are i.i.d.\ exponential random variables with mean $\frac{1}{\lambda }$ and variance $\frac{1}{{{\lambda ^2}}}$. Let the random variables $\{{W_i}\}$ ($ n-m+1 \le i \le n$) be the service times, which are i.i.d.\ exponential random variables with mean $\frac{1}{\mu}$ and variance $\frac{1}{{{\mu ^2}}}$. Let the random variable \begin{equation} {S_i} = {t'_{i}} - {t_{i}}, \quad n-m+1 \le i \le n \end{equation} be the system time of the $i$th status update. When the system reaches steady state, the system times are also i.i.d.\ exponential random variables with mean $1/(\mu-\lambda)$~\cite{2012infocom?,bookRandomProcessSystemTime}. We consider the case $m=1$, for which the VoI expression is given in Corollary~\ref{cor:1}. We observe that the VoI reaches a local minimum immediately before a new update is received by the destination. Given that $n$ samples have been observed, the worst-case VoI is therefore obtained at $t=t'_{n+1}$; it is of particular interest in applications with a threshold restriction on the information value. Since the time instants are random, the worst-case VoI can itself be regarded as a random variable.
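The steady-state facts quoted above can be spot-checked with a short Monte Carlo sketch of the FCFS recursion $t'_i = \max(t_i, t'_{i-1}) + W_i$; the rates $\lambda = 0.5$, $\mu = 1$ and the sample size below are illustrative choices, not values fixed by the analysis:

```python
import random

random.seed(42)
lam, mu = 0.5, 1.0       # illustrative sampling and service rates (load rho = 0.5)
n = 200_000

t = 0.0                  # sampling instant t_i
dep = 0.0                # departure instant t'_{i-1} of the previous update
s_sum = z_sum = 0.0
for _ in range(n):
    T = random.expovariate(lam)            # sampling interval T_i ~ Exp(lam)
    t += T
    start = max(t, dep)                    # FCFS: service starts after the previous departure
    dep = start + random.expovariate(mu)   # service time W_i ~ Exp(mu)
    S = dep - t                            # system time S_i = t'_i - t_i
    s_sum += S
    z_sum += S + T                         # Z_i = S_i + T_i enters the worst-case VoI

print(s_sum / n)   # close to 1/(mu - lam) = 2
print(z_sum / n)   # close to 1/(mu - lam) + 1/lam = 4
```

The empirical mean of $S_i$ matches the steady-state value $1/(\mu-\lambda)$, and the sum $Z_i = S_i + T_i$ is exactly the random quantity that governs the worst-case VoI.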
Based on~\eqref{eq:general voi 1 dimension}, the worst-case VoI with $n$ status updates is given as \begin{equation} \begin{aligned} V_n &= - \frac{1}{2}\log \bigg(1 - \frac{\gamma }{{1 + \gamma }}{e^{ - 2\kappa (t'_{n+1} - {t_n})}}\bigg)\\ &= - \frac{1}{2}\log \bigg(1 - \frac{\gamma }{{1 + \gamma }}{e^{ - 2\kappa ((t{'_{n + 1}} - {t_{n + 1}}) + ({t_{n + 1}} - {t_n}))}}\bigg)\\ & = - \frac{1}{2}\log \bigg(1 - \frac{\gamma }{{1 + \gamma }}{e^{ - 2\kappa ({S_{n + 1}} + {T_{n + 1}})}}\bigg). \end{aligned} \end{equation} The system time $S_{n + 1}$ and the sampling interval $T_{n+1}$ are the main factors affecting the distribution of the VoI. The joint probability density function (PDF) of $S_{n + 1}$ and $T_{n+1}$ is given by \begin{equation} {f_{T,S}}(t,s) = \lambda \mu {e^{ - \lambda t - \mu s}} - {\mu ^2}{e^{ - \mu (t + s)}} + \mu (\mu - \lambda ){e^{ - \mu t - (\mu - \lambda )s}}. \end{equation} \begin{IEEEproof} See appendix~\ref{appendix:mm1 joint PDF}. \end{IEEEproof} Let $Z_{n+1}=S_{n + 1}+T_{n+1}$; its PDF is given by \begin{multline} \label{eq:PDF z} {f_Z}(z) =\int_0^z {{f_{T,S}}(z - s,s)} \operatorname{d}\!s =\mu \bigg[\frac{\lambda }{{\mu - \lambda }}{e^{ - \lambda z}} -\\ \bigg(\frac{\lambda }{{\mu - \lambda }} + \mu z + \frac{{\mu - \lambda }}{\lambda }\bigg){e^{ - \mu z}} + \frac{{\mu - \lambda }}{\lambda }{e^{ - (\mu - \lambda )z}}\bigg]. \end{multline} Define the monotonically decreasing function \begin{equation} g(z) = - \frac{1}{2}\log (1 - \frac{\gamma }{{1 + \gamma }}{e^{ - 2\kappa z}}). \end{equation} Then, the PDF of $V_n$ can be calculated as \begin{equation} \label{eq:v PDF mapping} {f_V}(v) = {f_Z}({g^{ - 1}}(v))\left| {\frac{\operatorname{d}}{{\operatorname{d}\!v}}({g^{ - 1}}(v))} \right|.
\end{equation} Here, ${g^{ - 1}}$ denotes the inverse of $g$, and we have \begin{equation} \label{eq:inverse v} {g^{ - 1}}(v) = - \frac{1}{{2\kappa }}\log \bigg(\frac{{(1 + \gamma )(1 - {e^{ - 2v}})}}{\gamma }\bigg), \end{equation} \begin{equation} \label{eq:dervi v} \frac{\operatorname{d}}{{\operatorname{d}\!v}}({g^{ - 1}}(v)) = - \frac{{{e^{ - 2v}}}}{{\kappa (1 - {e^{ - 2v}})}}. \end{equation} Then, we can state the following results. \begin{proposition}\label{prop:2} In the FCFS M/M/1 queueing system, the probability density function of the worst-case VoI is given by \begin{multline} \label{eq:PDF prop2} {f_V}(v) = \frac{{\mu {e^{ - 2v}}}}{{\kappa (1 - {e^{ - 2v}})}}\bigg[\frac{\lambda }{{\mu - \lambda }}{(r(v))^{\frac{\lambda }{{2\kappa }}}}-\bigg(\frac{\lambda }{{\mu - \lambda }}\\ + \frac{{\mu - \lambda }}{\lambda } - \frac{\mu }{{2\kappa }}\log (r(v))\bigg){(r(v))^{\frac{\mu }{{2\kappa }}}} + \frac{{\mu - \lambda }}{\lambda }{(r(v))^{\frac{{\mu - \lambda }}{{2\kappa }}}}\bigg]. \end{multline} Here, $r(v)$ is defined as \begin{equation} r(v) = \frac{{(1 + \gamma )(1 - {e^{ - 2v}})}}{\gamma }. \end{equation} \end{proposition} \begin{IEEEproof} This result is obtained directly by substituting~\eqref{eq:PDF z}, ~\eqref{eq:inverse v} and~\eqref{eq:dervi v} into~\eqref{eq:v PDF mapping}.
\end{IEEEproof} \begin{proposition}\label{prop:3} In the FCFS M/M/1 queueing system, the cumulative distribution function of the worst-case VoI is given by \begin{multline} \label{eq:CDF prop3} {F_V}(v) = \operatorname{P}(V \le v) = \frac{\mu }{{\mu - \lambda }}{(r(v))^{\frac{\lambda }{{2\kappa }}}} + \frac{\mu }{\lambda }{(r(v))^{\frac{{\mu - \lambda }}{{2\kappa }}}}\\ + \bigg(1 - \frac{{{\mu ^2}}}{{\lambda (\mu - \lambda )}} + \frac{\mu }{{2\kappa }}\log (r(v))\bigg){(r(v))^{\frac{\mu }{{2\kappa }}}}. \end{multline} \end{proposition} \begin{IEEEproof} The cumulative distribution function (CDF) is obtained directly by \begin{equation} {F_V}(v) = \int_0^v {{f_V}(x)\operatorname{d}\!x} = \int_{{g^{ - 1}}(v)}^{ + \infty } {{f_Z}(z)\operatorname{d}\!z}, \end{equation} where the second equality follows from the substitution $x = g(z)$, since $g$ is decreasing. \end{IEEEproof} In practice, this distribution function can be interpreted as the ``VoI outage'', i.e., the probability that the VoI right before a new sample arrives is below a threshold $v$, which can play a vital role in the system design. \section{Numerical Results} In this section, numerical results are provided through Monte Carlo simulations. The results show the VoI performance in the Markov and hidden Markov models, verify the validity of the simplified VoI in the high and low SNR regimes, and illustrate the effects of the time window length, sampling rate, correlation and noise on the VoI. In the simulation, the long-term mean parameter $\sigma$ of the OU model is set to $1$. We consider FCFS transmission, and the service time of each status update is drawn from an exponential distribution with rate $\mu=1$. Fig.~\ref{fig:voi time evolution} shows the VoI in the Markov and hidden Markov OU models for different lengths of the time window $m$.
In the noisy OU process, all the received observations are used for the result labelled ``$m=n$'', while only the single most recently received observation is used for the result labelled ``$m=1$''. This figure verifies the results given in Proposition~\ref{prop:1} and Corollary~\ref{cor:1}. The VoI decreases with time until a new update is transmitted, exhibiting a sawtooth pattern similar to the traditional AoI evolution. The black curve represents the VoI in the underlying Markov OU model, which is the first term in Proposition~\ref{prop:1}. The gap between the result in the Markov model and its counterpart in the hidden Markov model is the second term in Proposition~\ref{prop:1}, which represents the ``VoI correction'' due to the indirect observation. Furthermore, the gap between the two curves in the hidden Markov model increases with time. This means that the length of the time window can affect the VoI in the hidden Markov model, whereas the VoI in the Markov model does not depend on $m$. \begin{figure} \centering \includegraphics[width=7cm]{fig8VoIvsTime.eps} \caption{Time evolution of the VoI in the Markov OU process and the noisy OU process; correlation parameter $\kappa=0.1$, noise parameter $\sigma_n^2=1$ and sampling interval $\Delta t=2$.} \label{fig:voi time evolution} \end{figure} \begin{figure} \centering \includegraphics[width=7cm]{fig3UniWindow.eps} \caption{VoI in the noisy OU process versus the length of the time window $m$ for $\sigma_n^2 \in \{ 0.1,2,5,10\}$ at $t=100$; correlation parameter $\kappa=0.05$ and sampling interval $\Delta t=2$.} \label{fig:voi-m} \end{figure} Fig.~\ref{fig:voi-m} further shows how the VoI varies with the length of the time window $m$ for different values of $\sigma_n^2$. The horizontal axis represents the number of observations used to predict the current status of the random process.
The vertical axis is the normalised VoI, i.e., the ratio of $v(t)$ to $v_\text{OU}(t)$, where $v_\text{OU}(t)$ is the VoI in the underlying Markov OU process. This result shows that the VoI in the noisy OU process increases with the length of the time window and converges as more past observations are used. This is because more past observations provide more information about the current status of the latent random process. Moreover, the normalised VoI approaches $1$ for small $\sigma_n^2$, which means that the VoI in the Markov model can be regarded as an upper bound on its counterpart in the hidden Markov model~\eqref{eq:MM upper bound}. Figs.~\ref{fig:uni high SNR},~\ref{fig:poi high SNR} and~\ref{fig:uni low SNR} provide numerical validation of the exact general VoI given in Proposition~\ref{prop:1} and of the approximated VoI under different sampling policies in different SNR regimes, as discussed in Corollaries~\ref{cor: uni high snr},~\ref{cor: uni low snr} and~\ref{cor: random high snr}. Figs.~\ref{fig:uni high SNR} and~\ref{fig:poi high SNR} consider the high SNR regime with uniform and random sampling policies, respectively. We compare the exact VoI given in~\eqref{eq:prop. 1 general voi} with the approximated VoI in the high SNR regime given in~\eqref{eq:uni appro. high snr} and~\eqref{eq:poi approx VoI high snr}, respectively. It is not surprising that the exact VoI decreases as $\sigma_n^2$ increases. Since the approximated VoI is truncated at the second-order term in $\frac{1}{\gamma}$, it first decreases and then starts to increase once $\sigma_n^2$ enters the invalid region. The turning points are $\sigma_n^2 \in \{0.9,0.8,0.7\}$ in Fig.~\ref{fig:uni high SNR} and $\{0.5,0.4,0.3\}$ in Fig.~\ref{fig:poi high SNR}, verifying the results given in~\eqref{eq:valid region uni high} and~\eqref{eq:valid region poi high}.
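The truncation order behind these turning points can also be checked directly on the tridiagonal determinant recurrence ${f_k} = (1 + a_k/\gamma){f_{k-1}} - (b_{k-1}^2/\gamma^2){f_{k-2}}$ underlying Lemma~\ref{lemma: induction}; the coefficients $a_i$, $b_i$ below are arbitrary illustrative values rather than the entries of the actual OU inverse covariance:

```python
# Check that f_k = 1 + sum(a_i)/g + (e2(a) - sum(b_i^2))/g^2 has an O(1/g^3)
# remainder: the truncation error should shrink ~10^3x when g grows 10x.
a = [1.0, 2.0, 3.0, 4.0, 5.0]   # illustrative diagonal coefficients a_i
b = [1.0, 1.0, 2.0, 1.0]        # illustrative off-diagonal coefficients b_i

def f_exact(g):
    """Exact determinant via the tridiagonal cofactor recurrence."""
    f_prev, f = 1.0, 1.0 + a[0] / g
    for k in range(1, len(a)):
        f_prev, f = f, (1.0 + a[k] / g) * f - (b[k - 1] / g) ** 2 * f_prev
    return f

def f_approx(g):
    """Second-order expansion from the lemma."""
    e2 = sum(a[i] * a[j] for i in range(len(a)) for j in range(i + 1, len(a)))
    return 1.0 + sum(a) / g + (e2 - sum(x * x for x in b)) / g ** 2

err = [abs(f_exact(g) - f_approx(g)) for g in (1e2, 1e3)]
print(err[0] / err[1])   # ~1e3, consistent with an O(1/gamma^3) remainder
```

Growing $\gamma$ by a factor of ten shrinks the truncation error by roughly a factor of a thousand, as the $O(1/\gamma^3)$ remainder predicts.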
As expected, the approximated VoI is very close to the exact VoI when $\sigma_n^2$ and $\kappa$ are small, while the gap increases when the system experiences larger noise. Fig.~\ref{fig:uni low SNR} considers the low SNR regime and compares the exact VoI with the approximated VoI given in~\eqref{eq:uni appro. low snr}. Compared to the high SNR regime, we observe the opposite behaviour, i.e., the approximated VoI approaches the exact VoI when $\sigma_n^2$ and $\kappa$ are large. These simulation results therefore verify the analysis in Corollaries~\ref{cor: uni high snr},~\ref{cor: uni low snr} and~\ref{cor: random high snr}. \begin{figure} \centering \includegraphics[width=7cm]{fig1UniHigh.eps} \caption{High SNR regime: Comparison of the exact VoI and the approximated VoI with uniform sampling for $\kappa \in \{ 0.05,0.1,0.2\}$ at $t=100$; sampling interval $\Delta t=2$ and the length of time window $m=5$.} \label{fig:uni high SNR} \end{figure} \begin{figure} \centering \includegraphics[width=7cm]{fig4poihigh.eps} \caption{High SNR regime: Comparison of the exact VoI and the approximated VoI with random sampling for $\kappa \in \{ 0.05,0.1,0.2\}$ at $t=100$; sampling rate $\lambda=0.5$ and the length of time window $m=5$.} \label{fig:poi high SNR} \end{figure} In Fig.~\ref{fig:voi-lambda}, we investigate the effect of the sampling rate and the correlation on the VoI in the noisy OU process. Fixing $\kappa$, we observe that both small and large sampling rates lead to a small VoI. For small $\lambda$, the system lacks newly generated status updates with which to predict the current status of the underlying random process. For large $\lambda$, more status updates are sampled at the source, but they may not be delivered in a timely manner because they wait longer in the FCFS queue before being transmitted. Fixing $\lambda$, the VoI is large when $\kappa$ is small.
The parameter $\kappa$ represents the mean-reversion rate, which captures the correlation of the latent OU process. Compared to less correlated samples (larger $\kappa$), highly correlated samples carry a larger value, which further illustrates that ``old'' samples from a highly correlated source may still have value in some cases. \begin{figure} \centering \includegraphics[width=7cm]{fig2UniLow.eps} \caption{Low SNR regime: Comparison of the exact VoI and the approximated VoI with uniform sampling for $\kappa \in \{ 0.25,0.3,0.35\}$ at $t=100$; sampling interval $\Delta t=2$ and the length of time window $m=5$.} \label{fig:uni low SNR} \end{figure} \begin{figure} \centering \includegraphics[width=7cm]{fig6poiLambda.eps} \caption{VoI in the noisy OU process versus the sampling rate $\lambda$ for $\kappa \in \{ 0.05, 0.1, 0.2\}$ at $t=100$; noise parameter $\sigma_n^2=0.5$ and the length of time window $m=2$.} \label{fig:voi-lambda} \end{figure} Figs.~\ref{fig:PDF} and~\ref{fig:CDF} show the statistical properties of the worst-case VoI. Fig.~\ref{fig:PDF} shows the empirical density of the simulated worst-case VoI together with the theoretical density function given in Proposition~\ref{prop:2} for $\kappa=0.1$. The results obtained from Monte Carlo simulations are consistent with the PDF obtained from the theoretical analysis. In Fig.~\ref{fig:CDF}, we plot the CDF of the worst-case VoI given in Proposition~\ref{prop:3} for different values of $\kappa$ and $\sigma_n^2$. This figure illustrates that a ``VoI outage'' is more likely to occur when the status updates are less correlated or the system experiences large noise.
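The closed-form density in Proposition~\ref{prop:2} admits a simple numerical sanity check: both \eqref{eq:PDF z} and \eqref{eq:PDF prop2} must integrate to one, the latter over the support $(0, \frac{1}{2}\log(1+\gamma))$ of the worst-case VoI. The sketch below uses a plain midpoint rule together with the $\lambda$, $\mu$, $\kappa$ values of Fig.~\ref{fig:PDF}; the SNR $\gamma = 2$ is an illustrative choice:

```python
import math

lam, mu, kappa, gamma = 0.5, 1.0, 0.1, 2.0   # gamma is an illustrative SNR value

def f_Z(z):
    # PDF of Z = S_{n+1} + T_{n+1}, eq. (eq:PDF z)
    return mu * (lam / (mu - lam) * math.exp(-lam * z)
                 - (lam / (mu - lam) + mu * z + (mu - lam) / lam) * math.exp(-mu * z)
                 + (mu - lam) / lam * math.exp(-(mu - lam) * z))

def f_V(v):
    # worst-case VoI density, Proposition 2
    r = (1 + gamma) * (1 - math.exp(-2 * v)) / gamma
    pre = mu * math.exp(-2 * v) / (kappa * (1 - math.exp(-2 * v)))
    return pre * (lam / (mu - lam) * r ** (lam / (2 * kappa))
                  - (lam / (mu - lam) + (mu - lam) / lam
                     - mu / (2 * kappa) * math.log(r)) * r ** (mu / (2 * kappa))
                  + (mu - lam) / lam * r ** ((mu - lam) / (2 * kappa)))

def midpoint(f, a, b, n=200_000):
    # composite midpoint rule (avoids evaluating the endpoints)
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

v_max = 0.5 * math.log(1 + gamma)    # upper end of the support of V
print(midpoint(f_Z, 0.0, 60.0))      # ~1
print(midpoint(f_V, 0.0, v_max))     # ~1
```

Both integrals return values numerically indistinguishable from one, confirming that the change of variables from $Z$ to $V$ preserves the total probability mass.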
\begin{figure} \centering \includegraphics[width=7cm]{fig5poiPDF.eps} \caption{The density function of the worst-case VoI; correlation parameter $\kappa=0.1$, noise parameter $\sigma_n^2=0.5$ and sampling rate $\lambda=0.5$.} \label{fig:PDF} \end{figure} \begin{figure} \centering \includegraphics[width=7cm]{fig7poiCDF.eps} \caption{The cumulative distribution function of the worst-case VoI for $\kappa \in \{ 0.05, 0.1, 0.2, 0.3\}$ and $\sigma_n^2 \in \{0.5,1\}$; sampling rate $\lambda=0.5$.} \label{fig:CDF} \end{figure} \section{Conclusions} In this paper, a mutual-information-based value of information framework was formalised to characterise how valuable the status updates are for hidden Markov models. The notion of VoI was interpreted as the reduction in the uncertainty of the current unobserved status given a dynamic sequence of noisy measurements. We took the noisy OU process as an example and derived closed-form VoI expressions. Moreover, the VoI was further explored in the context of the noisy OU model with different sampling policies. For uniform sampling, simplified VoI expressions were derived in the high and low SNR regimes, respectively. For random sampling, the simplified expression and the statistical properties of the VoI were obtained. Furthermore, numerical results were presented to verify the accuracy of our theoretical analysis. Compared with the traditional AoI metric, the proposed VoI framework can describe the timeliness of the source data, the correlation of the underlying random process, and the noise in hidden Markov models.
\begin{appendices} \section{Proof of Proposition 1} \label{appendix:voi} Since $(\bm{Y}^{\operatorname{T}},X_t)$ follows a multivariate normal distribution, the VoI defined in~\eqref{eq:general definition} can be written as~\cite{bookinformationtheory} \begin{equation} \label{eq:genernal voi with covariance} \begin{aligned} v(t)&=I(X_t;\bm{Y}^{\operatorname{T}}) \\ &= h(X_t) + h(\bm{Y}^{\operatorname{T}}) - h(\bm{Y}^{\operatorname{T}},X_t)\\ &=\frac{1}{2}\log \frac{{\operatorname{Var}[X_t]\det({\mathbf{\Sigma_Y} })}}{\det({\mathbf{\Sigma} _{\mathbf{Y},X_t}})} \end{aligned} \end{equation} where ${\mathbf{\Sigma_Y}}$ and $\mathbf{\Sigma} _{\mathbf{Y},X_t}$ are the covariance matrices of ${\bm{Y}}$ and $({\bm{Y}}^{\operatorname{T}},X_t)^{\operatorname{T}}$, respectively. Since ${\bm{X}}$ and ${\bm{N}}$ are independent, the covariance matrix ${\mathbf{\Sigma} _{\mathbf{Y}}}$ can be given as \begin{equation} \label{eq:cov y} {\mathbf{\Sigma_Y} } = {\mathbf{\Sigma_X} } + {\mathbf{\Sigma_N}}. \end{equation} Moreover, $\det(\mathbf{\Sigma} _{\mathbf{Y},X_t})$ in~\eqref{eq:genernal voi with covariance} can be obtained from the probability density function of $(\bm{Y}^{\operatorname{T}},X_t)$, which in turn can be obtained by marginalising the joint density function of $(\bm{Y}^{\operatorname{T}},X_t,\bm{X}^{\operatorname{T}})$ over $\bm{X}^{\operatorname{T}}$. Hence, we have \begin{equation} \label{eq:cov y xt} \det({\mathbf{\Sigma} _{\mathbf{Y},X_t}}) =\operatorname{Var}[{X_t}|{X_{{t_n}}}] \det\bigg({\mathbf{\Sigma_N} } + {\mathbf{\Sigma_X} } + \frac{{{\mathbf{\Sigma_X}}\bm{v}\bm{v}^{\operatorname{T}}{\mathbf{\Sigma_N} }}}{{\operatorname{Var}[{X_t}|{X_{{t_n}}}]}}\bigg) \end{equation} where the vector $\bm{v} = [0, \cdots ,0,{e^{ - \kappa (t - {t_n})}}]^{\operatorname{T}}$.
Substituting~\eqref{eq:cov y} and~\eqref{eq:cov y xt} into~\eqref{eq:genernal voi with covariance}, the VoI for the noisy OU process can be expressed as \begin{multline} v(t)\\ = \frac{1}{2}\log \left(\frac{{\operatorname{Var}[{X_t}]}}{{\operatorname{Var}[{X_t}|{X_{{t_n}}}]}} \frac{{\det({\mathbf{\Sigma_N} } + {\mathbf{\Sigma_X} })}}{{\det({\mathbf{\Sigma_N}} + {\mathbf{\Sigma_X} } + \frac{{{\mathbf{\Sigma_X}}\bm{v}\bm{v}^{\operatorname{T}}{\mathbf{\Sigma_N}}}}{{\operatorname{Var}[{X_t}|{X_{{t_n}}}]}})}}\right). \end{multline} By utilising the matrix determinant lemma, the determinant in the denominator can be written as \begin{multline} {\det\bigg({\mathbf{\Sigma_N}} + {\mathbf{\Sigma_X} } +\frac{{{\mathbf{\Sigma_X}}\bm{v}\bm{v}^{\operatorname{T}}{\mathbf{\Sigma_N}}}}{{\operatorname{Var}[{X_t}|{X_{{t_n}}}]}}\bigg)}\\ =\bigg(1+\frac{{{\bm{v}^{\operatorname{T}}}{{({\mathbf{\Sigma}^{ - 1}_{\mathbf{X}} + \mathbf{\Sigma}^{ - 1}_{\mathbf{N}}})}^{-1}}\bm{v}}}{{\operatorname{Var}[{X_t}|{X_{{t_n}}}]}}\bigg) {\det({\mathbf{\Sigma_N} } + {\mathbf{\Sigma_X} })}. \end{multline} Therefore, the VoI expression can be further written as \begin{multline} v(t) = \frac{1}{2}\log \bigg(\frac{{1 }}{{1 - {e^{ - 2\kappa (t - {t_n})}}}}\bigg) \\ - \frac{1}{2}\log \bigg(1 + \frac{{{2\kappa\sigma_n^2 }}}{{\sigma^2 \left(e^{2\kappa (t-t_n)} - 1\right) }} \frac{\det(\mathbf{A}_{mm})}{\det(\mathbf{A})}\bigg) \end{multline} where $\mathbf{A} = \sigma_n^2\mathbf{\Sigma}^{ - 1}_{\mathbf{X}} + \mathbf{I}$. \section{Proof of the Determinant Calculation for Uniform Sampling} \label{appendix:characteristic equation} Define the $m$-dimensional circulant matrix $\bm{\eta}$ by \begin{equation} {(\bm{\eta}) _{i,j}} = \left\{ {\begin{array}{*{20}{l}} 1&{i = m,j = 1}\\ 1&{j = i + 1}\\ 0&{\text{otherwise}} \end{array}} \right. \end{equation} Its determinant is \begin{equation} \label{eq:det eta} \det (\bm{\eta} ) = {( - 1)^{m - 1}}.
\end{equation} The product of the matrices $\mathbf{A}$ and $\bm{\eta}$ can be partitioned into four blocks \begin{equation} \begin{aligned} \label{eq:block matrix} \mathbf{A}\bm{\eta} &= \left[ {\begin{array}{*{20}{l}} 0&\vline& a&b&0& \cdots &0\\ \hline 0&\vline& b&c&b&{}&{}\\ \vdots &\vline& {}&b&c& \ddots &{}\\ 0&\vline& {}&{}& \ddots & \ddots &b\\ b&\vline& {}&{}&{}&b&c\\ a&\vline& {}&{}&{}&{}&b \end{array}} \right] \\ &= \left[ {\begin{array}{*{20}{l}} \bm{{\eta_{11}}}&\vline& \bm{{\eta_{12}}}\\ \hline \bm{{\eta_{21}}}&\vline& \bm{{\eta_{22}}} \end{array}} \right]. \end{aligned} \end{equation} Taking determinants on both sides, we have \begin{equation} \label{eq:detB} \det (\mathbf{A}) = \frac{{\det (\bm{\eta _{22}})\det(\bm{\eta _{11}} - \bm{\eta _{12}}\bm{\eta _{22}}^{ - 1}\bm{\eta _{21}})}}{{\det (\bm{\eta} )}}. \end{equation} Here, \begin{equation} \label{eq:det eta_22} \det (\bm{\eta_{22}})=b^{m-1}. \end{equation} Since $\bm{\eta_{22}}$ is an upper-triangular tri-band Toeplitz matrix, its inverse can be expressed as~\cite{UpperTriangular} \begin{equation} \label{eq:inverse eta_22} \bm{\eta _{22}}^{ - 1} = \left[ {\begin{array}{*{20}{l}} {{J_1}}&{{J_2}}& \cdots &{{J_{m - 1}}}\\ {}&{{J_1}}& \ddots & \vdots \\ {}&{}& \ddots &{{J_2}}\\ {}&{}&{}&{{J_1}} \end{array}} \right] \end{equation} where ${J_i}$ satisfies the recurrence relation \begin{equation} {J_i} = - \frac{c}{b}{J_{i - 1}} -{J_{i - 2}} \end{equation} with ${J_1} = \frac{1}{b}$ and ${J_2} = - \frac{c}{{{b^2}}}$. Substituting~\eqref{eq:block matrix}, ~\eqref{eq:det eta_22} and~\eqref{eq:inverse eta_22} into~\eqref{eq:detB}, we have \begin{equation} \label{eq:det B uniform} \det (\mathbf{A}) = {( - 1)^m}{b^{m - 1}}({a^2}{J_{m - 1}} + 2ab{J_{m - 2}} + {b^2}{J_{m - 3}}). \end{equation} The recurrence relation can be solved via the roots of its characteristic polynomial.
The characteristic equation is given by \begin{equation} {\lambda ^2} + \frac{c}{b}\lambda + 1 = 0, \end{equation} and the eigenvalues are \begin{equation} {\lambda _1} = \frac{{ - c + \sqrt {{c^2} - 4{b^2}} }}{{2b}},\quad {\lambda _2} = \frac{{ - c - \sqrt {{c^2} - 4{b^2}} }}{{2b}}. \end{equation} Thus, $J_i$ can be written as \begin{equation} {J_i} = \frac{1}{{\sqrt {{c^2} - 4{b^2}} }}(\lambda _1^i - \lambda _2^i). \end{equation} Substituting $J_i$ into~\eqref{eq:det B uniform}, we obtain the result given in~\eqref{eq: uni A}. The result given in~\eqref{eq: uni Amm} can be obtained in a similar way. \section{Proof of Lemma~\ref{lemma: induction}} \label{appendix:induction} Mathematical induction is used to prove the expression for $f_k$ for all natural numbers $1\le k\le m$. First, for the base cases, we have \begin{equation} {f_1} = 1+ {a_1}{\frac{1}{\gamma}} = 1+ {a_1}{\frac{1}{\gamma}} + 0 \cdot {{\frac{1}{\gamma^2}}}, \end{equation} \begin{equation} \begin{aligned} {f_2} &= ({\frac{1}{\gamma}}{a_1} + 1)({\frac{1}{\gamma}}{a_2} + 1) - {{\frac{1}{\gamma^2}}}b_1^2\\ &= 1+ ({a_1} + {a_2}){\frac{1}{\gamma}} + ({a_1}{a_2} - b_1^2){{\frac{1}{\gamma^2}}}, \end{aligned} \end{equation} so the claim holds for $k=1$ and $k=2$. Next, we turn to the induction hypothesis. We assume that, for a particular $s$, the cases $k=s$ and $k=s+1$ hold. This means that \begin{equation} \label{eq:assumption} \begin{aligned} &{f_s} = 1 + \sum\limits_{i = 1}^s {{a_i}{\frac{1}{\gamma}} + \bigg(\sum\limits_{1 \le i < j \le s} {{a_i}{a_j} - \sum\limits_{i = 1}^{s - 1} {b_i^2} } \bigg){{\frac{1}{\gamma^2}}} + O({{\frac{1}{\gamma^3}}})} \\ &{f_{s + 1}}=\\ &1 + \sum\limits_{i = 1}^{s + 1} {{a_i}{\frac{1}{\gamma}} + \bigg(\sum\limits_{1 \le i < j \le s + 1} {{a_i}{a_j} - \sum\limits_{i = 1}^s {b_i^2} } \bigg){{\frac{1}{\gamma^2}}} + O({{\frac{1}{\gamma^3}}})}.
\end{aligned} \end{equation} As the matrix $\mathbf{A}$ is tridiagonal, the cofactor expansion can be used to calculate the determinant. When $k=s+2$, we have the following recurrence relation \begin{equation} \label{eq:recur high snr} {f_{s + 2}} = ({\frac{1}{\gamma}}{a_{s + 2}} + 1){f_{s + 1}} - {{\frac{1}{\gamma^2}}}b_{s + 1}^2{f_s}. \end{equation} Substituting~\eqref{eq:assumption} into~\eqref{eq:recur high snr}, we have \begin{equation} \begin{aligned} &{f_{s + 2}} = \bigg({a_{s + 2}}\sum\limits_{i = 1}^{s + 1} {{a_i}} + \sum\limits_{1 \le i < j \le s + 1} {{a_i}{a_j} - \sum\limits_{i = 1}^s {b_i^2} } \bigg){{\frac{1}{\gamma^2}}} \\ &+ \bigg({a_{s + 2}} + \sum\limits_{i = 1}^{s + 1} {{a_i}} \bigg){\frac{1}{\gamma}} + 1 - {{\frac{1}{\gamma^2}}}b_{s + 1}^2 + O({{\frac{1}{\gamma^3}}})\\ =& 1 + \sum\limits_{i = 1}^{s + 2} {{a_i}{\frac{1}{\gamma}} + \bigg(\sum\limits_{1 \le i < j \le s + 2} {{a_i}{a_j} - \sum\limits_{i = 1}^{s + 1} {b_i^2} } \bigg){{\frac{1}{\gamma^2}}} + O({{\frac{1}{\gamma^3}}})}. \end{aligned} \end{equation} This shows that the statement for $f_{s+2}$ also holds, establishing the inductive step. Since both the base cases and the inductive step hold, we conclude that $f_k$ in~\eqref{eq:f_m} holds for every $k$ with $1\le k\le m$. \section{Proof of the Joint Density Function of Sampling Interval and System Time} \label{appendix:mm1 joint PDF} In the FCFS M/M/1 queueing system, the variables $S_n$, $W_{n+1}$ and $T_{n+1}$ are mutually independent; thus their joint PDF can be obtained as \begin{equation} \begin{aligned} {f_{{S_n},W,T}}({s_n},w,t) &= {f_{{S_n}}}({s_n}){f_W}(w){f_T}(t) \\ &= \lambda \mu (\mu - \lambda ){e^{ - \lambda t - \mu w - (\mu - \lambda )s_n}}. \end{aligned} \end{equation} The system time of the $(n+1)$th update, $S_{n+1}$, can be expressed as \begin{equation} {S_{n + 1}} = {({S_n} - {T_{n + 1}})^ + } + {W_{n + 1}} \end{equation} where the non-negative term represents the waiting time.
Therefore, the joint PDF of $T_{n+1}$ and $S_{n+1}$ can be obtained as \begin{multline} {f_{T,S}}(t,s) = \int_0^{ + \infty } {{f_{{S_n},W,T}}({s_n},s - {{({s_n} - t)}^ + },t)} \operatorname{d}\!{s_n}\\ = \lambda \mu (\mu - \lambda ){e^{ - \lambda t}}\bigg(\int_0^t {{e^{ - \mu s - (\mu - \lambda ){s_n}}}} \operatorname{d}\!{s_n} \\ + \int_t^{s + t} {{e^{ - \mu (s - {s_n} + t) - (\mu - \lambda ){s_n}}}} \operatorname{d}\!{s_n}\bigg)\\ = \lambda \mu {e^{ - \lambda t - \mu s}} - {\mu ^2}{e^{ - \mu (t + s)}} + \mu (\mu - \lambda ){e^{ - \mu t - (\mu - \lambda )s}}. \end{multline} \end{appendices} \bibliographystyle{IEEEtran}
\section{\label{}} \section{Introduction} The standard model (SM) fails to provide an explanation for the baryon-to-photon ratio in the present universe, $\eta_B^{\rm obs} \simeq 6 \times 10^{-10}$~\cite{Ade:2013zuv}, which serves as a major indication for new physics. Consequently, some new dynamical mechanism must be responsible for baryogenesis, i.e., the generation of a primordial baryon-antibaryon asymmetry in the early universe~\cite{Dolgov:1997qr}. Most mechanisms proposed in the literature are devised so as to satisfy the three famous Sakharov conditions~\cite{Sakharov:1967dj}: (i) violation of baryon number $B$ (or lepton number $L$ in the case of leptogenesis), (ii) violation of $C$ as well as of $CP$ invariance, and (iii) departure from thermal equilibrium. As it turns out, it is, however, not mandatory to fulfill these three conditions in order to successfully generate a baryon asymmetry. The point is that Sakharov's conditions are based on the assumption of $CPT$ invariance, which means that one is actually able to circumvent them in case $CPT$ is spontaneously broken. This idea has been pioneered by Cohen and Kaplan in their scenario of \textit{spontaneous baryogenesis}~\cite{Cohen:1987vi}, where the baryon asymmetry is generated in thermal equilibrium; and since then, it has been studied and expanded upon by many authors~\cite{Dine:1990fj,Kuzmin:1992up,Dolgov:1994zq,Chiba:2003vp}. For example, Kusenko et al.\ have recently shown how the $CPT$ violation during the phase of SM Higgs relaxation after the end of inflation can be used for the realization of baryogenesis via leptogenesis~\cite{Kusenko:2014lra}. 
In this talk, I will draw upon this earlier work and demonstrate that it can be easily generalized to the case of generic axion-like scalar fields relaxing from large initial field values in the course of reheating; further details pertaining to our analysis can be found in our recent paper~\cite{Kusenko:2014uta} as well as in another forthcoming publication. In an expanding universe at nonzero temperature, $CPT$ invariance can be easily broken spontaneously by introducing a pseudoscalar field, $a(t,\vec{x})$, which couples derivatively to the fermion current $j^\mu$ in the Lagrangian, \begin{align} \mathcal{L} \supset \frac{1}{f_a} \, \partial_\mu a \, j^\mu \,, \quad j^\mu = \sum_f \bar{\psi}_f \gamma^\mu \psi_f \,, \label{eq:derivative} \end{align} with $f_a$ being some cut-off scale. Imposing spatial homogeneity, $a = a(t)$, and assuming that the classical background is given cause to evolve with nonzero velocity, $\dot{a} \neq 0$, (which is readily done in the early universe, as we will review shortly) this coupling turns into an effective chemical potential $\mu_{\rm eff}$ for the fermion number, \begin{align} \mathcal{L} \supset \frac{1}{f_a} \, \dot{a} \, j^0 = \mu_{\rm eff}\, n_F \,, \quad \mu_{\rm eff} = \frac{\dot{a}}{f_a} \,, \quad j^0 \equiv n_F = n_f - n_{\bar{f}}\,, \label{eq:mueff} \end{align} which shifts the energy levels of fermions $f$ and antifermions $\bar{f}$ w.r.t.\ each other. In thermal equilibrium, the minimum of the free energy is therefore obtained for a nonzero fermion-antifermion asymmetry $n_F$, \begin{align} n_{f,\bar{f}}^{\rm eq} \sim T^3 \left(1 \pm \frac{\mu_{\rm eff}}{T}\right) \,, \quad n_F^{\rm eq} = n_f^{\rm eq} - n_{\bar{f}}^{\rm eq} \sim \mu_{\rm eff}\, T^2 \,. \end{align} As observed by Cohen and Kaplan, this result may serve as a basis for the successful generation of the baryon asymmetry. 
However, in order to arrive at a realistic model, one first of all has to address three important questions: (i) what is the nature of the field $a$ and the origin of the derivative coupling in Eq.~\eqref{eq:derivative}, (ii) how is the field $a$ set in motion, and (iii) what kind of interactions drive the number density $n_F$ towards its equilibrium value $n_F^{\rm eq}$? In the following, I shall discuss each of these issues in turn, cf.\ Sec.~\ref{sec:mechanism}, which will eventually lead us to an interesting alternative to thermal leptogenesis~\cite{Fukugita:1986hr}. In Sec.~\ref{sec:parameters}, I will then sketch the parameter dependence of the final baryon asymmetry in our model; and in Sec.~\ref{sec:conclusions}, I will conclude and give a brief outlook. \section{Novel axion-driven leptogenesis mechanism} \label{sec:mechanism} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{eta1.pdf}\hspace{0.05\textwidth} \includegraphics[width=0.45\textwidth]{eta2.pdf} \caption{Evolution of the (instantaneous) baryon asymmetry (projected onto its would-be present-day value) as a function of time for $m_a \gg \Gamma_\varphi$ \textbf{(left panel)} and $m_a \ll \Gamma_\varphi$ \textbf{(right panel)}. Here, $f_a = 3 \times 10^{14}\,\textrm{GeV}$ in both panels. \label{fig:interplay}} \end{figure} Let us integrate the derivative interaction in Eq.~\eqref{eq:derivative} by parts, $\mathcal{L} \supset 1/f_a\, \partial_\mu a \, j^\mu \rightarrow - a/f_a\, \partial_\mu j^\mu$. This illustrates that the $CPT$-violating coupling required for baryogenesis is equivalent to a coupling of the pseudoscalar $a$ to the divergence of the fermion current $j^\mu$. The field $a$ is thus naturally identified as an axion-like field, or simply ``axion''~\cite{Peccei:1977hh}, which couples to the anomaly of the fermion number $F = 3\,B + L$. 
From this point of view, the cut-off scale $f_a$ in Eq.~\eqref{eq:derivative} is immediately recognized as the decay constant of the axion field $a$. Moreover, we note that the electroweak anomalies of the baryon number $U(1)_B$ and lepton number $U(1)_L$ in the SM allow us to recast the axion coupling to $\partial_\mu j^\mu$ as a coupling to the electroweak field strength tensors $W_{\mu\nu}$ and $B_{\mu\nu}$, \begin{align} \mathcal{L} \supset - \frac{a}{f_a}\,\partial_\mu j^\mu \rightarrow \frac{a}{f_a} \frac{N_f}{8\pi^2} \left(g_2^2\, W_{\mu\nu}\tilde{W}^{\mu\nu} - g_1^2\, B_{\mu\nu}\tilde{B}^{\mu\nu} \right) \,, \label{eq:aFF} \end{align} where $N_f = 3$ is the number of SM fermion generations and with $g_2$ and $g_1$ denoting the electroweak gauge couplings. Interactions of this form may, for instance, arise in string theory, which always features at least one (model-independent) axion~\cite{Witten:1984dg} associated with the Green-Schwarz mechanism of anomaly cancellation~\cite{Green:1984sg}. This axion couples to all gauge groups with universal strength $f_a \sim 10^{16}\,\textrm{GeV}$~\cite{Choi:1985je}. Besides that, string theory may also give rise to a multitude of further axions fields coupling to different gauge groups with nonuniversal strength~\cite{Witten:1984dg,Witten:1985fp}. The couplings of these model-dependent axions are then determined by the gauge structure as well as the details of the compactification. For our purposes, the upshot of these considerations is that a certain linear combination of stringy axions may very well end up coupling to $F\tilde{F}$, where $F = W,B$. In the following, we shall therefore identify the field $a$ in Eq.~\eqref{eq:derivative} with just this linear combination and take the above string-based argument to be the origin of the coupling $\mathcal{L} \supset a/f_a\,F\tilde{F} \leftrightarrow 1/f_a\, \partial_\mu a \, j^\mu \leftrightarrow \mu_{\rm eff} \, n_F$ in the Lagrangian. 
The dynamics of the axion background in the early universe are governed by its classical equation of motion, \begin{align} \ddot{a} + 3 \, H \, \dot{a} = - \partial_a V_{\rm eff}(a) \,, \quad V_{\rm eff} \simeq \frac{1}{2} m_a^2 a^2 \,, \label{eq:aEOM} \end{align} where the effective potential $V_{\rm eff}$ and hence the axion mass $m_a \simeq \Lambda_H^2 / f_a$ may, for instance, originate from instanton effects in a strongly coupled hidden sector featuring a dynamical scale $\Lambda_H$. Assuming that the PQ-like symmetry associated with the flat axion direction is broken sufficiently early before the end of inflation (and not restored afterwards), the initial axion field value $a_0 = (0\cdots 2\pi) f_a$ becomes constant on superhorizon scales. For definiteness, we shall therefore take $a_0 / f_a$ to be $1$ in the entire observable universe at the end of inflation. As we will see shortly, the baryon asymmetry produced during reheating is going to depend on $a_0$. Because of that, we have to ensure that the baryonic isocurvature perturbations induced by the quantum fluctuations $\delta a$ of the axion field around its homogeneous background $a_0$ remain below the observational bound~\cite{isocurvature}. This constrains the Hubble rate $H_{\rm inf}$ during inflation: $H_{\rm inf}/(2\pi a_0) \lesssim 10^{-5}$ or equivalently $H_{\rm inf} \lesssim 6\times 10^{10}\,\textrm{GeV} \left(f_a / 10^{15}\,\textrm{GeV}\right)$. At the same time, we have to require that $m_a \lesssim H_{\rm inf}$, so that during inflation the Hubble friction term on the left-hand side of Eq.~\eqref{eq:aEOM} outweighs the potential gradient on the right-hand side. After the end of inflation, the Hubble rate then begins to drop, until, around $H \simeq m_a$, the axion begins to coherently oscillate around the minimum of its effective potential, $a = 0$, with frequency $\omega = m_a$.
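The freeze-and-oscillate behaviour described above can be reproduced with a few lines of numerical integration. The following sketch is not part of the original analysis; it assumes radiation domination, $H = 1/(2t)$, and works in units $m_a = 1$, $a_0 = 1$. It integrates Eq.~\eqref{eq:aEOM} and exhibits the frozen field at $H \gg m_a$ as well as the damped oscillations once $H \lesssim m_a$:

```python
import numpy as np

def evolve_axion(m_a=1.0, a0=1.0, t0=0.01, t1=200.0, n=400_000):
    """Integrate  a'' + 3 H a' = -m_a^2 a  with H = 1/(2t) (radiation
    domination), starting from a frozen field a(t0) = a0, a'(t0) = 0."""
    t = np.linspace(t0, t1, n)
    dt = t[1] - t[0]
    a, v = np.empty(n), np.empty(n)
    a[0], v[0] = a0, 0.0
    for i in range(n - 1):
        H = 0.5 / t[i]
        acc = -3.0 * H * v[i] - m_a**2 * a[i]
        v[i + 1] = v[i] + acc * dt          # semi-implicit (symplectic) Euler
        a[i + 1] = a[i] + v[i + 1] * dt
    return t, a, v
```

While $H \gg m_a$ the field (and hence $\dot a$, i.e.\ $\mu_{\rm eff}$) barely moves; after $t \sim m_a^{-1}$ the field oscillates with an envelope that redshifts away, matching the qualitative picture used in the text.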
During this stage of \textit{axion relaxation}, the axion field therefore evolves with nonzero velocity in its potential, which temporarily induces an effective chemical potential for the fermion number, as anticipated at the beginning of this talk, cf.\ Eq.~\eqref{eq:mueff}. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{eta_contours.pdf}\hspace{0.05\textwidth} \includegraphics[width=0.45\textwidth]{fa_contours.pdf} \caption{\textbf{(Left panel)} Contour plot of the final baryon asymmetry $\eta_B^0$. The black (bent) contours represent the full numerical result, while the colorful (straight) contours depict the analytical estimate according to Eqs.~\eqref{eq:etaB}, \eqref{eq:etaL} and \eqref{eq:Delta}. The effect of washout is illustrated by the difference between the dashed ($\kappa = 0$) and solid ($\kappa \neq 0$) lines. \textbf{(Right panel)} Contour lines of successful leptogenesis ($\eta_B^0 = \eta_B^{\rm obs}$) for different values of $f_a$. The dashed segments along the individual contours indicate where either $m_a$ or $\Gamma_\varphi$ becomes comparable to the maximally allowed Hubble rate, $H_{\rm inf}^{\rm max} \simeq 2\pi \,10^{-5}\,f_a$. \label{fig:scan}} \end{figure} In order to make use of $\mu_{\rm eff} = \dot{a}/f_a$ for the purposes of baryogenesis, it is important that there be an \textit{external source} of baryon or lepton number violation that proceeds at a much faster rate $\Gamma$ than the motion of the axion field in its effective potential, $\Gamma \gg \dot{a}/a$. Only then do the axion oscillations act as an adiabatic background, so that $\dot{a}/f_a$ can be interpreted as an effective chemical potential~\cite{Dolgov:1994zq}. 
Here, a minimal choice to satisfy this requirement is to rely on $L$ violation through the $s$- and $t$-channel exchange of heavy Majorana neutrinos $N_i$, \begin{align} \Delta L = 2 : \quad \ell_i\ell_j \leftrightarrow N_k^* \leftrightarrow HH \,, \:\: \ell_i H \leftrightarrow N_k^* \leftrightarrow \bar{\ell}_j\bar{H} \,, \quad \ell_i^T = \begin{pmatrix} \nu_i & e_i \end{pmatrix} \,,\:\: H^T = \begin{pmatrix} h_+ & h_0 \end{pmatrix} \,, \:\: i,j,k = 1,2,3 \,. \label{eq:scatterings} \end{align} These processes are guaranteed to be present in the bath as long as one believes in the seesaw mechanism as the correct explanation for the small neutrino masses in the SM~\cite{seesaw}. In order to separate the leptogenesis mechanism under study from the contributions of ordinary thermal leptogenesis, we shall assume that all right-handed neutrinos $N_i$ acquire Majorana masses $M_i$ close to the scale of grand unification (GUT), $M_i \sim \mathcal{O}\left(0.1\cdots 1\right)\Lambda_{\rm GUT} \sim 10^{15}\cdots 10^{16}\, \textrm{GeV}$, so that none of them is ever actually thermally produced. For center-of-mass energies $\sqrt{s}\ll M_i$, the thermally averaged cross section of the $\Delta L = 2$ lepton-Higgs scattering processes in Eq.~\eqref{eq:scatterings}, $\sigma_{\rm eff} \equiv \left<\sigma_{\Delta L = 2}\, v\right>$, is then practically fixed~\cite{Buchmuller:2004nz} by the experimental data on the SM neutrino sector~\cite{Agashe:2014kda}, \begin{align} \sigma_{\rm eff} \approx \frac{3}{32\pi} \frac{\bar{m}^2}{v_{\rm ew}^4} \simeq 1\times10^{-31} \,\textrm{GeV}^{-2} \,, \quad \bar{m}^2 = \sum_{i=1}^3 m_i^2 \approx \Delta m_{\rm atm}^2 \simeq 2.4 \times 10^{-3} \,\textrm{eV}^2 \,, \quad v_{\rm ew} \simeq 174 \,\textrm{GeV} \,.
\end{align} Correspondingly, the evolution of the $L$ number density $n_L$ is described by the following Boltzmann equation, \begin{align} \dot{n}_L + 3\,H\,n_L \simeq -\Gamma_L \left(n_L - n_L^{\rm eq}\right) \,, \quad \Gamma_L = 4\, n_\ell^{\rm eq} \sigma_{\rm eff} \,, \quad n_\ell^{\rm eq} = \frac{2}{\pi^2} T^3 \,, \quad n_L^{\rm eq} = \frac{4}{\pi^2} \mu_{\rm eff} T^2 \,, \label{eq:Lboltz} \end{align} where we have approximated the lepton and $L$ number densities in thermal equilibrium, $n_\ell^{\rm eq}$ and $n_L^{\rm eq}$, by their corresponding expressions in the classical Boltzmann approximation. Notice that the production term on the right-hand side of this equation, $\Gamma_L n_L^{\rm eq} \propto \sigma_{\rm eff} \,\mu_{\rm eff}\,T^5$, is largely independent of the details of the neutrino sector. In particular, it depends neither on the amount of $CP$ violation in the neutrino sector nor on the heavy neutrino mass spectrum. At the same time, it increases quadratically with the light neutrino mass scale $\bar{m}$. The usual bound on this mass scale from thermal leptogenesis, $\bar{m}\lesssim 0.2\,\textrm{eV}$ (which ensures that dangerous washout processes do not become too strong~\cite{Buchmuller:2005eh}), hence does not apply in our scenario. The absolute neutrino mass scale will soon be probed experimentally (on earth~\cite{Drexlin:2013lha} as well as in the sky~\cite{Abazajian:2011dt}). This entails the intriguing possibility of testing our model against the conventional scenario of thermal leptogenesis in the near future. \section{Parameter dependence of the final baryon asymmetry} \label{sec:parameters} Subsequent to its generation (according to Eq.~\eqref{eq:Lboltz}), the lepton asymmetry is partly converted into a baryon asymmetry by means of electroweak sphalerons.
The final present-day baryon asymmetry $\eta_B^0$ is then given as \begin{align} \eta_B^0 = \frac{n_B^0}{n_\gamma^0} = c_{\rm sph} \, \frac{g_{*,s}^0}{g_*} \, \eta_L^a \simeq 0.013 \, \eta_L^a \,, \quad c_{\rm sph} = \frac{28}{79} \,, \quad g_{*,s}^0 = \frac{43}{11} \,, \quad g_* = \frac{427}{4} \,, \label{eq:etaB} \end{align} where $c_{\rm sph}$, $g_*$ and $g_{*,s}^0$ denote the SM sphaleron conversion factor as well as the effective number of relativistic degrees of freedom at high and low temperatures, respectively. Moreover, $\eta_L^a$ represents the final lepton asymmetry after the late-time decay of the axion field at times around $t \sim \Gamma_a^{-1}$, where $\Gamma_a \simeq g_2^2/(256\,\pi^4)\, m_a^3/f_a^2$. In general, $\eta_L^a$ does not correspond to the equilibrium number density at the same time, $\eta_L^{\rm eq}$, as the efficiency of leptogenesis typically shuts off before the equilibrium value is actually reached, so that $\eta_L^a \ll \eta_L^{\rm eq}$. For fixed values of $a_0$ and $f_a$, the final lepton asymmetry ends up depending on two parameters: the axion mass $m_a$ as well as the reheating temperature $T_{\rm rh}$, which is, in turn, determined by the inflaton decay rate, $T_{\rm rh} \simeq 0.3\,\sqrt{\Gamma_\varphi M_{\rm Pl}}$. We infer the precise parameter dependence of $\eta_L^a$ by numerically solving Eqs.~\eqref{eq:aEOM} and \eqref{eq:Lboltz} together with the Friedmann equation for the scale factor as well as the Boltzmann equations for the number densities of inflaton particles and relativistic SM particles, respectively.
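Both numerical shortcuts quoted so far, the conversion factor $0.013$ in Eq.~\eqref{eq:etaB} and the effective cross section $\sigma_{\rm eff} \simeq 1\times10^{-31}\,\textrm{GeV}^{-2}$ from the previous section, can be verified in a couple of lines (a quick cross-check, not part of the original analysis):

```python
from math import pi

# Conversion factor in Eq. (etaB): c_sph * g_{*,s}^0 / g_*
c_sph = 28 / 79       # SM sphaleron conversion factor
g_s0 = 43 / 11        # entropy degrees of freedom today
g_star = 427 / 4      # relativistic degrees of freedom at high temperature
conversion = c_sph * g_s0 / g_star             # ~ 0.013

# Effective Delta L = 2 cross section: 3/(32 pi) * mbar^2 / v_ew^4
mbar2 = 2.4e-3 * (1e-9) ** 2                   # GeV^2 (Delta m_atm^2, 1 eV = 1e-9 GeV)
v_ew = 174.0                                   # GeV
sigma_eff = 3 / (32 * pi) * mbar2 / v_ew ** 4  # ~ 1e-31 GeV^-2
```

The cross section evaluates to roughly $0.8\times10^{-31}\,\textrm{GeV}^{-2}$, consistent with the quoted order-of-magnitude value.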
As it turns out, our exact numerical result can be very well reproduced by the following analytical expression, cf.\ also the left panel of Fig.~\ref{fig:scan}, \begin{align} \eta_L^a = C \, \Delta_a^{-1} \Delta_\varphi^{-1}\,\eta_L^{\rm max}\, e^{-\kappa} \,, \quad \eta_L^{\rm max} \simeq \frac{\sigma_{\rm eff}}{g_*^{1/2}} \frac{a_0}{f_a} \, m_a \, M_{\rm Pl} \times \min\left\{1,\,\left(\Gamma_\varphi/m_a\right)^{1/2}\right\} \,, \label{eq:etaL} \end{align} with $C$ being a numerical fudge factor of $\mathcal{O}(1)$. $\eta_L^{\rm max}$ denotes the all-time maximum of the lepton asymmetry, which is reached around the time when the axion oscillations set in, i.e., at $t \sim t_{\rm osc} \simeq m_a^{-1}$. Note that it is rather insensitive to both $a_0$ and $f_a$, as it only depends on the ratio $a_0/f_a$, which is expected to be of $\mathcal{O}(1)$. Furthermore, $\Delta_\varphi$ and $\Delta_a$ account for the dilution of $\eta_L^{\rm max}$ in the course of inflaton and axion decays, respectively, \begin{align} \Delta_\varphi \simeq \max\left\{1,\,\left(m_a/\Gamma_\varphi\right)^{5/4}\right\} \,, \quad \Delta_a \simeq \max\left\{1,\,\frac{8\pi^3}{g_2^2}\frac{f_a\, a_0^2}{m_a\, M_{\rm Pl}^2} \times \min\left\{1,\,\left(\Gamma_\varphi/m_a\right)^{1/2}\right\}\right\} \,. \label{eq:Delta} \end{align} Here, $\Delta_\varphi$ reflects the interplay between leptogenesis and reheating, cf.\ Fig.~\ref{fig:interplay}. For $m_a \gtrsim \Gamma_\varphi$, the axion begins to oscillate before the end of reheating and the initial asymmetry becomes diluted due to the entropy production in inflaton decays. For $m_a \lesssim \Gamma_\varphi$, on the other hand, the axion oscillations only set in after the end of reheating and the final asymmetry becomes independent of the inflaton decay rate. Meanwhile, $\Delta_a$ begins to play a role for $f_a$ values around $3\times10^{13}\,\textrm{GeV}$, cf.\ the right panel of Fig.~\ref{fig:scan}.
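The piecewise structure of Eqs.~\eqref{eq:etaL} and \eqref{eq:Delta} is easy to encode. The sketch below is a rough transcription of the analytical estimate, not the full numerical solution; the values chosen for $M_{\rm Pl}$, $g_2^2$, $C$ and $\kappa$ are illustrative assumptions. It makes the two regimes of Fig.~\ref{fig:interplay} explicit: for $\Gamma_\varphi \gtrsim m_a$ the result is independent of $\Gamma_\varphi$, while for $m_a \gtrsim \Gamma_\varphi$ the asymmetry is diluted by $\Delta_\varphi$:

```python
from math import pi, exp

def eta_L_analytic(m_a, Gamma_phi, f_a, a0=None, sigma_eff=1e-31,
                   g_star=427 / 4, M_Pl=2.4e18, g2_sq=0.42,
                   C=1.0, kappa=0.0):
    """Analytical estimate of the final lepton asymmetry, Eqs. (etaL)/(Delta).
    All dimensionful quantities in GeV; C and kappa are treated as inputs."""
    if a0 is None:
        a0 = f_a                                  # a0/f_a = 1, as in the text
    supp = min(1.0, (Gamma_phi / m_a) ** 0.5)     # late-oscillation suppression
    eta_max = sigma_eff / g_star ** 0.5 * (a0 / f_a) * m_a * M_Pl * supp
    Delta_phi = max(1.0, (m_a / Gamma_phi) ** 1.25)       # dilution by reheating
    Delta_a = max(1.0, 8 * pi ** 3 / g2_sq * f_a * a0 ** 2
                  / (m_a * M_Pl ** 2) * supp)             # dilution by axion decay
    return C / (Delta_a * Delta_phi) * eta_max * exp(-kappa)
```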
For smaller values of $f_a$, we always have $\Delta_a = 1$ in the entire parameter region of interest. The factor $e^{-\kappa}$, finally, accounts for the efficiency of the washout term, $-\Gamma_L n_L$, on the right-hand side of Eq.~\eqref{eq:Lboltz}. For $m_a \gtrsim \Gamma_\varphi$, $\kappa$ can be roughly estimated as $\kappa \sim T_{\rm rh}/T_L$, where $T_L \sim 1/\left(\sigma_{\rm eff} M_{\rm Pl}\right) \sim 10^{13}\,\textrm{GeV}$ is the typical temperature scale of leptogenesis, while, for $m_a \lesssim \Gamma_\varphi$, we have $\kappa \sim 1$. A better analytical understanding of washout in our scenario is, however, still pending. Successful leptogenesis restricts the axion decay constant $f_a$ to take a value within the following range, \begin{align} 4\times 10^{10}\,\textrm{GeV} \lesssim f_a \lesssim 4\times 10^{15}\,\textrm{GeV} \,, \end{align} which translates into allowed ranges for $m_a$, $\Gamma_\varphi$ and $T_{\rm rh}$, cf.\ the right panel of Fig.~\ref{fig:scan}, all of which are consistent with typical string axion models. We note that, for smaller values of $f_a$, it is not possible to generate a sufficiently large baryon asymmetry while keeping the baryonic isocurvature perturbations small enough. Likewise, for larger values of $f_a$, the dilution of the asymmetry in the late-time decay of the axion is too strong. \section{Conclusions and outlook} \label{sec:conclusions} While thermal leptogenesis typically operates at $T_{\rm rh} \sim 10^{10}\,\textrm{GeV}$, the requirement of a large rate of $L$ violation, $\Gamma_L \gg H$, pushes $T_{\rm rh}$ to values of at least $\mathcal{O}\left(10^{12}\right)\,\textrm{GeV}$ in our scenario. Furthermore, our final baryon asymmetry turns out to be independent of the amount of $CP$ violation in the neutrino sector as well as of the $N_i$ mass spectrum. On top of that, the usual bound on $\bar{m}$ from thermal leptogenesis does not apply in our case.
The presented model should therefore be regarded as an attractive alternative to thermal leptogenesis, in case the latter should begin to look less favorable from the experimental point of view. Beyond that, further work is needed: it remains, for instance, to be seen how the required high $T_{\rm rh}$ could possibly be accommodated in a supersymmetric version of our model. A further intriguing question, which we are currently investigating, is whether the role of the axion field $a$ could equally be played by the inflaton. This would result in an even more minimal scenario. \begin{acknowledgments} I wish to thank A.~Kusenko and T.~T.~Yanagida for many helpful discussions and the fruitful collaboration on our joint paper; I wish to thank the organizers of \textit{HPNP 2015} for a wonderfully organized workshop; and I wish to thank all participants of \textit{HPNP 2015} for an exciting week in Toyama. In particular, I am grateful to T.~T.~Yanagida for generous travel support through Grants-in-Aid for Scientific Research from the Ministry of Education, Science, Sports, and Culture (MEXT), Japan, \#26104009 and \#26287039. In addition, this work has been supported in part by the World Premier International Research Center Initiative (WPI), MEXT, Japan. \end{acknowledgments} \bigskip
\section{Introduction} The electromagnetic form factors in both the space-like $(Q^2>0)$ and time-like $(Q^2<0)$ regions are essential to understand the intrinsic structure of hadrons. The experimental data on elastic form factors accumulated over several decades, including recent high-precision measurements at Jefferson Lab \cite{PRL-1398,PRL-092301} and elsewhere \cite{PRD-5491}, have provided considerable insight into the detailed structure of the nucleon. Generally, in the Born amplitude for one-photon exchange, the proton current operator is parameterized in terms of the Dirac $(F_1)$ and Pauli $(F_2)$ form factors, \begin{eqnarray} \Gamma_{\mu}=F_1(q^2)\gamma_{\mu}+i\frac{F_2(q^2)}{2 m_N} \sigma_{\mu \nu} q^{\nu}, \label{ff0}% \end{eqnarray} where $q$ is the momentum transfer to the nucleon and $m_N$ is the nucleon mass. The resulting differential cross section depends on two kinematic variables, conventionally taken to be $Q^2\equiv -q^2$ (or $\tau$; in order to be consistent with the time-like region, we take $\tau \equiv q^2/(4 m_N^2)$ rather than $\tau \equiv Q^2/(4 m_N^2)$) and the scattering angle $\theta_e$ (or the virtual photon polarization $\varepsilon \equiv [1+ 2(1-\tau) \tan^2 (\theta_e/2)]^{-1}$). The reduced Born cross section, in terms of the Sachs electric and magnetic form factors, is \begin{eqnarray} \frac{d\sigma}{d\Omega}=C(Q^2,\varepsilon) \left[G_M^2(Q^2)-\frac{\varepsilon}{\tau} G_E^2(Q^2)\right]. \label{cs0}% \end{eqnarray} \par% The standard method that has been used to determine the electric and magnetic form factors, particularly those of the proton, is the Rosenbluth, or longitudinal-transverse (LT), separation method. The results of the Rosenbluth measurements for the proton form factor ratio $R=\mu_p G_E/G_M$ have generally been consistent with $R \approx 1$ for $Q^2\leq 6\,\textrm{GeV}^2$ \cite{PRD-5671,PRC-034325,PRC-015206}.
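The LT separation just described amounts to a linear fit in $\varepsilon$. The following sketch (with purely illustrative form-factor values, the kinematic factor $C(Q^2,\varepsilon)$ divided out, and the document's convention $\tau = q^2/(4 m_N^2) < 0$ in the space-like region) generates a reduced cross section from assumed form factors and recovers them from the slope and intercept:

```python
import numpy as np

m_N = 0.938  # GeV, nucleon mass

def reduced_cross_section(Q2, eps, GE, GM):
    """Reduced Born cross section (C factored out), Eq. (cs0),
    with tau = q^2/(4 m_N^2) = -Q^2/(4 m_N^2) in the space-like region."""
    tau = -Q2 / (4 * m_N ** 2)
    return GM ** 2 - eps / tau * GE ** 2

# Toy "data": illustrative form-factor values at fixed Q^2 (not measurements)
Q2, GE_true, GM_true = 3.0, 0.25, 0.70
eps = np.linspace(0.1, 0.9, 9)
sigma_R = reduced_cross_section(Q2, eps, GE_true, GM_true)

# LT separation: sigma_R is linear in eps; slope -> GE^2, intercept -> GM^2
slope, intercept = np.polyfit(eps, sigma_R, 1)
tau_abs = Q2 / (4 * m_N ** 2)
GE_fit = np.sqrt(slope * tau_abs)
GM_fit = np.sqrt(intercept)
```

This also makes visible why the method loses sensitivity at large $Q^2$: the $G_E^2$ contribution enters only through the slope, suppressed by $1/|\tau|$.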
The ``Super-Rosenbluth'' experiment at Jefferson Lab \cite{PRL-142301}, in which very small systematic errors were achieved by detecting the recoiling proton rather than the electron, is also consistent with the earlier LT results. It should be mentioned that polarized lepton beams give another way to access the form factors \cite{SP-588}. In the Born approximation, the polarization of the recoiling proton along its motion $(p_l)$ is proportional to $G_M^2(Q^2)$, while the component perpendicular to the motion $(p_t)$ is proportional to $G_E(Q^2)G_M(Q^2)$. The form factor ratio $R$ can then be determined through a measurement of $p_t/p_l$, with \begin{eqnarray} \frac{p_t}{p_l}=-\sqrt{-\frac{2\varepsilon}{\tau(1+\varepsilon)}} \frac{G_E(Q^2)}{G_M(Q^2)} \ . \end{eqnarray} This method has been applied only recently at Jefferson Lab \cite{PRL-1398}, since it requires high-intensity polarized beams, large solid-angle spectrometers, and advanced techniques of polarimetry in the GeV range. The measurement of the electron-to-proton polarization transfer in $\vec{e}^{\ -} + p \rightarrow e^{-}+ \vec{p}$ shows that the ratio of Sachs form factors \cite{PR-2256,NC-821} decreases monotonically with increasing $Q^2$, which strongly contradicts the scaling ratio determined by the traditional Rosenbluth separation method \cite{PR-615}. In order to explain the discrepancy, radiative corrections, especially the two-photon exchange contribution, have been invoked \cite{PRC-054320, PRL-142303, PRL-142304, PRL-172503,PRL-122301, PRC-065203, PRC-038202}. In Ref.~\cite{PRL-142304}, where only the intermediate proton state is considered, it is found that the two-photon corrections have the proper sign and magnitude to resolve a large part of the discrepancy between the two experimental techniques. Furthermore, Ref.~\cite{PRL-172503} considered the intermediate $\Delta^+$ state as well as the proton.
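In the same toy spirit, the polarization-transfer method can be sketched by evaluating the $p_t/p_l$ formula above and inverting it for the form-factor ratio. Again the numerical inputs are illustrative, not data, and the document's sign convention $\tau = q^2/(4 m_N^2) < 0$ for space-like $q^2$ is assumed:

```python
from math import sqrt

m_N = 0.938  # GeV, nucleon mass

def pt_over_pl(Q2, eps, GE, GM):
    """Polarization-transfer ratio in the Born approximation, with
    tau = q^2/(4 m_N^2) < 0 in the space-like region."""
    tau = -Q2 / (4 * m_N ** 2)
    return -sqrt(-2 * eps / (tau * (1 + eps))) * GE / GM

def ratio_from_polarization(Q2, eps, pt_pl):
    """Invert the formula above to recover G_E/G_M from a measured p_t/p_l."""
    tau = -Q2 / (4 * m_N ** 2)
    return -pt_pl / sqrt(-2 * eps / (tau * (1 + eps)))
```

Note that only the single ratio $p_t/p_l$ is needed, which is why the method is less sensitive to normalization systematics than the LT separation.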
In Ref.~\cite{PRL-122301} a partonic calculation of the two-photon exchange contribution to the form factors is given. It is concluded that for $Q^2$ in the range of $2 \sim 3\,\textrm{GeV}^2$, the ratio extracted using the LT method including the two-photon corrections agrees well with the polarization transfer results. Consequently, the two-photon exchange corrections can, at least partly, explain the discrepancy between the two separation methods. \par% For a stable hadron, the form factors in the space-like region are real, while its time-like form factors have a phase structure reflecting the final-state interactions of the outgoing hadrons; the time-like form factors are therefore complex. So far, the experimental data in this region are not as precise as those in the space-like one. From the theoretical point of view, it seems unavoidable to check the two-photon exchange contribution to the nucleon form factors in the time-like region as well. Actually, some work has been done. Refs.~\cite{PRC-042202, NPA-120, PLB-197} employed general arguments based on crossing symmetry for the processes $e^{-} + h \rightarrow e^{-} +h$ and $e^{+} + e^{-} \rightarrow h + \bar{h}$, and derived general expressions for the polarization observables of the reaction $\bar{p} + p \rightarrow e^{+} + e^{-}$ in terms of three independent complex amplitudes in the presence of two-photon exchange. Ref.~\cite{PLB-197} also searched for experimental evidence of the two-photon exchange in the data on $e^{+} + e^{-} \rightarrow p+\bar{p}+\gamma$. However, a negative conclusion was obtained at the present level of precision: the total contribution of the radiative corrections to the angular asymmetry is under $2 \%$, while the asymmetry obtained from the experimental data is always compatible with zero, with a typical error of about $5 \%$. In this reference, the polarization observables are not discussed.
\par% In contrast to the above works, we calculate the two-photon exchange correction to the unpolarized differential cross section as well as to the double spin polarization observables. Some qualitative properties based on crossing symmetry and $C$-invariance are discussed in section $2$. The analytical forms of the unpolarized differential cross section and the polarization observables are presented in section $3$. In section $4$, we directly calculate the two-photon exchange contribution to the differential cross section and the polarization observables. In section $5$, some numerical results and discussions are given. \par% \section{Crossing Symmetry and C-invariance} In quantum field theory, crossing symmetry relates $S$-matrix elements of different processes. In general, the $S$-matrix for any process involving a particle with momentum $p$ in the initial state is equal to the $S$-matrix for an otherwise identical process but with an anti-particle of momentum $k=-p$ in the final state, that is, \begin{eqnarray} \mathcal{M}(\phi(p)+\cdot\cdot\cdot\rightarrow\cdot\cdot\cdot)= \mathcal{M}(\cdot\cdot\cdot\rightarrow\cdot\cdot\cdot+\bar{\phi}(k)), \end{eqnarray} where $\bar{\phi}$ stands for the anti-particle and $k = -p$. We notice that there is no realistic value of $p$ for which $p$ and $k$ are both physically allowed, so technically either amplitude is obtained from the other by analytic continuation. The crossing symmetry provides a relation between the scattering channel $e^{-} + p \rightarrow e^{-} + p$ and the annihilation channel $e^{+} + e^{-} \rightarrow p + \bar{p}$. In the one-photon approximation, as shown in Fig.~\ref{Fig-feyntree}, the crossing symmetry can be expressed by the following relation \begin{eqnarray} \overline{|\mathcal{M}(e^{-}p \rightarrow e^{-}p)|^2}=f(s,t)=\overline{|\mathcal{M}(e^{+} e^{-} \rightarrow p \bar{p})|^2}.
\label{csym}% \end{eqnarray} The line over $\mathcal{M}$ denotes the sum over the polarizations of all particles in the initial and final states. The Mandelstam variables $s$ and $t$ are defined as follows: \begin{eqnarray} s&=&(k_1+p_1)^2=m_N^2+2 E_1 m_N \geq m_N^2,\nonumber\\ t&=&(k_1-k_2)^2=q^2<0, \end{eqnarray} for the scattering channel (with $E_1$ being the energy of the incoming electron in the Lab frame), and \begin{eqnarray} s&=&(k_1-p_1)^2=m_N^2-2 \widetilde{\epsilon}^2+2 \widetilde{\epsilon} \sqrt{\widetilde{\epsilon}^2-m_N^2} \cos \theta \leq 0,\nonumber\\ t&=&(k_1+k_2)^2=4 \widetilde{\epsilon}^2>4 m_N^2, \end{eqnarray} for the annihilation channel, with $\widetilde{\epsilon}$ being the energy of the initial electron (or final proton) and $\theta$ being the hadron production angle. \par% Considering Lorentz, parity, time-reversal, and helicity conservation in the limit $m_e\rightarrow 0$, the $T$-matrix for the elastic scattering of two Dirac particles can be expanded in terms of three independent Lorentz structures. The proton current operator, expressed through these Lorentz structures \cite{SP-588}, is \begin{eqnarray} \Gamma_{\mu}=\widetilde{F}_1(s,t)\gamma_{\mu}+i\frac{\widetilde{F}_2(s,t)}{2 m_N} \sigma_{\mu \nu} q^{\nu} +\widetilde{F}_3(s,t) \frac{\gamma \cdot K P_{\mu}}{m_N^2}, \label{ff1}% \end{eqnarray} with \begin{eqnarray} P=\frac{1}{2} (p_2+p_1),\ \ \ \ \ K=\frac{1}{2} (k_1+k_2), \end{eqnarray} in the scattering channel, and \begin{eqnarray} P=\frac{1}{2} (p_2-p_1),\ \ \ \ \ K=\frac{1}{2} (k_1-k_2), \end{eqnarray} in the annihilation channel. In analogy to the Sachs form factors, we can recombine the form factors $\widetilde{F}_{1,2}$ as \begin{eqnarray} \widetilde{G}_E(q^2, \cos \theta) &=&\widetilde{F}_{1} (q^2, \cos \theta) +\tau \widetilde{F}_2(q^2, \cos \theta),\nonumber\\ \widetilde{G}_M(q^2, \cos \theta) &=&\widetilde{F}_{1} (q^2, \cos \theta) + \widetilde{F}_2(q^2, \cos \theta).
\label{sachs1}% \end{eqnarray} \par% Taking the proton current operator defined in Eq.~(\ref{ff1}), which includes the multi-photon exchange, we can express $f(s,t)$ in Eq.~(\ref{csym}) in the form: \begin{eqnarray} f(s,t)=\frac{8 e^4}{(4 m_N^2 -t)t}\Big\{8 |\widetilde{G}_E|^2 m_N^2 \big[m_N^4-2 s m_N^2 + s(s+t)\big]- |\widetilde{G}_M|^2 t\big[2 m_N^4 - 4 m_N^2(s+t)+2 s^2+t^2 +2 s t\big] \nonumber\\ -m_N^{-2} \big[2 m_N^6-m_N^4(6s+t)+2 m_N^2s(3s+2t)-s(2s^2+3ts+t^2) \big]Re\big[(4 m_N^2 \widetilde{G}_E-t \widetilde{G}_M)^{*}\widetilde{F}_3\big]\Big\}. \end{eqnarray} \par% In the one-photon mechanism for $e^{+} + e^{-} \rightarrow p + \bar{p}$, the conservation of the total angular momentum $\mathcal{J}$ allows only the value $\mathcal{J}=1$. This is due to the quantum numbers of the photon: $\mathcal{J}^p=1^-, C(1 \gamma)=-1$. This selection rule, combined with $C$ and $P$ invariance, allows two states for $e^{+} e^{-}$ (and $p \bar{p}$): \begin{eqnarray} S=1,\ \ \ \ell=0\ \ \ \mathrm{and}\ \ \ \ S=1,\ \ \ \ \ell=2 \ \ \ \ \mathrm{with}\ \ \ \mathcal{J}^p=1^{-}, \end{eqnarray} where $S$ is the total spin and $\ell$ is the orbital angular momentum of the $e^{+} e^{-}$ (or $p \bar{p}$) system. As a result, the $\theta$ dependence of the differential cross section for $e^{+} + e^{-} \rightarrow p + \bar{p}$ in the one-photon exchange mechanism has the following general form \begin{eqnarray} \frac{d\sigma^{1 \gamma}}{d \Omega} = a(t) +b(t) \cos^2 \theta. \label{sigma0}% \end{eqnarray} A similar analysis can be done for the $\cos \theta$ dependence of the $1 \gamma \otimes 2 \gamma$ interference contribution to the differential cross section of this process.
In general, the spin and parity of the $2 \gamma$ states are not fixed; only a positive $C$ parity, $C ( 2 \gamma ) = +$, is allowed. The $\cos \theta$ dependence of the $1\gamma \otimes 2 \gamma$ interference contribution to the differential cross section can then be predicted on the basis of its $C$-odd nature as: \begin{eqnarray} \frac{d\sigma^{int}}{d\Omega}= \cos\theta \big[c_0(t)+c_1(t) \cos^2 \theta + c_2(t) \cos^4 \theta + ...\big]. \label{sigmaint}% \end{eqnarray} \par% In the one-photon exchange mechanism, the differential cross section is symmetric about $\theta = \pi/2$. After including the two-photon exchange, however, this symmetry is broken. Defining the asymmetry of the total differential cross section as \begin{eqnarray} A_{2 \gamma}(q^2,\theta)= \frac{\displaystyle\frac{d \sigma}{d \Omega}(q^2, \theta) - \frac{d \sigma}{d \Omega}(q^2,\pi-\theta) } {\displaystyle\frac{d \sigma}{d \Omega}(q^2, \theta) + \frac{d \sigma}{d \Omega}(q^2,\pi-\theta) }\ \ , \end{eqnarray} we obtain, after some algebraic simplification, \begin{eqnarray} A_{2\gamma}(q^2, \theta)=\frac{d\sigma^{int}}{d \Omega} (q^2,\theta) ~\Big/~ \frac{d\sigma^{1 \gamma}}{d \Omega}(q^2,\theta). \end{eqnarray} Based on the general forms of $d \sigma^{1\gamma}/d\Omega$ and $d \sigma^{int}/d\Omega$ shown in Eq.~(\ref{sigma0}) and Eq.~(\ref{sigmaint}), one can easily conclude that the angular asymmetry of the total differential cross section is also an odd function of $\cos\theta$. \section{Differential Cross Section and Polarization Observables} \label{dcspo}% In order to represent the polarization vector of the outgoing anti-proton in a straightforward way for the process $e^{+} + e^{-} \rightarrow p + \bar{p}$, we define a coordinate frame in the center-of-mass system (CMS) of the reaction such that the $z$ axis points along the three-momentum of the anti-proton, and the angle between the incoming electron and the outgoing anti-proton is defined as $\theta$.
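The statement from the previous section, that $A_{2\gamma}$ equals $d\sigma^{int}/d\sigma^{1\gamma}$ and is odd in $\cos\theta$, follows directly from the even/odd forms in Eqs.~(\ref{sigma0}) and (\ref{sigmaint}). A few lines of code make this explicit; the coefficient values $a$, $b$, $c_{0,1}$ below are arbitrary illustrative numbers, not fitted quantities:

```python
import numpy as np

def born_cs(c, a=1.0, b=0.5):
    """One-photon cross section, Eq. (sigma0): even in cos(theta) = c."""
    return a + b * c ** 2

def interference_cs(c, c0=0.02, c1=0.01):
    """1gamma x 2gamma interference, Eq. (sigmaint): odd in cos(theta)."""
    return c * (c0 + c1 * c ** 2)

def asymmetry_def(c):
    """A_2gamma from its definition, comparing theta with pi - theta."""
    fwd = born_cs(c) + interference_cs(c)
    bwd = born_cs(-c) + interference_cs(-c)   # theta -> pi - theta
    return (fwd - bwd) / (fwd + bwd)
```

Since the even Born part cancels in the numerator and the odd interference part cancels in the denominator, the ratio form and the oddness in $\cos\theta$ hold exactly, to first order and beyond in this toy parameterization.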
In such a frame, following the approaches used in Refs.~\cite{EPJA-331, NPA-271, NPA-322}, one has \begin{eqnarray} \mathcal{M}=\frac{e^2}{q^2} j_{\mu} J^{\mu} \end{eqnarray} with the leptonic current \begin{eqnarray} j_{\mu}=\bar{u} (-k_2) \gamma_{\mu} u(k_1) \nonumber \end{eqnarray} and the hadronic current \begin{eqnarray} J_{\mu}=\bar{u} (p_2) \Big[\widetilde{F}_1(s,t)\gamma_{\mu}+i\frac{\widetilde{F}_2(s,t)}{2 m_N} \sigma_{\mu \nu} q^{\nu} +\widetilde{F}_3(s,t) \frac{\gamma \cdot K P_{\mu}}{m_N^2}\Big] u(-p_1)\,. \label{current}% \end{eqnarray} The differential cross section of the reaction in the CMS is then \begin{eqnarray} \frac{d \sigma}{d \Omega} =\frac{\alpha^2\beta}{q^6}L_{\mu \nu} H^{\mu \nu}, \ \ \ L_{\mu \nu} =j_{\mu} j^{*}_{\nu}, \ \ \ H_{\mu \nu} = J_{\mu} J^{*}_{\nu}, \label{tensor}% \end{eqnarray} where $\alpha=e^2/4 \pi$ is the fine structure constant and $\beta=\sqrt{1-4m_N^2/q^2}$ is the nucleon velocity in the CMS. In this work we consider an unpolarized incoming positron and a longitudinally polarized incoming electron with polarization four-vector $s$; in the final state, the anti-proton is polarized with polarization four-vector $s_1$. The leptonic and hadronic tensors can then be divided into unpolarized and polarized parts \begin{eqnarray} L_{\mu\nu} = L_{\mu \nu}(0) +L_{\mu\nu} (s), \ \ \ H_{\mu \nu} =H_{\mu \nu}(0) +H_{\mu \nu} (s_1). \end{eqnarray} \par% In the current operator shown in Eq.~(\ref{ff1}), the Lorentz structure functions are functions not only of $q^2$ but also of the hadron production angle $\theta$; they are related to the Dirac and Pauli form factors by \begin{eqnarray} \widetilde{F}_{1,2} (q^2, \cos\theta)=F_{1,2} (q^2) + \Delta F_{1,2} (q^2, \cos\theta) \end{eqnarray} while $\widetilde{G}_{E,M}(q^2, \cos \theta)$ are related to the Sachs form factors by \begin{eqnarray} \widetilde{G}_{E,M}(q^2, \cos \theta)= G_{E,M}(q^2) + \Delta G_{E,M}(q^2, \cos \theta) \ .
\label{dsachs}% \end{eqnarray} \par% The unpolarized differential cross section of the process $e^{+} + e^{-} \rightarrow p + \bar{p}$ has the form \begin{eqnarray} \frac{d\sigma_{un}}{d\Omega}=\frac{\alpha^2 \beta}{4 q^6}L_{\mu \nu}(0) H^{\mu \nu}(0)=\frac{\alpha^2 \beta}{4 q^2}\ D, \end{eqnarray} where, with the current operator in Eq.~(\ref{ff1}) and the definition in Eq.~(\ref{sachs1}), $D$ can be expressed as: \begin{eqnarray} D=|\widetilde{G}_M|^2 (1+ \cos^2\theta) +\frac{1}{\tau} |\widetilde{G}_E|^2 \sin^2\theta - 2 \sqrt{\tau (\tau-1)} Re[(\widetilde{G}_M-\frac{1}{\tau} \widetilde{G}_E) \widetilde{F}_3 ^{*}] \sin^2\theta \cos \theta. \end{eqnarray} Notice that in Eq.~(\ref{dsachs}), the contributions $\Delta G_{E,M}$ and $\widetilde{F}_3$ caused by the two-photon exchange are of order $\alpha \simeq 1/137$, so that the terms $\Delta G_{E,M} \Delta G_{E,M}$ and $\Delta G_{E,M}\widetilde{F}_3$ are negligible. Then, \begin{eqnarray} D&=&|G_M|^2 (1+\cos^2 \theta) + \frac{1}{\tau} |G_E|^2 \sin^2 \theta + 2 Re[G_M \Delta G^*_M] (1+ \cos^2 \theta) + \nonumber\\ &&\frac{2}{\tau} Re[G_E \Delta G^*_E] \sin^2 \theta- 2 \sqrt{\tau (\tau-1)} Re[(G_M-\frac{1}{\tau} G_E) \widetilde{F}_3^{*}] \sin^2\theta \cos \theta. \label{dd}% \end{eqnarray} \par% From $C$-invariance and the above expression for $D$, we obtain the general properties of the form factors, \begin{eqnarray} \Delta G_{E,M}(q^2, +\cos\theta)&=&-\Delta G_{E,M}(q^2,-\cos\theta),\nonumber\\ \widetilde{F}_3(q^2, +\cos \theta) &=& \widetilde{F}_3(q^2, -\cos \theta), \label{gemf3}% \end{eqnarray} which are equivalent to the symmetry relations of the scattering channel \cite{EPJA-331}. Generally, the polarization four-vector $S_{\mu}$ of a relativistic particle with three-momentum $\vec{p}$ and mass $m$ is connected with the polarization vector, $\vec{\xi}$, by a Lorentz boost: \begin{eqnarray} \vec{S} = \vec{\xi} + \frac{\vec{p}\cdot\vec{\xi}\ \vec{p}}{m(E+p)}\ ,\ \ \ \ \ S^0 =\frac{\vec{p}\cdot \vec{S}}{m}\ .
\end{eqnarray} where $E=\sqrt{m^2+\vec{p}^2}$ is the energy of the particle. In the CMS defined above, the polarization vectors of the anti-proton are \begin{eqnarray} \vec{\xi}_x &=&(1,\ 0,\ 0),\ \ \ \ s_{1x}= (0,\ 1,\ 0,\ 0), \nonumber\\ \vec{\xi}_y &=&(0,\ 1,\ 0),\ \ \ \ s_{1y}= (0,\ 0,\ 1,\ 0), \nonumber\\ \vec{\xi}_z &=&(0,\ 0,\ 1),\ \ \ \ s_{1z}= (\sqrt{\tau -1},\ 0,\ 0,\ \sqrt{\tau}). \end{eqnarray} \par% $P_y$ is a single-spin polarization observable, which involves only one polarized particle, polarized along the $y$ axis. Since the time-like form factors are complex, $P_y$ is nonvanishing already in the Born approximation of the process $e^{+} + e^{-} \rightarrow p + \bar{p}$. In this work we consider the outgoing anti-proton to be polarized. The general expression for $P_y$ is \begin{eqnarray} P_y=\frac{1}{D q^4} L_{\mu\nu} H_{\mu \nu} (s_{1 y})=\frac{1}{D q^4} \big[L_{\mu\nu}(0) H_{\mu \nu} (s_{1 y})+L_{\mu\nu}(s) H_{\mu \nu} (s_{1 y})\big]. \label{py}% \end{eqnarray} After some algebraic calculation \cite{arXiv-0704.3375}, one finds that $P_y$ does not depend on the polarization of the incoming electron, that is, the second term in Eq.~(\ref{py}) gives no contribution to $P_y$. With the proton current operator in Eq.~(\ref{ff1}) we have \begin{eqnarray} P_y&=& \frac{2 \sin \theta}{D \sqrt{\tau}} \big[Im[\widetilde{G}_M \widetilde{G}^{*}_E] \cos \theta - \sqrt{\tau (\tau -1)} (Im[ \widetilde{G}_E \widetilde{F}^{*}_3 ] \sin^2 \theta +Im[ \widetilde{G}_M \widetilde{F}^{*}_3 ] \cos^2 \theta )\big]\nonumber\\ &=& \frac{2 \sin \theta}{D \sqrt{\tau}} \big[Im[G_M G^{*}_E + G_M \Delta G^{*}_E+\Delta G_M G^{*}_E] \cos \theta -\nonumber\\ &&\sqrt{\tau (\tau -1)} (Im[ G_E \widetilde{F}^{*}_3 ] \sin^2 \theta +Im[G_M \widetilde{F}^{*}_3 ] \cos^2 \theta ) \big]. \label{polarpy}% \end{eqnarray} Similar definitions are employed for the double spin polarization observables $P_x$ and $P_z$.
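In the one-photon limit ($\Delta G_{E,M} = \widetilde{F}_3 = 0$), Eqs.~(\ref{dd}) and (\ref{polarpy}) give a compact expression for $P_y$ that can be evaluated directly. The sketch below uses illustrative form-factor values (with $\tau > 1$, as required in the physical time-like region) and verifies that $P_y$ vanishes for relatively real form factors, being driven entirely by their relative phase:

```python
import numpy as np

def born_Py(theta, tau, GE, GM):
    """Single-spin polarization P_y in the one-photon approximation,
    from Eqs. (dd) and (polarpy) with Delta G = F3 = 0; GE, GM complex."""
    c, s = np.cos(theta), np.sin(theta)
    D = abs(GM) ** 2 * (1 + c ** 2) + abs(GE) ** 2 * s ** 2 / tau
    return 2 * s * c * (GM * np.conj(GE)).imag / (D * np.sqrt(tau))
```

The overall factor $\sin\theta\cos\theta$ also shows that the Born-level $P_y$ vanishes at $\theta = \pi/2$ and is odd under $\theta \rightarrow \pi - \theta$.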
For $P_x$ and $P_z$, the polarization of the incoming electron is necessary, and the unpolarized incoming electron has no contribution, that is $L_{\mu\nu}(0) H_{\mu \nu} (s_{1x,z})=0$. Since $L_{\mu \nu}(0) H^{\mu \nu} (s_{1x,z}) \propto a_{\mu}b_{\nu}c_{\rho}d_{\lambda}\epsilon^{\mu\nu\rho\lambda}\equiv \epsilon^{a b c d}$, where $a,\ b,\ c,\ d$ are four out of $s_{1x,z},\ k_1,\ k_2,\ p_1,\ p_2$, and all of these four-vectors have zero $y-$component, the contribution of the unpolarized electron vanishes. The double spin polarization observables with the proton current operator in Eq. (\ref{ff1}) are \begin{eqnarray} P_x&=&-\frac{2 \sin \theta}{D \sqrt{\tau}} \Big\{Re[\widetilde{G}_M \widetilde{G}^{*}_E] + Re[\widetilde{G}_M \widetilde{F}^{*}_3] \sqrt{\tau (\tau-1)} \cos \theta \Big\}\nonumber\\ &=&-\frac{2 \sin \theta}{D \sqrt{\tau}} \Big\{Re[ G_M G^{*}_E + G_M \Delta G^{*}_E+\Delta G_M G^{*}_E] + Re[G_M \widetilde{F}^{*}_3 ]\sqrt{\tau (\tau-1)} \cos \theta \Big\},\nonumber\\ P_z&=&\frac{2}{D}\Big\{|\widetilde{G}_M|^2 \cos \theta -Re[\widetilde{G}_M \widetilde{F}_3^{*} ] \sqrt{\tau (\tau-1)}\sin^2 \theta \Big\}\nonumber\\ &=&\frac{2}{D}\Big\{|G_M|^2 \cos \theta + 2 Re[G_M \Delta G_M] \cos \theta -Re[G_M \widetilde{F}_3^{*} ] \sqrt{\tau (\tau-1)}\sin^2 \theta \Big\}. \label{polar}% \end{eqnarray} In Eqs. (\ref{polarpy},\ref{polar}), if we set $\Delta G_{E,M}=0$ and $\widetilde{F}_3=0$, the polarization observables reduce to the results in the one-photon approximation. Considering the two-photon exchange contribution to the double spin polarization observables, we define $\delta(P_{x,z})$ as the ratio between the contributions of the $1 \gamma \otimes 2 \gamma$ interference terms and the results in the one-photon mechanism, that is, \begin{eqnarray} \delta(P_{x,z}) =P^{int}_{x,z}/P^{1 \gamma}_{x,z},\nonumber \end{eqnarray} with Eq. 
(\ref{polar}) we have, \begin{eqnarray} \delta(P_x) &=& \frac{Re[G_M \Delta G^{*}_E + G_E \Delta G^{*}_M]}{Re[ G_M G^{*}_E]} + \sqrt{\tau (\tau -1)} \frac{Re[G_M \widetilde{F}^{*}_3]}{Re[ G_M G^{*}_E]} \cos \theta~, \nonumber\\ \delta(P_z) &=& \frac{2 Re[G_M \Delta G_M]}{|G_M|^2}- \sqrt{\tau (\tau-1)} \frac{Re[G_M \widetilde{F}^{*}_3]}{|G_M|^2} \sin \theta \tan \theta~. \label{deltap}% \end{eqnarray} One can see that both $\delta(P_x)$ and $\delta(P_z)$ are odd functions of $\cos \theta$. \section{Two-Photon Exchange Contribution} This section is devoted to a direct numerical calculation of the two-photon exchange. Much work has been done in the space-like region, and it is natural to expect the same calculation to be performed in the time-like region. After considering the two-photon exchange, the amplitude $\mathcal{M}$ is essentially modified, that is, \begin{eqnarray} \mathcal{M}=\mathcal{M}_0 +\mathcal{M}_{2\gamma}, \end{eqnarray} where $\mathcal{M}_0$ is the contribution of the one-photon exchange and $\mathcal{M}_{2\gamma}$ denotes the two-photon exchange. Therefore, to first order in $\alpha\ (\alpha = e^2/4\pi)$, we have, \begin{eqnarray} \frac{d \sigma}{d \Omega}\ \propto\ \overline{|\mathcal{M}|^2}\ =\ \overline{|\mathcal{M}_0|^2}\ (1+ \delta_{2 \gamma}) \nonumber \end{eqnarray} with \begin{eqnarray} \delta_{2 \gamma}=2 \frac{Re\{\overline{\mathcal{M}_{2\gamma} \mathcal{M}_0^\dagger}\}}{|\mathcal{M}_0|^2}. \label{delta}% \end{eqnarray} From the analysis in section 2, one can see that $A_{2\gamma}(q^2,\theta)$ and $\delta_{2\gamma}$ are identical. \par% To proceed with the direct calculation, the amplitude of the two-photon exchange from the direct box (Fig. \ref{Fig-feyntpe} $a$) and crossed box (Fig. \ref{Fig-feyntpe} $b$) diagrams has the form \begin{eqnarray} \mathcal{M}_{2 \gamma}= e^4 \int \frac{d^4 k}{(2 \pi)^4}\left[ \frac{N_{a}(k)}{D_a(k)}+\frac{N_{b}(k)}{D_{b}(k)}\right]. 
\label{mtpe}% \end{eqnarray} where the numerators are the matrix elements. For the direct box diagram, \begin{eqnarray} N_{a}(k)= j_{(a)\mu\nu}J_{(a)}^{\mu\nu}~,\nonumber \end{eqnarray} with \begin{eqnarray} j_{a}^{\mu \nu}&=& \bar{u}(-k_2) \gamma^{\mu} (\hat{k}_1-\hat{k}) \gamma^{\nu} u(k_1),\nonumber\\ J_{a}^{\mu\nu}&=& \bar{u}(p_2) \Gamma^{\mu}(k_1+k_2-k) (\hat{k}-\hat{p}_1-m_N) \Gamma^{\nu}(k) u(-p_1), \end{eqnarray} with $\hat{k} \equiv \gamma\cdot k$ and $\Gamma_{\mu}(k)$ defined in Eq. (\ref{ff0}). The denominators in Eq. (\ref{mtpe}) are the products of the scalar propagators, \begin{eqnarray} D_{a}(k)&=&[k^2-\lambda^2][(k_1+k_2-k)^2-\lambda^2][(k_1-k)^2-m_e^2] [(k-p_1)^2-m_N^2], \end{eqnarray} where an infinitesimal photon mass, $\lambda$, has been introduced in the photon propagator to regulate the IR divergence. Similarly, we can write down the expressions of $N_{b}(k)$ and $D_{b}(k)$ for Fig. \ref{Fig-feyntpe} $b$. \par% For the $1\gamma \otimes 2\gamma$ interference term, we define the leptonic and hadronic tensors as, \begin{eqnarray} L^{(a,b)}_{\mu\nu\rho}=j^{(a,b)}_{\mu\nu} j^{*}_{\rho}~,~~~~ H^{(a,b)}_{\mu\nu\rho}=J^{(a,b)}_{\mu\nu} J^{*}_{\rho}\ . \end{eqnarray} Here the current operator in the hadronic current $J_{\rho}$ is the same as the one in $J_{\mu\nu}$; then, \begin{eqnarray} \frac{d\sigma^{int}}{d\Omega}\propto\mathcal{M}_{2\gamma}\mathcal{M}^{\dagger}_0 =\frac{e^6}{q^2} \int \frac{d^4k}{(2 \pi)^4} \Big[ \frac{L^{(a)}_{\mu\nu\rho} H^{(a) \mu\nu\rho}}{D_{a}(k)} +\frac{L^{(b)}_{\mu\nu\rho} H^{(b) \mu\nu\rho}}{D_{b}(k)} \Big] \label{mtpem0}.% \end{eqnarray} For the unpolarized differential cross section only $L^{(a,b)}_{\mu\nu\rho}(0) H^{(a,b) \mu\nu\rho}(0)$ survives. 
From the crossing symmetry, we conclude that the expressions of $\delta_{2 \gamma}$ in terms of the Mandelstam variables are identical for the scattering channel and the annihilation channel, that is, \begin{eqnarray} \delta_{2 \gamma}(s, t)_{e^{-} + p\rightarrow e^{-} + p}=g(s,t)=\delta_{2 \gamma}(s, t)_{e^{-} + e^{+} \rightarrow p + \bar{p}}. \end{eqnarray} In the soft approximation $g(s,t)$ can be expressed as \begin{eqnarray} g(s,t)_{soft}=-2 \frac{\alpha}{\pi} \ln \left| \frac{s-m_e^2-m_N^2}{s+t-m_e^2-m_N^2} \right| \ln \left|\frac{t}{\lambda^2}\right|. \label{soft}% \end{eqnarray} \par% For the double spin polarization observables $P_x$ and $P_z$ the $1 \gamma \otimes 2\gamma$ interference contribution is \begin{eqnarray} P^{int}_{x,z}&=&\frac{e^2}{q^2 D} \int \frac{d^4k}{(2\pi)^4} \Big[ \frac{L^{(a)}_{\mu\nu\rho} H^{(a)\mu\nu\rho}(s_{1x,z})}{D_a(k)} +\frac{L^{(b)}_{\mu\nu\rho} H^{(b)\mu\nu\rho} (s_{1x,z}) }{ D_b(k) }\Big]\nonumber\\ &=&\frac{e^2}{q^2 D} \int \frac{d^4k}{(2\pi)^4} \Big[ \frac{L^{(a)}_{\mu\nu\rho}(S) H^{(a)\mu\nu\rho}(s_{1x,z})}{D_a(k)} +\frac{L^{(b)}_{\mu\nu\rho}(S) H^{(b)\mu\nu\rho} (s_{1x,z}) }{ D_b(k) }\Big]. \label{pint}% \end{eqnarray} As in the one-photon exchange approximation, the unpolarized leptonic tensor has no contribution to the double spin polarization observables. For the term $L_{\mu\nu\rho}(0) H^{\mu\nu\rho} (s_{1x,z})$, after some algebraic calculation, we find that the non-vanishing contributions are of the forms $\varepsilon^{a b c k},\ a'\cdot k\ \varepsilon^{a b c k},\ a'\cdot k \ b'\cdot k\ \varepsilon^{a b c k},\ k^2 \ a'\cdot k\ \varepsilon^{a b c k}$, with $\{a',\ b',\ a,\ b,\ c\}\in \{s_{1},\ k_1,\ k_2,\ p_1,\ p_2\}$. Since $a',\ a,\ b,\ b'$ and $c$ have zero $y-$component, the non-vanishing terms are odd functions of $k_y$. Namely $L_{\mu\nu\rho}(0)H^{\mu\nu\rho}(s_1) = f(s,t,k_0,k_x,k_z,k_y^2)\ k_y$. Since the denominators in Eq. 
(\ref{pint}) are even functions of $k_y$, the contribution of $L_{\mu\nu\rho}(0) H^{\mu\nu\rho}(s_1)$, therefore, vanishes. \section{Numerical Results and Discussion} In this work, we calculate the contributions of the direct box diagram (Fig. \ref{Fig-feyntpe} $a$ ) and crossed box diagram (Fig. \ref{Fig-feyntpe} $b$ ) to the unpolarized differential cross section and the double spin polarization observables. In this calculation a simple monopole form of the form factors is employed. This phenomenological form factor is $G_E(q^2)=G_M(q^2)/\mu_p=G(q^2) = -\Lambda^2 / (q^2-\Lambda^2)$, with $\Lambda=0.84\ GeV$, which is consistent with the size of the nucleon. Practically, owing to the interaction of the outgoing hadrons, the time-like form factors have a phase structure, which means the form factors are complex in the time-like region. In this work our concern is the ratio $\delta_{2\gamma}$ and the double spin polarization observables $P_x$ and $P_z$. Moreover, the phenomenological form factors appear in both the denominator and the numerator of these physical observables. In such cases, the specific form of the form factors affects the ratio and the polarization observables only to a very limited extent. The same conclusion can be drawn from the results of two-photon exchange corrections to space-like form factors in Ref. \cite{PRC-065203}. \par% In our calculation, the loop integrals of the two-photon exchange contribution were first evaluated analytically in terms of the four-point Passarino-Veltman functions \cite{NPB-151} using the package FeynCalc \cite{CPC-345}. Then, the Passarino-Veltman functions were evaluated numerically with LoopTools \cite{CPC-153}. The IR divergence in the $1 \gamma \otimes 2 \gamma$ interference term is proportional to $\ln \lambda$. This conclusion can be drawn by analyzing the integral in Eq. (\ref{delta}) as well as by crossing symmetry and previous results in the scattering channel. 
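The $\ln \lambda$ behaviour is explicit in the soft approximation of Eq. (\ref{soft}), which can be evaluated directly. A minimal sketch; the kinematic values are hypothetical, and the photon mass $\lambda$ is varied only to exhibit the IR logarithm:

```python
import math

def g_soft(s, t, m_e, m_N, lam, alpha=1/137.036):
    """Soft-photon approximation to the 1g x 2g interference, as in Eq. (soft)."""
    return (-2 * alpha / math.pi
            * math.log(abs((s - m_e**2 - m_N**2)
                           / (s + t - m_e**2 - m_N**2)))
            * math.log(abs(t / lam**2)))

# Hypothetical kinematics (GeV units): s = q^2 = 4 GeV^2, near threshold.
m_e, m_N, s = 0.000511, 0.938, 4.0
t = -1.0
for lam in (1e-3, 1e-6):   # the ln(lambda) IR divergence shows up directly
    print(lam, g_soft(s, t, m_e, m_N, lam))
```

The $\lambda$-dependence is unphysical by itself; as discussed below, it cancels against the soft-bremsstrahlung interference terms.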
Furthermore, the previous calculations in the scattering channel have shown that the IR divergence in the two-photon exchange contribution is exactly canceled by the corresponding terms in the bremsstrahlung cross section involving the interference between the real photons emitted from the electron and from the proton. By crossing symmetry, the IR divergence in the annihilation channel caused by the two-photon exchange can also be ignored. \par% From our previous analysis, the two-photon contribution to the unpolarized differential cross section $\delta_{2\gamma}$ is identical to the angular asymmetry $A_{2 \gamma}$, which means $\delta_{2 \gamma}$ is also an odd function of $\cos \theta$. The numerical results for the two-photon contribution to the unpolarized differential cross section $\delta_{2 \gamma}$ are presented in Fig. (\ref{Fig-unpolar}), where we show a comparison of $\delta_{2 \gamma}$ (defined as in Eq. (\ref{delta})) between the results of the full calculation and the soft approximation. The full circles in the figure are the full calculation, the dotted curves are the results in the soft approximation and the full curves are the polynomial fits to the full calculation. We find that a polynomial of the form $\cos \theta\, [a_0(t) + a_1(t) \cos^2 \theta + a_2(t) \cos^4 \theta + ...]$, truncated at $\cos^5 \theta$, gives a good fit. The left panel of Fig. (\ref{Fig-unpolar}) shows the results at momentum transfer $q^2=~4~GeV^2$, which is near the threshold of the reaction $e^{+}+ e^{-} \rightarrow p + \bar{p}$. We see that the two-photon exchange contribution to the unpolarized differential cross section is rather small, only about $\pm 0.6 \%$ at $\theta = \pi (0)$. In addition, with the coefficients $a_0 = -9.6 \times 10^{-3}, a_1 = 4.9 \times 10^{-3}, a_2 = -1.5 \times 10^{-3}$ we see that the polynomial gives a good fit to the full calculation. The right panel of Fig. 
(\ref{Fig-unpolar}) shows the results at $q^2= 16~GeV^2$; the contribution of the two-photon exchange is relatively large, nearly $4 \%$, and the parameters of the fit to the full calculation are $a_0 = 2.9 \times 10^{-3}, a_1 = 5.7 \times 10^{-2}, a_2 = -1.9 \times 10^{-2}$. We conclude that at fixed momentum transfer, the contribution of the two-photon exchange is strongly dependent on $\cos \theta$. In magnitude, the contribution is rather limited in the small momentum transfer region; as $q^2$ increases, the contribution becomes larger. This conclusion is consistent with the results in the space-like region. \par% In Fig. \ref{Fig-ffs}, we show the $\cos \theta$ dependence of the real part of the corrections to the proton time-like form factors caused by the two-photon exchange at $q^2~=~4 ~GeV^2$. For $\Delta G_E/G$ and $\Delta G_M/G$, significant $\cos \theta$ dependences are observed, while $\widetilde{F}_3/G$ depends only weakly on $\cos \theta$. In parity, $\Delta G_E/G$ and $\Delta G_M/G$ are odd, and $\widetilde{F}_3/G$ is even. These features are consistent with our general analysis. The electric form factor is relatively more sensitive to the two-photon exchange corrections, being about $2.5 \%$ at $\theta =0 (\cos \theta=1)$ and $-2.5 \%$ at $\theta = \pi (\cos \theta=-1)$. The correction to the magnetic form factor, $\Delta G_M /G$, varies from $1 \%$ to $-1 \%$ with $\theta$ from zero to $\pi$, while $\widetilde{F}_3/G$ is about $1\%$ in the whole $\theta$ region. \par% From our previous analysis in Sec. \ref{dcspo}, one can see that the two-photon exchange corrections $\delta(P_{x,z})$ to the double spin polarization observables $P_x$ and $P_z$ are odd functions of $\cos \theta$. In our previous numerical results in Fig. (4) we find $\widetilde{F}_3$ is not zero at $\theta~=~\pi/2$; then $\delta(P_z)$ is proportional to $\tan \theta$ in the limit $\theta \rightarrow \pi/2$ and diverges at $\theta~=~\pi/2$. 
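The odd-polynomial fit quoted above can be reproduced by a linear least-squares fit in the basis $\{\cos\theta,\ \cos^3\theta,\ \cos^5\theta\}$. A sketch with synthetic data built from the $q^2=4~GeV^2$ coefficients, which are used here only as illustrative inputs, not as a re-derivation of the loop calculation:

```python
import numpy as np

# Synthetic delta_2gamma(theta) built from the quoted coefficients.
a_true = np.array([-9.6e-3, 4.9e-3, -1.5e-3])   # a0, a1, a2 (illustrative)
theta = np.linspace(0.05, np.pi - 0.05, 60)
c = np.cos(theta)
delta = c * (a_true[0] + a_true[1]*c**2 + a_true[2]*c**4)

# Least-squares fit in the odd basis cos, cos^3, cos^5.
A = np.column_stack([c, c**3, c**5])
a_fit, *_ = np.linalg.lstsq(A, delta, rcond=None)
print(a_fit)   # recovers a_true for this noiseless data
```

An odd basis automatically enforces $\delta_{2\gamma}(-\cos\theta)=-\delta_{2\gamma}(\cos\theta)$, the $C$-invariance property derived in the general analysis.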
Our numerical results for the $\cos \theta$ dependence of $\delta(P_{x,z})$ at $q^2~=~4~GeV^2$ are displayed in Fig. (\ref{Fig-polar}). We can see that the two-photon exchange contribution to the double spin polarization observables is strongly $\theta-$dependent, and is an odd function of $\cos \theta$, which is consistent with our general analysis. For $P_x$, the variation caused by the two-photon exchange reaches its maximum at $\theta =\pi(0)$ (about $4 \%$). It seems that one could more easily find the signal of the two-photon exchange at the backward ($\theta = \pi$) and forward ($\theta = 0$) angles. However, from Eq. (\ref{polar}) we know that $P^{1\gamma}_x$ is proportional to $\sin \theta$. This means that when $\theta$ is very small (close to $0$) or very large (close to $\pi$), $P^{1 \gamma}_x$ is close to $0$, and therefore the absolute variation caused by the two-photon exchange is very limited. Thus, it will still be difficult to find any signal of the two-photon exchange in the observable $P_x$. For $P_z$, the contribution of the two-photon exchange reaches its maximum at $\theta= \pi/2$. In the one-photon mechanism $P^{1\gamma}_z$ is proportional to $\cos\theta$, which means that no matter what kind of form factors we employ, $P^{1 \gamma}_z$ vanishes at $\theta ~=~\pi/2$. When the two-photon exchange contribution is taken into consideration, as in Eq. (\ref{deltap}), $P_z(\pi/2)$ is no longer equal to zero. From the experimental point of view, a nonzero $P_z$ at $\theta~=~\pi/2$ would be strong evidence of the two-photon exchange in the process $e^{+} + e^{-} \rightarrow p + \bar{p}$. \par% According to our numerical results, one can see that the two-photon exchange contribution to the unpolarized differential cross section $\delta_{2\gamma}$, which is identical to the angular asymmetry $A_{2\gamma}$, is rather small at small momentum transfer. 
With the present experimental precision, it is rather difficult to find any evidence of the two-photon exchange from the unpolarized observable in the process $e^{+} + e^{-} \rightarrow p + \bar{p}$, especially in the low momentum transfer region. As $q^2$ increases, the contribution of the two-photon exchange becomes important; it can reach several percent at $q^2$ about $16 ~GeV^2$. Furthermore, among the double spin polarization observables, $P_z$ deserves to be considered in further experiments. In conclusion, precise measurements of the unpolarized differential cross section at high momentum transfer and of the double spin polarization observable $P_z$, especially at $\theta~=~\pi/2$, are expected to show some evidence of the two-photon exchange in this process. \par% \section{Acknowledgment} This work is supported by the National Natural Science Foundation of China under Grant No. 10475088, No. 10747118, and by CAS Knowledge Innovation Project No. KC2-SW-N02.
\section{ Introduction } In quantum field theory one of the main aims is to calculate the scattering cross section which is experimentally measured. The scattering cross section can be calculated from the S-matrix element. In QCD the generating functional in the path integral formulation yields the connected green's functions of the partons at all orders in coupling constant. These connected green's functions can be used in the LSZ reduction formula to predict the S-matrix element for the partonic scattering process in QCD at all orders in coupling constant. Consider for example the $2 \rightarrow n$ partonic scattering process \begin{eqnarray} k_1+k_2 \rightarrow k'_1+k'_2+...+k'_n \label{gs} \end{eqnarray} in QCD where $k_1,k_2$ are the four-momenta of incoming gluons and $k'_1,k'_2,...,k'_n$ are the four-momenta of outgoing gluons. The initial state $|i>$ and the final state $|f>$ for the above scattering process are given by \begin{eqnarray} |i>=|k_1,k_2>,~~~~~~~~~~|f>=|k'_1,k'_2,...,k'_n>. \label{ifi} \end{eqnarray} In this paper we will neglect the quarks but the inclusion of quarks is straightforward. By using the LSZ reduction formula, the S-matrix element for the partonic scattering process in eq. (\ref{gs}) at all orders in coupling constant in QCD is given by \begin{eqnarray} &&<f|i> = [G(-k'_1)]^{-1} [G(-k'_2)]^{-1}...[G(-k'_n)]^{-1}[G(k_2)]^{-1}[G(k_1)]^{-1} G(-k'_1,-k'_2,...,-k'_n,k_1,k_2)\nonumber \\ \label{hh33} \end{eqnarray} where $G(k)$ is the renormalized (full) propagator of the gluon in momentum space and $G(k_1,...,k_n)$ is the renormalized n-point connected green's function of the gluon in momentum space. In eq. (\ref{hh33}) [and throughout this paper] the suppression of color and Lorentz indices is understood. 
For simplicity we have included the finite factors such as the relevant sum over polarization vectors and color factors in the partonic cross section \begin{eqnarray} {\hat \sigma} \propto |<f|i>|^2 \label{cr} \end{eqnarray} instead of in the S-matrix element in eq. (\ref{hh33}), so that the S-matrix element in eq. (\ref{hh33}) is expressed in terms of the green's functions only. In perturbative quantum chromodynamics (pQCD) the ultraviolet (UV) divergence occurs in the calculation of a loop diagram when the momentum integration limit goes to infinity. The renormalization program is introduced to handle the UV divergence in pQCD \cite{ht,gr}. One finds that the one particle irreducible (1PI) diagram is the basic building block in pQCD where the UV divergence occurs when the loop momentum integration limit goes to infinity. Hence as far as renormalization is concerned it is sufficient to study the UV divergence of the N-point one particle irreducible (1PI) proper vertex function $\Gamma[k_1,...,k_N]$ in QCD. From this point of view, as far as the renormalization of the S-matrix element is concerned, it is useful to express the S-matrix element in eq. (\ref{hh33}) in terms of the N-point 1PI proper vertex functions ${\bar \Gamma}[k_1,...,k_N]$ instead of the n-point connected green's function $G(k_1,...,k_n)$ at all orders in coupling constant, where $N\le n$. In coordinate space, eq. (\ref{hh33}) can be written as \begin{eqnarray} &&<f|i> =\int d^4x'_1... \int d^4x'_n \int d^4x_2 \int d^4x_1 ~e^{i k'_1 \cdot x'_1+...+i k'_n \cdot x'_n-i k_2 \cdot x_2-i k_1 \cdot x_1} \int d^4y'_1...\int d^4y'_n \int d^4y_2 \int d^4y_1 \nonumber \\ && \times [G(x'_1,y'_1)]^{-1}... [G(x'_n,y'_n)]^{-1}[G(x_2,y_2)]^{-1}[G(x_1,y_1)]^{-1}~G(y'_1,...,y'_n,y_2,y_1) \label{nbg} \end{eqnarray} where $G(x_1,x_2)$ is the renormalized (full) propagator of the gluon in coordinate space and $G(x_1,...,x_n)$ is the renormalized n-point connected green's function of the gluon in coordinate space. 
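The statement that UV divergences arise when the loop momentum integration limit goes to infinity can be illustrated with a toy Euclidean scalar integral (not a QCD 1PI diagram): after the angular integration, $\int^{\Lambda} d^4k\,(k^2+m^2)^{-2} \propto \int_0^{\Lambda} k^3\,dk\,(k^2+m^2)^{-2}$, which grows like $\ln \Lambda$ for large cutoff $\Lambda$. A minimal numerical sketch of this logarithmic growth:

```python
import numpy as np

def loop_integral(Lam, m=1.0, n=400000):
    """Radial part of the Euclidean integral d^4k/(k^2+m^2)^2 up to cutoff Lam."""
    k = np.linspace(0.0, Lam, n)
    f = k**3 / (k**2 + m**2)**2
    return np.sum(f[1:] + f[:-1]) * 0.5 * (k[1] - k[0])   # trapezoid rule

# The integral grows like ln(Lam): successive decades differ by ~ln(10).
for Lam in (10.0, 100.0, 1000.0):
    print(Lam, loop_integral(Lam))
```

The divergence sits entirely in the $\Lambda$-dependence; renormalization absorbs exactly this kind of cutoff-dependent piece into the parameters of the theory.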
Note that eq. (\ref{nbg}) is suitable to study factorization of infrared (IR) and collinear divergences in QCD at all orders in coupling constant \cite{n,s,gs}. It can be mentioned here that the 2-point 1PI vertex function $\Gamma[x_1,x_2]$ is the inverse of the (full) propagator (the 2-point connected Green's function $G(x_1,x_2)$) and the 3-point connected green's function $G(x_1,x_2,x_3)$ is expressed in terms of the 3-point 1PI vertex function $\Gamma[x_1,x_2,x_3]$ by adding (full) propagators to the external legs \cite{ab1}. Similarly, the 4-point connected green's function $G(x_1,x_2,x_3,x_4)$ is expressed in terms of the 4-point 1PI vertex function $\Gamma[x_1,x_2,x_3,x_4]$ and the 3-point 1PI vertex function $\Gamma[x_1,x_2,x_3]$ and the (full) propagator $G(x_1,x_2)$ \cite{ab1}. In this paper we express the 5-point connected green's function $G(x_1,x_2,x_3,x_4,x_5)$ of gluon in terms of the 5-point 1PI vertex function $\Gamma[x_1,x_2,x_3,x_4,x_5]$ and the 4-point 1PI vertex function $\Gamma[x_1,x_2,x_3,x_4]$ and the 3-point 1PI vertex function $\Gamma[x_1,x_2,x_3]$ and the (full) propagator $G(x_1,x_2)$ at all orders in coupling constant by using the path integral formulation of QCD. We also perform our calculation in the momentum space and express the 5-point connected green's function $G(k_1,k_2,k_3,k_4,k_5)$ of gluon in terms of the 5-point 1PI proper vertex function ${\bar \Gamma}[k_1,k_2,k_3,k_4,k_5]$ and the 4-point 1PI proper vertex function ${\bar \Gamma}[k_1,k_2,k_3,k_4]$ and the 3-point 1PI proper vertex function ${\bar \Gamma}[k_1,k_2,k_3]$ and the (full) propagator $G(k)$ at all orders in coupling constant by using the path integral formulation of QCD, see eq. (\ref{5gk}). We use this in the LSZ reduction formula and express the S-matrix element for the $gg \rightarrow ggg$ scattering process at all orders in coupling constant in QCD in terms of 5-point, 4-point, 3-point 1PI proper vertex functions and the (full) propagator. 
We will provide a derivation of eq. (\ref{5gk}) in this paper. The paper is organized as follows. In section II we describe the generating functional $Z[J,\eta,{\bar \eta}]$ in QCD in the path integral formulation. In section III we obtain the n-point connected green's function $G(x_1,...,x_n)$ and the n-point 1PI vertex function ${ \Gamma}[x_1,...,x_n]$ of the gluon from the generating functional in QCD by using the path integral formulation. In section IV we express the 5-point connected Green's function of the gluon in terms of the 5-point, 4-point and 3-point 1PI vertex functions and the (full) propagator in coordinate space at all orders in coupling constant in QCD. In section V we express the 5-point connected Green's function of the gluon in terms of the 5-point, 4-point and 3-point 1PI proper vertex functions and the (full) propagator in momentum space at all orders in coupling constant in QCD. In section VI we use this in the LSZ reduction formula and express the S-matrix element for the $gg \rightarrow ggg$ scattering process at all orders in coupling constant in QCD in terms of the 5-point, 4-point and 3-point 1PI proper vertex functions and the (full) propagator. Section VII contains the conclusion. \section{ Generating Functional in the Path Integral Formulation of QCD } We denote the gluon field by $Q^{\mu a}(x)$ where $\mu=0,1,2,3$ is the Lorentz index and $a=1,...,8$ is the color index. 
In the path integral formulation the generating functional $Z[J,\eta,{\bar \eta}]$ in QCD is given by \cite{ab} \begin{eqnarray} &&Z[J,\eta,{\bar \eta}]=\int [d{\bar \psi}][d\psi] [dQ] ~{\rm det}(\frac{\delta \partial^\mu Q_\mu^d}{\delta \omega^e})\nonumber \\ &&\times e^{i\int d^4x [-\frac{1}{4}F_{\mu \nu }^{d^2}[Q] - \frac{1}{2\alpha} (\partial_\mu Q^{\mu d}(x))^2 +{\bar \psi}(x)[i\gamma^\mu \partial_\mu +gT^d \gamma^\mu Q_\mu^d(x) -m]\psi(x) + {\bar \psi}(x) \cdot \eta(x) +{\bar \eta}(x) \cdot \psi(x)+ J(x) \cdot Q(x)]}\nonumber \\ \label{q1v} \end{eqnarray} where $\alpha$ is the gauge fixing parameter and \begin{eqnarray} &&F_{\mu \nu }^{d^2}[Q] =[\partial_\mu Q_\nu^d(x) - \partial_\nu Q_\mu^d(x) + gf^{dbc} Q_\mu^b(x) Q_\nu^c(x)] \nonumber \\ &&\times [\partial^\mu Q^{\nu d}(x) - \partial^\nu Q^{\mu d}(x) + gf^{dae} Q^{\mu a}(x) Q^{\nu e}(x)]. \label{q3v} \end{eqnarray} In eq. (\ref{q1v}) the determinant ${\rm det}( \frac{\delta \partial^\mu Q_\mu^d}{\delta \omega^e})$ can be expressed in terms of the path integration over the ghost fields but we will directly work with ${\rm det}(\frac{\delta \partial^\mu Q_\mu^d}{\delta \omega^e})$ in eq. (\ref{q1v}). The external source to the quark field $\psi_i(x)$ is ${\bar \eta}_i(x)$ and the external source to the gluon field $Q^{\mu a}(x)$ is $J^{\mu a}(x)$. The n-point connected green's function $G(x_1,...,x_n)$ and the n-point 1PI vertex function $\Gamma[x_1,...,x_n]$ of gluon at all orders in coupling constant in QCD can be generated from the generating functional in eq. (\ref{q1v}) by using the path integral formulation of QCD. \section{N-point connected green's function of gluon and the n-point 1PI vertex function of gluon in QCD} As mentioned above the n-point green's function $G(x_1,...,x_n)$ of gluon at all orders in coupling constant in QCD can be obtained from the generating functional in eq. (\ref{q1v}) by using the path integral formulation of QCD. From eq. 
(\ref{q1v}) we find that the n-point connected Green's function $G(x_1,...,x_n)$ of gluon at all orders in coupling constant in QCD is given by \begin{eqnarray} G(x_1,...,x_n)=\frac{1}{i^{n-1}}~ \frac{ \delta^n W[J,\eta,{\bar \eta}]}{\delta J(x_1)...\delta J(x_n)}|_{\eta ={\bar \eta} =J=0} \label{w1v} \end{eqnarray} where $W[J,\eta,{\bar \eta}]$ is related to $Z[J,\eta,{\bar \eta}]$ in eq. (\ref{q1v}) via the equation \begin{eqnarray} W[J,\eta,{\bar \eta}]=-i~{\rm ln}Z[J,\eta,{\bar \eta}]. \label{wzv} \end{eqnarray} The n-point Green's function of gluon in QCD obeys the translational invariance \begin{eqnarray} G(x_1,x_2,...,x_n) = G(x_1+y,x_2+y,...,x_n+y). \label{giv} \end{eqnarray} From eq. (\ref{wzv}) we find that the effective action functional in QCD is given by \begin{eqnarray} \Gamma[<Q>,<\eta>,<{\bar \eta}>]=W[J,\eta,{\bar \eta}]-\int d^4x[J(x)\cdot <Q(x)>+{\bar \eta}(x) \cdot <\psi(x)>+<{\bar \psi}(x)> \cdot \eta(x) ] \nonumber \\ \label{w4v} \end{eqnarray} which generates the n-point 1PI vertex function $\Gamma[x_1,...,x_n]$ of gluon at all orders in coupling constant in QCD given by \begin{eqnarray} \Gamma[x_1,...,x_n]=\frac{1}{i^{n-1}} \frac{\delta^n \Gamma[<Q>,<\eta>,<{\bar \eta}>]}{\delta <Q(x_1)>...\delta <Q(x_n)>}|_{<Q>=<\eta>=<{\bar \eta}>=0} \label{w5v} \end{eqnarray} where \begin{eqnarray} <Q(x)>= \frac{\delta W[J,\eta,{\bar \eta}]}{\delta J(x)},~~~~~<\psi(x)>= \frac{\delta W[J,\eta,{\bar \eta}]}{\delta {\bar \eta}(x)},~~~~~<{\bar \psi}(x)>= \frac{\delta W[J,\eta,{\bar \eta}]}{\delta { \eta}(x)}. 
\label{w3v} \end{eqnarray} \section{5-point connected green's function and the 5-point, 4-point and 3-point 1PI vertex functions of Gluon in coordinate space } As mentioned in the introduction the 2-point 1PI vertex function $\Gamma[x_1,x_2]$ is the inverse of the (full) propagator (the 2-point connected Green's function $G(x_1,x_2)$) and the 3-point connected green's function $G(x_1,x_2,x_3)$ is expressed in terms of the 3-point 1PI vertex function $\Gamma[x_1,x_2,x_3]$ by adding (full) propagators to the external legs in \cite{ab1} which can be easily verified from eqs. (\ref{w1v}) and (\ref{w5v}). Similarly, the 4-point connected green's function $G(x_1,x_2,x_3,x_4)$ is expressed in terms of the 4-point 1PI vertex function $\Gamma[x_1,x_2,x_3,x_4]$ and the 3-point 1PI vertex function $\Gamma[x_1,x_2,x_3]$ and the (full) propagator $G(x_1,x_2)$ in \cite{ab1} which can be verified from eqs. (\ref{w1v}) and (\ref{w5v}). In this section we will express the 5-point connected green's function $G(x_1,x_2,x_3,x_4,x_5)$ of gluon in terms of the 5-point 1PI vertex function $\Gamma[x_1,x_2,x_3,x_4,x_5]$ and the 4-point 1PI vertex function $\Gamma[x_1,x_2,x_3,x_4]$ and the 3-point 1PI vertex function $\Gamma[x_1,x_2,x_3]$ and the (full) propagator $G(x_1,x_2)$ at all orders in coupling constant in QCD. As discussed in the previous section, in the path integral formulation of QCD, the n-point connected Green's function $G(x_1,...,x_n)$ of gluon at all orders in coupling constant in QCD is given by eq. (\ref{w1v}) and the n-point 1PI vertex function $\Gamma[x_1,...,x_n]$ of gluon at all orders in coupling constant in QCD is given by eq. (\ref{w5v}). Hence after doing a lengthy but straightforward calculation we find from eqs. 
(\ref{w1v}) and (\ref{w5v}) that \begin{eqnarray} &&G(x_1,x_2,x_3,x_4,x_5)= \nonumber \\ &&\int d^4x'_1 \int d^4x'_2 \int d^4x'_3 \int d^4x'_4 \int d^4x'_5 G(x_1,x'_1)G(x_2,x'_2)G(x_3,x'_3) G(x_4,x'_4) G(x_5,x'_5) \Gamma[x'_1,x'_2,x'_3,x'_4,x'_5]\nonumber \\ && +\int d^4x \int d^4y \int d^4z \int d^4w \int d^4w_1 \int d^4w_2 \int d^4w_3 [\nonumber \\ &&G(x_1,w_1)G(x_5,w_2)G(x,w_3)\Gamma[w_1,w_2,w_3]G(x_2,y)G(x_3,z)G(x_4,w)\Gamma[x,y,z,w]\nonumber \\ &&+G(x_1,w_1)G(x_2,w_2)G(y,w_3)\Gamma[w_1,w_2,w_3]G(x_5,x)G(x_3,z)G(x_4,w)\Gamma[x,y,z,w]\nonumber \\ &&+G(x_1,w_1)G(x_3,w_2)G(z,w_3)\Gamma[w_1,w_2,w_3]G(x_5,x) G(x_2,y)G(x_4,w)\Gamma[x,y,z,w] \nonumber \\ &&+G(x_1,w_1)G(x_4,w_2)G(w,w_3)\Gamma[w_1,w_2,w_3]G(x_5,x) G(x_2,y)G(x_3,z)\Gamma[x,y,z,w] \nonumber \\ &&+G(x_1,x) G(x_5,y)G(x_2,z)\Gamma[x,y,z,w]G(w,w_1)G(x_3,w_2)G(x_4,w_3)\Gamma[w_1,w_2,w_3]\nonumber \\ &&+G(x_5,w_1)G(x_2,w_2)G(y,w_3)\Gamma[w_1,w_2,w_3]G(x_1,x) G(x_3,z)G(x_4,w)\Gamma[x,y,z,w]\nonumber \\ && +G(x_1,x) G(x_5,y)G(x_3,z)\Gamma[x,y,z,w]G(x_2,w_1)G(w,w_2)G(x_4,w_3)\Gamma[w_1,w_2,w_3]\nonumber \\ &&+G(x_5,w_1)G(x_3,w_2)G(z,w_3)\Gamma[w_1,w_2,w_3]G(x_1,x) G(x_2,y)G(x_4,w)\Gamma[x,y,z,w] \nonumber \\ && +G(x_1,x) G(x_5,y)G(x_4,z)\Gamma[x,y,z,w]G(x_2,w_1)G(x_3,w_2)G(w,w_3)\Gamma[w_1,w_2,w_3] \nonumber \\ &&+G(x_5,w_1)G(x_4,w_2)G(w,w_3)\Gamma[w_1,w_2,w_3]G(x_1,x) G(x_2,y)G(x_3,z)\Gamma[x,y,z,w]] \nonumber \\ && +\int d^4x \int d^4y \int d^4z \int d^4w \int d^4w_1 \int d^4w_2 \int d^4w_3 \int d^4y' \int d^4z' [ \nonumber \\ &&G(x_1,w_1)G(x_5,w_2)G(x,w_3)\Gamma[w_1,w_2,w_3] G(x_2,y)G(x_3,z)G(x_4,w)\Gamma[x,y,y']G(y',z')\Gamma[z',z,w]\nonumber \\ &&+G(x_1,w_1)G(x_2,w_2)G(y,w_3)\Gamma[w_1,w_2,w_3]G(x_5,x) G(x_3,z)G(x_4,w)\Gamma[x,y,y']G(y',z')\Gamma[z',z,w]\nonumber \\ &&+G(x_1,w_1)G(x_3,w_2)G(z,w_3)\Gamma[w_1,w_2,w_3]G(x_5,x) G(x_2,y)G(x_4,w)\Gamma[x,y,y']G(y',z')\Gamma[z',z,w]\nonumber \\ &&+G(x_1,w_1)G(x_4,w_2)G(w,w_3)\Gamma[w_1,w_2,w_3] G(x_5,x) G(x_2,y)G(x_3,z)\Gamma[x,y,y']G(y',z')\Gamma[z',z,w]\nonumber 
\\ &&+G(x_1,x) G(x_5,y)G(x_2,z)G(w_1,w)\Gamma[x,y,y']G(y',z')\Gamma[z',z,w]G(x_3,w_2)G(x_4,w_3)\Gamma[w_1,w_2,w_3]\nonumber \\ &&+G(x_5,w_1)G(x_2,w_2)G(y,w_3)\Gamma[w_1,w_2,w_3]G(x_1,x)G(x_3,z)G(x_4,w)\Gamma[x,y,y']G(y',z')\Gamma[z',z,w]\nonumber \\ && +G(x_1,x) G(x_5,y)G(x_3,z)G(w_2,w)\Gamma[x,y,y']G(y',z')\Gamma[z',z,w]G(x_2,w_1)G(x_4,w_3)\Gamma[w_1,w_2,w_3]\nonumber \\ &&+G(x_5,w_1)G(x_3,w_2)G(z,w_3)\Gamma[w_1,w_2,w_3]G(x_1,x) G(x_2,y)G(x_4,w)\Gamma[x,y,y']G(y',z')\Gamma[z',z,w]\nonumber \\ &&+G(x_1,x)G(x_5,y)G(x_4,z)G(w_3,w)\Gamma[x,y,y']G(y',z')\Gamma[z',z,w]G(x_2,w_1)G(x_3,w_2)\Gamma[w_1,w_2,w_3] \nonumber \\ &&+G(x_5,w_1)G(x_4,w_2)G(w,w_3)\Gamma[w_1,w_2,w_3]G(x_1,x) G(x_2,y)G(x_3,z)\Gamma[x,y,y']G(y',z')\Gamma[z',z,w] \nonumber \\ &&+G(x_1,w_1)G(x_5,w_2)\Gamma[w_1,w_2,w_3]G(w_3,x) G(x_2,y)G(x_3,z)G(x_4,w)\Gamma[x,z,y']G(y',z')\Gamma[y,z',w]\nonumber \\ &&+G(x_1,w_1)G(x_2,w_2)G(y,w_3)\Gamma[w_1,w_2,w_3]G(x_5,x)G(x_3,z)G(x_4,w)\Gamma[x,z,y']G(y',z')\Gamma[y,z',w]\nonumber \\ &&+G(x_1,w_1)G(x_3,w_2)\Gamma[w_1,w_2,w_3]G(x_5,x) G(x_2,y)G(w_3,z)G(x_4,w)\Gamma[x,z,y']G(y',z')\Gamma[y,z',w]\nonumber \\ &&+G(x_1,w_1)G(x_4,w_2)\Gamma[w_1,w_2,w_3]G(x_5,x) G(x_2,y)G(x_3,z)G(w_3,w)\Gamma[x,z,y']G(y',z')\Gamma[y,z',w]\nonumber \\ &&+G(x_1,x) G(x_5,y)G(x_2,z)G(w_1,w)\Gamma[x,z,y']G(y',z')\Gamma[y,z',w]G(x_3,w_2)G(x_4,w_3)\Gamma[w_1,w_2,w_3]\nonumber \\ &&+G(x_5,w_1)G(x_2,w_2)\Gamma[w_1,w_2,w_3]G(x_1,x) G(w_3,y)G(x_3,z)G(x_4,w)\Gamma[x,z,y']G(y',z')\Gamma[y,z',w]\nonumber \\ && +G(x_1,x) G(x_5,y)G(x_3,z)\Gamma[x,z,y']G(y',z')\Gamma[y,z',w]G(x_2,w_1)G(w,w_2)G(x_4,w_3)\Gamma[w_1,w_2,w_3]\nonumber \\ &&+G(x_5,w_1)G(x_3,w_2)G(z,w_3)\Gamma[w_1,w_2,w_3]G(x_1,x) G(x_2,y)G(x_4,w)\Gamma[x,z,y']G(y',z')\Gamma[y,z',w]\nonumber \\ &&+G(x_1,x) G(x_5,y)G(x_4,z)\Gamma[x,z,y']G(y',z')\Gamma[y,z',w]G(x_2,w_1)G(x_3,w_2)G(w,w_3)\Gamma[w_1,w_2,w_3] \nonumber \\ &&+G(x_5,w_1)G(x_4,w_2)\Gamma[w_1,w_2,w_3]G(x_1,x) G(x_2,y)G(x_3,z)G(w_3,w)\Gamma[x,z,y']G(y',z')\Gamma[y,z',w]\nonumber \\ 
&&+G(x_1,w_1)G(x_5,w_2)G(x,w_3)\Gamma[w_1,w_2,w_3]G(x_2,y)G(x_3,z)G(x_4,w)\Gamma[x,w,y']G(y',z')\Gamma[y,z,z']\nonumber \\ &&+G(x_1,w_1)G(x_2,w_2)G(y,w_3)\Gamma[w_1,w_2,w_3]G(x_5,x)G(x_3,z)G(x_4,w)\Gamma[x,w,y']G(y',z')\Gamma[y,z,z']\nonumber \\ &&+G(x_1,w_1)G(x_3,w_2)\Gamma[w_1,w_2,w_3]G(x_5,x) G(x_2,y)G(w_3,z)G(x_4,w)\Gamma[x,w,y']G(y',z')\Gamma[y,z,z']\nonumber \\ &&+G(x_1,w_1)G(x_4,w_2)\Gamma[w_1,w_2,w_3]G(x_5,x) G(x_2,y)G(x_3,z)G(w_3,w)\Gamma[x,w,y']G(y',z')\Gamma[y,z,z']\nonumber \\ &&+G(x_1,x) G(x_5,y)G(x_2,z)\Gamma[x,w,y']G(y',z')\Gamma[y,z,z']G(w,w_1)G(x_3,w_2)G(x_4,w_3)\Gamma[w_1,w_2,w_3]\nonumber \\ &&+G(x_5,w_1)G(x_2,w_2)\Gamma[w_1,w_2,w_3]G(x_1,x) G(w_3,y)G(x_3,z)G(x_4,w)\Gamma[x,w,y']G(y',z')\Gamma[y,z,z']\nonumber \\ && +G(x_1,x) G(x_5,y)G(x_3,z)\Gamma[x,w,y']G(y',z')\Gamma[y,z,z']G(x_2,w_1)G(w,w_2)G(x_4,w_3)\Gamma[w_1,w_2,w_3]\nonumber \\ &&+G(x_5,w_1)G(x_3,w_2)G(z,w_3)\Gamma[w_1,w_2,w_3]G(x_1,x) G(x_2,y)G(x_4,w)\Gamma[x,w,y']G(y',z')\Gamma[y,z,z']\nonumber \\ && +G(x_1,x) G(x_5,y)G(x_4,z)\Gamma[x,w,y']G(y',z')\Gamma[y,z,z']G(x_2,w_1)G(x_3,w_2)G(w,w_3)\Gamma[w_1,w_2,w_3]\nonumber \\ &&+G(x_5,w_1)G(x_4,w_2)G(w,w_3)\Gamma[w_1,w_2,w_3]G(x_1,x) G(x_2,y)G(x_3,z)\Gamma[x,w,y']G(y',z')\Gamma[y,z,z']\nonumber \\ &&+G(x_1,w_1)G(x_5,w_1)\Gamma[w_1,w_2,w_3]G(w_2,x)G(x_2,y)\Gamma[x,y,z]G(z,w)G(x_3,y')G(x_4,z')\Gamma[w,y',z'] \nonumber \\ &&+G(x_1,w_1)G(x_5,w_2)\Gamma[w_1,w_2,w_3]G(w_3,x)G(x_3,y)\Gamma[x,y,z]G(x_2,w)G(z,y')G(x_4,z')\Gamma[w,y',z']\nonumber \\ &&+G(x_1,w_1)G(x_5,w_2)\Gamma[w_1,w_2,w_3]G(w_3,x)G(x_4,y)\Gamma[x,y,z]G(x_2,w)G(x_3,y')G(z,z')\Gamma[w,y',z']\nonumber \\ &&+G(x_1,w_1)G(x_2,w_2)\Gamma[w_1,w_2,w_3]G(x_5,x)G(w_3,y)\Gamma[x,y,z]G(z,w)G(x_3,y')G(x_4,z')\Gamma[w,y',z']\nonumber \\ &&+G(x_1,w_1)\Gamma[w_1,w_2,w_3]G(x_5,x)G(x_2,y)G(w_3,z)\Gamma[x,y,z]G(w_2,w)G(x_3,y')G(x_4,z')\Gamma[w,y',z']\nonumber \\ && +G(x_5,w_1)G(x_2,w_2)\Gamma[w_1,w_2,w_3]G(x_1,x)G(x_3,y)\Gamma[x,y,z]G(w_3,w)G(z,y')G(x_4,z')\Gamma[w,y',z']\nonumber \\ && 
+G(x_5,w_1)G(x_2,w_2)\Gamma[w_1,w_2,w_3]G(x_1,x)G(x_4,y)\Gamma[x,y,z]G(w_3,w)G(x_3,y')G(z,z')\Gamma[w,y',z']\nonumber \\ &&+G(x_1,w_1)G(x_2,w_2)\Gamma[w_1,w_2,w_3]G(x_5,x)G(x_3,y)\Gamma[x,y,z]G(w_3,w)G(z,y')G(x_4,z')\Gamma[w,y',z']\nonumber \\ && +G(x_1,w_1)\Gamma[w_1,w_2,w_3]G(x_5,x)G(x_3,y)G(w_3,z)\Gamma[x,y,z]G(x_2,w)G(w_2,y')G(x_4,z')\Gamma[w,y',z']\nonumber \\ && + G(x_5,w_1)G(z,w_2)\Gamma[w_1,w_2,w_3]G(x_1,x)G(x_3,y)\Gamma[x,y,z]G(x_2,w)G(w_3,y')G(x_4,z')\Gamma[w,y',z']\nonumber \\ && +G(x_5,w_1)G(x_3,w_2)\Gamma[w_1,w_2,w_3]G(x_1,x)G(x_4,y)\Gamma[x,y,z]G(x_2,w)G(w_3,y')G(z,z')\Gamma[w,y',z']\nonumber \\ &&+G(x_1,w_1)G(x_2,w_2)\Gamma[w_1,w_2,w_3]G(x_5,x)G(x_4,y)\Gamma[x,y,z]G(w_3,w)G(x_3,y')G(z,z')\Gamma[w,y',z']\nonumber \\ && +G(x_1,w_1)\Gamma[w_1,w_2,w_3]G(x_5,x)G(x_4,y)G(w_3,z)\Gamma[x,y,z]G(x_2,w)G(x_3,y')G(w_2,z')\Gamma[w,y',z']\nonumber \\ && +G(x_5,w_1)G(x_4,w_2)\Gamma[w_1,w_2,w_3]G(x_1,x)G(x_3,y)\Gamma[x,y,z]G(x_2,w)G(z,y')G(w_3,z')\Gamma[w,y',z']\nonumber \\ && +G(x_5,w_1)G(z,w_2)\Gamma[w_1,w_2,w_3]G(x_1,x)G(x_4,y)\Gamma[x,y,z]G(x_2,w)G(x_3,y')G(w_3,z')\Gamma[w,y',z'] ]\nonumber \\ \label{5gx} \end{eqnarray} which is the expression of the 5-point connected Green's function $G(x_1,x_2,x_3,x_4,x_5)$ of the gluon in coordinate space in terms of the 5-point 1PI vertex function $\Gamma[x_1,x_2,x_3,x_4,x_5]$, the 4-point 1PI vertex function $\Gamma[x_1,x_2,x_3,x_4]$, the 3-point 1PI vertex function $\Gamma[x_1,x_2,x_3]$ and the (full) propagator $G(x_1,x_2)$ at all orders in the coupling constant in QCD.
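The structure of the expression above can be checked combinatorially. The second block consists of ten terms, each with one 3-point and one 4-point vertex joined by a single internal propagator; they correspond to the $\binom{5}{2}=10$ ways of choosing which pair of the five external legs attaches to the 3-point vertex. A minimal sketch in Python (the leg labels 1-5 are merely illustrative indices):

```python
# Enumerate the single-exchange topologies of the 5-point connected
# function: a 3-point vertex carrying two of the five external legs is
# joined by one internal propagator to a 4-point vertex carrying the rest.
from itertools import combinations

legs = (1, 2, 3, 4, 5)
channels = [(pair, tuple(l for l in legs if l not in pair))
            for pair in combinations(legs, 2)]

for pair, rest in channels:
    print(pair, "->", rest)
print(len(channels))   # -> 10
```

The ten channels enumerated here match, one to one, the ten $\Gamma_3$-$\Gamma_4$ terms in the second block of the expression above.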
\section{5-Point Connected Green's Function and the 5-Point, 4-Point and 3-Point 1PI Proper Vertex Functions of Gluon in Momentum Space} In this section we perform the calculation in momentum space and express the 5-point connected Green's function $G(k_1,k_2,k_3,k_4,k_5)$ of the gluon in terms of the 5-point 1PI proper vertex function ${\bar \Gamma}[k_1,k_2,k_3,k_4,k_5]$, the 4-point 1PI proper vertex function ${\bar \Gamma}[k_1,k_2,k_3,k_4]$, the 3-point 1PI proper vertex function ${\bar \Gamma}[k_1,k_2,k_3]$ and the (full) propagator $G(k)$ at all orders in the coupling constant in QCD. Note that, since the total momentum in a 1PI vertex function sums to zero, the $n$-point 1PI vertex function $\Gamma[k_1,k_2,...,k_n]$ in momentum space is related to the $n$-point 1PI proper vertex function ${\bar \Gamma}[k_1,k_2,...,k_n]$ via the relation \begin{eqnarray} \Gamma[k_1,k_2,...,k_n]=\delta^{(4)}(k_1+k_2+...+k_n)~{\bar \Gamma}[k_1,k_2,...,k_n]. \label{pgv} \end{eqnarray} Hence in the path integral formulation of QCD we find from eqs.
(\ref{giv}), (\ref{pgv}) and (\ref{5gx}) that \begin{eqnarray} &&[G(k_1)]^{-1} [G(k_2)]^{-1}[G(k_3)]^{-1}[G(k_4)]^{-1}[G(k_5)]^{-1}G(k_1,k_2,k_3,k_4,k_5)= \delta^{(4)}(k_1+k_2+k_3+k_4+k_5)\nonumber \\ &&\times [{\bar \Gamma}[k_1,k_2,k_3,k_4,k_5]~\nonumber \\ && +{\bar \Gamma}[k_1,k_5,-k_1-k_5]G(-k_1-k_5){\bar \Gamma}[k_1+k_5,k_2,k_3,k_4]\nonumber \\ && +{\bar \Gamma}[k_1,k_2,-k_1-k_2]G(-k_1-k_2){\bar \Gamma}[k_5,k_1+k_2,k_3,k_4]\nonumber \\ && +{\bar \Gamma}[k_1,k_3,-k_1-k_3]G(-k_1-k_3){\bar \Gamma}[k_5,k_2,k_1+k_3,k_4]\nonumber \\ && +{\bar \Gamma}[k_1,k_4,-k_1-k_4]G(-k_1-k_4){\bar \Gamma}[k_5,k_2,k_3,k_1+k_4]\nonumber \\ && +{\bar \Gamma}[-k_3-k_4,k_3,k_4]G(-k_3-k_4){\bar \Gamma}[k_1,k_5,k_2,k_3+k_4]\nonumber \\ && +{\bar \Gamma}[k_5,k_2,-k_5-k_2]G(-k_5-k_2){\bar \Gamma}[k_1,k_5+k_2,k_3,k_4]\nonumber \\ && +{\bar \Gamma}[k_2,-k_2-k_4,k_4]G(-k_2-k_4){\bar \Gamma}[k_1,k_5,k_3,k_2+k_4]\nonumber \\ && +{\bar \Gamma}[k_5,k_3,-k_5-k_3]G(-k_5-k_3){\bar \Gamma}[k_1,k_2,k_5+k_3,k_4]\nonumber \\ && +{\bar \Gamma}[k_2,k_3,-k_2-k_3]G(-k_2-k_3){\bar \Gamma}[k_1,k_5,k_4,k_2+k_3]\nonumber \\ && +{\bar \Gamma}[k_5,k_4,-k_5-k_4]G(-k_5-k_4){\bar \Gamma}[k_1,k_2,k_3,k_5+k_4]\nonumber \\ && +{\bar \Gamma}[k_1,k_5,-k_1-k_5]G(-k_1-k_5){\bar \Gamma}[k_1+k_5,k_2,k_3+k_4]G(-k_3-k_4){\bar \Gamma}[-k_3-k_4,k_3,k_4]\nonumber \\ && +{\bar \Gamma}[k_1,k_2,-k_1-k_2]G(-k_1-k_2){\bar \Gamma}[k_5,k_1+k_2,k_3+k_4]G(-k_3-k_4){\bar \Gamma}[-k_3-k_4,k_3,k_4]\nonumber \\ && + {\bar \Gamma}[k_1,k_3,-k_1-k_3]G(-k_1-k_3){\bar \Gamma}[k_5,k_2,-k_5-k_2]G(k_5+k_2){\bar \Gamma}[k_5+k_2,k_1+k_3,k_4]\nonumber \\ && +{\bar \Gamma}[k_1,k_4,-k_1-k_4]G(-k_1-k_4){\bar \Gamma}[k_5,k_2,-k_5-k_2]G(k_5+k_2){\bar \Gamma}[k_5+k_2,k_3,k_1+k_4]\nonumber \\ && + {\bar \Gamma}[-k_3-k_4,k_3,k_4]G(k_3+k_4){\bar \Gamma}[k_1,k_5,-k_1-k_5]G(k_1+k_5){\bar \Gamma}[k_1+k_5,k_2,k_3+k_4]\nonumber \\ && +{\bar \Gamma}[k_5,k_2,-k_5-k_2]G(-k_5-k_2){\bar \Gamma}[k_1,k_5+k_2,k_3+k_4]G(-k_3-k_4){\bar \Gamma}[-k_3-k_4,k_3,k_4]\nonumber \\ && 
+{\bar \Gamma}[k_2,-k_2-k_4,k_4]G(k_2+k_4){\bar \Gamma}[k_1,k_5,-k_1-k_5]G(k_1+k_5){\bar \Gamma}[k_1+k_5,k_3,k_2+k_4]\nonumber \\ && +{\bar \Gamma}[k_5,k_3,-k_5-k_3]G(-k_5-k_3){\bar \Gamma}[k_1,k_2,-k_1-k_2]G(k_1+k_2){\bar \Gamma}[k_1+k_2,k_5+k_3,k_4]\nonumber \\ && +{\bar \Gamma}[k_2,k_3,-k_2-k_3]G(k_2+k_3){\bar \Gamma}[k_1,k_5,-k_1-k_5]G(k_1+k_5){\bar \Gamma}[k_1+k_5,k_4,k_2+k_3]\nonumber \\ && +{\bar \Gamma}[k_5,k_4,-k_5-k_4]G(-k_5-k_4){\bar \Gamma}[k_1,k_2,-k_1-k_2]G(k_1+k_2){\bar \Gamma}[k_1+k_2,k_3,k_5+k_4]\nonumber \\ && +{\bar \Gamma}[k_1,k_5,-k_1-k_5]G(k_1+k_5){\bar \Gamma}[k_1+k_5,k_3,k_2+k_4]G(-k_2-k_4){\bar \Gamma}[k_2,-k_2-k_4,k_4]\nonumber \\ && +{\bar \Gamma}[k_1,k_2,-k_1-k_2]G(-k_1-k_2){\bar \Gamma}[k_5,k_3,-k_5-k_3]G(k_5+k_3){\bar \Gamma}[k_1+k_2,k_5+k_3,k_4]\nonumber \\ && +{\bar \Gamma}[k_1,k_3,-k_1-k_3]G(k_1+k_3){\bar \Gamma}[k_5,k_1+k_3,k_2+k_4]G(-k_2-k_4){\bar \Gamma}[k_2,-k_2-k_4,k_4]\nonumber \\ && +{\bar \Gamma}[k_1,k_4,-k_1-k_4]G(k_1+k_4){\bar \Gamma}[k_5,k_3,-k_5-k_3]G(k_5+k_3){\bar \Gamma}[k_2,k_5+k_3,k_1+k_4]\nonumber \\ && +{\bar \Gamma}[-k_3-k_4,k_3,k_4]G(k_3+k_4){\bar \Gamma}[k_1,k_2,-k_1-k_2]G(k_1+k_2){\bar \Gamma}[k_5,k_1+k_2,k_3+k_4]\nonumber \\ && +{\bar \Gamma}[k_5,k_2,-k_5-k_2]G(k_5+k_2){\bar \Gamma}[k_1,k_3,-k_1-k_3]G(k_1+k_3){\bar \Gamma}[k_5+k_2,k_1+k_3,k_4]\nonumber \\ && +{\bar \Gamma}[k_2,-k_2-k_4,k_4]G(-k_2-k_4){\bar \Gamma}[k_1,k_3,-k_1-k_3]G(k_1+k_3){\bar \Gamma}[k_5,k_1+k_3,k_2+k_4]\nonumber \\ && +{\bar \Gamma}[k_5,k_3,-k_5-k_3]G(-k_5-k_3){\bar \Gamma}[k_1,k_5+k_3,k_2+k_4]G(-k_2-k_4){\bar \Gamma}[k_2,-k_2-k_4,k_4]\nonumber \\ && +{\bar \Gamma}[k_2,k_3,-k_2-k_3]G(-k_2-k_3){\bar \Gamma}[k_1,k_4,-k_1-k_4]G(-k_1-k_4){\bar \Gamma}[k_5,k_1+k_4,k_2+k_3]\nonumber \\ && +{\bar \Gamma}[k_5,k_4,-k_5-k_4]G(k_5+k_4){\bar \Gamma}[k_1,k_3,-k_1-k_3]G(k_1+k_3){\bar \Gamma}[k_2,k_1+k_3,k_5+k_4]\nonumber \\ && +{\bar \Gamma}[k_1,k_5,-k_1-k_5]G(-k_1-k_5){\bar \Gamma}[k_1+k_5,k_4,k_2+k_3]G(-k_2-k_3){\bar 
\Gamma}[k_2,k_3,-k_2-k_3]\nonumber \\ && +{\bar \Gamma}[k_1,k_2,-k_1-k_2]G(-k_1-k_2){\bar \Gamma}[k_5,k_4,-k_5-k_4]G(k_5+k_4){\bar \Gamma}[k_1+k_2,k_3,k_5+k_4]\nonumber \\ && +{\bar \Gamma}[k_1,k_3,-k_1-k_3]G(k_1+k_3){\bar \Gamma}[k_5,k_4,-k_5-k_4]G(-k_5-k_4){\bar \Gamma}[k_2,k_1+k_3,k_5+k_4]\nonumber \\ && +{\bar \Gamma}[k_1,k_4,-k_1-k_4]G(k_1+k_4){\bar \Gamma}[k_5,k_1+k_4,k_2+k_3]G(-k_2-k_3){\bar \Gamma}[k_2,k_3,-k_2-k_3]\nonumber \\ && +{\bar \Gamma}[-k_3-k_4,k_3,k_4]G(-k_3-k_4){\bar \Gamma}[k_1,k_3+k_4,k_5+k_2]G(-k_5-k_2){\bar \Gamma}[k_5,k_2,-k_5-k_2]\nonumber \\ && +{\bar \Gamma}[k_5,k_2,-k_5-k_2]G(k_5+k_2){\bar \Gamma}[k_1,k_4,-k_1-k_4]G(k_1+k_4){\bar \Gamma}[k_5+k_2,k_3,k_1+k_4]\nonumber \\ && +{\bar \Gamma}[k_2,-k_2-k_4,k_4]G(-k_2-k_4){\bar \Gamma}[k_1,k_2+k_4,k_5+k_3]G(-k_5-k_3){\bar \Gamma}[k_5,k_3,-k_5-k_3]\nonumber \\ && +{\bar \Gamma}[k_5,k_3,-k_5-k_3]G(-k_5-k_3){\bar \Gamma}[k_1,k_4,-k_1-k_4]G(k_1+k_4){\bar \Gamma}[k_2,k_5+k_3,k_1+k_4]\nonumber \\ && +{\bar \Gamma}[k_2,k_3,-k_2-k_3]G(-k_2-k_3){\bar \Gamma}[k_1,k_2+k_3,k_5+k_4]G(-k_5-k_4){\bar \Gamma}[k_5,k_4,-k_5-k_4]\nonumber \\ && +{\bar \Gamma}[k_5,k_4,-k_5-k_4]G(-k_5-k_4){\bar \Gamma}[k_1,k_5+k_4,k_2+k_3]G(-k_2-k_3){\bar \Gamma}[k_2,k_3,-k_2-k_3]\nonumber \\ && +{\bar \Gamma}[k_1,-k_1-k_5,k_5]G(k_1+k_5){\bar \Gamma}[k_1+k_5,k_2,k_3+k_4]G(-k_3-k_4){\bar \Gamma}[-k_3-k_4,k_3,k_4]\nonumber \\ && +{\bar \Gamma}[k_1,k_5,-k_1-k_5]G(k_1+k_5){\bar \Gamma}[k_1+k_5,k_3,k_2+k_4]G(-k_2-k_4){\bar \Gamma}[k_2,-k_2-k_4,k_4]\nonumber \\ && +{\bar \Gamma}[k_1,k_5,-k_1-k_5]G(k_1+k_5){\bar \Gamma}[k_1+k_5,k_4,k_2+k_3]G(-k_2-k_3){\bar \Gamma}[k_2,k_3,-k_2-k_3]\nonumber \\ && +{\bar \Gamma}[k_1,k_2,-k_1-k_2]G(k_1+k_2){\bar \Gamma}[k_5,k_1+k_2,k_3+k_4]G(-k_3-k_4){\bar \Gamma}[-k_3-k_4,k_3,k_4]\nonumber \\ && +{\bar \Gamma}[k_1,k_3+k_4,k_5+k_2]G(-k_5-k_2){\bar \Gamma}[k_5,k_2,-k_5-k_2]G(-k_3-k_4){\bar \Gamma}[-k_3-k_4,k_3,k_4]\nonumber \\ && +{\bar \Gamma}[k_5,k_2,-k_5-k_2]G(k_5+k_2){\bar 
\Gamma}[k_1,k_3,-k_1-k_3]G(k_1+k_3){\bar \Gamma}[k_5+k_2,k_1+k_3,k_4]\nonumber \\ && +{\bar \Gamma}[k_5,k_2,-k_5-k_2]G(k_5+k_2){\bar \Gamma}[k_1,k_4,-k_1-k_4]G(k_1+k_4){\bar \Gamma}[k_5+k_2,k_3,k_1+k_4]\nonumber \\ && +{\bar \Gamma}[k_1,k_2,-k_1-k_2]G(k_1+k_2){\bar \Gamma}[k_5,k_3,-k_5-k_3]G(k_5+k_3){\bar \Gamma}[k_1+k_2,k_5+k_3,k_4]\nonumber \\ && +{\bar \Gamma}[k_1,k_2+k_4,k_5+k_3]G(-k_5-k_3){\bar \Gamma}[k_5,k_3,-k_5-k_3]G(-k_2-k_4){\bar \Gamma}[k_2,-k_2-k_4,k_4]\nonumber \\ && +{\bar \Gamma}[k_5,k_1+k_3,k_2+k_4]G(k_1+k_3){\bar \Gamma}[k_1,k_3,-k_1-k_3]G(-k_2-k_4){\bar \Gamma}[k_2,-k_2-k_4,k_4]\nonumber \\ && +{\bar \Gamma}[k_5,k_3,-k_5-k_3]G(-k_5-k_3){\bar \Gamma}[k_1,k_4,-k_1-k_4]G(-k_1-k_4){\bar \Gamma}[k_2,k_5+k_3,k_1+k_4]\nonumber \\ && +{\bar \Gamma}[k_1,k_2,-k_1-k_2]G(k_1+k_2){\bar \Gamma}[k_5,k_4,-k_5-k_4]G(k_5+k_4){\bar \Gamma}[k_1+k_2,k_3,k_5+k_4]\nonumber \\ && +{\bar \Gamma}[k_1,k_2+k_3,k_5+k_4]G(-k_5-k_4){\bar \Gamma}[k_5,k_4,-k_5-k_4]G(-k_2-k_3){\bar \Gamma}[k_2,k_3,-k_2-k_3]\nonumber \\ && +{\bar \Gamma}[k_5,k_4,-k_5-k_4]G(k_1+k_3){\bar \Gamma}[k_1,k_3,-k_1-k_3]G(k_5+k_4){\bar \Gamma}[k_2,k_1+k_3,k_5+k_4]\nonumber \\ && + {\bar \Gamma}[k_5,k_1+k_4,k_2+k_3]G(k_1+k_4){\bar \Gamma}[k_1,k_4,-k_1-k_4]G(-k_2-k_3){\bar \Gamma}[k_2,k_3,-k_2-k_3]~] \label{5gk} \end{eqnarray} which is the expression of the 5-point connected Green's function $G(k_1,k_2,k_3,k_4,k_5)$ of the gluon in momentum space in terms of the 5-point 1PI proper vertex function ${\bar \Gamma}[k_1,k_2,k_3,k_4,k_5]$, the 4-point 1PI proper vertex function ${\bar \Gamma}[k_1,k_2,k_3,k_4]$, the 3-point 1PI proper vertex function ${\bar \Gamma}[k_1,k_2,k_3]$ and the (full) propagator $G(k)$ at all orders in the coupling constant in QCD. \section{5-Point 1PI Proper Vertex Function of Gluon and the S-Matrix Element for $gg \rightarrow ggg$ Scattering Process at All Orders in Coupling Constant} In this section we use eq.
(\ref{5gk}) in the LSZ reduction formula to express the S-matrix element for the gluonic scattering process $gg \rightarrow ggg$ at all orders in the coupling constant in QCD in terms of the 5-point, 4-point and 3-point 1PI proper vertex functions and the (full) propagator by using the path integral formulation of QCD. Consider the $2 \rightarrow 3$ gluonic scattering process $gg \rightarrow ggg$ in QCD given by \begin{eqnarray} k_1+k_2 \rightarrow k'_1+k'_2+k'_3 \label{gs3} \end{eqnarray} where $k_1,k_2$ are the four-momenta of the two incoming gluons and $k'_1,k'_2,k'_3$ are the four-momenta of the three outgoing gluons. The initial (final) state is given by \begin{eqnarray} |i>=|k_1,k_2>,~~~~~~~~~~|f>=|k'_1,k'_2,k'_3>. \label{ifi} \end{eqnarray} Using the LSZ reduction formula, the S-matrix element for the $gg \rightarrow ggg$ scattering process in eq. (\ref{gs3}) at all orders in the coupling constant in QCD is given by \begin{eqnarray} &&<f|i> = [G(-k'_1)]^{-1} [G(-k'_2)]^{-1}[G(-k'_3)]^{-1}[G(k_2)]^{-1}[G(k_1)]^{-1} G(-k'_1,-k'_2,-k'_3,k_1,k_2)\nonumber \\ \label{h3} \end{eqnarray} where $G(k)$ is the renormalized (full) propagator of the gluon in momentum space and $G(k_1,k_2,k_3,k_4,k_5)$ is the renormalized 5-point connected Green's function of the gluon in momentum space. Note that, as mentioned in eq. (\ref{cr}), for simplicity we have included the finite factors, such as the relevant sum over polarization vectors and the color factors, in the partonic cross section \begin{eqnarray} {\hat \sigma} \propto |<f|i>|^2 \label{crf} \end{eqnarray} instead of in the S-matrix element in eq. (\ref{h3}), so that the S-matrix element in eq. (\ref{h3}) is expressed in terms of the Green's functions only. By using eq.
(\ref{5gk}) in (\ref{h3}) we obtain the S-matrix element for the scattering process $gg \rightarrow ggg$ at all orders in the coupling constant in QCD in terms of the 5-point, 4-point and 3-point 1PI proper vertex functions and the (full) propagator in the path integral formulation of QCD. The path integral procedure outlined above is suitable for the simultaneous study of the renormalization of ultraviolet (UV) divergences and the factorization of infrared (IR) and collinear divergences at all orders in the coupling constant in QCD. The above technique can also be extended to the closed-time path integral formalism in non-equilibrium QCD \cite{c} to study partonic scattering cross sections at all orders in the coupling constant in the non-equilibrium quark-gluon plasma \cite{a1,a2,a3,a4} at RHIC and LHC. \section{Conclusions} As far as renormalization in perturbative QCD is concerned, the $n$-point one particle irreducible (1PI) proper vertex function is the basic building block in which the ultraviolet (UV) divergence occurs when the loop momentum integration limit goes to infinity. In this paper we have expressed the S-matrix element for the $gg \rightarrow ggg$ scattering process at all orders in the coupling constant in terms of the 5-point, 4-point and 3-point 1PI proper vertex functions and the (full) propagator by using the path integral formulation of QCD.
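The bookkeeping in eq. (\ref{5gk}) can be spot-checked numerically: under the overall factor $\delta^{(4)}(k_1+k_2+k_3+k_4+k_5)$, the arguments of every 1PI vertex in every term must sum to zero. A minimal sketch in Python, where representing each $k_i$ as a basis vector is only a bookkeeping device, and the two sampled vertices are taken from the first single-exchange term of eq. (\ref{5gk}):

```python
# Spot-check of momentum conservation: each external momentum k_i is
# encoded as a basis vector of Z^5, so a vertex is consistent iff its
# arguments sum to the zero vector or to (1,1,1,1,1), the latter being
# k_1+...+k_5, which vanishes under the overall delta function.
k = {i: tuple(1 if j == i else 0 for j in range(1, 6)) for i in range(1, 6)}

def add(*vectors):
    return tuple(sum(components) for components in zip(*vectors))

def neg(v):
    return tuple(-c for c in v)

def conserved(vertex):
    total = add(*vertex)
    return total == (0,) * 5 or total == (1,) * 5

# Gamma[k1,k5,-k1-k5] G(-k1-k5) Gamma[k1+k5,k2,k3,k4]
three_point = (k[1], k[5], neg(add(k[1], k[5])))
four_point = (add(k[1], k[5]), k[2], k[3], k[4])
assert conserved(three_point) and conserved(four_point)
print("sampled vertices conserve momentum")
```

The same check can be run mechanically over all terms of eq. (\ref{5gk}) once their vertex arguments are transcribed.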
\section{Introduction} With the rapid development of human society and the global economy, the consumption of resources has increased steadily, especially since the first industrial revolution\cite{2015_Science_347_1246501_Bonaccorso_Graphene}. The existing fossil fuels on earth, such as coal, oil and natural gas, which accumulated over the past billions of years, will probably be exhausted within a few hundred years owing to the huge energy demand. Looming resource exhaustion and the accompanying production of environmentally harmful by-products push us to find possible solutions for future energy. This could be remedied in two ways. One is to improve the current efficiency of energy utilization, to develop novel technologies that reduce the waste of energy, and to collect waste heat for reuse. The other is to find sustainable energy, renewable sources, clean fuels, \emph{etc}. It is known that a large amount of energy is wasted in factories, home cooking and vehicle driving because of the low efficiency of energy conversion. To name one example, the efficiency of an engine is about 25-50\pct, and the remaining energy is dispersed into nature in the form of waste heat, causing serious environmental pollution and wasting substantial resources. If the waste heat could be recycled, we would fundamentally improve the efficiency of energy utilization and solve, to some extent, the current energy and environmental problems. On the other hand, carbon dioxide and monoxide as well as dust particles such as PM 2.5 are harmful by-products of consuming fossil fuels; they are responsible for global warming through the greenhouse effect and, in addition to polluting the environment, are very harmful to human health. Thus, it is in urgent demand to address these challenges by seeking next-generation energy sources that are economical, sustainable (renewable), clean (environment-friendly), and abundant on earth.
There have been numerous studies exploring the possible utilization of carbon based materials for next-generation energy applications due to their promising physical and chemical properties\cite{2018_CAAJ_13_1518_Wang_Recent, 2016_Nanoscale_8_12863_Gao_Electron, 2018_JoIaEC_64_16_Jayaraman_Recent, 2010_NM_9_871_Tour_Green}. However, large-scale fabrication of carbon materials or carbon based nanostructures is a formidable challenge\cite{2018_PiEaCS_67_115_Kumar_Recent}. Much effort has been dedicated to the synthesis processes, such as the bottom-up approaches from designed carbon molecules,\cite{2018_ACIE_57_9679_Mori_Carbon} the pseudo-topotactic conversion of carbon nanotubes by picosecond pulsed-laser irradiation,\cite{2017_NC_8_683_Zhang_Pseudotopotactic} \emph{etc}. Benefiting from this progress and the emergence of new synthetic technology, the synthesis of novel carbon materials becomes feasible. Recently, T-carbon, a carbon allotrope previously predicted by theoretical study, \cite{2011_PRL_106_155703_Sheng_TCarbon} was experimentally synthesized (Figure~\ref{fig:carbon}G,H)\cite{2017_NC_8_683_Zhang_Pseudotopotactic}. \begin{figure*}[tb] \centering \includegraphics[width=1.00\linewidth]{overview.jpg} \caption{\label{fig:overview} Overview for T-carbon of properties, energy applications, possible enhancement approaches for thermoelectrics and hydrogen storage, and future development. Reproduced with permission. \cite{2014_EES_7_3857_Lee_A, 2016_Nanoscale_8_15033_Joya_Efficient, 2014_MH_1_400_Djurišić_Strategies} Copyright 2014, 2016, Royal Society of Chemistry.
Reproduction with permission available at \emph{ https://www.sigmaaldrich.com/technical-documents/articles/technology-spotlights/plexcore-pv-ink-system.html, http://toroccoscoolingandheating.com/thermoelectric-wine-coolers-work/ } } \end{figure*} \begin{figure*}[tb] \centering \includegraphics[width=1.00\linewidth]{carbon.jpg} \caption{\label{fig:carbon} The position of T-carbon in the carbon family as compared with the common carbon materials in (A-C) three-dimensional (amorphous carbon, graphite, diamond), (F) two-dimensional (graphene), (E) one-dimensional (carbon nanotube), and (D) zero-dimensional (fullerene). (G) The crystal structure of T-carbon (its space group $Fd\overline{3}m$ is the same as cubic diamond) is generated by replacing each atom in cubic diamond with a carbon tetrahedron (C$_4$ unit). The numbers in [] indicate the crystal direction. (H) The experimental synthesis layout of T-carbon from a pseudo-topotactic conversion of multi-walled carbon nanotubes under picosecond pulsed-laser irradiation in methanol. Reproduced with permission.\cite{2011_PRL_106_155703_Sheng_TCarbon} Copyright 2011, American Physical Society. Reproduction with permission available at \emph{http://gr.xjtu.edu.cn/web/jinying-zhang/publications}. } \end{figure*} \begin{figure*}[tb] \centering \includegraphics[width=1.00\linewidth]{energy.jpg} \caption{\label{fig:energy} The typical energy applications of T-carbon in (A-C) thermoelectrics, (D) hydrogen storage, and (E,F) lithium ion batteries (LIB). (A) Seebeck coefficient (thermopower) and (B) the figure of merit $ZT$ in contour plot in plane of chemical potential ($\mu$) and temperature. (C) The modulation of the power factor (PF) of T-carbon with compressive strain of 2\protect\pct\ applied, with calcium (Ca) or magnesium (Mg) doped, or being cut into two-dimensional structures along the (111) direction. (D) The hydrogen storage in T-carbon with the capacity of $\sim$7.7\,wt\protect\pct. 
(E) The overview of M (= Li/Na/K/Mg) atom migration in T-carbon. The minimum migration path is between T$_1$ and the neighboring T$_1$ sites, which are the centers of the vacancies. T$_2$ indicates the middle point. (F) The energy profiles of Li, Na, K and Mg atoms diffusing along the minimum migration path as indicated in (E). } \end{figure*} Herein, we would like to introduce T-carbon and discuss its promising applications for next-generation energy technologies (Figure~\ref{fig:overview} and Figure~\ref{fig:carbon}). It is shown that T-carbon can be potentially used in thermoelectrics, hydrogen storage, lithium ion batteries (Figure~\ref{fig:energy}), \emph{etc}. The challenges, opportunities, and possible directions for future studies of energy applications of T-carbon are also addressed. \section{T-carbon} Carbon is one of the most abundant elements on earth and one of the most important elements for life, being contained in the majority of molecules. Carbon atoms possess a unique ability to form bonds with other carbon atoms and nonmetallic elements in diverse hybridization states ($sp$, $sp^2$, $sp^3$)\cite{2017_MT_20_592_Borchardt_Toward, 2018_JoIaEC_64_16_Jayaraman_Recent}. Countless carbon-based organic compounds, with a wide range of structures from small molecules to long chains, are generated; they have great diversity in chemical and biological properties and thus give rise to the present colorful world\cite{2018_CAAJ_13_1518_Wang_Recent}. Carbon mainly exists as three natural allotropes, namely graphite, diamond, and amorphous carbon (Figure~\ref{fig:carbon}A,B,C)\cite{2015_CR_115_4744_Georgakilas_Broad}. Despite consisting exclusively of carbon atoms, these allotropes have drastically different properties, which strongly hints at the diversity in the properties of carbon materials with different structures and orbital hybridizations.
Over the last several decades, several new carbon allotropes have been synthesized with novel properties and potential applications in technology. The three most typical examples include zero-dimensional (0D) fullerenes discovered in 1985 (Figure~\ref{fig:carbon}D), one-dimensional (1D) carbon nanotubes identified in 1991 (Figure~\ref{fig:carbon}E), and two-dimensional (2D) graphene isolated in 2004 (Figure~\ref{fig:carbon}F)\cite{2010_NM_9_868_Hirsch_The}. The fantastic properties of these carbon allotropes and their highly anticipated engineering possibilities have attracted intensive attention from both academia and industry \cite{2010_NM_9_871_Tour_Green}. Beyond that, a number of three-dimensional (3D) carbon allotropes have also been predicted theoretically, including M-carbon,\cite{2009_PRL_102_175506_Li_Superhard} bct-C$_4$,\cite{2010_PRL_104_125504_Umemoto_BodyCentered} BCO-C$_{16}$,\cite{2016_PRL_116_195501_Wang_BodyCentered} \emph{etc}, among which T-carbon, predicted in 2011, \cite{2011_PRL_106_155703_Sheng_TCarbon} is the most impactful form (Figure~\ref{fig:carbon}G). T-carbon can be simply derived by substituting each carbon atom in cubic diamond with a C$_4$ unit of carbon tetrahedron (Figure~\ref{fig:carbon}C,G), which is where the name `T-carbon' comes from. The space group of T-carbon is $Fd\overline{3}m$, the same as cubic diamond. There are two tetrahedrons (eight carbon atoms in total) in a primitive unit cell. Such a geometric configuration of carbon atoms forming 3D T-carbon is dynamically stable, as confirmed by the absence of imaginary frequencies in the phonon dispersion in a previous study\cite{2011_PRL_106_155703_Sheng_TCarbon}. The lattice constant of fully optimized T-carbon is about 7.52\,\AA, which is more than two times that of diamond (3.566\,\AA).
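From these structural parameters the mass density follows directly. A quick check, assuming the conventional cubic cell (lattice constant 7.52\,\AA) contains 32 atoms, i.e.\ the 8 sites of the conventional diamond cell each replaced by a 4-atom tetrahedron; the atomic constants are the standard tabulated values:

```python
# Density check for T-carbon: 32 carbon atoms per conventional cubic cell
# (8 diamond sites x 4 atoms per tetrahedron), lattice constant 7.52 A.
AMU = 1.66054e-27   # atomic mass unit in kg
M_C = 12.011        # atomic mass of carbon in u
a = 7.52e-10        # lattice constant in m

rho = 32 * M_C * AMU / a**3    # density in kg/m^3
print(round(rho / 1000.0, 2))  # -> 1.5 (g/cm^3)
```

This reproduces the reported density of 1.50\,g/cm$^3$ discussed below.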
As compared to the bond length in diamond (1.544\,\AA), the bonds in T-carbon are of two types, with bond lengths of 1.502 and 1.417\,\AA\ for the intratetrahedron and intertetrahedron bonds, respectively\cite{2017_Carbon_121_154_Esser_Bonding}. Besides, different from the bond angle in diamond (109.5$^\circ$), the bond angles in T-carbon are 60 and 144.74$^\circ$ for the bonds within a tetrahedron and between the two inequivalent bond types, respectively, implying the existence of strain. Because of the large interspaces between atoms in T-carbon, its density is 1.50\,g/cm$^3$, much smaller than that of diamond, graphite, M-carbon, and bct-C$_4$\cite{2011_PRL_106_155703_Sheng_TCarbon}. In addition to the low density, the Vickers hardness of T-carbon is calculated to be 61.1\,GPa, about one third lower than that of diamond (96\,GPa)\cite{2011_PRB_84_121405_Chen_Hardness, 2011_PRL_106_155703_Sheng_TCarbon}. The low density, with large interspaces between atoms, and the soft nature promise broad applications of T-carbon. T-carbon has recently been synthesized in experiment (Figure~\ref{fig:carbon}H) from a pseudo-topotactic conversion of multi-walled carbon nanotubes (MWCNTs) suspended in methanol under picosecond pulsed-laser irradiation \cite{2017_NC_8_683_Zhang_Pseudotopotactic}. Firstly, MWCNTs (length: $\sim$100-200\,nm; diameter: $\sim$10-20\,nm) are prepared by chemical vapor deposition (CVD) and subsequent processing. After being dispersed in absolute methanol, the suspension containing individualized MWCNTs is then transferred to the self-designed setup for laser irradiation, where it is stirred with a magnetic stirring bar and kept under a nitrogen atmosphere. In this fast and far-from-equilibrium process, the metastable structure is captured with the successful transition from $sp^2$ to $sp^3$ chemical bonds (Figure~\ref{fig:carbon}H), and the suspension becomes transparent after the laser reaction.
Hollow carbon nanotubes are transformed into solid carbon nanorods, in which the connections between carbon atoms are exactly the same as in the theoretically predicted T-carbon, demonstrating the synthesis of this kind of structure. During the transformation process, the time scale of energy transfer from the laser to the MWCNTs and the subsequent ultrafast quenching play a key role in the formation and stabilization of T-carbon. The cubic crystal system of the generated T-carbon nanowires (NWs) is confirmed by the fast Fourier transform (FFT) pattern at different tilting angles from the high-resolution transmission electron microscopy (HRTEM) image. The successful synthesis of T-carbon lets it join the carbon family as another achievable 3D carbon allotrope in addition to graphite, diamond, and amorphous carbon (Figure~\ref{fig:carbon}). The proposal and subsequent experimental realization of T-carbon is a breakthrough in carbon science\cite{2016_Materials_9_484_Xing_A, 2016_JPCM_28_475402_Wang_C20}. Compared with other allotropes of carbon, T-carbon has many unique and intriguing properties (Figure~\ref{fig:overview}), suggesting that it could have a wide variety of potential applications in photocatalysis, solar cells, adsorption, energy storage, supercapacitors, aerospace materials, electronic devices, \emph{etc}. For example, T-carbon is predicted to be a semiconductor with a direct band gap of $\sim$3.0\,eV at the $\Gamma$-point (GGA: 2.25\,eV; HSE06: 2.273\,eV; B3LYP: 2.968\,eV)\cite{2019_nCM_5_9_Sun_A, 2011_PRL_106_155703_Sheng_TCarbon}. The orbitals in T-carbon hybridize with each other and form anisotropic $sp^3$ hybridized bonds. As for the two types of bonds in T-carbon, the charge density is found to be much larger for the intertetrahedron bonds than for the intratetrahedron bonds, indicating relatively stronger intertetrahedron bonds with more accumulated electrons.
The bond strength is consistent with the bond length, which stabilizes the structure by balancing the strain from the carbon tetrahedron cage. Moreover, the band gap can be effectively adjusted by doping elements or by strain engineering to be suitable for photocatalysis and solar cells\cite{2019_Optik_180_125_Alborznia_Pressure, 2019_CP_518_69_Ren_Efficient, 2019_nCM_5_9_Sun_A}. In particular, the band gap can be tuned in the range of 1.62-3.63\,eV with group IVA single-atom substitution, where the doped structures retain their stability\cite{2019_CP_518_69_Ren_Efficient}. Quite recently, it was shown that T-carbon nanowires exhibit better ductility and larger failure strain than other carbon materials such as diamond and diamond-like carbon \cite{2018_Carbon_138_357_Bai_Mechanical}. It was also reported that the transport properties of T-carbon can be effectively modulated by imposing strain\cite{2019_nCM_5_9_Sun_A}. With specific characteristics of its electronic band structure, such as potentially efficient electron transport,\cite{2019_nCM_5_9_Sun_A} T-carbon has the potential to be used as a thermoelectric material for energy recovery and conversion, \cite{2017_PRB_95_085207_Yue_Thermal} especially after doping or strain engineering. Besides, owing to its `fluffy' crystal structure, T-carbon is also a candidate for the storage of hydrogen, lithium, and other small molecules for energy purposes. In the following, the specific applications of T-carbon in thermoelectrics, hydrogen storage, and lithium ion batteries are discussed in detail to illustrate its potential in future energy fields. \section{Thermoelectrics} In the sense of `turning waste into treasure', thermoelectric power generation has received extensive attention in recent years due to its low cost of operation.
By generating an output voltage from a temperature gradient via the Seebeck effect, thermoelectrics offers direct solid-state conversion of thermal energy to electrical power, especially from the reuse of waste heat \cite{2012_Nature_489_414_Biswas_Highperformance}, thereby revealing its valuable applications in reusing resources, protecting the environment, and saving energy. Generally, the thermoelectric efficiency and performance can be characterized by a dimensionless figure of merit $ZT = S^2\sigma T/\kappa$, \cite{2014_SR_4_6946_Guangzhao_Hingelike} where $S$, $\sigma$, $T$ and $\kappa$ represent the thermopower (Seebeck coefficient), electrical conductivity, absolute temperature and total thermal conductivity, respectively. To approach the Carnot coefficient, a high energy generation efficiency is necessary, which corresponds to a large $ZT$. Continuously improving thermoelectric performance and striving to increase the power output from a given heat source are the key focus of thermoelectric technology, which demands in-depth study of thermoelectric conversion materials and the development of new materials. Based on the electronic structure and previously studied thermal transport properties of T-carbon \cite{2017_PRB_95_085207_Yue_Thermal}, we examined the thermoelectric performance of T-carbon by combining first-principles calculations with semi-classical Boltzmann transport theory\cite{2014_SR_4_6946_Guangzhao_Hingelike, 2006_CPC_175_67_Georg_BoltzTraP}. The thermopower of T-carbon shown in Figure~\ref{fig:energy}A ($\sim$2000\,$\mu$V/K) is comparable with or even larger than that of some excellent thermoelectric materials, such as SnSe ($\sim$550\,$\mu$V/K) \cite{2016_EES_9_3044_Zhao_SnSe}, which was reported to have an unprecedentedly high $ZT$ value.
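The figure of merit defined above is straightforward to evaluate once the four transport quantities are known. A minimal sketch in Python, using the thermopower ($\sim$2000\,$\mu$V/K) and room-temperature thermal conductivity (33\,W/mK) quoted in this work; the electrical conductivity value is a purely hypothetical placeholder, since it depends on the carrier concentration:

```python
# Figure of merit ZT = S^2 * sigma * T / kappa.
# S and kappa are the values quoted for T-carbon in the text; sigma is a
# hypothetical placeholder inserted only to illustrate the formula.
def figure_of_merit(S, sigma, T, kappa):
    """S in V/K, sigma in S/m, T in K, kappa in W/(m K)."""
    return S**2 * sigma * T / kappa

ZT = figure_of_merit(S=2000e-6, sigma=1e4, T=300.0, kappa=33.0)
print(round(ZT, 2))   # -> 0.36 for this assumed sigma
```

The quadratic dependence on $S$ is why the large thermopower of T-carbon matters so much: at fixed $\sigma$, $T$ and $\kappa$, doubling $S$ quadruples $ZT$.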
The huge thermopower of T-carbon indicates its strong potential to serve as a thermoelectric material for energy recovery and conversion. The overall view of the evaluated $ZT$ value of T-carbon shows that it is a high-temperature $n$-type thermoelectric material (Figure~\ref{fig:energy}B). However, the thermoelectric performance of T-carbon is not yet competitive with existing thermoelectric materials \cite{2012_Nature_489_414_Biswas_Highperformance}. For instance, SnSe possesses a $ZT$ value of 2.6 at 930\,K along a specific lattice direction \cite{2016_EES_9_3044_Zhao_SnSe}. The reasons lie in two aspects. Firstly, the electronic band gap of T-carbon is relatively large, and the conduction band minimum (CBM) and valence band maximum (VBM) are relatively flat, which may lead to a large effective carrier mass and lower the electrical conductivity. Secondly, the thermal conductivity of T-carbon at room temperature is 33\,W/mK, \cite{2017_PRB_95_085207_Yue_Thermal} far too high for thermoelectric applications in contrast to that of SnSe (0.46-0.68\,W/mK) \cite{2016_EES_9_3044_Zhao_SnSe}. Nevertheless, in view of its excellent thermopower there is considerable room for further improving the thermoelectric performance of T-carbon through, \emph{e.g.}\ applying strain,\cite{2014_SR_4_6946_Guangzhao_Hingelike} doping proper elements,\cite{2016_SR_6_26774_Gharsallah_Giant} or cutting it into low-dimensional structures to modify the transport properties\cite{2016_Nanoscale_8_11306_Qin_Diverse}. We then examined possible approaches for improving the thermoelectric performance of T-carbon. As shown in Figure~\ref{fig:energy}C, by either applying a compressive strain of 2\pct\ or doping calcium (Ca) and magnesium (Mg) atoms into the fluffy structure of T-carbon, the power factor can be effectively improved. We focus on $n$-type doping, since T-carbon is an $n$-type thermoelectric material as discussed above.
The doping-enhanced power factor has two origins. On the one hand, with Ca/Mg atoms doped, the characteristics of the conduction band are retained, promising a large thermopower after doping. On the other hand, the electronic band gap changes from direct to indirect and decreases substantially, leading to a large electrical conductivity. Since the thermal conductivity commonly decreases upon doping with foreign atoms, the thermoelectric performance of T-carbon would be largely improved owing to the simultaneously improved electrical transport and reduced thermal transport. Considering the huge computational cost, we estimated the thermoelectric performance based on an estimated thermal conductivity of the doped structures. Assuming a one-order-of-magnitude decrease of the thermal conductivity with Ca/Mg atoms doped, the $ZT$ value of T-carbon is estimated to be enhanced roughly two-fold. Moreover, by cutting T-carbon into two-dimensional structures along the (111) direction, the power factor can also be improved, especially at low temperatures (Figure~\ref{fig:energy}C). The large power factor at low temperature turns nanoscale T-carbon from a high-temperature into a low-temperature thermoelectric material, suggesting wider applications for energy conversion, such as waste heat recovery under ambient conditions. \section{Hydrogen Storage} The search for sustainable, renewable, and clean fuels is urgent, particularly as we face the challenges of the energy crisis and climate change. Since the 1970s, hydrogen has been regarded as one of the most promising alternatives to fossil fuels due to the cleanliness of its combustion. Water (H$_2$O) is the only by-product of hydrogen combustion, a great advantage over the combustion of fossil fuels, which produces greenhouse gases and harmful pollutants.
Moreover, hydrogen is lightweight, providing a higher energy density and making hydrogen-powered engines more efficient than internal combustion engines. What hampers the popularization of the hydrogen economy is the difficulty of storing large amounts of hydrogen safely, densely, and rapidly while keeping it easily accessible. Many efforts have been dedicated to discovering next-generation hydrogen storage materials, including the extremely porous metal-organic framework (MOF) compounds \cite{2012_CR_112_782_Suh_Hydrogen}. Benefiting from the high surface area and light weight of carbon materials, there have been many efforts in designing novel porous carbon materials for hydrogen storage applications\cite{2016_Nanoscale_8_12863_Gao_Electron, 2017_MT_20_629_Xu_Design, 2017_MT_20_592_Borchardt_Toward, 2012_TJoPCC_116_25015_Srinivasu_Electronic}. Since T-carbon itself is a fluffy carbon material, there are large interspaces between atoms compared with other forms of carbon, which could make it potentially useful for hydrogen storage (Figure~\ref{fig:overview} and Figure~\ref{fig:energy}D). In fact, T-carbon possesses a low density ($\sim$1.50\,g/cm$^3$) as mentioned above, approximately 2/3 that of graphite and 1/2 that of diamond\cite{2011_PRL_106_155703_Sheng_TCarbon}. By absorbing hydrogen into the fluffy structure of T-carbon, the hydrogen storage value can be estimated from the number of adsorbed hydrogen molecules (H$_2$), which is at most 16 per unit cell for a stable structure.
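The gravimetric capacity follows directly from this adsorbate count. A minimal sketch, assuming the quoted 16 H$_2$ refer to the 32-atom conventional cubic cell of T-carbon (our reading, chosen because it reproduces the capacity quoted below; the 8-atom primitive cell would give a much larger number):

```python
# Gravimetric hydrogen capacity of T-carbon with 16 H2 per cell.
# Assumption (ours): the conventional cubic cell contains 32 carbon atoms
# (a diamond lattice with each site replaced by a C4 tetrahedron).
M_H2, M_C = 2.016, 12.011   # molar masses in g/mol
n_H2, n_C = 16, 32

wt_pct = 100.0 * n_H2 * M_H2 / (n_H2 * M_H2 + n_C * M_C)
print(f"{wt_pct:.1f} wt%")   # close to the ~7.7 wt% quoted in the text
```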
The adsorption energy for hydrogen in T-carbon is defined as\cite{2012_TJoPCC_116_25015_Srinivasu_Electronic} \begin{equation} E_\textrm{adsorption} = [E_\textrm{T-carbon} + nE_\mathrm{H_2} - E_\textrm{total}] / n \ , \end{equation} where $E_\textrm{T-carbon}$ is the total energy of T-carbon, $E_\mathrm{H_2}$ is the total energy of a hydrogen molecule, $n$ is the number of hydrogen molecules, and $E_\textrm{total}$ is the total energy of T-carbon with hydrogen absorbed. $E_\textrm{adsorption}$ is 0.173 and -0.216\,eV for 8 and 16 H$_2$ absorbed, respectively. Considering the strong C-C bonding in T-carbon, the system remains stable despite the weak repulsive interactions among the absorbed H$_2$, similar to hydrogen-filled fullerenes. Under this condition, the hydrogen storage capacity of T-carbon is estimated to be $\sim$7.7\,wt\pct, which makes it quite competitive for high-capacity hydrogen storage \cite{2011_PRL_106_155703_Sheng_TCarbon}. \section{Lithium Ion Batteries} Rechargeable energy storage devices such as lithium ion batteries (LIB) play a critical role as portable power sources in electronic devices, biomedicine, aerospace, and electric vehicles\cite{2015_MT_18_252_Nitta_Liion}. Various carbon-based materials have been widely used in LIB, among which graphite is the most commonly used anode material. Due to its layered structure with high specific surface area and large interlayer space to accommodate lithium atoms, graphite has a high specific energy capacity (372\,mAhg$^{-1}$)\cite{2010_NL_10_2838_Uthaisar_Edge}. As a new member of the carbon materials family, T-carbon could also be a promising electrode material for LIB and other rechargeable energy storage devices due to its fluffy structure. The possibilities of T-carbon acting as an electrode material for alkali metal (Li, Na, K) and alkaline earth metal (Mg) ion batteries were investigated based on first-principles calculations.
The specific capacity of metal atoms is defined as $C=nF/M_{C_X}$,\cite{2015_MT_18_252_Nitta_Liion} where $n$ represents the number of electrons involved in the electrochemical process ($n$=1 for Li, Na, K; $n$=2 for Mg), $F$ is the Faraday constant with a value of 26.801\,Ahmol$^{-1}$, and $M_{C_X}$ is the mass of C$_X$ (C denotes carbon and $X$ the number of carbon atoms, which is 8 for T-carbon). Our results reveal that T-carbon is a good anode material for LIB with a specific energy capacity of 588\,mAhg$^{-1}$, which is 58\pct\ higher than that of graphite (372\,mAhg$^{-1}$)\cite{2010_NL_10_2838_Uthaisar_Edge}. The corresponding formula is Li$_2$C$_8$, indicating that two Li ions can be intercalated in each T-carbon unit cell. The results for Na, K, and Mg are similar, except that the specific energy capacity for Mg is 1176\,mAhg$^{-1}$ due to its doubled valence electrons. As shown in Figure~\ref{fig:energy}E, the most stable adsorption site of metallic ions was calculated to be the center of the vacancy of T-carbon, marked as the T$_1$ site. The migration of M (= Li/Na/K/Mg) ions in T-carbon was simulated by means of the climbing image nudged elastic band (CI-NEB) method\cite{2000_JCP_113_9901_HenkelmanGraeme_A} in a $2\times 2\times 2$ supercell. As indicated in Figure~\ref{fig:energy}E, the minimum migration path is between neighboring T$_1$ sites. The middle point (T$_2$ site) corresponds to the saddle point on the potential energy surface. Figure~\ref{fig:energy}F shows the energy evolution along the migration path, where the migration barriers are 0.075, 0.233, 0.528, and 0.688\,eV for Li, Na, K, and Mg, respectively. The lowest barrier for an ion moving between neighboring T$_1$ sites in T-carbon is found for the Li ion.
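The barriers just listed translate into large differences in diffusion constants via the Arrhenius law $D\sim \exp(-E/k_B T)$. A minimal sketch, assuming room temperature ($T = 300$\,K, our assumption for the comparison):

```python
import math

K_B = 8.617333e-5  # Boltzmann constant in eV/K

def arrhenius_ratio(barrier_high, barrier_low, temperature=300.0):
    """Ratio D_low_barrier / D_high_barrier for D ~ exp(-E / (kB T))."""
    return math.exp((barrier_high - barrier_low) / (K_B * temperature))

# Li migration: 0.327 eV barrier in graphite vs 0.075 eV in T-carbon.
print(f"{arrhenius_ratio(0.327, 0.075):.1e}")  # on the order of 1.7e4
```

The barrier difference of 0.252\,eV thus amounts to roughly four orders of magnitude in the diffusion constant, consistent with the comparison to graphite made below.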
It should be noted that the Li migration barrier in T-carbon is about 1/4 of that in graphite (0.327\,eV) \cite{2010_NL_10_2838_Uthaisar_Edge}, which implies that the diffusion constant of Li ions in T-carbon should be $1.7\times 10^4$ times larger than in graphite following the Arrhenius law ($D\sim \exp (-E/k_B T)$, where $E$, $k_B$, and $T$ are the barrier energy, the Boltzmann constant, and the temperature, respectively)\cite{2018_PCCP_20_9865_Hao_Lithium}. Thus, T-carbon should be a very good medium for the diffusion of Li ions, suggesting that it might be quite useful for the ultrafast charging and discharging of future rechargeable energy storage devices. \section{Opportunities and Challenges} Both opportunities and challenges exist for the applications of T-carbon in next-generation energy technologies (Figure~\ref{fig:overview}). To achieve better thermoelectric performance of T-carbon, doping with other elements can be performed following what researchers did previously for clathrates and skutterudites,\cite{2003_IMR_48_45_Chen_Recent} which are well-studied thermoelectric materials with hollow cage-like structures. Given its fluffy structure with low density and hollow interior (Figure~\ref{fig:overview}), the characteristics of a `phonon glass \& electron crystal' could be realized in T-carbon by filling the holes with different kinds of atoms at different filling rates, thus reducing its thermal conductivity while simultaneously improving its electrical transport properties, which would make T-carbon a better thermoelectric material. In addition to filling doping, one can also introduce nanotwin structures into T-carbon, which could have a similar effect \cite{2017_Nanoscale_9_9987_Zhou_Decouple}. Based on previous works, possible interstitial atoms for improving the thermoelectric performance of T-carbon include lanthanides, alkali metals, alkaline earth metals, and rare earth metals.
Other approaches in addition to applying external fields\cite{2016_Nanoscale_8_11306_Qin_Diverse} and bond nanodesigning,\cite{2018_NE_50_425_Guangzhao_Lonepair, 2016_PRB_94_165445_Qin_Resonant} such as strain engineering and nanostructuring, would also be possible (Figure~\ref{fig:energy}C). Further detailed and comprehensive studies examining possible improvements of the thermoelectric performance, especially on the experimental side, are expected in the future. In addition, given the potentially high capacity of hydrogen storage in T-carbon, the effects of different surface areas, pore volumes, and pore shapes on the hydrogen storage parameters should be examined. New methods to enhance the storage capacity are needed, such as the addition of metal catalysts, which has previously been reported to considerably improve hydrogen storage capacity. With its fluffy structure, T-carbon could also store or filter other small molecules for energy purposes beyond the applications in LIB. Beyond the applications in thermoelectrics, hydrogen storage, and lithium ion batteries discussed above, T-carbon, and especially T-carbon based heterostructures,\cite{2018_JoIaEC_64_16_Jayaraman_Recent, 2018_ACIE_57_9679_Mori_Carbon, 2018_Carbon_137_266_Ram_Tetrahexcarbon} could have a wide variety of potential applications in further energy fields (Figure~\ref{fig:overview}), such as electrochemistry, photocatalysis, solar cells, adsorption, energy storage, supercapacitors, aerospace, and electronics, which merit further investigation. To further explore potential applications of T-carbon in energy fields, much effort should be devoted to fabrication methods and the mass production of T-carbon (Figure~\ref{fig:overview})\cite{2018_PiEaCS_67_115_Kumar_Recent}.
Apart from the synthesis method reported in Ref.~\cite{2017_NC_8_683_Zhang_Pseudotopotactic}, plasma enhanced chemical vapor deposition at a proper environmental pressure is also a possible route to generating T-carbon. In particular, T-carbon is found to be more stable and more easily formed under negative pressure, since it possesses a smaller enthalpy than diamond below $-$22.5\,GPa. In addition, T-carbon could be grown from seed microparticles in a chemical vapor transport process, where the seed quality and distribution should be optimized to obtain high-quality T-carbon samples. With the development of more environment-friendly technologies, the potential applications of T-carbon in energy fields would not only produce scientifically significant impact in related fields but also lead to a number of industrial and technical applications. Beyond energy fields, T-carbon may also contribute to solving the carbon crisis in interstellar dust\cite{1997_TAJ_484_779_Dwek_Can}, which has remained an open question for decades. Observations show that the abundance of carbon in the interstellar medium is only $\sim$60\pct\ of its solar value, making it difficult to explain the interstellar extinction curve with traditional interstellar dust models\cite{1997_TAJ_484_779_Dwek_Can}. Due to its fluffy structure, the density of T-carbon is approximately 2/3 that of graphite. Besides, the optical absorption of T-carbon has a sharp peak around 225\,nm, very close to the broad `bump' centered at 217.5\,nm in the interstellar extinction curve\cite{2017_NC_8_683_Zhang_Pseudotopotactic, 1997_TAJ_484_779_Dwek_Can}. Moreover, the negative pressure circumstances in the universe are beneficial for the formation of T-carbon. Thus, it would be very meaningful to explore whether T-carbon already exists in the universe beyond artificial synthesis.
\section*{Acknowledgments} G.\ Q.\ gratefully acknowledges Dr.\ Zhenzhen Qin (Zhengzhou University) for literature review and Dr.\ Huimin Wang (Nanjing University) for plotting Figure~\ref{fig:overview}. G.\ Q.\ also thanks Dr.\ Xianlei Sheng (Beihang University) and Mr.\ Jingyang You (University of Chinese Academy of Sciences) for fruitful discussions. This work was supported in part by the National Key R\&D Program of China (Grant No.\ 2018YFA0305800), the NSFC (Grant Nos.\ 11834014, 14474279), and the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant Nos.\ XDB28000000, XDPB08). \section*{Computation details} All calculations involved in this paper (Figure~\ref{fig:energy}) were carried out by means of first-principles calculations in the framework of density functional theory (DFT) as implemented in the Vienna \emph{ab-initio} simulation package (VASP)\cite{1996_PRB_54_11169_uller_Efficient}. The projector augmented wave (PAW) method\cite{1999_PRB_59_1758_Kresse_From} was employed for the interactions between ion cores and valence electrons. The electron exchange-correlation interactions were described by the generalized gradient approximation (GGA) in the form proposed by Perdew-Burke-Ernzerhof (PBE)\cite{1996_PRL_77_3865_Perdew_Generalized}. The cutoff energy was set to 1000\,eV for the plane-wave expansion of the valence electron wave functions. The structural relaxation of both atomic positions and lattice vectors was performed until the total energy converged to 10$^{-8}$\,eV/atom and the maximum force on each atom was less than 0.001\,eV/\AA. The Monkhorst-Pack scheme\cite{1976_PRB_13_5188_Monkhorst_Special} was used to sample the Brillouin zone (BZ) with an $11\times 11\times 11$ $k$-point mesh.
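For readers wishing to reproduce these settings, they map onto a VASP input along the following lines. This is a sketch of plausible tags, not the authors' actual input files; note that VASP's EDIFF is a total-energy criterion, whereas the text quotes a per-atom value:

```text
# INCAR (sketch)
ENCUT  = 1000       # plane-wave cutoff (eV)
EDIFF  = 1E-8       # electronic convergence (eV; the text quotes eV/atom)
EDIFFG = -0.001     # ionic convergence: max force below 0.001 eV/A
ISIF   = 3          # relax atomic positions, cell shape, and cell volume
IBRION = 2          # conjugate-gradient ionic relaxation

# KPOINTS: Monkhorst-Pack, 11 x 11 x 11
```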
\section{Introduction\label{intro}} By virtue of their large populations of coeval stars, the Galactic globular clusters present us with a unique laboratory for the study of the evolution of low mass stars. The combination of their extreme ages, compositions and dynamics also allows us a glimpse at the early history of the Milky Way and the processes operating during its formation. These aspects become even more significant in the context of the star-to-star light element inhomogeneities found among red giants in every cluster studied to date. The large differences in the surface abundances of C, N, O, and often Na, Mg, and Al have defied a comprehensive explanation in the three decades since their discovery. Proposed origins of the inhomogeneities typically break down into two scenarios: 1) As C, N, O, Na, Mg, and Al are related to proton capture processes at CN and CNO-burning temperatures, material cycled through a region in the upper layers of the H-burning shell in evolving cluster giants may be brought to the surface with accompanying changes in composition; see \cite{sweigart79} and \cite{charbonnel1994} for an introduction to the relevant theory of mixing. There is ample observational evidence that deep mixing takes place during the red giant branch (RGB) ascent of metal-poor cluster stars (see the reviews of Kraft 1994, Pinsonneault 1997, \& Gratton, Sneden \& Carretta 2004 and references therein). 2) It has also become apparent that at least some component of these abundance variations must be in place before some cluster stars reach the giant branch. Spectroscopic observations of main sequence turn-off stars in 47 Tuc (beginning with Hesser 1978, see Cannon {\it et al.\/}\ 1998, Briley {\it et al.\/}\ 2004a and references therein) demonstrate this. 
Our own work with large samples of low luminosity stars in M71 (Cohen 1999, Briley \& Cohen 2001, Ram\'{\i}rez \& Cohen 2002) and in M13 (Briley, Cohen \& Stetson 2002, 2004b and Cohen \& Melendez 2004) has shown variations in CN and CH-band, O and Na line strengths among subgiants and main sequence stars consistent with patterns found among the evolved giants of these clusters. The low mass cluster stars we observe are, however, incapable of both deep dredge-up and significant CNO nucleosynthesis while on the main sequence. Hence the early cluster material must have been at least partially inhomogeneous in these elements or some form of modification of these elements took place within the cluster. Suggested culprits include mass-loss from intermediate mass asymptotic giant branch stars and supernovae ejecta. \cite{cannon98} present an excellent discussion of these possibilities. Thus the observed light element inhomogeneities imply that there is some aspect of the structure of the evolving cluster giants which remains poorly understood (the deep mixing mechanism), that the early proto-clusters may have been far less homogeneous, that intermediate mass stars may have played a greater role in setting the composition of the present day low mass stars than previously thought, etc. It is this set of issues that we explore in the present series of papers. In our earlier work we studied the C and N abundances for large samples of stars in the globular clusters M71 \citep{cohen99,briley01}, M5 \citep{cohen02} and M13 \citep{briley02,briley04b} (collectively denoted GC--CN). We consider here the extremely metal poor globular cluster M15. In this range of metallicity and luminosity (i.e. $T_{eff}$), the CN bands at 3880 and at 4220~\AA\ in the spectra of the M15 stars are too weak to be useful, so we rely on the CH and NH band strengths to derive C and N abundances for a large sample of stars in M15.
We adopt values from the on-line database of \cite{harris96} for the apparent distance modulus of M15 at $V$ of 15.31 mag with a reddening of E(B--V) = 0.09 mag, supported by analysis of deep HST photometry by \cite{recio04}. We adopt the metallicity [Fe/H] = $-$2.2 dex, also from \cite{harris96}; \cite{sneden91} in a high dispersion abundance analysis of a large sample of stars on the upper giant branch of M15 found a somewhat lower value. We describe the sample in \S\ref{section_phot} and \ref{section_spec}. We outline our measurement of the molecular band indices and their interpretation in \S\ref{section_indices}. With an assumption about the O abundance, these are converted into C and N abundances, from which we find an anti-correlation between C and N in \S\ref{section_cnabund}. A consideration of the need for ON burning follows in \S\ref{section_on}. A discussion of our results together with a comparison with the trends seen among the red giants in M15 with our earlier work, which now covers four globular clusters spanning a wide range of metallicity, combined with data from the literature, is given in \S\ref{section_othergc}, while \S\ref{section_mix} discusses the implications of our results for the mechanism producing the C and N differences. Inferences we can draw from this for the formation and early chemical history of globular clusters are given in \S\ref{section_chem_evol}. A brief summary concludes the paper. \section{Photometric Databases \label{section_phot}} The optical photometry of M15 employed here was carried out as part of a larger program to provide homogeneous photometry for star clusters and nearby resolved galaxies \citep{stetson00}. The general characteristics of the photometric database are described in detail in \cite{cohen02} for the case of M5; the case of M15 is similar. 
At the time the present photometry for M15 was derived, the corpus of M15 images in Stetson's database included some 307 images in $B$, 340 images in $V$, and 181 in $I$. The images did not all cover the same region of sky, of course, and any given star fell within no more than 235 $B$ images, 239 $V$ images, or 179 $I$ images. A network of local standards supported appropriate transformations for those images taken under non-photometric conditions. In our experience, photometry from datasets such as those employed here typically display an external accuracy of order 0.02$\,$mag per observation; this level of observation-to-observation scatter is probably dominated by temporal and spatial fluctuations in the instantaneous atmospheric extinction, and probably also by the difficulty of obtaining truly appropriate flat-field corrections in the presence of such effects as scattered light, ghosts, fringing and spectral mismatch between the flat-field illumination and the astronomical scene. The fundamental system is that of \cite{landolt92}. The absolute astrometry of our catalog is based upon the United States Naval Observatory Guide Star Catalogue~I (A~V2.0; henceforth USNOGSC, Monet {\it et al.\/}\ 1998), access to which is obtained by PBS through the services of the Canadian Astronomy Data Centre. Throughout the region of our field that is well populated by USNOGSC stars (including essentially all of the stars in our present spectroscopic sample), we expect systematic errors of our right ascensions and declinations on the system of the USNOGSC to be $<0.1\,$arcsec. Individual {\it random\/} errors in our coordinate measurements are probably not much better than 0.02$\,$arcsec on a star-by-star basis, the errors becoming somewhat worse than this for the fainter and more crowded stars in our photometric/astrometric sample. 
The alignment images for the slitmasks used with the Low Resolution Imaging Spectrometer (LRIS) \citep{lris_ref} were taken with $V$ or $I$ filters. Although these exposures were short (1 sec typically), they were used to determine $V$ mags for the faintest stars, particularly those that were somewhat crowded, in our sample in M15. To broaden the wavelength range of our photometry, we attempted to obtain infrared colors from the 2MASS database \citep{2mass1,2mass2} for the stars in our sample. However, many of them are too faint to be included therein. Thus in Sep. 2004 we observed the fields of our M15 sample with the Wide Field Infrared Camera \citep{wilson03} at the 5-m Hale Telescope for the purpose of establishing reliable J,K magnitudes for the fainter stars in our sample. The 2MASS colors of nearby isolated somewhat brighter stars were used to calibrate our WIRC photometry. Total integrations of 10 min to 30 min for each of the two filters in each of the two fields were obtained. These images were reduced using Figaro \citep{shortridge} and DAOPHOT \citep{stetson87}. Most other recent photometric studies of M15 \citep{buonanno87,dacosta90} do not reach as faint as the bulk of our sample. The deep $B,V$ CMD study of \cite{durrell92}, which focuses on the age of the cluster, its distance, and its luminosity function along the main sequence, does not cover the full sample of our stars. Stars are identified in this paper by a name derived from their J2000 coordinates, so that star C12345\_5432 has coordinates 21 12 34.5~~+12 54 32. \section{Spectroscopic Observations \label{section_spec}} The initial sample of stars consisted of those from the photometric database located more than 180 arcsec from the center of M15 (to avoid crowding) with $16.5<V<18.5$ and with $B-V$ within 0.06 mag of the cluster locus, which we take as $B-V = 0.73 - 0.062(V-16.5)$ mag. 
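The selection cut just described is straightforward to express programmatically. A minimal sketch (the function name and the exact inequality conventions at the boundaries are ours):

```python
def is_m15_candidate(v_mag, b_minus_v, r_arcsec):
    """Primary-sample cut: more than 180 arcsec from the cluster center,
    16.5 < V < 18.5, and B-V within 0.06 mag of the adopted cluster locus
    B-V = 0.73 - 0.062 (V - 16.5)."""
    locus = 0.73 - 0.062 * (v_mag - 16.5)
    return (r_arcsec > 180.0
            and 16.5 < v_mag < 18.5
            and abs(b_minus_v - locus) <= 0.06)

print(is_m15_candidate(17.5, 0.67, 200.0))   # True: on the cluster locus
print(is_m15_candidate(19.2, 0.60, 200.0))   # False: fainter than the cut
```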
The main sequence turnoff of M15 is at $V \sim 19.2$ mag, so these stars include subgiants as well as low luminosity giants near the base of the RGB. (A preliminary version of the photometric catalog described in \S\ref{section_phot} was used for this purpose.) From this list, two slitmasks containing about 25 slitlets each were designed using JGC's software. The center of the first field was roughly 3.2 arcmin E and 0.7 arcmin N of the center of M15, while the center of the second field was located roughly 2.8 arcmin W and 2.9 arcmin S of the cluster center. These slitmasks were used with LRIS at the Keck Observatory in June 2003. Three 1200 sec exposures were obtained for the second mask and two 1200 sec exposures for the first slitmask. The airmass was less than 1.06 for all exposures, so differential refraction did not play a role even at 3200~\AA. The exposures were dithered by moving the stars along the length of the slitlets by 2 arcsec between each exposure. Because of the crowded fields, there was often more than one suitably bright object in each slitlet. The width of the slitlets was 0.8 arcsec, narrower than normal to enhance the spectral resolution. LRIS-B \citep{lris-b} was used with a 400 line grism giving a dispersion of 1.0\,\AA/pixel (4.0\,\AA\ resolution for a 0.8 arcsec wide slit). This gave good coverage of the region from 3000 to 5000\,\AA, including the key NH band at 3360\,\AA\ and the G band of CH at 4300\,\AA. The CN bands at 3880 and 4200\,\AA, while included within the spectral range covered, are for most of these low luminosity very metal-poor stars too weak to be measured with precision; they were not used at all. The red side of LRIS was configured to use a 1200 g/mm grating centered at H$\alpha$ with the intention of providing higher accuracy radial velocities. The dispersion is then 0.64\,\AA/pixel (29 km~s$^{-1}$/pixel) or 1.9\,\AA/spectral resolution element. Figaro \citep{shortridge} scripts were used for the data reduction. 
The original detector of LRIS-B was upgraded to a new one with much higher UV sensitivity prior to these observations. This was crucial to the success of our effort. However, there were some unexpectedly severe reflection problems in our blue-channel spectra. These were perhaps exacerbated by the many bright stars in the field. The reflections were non-dispersed, aligned along the slit, and several times the height of a stellar image. They were removed partially by sky subtraction, but the resulting spectra had to be hand checked, with additional corrections applied as necessary. This was done for each individual exposure, then the resulting spectra for each star were summed. In addition, the LRIS-B images were not flattened, as there is no suitably bright featureless UV calibration lamp at the Keck Observatory. The pixel-to-pixel variation is small in these detectors, and each spectrum is the sum of several exposures which fell in different locations on the detector array. Furthermore, the projected image size of a point source along the slit has a FWHM of 3 to 4 pixels, so many pixels are sampled in forming the final 1D spectrum for each star. We therefore are confident that the lack of flat fielding does not introduce spurious small scale features. The final spectrum summed over the three 1200 sec exposures of a M15 star in our sample with $V = 18.0$ mag has roughly 9000 detected electrons per spectral pixel (1.0~\AA) in the region of the G band of CH, and roughly 1100 detected electrons per spectral pixel in the blue continuum bandpass for the NH feature. For a $V = 18.0$ mag star in the slitmask with only two 1200 sec exposures (stars numbered C30...), there are roughly 6000 detected electrons per spectral pixel (1.0~\AA) in the region of the G band of CH, and roughly 500 detected electrons per spectral pixel in the blue continuum bandpass for the NH feature. 
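The Poisson noise floor implied by these counts is small, a few percent per pixel at worst; a quick sketch (labels ours):

```python
import math

# Detected electrons per 1.0 A spectral pixel quoted above for a V = 18.0 star.
counts = {
    "CH region, 3 exposures": 9000,
    "NH continuum, 3 exposures": 1100,
    "CH region, 2 exposures": 6000,
    "NH continuum, 2 exposures": 500,
}

for region, n_e in counts.items():
    snr = math.sqrt(n_e)  # Poisson-limited S/N per pixel
    print(f"{region}: S/N ~ {snr:.0f}, i.e. {100.0 / snr:.1f}% noise per pixel")
```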
Thus the uncertainties in the measured CH and NH indices are not dominated by Poisson statistics, even for the faintest stars in our sample in M15, but rather by the various instrumental issues described above. In addition to the primary sample described above, since these fields are rather crowded, other stars sometimes serendipitously fell into the slitlets. If they were bright enough, their spectra were also reduced. We refer to the latter as the secondary sample. As might be expected from the luminosity function, most of the secondary sample consists of stars at or just below the main sequence turnoff. \subsection{Membership} The galactic latitude of M15 is only $-27.3^{\circ}$. Even though our fields are as close to the center of the cluster as possible, the cluster is more distant than ones we previously studied, and our sample consists of faint stars. Establishing membership is a concern. The primary basis for determining membership is through the spectra. M15 is a very metal poor cluster, so that these moderate resolution spectra are in themselves capable of confirming membership. Table~\ref{table_phot} gives the $V, I, J, K$ photometry for the 68 sample stars we believe to be members of M15. Three non-members were culled from our sample as their spectra show absorption features much stronger than expected; they are listed at the end of this table. The CMD diagrams can also be used to eliminate non-members. $B,V$ colors were used to select the primary sample, but they (and other colors) can provide constraints for the secondary stars. Fig.~\ref{fig_cmd} shows the $V,I$ and $V,K$ CMD diagrams for our sample in M15 with a 12 Gyr isochrone with [Fe/H] $-2.3$ dex from \cite{yi01} superposed in each case. The star at the lower left is a hot horizontal branch star; it is part of the secondary sample. 
The spectroscopic non-members are indicated by large open circles; they all lie on the cluster isochrone in the $V-K$ CMD, and two do for the $V-I$ CMD, with the third only slightly off it. Thus photometry alone, while it can eliminate many non-members of M15 from our sample selected for spectroscopy, is not sufficient in itself. The magnitude of the radial velocity of M15 is sufficiently high to establish membership through measurements on the red spectra, which are of higher spectral resolution than those from LRIS-B with the adopted instrument configuration. Given the extreme metal deficiency of the stars and the foreground reddening, which produces easily detectable interstellar NaD lines, we rely exclusively on H$\alpha$ for this purpose. Thus these $v_r$ measurements are not of high accuracy, with typical uncertainty of $\pm30$~km~s$^{-1}$. A histogram of the radial velocities for 50 stars from our sample is shown in Fig.~\ref{fig_vr}. The three spectroscopic non-members culled from our sample are indicated. This figure demonstrates that the vast majority of the stars in our sample are members of M15. The spectra of star C29445\_0952 (V=16.88) in our sample are more extended along the slit than those of a point source and clearly indicate that there are two different stars contributing. The photometric database and the LRIS alignment images were checked; this object turns out to be a close pair of separation 0.8 arcsec with a brightness difference of 2 mag. It was not possible to separate the contributions of each to the spectra, so data for this object are included in the tables, but it is not shown in any of the figures. Figure~\ref{fig_2spec} shows the region of the 3360\,\AA\ band of NH in the spectra of two of the stars in the primary sample in M15. These stars have essentially the same stellar parameters ($T_{eff}$\ $\sim5200$K and log($g$)\ $\sim$2.8 dex) lying at about the same place in the cluster CMD, yet their NH bands differ strongly.
From this figure alone, we can anticipate one of the key results of our work, the large scatter in C and N abundance we will find among M15 members at the base of the red giant branch (RGB) at $17 < V < 18.5$ mag. \section{Measurement of CH and CN Indices \label{section_indices}} For each spectrum, indices sensitive to absorption by the 4300\,\AA\ CH band were measured as described in \cite{briley01}. While for the NH band we could have used the double-sided index described in \cite{briley93} and used by us for our small sample of spectra of M13 subgiants described in \cite{briley04b}, we were concerned about the decline of the apparent continuum level towards bluer wavelengths in the UV for these LRIS-B/Keck spectra. This is presumably due to the wavelength dependence of both the stellar flux and the instrumental efficiency. It is extremely difficult to flux spectra taken through slitmasks because of the varying slit losses and the possibility of atmospheric dispersion affecting the spectra, although the latter was, as discussed earlier, not a concern here. Orienting the slit along the parallactic angle, which is the usual method for eliminating atmospheric dispersion in single slit observations, cannot be used for multislit observations as the position angle is fixed by the design of the slitmask. It is for these reasons that no attempt was made to flux the spectra. Since the stars are so metal poor, there are relatively few detected features in the region between 3300 and 3430\,\AA\ besides the desired NH band. Thus we decided to normalize the stellar continuum in the spectrum of each star, then find the absorption within the NH feature bandpass. The continuum fitting approach adopted here allows for a direct comparison between indices measured from the spectra and those computed from theoretical spectral synthesis without the need for any slope corrections.
The feature bandpass for NH adopted here was 3354 to 3375\,\AA\ (in the rest frame of M15). The continuum was normalized by fitting a second order polynomial to the bandpass 3300 to 3430\,\AA, masking out the region of the NH band. The polynomial fitting used 6$\sigma$ high and 3$\sigma$ low clipping, running over a 5 pixel average. There was little change in the measured NH indices as compared to simply applying the two-sided continuum and feature bandpasses. This reflects our choice of continuum bandpasses, which are more or less symmetrically distributed about the center of the feature bandpass. The measured indices are listed in Table~\ref{table_obs_inds} and plotted in Fig.~\ref{fig_chcn_indices}. The upper axis of these figures is $M_V^0$, and the RGB bump in M15 occurs at $V=15.41\pm0.04$ mag \citep{zoccali99}. Approaching the MSTO, $T_{eff}$\ suddenly increases, and the molecular bands we study here become much weaker; they are effectively undetectable in the present spectra. This produces the sharp drop in measured indices apparent at $V \sim18.5$ in Fig.~\ref{fig_chcn_indices}. It is not until $\sim$2.5 mag below the MSTO that a $T_{eff}$\ as cool as 5600~K is again reached, at which point molecular bands can again be expected to be detectable in M15 main sequence stars with spectra of quality similar to those presented here for its subgiants. The error bars given in Table~\ref{table_obs_inds} (drawn in Fig.~\ref{fig_chcn_indices} at 2$\sigma$) have been calculated strictly from Poisson statistics based on the signal present in the feature and continuum bandpasses. The CH index, in particular, is very robust; independent measurements by each of the first two authors using both a double-sided index and continuum fitting as described above always agreed to within 2\% for each star in the sample in M15. Every star in the sample has a measurement of I(CH).
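The continuum normalization and index measurement just described can be sketched as follows. This is an illustrative reconstruction, not the code actually used: the 5 pixel running average is omitted for brevity, and only the bandpasses and clipping thresholds quoted above are taken from the text.

```python
import numpy as np

def nh_index(wave, flux, feature=(3354.0, 3375.0),
             fit_window=(3300.0, 3430.0), order=2,
             hi_clip=6.0, lo_clip=3.0, n_iter=5):
    """Fit a polynomial continuum over fit_window with the feature
    bandpass masked, using asymmetric sigma clipping, then return the
    fractional absorption within the feature bandpass."""
    in_fit = (wave >= fit_window[0]) & (wave <= fit_window[1])
    in_feat = (wave >= feature[0]) & (wave <= feature[1])
    # Continuum pixels: inside the fit window, outside the feature band.
    keep = in_fit & ~in_feat
    for _ in range(n_iter):
        coeffs = np.polyfit(wave[keep], flux[keep], order)
        resid = flux - np.polyval(coeffs, wave)
        sigma = np.std(resid[keep])
        # Asymmetric clipping: reject points more than hi_clip sigma
        # above or lo_clip sigma below the fit.
        ok = (resid < hi_clip * sigma) & (resid > -lo_clip * sigma)
        new_keep = keep & ok
        if new_keep.sum() == keep.sum():
            break
        keep = new_keep
    cont = np.polyval(coeffs, wave)
    # Index = 1 - (summed flux / summed continuum) in the feature band.
    return 1.0 - flux[in_feat].sum() / cont[in_feat].sum()
```

A spectrum with no absorption in the feature bandpass returns an index near zero, while deeper NH absorption drives the index toward unity.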
I(CH) is positive for all the stars in the sample, and exceeds 1\% for all stars except C29424\_0729 on the extended BHB and one star near the main sequence turnoff at $V = 19.26$ mag. We do not expect to see any CH for the extreme BHB star nor for the stars just at the turnoff. The measured I(CH) values for the turnoff stars range from 0 to 4\%, with most having I(CH) $\le$2\%. These include the faintest stars in the sample, and their I(CH) measurements appear valid, suggesting that the measurements of the CH indices at all luminosities considered here are robust. As can be seen in Figure \ref{fig_chcn_indices}, substantial star-to-star differences occur among the I(CH) indices for the stars in our M15 sample, even if one considers only stars in similar evolutionary states. Obtaining high quality spectra in the region of the NH band at 3360~\AA\ for such faint stars is not easy. There are five stars with poor quality spectra in the NH region for which no measurement of the NH index was attempted. These arose as a result of low signal level, the location of the spectra with respect to the edges of the slit (a problem more common among the secondary stars), or a bright reflection falling on or very near the NH feature. There are 10 additional stars with low signal level, but measured I(NH), which are marked by open circles instead of filled circles in the figures; only one of these is brighter than $V=18.1$. Among the stars brighter than $V=18.3$ with measured I(NH), there are five with negative I(NH). Of these, three have I(NH) $\ge -2$\%, while the smallest is $-5$\%. Fig.~\ref{fig_chcn_indices} demonstrates that a very substantial range exists in I(NH) among stars of similar evolutionary state for $17 < V < 18~(1.6 < M_V^0 < 2.6)$ mag. In examining Fig.~\ref{fig_chcn_indices}, aside from the extreme BHB star, there are no obvious anomalous stars in our sample in M15. Furthermore, there is no obvious bimodality among either the CH or NH indices.
Over a small range in $T_{eff}$, there appears to be an anti-correlation of the strength of the CH and NH bands. The anti-correlation is not perfect, but, given the weakness of the molecular features in these low luminosity stars in M15, it is reasonably convincing. We return to this issue in \S\ref{section_abund} once C and N abundances are derived for the stars in our M15 sample. The large range in C abundances which we suspect to be present in the M15 subgiant sample creates an unusual situation with regard to the expected strength of the CN features. Normally, since there is more carbon than nitrogen, the N abundance controls the amount of CN. However, if C is highly depleted, there can be fewer carbon atoms per unit volume than nitrogen atoms, and C will control the formation of CN, as suggested by Langer (1985). Since we are using the NH band to deduce N abundances, this is not an issue here. \section{Comparisons with Synthetic Spectra \label{section_cnabund}} Clearly the pattern of abundances underlying the CH and CN band indices of Fig.~\ref{fig_chcn_indices} cannot be interpreted on the basis of band strengths alone; we must turn to models. The technique employed is similar to that of \cite{briley01}, where the region of the CMD of interest is fit by a series of models whose parameters are taken from (in the present case) a Y$^2$ \citep{yale04} 12 Gyr isochrone with Z = 0.000400, Y = 0.230800, and [$\alpha$/Fe] = 0.6. The set of representative model points is listed in Table \ref{table_modelch}. Model stellar atmospheres were then generated using the Marcs model atmosphere program \citep{marc75} at the $T_{eff}$, log($g$)\ of these points. Logarithmic solar abundances of Fe, C, N, and O are assumed to be 7.52, 8.62, 8.00, and 8.86 dex, respectively, on a scale of H = 12.
These are somewhat higher than the latest solar abundances inferred from 3D hydrodynamic models by \cite{asplund05}, but we continue to use them for consistency with our previously published papers in this series. Furthermore, our molecular transition probabilities were scaled to fit the solar spectrum with our adopted solar abundances when they were first adopted (e.g., \cite{bell94}). From each model, synthetic spectra were calculated using the SSG program (Bell \& Gustafsson 1978; Gustafsson \& Bell 1979; Bell \& Gustafsson 1989; Bell, Paltoglou, \& Tripicco 1994) and the line list of \cite{trip95}. The spectra were calculated from 3,200 to 5,500\,\AA\ in 0.05\,\AA\ steps with a microturbulent velocity of 2~km~s$^{-1}$, an [O/Fe] abundance of +0.20 dex, $^{12}$C/$^{13}$C = 10, but with differing C and N abundances. The resulting spectra were then smoothed to the resolution of the observed spectra and the corresponding I(CH) and I(NH) indices measured. By construction, no zero point shifts were necessary. The values of I(CH) from 18 sets of models with [C/Fe] from $-1.4$ to +0.4 dex in steps of 0.2 dex are plotted with the observed indices in Figure~\ref{fig_chcn+model}. I(NH) was computed from the same set of models for [N/Fe] from $-0.6$ to +2.0 dex in steps of 0.2 dex. The corresponding I(NH) values are also shown in this figure. The spread in [C/Fe] and [N/Fe] among the M15 SGB stars appears well represented by this range. Clearly the star-to-star spread in N abundances of the stars at the base of the RGB in M15 approaches a factor of 10. The resulting I(CH) and I(NH) indices predicted via synthetic spectra from the grid of model atmospheres are listed in Tables~\ref{table_modelch} and \ref{table_modelnh}.
\section{Inferred C and N Abundances Among the Subgiants \label{section_abund}} To disentangle the underlying C and N abundances from the CH and NH band strengths, we have fit the [C/Fe] and [N/Fe] abundances corresponding to the observed I(CH) and I(NH) indices of the SGB stars in M15. Since we are using the NH band instead of a CN band, the coupling between the assumed abundance of C and the deduced abundance of N is minimal, and we employ here the same technique as in Briley {\it et al.\/}\ 2002, 2004b (our M13 analysis): the model isoabundance curves of Tables~\ref{table_modelch} and \ref{table_modelnh} were interpolated to the $M_V$ of each program star using cubic splines, and the observed index converted into the corresponding abundance based on the synthetic indices at that $M_V$. The resulting C and N abundances are plotted in Figure~\ref{fig_cn_abund} and listed in Table~\ref{table_obs_inds}. The error bars were determined by repeating the process while including shifts in the observed indices of twice the average error among the SGB indices as derived from Poisson statistics (0.005 in I(CH) and 0.02 in I(NH)). The shifts were included in opposing directions (e.g., +0.005 in I(CH) and $-0.02$ in I(NH), followed by $-0.005$ in I(CH) and +0.02 in I(NH)) and reflect likely errors in the abundances due to noise in the spectra. In Figure~\ref{fig_2spec} we show synthetic spectra corresponding to two SGB stars (one NH strong, the other NH weak) and a variety of N abundances. Also shown are the abundances resulting from index matching, which agree well with what one would obtain from visual matching between observed and calculated spectra. The sensitivity of the derived C and N abundances to our assumptions was also evaluated. We chose four representative stars near the base of the RGB and again repeated the fitting of the NH and CH band strengths with different values of [Fe/H], [O/Fe], $^{12}$C/$^{13}$C, etc.
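The index-to-abundance inversion can be illustrated with a toy grid. Everything below is a hypothetical stand-in: the real isoabundance curves come from the tabulated synthetic indices, and the analysis uses cubic splines where this sketch uses linear interpolation to stay short and self-contained.

```python
import numpy as np

# Hypothetical model grid: synthetic I(NH) tabulated at a set of isochrone
# points (absolute magnitude M_V) for a ladder of [N/Fe] values.
n_fe_grid = np.arange(-0.6, 2.01, 0.2)      # [N/Fe] ladder, dex
m_v_grid = np.array([0.5, 1.5, 2.5, 3.5])   # model M_V points

def synthetic_inh(m_v, n_fe):
    """Stand-in for the tabulated synthetic indices: the band weakens
    toward the turnoff (larger M_V) and strengthens with [N/Fe]."""
    return max(0.0, 0.12 * (1.0 + 0.5 * n_fe) * (1.0 - 0.2 * (m_v - 0.5)))

index_table = np.array([[synthetic_inh(m, n) for n in n_fe_grid]
                        for m in m_v_grid])  # shape (M_V, [N/Fe])

def n_abundance(m_v_star, inh_obs):
    """Interpolate each isoabundance curve to the star's M_V, then invert
    the resulting index-vs-abundance relation at the observed index."""
    idx_at_mv = np.array([np.interp(m_v_star, m_v_grid, index_table[:, j])
                          for j in range(len(n_fe_grid))])
    # idx_at_mv is monotonically increasing in [N/Fe], so it can be inverted.
    return float(np.interp(inh_obs, idx_at_mv, n_fe_grid))
```

The error bars described above correspond to re-running `n_abundance` with the observed index shifted by twice its Poisson error in each direction.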
These results are presented in Table \ref{table_changes}, where it may be seen that the sensitivity of the derived C and N abundances to the choice of model parameters is remarkably small (well under 0.2 dex for reasonably chosen values), as would be expected from these weak molecular features. We have also plotted the C and N abundances of our M15 sample as functions of both $V$ (Fig.~\ref{fig_cn_abund}) and $V-I$ color (not shown) to evaluate possible systematic effects with luminosity and temperature; none appear to be present. C and N abundances were not calculated for the stars at or below the MSTO, as all abundance sensitivity is lost due to the high $T_{eff}$\ and resulting weakness of the molecular bands. For the six stars in our sample with I(NH) $\le 1$\%, essentially all of which have I(NH)$+1\sigma[{\rm I(NH)}] > 0$, we assign upper limits to [N/Fe], which are shown in Fig.~\ref{fig_cn_abund}. This figure immediately confirms the very large range in C and N abundances from star to star at similar evolutionary stages in M15 previously deduced from the appearance of the I(CH) and I(NH) versus $V$ plots (Fig.~\ref{fig_chcn_indices}). We next consider whether our data show any correlation between the C and N abundances we derive for M15 subgiants and lower RGB stars. Fig.~\ref{fig_c_vs_n} shows a plot of [N/Fe] versus [C/Fe] for the entire M15 sample. An anti-correlation, with considerable scatter, is apparent. The scatter is consistent with the observational errors, but there are a few outliers. In a sample of 70 objects with Gaussian errors, one outlier at the 2.5$\sigma$ level might be expected. One of these, C30123\_1138 (V=18.27), is among the stars with low signal in the continuum near the NH band, hence with I(NH) of high uncertainty, indicated in the figures by an open circle.
The deviation of star C29413\_1023 (V=17.32), with extremely enhanced C and N, from the mean relation shown by the M15 sample in Fig.~\ref{fig_c_vs_n} is of higher statistical significance. This star will be discussed in Cohen \& Melendez (2005), where additional relevant data will be presented. We thus conclude, with the caveat of a very small number of probable outliers, that an anti-correlation between C and N is indeed found among the low luminosity sample of 68 stars in M15 studied here. Evaluating the accuracy of our absolute abundance scale is more difficult, as external comparisons are limited. For the main sequence stars in 47 Tuc, we can compare the results of Briley {\it et al.\/}\ (1991, 1994), carried out in a manner fairly similar to the present work, with the independent analysis of a different sample of stars by Cannon {\it et al.\/}\ (1998). This suggests we may be systematically underestimating the absolute C abundance by about 0.15 dex, and overestimating the N abundance by about 0.2 dex. A second comparison is possible in M13. We obtained C abundances using our procedures as described here from newly obtained spectra for previously studied bright giants in M13 precisely to address this issue. The mean differences in [C/Fe] for our results as compared with literature values were 0.03$\pm0.14$ dex for four stars in common with \cite{smith96} and 0.14$\pm0.07$ dex for stars also observed by \cite{suntzeff81} (if one extreme case is removed; see Briley {\it et al.\/}\ 2002 for details). This is very reasonable agreement. It is clear that shifts in the absolute abundance scale cannot account for the large range in C and N abundances apparent in Figure \ref{fig_c_vs_n}. We therefore conclude that the C versus N anti-correlation among the low luminosity M15 stars in Figure \ref{fig_c_vs_n} is indeed real.
\subsection{From the RGB Tip to the Main Sequence Turnoff \label{section_trefzger} } \cite{trefzger83} carried out an extensive analysis of C and N abundances for the most luminous stars in M15. We now combine our results with theirs, thus sampling the [C/Fe] and [N/Fe] ratios over the full range of luminosity from the RGB tip to the main sequence turnoff in M15 in the two panels of Fig.~\ref{fig_tref}. With the confidence that our absolute abundance scale is reasonably secure, and the hope that the same holds for the work of \cite{trefzger83}, we assert that Fig.~\ref{fig_tref} shows a large range in [C/Fe] at low luminosities, accompanied by a decrease in the mean [C/Fe] at about $V \sim$ 15 mag, which is essentially the location of the RGB bump in this globular cluster. We take this as evidence of two separate mechanisms contributing to the spread in the abundance of C and N in globular clusters. At high luminosities near the RGB tip, we see evidence of the first dredge up, as expected from normal stellar evolution, plus the extra mixing common among metal-poor cluster giants, with a decline in the mean C abundance of about a factor of 5 (0.8$\pm0.3$ dex); the large uncertainty reflects the possibility that the absolute abundance scale of \cite{trefzger83} is different from ours, a matter we plan to investigate in the near future. In metal-poor field giants \citep{gratton00,spite04} a similar drop of about a factor of 2.5 is seen in the C abundance at about the luminosity of the RGB bump. These studies of field giants show an increase in N abundance of about a factor of 4 at ${\sim}L$(RGB bump), a drop in the Li abundance, and a decrease in the $^{12}$C/$^{13}$C ratio as well. There is some suggestion of an increase in the mean N abundance for stars in M15 more luminous than $L$(RGB bump) (Fig.~\ref{fig_tref}), but it is less clear cut than the drop in mean [C/Fe] there.
There is also a potential concern of bias, in that \cite{trefzger83} could not reliably detect NH bands weaker than those included here; their paper contains several non-detections which were not plotted in Fig.~\ref{fig_tref}. This is the same phenomenon we identified earlier in M13 \citep{briley02,briley04b}, where the mean C/Fe and the spread about that value were constant from the subgiants to below the main sequence turn off, but stars near the RGB tip showed lower surface C abundance. In that case, due to the limited data from the literature for the luminous RGB stars, we could not identify the luminosity at which the transition occurred. In M15, as is shown in Fig.~\ref{fig_tref}, that transition luminosity is reasonably well defined, and it is $L$(RGB bump). \section{ON Burning \label{section_on} } We next examine whether converting C to N, presumably via the CN cycle, is sufficient to reproduce the behavior we have found for our M15 sample, or whether burning of the even more abundant element O is also required. Figure~\ref{fig_sum_cn_m5} shows the sum of the C and N abundances as a function of the C abundance for the sample of M15 subgiants. The solid dot shows the predicted location assuming the initial C and N abundances (C$_0$, N$_0$) are the Solar values reduced by the metallicity of M15 ([Fe/H] = $-2.2$ dex). Thus this is the initial location for no burning and for a Solar C/N ratio. If the present stars incorporated material in which just C was burned into N, then the locus of the observed points representing the M15 sample of low luminosity stars should consist of a single horizontal line, with the initial point (representing no CN-cycle exposed material) at the right end of the line (the maximum C abundance), and the left end of the line corresponding to a substantial fraction of the star's mass (i.e. the atmosphere plus surface convection zone) including C-poor, N-rich material.
Furthermore, if the initial C/N ratio of the cluster is not Solar, then the locus should still be a horizontal line, but located at a different vertical height in this figure. The maximum possible N enhancement for a cluster SGB star with these assumptions occurs if the star formed entirely from material in which all C has been converted into N. For initial values (C$_0$, N$_0$) (not expressed as logarithms), this maximum N enhancement would be (C$_0$ + N$_0$)/N$_0$. If the initial value were the Solar ratio, C$_0$/N$_0 \sim3.2$, the resulting maximum N enhancement is a factor of $\sim$4.2, while for an unrealistic initial C$_0$/N$_0$ of 10, the maximum N enhancement is a factor of 11. Now we examine the behavior of the C and N abundances among the M15 subgiant sample as inferred from our observations. It is clear that the assumption that the only process operating was the incorporation of material in which C was burned into N must be incorrect. The sum of C+N appears to increase systematically by a factor of $\sim$5 between the most C-rich and the most C-deficient stars. The discussion of the errors, both internal and systematic, in \S\ref{section_abund} suggests maximum systematic errors of $-0.2$ dex for log(C/H) and +0.2 dex for log(N/H). This is completely insufficient to explain such a large trend as arising from errors. Thus the sum of C+N was {\it{not}} constant as C was burned into N, wherever that might have occurred. Furthermore, the observed range in N abundances is very large. The most obvious way to reproduce this is to include O burning as well as C burning. If we adopt Solar ratios as our initial values, then a substantial amount of O burning is required. Figure~\ref{fig_sum_cn_m5} suggests that the initial ratio of C/N is close to Solar.
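As a sanity check, the bookkeeping in this and the following paragraph can be reproduced numerically. All quantities are number ratios (not logarithms), using the Solar C/N and C/N/O values quoted in the text; this is a trivial verification, not part of the analysis.

```python
def max_n_enhancement(c0_over_n0):
    """Maximum N enhancement if a star forms entirely from material in
    which all C has been converted into N: (C0 + N0) / N0."""
    return 1.0 + c0_over_n0

# A Solar C/N ratio of ~3.2 gives a maximum factor of ~4.2;
# an (unrealistic) C0/N0 of 10 gives a factor of 11.
solar_max = max_n_enhancement(3.2)
extreme_max = max_n_enhancement(10.0)

# O-burning budget with Solar number ratios C/N/O = 3.2/1/7.6:
c0, n0, o0 = 3.2, 1.0, 7.6
n_final = n0 + c0 + 0.5 * o0   # all the C plus 50% of the O burned to N
enhancement = n_final / n0     # factor of 8

# With [O/Fe] = +0.3 dex the initial O rises to ~15.2 (C/N/O = 3.2/1/15.2);
# the same absolute amount of O burned is then a smaller fraction of it.
o0_alpha = o0 * 10 ** 0.3
frac_burned = (0.5 * o0) / o0_alpha   # ~0.25 rather than 0.50
```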
Adopting the Solar value as the initial C/N ratio, we calculate the minimum amount of O which must be burned at the base of the AGB envelopes to reproduce the locus observed in the figure (under the arguable assumption that the most extreme of our stars formed largely from such material; this will, however, provide us with at least an estimate of the minimum burning required). We need to produce an N enhancement of at least a factor of 10. The Solar ratio is C/N/O = 3.2/1/7.6, so if all the C and 50\% of the O were converted, we have an enhancement of N of a factor of 8 available to the present stars. Oxygen is typically found to be overabundant with respect to Fe in old metal-poor systems (see Mel\'endez, Barbuy \& Spite 2001, Gratton {\it et al.\/}\ 2001, Ram\'{\i}rez \& Cohen 2002, and references therein); we assume [O/Fe] $\sim +0.3$ dex, a typical value. Then the initial C/N/O ratios will be 3.2/1/15.2. Note that the same amount of O has to be burned to produce the observed distribution of C and N abundances, but in this case it is a considerably smaller fraction of the initial O. \section{Comparison With C and N Studies in Other Globular Clusters \label{section_othergc}} We have now analyzed four galactic globular clusters covering a wide range in metallicity: M71, M5, M13 (see GC--CN), and now M15. In each case, large samples of stars, all well below the luminosity of the RGB bump, were used. In M13 and M71, we had large samples below the main sequence turn off. In this section we attempt to assemble, compare, and integrate the results of these efforts, adding in relevant other work from the literature. The following section will seek to interpret these results. The nearby globular cluster 47 Tuc has been studied in great detail by \cite{cannon98} (see this paper for references to many earlier studies), while \cite{briley04a} extend their results by pushing several magnitudes below the MSTO of this very nearby cluster.
47 Tuc and the four clusters we have studied (see GC--CN) are the only globular clusters for which suitable data exist for the low luminosity range probed here. In each of these five globular clusters, there are large differences in star-to-star C and N abundances among the low luminosity stars. Fig.~\ref{fig_allgc_c} and Fig.~\ref{fig_allgc_n} show histograms of the derived C and N abundances for the samples of stars in each of these five globular clusters. From these we estimate the range of variation among these stars for both C and N for each cluster, ignoring a few obvious outliers in some cases. The field star C and N values for unmixed stars (low luminosity metal-poor field giants, which are presumably unmixed) from \cite{gratton00} ([C/Fe] $\sim 0.0$, [N/Fe] $\sim -0.1$ dex) roughly coincide with the maximum C/Fe ratio and with the minimum N/Fe ratio. Our most important new result derives from Fig.~\ref{fig_allgc_c} and Fig.~\ref{fig_allgc_n}. These figures clearly show that the range of the spread in both C and N is about the same, when expressed as [C/Fe] and [N/Fe] values, in each of the five clusters. An anti-correlation between C and N has been found in each of these clusters. This anti-correlation is most easily seen in the metal rich clusters, as there the observational errors are a smaller fraction of the signal. However, it is seen even in M15. This anti-correlation takes a particular form, illustrated in Fig.~\ref{fig_sum_cn_m5}, where the [C/Fe] -- [N/Fe] relationship we earlier demonstrated to prevail in M5 (the dashed curve) is superposed on the results presented here for M15. The length of the curve covers the full range of our M5 stellar data. The agreement of the mean relations we have determined in M5 and in M15, both in their form and in their extent, is very good.
In the metal-rich GCs M71 and 47 Tuc, the CN and CH indices appear bimodal, with a preferred high and low value, each varying with luminosity, but few stars occupying the middle ground. However, the more metal poor GCs M5, M13 and M15 show no sign of bimodality for either the CH or the CN (or NH) line strengths. While an upper limit to band strengths in the metal-rich clusters might arise as the bands saturate, the general appearance of the distribution of line indices (see, for example, Fig.~4 of Cannon {\it et al.\/}\ 1998) does not support this as an important mechanism here. The bimodality more likely reflects the underlying abundance distributions of C and of N. By adding in samples of much more luminous stars on the upper giant branch with C and N abundances from the literature, we have been able to show that for M13 and for M15, the most luminous stars have a mean C/Fe ratio about a factor of 3 to 5 lower than those on the lower giant branch. A similar behavior is present in the large sample of luminous stars in M92 studied by \cite{carbon82}, where C and N abundances were assigned to 43 giants with $M_V < +2$ by \cite{langer86}, and in the more recent work on C along the lower RGB of M92 by \cite{bellman01}. The latter suggest that C depletion begins at $M_V = 0.5$ to 1.0 mag, while \cite{zoccali99} find the RGB bump to be at $M_V \sim0.0$ mag in M15 based on HST CMDs. Previous studies of the C and N abundances of stars in M5, summarized in \cite{cohen02}, combined with our low luminosity large sample in this globular cluster, suggest that any difference in the mean [C/H] between the tip and the base of the RGB in M5 must be less than 0.3 dex; further observations to refine this are underway. For M71, we can compare our main sequence C abundances with those for the 75 RGB giants found by \cite{briley01m71} from DDO photometry. There is no evidence for any additional C depletion near the RGB tip in M71, but the uncertainties are large.
The situation in 47 Tuc is ironic. This globular cluster is so nearby, and hence its stars are relatively bright, that large surveys of the luminous giants (see, e.g., Norris, Freeman \& Da Costa 1984) were carried out before the development of molecular band synthesis techniques; these surveys compared CH and CN molecular band indices as a function of luminosity, but did not derive C and N abundances. We have been unable to put together a sample from the literature of stars in 47 Tuc with C abundances that encompasses the necessary luminosity range, in spite of the multitude of published analyses. The drop in C/Fe near the RGB tip seen in the more metal poor clusters is in agreement with the behavior of field stars \citep{gratton00}. Thus, as was suggested in \cite{briley02}, there appear to be two distinct abundance altering mechanisms that affect globular cluster stars. One produces strong star-to-star scatter at all luminosities, and another, most effective in the metal-poor GCs, produces the drop in C abundance which starts at ${\sim}L$(RGB bump). Only the latter is seen among field stars, and only the latter is at present understood, as enhanced (``deep'') mixing at the first dredge up once stars evolve to luminosities exceeding $L$(RGB bump). \section{Implications for Stellar Evolution \label{section_mix} } In the previous section, we reviewed the observational results accumulated thus far regarding the C and N abundances of stars in galactic globular clusters. We note that the sample of well studied GCs covers a wide range in metallicity, and, especially at the metal rich end, a wide range of present cluster mass and central stellar density. In this section we discuss the implications of the collected observational results for the stellar evolution of metal-poor globular cluster stars. We found that there appear to be two distinct mixing mechanisms. First we address the extra C-depletion found on the upper RGB beginning approximately at $L$(RGB bump).
This mechanism is, to first order at least, understood. A classic review of post-main sequence stellar evolution can be found in \cite{iben83}. Their description of the consequences of the first dredge up phase, the only dredge up phase to occur prior to the He flash, indicates that a doubling of the surface $^{14}$N and a 30\% reduction in the surface $^{12}$C can be expected, together with a drop in the ratio of $^{12}$C/$^{13}$C from the solar value of 89 to $\sim$20, as well as a drop in surface Li and B by several orders of magnitude. Observations of metal-poor field stars over a wide range of luminosities conform fairly well to this picture (see, e.g., Shetrone {\it et al.\/}\ 1993; \cite{gratton00}), although additional mixing of Li and lower than predicted ratios of $^{12}$C/$^{13}$C seem to occur even among field stars \citep{nascimento00}. Additional physics, generically termed ``deep mixing'', was introduced into calculations of dredge up in old metal poor stars to better reproduce the observations. Specific improvements include meridional mixing as described by Sweigart \& Mengel (1979) as well as turbulent diffusion (see Charbonnel 1994, 1995) and the insights of Denissenkov \& Denissenkova (1990) concerning the importance of the $^{22}$Ne($p,\gamma)^{23}$Na reaction as a way to produce p-burning nuclei. The clear prediction of the most recent calculations of this type by Denissenkov \& Weiss (1996), Cavallo, Sweigart \& Bell (1998) and Weiss, Denissenkov \& Charbonnel (2000) is that the earliest that deep mixing can begin is at the location of the bump in the luminosity function of the RGB, which occurs when the H-burning shell crosses a sharp molecular weight discontinuity. The observations of C and N abundances in globular cluster stars (and in field stars, see, e.g.
Gratton {\it et al.\/}\ 2000) are in reasonable agreement with the predictions of the latest such models with regard to the key points: at what luminosity the first dredge up begins, the amplitude of the decline in C abundance, and the general shape of the C depletion as a function of luminosity. The observational situation is not yet adequate to verify the predicted increase of the N abundance in GC RGB stars above the bump luminosity. The models also predict, in agreement with the observations, that this phenomenon is more efficient at low metallicities, as the thickness of the H-burning shell decreases rapidly and the shell burning timescale also decreases as the metallicity rises \citep*[see Fig.~7 and 6b of][]{cavallo98}. The models of \cite{cavallo98} and others can predict the strong O-Na anti-correlation seen among giants close to the RGB tip in some globular clusters, particularly the metal poor ones, as another consequence of p-burning in the H-burning shell. We therefore regard this aspect of the behavior of the C and N abundances in globular cluster stars as having a reasonable explanation. It is the very large star-to-star differences in C and N abundances found at all luminosities in all globular clusters studied with sufficient data to date which are not easily explained. The range of variation of [C/Fe] and of [N/Fe] is to first order constant, irrespective of the metallicity of the GC and independent of stellar luminosity, with C and N anti-correlated, and with maximum C depletions of a factor of 3 to 5 accompanying maximum N enhancements of more than a factor of 10. The very high N enhancements require not only CN burning but ON burning as well. The range of luminosity over which these C and N variations (and O and Na variations as well; see, e.g., Cohen \& Melendez 2005 or \cite{gratton01} and references therein) occur in globular clusters has by now ruled out any scenario which invokes dredge up and mixing intrinsic to the star itself.
We must now regard the fundamental origin of the star-to-star variations in C and N abundance we see in GCs as arising outside the stars whose spectra we have studied here. The strong anti-correlation between C and N, however, does suggest that CN-cycle material must be involved, and that this material has somehow reached the surface of these low luminosity GC stars. Since we know it cannot come from inside these stars, it must come from some external source. As reviewed by Lattanzio, Charbonnel \& Forestini (1999), CN and ON cycling is known to occur in intermediate mass AGB stars, and such stars are also known to have sufficient dredge up to bring such material to their surfaces. Recent detailed computations, including both nucleosynthesis with a large set of isotopes and nuclear reaction pathways, have been carried out by \cite{karakas03} for very metal poor intermediate mass stars. \cite{herwig04} has also added in detailed mixing to the stellar surface. These calculations can qualitatively reproduce essentially all of the observational data. We thus might speculate that the site of the proton exposure could be more rapidly evolving higher mass AGB stars, which then suffered extensive mass loss (either in or outside of binary systems) and polluted the generation of lower-mass stars we currently observe, while the higher mass stars are now defunct. Considerable effort to develop this scenario of AGB pollution of the lower mass stars we observe today has been made by \cite{ventura01}, \cite{dantona02}, and most recently (and most completely) by \cite{fenner04}. However, the recent observational facts summarized above have in our view rendered this scenario not viable either. One problem with any ``pollution'' scenario is that these abundance inhomogeneities cannot simply be surface contaminations as they would be diluted by the increasing depth of the convective envelope during RGB ascent. 
Thus the amount of accreted mass required to explain the observed C and N variations becomes very large. It must be a significant fraction of the total stellar mass, given that a fraction of the star much larger than just the surface convection zone of a luminous RGB star must be contaminated to maintain a constant range of C and of N at all luminosities. This seems to us unrealistic and contrived, but we note the recent calculations of \cite{thoul02} which demonstrate that large accumulations may be possible with some assumptions about stellar orbits, particularly in clusters with small core radii. They estimate that for stars in the core of 47 Tuc as much as 80\% of the mass of a 1 M\mbox{$_{\odot}$}~star in that cluster could be accreted material. Unfortunately, the core radii of some of the GCs we have studied are considerably larger. Even with generous assumptions regarding orbital anisotropy, similar calculations for them yield a much smaller expected accretion ($\sim$10\%), which is not sufficient for present purposes. Another problem arises because of the tight anti-correlation between C and N (see Fig.~\ref{fig_sum_cn_m5}). Any external mechanism for producing these variations will involve an efficiency factor for the incorporation of material. This might be a cross section if accretion from the cluster gas is involved, or some property of the accretion disk if binaries are involved. We expect this factor to depend on the mass of the star itself, how much additional mass is incorporated (${\Delta}M$), and the initial C and N abundances in the star itself and within ${\Delta}M$. Since these properties of ${\Delta}M$ might be expected to fluctuate wildly depending on the mass of the evolved star producing the N-rich ejecta, this process should therefore show considerable stochastic variability. ``Pollution'' of a low mass star by ejecta from intermediate mass AGB stars is just too chaotic and unpredictable to be able to reproduce such well behaved trends. 
We now turn to the implications of Figs.~\ref{fig_allgc_c} and \ref{fig_allgc_n}, which display the ranges of [C/Fe] and [N/Fe] found among low luminosity stars (stars at the base of the RGB, subgiants, and/or on the main sequence) in five GCs spanning a range in metallicity of a factor of 40. These figures illustrate our most important new result. We see that the maximum and minimum for each of these is approximately the same (to within a factor of 3) for each of the clusters. Note that the maxima and minima expressed as log[$\epsilon$(C)] and log[$\epsilon$(N)] are {\it{not}} constant. Furthermore the maximum in [C/Fe] corresponds reasonably well to that of the field stars, while the minimum in [N/Fe] corresponds to that of the field stars. Thus it seems reasonable that, as is commonly assumed, the high C, low N stars represent the nominal chemical inventory, while the abnormal ones are those with low C and high N. $^{12}$C is produced by the triple-$\alpha$ process and destroyed by CN burning, while $^{14}$N is produced via CN and ``hot bottom'' burning. We therefore expect N to behave as a primary element, while the behavior of C may be more complex. The observations, however, demand that the additional material dumped onto the low C/high N stars is not from some primary process in which a fixed amount of N per gm is produced, dispersed into the GC, and mixed into the GC gas. Instead the chemical inventory of C and N behaves like a secondary process, increasing as [Fe/H] increases. The modeling of the production and dredge up of elements such as C and N in AGB stars, while still very uncertain, is rapidly advancing. Detailed models, such as those of \cite{karakas03}, of the yields and of the abundances of various species at the surfaces of such stars after dredge up are now available. 
So we must ask what are the surface ratios of the species of interest in intermediate mass AGB stars after dredge up and what are the relevant mass loss rates, which will drive the processed material into the cluster gas. Mass loss in AGB stars is primarily driven by radiation pressure on dust grains \citep{vasil}. While there is still some uncertainty, recent comparisons of heavily obscured AGB stars in the SMC, the LMC and the Milky Way by \cite{vanloon} suggest that the total mass loss rate for a star of fixed luminosity is only weakly dependent on metallicity. Chemical yields for intermediate mass stars have been given by \cite{marigo}, \cite{ventura02} and \cite{karakas03}. We examine the ratio of initial to final C and N in models of different initial metallicity to determine how closely the behavior of $^{12}$C and of $^{14}$N in the ejecta of such stars matches the extreme range of the observations in globular clusters over a wide range in metallicity. Table~\ref{table_modelcomp} presents this comparison in detail for the models of \cite{ventura02}; those of \cite{karakas03} and of \cite{gavilan04} do not cover a sufficient range in metallicity to be useful for our purpose, while the results of \cite{marigo} were not given in tabular form. We find that, while correct in sign, the models of \cite{ventura02} fail to reproduce the observations by factors of up to $\sim$10, depending on which of the three ratios presented in the table is examined. The worst discrepancy is in the final ratio C/N after dredge up in the most metal poor GCs. There the extent of both the depletion of C and the enhancement of N are badly underestimated by the models. \cite{fenner04} also finds that ejecta from intermediate mass AGB stars cannot reproduce details of the abundance distributions of the Mg isotope ratios in NGC~6752 \citep*[see also][]{denissenkov03}. AGB ejecta can be observed directly by studying them in situ, i.e. in planetary nebulae. 
An independent check of the predicted yields can be attempted via abundance analyses of the planetary nebulae in the SMC. \cite{lmc_pn} discuss the planetary nebulae in the LMC, whose metallicity is higher than that of any GC considered here. The SMC PN are not as well studied, and with the demise of STIS/HST, future progress in this area will be at best very slow. Although the predictions of \cite{ventura02} and of \cite{karakas03} for the behavior of C and N after dredge up of intermediate mass AGB stars are reasonably consistent with each other, the uncertainty in these calculations must be large. Whether it is large enough to accommodate discrepancies of a factor of $\sim$10 in C/N ratios is not clear. An optimist would say that this level of (dis)-agreement is satisfactory, given the difficulties and complexity of the modeling effort required, while a pessimist would say that these discrepancies are larger than can be reasonably expected from the models and from the data. A very recent paper, \cite{ventura05}, discusses the modeling uncertainties arising from just one issue, the description of convection adopted. They find that changes of a factor of two in predicted C and N surface abundances in intermediate mass metal poor AGB stars are easily achieved by this means. We choose to be optimists. More such modeling efforts, even though they require many assumptions, will be very valuable. \section{Implications for Globular Cluster Formation and Chemical Evolution \label{section_chem_evol} } In addition to the accumulated evidence regarding C and N abundances presented above, there is one other key fact that must figure in any model of the chemical evolution of globular clusters. This is that the abundances of the heavy elements, particularly those between Ca and the Fe-peak, are constant for all stars in a globular cluster. 
Extensive efforts (see, for example, Cohen \& Melendez 2005) have failed to detect any dispersion larger than the observational errors. The abundance spreads are confined to the light elements\footnote{There is increasing evidence there may be some variation of the heavy and very rare $r$ and $s$-process elements in globular clusters. We ignore this here.}. We suggest that a viable scenario for the chemical evolution of GCs can only be constructed if globular cluster stars are not all coeval, and that more than one epoch of star formation in GCs must have occurred, albeit all within a relatively short timescale. During the early stages, a variation in C and N abundances satisfying the above observational data was imprinted on the proto-cluster gas {\it{before}} the present generation of stars we now observe was formed. The low mass stars we currently observe formed from the ``polluted'' gas some time later during the extended period of star formation in GCs. Furthermore, if one is a pessimist, one must rule out a previous generation of intermediate mass AGB stars as the source of this ``pollution'' because of problems in the predicted C/N ratios. If one is an optimist, then one ascribes these problems to modeling difficulties, to mass loss rates that increase dramatically with increasing metallicity (which is not supported by observations, \citealt{vanloon}), or to some other such factor, and assumes that these AGB stars did generate the C and N variations seen today in low luminosity GC stars. A tentative scenario which fits most of the facts might be that the first stars to form in the proto-cluster gas were very massive. Since this gas might have had very low metallicity, theoretical support for an IMF heavily biased towards high mass stars under these conditions can be found in the review of \cite{bromm04}. The SNII from these stars produced the heavy elements through the Fe-peak seen in globular cluster stars. 
The violent explosions ejected energy into the cluster gas which kept it well mixed. This is crucial to maintaining constant abundances of the heavy elements in the stars within a particular GC. SNII explosions may also have acted to disrupt the lowest mass proto-clusters, which became halo field stars. The lifetimes of high mass stars are very short, and so would be the duration of this phase of evolution of the GC. After some (short) time, no more massive stars were formed. Intermediate mass stars began to form with the metallicity of the GC as seen today. Such stars have typical lifetimes of $\sim$2 Gyr. During the course of their evolution, in their interiors they produced material that went through the CN process (and the ON process to some extent). This material was subsequently ejected, but the gas was no longer mixed globally over the cluster volume, and local pockets of substantial or negligible enrichment of the light elements developed. Since GC color-magnitude diagrams do not permit an age range of 2 Gyr among the low mass GC stars, no low mass stars could have formed until near the end of this second phase. At this point, the low mass stars that we see today formed, with variable light element ratios, but fixed heavy element abundances. It is now possible to include the formation of globular clusters in cosmological simulations \citep{kravtsov04}. However, the level of detail needed here to follow their chemical evolution with regard to the light elements is still beyond our capabilities. Although the overall picture sketched above seems reasonable, current models for nucleosynthesis and dredge up for intermediate mass AGB stars fail to reproduce in detail the observed C and N variations in GC stars, in the sense that the C depletions and N enhancements observed in low metallicity globular clusters are considerably larger than theory predicts. 
Unless those models are flawed, the relatively short lived stellar source for the second phase of this scenario, when the cluster gas is no longer well mixed throughout its volume, is unknown. In evaluating such a scenario, it is important to remember that the present mass of a GC may be much lower than its initial mass as a proto-cluster; stars are lost from the cluster through many processes \citep*[see, e.g.][]{mash}. Thus the absence of a relation between the present mass (or central density) of a GC and its [Fe/H] should not be surprising. It would be of interest to test to even greater accuracy the constancy of the Fe-peak elements within a particular GC. \section{Summary} We present moderate resolution spectroscopy and photometry for a large sample of subgiants and stars at the base of the RGB in the extremely metal poor Galactic globular cluster M15 (NGC~7078), with the goal of deriving C abundances (from the G band of CH) and N abundances (from the NH band at 3360\,\AA). Star-to-star stochastic variations with significant range in both [C/Fe] and especially [N/Fe] are found at all luminosities extending to the subgiants at $M_V {\sim}+3$. An analysis of these LRIS/Keck spectra with theoretical synthetic spectra reveals that these star-to-star variations between C and N abundances are anti-correlated, as would be expected from the presence of proton-capture exposed material in our sample stars. The evolutionary states of these stars are such that the currently proposed mechanisms for {\it in situ} modifications of C, N, O, etc. have yet to take place. On this basis, we infer that the source of proton exposure lies not within the present stars, but more likely in a population of more massive stars which have ``polluted'' our sample. The range of variation of the N abundances is very large and the sum of C+N increases as C decreases. 
To reproduce this requires the incorporation not only of CN- but also of ON-processed material, as we also found earlier for M5 (see GC--CN). We combine our work with that of \cite{trefzger83} for the brighter giants in M15 to extend coverage to a larger luminosity range reaching from the RGB tip to the main sequence turnoff. We then find strong evidence for additional depletion of C among the most luminous giants. This presumably represents the first dredge up (with enhanced deep mixing) expected for such luminous RGB stars in the course of normal stellar evolution as they cross the RGB bump. Our work now covers four GCs (M15, M13, M5 and M71, see GC--CN) spanning a metallicity range of a factor of 40. We look at the trends of C and N abundances common to all the GCs studied to date, including (from the literature) 47 Tuc. While all clusters studied show strong anti-correlated variations of C and N at all luminosities probed, the metal rich clusters (M71, 47 Tuc and M5) do not show evidence for the first dredge up among their most luminous giants, while the metal poor ones (M5, M13, M92 and M15) do. This is predicted by the models of the first dredge up on the RGB, which can reproduce essentially all the key features of the associated changes in C abundance, including the luminosity at which it begins and the amplitude of the decline in [C/Fe] as a function of metallicity. The metal poor clusters do not show evidence for the bimodality in CH and CN line strengths seen in the metal rich clusters. The origin of the bimodality is unclear. It is the star-to-star variations in C and in N seen at low luminosity in all these GCs that are more difficult to explain. Having eliminated {\it{in situ}} CN processing, ``pollution'' by material from intermediate mass AGB stars is the most popular current scenario to produce this. However, we rule out this suggestion, at least as far as accretion onto existing stars is concerned. 
Our most important new result is that the range of [C/Fe] and of [N/Fe] seen in these five GCs is approximately constant (see Figs.~\ref{fig_allgc_c} and \ref{fig_allgc_n}), i.e. C and N are behaving as though they were produced via a secondary, not a primary, nucleosynthesis process. A detailed comparison of our results with the models for nucleosynthesis and dredge up of low metallicity intermediate mass AGB stars by \cite{ventura02} fails to explain the details of the C and N abundances, predicting key ratios incorrectly by a factor of up to $\sim$10. Thus pollution of cluster gas by such stars can also be ruled out unless current models of surface N abundances after dredge up are flawed, which seems possible given the complexity of the modeling and the many assumptions required. The behavior of the C and N abundances among low luminosity stars in GCs, while [Fe/H] is constant to high precision within each GC, forces us to assume that there was an extended period of star formation in GCs. The first stars were exclusively of high mass, and their SNII ejecta produced the heavy metals seen in the GC. A second generation of short-lived stars of an unknown type (not intermediate mass AGB stars, unless current models are flawed) evolved, ejected mass, and ``polluted'' the GC gas with light elements; the low mass stars we see today formed afterwards. \acknowledgements The entire Keck/HIRES and LRIS user community owes a huge debt to Jerry Nelson, Gerry Smith, Steve Vogt, Bev Oke, and many other people who have worked to make the Keck Telescope and HIRES and LRIS a reality and to operate and maintain the Keck Observatory. We are grateful to the W. M. Keck Foundation for the vision to fund the construction of the W. M. Keck Observatory. The authors wish to extend special thanks to those of Hawaiian ancestry on whose sacred mountain we are privileged to be guests. Without their generous hospitality, none of the observations presented herein would have been possible. 
JGC acknowledges support from the National Science Foundation (under grant AST-025951) and MMB acknowledges support from the National Science Foundation (under grant AST-0098489) and from the F. John Barlow endowed professorship. We are also indebted to Roger Bell for the use of the SSG program, to the Dean of the UW Oshkosh College of Letters and Sciences for the workstation which made the extensive modeling possible, and to Jorge Melendez for the IR observations. This work has made use of the USNOFS Image and Catalog Archive operated by the United States Naval Observatory, Flagstaff Station (http://www.nofs.navy.mil/data/fchpix/). This publication makes use of data from the Two Micron All-Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center, funded by the National Aeronautics and Space Administration and the National Science Foundation. \clearpage
\section{Introduction} \subsection{Renormalization of the deformations of the action}\label{sec:IntroRenorm} Consider a quantum field theory with an action $S$ invariant under some Lie algebra of symmetries $\bf g$. Let us study infinitesimal deformations of the theory, corresponding to deformations of the action: \begin{equation}\label{Deformations} \delta S \;=\; \epsilon\int d^dx \sum_I f_I(x) U_I(x) \end{equation} where $\epsilon$ is an infinitesimal parameter, $f_I(x)$ are some space-time-dependent coupling constants and $\{U_I\}$ is some set of local operators, closed under $\bf g$ in the sense that the expressions on the RHS of Eq. (\ref{Deformations}) form a linear representation of $\bf g$. We call $T_0$ the linear space of this representation: \begin{equation} T_0 \; = \; \mbox{\tt\small linear space generated by } \int d^dx f_I(x) U_I(x) \end{equation} In principle, we can take $\{U_I\}$ to be the set of all local operators of the theory. But there could be smaller $\bf g$-invariant subspaces. We can study the effects on the correlation functions, or perhaps on the $S$-matrix, of a deformation of the form (\ref{Deformations}), to linear order in $\epsilon$. We can also study the effects of the deformation (\ref{Deformations}) beyond linear order in $\epsilon$, but this requires taking care of the definitions. To define (\ref{Deformations}), we expand in powers of $\epsilon$, bringing down from the exponential expressions like: \begin{equation}\label{MultipleIntegral} \epsilon^n \int d^dx fU\cdots\int d^dx fU \end{equation} This has to be regularized, because of singularities due to collisions of the $U$. Suppose that the set $\{U_I\}$ is big enough, in the sense that all the required counterterms are linear combinations of $\{U_I\}$. The counterterms are not unique, because we can always add a finite expression. Suppose that we have chosen some rule to fix this ambiguity. 
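As an illustration of the regularization just described, the quadratic term in the expansion can be defined by point splitting; the cutoff $\delta$ and the OPE coefficients $c_{IJ}^{K}$ below are our notation, a sketch rather than the scheme used later in the paper:

```latex
% Second-order term, point-split at scale \delta, assuming the OPE
% U_I(x)\, U_J(y) \sim \sum_K c_{IJ}^{K}(|x-y|)\, U_K(y) closes on the set \{U_I\}:
{\epsilon^2\over 2}\int\limits_{|x-y|>\delta} d^dx\, d^dy\; f_I(x) f_J(y)\, U_I(x) U_J(y)
\;-\;\epsilon^2 \int d^dx \sum_K c_{IJ}^{K}(\delta)\, f_I(x) f_J(x)\, U_K(x)
```

The second integral is the counterterm; changing the subtraction by a finite amount is precisely the ambiguity that a renormalization rule must fix.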
Then, we have a map, parameterized by a small parameter $\epsilon$: \begin{equation}\label{MapFinFieldTheoryContext} F_{\epsilon}\;:\;T_0 \longrightarrow \left[\mbox{\tt\small space of finite deformations}\right] \end{equation} \subsection{Symmetries of undeformed theory act on deformations}\label{sec:ActionOfGOnDefs} The space of finite deformations is not, in any useful sense, a linear space. It is a ``nonlinear infinite-dimensional manifold''. But it naturally comes with an {\em action} of $\bf g$. Indeed, the regularized expression (\ref{MultipleIntegral}) is, in particular, a (non-local) operator in the original theory. As the symmetry group of the undeformed theory acts on operators, it therefore acts on deformations, bringing one deformation to another. \vspace{10pt} \noindent Because we had some freedom in the choice of regularization, the map $F_{\epsilon}$ does not necessarily commute with the action of $\bf g$. Can we choose the regularization with some care, so that the resulting $F_{\epsilon}$ {\em does} commute with $\bf g$? In general, we cannot: there are obstacles. \vspace{10pt} \noindent In this paper we will introduce a geometrical framework for describing these obstacles and what they correspond to in the strong coupling limit {\it via} AdS/CFT duality. \subsection{Holographic renormalization} The AdS/CFT correspondence relates the deformations of a CFT to the classical solutions of SUGRA deforming AdS. As the main example, consider Type IIB SUGRA in $AdS_5\times S^5$ and $N=4$ SYM on the boundary $\partial(AdS_5\times S^5)$. Deformations of the SYM action of the form (\ref{Deformations}) are mapped by AdS/CFT to the classical SUGRA solutions, deformations of $AdS_5\times S^5$. Linearized SUGRA solutions correspond to linearized deformations. Renormalization of the deformations of QFT (Section \ref{sec:IntroRenorm}) should correspond to {\em something} on the AdS side. 
Most of the work on {\em holographic renormalization} was done along the lines of \cite{Bianchi:2001kw,Skenderis:2002wp}, and was based on the study of the bulk supergravity action. On the other hand, the computation of the renormalization group flow of the beta-deformation done in \cite{Aharony:2002hx} seems to use a different method. In particular, the authors of \cite{Aharony:2002hx} did not need to know the action of the bulk theory. This, in particular, may allow their method to be applied to cases where the action is not known, and perhaps does not even exist, such as higher spin theories \cite{Vasiliev:2004qz,Sharapov:2017yde}. \subsection{Geometrical abstraction}\label{sec:GeometricaAbstraction} Suppose that a Lie algebra $\bf g$ acts on a manifold $M$, preserving a point $p$. Then it acts in the tangent space to $M$ at $p$. The question is, can we find a formal map: \begin{align}\label{IntroMapF} F_{\epsilon}\;:\;T_pM \rightarrow M \\ F_0 \equiv p \end{align} parametrized by $\epsilon$ (``formal'' means power series in $\epsilon$) from the tangent space $T_pM$ to $M$, commuting with the action of $\bf g$? There are, generally speaking, obstructions to the existence of such a map --- see Section \ref{sec:Linearization}. We want to classify these obstructions. This is, essentially, equivalent to studying the {\bf normal form of the action} of $\bf g$ in the vicinity of the fixed point. \paragraph {Tangent vectors as equivalence classes of trajectories} Maps $F_{\epsilon}\;:\;T_pM \rightarrow M$ participate in the ``usual'' definition of the tangent space ({\it e.g.} \cite{Arnold}). The tangent space $T_pM$ is defined as the space of equivalence classes of paths (maps from $\bf R$ to $M$) $p(\epsilon)$ such that $p(0)=p$. The equivalence relation is that two paths $p_1$ and $p_2$ are equivalent when $p_1(\epsilon)-p_2(\epsilon) = o(\epsilon)$ in a coordinate patch. Giving a function $F_{\epsilon}$ as in Eq. 
(\ref{IntroMapF}) is the same as giving a prescription of how to pick, for each tangent vector $v$, one path from the corresponding equivalence class. That path is: \begin{align} p(\epsilon) = F_{\epsilon}(v) \end{align} Of course, there are many such prescriptions. The question is, can we find one which would be consistent with the action of $\bf g$? The space of formal paths $p\;:\;{\bf R}\rightarrow M$ such that $p(0)=p$ can be denoted $\Omega_pM$ --- similar to the space of $p$-based loops in $M$, but we only need a formal power series in $\epsilon$ at $\epsilon=0$, not the whole loop. To summarize, we investigate the existence of a map: \begin{equation}\label{LiftToFormalLoops} T_pM \rightarrow \Omega_pM \end{equation} commuting with the action of $\bf g$. We find that there are obstacles to the existence of such a map, and classify them. These obstacles are, roughly speaking, some cohomology groups. More precisely, they are solutions of a Maurer-Cartan equation, modulo gauge transformations (a nonlinear analogue of cohomology groups) --- see Section \ref{sec:Linearization}. \paragraph {The role of supergeometry and infinite-dimensional geometry} In our main application (AdS/CFT): ${\bf g} = {\bf psu}(2,2|4)$ --- the superconformal algebra. Its even (bosonic) part is $so(2,4)\oplus so(6)$. If $M$ were a finite-dimensional ``usual'' (not super) manifold then there would be no obstacle to linearizing the action, because the relevant cohomology groups are zero. This makes our picture somewhat counterintuitive geometrically. \begin{center}\begin{minipage}{0.9\textwidth}{\small The relevant cohomology group is $H^1$. In classical geometry, we would have nontrivial invariants if $\bf g$ were ${\bf u}(1)$ or contained ${\bf u}(1)$ as a subalgebra. For example, a classical mechanical system can have a free limit and have in that limit periodic trajectories, but away from that free limit the trajectories are not periodic. }\end{minipage}\end{center} There are two reasons for having nontrivial invariants. 
The first reason is that $M$ is actually infinite-dimensional. But there is also a second reason: even when we can find some finite-dimensional ``subsectors'' (submanifolds in $M$), they are actually {\em super}-manifolds. This can make the cohomology nontrivial even in the finite-dimensional case. \subsection{Summary of this paper} \paragraph {Cohomological framework for holographic renormalization} Here we will develop a formalism for computations along the lines of \cite{Aharony:2002hx}, which makes them geometrically transparent. We interpret \cite{Aharony:2002hx} as computing certain {\em invariants of supergravity equations} in the vicinity of AdS, namely the solution of some Maurer-Cartan equation modulo gauge transformations. We give the definition of these invariants in Section \ref{sec:GeometricaAbstractionAdS}. This is broadly similar to the obstructions to the existence of the $\bf g$-invariant map $F_{\epsilon}$ of Section \ref{sec:GeometricaAbstraction}. The details, however, are more subtle, because we are dealing with gravity. The symmetry algebra ${\bf g} = {\bf psu}(2,2|4)$ of $AdS_5\times S^5$ is actually a part of a larger infinite-dimensional symmetry, the gauge symmetries of the supergravity theory. This makes the analysis on the AdS side more interesting. \paragraph {Finite-dimensional representations have nontrivial cohomology} The cohomological obstacles to linearization of the symmetry are usually rather complicated, because they involve cohomology with coefficients in infinite-dimensional representations (see Section \ref{sec:InfiniteDimensionalExtension}). But when the symmetry is {\em super}-symmetry, even finite-dimensional representations have nontrivial cohomology (Section \ref{sec:ObstacleIsQuadratic}). We formulate a conjecture\footnote{the computation required to prove or disprove this is outlined in Section \ref{sec:OutlineOfComputation}} about the role of these cohomology classes in the particular case of the beta-deformation. 
Our conjecture implies that the anomalous dimension (which in the case of the beta-deformation is cubic in the deformation parameter) is, in some sense, a square of a simpler obstruction, which is quadratic in the deformation parameter. While the anomalous dimension is analogous to a four-point function, the simpler quadratic obstruction is analogous to a three-point function. This might explain the observation in \cite{Aharony:2002hx} that the anomalous dimension is not renormalized. \vspace{10pt} \noindent The idea is, therefore, to study the space of {\em all} perturbative solutions of supergravity (instead of particular solutions) and describe its invariants as invariants of a supermanifold. \paragraph {Plan} In Section \ref{sec:Linearization} we develop a geometrical formalism for studying the obstructions to the existence of the map (\ref{IntroMapF}) commuting with $\bf g$. We explain that the obstacle is a solution of the Maurer-Cartan (MC) equation with values in vector fields. In Section \ref{sec:FeynmanDiagrams} we explain how to apply this formalism to the space of perturbative solutions of a classical field theory. We show that there is a natural operation of ``amputation of the last leg'' which converts Feynman diagrams into a solution of the MC equation. In particular, in Section \ref{sec:ClassicalCFT} we consider the case of classical CFT in ${\bf R}\times S^{d-1}$. In Section \ref{sec:AdS} we discuss deformations of $AdS_5\times S^5$ and holographic renormalization. In Section \ref{sec:BetaDeformation} we study the particular case of the beta-deformation \cite{Leigh:1995ep}. Finally, in Section \ref{sec:Discussion} we discuss some open questions and potential problems. \section{Obstacles to linearization of symmetry}\label{sec:Linearization} \subsection{Action of a symmetry in local coordinates}\label{sec:ActionOfSymmetry} Suppose that a Lie algebra $\bf g$ acts on a manifold $M$ and leaves invariant a point $p\in M$. 
Then $\bf g$ naturally acts in the tangent space $T_pM$. Consider maps $F_{\epsilon}\;:\;T_pM\rightarrow M$ parameterized by a small parameter $\epsilon$, satisfying: \begin{align} & F_{\epsilon}(0) = p \label{MustPassThroughP}\\ & F_{\epsilon *}(0) = {\bf 1}\;:\;T_pM \rightarrow T_pM \label{DerivativeIsId}\\ & F_{\kappa\epsilon}(x) = F_{\epsilon}(\kappa x) \label{RescaleEpsilon} \end{align} As we discussed in Section \ref{sec:GeometricaAbstraction}, there are many such maps. Let us ask the following question: is it possible to construct such a map $F_{\epsilon}$ which would also commute with the action of $\bf g$? (We are interested in a {\em formal} map, {\it i.e.} a map specified as an infinite series in $\epsilon$; we will not discuss convergence.) Let us start by picking {\em some} map $F\;:\; T_pM\to M$ (not necessarily ${\bf g}$-invariant) satisfying Eqs. (\ref{MustPassThroughP}), (\ref{DerivativeIsId}) and (\ref{RescaleEpsilon}). For each element $\xi\in {\bf g}$ there is a corresponding vector field $v\langle\xi\rangle$ on $M$. Let us consider $F_{\epsilon *}^{-1}v\langle\xi\rangle$. It is a vector field on $T_pM$: \begin{equation}\label{ExpansionOfVectorField} F_{\epsilon *}^{-1} v\langle\xi\rangle = v_0\langle\xi\rangle + \epsilon v_1\langle\xi\rangle + \epsilon^2 v_2\langle\xi\rangle + \ldots \end{equation} where $v_n\langle\xi\rangle$ is of the form $v_n\langle\xi\rangle=f_n^{\mu}(x){\partial\over\partial x^{\mu}}$ with $f_n^{\mu}(x)$ a polynomial of degree $n+1$ in $x$. Notice that the power of $\epsilon$ in Eq. (\ref{ExpansionOfVectorField}) correlates with the degree in $x$ of $f_n^{\mu}(x)$. Therefore we will simply omit $\epsilon$ from our formulas; we will think of ``$x$ being of the order $\epsilon$''. The vector field $F^{-1}_*v\langle\xi\rangle$ has a very straightforward meaning. Our map $F$ turns a sufficiently small open neighborhood of $0\in T_pM$ into a chart of $M$. 
In this context, $F^{-1}_*v\langle\xi\rangle$ is just the ``coordinate representation'' of the vector field $v\langle\xi\rangle$ in that chart. Therefore, our question becomes: \begin{itemize} \item Can we choose a chart so that $v_1=v_2=\ldots = 0$? \end{itemize} We will see that the obstacles are certain cohomology classes. \subsection{Maurer-Cartan equation} For two elements $\xi$ and $\eta$ of $\bf g$, we have: \begin{equation} \left[\;F_*^{-1} v\langle\xi\rangle\;,\;F_*^{-1} v\langle\eta\rangle\;\right] \;=\; F_*^{-1} v\langle[\xi,\eta]\rangle \end{equation} This means that, for $c\in \Pi {\bf g}$: \begin{equation}\label{CommutatorOfVectorFieldsDependingOnGhosts} \left[F_*^{-1} v\langle c\rangle \,,\, F_*^{-1} v\langle c\rangle\right] = c^Ac^Bf_{AB}{}^C {\partial\over\partial c^C}F_*^{-1} v\langle c\rangle \end{equation} where $c^A$, $A\in \{1,2,\ldots,\mbox{dim}({\bf g})\}$ denote the coordinates on $\Pi {\bf g}$. Besides that: \begin{equation} [v_0\langle\xi\rangle,v_0\langle\eta\rangle] = v_0\langle[\xi,\eta]\rangle \end{equation} Define the ``BRST operator'': \begin{equation}\label{BRSTOperator} Q = {1\over 2}c^Ac^Bf_{AB}{}^C {\partial\over\partial c^C} + v_0\langle c\rangle \end{equation} where $c = c^At_A\in {\bf g}$. We have $Q^2=0$. This defines the differential in the Lie algebra cohomology complex \cite{Knapp} of $\bf g$ with values in the space of vector fields on $T_pM$ vanishing to at least second order at the point $p$. (The action of the second term, $v_0\langle c\rangle$, is by the commutator of vector fields.) Let us define $\Psi$ as follows ({\it cf.} Eq. (\ref{ExpansionOfVectorField})):\marginpar{$\Psi$} \begin{equation}\label{DefPsi} \Psi = F_*^{-1}v\langle c\rangle - v_0\langle c\rangle = v_1\langle c\rangle + v_2\langle c\rangle +\ldots \end{equation} Eq.
(\ref{CommutatorOfVectorFieldsDependingOnGhosts}) implies that $\Psi$ satisfies the MC equation: \begin{equation}\label{MaurerCartan} Q\Psi + {1\over 2}[\Psi,\Psi] = 0 \end{equation} \subsection{Gauge transformations}\label{sec:GaugeTransformations} Suppose that we replace $F\;:\;T_pM\to M$ with another function $\widetilde{F}=F\circ G$, where $G$ is any (nonlinear) function $T_pM\to T_pM$ such that $G(0)=0$ and $G_*(0)=\mbox{id}$. Then $\Psi$ gets replaced with $\widetilde{\Psi}$ where: \begin{equation}\label{GaugeTransformations} \widetilde{\Psi} = G_*^{-1}\Psi + G_*^{-1}QG \end{equation} This is the gauge transformation. An infinitesimal gauge transformation is: \begin{equation}\label{InfinitesimalGaugeTransformations} \delta_{\Phi}\Psi = Q\Phi + [\Psi,\Phi] \end{equation} \subsection{Tangent space to the moduli space of solutions of MC equation} The tangent space to the solutions of Eq. (\ref{MaurerCartan}) at the point $\Psi$ is the cohomology of the operator $Q + [\Psi,\_]$: \begin{equation} T_{\Psi}(MC) = H^1(Q + [\Psi,\_]) \end{equation} The Lie algebra of nonlinear\footnote{{\it i.e.} quadratic and higher orders in coordinates} vector fields has a filtration by the scaling degree. Therefore the cohomology $H^1(Q + [\Psi,\_])$ can be computed by a spectral sequence. The first page of this spectral sequence is: \begin{equation}\label{E1forTMC} E_1^{p,q} = H^q(Q, \mbox{Hom}(S^pL,L)) \end{equation} where $L = T_pM$. \subsection{Monodromy transformation}\label{sec:MonodromyTransformation} \paragraph {Additional assumption} We now have two actions of $\bf g$ on $T_pM$: the linearized one, which is given by $v_0$ of Eq. (\ref{ExpansionOfVectorField}), and the nonlinear action given by $F_*^{-1}v\langle\xi\rangle$.
Suppose that the linearized one integrates to some action $\rho_0$ of a group $G$: \begin{align} & \rho_0\;:\; G \longrightarrow \mbox{Hom}(T_pM, T_pM) \\ & \rho_{0*}(\xi) = \left.{d\over dt}\right|_{t=0}\rho_0\left(e^{t\xi}\right) \;=\; v_0\langle\xi\rangle \end{align} Suppose that this group $G$ has a non-contractible one-dimensional cycle. Consider the path ordered exponential over this non-contractible cycle. Without loss of generality, we can start and end the loop at the unit. We define: \begin{align}\label{MonodromyTransformation} m \;=\;& P\exp\left[\oint \rho_0(g)^{-1}\Psi\langle dgg^{-1}\rangle \rho_0(g)\right] = \\ \;=\;& \left(P\exp\left[\oint \rho_0(dg g^{-1}) + \Psi\langle dgg^{-1}\rangle \right]\right) \end{align} where $\rho_0(g)$ is the action of $G$ on $T_pM$. This is the monodromy transformation: \begin{equation} m\in \mbox{Map}(T_pM,T_pM) \end{equation} Notice that the derivative of $m$ at the point $0\in T_pM$ is zero. Therefore we can define its second derivative: \begin{equation} m'' \in \mbox{Hom}(S^2T_pM, T_pM) \end{equation} Eq. (\ref{MonodromyTransformation}) implies: \begin{align} m'' = & \oint \rho_0(g)^{-1}\Psi_2\langle dgg^{-1}\rangle \rho_0(g) \label{SecondDerivativeOfM}\\ \mbox{\tt\small where } & \Psi_2 = v_1 \mbox{ \tt\small of Eq. (\ref{ExpansionOfVectorField})} \end{align} Usually the cycle is such that $\dot{g}g^{-1}$ is constant. Then the meaning of the integration in Eq. (\ref{SecondDerivativeOfM}) is that we pick the {\em resonant terms} in the quadratic vector field $v_1$. \subsection{Symmetries}\label{sec:Symmetries} The monodromy transformation $m$ commutes with the action of $\bf g$, but we have to remember that the action of $\bf g$ is given by nonlinear vector fields --- see Eq. (\ref{ExpansionOfVectorField}). If it were given just by linearized vector fields, {\it i.e.} the $v_0\langle \xi\rangle$ of Eq. (\ref{ExpansionOfVectorField}), life would be easier. But this is, generally speaking, not the case.
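The extraction of resonant terms in Eq. (\ref{SecondDerivativeOfM}) can be illustrated in a minimal, hypothetical example: the circle acting on ${\bf R}^2\simeq{\bf C}$ by rotations. Conjugating a monomial vector field $z^a\bar{z}^b\partial_z$ by $z\mapsto e^{i\theta}z$ multiplies it by $e^{i\theta(a-b-1)}$, so averaging over the loop keeps only the resonant monomials with $a-b=1$:

```python
import sympy as sp

z, zb, theta = sp.symbols('z zbar theta')

def average(coeff):
    """Average the coefficient of a vector field coeff(z, zbar) d/dz over its
    conjugation by the rotation z -> exp(i theta) z; only resonant terms survive."""
    rot = {z: sp.exp(sp.I*theta)*z, zb: sp.exp(-sp.I*theta)*zb}
    conj = sp.powsimp(sp.expand(sp.exp(-sp.I*theta) * coeff.subs(rot, simultaneous=True)))
    return sp.integrate(conj, (theta, 0, 2*sp.pi)) / (2*sp.pi)

# no quadratic monomial is resonant for d/dz (a + b = 2 forbids a - b = 1) ...
assert sp.simplify(average(z**2 + z*zb + zb**2)) == 0
# ... while at cubic order the resonant monomial z^2 zbar survives the averaging
assert sp.simplify(average(z**3 + 5*z**2*zb + z*zb**2) - 5*z**2*zb) == 0
```

The absence of quadratic resonances in this toy example matches the situation, discussed below, where $m''$ vanishes on a subspace and one passes to $m'''$.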
Notice that $v_0\langle\xi\rangle = \xi{\partial\over\partial c}$. Instead of $\left[\xi{\partial\over\partial c}Q\,,\,\Psi\right]$ being zero, we have: \begin{equation}\label{Symmetries} \left[\xi{\partial\over\partial c}Q\,,\,\Psi\right]\;=\; \left[Q+\Psi\,,\,\xi{\partial\over\partial c}\Psi\right] \end{equation} But $m''$ of Eq. (\ref{SecondDerivativeOfM}) {\em does} commute with the undeformed action of $\bf g$ on $T_pM$ ({\it i.e.} with $v_0$). This is because $v_{\geq 1}$ are of quadratic and higher order, and the first derivative of $m$ vanishes. Sometimes $m''$ is zero on some subspace $L\subset T_pM$. Then, on this subspace, we can define the third derivative $m'''$. Suppose that, in addition, the restriction of $v_1$ to $L$ is parallel to $L$, {\it i.e.}: \begin{align} v_1'' \;:\; & S^2 T_pM \rightarrow T_pM \\ \mbox{\tt\small is such that: } & v_1''(S^2L)\subset L \end{align} Then $m'''$ commutes with the undeformed action of $\bf g$ on $T_pM$. \subsection{Closed subsectors}\label{sec:ClosedSubsectors} Suppose that $T_pM$, as a representation of $\bf g$, has an invariant subspace: \begin{equation} V \subset T_pM \end{equation} It may happen that the restriction of $\Psi$ to $V$ is tangent to $V$. This, essentially, means that $F(V)$ is closed under the action of the symmetry. In particular, the monodromy transformation of Section \ref{sec:MonodromyTransformation} acts within $V$. A sufficient condition for this is: \begin{equation} H^1\left( {\bf g}\;,\; \mbox{Hom}(S^nV\,,\, T_pM/V) \right) = 0 \end{equation} \section{Relation to tree level Feynman diagrams}\label{sec:FeynmanDiagrams} Here we will apply the formalism of Section \ref{sec:Linearization} to the case when $M$ is the space of solutions of some classical nonlinear field equations, constructed as perturbations of the zero solution $p\in M$.
This is different from the context of deformations of QFT (Section \ref{sec:IntroRenorm}), but the AdS/CFT correspondence establishes a relation between these two contexts (Section \ref{sec:AdS}). \subsection{Perturbation theory as a map $TM\rightarrow \Omega M$} Let us take $M$ to be the space of perturbative solutions $\phi$ of nonlinear equations of the form: \begin{equation}\label{NonlinearEquation} L\phi = f(\phi) \end{equation} where $L$ is some linear differential operator, and $f(\phi)$ is a nonlinear function describing the interaction. We assume that $f$ is a polynomial starting with terms of quadratic or higher order. The point $p\in M$ will be the zero solution $p=0$. Then $T_0M$ can be identified with the space of solutions of the linearized equation: \begin{equation}\label{LinearizedEqM} L\phi = 0 \end{equation} Tree level perturbation theory can be thought of as a 1-parameter map \begin{equation} F_{\epsilon}\;:\; T_0M \rightarrow M \end{equation} parameterized by a small parameter $\epsilon$. As explained in Section \ref{sec:GeometricaAbstraction}, it can also be understood as a map $T_0M \rightarrow \Omega_0M$. We will embed $M$ into the space $M_{\rm os}$ of all field configurations, not necessarily satisfying the equations of motion (the subindex ``os'' means ``off-shell''). We assume that $M_{\rm os}$ is a linear space. We consider $F\;:\;T_0M \rightarrow M$ as a function $T_0M \rightarrow M_{\rm os}$. It can be described as a sum of tree level Feynman diagrams. Every incoming leg corresponds to a solution of the linearized equation (\ref{LinearizedEqM}). Every internal leg and the outgoing leg each correspond to a propagator $L^{-1}$.
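This sum of tree diagrams can be generated by iterating the recursion relation (\ref{RecursionRelation}) below. Here is a minimal symbolic sketch; the choices are ours and purely hypothetical: the toy equation is $\dot\phi=\phi^2$ (so $L=d/dt$ and $f(\phi)=\phi^2$), $L^{-1}$ is the integral from $0$, and $\phi_0$ is a constant $c$. The iteration reproduces the $\epsilon$-expansion of the exact solution:

```python
import sympy as sp

t, eps, c = sp.symbols('t epsilon c')
ORDER = 5                      # keep terms up to epsilon^4

def Linv(expr):
    """A choice of L^{-1} for L = d/dt: integration from 0 to t."""
    anti = sp.integrate(expr, t)
    return anti - anti.subs(t, 0)

def truncate(expr):
    """Drop all terms of order epsilon^ORDER and higher."""
    expr = sp.expand(expr)
    return sum(expr.coeff(eps, m)*eps**m for m in range(ORDER))

# iterate F = eps*phi_0 + L^{-1} f(F), summing the tree diagrams order by order
F = sp.Integer(0)
for _ in range(ORDER):
    F = truncate(eps*c + Linv(sp.expand(F**2)))

# compare with the exact solution phi(t) = eps*c/(1 - eps*c*t) of dphi/dt = phi^2
exact = sp.series(eps*c/(1 - eps*c*t), eps, 0, ORDER).removeO()
assert sp.expand(F - exact) == 0
```

Each iteration adds one more internal vertex, so after $n$ iterations all tree diagrams with at most $n$ vertices have been summed.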
There is a recursion relation\footnote{the right hand side is a sum of two elements of $M_{\rm os}$; remember that we assumed that $M_{\rm os}$ is a linear space}: \begin{equation}\label{RecursionRelation} F[\phi_0] = \epsilon\phi_0\;+ L^{-1} f(F[\phi_0]) \end{equation} where $L^{-1}$ satisfies: \begin{equation}\label{LInverse} LL^{-1} = {\bf 1} \end{equation} The definition of the operator $L^{-1}$ has an ambiguity (because one can add a solution of the free equation). Suppose that we made some choice of $L^{-1}$. The dependence on the choice of $L^{-1}$ is controlled by Lemma \ref{theorem:GaugeTransformation} below. As we already explained, we need an embedding of $M$ into the linear space of off-shell field configurations $M_{\rm os}$, just because we want to add Feynman diagrams. Obviously, the space $T_0M$ of free solutions is also embedded into $M_{\rm os}$. Let us assume that the action of $\bf g$ on $M_{\rm os}$ agrees with this embedding. This is not really important, but we make this assumption for this Section. For example, suppose $\bf g$ contains the time translation ${\partial\over\partial t}$. We assume that it acts as $\delta \phi = \partial_t \phi$, both on $M$ and on $T_0M$. Let us define $\Psi$ as follows: \begin{align} \Psi\;\in\; & \mbox{Hom}\left(\Pi {\bf g}\;,\; \mbox{Vect}(T_0M)\right) \\ \Psi\langle c\rangle[\phi_0] \;=\; & [Q,L^{-1}]f(F[\phi_0]) \label{PsiViaFeynmanDiagrams} \end{align} (the dependence on $c$ on the RHS comes from $Q$). \paragraph {Lemma \arabic{Theorems}:\label{theorem:SamePsi}}\refstepcounter{Theorems} This $\Psi$ is the same $\Psi$ as defined in Eq. (\ref{DefPsi}): \begin{equation} \Psi = F_*^{-1} v\langle c\rangle - v_0\langle c\rangle \end{equation} \paragraph {Proof} We have to show that $F_*(v_0\langle c\rangle + \Psi) = v\langle c \rangle$.
In other words, for any $\xi\in {\bf g}$: \begin{align}\label{FStarOnRecurrentRelation} F_*\left(v_0\langle \xi\rangle + [\xi,L^{-1}] f(F[\phi_0])\right) = \xi F[\phi_0] \end{align} We will use: \begin{equation} F[\phi_0]_* \;=\; {\bf 1} + L^{-1} f_* F[\phi_0]_* \quad \mbox{\tt\small therefore}\quad F[\phi_0]_*^{-1} \;=\; {\bf 1} - L^{-1}f_* \label{InverseDerivativeMap} \end{equation} We have: \begin{align} \xi F[\phi_0] = \xi\phi_0 + [\xi, L^{-1}] f(F[\phi_0]) + L^{-1} f_*\xi F[\phi_0] \end{align} Together with Eq. (\ref{InverseDerivativeMap}) this implies: \begin{align} F[\phi_0]_*^{-1}\xi F[\phi_0]\;=\;\xi\phi_0 + [\xi, L^{-1}] f(F[\phi_0]) \end{align} The proof can be put in slightly different words, as follows. Notice: \begin{align} QF[\phi_0] \;&= Q\left( \phi_0 + L^{-1}f(\phi_0) + L^{-1}f(L^{-1}f(\phi_0)) + \ldots \right) \end{align} Every time $Q$ hits $\phi_0$, we get $v_0\langle c\rangle$: \begin{equation} \left.{d\over dt}\right|_{t=0}\Big( tv_0\langle c\rangle + L^{-1}f(\phi_0 + tv_0\langle c\rangle) + L^{-1}f(L^{-1}f(\phi_0 + tv_0\langle c\rangle)) + \ldots \Big) \end{equation} --- this gives the $F_*v_0\langle c\rangle$ term on the LHS of Eq. (\ref{FStarOnRecurrentRelation}). And when $Q$ hits one of the $L^{-1}$, we get $F_*\Big([Q,L^{-1}]f(F[\phi_0])\Big)$. \paragraph {Lemma \arabic{Theorems}:\label{theorem:GaugeTransformation}}\refstepcounter{Theorems} An infinitesimal variation of $L^{-1}$: \begin{align} & L^{-1} \mapsto L^{-1} + \delta L^{-1} \end{align} where $\delta L^{-1}$ satisfies $L \delta L^{-1} = 0$, corresponds to an infinitesimal gauge transformation of $\Psi$ (see Eq. 
(\ref{InfinitesimalGaugeTransformations})) where: \begin{equation} \Phi = \delta L^{-1} f(F[\phi_0]) \end{equation} \paragraph {Proof} \begin{align} \delta\Psi(\phi_0) \;=\; & \delta \left([Q,L^{-1}] f(F[\phi_0])\right) = \nonumber\\ \;=\; & [Q,\delta L^{-1}] f(F[\phi_0]) + [Q,L^{-1}] f(F[\phi_0])_* \delta F[\phi_0]\;= \nonumber\\ \;=\; & [Q,\Phi] - \delta L^{-1} f(F[\phi_0])_* F[\phi_0]_* \Psi + [Q,L^{-1}] f(F[\phi_0])_* \delta F[\phi_0]\;= \nonumber\\ \;=\; & [Q,\Phi] + [\Psi,\Phi] \end{align} \subsection{Amputation of the last leg}\label{sec:AmputationOfTheLastLeg} We will now present a slightly different point of view on the construction. Suppose that for every linearized solution $\phi_0$ we constructed a nonlinear solution $\phi$ (depending on a small parameter $\epsilon$). What should we do with $\phi$ to obtain $\Psi\langle c\rangle$? Remember that $\Psi\langle c\rangle$ is a (nonlinear) vector field on the space of linearized solutions. Obviously, we have to somehow ``project'' $\phi$ to a linearized solution. According to Eq. (\ref{PsiViaFeynmanDiagrams}) we should remove the last leg, and replace it with $[Q,L^{-1}]$: \begin{equation} \Psi = [Q,L^{-1}]f(\phi) = [Q,L^{-1}]L \phi \end{equation} Remember that $L^{-1}$ satisfies Eq. (\ref{LInverse}): \begin{equation} LL^{-1}= {\bf 1} \end{equation} Let us define the ``amputator'' $A$ as the composition: \begin{align} A := [Q,L^{-1}]L\;=\;& [P,Q] \\ \mbox{\tt\small where } & P = \left({\bf 1} - L^{-1}L\right) \end{align} (Notice that $P$ is a projector to $\mbox{ker}L$.) It satisfies\footnote{Actually, any operator of the form $[P,Q]$, where $P^2=P$ and $PQP=QP$, is nilpotent; the nilpotence of $Q$ is not necessary for the nilpotence of $A$.}: \begin{equation} A^2 = 0 \end{equation} If $\phi$ is our perturbative solution ({\it i.e.} $\phi = \phi_0 + L^{-1} f(\phi)$), then: \begin{equation} [Q,L^{-1}]f(\phi) = A\phi \end{equation} This leads to the following interpretation.
The ``projector'' $P$ can be interpreted as a map $M\rightarrow T_0M$ (Section \ref{sec:ActionOfSymmetry}), the inverse of $F$. Then, again, $[P,Q] = F_*^{-1}v - v_0$. \subsection{Trivial example}\label{sec:TrivialExample} Consider a vector field ${\bf V}\in \mbox{Vect}({\bf R}^n)$. Suppose that $\bf V$ vanishes at $0\in {\bf R}^n$, and the derivative of $\bf V$ also vanishes at $0$: \begin{equation} {\bf V}(0) = 0 \mbox{ \tt\small and } {\bf V}'(0)=0 \end{equation} Consider the following equation: \begin{equation}\label{FlowEquation} {d\over dt} {\bf x}(t) = {\bf V}({\bf x}(t)) \end{equation} In the notations of Section \ref{sec:FeynmanDiagrams}, $M$ is the space of all solutions of Eq. (\ref{FlowEquation}) and $L = d/dt$. This equation is invariant under translations of $t$, generating ${\bf g} = {\bf R}$. The generator of ${\bf g}$ is $\xi = d/dt$. Let us construct solutions perturbatively in the vicinity of the constant solution: \begin{align} p\;:\; & {\bf x}(t) = 0 \\ T_0M \;=\;& {\bf R}^n \mbox{ \tt\small (constant $\bf x$)} \end{align} Any function $f(t)$ can be expanded in a Taylor series around $t=0$. We define $L^{-1}$ as follows: \begin{equation} L^{-1} t^m = {1\over m+1} t^{m+1} \end{equation} The map $F_{\epsilon}\;:\;T_0M \rightarrow M$ of Eq. (\ref{RecursionRelation}) is: \begin{equation} {\bf x}_0 \mapsto \epsilon {\bf x}_0 + t {\bf V}(\epsilon {\bf x}_0) + \ldots \end{equation} We observe: \begin{equation}\label{AmputatorTrivial} \left[ {d\over dt} \, , \, L^{-1} \right] t^m \;=\; \left\{\begin{array}{l} 1 \mbox{ \tt\small if } m=0 \cr 0 \mbox{ \tt\small if } m>0 \end{array} \right. \end{equation} According to Eq. (\ref{PsiViaFeynmanDiagrams}), to construct $\Psi$ we have to take ${\bf V}(\epsilon {\bf x}_0 + t {\bf V}(\epsilon {\bf x}_0) + \ldots)$ and hit it with $\left[ {d\over dt} \, , \, L^{-1} \right]$, which, according to Eq. (\ref{AmputatorTrivial}), amounts to setting $t=0$.
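This can be verified symbolically in a minimal sketch (our hypothetical choice: $n=1$ and ${\bf V}(x)=x^2$, whose flow through $\epsilon x_0$ integrates to $x(t)=\epsilon x_0/(1-\epsilon x_0 t)$):

```python
import sympy as sp

t, x0, eps = sp.symbols('t x_0 epsilon')

def Linv(p):
    """L^{-1} t^m = t^{m+1}/(m+1), i.e. integration from 0."""
    anti = sp.integrate(p, t)
    return anti - anti.subs(t, 0)

def amputate(p):
    """The commutator [d/dt, L^{-1}] acting on a polynomial in t."""
    return sp.diff(Linv(p), t) - Linv(sp.diff(p, t))

# Eq. (AmputatorTrivial): [d/dt, L^{-1}] t^m = 1 if m = 0, else 0
assert [amputate(t**m) for m in range(4)] == [1, 0, 0, 0]

# hypothetical flow V(x) = x^2: expand the solution through eps*x_0 in powers of t
x = sp.series(eps*x0/(1 - eps*x0*t), t, 0, 6).removeO()
Psi = amputate(sp.expand(x**2))            # amputation picks the t^0 coefficient ...
assert sp.expand(Psi - (eps*x0)**2) == 0   # ... so Psi = V(eps*x_0)
```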
Therefore, in this case: \begin{equation} \Psi = c {\bf V} \end{equation} The gauge transformations of Eq. (\ref{InfinitesimalGaugeTransformations}) are: \begin{equation} \delta_{\Phi} \Psi = c [{\bf V},\Phi] \end{equation} where $\Phi\in\mbox{Vect}({\bf R}^n)$ is another vector field. Therefore, in this case the MC invariant computes the normal form of the vector field $\bf V$ ({\it i.e.} $\bf V$ modulo nonlinear changes of coordinates). In this example we had ${\bf g} = {\bf R}$, and the structure of the first cohomology group was rather tautological: $H^1({\bf g}, L)$ was the space of $\bf g$-invariants in $L$. In the next example we will consider the non-abelian ${\bf g} = so(2,d)$, with much more interesting (smaller!) cohomologies. In particular, $H^1({\bf g},V)$ can be nonzero only for infinite-dimensional $V$. \subsection{Example: classical CFT on ${\bf R}\times S^{d-1}$}\label{sec:ClassicalCFT} Consider a conformally invariant classical theory on the Lorentzian ${\bf R}\times S^{d-1}$, for example the $\phi^4$ theory in $d=4$. \subsubsection{Realization of ${\bf R}\times S^{d-1}$ as the base of the lightcone} We will use the same notations as in Appendix \ref{sec:EmbeddingFormalism}. We denote: \begin{equation} d = D - 1 \end{equation} Consider the light cone in ${\bf R}^{2,D-1}$ ({\it cp} Eq.
(\ref{Hyperboloid})): \begin{equation}\label{LightCone} I^2 := |Z|^2 - \sum\limits_{i=1}^d X_i^2 = 0 \end{equation} A convenient model for the conformal ${\bf R}\times S^{d-1}$ is the projectivization of the light cone, which is parametrized by $(Z,X_1,\ldots,X_d)$ satisfying (\ref{LightCone}) modulo the equivalence relation: \begin{equation} (Z,X_1,\ldots,X_d) \simeq (\lambda Z,\lambda X_1,\ldots,\lambda X_d)\;,\;\; \lambda\in{\bf R} \end{equation} A density of the weight $w$ is a function $\sigma(Z,X_1,\ldots,X_d)$ satisfying: \begin{equation} \sigma(\lambda Z,\lambda X_1,\ldots,\lambda X_d) = \lambda^{-w} \sigma(Z,X_1,\ldots,X_d) \end{equation} modulo functions divisible by $I^2$. Let ${\bf D}_w$ denote the space of such densities. The conformally invariant d'Alembert operator acts as follows: \begin{align} L \;:&\; {\bf D}_{d-2\over 2} \rightarrow {\bf D}_{d+2\over 2} \\ L\;\sigma \;=&\; \left( 4{\partial\over\partial Z}{\partial\over\partial \overline{Z}} - \sum\limits_{i=1}^d {\partial^2\over \partial X_i^2} \right)\sigma \end{align} This operator is only well-defined with this value of $w$, because for other values of $w$ it would not annihilate, modulo $I^2$, those functions which are divisible by $I^2$.
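This can be checked directly: if $h$ has homogeneity degree $-{d+2\over 2}$ (so that $I^2h$ represents the zero density in ${\bf D}_{d-2\over 2}$), then $L(I^2h)=I^2\,L(h)$, while for any other homogeneity an extra term proportional to $h$ appears. A quick symbolic check (the choice $d=4$ and the particular $h$'s are ours, for illustration only):

```python
import sympy as sp

Z, Zb = sp.symbols('Z Zbar')
X = sp.symbols('X1:5')                    # X_1 .. X_4, i.e. d = 4
I2 = Z*Zb - sum(x**2 for x in X)          # the lightcone quadric

def L(sigma):
    """Conformally invariant d'Alembertian in these coordinates."""
    return 4*sp.diff(sigma, Z, Zb) - sum(sp.diff(sigma, x, 2) for x in X)

# h of homogeneity degree -(d+2)/2 = -3: L(I^2 h) = I^2 L(h),
# so L descends to densities (functions modulo multiples of I^2)
h = Z**(-2)*Zb**(-1)
assert sp.simplify(L(I2*h) - I2*L(h)) == 0

# wrong homogeneity (degree -2): L no longer preserves multiples of I^2
h_bad = Z**(-1)*Zb**(-1)
assert sp.simplify(L(I2*h_bad) - I2*L(h_bad)) != 0
```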
The elements of the kernel of $L$, {\it i.e.} the solutions of free field equations, are real sums of positive and negative frequency waves: \begin{align} \sigma \;=\; & \sigma_+ + \sigma_- \label{FreeSolutions}\\ \mbox{\tt\small where } & \sigma_+ \;=\; {p(X)\over Z^{\mbox{deg}(p) + {d-2\over 2}}} \\ & \sigma_- \;=\; {\overline{p}(X)\over \overline{Z}^{\mbox{deg}(\overline{p}) + {d-2\over 2}}} \end{align} \subsubsection{Conformal symmetry} Besides the rotations of $S^{d-1}$, there are also the following conformal transformations: \begin{align} E \;& = Z{\partial\over\partial Z} - \overline{Z}{\partial\over\partial\overline{Z}} \label{GenE}\\ K_i \;& = 2 X_i {\partial\over\partial Z} + \overline{Z} {\partial\over\partial X_i} \label{GenK}\\ \overline{K}_i \; & = 2 X_i {\partial\over\partial \overline{Z}} + Z {\partial\over\partial X_i} \label{GenBarK} \end{align} \subsubsection{Amputator}\label{sec:Amputator} Introduce the Lie algebra cocycle $C$: \begin{align} C \;\in\; & Z^1\left( {\bf g}\;,\; \mbox{Hom}\left( {\bf D}_{d/2 + 1}\,,\, \mbox{ker}\left({\bf D}_{d/2 - 1}\stackrel{L}{\longrightarrow} {\bf D}_{d/2 + 1}\right) \right) \right) \nonumber \\ C\;=\; & [Q, L^{-1}] \label{CocycleC} \end{align} As we explained in Section \ref{sec:AmputationOfTheLastLeg}, given a perturbative solution $F[\phi_0]$, the corresponding solution of the MC Eq. (\ref{MaurerCartan}) is: \begin{equation} \Psi = Cf(F[\phi_0]) \end{equation} Consider elements of ${\bf D}_{d+2\over 2}$ periodic in global time. 
Any such element can be written as: \begin{equation}\label{PeriodicDensity} \sigma = \sum_{\rho,\overline{\rho}} {p_{\rho,\overline{\rho}}(X)\over \overline{Z}^{\overline{\rho}} Z^{\rho}} \end{equation} where the summation is over pairs of integers $(\rho,\overline{\rho})$ and $p_{\rho,\overline{\rho}}(X)$ is a {\em harmonic polynomial} of $X_1,\ldots,X_d$ of the following degree: \begin{equation} \mbox{deg}(p_{\rho,\overline{\rho}}) = \overline{\rho} + \rho - {d+2\over 2} \end{equation} We define $L^{-1}\;:\;{\bf D}_{d+2\over 2}\rightarrow {\bf D}_{d-2\over 2}$ as follows: \begin{equation} L^{-1}\left({p(X)\over \overline{Z}^{\overline{\rho}} Z^{\rho}}\right) \;=\; \left\{ \begin{array}{rl} \mbox{\tt if }\rho\neq 1 \mbox{ \tt and } \overline{\rho}\neq 1\;:\; & {1\over 4}{1\over (\overline{\rho} - 1)(\rho - 1)}\;{p(X)\over \overline{Z}^{\overline{\rho} - 1} Z^{\rho - 1}} \cr \mbox{\tt if }\rho = 1\;:\; & - {1\over 4}{1\over (\overline{\rho} - 1)}\left(\mbox{log} {Z\over\overline{Z}}\right)\; {p(X)\over \overline{Z}^{\overline{\rho} - 1}} \cr \mbox{\tt if }\overline{\rho} = 1\;:\; & {1\over 4}{1\over (\rho - 1)}\left(\mbox{log} {Z\over\overline{Z}}\right)\; {p(X)\over Z^{\rho - 1}} \cr \mbox{\tt if }\rho = \overline{\rho} = 1\;:\; & - {1\over 8}\left(\mbox{log} {Z\over\overline{Z}}\right)^2p(X)\; \end{array} \right.
\label{Propagator} \end{equation} Therefore: \begin{align} [L^{-1}, K_i] \left( {p(X)\over \overline{Z}^{\overline{\rho}}Z} \right) \;& =\; {1\over 4} {\partial_i p(X)\over (\overline{\rho}-1)^2\overline{Z}^{\overline{\rho}-2}} \label{AK}\\ [L^{-1},\overline{K}_i] \left( {p(X)\over \overline{Z}^{\overline{\rho}}Z} \right) \;& =\; {1\over 4}{\left[||X||^2 \partial_i - 2X_i (X_j \partial_j) - (d-2) X_i\right]\;p(X) \over (\overline{\rho}-1)^2 \overline{Z}^{\overline{\rho}}} \label{AKbar}\\ [L^{-1},E] \left({p(X)\over \overline{Z}^{\overline{\rho}} Z}\right) \;& =\; {1\over 2} {p(X)\over (\overline{\rho}-1) \overline{Z}^{\overline{\rho}-1}} \label{AE} \end{align} These formulas {\em partially} define a cohomology class $C$ of Eq. (\ref{CocycleC}). The definition is only partial, because Eq. (\ref{PeriodicDensity}) does not describe all elements of ${\bf D}_{d+2\over 2}$, but only those which are periodic in the global time $t= {1\over i}\mbox{log}{Z\over\overline{Z}}$. Generic elements are linear combinations of: \begin{equation}\label{DensityWithPowersOfT} t^k {p(X)\over \overline{Z}^{\overline{\rho}} Z^{\rho}} \end{equation} To completely specify $C$, we have to define $L^{-1}$ on elements containing powers of $t$, and compute the commutators for them, as in Eqs. (\ref{AK}), (\ref{AKbar}), (\ref{AE}). We will not do this here. \subsubsection{Relation to renormgroup} Our discussion of the classical field theory solutions in this section is a warm-up. However, it {\em is} related to renormalization. Given a set of operators $\{{\cal O}_I\}$, {\it e.g.} ${\cal O}_{i_1,\ldots i_N} = \partial_{i_1}\cdots\partial_{i_N}\phi$, and a set of infinitesimal coefficients $\epsilon_I$, let us define the coherent state, schematically: \begin{equation}\label{CoherentState} \exp \left( \sum_I\epsilon_I{\cal O}_I\right) \end{equation} which in the classical limit corresponds to a classical solution. This, of course, requires regularization.
Therefore, the map \begin{equation}\label{QuantumMap} \sum_I\epsilon_I{\cal O}_I \mapsto \left.\exp \left( \sum_I\epsilon_I{\cal O}_I\right)\right|_{\rm renormalized} \end{equation} does not commute with the action of the symmetries. What we studied in this section must be the classical limit of this map. This requires further study. \subsection{Comments on the structure of $\Psi$} \subsubsection{$\Psi$ is simpler than perturbative classical solutions}\label{sec:PsiIsSimple} Let us continue with the example of the previous section. Generally speaking, a perturbative solution is a sum of expressions of the form: \begin{equation} t^k{p(X)\over \overline{Z}^{\overline{\rho}} Z^{\rho}} \quad \mbox{\tt\small where } \mbox{deg}(p) = \rho + \overline{\rho} - {d-2 \over 2} \end{equation} However, after the replacement of the last leg with $C$ of Eq. (\ref{CocycleC}), the resulting expression does not contain ``bare'' $t$ ({\it i.e.} it contains $t$ only {\it via} exponentials). Indeed, $Cf(\phi)$ is {\em a solution of the free field equations}. Solutions of the free field equations do not contain powers of $t$; they only involve expressions of the form Eq. (\ref{FreeSolutions}). In this sense, the amputated $\phi$ is much simpler than the full perturbative solution. \begin{center}\begin{minipage}{0.85\textwidth}{\small As we mentioned at the end of Section \ref{sec:Amputator}, we did not actually compute the amputator on the field configurations containing powers of $t$ (Eq. (\ref{DensityWithPowersOfT})). But we know in advance that the resulting expression will not contain any powers of $t$. }\end{minipage}\end{center} \vspace{5pt} \noindent Moreover, we know that $\Psi$ satisfies a constraint: the Maurer-Cartan Eq. (\ref{MaurerCartan}). In some situations, this might allow for some partial bootstrap, see Section \ref{sec:Bootstrap}. There is a price to pay: the definition of $\Psi$ contains an ambiguity. We could have chosen a different $L^{-1}$. This corresponds to the gauge transformation of Eq.
(\ref{GaugeTransformations}). Moreover, the condition of $\bf g$-covariance is complicated, as we now explain. \subsubsection{The condition of $\bf g$-covariance is complicated} ({\it cp} Section \ref{sec:Symmetries}) Under the false impression that all non-covariance is due to the ``resonant'' factors $\mbox{log}{Z\over\overline{Z}}$, one might conjecture that $\Psi\langle\xi\rangle$ is ${\bf g}$-covariant in the sense that: \begin{equation}\label{FalseCovariance} [\eta,\Psi\langle\xi\rangle] = \Psi\langle [\eta,\xi]\rangle \quad\mbox{(wrong)} \end{equation} This, however, is not the case. At least when $\bf g$ is a semisimple Lie algebra, Eq. (\ref{FalseCovariance}) is incompatible with the MC equation: \begin{equation} \Psi\langle [\eta,\xi] \rangle = [\eta,\Psi\langle\xi\rangle] - [\xi,\Psi\langle\eta\rangle] + [\Psi\langle\eta\rangle,\Psi\langle\xi\rangle] \end{equation} because $\Psi$ takes values in vector fields of degree 1 and higher. A semisimple Lie algebra cannot be represented by the vector fields of degree 1 and higher. In fact, it follows immediately from Eqs. (\ref{AK}), (\ref{AKbar}) and (\ref{AE}) that with $L^{-1}$ chosen as in Eq. (\ref{Propagator}), $\Psi\langle\xi\rangle$ is zero when $\xi$ is an infinitesimal rotation of $S^{d-1}$. This already contradicts Eq. (\ref{FalseCovariance}). Instead of the simple but wrong Eq. (\ref{FalseCovariance}), we have the more complicated Eq. (\ref{Symmetries}). \subsubsection{Can $\Psi$ be bootstrapped?}\label{sec:Bootstrap} Consider an infinitesimal $G$-preserving deformation $s$ of the action which is a monomial of order $n$ in the elementary fields.
Then the corresponding cocycle, representing a class of $H^1\left(Q,\mbox{Hom}(S^{n-1} T_0{\cal S},T_0{\cal S})\right)$, is given by the expression: \begin{equation}\label{TermsOfLowestOrder} [Q,L^{-1}] {\delta^{n-1}s\over \delta \phi^{n-1}} \end{equation} Could it be that {\em all} of $H^1\left(Q,\mbox{Hom}(S^{n-1} T_0{\cal S},T_0{\cal S})\right)$ is exhausted by the expressions of this form for various ${\bf g}$-preserving deformations? This is certainly {\em not} true for SUGRA on $AdS_5\times S^5$. But in situations where this is true, Eq. (\ref{MaurerCartan}) allows one to compute $\Psi$ recursively, modulo the gauge equivalence described in Section \ref{sec:GaugeTransformations}, starting from the terms of the lowest order in $\phi$ given by Eq. (\ref{TermsOfLowestOrder}). \section{Supergravity in AdS space}\label{sec:AdS} \subsection{Holographic renormgroup} Consider Type IIB SUGRA in $AdS_5\times S^5$ and $N=4$ supersymmetric Yang-Mills on the boundary. We can proceed in two ways, which are equivalent because of the AdS/CFT duality: \paragraph {- Renormgroup flow on the boundary} We choose some map from the space of linearized deformations of the $N=4$ SYM theory to the space of finite deformations. There is no way to fix such a map preserving $\bf g$, so we want to study the deviation from $\bf g$-invariance, in the context of Section \ref{sec:Linearization}. \paragraph {- Classical solutions of SUGRA in the bulk} We fix some map from the space of solutions of the linearized SUGRA equations to the space of nonlinear solutions. Then we study the deviation of this map from being $\bf g$-invariant, as in Section \ref{sec:Linearization}. \vspace{10pt} \noindent Here we will discuss this second approach. In fact, $\bf g$ is a subalgebra of a larger superalgebra, the superalgebra of gauge transformations of supergravity.
This requires a generalization of the formalism of Section \ref{sec:Linearization}, which we will describe in Section \ref{sec:GeometricaAbstractionAdS}. \subsection{Gauge transformations of supergravity}\label{sec:GaugeTransformationsSUGRA} The precise description of the gauge transformations of supergravity depends, generally speaking, on the formalism. Any theory of supergravity necessarily includes the group of space-time super-diffeomorphisms (= coordinate transformations), as gauge transformations. Those theories which have a B-field should also include gauge transformations of the B-field. In the case of the bosonic string, they have recently been discussed in \cite{Schulgin:2014gra}; there, gauge transformations correspond to BRST-exact vertices. In the pure spinor formalism, we are not aware of any reference discussing gauge transformations specifically. A priori, the coordinate transformations should also include transformations of the pure spinor ghosts; these are in fact gauge-fixed \cite{Berkovits:2001ue}. Our approach to holographic renormgroup is based on the study of the normal form of the action of the group of SUGRA gauge transformations in the vicinity of the AdS solution. We will now develop a geometrical abstraction for that. The construction parallels Section \ref{sec:GeometricaAbstraction}, the main difference being that instead of a point $p\in M$ we have to consider a degenerate orbit ${\cal O}\subset M$. We will now explain the details. \subsection{Geometrical abstraction}\label{sec:GeometricaAbstractionAdS} Consider a Lie supergroup $A$ acting on a supermanifold $M$ with a subgroup $G\subset A$ preserving a point $p\in M$.
Moreover, we will assume that: \begin{itemize}\item The action of $A$ on $M$ is free in a neighborhood of $p$ except at the orbit of $p$\end{itemize} Let $\cal O$ denote the orbit of $p$: \begin{equation} {\cal O} = Ap \end{equation} In the context of holographic renormgroup: \begin{itemize} \item $M$ is the space of SUGRA solutions in the vicinity of pure AdS \item $p$ is $AdS_5\times S^5$ with some fixed choice of coordinates and zero $B$-field \item $A$ is the group of all gauge transformations of supergravity, and ${\bf a} = \mbox{Lie} A$ is its Lie superalgebra \item $G\subset A$ is the subgroup preserving $AdS_5\times S^5$, and ${\bf g} = {\bf psu}(2,2|4)$ its Lie superalgebra \item $\cal O$ is the space of all metrics which can be obtained from the fixed $AdS_5\times S^5$ metric by coordinate redefinitions, and exact $B$-field \end{itemize} Introduce the coordinates $\alpha^I$ on $\cal O$, and coordinates $x^a$ in the directions transverse to $\cal O$. Consider the normal bundle $N{\cal O}$ of $\cal O$: \begin{equation} 0 \longrightarrow T{\cal O} \longrightarrow TM|_{\cal O} \longrightarrow N{\cal O} \longrightarrow 0 \end{equation} The action of $A$ on $M$ induces an action of $A$ on $N{\cal O}$. Let $I^{\infty} {\cal O}$ denote the formal neighborhood of $\cal O$ in $M$. It also comes with an action of $A$. Consider the following question: can we find a family of maps, parameterized by $\epsilon\in{\bf R}$: \begin{equation}\label{MapFromNOToI} F_{\epsilon} \;:\; N{\cal O} \longrightarrow I^{\infty}{\cal O} \end{equation} satisfying ({\it cp} Eqs.
(\ref{MustPassThroughP}), (\ref{DerivativeIsId}), (\ref{RescaleEpsilon})): \begin{align} & F_0 \;=\; \pi : N{\cal O} \longrightarrow {\cal O} \label{MustPassThroughO}\\ & \left.{d\over d\epsilon}\right|_{\epsilon = 0}f\circ F_{\epsilon}\circ \gamma \;=\; {\cal L}_{\gamma} f \quad \forall \gamma \in \Gamma(N{\cal O}), f\in \mbox{Fun}(M), f|_{\cal O} = 0, \label{DerivativeInNormalDirection}\\ & F_{\kappa\epsilon} \;=\; F_{\epsilon}\circ R_{\kappa} \quad \mbox{where $R_{\kappa}$ rescales by $\kappa$ in the fiber} \label{RescaleEpsilonO} \end{align} commuting with the action of $A$? It turns out that an obstacle appears already at the linearized level. The normal bundle is not the same as the first infinitesimal neighborhood: the space of functions on $I^1{\cal O}$ is not the same, as a representation of $\bf a$, as the space of functions on $N{\cal O}$ which are of degree at most one on the fibers --- see Eq. \eqref{ObstacleToNI} below. We will now proceed to the study of the obstacles to the existence of $F_{\epsilon}$. \subsubsection{General vector field tangent to $\cal O$}\label{GeneralVectorField} Let us fix some $F_{\epsilon}$ satisfying Eqs. (\ref{MustPassThroughO}), (\ref{DerivativeInNormalDirection}), (\ref{RescaleEpsilonO}). Since $A$ acts on $M$, every element of the Lie algebra $\xi\in\bf a$ defines a vector field $v\langle\xi\rangle$ on $M$.
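Note that Eq. (\ref{RescaleEpsilonO}) by itself fixes the $\epsilon$-dependence of the family in terms of a single map: setting $\epsilon = 1$ and renaming $\kappa\to\epsilon$,
\begin{equation}
F_{\epsilon} \;=\; F_1\circ R_{\epsilon}
\end{equation}
so that pulling back functions by $F_{\epsilon}$ attaches one power of $\epsilon$ to each power of the fiber coordinates $x^a$. This is why, in the expansions of this Section, the power of $\epsilon$ keeps track of the degree in $x$.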
Then, $F_{\epsilon *}^{-1} v\langle\xi\rangle$ is a vector field on $N{\cal O}$: \begin{align} F_{\epsilon *}^{-1}v\langle\xi\rangle \;=\; & \phantom{+} \left( (u\langle\xi\rangle(\alpha))^I + \sum_{n\geq 1}\epsilon^n(\Theta\langle\xi\rangle(\alpha))^I_{b_1\cdots b_n} x^{b_1}\cdots x^{b_n} \right) {\partial\over\partial \alpha^I} \;+ \nonumber \\\mbox{} & + \left( (v_0\langle\xi\rangle(\alpha))^a_b x^b + \sum_{n\geq 2}\epsilon^{n-1}(\Psi\langle\xi\rangle(\alpha))^a_{b_1\cdots b_n} x^{b_1}\cdots x^{b_n} \right) {\partial\over\partial x^a} \end{align} It should satisfy: \begin{equation}\label{CommutatorInIO} [F_{\epsilon *}^{-1}v\langle\xi\rangle\,,\,F_{\epsilon *}^{-1}v\langle\eta\rangle] = F_{\epsilon *}^{-1}v\langle[\xi,\eta]\rangle \end{equation} The power of $\epsilon$ correlates with the degree in the $x$-expansion. As an abbreviation, we will omit $\epsilon$ in the following formulas. \paragraph {Big Maurer-Cartan equation} Let us introduce the Faddeev-Popov ghosts $\hat{c}\in \Pi {\bf a}$ for $\bf a$. Let us denote: \begin{align}\hat{Q}\;=\; & (u\langle \hat{c}\rangle (\alpha))^I{\partial\over\partial\alpha^I} + (v_0\langle \hat{c}\rangle(\alpha))^a_b x^b {\partial\over\partial x^a} + {1\over 2} f_{AB}^C \hat{c}^A \hat{c}^B {\partial\over\partial \hat{c}^C} \\\hat{\Psi}\;=\; & \sum_{n\geq 1}(\Theta\langle\hat{c}\rangle(\alpha))^I_{b_1\cdots b_n} x^{b_1}\cdots x^{b_n} {\partial\over\partial \alpha^I} + \sum_{n\geq 2}(\Psi\langle\hat{c}\rangle(\alpha))^a_{b_1\cdots b_n} x^{b_1}\cdots x^{b_n} {\partial\over\partial x^a} \end{align} Eq. (\ref{CommutatorInIO}) implies: \begin{equation}\label{BigMC} \hat{Q} \hat{\Psi} + {1\over 2}[\hat{\Psi},\hat{\Psi}] = 0 \end{equation} \subsubsection{Reduction to ${\bf g}\subset {\bf a}$}\label{sec:VicinityOfPoint} We want to reduce from $\bf a$ to $\bf g$, for the following reasons: \begin{itemize} \item The $\hat{Q}$ of Eq. (\ref{BigMC}) is too complicated.
Instead of acting on functions of $x$, it acts on functions on the first infinitesimal neighborhood of $\cal O$. The expansion in powers of $x$ is tensored with arbitrary functions of $\alpha^I$. \item In principle, $\bf a$ is ``implementation-dependent''; different descriptions of supergravity may have slightly different gauge symmetries \item On the field theory side, we only have $\bf g$ and not $\bf a$ \end{itemize} Therefore, we will now investigate the reduction to ${\bf g}\subset {\bf a}$. We will start by concentrating on the vicinity of the point $p\in {\cal O}$. Suppose that at the point $p$: $\alpha^I=0$. Again, we will separate $v\langle\xi\rangle$ into linear and non-linear parts. For $\xi\in\bf g$ (\textit{i.e.} the stabilizer of $p$), $v\langle\xi\rangle(p)=0$, therefore: \begin{align} F^{-1}_{\epsilon *}v\langle\xi\rangle\;=\; &q\langle\xi\rangle + \sum_{m+n\geq 2\atop m\geq 1} \psi\langle\xi\rangle^a_{a_1\cdots a_m,I_1\cdots I_n} x^{a_1}\cdots x^{a_m}\alpha^{I_1}\cdots\alpha^{I_n}{\partial\over\partial x^a} + \nonumber\\ & \phantom{q\langle\xi\rangle} + \sum_{m+n\geq 2} \theta\langle\xi\rangle^I_{a_1\cdots a_m,I_1\cdots I_n} x^{a_1}\cdots x^{a_m}\alpha^{I_1}\cdots\alpha^{I_n}{\partial\over\partial \alpha^I} \label{vNearP}\\ \mbox{} &\mbox{where}\nonumber\\q\langle\xi\rangle\;=\; & \rho_{gauge}\langle\xi\rangle^I_J \alpha^J {\partial\over\partial\alpha^I} + \rho_{phys}\langle\xi\rangle^a_b x^b {\partial\over\partial x^a} + \theta\langle\xi\rangle^I_a x^a {\partial\over\partial \alpha^I} \end{align} Schematically: \begin{align}q\langle\xi\rangle\;=\; &[x \; \alpha] \left[\begin{array}{cc} \rho_{phys} & \theta \cr 0 & \rho_{gauge} \end{array} \right] \left[\begin{array}{c} \partial_x \cr \partial_{\alpha} \end{array}\right] \end{align} This defines an extension of the space of physical states by the gauge transformations: \begin{align}\mbox{} &0\longrightarrow {{\bf a}\over {\bf g}} \longrightarrow T_p M \longrightarrow {\cal H}\longrightarrow 0\label{ObstacleToNI}\\\mbox{where } &{\cal H} = N_p{\cal O}\\\mbox{} &{{\bf a}\over {\bf g}} = T_p{\cal O}\end{align} The tensor $\theta\langle c\rangle^I_a$ defines a cocycle --- an element of $C^1({\bf g}, \mbox{Hom}({\cal H}, {{\bf a} / {\bf g}}))$. Its class in $\mbox{Ext}^1_{\bf g}\left({\cal H}, {{\bf a} / {\bf g}}\right)$ is the obstacle to finding a $\bf g$-invariant map $N_p{\cal O}\longrightarrow T_pM$. If this obstacle is zero, then by a linear coordinate redefinition we can put $\theta^I_a=0$, \textit{i.e.} $q\langle\xi\rangle = \rho_{gauge}\langle\xi\rangle^I_J \alpha^J {\partial\over\partial\alpha^I} + \rho_{phys}\langle\xi\rangle^a_b x^b {\partial\over\partial x^a}$. In fact, we want to remove all $\theta^I_{a_1\cdots a_n}$, not only $\theta^I_a$. The first non-vanishing coefficient $\theta^I_{a_1\cdots a_n}$ defines a class in: \begin{equation}\label{ObstaclesToSplit} \mbox{Ext}_{\bf g}^1\left(S^n{\cal H},{{\bf a} / {\bf g}}\right) \end{equation} If nonzero, this is a cohomological invariant. It is not clear to us how to interpret such invariants on the field theory side. It was proven in \cite{Mikhailov:2011si} that the sequence \eqref{ObstacleToNI} splits for the components of $\cal H$ of large enough spin. In Section \ref{sec:BetaDef} we will prove that it splits for the beta-deformation, which is the representation of the smallest nonzero spin. Motivated by these observations, we will assume in the following discussion that \eqref{ObstacleToNI} always splits. Moreover, we assume that all the obstacles in \eqref{ObstaclesToSplit} vanish. (Not that the whole cohomology group $\mbox{Ext}_{\bf g}^1\left(S^n{\cal H},{{\bf a} / {\bf g}}\right)$ vanishes, but the actual invariant is zero.) If true, this is a nonlinear analogue of the covariance property of the vertex discussed in \cite{Mikhailov:2011si}. \paragraph {Assuming that there are no obstacles of the type of Eq.
\eqref{ObstaclesToSplit}} \begin{figure}[!htb] \center{\includegraphics[scale=0.3] {orbit.png}} \caption{\label{fig:orbit} $\cal O$ and $M_{gauge-fixed}$} \end{figure} If the obstacles of the type of Eq. \eqref{ObstaclesToSplit} all vanish, then we can remove, order by order in $m$, all $\theta\langle\xi\rangle^I_{a_1\cdots a_m,I_1\cdots I_n}$ with $n=0$, so we are left with: \begin{align} F^{-1}_{\epsilon *}v\langle\xi\rangle\;=\; &q\langle\xi\rangle + \sum_{m+n\geq 2\atop m\geq 1} \psi\langle\xi\rangle^a_{a_1\cdots a_m,I_1\cdots I_n} x^{a_1}\cdots x^{a_m}\alpha^{I_1}\cdots\alpha^{I_n}{\partial\over\partial x^a} \;+ \nonumber\\ & \phantom{q\langle\xi\rangle} + \sum_{m+n\geq 2\atop n\geq 1} \theta\langle\xi\rangle^I_{a_1\cdots a_m,I_1\cdots I_n} x^{a_1}\cdots x^{a_m}\alpha^{I_1}\cdots\alpha^{I_n}{\partial\over\partial \alpha^I} \end{align} Geometrically this means that we can find a $\bf g$-invariant submanifold $M_{gauge-fixed}$ which intersects $\cal O$ transversally at the point $p$. It is given by the equation $\alpha^I = 0$. In this case, $v\langle\xi\rangle$ for $\xi\in\bf g$ defines a vector field on $M_{gauge-fixed}$, and a solution of the Maurer-Cartan equation. Let $c\in \Pi{\bf g}$ be the Faddeev-Popov ghost for ${\bf g}$. The restriction of $v\langle c\rangle$ to $M_{gauge-fixed}$ is: \begin{align}\mbox{} &q_{phys} + \Psi\\\mbox{where $q_{phys}\;=\;$} &\rho_{phys}\langle c\rangle^a_b x^b {\partial\over\partial x^a}\\\Psi \;=\; &\sum_{n\geq 2}\psi\langle c\rangle^a_{a_1\cdots a_n} x^{a_1}\cdots x^{a_n} {\partial\over\partial x^a}\end{align} Let us denote: \begin{equation} Q_{phys} = q_{phys} + {1\over 2}f^C_{AB}c^A c^B{\partial\over\partial c^C} \end{equation} We then observe that $\Psi$ satisfies the Maurer-Cartan equation: \begin{align}\mbox{} &Q_{phys}\Psi + {1\over 2}[\Psi,\Psi] = 0\label{MCPhys}\end{align} \subsubsection{Is $\Psi$ an invariant?
Deformations of $M_{gauge-fixed}$.}\label{sec:Complication} Therefore, an action of $A$ on the vicinity of ${\cal O}\subset M$ defines some solution $\Psi$ of the Maurer-Cartan Eq. \eqref{MCPhys}. A gauge transformation $\delta \Psi = Q_{phys}\Phi + [\Psi,\Phi]$ corresponds to a change of coordinates on $M_{gauge-fixed}$. But is it true that $\Psi$ modulo such gauge transformations is an invariant? Generally speaking the answer is negative, for the following reason: $M_{gauge-fixed}$ may be deformable, different choices of $M_{gauge-fixed}$ leading to different $\Psi$, a priori not related by gauge transformations of the form $\delta \Psi = Q_{phys}\Phi + [\Psi,\Phi]$. Therefore, we have to study the deformations of $M_{gauge-fixed}$. \paragraph {Geometrical picture} Deformations of a $\bf g$-invariant submanifold $M_{gauge-fixed}$ are given by $\bf g$-invariant sections of its normal bundle $N M_{gauge-fixed}$. The exact sequence \begin{equation} 0 \longrightarrow TM_{gauge-fixed} \longrightarrow TM \longrightarrow NM_{gauge-fixed} \longrightarrow 0 \end{equation} implies the existence of a canonical map\footnote{ $\mbox{im}\delta$ is the kernel of the natural map $ H^1({\bf g}, \mbox{Vect}(M_{gauge-fixed}))\longrightarrow H^1({\bf g}, \Gamma(TM|_{M_{gauge-fixed}})) $ }: \begin{equation}\label{DeltaMorphism} \delta\;:\; H^0({\bf g}, \Gamma(NM_{gauge-fixed})) \longrightarrow H^1({\bf g}, \mbox{Vect}(M_{gauge-fixed})) \end{equation} The space $H^1({\bf g}, \mbox{Vect}(M_{gauge-fixed}))$ can be naturally identified with the tangent space to the space of solutions of the Maurer-Cartan Eq. \eqref{MCPhys} modulo gauge transformations: \begin{equation} T(MC) = H^1({\bf g}, \mbox{Vect}(M_{gauge-fixed})) \end{equation} Therefore, when $\delta\neq 0$, the solution of the MC equation depends on the choice of $M_{gauge-fixed}$. This means that we have, generally speaking, a {\em family} of invariants.
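The identification $T(MC) = H^1({\bf g}, \mbox{Vect}(M_{gauge-fixed}))$ can be verified by linearizing around a solution $\Psi$ of Eq. \eqref{MCPhys}. A first-order deformation $\delta\Psi$ must satisfy:
\begin{equation}
Q_{\Psi}\,\delta\Psi = 0 \quad\mbox{where}\quad Q_{\Psi} = Q_{phys} + [\Psi,\,\cdot\,]
\end{equation}
and the Maurer-Cartan equation for $\Psi$ implies $Q_{\Psi}^2 = 0$. Modding out by the gauge transformations $\delta\Psi = Q_{\Psi}\Phi$, the tangent space at $\Psi$ is the degree-one cohomology of $Q_{\Psi}$; at $\Psi = 0$ this is literally $H^1({\bf g}, \mbox{Vect}(M_{gauge-fixed}))$.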
\paragraph {In coordinates} Remember that $M_{gauge-fixed}$ is given by the equations: \begin{equation} \alpha^I=0 \end{equation} Infinitesimal deformations of $M_{gauge-fixed}$ can be obtained as flows of vector fields of the form \begin{equation}\label{DeformingVectorFieldLeadingTerm} Y^I_{a_1\cdots a_n}x^{a_1}\cdots x^{a_n}{\partial\over\partial\alpha^I} + \ldots \end{equation} where $Y^I_{a_1\cdots a_n}x^{a_1}\cdots x^{a_n}{\partial\over\partial\alpha^I}$ commutes with $\rho_{gauge}\langle\xi\rangle^I_J \alpha^J {\partial\over\partial\alpha^I} + \rho_{phys}\langle\xi\rangle^a_b x^b {\partial\over\partial x^a}$. In other words, the tensor field $Y_{a_1\cdots a_n}^I$ defines an element of $\mbox{Hom}_{\bf g}(S^n{\cal H}, {\bf a}/{\bf g})$. The flow of such vector fields preserves the condition that $\theta^I_{a_1\cdots a_k} = 0$ for $k\leq n$. To keep $\theta^I_{a_1\cdots a_k} = 0$ for $k>n$, we need to add to Eq. (\ref{DeformingVectorFieldLeadingTerm}) some terms of higher order in $x$ (the $\ldots$ in Eq. (\ref{DeformingVectorFieldLeadingTerm})); the existence of such terms depends on the vanishing of $\mbox{Ext}^1_{\bf g}(S^n{\cal H}, {\bf a}/{\bf g})$. For example, consider a vector field $Y^{(0)}$: \begin{equation} Y^{(0)} = y^I_a x^a {\partial\over\partial\alpha^I} \end{equation} where $y^I_a$ is a $\bf g$-invariant tensor in $\mbox{Hom}({\cal H}, {\bf a}/{\bf g})$.
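To spell out the invariance condition on $y^I_a$: with $\theta^I_a$ already removed, $q\langle\xi\rangle = \rho_{gauge}\langle\xi\rangle^I_J \alpha^J {\partial\over\partial\alpha^I} + \rho_{phys}\langle\xi\rangle^a_b x^b {\partial\over\partial x^a}$, and a one-line computation gives (up to ordering conventions):
\begin{equation}
\left[\,q\langle\xi\rangle\,,\,Y^{(0)}\right] \;=\; \left( y^I_b\,\rho_{phys}\langle\xi\rangle^b_a - \rho_{gauge}\langle\xi\rangle^I_J\, y^J_a \right) x^a {\partial\over\partial\alpha^I}
\end{equation}
Its vanishing for all $\xi\in{\bf g}$ says precisely that $y$ intertwines the action of $\bf g$ on $\cal H$ with its action on ${\bf a}/{\bf g}$, \textit{i.e.} $y\in\mbox{Hom}_{\bf g}({\cal H}, {\bf a}/{\bf g})$.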
The commutator $\left[ v\langle\xi\rangle\,,\,Y^{(0)}\right]$ contains the terms: \begin{equation} u\langle\xi\rangle = \left( \psi\langle\xi\rangle^a_{a_1 a_2}y^{I}_a - \theta\langle\xi\rangle^I_{a_1,I_1} y^{I_1}_{a_2} \right) x^{a_1}x^{a_2} {\partial\over\partial \alpha^I} \end{equation} They automatically satisfy: \begin{equation} [q\langle\xi\rangle,u\langle\eta\rangle] - (\xi\leftrightarrow\eta) - u\langle[\xi,\eta]\rangle = 0 \end{equation} and under the assumption of vanishing $\mbox{Ext}^1_{\bf g}(S^2{\cal H}, {\bf a}/{\bf g})$ there exists $Y^{(1)}$: \begin{align}Y^{(1)}\;=\; &y_{a_1a_2}^{I} x^{a_1}x^{a_2}{\partial\over\partial \alpha^I}\\u\langle\xi\rangle \;=\; &[q\langle\xi\rangle, Y^{(1)}]\end{align} Therefore we can construct, order by order in the $x$-expansion, a vector field: \begin{equation} Y = \left(y^{I}_a x^a + y^{I}_{ab} x^a x^b + \ldots\right) {\partial\over\partial\alpha^I} \end{equation} defining a $\bf g$-invariant section of the normal bundle of $M_{gauge-fixed}$, and therefore a deformation of $M_{gauge-fixed}$ as a $\bf g$-invariant submanifold. The corresponding deformation of $\Psi$ is: \begin{align}\delta \Psi = [v\langle c\rangle, Y]|_{\alpha = 0}\;=\; &\sum_{m\geq 1}\sum_{n\geq 1} \psi\langle c\rangle^a_{a_1\cdots a_m,I}\;y^{I}_{a_{m+1}\cdots a_{m+n}}\; x^{a_1}\cdots x^{a_{m+n}}{\partial\over\partial x^a} \end{align} We do not see any a priori reason why this $\delta\Psi$ could be absorbed by a gauge transformation, {\it i.e.} why a $\Phi$ would exist such that $\delta \Psi = Q_{phys}\Phi + [\Psi,\Phi]$. \subsubsection{Conclusion}\label{sec:AdSAbstractConclusion} We have studied the problem of classifying the normal forms of the action of a Lie supergroup $A$ in the vicinity of an orbit $\cal O$ with a nontrivial stabilizer. As invariants of the action, we found families of equivalence classes of solutions of MC equations modulo gauge transformations. These families are parameterized by $\bf g$-invariant submanifolds $M_{gauge-fixed}$.
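All the differentials appearing in this Section ($\hat{Q}$ of Eq. (\ref{BigMC}), $Q_{phys}$ of Eq. \eqref{MCPhys}) contain the ghost term ${1\over 2} f^C_{AB}\, c^A c^B {\partial\over\partial c^C}$, whose nilpotency is equivalent to the (super-)Jacobi identity for $f^C_{AB}$. As a toy illustration of this bookkeeping (for the ordinary Lie algebra ${\bf sl}(2)$ rather than a superalgebra, and purely as our own sanity check), one can verify $d^2 = 0$ for the Chevalley-Eilenberg differential $d\, c^C = -{1\over 2} f^C_{AB}\, c^A c^B$ directly:

```python
# Toy check: the Chevalley-Eilenberg ghost differential squares to zero.
# Basis of sl(2): index 0 = e, 1 = f, 2 = h, with [e,f]=h, [h,e]=2e, [h,f]=-2f.
# Structure constants [T_A, T_B] = f^C_{AB} T_C stored as f[(A,B)] = {C: coeff}.
f = {
    (0, 1): {2: 1},  (1, 0): {2: -1},
    (2, 0): {0: 2},  (0, 2): {0: -2},
    (2, 1): {1: -2}, (1, 2): {1: 2},
}

def canon(idx, coeff):
    """Sort ghost indices with the Grassmann sign; repeated ghosts give zero."""
    idx, sign = list(idx), 1
    for i in range(len(idx)):
        for j in range(len(idx) - 1):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    if len(set(idx)) < len(idx):
        return None, 0
    return tuple(idx), sign * coeff

def d(cochain):
    """d c^C = -(1/2) f^C_{AB} c^A c^B, extended as an odd derivation."""
    out = {}
    for idx, coeff in cochain.items():
        for slot, C in enumerate(idx):
            for (A, B), image in f.items():
                if C in image:
                    key, val = canon(idx[:slot] + (A, B) + idx[slot + 1:],
                                     (-1) ** slot * (-0.5) * image[C] * coeff)
                    if key is not None:
                        out[key] = out.get(key, 0) + val
    return {k: v for k, v in out.items() if v != 0}

# d^2 = 0 on all ghost monomials <=> Jacobi identity for f^C_{AB}
for C in range(3):
    assert d(d({(C,): 1})) == {}
for A in range(3):
    for B in range(A + 1, 3):
        assert d(d({(A, B): 1})) == {}
print("d^2 = 0: Jacobi identity verified")
```

The same sign-per-slot bookkeeping, extended to superalgebras and tensored with vector fields in $\alpha$ and $x$, is what makes $\hat{Q}$ and $Q_{phys}$ square to zero.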
Remember that $\bf g$ is a finite-dimensional Lie superalgebra, while $\bf a$ is infinite-dimensional, and therefore $\cal O$ is infinite-dimensional. One would think that the deformations of $M_{gauge-fixed}$ ``along $\cal O$'' would be ``as complicated as $\cal O$''. That would be ugly. But in fact, the space of deformations of $M_{gauge-fixed}$ is ``no more complicated than $\cal H$''. For example, when $\cal H$ is finite-dimensional, the space of deformations is also finite-dimensional at each order in $x$. Roughly speaking, at the order $n$, it is $\mbox{Hom}_{\bf g}(S^n{\cal H}, {\bf a}/{\bf g})$. Even though $\mbox{Hom}_{\bf g}(S^n{\cal H}, {\bf a}/{\bf g})$ may be non-zero, it is certainly nicer than ${\bf a}/{\bf g}$. In other words, the deformations of $M_{gauge-fixed}$ do not involve the dependence on all $\alpha^I$, but only on finite-dimensional subspaces, the images of intertwining operators $S^n{\cal H} \longrightarrow {\bf a}/{\bf g}$. \subsection{Pure spinor formalism}\label{PureSpinor} Here we will briefly outline the pure spinor implementation of supergravity in the vicinity of AdS. We use the notations of \cite{Mikhailov:2011si}. Let ${\bf g}_0\subset {\bf g}$ be the subalgebra preserving a point in $AdS_5\times S^5$. In the pure spinor formalism, the space of vertex operators (cochains) of ghost number $n$ transforms in a coinduced representation of $\bf g$: \begin{equation} C_{ps}^n = \mbox{Coind}_{{\bf g}_0}^{\bf g} {\cal P}^n \end{equation} where ${\cal P}^n$ is the space of homogeneous polynomials of order $n$ in the pure spinor variables.
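For concreteness (and to fix conventions), recall that the coinduced representation can be realized as:
\begin{equation}
\mbox{Coind}_{{\bf g}_0}^{\bf g}\, {\cal P}^n \;=\; \mbox{Hom}_{U({\bf g}_0)}\left(U({\bf g}),\, {\cal P}^n\right)
\end{equation}
\textit{i.e.} as ${\cal P}^n$-valued functions on the supergroup, equivariant with respect to ${\bf g}_0$; in the present context these are vertex operators depending on a point of $AdS_5\times S^5$ and, polynomially of degree $n$, on the pure spinors.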
The space of linearized SUGRA solutions corresponds to the cohomology at ghost number $n=2$: \begin{align}T_p M \;=\; &Z^2_{ps} = \mbox{ker}\left( C_{ps}^2 \longrightarrow C_{ps}^3\right)\\T_p {\cal O} \;=\; &B^2_{ps} = \mbox{im}\left( C_{ps}^1 \longrightarrow C_{ps}^2\right)\\{\cal H}\;=\; &H_{ps}^2 = Z_{ps}^2/B_{ps}^2\end{align} This defines an extension of $\cal H$ by $T_p {\cal O}$: \begin{align}\mbox{} &0 \longrightarrow B^2_{ps} \longrightarrow Z^2_{ps} \longrightarrow {\cal H} \longrightarrow 0 \label{ExtensionOfH} \end{align} The structure of the extension is described by a cocycle \begin{equation} \alpha\in Z^1({\bf g}, \mbox{Hom}({\cal H}, B^2_{ps})) \end{equation} The extension is nontrivial iff $\alpha$ defines a nontrivial class in $\mbox{Ext}^1_{\bf g}({\cal H}, B^2_{ps})$. This class is an obstacle to the existence of a covariant vertex. We have seen (Section \ref{sec:VicinityOfPoint}) that, more generally, the groups $\mbox{Ext}^1_{\bf g}(S^n{\cal H}, B^2_{ps})$ contain the obstacles to finding a $\bf g$-invariant gauge for nonlinear solutions. We will now show that $\mbox{Ext}^1_{\bf g}(S^n{\cal H}, B^2_{ps})$, although nonzero, is in some sense small. Let us denote: \begin{equation} V = S^n{\cal H} \end{equation} Notice that $B^2_{ps}$ fits in a short exact sequence of representations, with the corresponding long exact sequence of cohomologies: \begin{align}\mbox{} &0 \longrightarrow Z^1_{ps} \longrightarrow C^1_{ps} \longrightarrow B^2_{ps} \longrightarrow 0\\\mbox{} & \mbox{Ext}^1_{\bf g}(V,C^1_{ps}) \longrightarrow \mbox{Ext}^1_{\bf g}(V,B^2_{ps}) \longrightarrow \mbox{Ext}^2_{\bf g}(V,Z^1_{ps}) \longrightarrow \mbox{Ext}^2_{\bf g}(V,C^1_{ps}) \end{align} Shapiro's lemma implies that $\mbox{Ext}^1_{\bf g}(V,C^1_{ps})= \mbox{Ext}^2_{\bf g}(V,C^1_{ps}) = 0$.
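Here and in what follows, Shapiro's lemma is used in the following form: for any representation $W$ of ${\bf g}_0$,
\begin{equation}
\mbox{Ext}^n_{\bf g}\left(V,\, \mbox{Coind}_{{\bf g}_0}^{\bf g} W\right) \;=\; H^n\left({\bf g},\, \mbox{Hom}(V, \mbox{Coind}_{{\bf g}_0}^{\bf g} W)\right) \;=\; H^n\left({\bf g}_0,\, \mbox{Hom}(V|_{{\bf g}_0}, W)\right)
\end{equation}
so all $\mbox{Ext}$-groups with values in the cochain spaces $C^k_{ps}$ reduce to the cohomology of the much smaller subalgebra ${\bf g}_0$.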
For example, for $\mbox{Ext}^1_{\bf g}(V,C^1_{ps})$ we have: \begin{equation} \mbox{Ext}^1_{\bf g}(V,C^1_{ps}) = \mbox{Ext}^1_{{\bf g}_0}(V|_{{\bf g}_0}, {\cal P}^1) = 0 \end{equation} because $V|_{{\bf g}_0}$ is semisimple as a representation of ${\bf g}_0$, and $H^1({\bf g}_0)=0$. Therefore: \begin{equation} \mbox{Ext}^1_{\bf g}(V, B^2_{ps}) = \mbox{Ext}^2_{\bf g}(V,Z^1_{ps}) \end{equation} Notice that $Z^1_{ps}$ fits in the following exact sequences (the pure spinor cohomology at ghost number one, $H^1_{ps}$, corresponds to the global symmetries $\bf g$): \begin{align}\mbox{} &0\longrightarrow B^1_{ps} \longrightarrow Z^1_{ps} \longrightarrow [H^1_{ps} = {\bf g}]\longrightarrow 0\\\mbox{} &\mbox{Ext}^1_{\bf g}(V,{\bf g}) \longrightarrow \mbox{Ext}^2_{\bf g}(V,B^1_{ps}) \longrightarrow \mbox{Ext}^2_{\bf g}(V,Z^1_{ps}) \longrightarrow \mbox{Ext}^2_{\bf g}(V,{\bf g}) \longrightarrow \mbox{Ext}^3_{\bf g}(V,B^1_{ps}) \end{align} BRST exact vertices of ghost number one, {\it i.e.} $B^1_{ps}$, fit in the following exact sequences: \begin{align}\mbox{} &0\longrightarrow {\bf C} \longrightarrow C^0_{ps} \stackrel{Q}{\longrightarrow} B^1_{ps}\longrightarrow 0\\\mbox{} & \mbox{Ext}^2_{\bf g}(V,C^0_{ps}) \longrightarrow \mbox{Ext}^2_{\bf g}(V,B^1_{ps}) \longrightarrow \mbox{Ext}^3_{\bf g}(V,{\bf C}) \longrightarrow \mbox{Ext}^3_{\bf g}(V,C^0_{ps}) \end{align} These observations together imply that $\mbox{Ext}^1_{\bf g}(V, B^2)$ fits into a short exact sequence of linear spaces: \begin{equation} 0 \longrightarrow {\mbox{ker}\left[\mbox{Ext}^3_{\bf g}(V, {\bf C})\longrightarrow \mbox{Ext}_{{\bf g}_0}^3(V, {\bf C})\right] \over \mbox{im}\left[\mbox{Ext}^1_{\bf g}(V,{\bf g})\rightarrow\mbox{Ext}^2_{\bf g}(V, B^1)\right]} \longrightarrow \mbox{Ext}^1_{\bf g}(V, B^2) \longrightarrow \mbox{Ext}^2_{\bf g}(V, {\bf g}) \longrightarrow 0 \end{equation} Since a short exact sequence of linear spaces always splits, $\mbox{Ext}^1_{\bf g}(V, B^2)$ is a direct sum: \begin{align} \mbox{Ext}^1_{\bf g}(V, B^2)\;=\; & E \oplus \mbox{Ext}^2_{\bf g}(V, {\bf g}) \\ \mbox{where } E\;=\; & {\mbox{ker}\left[\mbox{Ext}^3_{\bf g}(V, {\bf C})\longrightarrow \mbox{Ext}_{{\bf g}_0}^3(V, {\bf C})\right] \over \mbox{im}\left[\mbox{Ext}^1_{\bf g}(V,{\bf g})\rightarrow\mbox{Ext}^2_{\bf g}(V, B^1)\right]} \end{align} Notice that $E$ is a quotient of a subspace of $\mbox{Ext}^3_{\bf g}(V, {\bf C})$. In this sense, $\mbox{Ext}^1_{\bf g}(S^n{\cal H}, B^2)$ is ``smaller than'': \begin{equation}\label{CohomologiesOfGaugePart} \mbox{Ext}^3_{\bf g}(S^n{\cal H}, {\bf C}) \oplus \mbox{Ext}^2_{\bf g}(S^n{\cal H}, {\bf g}) \end{equation} This is the ``upper limit'' on the cohomology group containing the obstacle to the existence of $M_{gauge-fixed}$ at the order $n$ of the $x$-expansion. \paragraph {When $n=1$} In particular, $n=1$ corresponds to the obstacle to the existence of a covariant vertex, {\it i.e.} to the splitting of the short exact sequence of Eq. (\ref{ExtensionOfH}). In Section \ref{sec:BetaDef} we will show that the actual obstacle is zero in the case of the linearized beta-deformation (the representation with the smallest spin). At the same time, results of \cite{Mikhailov:2011si} imply that the obstacle is zero for linearized solutions with large enough spin. The conjectured existence of covariant vertices interpolates between these two cases. \subsection{If $M_{gauge-fixed}$ does not exist} We do not have a proof that the obstacle in $\mbox{Ext}_{\bf g}^1\left(S^n{\cal H},{{\bf a} / {\bf g}}\right)$ defined in Section \ref{sec:VicinityOfPoint} is actually zero. If it is not zero, then we cannot restrict to $M_{gauge-fixed}$ (because $M_{gauge-fixed}$ does not exist). Then we must study the full expansion of $F_{\epsilon *}^{-1}v$ of Eq. (\ref{vNearP}), in powers of {\em both} $x^a$ {\em and} $\alpha^I$. However, we do not need to take into account all $\alpha^I$. We only need those $\alpha^I$ which represent the obstacle in $\mbox{Ext}_{\bf g}^1\left(S^n{\cal H},{{\bf a} / {\bf g}}\right)$.
Therefore, the complication is not actually as bad as it could have been. The main point is that, even though ${\bf a}/{\bf g}$ seems an ugly infinite-dimensional space, its cohomologies are reduced to expressions like (\ref{CohomologiesOfGaugePart}) by the ``magic'' of Shapiro's lemma. \vspace{10pt} \paragraph {In the rest of this Section} we will accept as a working hypothesis that $M_{gauge-fixed}$ exists. Also, we will leave open the question of the non-uniqueness of $M_{gauge-fixed}$. \subsection{Normalizable SUGRA solutions}\label{sec:NormalizableSolutions} ``Normalizable'' means decreasing sufficiently rapidly near the boundary. \vspace{10pt} \noindent All {\em linearized} normalizable SUGRA solutions are periodic in the global time $t$. They approximate some complete (nonlinear) solutions. The nonlinear solutions are {\em not} periodic. But, since the linearized solutions {\em are} periodic, we can define the monodromy transformation $m$ as in Section \ref{sec:MonodromyTransformation}. The space of normalizable ({\it i.e.} rapidly decreasing at the boundary) solutions has a symplectic form. This is true at the linearized level as well as for the non-linear solutions. Then we can choose a map $T_0 M\rightarrow M$ so that it preserves the symplectic structure\footnote{This can be done, for example, in the following way. Pick some timelike surface. Then, to every linearized SUGRA solution associate a nonlinear solution for which the values of the SUGRA fields and their time derivatives at that timelike surface are the same as for the linearized solution}. Therefore, we can now identify $\Psi$ with the corresponding Hamiltonian, which we denote $H_{\Psi}$, or just $H$. The MC equation (\ref{MaurerCartan}) becomes: \begin{equation} QH + {1\over 2} \{H,H\} = 0 \end{equation} Remember that $H$ is of cubic and higher order in the coordinates on $T_0M$. Given the monodromy matrix of Eq.
(\ref{MonodromyTransformation}) we can (in perturbation theory) define a vector field $\xi$ whose flow generates it: \begin{equation} e^{\xi} = m \end{equation} It has some Hamiltonian $H_{\xi}$ which is of cubic and higher order in the coordinates on $T_0M$. In some sense, the quantization of $H_{\xi}$ should give the spectrum of anomalous dimensions. This program is complicated, though, by a non-straightforward action of the symmetry, see Section \ref{sec:Symmetries}. \subsection{Non-normalizable SUGRA solutions} The non-normalizable SUGRA solutions correspond to the deformations of the boundary theory. Consider the following element of $PSU(2,2|4)$ --- the symmetry group of $AdS_5\times S^5$: \begin{equation} S=\mbox{diag}(i,i,i,i,-i,-i,-i,-i)\in PSU(2,2|4) \end{equation} Suppose that our deformation is invariant under $(-1)^FS$: \begin{equation}\label{SInv} U = (-1)^FSU \end{equation} Then, the corresponding {\em linearized} solution is periodic in the global time of AdS. \begin{figure}[!htb] \center{\includegraphics[width=\textwidth] {defS.png}} \caption{\label{fig:defS} Transformation $S$, as it acts on the boundary of AdS} \end{figure} Indeed, let us consider the retarded wave generated by an insertion of some operator $\cal O$ at the point $b$ on the boundary. It is the same as the retarded boundary-to-bulk propagator. It is a generalized function with support on the future light cone of $b$. Let $l\in {\bf R}^{2+4}$ be a light-like vector encoding the point $b$ on the boundary. Then the boundary-to-bulk propagator, in a sufficiently small neighborhood of $b$, is, schematically: \begin{equation}\label{BoundaryToBulkPropagator} {\delta(v\cdot l)\over (v\cdot l)^{\Delta -1}} \end{equation} where $v\in {\bf R}^{2+4}$ with $(v,v)=1$ corresponds to a point inside AdS; $\Delta$ is the conformal dimension of $\cal O$. The future light cone gets re-focused at $Sb$.
The free solution (\ref{BoundaryToBulkPropagator}) then gets reflected from the boundary at the point $Sb$, and changes sign upon reflection. Therefore, in order to cancel the reflection, we have to put the same operator at the point $Sb$. \vspace{10pt} \noindent However, the corresponding nonlinear solution may or may not be periodic. If it is not periodic, then the deviation from periodicity is characterized by the monodromy of Eq. (\ref{MonodromyTransformation}). In any case, the solution of the Maurer-Cartan equation is more fundamental than the monodromy transformation. \subsection{Simplest non-periodic linearized solution} (This subsection is a side remark.) \noindent As we mentioned in Section \ref{sec:NormalizableSolutions}, all {\em normalizable} linearized solutions are periodic in global time $t$. But of course, this is not true for non-normalizable solutions. (Indeed, nothing prevents us from considering non-periodic boundary conditions at the boundary of AdS. There exist corresponding solutions, which are not periodic.) As the simplest example, consider the dilaton linearly dependent on $t$: \begin{equation}\label{LinearDilaton} \phi = \alpha t = {\alpha\over 2i} \log {Z\over\overline{Z}} \quad , \quad \alpha=\mbox{const} \end{equation} This is a solution of SUGRA only at the linearized level. Indeed, the energy $(\dot{\phi})^2$ is nonzero, and it will deform the metric. It would be interesting to see if it approximates some solution of the nonlinear equations with the following property: the action of $\partial\over\partial t$ on it is a shift of the dilaton. If we act on $\phi$ of Eq. (\ref{LinearDilaton}) by the generators of $\bf g$, we get an infinite-dimensional representation. This infinite-dimensional representation contains a 1-dimensional invariant subspace, because the action of $\partial\over\partial t$ gives a constant. (But the action of $K_i$ and $\bar{K}_i$ of Eqs.
(\ref{GenK}), (\ref{GenBarK}) results in expressions like ${X_i\over Z}$, {\it etc.}, which span an infinite-dimensional space.) \section{Beta-deformation and its generalizations}\label{sec:BetaDeformation} We will now consider the case of the beta-deformation. See \cite{Leigh:1995ep,Milian:2016xuy} for the description on the field theory side, and \cite{Bedoya:2010qz,Benitez:2018xnh} for the AdS description\footnote{Particular ``subsectors'' of AdS beta-deformations were described earlier in \cite{Fayyazuddin:2002vh,Aharony:2002hx,Lunin:2005jy,Chen:2006bh}. While most of the work has been on special cases associated with Yang-Baxter equations, the authors of \cite{Aharony:2002hx} studied more generic values of the parameter, which have a nonzero beta-function. In this work we consider the most general values of the deformation parameter. }. The beta-deformation does satisfy Eq. (\ref{SInv}). Linearized beta-deformations transform in the following representation: \begin{equation}\label{DefCalH} {\cal H} = {({\bf g}\wedge {\bf g})_0\over {\bf g}} \end{equation} where the subindex $0$ means zero internal commutator in the centrally extended $\hat{\bf g}$. This means that $x\wedge y\in ({\bf g}\wedge {\bf g})_0$ has $[x,y]=0$ where the commutator is taken in $\hat{\bf g}$ ({\it i.e.} the unit matrix is not discarded). It was shown in \cite{Aharony:2002hx} that the renormalization of a beta-deformation is again a beta-deformation, and the anomalous dimension is an expression cubic in the beta-deformation parameter. We will conjecture that the expansion of $\Psi$ starts with quadratic terms, and not with cubic terms. This explains why the obstacle found in \cite{Aharony:2002hx} is actually quadratic rather than cubic. But first, in order to make contact with Sections \ref{sec:GeometricaAbstractionAdS}, \ref{sec:NormalFormA}, \ref{PureSpinor}, we will discuss the description of the beta-deformation in the pure spinor formalism.
\subsection{Description of beta-deformation in pure spinor formalism} \label{sec:BetaDef} Vertex operators of physical states live in $C^2_{ps}$ --- cochains of ghost number two. The vertex corresponding to the beta-deformation, as constructed in \cite{Mikhailov:2011si,Bedoya:2010qz}, is actually not covariant. It transforms in $({\bf g}\wedge {\bf g})_0$ instead of Eq. (\ref{DefCalH}). Some components of the vertex, transforming in $\bf g$, are BRST exact: \begin{equation} \begin{array}{rcccccl} & 0 & & 0 & & & \cr & \downarrow & & \downarrow & & & \cr 0 \longrightarrow & {\bf g} & \stackrel{i}{\longrightarrow} & ({\bf g}\wedge {\bf g})_0 & \longrightarrow & {({\bf g}\wedge {\bf g})_0\over {\bf g}} & \longrightarrow 0 \cr & f\downarrow & & j\downarrow & & || & \cr & C_{ps}^1 & \stackrel{Q_{ps}}{\longrightarrow} & C_{ps}^2 & & {\cal H} & \end{array} \end{equation} This defines a nontrivial extension, which can be characterized by a cocycle: \begin{equation} \alpha \;:\; {({\bf g}\wedge {\bf g})_0\over {\bf g}} \longrightarrow {\bf g} \end{equation} defining a nonzero class in $\mbox{Ext}^1\left({({\bf g}\wedge {\bf g})_0\over {\bf g}}, {\bf g}\right)$. The existence of the commutative square formed by $i,j,Q_{ps},f$ is nontrivial. The nontriviality is in the fact that $f$ commutes with the action of $\bf g$. Generally speaking, the variation of $f$ under the action of $\bf g$ could be non-zero; it just has to take values in $Z^1_{ps}$. But there is a $\bf g$-invariant $f$: \begin{equation} f(\xi) = \mbox{STr}(g\xi g^{-1} (\lambda_L + \lambda_R)) \end{equation} The composition $f\circ\alpha$ defines an element of $H^1({\bf g}, \mbox{Hom}({\cal H}, C_{ps}^1))$, but this group is zero by Shapiro's lemma: \begin{equation} H^1({\bf g}, \mbox{Hom}({\cal H}, C^1_{ps})) = H^1({\bf g}_0 , \mbox{Hom}({\cal H}, {\cal P}^1)) \;=\;0 \end{equation} (Here ${\cal P}^1$ is the space of linear functions of pure spinors.)
Therefore, there exists $\beta\in C^0({\bf g}, \mbox{Hom}({\cal H}, C^1_{ps}))$ such that: \begin{equation} f\circ \alpha = Q_{\rm Lie}\beta \end{equation} This means that the BRST-equivalent vertex: \begin{equation} V' = V - Q_{ps} \beta \end{equation} is $\bf g$-covariant. This is {\em not} the vertex found in \cite{Mikhailov:2011si,Bedoya:2010qz}. It is probably a linear combination of the vertex of \cite{Mikhailov:2011si,Bedoya:2010qz} and the one found in \cite{Flores:2019dwr}. \subsection{Restriction of $\Psi$ to even subalgebra} Let us start by forgetting about the fermionic symmetries. In other words, consider the restriction of $\Psi$ to the even subalgebra ${\bf g}_{\rm ev}\subset {\bf g}$. Explicit computations of \cite{Aharony:2002hx} suggest\footnote{although the computation was only done for the simplest deformation, the one constant in AdS} that $\Psi$ starts with cubic terms, {\it i.e.} with $v_2$ (see Eq. (\ref{ExpansionOfVectorField})) rather than $v_1$. The $v_2$ is certainly nonzero, and cannot be removed by the gauge transformations of Section \ref{sec:GaugeTransformations}. This leads to an apparent contradiction. Indeed, $v_2$ being non-removable means that it represents a nontrivial class in: \begin{equation} H^1\left({\bf g}_{\rm ev}, \mbox{Hom}({\cal H}^{\otimes 3} , {\cal H})\right) \end{equation} But this cohomology group is zero, because $\mbox{Hom}\left({\cal H}^{\otimes 3} , {\cal H}\right)$ is finite-dimensional. The $H^1$ of ${\bf g}_{\rm ev}$ with coefficients in a finite-dimensional representation is zero --- see \cite{FeiginFuchs}. What actually happens is: \begin{equation}\label{BosonicCocycle} [v_2] = \phi_{\rm bos} \in H^1\left({\bf g}_{\rm ev}, \mbox{Hom}({\cal H}^{\otimes 3} , \widehat{\cal H})\right) \end{equation} where $\widehat{\cal H}$ is some infinite-dimensional extension of $\cal H$. We will now explain this.
\subsection{Infinite-dimensional extension of finite-dimensional $\cal H$}\label{sec:InfiniteDimensionalExtension} The perturbative nonlinear solution involves terms proportional to $\log (Z\overline{Z})$ in the notations of Appendix \ref{sec:AdSNotations}. These terms, under the action of $K_i$ and $\overline{K}_i$ (see Eqs. (\ref{GenK}), (\ref{GenBarK})), generate an infinite-dimensional representation. We will first explain the origin of the $\log (Z\overline{Z})$-terms, and then the structure of the infinite-dimensional extension of $\cal H$. \paragraph {Log terms} We will now explain the origin of the log terms at the third order in the deformation parameter. Following \cite{Aharony:2002hx}, let us consider those linearized beta-deformations which only involve the RR fields and the NSNS $B$-field in the direction of $S^5$. At the linear level, these solutions do not deform $AdS_5$ at all\footnote{The existence of such deformations may appear contradictory, because $S^5$ is a compact manifold. Indeed, the NSNS $B$-field is a two-form, and a nonzero harmonic two-form cannot exist on $S^5$. But in fact, because the undeformed $AdS_5\times S^5$ background has the RR five-form turned on, the linearized equations actually mix RR with NSNS fields, leading effectively to massive equations. }. At the cubic order, the interaction term combines three linearized solutions into a term proportional, again, to the beta-deformation of $S^5$. This means that we have to solve the equation in $AdS_5$: \begin{equation} {\bf L}_{\rm A} \phi = 1 \end{equation} The solution is $\log (Z\overline{Z})$, see Eq. (\ref{LogZBarZ}). At higher orders of the $\epsilon$-expansion, more complicated functions appear, see Appendix \ref{sec:BasicFunctions}. All non-rational dependence on $Z,\overline{Z},\vec{X}$ is through $|Z|^2=Z\overline{Z}$. Denominators are powers of $Z$ and $\overline{Z}$, while $\vec{X}$ only enters polynomially. 
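A one-dimensional toy model (our own illustration; the actual operator ${\bf L}_{\rm A}$ is the one in the equation above) shows why a constant source forces a logarithm: an operator which is homogeneous of degree zero, such as the Euler operator $z\,{d\over dz}$, annihilates constants, so no power of $z$ can produce the source $1$, while $\log z$ does. The same toy model shows why the logs later drop out under differentiation:

```python
import sympy as sp

z = sp.symbols('z', positive=True)

# Euler operator z d/dz: homogeneous of degree zero, annihilates constants.
E = lambda g: sp.expand(z * sp.diff(g, z))

# A constant source cannot come from powers of z (E maps z^n to n*z^n),
# but the logarithm does the job:
assert E(sp.log(z)) == 1

# Logs get "differentiated out": any polynomial vector field p(z) d/dz
# applied to log z yields the rational function p(z)/z.
p = z**3 + 2*z
assert sp.simplify(p * sp.diff(sp.log(z), z) - p / z) == 0
print("checks passed")
```

This is only an analogy; the actual statement about rational functions of $Z,\overline{Z},\vec{X}$ is the one made above.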
\paragraph {Structure of $\widehat{\cal H}$} (Notations of Appendix \ref{sec:AdSNotations}.) Consider, for example, a massless scalar field, whose $S^5$-dependence corresponds to a harmonic polynomial $Y({\bf N})$ of degree $\Delta_S$. There are solutions of the form: \begin{equation}\label{PhiTimesY} \phi(Z,\overline{Z},\vec{X}) Y({\bf N}) \end{equation} where $\phi$ is a harmonic polynomial of degree $\Delta_A = \Delta_S$. Such solutions generate a finite-dimensional representation $V$ of $\bf g$. Now let us allow $\phi$ to have denominators, either ${1\over Z^m}$ or ${1\over\overline{Z}^m}$, keeping the same overall homogeneity degree $\Delta_A = \Delta_S$. (It is important that $Z$ is never zero, in fact $|Z|^2 > 1$.) Then, the solutions (still given by Eq. (\ref{PhiTimesY})) generate an infinite-dimensional representation $\widehat{V}$ of $\bf g$. It contains a finite-dimensional subspace corresponding to polynomial $\phi$. Therefore, $\widehat{V}$ is an extension of $V$. This extension is non-split\footnote{There are actually two representations, one allowing ${1\over Z^m}$ and another allowing ${1\over\overline{Z}^m}$. But we want to keep the real structure, so we must combine them.}, because there is no invariant subspace complementary to $V\subset \widehat{V}$. We have explained the construction of the extension for the simplest case of the massless scalar field. The construction for other finite-dimensional representations is the same. A finite-dimensional $V$ involves scalar fields, tensor fields, and fermionic spinor fields, and they can all come with non-constant spherical harmonics on $S^5$. Importantly, for a finite-dimensional $V$, all these fields are polynomials in $Z,\overline{Z},\vec{X}$. To extend $V$ to $\widehat{V}$, we just allow negative powers of $Z$ or negative powers of $\overline{Z}$. We would like to stress that all these extensions only contain rational functions of $Z,\overline{Z},\vec{X}$, with denominators being powers of $Z$ and $\overline{Z}$. 
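The pattern ``allow negative powers and obtain a non-split infinite-dimensional extension'' already occurs for ${\bf sl}(2)$ acting on functions of one variable. The following sympy sketch (a toy analogue of our own, not the actual $\widehat{V}$) checks that the span of $\{z^n,\ n\le 1\}$ is invariant, contains the two-dimensional module $V=\mbox{span}\{1,z\}$, and that the extension is non-split because the lowering generator maps $1/z$ back into $V$:

```python
import sympy as sp

z = sp.symbols('z')
d = 1  # V = span{1, z} is the two-dimensional module

# sl(2) acting on functions of z (a standard realization):
e = lambda g: sp.expand(sp.diff(g, z))
h = lambda g: sp.expand(2*z*sp.diff(g, z) - d*g)
f = lambda g: sp.expand(-z**2*sp.diff(g, z) + d*z*g)

# V is invariant under all three generators:
assert (e(z), h(z), f(z)) == (1, z, 0)
assert (e(sp.Integer(1)), h(sp.Integer(1)), f(sp.Integer(1))) == (0, -1, z)

# Allowing the denominator 1/z, repeated action of e produces ever
# deeper denominators: the module becomes infinite-dimensional.
g, denominators = 1/z, []
for _ in range(4):
    g = e(g)
    denominators.append(sp.degree(sp.denom(sp.together(g)), z))
assert denominators == [2, 3, 4, 5]

# But f maps 1/z back into V, so there is no invariant complement:
assert f(1/z) == 2
```

The leakage $f(1/z)\in V$ is the one-dimensional shadow of the absence of an invariant complement to $V\subset\widehat{V}$.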
The perturbative solution contains non-rational terms, {\it e.g.} $\log(Z\overline{Z})$. But $\Psi$ only contains rational functions; all the logs get differentiated out. In this sense $\Psi$ is simpler than $\phi$ ({\it cp.} Section \ref{sec:PsiIsSimple}). \paragraph {Example of cocycle} A nontrivial cocycle $\psi$ in $H^1({\bf so}(2,4),\widehat{\bf C})$ can be defined by the following formulas (notations of Eqs. (\ref{GenK}), (\ref{GenBarK})): \begin{align} & \psi\left({\partial\over\partial t}\right) \;=\; i \\ & \psi(K_i) = 2{X_i\over Z} \\ & \psi(\overline{K}_i) = 0 \\ & \psi(\mbox{\tt\small rotations of $S^3$}) = 0 \end{align} (In other words, $\psi(x)$ is the variation of $\log Z$ under $x$.) Here $\widehat{\bf C}$ is the infinite-dimensional extension of the trivial representation, generated by massless scalar fields whose denominators are allowed to be powers of $Z$. Composing this $\psi$ with some intertwining operators $V^{\otimes n}\rightarrow {\bf C}$ we may get nontrivial cocycles in $H^1({\bf so}(2,4),\mbox{Hom}(V^{\otimes n},\widehat{\bf C}))$. \subsection{Renormgroup flow generates infinite-dimensional representations} We must stress that $\Psi$ takes values in $\widehat{\cal H}$, and not in $\cal H$. Of course, $\cal H$ is contained in $\widehat{\cal H}$ as a subrepresentation. But there is no invariant projector from $\widehat{\cal H}$ to $\cal H$. We can say that the renormgroup flow of the beta-deformation results in an infinite-dimensional extension of the representation in which the beta-deformation transforms. \vspace{10pt} \noindent The same is true for other finite-dimensional deformations. The renormgroup flow of a finite-dimensional deformation generates infinite-dimensional extensions of (possibly other) finite-dimensional representations. \vspace{10pt} \noindent This, of course, implies that we should also extend our space of linearized solutions. We should start with $\widehat{V}$ rather than just $V$. 
Otherwise, in the notations of Section \ref{sec:ActionOfSymmetry}, $v$ will lead out\footnote{The rule of the game is to always include a sufficiently large class of linearized solutions, so that the action of $\bf g$ on the lifted solutions does not lead out of the space of lifted solutions} of ({\it i.e.} not tangent to) the image of $F$. Then, the coefficient of $\epsilon^m$ in the expansion of $\Psi$ in powers of $\epsilon$ lives in $\mbox{Hom}(\widehat{V}^{\otimes m}, \widehat{V})$. \subsection{Nonlinear beta-deformations have trivial monodromy}\label{sec:TrivialMonodromy} We have an intuitive argument that the monodromy always takes values in unitary representations. Indeed, non-normalizable excitations can be thought of as waves bouncing back and forth from the boundary of AdS. They can all be damped by emitting appropriate excitations from the boundary, {\it i.e.} by adjusting the boundary conditions. Only normalizable modes remain. (See Section \ref{sec:NormalizableAndNonNormalizableMonodromy}.) \begin{center}\begin{minipage}{0.9\textwidth}{\small For example, suppose that we need to solve the equation $\square f = Z \cos\theta$ where $\theta$ is some angular coordinate of $S^5$ (it corresponds, in the notations of Appendix \ref{sec:ScalarSolutionsInAdS}, to $Y({\bf N})$ being a linear function of ${\bf N}$, {\it i.e.} $\Delta_S=1$). The simplest solution is $f = {1\over 5}(\log Z)Z\cos\theta$. It has nontrivial monodromy ${i\over 5}Z$. But we can add to it the expression ${1\over 5} \left((\log \overline{Z} ) Z + {1\over 2\overline{Z}}\sum X_i^2\right)\cos\theta$ which is annihilated by $\square$ and cancels the monodromy. In general, there exists a solution in which all logs enter as $\log(Z\overline{Z})$, and there is no monodromy. }\end{minipage}\end{center} \vspace{5pt} \noindent Suppose that the monodromy were nontrivial. Let us consider the lowest order in the $\epsilon$-expansion where it is nontrivial. At the lowest order, it commutes with the undeformed action of $\bf g$. 
Therefore, it must take values in a finite-dimensional representation (since the tensor product of any number of copies of $\cal H$ is still finite-dimensional). But finite-dimensional representations are not unitary. This argument implies, more generally, that the monodromy of finite-dimensional deformations is the identity. In Section \ref{sec:BoundarySMatrix} we will consider infinite-dimensional deformations, with nontrivial monodromy. \subsection{Lifting of $\Psi$ to superalgebra}\label{sec:ObstacleIsQuadratic} Let $Q_{\rm ev}$ be the part of $Q$ (see Eq. (\ref{BRSTOperator})) involving only the ghosts of {\em even} generators of ${\bf g}$ (essentially, the ghosts of odd generators are all put to zero). The $\phi_{\rm bos}$ of Eq. (\ref{BosonicCocycle}) is annihilated by $Q_{\rm ev}$. What happens if we act on $\phi_{\rm bos}$ with the full $Q$, including the terms containing odd indices? Can we extend $\phi_{\rm bos}$ to a cocycle $\phi$ of ${\bf g}$? To answer this question, let us look at the spectral sequence corresponding to ${\bf g}_{\rm ev}\subset {\bf g}$ \cite{FeiginFuchs}. It exists for any representation $V$. At the first page we have: \begin{align} E_1^{0,1} \; = \; & H^1({\bf g}_{\rm ev}; V) \\ E_1^{1,0} \;=\; & H^0({\bf g}_{\rm ev}; \mbox{Hom}({\bf g}_{\rm odd}, V)) \;=\; \mbox{Hom}_{{\bf g}_{\rm ev}}({\bf g}_{\rm odd}, V) \end{align} Our $\phi_{\rm bos}$ belongs to $E_1^{0,1}$. The first obstacle lives in \begin{equation} E_1^{1,1}= H^1({\bf g}_{\rm ev}; \mbox{Hom}({\bf g}_{\rm odd}, V)) \end{equation} We actually know that the SUGRA solution exists. Therefore this obstacle automatically vanishes. But there is another obstacle, which arises when we go to the second page. 
It lives in: \begin{align} E_2^{2,0} \;=\; & H^2({\bf g},{\bf g}_{\rm ev};V) = H^2\left({\bf g},{\bf g}_{\rm ev};\mbox{Hom}({\cal H}^{\otimes 3} , \widehat{\cal H})\right)\;= \\ \;=\; & H^2\left({\bf g},{\bf g}_{\rm ev};\mbox{Hom}({\cal H}^{\otimes 3} , {\cal H})\right) \end{align} We used the fact that relative cochains are ${\bf g}_{\rm ev}$-invariant, therefore the cocycles automatically fall into the finite-dimensional ${\cal H}\subset\widehat{\cal H}$. In fact, this obstacle does not have to be zero, because there is something that can cancel it. Remember that $\Psi$ is, generally speaking, not annihilated by $Q$, but rather satisfies Eq. (\ref{MaurerCartan}). And, in fact, there is a nontrivial cocycle: \begin{equation}\label{CocycleInS2HH} \psi \in H^1({\bf g}, \mbox{Hom}(S^2{\cal H}, {\cal H})) \end{equation} We conjecture that the supersymmetric extension $\phi$ of $\phi_{\rm bos}$ indeed exists, but instead of satisfying $Q\phi=0$ satisfies: \begin{equation}\label{QPhiIsPsiPsi} Q\phi = [\psi,\psi] \end{equation} \begin{center}\begin{minipage}{0.9\textwidth}{\small This conjecture should be verified by explicit computations, which we leave for future work. It may happen that the obstacle which would take values in the cohomology group of Eq. (\ref{CocycleInS2HH}) actually vanishes for some reason. It seems that the computations done in \cite{Aharony:2002hx} are not sufficient to settle this issue, because they were only done for one state (the beta-deformation constant in AdS).}\end{minipage}\end{center} \vspace{10pt} \noindent We will now describe $\psi$ of Eq. (\ref{CocycleInS2HH}). 
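For completeness, we note the general reason why the composition used in Step 2 below is well defined on cohomology (this remark is standard homological algebra): if $j: S^2{\cal H}\rightarrow {\bf g}$ is an intertwiner, then the map $\mbox{Hom}({\bf g},{\cal H})\rightarrow\mbox{Hom}(S^2{\cal H},{\cal H})$, $\varphi\mapsto\varphi\circ j$, is a morphism of $\bf g$-modules; for $x\in{\bf g}$ and $v\in S^2{\cal H}$:
\begin{equation}
(x\cdot(\varphi\circ j))(v) \;=\; x\cdot\varphi(j(v)) - \varphi(j(x\cdot v)) \;=\; x\cdot\varphi(j(v)) - \varphi(x\cdot j(v)) \;=\; ((x\cdot\varphi)\circ j)(v)
\end{equation}
Applying this map to the values of cochains sends cocycles to cocycles and coboundaries to coboundaries, hence it descends to a map $H^1\left({\bf g},\mbox{Hom}({\bf g},{\cal H})\right)\rightarrow H^1\left({\bf g},\mbox{Hom}(S^2{\cal H},{\cal H})\right)$.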
\paragraph {Step 1: construct an element of $H^1\left({\bf g}, \mbox{Hom}({\bf g}, {\cal H})\right)$} Consider an element of $H^1\left({\bf g}, \mbox{Hom}({\bf g}, {\cal H})\right)$ corresponding to the extension\footnote{Remember that $H^1\left({\bf g}, \mbox{Hom}(L_1,L_2)\right)$ is $\mbox{Ext}(L_1,L_2)$; it corresponds to the extensions \cite{Knapp} of $L_2$ by $L_1$.}: \begin{equation} 0 \longrightarrow {\cal H} \longrightarrow {{\bf g}\wedge {\bf g}\over {\bf g}} \longrightarrow {\bf g} \longrightarrow 0 \end{equation} (Remember that $\cal H$ is defined in Eq. (\ref{DefCalH}).) \paragraph {Step 2: compose it with an intertwiner $S^2{\cal H} \rightarrow {\bf g}$} It was shown in \cite{Bedoya:2010qz} that there exists a $\bf g$-invariant map \begin{equation}\label{FromS2HToG} S^2{\cal H} \rightarrow {\bf g} \end{equation} --- we will review the construction of this map in Section \ref{sec:Intertwiner}. Composing it with the element of $H^1\left({\bf g}, \mbox{Hom}({\bf g}, {\cal H})\right)$ we get a class in $H^1({\bf g}, \mbox{Hom}(S^2{\cal H}, {\cal H}))$. \subsection{Construction of the intertwiner $S^2{\cal H} \rightarrow {\bf g}$}\label{sec:Intertwiner} We will now construct the intertwining operator in Eq. (\ref{FromS2HToG}). \paragraph {Algebraic preliminaries} Suppose that we have an associative algebra $A$. For any $x_1\otimes\cdots\otimes x_{k} \in A^{\otimes k}$ consider their product: \begin{equation} \mu(x_1\otimes\cdots\otimes x_{k}) = x_1 \cdots x_{k}\in A \end{equation} In particular, take $A = \mbox{Mat}(m|n)$ --- the algebra of supermatrices. Let us view the exterior product $\Lambda^k A = A\wedge\cdots\wedge A$ as a subspace in $A^{\otimes k}$. \vspace{5pt} \begin{center}\begin{minipage}{0.9\textwidth}{\small For any linear superspace $L$, there is a natural action of the symmetric group $S_n$ on the tensor product $L^{\otimes n}$. For example, when $n=2$, the transposition $\tau_{12}$ acts as: $\tau_{12} v\otimes w = (-)^{vw}w\otimes v$. 
The exterior product $\Lambda^n L$ is the subspace of $L^{\otimes n}$ on which permutations act by multiplication by the sign of the permutation. For example, for $n=2$ it is generated by expressions $v\wedge w = {1\over 2}(v\otimes w - (-)^{vw}w\otimes v)$.}\end{minipage}\end{center} \vspace{5pt} \noindent For any element $x_1\wedge\cdots\wedge x_{2k} \in \Lambda^{2k} A$ we define: \begin{equation} \langle x_1\wedge\cdots\wedge x_{2k}\rangle = \mu(x_1\wedge\cdots\wedge x_{2k}) \end{equation} We observe that: \begin{align} \mbox{STr}\langle x_1\wedge\cdots\wedge x_{2k}\rangle = 0 \\ \langle{\bf 1}\wedge x_2 \wedge \cdots \wedge x_{2k}\rangle = 0 \end{align} Therefore, the operation $\langle\_\rangle$ defines a map: \begin{equation} \Lambda^{2k}{\bf pgl}(m|n) \rightarrow {\bf sl}(m|n) \end{equation} We define the ``split Casimir operator'': \begin{equation} C = k^{ab} t_a\otimes t_b\;\in \; {\bf gl}(m|n)\otimes {\bf gl}(m|n) \end{equation} where $\{t_a\}$ are generators and $k^{ab}$ are some coefficients, which we now define. 
It satisfies: \begin{equation} k^{ab}t_a\otimes [t_b,x] + (-)^{bx} k^{ab}[t_a,x]\otimes t_b = 0 \end{equation} In particular, if we think of the generators as matrices: \begin{equation} [k^{ab} t_at_b\,,\,t_c] = 0 \end{equation} In matrix notation: \begin{equation} C^a{}_b{}^c{}_d = (-)^c\delta^a{}_d \delta^c{}_b \end{equation} Notice that for any matrix $X$: \begin{equation}\label{in-the-middle} C^a{}_b{}^c{}_d X^b{}_c = \mbox{STr}(X) \; \delta^a{}_d \end{equation} We define: \begin{equation} \delta x = k^{ab} t_a \otimes [t_b,x] = k^{ab} t_a \wedge [t_b,x] = (-)^{bx + 1} k^{ab}[t_a,x]\wedge t_b \end{equation} \paragraph {Description of $\cal H$} In this language, the representation $\cal H$ in which beta-deformations transform consists of expressions: \begin{align} \sum_i x_i\wedge y_i \in & {\bf sl}(4|4)\wedge {\bf sl}(4|4) \\ \mbox{\tt\small modulo: } & x\wedge {\bf 1}\simeq 0 \label{PEquivalence}\\ \mbox{\tt\small and } & \delta x\simeq 0 \label{DeltaEquivalence} \end{align} \paragraph {Description of the intertwiner} For $B_1$ and $B_2$ belonging to $\cal H$, we define: \begin{align} f\;:\; & S^2{\cal H} \rightarrow {\bf g} \label{Intertwiner} \\ f(B_1\wedge B_2) \;=\;& \langle B_1\wedge B_2\rangle \; \mbox{mod} \; {\bf 1} \end{align} The correctness with respect to the equivalence relation of Eq. (\ref{PEquivalence}) follows immediately. It remains to verify the correctness with respect to Eq. (\ref{DeltaEquivalence}). 
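For purely even matrices, where all super-signs are trivial, the stated properties of $\langle\,\cdot\,\rangle$ and of the split Casimir can be checked numerically. The following Python sketch is our own consistency check, with ${\bf gl}(4)$ standing in for ${\bf gl}(4|4)$ and the ordinary trace for $\mbox{STr}$; it verifies the tracelessness of $\langle\cdots\rangle$, the vanishing when one slot is $\bf 1$, the invariance of $C$, and the contraction identity of Eq. (\ref{in-the-middle}):

```python
import numpy as np
from itertools import permutations
from math import factorial

rng = np.random.default_rng(0)
n = 4  # gl(4) in place of gl(4|4): all super-signs are trivial here

def sign(p):
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def angle(xs):
    """<x_1 ^ ... ^ x_k>: the product mu of the antisymmetrization."""
    total = np.zeros((n, n))
    for p in permutations(range(len(xs))):
        total += sign(p) * np.linalg.multi_dot([xs[i] for i in p])
    return total / factorial(len(xs))

x1, x2, x3, x4 = (rng.standard_normal((n, n)) for _ in range(4))

# STr<...> = 0 (here the ordinary trace) and <1 ^ ...> = 0:
assert abs(np.trace(angle([x1, x2, x3, x4]))) < 1e-9
assert np.allclose(angle([np.eye(n), x2, x3, x4]), 0)

# Invariance of the split Casimir C = sum_{ij} E_ij (x) E_ji,
# the bosonic version of  k^{ab} t_a (x) [t_b,x] + k^{ab} [t_a,x] (x) t_b = 0:
comm = lambda a, b: a @ b - b @ a
E = [[np.outer(np.eye(n)[i], np.eye(n)[j]) for j in range(n)] for i in range(n)]
inv = sum(np.kron(E[i][j], comm(E[j][i], x1)) + np.kron(comm(E[i][j], x1), E[j][i])
          for i in range(n) for j in range(n))
assert np.allclose(inv, 0)

# The contraction identity C^a_b^c_d X^b_c = Tr(X) delta^a_d:
I = np.eye(n)
C = np.einsum('ad,cb->abcd', I, I)
assert np.allclose(np.einsum('abcd,bc->ad', C, x1), np.trace(x1) * I)
```

These are the purely even shadows of the super-identities above; the super case differs only by the signs $(-)^{bx}$, $(-)^c$, etc.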
Indeed, under the condition $[y,z]=0$ and $\mbox{STr}(y)=\mbox{STr}(z)=0$ we have: \begin{align} & 24 f(\delta x, y\wedge z) \;=\; 24 \left\langle \delta x \wedge y\wedge z \right\rangle \;=\; 24 \left\langle k^{ab}t_a\wedge [t_b,x] \wedge y\wedge z \right\rangle\;=\; \nonumber\\ \;=\; & \phantom{+} (-)^{y(b+x) + bx}\langle k^{ab} t_a yx t_b z \rangle + (-)^{bx + by}\langle k^{ab} t_a xy t_b z \rangle + \nonumber\\ & + \; (-)^{yx + z(x+b) + bx}\langle k^{ab}yt_a zx t_b \rangle + (-)^{yx + zb + bx}\langle k^{ab}y t_a xz t_b \rangle\;- \nonumber\\ & - \; (-)^{zy} (z\leftrightarrow y)\;= \nonumber\\ \;=\; & 2\mbox{STr}(xy)\;z \;+\; (-)^{yz}2\mbox{STr}(xz)\;y \;-\;(-)^{zy} (z\leftrightarrow y)\;=\; 0 \end{align} (We used Eq. (\ref{in-the-middle}).) Therefore, we constructed a well-defined intertwining operator: \begin{equation} S^2{\cal H} \longrightarrow {\bf g} \end{equation} where ${\cal H} = {({\bf g}\wedge {\bf g})_0\over {\bf g}}$. \paragraph {Intertwiner maps $H^1\left({\bf g}, \mbox{Hom}({\bf g}, {\cal H})\right)$ to $H^1\left({\bf g}, \mbox{Hom}(S^2{\cal H}, {\cal H})\right)$} Our intertwining operator $f\;:\;S^2{\cal H}\rightarrow {\bf g}$ generates a short exact sequence: \begin{align} & 0\rightarrow \mbox{Hom}({\bf g}, {\cal H}) \rightarrow \mbox{Hom}(S^2{\cal H}, {\cal H}) \rightarrow \mbox{Hom}(S^2_0{\cal H}, {\cal H}) \rightarrow 0 \\ &\mbox{\tt\small where } S^2_0{\cal H} =\mbox{ker} f \end{align} and therefore a long exact sequence: \begin{align} 0 & \longrightarrow H^0\left({\bf g}, \mbox{Hom}({\bf g},{\cal H})\right) \longrightarrow H^0\left({\bf g}, \mbox{Hom}(S^2{\cal H},{\cal H})\right) \\ & \longrightarrow H^0\left({\bf g}, \mbox{Hom}(S_0^2{\cal H},{\cal H})\right) \longrightarrow \\ & \longrightarrow H^1\left({\bf g}, \mbox{Hom}({\bf g},{\cal H})\right) \longrightarrow H^1\left({\bf g}, \mbox{Hom}(S^2{\cal H},{\cal H})\right) \longrightarrow \ldots \end{align} But $H^0\left({\bf g}, \mbox{Hom}(S_0^2{\cal H},{\cal H})\right) = 0$, therefore 
composition with $f$ is an injective map $H^1\left({\bf g}, \mbox{Hom}({\bf g},{\cal H})\right) \longrightarrow H^1\left({\bf g}, \mbox{Hom}(S^2{\cal H},{\cal H})\right)$. We will now show that $H^0\left({\bf g}, \mbox{Hom}(S_0^2{\cal H},{\cal H})\right) = 0$, {\it i.e.} there are no intertwining operators. Suppose that there {\em is} an intertwiner \begin{equation} \phi\;:\; S_0^2{\cal H} \rightarrow {\cal H} \end{equation} Let us compute it on a decomposable element $(x_1\wedge x_2)\bullet (y_1\wedge y_2) \in {\cal H}\bullet{\cal H}$. Here $\bullet$ denotes the symmetrized tensor product. The only way of contracting indices resulting in an antisymmetric tensor is: \begin{equation} \{x_2,y_1\}\otimes [x_1,y_2] - (-)^{(x_2+y_1)(x_1+y_2)}[x_2,y_1]\otimes \{x_1,y_2\} \end{equation} antisymmetrized over both $x_1\leftrightarrow x_2$ and $y_1\leftrightarrow y_2$. (The terms like $[x,y]\otimes [x,y]$ belong to $S^2{\bf g}$ rather than $\Lambda^2{\bf g}$.) But anticommutators are not allowed, because $\phi$ should be correctly defined with respect to $x\simeq x + {\bf 1}$. \subsection{Structure of $S^2{\cal H}$} Let us denote by: \begin{equation} (S^2{\bf sl}(4|4))_{\rm STL} \end{equation} the subspace of $S^2({\bf sl}(4|4))$ consisting of elements $x\bullet y$ such that $\mbox{STr}(xy)=0$. The map: \begin{align} & S^2{\cal H}\rightarrow (S^2({\bf sl}(4|4)))_{\rm STL} \label{S2HtoS2sl}\\[5pt] &(x_1\wedge x_2)\bullet (y_1\wedge y_2)\;\mapsto \nonumber\\ &\mapsto\; (-)^{x_2 y_1 + 1} [x_1,y_1]\bullet [x_2,y_2] + (-)^{x_1 y_1 + x_1 x_2} [x_2,y_1]\bullet [x_1,y_2] \nonumber \end{align} is an intertwiner. There is a map \begin{align} \left[S^2{\bf sl}(4|4)\right]_{\rm STL} & \longrightarrow {\bf sl}(4|4) \label{S2sltosl}\\[5pt] x\bullet y & \mapsto xy \end{align} The composition of the map of Eq. (\ref{S2HtoS2sl}) and the map of Eq. (\ref{S2sltosl}), combined with the projector ${\bf sl}(4|4) \longrightarrow {\bf psl}(4|4)$, equals the map $f$ of Eq. 
(\ref{Intertwiner}): \begin{align} & f\;:\; \left(\; S^2{\cal H} \longrightarrow \left[S^2{\bf sl}(4|4)\right]_{\rm STL} \longrightarrow {\bf sl}(4|4) \longrightarrow {\bf psl}(4|4) \; \right) \end{align} By definition $S^2_0{\cal H}=\mbox{ker}\,f$. This means that $S^2_0{\cal H}$ has some invariant subspaces: \begin{align} & \mbox{ker}\; \left(\; S^2{\cal H} \longrightarrow \left[S^2{\bf sl}(4|4)\right]_{\rm STL} \longrightarrow {\bf sl}(4|4) \; \right) \\ & \mbox{ker}\; \left(\; S^2{\cal H} \longrightarrow \left[S^2{\bf sl}(4|4)\right]_{\rm STL} \; \right) \end{align} This finer structure does not seem to be relevant for the leading term in the beta-function. \subsection{Role of $\psi$ in anomaly cancellation} Our construction of the cocycle as a product: \begin{equation} H^1\left({\bf g}\;,\;\mbox{Hom}({\bf g}, {\cal H})\right) \otimes H^0\left({\bf g}\;,\;\mbox{Hom}({\cal H}\otimes {\cal H}, {\bf g})\right) \rightarrow H^1\left({\bf g}\;,\;\mbox{Hom}({\cal H}\otimes {\cal H}, {\cal H})\right) \end{equation} suggests that it participates in anomaly cancellation. It was explained in \cite{Bedoya:2010qz,Mikhailov:2012id} that at the level of the classical sigma-model there is no reason for the parameter of the beta-deformation to have zero internal commutator. From the point of view of the classical worldsheet, the beta-deformations live in ${\bf g}\wedge {\bf g}\over {\bf g}$, and not necessarily in $({\bf g}\wedge {\bf g})_0\over {\bf g}$. But at the quantum level, on the curved worldsheet, the deformations with nonzero internal commutator suffer from a one-loop anomaly. This suggests the following anomaly cancellation scenario. Let us start with the linearized {\em physical} ({\it i.e.} with zero internal commutator) beta-deformation, and start constructing, order by order in the deformation parameter $\epsilon$, the corresponding nonlinear solution. 
The classical construction goes fine, but at the second order in $\epsilon$ we may encounter a one-loop anomaly of precisely the right form to be cancelled by a non-physical beta-deformation. (``Nonphysical'' means with non-zero internal commutator.) Then, we just add, with the coefficient $\epsilon^2$, some nonphysical beta-deformation, to cancel that anomaly. But the subtlety is that the extension of physical deformations by nonphysical ones: \begin{equation} 0 \longrightarrow {({\bf g}\wedge {\bf g})_0\over {\bf g}} \longrightarrow {{\bf g}\wedge {\bf g}\over {\bf g}} \longrightarrow {\bf g} \longrightarrow 0 \end{equation} is not split. In other words, it is impossible to lift $\bf g$ back to ${{\bf g}\wedge {\bf g}\over {\bf g}}$ in a way preserving symmetries. In this sense, the anomaly may break global symmetries. In our language this means that the nontrivial $v_1$ of Eq. (\ref{ExpansionOfVectorField}) may be induced by quantum corrections at the first order in $\alpha'$. But our conjecture is that the nontrivial $v_1$ is present already at the classical level and its cohomology class participates in Eq. (\ref{CocycleInS2HH}). Both conjectures have to be settled by explicit computations, which we have not done. \subsection{Outline of computation}\label{sec:OutlineOfComputation} We believe that the best framework for actually computing $\Psi$ and proving the conjectured Eq. (\ref{QPhiIsPsiPsi}) is the pure spinor formalism. This can be done using the homological perturbation theory developed in \cite{Bedoya:2010qz,Benitez:2018xnh}. The basic idea is to consider the deformation of the pure spinor BRST operator: \begin{equation} Q_{ps} = Q_{ps}^{(0)} + \epsilon Q_{ps}^{(1)} + \epsilon^2 Q_{ps}^{(2)} + \ldots \end{equation} The explicit expression for $Q_{ps}^{(1)}$ was obtained in \cite{Bedoya:2010qz,Benitez:2018xnh}. The next step is to find $Q_{ps}^{(2)}$ such that $Q_{ps}$ is nilpotent up to terms of the order $\epsilon^{\geq 3}$. 
This was done in \cite{Bedoya:2010qz,Benitez:2018xnh}, but only for a special class of deformations (essentially, those leading to the integrable model, see \cite{Berenstein:2004ys,Bundzik:2005zg,Klimcik:2008eq,Delduc:2013qra,Hollowood:2014qma,Hoare:2015wia,Benitez:2018xnh}). The deviation from $\bf g$-covariance would arise for non-integrable deformations, {\it i.e.} those cases where the $Q_{ps}^{(2)}$ was {\em not} found in \cite{Bedoya:2010qz,Benitez:2018xnh}. \subsection{Other finite-dimensional deformations} Besides beta-deformations, there are infinitely many other finite-dimensional deformations \cite{Mikhailov:2011af,Mikhailov:2017uoh}. The formalism developed in this paper should also be applicable to them. \section{Comparison to boundary S-matrix}\label{sec:BoundarySMatrix} \subsection{Periodic array of operator insertions} Let ${\cal O}_1$ and ${\cal O}_2$ be two local operators, and $\rho_1$ and $\rho_2$ some c-number densities with support in sufficiently small compact space-time regions. Let us consider the following deformation of the action ({\it cf.} Eq. (\ref{SInv})): \begin{figure}[!htb] \center{\includegraphics[width=\textwidth] {array.png}} \caption{\label{fig:TwoArrays} Two periodic arrays of insertions} \end{figure} \begin{equation} \delta S = \epsilon_1 \sum_{n=-\infty}^{\infty} \left((-1)^FS\right)^n \int d^4x \rho_1(x){\cal O}_1 + \epsilon_2 \sum_{n=-\infty}^{\infty} \left((-1)^FS\right)^n \int d^4x \rho_2(x){\cal O}_2 \end{equation} where $\epsilon_1$ and $\epsilon_2$ are two nilpotent coefficients. This is, essentially, an infinite periodic array (designed to satisfy Eq. (\ref{SInv})) of compactly supported deformations. At the linearized level, {\it i.e.} assuming $\epsilon_1\epsilon_2=0$, the two terms in the deformation transform in two infinite-dimensional representations, ${\cal H}_1$ and ${\cal H}_2$. 
But if we do not assume $\epsilon_1\epsilon_2=0$, then there will be a term in the SUGRA solution proportional to $\epsilon_1\epsilon_2$, and it will {\em not} transform in ${\cal H}_1\otimes {\cal H}_2$. We can consider the space of {\em all possible} completions of linearized solutions to nonlinear solutions. The terms proportional to $\epsilon_1\epsilon_2$ form a linear space $X$, which, as a representation of $\bf g$, is an extension: \begin{equation} 0 \longrightarrow \left[\begin{array}{c} \mbox{\tt\small solutions of} \cr \mbox{\tt\small linearized equations} \end{array}\right] \longrightarrow X \longrightarrow {\cal H}_1\otimes {\cal H}_2 \longrightarrow 0 \end{equation} Even if we restrict to ${\bf g}_{\rm even}$, there are such nontrivial extensions. The corresponding cocycle: \begin{equation} \psi\in H^1\left( {\bf g}_{\rm even} \;,\; \mbox{Hom}\left( {\cal H}_1\otimes {\cal H}_2\;,\; \left[\begin{array}{c} \mbox{\tt\small solutions of} \cr \mbox{\tt\small linearized} \cr \mbox{\tt\small equations} \end{array} \right] \right) \right) \end{equation} is nontrivial: the average of $\psi(\partial/\partial t)$ over the period (see Eq. (\ref{SecondDerivativeOfM})) is nonzero. Let us insert, instead of an infinite array, just two operators: ${\cal O}_1$ and ${\cal O}_2$. Consider the ``retarded'' solution excited by them. The waves will keep bouncing from the boundary of AdS, interacting in the middle. Therefore the $\epsilon_1\epsilon_2$ part will grow in global time like the square of $t$. We consider this a complication. To simplify the analysis, let us make four insertions (instead of just two): \begin{equation}\label{FourInsertions} \epsilon_1{\cal O}_1 \;,\quad \epsilon_2{\cal O}_2 \;,\quad \epsilon_1(-1)^FS{\cal O}_1 \;,\quad \epsilon_2 (-1)^FS{\cal O}_2 \end{equation} Then the terms linear in $\epsilon_1$, as well as the terms linear in $\epsilon_2$, will cancel in the future. 
But, before they cancel, there will be some interaction, generating terms proportional to $\epsilon_1\epsilon_2$. In the future, the $\epsilon_1\epsilon_2$-terms become a solution of the free equation. This is the ``retarded'' solution generated by these insertions, {\it i.e.} the one which is pure $AdS_5\times S^5$ in the past. \subsection{Monodromy {\it vs} boundary S-matrix} Let us suppose that $\rho_1$ is a delta-function at the point $b_1$ on the boundary, and $\rho_2$ a delta-function at the point $b_2$. Generally speaking, every point $b$ on the boundary of AdS defines a Poincare patch, which can be defined as follows. Consider the future of $b$, and denote it ${\cal F}(b)$ (a subset of AdS). Notice that for any $n>0$: $S^n {\cal F}(b) \subset {\cal F}(b)$. Consider the ``first fundamental domain'' of ${\cal F}(b)$ with respect to the action of $S$, {\it i.e.} the set of points $x\in {\cal F}(b)$ such that $S^{-1}x\notin {\cal F}(b)$. This is the Poincare patch ${\cal P}(b)$ corresponding to $b$ (the beige area on Figure \ref{fig:defS}). For the retarded solution corresponding to the insertions (\ref{FourInsertions}) all the interaction happens inside ${\cal P}(b_1)\cap {\cal P}(b_2)$. This implies that the average of $\psi(\partial/\partial t)$, in the sense of Eq. (\ref{SecondDerivativeOfM}), can be computed as the integral over ${\cal P}(b_1)\cap {\cal P}(b_2)$. In fact, since the boundary-to-bulk propagator has support on the light cone, see Eq. (\ref{BoundaryToBulkPropagator}), the integral is supported on $\partial{\cal P}(b_1)\cap \partial{\cal P}(b_2)$. The integrand is the retarded propagator times the triple-interaction vertex. On the other hand, in the definition of the boundary S-matrix \cite{Witten:1998qj} the integration of the interaction vertex is over the whole Euclidean AdS. It is not clear to us how these two definitions are related. 
\subsection{Normalizable and non-normalizable contributions to monodromy}\label{sec:NormalizableAndNonNormalizableMonodromy} Notice that $\partial{\cal P}(b_1)\cap \partial{\cal P}(b_2)$ goes all the way to the boundary, therefore there is no reason why the monodromy would be a normalizable solution. However, the non-normalizable part is due to waves bouncing back and forth in AdS, reflecting from the boundary. Therefore all the non-normalizable terms can be damped by making adjustments, of the order $\epsilon_1\epsilon_2$, of the boundary conditions. In other words, we correct the defining Eq. (\ref{FourInsertions}) by adding some operators of the order $\epsilon_1\epsilon_2$. Then, the monodromy of the modified array will be a normalizable solution. We used this in Section \ref{sec:TrivialMonodromy}. \section{Discussion and open questions}\label{sec:Discussion} A general theme of AdS/CFT is the comparison of field theory computations with supergravity computations. The analysis of the present paper is incomplete, and potentially leaves a {\em mismatch} between field theory and supergravity. Indeed, on the field theory side we use the formalism of Sections \ref{sec:IntroRenorm}, \ref{sec:ActionOfGOnDefs}, \ref{sec:GeometricaAbstraction}. While on the supergravity side, we use the formalism of Sections \ref{sec:GeometricaAbstractionAdS}, \ref{sec:NormalFormA}, which is similar but different. \subsection{Is it true that symmetries of QFT naturally act on the space of its deformations?} It is essential for our reasoning that there is a {\em natural action} of the symmetries of QFT on the space of its deformations, Section \ref{sec:ActionOfGOnDefs}. This action should be natural, {\it i.e.} it should not depend on how we describe the deformations. Strictly speaking, our reasoning in Section \ref{sec:ActionOfGOnDefs} used a particular way of thinking about the deformations. Therefore, we are in danger of using an unnatural definition. 
\subsection{Gauge group is not an invariant} Renormgroup invariants in QFT match certain cohomological invariants of the action of the group of gauge transformations of supergravity, as described in Section \ref{sec:NormalFormA}. But the gauge group is not actually an invariant of the theory\footnote{Gauge transformations characterize the redundancy of the given Lagrangian description of the theory. Different Lagrangian descriptions of the same theory can have slightly different gauge symmetries.}. Therefore, we are in danger of being non-invariant. However, the invariants which we describe in Section \ref{sec:NormalFormA} actually depend, in some sense, only on the cohomology of the gauge transformations. Our construction uses a certain ``flabbiness'' of the algebra of gauge transformations, essentially allowing one to reduce the cohomologies to those of $\bf g$ (using Shapiro's lemma). We hope that the cohomologies {\em are} invariant. \subsection{Maybe there is no mismatch} If our conjectures in Section \ref{sec:NormalFormA} hold, namely: \begin{itemize} \item There exists $M_{gauge-fixed}$, and \item The solution of the MC equation does not depend on the choice of $M_{gauge-fixed}$ \end{itemize} then the invariants of Section \ref{sec:NormalFormA} just match the invariants on the QFT side, so there is no discrepancy. We think that this is, most likely, what actually happens. \subsection{Computation of the MC invariants} As we already mentioned in Section \ref{sec:OutlineOfComputation}, an important open question is to develop the homological perturbation theory of \cite{Bedoya:2010qz,Benitez:2018xnh} and to actually compute the Maurer-Cartan invariants which we defined. \section*{Acknowledgments} We want to thank Alexei Rosly for many useful discussions. This work was supported in part by FAPESP grant 2014/18634-9 ``Dualidade Gravita\c{c}\~ao/Teoria de Gauge'' and in part by RFBR grant 18-01-00460 ``String theory and integrable systems''.
\section{Introduction} We consider perfect fluid solutions of the Einstein field equations, \begin{equation} R_{ab} - \smfrac{1}{2} R \, g_{ab} = T_{ab} , \label{EFE} \end{equation} on a 4-dimensional spacetime $(M, g)$, with energy-momentum tensor given by \begin{equation}\label{intro_Tab} T_{ab}= (\mu+p) u_{a}u_{b}+p g_{ab}, \end{equation} $\mu$ and $p$ being respectively the energy density and pressure of the fluid and the unit time-like vector field $u_a$ being the fluid's (covariant) 4-velocity. As is well known, the covariant derivative of $u_a$ can be decomposed as \begin{equation}\label{uadecomp} u_{a;b}=\smfrac{1}{3}\theta (g_{ab}+u_a u_b)+\sigma_{ab}+\omega_{ab}-\dot{u}_{a}u_{b}, \label{eq2} \end{equation} where $\theta$ is the fluid's (rate of volume) expansion, $\dot{u}_{a}$ is the acceleration and $\sigma_{ab}$, $\omega_{ab}$ are respectively the shear and vorticity tensors, which are uniquely defined by (\ref{eq2}) and the properties \begin{equation} u^a \dot{u}_{a} = 0,\ u^a\omega_{ab} = \ u^a\sigma_{ab} =0, \ \sigma_{[ab]}=\omega_{(ab)}=0, \ {\sigma^a}_a=0. \end{equation} The physical significance of these so-called \emph{kinematical quantities} has been discussed by many authors, see for example \cite{Ellis1971}. Among well-known explicit solutions of the Einstein field equations \cite{Kramer}, in which some of these quantities vanish, we note the following \textit{shear-free} ($\sigma_{ab}=0$) solutions: the Einstein static universe ($\theta=\dot u_a=\sigma_{ab}=\omega_{ab}=0$), FLRW universes ($\theta \neq 0$, $\dot u_a=\sigma_{ab}=\omega_{ab}=0$) and the G\"odel universe ($\theta = \dot u_a = \sigma_{ab}=0$, $\omega_{ab} \neq 0$). Imposing a \textit{barotropic equation of state} $p = p(\mu)$, a common feature of the above examples, leads to extra restrictions on the solution space: for example all barotropic and shear-free ($\sigma_{ab}=0$) perfect fluids with non-vanishing expansion and vanishing vorticity are known explicitly \cite{CollinsKV}.
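For later reference we recall how the kinematical quantities are recovered from (\ref{eq2}); these are the standard projection formulae (see e.g.~\cite{Ellis1971}), written with the spatial projector $h_{ab}=g_{ab}+u_a u_b$:
\begin{equation*}
\theta = {u^a}_{;a}, \qquad \dot{u}_a = u_{a;b}u^b, \qquad
\omega_{ab} = {h_a}^c {h_b}^d u_{[c;d]}, \qquad
\sigma_{ab} = {h_a}^c {h_b}^d u_{(c;d)} - \smfrac{1}{3}\theta h_{ab}.
\end{equation*}
Indeed, contracting (\ref{eq2}) with $g^{ab}$, with $u^b$ and with the projectors immediately reproduces these expressions.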
This is not the case for barotropic and shear-free perfect fluids with vanishing expansion and non-vanishing vorticity \cite{Karimian2012}, although here too large classes of solutions exist (for example all rigidly rotating axisymmetric and stationary perfect fluids belong to this family). Remarkably, barotropic and shear-free perfect fluids in which both expansion and vorticity are non-zero seem to be confined to the limiting situation of `$\Lambda$-type' models (meaning $p=-\mu=constant$), the only example known to us being a Bianchi IX model found by Obukhov et al.~\cite{Obukhov}. This brings us to the subject of the present paper, the so-called \emph{shear-free fluid conjecture} which claims that \begin{quote} general relativistic, shear-free perfect fluids obeying a barotropic equation of state such that $p + \mu \neq 0$ are either non-expanding or non-rotating. \end{quote} During the last few years there has been a renewed interest in this conjecture, which, if true, would be a remarkable consequence of the full Einstein field equations: on the one hand Newtonian perfect fluids with a barotropic equation of state, which are rotating and expanding but non-shearing, are known to exist \cite{Ellis2011,HeckmannSchucking, Narlikar,SenSopSze}, while on the other hand in, for example, $f(R)$ gravity there is no counterpart of the conjecture either \cite{Sofuoglu}. The first suggestion that the vanishing of shear could play a decisively restricting role in the construction of expanding and rotating perfect fluids appeared in 1950, without proof, in a somewhat obscure contribution by G\"odel \cite{Godel2} on homogeneous rotating cosmological models. A precise formulation of G\"odel's claim was given in 1957 by Sch\"ucking \cite{Schucking}, who gave a short coordinate-based proof that spatially homogeneous dust models ($p=0$) could be either rotating or expanding, but not both.
The condition of vanishing pressure was dropped by Banerji \cite{Banerji}, who gave a similar coordinate-based proof for (tilted) spatially homogeneous perfect fluids obeying a $\gamma$-\textit{law equation of state}, $p=(\gamma-1)\mu$, with $\gamma - 1 \neq \frac{1}{9}$ \footnote{the reason why $\gamma - 1 = \frac{1}{9}$ is special was clarified in \cite{NorbertCQG1999}, where a proof was also given for non-spatially homogeneous spacetimes}. Sch\"ucking's result was generalized in 1967 by Ellis \cite{Ellis1}, who used the orthonormal tetrad formalism to show that the restriction of spatial homogeneity was redundant for dust spacetimes (in \cite{WhiteCollins} it was observed that Ellis' result remained valid in the presence of a cosmological constant). In \cite{TreciokasEllis} Treciokas and Ellis proved, again using a combination of an orthonormal tetrad formalism and an adapted choice of coordinates, that the conjecture also held true for the equation of state $p=\frac{1}{3} \mu$, a result which was generalised by Coley \cite{Coley} to allow for a possible non-zero cosmological constant. In \cite{TreciokasEllis} an outline of an argument was presented, indicating the validity of the conjecture for perfect fluids in which the \textit{acceleration potential} $r = \exp \int_{p_0}^p \frac{1}{\mu+p} \textrm{d} p$ satisfies an equation of the form $\dot{r}= \beta(r)$, where the \textit{dot-operator} is the derivative along the fluid 4-velocity. This result (which implies the validity of the conjecture for a general equation of state, once one additionally assumes spatial homogeneity, as was the case in \cite{Banerji, KingEllis, Whiteth}) will play a key role in the sequel.
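As a quick consistency check (an elementary computation, not taken from the cited references): differentiating $\ln r$ along the flow and using the conservation law $\dot{\mu}+(\mu+p)\theta=0$ gives
\begin{equation*}
\frac{\dot{r}}{r} = \frac{\dot{p}}{\mu+p} = \frac{p'\,\dot{\mu}}{\mu+p} = -p'(\mu)\,\theta ,
\end{equation*}
so that, for $p'\neq 0$ (when $r$ is locally invertible as a function of $\mu$), a functional relation $\theta=\theta(\mu)$ is equivalent to an equation of the form $\dot{r}=\beta(r)$. For a $\gamma$-law equation of state $p=(\gamma-1)\mu$ one finds explicitly $r\propto \mu^{(\gamma-1)/\gamma}$, since $\int \frac{\textrm{d} p}{\mu+p}=\frac{\gamma-1}{\gamma}\ln \mu + \textrm{constant}$.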
However the details of the underlying proof remained veiled until 1988, when Lang and Collins \cite{Lang,Langth} explicitly showed that $\omega \theta=0$ indeed follows, provided there exists a functional relation of the form $\theta=\theta(\mu)$ (which, by the conservation law $\dot{\mu} + (\mu+p) \theta=0$, is equivalent to $\dot{r}=\beta (r)$). A `covariant' proof of this same result was given by Sopuerta in \cite{Sopuerta1998}. While Treciokas and Ellis already questioned the possible existence of rotating and expanding perfect fluids with $p=p(\mu)$, their non-existence was explicitly conjectured by Collins \cite{CollinsKV}, following a series of papers in which the conjecture was proved successively for the cases where the vorticity vector is parallel to the acceleration (see \cite{WhiteCollins}, or \cite{SenSopSze} for a fully covariant proof), or in which the Weyl tensor is purely magnetic \cite{Collins1984} or purely electric \cite{Langth, CyganowskiCarminati}. Since then the conjecture has also been proved in a large number of special cases, such as $\textrm{d} p/\textrm{d} \mu = -\frac{1}{3}$ \cite{CyganowskiCarminati, Langth, slob, NorbertCQG1999}; $\theta=\theta(\omega)$ \cite{Sopuerta1998}; Petrov types N \cite{Carminati1990} and III \cite{CarminatiCyganowski1996,CarminatiCyganowski1997}; the existence of a conformal Killing vector parallel to the fluid flow \cite{Coley}; the Weyl tensor having either a divergence-free electric part \cite{NorbertKarimianCarminatiHuf2012}, or a divergence-free magnetic part, in combination with an equation of state which is of the $\gamma$-law type \cite{NorbertCarminatiKarimian2007} or which is sufficiently generic \cite{CarminatiKarimianNorbertVu2009}, and in the case where the Einstein field equations are linearised about a FLRW background \cite{Nzioki}.
A major step was achieved recently by the second author \cite{Slobodeanu2014}, who proved the conjecture for an arbitrary $\gamma$-law equation of state (except for the cases $\gamma-1 = -\smfrac{1}{5}, -\smfrac{1}{6},-\smfrac{1}{11},-\smfrac{1}{21},\smfrac{1}{15}, \smfrac{1}{4}$) and a vanishing cosmological constant. In this approach, reminiscent of Pantilie's classification result on Einstein manifolds \cite{Pantilie2002}, the Einstein field equations were seen as a second order differential system in the length scale function, with the integrability conditions for this system allowing one to prove the conjecture \emph{via} some sufficient conditions in terms of \emph{basic functions}, i.e.~functions that are constant along the fluid flow.\\ Finally, in a recent paper by Carminati~\cite{Carminati2015}, an attempted proof was given for a linear equation of state and vanishing cosmological constant. However, this proof is invalid, as inappropriate use was made of Maple's \texttt{solve} command, which for parametric polynomial systems only returns generic solutions\footnote{namely solutions valid in an open set of the parameter space, i.e.~\texttt{solve(a*x = 0,x)} will only return \texttt{x = 0}; a bug in Maple's \texttt{solve} code (Maple support, private communication) also prevents the issuing of a warning message that solutions might have been lost, even if the \texttt{parametric = full} option is used.}. Furthermore, the set of equations used in \cite{Carminati2015} was under-determined, a fact made obvious by inspection of the special case in which there is a Killing vector along the vorticity, leading to the simplifications $\textrm{u3 = f3 = T13 = T23 = g3 = m3 = n = 0}$.\\ In the present paper we will complete the proof of \cite{Slobodeanu2014}, covering the exceptional values of $\gamma$ and allowing also for a non-zero cosmological constant, or, equivalently, generalising the equation of state to the form $ p=(\gamma -1) \mu + p_0$.
Inclusion of the constant term is important, first of all because the analysis of the conjecture for a general equation of state splits in a natural way into two branches: either $p''=\textrm{d}^2 p / \textrm{d} \mu^2 =0$, leading to $ p=(\gamma -1) \mu + p_0$, or $p'' \neq 0$, with the analysis of the second case leaning heavily on the first. A second justification for including the $p_0$ term is that the dimension of the solution space of a set of exact solutions of the Einstein field equations, obtained by imposing kinematic restrictions, may change drastically upon the inclusion of a non-zero cosmological constant. A typical example is provided by the Petrov type I \emph{silent universes}, for which the orthogonal spatially homogeneous Bianchi type I metrics most likely \cite{Sopuerta1997} are the only admissible metrics when $\Lambda=0$, but which for $\Lambda >0$ have been shown \cite{NorbertWylleman2004} to contain a peculiar set of non-OSH models.\\ In addition we generalize the formalism of \cite{Slobodeanu2014} to a general equation of state and we present some theorems, which not only will play a key role in the present proof for a linear equation of state, but which will likely also be useful when tackling the conjecture in its full generality, when $p=p(\mu)$ is an arbitrary function ($p \neq-\mu$) of the matter density. These theorems tell us that the conjecture is valid provided certain algebraic restrictions are obeyed by the kinematical quantities, or that, if the conjecture does \emph{not} hold, there exists a Killing vector along the vorticity. In the latter case the equations describing the problem simplify dramatically, but the accompanying loss of information turns this sub-case, as already remarked by Collins \cite{CollinsKV}, into an exceptionally elusive one.
The simplest of these criteria (Corollary \ref{co1}) says that, for an expanding and rotating perfect fluid obeying a barotropic equation of state, the existence of a Killing vector along the vorticity is equivalent to the acceleration being orthogonal to the vorticity.\\ We begin by introducing in section 2 the necessary notations and conventions, while in section 3 we make the link with the formalism used in \cite{Slobodeanu2014} and present the governing equations for the case of an arbitrary equation of state. In section 4 we prove the general theorems mentioned above. In section 5 we prove the conjecture for the case of a linear equation of state, by splitting the argument according to whether the acceleration is orthogonal to the vorticity or not, in Theorems \ref{Theorem4} and \ref{Theorem5}. The last sections are dedicated to conclusions and technical Appendices. \section{Notations and fundamental equations} We introduce at each point of spacetime an orthonormal tetrad $(\w{e}_a)=(\w{e}_0, \w{e}_\alpha)$ with the time-like unit vector $\w{e}_0$ coinciding with the fluid 4-velocity $\w{u}$ (henceforth Latin indices are tetrad indices taking the values 0,1,2,3, while Greek indices are spatial triad indices taking the values 1, 2, 3). Boldface symbols will always refer to vector (tensor) fields, but for readability (and as is customary in the literature, see e.g.~\cite{EllisMaartensmacCallum2012}) we will also write $\w{e}_a=\partial_a$: for example $\w{u}=\partial_0$, $\dot{\w{u}}=\dot{u}^\alpha \partial_\alpha$, $\dot{\w{u}}^2 = \dot{u}_\alpha \dot{u}^\alpha$, etc.\\ The volume 4-form components will be denoted by $\eta_{abcd}$ with the convention $\eta_{0123}=-1$; its restriction to tangent hyperplanes orthogonal to $\w{u}$ is $\varepsilon_{\alpha\beta\gamma}$.
To a space-like 2-form one associates a vector field by Hodge duality, e.g.~the vorticity vector $\w{\omega}$ has components $\omega_\alpha = \smfrac{1}{2}\varepsilon_{\alpha\beta\gamma}\omega^{\beta\gamma}$. The notation $\omega$ will stand for the norm of the vorticity vector / 2-form. To fix the sign conventions let us point out that the metric components are $(g_{ab})=\mathrm{diag}(-1,1,1,1)$ and that the Riemann and Ricci curvature tensors respectively satisfy \begin{equation}\label{ricciv} {v^a}_{;d;c}-{v^a}_{;c;d} = {R^a}_{bcd}v^b \ , \quad R_{ab}={R^m}_{amb}, \end{equation} while the 'trace-free part' of the curvature, given by the Weyl tensor, is \begin{equation}\label{Weyldef} C_{abcd}=R_{abcd} - (g_{a[c}R_{d]b}+g_{b[d}R_{c]a})+\smfrac{1}{3}R \, g_{a[c}g_{d]b}. \end{equation} \paragraph{An extended tetrad formalism.} We will use the extended orthonormal tetrad formalism \cite{EllisMaartensmacCallum2012, Norbert2013}, in which the \textit{main variables} are \begin{itemize} \item the tetrad basis vectors $\partial_a$, \item the kinematical quantities $\dot{u}_\alpha$, $\omega_\alpha$, $\theta$, $\sigma_{\alpha \beta}$, \item the local angular velocity $\Omega_\alpha$ of the triad $\partial_{\alpha}$ with respect to a set of Fermi-propagated axes and the Kundt-Sch\"ucking-Behr variables \cite{MacCallum1971} $a_{\alpha}$ and $n_{\alpha \beta}=n_{\beta \alpha}$ which parametrize the purely spatial commutation coefficients ${{\gamma}^\alpha}_{\beta\kappa}$. 
They are defined by the relations (see \cite{vanElstPHD} for more explicit formulae) \begin{eqnarray}\label{comm_explicit} {}[\partial_0,\partial_\alpha] &=& \dot{u}_\alpha \partial_0 - \left(\smfrac{1}{3} \theta\delta_\alpha^{\beta}+\sigma_\alpha^{\beta} +{\varepsilon^\beta}_{\alpha\gamma}(\omega^\gamma+\Omega^\gamma)\right) \partial_\beta~, \\ {}[\partial_\alpha, \partial_\beta ] &=& {{\gamma}^c}_{\alpha\beta}\partial_c\equiv -2\varepsilon_{\alpha\beta\gamma}\omega^\gamma\partial_0 + \left(2a_{[\alpha}\delta^\gamma_{\beta]}+ \varepsilon_{\alpha\beta\delta} n^{\delta\gamma}\right)\partial_\gamma~.\label{comm_explicit_bis} \end{eqnarray} Sometimes, it is computationally advantageous to replace $a_{\alpha}$ and $n_{\alpha\beta}$ $(\alpha\neq\beta)$ with new variables $q_{\alpha}$ and $r_{\alpha}$ defined by \begin{equation*} n_{\alpha-1 \,\alpha+1}=(r_{\alpha}+q_{\alpha})/2, \quad a_{\alpha}=(r_{\alpha }-q_{\alpha})/2. \end{equation*} \item the energy density $\mu$ and pressure $p$, \item the `electric' and `magnetic' parts $E_{\alpha \beta}$, $H_{\alpha \beta}$ of the Weyl tensor with respect to $\w{u}$: \begin{equation} E_{ab}= C_{acbd}u^c u^d , \quad H_{ab}= \frac{1}{2} \eta_{amcd} {C^{cd}}_{bn}u^m u^n. \label{EHdef} \end{equation} They are symmetric trace-free tensors that determine the Weyl curvature. \end{itemize} In addition, we shall use the following \textit{auxiliary variables}: the spatial gradient of the expansion scalar, $z_\alpha=\partial_\alpha \theta$, and the (covariant) divergence of the acceleration, $ j\equiv {\dot{u}^a}_{;a} = \partial_{\alpha}\dot{u}^{\alpha}+\dot{u}^{\alpha}\dot{u}_{\alpha}-2 \dot{u}^{\alpha} a_{\alpha}$.\\ Note that with this choice of variables, once we assume that the Einstein equations (\ref{EFE}) are satisfied, the Riemann tensor is actually \emph{defined} in terms of $\w{E},\w{H},p$ and $\mu$ via (\ref{Weyldef}, \ref{EHdef}) \footnote{for example $R_{1212}=\smfrac{1}{3} \mu - E_{33}$. 
See \cite{vanElstPHD} for a full list of such relations.}, with the symmetry and trace-free properties of $\w{E}$ and $\w{H}$ guaranteeing the usual symmetry properties of a curvature tensor. The usual defining formulae (obtained from the second Cartan structure equations or, equivalently, from (\ref{ricciv})), \begin{equation}\label{cartan2bis} {R^a}_{bcd}={\Gamma ^a}_{bd,c}-{\Gamma ^a}_{bc,d}+{\Gamma ^e}_{bd} {\Gamma ^a}_{ec} - {\Gamma ^e}_{bc} {\Gamma ^a}_{ed} - {{\gamma}^e}_{cd} {\Gamma ^a}_{be}~, \end{equation} become then a set of first order partial differential \emph{equations} in the connection coefficients ${\Gamma ^c}_{ab}$, which are related to the main variables of the formalism through the commutation coefficients: \begin{equation}\label{commdef2} \Gamma_{\ ab}^c = \smfrac{1}{2}\left(\gamma_{\ ba}^c + \gamma_{\ cb}^a - \gamma_{\ ac}^b \right). \end{equation} This set of equations (\ref{cartan2bis}) is automatically satisfied \cite{Norbert2013} if we take as \textit{governing equations} of the formalism the following system: \begin{enumerate}[i)] \item Einstein field equations (\ref{EFE}), \item the Jacobi equations $\left[\partial_{[a},\left[\partial_b,\partial_{c]}\right]\right]=0$, or \begin{equation}\label{Jacobibis} \partial_{[a}{{\gamma}^d}_{bc]}-{{\gamma}^d}_{e[a} {{\gamma}^e}_{bc]} =0~, \end{equation} \item 18\footnote{3 of which are identities under the Jacobi equations} Ricci equations ${u^a}_{;d;c}-{u^a}_{;c;d}= {R^a}_{0cd}$ and \item 20 Bianchi equations $R^a{}_{b[cd;e]} = 0$, \end{enumerate} where the $R_{ab}$ components in $(i)$ are replaced, via (\ref{cartan2bis}), in terms of commutation coefficients ${{\gamma}^a}_{bc}$ and their derivatives. This system of equations contains a large number of redundancies (e.g. the field equations follow as integrability conditions for the Bianchi equations) and is integrable. For a detailed discussion see \cite{Norbert2013} where the equations have been written out in detail. 
\paragraph{Tetrad fixing conventions.} It has become customary \cite{WhiteCollins} to align $\partial_{3}$ with $\w{\omega}$, such that $\w{\omega} =\omega\partial_{3}\neq 0$. Applying the commutators $[\partial_3, \partial_\alpha]$ to $p$ and using the Euler and Jacobi equations one can show that the spatial triad can be taken to be co-rotating: $\w{\Omega}+\w{\omega}=0$, with the remaining tetrad freedom consisting of rotations in the $(1,2)$ plane, $\partial_1+ i\,\partial_2 \to e^{i\alpha} (\partial_1+i\,\partial_2)$ satisfying $\partial_0\alpha =0$.\\ In accordance with the definition of basic variables (see section 3), we will call such transformations \emph{basic rotations}. Notice that, under $\partial_1+i\,\partial_2 \to e^{i\alpha} (\partial_1+ i\,\partial_2)$, \begin{equation*} \smfrac{1}{2}(n_{11}-n_{22})+ i \, n_{12} \longrightarrow e^{2 i \alpha} \left(\smfrac{1}{2}(n_{11}-n_{22})+ i \, n_{12}\right), \end{equation*} while under $\sigma_{ab}=0$ and $\w{\Omega}+\w{\omega}=0$ the evolution equations for $n_{11} -n_{22}$ and $n_{12}$ are identical: it therefore follows that one can specialize the tetrad by means of a basic rotation so as to achieve $n_{11}=n_{22}\equiv n$. This fixes the tetrad, unless \begin{equation}\label{extra_rot} n_{12}=n_{11}-n_{22}=0, \end{equation} in which case further basic rotations can (and will) be used to obtain extra simplifications. \paragraph{Conventions related to the equation of state.} Throughout the paper we assume $p=p(\mu)$ with $p+\mu \neq 0$. We adopt the notations: $p' =\textrm{d} p / \textrm{d} \mu$, $G\equiv \frac{p''}{p'}(p+\mu) -p'+\frac{1}{3}$, $G' =\textrm{d} G / \textrm{d} \mu$ and $G_p=G'/p'$. 
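As a simple check of these notations (an elementary computation of ours): for a linear equation of state $p=(\gamma-1)\mu+p_0$ one has $p''=0$ and $p'=\gamma-1$ constant, so that
\begin{equation*}
G = -p'+\smfrac{1}{3} = \smfrac{4}{3}-\gamma, \qquad G'=G_p=0 ,
\end{equation*}
i.e.~$G$ is constant; in the notation $r=3\gamma$ of section 3 this reads $G=\frac{4-r}{3}$. Note that $G$ vanishes precisely when $p'=\smfrac{1}{3}$.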
\noindent Although the assumption $p+\mu\neq 0$ appears throughout the literature on the subject, the question of whether an \emph{arbitrary} Einstein space can contain a shear-free, but rotating and expanding time-like congruence, seems to have attracted little attention \footnote{See \cite{Pantilie2002} for the analogous question in the Riemannian case. Here the example of the Eguchi-Hanson (Ricci-flat) metric provides us with a shear-free, expanding and rotating congruence.}. As one is setting up a set of 5 partial differential equations for the 3 components of the vector field $\w{u}$, it is clear that some restrictions -- either on the time-like congruence or on the geometry -- seem inevitable. \begin{re} The Ricci equations together with the vanishing of the shear imply that the magnetic part of the Weyl tensor is determined algebraically by \begin{eqnarray} H_{11} = -\omega (\dot{u}_3+r_3), \, H_{22}=-\omega (\dot{u}_3-q_3),\, H_{12}=0, \nonumber \\ H_{13} = \smfrac{1}{3}z_2 -\omega q_1, \, H_{23}= -\smfrac{1}{3}z_1 + \omega r_2 \label{def_H}. \end{eqnarray} \end{re} For the system of equations yielded by the extended tetrad formalism, imposing the existence of a barotropic equation of state $p=p(\mu)$ as well as the vanishing of the shear results in new chains of integrability conditions. The procedure of building up the sequence of integrability conditions has been carried out in several papers and for details of their derivation we refer the reader for example to \cite{NorbertKarimianCarminatiHuf2012}.
The final result of this procedure, taking into account all Jacobi equations and Einstein field equations, the 18 Ricci equations, the contracted Bianchi equations, the `$\dot{\w{E}}$', `$\dot{\w{H}}$' and `$\w{\nabla}\cdot \w{E}$' Bianchi equations and all integrability conditions on $\mu, \theta, \dot{u}_\alpha$ and $\omega$ (the $[\partial_1,\, \partial_3]\omega$ and $[\partial_2,\, \partial_3]\omega$ relations being equivalent to the `$\w{\nabla}\cdot \w{H}$' equations) is presented in Appendix 1; see also \cite{Norbert2013}, or \cite{MaartensBassett1998} for the compact `1+3 covariant form' of some of these equations. \section{Formulation in terms of basic variables} An all-important role in the proof will be played by so-called basic objects (cf.~\cite{Slobodeanu2014} and references therein), which have their origin in foliation theory. Let $\mathcal{H}$ denote the space-like subspace of the tangent space, orthogonal to the velocity $\w{u}$. The component along $\mathcal{H}$ or the restriction to $\mathcal{H}$ will be indicated by a superscript. Recall that a tensorial object $\varsigma$ in $(\otimes^r \mathcal{H}) \otimes (\otimes^s \mathcal{H}^*)$ is called \emph{basic} if $(\mathcal{L}_{\w{u}} \varsigma)^\mathcal{H} =0$, $\mathcal{L}$ denoting here the Lie derivative. In particular, \begin{de} A function $f$ on $M$ is \emph{basic} if it is conserved along the flow, $\w{u}(f)=0$, and a vector field $\w{X}$ belonging to $\mathcal{H}$ is \emph{basic} if $[\w{u}, \w{X}]^\mathcal{H}=0$.
\end{de} Some immediate properties of basic objects are provided by the following lemma, the proof of which is easily checked: \begin{lm} \label{bas} $(i)$ A linear combination of basic vector fields, with basic coefficient functions, is basic.\\ \medskip $(ii)$ The horizontal part of the commutator of two basic vector fields is basic.\\ \medskip $(iii)$ If $\w{X}$ is a basic vector field and $f$ a basic function on $M$, then $\w{X}(f)$ is a basic function on $M$.\\ \end{lm} In the case of a $\gamma$-law equation of state a length scale $\lambda^{-1}$ was introduced in \cite{Slobodeanu2014}, enabling one to write pressure and energy density as $p=\frac{r-3}{3}\lambda^r, \mu= \lambda^r$ with $r=3\gamma$. This not only leads to a simplification of the equations, but also plays a key role in some of the arguments, such as in \emph{Proposition 5} of \cite{Slobodeanu2014}. In order to generalise this proposition to the case of a general barotropic equation of state and to formulate similar useful criteria, we will introduce the function $\lambda=\lambda(\mu)$ as follows, \begin{equation} \lambda = \exp \int \frac{\textrm{d} \mu}{3(p+\mu)}. \end{equation} The case of a linear equation of state (including a possible non-zero cosmological constant), $p'=\frac{r}{3}-1$, can then be expressed by \begin{equation}\label{expl_eqstate} p=\left(\frac{r}{3}-1\right) \lambda^r-\mu_0, \ \mu=\lambda^r+\mu_0 , \end{equation} ($\mu_0, r$ constants). Throughout the paper we will assume $p' \notin \left\{0, \pm \smfrac{1}{3}, \smfrac{1}{9}\right\}$ (these cases have already been settled; see the introduction for references).\\ In the next sections we need to identify the basic quantities that recurrently appear in our equations, and that are related to the variables of the perfect fluid problem.
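The parametrisation (\ref{expl_eqstate}) can be verified directly (an elementary check of ours): with $\mu=\lambda^r+\mu_0$ and $p=(\frac{r}{3}-1)\lambda^r-\mu_0$ one has
\begin{equation*}
p+\mu = \frac{r}{3}\,\lambda^{r}, \qquad
\frac{\textrm{d} \mu}{3(p+\mu)} = \frac{r\lambda^{r-1}\,\textrm{d}\lambda}{r\lambda^{r}} = \frac{\textrm{d}\lambda}{\lambda},
\end{equation*}
consistent with the definition of $\lambda$ (up to an irrelevant multiplicative constant), while $p'=\textrm{d} p/\textrm{d}\mu=\frac{r}{3}-1$ is indeed constant. For $\mu_0=0$ this reduces to the $\gamma$-law parametrisation $p=\frac{r-3}{3}\lambda^r$, $\mu=\lambda^r$ of \cite{Slobodeanu2014}.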
We provide now the following dictionary\footnote{henceforth \emph{fraktur} symbols will be used to indicate basic objects}, where, for reasons which will become clear in section 4, we found it convenient to introduce also rescaled acceleration variables $\U_\alpha= \dot{u}_\alpha/(\lambda p^\prime)$: \begin{lm}\label{dict} The following modified variables are conserved along the flow (are basic functions): \begin{eqnarray} & \O = \frac{p+\mu}{\lambda^{5}}\,\omega,\quad \N={\frac {n}{\lambda}}, \label{convert_O}\\ & \b_1 = -\smfrac{4}{3}\,{\frac {p+\mu}{{\lambda}^{6}}}z_1 - \smfrac{2}{3}\,{\frac {\left(9p^\prime - 1 \right) (p+\mu)\,\omega}{{\lambda}^ {5}}}\U_2, \label{convert_b1}\\ & \b_2 = -\smfrac{4}{3}\,\frac {p+\mu}{\lambda^6}z_2 + \smfrac{2}{3}\,{\frac {\left(9p^\prime - 1 \right) (p+\mu) \omega}{{\lambda}^{5}}}\U_1, \label{convert_b2}\\ & \b_3=-\smfrac{4}{3}\,{\frac {p+\mu}{{\lambda}^{6}}}z_3, \label{convert_b3}\\ & \Q_1 = -\smfrac{1}{3}\,\U_1+ \frac {q_1}{\lambda}, \quad \R_1=\smfrac{1}{3}\,\U_1 + \frac {r_1}{\lambda}, \label{convert_Q1R1}\\ &\Q_2 = -\smfrac{1}{3}\,\U_2 + \frac{q_2}{\lambda},\quad \R_2=\smfrac{1}{3}\,\U_2+\frac {r_2}{\lambda}, \label{convert_Q2R2}\\ &{\Q_3}+{\R_3}=-\smfrac{1}{3}\,\U_3 +\frac {q_3}{\lambda}, \quad {\Q_3}-{\R_3}=\smfrac{1}{3}\,\U_3+\frac{r_3}{\lambda}, \label{convert_Q3R3}\\ & \J=(1-2 G) \dot{\w{U}}^2+\lambda^{-2}\left(\theta^2-3 \mu-2\frac{j}{p'}\right)+9\O^2\frac{\lambda^8}{(p+\mu)^2}, \label{convert_J}\\ & \E_{\alpha \beta} = \frac{3p' + 1}{\lambda^2 p'}E_{\alpha \beta} + G \U_\alpha\U_\beta, \quad (\alpha,\beta)=(1,2),\, (1,3),\, (2,3), \label{convert_E123}\\ & \E_0 = \frac{3p' + 1}{\lambda^2 p'}(E_{11}-E_{22}) + G (\U_1^2-\U_2^2) ,\label{convert_E0}\\ & \E_3 = \frac{3p' + 1}{\lambda^2 p'}E_{33}-\frac{G}{3} (\U_1^2+\U_2^2-2\U_3^2)+\frac{2(9p'+1)\O^2}{3p'}\frac{\lambda^8}{(p+\mu)^2} .\label{convert_E3} \end{eqnarray} \end{lm} \begin{proof} Straightforward but lengthy computation using the propagation rules (see appendix 1) 
for each quantity involved. \end{proof} \begin{re}\label{Einvar} Note that the basic objects $\smfrac{1}{2}\E_0 + i \E_{12}$ and $\E_{13} + i \E_{23}$ transform as follows under a basic rotation in the $(1,2)$ plane: \begin{eqnarray*} \smfrac{1}{2}\E_0 + i \E_{12} \longrightarrow e^{2 i \alpha} (\smfrac{1}{2}\E_0 + i \E_{12}), \nonumber \\ \E_{13}+i \E_{23} \longrightarrow e^{i \alpha} (\E_{13}+i \E_{23}). \end{eqnarray*} This shows that conditions like $\E_{13}=\E_{23}=0$, occurring for example in Lemma \ref{Theorem3}, have a truly invariant (frame-independent) meaning. \begin{comment} It is also straightforward to derive from the definitions of the $\b_\alpha$ quantities that $\b_1+i \b_2 \longrightarrow e^{i \alpha} (\b_1+i \b_2)$. \end{comment} \end{re} It will also be convenient to rewrite the spatial basis in terms of the basic vector fields \begin{equation}\label{basxyz} \X=\lambda^{-1}\partial_1, \quad \Y = \lambda^{-1}\partial_2, \quad \Z = \lambda^{-1}\partial_3. \end{equation} It follows then that $\U_1= -3 \X(\ln \lambda)$, $\U_2= -3 \Y(\ln \lambda)$, and $\U_3= -3 \Z(\ln \lambda)$. Acting with the operators (\ref{basxyz}) on (\ref{convert_O}$_a$) one obtains the basic equations \begin{equation}\label{XYZO} \X(\O) = -\O \Q_1 - \smfrac{1}{2} \b_2, \quad \Y(\O) = \O \R_2 + \smfrac{1}{2} \b_1, \quad \Z(\O) = -2 \O \R_3 , \end{equation} with integrability conditions given by (\ref{basic_eq1}+\ref{basic_eq6},\ref{basic_eq2},\ref{basic_eq3}). \begin{re} In terms of the vector fields (\ref{basxyz}), many quantities in Lemma \ref{dict} are easily recognised as being basic. First recall \cite{Slobodeanu2014} that to our fluid one can locally associate a transversally conformal submersion $\varphi:(M,g) \to (N,h)$ onto a Riemannian $3$-manifold having $\w{u}$ tangent to its fibres. Then notice that any tensorial quantity constructed by pull-back is clearly basic.
In particular, since $\lambda^2 g^\mathcal{H} = \varphi^*h$ is basic, it follows (using also Lemma \ref{bas}) that $\N = \lambda^2 g([\Z, \X], \Y)$ is basic. A similar argument holds for $\Q_i$ and $\R_i$: \begin{eqnarray*} \fl \Q_{1} &= \lambda^2 g([\Z, \X], \Z) , \quad \Q_{2}= \lambda^2 g([\X, \Y], \X), \quad \Q_{3} = \smfrac{\lambda^2}{2}\big( g([\Z, \X], \X)-g([\Z, \Y], \Y)\big), \\ \fl \R_{1} & = \lambda^2 g([\X, \Y], \Y), \quad \R_{2}= -\lambda^2 g([\Z, \Y], \Z), \ \R_{3}=-\smfrac{\lambda^2}{2}\big( g([\Z, \X], \X) + g([\Z, \Y], \Y)\big). \end{eqnarray*} As to $\J$, $\E_{\alpha \beta}$, $\E_{0}$ and $\E_{3}$, they correspond, up to constant factors, to the following pulled-back curvatures of the `material manifold' $N$: $R^N \circ \varphi$ (scalar curvature), $\varphi^* R_{\alpha \beta}^N$ ($\alpha \neq \beta$), $\varphi^* R_{11}^N -\varphi^* R_{22}^N$, and $\frac{1}{3}R^N \circ \varphi-\varphi^* R_{33}^N$ (Ricci curvatures), respectively; in particular, they are basic functions. Finally, if the equation of state is linear, $\O$, $\b_1$, $\b_2$ and $\b_3$ correspond respectively to the basic functions $\frac{p'+1}{2}\Omega(\X, \Y)$, $(p'+1)\beta(\X)$, $(p'+1)\beta(\Y)$ and $(p'+1)\beta(\Z)$, defined in \cite{Slobodeanu2014}.
\end{re} Translating the equations of Appendix 1 in terms of the basic variables $\O,\J,\b_{\alpha},\Q_\alpha,\R_\alpha,\E_0,\E_3$ and the non-basic variables $p, \mu, \theta, \U_{\alpha}$, augmented with all information obtainable by acting with the $\partial_\alpha$ operators on the remaining dictionary elements, one derives a set of equations which can be split into \begin{itemize} \item evolution equations for the non-basic quantities $\mu, \theta, \U_\alpha$, namely (\ref{e0mu}) and \begin{eqnarray} \partial_0 \theta = \smfrac{1}{2} \lambda^2p'[(1-2 G) \dot{\w{U}}^2-\J]+\smfrac{1}{2} \O^2(9p'+4)(p+\mu)^{-2}\lambda^{10}\nonumber \\ \ \ \ \ \ \ \ +\smfrac{1}{6}(3p'-2)\theta^2- \smfrac{1}{2}(3p'+1)\mu -\smfrac{3}{2}p, \label{utheta}\\ \partial_0 \U_1 = p' \theta \U_1+\frac{\lambda^5}{p+\mu}\left(-\frac{3}{4}\b_1-\frac{9p'-1}{2}\O \U_2 \right) ,\label{uU1}\\ \partial_0 \U_2 = p' \theta \U_2+\frac{\lambda^5}{p+\mu}\left(-\frac{3}{4}\b_2+\frac{9p'-1}{2}\O \U_1\right) ,\label{uU2}\\ \partial_0 \U_3 = p' \theta \U_3 -\frac{\lambda^5}{p+\mu}\frac{3\b_3}{4},\label{uU3} \end{eqnarray} \item purely basic equations, which will play only a minor role and which, for the sake of readability, are presented in Appendix 2, \item algebraic equations in $\dot{U}_\alpha$ and $\theta$, with the basic functions $\B_1, \ldots, \B_{16}$ defined by equations (\ref{basic_eq11}--\ref{basic_eq26}) of Appendix 2, \end{itemize} \begin{eqnarray} \fl \ \ \ \ \ \ \diamondsuit \ \textrm{the } \dot{\w{H}} \textrm{ equations (\ref{H13_0},\ref{H23_0},\ref{H12_0},\ref{H11_0},\ref{H22_0}):} \nonumber \\ \fl \left[\frac{3p'^2}{3p'+1}\E_{12}-\h_8\O\frac{\lambda^3 \theta}{p+\mu} \right]\U_1-\smfrac{3}{2}\left[\frac{p'^2}{3p'+1}(\E_0-3\E_3)+ \frac{(p'+1)(81p'^2-5)}{2(3p'+1)}\frac{\O^2\lambda^8}{(p+\mu)^2}\right]\U_2\nonumber \\ -\frac{3p'^2}{3p'+1}\E_{23}\U_3 -(3p'+1)\big(\Q_1\O + \smfrac{1}{4} \b_2 \big)\frac{\lambda^3 \theta}{p+\mu}\nonumber \\ - 
3\O\big(\R_2\O+\smfrac{1}{8}(9p'+11)\b_1\big)\frac{\lambda^8}{(p+\mu)^2} +\B_{12}=0, \label{m_eq12}\\[3mm] \fl -\smfrac{3}{2}\left[\frac{p'^2}{3p'+1}(\E_0+3\E_3)- \frac{(p'+1)(81p'^2-5)}{2(3p'+1)}\frac{\O^2\lambda^8}{(p+\mu)^2}\right]\U_1 - \left[\frac{3p'^2}{3p'+1}\E_{12}+\h_8\O \frac{\lambda^3 \theta}{p+\mu} \right]\U_2\nonumber \\ +\frac{3p'^2}{3p'+1}\E_{13}\U_3 +(3p'+1)\big(\R_2\O + \smfrac{1}{4}\b_1 \big) \frac{\lambda^3 \theta}{p+\mu}\nonumber \\ - 3\O\big(\Q_1\O+\smfrac{1}{8}(9p'+11)\b_2\big)\frac{\lambda^8}{(p+\mu)^2}+\B_{13}=0 \label{m_eq13},\\[3mm] \fl \frac{3{p'}^2}{3{p'}+1}(\E_{13}\U_1-\E_{23}\U_2-\E_0\U_3)+ 6\Q_3\frac{\O^2\lambda^8}{(p+\mu)^2}-\B_{11}=0, \label{m_eq11}\\[3mm] \fl \frac{6 {p'}^2}{3{p'}+1}(\E_{23}\U_1+\E_{13} \U_2-2 \E_{12}\U_3)- 2(3p'+1)\Q_3\O\frac{\lambda^3 \theta}{p+\mu} + \B_9-\B_{10} = 0,\label{m_eq9}\\[3mm] \fl \frac{6 {p'}^2}{3{p'}+1}(\E_{23}\U_1-\E_{13} \U_2)-\frac{\lambda^3\theta \O}{p+\mu}\left(\smfrac{2}{3}(9{p'}G-9{p'}^2+1) \U_3 +2(3{p'}+1)\R_3 \right)\nonumber \\ -\smfrac{9}{2}(p'+1)\b_3\frac{\O\lambda^8}{(p+\mu)^2} -\B_9-\B_{10} = 0,\label{m_eq10}\\[3mm] \fl \ \ \ \ \ \ \diamondsuit \ \textrm{the integrability conditions } \partial_A \partial_3 \theta - \partial_3 \partial_A \theta-[\partial_A,\, \partial_3] \theta = 0\ (A=1,2): \nonumber \\ \fl \h_1 \O \U_2\U_3 + \smfrac{5}{2}(3p'-1) \b_3 \U_1 + 2(9{p'}-1)\O(\Q_3+\R_3)\U_2 \nonumber \\ +\left[2 \O \R_2(9{p'}-1)-\b_1(3{p'}-2)\right]\U_3 - \frac{2(9{p'}-1)}{3{p'}+1}\O\E_{23} - 3 \B_1 = 0, \label{m_eq1} \\[3mm] \fl \h_1 \O \U_1\U_3 -\smfrac{5}{2}(3{p'}-1)\b_3 \U_2 -2(9{p'}-1)\O(\Q_3-\R_3)\U_1 \nonumber \\ -\left[2 \O \Q_1(9{p'}-1)-\b_2(3{p'}-2)\right]\U_3 - \frac{2(9{p'}-1)}{3{p'}+1}\O\E_{13}+3\B_2 = 0, \label{m_eq2} \\[3mm] \fl \ \ \ \ \ \ \diamondsuit \ \textrm{equations resulting by evaluation of } \partial_1(\ref{convert_b2})-\partial_2(\ref{convert_b1}) \textrm{ and } \partial_2(\ref{convert_b2})-\partial_1(\ref{convert_b1}):\nonumber \\ \fl \O [ \h_2 
(\U_1^2+\U_2^2)+\h_3 \U_3^2] + 3 (3 {p'} +1) (9{p'}-1) \O \R_3 \U_3+\smfrac{3}{2} (3 {p'} +1) (15 {p'} \mu+6 p+\mu)\lambda^{-2}\O\nonumber \\ +\smfrac{3}{4}(3 {p'} +1) [2 \O \Q_1(9{p'} -1)+3 \b_2(4{p'} -1)] \U_1 \nonumber \\ - \smfrac{3}{4}(3 {p'} +1) [2 \O \R_2(9{p'} -1)+3 \b_1(4{p'} -1)] \U_2 \nonumber \\ -\smfrac{5}{2}(3p' + 1)(3p' - 1)\O \lambda^{-2} \theta^2 -\smfrac{3}{2} (135 {p'}^2 + 96 p' + 1) \lambda^8 (p+\mu)^{-2} \O ^3 \nonumber \\ +[\smfrac{1}{2}(45{p'}^2 + 12 p' -1) \J +\smfrac{3}{2} (9 {p'} -1) \E_{3}] \O + \smfrac{9}{4}(3p' + 1)\B_3 = 0, \label{m_eq3} \\[3mm] \fl \big[\h_4 (\U_1^2 - \U_2^2) - p'(3G - 2) \E_0\big]\theta + \frac{(3p' +1)\lambda^5}{p+\mu}\big\{\h_5 \O \U_1\U_2 -(9p' -1) \O \E_{12} \nonumber \\ +\smfrac{1}{4} [\h_6 \b_1-2 (3 p' +1) (9 p'-1 ) \O \R_2] \U_1 -\smfrac{1}{4}[\h_6 \b_2-2 (3 p' +1) (9 p' - 1) \O \Q_1] \U_2 \nonumber \\ +\smfrac{3}{4} (3 {p'} +1) \B_4 \big\} = 0, \label{m_eq4} \\[3mm] \fl \ \ \ \ \ \ \diamondsuit \ \textrm{equations resulting by evaluation of } \partial_1(\ref{convert_b2}) + \partial_2(\ref{convert_b1}) \textrm{ and } \partial_2(\ref{convert_b2}) + \partial_1(\ref{convert_b1}): \nonumber \\ \fl 2\big[- \h_4 \U_1\U_2 + p'(3 G-2) \E_{12}\big]\theta + \frac{(3p' +1)\lambda^5}{p+\mu}\big\{\smfrac{1}{2}\h_5 \O (\U_1^2-\U_2^2) - \smfrac{1}{2}(9 p' - 1)\O \E_0 + \smfrac{3}{4} (3 p' + 1) \B_5 \nonumber \\ -\smfrac{1}{4}[\h_6 \b_2 - 2(3p' + 1)(9p' -1) \O \Q_1] \U_1 -\smfrac{1}{4}[\h_6 \b_1-2 (3 {p'} +1) (9 {p'} -1 ) \O \R_2] \U_2 \nonumber\\ -(9 {p'} -1) (3p' +1) \O \Q_3 \U_3 \big\} = 0, \label{m_eq5} \\[3mm] \fl \big[\h_4 (\U_1^2 + \U_2^2 - 2 \U_3^2) + 3p'(3G - 2) \E_3 \big] \theta + \frac{(3p' +1)\lambda^5}{p+\mu}\big\{-\smfrac{9}{4} (3p' +1) \B_6 \nonumber \\ \fl \ \ \ \ + \smfrac{3}{4} [2 (3 {p'} +1) (9 {p'} -1) \O \R_2+(6 G {p'} +3 {p'} +1) \b_1]\U_1 \nonumber \\ \fl \ \ \ \ + \smfrac{3}{4} [2 (3 {p'} +1) (9 {p'} -1) \O \Q_1+(6 G {p'} +3 {p'} +1) \b_2]\U_2 \nonumber\\ \fl \ \ \ \ -\smfrac{9}{4} (4 G {p'} -9 {p'} 
^2+1) \b_3 \U_3 \big\} -\frac{\lambda^8 \theta}{(p+\mu)^2}(36 G {p'} -81 {p'} ^3+27 {p'} ^2-27 {p'} -7) \O ^2 = 0, \label{m_eq6} \\[3mm] \fl \ \ \ \ \ \ \diamondsuit \ \textrm{equations resulting by evaluation of } \partial_1(\ref{convert_b3}) \textrm{ and } \partial_2(\ref{convert_b3}):\nonumber \\ \fl \big[\h_4 \U_1\U_3 - p'(3 G-2)\E_{13}\big]\theta + \frac{(3p' +1)\lambda^5}{p+\mu} \big\{\h_7 \O \U_2\U_3 +\smfrac{1}{8}(18 G {p'} -63 {p'} ^2+7)\b_3 \U_1 \nonumber \\ -\smfrac{1}{2} (9 {p'} -1) (3 {p'} +1) (\Q_3-\R_3) \O \U_2 \nonumber \\ + \smfrac{1}{4}(9Gp' -9{p'}^2+1) \b_1 \U_3 - \smfrac{3}{4}(3p'+1)\B_7\big\} = 0, \label{m_eq7} \\[3mm] \fl \big[\h_4 \U_2 \U_3 - p'(3 G-2) \E_{23}\big]\theta + \frac{(3p' +1)\lambda^5}{p+\mu} \big\{- \h_7 \O \U_1\U_3 +\smfrac{1}{8}(18 G {p'} -63 {p'} ^2+7) \b_3 \U_2 \nonumber \\ -\smfrac{1}{2} (9 {p'} -1) (3 {p'} +1) (\Q_3+\R_3) \O \U_1 \nonumber \\ + \smfrac{1}{4}(9 G {p'} -9 {p'} ^2+1) \b_2 \U_3 -\smfrac{3}{4} (3 {p'} +1) \B_8 \big\}= 0. \label{m_eq8} \end{eqnarray} In these equations $\h_1,\ldots, \h_8$ are functions of $\mu$ defined by \begin{eqnarray} \h_1 = 2\frac{36 {p'} G +9{p'}^2-1}{3(3{p'}+1)},\nonumber\\ \h_2 = 45 G {p'}^2+21 G {p'} +9 {p'}^2-1,\nonumber\\ \h_3= \smfrac{1}{2} (90 G {p'} ^2+6 G {p'} +9 {p'} ^2-1),\\ \h_4 = p'\big(3 G' (p+\mu)(1+3{p'})+3 G^2 -18 G {p'} ^2-6 G {p'} -2 G\big),\nonumber\\ \h_5 = 6 Gp'(9p' + 1) - 54{p'}^3 + 9{p'}^2 + 6 {p'} -1,\nonumber\\ \h_6 = 18 G p'-54 p'^2-3 p'+5,\nonumber\\ \h_7 = \smfrac{1}{6}(9p' -1)(9 G p' - 9{p'} ^2+1),\nonumber \\ \h_8 = \smfrac{1}{2} (9Gp'-9p'^2+1). \nonumber \end{eqnarray} \noindent Note that (\ref{m_eq1}, \ref{m_eq2}, \ref{m_eq3}, \ref{m_eq4}) in the case of a linear equation of state correspond respectively to equations (26), (27), (28) and (23) of \cite{Slobodeanu2014}.
\section{General theorems} In this section we present some criteria which will be used later on, but which may also turn out to be helpful when tackling the conjecture for a general barotropic equation of state. We begin with two theorems generalising Proposition 5 of \cite{Slobodeanu2014} for arbitrary $p(\mu)$. \begin{te} \label{th1} If for a shear-free perfect fluid obeying a barotropic equation of state, $\U_1$ and $\U_2$ are basic, then $\omega \theta = 0$. \end{te} \begin{proof} Assume $\omega \theta \neq 0$. Using the evolution equations for $\mu, \u_1,\u_2$, the conditions $\partial_0(\U_1)= \partial_0(\U_2)=0$ are equivalent to $z_1+\theta \u_1 = z_2+\theta \u_2=0$. Applying $\partial_0$ to the latter two equations and substituting for $z_1,z_2$ yields a homogeneous system in $\u_1,\u_2$, the coefficient matrix of which is positive definite (in which case \cite{WhiteCollins} applies and the proof ends), unless \begin{equation}\label{th1_eq1} \left(\frac{2}{3}-G-2p'\right)\theta^2+2\omega^2-\frac{\mu+3 p}{2}+j = 0 \end{equation} and \begin{equation}\label{th1_eq2} G + p^\prime - \frac{1}{3} = 0. \end{equation} The second of these conditions implies that we have a linear equation of state ($p''=0$), which, when substituted in the first, yields \begin{equation}\label{th1_j} j= \left(p'-\frac{1}{3}\right) {\theta}^{2}-2\,{\omega}^{2}+\frac{\mu + 3p}{2}. \end{equation} Acting on this with $\partial_3$ gives, using (\ref{e3j}), $\theta (z_3+\theta \u_3)(3p'-2)=0$. If $z_3+\theta \u_3=0$ then, by (\ref{gradmu}, \ref{gradtheta}), we have $z_\alpha -\theta \frac{\partial_\alpha p}{p+\mu} =0$ and hence, with $F=\log \theta - \int (p+\mu)^{-1}\textrm{d} p$, $\textrm{d} F = -\partial_0 F \w{u}^\flat$. This shows that $\w{u}$ is hypersurface orthogonal (and hence the vorticity vanishes), unless $F$ is constant, whence $\theta=\theta(\mu)$, which is the case treated in \cite{TreciokasEllis,Langth, Lang}. Hence, as $\omega \theta \neq 0$, we necessarily have $p'=2/3$.
Then, however, the $\partial_1$ derivative of $z_2+\theta \u_2=0$ gives $2\omega^2 \theta^2-z_3^2-\theta \u_3 z_3=0$, which can be used to eliminate the $\u_3 z_3$ term arising in $\partial_0(\ref{th1_j})$ to yield $\theta^2(20 \omega^2+5 \u_3^2+\frac{9}{2}(p+\mu))-5z_3^2=0$. Propagating this again along $\w{u}$ and using the previous results to eliminate $z_\alpha$ and $j$, eventually gives $(p+\mu) \theta^2=0$. \end{proof} \begin{te} \label{th2} If for a rotating and expanding shear-free perfect fluid, obeying a barotropic equation of state, $\U_3$ is basic, then a Killing vector along the vorticity exists. \end{te} \begin{proof} As in Theorem \ref{th1} one sees that $\partial_0(\U_3)=0$ is equivalent to $z_3+\theta \u_3=0$. Propagating this along $\w{u}$ shows, using the evolution equations of Appendix 1, that $\u_3=0$ or (\ref{th1_eq1}) holds.\\ We first show that $\u_3\neq 0$ is inconsistent with $\omega \theta \neq 0$. Acting on (\ref{th1_eq1}) with $\partial_3$, one obtains, in place of (\ref{th1_eq2}), \begin{equation}\label{th1_eq3} G+p'-\frac{1}{3} +\frac{p+\mu}{3}G_p= 0. \end{equation} Using this to eliminate $G_p$ from the relations obtained by acting with $\partial_1$ and $\partial_2$ on (\ref{th1_eq1}), one finds \begin{eqnarray} \fl & & {\theta}^{2} \left( 6G+9{p'}-4 \right) {\u_1}-\smfrac{3}{2}\omega \theta\left( 9G-2 \right) {\u_2}+\theta\left( 6G+9{ p'}-4 \right) {z_1}-\smfrac{3}{2}\omega\left( 1-9{p'} \right) { z_2}=0 , \label{th1_eq4}\\ \fl & & \smfrac{3}{2}\omega\theta\left( 9G-2 \right) {\u_1}+{\theta}^{2} \left( 6G+9{p'}-4 \right) {\u_2}+\smfrac{3}{2}\omega\left( 1-9{ p'} \right) {z_1}+\theta\left( 6G+9{p'}-4 \right) {z_2}=0. \label{th1_eq5} \end{eqnarray} In the case of a linear equation of state ($G+p'-\frac{1}{3}=0$) this becomes a homogeneous system in the variables $z_1+\theta \u_1, z_2+\theta \u_2$, with a coefficient matrix which is positive definite, unless $\theta(p'-2/3)=\omega(9p'-1)=0$, and hence we are done by Theorem \ref{th1}.
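The positive-definiteness argument used here, and repeatedly below, comes down to the observation that the systems in question have a coefficient matrix of the form $\left(\begin{smallmatrix} a & b \\ -b & a \end{smallmatrix}\right)$, whose determinant $a^2+b^2$ only vanishes when $a=b=0$. A minimal symbolic check of this recurring step (the variable names are illustrative, not taken from the worksheets):

```python
import sympy as sp

a, b, z1, z2 = sp.symbols('a b z1 z2', real=True)

# coefficient matrix of the homogeneous system a*z1 + b*z2 = 0, -b*z1 + a*z2 = 0
M = sp.Matrix([[a, b], [-b, a]])

# over the reals the determinant is a^2 + b^2, so M is invertible unless a = b = 0
det = sp.expand(M.det())

# generically the homogeneous system therefore only admits the trivial solution
sol = sp.solve([a*z1 + b*z2, -b*z1 + a*z2], [z1, z2], dict=True)
```

For generic $a,b$ this forces $z_1=z_2=0$, which is exactly how the conclusions $\u_1=\u_2=0$ are obtained in the proofs.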
If there is no linear equation of state (in which case the $(\u_1, \u_2)$-coefficient matrix of (\ref{th1_eq4}, \ref{th1_eq5}) is positive definite), solving (\ref{th1_eq4}, \ref{th1_eq5}) for $\u_1,\u_2$ leads to expressions which are homogeneous in $z_1,z_2$. Propagating (\ref{th1_eq4}, \ref{th1_eq5}) along $\w{u}$, one obtains a new homogeneous system $a z_1+b z_2=-b z_1+a z_2 =0$ with \begin{eqnarray} \fl a &=& -36\omega \theta (3G+3p'-1)^2(81 G^2+162G p'-96 G-72p'+28), \label{th1_eq6}\\ \fl b &=& 6(3G+3p'-1)[4\theta^2(6G+9p'-4)(27 G^2-33G+81 G p'+54 p'^2-45 p'+10) \nonumber\\ \fl && + 9\omega^2(9G-2)(3G+12p'-12)]. \end{eqnarray} Again the coefficient matrix is positive definite, as $a^2+b^2 = \partial_0 b = 0$ would lead to an inconsistency with (\ref{th1_eq3}), unless we have a linear equation of state. It follows that $z_1=z_2=0$, hence also $\u_1=\u_2=0$ and we are done by Theorem \ref{th1}.\\ Having excluded the case $\u_3\neq 0$, we now turn to the case where acceleration and vorticity are orthogonal: $\u_3=0$ and hence also $z_3=0$. From the $\partial_\alpha \u_3=0$ equations one now obtains \begin{eqnarray} E_{13} - r_3 \u_1 = 0,\label{th1_eq7}\\ E_{23} +q_3 \u_2 = 0,\label{th1_eq8}\\ E_{33}+\smfrac{1}{3}j+\smfrac{2}{3}\omega^2-\u_1q_1+\u_2r_2=0 , \label{th1_eq8bis} \end{eqnarray} with $\partial_0$(\ref{th1_eq7},\ref{th1_eq8}) leading to two further equations, \begin{eqnarray} p' r_3[(3G-2)\theta \u_1-(3p'+1)z_1]=0,\label{th1_eq9}\\ p' q_3[(3G-2)\theta \u_2-(3p'+1)z_2]=0. \label{th1_eq10} \end{eqnarray} Clearly $r_3q_3 \neq 0$ implies the existence of a function $f(\mu)$, such that $\partial_\alpha(f(\mu) \theta)=0$ and hence either $\w{u}$ is hypersurface orthogonal ($\omega=0$) or $\theta=\theta(\mu)$ (and then again $\omega \theta =0$). It follows that we can restrict to the cases $r_3\neq 0=q_3$ or $r_3=0\neq q_3$ (which are equivalent under a discrete rotation) and the case $q_3=r_3=0$.
The latter is easy: by (\ref{th1_eq7},\ref{th1_eq8}) we have $E_{13}=E_{23}=0$, with $j$ given by (\ref{th1_eq8bis}). One can verify that, with these restrictions, the $\partial_3$ derivatives of all invariants vanish, implying the existence of a Killing vector $K \partial_3$ along the vorticity. Alternatively, one can explicitly verify the existence of this Killing vector by showing that the Killing equations $k_{(a;b)}=0$, with $\w{k} = K \partial_3$, form an integrable set. The Killing equations are given in explicit form by \begin{equation} \partial_0 K-\frac{\theta}{3}K = \partial_1 K - q_1 K = \partial_2 K + r_2 K = \partial_3 K = 0 \end{equation} and acting on $K$ with the commutators (\ref{comm_explicit}) shows that the resulting integrability conditions are identically satisfied: except for $[\partial_1,\, \partial_2] K$, this is an immediate consequence of the equations (\ref{e0r},\ref{e0q},\ref{divH1},\ref{divH2}), while for $[\partial_1,\, \partial_2] K$ one has to use equations (\ref{e2q1},\ref{e1r2}). It remains to show the inconsistency of (for example) the case $r_3\neq 0=q_3$: taking a $\partial_3$ derivative of (\ref{th1_eq9}) (with $\partial_3 G =0$) and expressing that $\partial_1 q_3=0$, one obtains $n=0$ and $\u_1-r_1=0$. In terms of the basic variables introduced in section 2 we then have the following restrictions: \begin{eqnarray} \N=\b_3 = \Q_3+\R_3=\E_{23}=\E_{13}+6 \R_1 \R_3=0, \\ \U_1= \frac{3 \R_1}{1+3p'}, \end{eqnarray} with $\R_3\neq 0$. The algebraic equation (\ref{m_eq2}) can then be written as \begin{equation} 8 \O \R_1 \R_3 (9p'-1) +\B_2(3p'+1)=0, \label{th1_eq15} \end{equation} implying that $p'$ is basic (and hence $\theta=0$, unless $p'$ is constant), or that $\R_1=\B_2=0$. In the latter case (\ref{m_eq11}) reads $6\O^2 \R_3 \lambda^8+\B_{11}(p+\mu)^2=0$, implying that $\lambda^{8}(p+\mu)^{-2}$ is a basic function and hence $\theta=0$.
On the other hand, in the case when $p'$ is constant (linear equation of state) and $\R_1 \neq 0$, equation (\ref{m_eq11}) reduces to \begin{equation} 6\O^2 \R_3 \frac{\lambda^8}{(p+\mu)^2} + \B_{11} +\frac{54 p'^2}{(3p'+1)^2}\R_1^2\R_3=0 \end{equation} and again we see that $\lambda^{8}(p+\mu)^{-2}$ is basic, so $\theta=0$. \end{proof} As the previous theorem applies in particular to the case $\U_3=0$ ($\dot{\w{u}}$ orthogonal to $\w{\omega}$) and as, vice versa, the existence of a Killing vector along the vorticity automatically implies $\U_3=0$, we also obtain the following corollary: \begin{co}\label{co1} For a rotating and expanding shear-free perfect fluid with a barotropic equation of state, the acceleration is orthogonal to the vorticity if and only if a Killing vector exists along the vorticity. \end{co} \section{Linear equation of state} We first demonstrate in Lemma \ref{Theorem3} that for a linear equation of state the vanishing of the basic variables $\E_{13},\E_{23}$ and $\Q_3$ implies the existence of a Killing vector along the vorticity. In Theorem \ref{Theorem4} we show that for a linear equation of state the conjecture holds true, unless the conditions for Lemma \ref{Theorem3} are satisfied. The final `elusive case' \cite{CollinsKV}, in which there is a Killing vector along the vorticity, is then dealt with in Theorem \ref{Theorem5}. Recall first an observation which will be helpful in the sequel. \begin{re}[\cite{Slobodeanu2014}]\label{pol} If a function $f$ on $M$ satisfies $\alpha_n f^n + ... + \alpha_1 f + \alpha_0 =0$, where $n \in \mathbb{N}$ and $\alpha_i$'s are all basic functions, then either $f$ is basic or $\alpha_i=0$ for all $i=0,1, ..., n$. \end{re} \begin{lm}\label{Theorem3} If for a rotating and expanding shear-free perfect fluid, obeying a linear equation of state $p=(\gamma-1)\mu + constant$, the basic variables $\E_{13},\E_{23}$ and $\Q_3$ vanish, then a Killing vector exists along the vorticity. 
\end{lm} \begin{proof} Let us assume that $\U_3$ is not basic, as otherwise Theorem \ref{th2} applies. Since the equation of state is linear, the determinant of the linear system (\ref{m_eq1},\ref{m_eq2}) in $\U_1$, $\U_2$ \begin{equation}\label{dete} D=\O^2(9p'-1)^2\left[(3p'-1)^2 \U_3^2 -6 \R_3 (9p'^2-1) \U_3 + \textrm{basic terms}\right], \end{equation} has basic coefficients, so can be assumed to be non-zero due to Remark \ref{pol}. Solving this system for $\U_1,\U_2$ yields rational expressions in $\U_3$ with basic coefficients; this allows us in the following to obtain various equations only in terms of $\U_3$. Since by hypothesis $\Q_3=0$ we have $n_{12}=n_{11}-n_{22}=0$ and we are free to choose a basic rotation making for example $\E_{12}=0$. Together with $\E_{13}=\E_{23}=0$ and the conditions for a linear equation of state ($G'=G+p'-\smfrac{1}{3}=0$) the equations of the previous section simplify considerably. In particular one obtains from (\ref{m_eq11}, \ref{m_eq9}) $\B_{10}=\B_9$ and $ \frac{3p'^2}{3p'+1}\E_0 \U_3 + \B_{11}=0$. By Theorem \ref{th2} this implies the existence of a Killing vector along the vorticity, unless $\E_0=\B_{11}=0$. Since $\E_{12}$ and $\E_0$ are both zero, a further basic rotation may be taken (cf. Remark \ref{Einvar}), making $\b_1 = \b_2$. By (\ref{basic_eq24}) we have then $\B_9=\B_{10}= -3 \b_3\E_3 / (8 \O)$, such that (\ref{m_eq10}) simplifies to \begin{equation}\label{specsimp} \fl \ \frac{\lambda^3 \theta}{p+\mu}\left((6 p^\prime + 1)(3 p' - 1)\U_3 - 3 (3\,{p'}+1 ) {\R_3}\right)- \frac{27\lambda^8}{4(p+\mu)^2}(p'+1) \b_3 + \frac{9\E_3 \b_3}{8\O^2}=0 \end{equation} showing that $\b_3 \neq 0$ unless $p'=-1/6$. 
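The mechanism behind Remark \ref{pol}, which is invoked repeatedly here, can be illustrated symbolically: modelling basic coefficients as constants annihilated by $\partial_0$ (played here by $d/dt$), propagating a polynomial relation in a non-basic function $f$ produces a relation of strictly lower degree, and iterating forces either $\partial_0 f=0$ or the vanishing of all coefficients. A sketch of the first step of this induction, with illustrative names only:

```python
import sympy as sp

t = sp.symbols('t')
f = sp.Function('f')(t)              # a non-basic function: df/dt need not vanish
a0, a1, a2, a3 = sp.symbols('a0:4')  # "basic" coefficients, modelled as constants

# a cubic relation with basic coefficients, as in Remark [pol]
rel = a3*f**3 + a2*f**2 + a1*f + a0

# propagating along the flow kills the coefficients; by the chain rule
# d(rel)/dt = f' * (3*a3*f^2 + 2*a2*f + a1)
step = sp.diff(rel, t)

# dividing out f' leaves the next, lower-degree relation in f
lower = sp.expand(step).coeff(sp.diff(f, t))
```

Iterating this on `lower` eventually yields $a_n \cdot n!\,(\partial_0 f)\ldots = 0$, which is the dichotomy stated in the Remark.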
\medskip We view equations (\ref{m_eq6},\ref{m_eq7},\ref{specsimp}) as an algebraic system in the variables $\lambda^3 \theta(p+\mu)^{-1}$ and $\lambda^8 (p+\mu)^{-2}$; by eliminating the first variable from (\ref{specsimp}) and (\ref{m_eq7}), then from (\ref{specsimp}) and (\ref{m_eq6}), and finally taking the resultant of the two relations with respect to the second variable, we obtain a compatibility condition in the form of a polynomial equation in $\U_3$ with basic coefficients. But then Remark \ref{pol} requires that the leading coefficient vanishes; assuming $p' \neq -1/6$ this is equivalent to $$ \R_2 = \frac{9{p^\prime}^2-6p^\prime - 1}{2(3p^\prime+1)(9p^\prime-1)}\frac{\b_1}{\O}. $$ Analogously, repeating the argument with (\ref{m_eq8}) in place of (\ref{m_eq7}), we get $\R_2 = \Q_1$. By propagating the equations (\ref{m_eq1},\ref{m_eq2}) we obtain a homogeneous system in $\theta$ and $\lambda^5 (p+\mu)^{-1}$, whose necessarily vanishing determinant leads to a third degree polynomial equation in $\U_3$ with basic coefficients. Again (cf. Remark \ref{pol}) this requires the cancellation of every coefficient. This shows that $\b_1=\b_2=0$ would imply $\U_1=\U_2=0$ (a contradiction, cf. Theorem \ref{th1}), so we may assume $\b_1 \neq 0$. The vanishing of the leading coefficient yields a formula for $\R_3$, which we substitute into the second degree coefficient, from which we obtain two possible expressions for $\b_3$ (in terms of other basic quantities), unless $5p^\prime + 1=0$. If $5p^\prime + 1 \neq 0$, then substituting each of the two expressions for $\b_3$ (and taking into account further conditions arising from the cancellation of lower degree coefficients of the basic polynomial) shows that $\U_1, \U_2$ are basic, so Theorem \ref{th1} applies.
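The elimination pattern used in this proof is standard computer algebra: eliminate one auxiliary variable between pairs of relations, then take a resultant in the second to obtain a compatibility condition in $\U_3$ alone. A toy version with placeholder coefficients (the hypothetical polynomials $e_1,e_2,e_3$ below merely stand in for the actual relations coming from (\ref{m_eq6}), (\ref{m_eq7}) and (\ref{specsimp})):

```python
import sympy as sp

# X, Y stand for the variables lambda^3*theta/(p+mu) and lambda^8/(p+mu)^2
U3, X, Y = sp.symbols('U3 X Y')

# three hypothetical relations, affine in X and Y, with coefficients polynomial in U3
e1 = (3*U3 - 1)*X + 2*Y - U3
e2 = U3*X - Y + 5
e3 = X + (U3 + 2)*Y - 3

# eliminate X between the pairs (e1, e2) and (e1, e3) ...
r1 = sp.resultant(e1, e2, X)
r2 = sp.resultant(e1, e3, X)

# ... then eliminate Y: the compatibility condition is a polynomial in U3 alone,
# to which the dichotomy of Remark [pol] can be applied
compat = sp.expand(sp.resultant(r1, r2, Y))
```

In the actual proof the coefficients are basic functions rather than numbers, but the structure of the computation is the same.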
If $5p^\prime + 1 = 0$, then $\partial_0$(\ref{m_eq1}) and (\ref{m_eq7}) form another homogeneous system in $\theta$ and $\lambda^5 (p+\mu)^{-1}$, the necessarily vanishing determinant of which leads to a new polynomial equation in $\U_3$ with basic coefficients, to which Remark \ref{pol} applies. Again, by accumulating step by step the constraints arising from the cancellation of the various coefficients, we are finally led to the same outcome: $\U_1, \U_2$ must be basic and Theorem \ref{th1} applies. \medskip When $p'=-1/6$ the above formulae for $\R_2$ and $\Q_1$ no longer hold (so neither do the subsequent considerations), but now equations (\ref{m_eq4},\, \ref{m_eq5}) reduce to \begin{eqnarray} (\O \R_2+\b_1)\U_1-(\O\Q_1+\b_2) \U_2-2 \O \U_1\U_2+\smfrac{3}{5}\B_4=0,\\ \O(\U_2^2-\U_1^2)-(\O\Q_1+\b_2)\U_1-(\O\R_2+\b_1)\U_2+\smfrac{3}{5}\B_5=0 \end{eqnarray} and elimination of $\U_1$ or $\U_2$ results in a fourth degree polynomial relation for $\U_2$ or $\U_1$, with basic coefficients and leading coefficient $\O^3$. It follows that $\U_1$ and $\U_2$ are basic and we are done by Theorem \ref{th1}. \end{proof} Full details of the previous proof can be found in Maple or Mathematica worksheets, which are available from the authors. The $p'\neq -1/6$ part of the above proof follows closely the $\widetilde{b}_3=\widetilde{c}_3=0$ case in the proof given in Section 5 of \cite{Slobodeanu2014}, which is independent of whether the cosmological constant vanishes or not and which left aside the exceptional cases $p' \in \{-1/6, -1/5\}$. \begin{te}\label{Theorem4} If a rotating and expanding shear-free perfect fluid obeys a linear equation of state $p=(\gamma-1)\mu + constant$, then a Killing vector exists along the vorticity.
\end{te} \begin{proof} For a linear equation of state the determinant of the linear system (\ref{m_eq1},\ref{m_eq2}) in $\U_1$, $\U_2$ is given by (\ref{dete}) and hence can be assumed to be non-zero, unless $\U_3$ is basic (in which case Theorem \ref{th2} applies). Solving this system for $\U_1,\U_2$ and proceeding as in Lemma 4 of \cite{Slobodeanu2014} by evaluating $\partial_1 \U_1 -\partial_2 \U_2$, we obtain a polynomial equation of degree 7 in $\U_3$, containing only basic coefficients and with leading term $\O^7 \Q_3 (3p'+1)(3p'-1)^6(9p'-1)^6\U_3^7$. Since we assume $\omega \theta \neq 0$, it follows that $\U_3$ is basic (hence we are done by Theorem \ref{th2}), or that $\Q_3=0$. In the latter case we can choose a basic rotation making $\E_{12}=0$, under which equations (\ref{m_eq11}, \ref{m_eq9}) simplify respectively to \begin{eqnarray}\label{Th4_U1U2} \E_{13}\U_1-\E_{23}\U_2= \E_0 \U_3 + \B_{11}\frac{3p'+1}{3p'^2},\nonumber \\ \E_{23}\U_1+\E_{13}\U_2= (\B_{10}-\B_9)\frac{3p'+1}{6p'^2}. \end{eqnarray} Unless the determinant of this system vanishes (in which case Lemma \ref{Theorem3} applies), we can solve (\ref{Th4_U1U2}) to obtain expressions for $\U_1,\U_2$ which are linear in $\U_3$. Substituting these in equations (\ref{m_eq1},\ref{m_eq2}) leads to two quadratic equations in $\U_3$, with basic coefficients and with leading terms respectively \begin{equation} \O \E_0 \frac{\E_{A3}}{\E_{13}^2+\E_{23}^2}\frac{(9p'-1)(3p'-1)}{3p'+1} \U_3^2, \ \ \ (A=1,2). \end{equation} It follows that $\U_3$ is basic (and we are done by Theorem \ref{th2}), unless $\E_0=0$, in which case equations (\ref{Th4_U1U2}) show that $\U_1$ and $\U_2$ are basic and we are done by Theorem \ref{th1}. \end{proof} \begin{te}\label{Theorem5} If a shear-free perfect fluid obeys a linear equation of state $p=(\gamma-1)\mu + constant$ and if a Killing vector exists along the vorticity, then $\omega \theta=0$. \end{te} \begin{proof} Assume that $\omega \theta \neq 0$.
If a Killing vector along the vorticity exists, then $\dot{u}_3=0$ and we can impose all relations obtained in the proof of Theorem \ref{th2}: \begin{equation} \dot{u}_3= z_3 = q_3 = r_3= E_{13}=E_{23}=0, \end{equation} together with (\ref{th1_eq8bis}). Translating these in terms of basic variables, we obtain, besides $\dot{u}_3= z_3=0$ (i.e.~$\partial_3 \mu =\partial_3 \theta=0$), \begin{equation} \b_3=\Q_3=\R_3=\E_{13}=\E_{23}=0 \end{equation} and two algebraic equations, namely (\ref{th1_eq8bis}) becoming (cf. also (42) in \cite{Slobodeanu2014}) \begin{eqnarray}\label{th5_extra1} \fl \qquad (p'-1)(6p'+1)(\U_1^2+\U_2^2)-6(3p'+1)(\Q_1\U_1-\R_2 \U_2) \nonumber \\ \fl \qquad + 3(9p'-5) \frac{\lambda^8 \O^2}{(p+\mu)^2} + (3p'+1)(\theta^2-3\mu)\lambda^{-2} +6\E_3 -(3p'+1)\J = 0 \end{eqnarray} and (\ref{basic_eq7}) simplifying to \begin{equation} \B_6=\R_2\b_2-\Q_1\b_1. \end{equation} Under these restrictions the equations (\ref{m_eq11},\ref{m_eq9},\ref{m_eq10},\ref{m_eq7},\ref{m_eq8}) also tell us that \begin{equation} \B_7=\B_8=\B_9=\B_{10}=\B_{11}=0. \end{equation} Furthermore, $\Q_3$ being 0, we have the freedom of performing an extra basic rotation in the $(1,2)$ plane, allowing us to remove one of the basic isotropy-breaking variables, $\b_1, \b_2,\Q_1,\R_2,\partial_1 \J,\partial_2 \J,\E_{12}$ or $\E_0$. \\ We introduce now a new variable $\modU=\U_1^2+\U_2^2$, for which the time evolution can be written as \begin{equation}\label{e0U_a} \partial_0 \modU=2 p'\theta \modU -\frac{3\lambda^5}{2(p+\mu)} (\b_1 \U_1+\b_2 \U_2). \end{equation} Our aim will be to construct a polynomial system with basic coefficients, in which the main variables are $\modU,\theta$ and $p+\mu$, while $\mu$ is given by (\ref{expl_eqstate}), namely \begin{equation}\label{mu_expr} \mu = \frac{p+\mu}{p'+1}+\mu_0. \end{equation} In order to eliminate the variables $\U_1,\U_2$, we need the following equation (cf. 
also (43) in \cite{Slobodeanu2014}), obtained as linear combination of (\ref{m_eq3}) and (\ref{th5_extra1}), \begin{eqnarray} \fl \left( 4\,p'-1 \right)(\b_2\, \U_1-\b_1\, \U_2)-\frac {42\,{p'} ^{3}+13\,{p'}^{2}-8\,p'+1} {3(3\,p'+1)}\O\modU -\frac{\O}{3}(7\,p'-3)(\theta^2 -3 \mu)\lambda^{-2} \label{eq7bis}\\ \fl + 4\O(p+\mu)\lambda^{-2} - \frac{63\,{p'}^2+82\,p'-1 }{3\,p'+1} \frac{\lambda^{8}{\O}^{3}}{(p+\mu)^2} + \frac{21\,p'-1}{9}\O \, \J + \frac{4(9\,p'-1)}{3(3\,p'+1)}\O \, \E_3 + \B_3=0. \nonumber \end{eqnarray} The subsequent time evolutions of this equation will be calculated using (\ref{utheta}--\ref{uU2}) and (\ref{e0mu},\ref{e0p}). The first element of this sequence, $\partial_0(\ref{eq7bis})$, is given by (cf. also (44) in \cite{Slobodeanu2014}) \begin{eqnarray} \fl \qquad (11 p'+ 1)(\b_1\,\U_1+\b_2\,\U_2)\frac{\lambda^5 \theta^{-1}}{p+\mu} + \smfrac{4}{9} p' \left( 21 p'+11 \right) \modU \nonumber \\ \fl \qquad + \frac{8(3\,p'+1)}{9(3\,p' - 1)}\J + \frac {4\left( 9\,p'-1 \right)}{3\left(3\,p'-1 \right)}\E_3 + \frac {3\,p'+1}{3\,p'-1} \frac{\B_3}{\O} \label{eq7bis0} \\ \fl \qquad + \smfrac{2}{3}\,\frac{\left( 21\,p'+1 \right) \left( 9\,p'-5 \right) \left(p'+1 \right)}{p'\, \left(3\,p'-1 \right)} \frac{ {\lambda}^{8}\O^{2}}{(p+\mu)^2} + \smfrac{4}{3}\, \frac{\left(3\,p'+1 \right)\left(6\,p' + 1 \right)}{p'\left( 3\,p'-1 \right)}(p+\mu)\lambda^{-2}=0. \nonumber \end{eqnarray} First notice that, if $p'=-1/11$, $\partial_0$(\ref{eq7bis0}) and (\ref{e0U_a}) give rise to a new equation from which $\b_1 \U_1+\b_2 \U_2$ can be calculated. The next derivative $\partial^2_0$(\ref{eq7bis0}) involves a term $\b_2 \U_1-\b_1 \U_2$, allowing one (as $\modU = \U_1^2+\U_2^2\neq0$) to eliminate successively $\U_1,\U_2$, $\theta$ and $\modU$ from the sequence of derivatives $\partial_0^{(0)}$(\ref{eq7bis}), \ldots, $\partial^{(4)}_0$(\ref{eq7bis}). 
Eventually, after substituting (\ref{expl_eqstate}), an equation in powers of $\lambda$ results, with basic coefficients not all 0, showing that $\lambda$ is basic (details are available from the authors). So henceforth we will assume $p'\neq -1/11$, allowing us to rewrite (\ref{e0U_a}) as \begin{eqnarray} \fl \partial_0 \modU =\frac{\theta}{(3p'-1)(11p'+1)} \left[\smfrac{4}{3}p'(3p'-1)(27p'+7) \modU + 2\frac{(3p'+1)(6p'+1)}{p'}(p+\mu)\lambda^{-2}\right.\nonumber \\ \fl \left. + \smfrac{4}{3}(3p'+1) \J +2(9p'-1)\E_3+\smfrac{3}{2}(3p'+1)\frac{\B_3}{\O}+\frac{(21p'+1)(9p'-5)(p'+1)}{p'} \frac{\lambda^8 \O^2}{(p+\mu)^2}\right]. \label{e0U_b} \end{eqnarray} At this stage it becomes advantageous to apply a basic rotation such that, for example, $\b_2=\b_1$. In what follows we only outline the proof, as the output of the calculations is far too lengthy for publication. A Maple or Mathematica worksheet with all the details can be obtained from the authors. First we should consider the special case $p'=1/4$: the linear terms in $\U_1,\U_2$ are then absent from (\ref{eq7bis}), but reappear in its evolution via (\ref{e0U_a}) and $\partial^{(2)}_0$(\ref{eq7bis0}). As in the case $p' = -1/11$, elimination of $\U_1,\U_2$, $\theta$ and $\modU$ from the sequence of derivatives $\partial_0^{(0)}$(\ref{eq7bis}), \ldots , $\partial^{(4)}_0$(\ref{eq7bis}) then leads to $\lambda$ being basic, whence $\theta=0$.\\ When $p'\neq-1/11$ and $p'\neq1/4$, we first proceed as in \cite{Slobodeanu2014}: using (\ref{eq7bis},\ref{eq7bis0}) to eliminate the linear $\U_1,\U_2$ terms from the sequence $\partial_0^{(2)}$(\ref{eq7bis}), \ldots, $\partial^{(4)}_0$(\ref{eq7bis}), we obtain three relations of the form $\mathcal{P}_i=a_i \modU^2+b_i\modU \theta^2 +c_i \modU +d_i \theta^2 +e_i=0$ ($i=1,2, 3$), with $a_i,b_i,c_i,d_i,e_i$ polynomials with basic coefficients, of degrees 2 and 20 in $p+\mu$ and $\lambda$ respectively.
Eliminating $\theta$ results in two relations $\mathcal{R}_i(\modU,p+\mu, \lambda)=0$ ($i=1, 2$), with $\mathcal{R}_1, \mathcal{R}_2$ polynomials of third degree in $\modU$ and having degrees $9$ and $30$ in respectively $p+\mu$ and $\lambda$. Their resultant $\mathcal{F}(p+\mu, \lambda)$ with respect to $\modU$ factorises as follows over $\mathbb{Q}$: \begin{eqnarray} \fl \mathcal{F} = \O^9 p'^{13}(p+\mu)^{18}\lambda^{18}(4p'-1)(6p'+1)(3p'+1)^2(3p'-1)^9\nonumber \\ \times (21p'+11)^4(11p'+1)^6(117 p'^2+69p'+2) (96 p'^2+47 p'+1) \mathcal{F}_1 \mathcal{F}_2 \end{eqnarray} with $\mathcal{F}_1$, $\mathcal{F}_2$ respectively of degrees (6,20) and (21,70) in $p+\mu$ and $\lambda$ and both having basic coefficients depending on $\O, \J, \b_1, \E_3, \B_3$. Using (\ref{expl_eqstate}), any such polynomial in $p+\mu$ and $\lambda$ will be written as $\sum_{i,j} c_{i,j} \lambda^{i r+j}$ with $c_{i,j}$ basic functions. The remaining part of the proof is based on Lemma 3 of \cite{Slobodeanu2014}, which essentially says that a finite sum $\sum_{i,j} c_{i,j} \lambda^{i r+j}$ of products of basic functions and real powers of a (non-basic) function $\lambda$ can only be 0 if all coefficients vanish: if a `reference coefficient' $c_{i_0,j_0} \neq 0$ and if for all $(i,j) \neq (i_0,j_0)$ there are no cancellations corresponding to $i r + j = i_0 r + j_0$, then $\lambda$ is basic. As cancellations can only occur for rational values of $r$, this implies, among other things, that for irrational $r$ all $c_{i,j}$ must be identically 0.\\ While the cases $p'\in\{-\smfrac{1}{3},-\smfrac{1}{11},0,\smfrac{1}{4},\smfrac{1}{3}\}$ have been dealt with before, the special cases $6p'+1=0$, $117{p'}^2+69p'+2=0$, $96{p'}^2+47p'+1=0$ and $21p'+11=0$ correspond to the situation where the degree w.r.t.~$\modU$ of $\mathcal{R}_1, \mathcal{R}_2$ decreases to 2 or 1 (for $21p'+11=0$).
Calculating the resultant of $\mathcal{R}_1, \mathcal{R}_2$, after simplifying first w.r.t.~the given $p'$ relations, results in $\lambda$ being a root of a polynomial in some fractional power of $\lambda$ (with basic coefficients not all being 0). It follows that $\lambda$ is basic, whence $\theta=0$.\\ The case $\mathcal{F}_1=0$ is slightly more complicated: after substituting $p$ and $\mu$ as functions of $\lambda$ via (\ref{expl_eqstate}), the occurring terms belong to the set $\{\lambda^{6r},\lambda^{5r},\lambda^{5r+2},\lambda^{4r+4},\lambda^{4r+2}, \lambda^{3r+10},\lambda^{2r+10},\lambda^{2r+12},\lambda^{20}\}$, with the coefficients $c_{0,20}$ and $c_{6,0}$ of $\lambda^{20}$ and $\lambda^{6r}$ being polynomials in $r$ which have no common factor, the former being irreducible over $\mathbb{Q}$. The case $c_{0,20}=0$ being thereby excluded, the $\lambda^{20}$ term must cancel with one of the remaining terms in $\mathcal{F}_1$, leading to $r\in \{\smfrac{9}{2},\smfrac{10}{3},\smfrac{18}{5},4,5\}$. While the cases $r\in\{\smfrac{10}{3},4\}$ ($p'\in\{\smfrac{1}{9},\smfrac{1}{3}\}$) have been excluded before, the cases $r\in \{\smfrac{9}{2},\smfrac{18}{5},5\}$ can easily be excluded by direct substitution in $\mathcal{F}_1$ and by verifying that the resulting polynomial in (some rational power of) $\lambda$ is not identically 0.\\ The hardest case, $\mathcal{F}_2=0$, can be dealt with in a similar way. First notice that the coefficients $c_{0,70}, c_{3,60}$ and $c_{21,0}$ of $\mathcal{F}_2=\sum_{i,j} c_{i,j} \lambda^{i r+j}$ are polynomials in $r$ of degrees respectively 42, 45 and 57, with the rational roots belonging either to the set $\{0,2,\smfrac{5}{2},\smfrac{8}{3},\smfrac{30}{11},3,\smfrac{10}{3},\smfrac{15}{4},4\}$ of already excluded $r$-values, or to the set $\{\smfrac{8}{3},\smfrac{20}{7},\smfrac{14}{3}\}$.
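The candidate values of $r$ quoted above for the cancellation of the $\lambda^{20}$ term in $\mathcal{F}_1$ follow from elementary arithmetic on the listed exponent pairs $(i,j)$, and can be checked directly:

```python
from fractions import Fraction

# exponent pairs (i, j) of the terms lambda^{i*r + j} occurring in F_1,
# apart from lambda^20 itself (exponents as listed in the text)
terms = [(6, 0), (5, 0), (5, 2), (4, 4), (4, 2), (3, 10), (2, 10), (2, 12)]

# cancellation of lambda^20 against lambda^{i*r + j} requires i*r + j = 20
candidates = sorted({Fraction(20 - j, i) for i, j in terms})
# this recovers the set {9/2, 10/3, 18/5, 4, 5} quoted in the proof
```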
Again by direct substitution in $\mathcal{F}_2$ it is easy to show that the latter three values of $r$ are excluded, while a simple evaluation of resultants shows that $c_{0,70}, c_{3,60}$ and $c_{21,0}$ have no common irrational roots (besides those corresponding to the previously excluded case $117{p'}^2+69p'+2=0$). It follows that each of the terms $\lambda^{70}, \lambda^{60+3 r}$ and $\lambda^{21r}$ must cancel with one of the other terms in $\mathcal{F}_2$, yielding three large sets of $r$-values to be investigated. However, the intersection of the three sets only contains the excluded value $r=\smfrac{10}{3}$ and therefore $\mathcal{F}_2=0$ implies that $\lambda$ is basic, whence $\theta=0$. \end{proof} In Section 6 of \cite{Slobodeanu2014} a very similar proof of this final `elusive case' was given, under the assumption that the cosmological constant vanishes; it does not cover the exceptional cases $p' \in \{ -\frac{1}{6},-\frac{1}{11},-\frac{1}{21}, \frac{1}{4} \}$. We notice that the system obtained by iterated propagation of (73) was there seen as a system in $\theta^2$ and $\partial_0 \theta$. The different choice of variables employed here allowed a unified treatment of the cases $p' \in \{ -\frac{1}{11}, \frac{1}{4} \}$, while $p' = -\frac{1}{6}$ has been easily eliminated and $p' = -\frac{1}{21}$ no longer occurs. \begin{re} One could wonder whether it is always possible to fix the tetrad -- as we did in the proof of Theorem \ref{Theorem5} -- such that all basic variables become invariants and hence such that, in the case of a Killing vector along the vorticity, all the occurring $\partial_3$ derivatives become 0.
It is easy to see, \emph{even for a non-linear equation of state}, that the exceptional situation, in which all the basic isotropy-breaking variables, $\b_1=\b_2=\Q_1=\R_2=\E_{12}=\E_0=\partial_1 \J=\partial_2 \J$ vanish, is inconsistent: (\ref{basic_eq27},\ref{basic_eq28}) imply then $\B_{12}=\B_{13}=0$, turning (\ref{m_eq12},\ref{m_eq13}) into a homogeneous system in $\U_1,\U_2$, the determinant of which is positive definite (and hence the acceleration is parallel to the vorticity), unless $9 G p'-9 {p'}^2+1 = 0$ and \begin{equation*} (p'+1)(81{p'}^2-5)\O^2\lambda^8-6{p'}^2(p+\mu)^2 \E_3 = 0. \end{equation*} Propagating this second equation along $\w{u}$ and simplifying the result by means of $9 G p'-9 {p'}^2+1 = 0$ leads then to a contradiction. \begin{comment} implying that $\mu$ is basic and hence $\theta=0$.\\ With the tetrad fixed in some way we can assume that all $\partial_3$ derivatives are 0 and hence, in particular, $\b_3=\Q_3=\R_3=\E_{13}=\E_{23}= \partial_3 \u_1=\partial_3 \u_2=0$, with the latter two equations also implying $\N=0$. 
The only surviving commutator is then \begin{equation} [X,\, Y] = -2\frac{\O \lambda^3}{p+\mu}\partial_0+\Q_2 X+\R_1 Y, \end{equation} The basic equation (\ref{basic_eq31}) yields now $\B_{16}=0$, while the remaining equations of Appendix 2 can be solved explicitly for $X(\b_1),Y(\b_1),X(\b_2),Y(\b_2),X(\E_3),Y(\E_3)$ (however the resulting integrability conditions don't lead to any (a priori) useful information).\\ \end{comment} \end{re} \section{Conclusion and discussion} For shear-free perfect fluids obeying a barotropic equation of state (with $p+\mu\neq 0$) and obeying the Einstein field equations (with or without cosmological constant) we have first demonstrated two theorems, showing that (Theorem \ref{th1}) $\omega \theta=0$ once $\dot{u}_1 / (\lambda p')$ and $\dot{u}_2 / (\lambda p')$ are basic and (Theorem \ref{th2}) that either $\omega \theta=0$ or a Killing vector along the vorticity vector exists once $\dot{u}_3 / (\lambda p')$ is basic. In particular, Theorem \ref{th2} shows that (when $\omega \theta \neq 0$) the existence of a Killing vector along the vorticity is equivalent to the orthogonality of acceleration and vorticity. Next we have demonstrated (Theorems 3 and 4) that $\omega \theta=0$ once the equation of state is linear: $p=(\gamma -1) \mu + p_0$, covering in the new proof all the exceptional cases of \cite{Slobodeanu2014} and generalising the result to the possible presence of a cosmological constant (absorbed in $p_0$). While doing so we generalised the formalism of \cite{Slobodeanu2014} to general equations of state, hoping herewith (and with the aid of Theorems \ref{th1} and \ref{th2}) to have provided the interested reader with a new technique to tackle the Shear-free Fluid Conjecture in its full generality. In Section 5 we have demonstrated that the assumption of a linear equation of state together with $\omega \theta \neq 0$ implies $\Q_3=0$.
Lemma \ref{Theorem3} was then used to reduce the problem to the situation where a Killing vector exists along the vorticity. We are convinced that this lemma is also valid for a general barotropic equation of state (although we have not been able to provide a detailed proof of this claim), and hence may play a key role in the general proof. The hardest part will then undoubtedly remain to prove the conjecture in the case where there is a Killing vector along the vorticity ...\\ The interested reader can obtain Maple or Mathematica worksheets with full details of all the proofs from the authors. \section{Appendix 1} The following is the initial set of equations describing a shear-free perfect fluid with a general barotropic equation of state $p=p(\mu)$, assuming throughout $\omega, p'$ and $3 p' +1 \neq 0$.\\ \textbf{a)} \ evolution equations ($A=1,2$, $\alpha=1,2,3$) \begin{eqnarray} \partial_{0}\mu &=& -(p+\mu) \theta, \quad \textrm{(conservation of mass)} \label{e0mu} \\ \partial_{0}p &=& -(p+\mu) p' \theta, \label{e0p} \\ \partial_{0}\theta&=&-\smfrac{1}{3}\theta^{2} + 2\omega^{2}-\smfrac{1}{2}(\mu+3p) + j, \ \textrm{(Raychaudhuri eq.)} \label{Raych} \\ \partial_0 \dot{u}_{\alpha} &=& p' z_{\alpha} - G\theta \dot{u}_{\alpha}, \label{e0acc} \\ \partial_0 \omega &=&\smfrac{1}{3} \omega \theta(3p'-2), \label{e0om} \\ \partial_0 r_{\alpha} &=& - \frac{1}{3} z_{\alpha} - \frac{\theta}{3} (\dot{u}_{\alpha} + r_{\alpha}), \label{e0r}\\ \partial_0 q_{\alpha} &=& \frac{1}{3} z_{\alpha} + \frac{\theta}{3} (\dot{u}_{\alpha}-q_{\alpha}), \label{e0q} \\ \partial_0 n &=& -\frac{\theta}{3} n, \label{e0n} \\ \partial_0 z_1 &=& \theta (p'-1) z_1-\smfrac{1}{2}\omega(9p' -1) z_2+ \smfrac{1}{2}\theta\omega(9G-2)\dot{u}_2, \label{e0z1}\\ \partial_0 z_2 &=& \theta (p'-1) z_2+\smfrac{1}{2}\omega(9p' -1) z_1- \smfrac{1}{2}\theta\omega(9G-2)\dot{u}_1, \label{e0z2} \\ \partial_0 z_3 &=& \theta (p' -1) z_3, \label{e0z3} \\ \partial_0 j &=& \theta \left[G_p(p+\mu)- 2G + 1 
\right]\dot{\w{u}}^2 -(2G-1)\dot{\w{u}}\cdot \w{z} \nonumber \\ &-& \theta\left[\smfrac{1}{3}(3G+1)j - p'(9p'-1)\omega^2\right], \label{e0jfirst} \end{eqnarray} $\diamondsuit$ the $\dot{\w{E}}$ second Bianchi identities, \begin{eqnarray} \partial_0 E_{AA} &=& \frac{1}{3(3p'+1)}[(18p'^2-3p'+G-1)\theta\omega^2- (3G+9p'+1)\theta E_{AA} \nonumber\\ & & -2G(3\dot{u}_A z_A -\dot{\w{u}} \cdot \w{z}) -(2G-(p+\mu)G_p)\theta (3\dot{u}_A^{~2}-\dot{\w{u}}^2) ], \\ \partial_0 E_{33} &=& -\frac{1}{3(3p'+1)}[2(18p'^2-3p'+G-1)\theta\omega^2+ (3G+9p'+1)\theta E_{33} \nonumber\\ & & +2G(3\dot{u}_3 z_3 -\dot{\w{u}} \cdot \w{z}) +(2G-(p+\mu)G_p)\theta (3\dot{u}_3^{~2}-\dot{\w{u}}^2) ],\label{e0E33} \\ \partial_0 E_{\alpha\beta} &=& -\frac{1}{3p'+1} [(G+3p'+\smfrac{1}{3})\theta E_{\alpha\beta}+G (\dot{u}_\alpha z_\beta+\dot{u}_\beta z_\alpha) \nonumber\\ & & + (2G-(p+\mu)G_p) \theta\dot{u}_\alpha \dot{u}_\beta ], \label{e0Ealfabeta} \end{eqnarray} \textbf{b)} \ spatial equations \begin{eqnarray} \partial_\alpha p = - (p+\mu) \dot{u}_{\alpha}, \quad \textrm{(Euler equations)} \label{gradp} \\ \partial_\alpha \mu = -\frac{p+\mu}{p'} \dot{u}_{\alpha}, \quad \label{gradmu} \\ \partial_\alpha \theta = z_\alpha, \quad (\textrm{definition of } \w{z}) \label{gradtheta} \end{eqnarray} \begin{eqnarray} \partial_1 \omega = \smfrac{2}{3} z_2-\omega(q_1+2 \dot{u}_1), \quad & \big(\textrm{\small{(02)--Einstein field equation}}\big) \label{gradom1}\\ \partial_2 \omega = - \smfrac{2}{3} z_1+\omega (r_2-2\dot{u}_2), \quad & \big(\textrm{\small{(01)--Einstein field equation}}\big)\label{gradom2}\\ \partial_3 \omega = \omega( \dot{u}_3+r_3-q_3), & \label{gradom3} \end{eqnarray} \begin{eqnarray} \partial_{1}\dot{u}_{1} = \smfrac{1}{3}(j - \omega^2) - q_{2}\dot{u}_{2} + r_{3}\dot{u}_{3} -\dot{u}_1^2+E_{11}, & \big(\textrm{\small{(11)--Einstein field eq.}}\big) \label{e1u1}\\ \partial_{2}\dot{u}_{2} = \smfrac{1}{3}(j - \omega^2) - q_3\dot{u}_3 + r_1\dot{u}_1 - \dot{u}_2^2 + E_{22}, & 
\big(\textrm{\small{(22)--Einstein field eq.}}\big)\label{e2u2}\\ \partial_{3}\dot{u}_{3} = \smfrac{1}{3}(j + 2\omega^2) - q_1\dot{u}_1 + r_2 \dot{u}_2 - \dot{u}_3^2 + E_{33}, & \big(\textrm{\small{(33)--Einstein field eq.}}\big)\label{u33}\\ \partial_1 \dot{u}_2 = -p'\omega \theta + q_2 \dot{u}_1+\smfrac{1}{2} n_{33} \dot{u}_3-\dot{u}_1 \dot{u}_2+E_{12}, & \big(\textrm{\small{(12)--Einstein field eq.}}\big)\label{e1u2}\\ \partial_2 \dot{u}_1 = p'\omega \theta-r_1 \dot{u}_2-\smfrac{1}{2} n_{33} \dot{u}_3-\dot{u}_1 \dot{u}_2+E_{12}, & \big(\textrm{\small{(21)--Einstein field eq.}}\big)\label{e2u1}\\ \partial_1 \dot{u}_3 = -\smfrac{1}{2} n_{33} \dot{u}_2 - r_3 \dot{u}_1-\dot{u}_1 \dot{u}_3+E_{13}, & \big(\textrm{\small{(13)--Einstein field eq.}}\big) \label{e1u3} \\ \partial_2 \dot{u}_3 = \smfrac{1}{2}n_{33}\dot{u}_1 + q_3 \dot{u}_2-\dot{u}_2 \dot{u}_3+E_{23}, & \big(\textrm{\small{(23)--Einstein field eq.}}\big) \label{e2u3} \\ \partial_3 \dot{u}_1 = -(\smfrac{1}{2} n_{33} - n)\dot{u}_2 + q_1 \dot{u}_3 - \dot{u}_1 \dot{u}_3+E_{13}, & \big(\textrm{\small{(31)--Einstein field eq.}}\big) \label{e3u1} \\ \partial_3 \dot{u}_2 = (\smfrac{1}{2}n_{33} - n) \dot{u}_1 - r_2 \dot{u}_3 - \dot{u}_2 \dot{u}_3 + E_{23}, & \big(\textrm{\small{(32)--Einstein field eq.}}\big) \label{e3u2} \end{eqnarray} \begin{eqnarray} \partial_1 j &=& p'\theta z_1-\frac{1}{6} \omega (27p'+13)z_2+\frac{1}{3} (18 \omega^2+\theta^2-3j-3\mu)\dot{u}_1 - \frac{p+\mu}{2p'}\dot{u}_1 \nonumber \\ &+&\frac{1}{2}\theta \omega(9G-2) \dot{u}_2 +4 \omega^2 q_1, \label{e1j}\\ \partial_2 j &=& p'\theta z_2+\frac{1}{6} \omega (27p'+13)z_1 + \frac{1}{3}(18 \omega^2+\theta^2-3j-3\mu)\dot{u}_2 - \frac{p+\mu}{2p'}\dot{u}_2\nonumber \\ &-&\frac{1}{2}\theta \omega(9G-2) \dot{u}_1 -4 \omega^2 r_2, \label{e2j}\\ \partial_3 j &=& p'\theta z_3+\frac{1}{3}(\theta^2-18 \omega^2-3j-3\mu)\dot{u}_3 -\frac{p+\mu}{2p'} \dot{u}_3-4 (r_3-q_3) \omega^2, \label{e3j} \end{eqnarray} \begin{eqnarray} \partial_1 z_1 &=& 
\frac{1}{1+3p'}[\theta(3G-2)E_{11}+(2G-(p+\mu)G_p)\theta(3\dot{u}_1^{~2}-\dot{\w{u}}^{2})\nonumber\\ & & +2(3G-3p'-1)\dot{u}_1z_1-2 G\dot{\w{u}}\cdot \w{z} +\theta\omega^2 (9p'^2+6p'-G-1)] \nonumber \label{e1Z1} \\ & & + r_3z_3 - q_2z_2 \label{Z11}, \\ \partial_2 z_2 &=& \frac{1}{1+3p'}[\theta(3G-2)E_{22}+(2G-(p+\mu)G_p)\theta(3\dot{u}_2^{~2}-\dot{\w{u}}^2)\nonumber \\ & & +2(3G-3p'-1)\dot{u}_2z_2-2 G\dot{\w{u}}\cdot \w{z}+\theta\omega^2(9p'^2+6p'-G-1)]\nonumber \label{e2Z2}\\ & & + r_1z_1-q_3z_3\label{Z22},\\ \partial_3 z_3 &=&\frac{1}{1+3p'}[\theta(3G-2)E_{33}+(2G-(p+\mu)G_p)\theta(3\dot{u}_3^{~2}-\dot{\w{u}}^{2})\nonumber \\ & & +2(3G-3p'-1)\dot{u}_3z_3-2 G\dot{\w{u}}\cdot \w{z}+\theta\omega^2(9p'^2- 6p' + 2G + 1)]\nonumber \\ & & + r_2z_2 - q_1 z_1, \label{Z33} \end{eqnarray} \begin{eqnarray} \partial_1 z_2 &=& q_2z_1+\frac{n_{33}}{2}z_3+\frac{\omega}{6}(2\theta^2-12 \omega^2-6 j + 9 p+3 \mu) \nonumber \\ &+& \frac{1}{1+3p'}[(3G-1-3p')(\dot{u}_2 z_1+ \dot{u}_1 z_2)+3(2G-(p+\mu)G_p)\theta\dot{u}_1 \dot{u}_2\nonumber \\ & & +\theta (3G-2)E_{12}]\label{e1Z2},\\ \partial_2 z_1 &=& -r_1 z_2-\frac{n_{33}}{2}z_3-\frac{\omega}{6}(2\theta^2-12 \omega^2-6 j + 9p+3 \mu) \nonumber \\ & & + \frac{1}{1+3p'}[(3 G-1-3p')(\dot{u}_2 z_1+ \dot{u}_1 z_2)+3p'(2G-(p+\mu)G_p)\theta\dot{u}_1 \dot{u}_2\nonumber \\ & & +\theta (3G-2)E_{12}]\label{e2Z1},\\ \partial_3 z_1 &=& q_1 z_3+\left(n-\frac{n_{33}}{2}\right)z_2 +\frac{1}{1+3p'}[(3G-1-3p')(\dot{u}_3 z_1+ \dot{u}_1z_3) \nonumber \\ & & + 3\theta(2G-(p+\mu)G_p)\dot{u}_1\dot{u}_3+\theta(3G-2) E_{13}],\\ \partial_3 z_2 &=& -r_2 z_3-\left(n-\frac{n_{33}}{2}\right)z_1 +\frac{1}{1+3p'}[(3G-1-3p')(\dot{u}_3 z_2+ \dot{u}_2z_3)\nonumber \\ & & + 3\theta(2G-(p+\mu)G_p)\dot{u}_2\dot{u}_3+\theta(3G-2) E_{23}] ,\\ \partial_1 z_3 &=& -r_3 z_1-\frac{n_{33}}{2}z_2+\frac{1}{1+3p'}[(3G-1-3p')(\dot{u}_3 z_1+ \dot{u}_1z_3)\nonumber \\ & & + 3\theta(2G-(p+\mu)G_p)\dot{u}_1\dot{u}_3+\theta(3G-2) E_{13}],\label{e1Z3} \\ \partial_2 z_3 &=& q_3
z_2+\frac{n_{33}}{2}z_1 +\frac{1}{1+3p'}[(3G-1-3p')(\dot{u}_3 z_2+ \dot{u}_2z_3) \nonumber \\ & & + 3\theta(2G-(p+\mu)G_p)\dot{u}_2\dot{u}_3+\theta(3G-2)E_{23}]\label{e2Z3}, \end{eqnarray} \medskip $\diamondsuit$ \ two equations obtained as linear combinations of the (12)--Einstein field equation and one of the Jacobi equations, \begin{eqnarray}\label{e2q1} \partial_2q_1+\smfrac{1}{2}\partial_3 n_{33}-r_2(r_1+q_1)-n(q_3+r_3)+n_{33}q_3-\smfrac{1}{3}\omega \theta +E_{12} =0, \end{eqnarray} \begin{eqnarray}\label{e1r2} \partial_1r_2+\smfrac{1}{2}\partial_3 n_{33}+q_1(r_2+q_2)+n(q_3+r_3)-n_{33}r_3-\smfrac{1}{3}\omega \theta -E_{12} =0, \end{eqnarray} \begin{comment} \begin{eqnarray}\label{e2q1} \partial_2q_1&=&\frac{1}{3(1+3p')\omega}[(2G-(p+\mu)G_p)\theta(\dot{\w{u}}^2-3\dot{u}_3^{~2})+2 G(\dot{\w{u}}\cdot \w{z}-3\dot{u}_3 z_3)\nonumber\\ & &-\theta (3G-2)E_{33}-(2G+9p'^2-9p')\theta\omega^2]+\frac{1}{3\omega}(3\dot{u} _3-3q_3+r_3)z_3\nonumber\\ & &-\frac{1}{3\omega}(r_2z_2-q_1z_1)+(r_1+q_1)r_2+n(q_3+r_3)-E_{12}, \end{eqnarray} \begin{eqnarray}\label{e1r2} \partial_1r_2&=&\frac{1}{3(1+3p')\omega}[(2G-(p+\mu)G_p)\theta(\dot{\w{u}}^2-3\dot{u}_3^{~2})+2 G(\dot{\w{u}}\cdot \w{z}-3\dot{u}_3 z_3)\nonumber\\ & &-\theta (3G-2)E_{33}-(2G+9p'^2-9p')\theta\omega^2]+\frac{1}{3\omega}(3\dot{u} _3-q_3+3r_3)z_3\nonumber\\ & &-\frac{1}{3\omega}(r_2z_2-q_1z_1)-(r_2+q_2)q_1-n(q_3+r_3)+E_{12}, \end{eqnarray} \end{comment} $\diamondsuit$ \ linear combinations of the (13)-- and (23)--Einstein field equations and the Jacobi equations, \begin{eqnarray} \partial_2 r_3 = -\smfrac{1}{2}\partial_1 n_{33}-q_1 n_{33}-q_2(q_3+r_3)+E_{23}, \label{e2r3} \\ \partial_1 q_3 = -\smfrac{1}{2}\partial_2 n_{33}+r_2 n_{33}+r_1(q_3+r_3)-E_{13}, \label{e1q3} \end{eqnarray} \textbf{c)} \ the (03)--Einstein field equation \begin{equation}\label{defn33} n_{33} = \frac{2}{3\omega} z_3, \quad \end{equation} \textbf{d)} \ remaining combinations of the Jacobi equations and the $(1,3)$, $(2,3)$ and 
$(\alpha,\alpha)$--Einstein field equations \begin{eqnarray} \partial_1 n+\partial_3 q_2 = \smfrac{1}{2} \partial_1 n_{33} +n(r_1-q_1) +r_3(r_2+q_2)+q_1 n_{33}-E_{23} \label{cons1}, \\ \partial_2 n+\partial_3 r_1 = \smfrac{1}{2} \partial_2 n_{33} +n(r_2-q_2) -q_3(r_1+q_1)-r_2 n_{33}+E_{13} \label{cons2} \end{eqnarray} and \begin{eqnarray} \partial_3 q_3-\partial_2 r_2 &=& E_{11}-\frac{\mu}{3}+\frac{\theta^2}{9}+\frac{n_{33}^2}{4}-r_2^2-q_3^2 +r_1q_1\label{ein11bis}, \\ \partial_1 q_1-\partial_3 r_3 &=& E_{22}-\frac{\mu}{3}+\frac{\theta^2}{9}+\frac{n_{33}^2}{4}-r_3^2-q_1^2 +r_2q_2\label{ein22bis}, \\ \partial_2 q_2-\partial_1 r_1 &=& E_{33}-\frac{\mu}{3}+\frac{\theta^2}{9}-\frac{3 n_{33}^2}{4} + n \, n_{33}+3\omega^2- q_2^2-r_1^2 +r_3 q_3, \label{ein33bis} \end{eqnarray} \textbf{e)} the `$\dot{\w{H}}$' second Bianchi identities \begin{eqnarray} \fl \partial_0 H_{11} + \partial_2 E_{13} - \partial_3 E_{12}&=& E_{11} n -\left(n-\frac{n_{33}}{2}\right) E_{22} - \frac{1}{2} E_{33} n_{33}+(q_3+2\u_3)E_{12}\nonumber \\ & & +(r_2-2\u_2) E_{13} -(r_1+q_1)E_{23}-\theta H_{11}-\omega H_{12}, \label{H11_0}\\ % \fl \partial_0 H_{22} + \partial_3 E_{12} - \partial_1 E_{23}&=& E_{22} n - \left(n-\frac{n_{33}}{2}\right) E_{11}-\frac{1}{2} E_{33} n_{33}+(r_3-2\u_3)E_{12}\nonumber \\ & & +(q_1+2\u_1) E_{23} -(r_2 + q_2)E_{13}-\theta H_{22}+\omega H_{12}, \label{H22_0}\\ % \fl \partial_0 H_{12}-\partial_3 E_{22}+\partial_2 E_{23} &=& (q_3+2\u_3)E_{22}-(q_3-\u_3)E_{33}+\left(2 n-\frac{n_{33}}{2}\right)E_{12}-\frac{p+\mu}{6 p^\prime} \u_3\nonumber \\ & & +(r_1+\u_1)E_{13}+(2r_2-\u_2)E_{23}+(H_{11}-H_{33})\omega - H_{12} \theta, \label{H12_0}\\ % \fl \partial_0 H_{13} +\partial_2 E_{33}-\partial_3 E_{23} &=& (\u_2-2r_2)E_{22}-(r_2-2\u_2)E_{11}+(q_1-\u_1) E_{12} +\frac{p+\mu}{6p^\prime} \u_2\nonumber \\ & & +(2 q_3+\u_3) E_{23}+\left(n+\frac{n_{33}}{2}\right) E_{13}-H_{13}\theta+H_{23} \omega, \label{H13_0}\\ % \fl \partial_0 H_{23} -\partial_1 E_{33}+\partial_3 E_{13} &=& 
-(\u_1+2q_1)E_{11}-(q_1+2\u_1)E_{22}+(r_2+\u_2) E_{12}-\frac{p+\mu}{6 p^\prime}\u_1\nonumber \\ & & +(2 r_3-\u_3) E_{13}+\left(n+\frac{n_{33}}{2}\right) E_{23}-H_{23}\theta-H_{13} \omega, \label{H23_0} \end{eqnarray} \textbf{f)} the `$\w{\nabla \cdot H}$' second Bianchi equations (with $\w{\nabla \cdot H}_3$ becoming an identity under these two) \begin{eqnarray} \fl \partial_3 q_1+\partial_1 r_3 = -4 E_{13}+\frac{3G-2}{3(3p'+1)} \frac{\theta}{\omega} E_{23}+3(\u_1\u_3 +\u_1 r_3-\u_3 q_1) +\frac{2G-G_p(p+\mu)}{3p'+1}\frac{\theta}{\omega}\u_2\u_3 \nonumber \\ +\frac{G z_3\u_2}{\omega(3p'+1)}+\frac{(G-3p'-1) z_2\u_3}{\omega(3p'+1)}-\frac{z_1 z_3}{9\omega^2} +\frac{(q_3-4r_3)z_2}{3\omega}+\frac{r_2 z_3}{3\omega} \nonumber \\ +r_1 r_3+q_1 q_3+q_3 r_1-n r_2 \label{divH1},\\ \fl \partial_3 r_2+\partial_2 q_3 = 4 E_{23}+\frac{3G-2}{3(3p'+1)} \frac{\theta}{\omega} E_{13}-3(\u_2\u_3 -\u_2 q_3+\u_3 r_2) +\frac{2G-G_p(p+\mu)}{3p'+1}\frac{\theta}{\omega}\u_1\u_3 \nonumber \\ +\frac{G z_3\u_1}{\omega(3p'+1)}+\frac{(G-3p'-1) z_1\u_3}{\omega(3p'+1)}+\frac{z_2 z_3}{9\omega^2} -\frac{(r_3-4q_3)z_1}{3\omega}-\frac{q_1 z_3}{3\omega} \nonumber \\ -r_2 r_3-q_2 q_3-r_3 q_2+n q_1, \label{divH2} \end{eqnarray} $\diamondsuit$ \ `$\w{\nabla \cdot E}$' second Bianchi equations (taking into account (\ref{def_H})) \begin{eqnarray} \fl \partial_{\beta} {E^\beta}_1 +E_{11}(2 q_1 -r_1)+E_{12}(2 q_2-r_2)+E_{13}(q_3-2r_3)+E_{22}(r_1+q_1)+E_{23}(n_{33}-n) \nonumber \\ +\omega z_2-3 \omega^2 q_1+\frac{\mu+p}{3 p'} \u_1 =0, \label{divE1} \\ \fl \partial_{\beta} {E^\beta}_2 -E_{22}(2 r_2 -q_2)-E_{12}(2 r_1-q_1)-E_{23}(r_3-2q_3)-E_{11}(r_2+q_2)-E_{13}(n_{33}-n) \nonumber \\ -\omega z_1+3 \omega^2 r_2+\frac{\mu+p}{3 p'} \u_2 =0, \label{divE2} \\ \fl \partial_{\beta} {E^\beta}_3 +E_{13}(2 q_1 -r_1) - E_{23}(2 r_2 -q_2)+E_{33}(2q_3-r_3)+E_{11}(r_3+q_3) \nonumber \\ -3 \omega^2(q_3-r_3-2\u_3)+\frac{\mu+p}{3 p'} \u_3 =0. 
\label{divE3} \end{eqnarray} \section{Appendix 2} Here we present the purely basic differential equations accompanying the algebraic relations constructed in section 3. The first set contains the definitions of the basic variables $\B_1, \ldots, \B_{16}$: \begin{eqnarray} \fl \X(\b_3) - \Z(\b_1)=(\R_3-\Q_3)\b_1-\N\b_2-\Q_1\b_3-\B_1, \label{basic_eq11} \\ \fl \Y(\b_3) - \Z(\b_2)=(\R_3+\Q_3)\b_2+\N\b_1+\R_2\b_3-\B_2, \label{basic_eq12} \\ \fl \X(\b_2) - \Y(\b_1) = \R_1\b_2+\Q_2\b_1-\frac{\b_3^2}{2\O}+\B_3, \label{basic_eq13} \\ \fl \X(\b_1) - \Y(\b_2)= -\R_1\b_1-\Q_2\b_2+2 \Q_3\b_3-\B_4, \label{basic_eq14} \\ \fl \X(\b_2) + \Y(\b_1)= \Q_2\b_1-\R_1\b_2+\B_5, \label{basic_eq15} \\ \fl \X(\b_1) + \Y(\b_2) = \R_1\b_1-\Q_2\b_2-2 \R_3\b_3+\B_6, \label{basic_eq16} \\ \fl \X(\b_3)= (\R_3-\Q_3)\b_1+\frac{\b_2\b_3}{4\O}+\B_7, \label{basic_eq17} \\ \fl \Y(\b_3)= (\R_3+\Q_3)\b_2-\frac{\b_1\b_3}{4\O}+\B_8, \label{basic_eq18} \\ \fl \Y(\E_{13}) - \Z(\E_{12}) = \left(\N+\frac{\b_3}{8 \O}\right)\E_0+\frac{3 \b_3}{8 \O} \E_3+(\Q_3+\R_3)\E_{12} \nonumber \\ +\R_2\E_{13}-(\Q_1+\R_1)\E_{23}+\B_9, \label{basic_eq24} \\ % \fl \X(\E_{23}) - \Z(\E_{12}) = \left(\N+\frac{\b_3}{8 \O}\right)\E_0-\frac{3 \b_3}{8 \O} \E_3-(\Q_3-\R_3)\E_{12} \nonumber \\ -\Q_1\E_{23}+(\Q_2+\R_2)\E_{13}-\B_{10}, \label{basic_eq25} \\ % \fl \Y(\E_3-\smfrac{1}{6}\J) - \Z(\E_{23}) =\smfrac{1}{2} \R_2(\E_0+3\E_3) +\Q_1 \E_{12} \nonumber \\ +\left(\N-\frac{\b_3}{4\O}\right)\E_{13}+2 (\Q_3+\R_3) \E_{23} + \B_{12},\label{basic_eq27} \\ \fl \X(\E_3-\smfrac{1}{6}\J) + \Z(\E_{13}) =-\smfrac{1}{2} \Q_1(\E_0-3\E_3) +\R_2 \E_{12}\nonumber \\ +\left(\N-\frac{\b_3}{4\O}\right)\E_{23}+2 (\Q_3-\R_3) \E_{13} - \B_{13},\label{basic_eq28} \\ \fl \smfrac{1}{2} \X(\E_3+\E_0) + \Y(\E_{12}) = \R_1\E_0-2\Q_2\E_{12}-(\Q_3+\R_3) \E_{13}+\frac{3\b_3}{4\O} \E_{23}- \B_{13}-\frac{\B_{14}}{6}, \label{basic_eq29} \\ \fl \smfrac{1}{2} \Y(\E_3-\E_0) + \X(\E_{12}) = \Q_2\E_0+2\R_1\E_{12}+(\Q_3-\R_3) \E_{23}-\frac{3\b_3}{4\O} 
\E_{13}+\B_{12}-\frac{\B_{15}}{6}, \label{basic_eq30} \\ \fl \Z(\E_0-\E_3)-2 \X(\E_{13}) = (\Q_3-\R_3)(\E_0-3 \E_3)+\left(4\N+\frac{\b_3}{2\O}\right)\E_{12}+4\Q_1 \E_{13}+2\Q_2 \E_{23} \nonumber \\ +2 \B_{11}+\smfrac{1}{3}\B_{16}, \label{basic_eq31} \\ \fl \Z(\E_0+\E_3)+2 \Y(\E_{23}) = -(\Q_3+\R_3)(\E_0+3 \E_3)+\left(4\N+\frac{\b_3}{2\O}\right)\E_{12}+4\R_2 \E_{23}+2\R_1 \E_{13}\nonumber \\ +2 \B_{11} - \smfrac{1}{3}\B_{16}, \label{basic_eq26} \end{eqnarray} To this we add \begin{itemize} \item[$\diamondsuit$] the integrability conditions of (\ref{XYZO}), namely (\ref{basic_eq1}+\ref{basic_eq6},\ref{basic_eq2},\ref{basic_eq3}), \item[$\diamondsuit$] the three $(\alpha \alpha)$--Einstein field equations, namely (\ref{basic_eq8},\ref{basic_eq9},\ref{basic_eq10}), \item[$\diamondsuit$] the four equations (\ref{e2q1},\ref{e1r2},\ref{cons1},\ref{cons2}), namely (\ref{basic_eq1},\ref{basic_eq6},\ref{basic_eq4},\ref{basic_eq5}), \end{itemize} all simplified with the relations obtained by acting with the $\X,\Y,\Z$ operators on (\ref{convert_b1}--\ref{convert_b3}): \begin{eqnarray} \fl \X(\R_2)= \frac{1}{4\O} (4\R_3 \b_3-2\Q_3\b_3-\Q_1\b_1+\R_2\b_2-\B_6) -\Q_1(\R_2+\Q_2)-2\N\Q_3 +\frac{\E_{12}}{3}, \label{basic_eq1}\\ \fl \Y(\Q_1)= \frac{1}{4\O} (4\R_3 \b_3+2\Q_3\b_3-\Q_1\b_1+\R_2\b_2-\B_6) +\R_2(\R_1+\Q_1)+2\N\Q_3 -\frac{\E_{12}}{3}, \label{basic_eq6}\\ \fl 2\Y(\R_3)+\Z( \R_2)+\frac{1}{2\O} \Z(\b_1)= \N \left(\Q_1+\frac{\b_2}{2\O}\right)-\R_2(\Q_3-\R_3)-\frac{\b_1}{2\O} (\Q_3+3\R_3),\label{basic_eq2}\\ \fl 2 \X(\R_3)-\Z(\Q_1)-\frac{1}{2\O} \Z(\b_2) = \N \left(\R_2+\frac{\b_1}{2\O}\right)-\Q_1(\Q_3+\R_3)-\frac{\b_2}{2\O} (\Q_3-3\R_3),\label{basic_eq3}\\ \fl \X(\N)+\frac{1}{4\O} \X(\b_3)+\Z(\Q_2)= -\left(\N+\frac{3\b_3}{4\O}\right) \Q_1+(\Q_3-\R_3) \Q_2 +\R_2 \Q_3+\N \R_1-\R_2\R_3 \nonumber \\ -\frac{\b_2\b_3}{8\O^2}-\smfrac{1}{3}\E_{23} ,\label{basic_eq4}\\ \fl \Y(\N)+\frac{1}{4\O} \Y(\b_3)+\Z(\R_1)= \left(\N+\frac{3\b_3}{4\O}\right) \R_2-(\Q_3+\R_3) \R_1 +\R_2 \Q_3-\N 
\Q_2-\Q_1\Q_3 \nonumber \\ +\frac{\b_1\b_3}{8\O^2}+\smfrac{1}{3}\E_{13} ,\label{basic_eq5}\\ \fl \X(\R_1)-\Y(\Q_2)= \R_1^2+\Q_2^2+\R_3^2-\Q_3^2-\frac{\J}{9}-\frac{\E_3}{3}+\frac{\N\b_3}{2\O}+\smfrac{3}{16}\frac{\b_3}{\O^2} ,\label{basic_eq8} \\ \fl \Y(\R_2)-\Z(\Q_3+\R_3)= (\Q_3+\R_3)^2+\R_2^2-\Q_1\R_1-\frac{\J}{9}+\frac{\E_3-\E_0}{6}-\smfrac{1}{16}\frac{\b_3^2}{\O^2},\label{basic_eq9} \\ \fl \X(\Q_1)-\Z(\Q_3-\R_3)= -(\Q_3-\R_3)^2-\Q_1^2+\Q_2\R_2+\frac{\J}{9}-\frac{\E_3+\E_0}{6}+\smfrac{1}{16}\frac{\b_3^2}{\O^2},\label{basic_eq10} \\ \fl \Z(\b_3)=\R_2\b_2-\Q_1\b_1-\B_6,\label{basic_eq7} \end{eqnarray} $\diamondsuit$ \ the equations obtained by evaluation of $\X(\ref{convert_Q3R3}_a)$ and $\Y(\ref{convert_Q3R3}_b)$, \begin{eqnarray} \fl \X(\Q_3+\R_3) = 2\Q_3\R_1-\smfrac{3}{4}\frac{\b_3\R_2}{\O}+\smfrac{1}{4}\frac{\b_2(\Q_3+\R_3)}{\O}-\frac{\E_{13}}{3}+\frac{\B_8}{4 \O}- \smfrac{3}{16}\frac{\b_1\b_3}{\O^2},\label{basic_eq19} \\ \fl \Y(\Q_3-\R_3) = -2\Q_3\Q_2+\smfrac{3}{4}\frac{\b_3\Q_1}{\O}+\smfrac{1}{4}\frac{\b_1(\R_3-\Q_3)}{\O}+\frac{\E_{23}}{3}+\frac{\B_7}{4 \O}+ \smfrac{3}{16}\frac{\b_2\b_3}{\O^2},\label{basic_eq20} \end{eqnarray} $\diamondsuit$ \ and the $\w{\nabla \cdot E}$ Bianchi equations\footnote{the $\w{\nabla}\cdot \w{H}$ equations are identities under the $\O$-integrability conditions, (\ref{basic_eq1}+\ref{basic_eq6},\ref{basic_eq2},\ref{basic_eq3})} (\ref{divE1},\ref{divE2},\ref{divE3}): \begin{eqnarray} \fl \smfrac{1}{2} \X(\E_0-\E_3+\smfrac{2}{3}\J)+\Y(\E_{12})+\Z(\E_{13})= (\R_1 -\smfrac{1}{2}\Q_1)\E_0+\smfrac{3}{2}\Q_1\E_3+\R_2 \E_{12}- 2 \Q_2\E_{12}\nonumber \\ +(\Q_3-3\R_3)\E_{13}+\left(\N +\frac{\b_3}{\O}\right)\E_{23} ,\label{basic_eq21} \\%divE_1 \fl \smfrac{1}{2}\Y(\E_0+\E_3-\smfrac{2}{3}\J)-\X(\E_{12})-\Z(\E_{23})= (-\Q_2 +\smfrac{1}{2}\R_2)\E_0+\smfrac{3}{2}\R_2\E_3+\Q_1 \E_{12}- 2 \R_1\E_{12}\nonumber \\ +(\Q_3+3\R_3)\E_{23}+\left(\N +\frac{\b_3}{\O}\right)\E_{13} , \label{basic_eq22} \\%divE_2 \fl 
\X(\E_{13})+\Y(\E_{23})+\Z(\E_3+\smfrac{1}{3}\J) = (\R_1-2\Q_1)\E_{13}+(2\R_2-\Q_2) \E_{23} \nonumber \\ -\Q_3\E_0-3\R_3\E_3 . \label{basic_eq23} \end{eqnarray} \section*{Acknowledgement} All calculations were performed using the Maple 2015 symbolic algebra package and checked with Mathematica 7.0. \section*{References}
\section{Introduction} \label{sec:Introduction} The large amount of proton-proton (\ensuremath{\Pp\Pp}\xspace) collision data at a center-of-mass energy of 13\TeV at the CERN LHC allows for precision measurements of standard model (SM) processes with very small production rates. Precise measurements of the inclusive and differential cross sections of the \ensuremath{\ttbar\cPZ}\xspace process are of particular interest because this process can receive sizable contributions from phenomena beyond the SM (BSM)~\cite{Bylund2016,Englert2016}. The \ensuremath{\ttbar\cPZ}\xspace process is the most sensitive one for directly measuring the coupling of the top quark to the \PZ boson. Also, this process is an important background to several searches for BSM phenomena, as well as to measurements of certain SM processes, such as \ttbar production in association with the Higgs boson (\ensuremath{\ttbar\PH}\xspace). {\tolerance=500 The inclusive cross section for \ensuremath{\ttbar\cPZ}\xspace production has been measured by both the CMS and ATLAS Collaborations using \ensuremath{\Pp\Pp}\xspace collision data at $\sqrt{s}=13\TeV$, corresponding to an integrated luminosity of about 36\fbinv. The CMS Collaboration used events containing three or four charged leptons~(muons or electrons) collected in 2016 and reported a value $\sigma(\ensuremath{\ttbar\cPZ}\xspace)=0.99^{+0.09}_{-0.08}\stat\,^{+0.12}_{-0.10}\syst\unit{pb}$~\cite{Sirunyan:2017uzs}. The ATLAS Collaboration used events with two, three, or four charged leptons in a data sample collected in 2015 and 2016 and measured $\sigma(\ensuremath{\ttbar\cPZ}\xspace)=0.95\pm 0.08\stat\pm 0.10\syst\unit{pb}$~\cite{Aaboud:2019njj}. \par} In this paper, we report an updated measurement of the \ensuremath{\ttbar\cPZ}\xspace cross section in three- and four-lepton final states using \ensuremath{\Pp\Pp}\xspace collision data collected with the CMS detector in 2016 and 2017, corresponding to a total integrated luminosity of 77.5\fbinv.
The \PZ boson is detected through its decay to an oppositely charged lepton pair. While the data analysis strategy remains similar to the one presented in Ref.~\cite{Sirunyan:2017uzs}, this new measurement benefits substantially from an improved lepton selection procedure based on multivariate analysis techniques and a more inclusive trigger selection. In addition to the inclusive cross section, the differential cross section is measured as a function of the transverse momentum of the \PZ boson, \ensuremath{\pt(\PZ)}\xspace, and \ensuremath{\cos\theta^\ast_{\PZ}}\xspace. The latter observable is the cosine of the angle between the direction of the \PZ boson in the detector reference frame and the direction of the negatively charged lepton in the rest frame of the \PZ boson. Because of the key role of the top quark interaction with the \PZ boson in many BSM models~\cite{Hollik:1998vz,Agashe:2006wa,Kagan:2009bn,Ibrahim:2010hv,Ibrahim:2011im,Grojean:2013qca}, the differential cross section measurements can be used to constrain anomalous \ensuremath{\ttbar\cPZ}\xspace couplings. To this end, we pursue two different interpretations. A Lagrangian containing anomalous couplings~\cite{AguilarSaavedra:2008zc} is used to obtain bounds on the vector and axial-vector currents, as well as on the electroweak magnetic and electric dipole moments of the top quark. The interpretation is extended in the context of SM effective field theory (SMEFT)~\cite{Grzadkowski:2010es}, and we constrain the Wilson coefficients of the relevant BSM operators of mass dimension~6. There are 59 such operators, from which we select the four most relevant linear combinations, as described in Ref.~\cite{AguilarSaavedra:2018nen}. This paper is organized as follows. In Section~\ref{sec:cms}, a brief description of the CMS detector is provided.
In Section \ref{sec:objects}, the simulation of signal and background processes is discussed, followed by the description of the selection of events online (during data taking) and offline (after data taking) in Section~\ref{sec:eventselection}. The background estimation is discussed in Section~\ref{sec:backgrounds}, and the sources of systematic uncertainties that affect the measurements are discussed in Section~\ref{sec:Systematic}. In Section~\ref{sec:Results}, we present the results of the inclusive and differential measurements, followed by the limits on anomalous couplings and SMEFT interpretation. The results are summarized in Section~\ref{sec:Conclusions}. \section{The CMS detector} \label{sec:cms} The central feature of the CMS apparatus is a superconducting solenoid of 6\unit{m} internal diameter, providing a magnetic field of 3.8\unit{T}. Within the solenoid volume are a silicon pixel and strip tracker, a lead tungstate crystal electromagnetic calorimeter (ECAL), and a brass and scintillator hadron calorimeter, each composed of a barrel and two endcap sections. Forward calorimeters extend the pseudorapidity ($\eta$) coverage. Muons are detected in gas-ionization chambers embedded in the steel magnetic flux-return yoke outside the solenoid. Events of interest are selected using a two-tiered trigger system~\cite{Khachatryan:2016bia}. The first level, composed of custom hardware processors, uses information from the calorimeters and muon detectors to select events, while the second level selects events by running a version of the full event reconstruction software optimized for fast processing on a farm of computer processors. A more detailed description of the CMS detector, together with a definition of the coordinate system used and the relevant kinematic variables, can be found in Ref.~\cite{Chatrchyan:2008zzk}. 
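The angular observable \ensuremath{\cos\theta^\ast_{\PZ}}\xspace defined in the Introduction can be computed from the two lepton four-momenta by boosting the negatively charged lepton into the \PZ rest frame. The following is a minimal, self-contained Python sketch in natural units ($c=1$); the conventions and the example four-vectors are illustrative assumptions and are not taken from the CMS reconstruction software:

```python
import math

def boost_to_rest_frame(p, ref):
    """Lorentz-boost the four-vector p = (E, px, py, pz) into the rest
    frame of the (massive) four-vector ref; natural units, c = 1."""
    E, px, py, pz = ref
    m = math.sqrt(E*E - px*px - py*py - pz*pz)
    bx, by, bz = px/E, py/E, pz/E      # boost velocity beta = p_ref/E_ref
    gamma = E/m
    bp = bx*p[1] + by*p[2] + bz*p[3]   # beta . p
    # p' = p + [gamma^2/(gamma+1) (beta.p) - gamma E] beta,  E' = gamma (E - beta.p)
    coef = gamma*gamma/(gamma + 1.0)*bp - gamma*p[0]
    return (gamma*(p[0] - bp),
            p[1] + coef*bx, p[2] + coef*by, p[3] + coef*bz)

def cos_theta_star(lep_minus, lep_plus):
    """Cosine of the angle between the Z-boson direction in the lab frame
    and the negative-lepton direction in the Z rest frame."""
    z = tuple(a + b for a, b in zip(lep_minus, lep_plus))
    lm = boost_to_rest_frame(lep_minus, z)
    num = z[1]*lm[1] + z[2]*lm[2] + z[3]*lm[3]
    den = (math.sqrt(z[1]**2 + z[2]**2 + z[3]**2)
           * math.sqrt(lm[1]**2 + lm[2]**2 + lm[3]**2))
    return num/den

# Invented example: a Z of mass 90 moving along +z whose rest-frame leptons
# are back-to-back along x, so cos(theta*) vanishes up to rounding
print(cos_theta_star((56.25, 45.0, 0.0, 33.75), (56.25, -45.0, 0.0, 33.75)))
```

The sketch only assumes a massive dilepton system; for leptons emitted along the boost axis it returns $\pm 1$, as expected for the decay angle.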
\section{Data samples and object selection} \label{sec:objects} The data sample used in this measurement corresponds to an integrated luminosity of 77.5\fbinv of \ensuremath{\Pp\Pp}\xspace collision events collected with the CMS detector during 2016 and 2017. To account for the different LHC running conditions and CMS detector performance in the two years, the two data sets were analyzed independently with appropriate calibrations applied, and combined at the final stage to extract the cross section value, as described in more detail in Section~\ref{sec:Systematic}. Simulated Monte Carlo (MC) events are used to model the signal selection efficiency, to test the background prediction techniques, and to predict some of the background yields. Two sets of simulated events for each process are used in order to match the different data-taking conditions in 2016 and 2017. Events for the \ensuremath{\ttbar\cPZ}\xspace signal process and a variety of background processes, including production of \ensuremath{\PW\cPZ}\xspace and triple vector boson (\ensuremath{\PV\PV\PV}\xspace) events, are simulated at next-to-leading order (NLO) in perturbative quantum chromodynamics (QCD) using the \MGvATNLO~v2.3.3 and v2.4.2 generators~\cite{Alwall:2014hca}. In these simulations, up to one additional jet is included in the matrix element calculation. The NLO \POWHEG~v2~\cite{powheg2} generator is used for simulation of the \ttbar production process, as well as for processes involving the Higgs boson produced in vector boson fusion (VBF) or in association with vector bosons or top quarks. The NNPDF3.0 (NNPDF3.1) \cite{Ball:2014uwa,Ball:2017nwa} parton distribution functions (PDFs) are used for simulating the hard process. Table~\ref{table:samples} gives an overview of the event generators, PDF sets, and cross section calculations that are used for the signal and background processes.
For all processes, the parton showering and hadronization are simulated using \PYTHIA 8.203~\cite{Sjostrand:2007gs,Sjostrand:2014zea}. The modeling of the underlying event is done using the CUETP8M1~\cite{Skands:2014pea,CMS-PAS-GEN-14-001} and CP5 tunes~\cite{Sirunyan:2019dfx} for simulated samples corresponding to the 2016 and 2017 data sets, respectively. The CUETP8M2 and CUETP8M2T4 tunes~\cite{CMS-PAS-TOP-16-021} are used for the 2016 \ensuremath{\ttbar\PH}\xspace and \ensuremath{\ttbar\PV\PV}\xspace samples, respectively. Double counting of the partons generated with \MGvATNLO and \PYTHIA is removed using the \textsc{FxFx}~\cite{Frederix:2012ps} matching scheme for NLO samples. The \ensuremath{\ttbar\cPZ}\xspace cross section measurement is performed in a phase space defined by the invariant mass of an oppositely charged and same-flavor lepton pair $70\le\ensuremath{m(\ell\ell)}\xspace\le110\GeV$. The contribution of $\ttbar\Pgg^{*}$ was verified to be negligible using a simulated signal sample. The \PZ boson branching fraction to charged leptons and neutrinos is set to $\mathcal{B}(\PZ\to\ell\ell,\nu\nu)=0.301$~\cite{Tanabashi:2018oca}. The theoretical prediction of the inclusive \ensuremath{\ttbar\cPZ}\xspace cross section is computed for $\sqrt{s}=13\TeV$ at NLO in QCD and electroweak accuracy using \MGvATNLO and the PDF4LHC recommendations~\cite{Butterworth:2015oua} to assess the uncertainties. It is found to be $0.84\pm 0.10$\unit{pb}~\cite{deFlorian:2016spz,Frixione:2015zaa,Frederix:2018nkq}, with the renormalization and factorization scales \ensuremath{\mu_F}\xspace and \ensuremath{\mu_R}\xspace set to $\ensuremath{\mu_R}\xspace=\ensuremath{\mu_F}\xspace=\ensuremath{m(\cPqt)}\xspace+\ensuremath{m(\cPZ)}\xspace/2$, where $\ensuremath{m(\cPqt)}\xspace=172.5\GeV$ is the on-shell top quark mass~\cite{deFlorian:2016spz}. \begin{table}[htb] \centering \topcaption{ Event generators used to simulate events for the various processes.
For each of the simulated processes shown, the order of the cross section normalization, the event generator used, the perturbative order of the generator calculation, and the NNPDF versions at NLO and at next-to-next-to-leading order (NNLO) used in simulating samples for the 2016 (2017) data sets. } \label{table:samples} \cmsTable{ \begin{tabular}{ccccc} \multirow{2}{*}{Process} & Cross section & \multirow{2}{*}{Event generator} & Perturbative & \multirow{2}{*}{NNPDF version} \\ & normalization & & order & \\ \hline \noalign{\vskip\cmsTabSkip} \ensuremath{\ttbar\cPZ}\xspace, \ensuremath{\cPqt\PZ\Pq}\xspace, \ensuremath{\ttbar\PW}\xspace, \ensuremath{\PW\cPZ}\xspace, $\PZ$+jets, & \multirow{2}{*}{NLO} & \MGvATNLO & \multirow{2}{*}{NLO} & \multirow{2}{*}{3.0 NLO (3.1 NNLO)} \\ \ensuremath{\PV\PV\PV}\xspace, \ensuremath{\ttbar\Pgg^{(*)}}\xspace, $\PW\Pgg^{(*)}$, \ensuremath{\PZ\Pgg^{(*)}}\xspace & & v2.2.3 (v2.4.2) & & \\[\cmsTabSkip] \multirow{2}{*}{$\Pg \Pg \to \ensuremath{\PZ\cPZ}\xspace$} & \multirow{2}{*}{NLO \cite{Caola:2015psa}} & \MCFM v7.0.1 \cite{Campbell:2010ff} & \multirow{2}{*}{LO} & \multirow{2}{*}{3.0 LO (3.1LO)} \\ & & {\small\textsc{JHUGen}}~v7.0.11~\cite{Bolognesi:2012mm} &\\[\cmsTabSkip] $\qqbar \to \ensuremath{\PZ\cPZ}\xspace$ & NNLO \cite{Cascioli:2014yka}& \POWHEG~v2 \cite{Melia:2011tj,Nason:2013ydw} & NLO & 3.0 NLO (3.1 NNLO) \\[\cmsTabSkip] \multirow{2}{*}{$\PW \PH$, $\PZ \PH$} & \multirow{2}{*}{NLO} & \POWHEG~v2 \textsc{minlo HVJ}~\cite{Luisoni:2013kna} & \multirow{2}{*}{NLO} & \multirow{2}{*}{3.0 NLO (3.1 NNLO)} \\ & & {\small\textsc{JHUGen}}~v7.0.11~\cite{Bolognesi:2012mm} &\\[\cmsTabSkip] VBF \PH & NLO & \POWHEG~v2 & NLO & 3.0 NLO (3.1 NNLO) \\[\cmsTabSkip] \ensuremath{\ttbar\PH}\xspace & NLO & \POWHEG~v2 \cite{Hartanto:2015uka} & NLO & 3.0 NLO (3.1 NNLO) \\[\cmsTabSkip] \ttbar & NNLO+NNLL \cite{Czakon:2011xx} & \POWHEG~v2 & NLO & 3.0 NLO (3.1 NNLO) \\[\cmsTabSkip] \ensuremath{\ttbar\PV\PV}\xspace, \ensuremath{\cPqt\PH\PW}\xspace, 
\ensuremath{\cPqt\PH\Pq}\xspace, \ensuremath{\cPqt\PW\PZ}\xspace & LO & \MGvATNLO & LO & 3.0 LO (3.1 NNLO) \end{tabular} } \end{table} All events are processed through a simulation of the CMS detector based on \GEANTfour~\cite{Geant} and are reconstructed with the same algorithms as used for data. Minimum-bias \ensuremath{\Pp\Pp}\xspace interactions occurring in the same or nearby bunch crossing, referred to as pileup (PU), are also simulated, and the observed distribution of the reconstructed \ensuremath{\Pp\Pp}\xspace interaction vertices in an event is used to ensure that the simulation describes the data. The CMS particle-flow (PF) algorithm~\cite{Sirunyan:2017ulk} is used for particle reconstruction and identification, yielding a consistent set of electron~\cite{Khachatryan:2015hwa}, muon~\cite{Chatrchyan:2012xi}, charged and neutral hadron, and photon candidates. These particles are defined with respect to the primary vertex (PV), chosen to have the largest value of summed physics-object $\pt^2$, where these physics objects are reconstructed by a jet-finding algorithm~\cite{Cacciari:2008gp,Cacciari:2011ma} applied to all charged tracks associated with the vertex. Jets are reconstructed by clustering PF candidates using the anti-\kt algorithm~\cite{Cacciari:2008gp} with a distance parameter $R=0.4$. The influence of PU is mitigated through a charged hadron subtraction technique, which removes the energy of charged hadrons not originating from the PV~\cite{CMS-PAS-JME-14-001}. Jets are calibrated separately in simulation and data, accounting for energy deposits of neutral particles from PU and any nonlinear detector response~\cite{Chatrchyan:2011ds,Khachatryan:2016kdb}. Jets with $\pt> 30\GeV$ and $\abs{\eta}<2.4$ are selected for the analysis. Jets are identified as originating from the hadronization of \cPqb quarks using the \textsc{DeepCSV} algorithm~\cite{Sirunyan:2017ezt}.
This algorithm achieves an average efficiency of 70\% for correctly identifying \cPqb quark jets, with a misidentification rate of 12\% for charm quark jets and 1\% for jets originating from \cPqu, \cPqd, \cPqs quarks or gluons. Lepton identification and selection are critical ingredients in this measurement. Prompt leptons are those originating from direct \PW or \PZ boson decays, while nonprompt leptons are either misidentified jets or genuine leptons resulting from semileptonic decays of hadrons containing heavy-flavor quarks. To achieve an effective rejection of the nonprompt leptons, a multivariate analysis, similar to the one presented in Ref.~\cite{Sirunyan:2018hoz}, has been developed separately for electrons and muons. A boosted decision tree (BDT) classifier is used via the TMVA toolkit~\cite{Hocker:2007ht} for the multivariate analysis. In addition to the lepton \pt and $\abs{\eta}$, the training uses several discriminating variables. These comprise the kinematic properties of the jet closest to the lepton; the impact parameter in the transverse plane of the lepton track with respect to the PV; a variable that quantifies the quality of the geometric matching of the track in the silicon tracker with the signals measured in the muon chambers; variables related to the ECAL shower shape of electrons; and two variants of relative isolation---one computed with a fixed ($R=0.3$) and another with a variable cone size depending on the lepton \pt~\cite{Khachatryan:2016uwr}. The relative isolation is defined as the scalar \pt sum of the particles within a cone around the lepton direction, divided by the lepton \pt. Comparing a stringent requirement on the BDT output to the non-BDT-based lepton identification used in Ref.~\cite{Sirunyan:2017uzs}, an increase of up to 15\% in prompt lepton selection efficiency is achieved, while the nonprompt lepton selection efficiency is reduced by a factor of about 2 to 4, depending on the lepton \pt.
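As a schematic summary (in our own notation, not an expression taken from the analysis), the fixed-cone variant of the relative isolation defined above can be written as
\begin{equation*}
I_{\text{rel}} = \frac{1}{\pt^{\ell}} \sum_{i\,:\;\Delta R(i,\ell)<R_0} \pt^{\,i}, \qquad R_0 = 0.3,
\end{equation*}
where the sum runs over the reconstructed particles within the cone of radius $R_0$ around the lepton direction; the variable-cone variant replaces $R_0$ with a radius that shrinks with increasing lepton \pt~\cite{Khachatryan:2016uwr}.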
Muons~(electrons) passing the BDT selection and having $\pt> 10\GeV$ and $\abs{\eta}<2.4~(2.5)$ are selected. The efficiency for prompt leptons in the \ensuremath{\ttbar\cPZ}\xspace signal events in the three-lepton channel is around 90\% for both electrons and muons when averaged over the \pt range used in the analysis. In the four-lepton channel, a less stringent lepton selection is used, resulting in an average efficiency of 95\%. In order to avoid double counting, jets within a cone of $\Delta R=\sqrt{\smash[b]{(\Delta\eta)^2+(\Delta\phi)^2}}=0.4$ around the selected leptons are discarded, where $\Delta\eta$ and $\Delta\phi$ are the differences in pseudorapidity and azimuthal angle, respectively. \section{Event selection and observables} \label{sec:eventselection} Events are selected using a suite of triggers, each of which requires the presence of one, two, or three leptons. For events selected by the triggers that require at least one muon or electron, the \pt threshold for muons (electrons) was 24 (27)\GeV during 2016 and 27 (32)\GeV in 2017. For triggers that require the presence of at least two leptons, the \pt thresholds are 23 and 17\GeV for the highest \pt (leading) and 12 and 8\GeV for the second-highest \pt (subleading) electron and muon, respectively. This strategy ensures an overall trigger efficiency higher than 98\% for events passing the lepton selection described below over the entire 2016 and 2017 data sets. These efficiencies are measured in data samples with an independent trigger selection and compared to those obtained in simulation. The measured differences are mitigated by reweighting the simulation by appropriate factors that differ from unity by less than 2~(3)\% in the 2016~(2017) data set.
Events with exactly three leptons ($\Pgm\Pgm\Pgm$, $\Pgm\Pgm\Pe$, $\Pgm\Pe\Pe$, or $\Pe\Pe\Pe$) satisfying $\pt >40, 20, 10\GeV$ or exactly four leptons ($\Pgm\Pgm\Pgm\Pgm$, $\Pgm\Pgm\Pgm\Pe$, $\Pgm\Pgm\Pe\Pe$, $\Pgm\Pe\Pe\Pe$, or $\Pe\Pe\Pe\Pe$) with $\pt > 40$, 10, 10, 10\GeV are analyzed separately. In both categories, exactly one oppositely charged and same-flavor lepton pair consistent with the \PZ boson hypothesis is required, namely, for the three- and four-lepton categories $\abs{\ensuremath{m(\ell\ell)}\xspace - \ensuremath{m(\cPZ)}\xspace} <10$ and $20\GeV$, respectively. This selection reduces the contributions from background events with zero or more than one \PZ boson. Events containing zero jets are rejected. The measurement uses the jet multiplicity \ensuremath{N_\text{j}}\xspace in different event categories depending on the number of \cPqb-tagged jets \ensuremath{N_{\cPqb}}\xspace in the event. For the three-lepton channel these are $\ensuremath{N_{\cPqb}}\xspace= 0, 1, \ge 2 $, while for the four-lepton channel these categories are limited to $\ensuremath{N_{\cPqb}}\xspace=0, \ge 1$. The analysis makes use of several control regions in data to validate the background predictions, as well as to control the systematic uncertainties associated with them. The details are given in Section~\ref{sec:backgrounds}. \section{Background predictions} \label{sec:backgrounds} Several SM processes contribute to the three- and four-lepton final states. The \ensuremath{\ttbar\cPZ}\xspace process typically produces events with large jet and \cPqb-tagged jet multiplicities. In contrast, events with $\ensuremath{N_{\cPqb}}\xspace=0$ are dominated by background processes. Following closely the methodologies used in~Ref.~\cite{Sirunyan:2017uzs}, the separation between signal and backgrounds is obtained from a binned maximum-likelihood fit with nuisance parameters. 
In the fit, the contributions from the various background processes are allowed to vary within their uncertainties. The main contributions to the background arise from processes with at least one top quark produced in association with a \PW, \PZ, or Higgs boson, \ie, \ensuremath{\ttbar\PH}\xspace, \ensuremath{\ttbar\PW}\xspace, \ensuremath{\cPqt\PW\PZ}\xspace, \ensuremath{\cPqt\PZ\Pq}\xspace, \ensuremath{\cPqt\PH\Pq}\xspace, \ensuremath{\cPqt\PH\PW}\xspace, \ensuremath{\ttbar\PV\PV}\xspace, and $\ttbar\ttbar$. They are collectively denoted as \ensuremath{\cPqt(\cPaqt)\mathrm{X}}\xspace and estimated using simulated samples. We consider both the theoretical and experimental systematic uncertainties in the background yields for the \ensuremath{\cPqt(\cPaqt)\mathrm{X}}\xspace category. The theoretical uncertainty in the inclusive cross section is evaluated by varying \ensuremath{\mu_R}\xspace and \ensuremath{\mu_F}\xspace in the matrix element and parton shower description up and down by a factor of 2 (ignoring the anticorrelated variations), and by including the uncertainties stemming from the choice of PDFs. For each of these processes, this uncertainty is found to be not larger than 11\%~\cite{Campbell:2013yla, Frixione:2015zaa, Alwall:2014hca}. Among them, the \ensuremath{\cPqt\PZ\Pq}\xspace cross section was recently measured by the CMS Collaboration with a precision of $15\%$~\cite{Sirunyan:2018zgs}. Thus, we use this measurement and its uncertainty for the \ensuremath{\cPqt\PZ\Pq}\xspace cross section, and 11\% as the uncertainty in the normalization of the other processes. The \ensuremath{\PW\cPZ}\xspace production constitutes the second-largest background contribution, in particular for events with three leptons, while in the four-lepton category, \ensuremath{\PZ\cPZ}\xspace production becomes substantial.
For both these processes, the prediction of the overall production rate and the relevant kinematic distributions can be validated in data samples that do not overlap with the signal region. Events with three leptons, two of which form a same-flavor pair with opposite charge and satisfy $\abs{\ensuremath{m(\ell\ell)}\xspace- \ensuremath{m(\cPZ)}\xspace} < 10\GeV$ and $\ensuremath{N_{\cPqb}}\xspace=0$, are used to validate the \ensuremath{\PW\cPZ}\xspace background prediction. Four-lepton events with two \PZ boson candidates are used to constrain the uncertainties in the prediction of the \ensuremath{\PZ\cPZ}\xspace yield. \begin{figure}[h!] \centering \includegraphics[width=0.49\textwidth]{Figure_001-a.pdf} \includegraphics[width=0.49\textwidth]{Figure_001-b.pdf}\\ \includegraphics[width=0.49\textwidth]{Figure_001-c.pdf} \includegraphics[width=0.49\textwidth]{Figure_001-d.pdf} \caption{The observed (points) and predicted (shaded histograms) event yields versus lepton flavor (upper left), and the reconstructed transverse momentum of the \PZ boson candidates (upper right) in the \ensuremath{\PW\cPZ}\xspace-enriched data control event category, and versus lepton flavor (lower left) and \ensuremath{N_{\cPqb}}\xspace (lower right) in the \ensuremath{\PZ\cPZ}\xspace-enriched event category. The vertical lines on the points show the statistical uncertainties in the data, and the band the total uncertainty in the predictions. The lower panels show the ratio of the event yields in data to the predictions.} \label{figures:WZ_background} \end{figure} \begin{figure}[h!] 
\centering \includegraphics[width=0.49\textwidth]{Figure_002-a.pdf} \includegraphics[width=0.49\textwidth]{Figure_002-b.pdf}\\ \includegraphics[width=0.49\textwidth]{Figure_002-c.pdf} \caption{The observed (points) and predicted (shaded histograms) event yields in regions enriched with nonprompt lepton backgrounds in \ttbar-like processes as a function of the lepton flavors (upper left), the \pt of the lowest-\pt (trailing) lepton (upper right), and \ensuremath{N_{\cPqb}}\xspace (bottom). The vertical lines on the points show the statistical uncertainties in the data, and the band the total uncertainty in the predictions. The lower panels show the ratio of the event yields in data to the background predictions. } \label{figures:ttbar3L_background} \end{figure} Figure~\ref{figures:WZ_background} presents the observed and predicted event yields for these categories and the reconstructed transverse momentum of the \PZ boson candidates, as well as the lepton flavor and \ensuremath{N_{\cPqb}}\xspace in the \ensuremath{\PZ\cPZ}\xspace-enriched control region. Agreement within the systematic uncertainties is observed. A normalization uncertainty of 10\% is assigned to the prediction of the \ensuremath{\PW\cPZ}\xspace and \ensuremath{\PZ\cPZ}\xspace backgrounds~\cite{Sirunyan:2018vkx,Sirunyan:2019bez}, and an additional 20\% uncertainty is assigned to the \ensuremath{\PW\cPZ}\xspace background prediction with $\ensuremath{N_\text{j}}\xspace \ge 3$ because of the observed discrepancy in events with high jet multiplicity. We also estimate the potential mismodeling of \ensuremath{\PW\cPZ}\xspace production when heavy-quark pairs from gluon splitting are included by using a control data sample containing a \PZ boson candidate and two \cPqb-tagged jets. The distribution of the angle between the two \cPqb jets is sensitive to the modeling of gluon splitting, and good agreement is observed. A systematic uncertainty of 20\% is assigned to cover possible residual mismodeling.
Taking into account the fraction of simulated \ensuremath{\PW\cPZ}\xspace events with gluon splitting, the additional uncertainty in the prediction of \ensuremath{\PW\cPZ}\xspace events with $\ensuremath{N_{\cPqb}}\xspace \geq 1$ is estimated to be 8\%. The background with nonprompt leptons mainly originates from \ttbar or $\PZ \to \ell \ell$ events in which a nonprompt lepton, arising from a semileptonic decay of a heavy-flavor hadron or from a misidentified jet, is present in addition to two prompt leptons. The lepton selection specifically targets the reduction of nonprompt-lepton backgrounds to a subdominant level, while keeping the signal efficiency high. The details of the nonprompt-lepton background estimation are given in Ref.~\cite{Sirunyan:2017uzs}. In this analysis, it is validated in simulation and with a data control sample that contains three-lepton events without a \PZ boson candidate. Figure~\ref{figures:ttbar3L_background} shows the predicted and observed yields in this control sample for different lepton flavors, as a function of the \pt of the lowest-\pt lepton and \ensuremath{N_{\cPqb}}\xspace. We find good agreement between predicted and observed yields. Based on these studies, a systematic uncertainty of 30\% in the prediction of the background with nonprompt leptons is assigned, while the statistical uncertainty ranges from 5 to 50\%, depending on the measurement bin. A small contribution to the background comes from \ensuremath{\PV\PV\PV}\xspace processes. We group them in the ``rare'' category as these have relatively small production rates. Processes that involve a photon (\ensuremath{\PZ\Pgg^{(*)}}\xspace and \ensuremath{\ttbar\Pgg}\xspace) are denoted by \ensuremath{\mathrm{X}\gamma}\xspace. The contribution from both of these categories to the selected event count is evaluated using simulated samples described in Section~\ref{sec:objects}.
As in the case of the \ensuremath{\cPqt(\cPaqt)\mathrm{X}}\xspace backgrounds, scale factors are applied to account for small differences between data and simulation in trigger selection, lepton identification, jet energy corrections, and \cPqb jet selection efficiency. The overall uncertainty in the normalization of the rare background category is estimated to be 50\%~\cite{deFlorian:2016spz,Nhung:2013jta}, while for \ensuremath{\mathrm{X}\gamma}\xspace it is 20\%~\cite{Khachatryan:2015kea, Khachatryan:2017jub}. The statistical uncertainty stemming from the finite size of the simulated background samples is typically small, around 5\%, reaching 100\% only in the highest jet multiplicity regions. The simulation of photon conversion is validated in a data sample with three-lepton events where the invariant mass of the three leptons is required to be consistent with the \PZ boson mass. Good agreement between data and simulation is observed. \section{Systematic uncertainties} \label{sec:Systematic} The systematic uncertainties affecting the signal selection efficiency and background yields are summarized in Table~\ref{table:systematics}. The table shows the range of variations in the signal and background yields in the different bins of the analysis caused by each source of systematic uncertainty, as well as an estimate of the impact of each input uncertainty on the measured cross section. The table also indicates whether the uncertainties are treated as uncorrelated or fully correlated between the 2016 and 2017 data sets. \begin{table}[h!] \topcaption{Summary of the sources, magnitudes, treatments, and effects of the systematic uncertainties in the final \ensuremath{\ttbar\cPZ}\xspace cross section measurement. The first column indicates the source of the uncertainty, the second column shows the corresponding input uncertainty range for each background source and the signal.
The third column indicates how correlations are treated between the uncertainties in the 2016 and 2017 data, where \ensuremath{\checkmark}\xspace means fully correlated and \ensuremath{\times}\xspace~uncorrelated. The last column gives the corresponding systematic uncertainty in the \ensuremath{\ttbar\cPZ}\xspace cross section using the fit result. The total systematic uncertainty, the statistical uncertainty and the total uncertainty in the \ensuremath{\ttbar\cPZ}\xspace cross section are shown in the last three lines. } \label{table:systematics} \centering \begin{tabular}{lccc} \multirow{2}{*}{Source} & Uncertainty & Correlated between & Impact on the \ensuremath{\ttbar\cPZ}\xspace \\ & range (\%) & 2016 and 2017 & cross section (\%) \\ \hline Integrated luminosity & 2.5 & \ensuremath{\times}\xspace & 2 \\ PU modeling & 1--2 & \ensuremath{\checkmark}\xspace & 1 \\ Trigger & 2 & \ensuremath{\times}\xspace & 2 \\ Lepton ID efficiency & 4.5--6 & \ensuremath{\checkmark}\xspace & 4 \\ Jet energy scale & 1--9 & \ensuremath{\checkmark}\xspace & 2 \\ Jet energy resolution & 0--1 & \ensuremath{\checkmark}\xspace & $<$1 \\ \cPqb tagging light flavor & 0--4 & \ensuremath{\times}\xspace & $<$1 \\ \cPqb tagging heavy flavor & 1--4 & \ensuremath{\times}\xspace & 2 \\ Choice in \ensuremath{\mu_R}\xspace and \ensuremath{\mu_F}\xspace & 1--4 & \ensuremath{\checkmark}\xspace & 1 \\ PDF choice & 1--2 & \ensuremath{\checkmark}\xspace & $<$1 \\ Color reconnection & 1.5 & \ensuremath{\checkmark}\xspace & 1 \\ Parton shower & 1--8 & \ensuremath{\checkmark}\xspace & $<$1 \\ \ensuremath{\PW\cPZ}\xspace cross section & 10 & \ensuremath{\checkmark}\xspace & 3 \\ \ensuremath{\PW\cPZ}\xspace high jet multiplicity & 20 & \ensuremath{\checkmark}\xspace & 1 \\ \ensuremath{\PW\cPZ}\xspace + heavy flavor & 8 & \ensuremath{\checkmark}\xspace & 1 \\ \ensuremath{\PZ\cPZ}\xspace cross section & 10 & \ensuremath{\checkmark}\xspace & 1 \\ \ensuremath{\cPqt(\cPaqt)\mathrm{X}}\xspace background & 10--15 
& \ensuremath{\checkmark}\xspace & 2 \\ X$\gamma$ background & 20 & \ensuremath{\checkmark}\xspace & 1 \\ Nonprompt background & 30 & \ensuremath{\checkmark}\xspace & 1 \\ Rare SM background & 50 & \ensuremath{\checkmark}\xspace & 1 \\ Stat. unc. in nonprompt bkg. & 5--50 & \ensuremath{\times}\xspace & $<$1\\ Stat. unc. in rare SM bkg. & 5--100 & \ensuremath{\times}\xspace & $<$1\\[\cmsTabSkip] Total systematic uncertainty & & & 6\\ Statistical uncertainty & & & 5\\[\cmsTabSkip] Total & & & 8 \end{tabular} \end{table} The uncertainty in the integrated luminosity measurement in the 2016 (2017) data set is 2.5 (2.3)\%~\cite{CMS-PAS-LUM-17-001,CMS-PAS-LUM-17-004}, and is uncorrelated between the two data sets. Simulated events are reweighted according to the distribution of the number of interactions in each bunch crossing corresponding to a total inelastic \ensuremath{\Pp\Pp}\xspace cross section of 69.2\unit{mb}~\cite{Sirunyan:2018nqx}. The uncertainty in the latter, which affects the PU estimate, is 5\%~\cite{ATLAS:2016pu} and leads to about 2\% uncertainty in the expected yields. The uncertainties in the corrections to the trigger selection efficiencies are propagated to the results. A 2\% uncertainty is assigned to the yields obtained in simulation. Lepton selection efficiencies are measured using a ``tag-and-probe'' method~\cite{Chatrchyan:2012xi,Khachatryan:2015hwa} in bins of lepton \pt and $\eta$, and are found to be higher than 60\,(95)\% for lepton $\pt\le25~(>25)\GeV$. These measurements are performed separately in data and simulation. The differences between these two measurements are used to scale the yields obtained in the simulation. They are typically around 1\% and reach 10\% for leptons with $\pt<20\GeV$. The systematic uncertainties related to this source vary between 4.5 and 6\% in the signal and background yields. 
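Schematically, the correction applied to the simulated yields for each selected lepton is the ratio of the efficiencies measured with the tag-and-probe method in data and in simulation (a standard sketch in our own notation, not an expression from the text),
\begin{equation*}
\mathrm{SF}(\pt,\eta) = \frac{\epsilon_{\text{data}}(\pt,\eta)}{\epsilon_{\text{MC}}(\pt,\eta)},
\end{equation*}
evaluated in the same $(\pt,\eta)$ bins used for the efficiency measurements.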
Uncertainties in the jet energy calibration are estimated by shifting the jet energy corrections in simulation up and down by one standard deviation. Depending on $\pt$ and $\eta$, the uncertainty in the jet energy scale varies between 2 and 5\%~\cite{Khachatryan:2016kdb}. For the signal and backgrounds modeled via simulation, the uncertainty in the measurement is determined from the observed differences in the yields with and without the shift in jet energy corrections. The same technique is used to calculate the uncertainties from the jet energy resolution, which are found to be less than 1\%~\cite{Khachatryan:2016kdb}. The \cPqb tagging efficiency in the simulation is corrected using scale factors determined from data~\cite{Chatrchyan:2012jua,Sirunyan:2017ezt}. These are estimated separately for correctly and incorrectly identified jets, and each results in an uncertainty of about 1--4\%, depending on \ensuremath{N_{\cPqb}}\xspace. To estimate the theoretical uncertainties from the choice of \ensuremath{\mu_R}\xspace and \ensuremath{\mu_F}\xspace, each of these parameters is varied independently up and down by a factor of 2, ignoring the cases in which one parameter is scaled up while the other is scaled down. The envelope of the acceptance variations is taken as the systematic uncertainty in each search bin and is found to be smaller than 4\%. The different sets in the NNPDF3.0 PDF~\cite{Ball:2014uwa} are used to estimate the corresponding uncertainty in the acceptance for the differential cross section measurement, which is typically less than 1\%.
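The envelope construction for the scale uncertainties can be summarized as follows (our notation): denoting by $A(k_R,k_F)$ the acceptance obtained with \ensuremath{\mu_R}\xspace and \ensuremath{\mu_F}\xspace scaled by factors $k_R$ and $k_F$, the assigned uncertainty in each bin is
\begin{equation*}
\Delta_{\text{scale}} = \max_{(k_R,k_F)\in V} \abs{A(k_R,k_F) - A(1,1)},
\qquad
V = \{(2,2),\,(2,1),\,(1,2),\,(\tfrac{1}{2},1),\,(1,\tfrac{1}{2}),\,(\tfrac{1}{2},\tfrac{1}{2})\},
\end{equation*}
\ie, the six independent variations with the two anticorrelated combinations $(2,\tfrac{1}{2})$ and $(\tfrac{1}{2},2)$ excluded.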
The uncertainty associated with the choice of PDFs for the anomalous coupling and SMEFT interpretations is estimated by using several PDFs and assigning the maximum differences as the quoted uncertainty, following the PDF4LHC prescription with the MSTW2008 68\% \CL NNLO, CT10 NNLO, and NNPDF2.3 5f FFN PDF sets (as described in Ref.~\cite{Butterworth:2015oua} and references therein, as well as Refs.~\cite{Ball:2012cx,Martin:2009bu,Gao:2013xoa}). In the parton shower simulation, the uncertainty from the choice of \ensuremath{\mu_F}\xspace is estimated by varying the scale of initial- and final-state radiation up by factors of 2 and $\sqrt{2}$ and down by factors of 0.5 and $1/\sqrt{2}$, respectively, as suggested in Ref.~\cite{Skands:2014pea}. The default configuration in \PYTHIA includes a model of color reconnection based on multiple parton interactions (MPI) with early resonance decays switched off. To estimate the uncertainty from this choice of model, the analysis is repeated with three other color reconnection models within \PYTHIA: the MPI-based scheme with early resonance decays switched on, a gluon-move scheme~\cite{Argyropoulos:2014zoa}, and a QCD-inspired scheme~\cite{Christiansen:2015yqa}. The total uncertainty from color reconnection modeling is estimated by taking the maximum deviation from the nominal result and amounts to 1.5\%. \section {Results} \label{sec:Results} \subsection{Inclusive cross section measurement} The observed data, as well as the predicted signal and background yields, are shown in Fig.~\ref{fig:ttZ_comb} in various jet and \cPqb jet categories, for events with three and four leptons. The signal cross section is extracted from these categories using the statistical procedure detailed in Refs.~\cite{Junk:1999kv, Read:2002hq, ATL-PHYS-PUB-2011-011, Cowan:2010js}. 
The observed yields and background estimates in each analysis category, and the systematic uncertainties are used to construct a binned likelihood function $L(r, \theta)$ as a product of Poisson probabilities of all bins. As described in Section~\ref{sec:Systematic}, the bins of the two data-taking periods are kept separate, and the correlation pattern of the uncertainties is as specified in Table~\ref{table:systematics}. The parameter $r$ is the signal strength modifier, \ie, the ratio between the measured cross section and the central value of the cross section predicted by simulation, and $\theta$ represents the full suite of nuisance parameters. \begin{figure}[h!t] \centering{ \includegraphics[width=.95\textwidth]{Figure_003.pdf}} \caption{Observed event yields in data for different values of \ensuremath{N_\text{j}}\xspace and \ensuremath{N_{\cPqb}}\xspace for events with 3 and 4 leptons, compared with the signal and background yields, as obtained from the fit. The lower panel displays the ratio of the data to the predictions of the signal and background from simulation. The inner and outer bands show the statistical and total uncertainties, respectively. } \label{fig:ttZ_comb} \end{figure} The test statistic is the profile likelihood ratio, $q(r)=-2\ln\left[L(r,\hat{\theta}_{r})/L(\hat{r}, \hat{\theta})\right]$, where $\hat{\theta}_{r}$ denotes the values of the nuisance parameters that maximize the likelihood function for signal strength $r$. An asymptotic approximation is used to extract the observed cross section of the signal process and the associated uncertainties~\cite{Junk:1999kv, Read:2002hq, ATL-PHYS-PUB-2011-011, Cowan:2010js}. The quantities $\hat{r}$ and $\hat{\theta}$ are the values that simultaneously maximize $L$. The fitting procedure is performed for the inclusive cross section measurements, and separately for the SMEFT interpretation.
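The binned likelihood described above can be sketched as (our notation; the exact form of the constraint terms follows the statistical procedure of the cited references)
\begin{equation*}
L(r,\theta) = \prod_{i\in\text{bins}} \mathrm{Pois}\bigl(n_i \mid r\,s_i(\theta) + b_i(\theta)\bigr) \, \prod_{j} p_j(\theta_j),
\end{equation*}
where $n_i$ is the observed yield in bin $i$, $s_i(\theta)$ and $b_i(\theta)$ are the expected signal and background yields as functions of the nuisance parameters, and the $p_j$ are the constraint terms encoding the systematic uncertainties of Table~\ref{table:systematics}.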
The combined cross section of the three- and four-lepton channels within the phase space $70\le\ensuremath{m(\ell\ell)}\xspace\le110$\GeV for the $\ell\ell$ pair is measured to be \begin{equation*} \sigma(\Pp\Pp \to \ensuremath{\ttbar\cPZ}\xspace)=0.95\pm0.05\stat\pm0.06\syst\unit{pb}, \end{equation*} in agreement with the SM prediction of $0.84\pm 0.10$\unit{pb} at NLO and electroweak accuracy~\cite{deFlorian:2016spz,Frixione:2015zaa,Frederix:2018nkq} and $0.86^{+0.07}_{-0.08}\,(\text{scale})\pm 0.03\,(\textrm{PDF}+\alpS)$\unit{pb} including also next-to-next-to-leading-logarithmic (NNLL) corrections~\cite{Kulesza:2018tqz}. The measured cross sections for the three- and four-lepton channels are given in Table~\ref{tab:bd3L4L}. \begin{table}[ht!] \centering \topcaption{The measured \ensuremath{\ttbar\cPZ}\xspace cross section for events with 3 and 4 leptons and the combined measurement. \label{tab:bd3L4L}} \begin{tabular}{cc} Lepton requirement & Measured cross section \\[\cmsTabSkip] \hline \noalign{\vskip\cmsTabSkip} 3$\ell$ & $0.97\pm0.06\stat\pm0.06\syst\unit{pb}$ \\[\cmsTabSkip] 4$\ell$ & $0.91\pm0.14\stat\pm0.08\syst\unit{pb}$ \\[\cmsTabSkip] Total & $0.95\pm0.05\stat\pm0.06\syst\unit{pb}$ \end{tabular} \end{table} The background yields and the systematic uncertainties obtained from the fit are, in general, very close to their initial values. The uncertainties associated with the \ensuremath{\PW\cPZ}\xspace background are modeled using three separate nuisance parameters as described in Section~\ref{sec:backgrounds}. Events in the $\ensuremath{N_{\cPqb}}\xspace=0$ categories provide a relatively pure \ensuremath{\PW\cPZ}\xspace control region, which helps constrain two of these uncertainties: the overall normalization uncertainty and the uncertainty in the \ensuremath{\PW\cPZ}\xspace yields with high jet multiplicity. These uncertainties are constrained by 30 and 70\%, respectively, relative to their input values.
The third uncertainty controls the \ensuremath{\PW\cPZ}\xspace production with heavy-flavor jets populating the regions with $\ensuremath{N_{\cPqb}}\xspace\geq 1$, and is not substantially constrained in the fit. The individual contributions to the total systematic uncertainty in the measured cross section are listed in the fourth column of Table~\ref{table:systematics}. The largest contribution comes from the imperfect knowledge of the lepton selection efficiencies in the signal acceptance. The uncertainties in parton shower modeling and in the \ensuremath{\cPqt(\cPaqt)\mathrm{X}}\xspace and \ensuremath{\PW\cPZ}\xspace background yields also form a large fraction of the total uncertainty. With respect to the earlier measurements~\cite{Aaboud:2019njj,Sirunyan:2017uzs}, the statistical (systematic) uncertainty in the inclusive cross section is reduced by about 35~(40)\%. The improvement in the systematic uncertainty is primarily the result of a better lepton selection procedure and the detailed studies of its performance in simulation, and an improved estimation of the trigger and \cPqb tagging efficiencies in simulation. The reported result is the first experimental measurement that is more precise than the most precise theoretical calculations for \ensuremath{\ttbar\cPZ}\xspace production at NLO in QCD. A signal-enriched subset of events is selected by requiring $\ensuremath{N_{\cPqb}}\xspace \geq 1$ and $\ensuremath{N_\text{j}}\xspace \geq 3\,(2)$ for the three (four)-lepton channels. The signal purity is about 65\% for these events. Figure~\ref{fig:ttZ_Kinematics} shows several kinematic distributions for these signal-enriched events. The sum of the signal and background predictions is found to describe the data within uncertainties. The event yields are listed in Table~\ref{tab:yieldsPureRegion}. \begin{figure}[hp!]
\centering {\includegraphics[width=0.46\textwidth]{Figure_004-a.pdf}} {\includegraphics[width=0.46\textwidth]{Figure_004-b.pdf}}\\ {\includegraphics[width=0.46\textwidth]{Figure_004-c.pdf}} {\includegraphics[width=0.46\textwidth]{Figure_004-d.pdf}}\\ {\includegraphics[width=0.46\textwidth]{Figure_004-e.pdf}} {\includegraphics[width=0.46\textwidth]{Figure_004-f.pdf}} \caption{ Kinematic distributions from a \ensuremath{\ttbar\cPZ}\xspace signal-enriched subset of events for data (points), compared to the contributions of the signal and background yields from the fit (shaded histograms). The distributions include the lepton flavor (upper left), number of \cPqb-tagged jets (upper right), jet multiplicity (middle left), dilepton invariant mass \ensuremath{m(\ell\ell)}\xspace (middle right), \ensuremath{\pt(\PZ)}\xspace (lower left), and \ensuremath{\cos\theta^\ast_{\PZ}}\xspace (lower right). The lower panels in each plot give the ratio of the data to the sum of the signal and background from the fit. The band shows the total uncertainty in the signal and background yields, as obtained from the fit. } \label{fig:ttZ_Kinematics} \end{figure} \begin{table}[h!] \centering\renewcommand{\arraystretch}{1.1} \topcaption{ The observed number of events for three- and four-lepton events in a signal-enriched sample of events, and the predicted yields and total uncertainties from the fit for each process. 
\label{tab:yieldsPureRegion}} \begin{tabular}{cccccc} Process & $\Pgm\Pgm\Pgm(\Pgm)$ & $\Pe\Pgm\Pgm(\Pgm)$ & $\Pe\Pe\Pgm(\Pgm/\Pe)$ & $\Pe\Pe\Pe(\Pe)$ & Total \\[\cmsTabSkip] \hline \noalign{\vskip\cmsTabSkip} \ensuremath{\ttbar\cPZ}\xspace & $ 143 \pm 7.1 $ & $ 122 \pm 6.1 $ & $ 112 \pm 5.5 $ & $ 77 \pm 3.9 $ & $ 455 \pm 22 $ \\ \ensuremath{\ttbar\PH}\xspace & $ 4.1 \pm 0.5 $ & $ 3.5 \pm 0.4 $ & $ 3.3 \pm 0.4 $ & $ 2.1 \pm 0.3 $ & $ 13 \pm 1.6 $ \\ \ensuremath{\cPqt(\cPaqt)\mathrm{X}}\xspace & $ 34 \pm 4.2 $ & $ 28 \pm 3.4 $ & $ 24 \pm 2.9 $ & $ 18 \pm 2.3 $ & $ 105 \pm 13 $ \\ \ensuremath{\PW\cPZ}\xspace & $ 18 \pm 4.7 $ & $ 15 \pm 4.2 $ & $ 10 \pm 2.8 $ & $ 11 \pm 3.1 $ & $ 54 \pm 15 $ \\ \ensuremath{\mathrm{X}\gamma}\xspace & $ 1.8 \pm 1.8 $ & $ 2.1 \pm 2.7 $ & $ 0.6 \pm 0.6 $ & $ 4.6 \pm 1.6 $ & $ 9.0 \pm 3.9 $ \\ \ensuremath{\PZ\cPZ}\xspace & $ 2.8 \pm 0.4 $ & $ 2.7 \pm 0.4 $ & $ 2.5 \pm 0.3 $ & $ 2.2 \pm 0.3 $ & $ 10 \pm 1.3 $ \\ Rare & $ 2.9 \pm 1.5 $ & $ 2.1 \pm 1.1 $ & $ 1.8 \pm 1.0 $ & $ 1.4 \pm 0.7 $ & $ 8.3 \pm 4.2 $ \\[\cmsTabSkip] Nonprompt & $ 6.9 \pm 2.9 $ & $ 11 \pm 4.0 $ & $ 6.9 \pm 2.9 $ & $ 8.5 \pm 3.5 $ & $ 33 \pm 13 $ \\[\cmsTabSkip] Total & $ 214 \pm 12 $ & $ 187 \pm 12 $ & $ 161 \pm 9.0 $ & $ 125 \pm 8.2 $ & $ 687 \pm 40 $ \\[\cmsTabSkip] Observed & $ 192 $ & $ 175 $ & $ 152 $ & $ 141 $ & $ 660 $ \end{tabular} \end{table} \subsection{Differential cross section measurement} The differential cross section is measured as a function of \ensuremath{\pt(\PZ)}\xspace and \ensuremath{\cos\theta^\ast_{\PZ}}\xspace. In the simulation, the transverse momentum of the \PZ boson is taken as the final momentum after any QCD and electroweak radiation. 
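For reference, \ensuremath{\cos\theta^\ast_{\PZ}}\xspace can be computed from four-momenta by boosting the negatively charged lepton into the \PZ boson rest frame and taking its angle with respect to the \PZ flight direction in the laboratory frame. The sketch below implements one common convention with hand-rolled four-vector arithmetic rather than an actual analysis framework, so the function names and the exact convention are illustrative assumptions.

```python
import math

def boost(p, beta):
    # Return the four-vector p = (E, px, py, pz) as seen in a frame moving
    # with velocity beta = (bx, by, bz) relative to the current frame.
    E, px, py, pz = p
    bx, by, bz = beta
    b2 = bx * bx + by * by + bz * bz
    if b2 == 0.0:
        return p
    gamma = 1.0 / math.sqrt(1.0 - b2)
    bp = bx * px + by * py + bz * pz
    k = (gamma - 1.0) * bp / b2 - gamma * E
    return (gamma * (E - bp), px + bx * k, py + by * k, pz + bz * k)

def cos_theta_star(z, lminus):
    # Boost the negatively charged lepton into the Z rest frame, then take
    # the cosine of its angle with the Z flight direction in the lab frame.
    beta = tuple(c / z[0] for c in z[1:])
    _, lx, ly, lz = boost(lminus, beta)
    zx, zy, zz = z[1:]
    num = lx * zx + ly * zy + lz * zz
    den = math.sqrt(lx * lx + ly * ly + lz * lz) * math.sqrt(zx * zx + zy * zy + zz * zz)
    return num / den
```

In practice `z` would be the summed four-momentum of the opposite-sign same-flavor lepton pair assigned to the \PZ boson.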
The differential cross section is defined in the same phase space as the inclusive cross section reported above, \ie, in the phase space where the top quark pair is produced in association with two leptons with an invariant mass of $70\le\ensuremath{m(\ell\ell)}\xspace\le110$\GeV, corrected for the detector efficiencies and acceptances, as well as for the branching fraction for the \PZ boson decay into a pair of muons or electrons. The measurement of the differential cross section is performed in a signal-enriched sample of events defined by requiring exactly three identified leptons, $\ensuremath{N_{\cPqb}}\xspace \geq 1$, and $\ensuremath{N_\text{j}}\xspace \geq 3$. Since the data samples under study are statistically limited, a rather coarse binning in \ensuremath{\pt(\PZ)}\xspace and \ensuremath{\cos\theta^\ast_{\PZ}}\xspace is chosen for the differential cross section measurement, with four bins in each distribution. The cross sections are calculated from the measured event yields corrected for selection and detector effects by subtracting the background and unfolding the resolution effects. The number of signal events in each bin is determined by subtracting the expected number of background events from the number of events in the data, where the background samples are used without any fit. The \ensuremath{\ttbar\cPZ}\xspace \MGvATNLO MC sample is used to construct a response matrix that takes into account both detector response and acceptance corrections. The same corrections, scale factors, and uncertainties as used in the inclusive cross section are applied. Since the resolution of the lepton momenta is good, the fraction of events migrating from one bin to another is extremely small. In all bins, the purity, defined as the fraction of reconstructed events that originate from the same bin, and the stability, defined as the fraction of generated events that are reconstructed in the same bin, are larger than 94\%. 
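These ingredients can be illustrated with a toy nearly diagonal response matrix. The numbers below are invented for the sketch; in the analysis the matrix is built from the \MGvATNLO signal simulation, and both purity and stability exceed 94\% in every bin.

```python
import numpy as np

# Toy response matrix R[i, j]: probability for an event generated in bin j
# to be reconstructed in bin i (columns sum to one, i.e. full efficiency).
R = np.array([
    [0.97, 0.02, 0.00, 0.00],
    [0.03, 0.96, 0.02, 0.00],
    [0.00, 0.02, 0.96, 0.03],
    [0.00, 0.00, 0.02, 0.97],
])
truth = np.array([400.0, 250.0, 120.0, 40.0])  # invented generated yields
reco = R @ truth                               # expected reconstructed yields

# Purity: fraction of events reconstructed in a bin that were generated there.
purity = np.diag(R) * truth / reco
# Stability: fraction of events generated in a bin reconstructed in that bin.
stability = np.diag(R) / R.sum(axis=0)

# Correction by plain matrix inversion (no regularization), applied after
# background subtraction; with small migrations this recovers the truth.
unfolded = np.linalg.solve(R, reco)
print(purity.round(3), stability.round(3), unfolded.round(1))
```

With migrations at the few-percent level the inversion is numerically stable, which is the situation described above that makes regularization unnecessary.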
Under such conditions, matrix inversion without regularization provides an unbiased and stable method to correct for detector response and acceptance \cite{Cowan:1998ji}. In this analysis, the \texttt{TUnfold} package~\cite{Schmitt:2012kp} is used to obtain the results for the two measured observables. For each theoretical uncertainty in the signal sample, such as the choice of \ensuremath{\mu_R}\xspace, \ensuremath{\mu_F}\xspace, the PDF, and the parton shower, the response matrix is modified and the unfolding procedure is repeated. The uncertainties in the background expectation are accounted for by varying the number of subtracted background events. Experimental uncertainties from the detector response and efficiency, such as the lepton identification, jet energy scale, and \cPqb tagging uncertainties, are applied as a function of the reconstructed observable. For the latter uncertainties, the unfolding is performed using the same response matrix as for the nominal result and varying the input data within their uncertainties. This choice is made in order to minimize possible contributions from numerical effects in the matrix inversion. \begin{figure}[b!] \centering {\includegraphics[width=0.47\textwidth]{Figure_005-a.pdf}} \hfill {\includegraphics[width=0.47\textwidth]{Figure_005-b.pdf}} \\[\baselineskip] {\includegraphics[width=0.47\textwidth]{Figure_005-c.pdf}} \hfill {\includegraphics[width=0.47\textwidth]{Figure_005-d.pdf}} \caption{Measured differential \ensuremath{\ttbar\cPZ}\xspace production cross sections in the full phase space as a function of the transverse momentum \ensuremath{\pt(\PZ)}\xspace of the \PZ boson (upper row) and \ensuremath{\cos\theta^\ast_{\PZ}}\xspace, as defined in the text (lower row). Shown are the absolute (\cmsLeft) and normalized (\cmsRight) cross sections. The data are represented by the points. The inner (outer) vertical lines indicate the statistical (total) uncertainties. 
The solid histogram shows the prediction from the \MGvATNLO MC simulation, and the dashed histogram shows the theory prediction at NLO+NNLL accuracy. The hatched bands indicate the theoretical uncertainties in the predictions, as defined in the text. The lower panels display the ratios of the predictions to the measurement. } \label{fig:unfolding} \end{figure} Figure~\ref{fig:unfolding} left and right show, respectively, the measured absolute and normalized differential cross sections as a function of \ensuremath{\pt(\PZ)}\xspace and \ensuremath{\cos\theta^\ast_{\PZ}}\xspace, as obtained from the unfolding procedure described above. Also shown is the prediction from the MC generator \MGvATNLO with its uncertainty from scale variations, the PDF choice, and the parton shower~\cite{deFlorian:2016spz,Frixione:2015zaa,Frederix:2018nkq}, as well as a theory prediction at NLO+NNLL accuracy with its uncertainty from scale variations~\cite{Kulesza:2018tqz,Kulesza:2019adl}. Good agreement of the predictions with the measurement is found. The scale variations affect the normalization of the predictions but have negligible impact on their shapes. \subsection{Search for anomalous couplings and effective field theory interpretation} \label{sec:EFT} The role of the top quark in many BSM models~\cite{Hollik:1998vz,Agashe:2006wa,Kagan:2009bn,Ibrahim:2010hv,Ibrahim:2011im,Grojean:2013qca} makes its interactions, in particular the electroweak gauge couplings, sensitive probes that can be exploited by interpreting the differential \ensuremath{\ttbar\cPZ}\xspace cross section in models with modified interactions of the top quark and the \PZ boson.
Extending the earlier analysis~\cite{Sirunyan:2017uzs}, where the inclusive cross section measurement was used, we consider an anomalous coupling Lagrangian~\cite{Rontsch:2014cca} \begin{align*} \mathcal{L} = e \overline{u}_\PQt\biggl[ \gamma^{\mu} \bigl(\coupling{1}{V} + \gamma_5 \coupling{1}{A} \bigr) + \frac{\mathrm{i} \sigma^{\mu\nu} p_{\nu}} {\ensuremath{m(\cPZ)}\xspace} \bigl(\coupling{2}{V} + \mathrm{i} \gamma_5 \coupling{2}{A} \bigr) \biggr] v_\PAQt\,\PZ_{\mu}, \end{align*} which contains the neutral vector and axial-vector current couplings, $\coupling{1}{V}$ and $\coupling{1}{A}$, respectively. The electroweak magnetic and electric dipole interaction couplings are denoted by $\coupling{2}{V}$ and $\coupling{2}{A}$, respectively, and the four-momentum of the \PZ boson is denoted by $p_\nu$. In total, there are four real parameters. The current couplings are exactly predicted by the SM as \begin{align*} \coupling{1}{V}^{\mathrm{SM}}&=\frac{I^\mathrm{f}_{3,\Pq}-2Q_\mathrm{f}\swsq}{2\ensuremath{\sinw\cosw}}= 0.2448\,(52),\\ \coupling{1}{A}^{\mathrm{SM}}&=\frac{-I^\mathrm{f}_{3,\Pq}}{2\ensuremath{\sinw\cosw}}= -0.6012\,(14), \end{align*} where $\ensuremath{\theta_{\PW}}$ is the Weinberg angle, and $Q_\mathrm{f}$ and $I^\mathrm{f}_{3,\Pq}$ label the charge and the third component of the isospin of the SM fermions, respectively~\cite{Tanabashi:2018oca}. The dipole moments, moreover, are generated only radiatively in the SM. Their small numerical values, which are well below $10^{-3}$~\cite{Hollik:1998vz,Bernabeu:1995gs,Czarnecki:1996rx}, therefore allow stringent tests of the SM. Beyond \ensuremath{\pt(\PZ)}\xspace, several observables have been considered that are sensitive to anomalous electroweak interactions of the top quark~\cite{Schulze:2016qas}.
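The quoted SM values follow directly from the two expressions above. As a numerical cross-check, the sketch below evaluates them with an assumed on-shell weak mixing angle of $\swsq\approx0.2234$; the precise electroweak inputs and their uncertainties are taken from Ref.~\cite{Tanabashi:2018oca} in the paper, so the illustrative value here reproduces the quoted couplings only to a few per mille.

```python
import math

sin2w = 0.2234                 # assumed on-shell sin^2(theta_W), illustrative
sw, cw = math.sqrt(sin2w), math.sqrt(1.0 - sin2w)

I3, Q = 0.5, 2.0 / 3.0         # weak isospin and electric charge of the top quark

# SM vector and axial-vector current couplings, as in the expressions above.
c1v = (I3 - 2.0 * Q * sin2w) / (2.0 * sw * cw)
c1a = -I3 / (2.0 * sw * cw)
print(f"C1V^SM = {c1v:.4f}, C1A^SM = {c1a:.4f}")
```

The residual difference with respect to $0.2448$ and $-0.6012$ reflects only the choice of electroweak input scheme.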
Among them, $\ensuremath{\cos\theta^\ast_{\PZ}}\xspace$ has a high experimental resolution and provides the best discriminating power when compared to a comprehensive set of alternative choices calculated using the reconstructed leptons, jets, and \cPqb-tagged jets. An alternative interpretation is given in the context of SMEFT in the Warsaw basis~\cite{Grzadkowski:2010es}, formed by the Wilson coefficients of 59 independent operators of mass dimension 6. Among them, 15 are important for top quark interactions~\cite{Zhang:2010dr}, which in general have a large impact on processes other than \ensuremath{\ttbar\cPZ}\xspace. Anomalous interactions between the top quark and the gluon (chromomagnetic and chromoelectric dipole moment interactions) are tightly constrained by the {\ttbar}+jets measurement~\cite{Sirunyan:2018ucr}. Similarly, the modification of the $\PW\PQt\Pb$ vertex is best constrained by measurements of the \PW~helicity fractions in top quark pair production~\cite{Khachatryan:2016fky} and in $t$-channel single top quark production~\cite{AguilarSaavedra:2010nx}. It is thus appropriate to separately consider the operators that induce anomalous interactions of the top quark with the remaining neutral gauge bosons, the \PZ boson and the photon. In the parametrization adopted here~\cite{AguilarSaavedra:2018nen}, the relevant Wilson coefficients are \ensuremath{c_{\PQt\PZ}}\xspace, \ensuremath{c_{\PQt\PZ}^{[I]}}\xspace, \ensuremath{c_{\phi\PQt}}\xspace, and \ensuremath{c_{\phi \mathrm{Q}}^{-}}\xspace. The former two induce electroweak dipole moments, while the latter two induce anomalous neutral-current interactions.
These Wilson coefficients, which are combined as {\allowdisplaybreaks \begin{align*} \ensuremath{c_{\PQt\PZ}}\xspace &= \mathrm{Re}\left( -\ensuremath{\sin\thetaw} C_{\PQu\cmsSymbolFace{B}}^{(33)} + \ensuremath{\cos\thetaw} C_{\PQu\PW}^{(33)}\right) \\ \ensuremath{c_{\PQt\PZ}^{[I]}}\xspace &= \mathrm{Im}\left( -\ensuremath{\sin\thetaw} C_{\PQu\cmsSymbolFace{B}}^{(33)} + \ensuremath{\cos\thetaw} C_{\PQu\PW}^{(33)}\right) \\ \ensuremath{c_{\phi\PQt}}\xspace &= C_{\phi \PQt} = C_{\phi \PQu}^{(33)}\\ \ensuremath{c_{\phi \mathrm{Q}}^{-}}\xspace &= C_{\phi \cmsSymbolFace{Q}} = C_{\phi \PQq}^{1(33)} - C_{\phi \PQq}^{3(33)}, \end{align*}} are the main focus of this work. The Wilson coefficients in the Warsaw basis are denoted by $C_{\PQu\cmsSymbolFace{B}}^{(33)}$, $ C_{\PQu\PW}^{(33)}$, $C_{\phi \PQu}^{(33)}$, $C_{\phi \PQq}^{1(33)}$, and $C_{\phi \PQq}^{3(33)}$, as defined in Ref.~\cite{AguilarSaavedra:2018nen}. The constraints $C_{\phi \PQq}^{3(33)}=0$ and $C_{\PQu\PW}^{(33)}=0$ ensure a SM $\PW\PQt\PQb$ vertex. Wilson coefficients that are not considered in this work are kept at their SM values and the SMEFT expansion parameter is set to $\Lambda=1\TeV$. Based on the best expected sensitivity, we choose the following signal regions in the three- and four-lepton channels. In the three-lepton channel, there are 12 signal regions defined by the four \ensuremath{\pt(\PZ)}\xspace thresholds 0, 100, 200, and 400\GeV, and three thresholds on \ensuremath{\cos\theta^\ast_{\PZ}}\xspace at $-1.0$, $-0.6$, and $0.6$. In the four-lepton channel, the predicted event yields are lower, leading to an optimal choice of only three bins defined in terms of \ensuremath{\pt(\PZ)}\xspace with thresholds at 0, 100, and 200\GeV. The jet multiplicity requirement is relaxed to $\ensuremath{N_{\mathrm{j}}}\xspace\geq 1$. 
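The first two relations above are simply a rotation of the complex Warsaw-basis coefficients by the weak mixing angle. A minimal sketch of the mapping, where the value of $\swsq$ and the input coefficients are placeholders chosen purely for illustration (not fit results):

```python
import math

sin2w = 0.2234                       # assumed weak mixing angle (illustrative)
sw, cw = math.sqrt(sin2w), math.sqrt(1.0 - sin2w)

C_uB = 0.5 + 0.2j                    # invented complex C_uB^(33)
C_uW = 0.0 + 0.0j                    # C_uW^(33) = 0 keeps the Wtb vertex SM-like

# Rotate into the (c_tZ, c_tZ^[I]) pair: real and imaginary parts of the combination.
combo = -sw * C_uB + cw * C_uW
ctZ, ctZ_I = combo.real, combo.imag
print(f"ctZ = {ctZ:.3f}, ctZ[I] = {ctZ_I:.3f}")
```

Setting $C_{\PQu\PW}^{(33)}=0$ in the sketch mirrors the constraint quoted above that keeps the $\PW\PQt\PQb$ vertex SM-like.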
Next, 12 control regions in the three-lepton channel are defined by requiring $\ensuremath{N_{\cPqb}}\xspace=0$ and $\ensuremath{N_{\mathrm{j}}}\xspace\geq 1$, but otherwise reproducing the three-lepton signal selections. The three-lepton control regions guarantee a pure selection of the main \ensuremath{\PW\cPZ}\xspace background. In order to also constrain the leading \ensuremath{\PZ\cPZ}\xspace background of the four-lepton channel, we add three more control regions with $\ensuremath{N_{\cPqb}}\xspace\geq 0$ and $\ensuremath{N_{\mathrm{j}}}\xspace\geq 1$ and require that there be two pairs of opposite-sign same-flavor leptons consistent with the \PZ boson mass in a window of $\pm15\GeV$. A summary of the signal and control regions is given in Table~\ref{table:srdefinition}. \begin{table}[!htb] \centering \topcaption{Definition of the signal regions (SRs) and control regions (CRs). For signal regions SR13, SR14, and SR15 and control regions CR13, CR14, and CR15, there is no requirement on \ensuremath{\cos\theta^\ast_{\PZ}}\xspace.} \label{table:srdefinition} \cmsTable{ \begin{tabular}{cccccccc} \ensuremath{N_{\ell}}\xspace & \ensuremath{N_{\cPqb}}\xspace & \ensuremath{N_\text{j}}\xspace & $N_{\PZ}$ & \ensuremath{\pt(\PZ)}\xspace (\GeVns) & $-1 \leq \ensuremath{\cos\theta^\ast_{\PZ}}\xspace < -0.6 $ & $-0.6 \leq \ensuremath{\cos\theta^\ast_{\PZ}}\xspace < 0.6 $ & $0.6 \leq \ensuremath{\cos\theta^\ast_{\PZ}}\xspace $ \\ \hline \multirow{4}{*}{3} & \multirow{4}{*}{$\geq$1} & \multirow{4}{*}{$\geq$3} & \multirow{4}{*}{1} & \multirow{1}{*}{0--100} & SR1 & SR2 & SR3 \\ & & & & \multirow{1}{*}{100--200} & SR4 & SR5 & SR6 \\ & & & & \multirow{1}{*}{200--400} & SR7 & SR8 & SR9 \\ & & & & \multirow{1}{*}{$\geq$400} & SR10 & SR11 & SR12 \\[\cmsTabSkip] \multirow{3}{*}{4} & \multirow{3}{*}{$\geq$1} & \multirow{3}{*}{$\geq$1} & \multirow{3}{*}{1} & \multirow{1}{*}{0--100} & \multicolumn{3}{c}{SR13} \\ & & & & \multirow{1}{*}{100--200} & \multicolumn{3}{c}{SR14} \\ & & & & 
\multirow{1}{*}{$\geq$200} & \multicolumn{3}{c}{SR15} \\[\cmsTabSkip] \multirow{4}{*}{3} & \multirow{4}{*}{0} & \multirow{4}{*}{$\geq$1} & \multirow{4}{*}{1} & \multirow{1}{*}{0--100} & CR1 & CR2 & CR3 \\ & & & & \multirow{1}{*}{100--200} & CR4 & CR5 & CR6 \\ & & & & \multirow{1}{*}{200--400} & CR7 & CR8 & CR9 \\ & & & & \multirow{1}{*}{$\geq$400} & CR10 & CR11 & CR12 \\[\cmsTabSkip] \multirow{3}{*}{4} & \multirow{3}{*}{$\geq$0} & \multirow{3}{*}{$\geq$1} & \multirow{3}{*}{2} & \multirow{1}{*}{0--100} & \multicolumn{3}{c}{CR13} \\ & & & & \multirow{1}{*}{100--200} & \multicolumn{3}{c}{CR14} \\ & & & & \multirow{1}{*}{$\geq$200} & \multicolumn{3}{c}{CR15} \\ \hline \end{tabular} } \end{table} The predictions for signal yields with nonzero values of anomalous couplings or Wilson coefficients are obtained by simulating large LO samples in the respective model on a fine grid in the parameter space, including the SM configuration. Then, the two-dimensional (2D) generator-level distributions of \ensuremath{\pt(\PZ)}\xspace and \ensuremath{\cos\theta^\ast_{\PZ}}\xspace for the BSM and the SM parameter points are used to define the reweighting of the nominal NLO \ensuremath{\ttbar\cPZ}\xspace sample. The result of the reweighting procedure is tested on a coarse grid in BSM parameter space, where BSM samples are produced and reconstructed. The differences between the full event reconstruction and the reweighting procedure are found to be negligible for all distributions considered in this work. The theoretical uncertainties in the predicted BSM yields are scaled accordingly. From the predicted yields and the uncertainties, we construct a binned likelihood function $L(\theta)$ as a product of Poisson probabilities, where $\theta$ labels the set of nuisance parameters. 
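The reweighting step described above amounts to taking, per generator-level $(\ensuremath{\pt(\PZ)}\xspace,\ensuremath{\cos\theta^\ast_{\PZ}}\xspace)$ bin, the ratio of the BSM to SM densities and applying it as a per-event weight to the nominal sample. A schematic sketch with invented toy histograms (the analysis performs this on a fine grid of parameter points with the actual NLO sample):

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy generator-level 2D densities in (pt(Z), cos(theta*)) bins for the SM
# and one BSM point; all shapes are invented for illustration.
sm = np.array([[30.0, 50.0, 30.0], [20.0, 35.0, 20.0], [5.0, 10.0, 5.0]])
bsm = np.array([[25.0, 45.0, 25.0], [22.0, 40.0, 22.0], [9.0, 16.0, 9.0]])
sm /= sm.sum()
bsm /= bsm.sum()

# Per-bin weight: ratio of the BSM to the SM probability.
w2d = np.divide(bsm, sm, out=np.ones_like(sm), where=sm > 0)

# Toy "nominal sample": events drawn from the SM density, each carrying its
# generator-level bin indices.
flat = rng.choice(sm.size, size=200_000, p=sm.ravel())
i, j = np.unravel_index(flat, sm.shape)

# Filling the 2D histogram with the per-event weights reproduces the BSM
# density bin by bin, which is what validates the procedure.
reweighted = np.zeros_like(sm)
np.add.at(reweighted, (i, j), w2d[i, j])
reweighted /= reweighted.sum()
print(np.round(reweighted, 3))
```

The closure of the reweighted histogram on the target BSM density is the toy analogue of the test against fully reconstructed BSM samples mentioned in the text.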
The test statistic is the profile likelihood ratio $q=-2\ln(L(\hat{\theta}, \vec C)/L(\hat{\theta}_{\text{max}}))$, where $\hat{\theta}$ is the set of nuisance parameters maximizing the likelihood function at a BSM point defined by the Wilson coefficients collectively denoted by $\vec C$. In the denominator, $\hat{\theta}_\text{max}$ maximizes the likelihood function in the BSM parameter plane. Figure~\ref{figures:regions_EFT} shows the best-fit result in the plane spanned by \ensuremath{c_{\phi\PQt}}\xspace and \ensuremath{c_{\phi \mathrm{Q}}^{-}}\xspace using the regions in Table~\ref{table:srdefinition}. Figure~\ref{figures:EFT_results} displays the log-likelihood scan in the 2D planes spanned by \ensuremath{c_{\phi\PQt}}\xspace and \ensuremath{c_{\phi \mathrm{Q}}^{-}}\xspace, as well as \ensuremath{c_{\PQt\PZ}}\xspace and \ensuremath{c_{\PQt\PZ}^{[I]}}\xspace. Consistent with the measurement of the cross section, the SM value is close to the contour in 2D at 95\% confidence level (\CL) for modified vector and axial-vector current couplings. Models with nonzero electroweak dipole moments predict a harder \ensuremath{\pt(\PZ)}\xspace spectrum that is not observed in data. A systematic uncertainty arising from the effect of nonzero Wilson coefficients on the background prediction, in particular on the \ensuremath{\cPqt\PZ\Pq}\xspace process, amounts to less than 8.5\% in the most sensitive bins and was found to have a negligible impact. The SM prediction is within the 68\% confidence interval of the best-fit value of the \ensuremath{c_{\PQt\PZ}}\xspace and \ensuremath{c_{\PQt\PZ}^{[I]}}\xspace coefficients. Figure~\ref{figures:BSM_results} shows the complementary scan in the 2D plane spanned by the anomalous current interactions $\coupling{1}{V}$ and $\coupling{1}{A}$, as well as the anomalous dipole interactions $\coupling{2}{V}$ and $\coupling{2}{A}$. In both cases, the SM predictions are consistent with the measurements.
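The construction of the test statistic $q$ can be illustrated for a single Wilson coefficient with a toy binned Poisson likelihood. The sketch uses invented yields, a quadratic (interference plus pure-BSM) dependence of the signal on the coefficient, and no nuisance parameters; the analysis profiles the full set $\theta$.

```python
import math

def nll(mu, n):
    # Poisson negative log-likelihood summed over bins, constant terms dropped.
    return sum(m - k * math.log(m) for m, k in zip(mu, n))

# Toy expected yields: background plus a signal whose per-bin yield depends
# quadratically on a single Wilson coefficient c. All numbers are invented.
bkg = [20.0, 12.0, 5.0, 2.0]
sig = [40.0, 25.0, 10.0, 3.0]
lin = [2.0, 3.0, 4.0, 2.5]
quad = [0.5, 1.0, 2.0, 1.5]

def expected(c):
    return [b + s + a * c + q2 * c * c for b, s, a, q2 in zip(bkg, sig, lin, quad)]

n_obs = [62, 38, 16, 5]  # invented "observed" counts, close to c = 0

# Likelihood-ratio scan relative to the best-fit point on a grid of c values.
grid = [x / 100.0 for x in range(-300, 301)]
nlls = [nll(expected(c), n_obs) for c in grid]
best = min(nlls)
q = [2.0 * (v - best) for v in nlls]

# Approximate 68% CL interval from q(c) <= 1 (Wilks' theorem, one parameter).
inside = [c for c, qv in zip(grid, q) if qv <= 1.0]
print(f"best fit c = {grid[nlls.index(best)]:+.2f}, "
      f"68% CL interval ~ [{min(inside):+.2f}, {max(inside):+.2f}]")
```

The 1D scans in Figs.~\ref{figures:EFT_results1D} and \ref{figures:BSM_results1D} are the full-analysis analogues of this toy, with all nuisance parameters profiled at each grid point.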
Finally, Figs.~\ref{figures:EFT_results1D} and \ref{figures:BSM_results1D} display the one-dimensional (1D) scans, where in each plot, all other coupling parameters are set to their SM values. The corresponding 1D confidence intervals at 68 and 95\% \CL are listed in Table~\ref{table:limits} and are the most stringent direct constraints to date. A comparison of the observed 95\% confidence intervals with earlier measurements is shown in Fig.~\ref{figures:EFT_summary}, together with direct limits obtained within the SMEFiT framework~\cite{Hartland:2019bjb} and by the TopFitter Collaboration~\cite{Buckley:2015lku}. \begin{figure}[!htbp] \centering \includegraphics[width=.90\textwidth]{Figure_006.pdf} \caption{ The observed (points) and predicted (shaded histograms) post-fit yields for the combined 2016 and 2017 data sets in the control and signal regions. In the $\ensuremath{N_{\ell}}\xspace=3$ control and signal regions (bins 1--12), each of the four \ensuremath{\pt(\PZ)}\xspace categories is further split into three \ensuremath{\cos\theta^\ast_{\PZ}}\xspace bins. The horizontal bars on the points give the statistical uncertainties in the data. The lower panel displays the ratio of the data to the predictions and the hatched regions show the total uncertainty. The solid line shows the best-fit prediction from the SMEFT fit. } \label{figures:regions_EFT} \end{figure} \begin{figure}[!htbp] \centering \includegraphics[width=.49\textwidth]{Figure_007-a.pdf} \includegraphics[width=.49\textwidth]{Figure_007-b.pdf}\\ \caption{ Results of scans in two 2D planes for the SMEFT interpretation. The shading quantified by the gray scale on the right reflects the negative log-likelihood ratio $q$ with respect to the best-fit value, designated by the diamond. The solid and dashed lines indicate the 68 and 95\% \CL contours from the fit, respectively. The cross shows the SM prediction. 
} \label{figures:EFT_results} \end{figure} \begin{figure}[!htbp] \centering \includegraphics[width=.49\textwidth]{Figure_008-a.pdf} \includegraphics[width=.49\textwidth]{Figure_008-b.pdf}\\ \caption{ Results of scans in the axial-vector and vector current coupling plane (\cmsLeft) and the electroweak dipole moment plane (\cmsRight). The shading quantified by the gray scale on the right of each plot reflects the log-likelihood ratio $q$ with respect to the best-fit value, designated by the diamond. The solid and dashed lines indicate the 68 and 95\% \CL contours from the fit, respectively. The cross shows the SM prediction. The area between the dot-dashed ellipses in the axial-vector and vector current coupling plane corresponds to the observed 68\% \CL area from the previous CMS result~\cite{Khachatryan:2015sha}. } \label{figures:BSM_results} \end{figure} \begin{figure}[!htbp] \centering \includegraphics[width=.40\textwidth]{Figure_009-a.pdf} \includegraphics[width=.40\textwidth]{Figure_009-b.pdf}\\ \includegraphics[width=.40\textwidth]{Figure_009-c.pdf} \includegraphics[width=.40\textwidth]{Figure_009-d.pdf}\\ \caption{ 1D scans of two Wilson coefficients, with the value of the other Wilson coefficients set to zero. The shaded areas correspond to the 68 and 95\% \CL intervals around the best fit value, respectively. The downward triangle indicates the SM value. Previously excluded regions at 95\% \CL \cite{Sirunyan:2017uzs} (if available) are indicated by the hatched band. Indirect constraints from Ref. \cite{Zhang:2012cd} are shown as a cross-hatched band. } \label{figures:EFT_results1D} \end{figure} \begin{figure}[!htbp] \centering \includegraphics[width=.40\textwidth]{Figure_010-a.pdf} \includegraphics[width=.40\textwidth]{Figure_010-b.pdf}\\ \includegraphics[width=.40\textwidth]{Figure_010-c.pdf} \includegraphics[width=.40\textwidth]{Figure_010-d.pdf}\\ \caption{ Log-likelihood ratios for 1D scans of anomalous couplings. 
For the scan of $\coupling{1}{A}$ (upper \cmsLeft), $\coupling{1}{V}$ was set to the SM value of 0.24, and for the scan of $\coupling{1}{V}$ (upper \cmsRight), $\coupling{1}{A}$ was set to the SM value of $-0.60$. For the scans of $\coupling{2}{A}$ (lower \cmsLeft) and $\coupling{2}{V}$ (lower \cmsRight), which correspond to the top quark electric and magnetic dipole moments, respectively, both $\coupling{1}{V}$ and $\coupling{1}{A}$ are set to the SM values. The shaded areas correspond to the 68 and 95\% \CL intervals around the best-fit value, respectively. The downward triangle indicates the SM value. } \label{figures:BSM_results1D} \end{figure} \begin{figure}[!htb] \centering \includegraphics[width=.60\textwidth]{Figure_011.pdf} \caption{ The observed 95\% \CL intervals for the Wilson coefficients from this measurement, the previous CMS result based on the inclusive \ensuremath{\ttbar\cPZ}\xspace cross section measurement~\cite{Sirunyan:2017uzs}, and the most recent ATLAS result~\cite{Aaboud:2019njj}. The direct limits within the SMEFiT framework~\cite{Hartland:2019bjb} and from the TopFitter Collaboration~\cite{Buckley:2015lku}, and the 68\% \CL indirect limits from electroweak data are also shown~\cite{Zhang:2012cd}. The vertical line displays the SM prediction. } \label{figures:EFT_summary} \end{figure} \begin{table}[!htbp] \centering \topcaption{Expected and observed 68 and 95\% \CL intervals from this measurement for the listed Wilson coefficients. The expected and observed 95\% \CL intervals from a previous CMS measurement \cite{Sirunyan:2017uzs} and indirect 68\% \CL constraints from precision electroweak data \cite{Zhang:2012cd} are shown for comparison. } \label{table:limits} \cmsTable{ \begin{tabular}{cccccccc} Coefficient & \multicolumn{2}{c}{Expected} & \multicolumn{2}{c}{Observed} & \multicolumn{2}{c}{Previous CMS constraints} & Indirect constraints \\ & 68\% \CL & 95\% \CL & 68\% \CL & 95\% \CL & Exp. 95\% \CL & Obs. 
95\% \CL & 68\% \CL \\ \hline \noalign{\vskip\cmsTabSkip} $\ensuremath{c_{\PQt\PZ}}\xspace / \Lambda^2$ & $[-0.7,~0.7]$ & $[-1.1,~1.1]$ & $[-0.8,~0.5]$ & $[-1.1,~1.1]$ & $[-2.0,~2.0]$ & $[-2.6,~2.6]$ & $[-4.7,~0.2]$ \\[\cmsTabSkip] $\ensuremath{c_{\PQt\PZ}^{[I]}}\xspace / \Lambda^2 $ & $[-0.7,~0.7]$ & $[-1.1,~1.1]$ & $[-0.8,~1.0]$ & $[-1.2,~1.2]$ & \NA & \NA & \NA \\[\cmsTabSkip] \multirow{2}{*}{$\ensuremath{c_{\phi\PQt}}\xspace / \Lambda^2 $} & \multirow{2}{*}{$[-1.6,~1.4]$} & \multirow{2}{*}{$[-3.4,~2.8]$} & \multirow{2}{*}{$[1.7,~4.2]$} & \multirow{2}{*}{$[0.3,~5.4]$} & \multirow{2}{*}{$[-20.2,~4.0]$} & $[-22.2,~-13.0]$ & \multirow{2}{*}{$[-0.1,~3.7]$} \\ & & & & & & $[-3.2,~6.0]$ & \\[\cmsTabSkip] $\ensuremath{c_{\phi \mathrm{Q}}^{-}}\xspace / \Lambda^2 $ & $[-1.1,~1.1]$ & $[-2.1,~2.2]$ & $[-3.0,~-1.0]$ & $[-4.0,~0.0]$ & \NA & \NA & $[-4.7,~0.7]$ \end{tabular} } \end{table} \section{Summary} \label{sec:Conclusions} A measurement of top quark pair production in association with a \PZ boson using a data sample of proton-proton collisions at $\sqrt{s}=13\TeV$, corresponding to an integrated luminosity of 77.5\fbinv, collected with the CMS detector at the LHC has been presented. The analysis was performed in the three- and four-lepton final states using analysis categories defined with jet and \cPqb jet multiplicities. Data samples enriched in background processes were used to validate predictions, as well as to constrain their uncertainties. The larger data set and reduced systematic uncertainties such as those associated with the lepton identification, helped to substantially improve the precision on the measured cross section with respect to previous measurements reported in Refs.~\cite{Sirunyan:2017uzs,Aaboud:2019njj}. 
The measured inclusive cross section $\sigma(\ensuremath{\ttbar\cPZ}\xspace)=0.95\pm0.05\stat\pm0.06\syst\unit{pb}$ is in good agreement with the standard model prediction of $0.84\pm 0.10$\unit{pb}~\cite{deFlorian:2016spz,Frixione:2015zaa,Frederix:2018nkq}. This is the most precise measurement of the \ensuremath{\ttbar\cPZ}\xspace cross section to date, and the first measurement with a precision competing with current theoretical calculations. Absolute and normalized differential cross sections for the transverse momentum of the \PZ boson and for \ensuremath{\cos\theta^\ast_{\PZ}}\xspace, the angle between the direction of the \PZ boson and the direction of the negatively charged lepton in the rest frame of the \PZ boson, are measured for the first time. The standard model predictions at next-to-leading order are found to be in good agreement with the measured differential cross sections. The measurement is also interpreted in terms of anomalous interactions of the \PQt quark with the \PZ boson. Confidence intervals for the anomalous vector and the axial-vector current couplings and the dipole moment interactions are presented. Constraints on the Wilson coefficients in the standard model effective field theory are also presented. \begin{acknowledgments} We congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC and thank the technical and administrative staffs at CERN and at other CMS institutes for their contributions to the success of the CMS effort. In addition, we gratefully acknowledge the computing centers and personnel of the Worldwide LHC Computing Grid for delivering so effectively the computing infrastructure essential to our analyses. 
Finally, we acknowledge the enduring support for the construction and operation of the LHC and the CMS detector provided by the following funding agencies: BMBWF and FWF (Austria); FNRS and FWO (Belgium); CNPq, CAPES, FAPERJ, FAPERGS, and FAPESP (Brazil); MES (Bulgaria); CERN; CAS, MoST, and NSFC (China); COLCIENCIAS (Colombia); MSES and CSF (Croatia); RPF (Cyprus); SENESCYT (Ecuador); MoER, ERC IUT, PUT and ERDF (Estonia); Academy of Finland, MEC, and HIP (Finland); CEA and CNRS/IN2P3 (France); BMBF, DFG, and HGF (Germany); GSRT (Greece); NKFIA (Hungary); DAE and DST (India); IPM (Iran); SFI (Ireland); INFN (Italy); MSIP and NRF (Republic of Korea); MES (Latvia); LAS (Lithuania); MOE and UM (Malaysia); BUAP, CINVESTAV, CONACYT, LNS, SEP, and UASLP-FAI (Mexico); MOS (Montenegro); MBIE (New Zealand); PAEC (Pakistan); MSHE and NSC (Poland); FCT (Portugal); JINR (Dubna); MON, RosAtom, RAS, RFBR, and NRC KI (Russia); MESTD (Serbia); SEIDI, CPAN, PCTI, and FEDER (Spain); MOSTR (Sri Lanka); Swiss Funding Agencies (Switzerland); MST (Taipei); ThEPCenter, IPST, STAR, and NSTDA (Thailand); TUBITAK and TAEK (Turkey); NASU and SFFR (Ukraine); STFC (United Kingdom); DOE and NSF (USA). \hyphenation{Rachada-pisek} Individuals have received support from the Marie-Curie program and the European Research Council and Horizon 2020 Grant, contract Nos.\ 675440, 752730, and 765710 (European Union); the Leventis Foundation; the A.P.\ Sloan Foundation; the Alexander von Humboldt Foundation; the Belgian Federal Science Policy Office; the Fonds pour la Formation \`a la Recherche dans l'Industrie et dans l'Agriculture (FRIA-Belgium); the Agentschap voor Innovatie door Wetenschap en Technologie (IWT-Belgium); the F.R.S.-FNRS and FWO (Belgium) under the ``Excellence of Science -- EOS" -- be.h project n.\ 30820817; the Beijing Municipal Science \& Technology Commission, No. 
Z181100004218003; the Ministry of Education, Youth and Sports (MEYS) of the Czech Republic; the Lend\"ulet (``Momentum") Program and the J\'anos Bolyai Research Scholarship of the Hungarian Academy of Sciences, the New National Excellence Program \'UNKP, the NKFIA research grants 123842, 123959, 124845, 124850, 125105, 128713, 128786, and 129058 (Hungary); the Council of Science and Industrial Research, India; the HOMING PLUS program of the Foundation for Polish Science, cofinanced from European Union, Regional Development Fund, the Mobility Plus program of the Ministry of Science and Higher Education, the National Science Center (Poland), contracts Harmonia 2014/14/M/ST2/00428, Opus 2014/13/B/ST2/02543, 2014/15/B/ST2/03998, and 2015/19/B/ST2/02861, Sonata-bis 2012/07/E/ST2/01406; the National Priorities Research Program by Qatar National Research Fund; the Ministry of Science and Education, grant no. 3.2989.2017 (Russia); the Programa Estatal de Fomento de la Investigaci{\'o}n Cient{\'i}fica y T{\'e}cnica de Excelencia Mar\'{\i}a de Maeztu, grant MDM-2015-0509 and the Programa Severo Ochoa del Principado de Asturias; the Thalis and Aristeia programs cofinanced by EU-ESF and the Greek NSRF; the Rachadapisek Sompot Fund for Postdoctoral Fellowship, Chulalongkorn University and the Chulalongkorn Academic into Its 2nd Century Project Advancement Project (Thailand); the Welch Foundation, contract C-1845; and the Weston Havens Foundation (USA). \end{acknowledgments}
\section{Introduction} The mystery of the origin of cosmic rays (CRs) has lasted for nearly 100 years. Energetics arguments strongly suggest supernovae and their remnants (SNe \& SNRs) may be a source of Galactic CRs. By observing such potential CR accelerators in multiple wavelengths, we can constrain their particle populations, the acceleration processes they undergo, and thus the sources' ability to accelerate CRs up to some of the highest energies observed. We do so for the SNR CTB 37A, which we detect with the Fermi Gamma-ray Space Telescope (Fermi). The slightly extended emission coincides with the nominal position of CTB 37A, existing radio and X-ray data, and H.E.S.S. very high energy gamma-ray observations. Previous studies suggest that the SNR may be interacting with molecular clouds seen in CO to be positionally coincident with the remnant \citep{ReynosoMagnumCO}. The clouds are also kinematically associated with several OH $1720\,$MHz masers, easily explained through collisional shock excitation \citep{Frail96Masers}. In this way, SNR-cloud interactions may be another source of both CRs and gamma-rays emitted by CTB 37A. Motivated by these observations, we present a model of the multiwavelength spectrum which both reasonably reproduces the Fermi data and suggests that this SNR could be accelerating CRs up to the observed energies. An independent analysis of CTB 37A \citep{CastroSlane2010} reaches similar conclusions. \section{Fermi LAT Analysis: Methods and Results}\label{sec:AnalysisMethodResults} The Fermi Gamma-ray Space Telescope, launched 11 June 2008, contains the Gamma-ray Burst Monitor and the Large Area Telescope (LAT). The LAT's silicon tracker modules and Cesium Iodide calorimeter are sensitive to photons in a broad energy range ($\sim 0.02$ to $>300$\,GeV) with a large effective area ($\sim 8,000\,\mathrm{cm}^2$ for $E>1$\,GeV for on-axis events) over a large field of view ($\sim 2.4$\,sr). 
The front (first $12$) tracker planes have thin tungsten converter foils, enabling a smaller point spread function (PSF), while the back (last $4$) planes' thicker converters permit a larger effective area. We used approximately $18$\,months of the standard, low background {\it diffuse} class LAT data, collected from 8 Aug 2008 to 12 Feb 2010. A zenith cut of $105^{\circ}$ minimizes Earth albedo gamma-rays, while limiting the energy range to $0.2 < E < 50$\,GeV minimizes the systematic error. We used the standard science tools (v9r15p6) and instrument response functions (IRFs) (P6\_V3)\fnref{fn:FSSC} to fit a $4.5^{\circ}$ region of interest (ROI) around the known location of the source \citep{GreensCat}. In particular, we used the {\it gtlike} binned maximum likelihood fit to optimize the spectral parameters of all sources from the 1FGL catalog \citep{1FGLCat} in the ROI, iterating to include only significant sources ($>4\,\sigma$). As CTB 37A is quite near the Galactic plane, the smaller ROI limited the number of sources included in the {\it gtlike} fit of the region to a more manageable number while still spanning a sufficient number of PSF length scales. We also included the standard isotropic\fnref{fn:isoBkg} and Galactic diffuse models: isotropic\_iem\_v02 and gll\_iem\_v02, respectively\fnref{fn:FSSC}. Further details of the LAT instrument and data reduction may be found in \cite{AtwoodLAT}.
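For reference, the binned maximum likelihood that {\it gtlike} maximizes has the generic Poisson form (a textbook sketch; the tool's actual implementation also folds in the instrument response):
\begin{equation*}
  \ln\mathcal{L}(\theta) \;=\; \sum_{i \in \mathrm{bins}} \left[\, n_i \ln \lambda_i(\theta) \;-\; \lambda_i(\theta) \;-\; \ln n_i! \,\right],
\end{equation*}
where $n_i$ are the observed counts and $\lambda_i(\theta)$ the model-predicted counts in spatial--spectral bin $i$; source significances such as the $>4\,\sigma$ threshold above are derived from changes in this quantity.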
\fntext[fn:FSSC]{Available at the Fermi Science Support Center: \url{http://fermi.gsfc.nasa.gov/ssc}} \fntext[fn:isoBkg]{All contributions whose spatial distribution is assumed to be isotropic: the extragalactic background, unresolved sources, and the instrumental background.} \subsection{Detection}\label{sec:Detection} This analysis yielded an $18.6\,\sigma$ detection of a point source coincident with the nominal CTB 37A position, as seen in Figure \ref{fig:resMap}, a map of the residual Fermi-LAT emission after subtracting all nearby fitted sources and the diffuse backgrounds. For this detection, we used {\it gtlike} to quantify the spectral shape with a power law. Discussion of the fit with another reasonable spectral form, the exponentially cutoff power law, may be found in Section~\ref{sec:Var}. In Section \ref{sec:MWSpec} we explore a physically motivated model. \begin{figure} \begin{center} \includegraphics*[width=10cm]{FermiResMap_RadioXrayHESSPulsation_v2.pdf} \end{center} \caption{The Fermi-LAT residual map (color scale) in the energy range $0.2 - 50\,$GeV shows an excess clearly associated with both the nominal position (black diamond \citep{GreensCat}) and radio emission (southern black contours \citep{Kassim91}) of CTB 37A and not CTB 37B (northern black contours). The Fermi-LAT emission's most likely position and Gaussian extension (blue circle) also coincide with the H.E.S.S. source (red circle \citep{HESS_CTBA}) as well as the XMM-Newton MOS1 detector X-ray data (green contours) in the energy range $0.2-10\,$keV.
We performed a blind search for pulsations at the four positions indicated with black squares: at the 1FGL catalog position and the nearby possible X-ray source for CTB 37A, and at the nominal and coincident X-ray source positions for CTB 37B.} \label{fig:resMap} \end{figure} \subsection{Diffuse Model}\label{sec:DiffModel} Since CTB 37A lies just above the Galactic plane, we ensured that the particular Galactic diffuse background employed was not biasing our detection by examining the global likelihood of three different diffuse models, including several variations for the two most reasonable models. As expected, the DIRBE $60\,\mu$m infrared map, mainly tracing thermal emission from, e.g., dust heated by stars, including interplanetary dust heated by the Sun, poorly represents the diffuse Galactic gamma-ray background and so led to a low global fit likelihood relative to the standard model. Using a variation on the standard model, fitting it just to a region around the Galactic center ($(l,b) = \pm (20^{\circ}, 20^{\circ})$), also did not substantially improve the global fit to the ROI. Neither including a faint pulsar in the region, found after the 1FGL catalog was published, nor modulating the original Galactic diffuse model by a power law spectral shape significantly improved the global fit, particularly considering the extra parameters added in each case. The global fit did improve when using a variation of the Galactic diffuse model with finer spatial resolution based on $19$ months' data. The fit for this model further improved as we included sources from the 1FGL catalog which had a lower significance ($>3\,\sigma$) in the initial fit and as we modulated the Galactic diffuse model by a power law in energy. Combining these two variations, and further adding the faint pulsar to the combination, yielded moderate improvements in the global likelihood over any single variation.
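Comparisons of global fit likelihoods like those above are commonly quantified with a likelihood-ratio test statistic. A minimal sketch, assuming nested models and the one-extra-parameter case of Wilks' theorem (not the collaboration's exact procedure):

```python
import math

def significance(lnL_alt, lnL_null):
    """Approximate significance of a nested-model improvement.

    TS = 2 * (lnL_alt - lnL_null); for one extra degree of freedom,
    Wilks' theorem gives sigma ~ sqrt(TS).
    """
    ts = 2.0 * (lnL_alt - lnL_null)
    return math.sqrt(max(ts, 0.0))

# e.g. an improvement of 8 log-likelihood units is ~4 sigma for 1 dof
print(round(significance(-1000.0, -1008.0), 1))
```

This is why adding extra parameters, as noted above, must be weighed against the likelihood gain they buy.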
While the $19$ months' model variations yielded the best global fit, they remain under development, and the other models are either worse or show negligible improvement in the global fit relative to the standard diffuse model. Further, none of the reasonable models significantly influence the results for the point source at the nominal CTB 37A position. The minimum source significance was within $8\%$ of the average of all reasonable variations and the best-fit power law index remained within $3\%$ of the average. We therefore conclude that the standard Galactic diffuse model used in the analysis is adequate to the task and, further, that neither it nor its variations greatly affect our results. \subsection{Location and Extension}\label{sec:LocExtMethod} Having determined the existence of a significant source in the vicinity of the CTB 37 complex, we simultaneously fit for the most likely position and extension of the source. To do so, we stepped around a spatial grid to determine the most likely position for a given (Gaussian) extension, recentering the grid as needed to more completely include the largest extensions ($0.3^{\circ} - 0.5^{\circ}$). We interpolated the most probable extension bounded by a point source and $0.5^{\circ}$ and then iterated over a finer range of extensions on a finer grid spacing (from $1.0^{\circ}$ to $0.1^{\circ}$ on a $5\times 5$ grid) to improve the final location, extension, and their respective statistical errors. The final location is that associated with the most likely extension. This method was previously employed in, e.g., \citet{JurgenLMC}. We also verified the results using the alternate tool {\it Sourcelike} \citep[e.g.][]{IC443LAT}. We obtained an identical extension as for the grid method, and further determined that a uniform disk was no more likely than a Gaussian extension.
We thus found a position for the Fermi-LAT source of RA $= 258.68^{\circ} \pm 0.05^{\circ}_{stat} \pm 0.006^{\circ}_{sys}$, DEC $= -38.54^{\circ} \pm 0.04^{\circ}_{stat} \pm 0.02^{\circ}_{sys}$. The symmetrical Gaussian with an extension of $\sigma = 0.13^{\circ} \pm 0.02^{\circ}_{stat} \pm 0.06^{\circ}_{sys}$ was most likely, with a significance of $\sim 4.5\,\sigma$. The position and extension can be seen in Figure \ref{fig:resMap} overlaid on the Fermi-LAT residual map, along with the radio contours for CTB 37A and B and the position and extension of the H.E.S.S. source coincident with CTB 37A. We clearly see that the Fermi-LAT source is coincident with both the CTB 37A radio contours and the H.E.S.S. source, and further that it is inconsistent with CTB 37B. We further checked that the Galactic diffuse model and possible nearby sources did not affect our extension and location determination by performing the same analysis with three of the most reasonable models' variations: including the faint pulsar in the standard model, including less-significant sources from the 1FGL catalog in the $19$ months' diffuse model variation, and including the less-significant sources and modulating the $19$ months variation by a power law in energy. In all three cases, the position and extension found were within statistical errors of the originally determined position and extension. Thus we conservatively take the extreme difference as the systematic error due to the diffuse model, which is only $\sim 30\%$ of the extension and well below the statistical error ($< \pm 0.015^{\circ}$) for the position. The diffuse model systematic error thus derived is smaller than the PSF. We also examined the extension and localization using subsets of events with a nominally better PSF: those at higher energy ($2-50$\,GeV) and those photons which convert in the front of the tracker, where the foils are thinnest and thus cause the least scatter. 
In both cases, the position remained stable and the extension remained within the statistical errors. We again use the maximum difference to estimate the order of magnitude of the systematic error from the PSF, which we combined in quadrature with the systematic error from the diffuse model. \subsection{Variability and Pulsations}\label{sec:Var} Fermi has identified emission from over $70$ pulsars and variable emission from over $180$ blazars. As pulsing or variable emission can strongly influence the results found for a steady source, we performed two searches of the Fermi-LAT source coincident with the CTB 37 complex: a search for long-term variability in the light curve and a blind pulsation search of the most likely candidates in the region. We found no significant variability in the 2-week binned light curves when fitting the spectrum with either a power law or a power law with exponential cutoff (PLEC) spectral shape. All fluxes remained within $1\,\sigma$ of the average except for the two lowest, whose errors are systematically underestimated due to low statistics, and the third highest flux, which had an unusually small error. Replacing these with the errors determined for their nearest neighbors (in flux) yields a reduced $\chi^2$ compared to a constant, average flux of $\chi^2/d.o.f. \approx 1.4$ and $5.7$ for the power law and PLEC, respectively. Thus, we do not observe long-term variability. To check for pulsar-like emission, we performed a blind pulsation search at four likely locations coincident with the CTB 37 complex, seen as squares in Figure \ref{fig:resMap}, using $18$\,months of {\it diffuse} class events with $E \geq 300$\,MeV in an ROI of $r \leq 0.8^{\circ}$. We employed a differencing method with a maximum frequency of $64$\,Hz, a maximum coherency window of $524,288$\,seconds, and a spin-down frequency ($\dot{F}$) ranging from $0$ to $3.86 \times 10^{-10}$\,Hz\,s$^{-1}$, equal to that of the Crab.
The 1FGL catalog source J1714.5-3830, coincident with the Fermi-LAT source and the CTB 37A radio contours, and the X-ray source CXO J171419.8$-$383023, coincident with the Fermi-LAT, radio, and H.E.S.S. CTB 37A sources, showed no pulsations. We also examined the nominal position of CTB 37B (RA $= 258.49$, DEC $= -38.20$) and the nearby X-ray source CXOU J171405.7, more coincident with CTB 37B's radio contours, for pulsations, likewise finding none. Further details on the Fermi-LAT blind pulsation search may be found in \cite{BlindPulseSearch}. We also find the Fermi-LAT source coincident with CTB 37A inconsistent with a pulsar hypothesis: the best-fit spectral shape, determined with {\it gtlike} to be a PLEC, while consistent with the standard pulsar shape, shows only a $\sim 1.5\sigma$ improvement over the simple power law while adding an extra degree of freedom. Further, the best fit cutoff energy $\mathrm{E}_{\mathrm{cut}} = 28.2^{+29.8}_{-10.5}\,(\mathrm{stat})\,^{+67.8}_{-21.6}\,(\mathrm{sys})\,$GeV is both an order of magnitude higher than typical pulsar cutoff energies (e.g. \cite{PulsarCatalog}) and poorly fit, as evidenced by the large errors, which are consistent with no cutoff detection. Thus, we find it unlikely that the detected emission comes from a pulsar. \subsection{Spectrum}\label{sec:FermiSpec} To create the Fermi-LAT spectral points shown in Figure \ref{fig:spec}, we used {\it gtlike} to fit power laws to the data within each of $15$ logarithmically-spaced energy bins. The narrowness of the bins in energy space ensures that the exact functional form and parameters of the power law do not substantially influence the values of the points themselves. We do not show the lowest ($0.2-0.3$\,GeV) and highest ($30-50$\,GeV) energy bins due to insufficient statistics. The variations in the Fermi-LAT data points are not overwhelmingly significant in light of the systematic errors for this standard Fermi-LAT analysis.
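The binning just described can be sketched as follows; this is our reconstruction of $15$ logarithmically spaced bins over the fitted $0.2-50$\,GeV range, with the exact edges not quoted in the text beyond the two end bins:

```python
import numpy as np

# 16 edges define 15 logarithmically spaced bins over 0.2-50 GeV
edges = np.logspace(np.log10(0.2), np.log10(50.0), 16)

print(len(edges) - 1)                            # 15 bins
print(round(edges[0], 2), round(edges[1], 2))    # lowest bin ~0.2-0.29 GeV
print(round(edges[-2], 1), round(edges[-1], 1))  # highest bin ~34.6-50.0 GeV
```

The lowest and highest bins of this reconstruction line up with the $0.2-0.3$\,GeV and roughly $30-50$\,GeV end bins quoted above.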
\begin{figure} \begin{center} \includegraphics*[width=12cm]{FermiHESSpiSysStat_Spec.pdf} \end{center} \caption{The Fermi-LAT (points, $1 \sigma$ statistical error bars) and H.E.S.S. (grey shaded bow tie \citep{HESS_CTBA}) spectra from emission coincident with SNR CTB 37A. The dashed line, an extrapolation of the H.E.S.S. data into the Fermi-LAT energy range, clearly underpredicts the Fermi-LAT data. The discrepancy is somewhat ameliorated by including H.E.S.S. statistical errors (dotted lines) and is eliminated by very conservatively summing the H.E.S.S. statistical and systematic errors (dot-dashed line). The spectral tension remains after instead normalizing the best-fit H.E.S.S. hadron model to the Fermi-LAT data (solid curve). }\label{fig:spec} \end{figure} \section{Multiwavelength Spectrum}\label{sec:MWSpec} By observing objects in multiple wavelengths, we are able to gain a more complete understanding of their internal workings, and SNRs are no exception. Figure \ref{fig:spec} shows the gamma-ray spectrum for the Fermi-LAT source associated with CTB 37A. \subsection{High Energy Gamma-rays}\label{sec:gammaModel} The gamma-ray band is particularly sensitive to possible CR acceleration through the $\pi^0$ decay channel, initiated by the interaction of high energy hadrons, typically taken to be protons. In 2008, H.E.S.S. detected very high energy gamma-rays coincident with CTB 37A. That data, in combination with radio and X-ray analysis, suggested that a hadron-dominated emission scenario was more likely than a leptonic one, though the latter could not be excluded \citep{HESS_CTBA}. As the Fermi-LAT and H.E.S.S. energy ranges are complementary and the sources themselves coincident, we examined the extrapolation of the H.E.S.S. data into the Fermi-LAT range. The dashed line in Figure \ref{fig:spec}, showing this extrapolation, clearly underpredicts the Fermi-LAT measured spectrum. 
Including statistical errors somewhat ameliorates the discrepancy, though the global fit likelihood remains more than $8\sigma$ worse than that of freely fitting the source. Very conservatively summing the statistical and systematic errors directly eliminates the difference, as seen from the dotted and dot-dashed lines, respectively. If instead we use {\it gtlike} to fit the Fermi-LAT data with the H.E.S.S.-indicated hadronic emission model (from $\pi^0$ decay), as seen in Figure \ref{fig:spec} (solid curve), the Fermi-LAT data suggest that H.E.S.S. should observe more emission than it does. To generate the model, we used the H.E.S.S. parameters for the SNR at $11.3\,$kpc interacting with clouds of gas mass $\mathrm{M}_{\mathrm{H}} = 6.7 \times 10^4\,\mathrm{M}_{\odot}$ with a power law index of $\gamma = 2.30$ \citep{HESS_CTBA}. While the discrepancy may arise from statistical or systematic errors or from differences in source extension, as the H.E.S.S. source is slightly smaller than and marginally offset from the Fermi-LAT source, it may also indicate two different spectral components. To determine if the latter is an actual possibility, we extended our model to include the radio and X-ray data. \subsection{A Hadronic + Leptonic Model}\label{sec:MWModel} We fit a one-zone model using reasonable values and containing both hadronic ($\pi^0$ decay) and leptonic (synchrotron, bremsstrahlung, and inverse Compton) emission to the multiwavelength data coincident with SNR CTB 37A, as shown in Figure \ref{fig:MWSpec}. \begin{figure} \centering \subfloat[]{\label{fig:MWSpecFull}\includegraphics[width=10cm, trim=.25cm .4cm .82cm .55cm, clip]{MWSpec_PrelimModel.pdf}} \\ \subfloat[]{\label{fig:MWSpecZoom}\includegraphics[width=10cm, trim=.25cm .4cm .82cm .55cm, clip]{MWSpec_PrelimModel_ZoomCaptv2.pdf}} \caption{\ref{fig:MWSpecFull}. The multiwavelength spectrum of emission associated with CTB 37A, spanning from radio to very high energy gamma-ray emission.
We fit the radio data \citep{Kassim91} with a synchrotron model (solid line) having a $20\,\mu$G magnetic field, determining the lepton population's spectral index and normalization, having assumed a power law with an exponential cutoff energy of $50\,$GeV. \ref{fig:MWSpecZoom}. We then model the lepton population's bremsstrahlung (dashed line) and inverse Compton (dot-dashed line) emission. We adjusted the hadron population's pion emission (triple-dot-dashed line), from proton-proton interactions in the same ambient clouds as used for the leptons' bremsstrahlung, to approximately fit the H.E.S.S. data (grey bow tie) \citep{HESS_CTBA}. The total modeled emission (solid line at high energies) reproduces the Fermi-LAT data relatively well without further adjustments. The XMM-Newton X-ray data from the MOS1 and 2 and PN instruments (open squares) constrain the maximum synchrotron emission at the highest energies.} \label{fig:MWSpec} \end{figure} We selected radio data tabulated by \cite{Kassim91} with frequencies above $200$\,MHz, where absorption has a minimal effect, and fluxes both with errors and consistent with other measurements. For the $8800.0$\,MHz band, we fixed the error at $10\%$ of the total value, which is of the same order of magnitude as, and slightly larger than, the errors in the other bands. In a reasonable $20\,\mu$G magnetic field\footnote{Measurements of Zeeman splitting in OH masers coincident with CTB 37A provide an upper limit on the magnetic field magnitude of $1.5\,$mG \citep{BroganZeeman}. Such OH masers typically occur in molecular clouds carrying ambient magnetic fields which have been compressed by the SNR shock passage \citep{BroganZeeman}. Thus, to obtain the value used, we scaled the field by the size of the cloud.}, leptons produce radio-band synchrotron emission.
We calculated this leptonic model component according to the method of \citet{Ginzburg} and \citet{GhiselliniGuilbertSvensson} and assuming an electron population following a standard exponentially cutoff power law with a typical cutoff energy of $\mathrm{E}_{\mathrm{cut}} = 50\,$GeV. We fit the synchrotron component to the radio data, providing the lepton population spectral index and normalization of $\gamma_{\mathrm{e}} = -1.75$ and $\mathrm{N}_{\mathrm{e}} = 3.66\,$e/s/$\mathrm{cm}^2$/GeV/sr ($\sim 219$ times that of the local CR electron spectrum) at $1\,$GeV, respectively. Leptons in this population may also interact with molecular clouds, producing bremsstrahlung emission proportional to the gas mass of the clouds in the region, estimated at $\mathrm{M}_{\mathrm{H}} = 6.5 \times 10^4\,\mathrm{M}_{\odot}$ for the northern and central clouds of \citet{ReynosoMagnumCO}, which are those most consistent with the Fermi-LAT emission. Leptons may also inverse Compton scatter off local starlight and the cosmic microwave background (e.g. as measured by WMAP), producing high energy photons. We calculated the inverse Compton emission using the method described by \citet{BlumenthalGould} with optical and infrared interstellar radiation fields from \citet{Porter2008}. We combined the leptonic emission components with a hadronic model in which a high energy proton population, following a power law with index of $-2.30$, interacts with ambient protons, producing $\pi^0$s which decay to two photons, calculated following the prescription of \citet{Kamae2006}.
Using the same gas mass as for the bremsstrahlung component, we renormalized the hadronic gamma-ray flux to approximately fit the H.E.S.S. data, determining a CR proton enhancement factor of $\sim 11.6$ with respect to the local CR proton spectrum. The protons in this steady state model have a total energy of $\sim 1.2 \times 10^{50}\,$ergs, implying a very reasonable $\sim 12\%$ conversion efficiency for a canonical SN kinetic energy of $10^{51}\,$ergs and in good agreement with the H.E.S.S. prediction \citep{HESS_CTBA}. The total lepton energy, at $\sim 5 \times 10^{49}\,$ergs, is a reasonable value, roughly half the hadron energy. The proton energy corresponding to the maximum H.E.S.S. energy, $\mathrm{E}_{\mathrm{p,max}}$ between $10^{14}\,$eV and $10^{15}\,$eV \citep{HESS_CTBA}, is commensurate with the maximum energy expected from SNe for particles of charge $Z=1$ \citep{myThesis} and with the ``knee'', or change in CR spectral slope, at $10^{15}\,$eV. Figure \ref{fig:MWSpec} shows the result of fitting this model to the multiwavelength data. The lepton population reproduces the radio-measured synchrotron emission, while the combination of $\pi^0$ decay and bremsstrahlung emission reproduces the Fermi-LAT data, notably not previously fit, relatively well with no further adjustments. The inverse Compton component contributes at only about the $1\%$ level. From this we anticipate obtaining similar values when performing a complete fit of all the data, allowing the magnetic field strength as well as the proton and electron populations' parameters ($\mathrm{N}_{\mathrm{p}}$, $\gamma_{\mathrm{p}}$, $\mathrm{N}_{\mathrm{e}}$, $\gamma_{\mathrm{e}}$, $\mathrm{E}_{\mathrm{e,cut}}$) to vary. As synchrotron emission can occur at wavelengths up to X-ray, we examined data from the XMM-Newton observatory.
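The energy budget quoted above amounts to simple ratios, which can be checked directly with the values quoted in the text:

```python
# Quick check of the energy budget: total proton energy relative to a
# canonical 1e51 erg of supernova kinetic energy, and the lepton/hadron
# energy ratio (values as quoted in the text).
E_p = 1.2e50   # erg, total proton energy in the steady state model
E_SN = 1e51    # erg, canonical SN kinetic energy
E_e = 5e49     # erg, total lepton energy

print(f"proton conversion efficiency: {E_p / E_SN:.0%}")  # 12%
print(f"lepton/hadron energy ratio:   {E_e / E_p:.2f}")   # 0.42, roughly half
```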
Using approximately $17$\,ks of XMM-Newton observation from 1 March 2006 (ObsID 0306510101), we analyzed the data with the standard XMM-Newton Science Analysis Software (SAS v.9.0)\footnote{Available at \url{http://xmm.esac.esa.int/xsa}.}. We extracted the spectral flux from the region of excess X-ray flux coincident with the Fermi-LAT source. H.E.S.S. analysis of the same data, and of Chandra data in the same region, found the spectra to be compatible with absorbed thermal emission \citep{HESS_CTBA}. Our XMM-Newton analysis yields a flux likewise consistent with an absorbed thermal spectrum, with parameters of the same order as those given for the Chandra data: a temperature of several hundred eV and a column density of a few$\,\times 10^{22}\,\mathrm{cm}^{-2}$. We suspect we slightly underestimated the parameters relative to those found for Chandra, as we extracted the spectrum under the point source assumption. Assuming the X-ray emission is thermal, the (lowest energy) X-ray data thus constrain the highest energy synchrotron emission the lepton population may produce: for the reasonable model tested herein, a cutoff energy greater than $\sim 1.5\,$TeV would produce an excess of synchrotron emission not observed in X-rays. In a similar fashion, for this model and fixed set of parameters, we can constrain $\mathrm{E}_{\mathrm{e,cut}} \lesssim 50\,$GeV to avoid excessive flux from bremsstrahlung emission in the H.E.S.S. domain. We anticipate similar constraints from performing a full fit of all the parameters. \section{Conclusions}\label{sec:Conclusions} We robustly detect a source coincident with SNR CTB 37A at $18.6\sigma$ using the Fermi-LAT. The Fermi-LAT $\gamma$-ray data, in concert with radio, X-ray, and TeV data, allow us to determine the type of emission and its characteristics by determining the most likely model.
In this work, we are able to fit the multiwavelength emission with a reasonable model combining emission from hadronic and leptonic populations. Thus, SNR CTB 37A is a potential CR accelerator. We are currently finishing a complete fit of the multiwavelength data over the allowable ranges for the local magnetic field and the leptonic and hadronic populations' parameters to determine their most probable values. In particular, this method permits us to robustly determine whether one population's emission dominates over the other or if both are necessary to reproduce the observed data. Since our initial model with reasonable values reproduced the Fermi-LAT data moderately well, we anticipate similar values for the more robust fit. Additional sources of data, such as microwave (e.g. Planck), infrared (e.g. Spitzer), and optical (which we will obtain when the source reemerges from behind bright solar system objects) data may further help disentangle the emission models. By assembling individual CR source candidates such as SNRs into statistically significant populations, we will improve our understanding of the potential source classes, allowing comparison to properties derived from direct CR detection experiments, and more fully illuminating a hundred-year mystery. \section{Acknowledgements} The \textit{Fermi} LAT Collaboration acknowledges generous ongoing support from a number of agencies and institutes that have supported both the development and the operation of the LAT as well as scientific data analysis. 
These include the National Aeronautics and Space Administration and the Department of Energy in the United States, the Commissariat \`a l'Energie Atomique and the Centre National de la Recherche Scientifique / Institut National de Physique Nucl\'eaire et de Physique des Particules in France, the Agenzia Spaziale Italiana and the Istituto Nazionale di Fisica Nucleare in Italy, the Ministry of Education, Culture, Sports, Science and Technology (MEXT), High Energy Accelerator Research Organization (KEK) and Japan Aerospace Exploration Agency (JAXA) in Japan, and the K.~A.~Wallenberg Foundation, the Swedish Research Council and the Swedish National Space Board in Sweden. Additional support for science analysis during the operations phase is gratefully acknowledged from the Istituto Nazionale di Astrofisica in Italy and the Centre National d'\'Etudes Spatiales in France.
\section{Introduction} Lattice calculations of the hadronic spectrum are now reaching a precision where it is essential to resolve the influence of isospin breaking effects. These have two sources, a QCD effect arising from the fact that the $u$ and $d$ quarks have different masses, and an electromagnetic effect due to the $u$ and $d$ having different electric charges. The two effects are comparable in magnitude, so a reliable calculation of isospin breaking requires simulating both the gluon and photon gauge fields. Lattice studies of electromagnetic effects in the pions go back to~\cite{Duncan}. In recent years the interest in QCD+QED has grown, and the pace of work accelerated~\cite{Blum, Aoki:2012st,deDivitiis:2013xla,Borsanyi:2013lga,Zhou:2014gga, Endres:2015gda, Borsanyi:2014jba}. We are carrying out simulations in QCD+QED~\cite{Horsley:2015}. Both gauge theories are fully dynamical, so that the electrical charges of sea-quark loops are included via the fermion determinants. We use a non-compact action for the photon field. The calculations are carried out with three clover-like quarks. Details of the lattice action will be given in section~\ref{lattdetail}, and can be found in~\cite{Horsley:2015,Mainz2013}. In the real world, with $\alpha_{EM}=1/137$, electromagnetic effects on masses are at the 1\% level, or smaller. This would make them hard to measure on the lattice. Therefore we simulate with a QED coupling stronger than in the real world, so that we can see effects easily, and then scale back to physical $\alpha_{EM}$. The simulations are carried out with $ \beta_{QED} = 0.8$, equivalent to $e^2 = 1.25,\ \alpha_{EM} = e^2/(4\pi) \approx 0.10\,.$ We will see that this is a good choice: electromagnetic signals are clearly visible, much larger than our statistical errors, but we are also in a region where they still scale linearly in $e^2$, and we do not need to consider higher-order terms.
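The couplings quoted above can be checked with a one-line calculation; the comparison to the physical fine-structure constant is our addition:

```python
import math

# Simulated QED coupling versus the physical fine-structure constant.
e2_sim = 1.25                        # e^2 at beta_QED = 0.8
alpha_sim = e2_sim / (4.0 * math.pi) # alpha_EM = e^2 / (4 pi)
alpha_phys = 1.0 / 137.0

print(round(alpha_sim, 3))               # ~0.099, i.e. alpha_EM ~ 0.10
print(round(alpha_sim / alpha_phys, 1))  # ~13.6x the physical coupling
```

At this enhancement, leading electromagnetic mass effects are roughly an order of magnitude larger than in nature, which is what makes them clearly visible above the statistical errors while remaining in the linear-in-$e^2$ regime.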
We generate configurations with dynamical $u,d$ and $s$ quarks, and then increase our data range by carrying out partially quenched calculations, with valence $u,d,s$ quarks having different masses from the quarks used in the generation of the configurations. In addition to the $u,d,s$ quarks, we also introduce a fictitious $n$ quark, an extra flavour with electrical charge zero. The $n$ quark is particularly useful for checking that we are in the region where electromagnetic effects are still linearly proportional to $e^2$. In this work we present results on the pseudoscalar mesons. Our meson propagators are calculated from connected graphs only. Because we have no fermion-line disconnected graphs, the $u \bar u, d \bar d, s \bar s$ and $ n \bar n$ states do not mix, so we can measure $M^2(u \bar u), M^2(d \bar d)$ and $M^2(s \bar s)$. In the real world, these states do not exist; they mix strongly to form the $\pi^0, \eta$ and $\eta^\prime$. Disconnected graphs are responsible for the large mass of the $\eta^\prime$, but will have very little effect on the mass of the $\pi^0$. In this work we do not consider the $\eta$ and $\eta^\prime$ further, but we will need a mass for the $\pi^0$, with wave-function proportional to $( u \bar u - d \bar d )/\sqrt{2}$. We use the relation \begin{equation} M^2_{\pi^0} \approx \half \left[M^2(u \bar u) + M^2(d \bar d) \right] \end{equation} which is a very good approximation, with corrections proportional to the small quantity $(m_d-m_u)^2$~\cite{groupy}. This issue does not arise for the flavour non-diagonal mesons, $\pi^+, K^0, K^+,$ which have no disconnected contribution. In the first part of this paper, sections~\ref{extrap} to~\ref{scheme}, we discuss theoretical questions. First we describe how our constant singlet mass procedure~\cite{strange1,groupy} can be applied to QCD+QED. We derive a mass formula for pseudoscalar mesons in this framework.
This is all that is needed to calculate physical mass splittings, in particular the $\pi^+$-$\pi^0$ splitting. It also gives us the lattice masses for the $u, d, s$ quarks at the physical point, needed to predict mass splittings in the baryons. A particularly delicate number is the mass difference $m_u-m_d$ (or the $m_u/m_d$ mass ratio), which is difficult to extract reliably from a pure QCD simulation, and is much better defined in QCD+QED simulations. We also want to dissect the meson mass into a QCD part and a QED part, to find the electromagnetic $\epsilon$ parameters, which express the electromagnetic contributions to the meson masses~\cite{FLAG}. We find that there are theoretical subtleties in this separation, leading to scheme and scale dependence in the result. The total energy-momentum tensor is invariant under renormalisation, and so the total mass of any hadron is independent of renormalisation scheme and scale. However the individual contributions from quarks, gluons and photons are not invariant; they all run as the energy scale increases. This is familiar in pure QCD; as the energy scale of Deep Inelastic Scattering rises, the momentum fraction carried by quarks decreases, while the momentum fraction carried by gluons increases~\cite{AltarelliParisi}. The physical picture behind this effect is well known~\cite{textbooks}. As $Q^2$ rises the proton is probed with improved spatial resolution. A parton perceived as a single quark in a low-$Q^2$ measurement is resolved into multiple partons at higher $Q^2$, with most of the new partons being gluons. We should expect a similar effect in QCD+QED, with improved spatial resolution revealing more photons, causing a running of energy from quarks to photons, in parallel with the running from quarks to gluons seen in QCD alone. In QCD+QED, each hadron will be surrounded by a photon cloud. As in pure QED, the total energy in the cloud will be ultra-violet divergent.
Crudely, we can think of two components of the cloud. Firstly, there are short wave-length photons, with wave-lengths small compared with a hadron radius. These can be associated with particular quarks. If we look at the hadron with some finite resolution, the photons with wave-lengths shorter than this resolution are incorporated into the quark masses as self energies. Secondly, there will be longer wave-length photons, which cannot be associated with particular quarks. These photons must be thought of as the photon cloud of the hadron as a whole; these are the photons that we include when we talk of the electromagnetic contribution to the hadron mass. We expect to see many more really long wave-length photons (wave-lengths large compared to the hadron radius) around a charged hadron than around a neutral hadron. Clearly, in this picture, the value we get for the electromagnetic contribution to the hadron energy is going to depend on our resolution, i.e. on the scheme and scale that we use for renormalising QED. In the final part, section~\ref{results}, we summarise our lattice results for the $\pi^+$-$\pi^0$ splitting and for the scheme-dependent $\epsilon$ parameters, which parameterise the electromagnetic part of the meson masses. We have already published an investigation into the QCD isospin breaking arising from $m_d-m_u$ alone in~\cite{isospin}, and the first results of our QCD+QED program in~\cite{Horsley:2015}, which we discuss at greater length here. \section{Extrapolation Strategy \label{extrap}} In pure QCD we found that there are significant advantages in expanding about a symmetric point with $m_u = m_d = m_s = \overline{m}\;$~\cite{strange1,groupy}. In particular, this approach simplifies the extrapolation to the physical point, and it decreases the errors due to partial quenching.
We want to follow a similar approach with QED added, even though the symmetry group is smaller (the $u$ quark is always different from the other two flavours because of its different charge). First we find a symmetric point, with all three quark masses equal, chosen so that the average quark mass, \begin{equation} \overline{m} \equiv \third \left(m_u + m_d + m_s \right) \;, \label{mbar} \end{equation} has its physical value. To do this, we have defined our symmetric point in terms of the masses of neutral pseudoscalar mesons \begin{equation} M^2( u \bar u) = M^2( d \bar d) = M^2( s \bar s) = M^2( n \bar n) = X_\pi^2 \;. \label{symdef} \end{equation} Here $X_\pi$ is an average pseudoscalar mass, defined by \begin{equation} X_\pi^2 = \third \left[ 2 (M_K^\star)^2 + (M_\pi^\star)^2\right] \label{Xpidef} \end{equation} where $\star$ denotes the real-world physical value of a mass. The $n$ is a fictitious electrically neutral quark flavour. We have not included disconnected diagrams, so the different neutral mesons of~(\ref{symdef}) do not mix. We also define the critical $\kappa^c_q$ for each flavour as the place where the corresponding neutral meson is massless \footnote{The critical $\kappa$ defined in eq.~(\ref{kcdef}) is the critical $\kappa$ in the $m_u + m_d + m_s = const$ surface, i.e. if $m_u = 0$, we must have $m_d + m_s = 3 \overline m$. The $\kappa^c$ for the chiral point with all three quarks massless will be different.} \begin{equation} M^2(q \bar q) = 0 \quad \Leftrightarrow \quad m_q = 0 \; . \label{kcdef} \end{equation} Chiral symmetry can be used to argue that neutral mesons are better than charged ones for defining the massless point~\cite{Das}. 
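Numerically, the target mass $X_\pi$ of~(\ref{Xpidef}) is a one-line computation; a sketch using illustrative PDG-like inputs (the mass values below are assumptions of this sketch, not necessarily the inputs used in the paper):

```python
import math

M_K_star = 0.4937    # kaon mass in GeV (illustrative)
M_pi_star = 0.1396   # pion mass in GeV (illustrative)

# X_pi^2 = (1/3) [ 2 (M_K*)^2 + (M_pi*)^2 ]
X_pi2 = (2.0 * M_K_star**2 + M_pi_star**2) / 3.0
X_pi = math.sqrt(X_pi2)
print(f"X_pi = {1000.0 * X_pi:.0f} MeV")    # ~411 MeV
```

The symmetric point is then tuned so that every flavour-diagonal pseudoscalar in~(\ref{symdef}) sits at this common mass.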
We then make a Taylor expansion about this point, using the distance from $\overline{m}$ as our parameter to specify the bare quark masses \begin{eqnarray} a\delta m_q &\equiv& a( m_q - \overline{m}) = \frac{1}{2 \kappa} - \frac{1}{2 \kappa_q^{sym}}\;,\\ a\delta \mu_q &\equiv& a( \mu_q - \overline{m}) = \frac{1}{2 \kappa} - \frac{1}{2 \kappa_q^{sym}}\;, \end{eqnarray} where $m_q$ denotes the simulation quark mass (or sea quark mass), while $\mu_q$ represents the masses of partially quenched valence quarks. Note that keeping the average quark mass constant,~(\ref{mbar}), implies the constraint \begin{equation} \dmu + \dmd + \dms = 0 \;. \end{equation} In~\cite{groupy} we wrote down the allowed expansion terms for pure QCD, taking flavour blindness into account. QCD+QED works very much like pure QCD. Since the charge matrix $Q$ is a traceless $3 \times 3$ matrix, \begin{equation} Q = \pmatrix{ \textstyle +\, \frac{2}{3} & 0 & 0 \cr 0 & -\, \frac{1}{3} & 0 \cr 0 & 0 & -\, \frac{1}{3} } \;, \end{equation} electric charge is an octet, so we can build up polynomials in both charge and mass splitting in a way completely analogous to the pure QCD case. The main difference is that we can only have even powers of the charge, so the leading QED terms are $\sim e^2$, while the leading QCD terms are $\sim \delta m$. One very important point to note is that even when all three quarks have the same mass, we do not have full SU(3) symmetry. The different electric charge of the $u$ quark means that it is always distinguishable from the $d$ and $s$ quarks. 
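The bare mass parameters above follow directly from the hopping parameters; a minimal sketch (the $\kappa^{sym}$ values used here are the ensemble values quoted later in the paper):

```python
def a_delta_mu(kappa, kappa_sym):
    """Bare valence mass offset in lattice units:
    a*(mu_q - mbar) = 1/(2*kappa) - 1/(2*kappa_sym)."""
    return 0.5 / kappa - 0.5 / kappa_sym

# Symmetric-point hopping parameters for this ensemble
kappa_sym = {"u": 0.124362, "d": 0.121713, "s": 0.121713}

# At the symmetric point itself every offset vanishes ...
print(a_delta_mu(kappa_sym["d"], kappa_sym["d"]))   # 0.0
# ... while a valence quark with smaller kappa is heavier than the sea quark
print(a_delta_mu(0.1210, kappa_sym["d"]))           # positive

# For the sea quarks the offsets must in addition satisfy the constant-mbar
# constraint: delta m_u + delta m_d + delta m_s = 0.
```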
\section{Meson mass formula} From these considerations we find the following expansion for the mass-squared of an $a \bar b$ meson, incorporating both the QCD and electromagnetic terms \begin{eqnarray} M^2(a \bar b) &=& M^2 + \alpha ( \vma + \vmb) +c (\dmu + \dmd + \dms) \label{QEDmeson} \\ &&{} + \beta_0 {\textstyle \frac{1}{6}} ( \dmu^2 + \dmd^2 + \dms^2) + \beta_1 ( \vma^2 + \vmb^2) + \beta_2 ( \vma - \vmb)^2 \nonumber \\ &&{} + \beta_0^{EM} (e_u^2 + e_d^2 + e_s^2) + \beta_1^{EM} (e_a^2 + e_b^2) + \beta_2^{EM} (e_a - e_b)^2 \nonumber \\ &&{} + \gamma_0^{EM} ( e_u^2 \dmu + e_d^2 \dmd + e_s^2 \dms ) + \gamma_1^{EM} ( e_a^2 \vma + e_b^2 \vmb ) \nonumber \\ &&{} + \gamma_2^{EM} (e_a - e_b)^2 (\vma + \vmb ) + \gamma_3^{EM} (e_a^2 - e_b^2) ( \vma - \vmb ) \nonumber \\ &&{} + \gamma_4^{EM} (e_u^2 +e_d^2 + e_s^2) (\vma + \vmb) \nonumber \\ &&{} + \gamma_5^{EM} (e_a + e_b) (e_u \dmu + e_d \dmd + e_s \dms) \;. \nonumber \end{eqnarray} As well as the terms needed in the constant $\overline{m}$ surface we have also included the term $c (\dmu + \dmd + \dms)$, the leading term describing displacement from the constant $\overline{m}$ surface. Including this term will be useful when we come to discuss renormalisation and scheme dependence; it could also be used to make minor adjustments in tuning. The QCD terms have been derived in~\cite{groupy}. In particular, we discussed the effect of chiral logarithms in section~V.C. of that paper. Briefly, since we are expanding about a point some distance away from all chiral singularities, the chiral logarithms do not spoil the expansion, but they do determine the behaviour of the series for large powers of $\delta m_q$ (see, for example, equation (78) of~\cite{groupy}). We will now discuss briefly the origins of the electromagnetic terms.
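Before examining the individual terms, note that the expansion~(\ref{QEDmeson}) is straightforward to evaluate numerically. A sketch follows; all coefficient values are fit parameters, and nothing below is taken from our fits:

```python
def M2_meson(mu_a, mu_b, e_a, e_b, dm, e_sea, c):
    """Evaluate the (a bbar) meson mass-squared expansion term by term.
    mu_a, mu_b : valence mass offsets (delta mu)
    e_a, e_b   : valence quark charges
    dm         : (dm_u, dm_d, dm_s) sea mass offsets
    e_sea      : (e_u, e_d, e_s) sea quark charges
    c          : dict of expansion coefficients (fit parameters)
    """
    dmu, dmd, dms = dm
    eu, ed, es = e_sea
    se2 = eu**2 + ed**2 + es**2
    return (c["M2"]
            + c["alpha"] * (mu_a + mu_b)
            + c["c"] * (dmu + dmd + dms)
            + c["beta0"] * (dmu**2 + dmd**2 + dms**2) / 6.0
            + c["beta1"] * (mu_a**2 + mu_b**2)
            + c["beta2"] * (mu_a - mu_b)**2
            + c["beta0EM"] * se2
            + c["beta1EM"] * (e_a**2 + e_b**2)
            + c["beta2EM"] * (e_a - e_b)**2
            + c["gamma0EM"] * (eu**2 * dmu + ed**2 * dmd + es**2 * dms)
            + c["gamma1EM"] * (e_a**2 * mu_a + e_b**2 * mu_b)
            + c["gamma2EM"] * (e_a - e_b)**2 * (mu_a + mu_b)
            + c["gamma3EM"] * (e_a**2 - e_b**2) * (mu_a - mu_b)
            + c["gamma4EM"] * se2 * (mu_a + mu_b)
            + c["gamma5EM"] * (e_a + e_b) * (eu*dmu + ed*dmd + es*dms))
```

A useful consistency check is that the expression is symmetric under the interchange $a \leftrightarrow b$, as it must be for a meson.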
\subsection{Leading order terms} In what follows we use the following notation: \begin{equation} e^2 = 1/\beta_{QED}\;, \qquad e_q = Q_q e \end{equation} where \begin{equation} Q_u = +\,{\textstyle \frac{2}{3}\;,} \quad Q_d = Q_s = -\, {\textstyle \frac{1}{3}} \;. \end{equation} The leading order EM terms were written down in~\cite{Mainz2013}, \begin{equation} M^2_{EM}(a \bar b) = \beta_0^{EM} (e_u^2 + e_d^2 + e_s^2) + \beta_1^{EM} (e_a^2 + e_b^2) + \beta_2^{EM} (e_a - e_b)^2 \;. \end{equation} Examining each of these terms in more detail: since all of our simulations use the same sea quark charges, $(e_u^2 + e_d^2 + e_s^2)$ is a constant even if we vary the sea quark masses, and we can simply absorb this term into $M^2$ of~(\ref{QEDmeson}). Hence, the $\beta_0^{EM}$ term simply reflects the fact that $M^2$ measured in QCD+QED might be different from $M^2$ measured in pure QCD. As we have tuned our expansion point so that the pseudoscalars have the same symmetric-point mass as in pure QCD, the $\beta_0^{EM}$ for the pseudoscalar mesons will be zero, but we will still have to allow $M^2$ for other particles to be different in QCD+QED than in pure QCD. Now consider~(\ref{QEDmeson}) at the symmetric point, for the case of a flavour-diagonal meson, $a \bar a$. At the symmetric point, nearly all terms vanish because $\delta m_q$ and $\delta \mu_q$ are zero. In addition, the electromagnetic terms simplify because $e_b=e_a$. All we are left with is \begin{equation} M^2(a \bar a) = M^2 + \beta_0^{EM} (e_u^2 + e_d^2 + e_s^2) + 2 \beta_1^{EM} e_a^2 \label{symM2} \end{equation} at the symmetric point. However, since we have defined our symmetric point by~(\ref{symdef}), equation~(\ref{symM2}) must give the same answer whether $e_a = -\;\frac{1}{3}e, 0$ or $+\frac{2}{3}e$, so $\beta_1^{EM}$ must be zero (a non-zero value would split the masses of the different mesons according to the charge of their valence quarks).
However, having $\beta_1^{EM} =0$ for the pseudoscalar mesons does not mean that this term will also vanish for other mesons, for example the vector mesons. If we tune our masses so that the pseudoscalar $u \bar u$, $d \bar d$ and $s \bar s$ all have the same mass, we would still expect to find that the vector $u \bar u$ meson would have a different mass from the vector $d \bar d$ and $s \bar s$, because there is no symmetry in QCD+QED which can relate the $u$ to the other two flavours. Finally, we observe that the contribution from $\beta_2^{EM}$ is zero for neutral mesons, $e_a=e_b$. However, this is the leading term contributing to the $\pi^+$-$\pi^0$ mass splitting, so it is of considerable physical interest. \subsection{Next Order} Going beyond leading order, the following higher order terms of the form $e^2 \delta m_q$, $e^2 \delta \mu_q$ are possible: \begin{itemize} \item{ Sea charge times sea mass, $\gamma_0^{EM}$ } After imposing the constraints that $\overline{m}$ is kept constant and $e_u+e_d+e_s=0$, there is only one completely symmetric sea-sea polynomial left, \begin{equation} e_u^2 \dmu + e_d^2 \dmd + e_s^2 \dms \;. \end{equation} \item{ Valence charge times sea mass } At this order all polynomials of this type are killed by the $\overline{m} = const$ constraint. \item{ Valence charge times valence mass, $\gamma_1^{EM}, \gamma_2^{EM}, \gamma_3^{EM}$ } In this case there are three independent allowed terms. One convenient basis for the valence-valence terms is \begin{equation} e_a^2 \vma + e_b^2 \vmb\;, \qquad (e_a - e_b)^2 (\vma + \vmb )\;, \qquad (e_a^2 - e_b^2) ( \vma - \vmb )\;, \end{equation} though other choices are possible. \item{ Sea charge times valence mass, $\gamma_4^{EM}$ } The only polynomial of this type is \begin{equation} (e_u^2 + e_d^2 + e_s^2) ( \vma + \vmb )\;. \end{equation} Since $(e_u^2 + e_d^2 + e_s^2)$ is held constant, this term can simply be absorbed into the parameter $\alpha$ of~(\ref{QEDmeson}). 
\item{ Mixed charge times sea mass, $\gamma_5^{EM}$ } At the symmetric point we can not have mixed charge terms (valence charge times sea charge), because such terms would be proportional to $(e_u+e_d+e_s)$ which is zero. However, away from the symmetric point \begin{equation} (e_a + e_b) (e_u \dmu + e_d \dmd + e_s \dms) \end{equation} is allowed. \end{itemize} We illustrate the different physical origins of these terms by drawing examples of the Feynman diagrams contributing to each of the electromagnetic coefficients in~(\ref{QEDmeson}), Fig.~\ref{Feyn}. \begin{figure}[!htb] \begin{center} \epsfig{file=Figs/feyn.eps,width=13cm} \caption{Examples of the Feynman diagrams contributing to each of the electromagnetic coefficients in the meson mass formula~(\ref{QEDmeson}). All the graphs have a single photon (wavy line), and are all of $O(e^2)$ in the electromagnetic coupling. However, some terms require multiple gluons (curly lines), and so have higher order in the strong coupling $g^2$. \label{Feyn} } \end{center} \end{figure} \section{Lattice setup \label{lattdetail}} We are using the action \begin{equation} S = S_G + S_A + S_F^u + S_F^d + S_F^s \;. \end{equation} Here $S_G$ is the tree-level Symanzik improved SU(3) gauge action, and $S_A$ is the noncompact U(1) gauge action of the photon, \begin{equation} S_A = \half \beta_{QED} \sum_{x, \mu < \nu} \left[ A_\mu(x) + A_\nu(x + \hat\mu) - A_\mu(x + \hat\nu) - A_\nu(x) \right]^2 \;. \end{equation} The fermion action for flavour $q$ is \begin{eqnarray} S_F^q &=& \sum_x {\Bigg\{} \half \sum_\mu \left[ \overline{q}(x) (\gamma_\mu -1) e^{-i Q_q A_\mu(x) } \tilde {U}_\mu(x) q( x + \hat\mu) \right. \nonumber \\ &&{}\qquad \left. 
- \overline{q}(x) (\gamma_\mu +1) e^{ i Q_q A_\mu(x-\hat\mu) } \tilde {U}^\dagger_\mu(x-\hat\mu) q(x-\hat\mu) \right] \nonumber \\ &&{}\qquad +\; \frac{1}{2 \kappa_q} \overline{q}(x) q(x) - {\textstyle \frac{1}{4} } c_{SW} \sum_{\mu,\nu} \overline{q}(x) \sigma_{\mu \nu} F_{\mu \nu}(x) q(x) \Bigg\} \;, \end{eqnarray} where $ \tilde {U}_\mu$ is a singly iterated stout link. We use the clover coefficient $c_{SW}$ with the value computed non-perturbatively in pure QCD~\cite{csw}. We do not include a clover term for the electromagnetic field. We simulate this action using the Rational Hybrid Monte Carlo (RHMC) algorithm~\cite{RHMC}. One issue that arises in the simulation of QED is the treatment of constant electromagnetic background fields. In simulations where the electromagnetic field does not couple to the quark determinant, these are electromagnetic zero modes, and so need to be handled with particular care. In this simulation the sea quarks are coupled to the electromagnetic field, and so the action does depend on the background field. However, we still need to give special treatment to these modes. We handle constant background fields by adding or subtracting multiples of $6 \pi /(e L_\mu) $ until the background field is in the range \begin{equation} -3 \pi < e B_\mu L_\mu \le 3 \pi \;. \end{equation} This is the mildest way to keep the background fields under control~\cite{Gockeler:1991bu}. This procedure leaves fermion determinants unchanged for particles with charges a multiple of $e/3$. It also leaves Polyakov loops unchanged (again, for charges in units of $e/3$). We are investigating the evolution of these background fields in our simulations, and considering what effect they have on finite size effects. We plan to report on these studies in a future paper. We have carried out simulations on three lattice volumes, $24^3 \times 48, 32^3 \times 64$ and $48^3 \times 96$. The $24^3 \times 48$ calculations show clear signs of finite size effects.
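The zero-mode prescription described above is just a reduction to a fundamental domain; a minimal sketch (the function is our illustration, not the simulation code):

```python
import math

def wrap_background(eBL):
    """Reduce a constant background field e*B_mu*L_mu to the fundamental
    range (-3*pi, 3*pi] by adding or subtracting multiples of 6*pi,
    i.e. multiples of 6*pi/(e*L_mu) in the field B_mu itself."""
    while eBL <= -3.0 * math.pi:
        eBL += 6.0 * math.pi
    while eBL > 3.0 * math.pi:
        eBL -= 6.0 * math.pi
    return eBL

# A shift of eBL by 6*pi shifts q*B*L by 2*pi*n for any charge q = n*e/3,
# which is why determinants and Polyakov loops are unaffected.
print(wrap_background(7.0 * math.pi))   # pi
```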
The differences between $32^3 \times 64$ and $48^3 \times 96$ are quite small, leading us to believe that finite size effects on our largest volume are under control. In this paper we present results from the two largest volumes, which are usually in close agreement. In the few cases where there is a difference, we would favour the results from the largest volume, $48^3 \times 96$. \section{Critical $\kappa$} After several tuning runs we have been carrying out our main simulations at the point \begin{eqnarray} \beta_{QCD} = 5.50 \;, & \quad & \beta_{QED} = 0.8 \;, \\ \kappa_u = 0.124362\;, && \kappa_d = \kappa_s =0.121713 \nonumber \end{eqnarray} which lies very close to the ideal symmetric point defined in~(\ref{symdef}) (but with a much stronger QED coupling than the real world, $\alpha_{QED} = 0.099472\cdots $, instead of the true value $1/137$). At this point the $\delta m_q$ from the sea quark masses are all zero, but we can still learn about the meson masses by varying the partially quenched valence quark masses, $\delta \mu_q$. The flavour dependence of the meson masses is more complicated in QCD+QED than in pure QCD. We illustrate some of these differences in the sketch Fig.~\ref{pisketch}, showing the way that the flavour-diagonal mesons depend on the quark mass. As well as the physical charge $+\frac{2}{3}$ and $-\frac{1}{3}$ quarks, we also have a fictional charge 0 quark. In QCD+QED we still have the relationship $M^2(q \bar q) \propto m_q$ for flavour-diagonal (neutral) mesons, but the gradients of the $u \bar u, d \bar d, n \bar n$ mesons differ. So, in contrast to pure QCD, equal meson mass at the symmetric point no longer means equal bare quark mass. The bare mass at the symmetric point depends on the quark charge. This situation is illustrated in the left panel of Fig.~\ref{pisketch} (though the differences between the flavours have been exaggerated for clarity).
\begin{figure}[!htb] \begin{center} \epsfig{file=Figs/sketch.eps,width=15cm} \caption{Sketch illustrating the transformation from bare masses (left panel) to Dashen scheme masses (right panel). In the left panel all the flavour diagonal mesons have the same mass at the symmetric point ($\vmq=0$), but have different critical points ($M^2_{\rm PS}=0$). In the Dashen scheme (right panel) we rescale the masses horizontally, so that all the critical points are the same. The different mesons now all depend on $\vmq^D$ in the same way. \label{pisketch} } \end{center} \end{figure} We rescale (renormalise) the quark masses to remove this effect, making the renormalised quark masses at the symmetric point equal. The situation after renormalising in this way is illustrated in the right panel of Fig.~\ref{pisketch}. All the flavour-diagonal mesons, $n \bar n, d \bar d, s \bar s$ and $u \bar u$ now line up, depending in the same way on the new mass $\mu^D$, which we call the ``Dashen scheme'' mass, for reasons which should become clear later \footnote{Here, to introduce the idea, we just make a simple multiplicative renormalisation. In fact, the mass renormalisation matrix is not diagonal; there are also terms which mix flavours. We will include these additional terms in section~\ref{Dashq}.}. We will see that using this quark mass also simplifies the behaviour of the mixed flavour mesons, and helps us understand the splitting of a hadron mass into a QCD part and an electromagnetic part. One way to interpret the behaviour in Fig.~\ref{pisketch} is to consider a $u$ and $d$ quark with the same bare lattice mass. Since the magnitude of the charge of the $u$ quark is twice as large as that of the $d$ quark, it will acquire a larger self-energy due to the surrounding photon cloud and hence it will be physically more massive, which is why the mass of the $u \bar u$ meson rises more steeply than the $d \bar d$ meson when plotted against bare mass.
By instead plotting against the Dashen mass, we have effectively added the extra mass of the photon cloud to the quark mass. Two quarks with the same Dashen mass are physically similar in mass, and so they form mesons of the same mass, as seen in the right-hand panel of Fig.~\ref{pisketch}. Applying these ideas to our simulations, in Fig.~\ref{kkdd} we show how the symmetric $\kappa^{sym}$ and critical $\kappa^c$ are determined, using the $d \bar d$ meson as an example. $\kappa^c$ is defined from the point where the partially-quenched meson mass extrapolates to zero,~(\ref{kcdef}), while $\kappa^{sym}$ is defined by the point where the fit line crosses $M_{PS}^2 = X_\pi^2$,~(\ref{symdef}). \begin{figure}[!htb] \begin{center} \epsfig{file=Figs/kk.eps,width=11cm} \caption{ Determination of $\kappa^c$ and $\kappa^{sym}$ for the $d$ quark. $\kappa^c$ is defined from the point where the $d \bar d$ meson mass extrapolates to zero,~(\ref{kcdef}), while $\kappa^{sym}$ is defined by the point where the fit line crosses $M_{PS}^2 = X_\pi^2$,~(\ref{symdef}). \label{kkdd} } \end{center} \end{figure} We repeat this procedure for the $u$ and $n$ quarks and plot the resulting $1/\kappa^c$ and $1/\kappa^{sym}$ values as a function of the square of the quark charges, $Q_q^2$, in Fig.~\ref{kaplot}. Here we clearly see that in both cases $1/\kappa$ depends linearly on $Q_q^2$. \begin{figure}[!htb] \begin{center} \epsfig{file=Figs/lkap.eps,width=12cm} \caption{ $1/\kappa^c$ (red squares) and $1/\kappa^{sym}$ (blue circles) plotted against quark charge squared, $Q_q^2$. \label{kaplot} } \end{center} \end{figure} Despite appearances, the two lines are not quite parallel. In Fig.~\ref{bareplot} we plot the bare mass at the symmetric point, \begin{equation} a m^{sym}_q = \frac{1}{2 \kappa_q^{sym}} - \frac{1}{2 \kappa_q^c} \;. \end{equation} $\kappa_q^c$ for each flavour is defined as the point at which the flavour-diagonal $q \bar q$ meson becomes massless.
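The determinations of $\kappa^c$ and $\kappa^{sym}$ sketched in Fig.~\ref{kkdd} amount to a linear fit of $M^2_{PS}$ against $1/(2\kappa)$, followed by reading off two crossings; a sketch with synthetic (not measured) data:

```python
def kappa_intercepts(points, X_pi2):
    """Least-squares fit of M^2 = A + B/(2*kappa) to (kappa, M^2) data;
    return (kappa_c, kappa_sym), the crossings M^2 = 0 and M^2 = X_pi^2."""
    xs = [0.5 / k for k, _ in points]
    ys = [m2 for _, m2 in points]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    B = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    A = ybar - B * xbar
    kappa_c = 0.5 * B / (0.0 - A)       # solve A + B/(2 kappa) = 0
    kappa_sym = 0.5 * B / (X_pi2 - A)   # solve A + B/(2 kappa) = X_pi^2
    return kappa_c, kappa_sym

# Synthetic, exactly linear data with kappa_c = 0.1220 built in
true_kc, B = 0.1220, 2.0
A = -B * 0.5 / true_kc
data = [(k, A + B * 0.5 / k) for k in (0.1200, 0.1205, 0.1210)]
kappa_c, kappa_sym = kappa_intercepts(data, X_pi2=0.02)
print(kappa_c, kappa_sym)   # recovers 0.1220; kappa_sym slightly below it
```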
We see that our data show the behaviour of the left-hand panel of Fig.~\ref{pisketch}, with each meson reaching the axis at a different point. \begin{figure}[!htb] \begin{center} \epsfig{file=Figs/baremass2.eps, width=11cm} \end{center} \caption{The bare mass at the symmetric point, $ a m_q^{sym}$, as a function of quark charge. We see that the bare mass is not constant; there is about a 10\% difference between the neutral $n$ quark and the $u$ quark. The open red circles show the quark masses after renormalising to remove this charge dependence. \label{bareplot} } \end{figure} The factors needed to bring the charged bare masses into agreement with the neutral bare mass, as in the right-hand panel of Fig.~\ref{pisketch}, are \begin{equation} Z_{m_d}^{QED} = Z_{m_s}^{QED} = 1.023, \qquad Z_{m_u}^{QED}= 1.096 \;. \label{ZDash} \end{equation} As seen in Fig.~\ref{bareplot}, this $Z$ factor depends linearly on the quark charge squared. Hence, we can write \begin{equation} \delta \mu^D_q = (1 + K e_q^2 ) \delta \mu_q = (1 + K Q_q^2 e^2 ) \delta \mu_q \;, \label{lead_Dashen} \end{equation} for some constant $K$. By construction, this simplifies the neutral mesons as they will all lie on the same line; see Fig.~\ref{pisketch}. \begin{figure}[!htb] \begin{center} \epsfig{file=Figs/pibare.eps, width=12cm} \caption{ Pseudoscalar $M^2_{PS}$ plotted against bare mass for the $\pi^+$ (red), $u \bar u$ (blue) and $d \bar d$ (black) mesons. The lines simply connect the points. Error bars are small compared with the points. Data are from a $32^3 \times 64$ lattice. \label{pibare} } \end{center} \end{figure} In order to investigate the effect on charged mesons, we first consider the $u \bar u, d \bar d$ and $u \bar d\ (\pi^+)$ meson masses plotted as a function of bare quark mass in Fig.~\ref{pibare}. We see that in this plot the two neutral mesons, $u \bar u$ and $d \bar d$, lie on different lines. We also observe that the $\pi^+$ data do not lie on a smooth curve.
This is not due to statistical errors (which are much too small to see in this plot). It is because the $\pi^+$ meson mass depends not only on $\delta m_u + \delta m_d$, as in pure QCD, but also on $\dmu - \dmd$, which causes those mesons containing quarks with very unequal masses to deviate from the trend. \begin{figure}[!htb] \begin{center} \epsfig{file=Figs/pidash.eps, width=12cm} \caption{ The same data as in Fig.~\ref{pibare}, but this time plotted against Dashen-scheme quark mass. \label{pidash} } \end{center} \end{figure} When we now switch to using the Dashen-scheme quark masses in Fig.~\ref{pidash}, we see that the graph looks significantly different. The $u \bar u$ and $d \bar d$ mesons now lie on the same straight line (this is essentially by construction, since equal Dashen-scheme quark mass $\Leftrightarrow$ equal neutral meson mass). More interesting is the fact that the ``jiggles'' in the $\pi^+$ mass are largely removed by plotting against Dashen-scheme mass, making it much easier to estimate the EM shift in the $\pi^+$ mass. \section{Dashen scheme quark mass formula \label{Dashq}} In order to derive an expression for the meson masses in the Dashen scheme, we start with~(\ref{QEDmeson}) and proceed by absorbing the QED terms for the neutral pseudoscalar mesons into the quark self-energy by making the definition \begin{eqnarray} \vmq^D &=& \vmq + \Big\{ \half c (\dmu + \dmd + \dms) + \half\gamma_0^{EM} (e_u^2 \dmu + e_d^2 \dmd + e_s^2 \dms) \label{Dashform}\\ &&{} \! \! + \gamma_1^{EM} e_q^2 \vmq + \gamma_4^{EM} (e_u^2 +e_d^2 + e_s^2) \vmq + \gamma_5^{EM} e_q (e_u \dmu + e_d \dmd + e_s \dms) \Big\}/\alpha \; . \nonumber \end{eqnarray} At present we are neglecting $\gamma_0^{EM}$ and $ \gamma_5^{EM}$ because we are working on a symmetric background, $\delta m_q = 0$, and absorbing $\gamma_4^{EM}$ into the coefficient $\alpha$ because we only have data at one value of $\beta_{QED}$.
This means that only the $\gamma_1^{EM}$ term is used in calculating $\vma^D$, giving a simple multiplicative transformation from bare mass to Dashen scheme mass. Most of the other terms in~(\ref{Dashform}) represent off-diagonal terms in the quark mass $Z$ matrix. There are many more mixing terms possible in QCD+QED than in pure QCD, but most of them first occur in diagrams with a large number of gluon and quark loops, as can be seen in Fig.~\ref{Feyn}, so they are probably rather small. Substituting~(\ref{Dashform}) into~(\ref{QEDmeson}) we are left with the simpler formula \begin{eqnarray} M^2(a \bar b) &=& M^2 + \alpha ( \vma^D + \vmb^D) + \beta_0 {\textstyle \frac{1}{6}} ( \dmu^2 + \dmd^2 + \dms^2) \label{DashMes} \\ &&{} + \beta_1 ( (\vma^D)^2 + (\vmb^D)^2) + \beta_2 ( \vma^D - \vmb^D)^2 + \beta_2^{EM} (e_a - e_b)^2 \nonumber \\ &&{} + \gamma_2^{EM} (e_a - e_b)^2 (\vma^D + \vmb^D ) + \gamma_3^{EM} (e_a^2 - e_b^2) ( \vma^D - \vmb^D ) \; . \nonumber \end{eqnarray} In~(\ref{DashMes}) all the EM terms vanish for neutral mesons ($e_a = e_b$), leaving \begin{eqnarray} M^2_{neut}(a \bar b) &=& M^2 + \alpha ( \vma^D + \vmb^D) + \beta_0 {\textstyle \frac{1}{6}} ( \dmu^2 + \dmd^2 + \dms^2) \label{Dashneut} \\ &&{} + \beta_1 \left( (\vma^D)^2 + (\vmb^D)^2 \right) + \beta_2 \left( \vma^D - \vmb^D \right)^2\; , \nonumber \end{eqnarray} which clearly has no references to any $EM$~coefficient, or to any charges $e_q$. Hence, by construction, the mass of the neutral pseudoscalar mesons comes purely from the quark masses, and has no electromagnetic contribution. The formula simplifies even further if we consider a flavour-diagonal meson \begin{equation} M^2(a \bar a) = M^2 + 2 \alpha \vma^D + \beta_0 {\textstyle \frac{1}{6}} ( \dmu^2 + \dmd^2 + \dms^2) + 2 \beta_1 (\vma^D)^2 \;. 
\label{Dashdiag} \end{equation} This agrees with what we see in Figs.~\ref{pisketch} and \ref{pidash}, with the different flavour-diagonal mesons all lying on the same curve when plotted against the Dashen quark mass. In the Dashen scheme the electromagnetic contribution to the meson mass is \begin{eqnarray} M^2_\gamma(a \bar b) &=& \beta_2^{EM} (e_a - e_b)^2 + \gamma_2^{EM} (e_a - e_b)^2 (\vma^D + \vmb^D ) \label{MQED} \\ &&{} + \gamma_3^{EM} (e_a^2 - e_b^2) ( \vma^D - \vmb^D ) \;, \nonumber \end{eqnarray} while the QCD contribution is \begin{eqnarray} M^2_{QCD}(a \bar b) &=& M^2 + \alpha ( \vma^D + \vmb^D) + \beta_0 {\textstyle \frac{1}{6}} ( \dmu^2 + \dmd^2 + \dms^2) \label{MQCD} \\ &&{} + \beta_1 ( (\vma^D)^2 + (\vmb^D)^2) + \beta_2 ( \vma^D - \vmb^D)^2\;. \nonumber \end{eqnarray} Dashen's theorems~\cite{Dashen} state that in the limit of an exact SU(3) chiral symmetry, the neutral mesons have zero electromagnetic self-energy, and that the charged mesons' electromagnetic self-energies are given by a single constant. Our formulation maintains the vanishing electromagnetic self-energy of the neutral mesons away from the chiral limit. The $\beta_2^{EM}$ term of our expansion is the generalisation of Dashen's result, where, in the absence of any strong SU(3) breaking, the electromagnetic self-energy is proportional to the charge-square of the meson. The terms involving $\gamma^{EM}$ therefore encode the deviations associated with leading-order SU(3) breaking of the strong interaction, as anticipated by Dashen.
In the real world the $K^0$-$ K^+$ splitting comes partly from QED effects, and partly from the $m_d, m_u$ mass difference, which we consider to be the QCD part of the splitting. The ordering of the physical states, with the $K^0$ heavier than the $K^+$, suggests that the quark mass effect dominates, but we expect that there is still a QED contribution of comparable magnitude. Naively, one might think that this QED contribution may be easily determined by performing a simulation with $m_u = m_d$. In this case, there will be no splitting from QCD, so the result will give the splitting due to QED alone. In pure QCD, setting $m_u = m_d$ is unproblematic as equal bare mass implies equal renormalised mass, regardless of scale or scheme. However, in QCD+QED, mass ratios between quarks of different charges are not invariant. The anomalous dimension of the quark mass now depends on the quark charge; at one loop \begin{equation} \gamma_m = 6 C_F g^2 + 6 Q_f^2 e^2 + \cdots \end{equation} so the $u$ mass runs faster than the $d$ mass. If $m_u = m_d$ in one scheme, this will not be true in another. This also implies that there is no good way to compare masses at the physical $e^2$ with pure QCD masses at $e^2 = 0$. \subsection{Changing Scheme} To calculate the electromagnetic part of the meson mass we take the mass calculated in the full theory, QCD+QED ($g^2$ and $e^2$ both non-zero), and subtract the mass calculated in pure QCD ($e^2=0$): \begin{equation} M_\gamma^2 = M^2(g^2,e_\star^2,m_u^\star, m_d^\star, m_s^\star) - M^2(g^2,0,m^{QCD}_u, m^{QCD}_d,m^{QCD}_s) \;, \end{equation} where $e_\star$ is the physical value of the electromagnetic coupling, corresponding to $\alpha_{EM} = 1/137$. In the full theory the physical quark masses are well defined: we can fix the three physical quark masses by using three physical particle masses (the $\pi^0, K^0$ and $K^+$ would be a suitable choice).
In the full theory we should use the physical quark masses, $m^\star$, but we also have to specify which quark masses we are going to use in the pure QCD case, (which is, after all, an unphysical theory). Different ways of choosing the $m^{QCD}$ will give different values for the electromagnetic part of the meson mass. One prescription for choosing the quark masses in the (unphysical) pure QCD case is to use the neutral meson masses. We could tune $m^{QCD}$ by requiring \begin{equation} M^2_{q \bar q}(g^2, e_\star^2, m_u^\star, m_d^\star, m_s^\star) = M^2_{q \bar q}(g^2,0, m^{QCD}_u, m^{QCD}_d,m^{QCD}_s) \end{equation} Since the QCD+QED mass matches the QCD mass, this scheme has zero EM contribution to neutral pseudoscalars by definition. This is our Dashen scheme, discussed above. In this scheme, $M^2_\gamma$ is zero for neutral pseudoscalar mesons, and is given by the simple formula~(\ref{MQED}) for charged mesons. A more conventional choice is to choose $m^\star$ and $m^{QCD}$ the same in $\overline{MS}$ at some particular scale. In this case, we are now presented with the task of determining the quark masses in a certain scheme (e.g. the Dashen scheme) given fixed $\overline{MS}$ masses. Hence we need to calculate the Dashen quark masses by renormalising from $\overline{MS}$ to the Dashen scheme: \begin{eqnarray} m^D(g^2, e_\star^2) &=& Z_m(g^2, e_\star^2, \mu^2) m^{\overline{MS}}(\mu^2) \;, \\ m^D(g^2, 0, \mu^2) &=& Z_m(g^2, 0, \mu^2) m^{\overline{MS}}(\mu^2) \;. \nonumber \end{eqnarray} However, since the renormalisation factor $Z_m$ depends on both $g^2$ and $e^2$, the Dashen mass in pure QCD would not be the same as the Dashen mass in the physical QCD+QED theory: \begin{equation} m^D_{QCD} \equiv m^D(g^2, 0, \mu^2) = \frac{ Z_m(g^2, 0, \mu^2) } { Z_m(g^2, e_\star^2, \mu^2) } m^D(g^2, e_\star^2) \equiv Y_m( g^2, e_\star^2, \mu^2) m^D(g^2, e_\star^2) \;. 
\label{mDQCD} \end{equation} Hence the Dashen mass is rescaled by a ratio of renormalisation constants, which we denote $Y_m$. We now know in principle what QCD mass we should subtract: it is the mass we get by substituting $e^2=0$, $m^D = m^D_{QCD}$ into our fit formula. So it is now a matter of determining the ratio $Y_m$ in~(\ref{mDQCD}). To proceed, we note that we already know the renormalisation factor from bare lattice mass to Dashen mass, equations~(\ref{lead_Dashen}) and~(\ref{Dashform}): \begin{eqnarray} Y_m^{latt \to D} &=& 1 + \frac{\gamma_1^{EM}}{\alpha} e^2 Q_q^2 \label{Ydashlat} \\ &=& 1 + \alpha_{EM} Q_q^2 \; 2.20(9) \nonumber \;. \end{eqnarray} We also need the renormalisation factor from bare lattice mass to $\overline{MS}$, which can be estimated from lattice perturbation theory~\cite{offshell}. Fortunately, all pure QCD diagrams with only gluons and quarks cancel because we are looking at a ratio of $Z$ factors, so the leading contribution comes from the one-loop photon diagram, giving \begin{eqnarray} Y_m^{latt \to \overline{MS}} &=& 1 + \frac{e^2 Q_q^2}{16 \pi^2} \left( -6 \ln a \mu + 12.95241 \right) \nonumber \\ &=& 1 + \alpha_{EM} Q_q^2 \; 1.208\;. \end{eqnarray} The numerical value in the second line is obtained for $\mu=2$~GeV and the value of the lattice spacing in our simulations, $a^{-1} = 2.9$~GeV (see Table~\ref{physmass}). However, the one-loop result is not the full answer: there will be higher-order diagrams, with one photon plus any number of gluons, giving contributions $\sim e^2 g^2, e^2 g^4, \dots$ To account for these unknown terms we assign an uncertainty of $\sim \pm 30$\% to the coefficient, giving \begin{equation} Y_m^{latt \to \overline{MS}} = 1 + \alpha_{EM} Q_q^2 \; 1.2(4) \;.
\end{equation} Combining this with~(\ref{Ydashlat}) gives us the conversion factor from the Dashen scheme to $\overline{MS}$ at $\mu=2$~GeV for our configurations ($a^{-1} = 2.9$~GeV) \begin{equation} Y_m^{D \to \overline{MS}} = 1 - \alpha_{EM} Q_q^2 \; 1.0(5) \equiv 1 + \alpha_{EM} Q_q^2 \Upsilon^{D \to \overline{MS}} \;. \label{Yval} \end{equation} We are now ready to write the transformation formula from Dashen scheme $M_\gamma$ to $M_\gamma$ in $\overline{MS}$. In the Dashen scheme \begin{equation} \left[M^2_\gamma \right]^D = M^2(g^2, e^2, [m_u^\star]^D, [m_d^\star]^D, [m_s^\star]^D )- M^2(g^2, 0, [m_u^\star]^D, [m_d^\star]^D, [m_s^\star]^D ) \label{MgD} \end{equation} with the same Dashen-scheme quark masses in both terms. In $\overline{MS}$ \begin{equation} \left[M^2_\gamma \right]^{\overline{MS}} = M^2(g^2, e^2, [m_u^\star]^D, [m_d^\star]^D, [m_s^\star]^D )- M^2(g^2, 0, [\tilde m_u]^D, [\tilde m_d]^D, [\tilde m_s]^D ) \label{MgMSb} \end{equation} where $[\tilde m_q]^D$ is given by~(\ref{mDQCD}) \begin{equation} [\tilde m_q]^D = \left( 1 + \alpha_{EM} Q_q^2 \Upsilon^{D \to \overline{MS}} \right) [m_q^\star]^D \;. \end{equation} Taking the difference between~(\ref{MgMSb}) and~(\ref{MgD}) gives \begin{equation} \left[M^2_\gamma \right]^{\overline{MS}}\! - \left[M^2_\gamma \right]^D \!= M^2(g^2, 0, [m_u^\star]^D, [m_d^\star]^D, [m_s^\star]^D ) - M^2(g^2, 0, [\tilde m_u]^D, [\tilde m_d]^D, [\tilde m_s]^D ) \end{equation} which holds for the electromagnetic contribution to any hadron. 
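The arithmetic behind these conversion factors can be checked in a few lines (our sketch; the coefficient $2.20$ is the central value from~(\ref{Ydashlat}), and terms of order $\alpha_{EM}^2$ are dropped when combining the two factors):

```python
import math

mu    = 2.0    # MSbar scale in GeV
a_inv = 2.9    # inverse lattice spacing in GeV (Table "physmass")

# One-loop photon contribution to the lattice -> MSbar mass renormalisation:
# Y = 1 + alpha_em Q^2 [ -6 ln(a mu) + 12.95241 ] / (4 pi),
# using e^2 / (16 pi^2) = alpha_em / (4 pi). Gives the 1.208 quoted in the text.
coef_latt_to_msbar = (-6.0 * math.log(mu / a_inv) + 12.95241) / (4.0 * math.pi)

# Lattice -> Dashen factor, central value of the 2.20(9) coefficient.
coef_latt_to_dashen = 2.20

# At O(alpha_em) the coefficients subtract when forming the ratio of the two
# Y factors, giving Upsilon = 1.2 - 2.2 = -1.0 for Dashen -> MSbar.
upsilon = coef_latt_to_msbar - coef_latt_to_dashen
```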
If we are specifically interested in pseudoscalar mesons, we can use the leading order mass formula $M^2(a \bar b) = \alpha (m_a + m_b)$ to give \begin{eqnarray} \left[M^2_\gamma(a \bar b) \right]^{\overline{MS}} &=& \left[M^2_\gamma(a \bar b) \right]^D - \alpha_{EM} \Upsilon^{D \to \overline{MS}} \alpha \left[ Q_a^2 [m_a^\star]^D + Q_b^2 [m_b^\star]^D \right] \nonumber \\ &=& \left[M^2_\gamma(a \bar b) \right]^D - \alpha_{EM} \Upsilon^{D \to \overline{MS}} \half \left[ Q_a^2 M^2(a \bar a) + Q_b^2 M^2(b \bar b) \right] \;. \label{Upsil} \end{eqnarray} This is a rather simple formula; the only difficulty is that at present we only have a rather rough value for the constant $\Upsilon$. \section{Lattice Results \label{results}} The first question to consider is how close our simulation is to the symmetric line, where $M(u \bar u) = M(d \bar d) = M(s \bar s).$ We find that at the simulation point $M(u \bar u)$ is about 6\% heavier than the other two mesons, so we are not quite at the desired point. In Table~\ref{ksymtab} we show the $\kappa^{sym}_q$ values determined on our two large-volume ensembles. In our fits we make a Taylor expansion about the symmetric point of Table~\ref{ksymtab}, not about our simulation point. (The displacement is rather small; the difference is in the fifth significant figure.) \begin{table}[htb] \begin{center} \begin{tabular}{|c|llc|} \hline flavour & \multicolumn{1}{c}{ $ 32^3 \times 64$} & \multicolumn{1}{c}{ $48^3 \times 96$ } & simulation \cr \hline $n$ & $0.1208142(14)$ & $0.1208135(9) $& \cr $d,s$ & $0.1217026(5)$ & $0.1217032(3) $& 0.121713 \cr $u$ & $0.1243838(10) $& $0.1243824(6) $& 0.124362 \cr \hline \end{tabular} \caption{ The $\kappa$ values of the symmetric point, determined from fits to the pseudoscalar meson data. \label{ksymtab}} \end{center} \end{table} The next question is whether we have the value of $\overline{m}$ correctly matched to the physical value.
This is checked by comparing the averaged pseudoscalar mass squared, $X^2_\pi$,~(\ref{Xpidef}), with the corresponding baryon scale \begin{equation} X^2_N = \third \left[ (M_N^\star)^2 + (M_\Sigma^\star)^2 + (M_\Xi^\star)^2 \right] \;. \label{XNdef} \end{equation} We find $X_N/X_\pi = 2.79(3)$, very close to the physical value, 2.81, showing that our tuning of $\overline{m}$ has been very successful. \subsection{The splitting of the $\pi^+$ and $\pi^0$ masses.} The first quantity we wish to consider is the mass difference between the $\pi^+$ and $\pi^0$ mesons. Since in this case we are calculating a physically observable mass difference, there is no scheme dependence in the result. First we need to find the $\kappa$ values corresponding to the physical quark masses. Since we have three quark masses to determine we need three pieces of physical input; we choose the masses of the $\pi^0$ and the two kaons, \begin{eqnarray} M_{\pi^0} &=& 134.977 {\rm \ MeV,} \nonumber\\ M_{K^0} &=& 497.614 {\rm \ MeV,}\\ M_{K^+} &=& 493.677 {\rm \ MeV} \nonumber \end{eqnarray} at $\alpha_{EM} = 1/137$. This determines the physical point given in Table~\ref{physmass}. We see very close agreement between the lattice scales determined on the two lattice volumes. \begin{table}[htb] \begin{center} \begin{tabular}{|c|cc|} \hline & $ 32^3 \times 64$ & $48^3 \times 96$ \cr \hline $ a \dmu^\star$ & $-0.00834(8)$ &$ -0.00791(4) $\cr $ a \dmd^\star $ & $-0.00776(7) $ &$ -0.00740(4) $\cr $ a \dms^\star$ & $0.01610(15) $ &$ 0.01531(8) $\cr $ a^{-1}$/GeV & 2.89(5) & 2.91(3) \cr \hline \end{tabular} \caption{ Bare quark mass parameters at the physical point, and inverse lattice spacing, defined from $X_\pi$. These masses have been tuned to reproduce the real-world $\pi^0, K^0$ and $K^+$ when $\alpha_{EM} = 1/137$. \label{physmass} } \end{center} \end{table} Using these quark masses we now have a prediction for the one remaining meson mass, the $\pi^+$.
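As an aside on the scale-setting check earlier in this section, the physical value $X_N/X_\pi \approx 2.81$ can be reproduced from experimental masses. This is a rough sketch of ours: we assume, by analogy with~(\ref{XNdef}), that $X_\pi^2 = \third[M_{\pi^0}^2 + M_{K^+}^2 + M_{K^0}^2]$; the precise definition is in~(\ref{Xpidef}).

```python
# Isospin-averaged experimental baryon masses in MeV (PDG central values).
M_N, M_Sigma, M_Xi = 938.92, 1193.15, 1318.29
# Meson masses used as tuning input in the text, in MeV.
M_pi0, M_K0, M_Kp = 134.977, 497.614, 493.677

X_N2  = (M_N**2 + M_Sigma**2 + M_Xi**2) / 3.0
X_pi2 = (M_pi0**2 + M_Kp**2 + M_K0**2) / 3.0   # assumed form of X_pi

ratio = (X_N2 / X_pi2) ** 0.5                  # comes out close to 2.82
```

With this assumed $X_\pi$ the ratio lands within a fraction of a percent of the quoted 2.81, consistent with the lattice value $2.79(3)$.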
Our values on the two lattice spacings are given in Table~\ref{piplus}. \begin{table}[htb] \begin{center} \begin{tabular}{|c|ccc|} \hline & $ 32^3 \times 64$ & $48^3 \times 96$ & Real World \cr \hline $M_{\pi^+}$ & 140.3(5) & 139.6(2) & 139.570 \cr $M_{\pi^+} - M_{\pi^0}$ & 5.3(5) & 4.6(2) & 4.594 \cr \hline \end{tabular} \caption{ The predicted value of the $\pi^+$ mass, and $\pi^+$-$\pi^0$ splitting, in MeV. \label{piplus}} \end{center} \end{table} \subsection{The $\epsilon$ parameters} The $\pi^+$-$\pi^0$ mass splitting that we presented in the previous section is a physically measurable quantity, so it is independent of renormalisation. However, if we now attempt to divide our hadron masses into a QCD part and a QED part, as explained earlier, this is a scheme-dependent concept. When we look with greater resolution we see more short wavelength photons, which had previously been counted as part of the quark mass, and therefore part of the QCD contribution to the mass. The traditional way of expressing the electromagnetic contributions is through the $\epsilon$ parameters, which measure $M^2_\gamma$ in units of \begin{equation} \Delta_\pi \equiv M^2_{\pi^+} - M^2_{\pi^0} \;, \end{equation} a natural choice because it is a quantity of a similar origin, and similar order of magnitude. The $\epsilon$ parameters are defined by~\cite{FLAG} \begin{eqnarray} M^2_\gamma(\pi^0) = M^2_{\pi^0}(g^2,e^2) - M^2_{\pi^0}(g^2,0) &=& \epsilon_{\pi^0} \Delta_\pi \;, \nonumber \\ M^2_\gamma(K^0) = M^2_{K^0}(g^2,e^2) - M^2_{K^0}(g^2,0) &=& \epsilon_{K^0} \Delta_\pi \nonumber\;, \\ M^2_\gamma(\pi^+) = M^2_{\pi^+}(g^2,e^2) - M^2_{\pi^+}(g^2,0) &=& [1 + \epsilon_{\pi^0} -\epsilon_m] \Delta_\pi\;, \\ M^2_\gamma(K^+) = M^2_{K^+}(g^2,e^2) - M^2_{K^+}(g^2,0) &=& \epsilon_{K^+} \Delta_\pi = [1 + \epsilon +\epsilon_{K^0} -\epsilon_m] \Delta_\pi \;. 
\nonumber \end{eqnarray} $\epsilon_{K^+}$ is defined in this way so that the electromagnetic contribution to the following quantity has a simple expression: \begin{equation} [M^2_{K^+} - M^2_{K^0} - M^2_{\pi^+} + M^2_{\pi^0}]_\gamma = \epsilon \Delta_\pi \;. \end{equation} From now on we will neglect the small quantity $\epsilon_m$, the QCD contribution to the $\pi^+$-$\pi^0$ splitting, which comes largely from annihilation diagrams. This is a reasonable approximation here, since phenomenological estimates of this QCD contribution are of order 0.1~MeV (or 2\%)~\cite{Gasser:1982ap}, which is within the precision of our present calculation. In the Dashen scheme the $\epsilon$ parameters are simply \begin{equation} \epsilon_{\pi^0}^D = 0, \qquad \epsilon_{K^0}^D=0, \qquad \epsilon_{\pi^+}^D = 1 \;, \end{equation} with the only non-trivial quantity, $\epsilon^D$, given by \begin{equation} \epsilon^D = \frac{M_\gamma^2(K^+)}{M_\gamma^2(\pi^+)} -1 = \epsilon_{K^+}^D -1 \;. \end{equation} On our two ensembles we find \begin{eqnarray} \epsilon^D = 0.38(10) & \qquad & 32^3 \times 64 \;,\nonumber \\ \epsilon^D = 0.49(5) && 48^3 \times 96 \;, \end{eqnarray} which agree within errors. In what follows, we use the $48^3 \times 96$ value in our calculations.
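The defining relations above can be checked for algebraic consistency (a trivial sketch of ours, with arbitrary illustrative values for the $\epsilon$'s; only $\Delta_\pi$ is taken from the experimental pion masses):

```python
# Arbitrary illustrative values, not fit results.
eps_pi0, eps_K0, eps_m, eps = 0.03, 0.2, 0.02, 0.5
Delta_pi = 139.57**2 - 134.977**2   # M^2_{pi+} - M^2_{pi0} in MeV^2

# Electromagnetic contributions, as defined in the text:
Mg_pi0 = eps_pi0 * Delta_pi
Mg_K0  = eps_K0 * Delta_pi
Mg_pip = (1 + eps_pi0 - eps_m) * Delta_pi
Mg_Kp  = (1 + eps + eps_K0 - eps_m) * Delta_pi

# The combination [M^2_{K+} - M^2_{K0} - M^2_{pi+} + M^2_{pi0}]_gamma collapses
# to eps * Delta_pi: eps_m, eps_K0 and eps_pi0 all cancel.
combo = Mg_Kp - Mg_K0 - Mg_pip + Mg_pi0
```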
Using (\ref{Upsil}) to transform these numbers into $\overline{MS}$ with the scale $\mu=2$~GeV, we find: \begin{eqnarray} \epsilon_{\pi^0} &=& - \alpha_{EM} \Upsilon^{D \to \overline{MS}} \half \left[\textstyle \frac{4}{9} M^2(u \bar u) + \frac{1}{9} M^2(d \bar d) \right] /\Delta_\pi = 0.03 \pm 0.02 \;, \nonumber \\ \epsilon_{\pi^+} &=& \epsilon_{\pi^+}^D - \alpha_{EM} \Upsilon^{D \to \overline{MS}} \half \left[ \textstyle \frac{4}{9} M^2(u \bar u) + \frac{1}{9} M^2(d \bar d) \right] /\Delta_\pi = 1.03 \pm 0.02 \;, \nonumber \\ \epsilon_{K^0} &=& - \alpha_{EM} \Upsilon^{D \to \overline{MS}} \half \left[\textstyle \frac{1}{9} M^2(d \bar d) + \frac{1}{9} M^2(s \bar s) \right] /\Delta_\pi = 0.2 \pm 0.1 \;, \\ \epsilon_{K^+} &=& \epsilon_{K^+}^D - \alpha_{EM} \Upsilon^{D \to \overline{MS}} \half \left[ \textstyle \frac{4}{9} M^2(u \bar u) + \frac{1}{9} M^2(s \bar s) \right] /\Delta_\pi = 1.7 \pm 0.1 \;, \nonumber \\ \epsilon &=& \epsilon^D - \alpha_{EM} \Upsilon^{D \to \overline{MS}} \half \left[\textstyle \frac{4}{9} M^2(u \bar u) - \frac{1}{9} M^2(d \bar d) \right] /\Delta_\pi = 0.50 \pm 0.06\;. \nonumber \end{eqnarray} In all cases we are resolving more photons in $\overline{MS}$, and so converting some fraction of the quark mass into electromagnetic energy. This has very little effect in the pions because both quarks are very light, but a much larger effect in the kaons because the strange quark is heavier, and the photon cloud has a mass proportional to the quark mass. \section{Conclusions} We have investigated isospin breaking in the pseudoscalar meson sector from lattice calculations of QCD+QED. This allows us to look simultaneously at both sources of isospin breaking, the quark mass differences, and the electromagnetic interaction, which are of comparable importance. The physical mass differences between the different particles are directly observable, and so must be independent of the renormalisation scheme and scale used. 
When we try to go beyond this, to say what fraction of a hadron's mass-squared comes from QCD, and from QED, this no longer holds --- changing our resolution changes the fraction. We understand this effect, both formally, in terms of the dependence of the mass renormalisation constant on the electromagnetic coupling, and physically, in terms of the quark mass gaining a contribution from its associated photon cloud. With this understanding, we calculate the electromagnetic contributions to hadron masses in the Dashen scheme, which is easy to implement on the lattice, and then convert these values into the more conventional $\overline{MS}$ scheme. We are also investigating the isospin violating mass splittings in the baryon sector~\cite{Horsley:2015}, as well as the decomposition of these mass differences into QCD and QED parts, both in the Dashen scheme, and in $\overline{MS}$. \section*{Acknowledgements} The computation for this project has been carried out on the IBM BlueGene/Q using DIRAC 2 resources (EPCC, Edinburgh, UK), the BlueGene/P and Q at NIC (J\"ulich, Germany), the SGI ICE 8200 and Cray XC30 at HLRN (The North-German Supercomputer Alliance) and on the NCI National Facility in Canberra, Australia (supported by the Australian Commonwealth Government). Configurations were generated with a new version of the BQCD lattice program~\cite{nakamura10a}, modified to include QCD+QED. Configurations were analysed using the Chroma software library~\cite{edwards04a}. HP was supported by DFG Grant No. SCHI~422/10-1. PELR was supported in part by the STFC under contract ST/G00062X/1, RDY was supported by the Australian Research Council Grant No. FT120100821 and DP140103067 and JMZ was supported by the Australian Research Council Grant No. FT100100005 and DP140103067. We thank all funding agencies.
\section{Introduction and an overview of the main results}\label{Section 1} \subsection{Statistical properties of random Birkhoff sums and a preview}\label{Sec1.1} Probabilistic limit theorems for expanding random dynamical systems have been studied extensively in the past decades. This setup includes a probability space $({\Omega},{\mathcal F},{\mathbb P})$ and a family of locally expanding maps $T_{\omega}, {\omega}\in{\Omega}$ which are composed along an orbit of an ergodic and invertible map ${\theta}:{\Omega}\to{\Omega}$ together with a family of equivariant\footnote{As will be discussed below, in applications $\{\mu_{\omega}\}$ is not just any equivariant family, but it is generated by an appropriate random potential (i.e. random Gibbs measures). In other situations $\mu_{\omega}$ can be the appropriate volume measure.} probability measures $\mu_{\omega}$ (i.e. $(T_{\omega})_*\mu_{\omega}=\mu_{{\theta}{\omega}}$ for ${\mathbb P}$-a.a. ${\omega}$) on the domain ${\mathcal E}_{\omega}$ of $T_{\omega}$. When considering a random point $x_0$ distributed according to $\mu_{\omega}$ we get random orbits $$ T_{{\omega}}^n x_0=T_{{\theta}^{n-1}{\omega}}\circ\dots\circ T_{{\theta}{\omega}}\circ T_{\omega} x_0 $$ and the question is whether for ${\mathbb P}$-almost all ${\omega}\in{\Omega}$ random Birkhoff sums of the form $S_n^{\omega}=S_n^{\omega} u(x_0)=\sum_{j=0}^{n-1}u_{{\theta}^j{\omega}}\circ T_{\omega}^j(x_0)$ obey limit theorems like the quenched central limit theorem\footnote{Let us recall that the quenched CLT means that for ${\mathbb P}$-a.a. ${\omega}$ the sequence of random variables $n^{-1/2}(S_n^{\omega}-{\mathbb E}[S_n^{\omega}])$ converges in distribution to a centered normal random variable with variance ${\sigma}^2=\lim_{n\to\infty}\frac 1n\text{Var}(S_n^{\omega})$.} (CLT) and its variety of stronger versions.
Here $u_{\omega}$ is a random function on the domain of $T_{\omega}$ satisfying some regularity conditions like H\"older continuity (not necessarily uniformly in ${\omega}$). The maps $T_{\omega}$ can very often be described by means of random parameters $a_1({\omega}),...,a_d({\omega})$ such as the minimal amount of local expansion, the degree, the ``ratio'' between contraction and expansion, the variation of the logarithm of the Jacobian, etc. We call the maps uniformly random when the random variables $a_i({\omega})$ take values in appropriate domains (e.g. minimal local expansion bounded away from $1$, bounded degree, bounded variation, etc.). Most of the statistical properties in the literature were obtained in the uniformly random case\footnote{See, for instance, the recent results \cite{Davor ASIP, HK, Davor TAMS, Davor CMP, HafYT, DH} and references therein.}, with the exception\footnote{See \cite{ANV} and references therein for results for iid maps when $u_{\omega}$ does not depend on ${\omega}$ and \cite{Su} for results for iid maps which admit random tower extensions with sufficiently fast decaying tails.} of certain types of maps so that $\{T_{{\theta}^j{\omega}}: 0\leq j<\infty\}$ and $\{u_{{\theta}^j{\omega}}: 0\leq j<\infty\}$ are iid processes on the probability space $({\Omega},{\mathcal F},{\mathbb P})$. Beyond the iid case we are not aware of even a single explicit example with a true non-uniform behavior for which the quenched CLT holds true. The purpose of this manuscript is to provide explicit sufficient conditions for several limit theorems like the CLT for non-uniformly random (partially) expanding maps (which will provide a variety of examples beyond the iid case). Let us note that in \cite[Theorem 2.3]{Kifer 1998} an inducing strategy was developed in order to prove the CLT and related results in the non-uniformly random case.
The conditions in \cite[Theorem 2.3]{Kifer 1998} require a certain type of regularity of the behavior of the first visiting time to a measurable set $A\subset{\Omega}$ with positive probability so that $\{T_{{\omega}}, {\omega}\in A\}$ are uniformly expanding in an appropriate sense. While the results in \cite{Kifer 1998} were new even in the uniformly random case, to the best of our knowledge there are no examples in the literature showing how to apply this method beyond the uniformly random case (where we can take $A={\Omega}$). Some of the proofs in this paper will be based on applying the inducing strategy of \cite{Kifer 1998} in the non-uniformly random case (see Section \ref{Int Limit} for a more detailed discussion). As will be explained in detail in Section \ref{Int Limit}, our conditions for the CLT and the functional law of the iterated logarithm (LIL) will involve some mixing (weak-dependence) related conditions\footnote{These conditions will always hold true under appropriate restrictions on some upper mixing coefficients related to $({\Omega},{\mathcal F},{\mathbb P},{\theta})$.} on sequences $(f_n)$ of random variables of the form $f_n({\omega})=f(a_1({\theta}^n{\omega}),...,a_d({\theta}^n{\omega}))$, where $f$ has an explicit form, together with the integrability assumption $\|u_{\omega}-\mu_{\omega}(u_{\omega})\|_{{\alpha}}\in L^p({\Omega},{\mathcal F},{\mathbb P})$, $p>2$, where $\|\cdot\|_{{\alpha}}$ is the standard H\"older norm corresponding to some exponent ${\alpha}$. For instance, when $T_{\omega}$ is a piecewise monotone map on the unit interval with full images on each monotonicity interval, whose Jacobian has sufficiently regular\footnote{See more details in the paragraph below.} variation on each monotonicity interval\footnote{e.g.
fiberwise sufficiently small $C^2$-perturbations of piecewise linear maps, see Section \ref{Sec per lin}.} and its minimal amount of expansion on the monotonicity intervals is denoted by $\gamma_{\omega}>1$, then our general conditions yield that the CLT holds true when $\|u_{\omega}-\mu_{\omega}(u_{\omega})\|_{{\alpha}}\in L^p({\Omega},{\mathcal F},{\mathbb P})$ and the sequence of random variables $(\gamma_{{\theta}^n{\omega}})_{n=0}^{\infty}$ on $({\Omega},{\mathcal F},{\mathbb P})$ satisfies some weak type of upper mixing condition. Our conditions for the other limit theorems are similar (some also require certain integrability conditions, see Section \ref{Int Limit}). \subsection{On the types of Gibbs measures and the smooth case} The equivariant measures $\mu_{\omega}$ considered in this manuscript correspond to a piecewise H\"older continuous random potential $\phi_{\omega}$ and have the property that they are absolutely continuous with respect to a random conformal measure $\nu_{\omega}$, so that $({\mathcal L}_{\omega})^*\nu_{{\theta}{\omega}}={\lambda}_{\omega}\nu_{\omega}$, where ${\mathcal L}_{\omega} g(x)=\sum_{y: T_{\omega} y=x}e^{\phi_{\omega}(y)}g(y)$ is the transfer operator of $T_{\omega}$ and ${\lambda}_{\omega}>0$ (namely $\mu_{\omega}$ is the random Gibbs measure corresponding to $\phi_{\omega}$, see \cite{MSU,Varandas}). We will call the case ``smooth'' if the domain of $T_{\omega}$ is a smooth manifold and $e^{-\phi_{\omega}}$ is the Jacobian of $T_{\omega}$ with respect to the volume measure on the domain (i.e. $\phi_{\omega}=-\ln J_{T_{\omega}}$). In this case we have ${\lambda}_{\omega}=1$ and $\nu_{\omega}$ is the volume measure $m_{\omega}$, and so $\mu_{\omega}$ is absolutely continuous with respect to $m_{\omega}$.
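As a toy illustration of these objects (ours, not an example from the paper): for the deterministic doubling map with constant potential $\phi=-\ln 2$, the conformal measure is Lebesgue, ${\lambda}=1$, and the transfer operator can be iterated directly.

```python
# Doubling map T(x) = 2x mod 1 with potential phi = -ln 2. The transfer
# operator L g(x) = (1/2)[ g(x/2) + g((x+1)/2) ] is the dual of the Koopman
# operator with respect to Lebesgue measure, which is here both the conformal
# measure nu and the equivariant measure mu.

def L(g):
    return lambda x: 0.5 * (g(x / 2.0) + g((x + 1.0) / 2.0))

def iterate(g, n):
    for _ in range(n):
        g = L(g)
    return g

g = lambda x: x              # test observable with mean 1/2
h = iterate(g, 10)
# L^n g converges to the constant  int g dm = 1/2  at rate 2^{-n}
# (here one can check L^n g(x) - 1/2 = (2x - 1) / 2^{n+1}), matching the
# heuristic that the contraction is governed by the expansion gamma = 2.

# Duality check: the integral of L g equals the integral of g,
# i.e. Lebesgue is conformal with lambda = 1 (midpoint Riemann sums).
xs = [(i + 0.5) / 1000 for i in range(1000)]
mean_g  = sum(g(x) for x in xs) / 1000
mean_Lg = sum(L(g)(x) for x in xs) / 1000
```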
Like in \cite{Varandas}, for partially expanding maps we impose a certain restriction on the oscillation of the underlying potential $\phi_{\omega}$, and, in addition, we will impose a certain restriction on the H\"older constant of $\phi_{\omega}$ along inverse branches of $T_{\omega}$, which will be crucial for obtaining the effective RPF rates that will be discussed in the next section. Like in \cite{Varandas}, because of the restriction on the oscillation, the results are less applicable in the smooth case, but they include applications to random measures $\mu_{\omega}$ of maximal entropy (when $\phi_{\omega}=0$) and in the so-called high temperature regime, when $\phi_{\omega}=\frac{1}{{\beta}}\psi_{\omega}$ for a sufficiently large ${\beta}$ and a given random potential $\psi_{\omega}$ satisfying some regularity conditions. For properly expanding maps it is unnecessary to directly impose restrictions on the oscillation of the potential $\phi_{\omega}$, and we will only impose restrictions on the H\"older constant of the compositions of $\phi_{\omega}$ with the inverse branches of $T_{\omega}$. In the smooth case discussed above, this condition immediately allows applications to piecewise affine maps\footnote{We will also require that each injectivity domain has a full image.} (where $\mu_{\omega}$ is the Lebesgue measure), since then $\phi_{\omega}$ is constant on each inverse branch. We will also show that the type of control needed over the local variation in this case is satisfied for fiberwise piecewise $C^2$-perturbations of such piecewise affine maps (see Section \ref{Sec per lin}), and so we provide several examples in the smooth case, as well. \subsection{On our approach: effective RPF rates} The first part of our approach is based on effective rates in the random version of the (normalized) Ruelle-Perron-Frobenius theorem (RPF), a notion which for the sake of convenience is presented here as a definition.
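A minimal concrete instance of the piecewise affine situation (our illustration): a full-branch piecewise affine map on the unit interval and its potential in the smooth case,

```latex
\[
T_{\omega}x=s_{i}({\omega})\bigl(x-a_{i}({\omega})\bigr)
\ \text{ for } x\in I_{i}({\omega})=[a_{i}({\omega}),a_{i+1}({\omega})),
\qquad
s_{i}({\omega})=\frac{1}{a_{i+1}({\omega})-a_{i}({\omega})}\geq\gamma_{\omega}>1,
\]
\[
\phi_{\omega}=-\ln J_{T_{\omega}}=-\ln s_{i}({\omega})\ \text{ on } I_{i}({\omega}),
\qquad\text{so}\qquad
\phi_{\omega}\circ y_{i,{\omega}}\equiv\text{const}
\ \text{ for every inverse branch } y_{i,{\omega}}.
\]
```

Since each $\phi_{\omega}\circ y_{i,{\omega}}$ is constant, its H\"older constant vanishes and any restriction on the variation of the potential along inverse branches is satisfied trivially; the perturbations treated in Section \ref{Sec per lin} start from exactly this picture.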
\begin{definition}[Effective random rates] Let $\phi_{\omega}$ be a random potential whose supremum norm is bounded by some random variable $b_{\omega}$ and its ``variation'' (e.g. local H\"older constant) is bounded by a random variable $v_{\omega}$. Let $\mu_{\omega}$ be the random equivariant (i.e. $(T_{\omega})_*\mu_{\omega}=\mu_{{\theta}{\omega}}$) Gibbs measure corresponding to the potential. We say that the transfer operators\footnote{$L_{\omega}$ is the dual of the Koopman operator $g\to g\circ T_{\omega}$ with respect to the probability measure $\mu_{\omega}$.} $L_{\omega}$ of $T_{\omega}$ (with respect to $\mu_{\omega}$) have random effective rates when acting on a space of functions with ``bounded variation'' if there are random variables $0<\rho({\omega})<1$ and $U({\omega})\geq0$ which depend analytically only on the random parameters $a_1({\omega}),...,a_d({\omega})$ and on $b_{\omega}$ and $v_{\omega}$ so that ${\mathbb P}$-a.s. for every function $g$ on the domain of $T_{\omega}$ with bounded variation (i.e. $\|g\|_{var}<\infty$) we have \begin{equation}\label{Effective R} \left\|L_{\omega}^ng-\int gd\mu_{\omega}\right\|_{var}\leq U({\theta}^n{\omega})\rho_{{\omega},n}\|g\|_{var} \end{equation} where $L_{\omega}^n=L_{{\theta}^{n-1}{\omega}}\circ\cdots\circ L_{{\theta}{\omega}}\circ L_{\omega}$ and $\rho_{{\omega},n}=\prod_{j=0}^{n-1}\rho({\theta}^j{\omega})$. \end{definition} In this paper $\|g\|_{var}$ will always be the H\"older norm corresponding to some exponent ${\alpha}$. We refer to Theorem \ref{RPF} for a more precise formulation of the effective RPF rates \eqref{Effective R} obtained in this paper. We also refer to Theorem \ref{Complex RPF} for effective rates for appropriate complex perturbations of the operators $L_{\omega}$, which will be crucial for obtaining some of our results (see Section \ref{Complex Intro}).
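For orientation, \eqref{Effective R} immediately yields quenched decay of correlations (a standard consequence, sketched here using the duality $\mu_{{\omega}}(g\cdot f\circ T_{\omega}^n)=\mu_{{\theta}^n{\omega}}(f\cdot L_{\omega}^n g)$ and the fact that the H\"older norm dominates the supremum norm):

```latex
\begin{align*}
\Big|\mu_{\omega}\big(g\cdot f\circ T_{\omega}^{n}\big)
     -\mu_{\omega}(g)\,\mu_{{\theta}^{n}{\omega}}(f)\Big|
&=\Big|\mu_{{\theta}^{n}{\omega}}\Big(f\cdot\big(L_{\omega}^{n}g-\mu_{\omega}(g)\big)\Big)\Big| \\
&\leq \|f\|_{L^{1}(\mu_{{\theta}^{n}{\omega}})}\,
      \big\|L_{\omega}^{n}g-\mu_{\omega}(g)\big\|_{\infty}
\;\leq\; \|f\|_{L^{1}(\mu_{{\theta}^{n}{\omega}})}\,
      U({\theta}^{n}{\omega})\,\rho_{{\omega},n}\,\|g\|_{var}.
\end{align*}
```

The point of the definition is that both $U$ and $\rho$ on the right-hand side are explicit functions of the parameters describing the maps, rather than abstract tempered random variables.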
As noted before, for partially expanding maps in the sense of \cite{Varandas} we obtain effective (real and complex) rates for potentials $\phi_{\omega}$ with a sufficiently regular oscillation (which has applications for measures of maximal entropy and in the high temperature regime). As we have already mentioned, this condition limits the applications in the smooth case, but for properly expanding maps (in the sense of \cite{MSU, HK}) we will only require that for each inverse branch $y_{i,{\omega}}$ of $T_{{\omega}}$ the H\"older constant of $\phi_{\omega}\circ y_{i,{\omega}}$ does not exceed $(\gamma_{{\theta}{\omega}}^{\alpha}-1)$, where $\gamma_{\omega}$ is the minimal amount of local expansion of $T_{\omega}$. This condition means that $\phi_{\omega}\circ y_{i,{\omega}}$ is close to being a constant when the map $T_{{\theta}{\omega}}$ has a small amount of expansion. As mentioned at the end of Section \ref{Sec1.1}, the latter condition on the H\"older constants is satisfied for appropriate types of perturbations of piecewise linear or affine maps (see Section \ref{Sec per lin}). The proof of Theorem \ref{RPF} is based on showing that the non-normalized transfer operator ${\mathcal L}_{\omega}$ of $T_{\omega}$ contracts (w.r.t. the real Hilbert metric) an appropriate family of random cones ${\mathcal C}_{\omega}$ which are defined by means of the parameters $a_i({\omega})$ (for instance, for properly expanding maps ${\mathcal C}_{\omega}$ is defined only by means of $\gamma_{\omega}$, where $\gamma_{\omega}$ is the minimal amount of local expansion).
This is the main difference here in comparison to many other applications of the contraction properties of the real Hilbert metric for random operators (see \cite{Kif-RPF, Kifer thermo, MSU, HK, Varandas} and references therein), where the cones are usually defined by means of a random variable which can be expressed as a series of known random variables (but with unclear integrability or other regularity properties). As mentioned above, in the setup of \cite{MSU, HK} the price to pay for being able to use more explicit cones is an additional limitation on the variation of the potential $\phi_{\omega}$ along inverse branches, while in the setup of \cite{Varandas} we will also require that the amount of expansion dominates the amount of contraction fiberwise and not only on the average. We would like to think of $\rho({\omega})$ in \eqref{Effective R} as the amount of contraction we have on the fiber ${\omega}$. We refer the readers to Remark \ref{Int cond U } for a discussion about situations where $U({\omega})$ is actually bounded. For instance, for the aforementioned example of perturbations of piecewise affine maps (described in Section \ref{Sec per lin}), $U({\omega})$ is bounded if the H\"older constant of the logarithm of the Jacobian of $T_{\omega}$ (on each inverse branch) is bounded\footnote{We refer to Remarks \ref{M rem} and \ref{M rem 2} for a discussion about this matter and its relation to artificially limiting the minimal amount of expansion by forcing it to be bounded above. This can always be done, but then stronger conditions on the potential are needed, which essentially reduce to the boundedness of the variation of the potential along inverse branches, which in the smooth case is the negative logarithm of the Jacobian.}, while when this H\"older constant is not bounded, we have $U({\omega})\leq C \gamma_{\omega}^{\alpha} e^{\gamma_{\omega}^{\alpha}}$ where $\gamma_{\omega}$ is the minimal amount of expansion.
For more general expanding maps we have $U({\omega})\leq C\gamma_{\omega}^{\alpha} e^{4\sup|\phi_{\omega}|+4\gamma_{\omega}^{\alpha}}\left(\deg(T_{\omega})\deg(T_{{\theta}^{-1}{\omega}})\right)^2$. Note that under certain types of mixing assumptions on $({\Omega},{\mathcal F},{\mathbb P},{\theta})$ the precise form of $\rho({\omega})$ does not make much difference, and only the fact that it is a function of the parameters $a_i({\omega})$ plays a significant role (still, we refer to Sections \ref{Aux1} and \ref{Aux2} for the precise form), since we will work under assumptions guaranteeing that the sequences $(a_i({\theta}^n{\omega}))_{n=0}^\infty$ are sufficiently fast mixing. \subsubsection{\textbf{A comparison with existing less explicit random RPF rates}}\label{Int RPF} Let us compare \eqref{Effective R} with a few other random RPF rates in the literature. For the maps considered in \cite{MSU, Kifer thermo} (see also references therein) and \cite{Varandas} we have \eqref{Effective R} with a constant $\rho$ but with a random variable $U({\omega})$ which is defined by means of first hitting times to certain sets and a certain random variable $Q_{\omega}$ which can be expressed as a series of known random variables (see, for instance, \cite[Lemma 3.18]{MSU}). In fact, the proof of these results relies\footnote{When \eqref{Effective R} holds true with any kind of random variables $U$ and $\rho$, we can replace $\rho({\omega})$ by a constant smaller than $1$ by considering the number of visits to a set of the form $A_{\varepsilon}=\{\rho({\omega})<1-{\varepsilon}\}$ for ${\varepsilon}\in(0,1)$ small enough. However, this will make the ``new'' $U({\omega})$ less explicit and with unclear regularity properties.} on obtaining rates of the form \eqref{Effective R} with random $\rho({\omega})$ which depends on $Q_{\omega}$ (see, for instance, \cite[Proposition 3.17]{MSU}).
Note that even though $Q_{\omega}$ has a closed ``formula'' it is unclear which type of regularity conditions (e.g. integrability) it satisfies. In any case, since it is not clear which regularity properties $U({\omega})$ has in this setup, it is less likely that these rates will be effective for proving limit theorems under explicit conditions (and not conditions involving some restrictions on the random variable $U({\omega})$). Another less direct approach is based on an appropriate version of the Oseledets theorem for the cocycle of transfer operators $\{L_{\omega}:\,{\omega}\in{\Omega}\}$, and under certain logarithmic integrability conditions (see \cite[Proposition 26]{DS1} and references therein), it yields that \begin{equation}\label{Var RPF} \left\|L_{\omega}^n g-\int gd\mu_{\omega}\right\|_{var}\leq K({\omega})e^{-n{\lambda} }\|g\|_{var} \end{equation} for some ${\lambda}>0$ and a tempered\footnote{Namely, almost surely we have $\lim_{n\to\infty}\frac1{n}\ln K({\theta}^n{\omega})=0$.} random variable $K({\omega})$. Notice that once \eqref{Var RPF} is established with some ${\lambda}$ then the minimal choice for $K({\omega})$ is $$ K({\omega})=\sup_{n}\|L_{\omega}^n-\mu_{\omega}\|_{var}e^{n{\lambda} }. $$ Remark also that $$ {\lambda}\leq {\lambda}_0({\omega}):=-\limsup_{n\to\infty}\frac 1n\ln\|L_{\omega}^n-\mu_{\omega}\|_{var},\,\,{\mathbb P}\text{-a.s.} $$ and so, in a sense, ${\lambda}={\lambda}^{(0)}:=\text{ess-inf }{\lambda}_0({\omega})$ is the largest possible choice for ${\lambda}$. We note that when $\ln U$ is integrable then \eqref{Effective R} yields that $ {\lambda}^{(0)}\geq \bar{\lambda}=-\int \ln\rho({\omega})d{\mathbb P}({\omega}) $, so that $\bar{\lambda}$ is an explicit lower bound on the rate of exponential convergence towards $\mu_{\omega}$.
Even though we have the above explicit form for $K({\omega})$, it is unclear which type of regularity (beyond being tempered) the random variable $K({\omega})$ possesses\footnote{e.g., whether it is in $L^p({\Omega},{\mathcal F},{\mathbb P})$ for some $p$ or if $(K({\theta}^n{\omega}))$ satisfies some weak-dependence conditions.} or if it has a finite upper bound which depends (in a reasonable way) only on the parameters $a_i({\omega})$ describing the maps $T_{\omega}$. Under the conditions of \cite[Proposition 26]{DS1}, limit theorems were obtained in \cite{DS1, DHS} in the smooth case for expanding-on-the-average maps $T_{\omega}$ and random potentials $u_{\omega}$ satisfying (roughly speaking) that $K({\omega})\|u_{\omega}\|_{var}\leq C$ for\footnote{Let us also note that in \cite[Appendix A]{DHS} it is demonstrated that, in general, scaling conditions of this form are necessary for the validity of certain limit theorems.} some constant $C$. In comparison with the smooth case considered in \cite{DS1, DHS}, we restrict ourselves to maps which have some fiberwise expansion (maybe not on the entire space) and not only expansion on the average. Moreover, we will have an additional assumption on the Jacobian (which will be satisfied for certain perturbations of piecewise affine maps) and a certain type of upper mixing conditions on the system $({\Omega},{\mathcal F},{\mathbb P},{\theta})$ as well. On the other hand, as noted above, in general $K({\omega})$ does not seem to be ``computable", and we also consider more general families of equivariant measures $\mu_{\omega}$ corresponding to potentials with sufficiently regular variation (e.g. the maximal entropy and the high-temperature regime cases discussed above). Finally, let us mention related results for (partially hyperbolic) iid maps $\{T_{{\theta}^j{\omega}}:\,j\geq0\}$ which admit a random (Young) tower extension (see \cite{BBM, DU, BBR, ABR}).
In this setup estimates of the form \begin{equation}\label{Rppf} \left\|L_{\omega}^n g-\int gd\mu_{\omega}\right\|_{L^1(\mu_{\omega})}\leq K({\omega})a_n\|g\|_{var} \end{equation} were obtained for some sequences $a_n\to0$ (the decay rate of $a_n$ is determined by the decay rates of the tails of the random tower) and a random variable $K({\omega})$ which satisfies certain regularity conditions like $K({\omega})\in L^{p}({\Omega},{\mathcal F},{\mathbb P})$ for some $p>1$. While here as well $K({\omega})$ does not seem to depend only on some parameters describing the original maps (or something similar), integrability conditions on $K({\omega})$ together with polynomial decay of $a_n$ are sufficient to get appropriate control over the non-uniform decay of correlations, which is enough to prove limit theorems like an almost sure invariance principle (see \cite{Su}). However, this is obtained only for iid maps which admit a sufficiently regular random tower extension (and iid functions $\{u_{{\theta}^j{\omega}}:\,0\leq j<\infty\}$). Moreover, even for iid maps several other limit theorems like the ones described in Section \ref{Complex Intro} seem to require more than \eqref{Rppf}. \subsection{A more detailed discussion on the proofs and conditions of the limit theorems}\label{Int Limit} The main difficulty in proving limit theorems in the non-uniformly random case (beyond the iid case) is that the iterates of the annealed transfer operator (see \cite{ANV}) do not describe the statistical behavior of the random Birkhoff sums, and due to strong dependence between $T_{{\omega}},u_{\omega}$ and $T_{{\theta}{\omega}},u_{{\theta}{\omega}}$ it seems less likely that a random tower extension with sufficiently fast decaying tails exists (see again \cite{BBM, DU, BBR, ABR} and \cite{Su}). Instead, our results will rely on the effective rates \eqref{Effective R} described in the previous sections, as described in the following paragraphs.
We present two proofs of the central limit theorem (CLT) and the functional law of the iterated logarithm (LIL). The first one is based on inducing, and more precisely we use the inducing strategy in \cite[Theorem 2.3]{Kifer 1998}. To the best of our knowledge, this is the first time that a result based on inducing in the ${\Omega}$ direction is applied effectively for expanding maps like the ones considered in this paper (namely, that the required control over the system between two visits to the inducing set in the base ${\Omega}$ is achieved). The idea in our proof is that, using \eqref{Effective R}, the conditions of \cite[Theorem 2.3]{Kifer 1998} reduce to certain almost sure growth conditions which involve the random variables $\rho({\omega}), U({\omega})$ and $c({\omega})=\|u_{\omega}-\mu_{\omega}(u_{\omega})\|_{var}=\|u_{\omega}-\mu_{\omega}(u_{\omega})\|_{{\alpha}}$, which in turn can be verified under certain types of mild upper weak-dependence (mixing) assumptions on the sequences\footnote{Recall that $\rho({\omega})$ and $U({\omega})$ are functions of $a_i({\omega})$ and so it is enough to impose upper weak-dependence conditions on $(a_i({\theta}^n{\omega}))_{n=0}^\infty$ for $i=1,2,...,d$.} $(\rho({\theta}^n{\omega}))_{n=0}^\infty$ and $(U({\theta}^n{\omega}))_{n=0}^\infty$ and the integrability condition $c(\cdot)\in L^p({\Omega},{\mathcal F},{\mathbb P})$ for some $p>2$. We stress that integrability conditions on $U({\omega})$ are not required and all that is needed is some type of upper mixing condition and integrability assumptions on $c({\omega})$. Our second proof of the CLT and LIL is not based on inducing, and instead it exploits \eqref{Effective R} directly and also requires that $U({\omega})\in L^p$ (as noted above, $U({\omega})$ is even bounded for a wide class of maps, see Remark \ref{Int cond U }).
While in general integrability assumptions on $U({\omega})$ are genuinely additional requirements, the second type of sufficient conditions for the CLT has two advantages over the first set. First, it requires much weaker restrictions on certain upper mixing coefficients related to the system $({\Omega},{\mathcal F},{\mathbb P},{\theta})$. Second, it allows weaker approximation rates of $(\rho({\theta}^n{\omega}))_{n=0}^\infty$ and $(U({\theta}^n{\omega}))_{n=0}^\infty$ than the ones required in the first set of conditions (in the case when the latter sequences can only be approximated by sequences satisfying some type of upper weak-dependence conditions). We also obtain an almost sure invariance principle (ASIP), which concerns strong approximation of the random Birkhoff sums by sums of independent Gaussian random variables (and is a stronger form of the CLT). Under some upper weak-dependence assumptions on $(\rho({\theta}^n{\omega}))_{n=0}^\infty$ and $(U({\theta}^n{\omega}))_{n=0}^\infty$ and some integrability conditions we obtain an ASIP with rates $o(n^{1/4+7/(2p)+{\varepsilon}})$, where $p$ is the largest number so that the random variable $Y({\omega})$ described in the last paragraph of Section \ref{Sec1.1} belongs to $L^p$. For instance (see Remark \ref{asip Rem}), under certain regularity assumptions on the potential $\phi_{\omega}$ our integrability conditions are $\|u_{\omega}-\mu_{\omega}(u_{\omega})\|_{\alpha}\in L^p({\Omega},{\mathcal F},{\mathbb P})$ and $N({\omega})\in L^p({\Omega},{\mathcal F},{\mathbb P})$ where $N({\omega})=\sup\{v_{\alpha}(g\circ T_{\omega}): v_{\alpha}(g)\leq 1\}$. In the smooth case these conditions hold true for the aforementioned $C^2$-perturbations of the piecewise affine maps (where here $N({\omega})$ essentially coincides with the maximal amount of expansion of the map). In more general circumstances we also require that $U({\omega})\in L^p({\Omega},{\mathcal F},{\mathbb P})$ (in the situations discussed before it is bounded).
For non-uniformly random iid maps which admit a random tower extension an ASIP was obtained\footnote{The ASIP for uniformly random maps was treated in several papers in different setups, see \cite{Davor ASIP} and the references in \cite{Su, CIRM paper}.} in \cite{Su}, while for non-uniformly random expanding maps driven by a general ergodic system $({\Omega},{\mathcal F},{\mathbb P},{\theta})$ it was obtained in \cite{CIRM paper} by inducing on an appropriate set $A$. The conditions in \cite{CIRM paper} reduce to certain assumptions on the behavior of the random Birkhoff sums $S_n^{\omega} u$ when $n$ is smaller than the first visiting time $n_A({\omega})$ of the orbit of ${\omega}$ to $A$. The proof of the ASIP in this paper is not based on inducing, and instead we apply \eqref{Effective R} directly, but we still think it could be interesting to check how \eqref{Effective R} can be combined with an inducing strategy in order to yield some ASIP rates. Finally, we would also like to refer to \cite{DHS}, where an ASIP was obtained under the scaling conditions described in the penultimate paragraph of Section \ref{Int RPF}. \subsubsection{\textbf{Results which also require random complex effective rates, and the deterministic case}}\label{Complex Intro} \, We also derive a moderate deviations principle (MDP) which deals with the asymptotic behavior of probabilities of the form $\mu_{\omega}\{(S_n^{\omega} u-\mu_{\omega} (S_n^{\omega} u))/a_n\in\Gamma\}$ where $(a_n)$ is a certain type of normalizing sequence and $\Gamma\subset{\mathbb R}$ is an arbitrary Borel set (we refer to these results as moderate deviations\footnote{As opposed to large deviations.} because of the quadratic rate function involved in the formulation).
These results are obtained under an additional condition on the potential $u_{\omega}$ which, roughly speaking, means that either the H\"older norm $\|u_{\omega}\|_{\alpha}$ is small when $T_{\omega}$ has some inverse branch with a small amount of contraction, or that it is small when the ratio between the amount of expansion and contraction is close to $1$. Such a condition is close in spirit to the scaling conditions in \cite{DS1, DHS} discussed in the penultimate paragraph of Section \ref{Int RPF}, but the scaling is done according to the amount of expansion of $T_{\omega}$. Under the same additional requirement on the random functions $u_{\omega}$, we will also obtain self-normalized CLT rates and a moderate version of the local CLT. Our CLT rates are of order $n^{-(1/2-6/p)}$ when appropriate random variables (like the ones discussed in previous paragraphs) belong to $L^p({\Omega},{\mathcal F},{\mathbb P})$. When these random variables are bounded (i.e. in the uniformly random case) we have $p=\infty$ and this result recovers the Berry-Esseen theorem \cite[Theorem 7.1.1]{HK} (where the optimal $n^{-1/2}$ rates were obtained, see also \cite{DH, HafYT}). In \cite[Theorem 7.15]{HK}, in the uniformly random case, a local CLT was derived for the type of expanding maps considered in this paper (see also \cite{Davor CMP, Davor TAMS, DH, HafYT}), but the moderate type of local CLT considered here is on a different scale: it holds true without the additional aperiodicity conditions assumed in \cite{HK, Davor CMP, Davor TAMS, DH, HafYT}, and it is new even in the uniformly random case. On the other hand, it provides local CLT estimates on a weaker scale. The proofs of the MDP, the CLT rates and the moderate local CLT require effective rates for appropriate complex perturbations of the transfer operators $L_{\omega}$, which are established in Theorem \ref{Complex RPF}.
In fact, for partially expanding maps (as in Section \ref{Maps2}), Theorem \ref{Complex RPF} is new even in the uniformly random case (in that case $U({\omega})$ and $\rho({\omega})$ are constants in the appropriate complex version). The proof of Theorem \ref{Complex RPF} uses Rugh's theory \cite{Rugh} of the contraction properties of complex Hilbert metrics associated with complex cones (see also \cite{Dub1, Dub2}). For uniformly random properly expanding maps $T_{\omega}$ this method was applied successfully for random complex transfer operators for the first time in \cite[Ch. 4-6]{HK}, and here we show how to apply it when the amount of contraction at the ``jump" from ${\omega}$ to ${\theta}{\omega}$ depends on ${\omega}$ (roughly speaking, the amount of contraction is $\rho({\omega})$ appearing in \eqref{Effective R}), as well as for partially expanding maps (for such maps our results are new even in the uniformly random case). Finally, let us note that for the partially expanding maps considered in this paper, the application of \cite{Rugh} is new even in the deterministic case (i.e. in the setup of \cite{castro}). This results in explicit bounds on the spectral gap of appropriate complex perturbations of the dual operator of the Koopman operator corresponding to the deterministic map $T$ and the underlying Gibbs measure. Using such estimates\footnote{The idea is that, contrary to the classical perturbative approach based on an appropriate implicit function theorem, we can control the size of perturbation as well as obtain explicit bounds on the corresponding complex RPF triplets.} we can obtain, for instance, explicit constants in the Berry-Esseen theorem. That is, the methods used in this paper also make it possible to extend \cite[Theorem 1.1]{Dub2} from the properly expanding case to the partially expanding case, and we expect other similar quantitative results to follow.
\section{Random expanding maps} As mentioned in Section \ref{Section 1}, similarly to \cite{Varandas} we will consider partially expanding random maps and random Gibbs measures corresponding to random potentials with sufficiently small random oscillation and a small H\"older constant along inverse branches. However, when the maps are properly expanding (i.e. all local inverse branches are strongly contracting) we only need the condition about the H\"older constants, which will allow applications in the smooth case. For that reason we begin the presentation in the setup of \cite[Ch. 6]{HK} (which is similar to \cite{MSU}) and only after that we will present the setup of partially expanding maps. \subsection{Properly expanding maps with a local pairing property}\label{Maps1} \subsubsection{Random spaces and maps} Our setup consists of a probability space $({\Omega},{\mathcal F},{\mathbb P})$ together with an invertible ergodic ${\mathbb P}$-preserving transformation ${\theta}:{\Omega}\to{\Omega}$, of a compact metric space $({\mathcal X},\rho)$ normalized in size so that $\text{diam}{\mathcal X}\leq 1$ together with the Borel ${\sigma}$-algebra ${\mathcal B}$, and of a set ${\mathcal E}\subset{\Omega}\times {\mathcal X}$ measurable with respect to the product ${\sigma}$-algebra ${\mathcal F}\times{\mathcal B}$ such that the fibers ${\mathcal E}_{\omega}=\{x\in {\mathcal X}:\,({\omega},x)\in{\mathcal E}\},\,{\omega}\in{\Omega}$ are compact. The latter yields (see \cite{CV} Chapter III) that the mapping ${\omega}\to{\mathcal E}_{\omega}$ is measurable with respect to the Borel ${\sigma}$-algebra induced by the Hausdorff topology on the space ${\mathcal K}({\mathcal X})$ of compact subsets of ${\mathcal X}$ and the distance function $\rho(x,{\mathcal E}_{\omega})$ is measurable in ${\omega}$ for each $x\in {\mathcal X}$.
Furthermore, the projection map $\pi_{\Omega}({\omega},x)={\omega}$ on ${\mathcal E}$ is measurable and it maps any ${\mathcal F}\times{\mathcal B}$-measurable set to an ${\mathcal F}$-measurable set (see ``measurable projection" Theorem III.23 in \cite{CV}). \begin{remark} Compactness of either ${\mathcal X}$ or ${\mathcal E}_{\omega}$ will only be needed to insure the measurability of ${\omega}\to {\mathcal E}_{\omega}$ in the above sense. Thus, when ${\mathcal E}_{\omega}={\mathcal X}$ for every ${\omega}$ then our results will remain valid for bounded metric spaces ${\mathcal X}$ which are not necessarily compact. \end{remark} Next, let \[ \{T_{\omega}: {\mathcal E}_{\omega}\to {\mathcal E}_{{\theta}{\omega}},\, {\omega}\in{\Omega}\} \] be a collection of maps between the metric spaces ${\mathcal E}_{\omega}$ and ${\mathcal E}_{{\theta}{\omega}}$ so that the map $({\omega},x)\to T_{\omega} x$ on ${\mathcal E}$ is measurable with respect to the restriction of ${\mathcal F}\times{\mathcal B}$ to ${\mathcal E}$. For every ${\omega}\in{\Omega}$ and $n\in{\mathbb N}$ consider the $n$-th step iterates $T_{\omega}^n$ given by \begin{equation}\label{T om n} T_{\omega}^n=T_{{\theta}^{n-1}{\omega}}\circ\cdots\circ T_{{\theta}{\omega}}\circ T_{\omega}: {\mathcal E}_{\omega}\to{\mathcal E}_{{\theta}^n{\omega}}. \end{equation} Our first additional requirement on the maps $T_{\omega}$ is that there is a constant $\xi\leq 1$ and a random variable $\gamma_{\omega}>1$ so that for every $x,x'\in{\mathcal E}_{{\theta}{\omega}}$ with $\rho(x,x')\leq \xi$ we can write \begin{equation}\label{Pair1.0} T_{\omega}^{-1}\{x\}=\{y_i=y_{i,{\omega}}(x): i<k\}\,\,\text{ and }\,\,T_{\omega}^{-1}\{x'\}=\{y_i'=y_{i,{\omega}}(x'): i<k\} \end{equation} and \begin{equation}\label{Pair2.0} \rho(y_i,y_i')\leq ({\gamma}_{\omega})^{-1}\rho(x,x') \end{equation} for all $1\leq i<k$ (where either $k\in{\mathbb N}$ or $k=\infty$).
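To make the cocycle structure \eqref{T om n} concrete, the following minimal numerical sketch (not part of the formal setup; all specific choices here are illustrative assumptions) takes the base map ${\theta}$ to be an irrational rotation of ${\Omega}=[0,1)$ and lets the fiber map be a full-branch expanding circle map whose degree depends on ${\omega}$:

```python
ALPHA = 0.123456789  # rotation angle for the driving system theta


def theta(w):
    # base dynamics: theta(w) = w + ALPHA mod 1 (ergodic irrational rotation)
    return (w + ALPHA) % 1.0


def T(w, x):
    # fiber map: a full-branch expanding map of [0,1) whose degree depends on w
    k = 2 if w < 0.5 else 3
    return (k * x) % 1.0


def T_n(w, n, x):
    """n-step cocycle T_w^n = T_{theta^{n-1} w} o ... o T_{theta w} o T_w."""
    for _ in range(n):
        x = T(w, x)
        w = theta(w)
    return x
```

Note how the composition advances the base point together with the fiber point, so that the $j$-th map applied is $T_{{\theta}^{j}{\omega}}$, exactly as in \eqref{T om n}.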
\subsubsection{Additional covering or summability assumptions} When $\xi<1$ we assume that $\deg(T_{\omega})<\infty$ (so $k$ above is finite) and that $\deg(T_{\omega})$ is measurable\footnote{We can assume that $\deg(T_{\omega})\leq d_{\omega}$ for some random variable $d_{\omega}$ instead.}. However, when $\xi=1$ (i.e. when we can pair the inverse images of all pairs of points $x,x'$) then we allow $\deg(T_{\omega})$ to be infinite. Next, when $\xi<1$ we assume the following type of covering properties: \begin{assumption}\label{Ass cov} When $\xi<1$ we assume that: \vskip0.1cm (i) there is a $\xi$-cover of ${\mathcal E}_{\omega}$ by points $x_i=x_{i,{\omega}}$ so that for all $i$ we have $T_{\omega}\big(B_{\omega}(x_i,\xi)\big)={\mathcal E}_{{\theta}{\omega}}$. \vskip0.1cm (ii) for all $x,x'\in{\mathcal E}_{{\theta}{\omega}}$, for every $y\in T_{\omega}^{-1}\{x\}$ there exists $y'\in T_{\omega}^{-1}\{x'\}$ so that $\rho(y,y')\leq\xi$. \end{assumption} Note that the second item trivially holds true also when $\xi=1$ since $\text{diam}({\mathcal E}_{\omega})\leq1$. Next, for every ${\omega}\in{\Omega}$ and all $g:{\mathcal E}_{\omega}\to{\mathbb C}$ set \begin{eqnarray*} v(g)=v_{{\alpha},\xi,{\omega}}(g)=\inf\{R: |g(x)-g(x')|\leq R\rho^{\alpha}(x,x')\,\text{ if }\, \rho(x,x')<\xi\}\\ \text{and }\,\,\,\|g\|=\|g\|_{{\alpha},\xi}=\|g\|_\infty+v_{{\alpha},\xi}(g)\hskip1cm \end{eqnarray*} where $\|\cdot\|_\infty$ is the supremum norm and $\rho^{\alpha}(x,x')=\big(\rho(x,x')\big)^{\alpha}$ (and ${\alpha}$ is the same as in \eqref{phi cond}). \begin{remark}\label{RR} If $g:{\mathcal E}\to{\mathbb C}$ is measurable and $g_{\omega}:{\mathcal E}_{\omega}\to{\mathbb C}$ is given by $g_{\omega}(x)=g({\omega},x)$ then the function ${\omega}\to\|g_{\omega}\|$ is measurable by \cite[Lemma 5.1.3]{HK}.
\end{remark} Next, consider the Banach spaces $({\mathcal H}_{\omega},\|\cdot\|)=({\mathcal H}_{\omega}^{{\alpha},\xi}, \|\cdot\|_{{\alpha},\xi})$ of all functions $h:{\mathcal E}_{\omega}\to{\mathbb R}$ such that $\|h\|_{{\alpha},\xi}<\infty$ and denote by ${\mathcal H}_{{\omega},{\mathbb C}}={\mathcal H}_{{\omega},{\mathbb C}}^{{\alpha},\xi}$ the space of all complex-valued functions with $\|h\|_{{\alpha},\xi}<\infty$. \subsubsection{The random potential} Let $\phi:{\mathcal E}\to{\mathbb R}$ be a measurable function so that $\|\phi({\omega},\cdot)\|_{\infty}<\infty$. Fix some ${\alpha}\in(0,1]$, and suppose that for ${\mathbb P}$-a.e. ${\omega}$, all $x$ and $x'$ with $\rho(x,x')\leq\xi$ and all $i$ we have \begin{equation}\label{phi cond} |\phi_{\omega}(y_{i,{\omega}}(x))-\phi_{\omega}(y_{i,{\omega}}(x'))|\leq H_{\omega} \rho^{\alpha}(x,x') \end{equation} where $\phi_{\omega}(x)=\phi({\omega},x)$ and $H_{\omega}$ is a random variable so that \begin{equation}\label{H cond} H_{\omega}\leq \gamma_{{\theta}{\omega}}^{\alpha}-1. \end{equation} This condition means that the H\"older constant of each composition $\phi_{\omega}\circ y_{i,{\omega}}$ is small when the ``next map" $T_{{\theta}{\omega}}$ has a small amount of contraction on some piece of the space. \begin{remark} Condition \eqref{phi cond} holds true when $v(\phi_{\omega})\leq \gamma_{\omega}^{{\alpha}}H_{\omega}$. However, condition \eqref{phi cond} is more general since it only imposes restrictions on the H\"older constant along the inverse branches of $T_{\omega}$, and, as will be demonstrated in Section \ref{Sec per lin}, it allows applications in the smooth case. The idea is that for piecewise affine maps \eqref{phi cond} holds true with $H_{\omega}=0$, and so \eqref{phi cond} and \eqref{H cond} will hold true for appropriate perturbations of such maps (see Section \ref{Sec per lin}).
Conditions \eqref{phi cond} and \eqref{H cond} are also in force when $\phi_{\omega}$ has the form $\psi_{\omega}\,\gamma_{{\omega}}^{\alpha}(\gamma_{{\theta}{\omega}}^{\alpha}-1)$ where $\psi_{\omega}$ has H\"older constant (corresponding to the exponent ${\alpha}$) smaller than $1$ (see also Remark \ref{RR}). \end{remark} \begin{remark} In principle, we can define \begin{equation}\label{H def 1} H_{\omega}=\sup_{i}v(\phi_{\omega}\circ y_{i,{\omega}}) \end{equation} and assume that $H_{\omega}\leq \gamma_{{\theta}{\omega}}^{\alpha}-1$. However, when $\xi=1$ the function ${\omega}\to H_{\omega}$ might not be measurable because we did not assume that ${\omega}\to \deg(T_{\omega})$ is measurable. \end{remark} Next, when $\xi=1$ we need the following summability condition: \begin{assumption} When $\xi=1$ there is a random variable $D_{\omega}<\infty$ so that $$ \sup_{x\in{\mathcal E}_{{\theta}{\omega}}}\sum_{y\in T_{\omega}^{-1}\{x\}}e^{\phi_{\omega}(y)}\leq D_{\omega}. $$ \end{assumption} Note that this assumption trivially holds true when $\xi<1$ with $D_{\omega}=\deg(T_{\omega})e^{\|\phi_{\omega}\|_\infty}$ ($\|\phi_{\omega}\|_\infty=\sup|\phi_{\omega}|$ is measurable by \cite[Lemma 5.1.3]{HK}). \begin{remark}\label{M rem} The non-uniform expansion comes from the possibility that $\gamma_{\omega}$ will be arbitrarily close to $1$ (when $\gamma_{\omega}$ is large then $T_{\omega}$ is strongly expanding). Notice that conditions \eqref{Pair1.0} and \eqref{Pair2.0} remain valid if we replace $\gamma_{\omega}$ with ${\gamma}_{{\omega},M}=\min(M,\gamma_{\omega})$ for some constant $M>1$. While this limits the amount of expansion, some (but not all) of the conditions of the limit theorems that will be proven in this paper require integrability assumptions which are weaker when $\gamma_{\omega}$ is bounded.
On the other hand, forcing $\gamma_{\omega}$ to be bounded by replacing it with $\gamma_{{\omega},M}$ essentially means that instead of \eqref{H cond} we require that $H_{\omega}\leq (\gamma_{{\theta}{\omega},M}^{\alpha}-1)$ for some $M>1$, and so there is a trade-off between the aforementioned integrability conditions and the latter stronger version of \eqref{H cond}. \end{remark} \subsubsection{An example and a comparison with \cite{MSU}} One example of maps which satisfy our conditions with $\xi=1$ are piecewise injective maps. In this case let ${\mathcal E}_{\omega}={\mathcal X}=[0,1)^d$ for some $d\in{\mathbb N}$, and take a random partition of ${\mathcal X}$ into rectangles of the form $[a_1,b_1)\times[a_2,b_2)\times\cdots\times[a_d,b_d)$. Now, on each rectangle we can take a distance expanding map which maps it onto $[0,1)^d$. Note that since ${\mathcal E}_{\omega}$ does not depend on ${\omega}$ there is no need for compactness to ensure its measurability. When $\xi<1$ our main conditions are satisfied by the maps $T_{\omega}$ considered in \cite{MSU,HK}, and we refer to \cite[Section 2.1]{MSU} for examples (see also \cite{Kifer thermo}). In comparison with \cite{MSU} we have two main additional conditions. The first is Assumption \ref{Ass cov} (ii), which corresponds to taking $n_\xi({\omega})=1$ in the topological exactness assumption \cite[(2.3)]{MSU}. The second additional condition is \eqref{H cond}, which is a restriction on the potential $\phi_{\omega}$. While we can always choose a potential which satisfies this condition, it is interesting to see when this setup applies to the smooth case when $\phi_{\omega}=-\ln(\text{J}_{T_{\omega}})$ and $\mu_{\omega}$ is the unique absolutely continuous invariant measure w.r.t. the volume measure, and we refer to Section \ref{Sec per lin} for examples in the smooth case (fiberwise piecewise $C^2$ perturbations of certain piecewise linear or affine maps).
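As a toy illustration of the $\xi=1$ piecewise injective example above (in dimension $d=1$ and with affine branches only; this is an illustrative sketch, not the general construction), one can realize such a map via a partition of $[0,1)$ into intervals, each mapped bijectively onto $[0,1)$; the inverse branch through the $i$-th interval then contracts distances by the interval's length, in line with the pairing conditions \eqref{Pair1.0} and \eqref{Pair2.0}:

```python
def piecewise_affine(cuts, x):
    # cuts = [0 = c_0 < c_1 < ... < c_k = 1]; on [c_{i-1}, c_i) we use the
    # affine bijection onto [0,1), so the map is injective on each piece
    for i in range(1, len(cuts)):
        if x < cuts[i]:
            return (x - cuts[i - 1]) / (cuts[i] - cuts[i - 1])
    raise ValueError("x must lie in [0,1)")


def inverse_branches(cuts, x):
    # one preimage of x per branch; branch i contracts distances by the
    # factor c_i - c_{i-1}, playing the role of gamma^{-1} in (Pair2.0)
    return [cuts[i - 1] + x * (cuts[i] - cuts[i - 1])
            for i in range(1, len(cuts))]
```

Here every pair of points in $[0,1)$ has one paired preimage per branch, which is why $\xi=1$ is the natural regime for this example.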
\subsection{Random maps with dominating expansion}\label{Maps2} \subsubsection{Random spaces and maps} Let $({\Omega},{\mathcal F},{\mathbb P},{\theta}), ({\mathcal X},\rho), \{{\mathcal E}_{\omega}\}$ and $\{T_{\omega}:{\mathcal E}_{\omega}\to{\mathcal E}_{{\theta}{\omega}}\}$ satisfy the same properties described in the first paragraph of Section \ref{Maps1}. In this section, our additional assumptions on the maps $T_{\omega}$ are as follows: we suppose that there exist random variables $l_{\omega}\geq 1$, ${\sigma}_{\omega}>1$, $q_{\omega}\in{\mathbb N}$ and $d_{\omega}\in{\mathbb N}$ so that $q_{\omega}<d_{\omega}$ and for every $x\in{\mathcal E}_{{\theta}{\omega}}$ we can write \begin{equation}\label{Pair1} T_{\omega}^{-1}\{x\}=\{y_{1,{\omega}}(x),...,y_{d_{\omega},{\omega}}(x)\} \end{equation} where for every $x,x'\in{\mathcal E}_{{\theta}{\omega}}$ and for $i=1,2,...,q_{\omega}$ we have \begin{equation}\label{Pair2} \rho(y_{i,{\omega}}(x),y_{i,{\omega}}(x'))\leq l_{\omega}\rho(x,x') \end{equation} while for $i=q_{{\omega}}+1,...,d_{\omega}$, \begin{equation}\label{Pair3} \rho(y_{i,{\omega}}(x),y_{i,{\omega}}(x'))\leq {\sigma}_{\omega}^{-1}\rho(x,x'). \end{equation} The above conditions are satisfied in the setup of \cite{Varandas} (see Section \ref{Dis} for a discussion). We assume here that \begin{equation}\label{a om} a_{\omega}:=\frac{q_{\omega} l_{\omega}^{\alpha}+(d_{\omega}-q_{\omega}){\sigma}_{\omega}^{-{\alpha}}}{d_{\omega}}<1 \end{equation} which is a quantitative estimate on the amount of allowed contraction, given the amount of expansion $T_{\omega}$ has.
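For concreteness, \eqref{a om} can be evaluated numerically; the following sketch (all parameter values in the usage below are purely illustrative) computes $a_{\omega}$ from the data $(q_{\omega},d_{\omega},l_{\omega},{\sigma}_{\omega})$ and the exponent ${\alpha}$, and enforces the standing assumptions:

```python
def a_coeff(q, d, l, sigma, alpha):
    # a_omega = (q * l^alpha + (d - q) * sigma^(-alpha)) / d, cf. (a om):
    # q branches may expand distances by at most l under the inverse branch,
    # while the remaining d - q branches contract by sigma^{-1}
    if not (0 < q < d and l >= 1 and sigma > 1 and 0 < alpha <= 1):
        raise ValueError("parameters violate the standing assumptions")
    return (q * l ** alpha + (d - q) * sigma ** (-alpha)) / d
```

For instance, one branch with $l_{\omega}=1$ and one branch expanding by ${\sigma}_{\omega}=2$ (the Manneville--Pomeau data discussed later in Section \ref{Dis}) give $a_{\omega}=(1+2^{-{\alpha}})/2<1$.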
Next, denote by ${\mathcal H}_{\omega}$ the space of functions on ${\mathcal E}_{\omega}$ equipped with the norm \[ \|g\|=\|g\|_\infty+v(g) \] where $\|g\|_\infty=\sup|g|$ and $v(g)=v_{\alpha}(g)$ is the smallest number so that $|g(x)-g(y)|\leq v(g)\big(\rho(x,y)\big)^{\alpha}$ for all $x$ and $y$ in ${\mathcal E}_{\omega}$ (namely ${\mathcal H}_{\omega}={\mathcal H}_{\omega}^{{\alpha},1}$ in the notations of the previous section). In the case when ${\alpha}=1$ and each ${\mathcal E}_{\omega}$ is a Riemannian manifold we will also consider the norms $\|g\|=\|g\|_{C^1}=\sup|g|+\sup\|Dg\|$ on the space of $C^1$-functions, namely $v(g)$ above is replaced by the supremum norm of the differential of $g$ (so in this case $v(g)$ could either be the Lipschitz constant or $\sup\|Dg\|$). \subsubsection{The random potential} Next, let $\phi:{\mathcal E}\to{\mathbb R}$ be a measurable function and let $\phi_{\omega}:{\mathcal E}_{\omega}\to{\mathbb R}$ be given by $\phi_{\omega}(x)=\phi({\omega},x)$. Let $$ {\varepsilon}_{\omega}=\text{osc}(\phi_{\omega})=\sup\phi_{\omega}-\inf\phi_{\omega} $$ be the oscillation of $\phi_{\omega}$ and for some fixed ${\alpha}\in(0,1]$ let \begin{equation}\label{H def} H_{\omega}=\max\{v(\phi_{\omega}\circ y_{i,{\omega}}):\,1\leq i\leq d_{\omega}\} \end{equation} be the maximal H\"older constant along inverse branches. We assume here that both ${\varepsilon}_{\omega}$ and $H_{\omega}$ are finite. Note that if $\phi_{\omega}$ were H\"older continuous on the entire space ${\mathcal E}_{\omega}$ then $H_{\omega}\leq l_{\omega} v(\phi_{\omega})$.
Our additional requirement on the function $\phi_{\omega}$ is that \begin{equation}\label{Bound ve} s_{\omega}:=e^{{\varepsilon}_{\omega}}a_{\omega}<1\,\,\text{ and }\,\,e^{{\varepsilon}_{\omega}}H_{\omega}\leq\frac{s_{{\theta}{\omega}}^{-1}-1}{1+s_{\omega}^{-1}}. \end{equation} \begin{remark} The assumption about $H_{\omega}$ is a version of the combination of conditions \eqref{phi cond} and \eqref{H cond}. Let ${\delta}_{\omega}$ be so that $(1+{\delta}_{\omega})a_{\omega}<1$ and suppose that $e^{{\varepsilon}_{\omega}}(1+{\delta}_{\omega})a_{\omega}<1$. Then the condition about $H_{\omega}$ is satisfied when $H_{\omega}\leq \frac{{\delta}_{{\theta}{\omega}}a_{\omega}^2}{1+a_{\omega}}$. Note that we can always assume that $a_{\omega}$ is bounded below by some positive constant (by replacing $a_{\omega}$ with $\tilde a_{\omega}=\max(a_{\omega},1-{\varepsilon})$ if needed). This will make no difference in our proofs, and in that case the second condition reads $H_{\omega}\leq C{\delta}_{{\theta}{\omega}}$ for some $C$ which can be arbitrarily close to $1$. \end{remark} \subsubsection{\textbf{A comparison with \cite{Varandas} and (additional) examples}}\label{Dis} \subsubsection*{On the assumptions} Our assumptions \eqref{Pair1}, \eqref{Pair2} and \eqref{Pair3} on the maps correspond to \cite[Assumption (H1)]{Varandas} (see also the proof of \cite[Proposition 5.4]{Varandas}). Our condition $s_{\omega}<1$ is a stronger version of \cite[(2.2)]{Varandas} in \cite[Assumption (H3)]{Varandas}, which requires that $\int \ln s_{\omega} d{\mathbb P}({\omega})<0$ instead. In addition to this difference we also have the additional assumption about $H_{\omega}$, which is an additional fiberwise restriction on the local H\"older constant of the potential on inverse branches. This condition always holds true when $\phi_{\omega}$ is constant on each inverse branch.
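Since the two inequalities in \eqref{Bound ve} couple the data at ${\omega}$ with the data at ${\theta}{\omega}$, verifying them amounts to a pointwise check along a fiber and its successor. A minimal numerical sketch (all inputs in the usage below are illustrative assumptions, not values from the paper):

```python
import math


def s_coeff(eps, a):
    # s_omega = e^{eps_omega} * a_omega, cf. (Bound ve)
    return math.exp(eps) * a


def bound_ve_holds(eps_w, a_w, H_w, eps_tw, a_tw):
    # checks s_omega < 1 (at omega and at theta(omega)) and the inequality
    # e^{eps_omega} H_omega <= (s_{theta omega}^{-1} - 1) / (1 + s_omega^{-1})
    s_w, s_tw = s_coeff(eps_w, a_w), s_coeff(eps_tw, a_tw)
    if s_w >= 1 or s_tw >= 1:
        return False
    return math.exp(eps_w) * H_w <= (1 / s_tw - 1) / (1 + 1 / s_w)
```

For example, with zero oscillation and $a_{\omega}=a_{{\theta}{\omega}}=1/2$ the admissible threshold for $H_{\omega}$ is $1/3$, and the condition fails as soon as the oscillation pushes $s_{\omega}$ past $1$.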
On the other hand, in \cite{Varandas} there are several other assumptions on the maps $T_{\omega}$ like \cite[Assumptions (H4) and (H5)]{Varandas} or \cite[Assumptions (H4) and (H5')]{Varandas} whose purpose is to prove uniqueness of the RPF triplets described in \cite[Theorem A]{Varandas} (see also Theorem \ref{RPF}). While it is natural to work under assumptions that guarantee uniqueness of RPF triplets (and equilibrium states), our results will not require such assumptions and all the limit theorems will hold for a certain type of random equilibrium state (Gibbs measure), which coincides with the unique one under the additional assumptions in \cite{Varandas}. \subsubsection*{Special choices of random potential} Let us discuss two special types of potentials considered in \cite[Theorem D]{Varandas}. First, let us consider the case when $\phi_{\omega}\equiv 0$. This case corresponds to the equivariant measures $\mu_{\omega}$ of maximal entropy (see \cite[Theorem D]{Varandas}), and in our case \eqref{Bound ve} holds true for that choice as long as $a_{\omega}<1$. Again, the main difference in this case in comparison with \cite{Varandas} is that there the weaker assumption $\int \ln a_{\omega} d{\mathbb P}({\omega})<0$ was imposed instead. Another special choice for $\phi_{\omega}$ is the case when $\phi_{\omega}=\psi_{\omega}/T$ for some other random potential $\psi_{\omega}$ and a sufficiently large constant $T$ (this is usually referred to as the high-temperature regime). In the high-temperature regime our results for general potentials are mostly effective in the uniformly random case\footnote{In the non-uniformly random case we can consider $\psi_{\omega}$ which satisfies \eqref{Bound ve} with ${\varepsilon}_{\omega}/T$ instead of ${\varepsilon}_{\omega}$ and take $\phi_{\omega}=\psi_{\omega}/T$.}, but we note that in the setup of this section most of the results will be new even then.
\subsubsection*{Some examples of maps} In \cite[Section 3]{Varandas} several examples were given, and in our setup we can consider the same examples replacing the assumptions about the integral of $\ln a_{\omega}$ by almost sure assumptions on $a_{\omega}$. For instance, let us consider a random finite partition of $[0,1)$ into intervals $I_{{\omega},i}=[a_{{\omega},i},b_{{\omega},i}), i\leq d_{\omega}$. On each $I_{{\omega},i}$ let us take a monotone H\"older continuous map $T_{{\omega},i}:I_{{\omega},i}\to[0,1)$ which is onto $[0,1)$. Let us assume that the absolute values of the derivatives of $T_{{\omega},1},...,T_{{\omega},p_{\omega}}$ are not less than ${\sigma}_{\omega}>1$, while the absolute values of the derivatives of the other $q_{\omega}=d_{\omega}-p_{\omega}$ maps $T_{{\omega},i}$ are not less than $l_{\omega}^{-1}$ for some $l_{\omega}\geq1$. Then all the conditions described before are valid if $a_{\omega}<1$. A particular case is given by the so-called random Manneville–Pomeau maps. Let ${\beta}({\omega})\in(0,1)$ be a random variable and let us take $I_{{\omega},1}=[0,\frac12)$ and $I_{{\omega},2}=[\frac12,1)$. On the first interval, let $T_{{\omega},1}(x)=x(1+(2x)^{\beta({\omega})})$ while on the second we set $T_{{\omega},2}(x)=2x-1$. Then $q_{\omega}=p_{\omega}=1$, ${\sigma}_{\omega}=2$ and $l_{\omega}=1$. In this case $$ s_{\omega}=e^{{\varepsilon}_{\omega}}\frac{1+2^{-{\alpha}}}{2},\, H_{\omega}=\max\left(v_{\alpha}(\phi_{\omega}\circ T_{{\omega},1}^{-1}), v_{\alpha}(\phi_{\omega}\circ T_{{\omega},2}^{-1})\right).
$$ Similar multidimensional examples can be given; for instance, ${\mathcal I}_{{\omega}}=\{I_{{\omega},i}\}$ can be a partition of $[0,1)^d=[0,1)\times\cdots\times [0,1)$ into rectangles with disjoint interiors of the form $I_{{\omega},i}=[a_{1,{\omega}}^{(i)},b_{1,{\omega}}^{(i)})\times\cdots\times[a_{d,{\omega}}^{(i)},b_{d,{\omega}}^{(i)})$, and on each rectangle we can take an injective map whose image is $[0,1)^d$, and assume that some of the maps are distance expanding, while others might contract distances on some regions of the rectangle (e.g. we can start with affine maps and perturb). We would also like to refer to \cite[Section 3.4]{Varandas} for other multidimensional maps, which are included in our setup when the condition $$ \sup_{k}\left(\left(1-\frac{\ell_k}k\right)+\frac{\ell_k}{k}L_k\right)<1 $$ mentioned after \cite[(3.2)]{Varandas} is satisfied (and ${\sigma}_k>1$ for all $k$). Note that there are additional requirements in \cite[Section 3.4]{Varandas}, but as mentioned above their purpose is to ensure the uniqueness of the RPF triplets, which is not a requirement in this paper. \subsection{Frequently used random variables} In this section we will define several random variables and the random cones that will be involved in the formulation of the effective RPF rates (Theorems \ref{RPF} and \ref{Complex RPF}), as well as in the conditions of the limit theorems. \subsubsection{Properly expanding maps}\label{Aux1} For the maps considered in Section \ref{Maps1}, let ${\mathcal C}_{\omega}$ be the real cone defined by $$ {\mathcal C}_{{\omega}}=\{g\in {\mathcal H}_{{\omega}}: g\geq0,\, g(x)\leq e^{\gamma_{\omega}^{\alpha} \rho^{\alpha}(x,x')}g(x') \text{ if } \rho(x,x')\leq\xi\}. $$ Let $q({\omega})=\frac{H_{\omega}+1}{\gamma_{{\theta}{\omega}}^{\alpha}}\in(0,1)$, and when $\xi<1$ we set $$ D({\omega})=2(H_{\omega}+\gamma_{\omega}^{\alpha}\xi^{\alpha})+\deg(T_{\omega})+ 2\ln\left(\frac{1+q({\omega})}{1-q({\omega})}\right)
$$ while when $\xi=1$ we set $$ D({\omega})=\gamma_{{\theta}{\omega}}^{\alpha}+2\ln\left(\frac{1+q({\omega})}{1-q({\omega})}\right). $$ Let \begin{equation}\label{rho} \rho({\omega})=\tanh (D({\omega})/4)\in(0,1). \end{equation} Then $\rho({\omega})$ will serve as a ``one step contraction coefficient", and in our applications it will be the variable appearing on the right hand side of \eqref{Effective R}. Let us also set $\tilde{\rho}({\omega})=\tanh (7D({\omega})/4)\in(0,1)$ which will serve as the one step contraction coefficient in the effective rates for the complex perturbations. Next, when $\xi<1$ we set $$ B_1({\omega})=e^{H_{\omega}+\gamma_{{\theta}^{-1}{\omega}}^{\alpha}\xi^{\alpha}} \deg(T_{{\theta}^{-1}{\omega}}), $$ $$ K_{\omega}=e^{2\|\phi_{\omega}\|_\infty+2\xi^{\alpha} \gamma_{\omega}^{\alpha}}\deg(T_{\omega})(1+\gamma_{\omega}^{\alpha}), $$ $C_{\omega}=4K_{\omega}$, $ U_{\omega}=6B_1^2({\omega})K_{\omega} $ and \begin{equation}\label{UUUU} U({\omega})=C_{\omega} U_\omega. \end{equation} In our applications this $U({\omega})$ will be the variable appearing on the right hand side of \eqref{Effective R}. When $\xi=1$ we set $B_1({\omega})=e^{\gamma_{\omega}^{\alpha}}$, $$ K_{\omega}=(1+\gamma_{\omega}^{\alpha})e^{\gamma_{\omega}^{\alpha}} $$ and define $C_{\omega}=4K_{\omega}$, $U_{\omega}=6B_1^2({\omega})K_{\omega}$ and $U({\omega})=C_{\omega} U_{\omega}$. 
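For orientation, the $\xi=1$ quantities just defined are explicit enough to be evaluated numerically. The following sketch is purely illustrative (the sample values of $H_{\omega}$, $\gamma_{\omega}$ and $\gamma_{{\theta}{\omega}}$ are hypothetical, and we take ${\alpha}=1$); it computes $q({\omega})$, $D({\omega})$, $\rho({\omega})=\tanh(D({\omega})/4)$ and $U({\omega})=C_{\omega}U_{\omega}$:

```python
import math

def xi_one_coefficients(H, gamma, gamma_theta, alpha=1.0):
    """q(w), rho(w) and U(w) in the xi = 1 case, following the formulas
    above; requires H_w <= gamma_{theta w}^alpha - 1 so that q(w) < 1."""
    q = (H + 1.0) / gamma_theta ** alpha
    assert 0.0 < q < 1.0
    D = gamma_theta ** alpha + 2.0 * math.log((1.0 + q) / (1.0 - q))
    rho = math.tanh(D / 4.0)                 # one step contraction coefficient
    B1 = math.exp(gamma ** alpha)
    K = (1.0 + gamma ** alpha) * math.exp(gamma ** alpha)
    C, U_fiber = 4.0 * K, 6.0 * B1 ** 2 * K  # C_w and U_w
    return q, rho, C * U_fiber

q, rho, U = xi_one_coefficients(H=0.5, gamma=2.0, gamma_theta=2.0)
```

As expected from the formulas, $\rho({\omega})$ always lies in $(0,1)$ and increases with $H_{\omega}$ (a worse H\"older constant of the potential gives a worse one step contraction).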
Next, for a random function $u_{\omega}\in{\mathcal H}_{\omega}$ so that $\gamma_{\omega}^{-{\alpha}}v_{\alpha}(u_{\omega})+H_{\omega}\leq\gamma_{{\theta}{\omega}}^{\alpha}-1$ and an equivariant family of probability measures $\mu_{\omega}$, let us also define $\tilde D({\omega})$ like $D({\omega})$, but with $Z_{\omega}=\gamma_{{\omega}}^{-{\alpha}}v_{\alpha}(u_{\omega})+H_{\omega}$ instead of $H_{\omega}$ and $\|u_{\omega}-\mu_{\omega}(u_{\omega})\|_\infty+\|\phi_{\omega}\|_\infty$ instead of $\|\phi_{\omega}\|_\infty$, and set $$ E_{\omega}=c_{\omega}\left(1+\cosh(\tilde D({\omega})/2)\right) $$ where $$ c_{\omega}=3\|u_{\omega}-\mu_{\omega}(u_{\omega})\|_\infty+\frac{H_{\omega}+v_{\alpha}(u_{\omega})\gamma_{\omega}^{-{\alpha}}}{\gamma_{{\theta}{\omega}}^{{\alpha}}-(1+H_{\omega}+\gamma_{{\omega}}^{-{\alpha}}v_{\alpha}(u_{\omega}))}. $$ Let us also set $$ \bar D_{\omega}=16e^{\|\tilde u_{\omega}\|_\infty}(1+v_{\alpha}(\tilde u_{\omega}))(1+H_{\omega}+\gamma_{{\omega}}^{-{\alpha}})D_{\omega} $$ where $D_{\omega}$ is a measurable upper bound of $\|{\mathcal L}_{\omega} \textbf{1}\|_\infty$ and $\tilde u_{\omega}=u_{\omega}-\mu_{\omega}(u_{\omega})$. Note that when $\xi<1$ (or when $\deg(T_{\omega})<\infty$) we can take $D_{\omega} =\deg(T_{\omega})e^{\|\phi_{\omega}\|_\infty}$, and when $\xi=1$ (and $\deg (T_{\omega})=\infty$) then $\|{\mathcal L}_{\omega} \textbf{1}\|_\infty$ is finite and it is bounded above by some random variable $D_{\omega}$ (which comes from the assumptions on the maps in this case). Finally, let us also set $M_{\omega}=8(1-e^{-\gamma_{\omega}^{\alpha}\xi^{\alpha}})^{-2}$ (which is bounded above by $16$). \begin{remark}\label{M rem 2} As a continuation of Remark \ref{M rem}, when $H_{\omega}\leq \gamma_{M,{\theta}{\omega}}^{\alpha}-1$ for some $M>1$ (i.e. $H_{\omega}\leq\gamma_{{\theta}{\omega}}^{\alpha}-1$ and it is bounded) then we can replace $\gamma_{{\omega}}$ with $\gamma_{M,{\omega}}$ (namely, assume that $\gamma_{\omega}$ is bounded above).
In this case, when $\xi<1$ we have $$ D({\omega})\leq C+\deg(T_{\omega})+2\ln\left(\frac{1+q_M({\omega})}{1-q_M({\omega})}\right),\,\, U({\omega})\leq Ce^{4\|\phi_{\omega}\|_\infty}\left(\deg(T_{\omega})\deg(T_{{\theta}^{-1}{\omega}})\right)^2 $$ where $q_M({\omega})=\frac{H_{\omega}+1}{\gamma_{M,{\theta}{\omega}}^{\alpha}}\in(0,1)$ and $C=C(M)$ is a constant. When $\xi=1$ we have $$ D({\omega})\leq C+2\ln\left(\frac{1+q_M({\omega})}{1-q_M({\omega})}\right),\, U({\omega})\leq C. $$ \end{remark} \subsubsection{Partially expanding maps}\label{Aux2} Consider the real cone \[ {\mathcal C}_{{\omega}}={\mathcal C}_{{\omega},{\kappa}_{\omega}}=\{g\in{\mathcal H}_{\omega}:\,g>0\,\text{ and }\,v(g)\leq s_{\omega}^{-1}\inf g\}. \] Let us also set\footnote{Note that $\zeta_{\omega}<1$ by \eqref{Bound ve}.} $\zeta_{\omega}=s_{{\theta}{\omega}}\left(1+(1+s_{\omega}^{-1})e^{{\varepsilon}_{\omega}}H_{\omega}\right)<1$ and $$ D({\omega}):=2\ln\left(\frac{1+\zeta_{\omega}}{1-\zeta_{\omega}}\right)+2\ln\left(1+\zeta_{\omega} s_{\omega}^{-1}\right). $$ Let \begin{equation}\label{rho 1} \rho({\omega})=\tanh (D({\omega})/4)\in(0,1),\,\,\, \tilde\rho({\omega})=\tanh (7D({\omega})/4)\in(0,1). \end{equation} (which will serve as ``one step contraction coefficients"). We also define $$ K_{\omega}=1+2s_{\omega}^{-1}, B_1({\omega})=1+s_{\omega}^{-1}, $$ $C_{\omega}=2K_{\omega}$ and $U_{\omega}=6B_1^2({\omega})K_{\omega}$. Set also \begin{equation}\label{UUUUU} U({\omega})=C_{\omega} U_{\omega} \end{equation} and $M_{\omega}=6s_{\omega}^{-1}$. Finally, given a random function $u_{\omega}$ on ${\mathcal E}_{\omega}$ and an equivariant family of probability measures $\mu_{\omega}$ set $$ E_{\omega}=c_{\omega}\left(1+\cosh(D({\omega})/2)\right) $$ where with $\tilde u_{\omega}=u_{\omega}-\mu_{\omega}(u_{\omega})$, $$ c_{\omega}=32s_{{\theta}{\omega}}(1+2s_{\omega}^{-1})e^{\|\tilde u_{\omega}\|_\infty+2\|\phi_{\omega}\|_\infty} \|\tilde u_{\omega}\|(1+H_{\omega})(1-\zeta_{\omega})^{-1}. 
$$ Finally, let us also set $$ \bar D_{\omega}=M_{{\theta}{\omega}}e^{\|\tilde u_{\omega}\|_\infty}(1+v(\tilde u_{\omega}))(1+ H_{\omega}+l_{\omega}^{{\alpha}})D_{\omega} $$ where $D_{\omega}=\deg(T_{\omega})e^{\|\phi_{\omega}\|_\infty}=d_{\omega} e^{\|\phi_{\omega}\|_\infty}$ and $M_{\omega}=6s_{\omega}^{-1}$. \subsection{An example in the smooth case: fiberwise piecewise perturbations of piecewise linear expanding maps}\label{Sec per lin} As discussed before, the maps described in Sections \ref{Maps1} and \ref{Maps2} were essentially considered in \cite{MSU,HK} and \cite{Varandas}, respectively, with the exception that in \cite{MSU,HK} the potential $\phi_{\omega}$ was not assumed to satisfy \eqref{phi cond} and \eqref{H cond}, while in \cite{Varandas} the weaker condition $\int \ln s_{\omega} d{\mathbb P}({\omega})<0$ was considered (instead of $s_{\omega}<1$), and the potential $\phi_{\omega}$ was not assumed to satisfy the second estimate in \eqref{Bound ve} either. In comparison with \cite{MSU}, the inequality \eqref{H cond} is our main additional assumption on the potential $\phi_{\omega}$ in the setup of Section \ref{Maps1}. While we can always work with a random Gibbs measure $\mu_{\omega}$ corresponding to a potential $\phi_{\omega}$ satisfying \eqref{phi cond} and \eqref{H cond}, it is interesting to see for which maps these conditions hold true in the smooth case, when $e^{\phi_{\omega}}$ is the reciprocal of the Jacobian of $T_{\omega}$ with respect to the volume measure on ${\mathcal E}_{\omega}$. In this section we will show that \eqref{phi cond} and \eqref{H cond} are valid in the smooth case for a certain type of $C^2$ fiberwise perturbations of piecewise linear maps (similarly, we could consider perturbations of piecewise affine maps, but for the sake of simplicity we will describe only the one-dimensional case).
\subsubsection{The piecewise linear case} Let ${\mathcal I}_{{\omega}}=\{I_{{\omega},i}=[a_i({\omega}),b_i({\omega}))\}$ be a (nontrivial) partition of the unit interval $[0,1)$ into intervals, and on each interval let $\ell_{{\omega},i}$ be a linear map that maps $I_{{\omega},i}$ onto $[0,1)$ (there are two options, either the decreasing one or the increasing one). Then the slope of $\ell_{{\omega},i}$ is $\pm|I_{{\omega},i}|^{-1}$, where $|I_{{\omega},i}|$ is the length of $I_{{\omega},i}$. Let us assume that $I_{{\omega},1}$ is the largest interval and set $$ \gamma_{\omega}(\ell)=|I_{{\omega},1}|^{-1}>1. $$ Next, for each $y$ let $I_{{\omega},i_{\omega}(y)}$ be the unique interval $I_{{\omega},i}$ so that $y\in I_{{\omega},i}$. Then the map $\ell_{\omega}$ defined by $\ell_{\omega}(y)=\ell_{{\omega},i_{\omega}(y)}(y)$ satisfies all the conditions in Section \ref{Maps1} in the case $\xi=1$ with $\gamma_{\omega}=\gamma_{\omega}(\ell)$. Moreover, if we consider the smooth case and take $e^{\phi_{\omega}}$ to be the reciprocal of the Jacobian of $\ell_{\omega}$ then, since the map is piecewise linear, we have that $H_{\omega}$ in \eqref{phi cond} vanishes, and so \eqref{H cond} trivially holds true, where we can take the H\"older exponent ${\alpha}=1$. Moreover, we have that $\mu_{\omega}$ is the Lebesgue measure and ${\mathcal L}_{\omega} \textbf{1}=\textbf{1}$ (and hence $D_{\omega}=1$ in this case, and so $\bar D_{\omega}=e^{\|u_{\omega}\|_\infty}(1+v_{\alpha}(u_{\omega}))$ depends only on $u_{\omega}$). Recall also that $U({\omega})$ is bounded (see Remark \ref{M rem 2}) and note that $q({\omega})=\frac{1}{\gamma_{{\theta}{\omega}}(\ell)}$ in this case. \subsubsection{Fiberwise piecewise $C^2$-perturbations} Let us explain for which type of piecewise $C^2$-perturbations of $\ell_{\omega}$ the conditions of Section \ref{Maps1} with $\xi=1$ remain true.
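Before turning to the perturbations, the unperturbed piecewise linear picture above can be checked directly. The sketch below is purely illustrative (the partition lengths are hypothetical, and only increasing branches are used); it builds $\ell_{\omega}$ from a list of interval lengths, computes $\gamma_{\omega}(\ell)=|I_{{\omega},1}|^{-1}$, and verifies that $({\mathcal L}_{\omega}\textbf{1})(x)=\sum_i|I_{{\omega},i}|=1$, so that Lebesgue measure is indeed fixed by the transfer operator:

```python
def piecewise_linear(lengths):
    """Piecewise linear expanding map l_w on [0,1): the interval I_{w,i}
    of length lengths[i] is mapped increasingly onto [0,1)."""
    assert abs(sum(lengths) - 1.0) < 1e-12
    cuts = [sum(lengths[:i]) for i in range(len(lengths))]  # left endpoints a_i(w)

    def T(y):
        i = max(j for j, a in enumerate(cuts) if a <= y)    # branch containing y
        return (y - cuts[i]) / lengths[i]                   # slope |I_{w,i}|^{-1}

    # gamma_w(l) = |I_{w,1}|^{-1}, where I_{w,1} is the largest interval
    gamma = 1.0 / max(lengths)

    # (L_w 1)(x): with e^{phi_w} = 1/slope, each inverse branch y_i(x)
    # contributes the length |I_{w,i}|, and these lengths sum to 1
    transfer_of_one = sum(lengths)
    return T, gamma, transfer_of_one

T, gamma, L1 = piecewise_linear([0.5, 0.3, 0.2])
```

Here $H_{\omega}=0$ because each branch has constant slope, which is the content of the piecewise linear discussion above.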
On each interval $I_{{\omega},i}$, without changing the value of $\ell_{{\omega},i}$ at the end points, let us take a $C^2$ perturbation $T_{{\omega},i}$ of $\ell_{{\omega},i}$ so that \begin{equation}\label{omega wise} \|T_{{\omega},i}-\ell_{{\omega},i}\|_{C^2}\leq {\varepsilon}_{\omega}=\frac12\min\left(\frac14(\gamma_{{\theta}{\omega}}(\ell)-1)\big(\gamma_{{\omega}}(\ell)\big)^2,(\gamma_{{\theta}{\omega}}(\ell)-1),(\gamma_{{\omega}}(\ell)-1)\right). \end{equation} Let us define $T_{{\omega}}(y)=T_{{\omega},i_{\omega}(y)}(y)$ (namely, by gluing the maps $T_{{\omega},i}$). Consider again the smooth case and take $e^{\phi_{\omega}}$ to be the reciprocal of the Jacobian of $T_{\omega}$, that is, $\phi_{\omega}=-\ln J_{T_{\omega}}$. Then $\mu_{\omega}$ is equivalent to the Lebesgue measure (since $\nu_{\omega}$ in Theorem \ref{RPF} is the Lebesgue measure). \begin{lemma} Under \eqref{omega wise} the maps satisfy the conditions in Section \ref{Maps1} with $\xi=1$ and $\gamma_{\omega}=\gamma_{\omega}(\ell)-{\varepsilon}_{\omega}\geq\frac{\gamma_{\omega}(\ell)+1}{2}$. Moreover, the potential $\phi_{\omega}=-\ln J_{T_{\omega}}$ satisfies \eqref{phi cond} with ${\alpha}=1$ and $H_{\omega}$ so that \eqref{H cond} holds true (with ${\alpha}=1$). \end{lemma} \begin{proof} First, it is clear that we can take $\gamma_{\omega}=\gamma_{\omega}(\ell)-{\varepsilon}_{\omega}\geq \frac{\gamma_{\omega}(\ell)+1}{2}$. In order to show that condition \eqref{H cond} is in force it is enough to show that the derivative of each composition $\phi_{\omega}\circ y_{{\omega},i}$ is bounded by $\gamma_{{\theta}{\omega}}-1$.
To establish that, note that $y_{{\omega},i}(x)=T_{{\omega},i}^{-1}(x)$ and so $$ \sup_{x}\left|\left(\phi_{\omega}\circ y_{{\omega},i}\right)'(x)\right|=\sup_{x}\left(|\phi_{\omega}'(y_{{\omega},i}(x))|\cdot|y_{{\omega},i}'(x)|\right) \leq \sup_{y}\sup_{i}\left|\frac{T_{{\omega},i}''(y)}{\big(T_{{\omega},i}'(y)\big)^2}\right| \leq \frac{4{\varepsilon}_{\omega}}{\big(\gamma_{{\omega}}(\ell)\big)^2} $$ where in the last inequality we have used that $|T_{{\omega},i}''(y)|=|T_{{\omega},i}''(y)-\ell_{{\omega},i}''(y)|\leq {\varepsilon}_{\omega}$ and that $$ |T_{{\omega},i}'(y)|\geq |\ell_{{\omega},i}'(y)|-{\varepsilon}_{\omega}\geq |I_{{\omega},i}|^{-1}-{\varepsilon}_{\omega}\geq \gamma_{{\omega}}(\ell)-{\varepsilon}_{\omega}\geq \frac12\gamma_{{\omega}}(\ell). $$ Finally, using that ${\varepsilon}_{\omega}\leq \frac18(\gamma_{{\theta}{\omega}}(\ell)-1)\big(\gamma_{{\omega}}(\ell)\big)^2$ and that ${\varepsilon}_{\omega}\leq\frac12(\gamma_{\omega}(\ell)-1)$ we see that $$ \sup_{x}\left|\left(\phi_{\omega}\circ y_{{\omega},i}\right)'(x)\right|\leq\frac{4{\varepsilon}_{\omega}}{\big(\gamma_{{\omega}}(\ell)\big)^2} \leq \frac12(\gamma_{{\theta}{\omega}}(\ell)-1)\leq \gamma_{{\theta}{\omega}}(\ell)-{\varepsilon}_{\omega}-1\leq \gamma_{{\theta}{\omega}}-1. $$ \end{proof} \begin{remark}\label{C 2 REM} As mentioned in Remark \ref{M rem 2}, when $H_{\omega}$ is bounded then $U({\omega})$ from \eqref{Effective R} will be bounded in our applications. Notice that once \eqref{H cond} is established, $H_{\omega}$ will be bounded if $\gamma_{\omega}$ is. In our case this just means that $\gamma_{{\omega}}(\ell)$ is bounded, which holds, in particular, when the number of intervals in the partition ${\mathcal I}_{\omega}$ is bounded. \end{remark} \begin{remark} In certain instances $|I_{{\omega},1}|$ is bounded away from $1$ (e.g. $T_{\omega} x=(m_{\omega} x)\text{ mod }1$, $m_{\omega}\in {\mathbb N}$, $m_{\omega}\geq2$).
In this case we can allow larger ${\varepsilon}_{\omega}$, so that the resulting perturbation need not be uniformly expanding (as $\gamma_{\omega}$ could be arbitrarily close to $1$). \end{remark} \section{Preliminaries and main results} \subsection{The random probability space: on the choice of measures $\mu_{\omega}$} For both classes of maps considered in Sections \ref{Maps1} and \ref{Maps2} let $\mu_{\omega}$ be the Gibbs measures corresponding to the potential $\phi_{\omega}$. The detailed exposition of these measures is postponed to Section \ref{SecRPF} (where our results concerning effective rates are described), and in the meantime we refer to \cite{MSU} and \cite{Varandas} for the construction and the main properties of these measures (see also Theorem \ref{RPF}). For instance, they are equilibrium states and they exhibit exponential decay of correlations for H\"older continuous functions. Let us note that the smooth case discussed in Section \ref{Section 1} corresponds to the choice of $\phi_{\omega}=-\ln(J_{T_{\omega}})$ (see Section \ref{Sec per lin}), while the choice of $\phi_{\omega}=0$ corresponds to random measures of maximal entropy (more generally, the case $\phi_{\omega}=\psi_{\omega}/T$ for a sufficiently large $T$ and a sufficiently regular potential $\psi_{\omega}$ corresponds to the high temperature regime, see \cite[Theorem D]{Varandas}). \subsection{Upper mixing coefficients} Let $X=\{X_j: j\in{\mathbb Z}\}$ be a stationary ergodic sequence of random variables (taking values on some measurable space) which generates the system $({\Omega},{\mathcal F},{\mathbb P},{\theta})$, so that ${\theta}$ is the left shift on the paths of $X_j$, namely ${\theta}((X_j))=(X_{j+1})$. \begin{remark} Ergodicity of $X$ is not crucial for our results to hold true, since we can always restrict ourselves to the ergodic component of a point ${\omega}\in{\Omega}$.
The only difference in the formulations of the results is that the asymptotic variance ${\sigma}^2$ (see Theorem \ref{CLT}) will now depend on the ergodic component of ${\omega}$. The idea is that the proofs of the effective rates (Theorems \ref{RPF} and \ref{Complex RPF}) do not require ergodicity, and, using the upper mixing coefficients that will be defined below, all our methods will work when the variance of the random Birkhoff sums grows linearly fast. \end{remark} Recall next that the $k$-th upper ${\alpha},\phi$ and $\psi$ mixing coefficients of the sequence $\{X_j\}$ are the smallest numbers ${\alpha}_U(k),\phi_U(k)$ and $\psi_U(k)$ so that for every $n$ and a set $A$ measurable\footnote{Here ${\sigma}\{X_j:\,j\in {\mathcal I}\}$ is the ${\sigma}$-algebra generated by $\{X_j: j\in {\mathcal I}\}$, where ${\mathcal I}\subset{\mathbb Z}$.} with respect to ${\sigma}\{X_j: j\leq n\}$ and a set $B$ measurable with respect to ${\sigma}\{X_m: \,m\geq n+k\}$ we have $$ {\mathbb P}(A\cap B)\leq {\mathbb P}(A){\mathbb P}(B)(1+\psi_U(k)), $$ $$ {\mathbb P}(A\cap B)\leq {\mathbb P}(A){\mathbb P}(B)+\phi_U(k){\mathbb P}(A) $$ and $$ {\mathbb P}(A\cap B)\leq {\mathbb P}(A){\mathbb P}(B)+{\alpha}_U(k). $$ Clearly $$ {\alpha}_U(k)\leq\phi_U(k)\leq\psi_U(k). $$ Notice next that ${\alpha}_U(k), \phi_U(k)$ and $\psi_U(k)$ are non-increasing, and so \begin{equation}\label{lim inf} \limsup_{k\to\infty}\eta(k)=\lim_{k\to\infty}\eta(k)=\inf_k\eta(k) \end{equation} where $\eta$ is either ${\alpha}_U, \phi_U$ or $\psi_U$. \begin{remark} Note also that due to stationarity we can always consider only $n=0$ in the definitions of the upper mixing coefficients. We prefer to present the upper mixing coefficients in the above more general form in order to avoid repeating that both forms are equivalent in the course of the proofs.
\end{remark} \begin{remark} Recall that the (two-sided) mixing coefficients ${\alpha}(k),\phi(k),\psi(k)$ are defined similarly through the inequalities $$ \left|{\mathbb P}(A\cap B)-{\mathbb P}(A){\mathbb P}(B)\right|\leq {\mathbb P}(A){\mathbb P}(B)\psi(k), $$ $$ \left|{\mathbb P}(A\cap B)-{\mathbb P}(A){\mathbb P}(B)\right|\leq {\mathbb P}(A)\phi(k), $$ and $$ \left|{\mathbb P}(A\cap B)-{\mathbb P}(A){\mathbb P}(B)\right|\leq {\alpha}(k). $$ Clearly $\psi_U(k)\leq\psi(k)$, $\phi_U(k)\leq \phi(k)$ and ${\alpha}_U(k)\leq {\alpha}(k)$. The sequences ${\alpha}(k),\phi(k)$ and $\psi(k)$ are classical quantities measuring the long-range weak dependence of the sequence $\{X_n\}$, and we refer to \cite{BrMix} and \cite{Douk} for examples. While our results are new even if we work only with the usual mixing coefficients presented above, the proofs only require assumptions on the upper mixing coefficients, and so, in order to avoid confusion, we decided to formulate the results by imposing restrictions on the less common (but smaller) upper mixing coefficients (which in general do not seem to imply mixing or even ergodicity). \end{remark} \subsection{Quenched limit theorems for random Birkhoff sums} Let $u_{\omega}:{\mathcal E}_{\omega}\to{\mathbb R}$ be a random function (i.e. $u({\omega},x)=u_{\omega}(x)$ is measurable) so that $u_{\omega}\in {\mathcal H}_{\omega}$ (i.e. $u_{\omega}$ is ${\alpha}$-H\"older continuous). Let us consider the corresponding random Birkhoff sums $$ S_n^{\omega} u=\sum_{j=0}^{n-1}u_{{\theta}^j{\omega}}\circ T_{\omega}^j. $$ In this paper, under appropriate assumptions on $u_{\omega}$, when ${\omega}$ is fixed (chosen from a set of probability one), we will prove limit theorems for the sequence of functions $S_n^{\omega} u(\cdot)$, considered as random variables on the probability space $({\mathcal E}_{\omega},{\mathcal B}_{\omega},\mu_{\omega})$, where ${\mathcal B}_{\omega}$ is the Borel ${\sigma}$-algebra on ${\mathcal E}_{\omega}$.
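For concreteness, a random Birkhoff sum can be generated along a single fiber. The sketch below is purely illustrative and uses hypothetical ingredients: the fiber maps are taken to be $T_{\omega}x=m_{\omega}x \bmod 1$ with $m_{\omega}\in\{2,3\}$, the observable is an arbitrary smooth function, and the convention $S_n^{\omega}u=\sum_{j=0}^{n-1}u_{{\theta}^j{\omega}}\circ T_{\omega}^j$ is followed literally:

```python
import math
import random

def birkhoff_sum(ms, u, n, x):
    """S_n^w u(x) = sum_{j=0}^{n-1} u_{theta^j w}(T_w^j x), where the fiber
    over theta^j w carries the expanding map T y = ms[j] * y mod 1."""
    s, y = 0.0, x
    for j in range(n):
        s += u(j, y)              # u_{theta^j w} evaluated at T_w^j x
        y = (ms[j] * y) % 1.0     # apply the fiber map T_{theta^j w}
    return s

random.seed(0)
ms = [random.choice([2, 3]) for _ in range(1000)]            # a hypothetical omega
u = lambda j, x: math.cos(2.0 * math.pi * x) + 0.1 * ms[j]   # a hypothetical u_w
S = birkhoff_sum(ms, u, 1000, x=0.1234567)
```

Note the order of operations: the observable of the $j$-th fiber is evaluated first, and only then is the $j$-th map applied, exactly as in the composition $u_{{\theta}^j{\omega}}\circ T_{\omega}^j$.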
\subsection{The CLT and LIL} First, let us note that, in order to avoid repetitions, in all the results formulated in this section the random variables defined in Sections \ref{Aux1} and \ref{Aux2} will be in constant use, sometimes without referring to these sections. Next, in order to formulate our first set of sufficient conditions for the CLT, we consider the following assumption. \begin{assumption}\label{Sets Ass} There is a measurable set $A\subset {\Omega}$ with positive probability so that for all ${\omega}\in A$ we have $\rho({\omega})\leq 1-{\varepsilon}$ and $U({\omega})\leq M$ for some ${\varepsilon},M>0$, and that for all $r\in{\mathbb N}$ there is a set $A_r$ which is measurable with respect to ${\sigma}\{X_{j}, |j|\leq r\}$ so that $\beta_r={\mathbb P}(A\setminus A_r)\to 0$ and $\lim_{r\to\infty}{\mathbb P}(A_r)={\mathbb P}(A)$. \end{assumption} The second assumption about the set $A$ means that $A$ can be approximated fast enough by sets which depend only on the values of $X_j$ for $|j|\leq r$. When $\rho$ and $U$ depend only on $X_j, |j|\leq d$ for some $d$, then we can just take $A$ to be a set of the form $A=A_{{\varepsilon},M}=\{{\omega}: \rho({\omega})\leq 1-{\varepsilon},\, U({\omega})\leq M\}$ for a sufficiently small ${\varepsilon}$ and a sufficiently large $M$ (or any measurable subset of such a set with positive probability). In this case ${\beta}_r=0$ for all $r>d$. In the following example we will explain in which circumstances Assumption \ref{Sets Ass} is valid with a sequence $\beta_r$ which does not vanish for all $r$ large enough. \begin{example}[An example with non-vanishing ${\beta}_r$]\label{Egg} Suppose that the following approximation condition holds true: there are random variables $\rho_r$ and $U_r$ measurable with respect to $X_j, |j|\leq r$ so that \begin{equation}\label{Appp} \max(\|\rho-\rho_r\|_{L^1}, \|U-U_r\|_{L^1})\leq {\delta}_r\to 0.
\end{equation} Note that condition \eqref{Appp} can also be written as $$ \max(\|\rho-{\mathbb E}[\rho|X_{-r},...,X_r]\|_{L^1}, \|U-{\mathbb E}[U|X_{-r},...,X_r]\|_{L^1})\leq {\delta}_r. $$ This condition is fulfilled when $\rho(...,X_{-1},X_0,X_1,...)$ and $U(...,X_{-1},X_0,X_1,...)$ depend weakly (in an $L^1$-sense) on the coordinates $X_j, |j|\geq r$. Limit theorems under conditions similar to \eqref{Appp} (with some decay rate for ${\delta}_r$) have been studied extensively in weak dependence theory; see \cite{Bil, IL} (where iid $X_j$ are considered). Let $A=A_{{\varepsilon},M}$ and $A_r=\{{\omega}: \rho_r({\omega})\leq 1-{\varepsilon}+\sqrt{{\delta}_r},\, U_r({\omega})\leq M+\sqrt{{\delta}_r}\}$. To estimate ${\beta}_r$ and ${\mathbb P}(A_r)$, using the Markov inequality we see that $$ {\mathbb P}(A_r)\leq {\mathbb P}(|\rho-\rho_r|\geq \sqrt{{\delta}_r})+{\mathbb P}(|U-U_r|\geq \sqrt{{\delta}_r})+{\mathbb P}(A_{{\varepsilon}-2\sqrt{{\delta}_r},M+2\sqrt{{\delta}_r}})\leq 2\sqrt{{\delta}_r} +{\mathbb P}(A_{{\varepsilon}-2\sqrt{{\delta}_r},M+2\sqrt{{\delta}_r}}) $$ and $$ {\mathbb P}(A\setminus A_r)\leq{\mathbb P}(|\rho-\rho_r|\geq \sqrt{{\delta}_r})+{\mathbb P}(|U-U_r|\geq \sqrt{{\delta}_r})\leq 2\sqrt{{\delta}_r}. $$ Hence $\lim_r{\mathbb P}(A_r)={\mathbb P}(A)$ and $\beta_r={\mathbb P}(A\setminus A_r)\leq 2\sqrt{{\delta}_r}$. \end{example} \begin{remark} When ${\omega}\to U({\omega})$ is in $L^1({\Omega},{\mathcal F},{\mathbb P})$ then there is always a sequence ${\delta}_r$ satisfying \eqref{Appp}, but our main results involve certain decay rates for $\beta_r$. \end{remark} \begin{theorem}[CLT]\label{CLT} Let Assumption \ref{Sets Ass} be in force. Assume that the random variable ${\omega}\to\|u_{\omega}-\mu_{\omega}(u_{\omega})\|$ is in $ L^p({\Omega},{\mathcal F},{\mathbb P})$ for some $p>2$ so that $\sum_{j}(\ln j\beta_{C j/\ln j})^{1-2/p}<\infty$ for all $C>0$.
In addition, assume that one of the following conditions is in force: \vskip0.1cm (M1) $\sum_{j}({\alpha}_U(C j/\ln j))^{1-2/p}<\infty$ for all $C>0$; \vskip0.1cm (M2) $\limsup_{k\to\infty}\phi_U(k)<{\mathbb P}(A)$ \,\,(i.e. $\phi_U(k)<{\mathbb P}(A)$ for some $k$); \vskip0.1cm (M3) $\limsup_{k\to\infty}\psi_U(k)<\frac 1{1-{\mathbb P}(A)}-1$ \,\,(i.e. $\psi_U(k)<\frac 1{1-{\mathbb P}(A)}-1$ for some $k$). \vskip0.1cm Then: \vskip0.1cm (i) There is a number ${\sigma}\geq0$ so that for ${\mathbb P}$-a.e. ${\omega}$ we have $$ \lim_{n\to\infty}\frac1n\text{Var}_{\mu_{\omega}}(S_n^{\omega} u)={\sigma}^2. $$ Moreover, ${\sigma}=0$ if and only if the function $U({\omega},x)=\sum_{j=0}^{n_A({\omega})-1}(u_{{\theta}^j{\omega}}\circ T_{\omega}^j x-\mu_{{\theta}^j{\omega}}(u_{{\theta}^j{\omega}}))$ has the form $U({\omega},x)=q({\omega},x)-q({\theta}^{n_A({\omega})}{\omega},T_{\omega}^{n_A({\omega})}x)$ for ${\mathbb P}_A$ almost every ${\omega}$ and all $x$, where $n_A$ is the first return time to $A$, ${\mathbb P}_A(\cdot)={\mathbb P}(\cdot\cap A)/{\mathbb P}(A)$ is the conditional measure on $A$ and $q$ is a measurable function so that $\int_{A}\int_{{\mathcal E}_{\omega}}|q({\omega},x)|^2d\mu_{\omega}(x)d{\mathbb P}({\omega})<\infty$. \vskip0.1cm (ii) The sequence $S_n^{\omega} u$ obeys the CLT: for every real $t$ we have $$ \lim_{n\to\infty}\mu_{\omega}\{x: n^{-1/2}\left(S_n^{\omega} u(x)-\mu_{\omega}(S_n^{\omega} u)\right)\leq t\}=\frac{1}{\sqrt{2\pi}{\sigma}}\int_{-\infty}^{t}e^{-\frac{s^2}{2{\sigma}^2}}ds $$ where if ${\sigma}=0$ the above right hand side is interpreted as the distribution function of the constant random variable $0$. \vskip0.1cm (iii) Set $\tau({\omega},x)=({\theta}{\omega}, T_{\omega} x)$, $\mu=\int_{{\Omega}} \mu_{\omega} d{\mathbb P}({\omega})$ and $\tilde u({\omega},x)=u_{\omega}(x)-\mu_{\omega}(u_{\omega})$. If ${\sigma}>0$ then the following functional version of the law of iterated logarithm (LIL) holds true.
Let $\zeta(t)=\left(2t\log\log t\right)^{1/2}$ and $$ \eta_n(t)=\left(\zeta({\sigma}^2 n)\right)^{-1}\left(\sum_{j=0}^{k-1}\tilde u\circ\tau^j+(nt-k)\tilde u\circ\tau^k\right) $$ for $t\in[\frac{k}{n}, \frac{k+1}n), k=0,1,...,n-1$. Then $\mu$-a.s. the sequence of functions $\{\eta_n(\cdot), n\geq 3/{\sigma}^2\}$ is relatively compact in $C[0,1]$ (the space of continuous functions on $[0,1]$ with the supremum norm), and the set of limit points as $n\to\infty$ coincides with the set $K$ of absolutely continuous functions $x\in C[0,1]$ with $x(0)=0$ so that $\int_{0}^1(\dot{x}(t))^2dt\leq 1$. \end{theorem} The proof of Theorem \ref{CLT} is based on inducing; more precisely, we apply \cite[Theorem 2.3]{Kifer 1998}. The role played by the assumptions on the upper mixing coefficients is that, together with the effective random RPF rates \eqref{Effective R}, they allow us to verify the abstract conditions of \cite[Theorem 2.3]{Kifer 1998} with the set $Q=A$ (where $Q$ is in the notation of \cite[Theorem 2.3]{Kifer 1998}). \begin{remark} When using condition (M1) we only need that $\sum_{j}(\ln j\beta_{j/(3C\ln j)})^{1-2/p}<\infty$ and $\sum_{j}({\alpha}_U(C j/\ln j))^{1-2/p}<\infty$ for some $C$ so that $C|\ln(1-{\mathbb P}(A)/2)|(1-2/p)>1$. \vskip0.1cm When using (M2) we only need that $\sum_{j}(\ln j\beta_{j/(3C\ln j)})^{1-2/p}<\infty$ for some $C$ so that $C|\ln {\delta}|(1-2/p)>1$, where ${\delta}=1-{\mathbb P}(A)+\limsup_{r\to\infty}\phi_U(r)<1$. \vskip0.1cm When using (M3) we only need that $\sum_{j}(\ln j\beta_{j/(3C\ln j)})^{1-2/p}<\infty$ for some $C$ so that $C|\ln{\delta}|(1-2/p)>1$, where ${\delta}=\left(1+\limsup_{r\to\infty}\psi_U(r)\right)\left(1-{\mathbb P}(A)\right)<1$. \end{remark} Next, let us provide alternative conditions for the CLT which involve a stronger type of approximation and moment assumptions on $U({\omega})$, but do not require any approximation rates.
\begin{assumption}\label{Approx2} There is a sequence $\beta_r\to0$ as $r\to\infty$ so that for every $r$ there is a random variable $\rho_r({\omega})$ which is measurable with respect to ${\sigma}\{X_j; |j|\leq r\}$ and $$ \|\rho-\rho_r\|_{L^\infty}\leq \beta_r. $$ \end{assumption} This condition holds true when $\rho(...,X_{-1},X_0,X_1,...)$ depends on finitely many coordinates (in this case we can take $\rho_r=\rho$ for $r$ large enough), and it means that $$ \lim_{r\to\infty}\left\|\rho-{\mathbb E}[\rho|X_{-r},...,X_r]\right\|_{L^\infty}=0. $$ \begin{theorem}\label{CLT2} Let Assumption \ref{Approx2} be in force. Moreover, assume that $$ \limsup_{s\to\infty}\psi_{U}(s)<\infty\,\,\,(\text{i.e. } \psi_U(s)<\infty \text{ for some } s) $$ and that $\|u_{\omega}-\mu_{\omega}(u_{\omega})\|, U({\omega})\in L^{3+{\delta}}({\Omega},{\mathcal F},{\mathbb P})$ for some ${\delta}>0$. \vskip0.1cm (i) There is a number ${\sigma}\geq0$ so that for ${\mathbb P}$-a.a. ${\omega}$ we have $$ {\sigma}^2=\lim_{n\to\infty}\frac{1}{n}\text{Var}_{\mu_{\omega}}(S_n^{\omega} u). $$ Moreover, ${\sigma}=0$ if and only if $\tilde u({\omega},x)=q({\omega},x)-q({\theta}{\omega}, T_{\omega} x)$ for some measurable function $q({\omega},x)$ so that $\int q^2({\omega},x)d\mu_{\omega}(x)d{\mathbb P}({\omega})<\infty$. \vskip0.1cm (ii) The CLT as stated in Theorem \ref{CLT} (ii) is valid. \vskip0.1cm (iii) The functional LIL as stated in Theorem \ref{CLT} (iii) is valid. \end{theorem} Note that the results in Theorem \ref{CLT2} are slightly better than those in Theorem \ref{CLT}, since we obtain a simpler coboundary characterization for the positivity of ${\sigma}$. The proof of Theorem \ref{CLT2} is also based on applying \cite[Theorem 2.3]{Kifer 1998}. However, even though the conditions of \cite[Theorem 2.3]{Kifer 1998} are related to an inducing strategy, we will apply it with the set $Q={\Omega}$, namely we will ``induce" on ${\Omega}$, so the proof will not really be based on inducing.
In this case the conditions of \cite[Theorem 2.3]{Kifer 1998} concern the asymptotic behavior of the system $({\Omega},{\mathcal F},{\mathbb P},{\theta})$ itself (that is, of ${\theta}^n{\omega}$ as $n\to\infty$) and not the induced system. Even though this is a stronger requirement, we will verify these conditions using the assumptions on the upper mixing coefficients together with the effective random RPF rates (Theorem \ref{RPF}). \begin{remark}\label{Int cond U } Besides the additional integrability assumptions in Theorem \ref{CLT2}, the main difference between Theorems \ref{CLT} and \ref{CLT2} is that in the former we essentially require certain $L^1$-approximation rates (decay rates for $\beta_r$), while in the latter we do not require such rates, but instead we work with the stronger $L^\infty$-approximation coefficients, and only with the upper $\psi$-mixing coefficients. On the other hand, the restrictions on $\limsup_{k\to\infty}\psi_U(k)$ in Theorem \ref{CLT2} are much weaker than the ones in Theorem \ref{CLT}. Concerning the additional integrability assumption, as explained in Remark \ref{M rem 2} (see also Remark \ref{C 2 REM}), under the additional condition on $H_{\omega}$ described there, $U({\omega})$ is bounded, and so in this case the additional requirement that $ U({\omega})\in L^{3+{\delta}}({\Omega},{\mathcal F},{\mathbb P})$ is always satisfied, and the only true integrability condition in Theorem \ref{CLT2} is $\|u_{\omega}-\mu_{\omega}(u_{\omega})\|\in L^{3+{\delta}}({\Omega},{\mathcal F},{\mathbb P})$ (as in Theorem \ref{CLT}). Finally, recall that $U({\omega})$ is bounded for the piecewise affine maps and their perturbations considered in Section \ref{Sec per lin}. Thus in this case $\|u_{\omega}-\mu_{\omega}(u_{\omega})\|\in L^{3+{\delta}}({\Omega},{\mathcal F},{\mathbb P})$ is the only integrability condition needed.
\end{remark} \subsection{The ASIP} In this section we further assume that there is a random variable $N({\omega})$ so that \begin{equation}\label{N def} v(g\circ T_{\omega})\leq N({\omega})v(g) \end{equation} for all functions $g$ with $v(g)<\infty$. Note that when the maps $T_{\omega}$ are piecewise differentiable with bounded derivatives then we can always take $N({\omega})=\sup\|DT_{\omega}\|$. Our next result is about almost sure approximation of $S_n^{\omega} u$ by sums of independent Gaussians. \begin{theorem}[ASIP]\label{ASIP} Let Assumption \ref{Approx2} hold true and suppose that ${\sigma}>0$. Suppose\footnote{Since ${\mathbb E}_{\mathbb P}[\rho]<1$ this condition holds true when the limit superior does not exceed $1$.} also that \begin{equation}\label{LS} \limsup_{s\to\infty}\psi_U(s)<\frac{1}{{\mathbb E}_{\mathbb P}[\rho]}\,\,\,\,(\text{i.e. }\psi_U(s)<\frac{1}{{\mathbb E}_{\mathbb P}[\rho]}\text{ for some }s). \end{equation} Further assume that ${\omega}\to U_{\omega}$, ${\omega}\to C_{\omega}$, ${\omega}\to N({\omega})$ and ${\omega}\to \|u_{\omega}-\mu_{\omega}(u_{\omega})\|$ belong to $L^p({\Omega},{\mathcal F},{\mathbb P})$ for some $p>8$. Let $\tilde u_{\omega}(x)=u_{\omega}(x)-\mu_{\omega}(u_{\omega})$. Then there is a coupling of $\tilde u_{{\theta}^j{\omega}}\circ T_{\omega}^j$ (considered as a sequence of random variables on the probability space $({\mathcal E}_{\omega},\mu_{\omega})$) with a sequence of independent centered Gaussian random variables $Z_j$ so that for every ${\varepsilon}>0$, $$ \max_{1\leq k\leq n}\left|S_k^{\omega}\tilde u-\sum_{j=1}^kZ_j\right|=O(n^{1/4+\frac{9}{2p}+{\varepsilon}}),\text{ a.s.} $$ and $$ \left\|\sum_{j=1}^n Z_j\right\|_2^2=\text{Var}_{\mu_{\omega}}(S_n^{\omega} u)+O(n^{1/2+3/p+{\varepsilon}}). $$ \end{theorem} \begin{remark}\label{asip Rem} Recall that $U({\omega})=U_{\omega} C_{\omega}$ (where $U_{\omega}$ and $C_{\omega}$ were defined in Sections \ref{Aux1} and \ref{Aux2}, depending on the case).
Now, as discussed in Remark \ref{M rem 2} (see also Remark \ref{C 2 REM}), when, in addition to \eqref{H cond}, we have $\|H_{\omega}\|_{L^\infty}<\infty$ then $U({\omega})$ is bounded. Thus, for such maps the only true integrability conditions in Theorem \ref{ASIP} are $N({\omega}),\|\tilde u_{\omega}\|\in L^p$. Finally, recall that $U({\omega})$ is bounded for the piecewise affine maps and their perturbations considered in Section \ref{Sec per lin}. Now, for such maps $N({\omega})$ is the supremum norm of the gradient, and so Theorem \ref{ASIP} holds true when the supremum norm and ${\omega}\to\|\tilde u_{\omega}\|$ are in $L^p$. \end{remark} \subsection{Large deviations principles with a quadratic rate function} Consider the following additional condition. \begin{assumption}\label{Add ass} (i) The random variable $E_{\omega}$ defined in Section \ref{Aux1} (or Section \ref{Aux2}) is bounded. \vskip0.1cm (ii) In the setup of Section \ref{Maps1}, \eqref{phi cond} is satisfied with some $H_{\omega}$ so that $$ Z_{\omega}:=\gamma_{{\omega}}^{-{\alpha}}v(u_{\omega})+H_{\omega}\leq \gamma_{{\theta}{\omega}}^{\alpha}-1. $$ \end{assumption} \begin{remark} In the setup of Section \ref{Maps1}, the condition that $E_{\omega}$ is bounded essentially means that $\|u_{\omega}\|_\infty$ and $Z_{\omega}$ are small when $\gamma_{{\theta}{\omega}}$ is close to $1$. To demonstrate this, let us assume that $$ Z_{\omega}\leq r_{\omega}(\gamma_{{\theta}{\omega}}^{\alpha}-1) $$ for some $r_{\omega}<1-{\varepsilon}$, ${\varepsilon}\in(0,1)$.
Then by replacing $Z_{\omega}$ with the above upper bound and then using some elementary estimates we see that when $\xi<1$, $$ E_{\omega}\leq C\left(\|u_{\omega}-\mu_{\omega}(u_{\omega})\|_{\infty}+r_{\omega}\right)e^{2Z_{\omega}+2\gamma_{\omega}^{\alpha}+\deg(T_{\omega})}\gamma_{{\theta}{\omega}}^{{\alpha}}\left(\gamma_{{\theta}{\omega}}^{\alpha}-1\right)^{-1} $$ where $C=C_{\varepsilon}$ is some constant, while when $\xi=1$ $$ E_{\omega}\leq C\left(\|u_{\omega}-\mu_{\omega}(u_{\omega})\|_{\infty}+r_{\omega}\right)e^{2\gamma_{\omega}^{\alpha}}\gamma_{{\theta}{\omega}}^{{\alpha}}\left(\gamma_{{\theta}{\omega}}^{\alpha}-1\right)^{-1}. $$ We thus see that $E_{\omega}$ is bounded if $\|u_{\omega}\|_\infty+r_{\omega}$ is small enough (fiberwise). For instance, when $\xi=1$ and $\gamma_{\omega}$ is bounded above we get the sufficient condition $$ \|u_{\omega}-\mu_{\omega}(u_{\omega})\|_\infty+r_{\omega}\leq C(\gamma_{{\theta}{\omega}}^{\alpha}-1) $$ which means that $\|u_{\omega}-\mu_{\omega}(u_{\omega})\|_\infty+r_{\omega}$ is small when $T_{{\theta}{\omega}}$ has a local inverse branch with contraction coefficient $\gamma_{{\theta}{\omega}}^{-1}$ close to $1$. \vskip0.2cm In the setup of Section \ref{Maps2}, the random variable $E_{\omega}$ is bounded when $\|\phi_{\omega}\|_\infty, H_{\omega}$ and $\|u_{\omega}\|$ are (fiberwise) small enough whenever $\zeta_{\omega}$ is close to $1$. \end{remark} \begin{theorem}\label{MDP1} Under Assumption \ref{Add ass} we have the following. Assume that ${\sigma}>0$ and that for some $p>4$ we have that $\bar D_{\omega}\in L^{2p}({\Omega},{\mathcal F},{\mathbb P})$ and $K_{\omega},M_{\omega},\|\tilde u_{\omega}\|_\infty\in L^p({\Omega},{\mathcal F},{\mathbb P})$, where $\tilde u_{\omega}=u_{\omega}-\mu_{\omega}(u_{\omega})$.
Moreover, suppose that for some measurable set $A\subset{\Omega}$ with positive probability we have: \vskip0.1cm (i) $A$ satisfies the approximation properties described in Assumption \ref{Sets Ass}; \vskip0.1cm (ii) the random variable $\max(M_{\omega}, K_{\omega}, U_{\omega})$ is bounded on $A$; \vskip0.1cm (iii) $A$ satisfies the assumptions of Theorem \ref{CLT} with $1-1/p$ instead of $1-2/p$ (let us denote the corresponding conditions by (M1'), (M2') and (M3'), respectively). \vskip0.1cm Then the following moderate deviations principle holds true for ${\mathbb P}$-a.a. ${\omega}$: for every balanced\footnote{We say that a sequence $(a_n)$ of positive numbers is balanced if $\frac{a_n}{a_{c_n n}}\to 1$ for every sequence $(c_n)$ so that $c_n\to 1$.} sequence $(a_n)$ so that $\frac{a_n}{\sqrt n}\to\infty$ and $a_n=o(n^{1-6/p})$, and all Borel measurable sets $\Gamma\subset{\mathbb R}$ we have \begin{equation}\label{mdp} -\inf_{x\in\Gamma^o}\frac12x^2{\sigma}^{-2}\leq\liminf_{n\to\infty}\frac{1}{a_n^2/n}\ln{\mathbb P}(S_n^{\omega}\tilde u/a_n\in\Gamma) \leq \limsup_{n\to\infty}\frac{1}{a_n^2/n}\ln{\mathbb P}(S_n^{\omega}\tilde u/a_n\in\Gamma)\leq -\inf_{x\in\overline{\Gamma}}\frac12x^2{\sigma}^{-2} \end{equation} where $\Gamma^o$ is the interior of $\Gamma$ and $\overline{\Gamma}$ is its closure. \end{theorem} Note that as an example of a sequence $a_n$ we can take $a_n=n^{q}(\ln n)^{\theta}$ for ${\theta}\geq0$ and $\frac12<q<1-6/p$. As in Example \ref{Egg} the approximation conditions and (M1')-(M3') hold true when $U_{\omega}, M_{\omega}$ and $K_{\omega}$ can be approximated sufficiently fast by functions of $X_j, |j|\leq r$ and the upper mixing coefficients of the sequence $(X_j)$ satisfy (M1')-(M3'). The following result provides alternative conditions for the MDP. \begin{theorem}\label{MDP2} Under Assumption \ref{Add ass} we have the following.
Let the same integrability conditions as in Theorem \ref{MDP1} hold with some $p>8$ and suppose again that ${\sigma}>0$. Then the MDP \eqref{mdp} holds true with any sequence $(a_n)$ so that $a_n n^{-\max(6/p,1/2)}\to\infty$ and $a_n=o(n^{1-8/p})$. \end{theorem} \begin{remark} Recall that $M_{\omega}$ is bounded in the setup of Section \ref{Maps1}, and so the condition $M_{\omega}\in L^p$ is not really a restriction in that setup. Moreover, as explained in Remark \ref{M rem 2} (see also Remark \ref{C 2 REM}), when $H_{\omega}$ is also bounded then the random variables $K_{\omega}$ and $U_{\omega}$ are bounded. In this case also the condition $K_{\omega},U_{\omega}\in L^p$ is not really a restriction, and the only real integrability condition is $\bar D_{\omega}\in L^{2p}$. \end{remark} \begin{remark} The main difference between Theorems \ref{MDP1} and \ref{MDP2} is that Theorem \ref{MDP1} essentially requires some mixing assumptions on the sequences of random variables $(U_{{\theta}^j{\omega}}), (M_{{\theta}^j{\omega}})$ and $(K_{{\theta}^j{\omega}})$, while Theorem \ref{MDP2} does not require mixing assumptions. On the other hand, the integrability conditions in Theorem \ref{MDP1} are weaker than the ones in Theorem \ref{MDP2} (i.e. $p>4$ versus $p>8$). Since the integrability conditions of Theorem \ref{MDP1} are not much weaker than those of Theorem \ref{MDP2}, the latter is in a sense preferable; the reason Theorem \ref{MDP1} is included is that its proof is based on a certain inducing strategy, and we find it interesting to present exact conditions which make the method of proof by inducing effective for proving an MDP for random Birkhoff sums. \end{remark} \subsection{Berry-Esseen type estimates and moderate local limit theorem} Using the arguments in the proof of Theorems \ref{MDP1} and \ref{MDP2} we can also prove the following results.
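Before stating them, let us record a heuristic illustration of the quadratic rate function in \eqref{mdp} (a standard consequence of a moderate deviations principle, not an additional statement). Taking $\Gamma=[x,\infty)$ with $x>0$, the infima of $\frac12t^2{\sigma}^{-2}$ over $\Gamma^o$ and over $\overline{\Gamma}$ coincide, so the upper and lower bounds in \eqref{mdp} match and yield Gaussian-type tails on the moderate deviations scale:

```latex
\[
\lim_{n\to\infty}\frac{n}{a_n^2}\ln{\mathbb P}\big(S_n^{\omega}\tilde u\geq xa_n\big)=-\frac{x^2}{2{\sigma}^2},
\quad\text{that is,}\quad
{\mathbb P}\big(S_n^{\omega}\tilde u\geq xa_n\big)
=\exp\left(-\frac{a_n^2}{n}\Big(\frac{x^2}{2{\sigma}^2}+o(1)\Big)\right).
\]
```

For instance, for an admissible choice $a_n=n^{3/4}$ the tails decay like $e^{-\sqrt n\,(x^2/(2{\sigma}^2)+o(1))}$, interpolating between the CLT scale and large deviations of full Birkhoff sums.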
\begin{theorem}[A Berry-Esseen theorem]\label{BE} Let ${\sigma}_{{\omega},n}=\sqrt{\text{Var}_{\mu_{\omega}}(S_n^{\omega} u)}$. \vskip0.1cm (i) Under the assumptions of Theorem \ref{MDP1}, ${\mathbb P}$-a.s. we have $$ \sup_{t\in{\mathbb R}}\left|\mu_{\omega}(S_n^{\omega} \tilde u\leq t{\sigma}_{{\omega},n})-\Phi(t)\right|=O(n^{-(1/2-6/p)}) $$ where when $p=\infty$ we use the convention $6/p=0$. (ii) Under the assumptions of Theorem \ref{MDP2}, ${\mathbb P}$-a.s. we have $$ \sup_{t\in{\mathbb R}}\left|\mu_{\omega}(S_n^{\omega} \tilde u\leq t{\sigma}_{{\omega},n})-\Phi(t)\right|=O(n^{-(1/2-8/p)}) $$ where for $p=\infty$ we have $8/p:=0$. \end{theorem} \begin{remark} In the setup of Section \ref{Maps1}, the uniformly random case (i.e. $p=\infty$) was covered in \cite[Theorem 7.1.1]{HK}, see also \cite{DH, HafYT} for optimal rates for different types of random maps. However, in the setup of Section \ref{Maps2}, Theorem \ref{BE} is new even in the uniformly random case (so we get the optimal CLT rates $O(n^{-1/2})$ in that case). In the deterministic case when the maps $T_{\omega}$ and the functions $u_{\omega}$ do not depend on ${\omega}$ the arguments in the proof of Theorem \ref{BE} provide explicit constants in the Berry-Esseen theorem (similarly to \cite[Theorem 1.1]{Dub2}). \end{remark} \begin{theorem}[A moderate local central limit theorem]\label{LLT} Under the assumptions of Theorem \ref{MDP2}, ${\mathbb P}$-a.s. we have the following. Let $(a_n)$ be a sequence so that $a_{n}n^{-2/p}\to\infty$ and $a_n n^{-1/2}\to 0$ (where $p$ comes from Theorem \ref{MDP2}). 
Then for every continuous function $g:{\mathbb R}\to{\mathbb R}$ with a compact support or an indicator function of a bounded interval we have \begin{equation}\label{llt} \sup_{v\in{\mathbb R}}\left|\sqrt{2\pi {\kappa}_{{\omega},n}}\mu_{{\omega}}(g(S_n^{\omega} \tilde u/a_n-v))-\left(\int g(y)dy\right)e^{\frac{-v^2}{2{\kappa}_{{\omega},n}^2}}\right|=o(1) \end{equation} where ${\kappa}_{{\omega},n}={\sigma}_{{\omega},n}/a_n$. In particular, for every bounded interval $I$ we have $$ \sup_{v\in{\mathbb R}} \left|\sqrt{2\pi {\kappa}_{{\omega},n}}\mu_{{\omega}}(S_n^{\omega}\tilde u\in a_n (v+I))-|I|e^{\frac{-v^2}{2{\kappa}_{{\omega},n}^2}}\right|=o(1). $$ \end{theorem} \begin{remark} The classical local central limit theorem (LCLT) corresponds to the case when $a_n=1$, which is excluded in Theorem \ref{LLT} even when $p=\infty$, where the first requirement on $a_n$ becomes $a_n\to\infty$. The case $p=\infty$ corresponds to the uniformly random case, and we refer to \cite[Ch. 6]{HK} for sufficient conditions\footnote{Which involve a periodic point of ${\theta}$ and some notion of aperiodicity of the Birkhoff sums.} for the validity of \eqref{llt} with $a_n=1$ in the uniformly random version of the setup of Section \ref{Maps1}. Relying on Theorem \ref{Complex RPF} below, the classical LCLT can be obtained in the uniformly random version of the maps considered in Section \ref{Maps2} when $l_{\omega}\leq1$, since in that case we have $\|{\mathcal L}_{\omega}^{it,n}\|\leq C(1+|t|)$ for some $C\geq1$ and all $t\in{\mathbb R}$, where ${\mathcal L}_{{\omega}}^{it,n}$ is defined before Theorem \ref{Complex RPF}. Finally, note that Theorem \ref{LLT} is new even in the uniformly random case, which is important especially when the known sufficient conditions for the classical LCLT fail due to some type of periodicity exhibited by the random Birkhoff sums.
\end{remark} \subsection{Key technical tools: real and complex random RPF theorems with effective rates}\label{SecRPF} For all the maps $T_{\omega}$ considered in Sections \ref{Maps1} and \ref{Maps2}, and every complex number $z$ we consider the random transfer operator ${\mathcal L}_{\omega}^{(z)}$ which maps functions on ${\mathcal E}_{\omega}$ to functions on ${\mathcal E}_{{\theta}{\omega}}$ according to the formula \begin{equation}\label{Tr op} {\mathcal L}_{\omega}^{(z)}g(x)=\sum_{y\in T_{\omega}^{-1}\{x\}}e^{\phi_{\omega}(y)+z\tilde u_{\omega}(y)}g(y)=\sum_i e^{\phi_{\omega}(y_{i,{\omega}}(x))+z\tilde u_{\omega}(y_{i,{\omega}}(x))}g(y_{i,{\omega}}(x)). \end{equation} Here $\tilde u_{\omega}=u_{\omega}-\mu_{\omega}(u_{\omega})$. We also set ${\mathcal L}_{\omega}^{(0)}={\mathcal L}_{\omega}$. For each ${\omega},n$ and $z$ write \[ {\mathcal L}_{\omega}^{z,n}={\mathcal L}_{{\theta}^{n-1}{\omega}}^{(z)}\circ\cdots\circ{\mathcal L}_{{\theta}{\omega}}^{(z)}\circ{\mathcal L}_{\omega}^{(z)}. \] It is clear that ${\mathcal L}_{\omega}^{(z)}{\mathcal H}_{\omega}\subset {\mathcal H}_{{\theta}{\omega}}$. We will denote by $({\mathcal L}_{\omega}^{(z)})^*$ the appropriate dual operator. When $z=0$ we denote ${\mathcal L}_{\omega}^n={\mathcal L}_{{\omega}}^{0,n}$. In \cite{MSU} it was shown that in the setup of Section \ref{Maps1} there is a unique triplet $({\lambda}_{\omega},h_{\omega},\nu_{\omega})$ consisting of a random variable ${\lambda}_{\omega}>0$, a random positive function $h_{\omega}\in{\mathcal H}_{\omega}$ and a probability measure $\nu_{\omega}$ on ${\mathcal E}_{\omega}$ so that ${\mathbb P}$-a.s. we have $\nu_{\omega}(h_{\omega})=1$, $$ {\mathcal L}_{\omega}h_{\omega}={\lambda}_{\omega} h_{{\theta}{\omega}}\,\,\text{ and }\,\,({\mathcal L}_{\omega})^*\nu_{{\theta}{\omega}}={\lambda}_{\omega}\nu_{\omega}.
$$ Moreover, with ${\lambda}_{{\omega},n}=\prod_{j=0}^{n-1}{\lambda}_{{\theta}^j{\omega}}$, there is a constant ${\delta}\in(0,1)$ and a random variable $C(\cdot)$ so that for every $g\in{\mathcal H}_{\omega}$ we have $$ \|({\lambda}_{{\omega},n})^{-1}{\mathcal L}_{{\omega}}^{n}g-\nu_{\omega}(g)h_{{\theta}^n{\omega}}\|\leq C({\theta}^n{\omega}){\delta}^n\|g\|. $$ The above result is often referred to as a random Ruelle-Perron-Frobenius (RPF) theorem. The random variable $C({\omega})$ does not have an explicit form; it can be expressed by means of a first hitting time to a certain set, which in turn is defined through an ergodic average and a random variable given as a series of known random variables. A similar result follows from \cite{Varandas} in the setup of Section \ref{Maps2} (uniqueness is obtained under additional assumptions, but the construction of the RPF triplets proceeds without them). One of the main tools in the proof of all the limit theorems in this paper is the following result, which is an effective version of the above RPF theorem. \begin{theorem}[An effective RPF theorem]\label{RPF} The RPF triplets above $({\lambda}_{\omega},h_{\omega},\nu_{\omega})$ satisfy the following (${\mathbb P}$-a.s.): (i) $h_{\omega}\in {\mathcal C}_{\omega}$ and $\|h_{\omega}\|\leq K_{\omega}$; (ii) $1\leq \sup h_{\omega}\leq B_1({\omega})\inf h_{\omega}\leq B_1({\omega})$; (iii) for every $n$ and $g\in{\mathcal H}_{\omega}$, \begin{equation}\label{RPF ExpC} \left\|\frac{{\mathcal L}_{\omega}^n g}{{\lambda}_{{\omega},n}}-\nu_{\omega}(g) h_{{\theta}^n{\omega}}\right\|\leq C_{{\theta}^n{\omega}}\rho_{{\omega},n} \end{equation} where ${\lambda}_{{\omega},n}=\prod_{j=0}^{n-1}{\lambda}_{{\theta}^j{\omega}}$ and $\rho_{{\omega},n}=\prod_{j=0}^{n-1}\rho({{\theta}^j{\omega}})$; (iv) let the probability measures $\mu_{\omega}$ on ${\mathcal E}_{\omega}$ be given by $\mu_{\omega}=h_{\omega} \nu_{\omega}$. Then ${\mathbb P}$-a.s.
we have $(T_{\omega})_*\mu_{\omega}=\mu_{{\theta}{\omega}}$. Moreover, let $L_{\omega}$ be the operator given by $L_{\omega} g=\frac{{\mathcal L}_{\omega} g}{{\lambda}_{\omega} h_{{\theta}{\omega}}}$. Then for every H\"older continuous function $g$ on ${\mathcal E}_{\omega}$ and all $n\geq 1$ we have \begin{equation}\label{Exp L} \|L_{\omega}^n g-\mu_{\omega}(g)\|\leq U({\theta}^n{\omega})\rho_{{\omega},n}\|g\|. \end{equation} (v) [exponential decay of correlations] for every natural $n$ and for all $g\in{\mathcal H}_{\omega}$ and $f\in{\mathcal H}_{{\theta}^n{\omega}}$ we have \begin{equation}\label{DEC} \left|\mu_{\omega}(g\cdot (f\circ T_{\omega}^n))-\mu_{\omega}(g)\mu_{\omega}(f\circ T_{\omega}^n)\right|\leq U({\theta}^n{\omega})\rho_{{\omega},n}\|g\|\|f\|_{L^1(\mu_{{\theta}^n{\omega}})}. \end{equation} \end{theorem} \begin{remark} Notice that ${\lambda}_{\omega}=\nu_{{\theta}{\omega}}({\mathcal L}_{\omega} \textbf{1})$, where $\textbf{1}$ is the function which takes the constant value $1$. Hence, \begin{equation}\label{la bounds} e^{-\|\phi_{\omega}\|_\infty}\leq \inf{\mathcal L}_{\omega} \textbf{1}\leq {\lambda}_{\omega}\leq \sup{\mathcal L}_{\omega} \textbf{1}. \end{equation} In the finite degree case $|{\mathcal L}_{\omega} \textbf{1}|\leq \deg(T_{\omega})e^{\|\phi_{{\omega}}\|_\infty}$, while in the case $\xi=1$ (where $\xi$ comes from Section \ref{Maps1}) if the degree is not bounded then our summability assumptions ensure that ${\mathcal L}_{\omega} \textbf{1}$ is a bounded function (recall that for piecewise affine maps we always have ${\mathcal L}_{\omega} \textbf{1}=\textbf{1}$). \end{remark} In the proof of Theorems \ref{MDP1}, \ref{MDP2}, \ref{BE} and \ref{LLT} we will also need the following complex version of Theorem \ref{RPF}. \begin{theorem}\label{Complex RPF} When the random variable $E_{\omega}$ is bounded we have the following.
There is a positive number $r_0>0$ so that for any complex number $z$ such that $|z|\leq r_0$ there exist measurable families ${\lambda}_{\omega}(z)$, $h_{\omega}^{(z)}$ and $\nu_{\omega}^{(z)}$ which are analytic in $z$, consisting of a nonzero complex number ${\lambda}_{\omega}(z)$, a complex function $h_{\omega}^{(z)}\in{\mathcal H}_{\omega}$ and a complex continuous linear functional $\nu_{\omega}^{(z)}\in{\mathcal H}_{\omega}^*$ such that: \vskip0.1cm (i) We have \begin{equation}\label{RPF deter equations-General} {\mathcal L}_{\omega}^{(z)} h_{\omega}^{(z)}={\lambda}_{\omega}(z)h_{{\theta}{\omega}}^{(z)},\,\, ({\mathcal L}_{\omega}^{(z)})^*\nu_{{\theta}{\omega}}^{(z)}={\lambda}_{\omega}(z)\nu_{{\omega}}^{(z)}\text{ and } \nu_{\omega}^{(z)}(h_{\omega}^{(z)})=\nu_{\omega}^{(z)}(\textbf{1})=1. \end{equation} Moreover, $h_{\omega}^{(0)}=h_{\omega}$, ${\lambda}_{\omega}(0)={\lambda}_{\omega}$ and $\nu_{\omega}^{(0)}=\nu_{\omega}$. \vskip0.1cm (ii) We have $\|\nu_{\omega}^{(z)}\|\leq M_{\omega}$ and $h_{\omega}^{(z)}=\frac{\hat h_{\omega}^{(z)}}{{\alpha}_{\omega}(z)}$, where ${\alpha}_{\omega}(z):=\nu_{\omega}^{(z)}(\hat h_{\omega}^{(z)}) \not=0$ for some analytic in $z$ family of functions $\hat h_{\omega}^{(z)}$ so that $\|\hat h_{\omega}^{(z)}\|\leq 2\sqrt 2K_{\omega}$ (note that ${\alpha}_{\omega}(0)=1$ and $|{\alpha}_{\omega}(z)|\leq 2\sqrt 2M_{\omega} K_{\omega}$). \vskip0.1cm (iii) Let $n_0({\omega})$ be the smallest $n$ so that $|{\alpha}_{\omega}(z)|\geq 2\sqrt 2 M_{\omega} K_{\omega} \rho_{{\theta}^{-n}{\omega},n}$.
Then for every $n\geq n_0({\omega})$ and all $g\in{\mathcal H}_{{\theta}^{-n}{\omega}}$ we have \begin{equation}\label{Exponential convergence} \Big\|\frac{{\mathcal L}_{{\theta}^{-n}{\omega}}^{z,n}g}{{\lambda}_{{\theta}^{-n}{\omega},n}(z)}-\nu_{{\theta}^{-n}{\omega}}^{(z)}(g)h^{(z)}_{{\omega}}\Big\|\leq 8 M_{{\theta}^{-n}{\omega}}K_{\omega}\left(|{\alpha}_{\omega}(z)|^{-1}+\frac{M_{\omega} K_{\omega}}{|{\alpha}_{\omega}(z)|^2}\right)\|g\|\tilde\rho_{{\theta}^{-n}{\omega},n}:={\mathcal R}({\omega},n,z) \end{equation} where ${\lambda}_{{\omega},n}(z)={\lambda}_{{\omega}}(z)\cdot{\lambda}_{{\theta}{\omega}}(z)\cdots{\lambda}_{{\theta}^{n-1}{\omega}}(z)$ (recall that $M_{\omega}$ is bounded in the setup of Section \ref{Maps1}). \vskip0.1cm (iv) Let the operators $L_{\omega}^{(z)}$ be given by $$ L_{{\omega}}^{(z)}g=L_{\omega}(e^{zu_{\omega}}g)=\frac{{\mathcal L}_{{\omega}}^{(z)}(gh_{\omega})}{{\lambda}_{\omega} h_{{\theta}{\omega}}} $$ and set $\bar{\lambda}_{\omega}(z)=\frac{{\lambda}_{\omega}(z)}{{\lambda}_{\omega}}$, $\bar h_{\omega}(z)=\frac{h_{\omega}^{(z)}}{h_{\omega}}$ and $\bar\nu_{\omega}^{(z)}=h_{\omega}\cdot\nu_{\omega}^{(z)}$. Then for all $n\geq n_0({\omega})$, \begin{equation}\label{Exponential convergence CMPLX} \Big\|\frac{L_{{\theta}^{-n}{\omega}}^{z,n}g}{\bar{\lambda}_{{\theta}^{-n}{\omega},n}(z)}-\bar\nu_{{\theta}^{-n}{\omega}}^{(z)}(g)\bar h^{(z)}_{{\omega}}\Big\|\leq 6U_{{\omega}}{\mathcal R}({\omega},n,z). \end{equation} \end{theorem} \begin{remark} Since $z\to|{\alpha}_{\omega}(z)|$ is continuous and positive, we have $\beta_{\omega}=\sup_{|z|\leq r_0}|{\alpha}_{\omega}(z)|^{-1}<\infty$ and so, in principle, we can use that to get an upper bound which does not depend on $z$. However, $\beta_{\omega}$ does not have an explicit form.
Instead, in the proof of the large deviations theorems (Theorems \ref{MDP1} and \ref{MDP2}) we will use that $|{\alpha}_{\omega}(z)-1|\leq C|z|K_{\omega} M_{\omega}$ and that, when $K_{\omega} M_{\omega}\in L^p({\Omega},{\mathcal F},{\mathbb P})$ we have $K_{{\theta}^n{\omega}} M_{{\theta}^n{\omega}}=o(n^{2/p})$, which will produce effective bounds when $|z|=O(n^{-2/p})$. \end{remark} \begin{remark} Theorem \ref{Complex RPF} was proven in \cite[Ch.5]{HK} in the uniformly random version of the setup in Section \ref{Maps1}. However, in the setup of Section \ref{Maps2} Theorem \ref{Complex RPF} is new even in the uniformly random case (i.e. when $s_{\omega}\leq s<1$ for some constant $s$ and all the other random variables are bounded). In fact, it is new even in the deterministic case when $T_{\omega}=T$, $\phi_{\omega}=\phi$ and $u_{\omega}=u$ do not depend on ${\omega}$. As mentioned in Section \ref{Section 1}, Theorem \ref{Complex RPF} makes it possible to extend results like \cite[Theorem 1.1]{Dub2} to the partially expanding case. Note that in the uniformly random case $|{\alpha}_{\omega}(z)|\leq C$ for some constant $C>0$ and so, since ${\alpha}_{\omega}(0)=1$, by using the mean value theorem and the Cauchy integral formula and decreasing $r_0$ if needed, we have that $\frac12\leq |{\alpha}_{\omega}(z)|\leq \frac32$ (and so we can replace the term $|{\alpha}_{\omega}(z)|$ with a constant). \end{remark} \section{Proofs of the limit theorems based on the random RPF theorems} \subsection{The CLT and LIL: Proof of Theorem \ref{CLT} by inducing} The proof of Theorem \ref{CLT} is based on an application of \cite[Theorem 2.3]{Kifer 1998} with the set $Q=A$, where $A$ comes from Assumption \ref{Sets Ass}. Let $c({\omega})=\|u_{\omega}-\mu_{\omega}(u_{\omega})\|_{L^2(\mu_{\omega})}$ and let $n_A({\omega})$ be the first hitting time to the set $A$.
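For orientation, the inducing strategy can be sketched as follows (a heuristic outline, stated with the block sums $\Psi_{\omega}$ and the induced map $\Theta$ introduced just below). Setting $u_0({\omega})=0$ and $u_{k+1}({\omega})=u_k({\omega})+n_A({\theta}^{u_k({\omega})}{\omega})$ for the successive visit times of the orbit of ${\omega}$ to $A$, the cocycle property $T_{{\theta}^m{\omega}}^{k}\circ T_{\omega}^{m}=T_{\omega}^{m+k}$ decomposes the Birkhoff sum up to the $n$-th visit into block sums over the excursions:

```latex
\[
S_{u_n({\omega})}^{{\omega}}\tilde u
=\sum_{k=0}^{n-1}\Psi_{\Theta^k{\omega}}\circ T_{\omega}^{u_k({\omega})},
\qquad \Theta^k{\omega}={\theta}^{u_k({\omega})}{\omega}.
\]
```

The conditions verified below then control, respectively, the size of a single block, the correlations between distinct blocks, and the contraction of the induced transfer operators along the blocks.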
Set $\tilde u_{\omega}=u_{\omega}-\mu_{\omega}(u_{\omega})$ and \begin{equation}\label{PPsi} \Psi_{\omega}=S_{n_A({\omega})}^{{\omega}}\tilde u_{\omega}=\sum_{j=0}^{n_A({\omega})-1}\tilde u_{{\theta}^j{\omega}}\circ T_{\omega}^j \end{equation} and let $\Theta: A\to A$ be given by $\Theta({\omega})={\theta}^{n_A({\omega})}({\omega})$. Let us also consider the maps ${\mathcal T}_{\omega}=T_{{\omega}}^{n_A({\omega})}$ and the corresponding transfer operators $\tilde L_{\omega}=L_{\omega}^{n_A({\omega})}$. Then the conditions of \cite[Theorem 2.3]{Kifer 1998} are met if \begin{equation}\label{Ver1} \left\|\sum_{j=0}^{n_A({\omega})-1}c({\theta}^j{\omega})\right\|_{L^2({\mathbb P})}<\infty, \end{equation} \begin{equation}\label{Ver2} \left\|{\mathbb I}({\omega}\in A)\sum_{n=0}^\infty|{\mathbb E}_{\mu_{\omega}}[\Psi_{\omega}\cdot\Psi_{\Theta^n{\omega}}\circ {\mathcal T}_{\omega}^{n}]|\right\|_{L^1({\mathbb P})}<\infty \end{equation} and \begin{equation}\label{Ver3} \left\|{\mathbb I}({\omega}\in A)\sum_{n=0}^{\infty}{\mathbb E}_{\mu_{\omega}}(|\tilde L_{\Theta^{-n}{\omega}}^n\Psi_{\Theta^{-n}{\omega}}|)\right\|_{L^2({\mathbb P})}<\infty. \end{equation} \subsubsection{Reduction to tail estimates of the first hitting times} \begin{lemma}\label{LV} All three conditions \eqref{Ver1}, \eqref{Ver2} and \eqref{Ver3} are valid if $c({\omega})\in L^p({\Omega},{\mathcal F},{\mathbb P})$ and \begin{equation}\label{Series} \sum_{j=0}^{\infty}({\mathbb P}(n_A>j))^{1-2/p}<\infty \end{equation} for some $p>2$. \end{lemma} \begin{proof} Let us begin by showing that condition \eqref{Ver1} is in force. Write $$ \sum_{j=0}^{n_A({\omega})-1}c({\theta}^j{\omega})=\sum_{j=0}^{\infty}c({\theta}^j{\omega}){\mathbb I}(n_A>j).
$$ Then, by the H\"older inequality we have $$ \left\|\sum_{j=0}^{n_A({\omega})-1}c({\theta}^j{\omega})\right\|_{L^2({\mathbb P})}\leq \sum_{j=0}^{\infty}\|(c\circ{\theta}^j){\mathbb I}(n_A>j)\|_{L^2({\mathbb P})} \leq\|c\|_{L^p({\mathbb P})}^2\sum_{j=0}^{\infty}({\mathbb P}(n_A>j))^{1-2/p}<\infty. $$ Next, let us show that condition \eqref{Ver2} is satisfied. First, let $k_n({\omega})$ be so that ${\mathcal T}_{{\omega}}^{n}=T_{\omega}^{k_n({\omega})}$. Then by using the definition of $\Psi_{\omega}$ and that $\{\mu_{\omega}\}$ is an equivariant family we see that $$ {\mathbb E}_{\mu_{\omega}}[\Psi_{\omega}\cdot\Psi_{\Theta^n{\omega}}\circ {\mathcal T}_{{\omega}}^{n}]=\sum_{j=0}^{n_A({\omega})-1}{\mathbb E}_{\mu_{{\theta}^j{\omega}}}[\tilde u_{{\theta}^j{\omega}}\cdot \Psi_{\Theta^n{\omega}}\circ T_{{\theta}^j{\omega}}^{k_n({\omega})-j}]. $$ Now, by using \eqref{DEC}, the properties of the set $A$ and that $\Theta^n{\omega}\in A$ we see that $$ \left|{\mathbb E}_{\mu_{{\theta}^j{\omega}}}[\tilde u_{{\theta}^j{\omega}}\cdot \Psi_{\Theta^n{\omega}}\circ T_{{\theta}^j{\omega}}^{k_n({\omega})-j}]\right|\leq M(1-{\varepsilon})^{n}\|\Psi_{\Theta^n{\omega}}\|_{{L^1(\mu_{\Theta^n{\omega}})}}\|\tilde u_{{\theta}^j{\omega}}\| $$ where we have used that there are $n$ visits to $A$ between ${\theta}^j{\omega}$ and $\Theta^n{\omega}$ for $j<n_A({\omega})$. Thus, $$ |{\mathbb E}_{\mu_{\omega}}[\Psi_{\omega}\cdot\Psi_{\Theta^n{\omega}}\circ {\mathcal T}_{{\omega}}^{n}]|\leq M \|\Psi_{\Theta^n{\omega}}\|_{L^1(\mu_{\Theta^n{\omega}})}(1-{\varepsilon})^n\sum_{j=0}^{n_A({\omega})-1}\|\tilde u_{{\theta}^j{\omega}}\|. $$ Next, we have $$ \|\Psi_{\Theta^n{\omega}}\|_{L^1(\mu_{\Theta^n{\omega}})} \leq \sum_{j=0}^{n_A({\omega})-1}\|\tilde u_{{\theta}^j{\omega}}\|_{L^1(\mu_{{\theta}^j{\omega}})}\leq \sum_{j=0}^{n_A({\omega})-1}\|\tilde u_{{\theta}^j{\omega}}\|. 
$$ Let \begin{equation}\label{I om} I({\omega})=\sum_{j=0}^{n_A({\omega})-1}\|\tilde u_{{\theta}^j{\omega}}\|=\sum_{j=0}^{n_A({\omega})-1}c({\theta}^j{\omega}). \end{equation} Then we conclude from the above estimates that $$ \left\|{\mathbb I}({\omega}\in A)\sum_{n=0}^\infty|{\mathbb E}_{\mu_{\omega}}[\Psi_{\omega}\cdot\Psi_{\Theta^n{\omega}}\circ{\mathcal T}_{\omega}^n]|\right\|_{L^1({\mathbb P})}\leq M{\mathbb E}[I({\omega})^2]\sum_{n=0}^\infty(1-{\varepsilon})^n. $$ To complete the proof of \eqref{Ver2} we notice that in the proof of \eqref{Ver1} we showed that $I({\omega})\in L^2({\Omega},{\mathcal F},{\mathbb P})$. Finally, let us verify condition \eqref{Ver3}. First, we have $$ \left\|{\mathbb I}({\omega}\in A)\sum_{n=0}^{\infty}{\mathbb E}_{\mu_{\omega}}(|\tilde L_{\Theta^{-n}{\omega}}^n\Psi_{\Theta^{-n}{\omega}}|)\right\|_{L^2({\mathbb P})}\leq \sum_{n=0}^{\infty}\left\|{\mathbb I}({\omega}\in A){\mathbb E}_{\mu_{\omega}}(|\tilde L_{\Theta^{-n}{\omega}}^n\Psi_{\Theta^{-n}{\omega}}|)\right\|_{L^2({\mathbb P})}. $$ Second, since ${\theta}$ preserves ${\mathbb P}$ and $\{\mu_{\omega}\}$ is an equivariant family, for each $n$ we have $$ \left\|{\mathbb I}({\omega}\in A){\mathbb E}_{\mu_{\omega}}(|\tilde L_{\Theta^{-n}{\omega}}^n\Psi_{\Theta^{-n}{\omega}}|)\right\|_{L^2({\mathbb P})}= \left\|{\mathbb I}(\Theta^n{\omega}\in A){\mathbb E}_{\mu_{\Theta^n{\omega}}}(|\tilde L_{{\omega}}^n\Psi_{{\omega}}|)\right\|_{L^2({\mathbb P})} $$ $$ \leq \sum_{j=0}^{n_A({\omega})-1}\left\|{\mathbb E}_{\mu_{\Theta^n{\omega}}}(|L_{{\theta}^j{\omega}}^{u_n({\omega})-j}\tilde u_{{\theta}^j{\omega}}|)\right\|_{L^2({\mathbb P})} $$ where $\Theta^n{\omega}={\theta}^{u_n({\omega})}{\omega}$, and in the last inequality we have used \eqref{PPsi}.
Now, since ${\theta}^{u_n}{\omega}\in A$ and there are exactly $n$ returns to $A$ between ``times'' $j$ and $u_n$ (since $j<n_A({\omega})$) we get from \eqref{Exp L} that $$ {\mathbb E}_{\mu_{\Theta^n{\omega}}}(|L_{{\theta}^j{\omega}}^{u_n({\omega})-j}\tilde u_{{\theta}^j{\omega}}|)\leq M(1-{\varepsilon})^n\|\tilde u_{{\theta}^j{\omega}}\|. $$ Thus, $$ \left\|{\mathbb I}({\omega}\in A){\mathbb E}_{\mu_{\omega}}(|\tilde L_{\Theta^{-n}{\omega}}^n\Psi_{\Theta^{-n}{\omega}}|)\right\|_{L^2({\mathbb P})}\leq M(1-{\varepsilon})^n\|I(\cdot)\|_{L^2({\mathbb P})}, $$ where $I({\omega})$ was defined in \eqref{I om}. Combining the above estimates we conclude that $$ \left\|{\mathbb I}({\omega}\in A)\sum_{n=0}^{\infty}{\mathbb E}_{\mu_{\omega}}(|\tilde L_{\Theta^{-n}{\omega}}^n\Psi_{\Theta^{-n}{\omega}}|)\right\|_{L^2({\mathbb P})}\leq\|I(\cdot)\|_{L^2({\mathbb P})}M\sum_{n=0}^\infty(1-{\varepsilon})^n<\infty $$ and the proof of \eqref{Ver3} is completed. \end{proof} \subsubsection{Tail estimates using upper mixing coefficients: proof of Theorem \ref{CLT}}\label{Tails} In this section we will show that condition \eqref{Series} in Lemma \ref{LV} is valid under the assumptions of Theorem \ref{CLT}. This together with Lemma \ref{LV} and \cite[Theorem 2.3]{Kifer 1998} will complete the proof of Theorem \ref{CLT}. Before we obtain upper bounds on the tail probabilities ${\mathbb P}(n_A>j)$, let us note that \begin{equation}\label{GenForm} {\mathbb P}(n_A>j)={\mathbb P}\left(\bigcap_{k=1}^{j}{\theta}^{-k}({\Omega}\setminus A)\right). \end{equation} \subsubsection{Proof of Theorem \ref{CLT} under Assumption (M1)} We first need the following result. \begin{lemma}\label{L alpha} Let $I_1,I_2,...,I_m$, $m\geq 2$ be finite subsets of ${\mathbb N}$ so that $I_i$ is to the left of $I_{i+1}$ and the gap between them is at least $L$ for some $L>0$. Let $A_1,A_2,...,A_m$ be sets of the same probability $p={\mathbb P}(A_i)$ so that $A_i$ is measurable with respect to ${\sigma}\{X_j:\,j\in I_i\}$.
Then $$ {\mathbb P}\left(\bigcap_{i=1}^{m} A_i\right)\leq p^{m}+{\alpha}_U(L)\sum_{j=0}^{m-2}p^j\leq p^{m}+{\alpha}_U(L)\frac{1}{1-p}. $$ \end{lemma} \begin{proof} We will prove the lemma by induction on $m$. For $m=2$ by the definition of ${\alpha}_U(\cdot)$ we have $$ {\mathbb P}(A_1\cap A_2)\leq {\mathbb P}(A_1){\mathbb P}(A_2)+{\alpha}_U(L) $$ which coincides with the desired upper bound for $m=2$. Next, suppose that the lemma is true for some $m\in{\mathbb N}$ and let $I_1,...,I_{m+1}$ be sets with minimal gap greater or equal to some $L$, and measurable sets $A_1,...,A_{m+1}$ with the same probability $p$ so that $A_i$ is measurable with respect to ${\sigma}\{X_j:\,j\in I_i\}$. Then by the definition of ${\alpha}_U(\cdot)$ we have $$ {\mathbb P}\left(\bigcap_{i=1}^{m+1} A_i\right)\leq {\mathbb P}\left(\bigcap_{i=1}^{m} A_i\right){\mathbb P}(A_{m+1})+{\alpha}_U(L) $$ $$ \leq \left(p^{m}+{\alpha}_U(L)\sum_{j=0}^{m-2}p^j\right)p+{\alpha}_U(L)= p^{m+1}+{\alpha}_U(L)\sum_{j=0}^{m-1}p^j=p^{m+1}+{\alpha}_U(L)\sum_{j=0}^{(m+1)-2}p^j $$ where in the last inequality we have used the induction hypothesis with the sets $A_1,...,A_m$ and that ${\mathbb P}(A_{m+1})=p$. \end{proof} \begin{corollary}\label{Cor3} Under Assumption \ref{Sets Ass}, condition \eqref{Series} holds true under the assumption that \begin{equation}\label{Conv1} \sum_{j}(\ln j\,\beta_{j/(3C\ln j)})^{1-2/p}<\infty\,\, \text{ and }\,\, \sum_{j}({\alpha}_U(j/(3C\ln j)))^{1-2/p}<\infty \end{equation} for some constant $C$ so that $C|\ln\big(1-{\mathbb P}(A)/2\big)|(1-2/p)>1$. \end{corollary} \begin{proof} First, for all integers $s\geq 1$ we have $$ {\mathbb P}(n_A>j)={\mathbb P}\left(\bigcap_{k=1}^{j}{\theta}^{-k}({\Omega}\setminus A)\right)\leq {\mathbb P}\left(\bigcap_{k=1}^{[j/s]}{\theta}^{-ks}({\Omega}\setminus A)\right). $$ Now, let us take $s$ of the form $s=3r$ for $r\in{\mathbb N}$.
Then $$ {\mathbb P}\left(\bigcap_{k=1}^{[j/s]}{\theta}^{-ks}({\Omega}\setminus A)\right)\leq {\mathbb P}\left(\bigcap_{k=1}^{[j/s]}{\theta}^{-ks}({\Omega}\setminus A_r)\right)+[j/s]\beta_r $$ where $A_r$ and $\beta_r$ come from Assumption \ref{Sets Ass}. Thus, \begin{equation}\label{UPB} {\mathbb P}(n_A>j)={\mathbb P}\left(\bigcap_{k=1}^{j}{\theta}^{-k}({\Omega}\setminus A)\right)\leq {\mathbb P}\left(\bigcap_{k=1}^{[j/s]}{\theta}^{-ks}({\Omega}\setminus A_r)\right)+[j/s]\beta_r. \end{equation} Next, by Lemma \ref{L alpha} we have $$ {\mathbb P}\left(\bigcap_{k=1}^{[j/s]}{\theta}^{-ks}({\Omega}\setminus A_r)\right)\leq \left(1-{\mathbb P}(A_r)\right)^{[j/s]}+\frac{{\alpha}_U(r)}{1-{\mathbb P}(A_r)}. $$ Next, let us take $s$ of the form $s=s_j=C^{-1}[j/\ln j]$ for some $C>0$. Using that $\lim_{r\to\infty}{\mathbb P}(A_r)={\mathbb P}(A)>0$ we get that for all $j$ large enough we have $\frac12(1-{\mathbb P}(A))\leq 1-{\mathbb P}(A_r)\leq1-\frac12{\mathbb P}(A)$. We thus see that for $j$ large enough we have $$ {\mathbb P}(n_A>j)\leq \left(1-\frac12{\mathbb P}(A)\right)^{C\ln j}+\frac2{1-{\mathbb P}(A)}{\alpha}_U([j/(3C\ln j)])+[j/s]\beta_r. $$ Now let us take $C$ so that $C|\ln\big(1-\frac12{\mathbb P}(A)\big)|(1-2/p)>1$. Then the series $\sum_j\left(1-\frac12{\mathbb P}(A)\right)^{C(1-2/p)\ln j}$ converges and now the convergence of the series in \eqref{Series} follows from \eqref{Conv1}. \end{proof} \begin{proof}[Proof of Theorem \ref{CLT} under Assumption (M1)] By Corollary \ref{Cor3} condition \eqref{Series} in Lemma \ref{LV} is valid. The proof of Theorem \ref{CLT} in this case follows now by combining Lemma \ref{LV} and \cite[Theorem 2.3]{Kifer 1998}. \end{proof} \subsubsection{Proof of Theorem \ref{CLT} under Assumption (M2)} \begin{lemma}\label{L phi} Let $I_1,I_2,...,I_m$, $m\geq 2$ be finite subsets of ${\mathbb N}$ so that $I_i$ is to the left of $I_{i+1}$ and the gap between them is at least $L$ for some $L>0$.
Let $A_1,A_2,...,A_m$ be sets of the same probability $p={\mathbb P}(A_i)$ so that $A_i$ is measurable with respect to ${\sigma}\{X_j:\,j\in I_i\}$. Then $$ {\mathbb P}\left(\bigcap_{i=1}^{m} A_i\right)\leq \left(p+\phi_U(L)\right)^{m-1}. $$ \end{lemma} \begin{proof} We will prove the lemma by induction on $m$. For $m=2$ the lemma follows from the definition of $\phi_U$. Now, suppose that the lemma is true for some $m$. Let $I_1,...,I_{m+1}$ be sets with minimal gap greater than or equal to some $L$, and measurable sets $A_1,...,A_{m+1}$ with the same probability $p$ so that $A_i$ is measurable with respect to ${\sigma}\{X_j:\,j\in I_i\}$. Then by the definition of $\phi_U$ we have $$ {\mathbb P}\left(\bigcap_{i=1}^{m} A_i\cap A_{m+1}\right)\leq {\mathbb P}\left(\bigcap_{i=1}^{m} A_i\right)\left({\mathbb P}(A_{m+1})+\phi_U(L)\right) $$ and now the proof of the induction step is completed by using the induction hypothesis with the sets $A_1,...,A_m$. \end{proof} \begin{corollary}\label{Cor2} Suppose that $$ \limsup_{r\to\infty}\phi_U(r)<{\mathbb P}(A) $$ and that $\sum_{j}(\ln j\beta_{j/(3C\ln j)})^{1-2/p}<\infty$ for some $C$ so that $C|\ln {\delta}|(1-2/p)>1$, where ${\delta}=1-{\mathbb P}(A)+\limsup_{r\to\infty}\phi_U(r)$. Then the series on the left hand side of \eqref{Series} converges. \end{corollary} \begin{proof} As in the beginning of the proof of Corollary \ref{Cor3}, for every $s\in{\mathbb N}$ of the form $s=3r$ we have $$ {\mathbb P}(n_A>j)={\mathbb P}\left(\bigcap_{k=1}^{j}{\theta}^{-k}({\Omega}\setminus A)\right)\leq {\mathbb P}\left(\bigcap_{k=1}^{[j/s]}{\theta}^{-ks}({\Omega}\setminus A_r)\right)+[j/s]\beta_r. $$ Now, by Lemma \ref{L phi} we have $$ {\mathbb P}\left(\bigcap_{k=1}^{[j/s]}{\theta}^{-ks}({\Omega}\setminus A_r)\right)\leq \left(1-{\mathbb P}(A_r)+\phi_U(r)\right)^{[j/s]}.
$$ Next, since $\lim_{r\to\infty}{\mathbb P}(A_r)={\mathbb P}(A)$ and $\limsup_{r\to\infty}\phi_U(r)<{\mathbb P}(A)$ we see that $$ \limsup_{r\to\infty}\left(1-{\mathbb P}(A_r)+\phi_U(r)\right)={\delta}<1. $$ Thus, if we take $s$ of the form $s=s_j=C^{-1}[j/\ln j]$, then for $j$ large enough we have $$ {\mathbb P}(n_A>j)\leq {\delta}^{[j/s]}+[j/s]\beta_{[s/3]}. $$ If we take $C$ so that $C|\ln{\delta}|(1-2/p)>1$ we get that the series $\sum_j {\delta}^{j(1-2/p)/s_j}$ converges. Now the proof of the corollary is complete since the series $\sum_j([j/s_j]\beta_{[s_j/3]})^{1-2/p}$ converges by the assumptions of the corollary. \end{proof} \begin{proof}[Proof of Theorem \ref{CLT} under Assumption (M2)] By Corollary \ref{Cor2} condition \eqref{Series} in Lemma \ref{LV} is valid. Now the proof of Theorem \ref{CLT} under (M2) follows by combining Lemma \ref{LV} and \cite[Theorem 2.3]{Kifer 1998}. \end{proof} \subsubsection{Proof of Theorem \ref{CLT} under Assumption (M3)} We first need the following result. \begin{lemma}\label{L psi} Let $I_1,I_2,...,I_m$, $m\geq 2$ be finite subsets of ${\mathbb N}$ so that $I_i$ is to the left of $I_{i+1}$ and the gap between them is at least $L$ for some $L>0$. Let $A_1,A_2,...,A_m$ be sets so that $A_i$ is measurable with respect to ${\sigma}\{X_j:\,j\in I_i\}$. Then $$ {\mathbb P}\left(\bigcap_{i=1}^{m} A_i\right)\leq (1+\psi_U(L))^{m-1}\prod_{i=1}^{m}{\mathbb P}(A_i). $$ Hence, if ${\mathbb P}(A_i)=p$ for all $i$ and some $p$ then $$ {\mathbb P}\left(\bigcap_{i=1}^{m} A_i\right)\leq p\left(p(1+\psi_U(L))\right)^{m-1}. $$ \end{lemma} \begin{proof} The lemma follows directly by induction and the definition of $\psi_U$. \end{proof} \begin{corollary}\label{Cor1} Suppose that $\limsup_{k\to\infty}\psi_U(k)<\frac 1{1-{\mathbb P}(A)}-1$ and that $\sum_{j}(\ln j\beta_{j/(3C\ln j)})^{1-2/p}<\infty$ for some $C$ so that $C|\ln{\delta}|(1-2/p)>1$, where ${\delta}=\left(1+\limsup_{r\to\infty}\psi_U(r)\right)\left(1-{\mathbb P}(A)\right)<1$.
Then the series on the left hand side of \eqref{Series} converges. \end{corollary} \begin{proof} As in the beginning of the proof of Corollary \ref{Cor3}, for all $s=3r$ we have $$ {\mathbb P}(n_A>j)={\mathbb P}\left(\bigcap_{k=1}^{j}{\theta}^{-k}({\Omega}\setminus A)\right)\leq {\mathbb P}\left(\bigcap_{k=1}^{[j/s]}{\theta}^{-ks}({\Omega}\setminus A_r)\right)+[j/s]\beta_r. $$ Next, by applying Lemma \ref{L psi}, we have $$ {\mathbb P}\left(\bigcap_{k=1}^{[j/s]}{\theta}^{-ks}({\Omega}\setminus A_r)\right)\leq (1+\psi_U(r))^{[j/s]-1}(1-{\mathbb P}(A)+\beta_r)^{[j/s]}:=q_{s,j} $$ where we have used that $|{\mathbb P}(A)-{\mathbb P}(A_r)|\leq \beta_r$. Now, let us take $s=s_j=C^{-1}[j/\ln j]$ for some $C>0$. Then, for any fixed ${\varepsilon}>0$ small enough so that ${\delta}+{\varepsilon}<1$, when $j$ is large enough we have $$ \left(1+\psi_U(r)\right)\left(1-{\mathbb P}(A)+\beta_r\right)\leq {\delta}+{\varepsilon}<1 $$ where ${\delta}=\left(1+\limsup_{r\to\infty}\psi_U(r)\right)\left(1-{\mathbb P}(A)\right)<1$. Thus, if also $C|\ln{\delta}|(1-2/p)>1$, by taking a sufficiently small ${\varepsilon}$ we get that both series $\sum_j (q_{s_j,j})^{1-2/p}$ and $\sum_{j}([j/s_j]\beta_{[s_j/3]})^{1-2/p}$ converge, and the proof of the corollary is complete. \end{proof} \begin{proof}[Proof of Theorem \ref{CLT} under Assumption (M3)] By the previous corollary condition \eqref{Series} in Lemma \ref{LV} is valid. The proof of Theorem \ref{CLT} in this case follows now by combining Lemma \ref{LV} and \cite[Theorem 2.3]{Kifer 1998}. \end{proof} \begin{remark} The proofs of Corollaries \ref{Cor3}, \ref{Cor2} and \ref{Cor1} show that if $A$ is measurable with respect to ${\sigma}\{X_j, |j|\leq d\}$ for some $d$ then ${\mathbb P}(n_A>j)$ decays exponentially fast in $j$ under the other assumptions of the corollaries (since we can take $\beta_r=0$ if $r>d$).
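For instance, under Assumption (M3) the decay rate can be made explicit (this is a direct specialization of the argument above, not an additional assumption): choose a fixed $r>d$ large enough so that $$ {\delta}_r:=\left(1+\psi_U(r)\right)\left(1-{\mathbb P}(A)\right)<1, $$ which is possible under the assumptions of Corollary \ref{Cor1}, and set $s=3r$. Since $\beta_r=0$, Lemma \ref{L psi} applied with the sets ${\theta}^{-ks}({\Omega}\setminus A)$, $1\leq k\leq [j/s]$, yields $$ {\mathbb P}(n_A>j)\leq \left(1-{\mathbb P}(A)\right){\delta}_r^{[j/s]-1} $$ for every $j\geq 2s$, which is exponential decay in $j$.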
\end{remark} \subsection{A second approach to the CLT and LIL: a direct proof of Theorem \ref{CLT2}} The idea in the proof of Theorem \ref{CLT2} is to verify the conditions of \cite[Theorem 2.3]{Kifer 1998} when $Q={\Omega}$, namely when there is no actual inducing involved. This requires us to verify the following three conditions: \begin{equation}\label{Ver1 2} \left\|c({\omega})\right\|_{L^2({\mathbb P})}<\infty,\, c({\omega})=\|\tilde u_{\omega}\| \end{equation} \begin{equation}\label{Ver2 2} \left\|\sum_{n=0}^\infty|{\mathbb E}_{\mu_{\omega}}[\tilde u_{\omega}\cdot\tilde u_{{\theta}^n{\omega}}\circ T_{\omega}^n]|\right\|_{L^1({\mathbb P})}<\infty \end{equation} and \begin{equation}\label{Ver3 2} \left\|\sum_{n=0}^{\infty}{\mathbb E}_{\mu_{\omega}}(|L_{{\theta}^{-n}{\omega}}^n\tilde u_{{\theta}^{-n}{\omega}}|)\right\|_{L^2({\mathbb P})}<\infty. \end{equation} Next, recall our assumption about the existence of a sequence $\beta_r$ so that $\beta_r\to 0$ and for every $r$ there is a random variable $\rho_r({\omega})$ which is measurable with respect to ${\sigma}\{X_j: |j|\leq r\}$ so that \begin{equation}\label{beta r inf} \|\rho-\rho_r\|_{L^\infty}\leq \beta_r. \end{equation} The first condition \eqref{Ver1 2} is a part of the assumptions of Theorem \ref{CLT2}. In order to verify conditions \eqref{Ver2 2} and \eqref{Ver3 2} we first need the following result. \begin{lemma}\label{psi Lemm 2} Let $I_1,...,I_d$ be intervals in the positive integers so that $I_j$ is to the left of $I_{j+1}$ and the distance between them is at least $L$. Let $Y_1,...,Y_d$ be nonnegative bounded random variables so that $Y_i$ is measurable with respect to ${\sigma}\{X_k: k\in I_i\}$. Then $$ {\mathbb E}\left[\prod_{i=1}^{d}Y_i\right]\leq\left(1+\psi_U(L)\right)^{d-1}\prod_{i=1}^{d}{\mathbb E}[Y_i]. $$ \end{lemma} \begin{proof} Once we prove the lemma for $d=2$ the general case will follow by induction. Let us assume that $d=2$. 
Next, we have $$ Y_i=\lim_{n\to\infty}Y_i(n)=\lim_{n\to\infty}\sum_k{\mathbb I}((k-1)2^{-n}<Y_i\leq k2^{-n})k2^{-n} $$ and so with ${\alpha}_i(k,n)=\{(k-1)2^{-n}<Y_i\leq k2^{-n}\}$, by the monotone convergence theorem we have $$ {\mathbb E}[Y_1Y_2]=\lim_{n\to\infty}{\mathbb E}[Y_1(n)Y_2(n)]=\lim_{n\to\infty}\sum_{k_1,k_2}(2^{-n}k_1)(2^{-n}k_2){\mathbb P}({\alpha}_1(k_1,n)\cap{\alpha}_2(k_2,n))$$$$\leq \lim_{n\to\infty}\sum_{k_1,k_2}(2^{-n}k_1)(2^{-n}k_2)(1+\psi_U(L)){\mathbb P}({\alpha}_1(k_1,n)){\mathbb P}({\alpha}_2(k_2,n))$$$$= (1+\psi_U(L))\lim_{n\to\infty}{\mathbb E}[Y_1(n)]{\mathbb E}[Y_2(n)]=\left(1+\psi_U(L)\right){\mathbb E}[Y_1]{\mathbb E}[Y_2] $$ where in the above inequality we have used the definition of the upper mixing coefficients $\psi_U(\cdot)$. \end{proof} Next, we need the following \begin{lemma}\label{L2} Suppose that $$ \limsup_{s\to\infty}\psi_{U}(s)<\infty $$ and that with some ${\delta}>0$ we have $\|u_{\omega}\|, U({\omega})\in L^{3+{\delta}}({\Omega},{\mathcal F},{\mathbb P})$. Then conditions \eqref{Ver2 2} and \eqref{Ver3 2} are in force. \end{lemma} \begin{proof}[Proof of Theorem \ref{CLT2}] The proof of Theorem \ref{CLT2} is completed now by combining Lemma \ref{L2} with \cite[Theorem 2.3]{Kifer 1998} in the case $Q={\Omega}$. \end{proof} \begin{proof}[Proof of Lemma \ref{L2}] Since $0<\rho(\cdot)<1$ we have $\lim_{q\to\infty}\rho^q=0$ and so by the monotone convergence theorem $$ \lim_{q\to\infty}{\mathbb E}_{{\mathbb P}}[\rho^q]=0. $$ Thus, since the limit superior of $\psi_U$ is finite, if $q$ is large enough then we have $$ \limsup_{r\to\infty}\psi_{U}(r)<\frac1{{\mathbb E}_{{\mathbb P}}[\rho^q]}-1. $$ Let us take $q$ large enough so that its conjugate exponent $p$ satisfies $3p\leq 3+{\delta}$, where ${\delta}$ comes from the assumptions of the lemma (and Theorem \ref{CLT2}). Next, to show that condition \eqref{Ver2 2} is in force, let us fix some $n\geq0$.
We first note that by \eqref{DEC} we have $$ |{\mathbb E}_{\mu_{\omega}}[\tilde u_{\omega}\cdot\tilde u_{{\theta}^n{\omega}}\circ T_{\omega}^n]|\leq \|\tilde u_{\omega}\|\|\tilde u_{{\theta}^n{\omega}}\|_{L^1(\mu_{{\theta}^n{\omega}})}U({\theta}^n{\omega})\prod_{j=0}^{n-1}\rho({\theta}^j{\omega}). $$ Next, by applying the generalized H\"older inequality with the exponents $q_1=q_2=q_3=3p$ and $q_4=q$ we get that $$ {\mathbb E}_{{\mathbb P}}\left[\|\tilde u_{\omega}\|\|\tilde u_{{\theta}^n{\omega}}\|_{L^1(\mu_{{\theta}^n{\omega}})}U({\theta}^n{\omega})\prod_{j=0}^{n-1} \rho({\theta}^j{\omega})\right]\leq \|c(\cdot)\|_{3p}^2\|U(\cdot)\|_{3p} \left({\mathbb E}_{{\mathbb P}}\left[\prod_{j=0}^{n-1}\rho^q({\theta}^j{\omega})\right]\right)^{1/q} $$ where we have used that $\|\tilde u_{{\theta}^n{\omega}}\|_{L^1(\mu_{{\theta}^n{\omega}})}\leq c({\theta}^n{\omega})$ together with the stationarity of ${\mathbb P}$. Now, for all $s$ of the form $s=3r$ we have $$ {\mathbb E}_{{\mathbb P}}\left[\prod_{j=0}^{n-1}\rho^q({\theta}^j{\omega})\right]\leq {\mathbb E}_{{\mathbb P}}\left[\prod_{j=1}^{[(n-1)/s]}\rho^q({\theta}^{js}{\omega})\right]\leq {\mathbb E}_{{\mathbb P}}\left[\prod_{j=1}^{[(n-1)/s]}(\rho_r^q({\theta}^{js}{\omega})+C_q\beta_r)\right] $$ $$ \leq \left(1+\psi_U(r)\right)^{[(n-1)/s]-1}\prod_{j=1}^{[(n-1)/s]}({\mathbb E}_{{\mathbb P}}[\rho_r^q]+C_q\beta_r) $$ where in the last inequality we have used Lemma \ref{psi Lemm 2}, and $C_q$ is a constant that depends only on $q$. Since $\lim_{r\to\infty}{\mathbb E}_{{\mathbb P}}[\rho_r^q]={\mathbb E}_{{\mathbb P}}[\rho^q]$, $\lim_{r\to\infty}\beta_r=0$ and $(1+\psi_U(r)){\mathbb E}_{{\mathbb P}}[\rho^{q}]<1$, by fixing a sufficiently large $s=s_0$ we conclude that $$ {\mathbb E}_{{\mathbb P}}[|{\mathbb E}_{\mu_{\omega}}[\tilde u_{\omega}\cdot\tilde u_{{\theta}^n{\omega}}\circ T_{\omega}^n]|]\leq C(1-{\varepsilon})^n $$ for some constants ${\varepsilon}\in(0,1)$ and $C>0$ (which depend on $s_0$ and $q$), and thus Condition \eqref{Ver2 2} is in force.
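Indeed, summing the last bound over $n$ and using the monotone convergence theorem yields \eqref{Ver2 2} explicitly: $$ \left\|\sum_{n=0}^\infty|{\mathbb E}_{\mu_{\omega}}[\tilde u_{\omega}\cdot\tilde u_{{\theta}^n{\omega}}\circ T_{\omega}^n]|\right\|_{L^1({\mathbb P})}=\sum_{n=0}^{\infty}{\mathbb E}_{{\mathbb P}}\left[|{\mathbb E}_{\mu_{\omega}}[\tilde u_{\omega}\cdot\tilde u_{{\theta}^n{\omega}}\circ T_{\omega}^n]|\right]\leq C\sum_{n=0}^{\infty}(1-{\varepsilon})^n=\frac{C}{{\varepsilon}}<\infty.
$$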
Next, in order to verify condition \eqref{Ver3 2}, by Theorem \ref{RPF} we have $$ |L_{{\theta}^{-n}{\omega}}^n\tilde u_{{\theta}^{-n}{\omega}}|\leq U({\omega})\|\tilde u_{{\theta}^{-n}{\omega}}\|\prod_{j=1}^{n}\rho({\theta}^{-j}{\omega}) $$ and so by the H\"older inequality, $$ \left\|{\mathbb E}_{\mu_{\omega}}(|L_{{\theta}^{-n}{\omega}}^n\tilde u_{{\theta}^{-n}{\omega}}|)\right\|_{L^2({\mathbb P})}\leq \|U({\omega})\|_{L^{2p}({\mathbb P})}\|\tilde u_{\omega}\|_{L^{2p}({\mathbb P})}\left\|\prod_{j=1}^{n}\rho({\theta}^{-j}{\omega})\right\|_{L^{q}({\mathbb P})}. $$ Due to stationarity we have $$ \left\|\prod_{j=1}^{n}\rho({\theta}^{-j}{\omega})\right\|_{L^q({\mathbb P})}=\left\|\prod_{j=0}^{n-1}\rho({\theta}^{j}{\omega})\right\|_{L^q({\mathbb P})}=\left({\mathbb E}_{{\mathbb P}}\left[\prod_{j=0}^{n-1}\rho^q({\theta}^j{\omega})\right]\right)^{1/q} =O((1-{\varepsilon})^n) $$ where the last estimate was obtained in the course of the proof of \eqref{Ver2 2}. This completes the proof of \eqref{Ver3 2}. \end{proof} \subsection{An almost sure invariance principle: proof of Theorem \ref{ASIP}} Let $\beta_r$ satisfy \eqref{beta r inf}. \subsubsection{Key auxiliary result} Before proving Theorem \ref{ASIP} we need the following result. \begin{lemma}\label{MomLem} Under the Assumptions of Theorem \ref{ASIP} we have the following. (i) Let $R_n({\omega})=\sum_{j=0}^{n-1}\rho({\theta}^j{\omega})\cdots \rho({\theta}^{n-1}{\omega})$. Then for every $p\in{\mathbb N}$, \begin{equation}\label{Mom0} {\mathbb E}_{{\mathbb P}}[R_n^p]\leq C_p \end{equation} for some constant $C_p$ which does not depend on $n$.
Therefore, for every ${\varepsilon}>0$ we have $$ R_n({\omega})=o(n^{\varepsilon}),\, {\mathbb P}-\text{a.s.} $$ (ii) For every pair of positive integers $(n,m)$ such that $m\leq n$ let $$ R_{m,n}({\omega})=\sum_{k=m}^{n}\sum_{j=k}^{n}\rho({\theta}^k{\omega})\cdot \rho({\theta}^{k+1}{\omega})\cdots \rho({\theta}^{j}{\omega})=\sum_{m\leq k\leq j\leq n} \rho({\theta}^{k}{\omega})\cdots \rho({\theta}^{j}{\omega}). $$ Then for every $\ell\in{\mathbb N}$, ${\varepsilon}>0$ and a positive integer $p$ we have \begin{equation}\label{Mom} {\mathbb E}_{\mathbb P}\left[\sup_{(n,m):\,0\leq n-m\leq \ell}n^{-(1+{\varepsilon})}R_{m,n}^p\right]\leq C_{p,{\varepsilon}}\ell^{1+p} \end{equation} for some constant $C_{p,{\varepsilon}}>0$ which depends only on $p$ and ${\varepsilon}$. Therefore, ${\mathbb P}$-a.s. for every ${\varepsilon}>0$, uniformly in $n$ and $m$ as $(n-m)\to\infty$ we have $$ n^{-{\varepsilon}}R_{m,n}({\omega})=O\left((n-m)^{1+{\varepsilon}}\right),\,\,{\mathbb P}\text{-a.s.} $$ \end{lemma} \begin{proof} (i) First, the almost sure estimate $R_n({\omega})=o(n^{\varepsilon})$ follows from \eqref{Mom0} and the Borel Cantelli Lemma. Indeed, by taking $p>\frac 1{\varepsilon}$ and applying the Markov inequality we arrive at $$ {\mathbb P}(R_n\geq n^{\varepsilon})={\mathbb P}(R_n^p\geq n^{{\varepsilon} p})\leq C_pn^{-p{\varepsilon}}. $$ In order to prove \eqref{Mom0}, let us take $s\in{\mathbb N}$ of the form $s=3r$. 
Then, since $0<\rho(\cdot)<1$, $$ {\mathbb E}_{{\mathbb P}}[R_n^p]=\sum_{0\leq j_1\leq j_2\leq...\leq j_p<n}{\mathbb E}_{{\mathbb P}}\left[\prod_{k=1}^{p}\prod_{u=j_{k}}^{n-1}\rho({\theta}^u{\omega})\right] \leq \sum_{0\leq j_1\leq j_2\leq...\leq j_p<n}{\mathbb E}_{{\mathbb P}}\left[\prod_{u=j_1}^{n-1}\rho({\theta}^u{\omega})\right] $$ $$ \leq \sum_{0\leq j_1\leq j_2\leq...\leq j_p<n}{\mathbb E}_{{\mathbb P}}\left[\prod_{v=0}^{[(n-1-j_1)/s]}\rho({\theta}^{j_1+sv}{\omega})\right] \leq \sum_{0\leq j_1\leq j_2\leq...\leq j_p<n}{\mathbb E}_{{\mathbb P}}\left[\prod_{v=0}^{[(n-1-j_1)/s]}(\rho_r({\theta}^{j_1+sv}{\omega})+\beta_r)\right] $$ $$ \leq \sum_{0\leq j_1\leq j_2\leq...\leq j_p<n}(1+\psi_U(r))^{[(n-j_1-1)/s]}\prod_{v=0}^{[(n-1-j_1)/s]}{\mathbb E}_{{\mathbb P}}\left[(\rho_r({\theta}^{j_1+sv}{\omega})+\beta_r)\right] $$ $$ = \sum_{0\leq j_1\leq j_2\leq...\leq j_p<n}(1+\psi_U(r))^{[(n-j_1-1)/s]}a_r^{[(n-1-j_1)/s]+1} $$ where $a_r={\mathbb E}[\rho_r]+\beta_r$ and in the last inequality we have used Lemma \ref{psi Lemm 2}. Taking $s$ large enough so that $a_r(1+\psi_U(r))={\delta}<1$ (using \eqref{LS}) and using that $n-1-j_1\geq n-1-j_i$ for $i=1,2,...,p$ we conclude that $$ {\mathbb E}_{{\mathbb P}}[R_n^p]\leq \sum_{0\leq j_1\leq j_2\leq...\leq j_p<n}b^{\sum_{i=1}^{p}(n-1-j_i)}\leq\left(\sum_{j=0}^{n-1}b^{n-1-j}\right)^p\leq C_p(b)=\frac{1}{(1-b)^p} $$ where $b=b_{p,s}={\delta}^{1/sp}\in(0,1)$. \vskip0.1cm (ii) First, the almost sure estimate $R_{m,n}({\omega})=O(n^{1/p+{\varepsilon}}(n-m)^{1+{\varepsilon}})$ follows from \eqref{Mom} and the Borel Cantelli Lemma. Indeed, for all $A>0$ we have $$ {\mathbb P}\left(\sup_{(n,m):\,0\leq n-m\leq \ell}n^{-(1+{\varepsilon})}R_{m,n}^p\geq A^p\right)=O(\ell^{p+1}A^{-p}) $$ and so for $A_\ell=\ell^{1+\frac{3}{p}}$ we have $$ {\mathbb P}\left(\sup_{(n,m):\,0\leq n-m\leq \ell}n^{-(1+{\varepsilon})/p}R_{m,n}\geq A_\ell\right)\leq C\ell^{-2}.
$$ Now, given ${\varepsilon}>0$, by taking $p$ large enough we conclude from the Borel Cantelli Lemma that $$ \sup_{(n,m):\,0\leq n-m\leq \ell}n^{-(1+{\varepsilon})/p}R_{m,n}=O(\ell^{1+{\varepsilon}}),\,\,{\mathbb P}\text{-a.s.} $$ Thus, ${\mathbb P}$-a.s. there is a constant $C$ so that for a given $n$ and $m$ with $n-m$ large enough we have $R_{m,n}\leq C(n-m)^{1+{\varepsilon}}n^{1/p+{\varepsilon}/p}$. Finally, by taking $p$ large enough we can also ensure that $(1+{\varepsilon})/p<{\varepsilon}$. Next, in order to prove \eqref{Mom}, we have \begin{equation}\label{Tg} {\mathbb E}_{\mathbb P}\left[\sup_{(n,m):\,0\leq n-m\leq \ell}n^{-(1+{\varepsilon})}R_{m,n}^p\right]\leq \sum_{(n,m):\, 0\leq n-m\leq \ell}n^{-(1+{\varepsilon})}{\mathbb E}[R_{m,n}^p]=\sum_{n=1}^{\infty}n^{-(1+{\varepsilon})}\sum_{m=n-\ell}^{n}{\mathbb E}[R_{m,n}^p] \end{equation} $$ \leq\left(\sum_{n=1}^{\infty}n^{-(1+{\varepsilon})}\right)(\ell+1)\sup_{(n,m):\, n-\ell\leq m\leq n}{\mathbb E}[R_{m,n}^p] \leq C_{\varepsilon} \ell\sup_{(n,m):\, n-\ell\leq m\leq n}{\mathbb E}[R_{m,n}^p]. $$ Next, let us estimate ${\mathbb E}[R_{m,n}^p]$ for a fixed pair of positive integers $(n,m)$ so that $n-\ell\leq m\leq n$. We first write $$ {\mathbb E}_{{\mathbb P}}[R_{m,n}^p]=\sum_{m\leq k_1,...,k_p\leq n}\,\sum_{k_i\leq j_i\leq n;\, 1\leq i\leq p}{\mathbb E}_{\mathbb P}\left[\prod_{i=1}^{p}\prod_{u=k_i}^{j_i}\rho({\theta}^{u}{\omega})\right]. $$ For a fixed choice of pairs $(k_i,j_i), i=1,2,...,p$ let $a=a(\{(k_i,j_i)\})$ be an index so that $j_a-k_a$ is the largest among $j_i-k_i$. Since $0<\rho({\omega})<1$, by disregarding the products $\prod_{u=k_i}^{j_i}\rho({\theta}^{u}{\omega})$ for $i\not=a$ we see that $$ {\mathbb E}_{{\mathbb P}}[R_{m,n}^p]\leq \sum_{m\leq k_1,...,k_p\leq n}\,\sum_{k_i\leq j_i\leq n;\, 1\leq i\leq p}{\mathbb E}_{{\mathbb P}}\left[\prod_{u=k_a}^{j_a}\rho({\theta}^{u}{\omega})\right].
$$ Next, since $0<\rho(\cdot)<1$, for all $s$ of the form $s=3r$, by \eqref{beta r inf} and Lemma \ref{psi Lemm 2} we have $$ {\mathbb E}_{\mathbb P}\left[\prod_{u_a=k_a}^{j_a}\rho({\theta}^{u_a}{\omega})\right]\leq {\mathbb E}_{\mathbb P}\left[\prod_{u=0}^{[(j_a-k_a)/s]}\rho({\theta}^{k_a+su}{\omega})\right] $$ $$ \leq {\mathbb E}_{\mathbb P}\left[\prod_{u=0}^{[(j_a-k_a)/s]}(\rho_r({\theta}^{k_a+su}{\omega})+\beta_r)\right] \leq (1+\psi_U(r))^{[(j_a-k_a)/s]}\left({\mathbb E}_{\mathbb P}[\rho_r]+\beta_r\right)^{[(j_a-k_a)/s]+1}. $$ Since $\limsup_{r\to\infty}(1+\psi_U(r))<\frac1{{\mathbb E}[\rho]}$ (by \eqref{LS}), by fixing some $s=s_0$ large enough we get that $(1+\psi_U(r))({\mathbb E}_{\mathbb P}[\rho_r]+\beta_r)={\delta}<1$. Thus, since $j_a-k_a$ is the maximal difference, we have $$ {\mathbb E}_{\mathbb P}\left[\prod_{u_a=k_a}^{j_a}\rho({\theta}^{u_a}{\omega})\right]\leq {\delta}^{[(j_a-k_a)/s]}\leq {\varepsilon}^{\sum_{i=1}^{p}(j_i-k_i)} $$ where ${\varepsilon}={\varepsilon}_{p,s}={\delta}^{\frac{1}{ps}}<1$. Hence for all $n,m$ so that $n-\ell\leq m\leq n$ we have $$ {\mathbb E}[R_{m,n}^p]\leq \sum_{m\leq k_1,...,k_p\leq n}\,\sum_{k_i\leq j_i\leq n;\, 1\leq i\leq p}{\varepsilon}^{\sum_{i=1}^{p}(j_i-k_i)} =\left(\sum_{k=m}^{n}\sum_{j=k}^{n}{\varepsilon}^{j-k}\right)^p=O((n-m+1)^p)=O(\ell^p) $$ which together with \eqref{Tg} completes the proof of the maximal moment estimates in (ii). \end{proof} \subsubsection{A martingale co-boundary representation} Let $\tilde u_{\omega}=u_{\omega}-\mu_{\omega}(u_{\omega})$. Set $$ G_{{\omega},n}=\sum_{j=0}^{n-1}L_{{\theta}^j{\omega}}^{n-j}(\tilde u_{{\theta}^j{\omega}}) $$ and $$ M_{{\omega},n}=\tilde u_{{\theta}^n{\omega}}+G_{{\omega},n}-G_{{\omega},n+1}\circ T_{{\theta}^n{\omega}}.
$$ Then for every fixed ${\omega}$ we have that $M_{{\omega},n}\circ T_{\omega}^n$ is a reverse martingale difference with respect to the reverse filtration ${\mathcal T}_{\omega}^n=(T_{\omega}^n)^{-1}({\mathcal B}_{{\theta}^n{\omega}})$, where ${\mathcal B}_{{\theta}^n{\omega}}$ is the Borel ${\sigma}$-algebra on ${\mathcal E}_{{\theta}^n{\omega}}$ (see \cite[Proposition 2]{Davor ASIP}). \begin{lemma}\label{M norm} If ${\omega}\to C_{\omega}$, ${\omega}\to U_{{\omega}}$, ${\omega}\to N({\omega})$ and ${\omega}\to \|\tilde u_{\omega}\|$ are in $L^p({\Omega},{\mathcal F},{\mathbb P})$ for some $p$ then for every ${\varepsilon}>0$ for ${\mathbb P}$-a.e. ${\omega}$ we have $$ \|M_{{\omega},n}^2\|=O(n^{8/p+{\varepsilon}}). $$ \end{lemma} \begin{proof} First by Theorem \ref{RPF}, \begin{equation}\label{G est} \|G_{{\omega},n}\|\leq U({\theta}^n{\omega})\sum_{j=0}^{n-1}\|\tilde u_{{\theta}^j{\omega}}\|\rho({\theta}^j{\omega})\cdots \rho({\theta}^{n-1}{\omega})\leq U({\theta}^n{\omega})u_n({\omega})R_n({\omega}) \end{equation} where $u_n({\omega})=\sup_{j\leq n}\|\tilde u_{{\theta}^j{\omega}}\|$ and $R_n$ is defined in Lemma \ref{MomLem} (i). Therefore by the definition \eqref{N def} of $N(\cdot)$ we have $$ \|G_{{\omega},n+1}\circ T_{{\theta}^n{\omega}}\|\leq \|G_{{\omega},n+1}\| N({\theta}^n{\omega})\leq U({\theta}^{n+1}{\omega})N({\theta}^n{\omega})u_{n+1}({\omega})R_{n+1}({\omega}). $$ We thus conclude that $$ \|M_{{\omega},n}^2\|\leq 3\|M_{{\omega},n}\|^2\leq A\left(U({\theta}^n{\omega})+U({\theta}^{n+1}{\omega})\right)^2\left(1+N({\theta}^n{\omega})\right)^2u_{n+1}^2({\omega})\left(R_n({\omega})+R_{n+1}({\omega})\right)^2 $$ where $A$ is an absolute constant.
Now the lemma follows by recalling that $U({\omega})=6U_{\omega} C_{\omega}$ (where $U_{\omega}$ and $C_{\omega}$ were defined in Sections \ref{Aux1} and \ref{Aux2}) and using Lemma \ref{MomLem} (i) together with the fact that for any random variable $Q({\omega})$, if $Q({\omega})\in L^p({\Omega},{\mathcal F},{\mathbb P})$ then $|Q({\theta}^n{\omega})|=o(n^{1/p})$,\, ${\mathbb P}$-almost surely (as a consequence of the Borel--Cantelli lemma and the stationarity of ${\mathbb P}$). \end{proof} Next, we need the following quadratic variation estimates. \begin{lemma}\label{QV} If ${\omega}\to C_{\omega}$, ${\omega}\to U_{{\omega}}$, ${\omega}\to N({\omega})$ and ${\omega}\to \|\tilde u_{\omega}\|$ are in $L^p({\Omega},{\mathcal F},{\mathbb P})$ for some $p$ then for every ${\varepsilon}>0$, for ${\mathbb P}$-a.e. ${\omega}$ we have $$ \sum_{k=0}^{n-1}{\mathbb E}_{\mu_{\omega}}[M_{{\omega},k}^2\circ T_{\omega}^k|{\mathcal T}_{\omega}^{k+1}]=\sum_{k=0}^{n-1}{\mathbb E}_{\mu_{\omega}}[M_{{\omega},k}^2\circ T_{\omega}^k]+o(n^{1/2+9/p+{\varepsilon}}\ln^{3/2+{\varepsilon}}n),\, \mu_{\omega}\text{ a.s.} $$ \end{lemma} \begin{proof} Set $$ A_{k,{\omega}}={\mathbb E}_{\mu_{\omega}}[M_{{\omega},k}^2\circ T_{\omega}^k|{\mathcal T}_{\omega}^{k+1}],\, B_{k,{\omega}}={\mathbb E}_{\mu_{\omega}}[A_{k,{\omega}}]={\mathbb E}_{\mu_{\omega}}[M_{{\omega},k}^2\circ T_{\omega}^k]=\mu_{{\theta}^k{\omega}}(M_{{\omega},k}^2) $$ and $Y_{k,{\omega}}=A_{k,{\omega}}-B_{k,{\omega}}$. Then, by \cite[Lemma 9]{CIRM paper}, in order to prove the lemma it is enough to show that for all $n>m$ and all ${\varepsilon}>0$ we have \begin{equation}\label{Q} \left\|\sum_{k=m}^{n}Y_{k,{\omega}}\right\|_{L^2(\mu_{\omega})}^2\leq C(n-m)n^{18/p+{\varepsilon}} \end{equation} where $C$ is a constant which may depend on ${\omega}$ and ${\varepsilon}$.
In order to prove \eqref{Q}, we first have $$ \left\|\sum_{k=m}^{n}Y_k\right\|_{L^2(\mu_{\omega})}^2=\text{Var}\left(\sum_{k=m}^{n}A_k\right)= \left\|\sum_{k=m}^{n}A_k\right\|_{L^2(\mu_{\omega})}^2-\left(\sum_{k=m}^n B_k\right)^2 $$ where we abbreviate $A_{k,{\omega}}=A_k$ and $B_{k,{\omega}}=B_k$. Thus, $$ \left\|\sum_{k=m}^{n}Y_k\right\|_{L^2(\mu_{\omega})}^2\leq \sum_{k=m}^{n}\mu_{\omega}(A_{k}^2)+2\left(\sum_{m\leq i<j\leq n}\mu_{\omega}(A_{i}A_{j})-\sum_{m\leq i<j\leq n}B_{i}B_{j}\right). $$ Next, arguing as in \cite[Lemma 6]{Davor ASIP} we have $$ A_i=L_{{\theta}^i{\omega}}(M_i^2)\circ T_{\omega}^{i+1} $$ where we abbreviate $M_i=M_{{\omega},i}$. Hence, by also using that $(T_{\omega}^{i+1})_*\mu_{{\omega}}=\mu_{{\theta}^{i+1}{\omega}}$ and that $L_{\omega}$ is the dual of $T_{\omega}$ (w.r.t. $\mu_{\omega}$) we see that $$ \mu_{\omega}(A_{i}A_{j})=\int L_{{\theta}^i{\omega}}(M_i^2)\cdot (L_{{\theta}^j{\omega}}(M_j^2)\circ T_{{\theta}^{i+1}{\omega}}^{j-i})d\mu_{{\theta}^{i+1}{\omega}} =\int L_{{\theta}^{i}{\omega}}^{j-i+1}(M_i^2)\cdot L_{{\theta}^j{\omega}}(M_j^2)d\mu_{{\theta}^{j+1}{\omega}}. $$ Now, by \eqref{Exp L} we have $$ \left\|L_{{\theta}^{i}{\omega}}^{j-i+1}(M_i^2)-\mu_{{\theta}^i{\omega}}(M_i^2)\right\|\leq U({\theta}^j{\omega})\|M_i^2\|\rho({\theta}^i{\omega})\cdots \rho({\theta}^{j}{\omega}). $$ Since $$ \int L_{{\theta}^j{\omega}}(M_j^2)d\mu_{{\theta}^{j+1}{\omega}}={\mathbb E}_{\mu_{\omega}}[A_{j}]=B_{j} $$ and $U({\theta}^j{\omega})=o(j^{2/p})$ (since $U({\omega})\in L^{2p}$) we conclude from the above estimates together with Lemma \ref{M norm} that when $j>i$, $$ |\mu_{\omega}(A_{i}A_{j})-B_iB_j|\leq U({\theta}^j{\omega})\|M_i^2\|\|M_j^2\|\rho({\theta}^i{\omega})\cdots \rho({\theta}^{j}{\omega})=O(n^{18/p+{\varepsilon}})\rho({\theta}^i{\omega})\cdots \rho({\theta}^{j}{\omega}) $$ for every ${\varepsilon}>0$.
Thus, $$ \left|\sum_{m\leq i<j\leq n}\mu_{\omega}(A_{i}A_{j})-\sum_{m\leq i<j\leq n}B_{i}B_{j}\right|\leq Cn^{18/p+{\varepsilon}}\sum_{m\leq i<j\leq n}\rho({\theta}^i{\omega})\cdots \rho({\theta}^{j}{\omega})\leq Cn^{18/p+{\varepsilon}}R_{m,n}({\omega}). $$ Now the proof of the lemma is completed using Lemma \ref{MomLem} (ii). \end{proof} \subsubsection{Proof of Theorem \ref{ASIP}} First, we have $$ S_n^{\omega} \tilde u=\sum_{j=0}^{n-1}M_{{\omega},j}\circ T_{\omega}^j+G_{{\omega},n}\circ T_{\omega}^n-G_{{\omega},0}. $$ Next, by \eqref{G est}, Lemma \ref{MomLem} (i) and the assumption that $C_{\omega}, U_{\omega}, \|\tilde u_{\omega}\|\in L^p({\Omega},{\mathcal F},{\mathbb P})$ we see that for every ${\varepsilon}>0$, $$ \|G_{{\omega},n}\|=o(n^{3/p+{\varepsilon}}), \,\text{a.s.} $$ and so \begin{equation}\label{Mapprox} \left\|S_n^{\omega} \tilde u-\sum_{j=0}^{n-1}M_{{\omega},j}\circ T_{\omega}^j\right\|_{L^\infty(\mu_{\omega})}=O(n^{3/p+{\varepsilon}}). \end{equation} In particular, with ${\sigma}_n^2={\sigma}_{n,{\omega}}^2={\mathbb E}_{\mu_{\omega}}[(S_n^{\omega}\tilde u)^2]$, for ${\varepsilon}$ small enough we have $$ \bar{\sigma}_n^2:=\left\|\sum_{j=0}^{n-1}M_{{\omega},j}\circ T_{\omega}^j\right\|_{L^2(\mu_{\omega})}^2=\sum_{j=0}^{n-1}{\mathbb E}_{\mu_{{\theta}^j{\omega}}}[M_{{\omega},j}^2]={\sigma}_n^2+O(n^{1/2+3/p+{\varepsilon}})\asymp {\sigma}^2 n $$ where we have used that ${\sigma}_n^2/n\to {\sigma}^2>0$ and that $p>3$. In order to complete the proof we apply \cite[Theorem 2.3]{CM} (taking into account \cite[Remark 2.4]{CM}) with the reverse martingale difference $(M_{{\omega},n}\circ T_{\omega}^n)$ and the sequence $a_n=n^{1/2+9/p+{\varepsilon}}\ln^{3/2+{\varepsilon}}n$, noticing that ${\mathbb E}_{\mu_{{\theta}^n{\omega}}}[M_{{\omega},n}^2]=O(n^{8/p+{\varepsilon}})$ (by Lemma \ref{M norm}), and so when $p>8$ we have ${\mathbb E}_{\mu_{{\theta}^n{\omega}}}[M_{{\omega},n}^2]=O({\sigma}_n^{2s})$ for some $0<s<1$.
Taking into account Lemma \ref{QV}, the first additional condition (i) of \cite[Theorem 2.3]{CM} holds true. In order to verify the second additional condition (ii) with $v=2$, for ${\mathbb P}$-a.a. ${\omega}$ we have $$ \sum_{n\geq 1}a_n^{-2}{\mathbb E}_{\mu_{{\theta}^n{\omega}}}[M_{{\omega},n}^4]\leq C_{\omega}\sum_{n\geq 1}a_n^{-2}n^{16/p+{\varepsilon}} $$ which is a convergent series since $a_n^{-2}n^{16/p+{\varepsilon}}\leq n^{-1-2/p-{\varepsilon}}=O(n^{-1-{\delta}})$ for some ${\delta}>0$. In the above estimate we used that $\|M_{{\omega},n}^4\|\leq 3\|M_{{\omega},n}^2\|^2$ together with Lemma \ref{M norm}. We conclude that ${\mathbb P}$-a.s. there is a coupling of the reverse martingale $(M_{{\omega},n}\circ T_{\omega}^n)$ with an independent Gaussian sequence $(Z_n)$, so that the ASIP rates in Theorem \ref{ASIP} hold true with $\sum_{j=0}^{n-1}M_{{\omega},j}\circ T_{\omega}^j$ instead of $S_n^{\omega} u-\mu_{\omega}(S_n^{\omega} u)=S_n^{\omega}\tilde u$. Finally, in order to pass from the ASIP for the reverse martingale $(M_{{\omega},n}\circ T_{\omega}^n)$ to the ASIP for the random Birkhoff sums $S_n^{\omega} u$ we use \eqref{Mapprox} and then the so-called Berkes--Philipp lemma (which allows us to further couple $u_{{\theta}^j{\omega}}\circ T_{\omega}^j$ with the Gaussian sequence).
\qed \subsection{Large deviations principle with quadratic rate function} In the circumstances of both Theorems \ref{MDP1} and \ref{MDP2}, by the G\"artner--Ellis theorem (see \cite{DemZet}), in order to prove the appropriate moderate deviations principle it is enough to show that for all real $t$ we have $$ \lim_{n\to\infty}\frac 1{s_n}\ln {\mathbb E}[e^{ta_n S_n^{\omega} \tilde u/n}] =\frac12 t^2{\sigma}^2 $$ where the sequence $a_n$ is described in Theorems \ref{MDP1} and \ref{MDP2}, $s_n=a_n^2/n$ and ${\sigma}^2=\lim_{n\to\infty}\frac 1n\text{Var}_{\mu_{\omega}}(S_n^{\omega} u)$ (which does not depend on ${\omega}$ and is assumed to be positive). Henceforth we will assume that $\mu_{\omega}(u_{\omega})=0$, which means that we replace $u_{\omega}$ by $\tilde u_{\omega}=u_{\omega}-\mu_{\omega}(u_{\omega})$. \subsubsection{Auxiliary estimates} We first need the following result. \begin{lemma}\label{BB} Let $(\bar{\lambda}_{\omega}(z), \bar h_{\omega}^{(z)}, \bar\nu_{\omega}^{(z)})$ be the normalized RPF triplets from Theorem \ref{Complex RPF}. There is a constant $r>0$ so that ${\mathbb P}$-a.s. for every complex number $z$ with $|z|\leq r$ we have $$ \|\bar\nu_{\omega}^{(z)}\|\leq M_{\omega} K_{\omega}, \,\|\bar h_{\omega}^{(z)}\|\leq \frac{2\sqrt 2 U_{\omega} K_{\omega}}{|{\alpha}_{\omega}(z)|} $$ and $$ |\bar{\lambda}_{\omega}(z)|\leq 3e^{\|u_{\omega}\|_\infty}(1+2H_{\omega})\|{\mathcal L}_{\omega}\textbf{1}\|_{\infty}\leq \bar D_{\omega} $$ (where $K_{\omega}, M_{\omega}, U_{\omega}$ and $\bar D_{\omega}$ are defined in Sections \ref{Aux1} and \ref{Aux2}).
\end{lemma} \begin{proof} Using the upper and lower bounds in Theorem \ref{RPF} together with \eqref{la bounds} we see that $$ \|\bar\nu_{\omega}^{(z)}\|\leq M_{\omega} \|h_{\omega}\|\leq M_{\omega} K_{\omega}\,\,\text{ and }\,\, \|\bar h_{\omega}^{(z)}\|\leq \frac{U_{\omega} \cdot(2\sqrt 2K_{\omega})}{|{\alpha}_{\omega}(z)|} $$ where we used that $v(1/h)\leq v(h)(\inf h)^{-2}$ for every positive function $h$ (and so $\|1/h_{\omega}\|\leq U_{\omega}$). To bound ${\lambda}_{\omega}(z)$, notice that ${\lambda}_{\omega}(z)=\nu_{{\theta}{\omega}}^{(z)}({\mathcal L}_{\omega}^{(z)}\textbf{1})$ which yields that $$ |{\lambda}_{\omega}(z)|\leq M_{{\theta}{\omega}}\|{\mathcal L}_{\omega}^{(z)}\textbf{1}\|\leq M_{{\theta}{\omega}}\|e^{zu_{\omega}}\|\|{\mathcal L}_{\omega}\|. $$ Firstly, let us bound the norm $\|{\mathcal L}_{\omega}\|$. Let $g$ be a H\"older continuous function. Then $\|{\mathcal L}_{\omega} g\|_{\infty}\leq \|{\mathcal L}_{\omega} \textbf{1}\|_\infty \|g\|_\infty$ (this part does not require continuity of $g$, only boundedness). Secondly, let us estimate the H\"older constant of ${\mathcal L}_{\omega} g$. In the setup of Section \ref{Maps1} set $c_{\omega}=\gamma_{\omega}^{-1}$, while in the setup of Section \ref{Maps2} set $c_{\omega}=l_{\omega}$. 
Then for every two points $x,x'\in{\mathcal E}_{{\theta}{\omega}}$ so that $\rho(x,x')\leq \xi$ (where we set $\xi=1=\text{diam}({\mathcal E}_{\omega})$ in the setup of Section \ref{Maps2}) we have $$ |{\mathcal L}_{\omega} g(x)-{\mathcal L}_{\omega} g(x')|\leq \sum_{i}e^{\phi_{\omega}(y_i)}|g(y_i)-g(y_i')|+ \sum_{i}|e^{\phi_{\omega}(y_i)}-e^{\phi_{\omega}(y_i')}||g(y_i')| $$ $$ \leq v(g)c_{\omega}^{\alpha} \rho^{\alpha}(x,x')\|{\mathcal L}_{\omega}\textbf{1}\|_{\infty}+2H_{\omega} \rho^{\alpha}(x,x')\|g\|_{\infty}\|{\mathcal L}_{\omega} \textbf{1}\|_{\infty}\leq(c_{\omega}^{\alpha}+2H_{\omega})\|{\mathcal L}_{\omega}\textbf{1}\|_{\infty}\|g\|\rho^{\alpha}(x,x'). $$ In the second inequality we have also used that $$ |e^{\phi_{\omega}(y_i)}-e^{\phi_{\omega}(y_i')}|\leq(e^{\phi_{\omega}(y_i)}+e^{\phi_{\omega}(y_i')})H_{\omega}\rho^{\alpha}(x,x') $$ which is obtained using the mean value theorem and the definition of $H_{\omega}$ (in either \eqref{phi cond} or \eqref{H def}). Here $y_i=y_{i,{\omega}}(x)$ and $y_i'=y_{i,{\omega}}(x')$ are the inverse images of $x$ and $x'$ under $T_{\omega}$, respectively. Combining the above estimates we see that \begin{equation}\label{See} \|{\mathcal L}_{\omega}\|\leq (1+2H_{\omega}+c_{\omega}^{\alpha})\|{\mathcal L}_{\omega} \textbf{1}\|_{\infty}=\tilde D_{\omega}. \end{equation} Finally, using also that ${\lambda}_{\omega}\geq e^{-\|\phi_{\omega}\|_\infty}$ we conclude that when $|z|\leq 1$ then $$ |\bar{\lambda}_{\omega}(z)|=\frac{|{\lambda}_{\omega}(z)|}{{\lambda}_{\omega}}\leq M_{{\theta}{\omega}}e^{\|\phi_{\omega}\|_\infty}\|e^{zu_{\omega}}\|\tilde D_{\omega}\leq M_{{\theta}{\omega}}e^{\|\phi_{\omega}\|_\infty}e^{\|u_{\omega}\|_\infty}(1+v(u_{\omega}))\tilde D_{\omega}\leq \bar D_{\omega}. $$ \end{proof} \begin{corollary}\label{Cor BB} There exist constants $r_1,C_1>0$ so that ${\mathbb P}$-a.s.
if $|z|\leq r_1$ then $$ |\bar{\lambda}_{\omega}(z)-1|\leq C_1|z|\bar D_{\omega} $$ and for every $r_2\leq r_1$ if $|z|\leq r_2/2$ then with $\beta_{\omega}=\inf_{|z|\leq r_2}|{\alpha}_{\omega}(z)|$ we have $$ \|\bar h_{\omega}^{(z)}-\textbf{1}\|\leq 2\sqrt 2 U_{\omega} K_{\omega}|z|\beta_{\omega}^{-1}. $$ \end{corollary} \begin{proof} Since $z\to\bar{\lambda}_{\omega}(z)$ and $z\to\bar h_{\omega}^{(z)}$ are analytic, the corollary follows from Lemma \ref{BB} together with the Cauchy integral formula. \end{proof} \subsubsection{MDP via inducing: proof of Theorem \ref{MDP1}} Let $A$ be the set from the assumptions of Theorem \ref{MDP1}. Then there is a constant $Q$ so that for every ${\omega}\in A$ we have $\max(M_{\omega}, K_{\omega}, U_{\omega})\leq Q$. Let $n_A$ be the first visiting time to $A$. Then, using the upper bounds on $|{\alpha}_{\omega}(z)|$ from Theorem \ref{Complex RPF} and the Cauchy integral formula, we see that there is a constant $r_0>0$ so that if $|z|\leq r_0$ then for every ${\omega}\in A$ we have $$ |{\alpha}_{\omega}(z)-1|\leq 2\sqrt 2 Q^2|z|<\frac12 $$ and so $$ \beta_{\omega}=\min_{|z|\leq r_0}|{\alpha}_{\omega}(z)|\geq\frac12. $$ Now, let $n$ be so that ${\theta}^n{\omega}\in A$. Then if $|z|\leq r_0$ we have $|{\alpha}_{{\theta}^n{\omega}}(z)|\geq \frac12$. On the other hand, since $K_{\omega}$ and $M_{\omega}$ are in $L^p({\Omega},{\mathcal F},{\mathbb P})$ we have $\max(K_{{\theta}^n{\omega}}, M_{{\theta}^n{\omega}})=o(n^{1/p})$ (a.s.). Using also that $0<\rho({\omega})<1$ we see that $K_{{\theta}^n{\omega}}M_{{\theta}^n{\omega}}\rho_{{\omega},n}$ decays to $0$ exponentially fast. In particular, for every $n$ large enough we have $$ \beta_{\omega}\geq 2\sqrt 2K_{{\theta}^n{\omega}}M_{{\theta}^n{\omega}}\rho_{{\omega},n}.
$$ Hence, by applying \eqref{Exponential convergence CMPLX} with ${\theta}^n{\omega}$ instead of ${\omega}$ we see that if ${\theta}^n{\omega}\in A$ and $n$ is large enough then \begin{equation}\label{Exponential convergence new1} \Big\|\frac{L_{{\omega}}^{z,n}g}{\bar{\lambda}_{{\omega},n}(z)}-\bar\nu_{{\omega}}^{(z)}(g)\bar h^{(z)}_{{\theta}^n{\omega}}\Big\|\leq C(Q)M_{{\omega}}\|g\|\rho_{{\omega},n} \end{equation} for every H\"older continuous function $g$, where $C(Q)$ is a constant that depends on $Q$, but not on ${\omega}$ or $n$. Next, notice that under the Assumptions of Theorem \ref{MDP1} we have that $\bar D_{\omega}$ from Lemma \ref{BB} belongs to $L^{2p}$. Hence $\bar D_{{\theta}^j{\omega}}=o(|j|^{2/p})$ and by Corollary \ref{Cor BB} we have $$ |\bar{\lambda}_{{\theta}^j{\omega}}(z)-1|\leq C|z||j|^{2/p}. $$ Thus there are uniformly bounded analytic branches (vanishing at the origin) of $\ln\bar {\lambda}_{{\theta}^j{\omega}}(z)$ for $j\leq n$ on any domain of the form $|z|=o(n^{-2/p})$. Let us denote these branches by $\Pi_{{\theta}^j{\omega}}(z)$. Now, when ${\theta}^n{\omega}\in A$ then by Corollary \ref{Cor BB}, when $|z|$ is small enough, for ${\mathbb P}$-a.a. ${\omega}$ we have \begin{equation}\label{h bar est} \|\bar h_{{\theta}^n{\omega}}^{(z)}-\textbf{1}\|\leq \frac12 \end{equation} and so $$ \frac 12\leq|\mu_{\omega}(\bar h_{\omega}^{(z)})|\leq \frac32. $$ Therefore we can also develop uniformly bounded branches of $\ln \mu_{\omega}(\bar h_{\omega}^{(z)})$ around the complex origin which vanish at the origin. Next, by using \eqref{Exponential convergence new1} we see that for $n$ large enough, if ${\theta}^n{\omega}\in A$ and $|z|=O(n^{-2/p})$ then \begin{equation}\label{Char} {\mathbb E}[e^{zS_n^{\omega} \tilde u}]=\mu_{{\theta}^n{\omega}}(L_{\omega}^{z,n} \textbf{1})=\bar{\lambda}_{{\omega},n}(z)\left(\mu_{{\theta}^n{\omega}}(\bar h_{{\theta}^n{\omega}}^{(z)})+O(\rho_{{\omega},n}z)\right).
\end{equation} Since $\rho_{{\omega},n}$ decays exponentially fast to $0$ and $|\mu_{{\theta}^n{\omega}}(\bar h_{{\theta}^n{\omega}}^{(z)})-1|\leq \frac12 |z|$ (by \eqref{h bar est}) by taking the logarithms of both sides and using analyticity (and the Cauchy integral formula) we see that when ${\theta}^n{\omega}\in A$ and $|z|=O(n^{-2/p})$ and $n$ is large enough we have $$ \ln {\mathbb E}[e^{zS_n^{\omega} \tilde u}]=\Pi_{{\omega},n}(z)+O(|z|)+O({\delta}^n) $$ where $\Pi_{{\omega},n}(z)=\sum_{j=0}^{n-1}\Pi_{{\theta}^j{\omega}}(z)$ and ${\delta}={\delta}_{\omega}\in (0,1)$, and we have used that $\ln(1+w)=O(w)$ when $|w|$ is small enough. By taking the derivatives at $z=0$ and using the Cauchy integral formula on domains of the form $|z|=O(n^{-2/p})$ we see that $$ 0={\mathbb E}[S_n^{\omega} \tilde u]=\Pi_{{\omega},n}'(0)+O(n^{2/p}) $$ and $$ {\sigma}_{{\omega},n}^2={\mathbb E}[(S_n^{\omega} \tilde u)^2]=\Pi_{{\omega},n}''(0)+O(n^{4/p}). $$ Moreover, since $|\Pi_{{\omega},n}(z)|=O(n)$, by using the Cauchy integral formula to estimate the error term in the second order Taylor expansion of $\Pi_{{\omega},n}(z)$ around $z=0$ we see that when $|z|=O(n^{-2/p})$ then $$ \Pi_{{\omega},n}(z)=z\Pi_{{\omega},n}'(0)+\frac12 z^2\Pi_{{\omega},n}''(0)+O(|z|^3)n^{1+6/p} $$ and so $$ \ln {\mathbb E}[e^{zS_n^{\omega} \tilde u}]=O(n^{2/p})z+\frac12 z^2{\sigma}_{{\omega},n}^2+O(n^{4/p})z^2+ O(|z|^3)n^{1+6/p}+ O(|z|)+O({\delta}^n). $$ Let us now fix some $t\in{\mathbb R}$ and take $z=t_n=tb_n/n$, where $b_n$ satisfies $b_n\gg n^{4/p}$ and $\frac{b_n}{n}=o(n^{-6/p})$. Then, since $p>4$, $$ \frac{\ln {\mathbb E}[e^{t(b_n/n)S_n^{\omega} \tilde u}]}{b_n^2/n}=o(1)+\frac12 t^2({\sigma}_{{\omega},n}^2/n),\,\, {\sigma}^2=\lim_{n\to\infty}\frac1n {\sigma}_{{\omega},n}^2. $$ Thus, \begin{equation}\label{By} \lim_{n\to\infty, {\theta}^n{\omega}\in A}\frac{1}{b_n^2/n}\ln {\mathbb E}[e^{t_n S_n^{\omega} \tilde u}]=\frac12 t^2{\sigma}^2.
\end{equation} The next step will be to use \eqref{By} to derive a similar result without the restriction ${\theta}^n{\omega} \in A$. Let $(a_n)$ be a sequence with the properties described in Theorem \ref{MDP1}. Let us take some $n$ so that ${\theta}^n{\omega}\not\in A$, and let $m=m_n=m_n({\omega})$ be the largest time $m\leq n$ so that ${\theta}^{m}{\omega}\in A$. Then $$ \left|\ln {\mathbb E}[e^{ta_n S_n^{\omega} \tilde u/n}]-\ln {\mathbb E}[e^{ta_n S_{m_n}^{\omega} \tilde u/n}]\right|\leq |ta_n/n|\cdot\left\|\sum_{j=m_n}^{n-1}\tilde u_{{\theta}^j{\omega}}\circ T_{{\theta}^{m_n}{\omega}}^{n-m_n}\right\|_{\infty}. $$ Now, if we set $$ \tilde \Psi_{\omega}=\sum_{j=0}^{n_A({\omega})-1}\|\tilde u_{{\theta}^j{\omega}}\|_{\infty} $$ then $$ \left\|\sum_{j=m_n}^{n-1}\tilde u_{{\theta}^j{\omega}}\circ T_{{\theta}^{m_n}{\omega}}^{n-m_n}\right\|_{\infty}\leq \tilde \Psi_{{\theta}^{m_n({\omega})}{\omega}}. $$ Observe now that with $c({\omega})=\|\tilde u_{\omega}\|_\infty$ we have $$ \tilde \Psi_{\omega}=\sum_{j=0}^\infty c({\theta}^j{\omega}){\mathbb I}(n_A({\omega})>j) $$ and so by the H\"older inequality, if $q$ denotes the conjugate exponent of $p$ then $$ \|\tilde \Psi_{\omega}\|_{L^p({\mathbb P})}\leq \|c(\cdot)\|_{L^p({\mathbb P})}\sum_{j=0}^{\infty}({\mathbb P}(n_A>j))^{1/q}. $$ Arguing as in Section \ref{Tails} we see that under each one of the conditions (M1'), (M2') or (M3') we have $\sum_{j=0}^{\infty}({\mathbb P}(n_A>j))^{1/q}<\infty$. We thus see that $\|\tilde \Psi_{\omega}\|_{L^p({\mathbb P})}<\infty$ and so $\tilde \Psi_{{\theta}^j{\omega}}=o(j^{1/p})$ almost surely. Thus, since $p>2$ and $a_n/\sqrt n\to\infty$ we see that $$ \frac{|ta_n/n|\left\|\sum_{j=m_n}^{n-1}\tilde u_{{\theta}^j{\omega}}\circ T_{{\theta}^{m_n}{\omega}}^{n-m_n}\right\|_{\infty}}{s_n}=O(a_n^{-1}n^{1/p})\to 0,\,\,s_n=a_n^2/n. $$ Finally, since $m_n=n(1+o(1))$ by the assumptions on the sequence $(a_n)$ in Theorem \ref{MDP1} we have $a_n=a_{m_n}(1+o(1))$ and $s_n=a_n^2/n=s_{m_n}(1+o(1))$.
Therefore by \eqref{By} we have $$ \lim_{n\to\infty}\frac 1{s_n}\ln {\mathbb E}[e^{ta_n S_n^{\omega} \tilde u/n}]= \lim_{n\to\infty, {\theta}^n{\omega}\in A}\frac 1{s_n}\ln {\mathbb E}[e^{t_n S_n^{\omega} \tilde u}]=\frac12 t^2{\sigma}^2 $$ and the proof of Theorem \ref{MDP1} is complete. \qed \subsubsection{A direct approach to the MDP: proof of Theorem \ref{MDP2}} Recall that when $|z|\leq r_0$ (for some constant $r_0$) then $|{\alpha}_{\omega}(z)|\leq 2\sqrt 2 K_{\omega} M_{\omega}$. Now, using the Cauchy integral formula, when $|z|\leq r_0/2$ we have $$ |{\alpha}_{\omega}(z)-1|\leq CK_{\omega} M_{\omega}|z| $$ where $C=C(r_0)$ is some constant. Since $K_{\omega}$ and $M_{\omega}$ are in $L^p({\Omega},{\mathcal F},{\mathbb P})$ we have $K_{{\theta}^j{\omega}} M_{{\theta}^j{\omega}}=o(j^{2/p})$ and so when $|z|=O(n^{-2/p})$, then for every $n$ large enough \begin{equation}\label{al est0} |{\alpha}_{{\theta}^n{\omega}}(z)-1|\leq {\delta}_n\to 0 \,\text{ as }\,n\to\infty. \end{equation} Thus, for such $z$'s when $n$ is large enough so that $n^{2/p}\rho_{{\omega},n}<1/4$ we can apply \eqref{Exponential convergence CMPLX} with ${\theta}^n{\omega}$ instead of ${\omega}$ and $|z|=O(n^{-2/p})$ and get that \begin{equation}\label{Exponential convergence new2} \Big\|\frac{L_{{\omega}}^{z,n}g}{\bar{\lambda}_{{\omega},n}(z)}-\bar\nu_{{\omega}}^{(z)}(g)\bar h^{(z)}_{{\theta}^n{\omega}}\Big\|\leq CM_{{\omega}}\|g\|{\delta}_{\omega}^n \end{equation} for some ${\delta}_{\omega}\in(0,1)$, where we have used that $K_{{\theta}^n{\omega}}, M_{{\theta}^n{\omega}}$ and $U_{{\theta}^n{\omega}}$ grow at most polynomially fast and $\rho_{{\omega},n}$ decays to $0$ exponentially fast in $n$.
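Throughout this section the Cauchy integral formula is invoked through the following elementary estimate, which we record here for the reader's convenience: if $F$ is analytic on the closed disk $\{|w|\leq r\}$ and $\sup_{|w|\leq r}|F(w)|\leq M$, then for every $|z|\leq r/2$, $$ |F(z)-F(0)|=\left|\frac{z}{2\pi i}\oint_{|w|=r}\frac{F(w)}{w(w-z)}\,dw\right|\leq \frac{|z|}{2\pi}\cdot 2\pi r\cdot\frac{M}{r\cdot(r/2)}=\frac{2M}{r}|z|. $$ Applied with $F={\alpha}_{\omega}$, $F=\bar{\lambda}_{\omega}$ or $F(z)=\bar h_{{\theta}^n{\omega}}^{(z)}$ (with $M$ given by the corresponding bounds from Lemma \ref{BB}), this yields the first order estimates used above and below.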
Next, by applying the Cauchy integral formula on a domain of the form $\{|z|=O(n^{-2/p})\}$ and using Lemma \ref{BB} to bound the derivative of $z\to\bar h_{{\theta}^n{\omega}}^{(z)}$ on such domains (taking into account \eqref{al est0} and that $U_{{\theta}^n{\omega}} K_{{\theta}^n{\omega}}=o(n^{2/p})$) we see that when $|z|=O(n^{-2/p})$ then \begin{equation}\label{hhh0} \|\bar h_{{\theta}^n{\omega}}^{(z)}-\textbf{1}\|=|z|O(n^{4/p}). \end{equation} Thus, we can develop a branch of $\ln \mu_{{\theta}^n{\omega}}(h_{{\theta}^n{\omega}}^{(z)})$ on a domain of the form $|z|=O(n^{-2/p})$ so that \begin{equation}\label{hhh} \ln \mu_{{\theta}^n{\omega}}(h_{{\theta}^n{\omega}}^{(z)})=|z|O(n^{4/p}). \end{equation} Similarly, by the Assumptions of Theorem \ref{MDP2} we have that $\bar D_{\omega}$ defined in Lemma \ref{BB} belongs to $L^{2p}({\Omega},{\mathcal F},{\mathbb P})$. Hence $\bar D_{{\theta}^j{\omega}}=o(|j|^{2/p})$ and by using Corollary \ref{Cor BB} we see that $$ |\bar{\lambda}_{{\theta}^j{\omega}}(z)-1|=o(j^{2/p})|z|. $$ Thus there are uniformly bounded branches (vanishing at the origin) of $\ln\bar {\lambda}_{{\theta}^j{\omega}}(z)$ for $j\leq n$ on any domain of the form $|z|=O(n^{-2/p})$. Let us denote these branches by $\Pi_{{\theta}^j{\omega}}(z)$. Next, by \eqref{Exponential convergence new2} we have $$ {\mathbb E}[e^{zS_n^{\omega} \tilde u}]=\mu_{{\theta}^n{\omega}}(L_{\omega}^{z,n} \textbf{1})=\bar{\lambda}_{{\omega},n}(z)\left(\mu_{{\theta}^n{\omega}}(\bar h_{{\theta}^n{\omega}}^{(z)})+O({\delta}_{\omega}^n z)\right).
$$ Using the above estimates, by taking the logarithm of both sides and using analyticity (and the Cauchy integral formula) we see that when $|z|=O(n^{-2/p})$ and $n$ is large enough then $$ \ln {\mathbb E}[e^{zS_n^{\omega} \tilde u}]=\Pi_{{\omega},n}(z)+O(|z|n^{4/p})+O(\tilde {\delta}_{\omega}^n) $$ where $\Pi_{{\omega},n}(z)=\sum_{j=0}^{n-1}\Pi_{{\theta}^j{\omega}}(z)$, $\tilde {\delta}_{\omega}\in(0,1)$ and we have used that $\ln(1+w)=O(w)$ when $|w|$ is small enough. By taking the derivatives at $z=0$ and using the Cauchy integral formula we see that \begin{equation}\label{Pi 1} 0={\mathbb E}[S_n^{\omega} \tilde u]=\Pi_{{\omega},n}'(0)+O(n^{6/p}) \end{equation} and \begin{equation}\label{Pi 2} {\sigma}_{{\omega},n}^2={\mathbb E}[(S_n^{\omega} \tilde u)^2]=\Pi_{{\omega},n}''(0)+O(n^{8/p}). \end{equation} Moreover, since $|\Pi_{{\omega},n}(z)|=O(n)$, by using the Cauchy integral formula to estimate the error term in the second order Taylor expansion of $\Pi_{{\omega},n}(z)$ around $z=0$ we see that when $|z|=O(n^{-2/p})$ then \begin{equation}\label{Tay2} \Pi_{{\omega},n}(z)=z\Pi_{{\omega},n}'(0)+\frac12 z^2\Pi_{{\omega},n}''(0)+|z|^3O(n^{1+8/p}) \end{equation} and so $$ \ln {\mathbb E}[e^{zS_n^{\omega} \tilde u}]=O(n^{6/p})z+\frac12 z^2{\sigma}_{{\omega},n}^2+O(n^{8/p})z^2+ |z|^3O(n^{1+8/p})+ O(|z|n^{4/p})+O(\tilde{\delta}_{\omega}^n). $$ Finally, let us fix some $t\in{\mathbb R}$ and take $z=t_n=ta_n/n$. Then, since $p>8$, $a_n\gg n^{6/p}$ and $\frac{a_n}{n}=o(n^{-8/p})$ we have $$ \frac{\ln {\mathbb E}[e^{t(a_n/n)S_n^{\omega} \tilde u}]}{a_n^2/n}=o(1)+\frac12 t^2({\sigma}_{{\omega},n}^2/n). $$ Thus, $$ \lim_{n\to\infty}\frac{1}{a_n^2/n}\ln {\mathbb E}[e^{t(a_n/n) S_n^{\omega} \tilde u}]=\frac12 t^2{\sigma}^2 $$ and the proof of Theorem \ref{MDP2} is complete.
\qed \subsection{Berry-Esseen type estimates} In this section we will prove Theorem \ref{BE} (ii); the proof of Theorem \ref{BE} (i) is similar (we will provide a few details after completing the proof of the second part). \begin{lemma}\label{ll} Let $\Pi_{{\omega},n}$ be as defined in the proof of Theorem \ref{MDP2}. (i) We have $$ \left|\frac{\Pi_{{\omega},n}''(0)}{{\sigma}_{{\omega},n}^2}-1\right|=O(n^{8/p-1})=o(1). $$ (ii) On any domain of the form $|t/{\sigma}_{{\omega},n}|=O(n^{-2/p})$ we have $$ |{\lambda}_{{\omega},n}(it/{\sigma}_{{\omega},n})|=|e^{\Pi_{{\omega},n}(it/{\sigma}_{{\omega},n})}|\leq e^{-ct^2/2} $$ where $c\in(0,\frac12)$ is some constant. \end{lemma} \begin{proof} The first part follows from \eqref{Pi 2}, and the second part follows from the first and \eqref{Tay2} together with the fact that $\Pi_{{\omega},n}'(0)\in{\mathbb R}$ (since $\Pi_{{\omega},n}(t)\in{\mathbb R}$ when $t$ is real) and recalling that ${\sigma}_{{\omega},n}^2$ grows linearly fast in $n$. \end{proof} \begin{proof}[Proof of Theorem \ref{BE} (ii)] Suppose $\mu_{\omega}(u_{\omega})=0$. Let $d_n=n^{\frac12-2/p}$. Then by the Esseen inequality (see \cite{IL} or a generalized version \cite[\S XVI.3]{Feller}) there is an absolute constant $C$ so that \begin{equation}\label{Ess} \sup_{t\in{\mathbb R}}\left|\mu_{\omega}(S_n^{\omega} u\leq t{\sigma}_{{\omega},n})-\Phi(t)\right|\leq \frac{C}{d_n}+\int_{-d_n}^{d_n}\frac{\left|\mu_{\omega}(e^{it S_n^{\omega} u/{\sigma}_{{\omega},n}})-e^{-t^2/2}\right|}{|t|}dt. \end{equation} In order to bound the integral on the right hand side, first by \eqref{Tay2}, \eqref{Pi 1}, \eqref{Pi 2} and Lemma \ref{ll} (i), for every $t\in[-d_n,d_n]$ we have $$ \Pi_{{\omega},n}(it/{\sigma}_{{\omega},n})=-t^2/2+O(|t|n^{6/p-1/2})+O(t^2n^{8/p-1}) +O(n^{8/p-1/2}|t|^3) $$ where we have used that ${\sigma}_{{\omega},n}^2$ grows linearly fast in $n$, which, in particular, ensures that $z=it/{\sigma}_{{\omega},n}=O(n^{-2/p})$.
Using also Lemma \ref{ll} (ii) and the mean value theorem we get that $$ \left|e^{\Pi_{{\omega},n}(it/{\sigma}_{{\omega},n})}-e^{-t^2/2}\right|\leq e^{-ct^2/2}\left(O(|t|n^{6/p-1/2})+O(t^2n^{8/p-1}) +O(n^{8/p-1/2}|t|^3)\right). $$ Using now \eqref{Char}, \eqref{hhh0} and Lemma \ref{ll} (ii) we see that \begin{equation}\label{Char dif} \left|\mu_{\omega}(e^{it S_n^{\omega} u/{\sigma}_{{\omega},n}})-e^{-t^2/2}\right|\leq C_{\omega}|t|e^{-ct^2}\left(n^{6/p-1/2}+|t|n^{8/p-1}+t^2 n^{8/p-1/2}+n^{4/p-1/2}\right) \end{equation} for some constant $C_{\omega}$ which depends on ${\omega}$ but not on $t$ or $n$. The proof of Theorem \ref{BE} (ii) is completed now by combining \eqref{Ess} with \eqref{Char dif}. \end{proof} The proof of Theorem \ref{BE} (i) proceeds similarly for $n$'s so that ${\theta}^n{\omega}\in A$, and in order to pass to general indices $n$ we use that $\tilde \Psi_{{\theta}^j{\omega}}=o(j^{1/p})$ together with \cite[Lemma 3.3]{HK BE} (applied with $a=\infty$). \subsection{A moderate local limit theorem: proof of Theorem \ref{LLT}} As in the proof of Theorem \ref{BE}, let us assume that $\mu_{\omega}(u_{\omega})=0$. By using a density argument (see \cite[Section VI.4]{HH}) it is enough to obtain \eqref{llt} for a function $g\in L^1({\mathbb R})$ whose Fourier transform has a compact support. Note that such a function $g$ satisfies the inversion formula. Let $g$ be a function with these properties and let $L>0$ be so that $\hat g(x)=0$ if $|x|>L$. Then, by the inversion formula for $g$, for all $y\in{\mathbb R}$ we have $$ g(y)=\int_{-\infty}^\infty \hat g(x)e^{iyx}dx=\int_{-L}^{L}\hat g(x)e^{iyx}dx.
$$ Taking some $v\in{\mathbb R}$, setting $y=S_n^{\omega} u/a_n-v$ and then integrating with respect to $\mu_{\omega}$ we see that $$ {\mathbb E}_{\mu_{\omega}}[g(S_n^{\omega} u/a_n-v)]={\mathbb E}_{\mu_{\omega}}\left[\int_{-L}^{L}\hat g(x)e^{ixS_n^{\omega} u/a_n}e^{-ivx}dx\right]=\int_{-L}^{L}\hat g(x)e^{-ivx}\mu_{\omega}(e^{ix S_n^{\omega} u/a_n})dx= $$ $$ \frac{a_n}{{\sigma}_{{\omega},n}}\int_{-L{\sigma}_{{\omega},n}/a_n}^{L{\sigma}_{{\omega},n}/a_n}\hat g(a_n t/{\sigma}_{{\omega},n})e^{-iv a_nt/{\sigma}_{{\omega},n}}{\mathbb E}[e^{it S_n^{\omega} u/{\sigma}_{{\omega},n}}]dt $$ where in the last equality we used the change of variables $x=\frac{a_n}{{\sigma}_{{\omega},n}}t$. Here $(a_n)$ is the sequence specified in Theorem \ref{LLT}. Now, since $a_n n^{-2/p}\to\infty$, the estimate \eqref{Char dif} is valid on the domain $\{|t|\leq L{\sigma}_{{\omega},n}/a_n\}$. Therefore, uniformly in $v\in{\mathbb R}$, we have $$ \frac{{\sigma}_{{\omega},n}}{a_n}{\mathbb E}_{\mu_{\omega}}[g(S_n^{\omega} u/a_n-v)]-\int_{-L{\sigma}_{{\omega},n}/a_n}^{L{\sigma}_{{\omega},n}/a_n}\hat g(a_n t/{\sigma}_{{\omega},n})e^{-iv a_nt/{\sigma}_{{\omega},n}}e^{-t^2/2}dt=o(1). $$ Next, set ${\kappa}_n={\kappa}_{{\omega},n}={\sigma}_{{\omega},n}/a_n$. Then, in order to complete the proof of Theorem \ref{LLT} we need to show that, uniformly in $v\in{\mathbb R}$, we have $$ \int_{-L{\sigma}_{{\omega},n}/a_n}^{L{\sigma}_{{\omega},n}/a_n}\hat g(a_n t/{\sigma}_{{\omega},n})e^{-iv a_nt/{\sigma}_{{\omega},n}}e^{-t^2/2}dt-\frac{1}{\sqrt{2\pi}}e^{-\frac{v^2}{2{\kappa}_n^2}}=o(1). $$ To prove that, let us take an arbitrarily small ${\varepsilon}>0$ and fix $T$ large enough so that \begin{equation}\label{g} \|g\|_{L^1({\mathbb R})}\int_{|t|>T}e^{-t^2/2}dt<{\varepsilon}/3.
\end{equation} Then, using that $\sup|\hat g|\leq \|g\|_{L^1({\mathbb R})}$ we see that for every $n$ large enough and all $v\in{\mathbb R}$ we have $$ \left|\int_{-L{\sigma}_{{\omega},n}/a_n}^{L{\sigma}_{{\omega},n}/a_n}\hat g(a_n t/{\sigma}_{{\omega},n})e^{-iv a_nt/{\sigma}_{{\omega},n}}e^{-t^2/2}dt -\int_{-T}^{T}\hat g(a_n t/{\sigma}_{{\omega},n})e^{-iv a_nt/{\sigma}_{{\omega},n}}e^{-t^2/2}dt\right|<{\varepsilon}/3 $$ where we have used that ${\sigma}_{{\omega},n}/a_n\to\infty$. Next, since $\lim_{x\to0}\hat g(x)=\hat g(0)=\int g(y)dy$ we see that for every $n$ large enough we have $$ \sup_{v\in{\mathbb R}}\left|\int_{-T}^{T}\hat g(a_n t/{\sigma}_{{\omega},n})e^{-iv a_nt/{\sigma}_{{\omega},n}}e^{-t^2/2}dt- \int_{-T}^{T}\hat g(0)e^{-iv a_nt/{\sigma}_{{\omega},n}}e^{-t^2/2}dt\right|<{\varepsilon}/3. $$ Now, using again \eqref{g} we see that $$ \sup_{v\in{\mathbb R}}\left|\int_{-T}^{T}\hat g(0)e^{-iv a_nt/{\sigma}_{{\omega},n}}e^{-t^2/2}dt-\int_{-\infty}^{\infty}\hat g(0)e^{-iv a_nt/{\sigma}_{{\omega},n}}e^{-t^2/2}dt\right|<{\varepsilon}/3. $$ We conclude from the above estimates that, for every $n$ large enough, uniformly in $v$, we have $$ \left|\frac{{\sigma}_{{\omega},n}}{a_n}{\mathbb E}_{\mu_{\omega}}[g(S_n^{\omega} u/a_n-v)]-\int_{-\infty}^{\infty}\hat g(0)e^{-iv a_nt/{\sigma}_{{\omega},n}}e^{-t^2/2}dt\right|<{\varepsilon}. $$ Finally, by computing the Fourier transform of the Gaussian, $$ \int_{-\infty}^{\infty}\hat g(0)e^{-iv a_nt/{\sigma}_{{\omega},n}}e^{-t^2/2}dt=\hat g(0)\frac{1}{\sqrt{2\pi}}e^{-\frac{v^2}{2{\kappa}_n^2}} $$ and the proof of Theorem \ref{LLT} is complete.
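For completeness, we recall the classical Gaussian integral behind the last step: for every $s\in{\mathbb R}$, $$ \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{-ist}e^{-t^2/2}\,dt=e^{-s^2/2}, $$ applied here with $s=va_n/{\sigma}_{{\omega},n}=v/{\kappa}_n$.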
\section{Proof of the real RPF theorem (Theorem \ref{RPF})} \subsection{Effective rates for properly expanding maps: proof of Theorem \ref{RPF} (i)-(iii) in the setup of Section \ref{Maps1}} \subsubsection{The cones} For each $a>0$ let us consider the real Birkhoff cone $$ {\mathcal C}_{{\omega},a}=\{g\in {\mathcal H}_{{\omega},{\alpha}}: g\geq0,\, g(x)\leq e^{a d_{\omega}(x,x')^{\alpha}}g(x') \text{ if } d_{\omega}(x,x')\leq\xi\}. $$ Set also ${\mathcal C}_{{\omega}}={\mathcal C}_{{\omega},\gamma_{\omega}^{\alpha}}$. \begin{lemma}\label{Cont1} We have $$ {\mathcal L}_{\omega}{\mathcal C}_{{\omega}}\subset{\mathcal C}_{{\theta}{\omega}, H_{\omega}+1}\subset{\mathcal C}_{{\theta}{\omega}}. $$ \end{lemma} \begin{proof} First, by \eqref{H cond} we have $H_{\omega}+1\leq\gamma_{{\theta}{\omega}}^{\alpha}$ and so the second inclusion holds true. To prove the first inclusion, let $g\in {\mathcal C}_{{\omega}}$ and let $x,x'\in{\mathcal E}_{{\theta}{\omega}}$ be so that $\rho(x,x')\leq\xi$. Then, with $y_i=y_{i,{\omega}}(x)$ and $y_i'=y_{i,{\omega}}(x')$ as in \eqref{Pair1.0}, we have $$ {\mathcal L}_{\omega} g(x)=\sum_{i}e^{\phi_{\omega}(y_i)}g(y_i)\leq \sum_{i}e^{\phi_{\omega}(y_i')+\rho_{{\theta}{\omega}}^{\alpha}(x,x')H_{\omega}}e^{\gamma_{\omega}^{\alpha}\rho_{\omega}^{\alpha}(y_i,y_i')}g(y_i') $$ $$ \leq e^{(H_{\omega}+\gamma_{\omega}^{\alpha}\gamma_{\omega}^{-{\alpha}})\rho_{{\theta}{\omega}}^{{\alpha}}(x,x')}\sum_{i}e^{\phi_{\omega}(y_i')}g(y_i')=e^{(H_{\omega}+1)\rho_{{\theta}{\omega}}^{\alpha}(x,x')}{\mathcal L}_{\omega} g(x'). $$ \end{proof} Next, \begin{lemma}\label{BalLemma} For all $g\in{\mathcal C}_{{\omega}}$ and every $x,x'\in{\mathcal E}_{{\theta}{\omega}}$ we have \begin{equation}\label{Bal} {\mathcal L}_{\omega} g(x)\leq B({\omega}){\mathcal L}_{\omega} g(x') \end{equation} where when $\xi<\text{diam}({\mathcal E}_{\omega})=1$, $$ B({\omega})=e^{H_{\omega}\xi^{\alpha}+\gamma_{\omega}^{\alpha}\xi^{\alpha}} \deg(T_{\omega}).
$$ while when $\xi=1$ we have $$ B({\omega})=e^{\gamma_{{\theta}{\omega}}^{{\alpha}}}. $$ \end{lemma} \begin{proof} Suppose first that $\xi<1$. Then $$ {\mathcal L}_{\omega} g(x)\leq \deg(T_{\omega})\max_{y\in T_{\omega}^{-1}\{x\}}e^{\phi_{\omega}(y)}g(y)=\deg(T_{\omega})e^{\phi_{\omega}(y_0)}g(y_0) $$ for some $y_0$. On the other hand, let $y'\in T_{{\omega}}^{-1}\{x'\}$ be so that $\rho(y_0,y')\leq \xi$ (existence of such $y'$ follows from our assumptions on the map $T_{\omega}$). Then, since $g\in{\mathcal C}_{{\omega}}$, we have $$ e^{\phi_{\omega}(y_0)}g(y_0)\leq e^{H_{\omega}\xi^{\alpha}+\gamma_{\omega}^{\alpha}\xi^{\alpha}}e^{\phi_{\omega}(y')}g(y') $$ where we have also used \eqref{phi cond}. On the other hand, $$ e^{\phi_{\omega}(y')}g(y')\leq {\mathcal L}_{\omega} g(x') $$ which together with the previous estimates yields the desired result in the case $\xi<1$. When $\xi=1$ then ${\mathcal L}_{\omega} g\in{\mathcal C}_{{\theta}{\omega}}$ and so (since $\xi=1$), $$ {\mathcal L}_{\omega} g(x)\leq e^{\gamma_{{\theta}{\omega}}^{\alpha}}{\mathcal L}_{\omega} g(x') $$ for all $x,x'$. \end{proof} \begin{corollary}\label{Cor diam} The projective diameter of ${\mathcal L}_{{\omega}}{\mathcal C}_{{\omega}}$ inside ${\mathcal C}_{{\theta}{\omega}}$ does not exceed $D({\omega})$ (which was defined in Section \ref{Aux1}). \end{corollary} \begin{proof} The statement follows from Lemmas \ref{Cont1} and \ref{BalLemma}; it appears in various forms in several places, and we refer to \cite[Lemma 5.7.1]{HK} or \cite{Kifer thermo}. \end{proof} \subsubsection{Reconstruction of $\nu_{\omega}$ using dual cones}\label{nu sec 1} Let ${\mathcal C}_{{\omega}}={\mathcal C}_{{\omega},\gamma_{\omega}^{\alpha}}$ and let ${\mathcal C}_{\omega}^*$ be the dual cone which is given by $$ {\mathcal C}_{\omega}^*=\left\{\nu\in{\mathcal H}_{\omega}^*:\nu(g)\geq 0,\,\forall g\in{\mathcal C}_{\omega}\right\}. $$ Let ${\mathcal L}_{\omega}^*:{\mathcal H}_{{\theta}{\omega}}^*\to{\mathcal H}_{{\omega}}^*$ be the dual operator.
Then by \cite[Lemma A.2.6]{HK} the projective diameter of ${\mathcal L}_{\omega}^* {\mathcal C}_{{\theta}{\omega}}^*$ inside ${\mathcal C}_{\omega}^*$ equals the projective diameter of ${\mathcal L}_{{\omega}}{\mathcal C}_{{\omega}}$ inside ${\mathcal C}_{{\theta}{\omega}}$ (which by Corollary \ref{Cor diam} does not exceed $D({\omega})$). Note that \cite[Lemma A.2.6]{HK} is technically about complex cones, but the arguments needed in the case of real cones are essentially the same\footnote{Noticing also that the closure of the cone $\tilde{\mathcal C}_{\omega}=\left\{\nu\in{\mathcal H}_{\omega}^*:\nu(g)> 0,\,\forall g\in{\mathcal C}_{\omega}\setminus\{0\}\right\}$ coincides with ${\mathcal C}_{\omega}^*$ (because there is a linear functional which is strictly positive on ${\mathcal C}_{\omega}\setminus\{0\}$).}. We need now the following result. \begin{lemma}\label{Dual aper} For every $\mu\in {\mathcal C}_{\omega}^*$ and all $h\in{\mathcal H}_{\omega}$ we have $$ |\mu(h)|\leq 2\|h\|\mu(\textbf{1}). $$ \end{lemma} \begin{proof} First, let us show that the closed ball of radius $1/2$ around $\textbf{1}$ is contained in ${\mathcal C}_{\omega}$. Indeed, let $h=1+f$ where $\|f\|\leq \frac12$. Then $h$ belongs to ${\mathcal C}_{\omega}$ if and only if for all $x$ and $x'$ so that $\rho(x,x')\leq \xi$ we have $$ h(x)\leq e^{\gamma_{\omega}^{\alpha}\rho(x,x')^{\alpha}}h(x'),\,\,\,\rho(x,x')^{\alpha}=\left(\rho(x,x')\right)^{\alpha} $$ which can also be written as \begin{equation}\label{Obt} f(x)-f(x')\leq (e^{\gamma_{\omega}^{\alpha}\rho(x,x')^{\alpha}}-1)(1+f(x')). \end{equation} Now, since $e^t-1\geq t$ for all $t\geq0$ and $1+f(x')\geq 1-\|f\|_\infty$ we have $$ (e^{\gamma_{\omega}^{\alpha}\rho(x,x')^{\alpha}}-1)(1+f(x'))\geq \gamma_{\omega}^{\alpha}\rho(x,x')^{\alpha} (1-\|f\|_\infty)\geq \frac12\rho(x,x')^{\alpha} $$ where we have used that $\gamma_{\omega}^{\alpha}\geq1$ and $\|f\|_{\infty}\leq\|f\|\leq \frac12$.
On the other hand, since $v(f)\leq \|f\|\leq\frac12$ we have $$ f(x)-f(x')\leq \rho(x,x')^{\alpha} v(f)\leq \frac12\rho(x,x')^{\alpha}. $$ Combining the last two estimates we obtain \eqref{Obt}. Next, let $\mu\in{\mathcal C}_{\omega}^*$, and let $h\in{\mathcal H}_{\omega}$ be so that $\|h\|\leq 1$. Then $1\pm\frac12h\in{\mathcal C}_{\omega}$ and so $$ \mu(1\pm\frac12h)\geq 0, $$ that is $$ |\mu(h)|\leq 2\mu(\textbf{1}). $$ \end{proof} Next, let $\rho({\omega})=\tanh (D({\omega})/4)$ (as was defined in Section \ref{Aux1}). Let $\mu\in{\mathcal C}_{{\theta}^n{\omega}}^*$ and $\nu\in{\mathcal C}_{{\theta}^m{\omega}}^*$ for some $m\geq n$. Then by the projective contraction properties of linear maps (see \cite{Birk} and \cite[Theorem 1.1]{Liv}) the projective distance between $({\mathcal L}_{\omega}^n)^*\mu$ and $({\mathcal L}_{\omega}^m)^*\nu=({\mathcal L}_{\omega}^n)^*({\mathcal L}_{{\theta}^n{\omega}}^{m-n})^*\nu$ does not exceed $\rho_{{\omega},n}=\prod_{j=0}^{n-1}\rho({\theta}^j{\omega})$. Hence by\footnote{These results are formulated for complex cones, but a real cone is embedded in its canonical complexification (together with the corresponding projective metrics).} \cite[Theorem A.2.3]{HK} and \cite[Lemma 5.2]{Rugh}, $$ \left\|\frac{({\mathcal L}_{\omega}^n)^*\mu}{\mu({\mathcal L}_{\omega}^n\textbf{1})}-\frac{({\mathcal L}_{\omega}^m)^*\nu}{\nu({\mathcal L}_{\omega}^m\textbf{1})}\right\|\leq \sqrt 2\rho_{{\omega},n}. $$ Notice that $\rho_{{\omega},n}$ converges exponentially fast to $0$ for ${\mathbb P}$-a.a. ${\omega}$ (indeed $\rho(\cdot)<1$ and ${\theta}$ is ergodic). Thus, for any sequence $(\mu_n)$ so that $\mu_n\in{\mathcal C}_{{\theta}^n{\omega}}^*$ the limit $$ \nu_{\omega}=\lim_{n\to\infty}\frac{({\mathcal L}_{\omega}^n)^*\mu_n}{\mu_n({\mathcal L}_{\omega}^n\textbf{1})} $$ exists, belongs to ${\mathcal C}_{\omega}^*$ and it does not depend on the choice of the sequence (hence ${\omega}\to\nu_{\omega}$ is measurable).
Moreover, by fixing $n$ and letting $m\to\infty$ we have \begin{equation}\label{AAb} \left\|\frac{({\mathcal L}_{\omega}^n)^*\mu}{\mu({\mathcal L}_{\omega}^n\textbf{1})}-\nu_{\omega}\right\|\leq \sqrt 2\rho_{{\omega},n}. \end{equation} Note that $\nu_{\omega}(\textbf{1})=1$. Furthermore, by plugging in $({\mathcal L}_{{\theta}{\omega}}^n)^*\mu$ inside ${\mathcal L}_{\omega}^*$ and using \eqref{AAb} with ${\theta}{\omega}$ instead of ${\omega}$ we see that there is a number ${\lambda}_{\omega}$ so that ${\mathcal L}_{\omega}^*\nu_{{\theta}{\omega}}={\lambda}_{\omega}\nu_{\omega}$. Plugging in $g=\textbf{1}$ we also see that ${\lambda}_{\omega}=\nu_{{\theta}{\omega}}({\mathcal L}_{\omega} \textbf{1})$. Finally, since ${\mathcal H}_{\omega}$ is dense in $C({\mathcal E}_{\omega})$ and $\nu_{\omega}$ is positive we get that $\nu_{\omega}$ can be extended to a probability measure on ${\mathcal E}_{\omega}$. \subsubsection{Reconstruction of $h_{\omega}$ with effective rates}\label{h sec} We first need the following result. \begin{lemma}\label{Lemma 5.5} In the case $\xi<1$ for every $i$ we have $$ \nu_{\omega}(B(x_i,\xi))\geq \frac{e^{-2\|\phi_{\omega}\|_\infty}}{\deg (T_{\omega})}=:b_{\omega}, $$ where the points $x_{i}=x_{i,{\omega}}$ are described in Section \ref{Maps1}. \end{lemma} \begin{proof} First, recall our (covering) assumption that for all $i$ we have $T_{\omega}(B_{\omega}(x_i,\xi))={\mathcal E}_{{\theta}{\omega}}$. Hence, for every $x\in{\mathcal E}_{{\theta}{\omega}}$ we have $$ \left({\mathcal L}_{\omega}(\textbf{1}_{B_{\omega}(x_i,\xi)})\right)(x)\geq e^{-\|\phi_{\omega}\|_\infty}.
$$ Next, since ${\lambda}_{\omega}=\nu_{{\theta}{\omega}}({\mathcal L}_{\omega} \textbf{1})$ we have $$ {\lambda}_{\omega}\leq \|{\mathcal L}_{{\omega}}\textbf{1}\|_\infty\leq \deg(T_{\omega})e^{\|\phi_{\omega}\|_\infty} $$ and so $$ \nu_{\omega}(B(x_i,\xi))=\nu_{{\theta}{\omega}}({\lambda}_{\omega}^{-1}{\mathcal L}_{\omega}(\textbf{1}_{B_{\omega}(x_i,\xi)}))\geq {\lambda}_{\omega}^{-1}e^{-\|\phi_{\omega}\|_{\infty}}\geq \frac{e^{-2\|\phi_{\omega}\|_\infty}}{\deg (T_{\omega})}. $$ \end{proof} \begin{lemma}\label{Aper} For every $g\in{\mathcal C}_{{\omega}}$ we have $$ \|g\|\leq K_{\omega}\nu_{\omega}(g) $$ where when $\xi<1$, $$ K_{\omega}=e^{2\|\phi_{\omega}\|_\infty+2\xi^{\alpha}\gamma_{\omega}^{\alpha}}\deg(T_{\omega})(1+\gamma_{\omega}^{\alpha}) $$ while when $\xi=1$ we have $K_{\omega}=(1+\gamma_{\omega}^{\alpha})e^{\gamma_{\omega}^{\alpha}}$. \end{lemma} \begin{proof} Fix some $g\in{\mathcal C}_{{\omega}}$. Suppose first that $\xi<1$. Let $x\in{\mathcal E}_{\omega}$ and let $i$ be so that $\rho(x,x_i)\leq \xi$, $x_i=x_{i,{\omega}}$. Then since $g\in{\mathcal C}_{\omega}$ we have $$ g(x)\leq e^{\xi^{\alpha} \gamma_{\omega}^{\alpha}}g(x_i)\leq e^{2\xi^{\alpha} \gamma_{\omega}^{\alpha}}\inf\{g(y): d(y,x_i)\leq \xi\}\leq \frac{e^{2\xi^{\alpha} \gamma_{\omega}^{\alpha}}}{\nu_{\omega}(B(x_i,\xi))}\int_{B_{\omega}(x_i,\xi)}g\,d\nu_{\omega}$$$$\leq e^{2\xi^{\alpha} \gamma_{\omega}^{\alpha}}b_{\omega}^{-1}\nu_{\omega}(g), $$ where in the last inequality we have used Lemma \ref{Lemma 5.5}. By taking the supremum over all possible choices of $x$ we see that $$ \sup g= \|g\|_\infty\leq e^{2\xi^{\alpha} \gamma_{\omega}^{\alpha}}b_{\omega}^{-1}\nu_{\omega}(g). $$ When $\xi=1$ we have $$ \sup g\leq e^{\gamma_{\omega}^{\alpha}}\inf g\leq e^{\gamma_{\omega}^{\alpha}}\nu_{\omega}(g).
$$ Finally, in both cases, if $g(x)>g(x')$ and $\rho(x,x')\leq\xi$ then $$ g(x)-g(x')=g(x)(1-g(x')/g(x))\leq \|g\|_\infty(1-e^{-\gamma_{\omega}^{\alpha}\rho^{\alpha}(x,x')})\leq \|g\|_\infty \gamma_{\omega}^{\alpha}\rho^{\alpha}(x,x'). $$ Thus, $$ v(g)=v_{{\alpha},\xi}(g)\leq \|g\|_\infty\gamma_{\omega}^{\alpha} $$ and so $$ \|g\|=v(g)+\|g\|_\infty\leq \left(1+\gamma_{\omega}^{\alpha}\right)\|g\|_\infty. $$ Now the lemma follows from the above upper bounds on $\|g\|_\infty$. \end{proof} Next, arguing as in \cite[Theorem 5.3.1 (iii)]{HK} we can prove the following result. \begin{lemma}\label{generating} For every $f\in{\mathcal H}_{\omega}$ there are $f_1,f_2\in{\mathcal C}_{{\omega}}$ and nonnegative constants $c_1,c_2$ so that $f=f_1-c_1-(f_2-c_2)$ and $$ \|f_1\|+\|f_2\|+|c_1|+|c_2|\leq r_{\omega}\|f\| $$ where $r_{\omega}=4\left(1+\frac{2}{\gamma_{\omega}^{\alpha}}\right)\leq 8$. \end{lemma} Next, by applying \cite[Theorem A.2.3]{HK} and taking into account Corollary \ref{Cor diam} and Lemma \ref{Aper} we see that for every $g\in{\mathcal C}_{{\theta}^{-n}{\omega}}$ and $f\in{\mathcal C}_{{\theta}^{-m}{\omega}}$ with $m\geq n$ we have $$ \left\|\frac{{\mathcal L}_{{\theta}^{-n}{\omega}}^n g}{\nu_{{\theta}^{-n}{\omega}}({\mathcal L}_{{\theta}^{-n}{\omega}}^n g)}-\frac{{\mathcal L}_{{\theta}^{-m}{\omega}}^m f}{\nu_{{\theta}^{-m}{\omega}}({\mathcal L}_{{\theta}^{-m}{\omega}}^m f)}\right\|\leq \frac 12K_{\omega}\rho_{{\theta}^{-n}{\omega},n}. $$ Notice that $$ \nu_{{\theta}^{-n}{\omega}}({\mathcal L}_{{\theta}^{-n}{\omega}}^n g)={\lambda}_{{\theta}^{-n}{\omega},n}\nu_{{\theta}^{-n}{\omega}}(g), \,\nu_{{\theta}^{-m}{\omega}}({\mathcal L}_{{\theta}^{-m}{\omega}}^m f)={\lambda}_{{\theta}^{-m}{\omega},m}\nu_{{\theta}^{-m}{\omega}}(f) $$ where ${\lambda}_{{\omega},n}=\prod_{j=0}^{n-1}{\lambda}_{{\theta}^j{\omega}}$.
We thus see that for every sequence $(g_n)$ so that $g_n\in{\mathcal C}_{{\theta}^{-n}{\omega}}$ the limit \begin{equation}\label{h lim} h_{\omega}=\lim_{n\to\infty}\frac{{\mathcal L}_{{\theta}^{-n}{\omega}}^n g_n}{\nu_{{\theta}^{-n}{\omega}}(g_n){\lambda}_{{\theta}^{-n}{\omega},n}} \end{equation} exists, does not depend on the sequence $(g_n)$, belongs to ${\mathcal C}_{\omega}$ and satisfies $\nu_{\omega}(h_{\omega})=1$ (and so by Lemma \ref{Aper} we have $\|h_{\omega}\|\leq K_{\omega}$). Moreover, by taking $g_n=\textbf{1}$ for every $n$ and applying ${\mathcal L}_{\omega}$ to both sides of \eqref{h lim} we see that ${\mathcal L}_{\omega} h_{\omega}={\lambda}_{\omega} h_{{\theta}{\omega}}$. Furthermore, by fixing $n$, taking $f=f_m\in{\mathcal C}_{{\theta}^{-m}{\omega}}$ and letting $m\to\infty$ we see that for every $g\in{\mathcal C}_{{\theta}^{-n}{\omega}}$ we have \begin{equation}\label{uu} \left\|\frac{{\mathcal L}_{{\theta}^{-n}{\omega}}^n g}{{\lambda}_{{\theta}^{-n}{\omega},n}}-\nu_{{\theta}^{-n}{\omega}}(g)h_{\omega}\right\|\leq \frac12 K_{\omega}\nu_{{\theta}^{-n}{\omega}}(g)\rho_{{\theta}^{-n}{\omega},n}. \end{equation} In addition, since $h_{\omega}\in {\mathcal C}_{\omega}$, by \eqref{Bal} for all $x,x'\in{\mathcal E}_{{\theta}{\omega}}$ we have $$ h_{{\theta}{\omega}}(x)=({\lambda}_{\omega})^{-1}{\mathcal L}_{{\omega}}h_{{\omega}}(x)\leq {\lambda}_{\omega}^{-1}B({\omega}){\mathcal L}_{{\omega}}h_{{\omega}}(x')=B({\omega})h_{{\theta}{\omega}}(x'). $$ Since $\nu_{{\theta}{\omega}}(h_{{\theta}{\omega}})=1$ and $\nu_{{\theta}{\omega}}$ is a probability measure, we have $\sup h_{{\theta}{\omega}}\geq1$, and we conclude that $\min h_{{\theta}{\omega}}\geq B({\omega})^{-1}$. Finally, by Lemma \ref{generating}, for every $g\in{\mathcal H}_{{\theta}^{-n}{\omega}}$, there are $g_1,g_2$ in ${\mathcal C}_{{\theta}^{-n}{\omega}}$ and nonnegative constants $c_1,c_2$ so that $g=g_1-c_1-(g_2-c_2)$ and $$ \|g_1\|+\|g_2\|+|c_1|+|c_2|\leq 8\|g\|.
$$ Thus, by applying \eqref{uu} with $g=g_1$, $g=g_2$ and the constant functions $g=-c_1$ and $g=-c_2$ we see that $$ \left\|\frac{{\mathcal L}_{{\theta}^{-n}{\omega}}^n g}{{\lambda}_{{\theta}^{-n}{\omega},n}}-\nu_{{\theta}^{-n}{\omega}}(g)h_{\omega}\right\|\leq 4 K_{\omega}\|g\|\rho_{{\theta}^{-n}{\omega},n}. $$ \subsection{Effective rates for partially expanding maps: proof of Theorem \ref{RPF} (i)-(iii) in the setup of Section \ref{Maps2}} \subsubsection{The cones} Set ${\kappa}_{\omega}=\frac{1}{s_{\omega}}$ and consider the real cone \[ {\mathcal C}_{{\omega}}={\mathcal C}_{{\omega},{\kappa}_{\omega}}=\{g\in{\mathcal H}_{\omega}:\,g>0\,\text{ and }\,v(g)\leq{\kappa}_{\omega}\inf g\}. \] \begin{lemma}\label{Inc1} We have \begin{equation}\label{Inclusion} {\mathcal L}_{\omega}{\mathcal C}_{{\omega}}\subset{\mathcal C}_{{\theta}{\omega},\zeta_{\omega}{\kappa}_{{\theta}{\omega}}}\subset{\mathcal C}_{{\theta}{\omega}} \end{equation} where\footnote{The fact that $\zeta_{\omega}<1$ follows from the condition on $H_{\omega}$ in \eqref{Bound ve}.} $\zeta_{\omega}=\frac{s_{\omega}{\kappa}_{\omega}+(1+{\kappa}_{\omega})e^{{\varepsilon}_{\omega}}H_{\omega}}{{\kappa}_{{\theta}{\omega}}}=s_{{\theta}{\omega}}\left(1+(1+s_{\omega}^{-1})e^{{\varepsilon}_{\omega}}H_{\omega}\right)<1$. \end{lemma} \begin{proof}[Proof of Lemma \ref{Inc1}] The proof of \eqref{Inclusion} proceeds similarly to the proof of \cite[Theorem 5.1]{castro}. Let $g\in{\mathcal C}_{{\omega}}={\mathcal C}_{{\omega},{\kappa}_{\omega}}$. Fix some ${\omega}$ and two points $x,y$ in ${\mathcal E}_{{\theta}{\omega}}$ and denote by $(x_i)$ and $(y_i)$ their inverse images under $T_{\omega}$, respectively.
Then \begin{eqnarray*} \frac{|{\mathcal L}_{\omega} g(x)-{\mathcal L}_{\omega} g(y)|}{\inf{\mathcal L}_{\omega} g}\leq \frac{|{\mathcal L}_{\omega} g(x)-{\mathcal L}_{\omega} g(y)|}{d_{\omega} e^{\inf\phi_{\omega}}\inf g}\leq d_{\omega}^{-1}\sum_{i=1}^{d_{\omega}}e^{\phi_{\omega}(x_i)-\inf\phi_{\omega}}|g(x_i)-g(y_i)|(\inf g)^{-1}\\+d_{\omega}^{-1}\sum_{i=1}^{d_{\omega}}(g(y_i)/\inf g)e^{-\inf\phi_{\omega}}|e^{\phi_{\omega}(x_i)}-e^{\phi_{\omega}(y_i)}|:=I_1+I_2. \end{eqnarray*} Next, since $g\in{\mathcal C}_{\omega}$ for each $i$ we have $|g(x_i)-g(y_i)|\leq v(g)\rho^{\alpha}(x_i,y_i)\leq{\kappa}_{\omega}\inf g\cdot\rho^{\alpha}(x_i,y_i)$. Moreover, we have $\phi_{\omega}(x_i)-\inf\phi_{\omega}\leq\sup\phi_{\omega}-\inf\phi_{\omega}={\varepsilon}_{\omega}$. Combining these estimates and taking into account \eqref{Pair1}, \eqref{Pair2} and \eqref{Pair3} we get that $$ I_1\leq {\kappa}_{\omega}\rho^{\alpha}(x,y)e^{{\varepsilon}_{\omega}}d_{\omega}^{-1}(L_{\omega}^{\alpha} q_{\omega}+(d_{\omega}-q_{\omega}){\sigma}_{\omega}^{-{\alpha}})=\rho^{\alpha}(x,y)s_{\omega}{\kappa}_{\omega} $$ where we recall that $s_{\omega}$ was defined in \eqref{Bound ve}. In order to bound $I_2$, we first observe that $\sup g\leq\inf g+v(g)\leq (1+{\kappa}_{\omega})\inf g$ and that by Definition \ref{H def} of the local H\"older constant $H_{\omega}$ and the mean value theorem we see that \[ |e^{\phi_{\omega}(x_i)}-e^{\phi_{\omega}(y_i)}|\leq e^{\max(\phi_{\omega}(x_i),\phi_{\omega}(y_i))}|\phi_{\omega}(x_i)-\phi_{\omega}(y_i)| \leq e^{\inf\phi_{\omega}+{\varepsilon}_{\omega}} H_{\omega}\rho^{\alpha}(x,y). \] Using these estimates we obtain that \[ I_2\leq \rho^{\alpha}(x,y)(1+{\kappa}_{\omega})e^{{\varepsilon}_{\omega}}H_{\omega}.
\] We conclude that \[ v({\mathcal L}_{\omega} g)\leq \left(s_{\omega}{\kappa}_{\omega}+(1+{\kappa}_{\omega})e^{{\varepsilon}_{\omega}}H_{\omega}\right)\inf {\mathcal L}_{\omega} g= \zeta_{\omega}{\kappa}_{{\theta}{\omega}}\inf {\mathcal L}_{\omega} g \] and therefore \begin{equation}\label{Real inv} {\mathcal L}_{\omega}{\mathcal C}_{{\omega},{\kappa}_{\omega}}\subset{\mathcal C}_{{\theta}{\omega},\zeta_{\omega}{\kappa}_{{\theta}{\omega}}}\subset{\mathcal C}_{{\theta}{\omega},{\kappa}_{{\theta}{\omega}}}={\mathcal C}_{{\theta}{\omega}}. \end{equation} \end{proof} Next, we have the following. \begin{corollary}\label{Cor diam 1} The projective diameter of ${\mathcal L}_{{\omega}}{\mathcal C}_{{\omega}}$ inside ${\mathcal C}_{{\theta}{\omega}}$ does not exceed $$ D({\omega}):=2\ln\left(\frac{1+\zeta_{\omega}}{1-\zeta_{\omega}}\right)+2\ln\left(1+\zeta_{\omega}{\kappa}_{\omega}\right). $$ \end{corollary} \begin{proof} See \cite[Section 4]{castro} or \cite[Section 5]{Varandas} (recalling our assumption that $\text{diam}({\mathcal E}_{\omega})\leq 1$). \end{proof} \subsubsection{Reconstruction of $\nu_{\omega}$ using dual cones} Let ${\mathcal C}_{\omega}^*$ be the dual cone of ${\mathcal C}_{\omega}$ and let ${\mathcal L}_{\omega}^*:{\mathcal H}_{{\theta}{\omega}}^*\to{\mathcal H}_{{\omega}}^*$ be the dual operator. Then, as explained in Section \ref{nu sec 1}, the projective diameter of ${\mathcal L}_{\omega}^* {\mathcal C}_{{\theta}{\omega}}^*$ inside ${\mathcal C}_{\omega}^*$ equals the projective diameter of ${\mathcal L}_{{\omega}}{\mathcal C}_{{\omega}}$ inside ${\mathcal C}_{{\theta}{\omega}}$ (which does not exceed $D({\omega})$). Next, we need the following result. \begin{lemma}\label{Dual aper1} For every $\mu\in {\mathcal C}_{\omega}^*$ and all $h\in{\mathcal H}_{\omega}$ we have $$ |\mu(h)|\leq k_{\omega}\|h\|\mu(\textbf{1}) $$ where $k_{\omega}=\frac{1+{\kappa}_{\omega}}{{\kappa}_{\omega}}=1+s_{\omega}\leq 2$ (and ${\kappa}_{\omega}=\frac1{s_{\omega}}$).
\end{lemma} \begin{proof} As in the proof of Lemma \ref{Dual aper}, it is enough to show that a closed ball of radius $\frac{{\kappa}_{\omega}}{1+{\kappa}_{\omega}}$ around $\textbf{1}$ is contained in ${\mathcal C}_{\omega}$. Indeed, let $h=1+f$ where $\|f\|<1$. Then $\inf (1+f)\geq 1-\|f\|_\infty\geq 1-\|f\|$ and $v(1+f)=v(f)\leq \|f\|$. Hence $h\in{\mathcal C}_{\omega}$ if $$ \|f\|\leq {\kappa}_{\omega}(1-\|f\|), $$ which holds whenever $\|f\|\leq \frac{{\kappa}_{\omega}}{1+{\kappa}_{\omega}}$. \end{proof} Finally, let $\rho({\omega})=\tanh (D({\omega})/4)$. Let $\mu\in{\mathcal C}_{{\theta}^n{\omega}}^*$ and $\nu\in{\mathcal C}_{{\theta}^m{\omega}}^*$ for some $m\geq n$. Then the projective distance between $({\mathcal L}_{\omega}^n)^*\mu$ and $({\mathcal L}_{\omega}^m)^*\nu=({\mathcal L}_{\omega}^n)^*({\mathcal L}_{{\theta}^n{\omega}}^{m-n})^*\nu$ does not exceed $\rho_{{\omega},n}=\prod_{j=0}^{n-1}\rho({\theta}^j{\omega})$. Thus, as in Section \ref{nu sec 1} we conclude that $$ \left\|\frac{({\mathcal L}_{\omega}^n)^*\mu}{\mu({\mathcal L}_{\omega}^n\textbf{1})}-\frac{({\mathcal L}_{\omega}^m)^*\nu}{\nu({\mathcal L}_{\omega}^m\textbf{1})}\right\|\leq \sqrt 2k_{\omega}\rho_{{\omega},n}. $$ Thus, for any sequence $(\mu_n)$ so that $\mu_n\in{\mathcal C}_{{\theta}^n{\omega}}^*$ the limit $$ \nu_{\omega}=\lim_{n\to\infty}\frac{({\mathcal L}_{\omega}^n)^*\mu_n}{\mu_n({\mathcal L}_{\omega}^n\textbf{1})} $$ exists, belongs to ${\mathcal C}_{\omega}^*$ and does not depend on the choice of the sequence. Moreover, by fixing $n$ and letting $m\to\infty$ we have $$ \left\|\frac{({\mathcal L}_{\omega}^n)^*\mu}{\mu({\mathcal L}_{\omega}^n\textbf{1})}-\nu_{\omega}\right\|\leq \sqrt 2k_{{\omega}}\rho_{{\omega},n} $$ for every $\mu\in{\mathcal C}_{{\theta}^n{\omega}}^*$. Note that $\nu_{\omega}(\textbf{1})=1$. Moreover, as in Section \ref{nu sec 1} there is a number ${\lambda}_{\omega}$ so that ${\mathcal L}_{\omega}^*\nu_{{\theta}{\omega}}={\lambda}_{\omega}\nu_{\omega}$, where ${\lambda}_{\omega}=\nu_{{\theta}{\omega}}({\mathcal L}_{\omega} \textbf{1})$. Furthermore, $\nu_{\omega}$ is a probability measure.
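Let us also record why ${\lambda}_{\omega}$ takes the stated form: evaluating the identity ${\mathcal L}_{\omega}^*\nu_{{\theta}{\omega}}={\lambda}_{\omega}\nu_{\omega}$ at the constant function $\textbf{1}$ and using that $\nu_{\omega}(\textbf{1})=1$ gives
$$
\nu_{{\theta}{\omega}}({\mathcal L}_{\omega}\textbf{1})=({\mathcal L}_{\omega}^*\nu_{{\theta}{\omega}})(\textbf{1})={\lambda}_{\omega}\nu_{\omega}(\textbf{1})={\lambda}_{\omega}.
$$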
\subsubsection{Reconstruction of $h_{\omega}$ with effective rates} We first need the following result. \begin{lemma} For every $g\in{\mathcal C}_{{\omega}}={\mathcal C}_{{\omega},{\kappa}_{\omega}}$ we have $$ \|g\|\leq K_{\omega}\inf g\leq K_{\omega}\nu_{\omega}(g) $$ where $$ K_{\omega}=1+2{\kappa}_{\omega}. $$ \end{lemma} \begin{proof} First, since $g\in{\mathcal C}_{\omega}$ we have $$ \|g\|=\sup g+v(g)\leq \sup g+{\kappa}_{\omega}\inf g. $$ Second, in order to estimate $\sup g$, using that $v(g)\leq {\kappa}_{\omega}\inf g$ we see that for every $x\in{\mathcal E}_{\omega}$ we have $|g(x)-\inf g|\leq v(g)\text{diam}({\mathcal E}_{\omega})^{\alpha}\leq {\kappa}_{\omega}\text{diam}({\mathcal E}_{\omega})^{\alpha}\inf g$. Taking into account that $\text{diam}({\mathcal E}_{\omega})\leq 1$ we conclude that $\sup g\leq (1+{\kappa}_{\omega})\inf g$, which completes the proof of the lemma. \end{proof} The next result we need is the following. \begin{lemma}\label{generating1} For every $g\in{\mathcal H}_{\omega}$ there is a constant $c(g)>0$ and a function $g_1\in{\mathcal C}_{\omega}$ so that $g=g_1-c(g)$ and $$ \|g_1\|+c(g)\leq 3\|g\|. $$ \end{lemma} \begin{proof} Let $c(g)=v(g)/{\kappa}_{\omega}+\sup|g|$; since ${\kappa}_{\omega}\geq1$ we have $c(g)\leq \|g\|$. Then $g_1=g+c(g)\in{\mathcal C}_{{\omega}}$, so $g=g_1-c(g)$, and \[ \|g_1\|+c(g)=\|g+c(g)\|+c(g)\leq \|g\|+2c(g)\leq 3\|g\|. \] \end{proof} By repeating the arguments in Section \ref{h sec} we see that for every $m,n\in{\mathbb N}$ such that $m\geq n$ and all $g\in{\mathcal C}_{{\theta}^{-n}{\omega}}$ and $f\in{\mathcal C}_{{\theta}^{-m}{\omega}}$ we have $$ \left\|\frac{{\mathcal L}_{{\theta}^{-n}{\omega}}^n g}{\nu_{{\theta}^{-n}{\omega}}({\mathcal L}_{{\theta}^{-n}{\omega}}^n g)}-\frac{{\mathcal L}_{{\theta}^{-m}{\omega}}^m f}{\nu_{{\theta}^{-m}{\omega}}({\mathcal L}_{{\theta}^{-m}{\omega}}^m f)}\right\|\leq \frac 12K_{\omega}\rho_{{\theta}^{-n}{\omega},n}.
$$ Moreover, for any sequence $(g_n)$ so that $g_n\in{\mathcal C}_{{\theta}^{-n}{\omega}}$ the limit $$ h_{\omega}=\lim_{n\to\infty}\frac{{\mathcal L}_{{\theta}^{-n}{\omega}}^n g_n}{\nu_{{\theta}^{-n}{\omega}}(g_n){\lambda}_{{\theta}^{-n}{\omega},n}} $$ exists, does not depend on $(g_n)$, belongs to ${\mathcal C}_{\omega}$ and satisfies $\nu_{\omega}(h_{\omega})=1$. Furthermore, ${\mathcal L}_{\omega} h_{\omega}={\lambda}_{\omega} h_{{\theta}{\omega}}$ and for every $g\in{\mathcal C}_{{\theta}^{-n}{\omega}}$, $$ \left\|\frac{{\mathcal L}_{{\theta}^{-n}{\omega}}^n g}{{\lambda}_{{\theta}^{-n}{\omega},n}}-\nu_{{\theta}^{-n}{\omega}}(g)h_{\omega}\right\|\leq \frac12 K_{\omega}\nu_{{\theta}^{-n}{\omega}}(g)\rho_{{\theta}^{-n}{\omega},n}. $$ In addition, since $h_{\omega}\in {\mathcal C}_{\omega}$ we have $\sup h_{\omega}\leq B_1({\omega})\inf h_{\omega}$ where $B_1({\omega})=1+{\kappa}_{\omega}$ and we have used that $\text{diam}({\mathcal E}_{\omega})\leq1$. Since $\nu_{\omega}(h_{\omega})=1$ we conclude that $\min h_{{\omega}}\geq B_1({\omega})^{-1}$. Finally, arguing as in Section \ref{h sec}, by using Lemma \ref{generating1} instead of Lemma \ref{generating} we see that for every $g\in{\mathcal H}_{{\theta}^{-n}{\omega}}$, $$ \left\|\frac{{\mathcal L}_{{\theta}^{-n}{\omega}}^n g}{{\lambda}_{{\theta}^{-n}{\omega},n}}-\nu_{{\theta}^{-n}{\omega}}(g)h_{\omega}\right\|\leq 2 K_{\omega}\|g\|\rho_{{\theta}^{-n}{\omega},n}. $$ \subsection{Decay of correlations and the normalized transfer operators: proof of Theorem \ref{RPF} (iv)-(v)}\label{SecDec} Let the operator $L_{\omega}$ be defined by $$ L_{\omega} g=\frac{{\mathcal L}_{\omega}(g h_{\omega})}{h_{{\theta}{\omega}}{\lambda}_{\omega}}.
$$ Then, using that $\|fg\|\leq 3\|f\|\|g\|$ for every two H\"older continuous functions, we see that $$ \left\|L_{\omega}^{n} g-\mu_{\omega}(g)\right\|=\left\|\frac{{\mathcal L}_{\omega}^{n} (gh_{\omega})}{{\lambda}_{{\omega},n}h_{{\theta}^n{\omega}}}-\nu_{\omega}(gh_{\omega})\right\|=\left\|\left(\frac 1{h_{{\theta}^n{\omega}}}\right)\left(\frac{{\mathcal L}_{\omega}^n(g h_{\omega})}{{\lambda}_{{\omega},n}}-\nu_{{\omega}}(g h_{{\omega}})h_{{\theta}^n{\omega}}\right)\right\| $$ $$ \leq 3\left\|\frac1 {h_{{\theta}^n{\omega}}}\right\| \left \|\frac{{\mathcal L}_{\omega}^{n} (gh_{\omega})}{{\lambda}_{{\omega},n}}-\nu_{\omega} (gh_{\omega})h_{{\theta}^n{\omega}}\right\|. $$ Now, since $h_{\omega}\geq B({\theta}^{-1}{\omega})^{-1}$ we have $\|1/h_{\omega}\|\leq v(h_{\omega})B^2({\theta}^{-1}{\omega})+B({\theta}^{-1}{\omega})\leq 2B^2({\theta}^{-1}{\omega})K_{\omega}=U_{\omega}/3$. Thus, by \eqref{RPF ExpC}, $$ \|L_{\omega}^n g-\mu_{\omega}(g)\|\leq U_{{\theta}^n{\omega}} C_{{\theta}^n{\omega}}\rho_{{\omega},n}=U({\theta}^n{\omega})\rho_{{\omega},n}. $$ Now the decay of correlations \eqref{DEC} follows from the equality $$ \text{Cov}_{\mu_{\omega}}(g, f\circ T_{\omega}^n)=\int f\cdot (L_{\omega}^n g-\mu_{\omega}(g))d\mu_{{\theta}^n{\omega}}. $$ \section{Random complex RPF theorems with effective rates: proof of Theorem \ref{Complex RPF}} In contrast with the previous sections, in this section we begin with the setup of Section \ref{Maps2}. The reason is that in the setup of Section \ref{Maps1} the appropriate projective estimates needed to prove Theorem \ref{Complex RPF} are similar to \cite[Ch. 4-5]{HK} (with some modifications). Henceforth, for the sake of convenience, we will always assume that $\mu_{\omega}(u_{\omega})=0$; namely, we replace $u_{\omega}$ with $\tilde u_{\omega}=u_{\omega}-\mu_{\omega}(u_{\omega})$.
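Let us point out why this centering is harmless. Since ${\mathcal L}_{\omega}^{(z)}g={\mathcal L}_{\omega}(e^{zu_{\omega}}g)$ (see the proof of Theorem \ref{Complex cones Thm} below), replacing $u_{\omega}$ by $\tilde u_{\omega}$ only multiplies the twisted operator by a nonzero scalar:
$$
{\mathcal L}_{\omega}(e^{z\tilde u_{\omega}}g)=e^{-z\mu_{\omega}(u_{\omega})}{\mathcal L}_{\omega}(e^{zu_{\omega}}g).
$$
In particular, the eigenfunctions and eigenfunctionals constructed below are unaffected, while ${\lambda}_{\omega}(z)$ is multiplied by the factor $e^{-z\mu_{\omega}(u_{\omega})}$.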
\subsection{Complex cones contractions for random maps with expansion and contraction} The proof of Theorem \ref{Complex RPF} relies on the theory of the canonical complexification of real Birkhoff cones. We will recall the appropriate results concerning this theory in the body of the proof of Theorem \ref{Complex cones Thm} below, and the readers are referred to \cite[Appendix A]{HK} for a summary of the main definitions and results concerning contraction properties of real and complex cones (the properties of the complex cones are essentially a summary of the appropriate results in \cite{Rugh, Dub1, Dub2}). Let ${\mathcal C}_{{\omega},{\mathbb C}}$ be the canonical complexification of the cone ${\mathcal C}_{\omega}$ (see \cite[Appendix A]{HK}), and let ${\mathcal C}_{{\omega},{\mathbb C}}^{*}:=\{\nu\in{\mathcal H}_{\omega}^*:\,\nu(g)\not=0\,\,\,\forall g\in{\mathcal C}_{{\omega},{\mathbb C}}\setminus\{0\}\}$ be its complex dual cone. \begin{theorem}\label{Complex cones Thm} (i) The cones ${\mathcal C}_{{\omega},{\mathbb C}}$ and their duals ${\mathcal C}_{{\omega},{\mathbb C}}^{*}$ have bounded aperture: for all $g\in{\mathcal C}_{{\omega},{\mathbb C}}$ and $\nu\in{\mathcal C}_{{\omega},{\mathbb C}}^*$ and every point $x_{\omega}\in{\mathcal E}_{\omega}$ we have \begin{equation}\label{Aper cmplx} \|g\|\leq Q_{\omega}|g(x_{\omega})|\,\,\text{ and }\,\,\|\nu\|\leq M_{\omega}|\nu(\textbf{1})| \end{equation} where $Q_{\omega}=2\sqrt 2(1+2{\kappa}_{\omega})=2\sqrt 2 K_{\omega}$, $M_{\omega}=6{\kappa}_{\omega}$ and ${\kappa}_{\omega}=s_{\omega}^{-1}$. (ii) The cones ${\mathcal C}_{{\omega},{\mathbb C}}$ are linearly convex, namely for every $g\not\in{\mathcal C}_{{\omega},{\mathbb C}}$ there exists $\mu\in{\mathcal C}_{{\omega},{\mathbb C}}^*$ such that $\mu(g)=0$.
(iii) The cones ${\mathcal C}_{{\omega},{\mathbb C}}$ are reproducing: for any complex-valued function $g\in{\mathcal H}_{\omega}$ there are constants $c_1(g), c_2(g)>0$ and functions $g_1,g_2\in{\mathcal C}_{\omega}\subset {\mathcal C}_{{\omega},{\mathbb C}}$ so that $g=g_1-c_1(g)+i(g_2-c_2(g))$ and $$ \|g_1\|+c_1(g)+\|g_2\|+c_2(g)\leq 6\|g\|. $$ (iv) Let $$ c_{\omega}=32{\kappa}_{{\theta}{\omega}}^{-1}(1+2{\kappa}_{\omega})e^{\|u_{\omega}\|_\infty+2\|\phi_{\omega}\|_\infty} \|u_{\omega}\|(1+H_{\omega})(1-\zeta_{\omega})^{-1} $$ and for all complex $z$ so that $|z|\leq1$ set $$ {\delta}_{\omega}(z)=2|z|c_{\omega}\left(1+\cosh(D({\omega})/2)\right). $$ Then, if ${\delta}_{\omega}(z)\leq 1-e^{-D({\omega})}$ we have that \[ {\mathcal L}_{\omega}^{(z)}{\mathcal C}_{{\omega},{\mathbb C}}\subset{\mathcal C}_{{\theta}{\omega},{\mathbb C}} \] and the Hilbert diameter of the image with respect to the complex projective metric corresponding to the cone ${\mathcal C}_{{\theta}{\omega},{\mathbb C}}$ does not exceed $7D({\omega})$. \end{theorem} \begin{proof}[Proof of Theorem \ref{Complex cones Thm}] (i) First (see \cite[Appendix A]{HK} and \cite[Section 5]{Rugh}) we have \begin{equation}\label{Complexification} {\mathcal C}_{{\omega},{\mathbb C}}=\{g\in {\mathcal H}_{\omega}:\,\Re\big(\overline{\mu(g)}\nu(g)\big) \geq0\,\,\,\,\forall\mu,\nu\in{\mathcal C}_{{\omega}}^*\}. \end{equation} We begin by showing that the complex cones ${\mathcal C}_{{\omega},{\mathbb C}}$ and their duals have bounded aperture. First, for every point $a\in {\mathcal E}_{\omega}$ and $g\in{\mathcal C}_{{\omega}}$ we have \[ \|g\|=\sup g+v(g)\leq \inf g+2v(g)\leq (1+2{\kappa}_{\omega})\inf g\leq (1+2{\kappa}_{\omega})g(a) \] where we have used that $g(x)-g(y)\leq (\text{diam}({\mathcal E}_{\omega}))^\alpha v(g)\leq v(g)$ for every real-valued function on ${\mathcal E}_{\omega}$.
By applying \cite[Lemma 5.2]{Rugh} we conclude that for every $g\in{\mathcal C}_{{\omega}, {\mathbb C}}$ we have \[ \|g\|\leq 2\sqrt 2(1+2{\kappa}_{\omega})|g(a)|. \] Next, in order to show that the dual cone ${\mathcal C}_{{\omega},{\mathbb C}}^*$ has bounded aperture we will apply \cite[Lemma A.2.7]{HK} which states that $$ \|\nu\|\leq M_{\omega}|\nu(\textbf{1})|,\,\,\forall\,\nu\in{\mathcal C}_{{\omega},{\mathbb C}}^* $$ if the complex cone ${\mathcal C}_{{\omega},{\mathbb C}}$ contains the ball of radius $1/M_{\omega}$ around the constant function $\textbf{1}$. The first step in showing that such a ball exists is the following representation of the real cone: \[ {\mathcal C}_{{\omega}}=\{g\in{\mathcal H}_{\omega}: s(g)\geq 0,\,\forall\,\,\, s\in \Gamma_{\omega}\} \] where $\Gamma_{\omega}\subset {\mathcal H}_{\omega}^*$ is the class of linear functionals $s$ which either have the form $s(g)=s_a(g)=g(a)$ for some $a\in{\mathcal E}_{\omega}$ or have the form \[ s=s_{x,y,t,{\kappa}_{\omega}}(g)={\kappa}_{\omega} g(t)-\frac{g(x)-g(y)}{\rho^{\alpha}(x,y)} \] for some $x,y,t\in{\mathcal E}_{\omega}$ so that $x\not=y$. Then (see \cite[Appendix A]{HK}), since $\Gamma_{\omega}$ generates the dual cone ${\mathcal C}_{{\omega}}^*$, the canonical complexification of ${\mathcal C}_{\omega}$ can be written in the following form: \begin{equation}\label{complexification} {\mathcal C}_{{\omega},{\mathbb C}}=\{x\in{\mathcal H}_{{\omega},{\mathbb C}}:\,\Re\big(\overline{\mu(x)}\nu(x)\big)\geq0,\,\,\,\,\,\forall\mu, \nu\in \Gamma_{\omega}\}. \end{equation} Using (\ref{complexification}), it is enough to show that for all $g$ of the form $g=\textbf{1}+h$ with $\|h\|\leq \bar{\varepsilon}_{\omega}:=\frac1{M_{\omega}}$, and every $s_1,s_2\in \Gamma_{\omega}$ we have $$ \Re(s_{1}(g)\cdot\overline{s_2(g)})\geq 0. $$ Notice that $s_i(g)=1+s_i(h)$ and so $$ \Re(s_{1}(g)\cdot\overline{s_2(g)})\geq 1-|s_1(h)|-|s_2(h)|-|s_1(h)s_2(h)|. $$ Now, there are four cases.
When $s_i(g)=g(a_i)$ for some $a_i\in{\mathcal E}_{\omega}$ then $$ \Re(s_{1}(g)\cdot\overline{s_2(g)})\geq 1-2\|h\|_\infty-\|h\|_\infty^2>0 $$ since $\|h\|\leq\bar{\varepsilon}_{\omega}<\frac13$. Let us suppose next that $s_1(g)=g(a)$ and $s_2(g)={\kappa}_{\omega} g(t)-\frac{g(x)-g(y)}{\rho^{\alpha}(x,y)}$. Then $$ \Re(s_{1}(g)\cdot\overline{s_2(g)})\geq 1-{\kappa}_{\omega}\|h\|_\infty-v(h)-\|h\|_\infty^2{\kappa}_{\omega}-\|h\|_\infty v(h)\geq 1-2\|h\|({\kappa}_{\omega}+1) $$ where we have used that $\|h\|\leq1$. Notice that the above right hand side is nonnegative if $\|h\|\leq \frac 1{M_{\omega}}=\frac1{6{\kappa}_{\omega}}$ since ${\kappa}_{\omega}\geq1$. A similar inequality holds true when $s_2(g)=g(a)$ and $s_1(g)={\kappa}_{\omega} g(t)-\frac{g(x)-g(y)}{\rho^{\alpha}(x,y)}$. Let us assume now that $s_i(g)={\kappa}_{\omega} g(t_i)-\frac{g(x_i)-g(y_i)}{\rho^{\alpha}(x_i,y_i)}$ for appropriate choices of $t_i,x_i,y_i$, $i=1,2$. Then $$ \Re(s_{1}(g)\cdot\overline{s_2(g)})\geq 1-2({\kappa}_{\omega}\|h\|_\infty+v(h))-\left({\kappa}_{\omega}\|h\|_\infty+v(h)\right)^2$$$$\geq 1-2{\kappa}_{\omega}\|h\|-({\kappa}_{\omega}\|h\|)^2\geq 1-3{\kappa}_{\omega}\|h\|\geq 1-6{\kappa}_{\omega}\|h\| $$ where in the last two inequalities we have used that ${\kappa}_{\omega}\geq1$ and ${\kappa}_{\omega}\|h\|\leq 1$. The above right hand side is nonnegative since $\|h\|\leq \bar{\varepsilon}_{\omega}=M_{\omega}^{-1}=\frac{1}{6{\kappa}_{\omega}}$. \vskip0.1cm (ii) By \cite[Lemma 4.1]{Dub2}, in order to show that the cone ${\mathcal C}_{{\omega},{\mathbb C}}$ is linearly convex it is enough to show that there is a continuous linear functional $\ell$ which is strictly positive on ${\mathcal C}_{\omega}\setminus\{0\}$. Clearly we can take $\ell(g)=g(a)$ for an arbitrary point $a$ in ${\mathcal E}_{\omega}$. \vskip0.1cm (iii) This is a direct consequence of Lemma \ref{generating1} applied with the real and imaginary parts of $g$.
\vskip0.1cm (iv) Recall first that by Lemma \ref{Inc1}, for all ${\omega}$, \begin{equation}\label{Inclusion0} {\mathcal L}_{\omega}{\mathcal C}_{{\omega}}\subset{\mathcal C}_{{\theta}{\omega},\zeta_{\omega}{\kappa}_{{\theta}{\omega}}}. \end{equation} We will next prove that for every $s\in\Gamma_{{\theta}{\omega}}$, $g\in{\mathcal C}_{{\omega}}$ (the real cone) and a complex number $z$ so that $|z|\leq 1$ we have \begin{equation}\label{Comparison Key} \left|s({\mathcal L}_{\omega}^{(z)}g)-s({\mathcal L}_{\omega} g)\right|\leq c_{\omega}|z| s({\mathcal L}_{\omega} g). \end{equation} After this is established we can apply \cite[Theorem A.2.4]{HK} and obtain item (iv). Let us first consider the case when $s(f)=f(a)$ for some $a\in{\mathcal E}_{{\theta}{\omega}}$. Then $$ \left|s({\mathcal L}_{\omega}^{(z)}g)-s({\mathcal L}_{\omega} g)\right|=\left|{\mathcal L}_{\omega}\big(g(e^{zu_{\omega}}-1)\big)(a)\right|\leq \|e^{zu_{\omega}}-1\|_\infty {\mathcal L}_{\omega} g(a)= \|e^{zu_{\omega}}-1\|_\infty s({\mathcal L}_{\omega} g) $$ $$ \leq |z|\|u_{\omega}\|_\infty e^{\|u_{\omega}\|_\infty}s({\mathcal L}_{\omega} g)\leq c_{\omega}|z|s({\mathcal L}_{\omega} g). $$ Next, let us consider the case when $s=s_{x,y,t,{\kappa}_{{\theta}{\omega}}}$. We first need the following simple observation: let $A$ and $A'$ be complex numbers, $B$ and $B'$ be real numbers, and let ${\varepsilon}_1>0$ and $\zeta\in(0,1)$ be so that \begin{itemize} \item $B>B'$ \item $|A-B|\leq{\varepsilon}_1B$ \item $|A'-B'|\leq{\varepsilon}_1 B$ \item $|B'/B|\leq\zeta$. \end{itemize} Then \[ \left|\frac{A-A'}{B-B'}-1\right|\leq 2{\varepsilon}_1(1-\zeta)^{-1}. \] The proof of this observation is elementary; just write \[ \left|\frac{A-A'}{B-B'}-1\right|\leq\left|\frac{A-B}{B-B'}\right|+ \left|\frac{A'-B'}{B-B'}\right|\leq \frac{2B{\varepsilon}_1}{B-B'}=\frac{2{\varepsilon}_1}{1-B'/B}. \] Next, fix some nonzero $g\in{\mathcal C}_{{\omega}}$ and $(x,y,t)\in{\Delta}_{{\theta}{\omega}}$.
Then, in order to obtain \eqref{Comparison Key} when $s=s_{x,y,t,{\kappa}_{{\theta}{\omega}}}$ we need to show that the conditions of the above observation hold true with $A={\kappa}_{{\theta}{\omega}}{\mathcal L}_{\omega}^{(z)} g(t)$, \begin{equation*} B={\kappa}_{{\theta}{\omega}}{\mathcal L}_{\omega} g(t),\,\,\, A'=\frac{{\mathcal L}_{\omega}^{(z)} g(x)-{\mathcal L}_{\omega}^{(z)} g(y)}{\rho^{\alpha}(x,y)}, \,\,\,\,B'= \frac{{\mathcal L}_{\omega} g(x)-{\mathcal L}_{\omega} g(y)}{\rho^{\alpha}(x,y)} \end{equation*} and $\zeta=\zeta_{\omega}$ and ${\varepsilon}_1=16{\kappa}_{{\theta}{\omega}}^{-1}(1+2{\kappa}_{\omega})(1+H_{\omega})e^{\|u_{\omega}\|_\infty+2\|\phi_{\omega}\|_\infty}\|u_{\omega}\||z|$. We begin by noting that $B>B'$ since the function ${\mathcal L}_{\omega} g$ is a nonzero member of the cone ${\mathcal C}_{{\theta}{\omega},\zeta_{\omega}{\kappa}_{{\theta}{\omega}}}$. In fact, this already implies that \[ |B'/B|\leq \zeta_{\omega}{\kappa}_{{\theta}{\omega}}\inf{\mathcal L}_{\omega} g/B\leq\zeta_{\omega}<1. \] Next, notice that when $|z|\leq 1$ we have \begin{eqnarray*} |A-B|={\kappa}_{{\theta}{\omega}}|{\mathcal L}_{\omega}^{(z)} g(t)-{\mathcal L}_{\omega} g(t)|={\kappa}_{{\theta}{\omega}}|{\mathcal L}_{\omega}(g(e^{zu_{\omega}}-1))(t)|\\\leq {\kappa}_{{\theta}{\omega}}\|e^{zu_{\omega}}-1\|_\infty{\mathcal L}_{\omega} g(t)\leq |z|e^{\|u_{\omega}\|_\infty}\|u_{\omega}\|_\infty B. \end{eqnarray*} Finally, let us estimate the difference $|A'-B'|$. For each $a,b\in{\mathcal E}_{\omega}$ we define \[ {\Delta}_{a,b}(z)=e^{\phi_{\omega}(a)}(e^{zu_{\omega}(a)}-1)g(a)-e^{\phi_{\omega}(b)}(e^{zu_{\omega}(b)}-1)g(b). \] Denote again by $x_i$ and $y_i$ the preimages of $x$ and $y$ under $T_{\omega}$, respectively, where $1\leq i\leq d_{\omega}$. Then \[ \rho^\alpha(x,y)(A'-B')=\sum_{i=1}^{d_{\omega}}{\Delta}_{x_i,y_i}(z). \] Next, by using the mean value theorem we see that \[ |{\Delta}_{x_i,y_i}(z)|=|{\Delta}_{x_i,y_i}(z)-{\Delta}_{x_i,y_i}(0)|\leq|z|\sup_{|q|\leq |z|}|{\Delta}'_{x_i,y_i}(q)|.
\] In order to estimate the above derivative, first note that \[ |e^{\phi_{\omega}(x_i)}-e^{\phi_{\omega}(y_i)}|\leq (e^{\phi_{\omega}(x_i)}+e^{\phi_{\omega}(y_i)})H_{\omega}\rho^{\alpha}(x,y). \] Therefore, for every complex $q$ so that $|q|\leq1$ and all $1\leq i\leq d_{\omega}$, \[ |{\Delta}'_{x_i,y_i}(q)|\leq 8(1+H_{\omega})e^{\|u_{\omega}\|_\infty}\|u_{\omega}\|(e^{\phi_{\omega}(x_i)}+e^{\phi_{\omega}(y_i)})\|g\|\rho^{\alpha}(x,y). \] We conclude that for every $z\in{\mathbb C}$ with $|z|\leq1$, $$ |A'-B'|\leq 8|z|(1+H_{\omega})e^{\|u_{\omega}\|_\infty}\|u_{\omega}\|\|g\|({\mathcal L}_{\omega}\textbf{1}(x)+{\mathcal L}_{\omega}\textbf{1}(y)). $$ Now, since $g\in{\mathcal C}_{\omega}$ we have $\|g\|\leq r_{\omega} \inf g$, where $r_{\omega}=1+2{\kappa}_{\omega}$, and $$ \inf g\leq d_{\omega}^{-1}e^{\|\phi_{\omega}\|_\infty}{\mathcal L}_{\omega} g(t)={\kappa}_{{\theta}{\omega}}^{-1}d_{\omega}^{-1}e^{\|\phi_{\omega}\|_\infty}B. $$ Using also that ${\mathcal L}_{\omega} \textbf{1}\leq d_{\omega} e^{\|\phi_{\omega}\|_\infty}$ we see that $$ |A'-B'|\leq 16{\kappa}_{{\theta}{\omega}}^{-1}(1+2{\kappa}_{\omega})(1+H_{\omega})e^{\|u_{\omega}\|_\infty+2\|\phi_{\omega}\|_\infty}\|u_{\omega}\||z|B. $$ We thus conclude that we can take $$ \zeta=\zeta_{\omega}\,\,\text{ and }\,{\varepsilon}_1=16{\kappa}_{{\theta}{\omega}}^{-1}(1+2{\kappa}_{\omega})(1+H_{\omega})e^{\|u_{\omega}\|_\infty+2\|\phi_{\omega}\|_\infty}\|u_{\omega}\||z| $$ in the above general observation, which completes the proof of \eqref{Comparison Key} in the case $s=s_{x,y,t,{\kappa}_{{\theta}{\omega}}}$ since $c_{\omega}|z|=2{\varepsilon}_1(1-\zeta_{\omega})^{-1}$. \end{proof} \subsubsection{A complex RPF theorem with effective rates}\label{Cmplx pf} \begin{proof}[Proof of Theorem \ref{Complex RPF} in the setup of Section \ref{Maps2}] Set $$ {\delta}({\omega})=2c_{\omega}\left(1+\cosh(D({\omega})/2)\right)=2E_{\omega}.
$$ Then ${\delta}_{\omega}(z)=|z|{\delta}({\omega})$ and the assumptions in Theorem \ref{Complex RPF} ensure that $$ A:=\text{esssup}\left({\delta}({\omega})(1-e^{-D({\omega})})^{-1}\right)<\infty. $$ Hence the condition ${\delta}_{\omega}(z)\leq 1-e^{-D({\omega})}$ holds true when $|z|\leq 1/A$. Relying on Theorem \ref{Complex cones Thm} and proceeding as in the proof of the real RPF theorem (Theorem \ref{RPF}), we see that there is a constant $r_0$ so that ${\mathbb P}$-a.s. for every complex number $z$ so that $|z|\leq r_0$ there is a triplet consisting of a nonzero complex random variable ${\lambda}_{\omega}(z)$, a random function $\hat h_{\omega}^{(z)}\in{\mathcal C}_{{\omega},{\mathbb C}}$ and a random linear functional $\nu_{\omega}^{(z)}\in{\mathcal C}^*_{{\omega},{\mathbb C}}$ so that $\nu_{\omega}^{(z)}(\textbf{1})=\nu_{\omega}(\hat h_{\omega}^{(z)})=1$, $$ ({\mathcal L}_{{\omega}}^{(z)})^{*}\nu_{{\theta}{\omega}}^{(z)}={\lambda}_{\omega}(z)\nu_{\omega}^{(z)} $$ and for every $\mu\in{\mathcal C}_{{\theta}^n{\omega},{\mathbb C}}^*$ $$ \left\|\frac{({\mathcal L}_{\omega}^{z,n})^*\mu}{\mu({\mathcal L}_{\omega}^{z,n}\textbf{1})}-\nu_{\omega}^{(z)}\right\|\leq M_{\omega}\tilde\rho_{{\omega},n} $$ where $\tilde\rho({\omega})=\tanh(7D({\omega})/4)$. Moreover, for every $g\in{\mathcal C}_{{\theta}^{-n}{\omega},{\mathbb C}}$ we have \begin{equation}\label{Exp Temp} \left\|\frac{{\mathcal L}_{{\theta}^{-n}{\omega}}^{z,n} g}{\nu_{\omega}({\mathcal L}_{{\theta}^{-n}{\omega}}^{z,n} g)}-\hat h_{\omega}^{(z)}\right\|\leq \sqrt 2 K_{\omega}\tilde \rho_{{\theta}^{-n}{\omega},n}. \end{equation} Since $\nu_{\omega}^{(z)}$ and $\hat h_{\omega}^{(z)}$ are uniform limits (in $z$) of functions which are analytic in $z$ and measurable in ${\omega}$, they are themselves analytic in $z$ and measurable in ${\omega}$. Similarly, ${\lambda}_{\omega}(z)$ is analytic in $z$. Since $\nu_{\omega}^{(z)}(\textbf{1})=1$ we conclude from \eqref{Aper cmplx} that $\|\nu_{\omega}^{(z)}\|\leq M_{\omega}$.
Moreover, since $\nu_{\omega}(\hat h_{\omega}^{(z)})=1$ we conclude from \eqref{Aper cmplx} that $\|\hat h_{\omega}^{(z)}\|\leq 2\sqrt 2K_{\omega}$. It is also clear that ${\lambda}_{\omega}(0)={\lambda}_{\omega}$, $\nu_{\omega}^{(0)}=\nu_{\omega}$ and $\hat h_{\omega}^{(0)}=h_{\omega}$. To correct the fact that $\nu_{\omega}^{(z)}(\hat h_{\omega}^{(z)})$ might not equal $1$ (notice that it does not vanish since $\nu_{\omega}^{(z)}$ belongs to the dual cone) let us define $$ h_{\omega}^{(z)}=\frac{\hat h_{\omega}^{(z)}}{{\alpha}_{\omega}(z)} $$ where ${\alpha}_{\omega}(z)=\nu_{\omega}^{(z)}(\hat h_{\omega}^{(z)})$. Notice that ${\alpha}_{\omega}(0)=1$, ${\alpha}_{\omega}(z)$ is analytic in $z$ and $$ |{\alpha}_{\omega}(z)|\leq \|\nu_{\omega}^{(z)}\|\|\hat h_{\omega}^{(z)}\|\leq 2\sqrt 2 M_{\omega} K_\omega. $$ Let us now obtain \eqref{Exponential convergence}, which in particular will yield that ${\mathcal L}_{\omega}^{(z)}(h_{\omega}^{(z)})={\lambda}_{\omega}(z)h_{{\theta}{\omega}}^{(z)}$. Let us first prove a version of \eqref{Exponential convergence} for functions in the cone ${\mathcal C}_{\omega}$. First, for every complex number $z$ such that $|z|\leq r_0$ and $q\in {\mathcal C}_{{\theta}^{-n}{\omega}}$ we have \[ {\lambda}_{{\theta}^{-n}{\omega},n}(z)\nu_{{\theta}^{-n}{\omega}}^{(z)}(q)= \nu_{\omega}^{(z)}({\mathcal L}_{{\theta}^{-n}{\omega}}^{z,n}q). \] Next, set $$ b_n(q,z)=b_n({\omega},q,z)= \frac{\nu_{\omega}({\mathcal L}_{{\theta}^{-n}{\omega}}^{z,n}q)}{\nu_{\omega}^{(z)}({\mathcal L}_{{\theta}^{-n}{\omega}}^{z,n}q)}. 
$$ Then, $$ \left\|\frac{{\mathcal L}_{{\theta}^{-n}{\omega}}^{z,n}q}{{\lambda}_{{\theta}^{-n}{\omega},n}(z)\nu_{{\theta}^{-n}{\omega}}^{(z)}(q)} -h_{\omega}^{(z)}\right\|= \left\|b_n(q,z)\cdot\frac{{\mathcal L}_{{\theta}^{-n}{\omega}}^{z,n}q}{\nu_{\omega}({\mathcal L}_{{\theta}^{-n}{\omega}}^{z,n}q)}-h_{\omega}^{(z)}\right\| $$ $$ \leq\left\|b_n(q,z)\left(\frac{{\mathcal L}_{{\theta}^{-n}{\omega}}^{z,n}q}{\nu_{\omega}({\mathcal L}_{{\theta}^{-n}{\omega}}^{z,n}q)}-\hat h_{\omega}^{(z)}\right) \right\|+ \left\|\left(b_n(q,z)-\frac1{{\alpha}_{\omega}(z)}\right) \hat h_{\omega}^{(z)}\right\|:=I_1+I_2 $$ where we have used the above definition of $h_{\omega}^{(z)}$. Now, by \eqref{Exp Temp} we have $$ \left|(b_n(q,z))^{-1}-{\alpha}_{\omega}(z)\right|= \left|\nu_{\omega}^{(z)}\left(\frac{{\mathcal L}_{{\theta}^{-n}{\omega}}^{z,n}q}{\nu_{\omega}({\mathcal L}_{{\theta}^{-n}{\omega}}^{z,n}q)}\right)-\nu_{\omega}^{(z)}(\hat h_{\omega}^{(z)})\right| $$ $$ \leq\|\nu_{\omega}^{(z)}\| \left\|\frac{{\mathcal L}_{{\theta}^{-n}{\omega}}^{z,n}q}{\nu_{\omega}({\mathcal L}_{{\theta}^{-n}{\omega}}^{z,n}q)}-\hat h_{\omega}^{(z)}\right\|\leq \sqrt 2 M_{\omega} K_{\omega} \tilde\rho_{{\theta}^{-n}{\omega},n}. $$ Hence, if $n$ satisfies that $\sqrt 2 M_{\omega} K_{\omega} \tilde\rho_{{\theta}^{-n}{\omega},n}<\frac12|{\alpha}_{\omega}(z)|$ (which, since $z\to {\alpha}_{\omega}(z)$ is analytic and non-vanishing, is true ${\mathbb P}$-a.s. for every $n$ large enough uniformly in $z$) then \begin{equation}\label{bnq} \left|b_n(q,z)-\frac1{{\alpha}_{\omega}(z)}\right|\leq \frac{2\sqrt 2 M_{\omega} K_{\omega} \tilde\rho_{{\theta}^{-n}{\omega},n}}{|{\alpha}_{\omega}(z)|^2}\leq |{\alpha}_{\omega}(z)|^{-1}. \end{equation} Combining this with the upper bound $\|\hat h_{\omega}^{(z)}\|\leq 2\sqrt 2K_{\omega}$ we conclude that for such $n$'s we have $$ I_2\leq \frac{8M_{\omega} K_{\omega}^2 \tilde\rho_{{\theta}^{-n}{\omega},n}}{|{\alpha}_{\omega}(z)|^2}.
$$ Using now \eqref{Exp Temp} together with \eqref{bnq} and that $|{\alpha}_{\omega}(z)|\leq 2\sqrt 2 M_{\omega} K_\omega$ we see that for $n$ satisfying the above properties we have $$ I_1\leq |b_n(q,z)|\sqrt 2 K_{\omega}\tilde\rho_{{\theta}^{-n}{\omega},n}\leq2|{\alpha}_{\omega}(z)|^{-1} \left(\sqrt 2 K_{\omega}\tilde\rho_{{\theta}^{-n}{\omega},n}\right). $$ By combining the above estimates on $I_1$ and $I_2$ we see that $$ \left\|\frac{{\mathcal L}_{{\theta}^{-n}{\omega}}^{z,n}q}{{\lambda}_{{\theta}^{-n}{\omega},n}(z)\nu_{{\theta}^{-n}{\omega}}^{(z)}(q)} -h_{\omega}^{(z)}\right\|\leq\left(2\sqrt 2 K_{\omega}|{\alpha}_{\omega}(z)|^{-1}+8M_{\omega} K_{\omega}^2|{\alpha}_{\omega}(z)|^{-2}\right)\tilde\rho_{{\theta}^{-n}{\omega},n}:=R({\omega},n,z). $$ Finally, using that $\|\nu_{{\theta}^{-n}{\omega}}^{(z)}\|\leq M_{{\theta}^{-n}{\omega}}$ we conclude that for every $q\in{\mathcal C}_{{\theta}^{-n}{\omega},{\mathbb C}}$ we have $$ \left\|\frac{{\mathcal L}_{{\theta}^{-n}{\omega}}^{z,n}q}{{\lambda}_{{\theta}^{-n}{\omega},n}(z)} -h_{\omega}^{(z)}\right\|\leq M_{{\theta}^{-n}{\omega}} R({\omega},n,z). $$ Now the proof of \eqref{Exponential convergence} is completed by using the regeneration property stated in Theorem \ref{Complex cones Thm} (iii). \end{proof} \subsection{Complex cones for properly expanding maps} In this section we will briefly explain how to prove Theorem \ref{Complex RPF} in the setup of Section \ref{Maps1}. Since this is done similarly to the proof in the setup of Section \ref{Maps2} (using ideas from \cite[Ch.5]{HK}) we will formulate the results concerning complex cones without their proofs. We suppose here that \eqref{phi cond} holds true and that $u_{\omega}$ satisfies $v(u_{\omega})\leq H_{\omega}$ with some $H_{\omega}$ so that $$ \gamma_{{\omega}}^{-{\alpha}}v(u_{\omega})+H_{\omega}\leq \gamma_{{\theta}{\omega}}^{\alpha}-1.
$$ Then, by replacing $\phi_{\omega}$ with $\phi_{{\omega},t}=\phi_{\omega}+tu_{\omega}$ all the results concerning real cones hold true for $\phi_{{\omega},t}$ when $t\in[-1,1]$, with $Z_{\omega}=\gamma_{{\omega}}^{-{\alpha}} v(u_{\omega})+H_{\omega}$ instead of $H_{\omega}$ and with $\|\phi_{\omega}\|_\infty+\|u_{\omega}\|_\infty$ instead of $\|\phi_{\omega}\|_\infty$. We will need the following result, whose proof proceeds essentially as in \cite[Ch. 5]{HK}. \begin{theorem}\label{Complex cones Thm0} (i) The cones ${\mathcal C}_{{\omega},{\mathbb C}}$ and their duals ${\mathcal C}_{{\omega},{\mathbb C}}^{*}$ have bounded aperture: for all $g\in{\mathcal C}_{{\omega},{\mathbb C}}$ and $\nu\in{\mathcal C}_{{\omega},{\mathbb C}}^*$ we have \[ \|g\|\leq 2\sqrt 2K_{\omega} |\nu_{\omega}(g)|\,\,\text{ and }\,\,\|\nu\|\leq M_{\omega}|\nu(\textbf{1})| \] where $M_{\omega}$ and $K_{\omega}$ were defined in Section \ref{Aux1}. (ii) The cones ${\mathcal C}_{{\omega},{\mathbb C}}$ are linearly convex, namely for every $g\not\in{\mathcal C}_{{\omega},{\mathbb C}}$ there exists $\mu\in{\mathcal C}_{{\omega},{\mathbb C}}^*$ such that $\mu(g)=0$. (iii) The cones ${\mathcal C}_{{\omega},{\mathbb C}}$ are reproducing: for every complex-valued function $g\in{\mathcal H}_{\omega}$ there are constants $c_1(g), c_2(g)>0$ and functions $g_1,g_2\in{\mathcal C}_{\omega}\subset {\mathcal C}_{{\omega},{\mathbb C}}$ so that $g=g_1-c_1(g)+i(g_2-c_2(g))$ and $$ \|g_1\|+c_1(g)+\|g_2\|+c_2(g)\leq 2r_{\omega}\|g\| $$ where $r_{\omega}=8\left(1+\frac{2}{\gamma_{\omega}^{\alpha}}\right)\leq 24$. (iv) Let $c_{\omega}$ and $\tilde D({\omega})$ be defined as in Section \ref{Aux1}. 
Then, if ${\delta}_{\omega}(z)\leq 1-e^{-\tilde D({\omega})}$ we have that \[ {\mathcal L}_{\omega}^{(z)}{\mathcal C}_{{\omega},{\mathbb C}}\subset{\mathcal C}_{{\theta}{\omega},{\mathbb C}} \] and the Hilbert diameter of the image with respect to the complex projective metric corresponding to the cone ${\mathcal C}_{{\theta}{\omega},{\mathbb C}}$ does not exceed $7\tilde D({\omega})$. \end{theorem} Relying on Theorem \ref{Complex cones Thm0}, the proof of Theorem \ref{Complex RPF} in the setup of Section \ref{Maps1} proceeds exactly as in Section \ref{Cmplx pf}. \begin{remark} Notice that $\tilde D({\omega})\geq1$ and so $1-e^{-\tilde D({\omega})}\geq 1-e^{-1}$. Hence, if $$ E_{\omega}=c_{\omega}\left(1+\cosh(\tilde D({\omega})/2)\right) $$ is a bounded random variable then there is a constant $r_0>0$ so that the condition ${\delta}_{\omega}(z)\leq 1-e^{-\tilde D({\omega})}$ holds true when $|z|\leq r_0$. Notice also that $\cosh(\tilde D({\omega})/2)\leq e^{\tilde D({\omega})/2}$. \end{remark} \subsection{The ``normalized'' complex operators} In this section we will prove \eqref{Exponential convergence CMPLX} relying on \eqref{Exponential convergence}. Let us consider the operators $L_{\omega}^{(z)}$ given by $$ L_{\omega}^{(z)} g=\frac{{\mathcal L}_{\omega}^{(z)}(g h_{\omega})}{h_{{\theta}{\omega}}{\lambda}_{\omega}}. $$ Then $$ \left\|\frac{L_{\omega}^{z,n} g}{\bar{\lambda}_{{\omega},n}(z)}-\mu^{(z)}_{\omega}(g)\bar h_{\omega}^{(z)}\right\|=\left\|\frac{{\mathcal L}_{\omega}^{z,n} (gh_{\omega})}{{\lambda}_{{\omega},n}(z)h_{{\theta}^n{\omega}}}-\nu_{\omega}^{(z)}(gh_{\omega})\frac{h_{{\theta}^n{\omega}}^{(z)}}{h_{{\theta}^n{\omega}}}\right\| $$ $$ \leq 3\left\|\frac1 {h_{{\theta}^n{\omega}}}\right\| \left \|\frac{{\mathcal L}_{\omega}^{z,n} (gh_{\omega})}{{\lambda}_{{\omega},n}(z)}-\nu_{\omega}^{(z)}(gh_{\omega})h_{{\theta}^n{\omega}}^{(z)}\right\|.
$$ Now, as in Section \ref{SecDec} we have $3\|1/h_{\omega}\|\leq U_{\omega}$ and thus \eqref{Exponential convergence CMPLX} follows from \eqref{Exponential convergence}.
\section{Introduction} Studies of gamma-ray burst (GRB) spectral evolution have recently begun to uncover trends which may constrain the emission mechanisms. Earlier reports on spectral evolution focused solely on the ``hardness'' of bursts, measured either by the ratio between two detector channels or with more physical variables such as the spectral break or peak power energy $E_{\rm{pk}}$ (\cite{ford95}) which is the maximum of $\nu F_{\nu}$, where $\nu$ is photon energy and $F_{\nu}$ is the specific energy flux. Such hardness parameters were typically found to either follow a ``hard-to-soft'' trend (\cite{norr86}), decreasing monotonically while the flux rises and falls, or to ``track'' the flux during GRB pulses (\cite{gole83}). The recent discovery that $E_{\rm{pk}}$ often decays exponentially in bright, long, smooth BATSE GRB pulses \textit{as a function of photon fluence} $\Phi$ provides a new constraint for emission models (\cite{lian96}), and the fact that the decay constant $\Phi_{\rm{0}}$ often remains fixed from pulse to pulse within a single burst hints at a regenerative source rather than a single catastrophic event (\cite{mesz93}). However, that study concentrated only on the evolution of $E_{\rm{pk}}$. 
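Written out explicitly (our restatement of the cited trend, with $E_{\rm{pk}}(0)$ denoting the value at the start of a pulse), the exponential decay law of \cite{lian96} reads

```latex
\begin{displaymath}
  E_{\rm{pk}}(\Phi) \; = \; E_{\rm{pk}}(0)\, e^{-\Phi/\Phi_{\rm{0}}},
\end{displaymath}
```

so that a decay constant $\Phi_{\rm{0}}$ that remains fixed from pulse to pulse is a single number characterising the whole burst.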
To further explore the origin of the spectral break, we begin the analysis of two additional parameters in the spectral evolution, the asymptotic low-energy power law slope $\alpha$ below $E_{\rm{pk}}$ and the high-energy power law slope $\beta$ above $E_{\rm{pk}}$, as they are defined in the Band {\it et al.} (1993) GRB spectral function \begin{equation} \begin{array}{rcll} N_{\rm{E}}(E) & = & A\left(\frac{E}{100 \:\rm{keV}}\right)^{\alpha} \exp\left(-\frac{E}{E_{\rm{0}}}\right), & (\alpha-\beta)E_{\rm{0}} \geq E, \\[2mm] & = & A\left[\frac{(\alpha-\beta)E_{\rm{0}}}{100 \:\rm{keV}}\right]^{\alpha-\beta} \exp(\beta-\alpha) \left(\frac{E}{100 \:\rm{keV}}\right)^{\beta}, & (\alpha-\beta)E_{\rm{0}} \leq E, \end{array} \end{equation} \noindent where A is the amplitude (in $\rm{photons}\:\rm{sec}^{-1}\:\rm{cm}^{-2}\:\rm{keV}^{-1}$) and $E_{0} = E_{\rm{pk}} / (2 + \alpha)$. We note that $\alpha$ is not the maximum low energy slope of the GRB function within the detector range, but is the asymptotic limit of the slope if extrapolated to arbitrarily low energies. The observed values and variability of all three parameters are crucial in evaluating the wide field of proposed models of gamma-ray burst emission. For example, many models of the spectral break require $\alpha$ to stay constant (e.g. self-absorption) or to have negative values (e.g. $-\frac{2}{3}$, \cite{katz94}, \cite{tava96}), even when $E_{\rm{pk}}$ evolves. In our study, we find that there are hints that $\beta$ decreases over the course of some bursts, as suggested by COMPTEL results (\cite{hanl95}), and stays constant in others. In this letter, though, we focus on the evolution of $\alpha$ and save the discussion of $\beta$ for future work (\cite{pree97}). \section{Spectral Evolution Patterns} To determine the evolution of the spectral shape of GRBs, we examine High Energy Resolution data collected from the BATSE Large-Area Detectors (LADs) on board the Compton Gamma-Ray Observatory (\cite{fish89}).
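As a consistency check (our remark), the two branches of the Band {\it et al.} function join smoothly at the break energy: substituting $E=(\alpha-\beta)E_{\rm{0}}$ into the low-energy branch gives

```latex
\begin{displaymath}
  N_{\rm{E}}\bigl((\alpha-\beta)E_{\rm{0}}\bigr)
  = A\left[\frac{(\alpha-\beta)E_{\rm{0}}}{100\:\rm{keV}}\right]^{\alpha}
    e^{-(\alpha-\beta)}
  = A\left[\frac{(\alpha-\beta)E_{\rm{0}}}{100\:\rm{keV}}\right]^{\alpha-\beta}
    e^{\beta-\alpha}
    \left[\frac{(\alpha-\beta)E_{\rm{0}}}{100\:\rm{keV}}\right]^{\beta},
\end{displaymath}
```

which coincides with the high-energy branch at the same energy; the logarithmic slopes $\alpha/E - 1/E_{\rm{0}}$ and $\beta/E$ also agree there, so the parametrisation introduces no artificial kink at the break.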
We select bursts that have either a BATSE 3B catalog (\cite{meeg96}) fluence $> 2 \times 10^{-5}$ erg cm$^{-2}$, peak photon fluxes $\stackrel{>}{\sim}$ 10 photons s$^{-1}$ cm$^{-2}$ on the 1024 ms timescale, or a signal-to-noise ratio (SNR) $> 7.5$ between 25 and 35 keV with the BATSE Spectral Detector (SD). The inclusion of a few other bursts results in a set of 79 bursts. The counts from the LAD most normal to the line of sight of each burst are background-subtracted and then binned into time intervals each with a SNR of $\sim 45$ within the 28 keV to 1800 keV range. Employing a non-linear $\chi^{2}$-minimization algorithm (\cite{bevi69}), we fit the Band {\it et al.} GRB function to each interval and thus obtain the time evolution of the three Band {\it et al.} parameters which define the spectral shape. Figures 1 through 3 show sample BATSE bursts (3B) 910807, 910927, 911031, 920525, and 931126, displayed for illustrative purposes. (See \cite{lian96} for $E_{\rm{pk}}$-fluence diagrams for 911031, 920525, and 931126.) The spectra show that for these bursts, $\alpha$ generally rises and falls with the instantaneous $E_{\rm{pk}}$, though exact correlation between the two parameters is not evident. In Figures 1 and 2, $\beta$ stays relatively constant throughout most of the primary pulse, while $E_{\rm{pk}}$ and $\alpha$ both steadily decrease. To determine if $\alpha$ does indeed evolve in time in a majority of bursts, we fit a zeroth (M=0) and first-order (M=1) polynomial to the $\alpha$ evolution in each burst. Assuming a null hypothesis in which $\alpha$ is constant during a burst and the time-resolved values of $\alpha$ are normally distributed about the mean, we expect the value $\Delta\chi^{2}=\chi^2_{\rm{M=0}}-\chi^2_{\rm{M=1}}$ to be distributed as $\chi^2$ with 1 degree of freedom (\cite{eadi71}). We calculate for each burst the probability Q of randomly drawing a value greater than or equal to $\Delta\chi^{2}$. 
We observe that 67 of the 79 bursts have a Q $\le$ 0.05 (which gives a D=0.8 in a K-S test) and 46 have a Q below our acceptable cutoff of 0.001. We conclude that a majority of bursts in our sample show evidence for at least a first-order trend in $\alpha$. The five sample bursts above suggest that the evolution of $\alpha$ mimics that of $E_{\rm{pk}}$. To see if this occurs in other bursts, we attempt to disprove the null hypothesis that $\alpha$ is uncorrelated with $E_{\rm{pk}}$. To test the degree of correlation between $\alpha$ and $E_{\rm{pk}}$, in each of our 79 bursts we compute the Spearman rank correlation $r_{\rm{s}}$ (\cite{pres92}). For each burst with a positive $r_{\rm{s}}$, we find the probability $P_{+}$ of randomly drawing a value of $r_{\rm{s}}$ that high or higher assuming no correlation exists. For each burst with a negative $r_{\rm{s}}$, we find the probability $P_{\rm{-}}$ of randomly drawing a value of $r_{\rm{s}}$ that low or lower assuming no anti-correlation. Dividing the bursts in this way precludes the inclusion of systematic anti-correlations, which could occur given the negative covariance between $\alpha$ and $E_{\rm{pk}}$ and the observed shape of our $\chi^{2}$ minimum contours. We next calculate the Kolmogorov-Smirnov (K-S) D statistic between the measured distribution of $P_{\rm{+}}$ or $P_{\rm{-}}$ and the distribution one would expect if no correlation or anti-correlation existed. We find D=0.45 for the 47 positively correlated bursts. The likelihood of this value, assuming no intrinsic correlation, is $2\times10^{-8}$. The bursts showing negative correlation, which suffer from the systematics described above, are still consistent (likelihood = 0.04) with a non-correlation hypothesis. From this we conclude that a positive correlation exists between $\alpha$ and $E_{\rm{pk}}$ in at least some subset of bursts.
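The rank-correlation step described above can be sketched as follows (illustrative only: the burst values below are invented, not BATSE data, and ties in the ranks are ignored):

```python
def spearman_r(x, y):
    """Spearman rank correlation r_s: the Pearson correlation of the
    ranks of x and y (valid only when there are no ties)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    m = (n - 1) / 2.0                      # mean of the ranks 0, ..., n-1
    num = sum((a - m) * (b - m) for a, b in zip(rx, ry))
    den = sum((a - m) ** 2 for a in rx)    # same for ry: the ranks are a permutation
    return num / den

# invented time-resolved values for a single "tracking" pulse:
# alpha rises and falls together with E_pk, so the ranks agree exactly
epk   = [120.0, 250.0, 400.0, 310.0, 180.0, 90.0]   # E_pk in keV
alpha = [-1.2, -0.7, -0.1, -0.4, -0.9, -1.4]
r_s = spearman_r(epk, alpha)   # 1.0 for this perfectly tracking example
```

The one-sided probabilities $P_{+}$ and $P_{-}$ then follow from the null distribution of $r_{\rm{s}}$, which we do not reproduce here.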
To determine if this relation exists in ``hard-to-soft'' or ``tracking'' pulses, we select 18 pulses which we determine to be clearly ``hard-to-soft'' and 12 pulses which are clearly ``tracking'' from the $>240$ pulses within our 79 bursts. Pulses are included in the ``hard-to-soft'' category if the maximum $E_{\rm{pk}}$ occurs before the flux peak and is greater than $E_{\rm{pk}}$ at the flux peak by at least $\sigma_{\rm{E_{\rm{pk}}}}$. Pulses are ``tracking'' if the rise and fall of $E_{\rm{pk}}$ coincide with those of the flux to within 1 time bin (typically $\sim \frac{1}{2}$ sec) and if the rise lasts at least 3 time bins. We do not pretend that all pulses fall into one of these two categories, but instead treat them as extreme examples in a continuum of evolutionary patterns. Following the same analysis described above on these smaller populations, we find D=0.46 for ``hard-to-soft'' pulses and D=0.45 for ``tracking'' pulses in cases with positive $r_{\rm{s}}$. The likelihood of these observed values of $r_{\rm{s}}$ assuming no intrinsic correlations is 0.003 for the ``hard-to-soft'' pulses and 0.02 for the ``tracking'' pulses. In contrast, while 4 of the 18 ``hard-to-soft'' pulses and 6 of the 12 ``tracking'' pulses were anti-correlated, the likelihood of these randomly occurring was 0.78 for the ``hard-to-soft'' cases and 0.29 for the ``tracking'' cases, values consistent with the null hypothesis of no anti-correlation. In Figure~4, we compare the cumulative distributions of the 14 ``hard-to-soft'' and 6 ``tracking'' pulses which are positively correlated to that of the 47 positively correlated bursts. We find that both distributions of pulses are similar to the distribution of the bursts, which implies an $E_{\rm{pk}}-\alpha$ correlation. We conclude from this statistical evidence that for ``hard-to-soft'' and, with less confidence, for ``tracking'' pulses the asymptotic low-energy power-law slope $\alpha$ evolves in a manner similar to $E_{\rm{pk}}$.
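The Kolmogorov-Smirnov comparison against the no-correlation null can be sketched in the same spirit (again with made-up probabilities; under the null hypothesis the one-sided probabilities would be uniform on $[0,1]$):

```python
def ks_uniform_D(p_values):
    """K-S D statistic between the empirical distribution of one-sided
    probabilities and the uniform distribution expected under the null
    hypothesis of no correlation."""
    p = sorted(p_values)
    n = len(p)
    # largest deviation between the empirical CDF (a step function) and
    # the uniform CDF F(x) = x, checked on both sides of every step
    return max(max((i + 1) / n - p[i], p[i] - i / n) for i in range(n))

# invented P_+ values for a handful of positively correlated pulses;
# genuine correlations make small probabilities pile up near zero
p_plus = [0.001, 0.004, 0.02, 0.05, 0.30]
D = ks_uniform_D(p_plus)
```

A large D, as in this toy sample, signals that the probabilities are not uniform and hence that the null hypothesis is disfavoured.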
\section{Low-Energy Power Index in the Rise Phase of Pulses} Assuming that $\alpha$ mimics $E_{\rm{pk}}$, it follows that \textit{$\alpha$ decreases monotonically for ``hard-to-soft'' pulses}, whereas it increases during the rise phase of ``tracking'' pulses. We compare the averaged values of $\alpha$ during the rise phase for these two groups and find that \textit{those in ``hard-to-soft'' pulses are significantly higher}. While none of the 12 ``tracking'' pulses has an average $\alpha_{\rm{rise}}~>~0$, 7 of the 18 ``hard-to-soft'' pulses have an average $\alpha_{\rm{rise}} > 0$ (see Figure~5). A K-S test between the two distributions gives a value of D=0.56, implying a probability of 0.014 that these two samples were randomly taken from the same distribution. We next examine $\alpha_{\rm{max}}$, the highest value of $\alpha$ that occurs in our time-resolved spectra. This value serves as a valuable test for GRB emission models. In Figure~6, we provide the distribution of $\alpha_{\rm{max}}$ found in each of our 79 bursts. Only a few bursts examined so far suggest that their maximum $\alpha$ may be $> +1$. As indicated in Figure~6, all of the bursts with $\alpha_{\rm{max}} > +1$ have large statistical uncertainties. The nearly linear decrease of $\alpha$ with respect to time in 3B 910927 suggests that its relatively high $\alpha_{\rm{max}}$ of $1.6~\pm~0.3$, found using data from the LAD most normal to the burst, is not merely a statistical fluctuation (see Figure~1). Further examination reveals, however, that this burst is still consistent with $\alpha \le +1$ for its duration. In addition, jointly fitting the data from the two LADs most normal to the burst reduces $\alpha_{\rm{max}}$ to $1.03~\pm~0.15$. We also note that fitting 3B 910927 with a broken power law instead of the Band {\it et al.} GRB function gives \textit{the same linear decrease of the low-energy power law slope $\gamma_{\rm{1}}$ with respect to time}.
The fit for the broken power law also has reduced-$\chi^{2}$ values comparable to those of the Band {\it et al.} GRB function fit. However, $\gamma_{\rm{1}}~<~0$ throughout the burst, a value of the low-energy slope lower than that found using the Band {\it et al.} GRB function. If the GRB function better represents the underlying physics, then this difference would be expected. The parameter $\gamma_{\rm{1}}$ measures the effective average slope below $E_{\rm{pk}}$, whereas $\alpha$ measures the asymptotic value, allowing for the curvature of the exponential function. \section{Summary and Discussion} We establish that the asymptotic low-energy power law slope, represented by the Band {\it et al.} parameter $\alpha$, evolves with time rather than remaining fixed to its time-integrated value in 58\% of the bursts in our sample. We find strong evidence that a correlation between the parameters $E_{\rm{pk}}$ and $\alpha$ exists in the time-resolved spectra of some BATSE gamma-ray bursts, and with slightly less confidence, we determine that this correlation exists in both ``hard-to-soft'' and ``tracking'' pulses. We also find that in $\sim 40\%$ of the ``hard-to-soft'' pulses, the average value of $\alpha$ during the flux rise phase is $> 0$, while for ``tracking'' pulses the average $\alpha$ is always $\le 0$. For 3B 910927, using data from only the LAD receiving the most counts, we determine a maximum value of $\alpha = 1.6\pm0.3$. However, we cannot yet prove that $\alpha_{\rm{max}}~>~1$ in any burst examined so far due to the broadness of the $\chi^{2}$ minimum. GRB spectral breaks can in principle be caused by synchrotron emission with a low-energy cutoff or self-absorption. However, in the former case, $\alpha$ is always $\leq -\frac{2}{3}$ (\cite{katz94}, \cite{tava96}) with no evolution. Such a low and constant $\alpha$ is inconsistent with many observed BATSE bursts.
For instance, in 3B 910927, fitting the time bin in which $\alpha$ is maximum with an $\alpha$ fixed to $-\frac{2}{3}$ results in a Q of $1.5 \times 10^{-11}$, much lower than the Q=0.35 obtained when $\alpha$ is a free parameter. In the case of self-absorption, $\alpha$ could go as high as +1 (thermal) or +1.5 (nonthermal, power-law) (\cite{rybi79}). But again, in such models, $\alpha$ cannot evolve with time; only $E_{\rm{pk}}$ (which would be interpreted as the self-absorption frequency) can. Hence, these conventional interpretations of the spectral break of GRB continua can be ruled out by our results. Implications of our results for various cosmological scenarios (e.g. \cite{shav96}) remain to be investigated. The spectral breaks can also be caused by multiple Compton scattering (\cite{lian96}). In this case, the decay of $\alpha$ in ``hard-to-soft'' pulses can be interpreted as the Thomson thinning of a Comptonizing plasma (\cite{lian97}) and the initial $\alpha$ can in principle go as high as +2, because in the limit $\tau_{\rm{T}}$ (Thomson depth) $\rightarrow \infty$ one would expect a Wien peak. However, several factors make it difficult to clearly measure an early low-energy power law $\sim$~+2 even if the spectral break is related to a Wien peak. The most obvious problem is that the highest $\tau_{\rm{T}}$ would occur earliest in a ``hard-to-soft'' pulse, when the flux is the lowest, so that fitting a precise spectral model becomes difficult. Another problem is that even if the true GRB spectral break is Wien-like, if one had used a function other than the Band {\it et al.} function (e.g. broken power law) or simply measured the slope within the BATSE range, one could get a slope flatter than +2. This is evident in Figure~6, in which the maximum slope for the same set of bursts appears to only approach +1 while $\alpha_{\rm{max}}$ appears to approach +2.
This is because the exponential curvature depresses the apparent slope relative to the asymptotic power law of the Wien function. Also important is that the Band {\it et al.} GRB function does not take into account the soft X-ray upturn expected from saturated Comptonization of soft photons (\cite{rybi79}, \cite{lian97}). If the lower boundary of the fitting energy window is below the relative minimum in the saturated Comptonization photon spectrum (\cite{pozd83}), any fitted low-energy power law, such as that of the Band {\it et al.} GRB function, will be flatter than the true slope for the Wien peak. Preliminary results show that moving the lower energy cutoff of the fitting region allows one to get a higher $\alpha$. However, the uncertainty in $\alpha$ increases when reducing the size of the fitting window, and thus the higher value of $\alpha$ may be misleading. Evidence for the X-ray upturns in the low-energy spectra has been found by Preece {\it et al.} (1996), who found positive residuals between the BATSE data and their fitted Band {\it et al.} GRB functions in many bursts. However, further analysis of time-resolved, low-energy GRB spectra is needed before a model involving saturated Comptonization can be tested. \acknowledgements AC thanks NASA-MSFC for the Graduate Student Research Program fellowship. This work is supported by NASA grant NAG5-1515.
\section{Introduction} \label{sec:introduction} \subsection{Triangulations, multitriangulations and $0$-$1$-fillings} \sloppypar The systematic study of $0$-$1$-fillings of polyominoes with restricted chain lengths likely originates in an article by Jakob Jonsson~\cite{Jonsson2005}. At first, he was interested in a generalisation of triangulations, where the objects under consideration are maximal sets of diagonals of the $n$-gon, such that at most $k$ diagonals are allowed to cross mutually. Thus, in the case $k=1$ one recovers ordinary triangulations. He realised these objects as fillings of the staircase shaped polyomino with row-lengths $n-1, n-2,\dots,1$ with zeros and ones. The condition that at most $k$ diagonals cross mutually then translates into the condition that the longest north-east chain in the filling has length $k$, see Definition~\ref{dfn:fillings-and-chains}. Instead of studying fillings of the staircase shape only, he went on to consider more general shapes which he called \Dfn{stack} and \Dfn{moon polyominoes}, see Definition~\ref{dfn:moon} and Figure~\ref{fig:moon}. For stack polyominoes he was able to prove that the number of maximal fillings depends only on $k$ and the multiset of heights of the columns, not on the particular shape of the polyomino. He conjectured that this statement holds more generally for moon polyominoes, which was eventually proved by the author~\cite{Rubey2006} using a technique introduced by Christian Krattenthaler~\cite{Krattenthaler2006} based on Sergey Fomin's growth diagrams for the Robinson-Schensted-Knuth correspondence. However, the proof given there is not fully bijective: what one would hope for is a correspondence between fillings of any two moon polyominoes that differ only by a permutation of the columns. This article is a step towards this goal. 
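To make the translation from mutually crossing diagonals to chain lengths concrete, the chain statistic can be sketched as follows (an illustrative brute-force computation of ours, not a construction from the paper, restricted to Ferrers shapes, where the bounding-rectangle condition on a chain reduces to a check on a single corner):

```python
def longest_ne_chain(ones, shape):
    """Length of the longest north-east chain among the cells `ones`
    (1-indexed (row, column) pairs in matrix convention, row 1 on top)
    of a filling of the Ferrers diagram whose weakly decreasing row
    lengths are given by `shape`.  For such top-left justified shapes
    the bounding rectangle of a chain lies inside the diagram exactly
    when its bottom-right corner does, i.e. when the column of the
    top-right element is at most the length of the bottom row."""
    cells = sorted(ones)
    best = 0

    def extend(chain):
        nonlocal best
        best = max(best, len(chain))
        bottom_row = chain[0][0]
        last = chain[-1]
        for c in cells:
            if (c[0] < last[0] and c[1] > last[1]
                    and c[1] <= shape[bottom_row - 1]):
                extend(chain + [c])

    for c in cells:
        extend([c])
    return best

# the three anti-diagonal cells of the staircase (3, 2, 1) pairwise run
# north-east, but every required rectangle sticks out of the shape, so
# they only give chains of length 1
k = longest_ne_chain([(3, 1), (2, 2), (1, 3)], (3, 2, 1))
```

This enumeration is exponential and only meant to make the definition concrete; in the staircase correspondence, a filling whose longest north-east chain has length $k$ encodes a set of diagonals with at most $k$ mutually crossing members.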
\subsection{RC-graphs and the subword complex} RC-graphs (for \lq reduced word compatible sequence graphs\rq, see~\cite{MR1281474}, also known as \lq pipe dreams\rq\ see~\cite{MR2180402}) were introduced by Sergey Fomin and Anatol Kirillov~\cite{MR1394950} to prove various properties of Schubert polynomials. Namely, for a given permutation $w$, the Schubert polynomial $\mathfrak S_w$ can be regarded as the generating function of rc-graphs, see Remark~\ref{rmk:Schubert}. A different point of view is to consider them as facets of a certain simplicial complex. Let $w_0$ be the long permutation $n\cdots21$, and consider its reduced factorisation $$Q=s_{n-1}\cdots s_2 s_1\; s_{n-1}\cdots s_3 s_2\; \cdots\cdots\; s_{n-1}s_{n-2}\; s_{n-1}.$$ Then the subword complex associated to $Q$ and $w$ introduced by Allen Knutson and Ezra Miller~\cite{MR2180402,MR2047852} has as facets those subwords of $Q$ that are reduced factorisations of $w$. Subword complexes enjoy beautiful topological properties, which are transferred by the main theorem of this article to the simplicial complex of $0$-$1$-fillings, as observed by Christian Stump~\cite{Stump2010}, see also the article by Luis Serrano and Christian Stump~\cite{SerranoStump2010}. The intimate connection between maximal fillings and rc-graphs demonstrated by the main theorem of this article, Theorem~\ref{thm:filling-dream}, \emph{should} not have come as a surprise. Indeed, Sergey Fomin and Anatol Kirillov \cite{MR1471891} established a connection between reduced words and reverse plane partitions already thirteen years ago, which is not much less than the case of Ferrers shapes in Theorem~\ref{thm:ne-se}. They even pointed towards the possibility of a bijective proof using the Edelman-Greene correspondence. More recently, the connection between Schubert polynomials and triangulations was noticed by Alexander Woo~\cite{Woo2004}. 
Vincent Pilaud and Michel Pocchiola~\cite{PilaudPocchiola2009} discovered rc-graphs (under the name \lq beam arrangements\rq) more generally for multitriangulations, however, they were unaware of the theory of Schubert polynomials. In particular, Theorem 3.18 of Vincent Pilaud's thesis~\cite{Pilaud2010} (see also Theorem~21 of~\cite{PilaudPocchiola2009}) is a variant of our Theorem~\ref{thm:filling-dream} for multitriangulations. Finally, Christian Stump and the author of the present article became aware of an article by Vincent Pilaud and Francisco Santos~\cite{MR2471876} that describes the structure of multitriangulations in terms of so-called $k$-stars (introduced by Harold Coxeter). We then decided to translate this concept to the language of fillings, and discovered pipe dreams yet again. \section{Definitions} \label{sec:definitions} \subsection{Polyominoes} \label{sec:polyominoes} \begin{figure}[h] \begin{equation*} \begin{array}{ccc} \young(:::\hfil,% ::\hfil\hfil\hfil,% ::\hfil\hfil\hfil\hfil,% \hfil\hfil\hfil\hfil\hfil\hfil\hfil,% \hfil\hfil\hfil\hfil\hfil\hfil\hfil,% :\hfil\hfil\hfil\hfil\hfil\hfil,% :::\hfil\hfil) & \young(\hfil\hfil\hfil\hfil\hfil\hfil\hfil,% \hfil\hfil\hfil\hfil\hfil\hfil\hfil,% ::\hfil\hfil\hfil\hfil,% ::\hfil\hfil\hfil,% :::\hfil) & \young(\hfil\hfil\hfil\hfil\hfil\hfil\hfil,% \hfil\hfil\hfil\hfil\hfil\hfil\hfil,% \hfil\hfil\hfil\hfil,% \hfil\hfil\hfil,% \hfil) \end{array} \end{equation*} \caption{a moon-polyomino, a stack-polyomino and a Ferrers diagram} \label{fig:moon} \end{figure} \begin{dfn}\label{dfn:polyominoes} A \Dfn{polyomino} is a finite subset of the quarter plane $\mathbb N^2$, where we regard an element of $\mathbb N^2$ as a cell. A \Dfn{column} of a polyomino is the set of cells along a vertical line, a \Dfn{row} is the set of cells along a horizontal line. We are using \lq English\rq\ (or matrix) conventions for the indexing of the rows and columns of polyominoes: the top row and the left-most column have index $1$. 
The polyomino is \Dfn{convex}, if for any two cells in a column (resp.\ row), the elements of $\mathbb N^2$ in between are also cells of the polyomino. It is \Dfn{intersection-free}, if any two columns are \Dfn{comparable}, {\it i.e.}, the set of row coordinates of cells in one column is contained in the set of row coordinates of cells in the other. Equivalently, it is intersection-free, if any two rows are comparable. For example, the polyomino \begin{equation*} \young(::\hfil,% ::\hfil\hfil\hfil,% \hfil\hfil\hfil\hfil\hfil,% \hfil\hfil\hfil\hfil,% ::\hfil) \end{equation*} is convex, but not intersection-free, since the first and the last columns are incomparable. \end{dfn} \begin{dfn}\label{dfn:moon} A \Dfn{moon polyomino} (or L-convex polyomino) is a convex, intersection-free polyomino. Equivalently we can require that any two cells of the polyomino can be connected by a path consisting of neighbouring cells in the polyomino, that changes direction at most once. A \Dfn{stack polyomino} is a moon-polyomino where all columns start at the same level. A \Dfn{Ferrers diagram} is a stack-polyomino with weakly decreasing row widths $\lambda_1,\lambda_2,\dots,\lambda_n$, reading rows from top to bottom. Because a moon-polyomino is intersection free, the set of rows of maximal length in a moon polyomino must be consecutive. We call the set of rows including these and the rows above the \Dfn{top half} of the polyomino. Similarly, the set of columns of maximal length, and all columns to the right of these, is the \Dfn{right half} of the polyomino. The intersection of the top and the right half is the \Dfn{top right quarter} of $M$. \end{dfn} \subsection{Fillings and Chains} \label{sec:fillings-chains} \begin{dfn}\label{dfn:fillings-and-chains} A \Dfn{$0$-$1$-filling} of a polyomino is an assignment of numbers $0$ and $1$ to the cells of the polyomino. Cells containing $0$ are also called \Dfn{empty}.
A \Dfn{north-east chain} is a sequence of non-zero entries in a filling such that the smallest rectangle containing all its elements is completely contained in the moon polyomino and such that for any two of its elements one is strictly to the right and strictly above the other. \end{dfn} As it turns out, it is more convenient to draw dots instead of ones and leave cells filled with zeros empty. Two examples of (rather special) fillings of a moon polyomino are depicted in Figure~\ref{fig:top-bot}. In both examples the length of the longest north-east chain is $2$. \begin{dfn} $\Set F_{01}^{ne}(M, k)$ is the set of $0$-$1$-fillings of the moon polyomino $M$ whose longest north-east chain has length $k$ and that are \Dfn{maximal}, {\it i.e.}, assigning an empty cell a $1$ would create a north-east chain of length $k+1$. For a vector $\Mat r$ of integers, $\Set F_{01}^{ne}(M, k, \Mat r)$ is the subset of $\Set F_{01}^{ne}(M, k)$ consisting of those fillings that have exactly $\Mat r_i$ zero entries in row $i$. For any filling in $\Set F_{01}^{ne}(M, k)$, and an empty cell $\epsilon$, there must be a chain $C$ such that replacing the $0$ with $1$ in $\epsilon$, and adding $\epsilon$ to $C$, would make $C$ into a $(k+1)$-chain. In this situation, we say that $C$ is a \Dfn{maximal chain for} $\epsilon$. \end{dfn} For example, when $M$ is the moon polyomino {\tiny$\young(:\hfil\hfil,\hfil\hfil\hfil\hfil,\hfil\hfil\hfil\hfil,:\hfil\hfil)$}, the set $\Set F_{01}^{ne}(M, 1)$ consists of ten fillings, as can be inferred from Figure~\ref{fig:poset}. \begin{rmk} Note that extending the first $k$ rows and columns of a Ferrers diagram does not affect the set $\Set F_{01}^{ne}$, which is why we choose to fix the number of zero entries instead of entries equal to $1$, although the latter might seem more natural at first glance. 
\end{rmk} \begin{figure} \centering \begin{tikzpicture}[scale=0.6] \node (v1) at (2.5cm, 5.0cm) [draw=none] {$1$}; \node (v7) at (0.4952cm,4.0097cm) [draw=none] {$7$}; \node (v6) at (0.0cm,1.7845cm) [draw=none] {$6$}; \node (v5) at (1.3874cm,0.0cm) [draw=none] {$5$}; \node (v4) at (3.6126cm,0.0cm) [draw=none] {$4$}; \node (v3) at (5.0cm,1.7845cm) [draw=none] {$3$}; \node (v2) at (4.5048cm,4.0097cm) [draw=none] {$2$}; \draw [thick,grey] (v1) to (v2); \draw [thick,grey] (v1) to (v3); \draw [thick] (v1) to (v5); \draw [thick,grey] (v1) to (v6); \draw [thick,grey] (v1) to (v7); \draw [thick,grey] (v2) to (v3); \draw [thick,grey] (v2) to (v4); \draw [thick] (v2) to (v5); \draw [thick,grey] (v2) to (v7); \draw [thick,grey] (v3) to (v4); \draw [thick,grey] (v3) to (v5); \draw [thick] (v3) to (v6); \draw [thick] (v3) to (v7); \draw [thick,grey] (v4) to (v5); \draw [thick,grey] (v4) to (v6); \draw [thick,grey] (v5) to (v6); \draw [thick,grey] (v5) to (v7); \draw [thick,grey] (v6) to (v7); \content{0.78}{(6.7,5.5)}{% 0/0/$1$,1/0/$2$,2/0/$3$,3/0/$4$,4/0/$5$,5/0/$6$, -1/1/$7$,-1/2/$6$,-1/3/$5$,-1/4/$4$,-1/5/$3$,-1/6/$2$} \node at (9,2.4) {\young(\mbox{$\color{grey}\bullet$}\g\mbox{$\bullet$}\hfil\mbox{$\color{grey}\bullet$}\g,\mbox{$\color{grey}\bullet$}\hfil\mbox{$\bullet$}\mbox{$\color{grey}\bullet$}\g,\mbox{$\bullet$}\x\mbox{$\color{grey}\bullet$}\g,\hfil\mbox{$\color{grey}\bullet$}\g,\mbox{$\color{grey}\bullet$}\g,\mbox{$\color{grey}\bullet$})}; \end{tikzpicture} \begin{tikzpicture}[scale=0.95] \node (v1) at (1cm, 6cm) [draw=none, grey] {$\bullet$}; \node (v2) at (1.5cm, 6cm) [draw=none, grey] {$\bullet$}; \node (v3) at (2cm, 6cm) [draw=none] {$\bullet$}; \node (v4) at (2.5cm, 6cm) [draw=none] {$\bullet$}; \node (v5) at (3cm, 6cm) [draw=none] {$\bullet$}; \node (v6) at (3cm, 5.5cm) [draw=none] {$\bullet$}; \node (v7) at (3.5cm, 5.5cm) [draw=none] {$\bullet$}; \node (v8) at (3.5cm, 5cm) [draw=none] {$\bullet$}; \node (v9) at (3.5cm, 4.5cm) [draw=none] {$\bullet$}; \node 
(v10) at (3.5cm, 4cm) [draw=none, grey] {$\bullet$}; \node (v11) at (3.5cm, 3.5cm) [draw=none, grey] {$\bullet$}; \node (w1) at (1.5cm, 5.5cm) [draw=none] {$\bullet$}; \node (w2) at (2cm, 5.5cm) [draw=none] {$\bullet$}; \node (w3) at (2cm, 5cm) [draw=none] {$\bullet$}; \node (w4) at (2.5cm, 5cm) [draw=none] {$\bullet$}; \node (w5) at (3cm, 5cm) [draw=none] {$\bullet$}; \node (w6) at (3cm, 4.5cm) [draw=none] {$\bullet$}; \node (w7) at (3cm, 4cm) [draw=none] {$\bullet$}; \draw [thick, grey] (1cm,6cm) -- (2cm,6cm); \draw [thick] (2cm,6cm) -- (3cm,6cm) -- (3cm,5.5cm) -- (3.5cm,5.5cm) -- (3.5cm,4.5cm); \draw [thick] (1.5cm,5.5cm) -- (2cm,5.5cm) -- (2cm,5cm) -- (3cm,5cm) -- (3cm,4cm); \draw [thick, grey] (3.5cm,4.5cm) -- (3.5cm,3.5cm); \node at (6.65, 4.8) {\young(\mbox{$\color{grey}\bullet$}\g\mbox{$\bullet$}\x\mbox{$\bullet$}\hfil,:\mbox{$\bullet$}\x\hfil\mbox{$\bullet$}\x,::\mbox{$\bullet$}\x\mbox{$\bullet$}\x,:::\hfil\mbox{$\bullet$}\x,::::\mbox{$\bullet$}\mbox{$\color{grey}\bullet$},:::::\mbox{$\color{grey}\bullet$})}; \end{tikzpicture} \caption{a $2$-triangulation with corresponding filling of the staircase $\lambda_0$ and a fan of two Dyck paths with corresponding filling of the reverse staircase $\lambda_0^{rev}$.} \label{fig:triangulation-Dyck} \end{figure} \begin{rmk} For the staircase shape $\lambda_0$ with $n-1$ rows the set $\Set F_{01}^{ne}(\lambda_0, k)$ has a particularly beautiful interpretation, namely as the set of $k$-triangulations of the $n$-gon. More precisely, label the vertices of the $n$-gon clockwise from $1$ to $n$, and identify a cell of the shape in row $i$ and column $j$ with the pair $(n-i+1, j)$ of vertices. Thus, the entries in the filling equal to $1$ define a set of diagonals of the $n$-gon. It is not hard to check that a north-east chain of length $k$ in the filling corresponds to a set of $k$ mutually crossing diagonals in the $n$-gon. 
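This correspondence is easy to test mechanically. The following brute-force sketch (the helper names and the $(\text{row}, \text{column})$ coordinates, numbered from the top left, are our own convention, not notation from the paper) computes the length of the longest north-east chain of a filling, including the condition that the bounding rectangle of a chain lies inside the shape:

```python
from itertools import combinations

def rect_in_shape(c1, c2, shape):
    # smallest rectangle containing the two cells, tested against the shape
    (r1, k1), (r2, k2) = c1, c2
    return all((r, k) in shape
               for r in range(min(r1, r2), max(r1, r2) + 1)
               for k in range(min(k1, k2), max(k1, k2) + 1))

def is_ne_chain(cells, shape):
    # pairwise, one cell strictly above (smaller row index) and strictly
    # to the right of the other; the bounding rectangle of the chain is
    # the rectangle spanned by its two extreme cells
    cells = sorted(cells)
    if any(not (r1 < r2 and k1 > k2)
           for (r1, k1), (r2, k2) in combinations(cells, 2)):
        return False
    return len(cells) < 2 or rect_in_shape(cells[0], cells[-1], shape)

def longest_ne_chain(dots, shape):
    return max((len(c) for m in range(1, len(dots) + 1)
                for c in combinations(dots, m)
                if is_ne_chain(c, shape)), default=0)

# staircase for the 5-gon: rows of lengths 4, 3, 2, 1
shape = {(i, j) for i in range(1, 5) for j in range(1, 6 - i)}
dots = [(3, 1), (2, 2), (1, 3)]       # the diagonals {1,3}, {2,4}, {3,5}
print(longest_ne_chain(dots, shape))  # -> 2
```

On the staircase of the $5$-gon the three dots correspond to the diagonals $\{1,3\}$, $\{2,4\}$ and $\{3,5\}$; although each dot lies strictly above and to the right of the previous one, the rectangle condition excludes the non-crossing pair $\{1,3\}$, $\{3,5\}$, so the longest chain has length $2$.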
Maximal fillings of the reverse staircase shape $\lambda_0^{rev}$ for a given $k$ are in bijection with fans of $k$ Dyck paths. An illustration of both correspondences is given in Figure~\ref{fig:triangulation-Dyck}. These correspondences were Jakob Jonsson's~\cite{Jonsson2005} starting point to prove (in a quite non-bijective fashion) that there are as many $k$-triangulations of the $n$-gon as fans of $k$ non-intersecting Dyck paths with $n-2k$ up steps each. Luis Serrano and Christian Stump~\cite{SerranoStump2010} provided the first completely bijective proof of this fact, which we generalise in Section~\ref{sec:Edelman-Greene}. Remarkably, Alex Woo~\cite{Woo2004} had used the same methods much earlier for the case of triangulations and Dyck paths, {\it i.e.}, $k=1$. \end{rmk} \subsection{Pipe dreams} In this section we collect some results about pipe dreams and rc-graphs. All of these statements can be found in~\cite{MR1281474} together with precise references. \begin{figure} \centering \begin{tikzpicture} \tpipedream{0.475}{(1.95, 0.6875)}{% 0/0/black/black,1/0/black/black,3/0/black/black,4/0/black/black,5/0/black/black,6/0/black/white,% 0/1/black/black,1/1/black/black,4/1/black/black,5/1/black/white,% 1/2/black/black,3/2/black/black,4/2/black/white,% 0/3/black/black,3/3/black/white,% 0/4/black/black,1/4/black/black,2/4/black/white,% 0/5/black/black,1/5/black/white,% 0/6/black/white% }% \cpipedream{0.475}{(1.95, 0.6875)}{% 2/0/black/black,2/1/black/black,3/1/black/black,0/2/black/black,% 2/2/black/black,1/3/black/black,2/3/black/black}% \content{0.475}{(1.95, 1.6375)}{% 0/0/$1$,1/0/$2$,2/0/$3$,3/0/$4$,4/0/$5$,5/0/$6$,6/0/$7$, -1/1/$1$,-1/2/$2$,-1/3/$6$,-1/4/$4$,-1/5/$7$,-1/6/$5$,-1/7/$3$}% % \content{0.475}{(5.95, 1.1625)}{% 0/0/\mbox{$\bullet$},1/0/\mbox{$\bullet$},3/0/\mbox{$\bullet$},4/0/\mbox{$\bullet$},5/0/\mbox{$\bullet$},6/0/\mbox{$\bullet$},% 0/1/\mbox{$\bullet$},1/1/\mbox{$\bullet$},4/1/\mbox{$\bullet$},5/1/\mbox{$\bullet$},%
1/2/\mbox{$\bullet$},3/2/\mbox{$\bullet$},4/2/\mbox{$\bullet$},% 0/3/\mbox{$\bullet$},3/3/\mbox{$\bullet$},% 0/4/\mbox{$\bullet$},1/4/\mbox{$\bullet$},2/4/\mbox{$\bullet$},% 0/5/\mbox{$\bullet$},1/5/\mbox{$\bullet$},% 0/6/\mbox{$\bullet$},% 2/0/+,2/1/+,3/1/+,0/2/+,% 2/2/+,1/3/+,2/3/+}% \content{0.475}{(5.95, 1.6375)}{% 0/0/$1$,1/0/$2$,2/0/$3$,3/0/$4$,4/0/$5$,5/0/$6$,6/0/$7$, -1/1/$1$,-1/2/$2$,-1/3/$6$,-1/4/$4$,-1/5/$7$,-1/6/$5$,-1/7/$3$}% % \content{0.475}{(9.95, 1.1625)}{% 0/0/\mbox{$\bullet$},1/0/\mbox{$\bullet$},3/0/\mbox{$\bullet$},4/0/\mbox{$\bullet$},5/0/\mbox{$\bullet$},6/0/\mbox{$\bullet$},% 0/1/\mbox{$\bullet$},1/1/\mbox{$\bullet$},4/1/\mbox{$\bullet$},5/1/\mbox{$\bullet$},% 1/2/\mbox{$\bullet$},3/2/\mbox{$\bullet$},4/2/\mbox{$\bullet$},% 0/3/\mbox{$\bullet$},3/3/\mbox{$\bullet$},% 0/4/\mbox{$\bullet$},1/4/\mbox{$\bullet$},2/4/\mbox{$\bullet$},% 0/5/\mbox{$\bullet$},1/5/\mbox{$\bullet$},% 0/6/\mbox{$\bullet$},% 2/0/3,2/1/4,3/1/5,0/2/3,% 2/2/5,1/3/5,2/3/6}% \content{0.475}{(9.95, 1.6375)}{% 0/0/$1$,1/0/$2$,2/0/$3$,3/0/$4$,4/0/$5$,5/0/$6$,6/0/$7$, -1/1/$1$,-1/2/$2$,-1/3/$6$,-1/4/$4$,-1/5/$7$,-1/6/$5$,-1/7/$3$} \end{tikzpicture} \caption{the reduced pipe dream associated to the reduced factorisation $s_3 s_5 s_4 s_5 s_3 s_6 s_5$ of $1,2,6,4,7,5,3$.} \label{fig:dreams} \end{figure} \begin{dfn}\label{dfn:pipe} A \Dfn{pipe dream} for a permutation $w$ is a filling of the quarter plane $\mathbb N^2$, regarding each element of $\mathbb N^2$ as a cell, with \Dfn{elbow joints} $\textelbow$ and a finite number of \Dfn{crosses} $\textcross$, such that a pipe entering from above in column $i$ exits to the left from row $w^{-1}(i)$. A pipe dream is \Dfn{reduced} if each pair of pipes crosses at most once; it is then also called an \Dfn{rc-graph}. $\Set{RC}(w)$ is the set of reduced pipe dreams for $w$, and, for a vector $\Mat r$ of integers, $\Set{RC}(w, \Mat r)$ is the subset of $\Set{RC}(w)$ having precisely $\Mat r_i$ crosses in row $i$.
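To make the definition concrete, here is a small sketch (our own helper names; crosses are given as $1$-indexed $(\text{row}, \text{column})$ pairs) that recovers the reduced word and the permutation of an rc-graph from its set of crosses, following the reading rule explained in the remark below the definition:

```python
def word_from_crosses(crosses):
    # a cross in row i, column j contributes the transposition s_{i+j-1};
    # rows are read top to bottom, each row from right to left
    word = []
    for i in sorted({r for r, c in crosses}):
        for j in sorted((c for r, c in crosses if r == i), reverse=True):
            word.append(i + j - 1)
    return word

def permutation_of(word, n):
    # multiply out the elementary transpositions s_i from left to right
    w = list(range(1, n + 1))
    for s in word:
        w[s - 1], w[s] = w[s], w[s - 1]
    return w

# the crosses of the rc-graph in Figure "fig:dreams"
crosses = [(1, 3), (2, 3), (2, 4), (3, 1), (3, 3), (4, 2), (4, 3)]
print(word_from_crosses(crosses))                     # -> [3, 5, 4, 5, 3, 6, 5]
print(permutation_of(word_from_crosses(crosses), 7))  # -> [1, 2, 6, 4, 7, 5, 3]
```

For this set of crosses the sketch reproduces the reduced factorisation $s_3 s_5 s_4 s_5 s_3 s_6 s_5$ and the permutation $1,2,6,4,7,5,3$ from the caption of the figure.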
\end{dfn} Usually it will be more convenient to draw dots instead of elbow joints and sometimes to omit crosses. We will do so without further notice. \begin{rmk} We can associate a reduced factorisation of $w$ to any pipe dream in $\Set{RC}(w)$ as follows: replace each cross appearing in row $i$ and column $j$ of the pipe dream with the elementary transposition $(i+j-1, i+j)$. Then the reduced factorisation of $w$ is given by the sequence of transpositions obtained by reading each row of the pipe dream from right to left, and the rows from top to bottom. An example can be found in Figure~\ref{fig:dreams}, where we write $s_i$ for the elementary transposition $(i,i+1)$. \end{rmk} \begin{rmk}\label{rmk:Schubert} Using reduced pipe dreams, it is possible to define the Schubert polynomial $\mathfrak S_w$ for the permutation $w$ in a very concrete way. For a reduced pipe dream $D\in\Set{RC}(w)$, define $x^D=\prod_{(i,j)\in D} x_i$, where the product runs over all crosses in the pipe dream. Then the Schubert polynomial is just the generating function for pipe dreams: \begin{equation*} \mathfrak S_w = \sum_{D\in\Set{RC}(w)} x^D. \end{equation*} This definition of Schubert polynomials, and their evaluation by Sergey Fomin and Anatol Kirillov~\cite{MR1471891}, was used by Christian Stump~\cite{Stump2010} to give a simple proof of the product formula for the number of $k$-triangulations of the $n$-gon: \begin{equation*} \prod_{1\leq i,j<n-2k} \frac{i+j+2k}{i+j}. \end{equation*} \end{rmk} We now define an operation on pipe dreams which was introduced in a slightly less general form by Nantel Bergeron and Sara Billey~\cite{MR1281474}. It will be the main tool in the proof of Theorem~\ref{thm:filling-dream}. \begin{dfn} Let $D\in\Set{RC}(w)$ be a pipe dream.
Then a \Dfn{chute move} is a modification of $D$ of the following form: \begin{equation*} \begin{array}{@{}c@{}}\\[-5ex] \begin{array}{@{}r|c|c|c|c|c|l@{}} \multicolumn{5}{c}{}&\multicolumn{1}{c}{ \phantom{+}}& \multicolumn{1}{c}{\begin{array}{@{}c@{}}\\{.\hspace{1pt}\raisebox{2pt}{.}\hspace{1pt}\raisebox{4pt}{.}}\end{array}} \\\cline{2-6} &\mbox{$\bullet$}&+&\cdots&+&+\\\cline{2-3}\cline{5-6} &+ &+&\cdots&+&+\\\cline{2-3}\cline{5-6} &\multicolumn{5}{c|}{\vdots\hfill\vdots\hfill\vdots\hfill}&\\\cline{2-3}\cline{5-6} &+ &+&\cdots&+&+\\\cline{2-3}\cline{5-6} &\mbox{$\bullet$}&+&\cdots&+&\mbox{$\bullet$}\\\cline{2-6} \multicolumn{1}{c}{\begin{array}{@{}c@{}}{.\hspace{1pt}\raisebox{2pt}{.}\hspace{1pt}\raisebox{4pt}{.}}\\ \\ \end{array}}& \multicolumn{1}{c}{\phantom{+}} \end{array} \quad\stackrel{\text{chute}}\rightsquigarrow\quad \begin{array}{@{}r|c|c|c|c|c|l@{}} \multicolumn{5}{c}{}&\multicolumn{1}{c}{ \phantom{+}}& \multicolumn{1}{c}{\begin{array}{@{}c@{}}\\{.\hspace{1pt}\raisebox{2pt}{.}\hspace{1pt}\raisebox{4pt}{.}}\end{array}} \\\cline{2-6} &\mbox{$\bullet$}&+&\cdots&+&\mbox{$\bullet$}\\\cline{2-3}\cline{5-6} &+ &+&\cdots&+&+\\\cline{2-3}\cline{5-6} &\multicolumn{5}{c|}{\vdots\hfill\vdots\hfill\vdots\hfill}&\\\cline{2-3}\cline{5-6} &+ &+&\cdots&+&+\\\cline{2-3}\cline{5-6} &+ &+&\cdots&+&\mbox{$\bullet$}\\\cline{2-6} \multicolumn{1}{c}{\begin{array}{@{}c@{}}{.\hspace{1pt}\raisebox{2pt}{.}\hspace{1pt}\raisebox{4pt}{.}}\\ \\ \end{array}}& \multicolumn{1}{c}{\phantom{+}} \end{array} \\[-3ex] \end{array} \end{equation*} More formally, a \Dfn{chutable rectangle} is a rectangular region $r$ inside a pipe dream $D$ with at least two columns and two rows such that all but the following three locations of $r$ are crosses: the north-west, south-west, and south-east corners. Applying a \Dfn{chute move} to $D$ is accomplished by placing a \textcross\ in the south-west corner of a chutable rectangle $r$ and removing the \textcross\ from the north-east corner of $r$. 
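The formal description translates directly into code. The following sketch (function names and the set-of-crosses representation are our own; following the pictorial description, the three distinguished corners are required to be elbows) tests a rectangle for chutability and applies the move:

```python
def is_chutable(crosses, top, bot, left, right):
    # a chutable rectangle needs at least two rows and two columns
    if bot <= top or right <= left:
        return False
    elbows = {(top, left), (bot, left), (bot, right)}  # NW, SW, SE corners
    for r in range(top, bot + 1):
        for c in range(left, right + 1):
            if ((r, c) in crosses) == ((r, c) in elbows):
                return False  # the corners must be elbows, the rest crosses
    return True

def chute(crosses, top, bot, left, right):
    # move the cross in the north-east corner to the south-west corner
    assert is_chutable(crosses, top, bot, left, right)
    return (crosses - {(top, right)}) | {(bot, left)}

# a three-rowed example on a 3x3 rectangle: all cells are crosses except
# the north-west, south-west and south-east corners
crosses = {(1, 2), (1, 3), (2, 1), (2, 2), (2, 3), (3, 2)}
print(is_chutable(crosses, 1, 3, 1, 3))  # -> True
print(sorted(chute(crosses, 1, 3, 1, 3)))
```

The example is the three-rowed situation from the picture, collapsed to a $3\times 3$ rectangle: the cross in the north-east corner is removed and a cross appears in the south-west corner.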
We call the inverse operation \Dfn{inverse chute move}. \end{dfn} The following lemma was given by Nantel Bergeron and Sara Billey~\cite[Lemma~3.5]{MR1281474} for two-rowed chute moves; the proof is valid for our generalised chute moves without modification: \begin{lem}\label{lem:chute-closure}% The set $\Set{RC}(w)$ of reduced pipe dreams for~$w$ is closed under chute moves. \end{lem} \begin{proof} The pictorial description of chute moves in terms of pipes immediately shows that the permutation associated to the pipe dream remains unchanged. For example, here is the picture associated with a three-rowed chute move: \begin{equation*} \begin{tikzpicture}[scale=0.88] \tpipedream{0.5}{(0,0)}{% 0/0/black/black,% 0/2/black/black,7/2/black/black}% \cpipedream{0.5}{(0,0)}{% 1/0/grey/grey,2/0/grey/grey,3/0/grey/grey,4/0/grey/grey,5/0/grey/grey,6/0/grey/grey,7/0/grey/grey,% 0/1/grey/grey,1/1/grey/grey,2/1/grey/grey,3/1/grey/grey,4/1/grey/grey,5/1/grey/grey,6/1/grey/grey,7/1/grey/grey,% 1/2/grey/grey,2/2/grey/grey,3/2/grey/grey,4/2/grey/grey,5/2/grey/grey,6/2/grey/grey}% \end{tikzpicture} \quad\raisebox{0.5cm}{$\stackrel{\text{chute}}\rightsquigarrow$}\quad \begin{tikzpicture}[scale=0.88] \tpipedream{0.5}{(0,0)}{% 0/0/black/black,7/0/black/black,% 7/2/black/black}% \cpipedream{0.5}{(0,0)}{% 1/0/grey/grey,2/0/grey/grey,3/0/grey/grey,4/0/grey/grey,5/0/grey/grey,6/0/grey/grey,% 0/1/grey/grey,1/1/grey/grey,2/1/grey/grey,3/1/grey/grey,4/1/grey/grey,5/1/grey/grey,6/1/grey/grey,7/1/grey/grey,% 0/2/grey/grey,1/2/grey/grey,2/2/grey/grey,3/2/grey/grey,4/2/grey/grey,5/2/grey/grey,6/2/grey/grey}% \end{tikzpicture} \end{equation*} \end{proof} \begin{rmk} It follows that chute moves define a partial order on $\Set{RC}(w)$, where $D$ is covered by $E$ if there is a chute move transforming $E$ into $D$. Nantel Bergeron and Sara Billey restricted their attention to two-rowed chute moves.
For this case, their main theorem states that the poset defined by chute moves has a unique maximal element, namely $$ D_{top}(w)=\left\{(c,j): c\leq \#\{i: i < w^{-1}_j, w_i>j\}\right\}. $$ It is easy to see that, considering general chute moves, the poset also has a unique minimal element, namely $$ D_{bot}(w)=\left\{(i,c): c\leq \#\{j: j > i, w_j < w_i\}\right\}. $$ In the next section we will show a statement similar in spirit to the main theorem of Nantel Bergeron and Sara Billey for the more general chute moves defined above. \end{rmk} \begin{figure} \begin{center} \small \setlength{\arraycolsep}{0.6ex} \def\lr#1{\multicolumn{1}{|c|}{\raisebox{-.3ex}{$#1$}}} \def\lrg#1{\multicolumn{1}{|c|}{\raisebox{-.3ex}{\cellcolor[gray]{0.7}$#1$}}} \scalebox{0.4}{ \begin{tikzpicture}[>=latex,line join=bevel,] \node (--bullet+--bullet++--bullet++--bullet+--bullet+--bullet++--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet) at (454bp,723bp) [draw,draw=none] {${\raisebox{-.6ex}{$\begin{array}[b]{cccccc}\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{}&\lr{\bullet}&\lr{}&\lr{\bullet}\\\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{}&\lr{\bullet}&\lr{\bullet}\\\hhline{-----}\lr{}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{----}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{---}\lr{}&\lr{\bullet}\\\hhline{--}\lr{\bullet}\\\hhline{-}\end{array}$}}$}; \node (--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+++--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet) at (540bp,609bp) [draw,draw=none]
{${\raisebox{-.6ex}{$\begin{array}[b]{cccccc}\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{}&\lr{}&\lr{\bullet}\\\hhline{-----}\lr{}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{----}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{---}\lr{}&\lr{\bullet}\\\hhline{--}\lr{\bullet}\\\hhline{-}\end{array}$}}$}; \node (--bullet+--bullet++--bullet++--bullet+--bullet+++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet) at (330bp,837bp) [draw,draw=none] {${\raisebox{-.6ex}{$\begin{array}[b]{cccccc}\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{}&\lr{\bullet}&\lr{}&\lr{\bullet}\\\hhline{------}\lr{\bullet}&\lr{}&\lr{}&\lr{\bullet}&\lr{\bullet}\\\hhline{-----}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{----}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{---}\lr{}&\lr{\bullet}\\\hhline{--}\lr{\bullet}\\\hhline{-}\end{array}$}}$}; \node (--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++++--bullet++--bullet+--bullet++--bullet+--bullet) at (382bp,39bp) [draw,draw=none] {${\raisebox{-.6ex}{$\begin{array}[b]{cccccc}\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{-----}\lr{}&\lr{}&\lr{}&\lr{\bullet}\\\hhline{----}\lr{}&\lr{\bullet}&\lr{\bullet}\\\hhline{---}\lr{}&\lr{\bullet}\\\hhline{--}\lr{\bullet}\\\hhline{-}\end{array}$}}$}; \node (--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+++--bullet++--bullet+--bullet+--bullet++--bullet+--bullet++--bullet+--bullet) at (229bp,267bp) [draw,draw=none] 
{${\raisebox{-.6ex}{$\begin{array}[b]{cccccc}\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{}&\lr{}&\lr{\bullet}\\\hhline{-----}\lr{}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{----}\lr{}&\lr{\bullet}&\lr{\bullet}\\\hhline{---}\lr{}&\lr{\bullet}\\\hhline{--}\lr{\bullet}\\\hhline{-}\end{array}$}}$}; \node (--bullet+--bullet++--bullet+--bullet+--bullet+--bullet++++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet) at (416bp,837bp) [draw,draw=none] {${\addtolength{\arraycolsep}{-0.2ex}\addtolength{\arrayrulewidth}{0.2ex}{\raisebox{-.6ex}{$\begin{array}[b]{cccccc}\hhline{------}\lr{\bullet}&\lrg{\bullet}&\lrg{}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{------}\lrg{\bullet}&\lrg{}&\lrg{}&\lrg{}&\lr{\bullet}\\\hhline{-----}\lrg{\bullet}&\lrg{\bullet}&\lrg{\bullet}&\lrg{\bullet}\\\hhline{----}\lr{\bullet}&\lrg{}&\lrg{\bullet}\\\hhline{---}\lr{\bullet}&\lr{\bullet}\\\hhline{--}\lr{\bullet}\\\hhline{-}\end{array}$}}}$}; \node (--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet++--bullet+++--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet) at (444bp,495bp) [draw,draw=none] {${\addtolength{\arraycolsep}{-0.2ex}\addtolength{\arrayrulewidth}{0.2ex}{\raisebox{-.6ex}{$\begin{array}[b]{cccccc}\hhline{------}\lr{\bullet}&\lrg{\bullet}&\lrg{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{------}\lrg{\bullet}&\lrg{}&\lrg{\bullet}&\lrg{}&\lr{\bullet}\\\hhline{-----}\lrg{}&\lrg{}&\lrg{\bullet}&\lrg{\bullet}\\\hhline{----}\lr{\bullet}&\lrg{}&\lrg{\bullet}\\\hhline{---}\lr{\bullet}&\lr{\bullet}\\\hhline{--}\lr{\bullet}\\\hhline{-}\end{array}$}}}$}; \node (--bullet+--bullet+--bullet+++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet++--bullet+--bullet++--bullet+--bullet) at (72bp,495bp) [draw,draw=none] 
{${\raisebox{-.6ex}{$\begin{array}[b]{cccccc}\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{}&\lr{}&\lr{\bullet}\\\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{-----}\lr{}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{----}\lr{}&\lr{\bullet}&\lr{\bullet}\\\hhline{---}\lr{}&\lr{\bullet}\\\hhline{--}\lr{\bullet}\\\hhline{-}\end{array}$}}$}; \node (--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+++--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet) at (540bp,495bp) [draw,draw=none] {${\raisebox{-.6ex}{$\begin{array}[b]{cccccc}\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{}&\lr{\bullet}\\\hhline{-----}\lr{}&\lr{}&\lr{\bullet}&\lr{\bullet}\\\hhline{----}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{---}\lr{}&\lr{\bullet}\\\hhline{--}\lr{\bullet}\\\hhline{-}\end{array}$}}$}; \node (--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+++--bullet++--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet) at (626bp,837bp) [draw,draw=none] {${\addtolength{\arraycolsep}{-0.2ex}\addtolength{\arrayrulewidth}{0.2ex}{\raisebox{-.6ex}{$\begin{array}[b]{cccccc}\hhline{------}\lr{\bullet}&\lrg{\bullet}&\lrg{}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{------}\lrg{\bullet}&\lrg{\bullet}&\lrg{}&\lrg{}&\lr{\bullet}\\\hhline{-----}\lrg{}&\lrg{\bullet}&\lrg{}&\lrg{\bullet}\\\hhline{----}\lr{\bullet}&\lrg{\bullet}&\lrg{\bullet}\\\hhline{---}\lr{\bullet}&\lr{\bullet}\\\hhline{--}\lr{\bullet}\\\hhline{-}\end{array}$}}}$}; \node (--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+++--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet) at (158bp,609bp) [draw,draw=none] 
{${\raisebox{-.6ex}{$\begin{array}[b]{cccccc}\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{}&\lr{\bullet}\\\hhline{------}\lr{\bullet}&\lr{}&\lr{}&\lr{\bullet}&\lr{\bullet}\\\hhline{-----}\lr{\bullet}&\lr{}&\lr{\bullet}&\lr{\bullet}\\\hhline{----}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{---}\lr{}&\lr{\bullet}\\\hhline{--}\lr{\bullet}\\\hhline{-}\end{array}$}}$}; \node (--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++++--bullet+--bullet+--bullet+--bullet++--bullet+--bullet) at (549bp,381bp) [draw,draw=none] {${\raisebox{-.6ex}{$\begin{array}[b]{cccccc}\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{-----}\lr{}&\lr{}&\lr{}&\lr{\bullet}\\\hhline{----}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{---}\lr{}&\lr{\bullet}\\\hhline{--}\lr{\bullet}\\\hhline{-}\end{array}$}}$}; \node (--bullet+--bullet++++--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet) at (351bp,1179bp) [draw,draw=none] {${\raisebox{-.6ex}{$\begin{array}[b]{cccccc}\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{}&\lr{}&\lr{}&\lr{\bullet}\\\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{}&\lr{\bullet}&\lr{\bullet}\\\hhline{-----}\lr{\bullet}&\lr{\bullet}&\lr{}&\lr{\bullet}\\\hhline{----}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{---}\lr{\bullet}&\lr{\bullet}\\\hhline{--}\lr{\bullet}\\\hhline{-}\end{array}$}}$}; \node (--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet++++--bullet+--bullet++--bullet+--bullet+--bullet+--bullet) at (463bp,381bp) [draw,draw=none] 
{${\addtolength{\arraycolsep}{-0.2ex}\addtolength{\arrayrulewidth}{0.2ex}{\raisebox{-.6ex}{$\begin{array}[b]{cccccc}\hhline{------}\lr{\bullet}&\lrg{\bullet}&\lrg{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{------}\lrg{\bullet}&\lrg{}&\lrg{\bullet}&\lrg{\bullet}&\lr{\bullet}\\\hhline{-----}\lrg{}&\lrg{}&\lrg{}&\lrg{\bullet}\\\hhline{----}\lr{\bullet}&\lrg{}&\lrg{\bullet}\\\hhline{---}\lr{\bullet}&\lr{\bullet}\\\hhline{--}\lr{\bullet}\\\hhline{-}\end{array}$}}}$}; \node (--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++++--bullet+--bullet++--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet) at (282bp,609bp) [draw,draw=none] {${\addtolength{\arraycolsep}{-0.2ex}\addtolength{\arrayrulewidth}{0.2ex}{\raisebox{-.6ex}{$\begin{array}[b]{cccccc}\hhline{------}\lr{\bullet}&\lrg{\bullet}&\lrg{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{------}\lrg{\bullet}&\lrg{}&\lrg{}&\lrg{}&\lr{\bullet}\\\hhline{-----}\lrg{\bullet}&\lrg{}&\lrg{\bullet}&\lrg{\bullet}\\\hhline{----}\lr{\bullet}&\lrg{}&\lrg{\bullet}\\\hhline{---}\lr{\bullet}&\lr{\bullet}\\\hhline{--}\lr{\bullet}\\\hhline{-}\end{array}$}}}$}; \node (--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+++--bullet+--bullet++--bullet+--bullet++--bullet+--bullet) at (315bp,267bp) [draw,draw=none] {${\raisebox{-.6ex}{$\begin{array}[b]{cccccc}\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{}&\lr{\bullet}\\\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{-----}\lr{}&\lr{}&\lr{\bullet}&\lr{\bullet}\\\hhline{----}\lr{}&\lr{\bullet}&\lr{\bullet}\\\hhline{---}\lr{}&\lr{\bullet}\\\hhline{--}\lr{\bullet}\\\hhline{-}\end{array}$}}$}; \node (--bullet+--bullet++--bullet+--bullet+--bullet+--bullet++++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet) at (330bp,723bp) [draw,draw=none] 
{${\raisebox{-.6ex}{$\begin{array}[b]{cccccc}\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{------}\lr{\bullet}&\lr{}&\lr{}&\lr{}&\lr{\bullet}\\\hhline{-----}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{----}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{---}\lr{}&\lr{\bullet}\\\hhline{--}\lr{\bullet}\\\hhline{-}\end{array}$}}$}; \node (--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet++--bullet+--bullet) at (158bp,495bp) [draw,draw=none] {${\raisebox{-.6ex}{$\begin{array}[b]{cccccc}\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{}&\lr{\bullet}\\\hhline{------}\lr{\bullet}&\lr{}&\lr{}&\lr{\bullet}&\lr{\bullet}\\\hhline{-----}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{----}\lr{}&\lr{\bullet}&\lr{\bullet}\\\hhline{---}\lr{}&\lr{\bullet}\\\hhline{--}\lr{\bullet}\\\hhline{-}\end{array}$}}$}; \node (--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+++--bullet+--bullet+--bullet++--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet) at (206bp,723bp) [draw,draw=none] {${\raisebox{-.6ex}{$\begin{array}[b]{cccccc}\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{}&\lr{\bullet}\\\hhline{------}\lr{\bullet}&\lr{}&\lr{}&\lr{\bullet}&\lr{\bullet}\\\hhline{-----}\lr{\bullet}&\lr{}&\lr{\bullet}&\lr{\bullet}\\\hhline{----}\lr{\bullet}&\lr{}&\lr{\bullet}\\\hhline{---}\lr{\bullet}&\lr{\bullet}\\\hhline{--}\lr{\bullet}\\\hhline{-}\end{array}$}}$}; \node (--bullet+--bullet++++--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet) at (287bp,951bp) [draw,draw=none] 
{${\raisebox{-.6ex}{$\begin{array}[b]{cccccc}\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{}&\lr{}&\lr{}&\lr{\bullet}\\\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{}&\lr{\bullet}&\lr{\bullet}\\\hhline{-----}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{----}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{---}\lr{}&\lr{\bullet}\\\hhline{--}\lr{\bullet}\\\hhline{-}\end{array}$}}$}; \node (--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet++--bullet+++--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet) at (377bp,381bp) [draw,draw=none] {${\raisebox{-.6ex}{$\begin{array}[b]{cccccc}\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{------}\lr{\bullet}&\lr{}&\lr{\bullet}&\lr{}&\lr{\bullet}\\\hhline{-----}\lr{}&\lr{}&\lr{\bullet}&\lr{\bullet}\\\hhline{----}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{---}\lr{}&\lr{\bullet}\\\hhline{--}\lr{\bullet}\\\hhline{-}\end{array}$}}$}; \node (--bullet+--bullet+--bullet+++--bullet+--bullet++--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet) at (120bp,723bp) [draw,draw=none] {${\raisebox{-.6ex}{$\begin{array}[b]{cccccc}\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{}&\lr{}&\lr{\bullet}\\\hhline{------}\lr{\bullet}&\lr{}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{-----}\lr{\bullet}&\lr{}&\lr{\bullet}&\lr{\bullet}\\\hhline{----}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{---}\lr{}&\lr{\bullet}\\\hhline{--}\lr{\bullet}\\\hhline{-}\end{array}$}}$}; \node (--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+++--bullet+--bullet++--bullet+--bullet++--bullet+--bullet) at (333bp,153bp) [draw,draw=none] 
{${\raisebox{-.6ex}{$\begin{array}[b]{cccccc}\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{}&\lr{\bullet}\\\hhline{-----}\lr{}&\lr{}&\lr{\bullet}&\lr{\bullet}\\\hhline{----}\lr{}&\lr{\bullet}&\lr{\bullet}\\\hhline{---}\lr{}&\lr{\bullet}\\\hhline{--}\lr{\bullet}\\\hhline{-}\end{array}$}}$}; \node (--bullet+--bullet++--bullet+--bullet+--bullet+--bullet++++--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet) at (459bp,951bp) [draw,draw=none] {${\addtolength{\arraycolsep}{-0.2ex}\addtolength{\arrayrulewidth}{0.2ex}{\raisebox{-.6ex}{$\begin{array}[b]{cccccc}\hhline{------}\lr{\bullet}&\lrg{\bullet}&\lrg{}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{------}\lrg{\bullet}&\lrg{}&\lrg{}&\lrg{}&\lr{\bullet}\\\hhline{-----}\lrg{\bullet}&\lrg{\bullet}&\lrg{}&\lrg{\bullet}\\\hhline{----}\lr{\bullet}&\lrg{\bullet}&\lrg{\bullet}\\\hhline{---}\lr{\bullet}&\lr{\bullet}\\\hhline{--}\lr{\bullet}\\\hhline{-}\end{array}$}}}$}; \node (--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet++++--bullet+--bullet+--bullet+--bullet++--bullet+--bullet) at (451bp,267bp) [draw,draw=none] {${\raisebox{-.6ex}{$\begin{array}[b]{cccccc}\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{------}\lr{\bullet}&\lr{}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{-----}\lr{}&\lr{}&\lr{}&\lr{\bullet}\\\hhline{----}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{---}\lr{}&\lr{\bullet}\\\hhline{--}\lr{\bullet}\\\hhline{-}\end{array}$}}$}; \node (--bullet+--bullet++++--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet) at (308bp,1065bp) [draw,draw=none] 
{${\raisebox{-.6ex}{$\begin{array}[b]{cccccc}\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{}&\lr{}&\lr{}&\lr{\bullet}\\\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{}&\lr{\bullet}&\lr{\bullet}\\\hhline{-----}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{----}\lr{\bullet}&\lr{}&\lr{\bullet}\\\hhline{---}\lr{\bullet}&\lr{\bullet}\\\hhline{--}\lr{\bullet}\\\hhline{-}\end{array}$}}$}; \node (--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+++--bullet++--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet) at (626bp,723bp) [draw,draw=none] {${\addtolength{\arraycolsep}{-0.2ex}\addtolength{\arrayrulewidth}{0.2ex}{\raisebox{-.6ex}{$\begin{array}[b]{cccccc}\hhline{------}\lr{\bullet}&\lrg{\bullet}&\lrg{}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{------}\lrg{\bullet}&\lrg{\bullet}&\lrg{}&\lrg{}&\lr{\bullet}\\\hhline{-----}\lrg{}&\lrg{\bullet}&\lrg{\bullet}&\lrg{\bullet}\\\hhline{----}\lr{\bullet}&\lrg{}&\lrg{\bullet}\\\hhline{---}\lr{\bullet}&\lr{\bullet}\\\hhline{--}\lr{\bullet}\\\hhline{-}\end{array}$}}}$}; \node (--bullet+--bullet+--bullet+--bullet++--bullet+--bullet++--bullet+--bullet+--bullet+++--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet) at (368bp,609bp) [draw,draw=none] {${\raisebox{-.6ex}{$\begin{array}[b]{cccccc}\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{}&\lr{\bullet}\\\hhline{------}\lr{\bullet}&\lr{}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{-----}\lr{}&\lr{}&\lr{\bullet}&\lr{\bullet}\\\hhline{----}\lr{\bullet}&\lr{}&\lr{\bullet}\\\hhline{---}\lr{\bullet}&\lr{\bullet}\\\hhline{--}\lr{\bullet}\\\hhline{-}\end{array}$}}$}; \node (--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++++--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet++--bullet+--bullet) at (244bp,381bp) [draw,draw=none] 
{${\raisebox{-.6ex}{$\begin{array}[b]{cccccc}\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{------}\lr{\bullet}&\lr{}&\lr{}&\lr{}&\lr{\bullet}\\\hhline{-----}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{----}\lr{}&\lr{\bullet}&\lr{\bullet}\\\hhline{---}\lr{}&\lr{\bullet}\\\hhline{--}\lr{\bullet}\\\hhline{-}\end{array}$}}$}; \node (--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet++++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet) at (712bp,609bp) [draw,draw=none] {${\addtolength{\arraycolsep}{-0.2ex}\addtolength{\arrayrulewidth}{0.2ex}{\raisebox{-.6ex}{$\begin{array}[b]{cccccc}\hhline{------}\lr{\bullet}&\lrg{\bullet}&\lrg{}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{------}\lrg{\bullet}&\lrg{\bullet}&\lrg{}&\lrg{\bullet}&\lr{\bullet}\\\hhline{-----}\lrg{}&\lrg{}&\lrg{}&\lrg{\bullet}\\\hhline{----}\lr{\bullet}&\lrg{\bullet}&\lrg{\bullet}\\\hhline{---}\lr{\bullet}&\lr{\bullet}\\\hhline{--}\lr{\bullet}\\\hhline{-}\end{array}$}}}$}; \node (--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet++--bullet+--bullet++--bullet+--bullet+--bullet++--bullet+--bullet++--bullet+--bullet) at (158bp,381bp) [draw,draw=none] {${\raisebox{-.6ex}{$\begin{array}[b]{cccccc}\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{}&\lr{\bullet}\\\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{}&\lr{\bullet}&\lr{\bullet}\\\hhline{-----}\lr{}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{----}\lr{}&\lr{\bullet}&\lr{\bullet}\\\hhline{---}\lr{}&\lr{\bullet}\\\hhline{--}\lr{\bullet}\\\hhline{-}\end{array}$}}$}; \node (--bullet+--bullet++--bullet++--bullet+--bullet+++--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet) at (416bp,1065bp) [draw,draw=none] 
{${\raisebox{-.6ex}{$\begin{array}[b]{cccccc}\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{}&\lr{\bullet}&\lr{}&\lr{\bullet}\\\hhline{------}\lr{\bullet}&\lr{}&\lr{}&\lr{\bullet}&\lr{\bullet}\\\hhline{-----}\lr{\bullet}&\lr{\bullet}&\lr{}&\lr{\bullet}\\\hhline{----}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{---}\lr{\bullet}&\lr{\bullet}\\\hhline{--}\lr{\bullet}\\\hhline{-}\end{array}$}}$}; \node (--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+++--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet) at (626bp,609bp) [draw,draw=none] {${\addtolength{\arraycolsep}{-0.2ex}\addtolength{\arrayrulewidth}{0.2ex}{\raisebox{-.6ex}{$\begin{array}[b]{cccccc}\hhline{------}\lr{\bullet}&\lrg{\bullet}&\lrg{}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{------}\lrg{\bullet}&\lrg{\bullet}&\lrg{\bullet}&\lrg{}&\lr{\bullet}\\\hhline{-----}\lrg{}&\lrg{}&\lrg{\bullet}&\lrg{\bullet}\\\hhline{----}\lr{\bullet}&\lrg{}&\lrg{\bullet}\\\hhline{---}\lr{\bullet}&\lr{\bullet}\\\hhline{--}\lr{\bullet}\\\hhline{-}\end{array}$}}}$}; \node (--bullet+--bullet++--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+++--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet) at (454bp,609bp) [draw,draw=none] {${\raisebox{-.6ex}{$\begin{array}[b]{cccccc}\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{}&\lr{\bullet}&\lr{}&\lr{\bullet}\\\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{-----}\lr{}&\lr{}&\lr{\bullet}&\lr{\bullet}\\\hhline{----}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{---}\lr{}&\lr{\bullet}\\\hhline{--}\lr{\bullet}\\\hhline{-}\end{array}$}}$}; \node (--bullet+--bullet++--bullet++--bullet+--bullet+--bullet++--bullet+--bullet++--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet) at (545bp,951bp) [draw,draw=none] 
{${\raisebox{-.6ex}{$\begin{array}[b]{cccccc}\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{}&\lr{\bullet}&\lr{}&\lr{\bullet}\\\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{}&\lr{\bullet}&\lr{\bullet}\\\hhline{-----}\lr{}&\lr{\bullet}&\lr{}&\lr{\bullet}\\\hhline{----}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{---}\lr{\bullet}&\lr{\bullet}\\\hhline{--}\lr{\bullet}\\\hhline{-}\end{array}$}}$}; \node (--bullet+--bullet++++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet++--bullet+--bullet) at (34bp,723bp) [draw,draw=none] {${\raisebox{-.6ex}{$\begin{array}[b]{cccccc}\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{}&\lr{}&\lr{}&\lr{\bullet}\\\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{-----}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{----}\lr{}&\lr{\bullet}&\lr{\bullet}\\\hhline{---}\lr{}&\lr{\bullet}\\\hhline{--}\lr{\bullet}\\\hhline{-}\end{array}$}}$}; \node (--bullet+--bullet++++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet) at (120bp,837bp) [draw,draw=none] {${\raisebox{-.6ex}{$\begin{array}[b]{cccccc}\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{}&\lr{}&\lr{}&\lr{\bullet}\\\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{-----}\lr{\bullet}&\lr{}&\lr{\bullet}&\lr{\bullet}\\\hhline{----}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{---}\lr{}&\lr{\bullet}\\\hhline{--}\lr{\bullet}\\\hhline{-}\end{array}$}}$}; \node (--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++++--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet) at (244bp,495bp) [draw,draw=none] 
{${\raisebox{-.6ex}{$\begin{array}[b]{cccccc}\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{------}\lr{\bullet}&\lr{}&\lr{}&\lr{}&\lr{\bullet}\\\hhline{-----}\lr{\bullet}&\lr{}&\lr{\bullet}&\lr{\bullet}\\\hhline{----}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{---}\lr{}&\lr{\bullet}\\\hhline{--}\lr{\bullet}\\\hhline{-}\end{array}$}}$}; \node (--bullet+--bullet+--bullet+++--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet++--bullet+--bullet) at (72bp,609bp) [draw,draw=none] {${\raisebox{-.6ex}{$\begin{array}[b]{cccccc}\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{}&\lr{}&\lr{\bullet}\\\hhline{------}\lr{\bullet}&\lr{}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{-----}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{----}\lr{}&\lr{\bullet}&\lr{\bullet}\\\hhline{---}\lr{}&\lr{\bullet}\\\hhline{--}\lr{\bullet}\\\hhline{-}\end{array}$}}$}; \node (--bullet+--bullet+--bullet+++--bullet+--bullet++--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet) at (206bp,837bp) [draw,draw=none] {${\raisebox{-.6ex}{$\begin{array}[b]{cccccc}\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{}&\lr{}&\lr{\bullet}\\\hhline{------}\lr{\bullet}&\lr{}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{-----}\lr{\bullet}&\lr{}&\lr{\bullet}&\lr{\bullet}\\\hhline{----}\lr{\bullet}&\lr{}&\lr{\bullet}\\\hhline{---}\lr{\bullet}&\lr{\bullet}\\\hhline{--}\lr{\bullet}\\\hhline{-}\end{array}$}}$}; \node (--bullet+--bullet++--bullet++--bullet+--bullet+--bullet++--bullet+--bullet++--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet) at (540bp,837bp) [draw,draw=none] 
{${\raisebox{-.6ex}{$\begin{array}[b]{cccccc}\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{}&\lr{\bullet}&\lr{}&\lr{\bullet}\\\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{}&\lr{\bullet}&\lr{\bullet}\\\hhline{-----}\lr{}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{----}\lr{\bullet}&\lr{}&\lr{\bullet}\\\hhline{---}\lr{\bullet}&\lr{\bullet}\\\hhline{--}\lr{\bullet}\\\hhline{-}\end{array}$}}$}; \node (--bullet+--bullet++--bullet++--bullet+--bullet+++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet) at (373bp,951bp) [draw,draw=none] {${\raisebox{-.6ex}{$\begin{array}[b]{cccccc}\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{}&\lr{\bullet}&\lr{}&\lr{\bullet}\\\hhline{------}\lr{\bullet}&\lr{}&\lr{}&\lr{\bullet}&\lr{\bullet}\\\hhline{-----}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{----}\lr{\bullet}&\lr{}&\lr{\bullet}\\\hhline{---}\lr{\bullet}&\lr{\bullet}\\\hhline{--}\lr{\bullet}\\\hhline{-}\end{array}$}}$}; \node (--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++++--bullet+--bullet++--bullet+--bullet+--bullet+--bullet) at (626bp,495bp) [draw,draw=none] {${\addtolength{\arraycolsep}{-0.2ex}\addtolength{\arrayrulewidth}{0.2ex}{\raisebox{-.6ex}{$\begin{array}[b]{cccccc}\hhline{------}\lr{\bullet}&\lrg{\bullet}&\lrg{}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{------}\lrg{\bullet}&\lrg{\bullet}&\lrg{\bullet}&\lrg{\bullet}&\lr{\bullet}\\\hhline{-----}\lrg{}&\lrg{}&\lrg{}&\lrg{\bullet}\\\hhline{----}\lr{\bullet}&\lrg{}&\lrg{\bullet}\\\hhline{---}\lr{\bullet}&\lr{\bullet}\\\hhline{--}\lr{\bullet}\\\hhline{-}\end{array}$}}}$}; \node (--bullet+--bullet+--bullet+--bullet++--bullet+--bullet++--bullet+--bullet+--bullet+++--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet) at (358bp,495bp) [draw,draw=none] 
{${\raisebox{-.6ex}{$\begin{array}[b]{cccccc}\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{}&\lr{\bullet}\\\hhline{------}\lr{\bullet}&\lr{}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{-----}\lr{}&\lr{}&\lr{\bullet}&\lr{\bullet}\\\hhline{----}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{---}\lr{}&\lr{\bullet}\\\hhline{--}\lr{\bullet}\\\hhline{-}\end{array}$}}$}; \node (--bullet+--bullet++--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+++--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet) at (540bp,723bp) [draw,draw=none] {${\raisebox{-.6ex}{$\begin{array}[b]{cccccc}\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{}&\lr{\bullet}&\lr{}&\lr{\bullet}\\\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{-----}\lr{}&\lr{}&\lr{\bullet}&\lr{\bullet}\\\hhline{----}\lr{\bullet}&\lr{}&\lr{\bullet}\\\hhline{---}\lr{\bullet}&\lr{\bullet}\\\hhline{--}\lr{\bullet}\\\hhline{-}\end{array}$}}$}; \node (--bullet+--bullet++++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet) at (201bp,951bp) [draw,draw=none] {${\raisebox{-.6ex}{$\begin{array}[b]{cccccc}\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{}&\lr{}&\lr{}&\lr{\bullet}\\\hhline{------}\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}&\lr{\bullet}\\\hhline{-----}\lr{\bullet}&\lr{}&\lr{\bullet}&\lr{\bullet}\\\hhline{----}\lr{\bullet}&\lr{}&\lr{\bullet}\\\hhline{---}\lr{\bullet}&\lr{\bullet}\\\hhline{--}\lr{\bullet}\\\hhline{-}\end{array}$}}$}; \draw [black,->] (--bullet+--bullet+--bullet+++--bullet+--bullet++--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet) ..controls (206bp,789bp) and (206bp,781bp) .. 
(--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+++--bullet+--bullet+--bullet++--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet); \draw [black,->] (--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+++--bullet++--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet) ..controls (590bp,675bp) and (582bp,665bp) .. (--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+++--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet); \draw [very thick,->] (--bullet+--bullet++--bullet+--bullet+--bullet+--bullet++++--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet) ..controls (496bp,916bp) and (499bp,914bp) .. (502bp,912bp) .. controls (535bp,890bp) and (549bp,897bp) .. (583bp,876bp) .. controls (583bp,876bp) and (584bp,876bp) .. (--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+++--bullet++--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet); \draw [black,->] (--bullet+--bullet++--bullet++--bullet+--bullet+++--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet) ..controls (434bp,1017bp) and (437bp,1008bp) .. (--bullet+--bullet++--bullet+--bullet+--bullet+--bullet++++--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet); \draw [black,->] (--bullet+--bullet++++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet) ..controls (120bp,789bp) and (120bp,781bp) .. (--bullet+--bullet+--bullet+++--bullet+--bullet++--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet); \draw [black,->] (--bullet+--bullet++++--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet) ..controls (250bp,916bp) and (247bp,914bp) .. (244bp,912bp) .. 
controls (211bp,890bp) and (197bp,897bp) .. (163bp,876bp) .. controls (163bp,876bp) and (162bp,876bp) .. (--bullet+--bullet++++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet); \draw [black,->] (--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++++--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet) ..controls (295bp,451bp) and (317bp,432bp) .. (--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet++--bullet+++--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet); \draw [black,->] (--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet++++--bullet+--bullet++--bullet+--bullet+--bullet+--bullet) ..controls (458bp,333bp) and (457bp,325bp) .. (--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet++++--bullet+--bullet+--bullet+--bullet++--bullet+--bullet); \draw [black,->] (--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+++--bullet+--bullet+--bullet++--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet) ..controls (185bp,675bp) and (182bp,666bp) .. (--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+++--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet); \draw [black,->] (--bullet+--bullet++++--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet) ..controls (300bp,1017bp) and (298bp,1009bp) .. (--bullet+--bullet++++--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet); \draw [black,->] (--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++++--bullet+--bullet++--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet) ..controls (266bp,561bp) and (263bp,552bp) .. 
(--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++++--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet); \draw [black,->] (--bullet+--bullet++--bullet++--bullet+--bullet+++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet) ..controls (301bp,786bp) and (293bp,774bp) .. (287bp,762bp) .. controls (269bp,727bp) and (273bp,714bp) .. (249bp,684bp) .. controls (235bp,666bp) and (217bp,650bp) .. (--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+++--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet); \draw [black,->] (--bullet+--bullet++--bullet++--bullet+--bullet+--bullet++--bullet+--bullet++--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet) ..controls (576bp,789bp) and (584bp,779bp) .. (--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+++--bullet++--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet); \draw [black,->] (--bullet+--bullet++++--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet) ..controls (378bp,1131bp) and (384bp,1122bp) .. (--bullet+--bullet++--bullet++--bullet+--bullet+++--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet); \draw [black,->] (--bullet+--bullet++--bullet++--bullet+--bullet+--bullet++--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet) ..controls (454bp,675bp) and (454bp,667bp) .. (--bullet+--bullet++--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+++--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet); \draw [very thick,->] (--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+++--bullet++--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet) ..controls (657bp,786bp) and (664bp,774bp) .. (669bp,762bp) .. 
controls (683bp,728bp) and (694bp,689bp) .. (--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet++++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet); \draw [black,->] (--bullet+--bullet++++--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet) ..controls (305bp,903bp) and (308bp,894bp) .. (--bullet+--bullet++--bullet++--bullet+--bullet+++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet); \draw [black,->] (--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+++--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet) ..controls (544bp,447bp) and (544bp,439bp) .. (--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++++--bullet+--bullet+--bullet+--bullet++--bullet+--bullet); \draw [black,->] (--bullet+--bullet++--bullet++--bullet+--bullet+--bullet++--bullet+--bullet++--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet) ..controls (540bp,789bp) and (540bp,781bp) .. (--bullet+--bullet++--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+++--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet); \draw [black,->] (--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+++--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet) ..controls (226bp,570bp) and (278bp,540bp) .. (--bullet+--bullet+--bullet+--bullet++--bullet+--bullet++--bullet+--bullet+--bullet+++--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet); \draw [black,->] (--bullet+--bullet++--bullet++--bullet+--bullet+++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet) ..controls (336bp,917bp) and (333bp,914bp) .. (330bp,912bp) .. controls (311bp,895bp) and (303bp,895bp) .. (287bp,876bp) .. controls (264bp,845bp) and (268bp,832bp) .. 
(249bp,798bp) .. controls (244bp,789bp) and (239bp,780bp) .. (--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+++--bullet+--bullet+--bullet++--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet); \draw [black,->] (--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet++--bullet+--bullet++--bullet+--bullet+--bullet++--bullet+--bullet++--bullet+--bullet) ..controls (187bp,333bp) and (193bp,324bp) .. (--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+++--bullet++--bullet+--bullet+--bullet++--bullet+--bullet++--bullet+--bullet); \draw [black,->] (--bullet+--bullet+--bullet+++--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet++--bullet+--bullet) ..controls (108bp,561bp) and (116bp,551bp) .. (--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet++--bullet+--bullet); \draw [black,->] (--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+++--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet) ..controls (540bp,561bp) and (540bp,553bp) .. (--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+++--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet); \draw [black,->] (--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+++--bullet+--bullet+--bullet++--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet) ..controls (238bp,675bp) and (244bp,666bp) .. (--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++++--bullet+--bullet++--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet); \draw [black,->] (--bullet+--bullet++--bullet++--bullet+--bullet+--bullet++--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet) ..controls (490bp,675bp) and (498bp,665bp) .. 
(--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+++--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet); \draw [black,->] (--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+++--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet) ..controls (590bp,561bp) and (582bp,551bp) .. (--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+++--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet); \draw [black,->] (--bullet+--bullet++++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet) ..controls (84bp,789bp) and (76bp,779bp) .. (--bullet+--bullet++++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet++--bullet+--bullet); \draw [black,->] (--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+++--bullet+--bullet+--bullet++--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet) ..controls (265bp,689bp) and (298bp,668bp) .. (325bp,648bp) .. controls (325bp,648bp) and (326bp,647bp) .. (--bullet+--bullet+--bullet+--bullet++--bullet+--bullet++--bullet+--bullet+--bullet+++--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet); \draw [very thick,->] (--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+++--bullet++--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet) ..controls (626bp,675bp) and (626bp,667bp) .. (--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+++--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet); \draw [black,->] (--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++++--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet) ..controls (244bp,447bp) and (244bp,439bp) .. 
(--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++++--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet++--bullet+--bullet); \draw [black,->] (--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet++--bullet+++--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet) ..controls (416bp,447bp) and (410bp,438bp) .. (--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet++--bullet+++--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet); \draw [black,->] (--bullet+--bullet++--bullet+--bullet+--bullet+--bullet++++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet) ..controls (379bp,699bp) and (395bp,691bp) .. (411bp,684bp) .. controls (448bp,667bp) and (462bp,670bp) .. (497bp,648bp) .. controls (497bp,648bp) and (497bp,648bp) .. (--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+++--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet); \draw [black,->] (--bullet+--bullet++--bullet+--bullet+--bullet+--bullet++++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet) ..controls (380bp,789bp) and (372bp,779bp) .. (--bullet+--bullet++--bullet+--bullet+--bullet+--bullet++++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet); \draw [black,->] (--bullet+--bullet++--bullet+--bullet+--bullet+--bullet++++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet) ..controls (270bp,674bp) and (240bp,649bp) .. (239bp,648bp) .. controls (224bp,616bp) and (226bp,575bp) .. 
(--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++++--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet); \draw [black,->] (--bullet+--bullet++--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+++--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet) ..controls (413bp,561bp) and (405bp,551bp) .. (--bullet+--bullet+--bullet+--bullet++--bullet+--bullet++--bullet+--bullet+--bullet+++--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet); \draw [black,->] (--bullet+--bullet+--bullet+--bullet++--bullet+--bullet++--bullet+--bullet+--bullet+++--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet) ..controls (340bp,444bp) and (336bp,432bp) .. (334bp,420bp) .. controls (326bp,386bp) and (321bp,346bp) .. (--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+++--bullet+--bullet++--bullet+--bullet++--bullet+--bullet); \draw [black,->] (--bullet+--bullet+--bullet+++--bullet+--bullet++--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet) ..controls (136bp,675bp) and (139bp,666bp) .. (--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+++--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet); \draw [black,->] (--bullet+--bullet++--bullet++--bullet+--bullet+++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet) ..controls (391bp,903bp) and (394bp,894bp) .. (--bullet+--bullet++--bullet+--bullet+--bullet+--bullet++++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet); \draw [black,->] (--bullet+--bullet+--bullet+++--bullet+--bullet++--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet) ..controls (170bp,789bp) and (162bp,779bp) .. 
(--bullet+--bullet+--bullet+++--bullet+--bullet++--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet); \draw [black,->] (--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+++--bullet+--bullet++--bullet+--bullet++--bullet+--bullet) ..controls (322bp,219bp) and (324bp,211bp) .. (--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+++--bullet+--bullet++--bullet+--bullet++--bullet+--bullet); \draw [black,->] (--bullet+--bullet++++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet) ..controls (167bp,903bp) and (160bp,894bp) .. (--bullet+--bullet++++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet); \draw [black,->] (--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet++++--bullet+--bullet+--bullet+--bullet++--bullet+--bullet) ..controls (427bp,189bp) and (409bp,130bp) .. (--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++++--bullet++--bullet+--bullet++--bullet+--bullet); \draw [black,->] (--bullet+--bullet++++--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet) ..controls (335bp,1017bp) and (341bp,1008bp) .. (--bullet+--bullet++--bullet++--bullet+--bullet+++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet); \draw [black,->] (--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++++--bullet+--bullet+--bullet+--bullet++--bullet+--bullet) ..controls (507bp,333bp) and (499bp,323bp) .. 
(--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet++++--bullet+--bullet+--bullet+--bullet++--bullet+--bullet); \draw [black,->] (--bullet+--bullet+--bullet+--bullet++--bullet+--bullet++--bullet+--bullet+--bullet+++--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet) ..controls (364bp,561bp) and (364bp,553bp) .. (--bullet+--bullet+--bullet+--bullet++--bullet+--bullet++--bullet+--bullet+--bullet+++--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet); \draw [black,->] (--bullet+--bullet++--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+++--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet) ..controls (490bp,561bp) and (498bp,551bp) .. (--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+++--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet); \draw [black,->] (--bullet+--bullet++--bullet++--bullet+--bullet+++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet) ..controls (379bp,792bp) and (397bp,776bp) .. (--bullet+--bullet++--bullet++--bullet+--bullet+--bullet++--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet); \draw [black,->] (--bullet+--bullet+--bullet+--bullet++--bullet+--bullet++--bullet+--bullet+--bullet+++--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet) ..controls (366bp,447bp) and (368bp,439bp) .. (--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet++--bullet+++--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet); \draw [black,->] (--bullet+--bullet++--bullet++--bullet+--bullet+--bullet++--bullet+--bullet++--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet) ..controls (543bp,903bp) and (542bp,895bp) .. 
(--bullet+--bullet++--bullet++--bullet+--bullet+--bullet++--bullet+--bullet++--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet); \draw [black,->] (--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++++--bullet+--bullet++--bullet+--bullet+--bullet+--bullet) ..controls (594bp,447bp) and (587bp,438bp) .. (--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++++--bullet+--bullet+--bullet+--bullet++--bullet+--bullet); \draw [very thick,->] (--bullet+--bullet++--bullet+--bullet+--bullet+--bullet++++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet) ..controls (407bp,765bp) and (396bp,718bp) .. (373bp,684bp) .. controls (357bp,662bp) and (345bp,664bp) .. (325bp,648bp) .. controls (325bp,648bp) and (324bp,647bp) .. (--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++++--bullet+--bullet++--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet); \draw [black,->] (--bullet+--bullet++--bullet++--bullet+--bullet+++--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet) ..controls (398bp,1017bp) and (395bp,1008bp) .. (--bullet+--bullet++--bullet++--bullet+--bullet+++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet); \draw [black,->] (--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet++--bullet+++--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet) ..controls (408bp,333bp) and (414bp,324bp) .. (--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet++++--bullet+--bullet+--bullet+--bullet++--bullet+--bullet); \draw [black,->] (--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+++--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet) ..controls (500bp,464bp) and (493bp,460bp) .. (487bp,456bp) .. 
controls (458bp,437bp) and (447bp,439bp) .. (420bp,420bp) .. controls (420bp,420bp) and (419bp,420bp) .. (--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet++--bullet+++--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet); \draw [very thick,->] (--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+++--bullet++--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet) ..controls (626bp,789bp) and (626bp,781bp) .. (--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+++--bullet++--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet); \draw [black,->] (--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet++--bullet+--bullet++--bullet+--bullet+--bullet++--bullet+--bullet++--bullet+--bullet) ..controls (195bp,346bp) and (198bp,344bp) .. (201bp,342bp) .. controls (231bp,321bp) and (243bp,326bp) .. (272bp,306bp) .. controls (272bp,306bp) and (273bp,306bp) .. (--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+++--bullet+--bullet++--bullet+--bullet++--bullet+--bullet); \draw [black,->] (--bullet+--bullet+--bullet+--bullet++--bullet+--bullet+++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet++--bullet+--bullet) ..controls (194bp,447bp) and (202bp,437bp) .. (--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++++--bullet+--bullet+--bullet+--bullet+--bullet++--bullet+--bullet++--bullet+--bullet); \draw [very thick,->] (--bullet+--bullet++--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet+--bullet++++--bullet+--bullet++--bullet+--bullet+--bullet+--bullet) ..controls (589bp,460bp) and (586bp,458bp) .. (583bp,456bp) .. controls (551bp,435bp) and (537bp,440bp) .. (506bp,420bp) .. controls (506bp,420bp) and (505bp,420bp) .. 
% Hasse diagram of the poset of reduced pipe dreams (the \draw coordinate data was corrupted and could not be reconstructed)
\end{tikzpicture}}
\end{center}
\caption{the poset of reduced pipe dreams for the permutation $1, 2, 6, 4, 5, 3$. The interval of $0$-$1$-fillings with $k=1$ of the moon polyomino \protect{\tiny$\young(:\hfil\hfil,\hfil\hfil\hfil\hfil,\hfil\hfil\hfil\hfil,:\hfil\hfil)$}\ is emphasised.}
\label{fig:poset}
\end{figure}

After generating and analysing some of these posets using \texttt{Sage}~\cite{Sage-Combinat}, see Figure~\ref{fig:poset} for an example, we became convinced that they should have much more structure:
\begin{cnj}\label{cnj:lattice}
  The poset of reduced pipe dreams defined by (general) chute moves is in fact a lattice.
\end{cnj}

There is another natural way to transform one reduced pipe dream into another, originating in the concept of flipping a diagonal of a triangulation. Namely, consider an elbow joint in the pipe dream. Since any pair of pipes crosses at most once, there is at most one location where the pipes originating from the given elbow joint cross. If there is such a crossing, replace the elbow joint by a cross and the cross by an elbow joint.
Clearly, the result is again a reduced pipe dream, associated to the same permutation. It is believed (see Vincent Pilaud and Michel Pocchiola~\cite{PilaudPocchiola2009}, Question~51) that the simplicial complex of multitriangulations can be realised as a polytope; in this case the graph of flips would be the graph of the polytope. Note that the graph of chute moves is a subgraph of the graph of flips. Is Conjecture~\ref{cnj:lattice} related to the question of polytopality?

\section{Maximal Fillings of Moon Polyominoes and Pipe Dreams}
\label{sec:maximal-fillings-rc}

Consider a maximal filling in $\Set F_{01}^{ne}(M, k)$. Recall that we regard a moon polyomino $M$ as a finite subset of $\mathbb N^2$. Also, recall that a pipe dream is nothing but a filling of $\mathbb N^2$ with elbow joints and a finite number of crosses. Thus, replacing zeros in the filling of the moon polyomino with crosses, and all cells in the filling containing ones, as well as all cells not in $M$, with elbow joints, we clearly obtain a pipe dream for some permutation $w$. An example of this transformation is given in Figure~\ref{fig:pipe-dream-filling}. We will see in this section that the pipe dreams obtained in this way are in fact reduced.
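For experimenting with small examples, the transformation just described, the associated permutation, and the reducedness check are easy to implement. The Python sketch below is ours, not taken from the works cited here; it uses the common convention that a cross in cell $(i,j)$ ($1$-indexed) contributes the letter $s_{i+j-1}$, that crosses are read row by row from top to bottom and from right to left within each row, and that a pipe dream is reduced exactly when its number of crosses equals the inversion number of its permutation. One-line-notation and composition conventions may differ from those used implicitly in this article.

```python
def fill_to_pipedream(M, filling):
    """Replace each 0 of the filling by a cross; 1-cells and cells outside
    the moon polyomino M become elbow joints.  A pipe dream is represented
    simply by its (finite) set of cross positions."""
    return {cell for cell in M if filling[cell] == 0}

def pipedream_permutation(crosses, n):
    """One-line notation of the permutation of a pipe dream on n pipes.
    Convention (assumed, may differ from the article's): a cross in cell
    (i, j), 1-indexed, is the letter s_{i+j-1}; crosses are read row by
    row, top to bottom and right to left within a row, and the letters
    are multiplied from left to right by swapping adjacent positions."""
    word = [i + j - 1
            for (i, j) in sorted(crosses, key=lambda ij: (ij[0], -ij[1]))]
    w = list(range(1, n + 1))
    for a in word:
        w[a - 1], w[a] = w[a], w[a - 1]
    return w

def inversions(w):
    return sum(1 for i in range(len(w))
                 for j in range(i + 1, len(w)) if w[i] > w[j])

def is_reduced(crosses, n):
    # a pipe dream is reduced iff its number of crosses equals the
    # inversion number of its permutation
    return inversions(pipedream_permutation(crosses, n)) == len(crosses)
```

For instance, the crosses $(1,1)$, $(1,2)$, $(2,1)$ give the reduced word $s_2 s_1 s_2$ for the longest element of $S_3$.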
\begin{figure} \centering \begin{tikzpicture} \node at (0,0) {\young(:\mbox{$\bullet$}\hfil,\mbox{$\bullet$}\x\hfil\hfil,\hfil\mbox{$\bullet$}\hfil\mbox{$\bullet$},:\hfil\hfil\mbox{$\bullet$},:\mbox{$\bullet$}\x)}; \node at (2.8975,-0.005) {\young(:\hfil\hfil,\hfil\hfil\hfil\hfil,\hfil\hfil\hfil\hfil,:\hfil\hfil\hfil,:\hfil\hfil)}; \content{0.475}{(1.95, 1.1625)}{% 0/0/\mbox{$\bullet$},1/0/\mbox{$\bullet$},3/0/\mbox{$\bullet$},4/0/\mbox{$\bullet$},5/0/\mbox{$\bullet$},6/0/\mbox{$\bullet$},% 0/1/\mbox{$\bullet$},1/1/\mbox{$\bullet$},4/1/\mbox{$\bullet$},5/1/\mbox{$\bullet$},% 1/2/\mbox{$\bullet$},3/2/\mbox{$\bullet$},4/2/\mbox{$\bullet$},% 0/3/\mbox{$\bullet$},3/3/\mbox{$\bullet$},% 0/4/\mbox{$\bullet$},1/4/\mbox{$\bullet$},2/4/\mbox{$\bullet$},% 0/5/\mbox{$\bullet$},1/5/\mbox{$\bullet$},% 0/6/\mbox{$\bullet$},% 2/0/+,2/1/+,3/1/+,0/2/+,% 2/2/+,1/3/+,2/3/+}% % \node at (6.8975,-0.005) {\young(:\hfil\hfil,\hfil\hfil\hfil\hfil,\hfil\hfil\hfil\hfil,:\hfil\hfil\hfil,:\hfil\hfil)}; \tpipedream{0.475}{(5.95, 0.6875)}{% 0/0/black/black,1/0/black/black,3/0/black/black,4/0/black/black,5/0/black/black,6/0/black/white,% 0/1/black/black,1/1/black/black,4/1/black/black,5/1/black/white,% 1/2/black/black,3/2/black/black,4/2/black/white,% 0/3/black/black,3/3/black/white,% 0/4/black/black,1/4/black/black,2/4/black/white,% 0/5/black/black,1/5/black/white,% 0/6/black/white% }% \cpipedream{0.475}{(5.95, 0.6875)}{% 2/0/black/black,2/1/black/black,3/1/black/black,0/2/black/black,% 2/2/black/black,1/3/black/black,2/3/black/black}% \content{0.475}{(5.95, 1.6375)}{% 0/0/$1$,1/0/$2$,2/0/$3$,3/0/$4$,4/0/$5$,5/0/$6$,6/0/$7$, -1/1/$1$,-1/2/$2$,-1/3/$6$,-1/4/$4$,-1/5/$7$,-1/6/$5$,-1/7/$3$}% \end{tikzpicture} \caption{a maximal filling and the associated pipe dream.} \label{fig:pipe-dream-filling} \end{figure} One may notice that the permutation associated with the pipe dream so constructed depends somewhat on the embedding of the polyomino into the quarter plane. 
Although one can check that this dependence is not substantial for what is to follow, we will assume for simplicity that the top row and the left-most column of the polyomino have index $1$, and that indices increase from top to bottom and from left to right.

Even without the knowledge that the pipe dream is reduced, we can speak of chute moves applied to fillings in $\Set F_{01}^{ne}(M, k)$. However, a priori it is not clear under which conditions the result of such a move is again a filling in $\Set F_{01}^{ne}(M, k)$. In particular, we have to deal with the fact that under this identification all cells outside $M$ are also filled with \emph{elbow joints}, corresponding to \emph{ones}. Of course, to determine the set of north-east chains we have to consider the original filling and the boundary of $M$, and \emph{disregard} elbow joints outside of $M$.

Similar to the approach of Nantel Bergeron and Sara Billey, we will consider two special fillings $D_{top}(M, k)$ and $D_{bot}(M, k)$. These will turn out to be the maximal and the minimal element of the poset on $\Set F_{01}^{ne}(M, k)$ in which one filling is smaller than another if it can be obtained by applying chute moves to the latter. Figure~\ref{fig:top-bot} displays an example of the following construction:
\begin{dfn}
  Let $M$ be a moon polyomino and $k\geq0$. Then $D_{top}(M, k)\in\Set F_{01}^{ne}(M, k)$ is obtained by putting ones into all cells that can be covered by a rectangle of size at most $k\times k$ that is completely contained in the moon polyomino and touches the boundary of $M$ with its lower-left corner.

  Similarly, $D_{bot}(M, k)\in\Set F_{01}^{ne}(M, k)$ is obtained by putting ones into all cells that can be covered by a rectangle of size at most $k\times k$ that is completely contained in the moon polyomino and touches the boundary of $M$ with its upper-right corner.
\end{dfn}

\begin{figure}[h]
\begin{equation*}
\young(:::\mbox{$\bullet$}\x\hfil\hfil,%
::\mbox{$\bullet$}\x\hfil\hfil\hfil,%
::\mbox{$\bullet$}\x\hfil\hfil\hfil,%
\mbox{$\bullet$}\x\mbox{$\bullet$}\hfil\hfil\hfil\hfil\hfil,%
\mbox{$\bullet$}\x\mbox{$\bullet$}\hfil\hfil\hfil\hfil\hfil,%
:\mbox{$\bullet$}\x\mbox{$\bullet$}\x\hfil\hfil\mbox{$\bullet$},%
:\mbox{$\bullet$}\x\mbox{$\bullet$}\x\mbox{$\bullet$}\x\mbox{$\bullet$},%
:::\mbox{$\bullet$}\x\mbox{$\bullet$}\x)%
\quad
\young(:::\mbox{$\bullet$}\x\mbox{$\bullet$}\x,%
::\mbox{$\bullet$}\x\mbox{$\bullet$}\x\mbox{$\bullet$},%
::\mbox{$\bullet$}\hfil\hfil\mbox{$\bullet$}\x,%
\mbox{$\bullet$}\x\hfil\hfil\hfil\mbox{$\bullet$}\x\mbox{$\bullet$},%
\mbox{$\bullet$}\x\hfil\hfil\hfil\mbox{$\bullet$}\x\mbox{$\bullet$},%
:\hfil\hfil\hfil\hfil\hfil\mbox{$\bullet$}\x,%
:\hfil\hfil\hfil\hfil\hfil\mbox{$\bullet$}\x,%
:::\hfil\hfil\mbox{$\bullet$}\x)%
\end{equation*}
\caption{The special fillings $D_{top}(M,k)$ and $D_{bot}(M,k)$ for $k=2$ of a moon polyomino.}
\label{fig:top-bot}
\end{figure}

We can now state the main theorem of this article:
\begin{thm}\label{thm:filling-dream}\sloppypar
  Let $M$ be a moon polyomino and $k\geq 0$. The set $\Set F_{01}^{ne}(M, k, \Mat r)$ can be identified with the set of reduced pipe dreams $\Set{RC}\big(w(M, k), \Mat r\big)$ having all crosses inside of $M$, for some permutation $w(M, k)$ depending only on $M$ and $k$: replace zeros with crosses, and all cells containing ones as well as all cells not in $M$ with elbow joints.

  More precisely, the set $\Set F_{01}^{ne}(M, k)$ is an interval in the poset of reduced pipe dreams $\Set{RC}\big(w(M, k)\big)$, with maximal element $D_{top}(M, k)$ and minimal element $D_{bot}(M, k)$.
\end{thm}

As already remarked in the introduction, various versions of this theorem were proved independently by several authors using different methods.
The most general version is due to Luis Serrano and Christian Stump~\cite[Theorem~2.6]{SerranoStump2010}, whose proof employs properties of subword complexes and who thus additionally obtain many interesting properties of the simplicial complex of $0$-$1$-fillings. The advantage of our approach using chute moves is that it demonstrates that $\Set F_{01}^{ne}(M, k)$ is in fact an interval in the bigger poset of reduced pipe dreams. In particular, if Conjecture~\ref{cnj:lattice} turns out to be true, then $\Set F_{01}^{ne}(M, k)$ is also a lattice. An illustration is given in Figure~\ref{fig:poset}.

Let us first state a very basic property of chute moves as applied to fillings:
\begin{lem}\label{lem:chute-moon-closure}
  Let $M$ be a moon polyomino. Chute moves and their inverses applied to a filling in $\Set F_{01}^{ne}(M, k)$ produce another filling in $\Set F_{01}^{ne}(M, k)$ whenever all zero entries remain in $M$.
\end{lem}
\begin{proof}
  We only have to check that chain lengths are preserved, which is not hard.
\end{proof}

Most of what remains of this section is devoted to proving that there is precisely one filling in $\Set F_{01}^{ne}(M, k)$ that does not admit a chute move such that the result is again in $\Set F_{01}^{ne}(M, k)$, namely $D_{bot}(M, k)$, and precisely one filling that does not admit an inverse chute move with the same property, namely $D_{top}(M, k)$. Although the strategy itself is very simple, the details turn out to be quite delicate. Thus we split the proof into a few auxiliary lemmas.

Let us fix $k$, a moon polyomino $M$, and a maximal filling $D\in\Set F_{01}^{ne}(M, k)$ different from $D_{bot}(M, k)$. We will then explicitly locate a chutable rectangle. Throughout the proof, maximality of the filling will play a crucial role.
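On small examples, the chain condition appearing in Lemma~\ref{lem:chute-moon-closure} can be verified by brute force. The Python sketch below is ours, not part of the cited works; it makes one assumption explicit, namely that a north-east chain of a $0$-$1$-filling is a set of $1$-cells increasing strictly towards the north-east whose bounding rectangle lies entirely inside the moon polyomino, with rows indexed from top to bottom.

```python
from itertools import combinations

def longest_ne_chain(M, ones):
    """Brute-force length of the longest north-east chain of a 0-1-filling.
    M is the set of (row, col) cells of the moon polyomino, rows numbered
    from top to bottom; `ones` is the set of cells containing a 1.
    Assumption: a chain increases strictly towards the north-east and its
    bounding rectangle must lie inside M."""
    best = 0
    cells = sorted(ones)
    for size in range(1, len(cells) + 1):
        for sub in combinations(cells, size):
            chain = sorted(sub, key=lambda rc: rc[1])   # west to east
            ok = all(a[0] > b[0] and a[1] < b[1]        # strictly north-east
                     for a, b in zip(chain, chain[1:]))
            if not ok:
                continue
            rows = [r for r, _ in chain]
            cols = [c for _, c in chain]
            rect = {(r, c) for r in range(min(rows), max(rows) + 1)
                           for c in range(min(cols), max(cols) + 1)}
            if rect <= M:               # bounding rectangle inside M
                best = size
    return best
```

The subset enumeration is exponential and only meant to make the definition unambiguous; for larger examples one would replace it by a polynomial-time computation.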
The first lemma is used to show that certain cells of the polyomino must be empty because otherwise the filling would contain a chain of length $k+1$: \begin{lem}[Chain induction]\label{lem:chain-induction} Consider a maximal filling of a moon polyomino. Let $\epsilon$ be an empty cell such that all cells below $\epsilon$ in the same column are empty too, except possibly those that are below the lowest cell of the column left of $\epsilon$. Assume that for \emph{each} of these cells $\delta$ there is a maximal chain for $\delta$ strictly north-east of $\delta$. Then there is a maximal chain for $\epsilon$ strictly north-east of $\epsilon$. Similarly, let $\epsilon$ be an empty cell such that all cells left of $\epsilon$ in the same row are empty too, except possibly those that are left of the left-most cell of the row below $\epsilon$. Assume that for \emph{each} of these cells $\delta$ there is a maximal chain for $\delta$ strictly north-east of $\delta$. Then there is a maximal chain for $\epsilon$ strictly north-east of $\epsilon$. \end{lem} \begin{rmk} Note that for the conclusion of Lemma~\ref{lem:chain-induction} to hold we really have to assume that \emph{all} cells below $\epsilon$ are empty: in the maximal filling for $k=1$ \begin{equation*} \young(\mbox{$\bullet$}\epsilon\mbox{$\bullet$},% \hfil\delta\mbox{$\bullet$},% \mbox{$\bullet$}\x) \end{equation*} there is a maximal chain for $\delta$ north-east of $\delta$, but no maximal chain for $\epsilon$ north-east of $\epsilon$. The following example demonstrates that it is equally necessary that the filling is maximal: \begin{equation*} \young(\mbox{$\bullet$},\epsilon\mbox{$\bullet$},\hfil\mbox{$\bullet$}) \end{equation*} \end{rmk} \begin{proof} Assume on the contrary that there is no maximal chain for $\epsilon$ north-east of $\epsilon$. Consider a maximal chain $C_\epsilon$ for $\epsilon$ that has as many elements north-east of $\epsilon$ as possible. 
Let $\delta$ be the cell in the same column as $\epsilon$, below $\epsilon$, in the same row as the top entry of $C_\epsilon$ which is south-east of $\epsilon$. By assumption, there is a maximal chain $C_\delta$ for $\delta$ north-east of $\delta$. We have to consider two cases: If the widest rectangle containing $C_\epsilon$ is not as wide as the smallest rectangle containing $C_\delta$, then the entry of $C_\epsilon$ to the left of $\delta$ would extend $C_\delta$ to a $(k+1)$-chain, which is not allowed: \begin{center} \setlength{\unitlength}{0.5cm} \begin{picture}(12,11) \put(2,0){\framebox(8,10){}} \put(10,10){$C_\epsilon$} \put(0,3){\framebox(12,5){}} \put(12,8){$C_\delta$} \put(5.5,0){\dashbox{0.3}(1,10){}} \put(5.5,3){\framebox(1,1){$\delta$}} \put(5.5,6){\framebox(1,1){$\epsilon$}} \put(4,3){\makebox(1,1){$\mbox{$\bullet$}$}} \put(3.5,1.8){\makebox(1,1){$\mbox{$\bullet$}$}} \put(2.5,1){\makebox(1,1){${.\hspace{1pt}\raisebox{2pt}{.}\hspace{1pt}\raisebox{4pt}{.}}$}} \put(7,7){\makebox(1,1){$\mbox{$\bullet$}$}} \put(7.6,8){\makebox(1,1){$\mbox{$\bullet$}$}} \put(8.5,8.5){\makebox(1,1){${.\hspace{1pt}\raisebox{2pt}{.}\hspace{1pt}\raisebox{4pt}{.}}$}} \put(7,4){\makebox(1,1){$\mbox{$\bullet$}$}} \put(8.8,5){\makebox(1,1){${.\hspace{1pt}\raisebox{2pt}{.}\hspace{1pt}\raisebox{4pt}{.}}$}} \put(11,5.8){\makebox(1,1){$\mbox{$\bullet$}$}} \end{picture} \end{center} If the smallest rectangle containing $C_\epsilon$ is at least as wide as the widest rectangle containing $C_\delta$, then we obtain a maximal chain for $\epsilon$ north-east of $\epsilon$ by induction. Let $c_\epsilon^1, c_\epsilon^2,\dots$ be the sequence of elements of $C_\epsilon$ north-east of $\epsilon$, and $c_\delta^1, c_\delta^2,\dots$ the sequence of elements of $C_\delta$ north-east of $\delta$. We will show that $c_\epsilon^i$ must be strictly north and weakly west of $c_\delta^i$, for all $i$. 
Thus, the elements $c_\epsilon^1, c_\epsilon^2,\dots$ together with the elements of $C_\delta$ outside the smallest rectangle containing $C_\epsilon$ form a maximal chain for $\epsilon$ north-east of $\epsilon$. $c_\epsilon^1$ is strictly north of $c_\delta^1$, since otherwise $C_\delta$ would be a maximal chain for $\epsilon$. $c_\epsilon^1$ cannot be strictly east of $c_\delta^1$, since in this case $c_\delta^1$ together with $C_\epsilon$ would be a $(k+1)$-chain. Suppose now that $c_\epsilon^{i-1}$ is strictly north and weakly west of $c_\delta^{i-1}$. $c_\delta^i$ cannot be strictly north-east of $c_\epsilon^{i-1}$, since this would yield a $k$-chain north-east of $\epsilon$. $c_\delta^i$ must be strictly east of $c_\epsilon^{i-1}$, since $c_\delta^i$ is strictly east of $c_\delta^{i-1}$, which in turn is weakly east of $c_\epsilon^{i-1}$ by the induction hypothesis. Thus, $c_\epsilon^{i-1}$ is weakly north and strictly west of $c_\delta^i$. $c_\epsilon^i$ cannot be strictly north-east of $c_\delta^i$, since then the elements of $C_\epsilon$ south-west of $\epsilon$ together with the elements $c_\delta^1,\dots,c_\delta^i$ and $c_\epsilon^i,c_\epsilon^{i+1}, \dots$ would form a $(k+1)$-chain. Finally, $c_\epsilon^i$ must be strictly north of $c_\delta^i$, since $c_\epsilon^i$ is strictly north of $c_\epsilon^{i-1}$, which in turn is weakly north of $c_\delta^i$. \end{proof} \begin{lem}\label{lem:chutable-rectangle} Consider a maximal filling of a moon polyomino. Suppose that there is a rectangle with at least two columns and at least two rows completely contained in the polyomino, with all cells empty except the north-west, south-east and possibly the south-west corners. Then the south-west corner is indeed non-empty, {\it i.e.}, the rectangle is chutable. \end{lem} Note that we must insist that the south-west corner of the rectangle is part of the polyomino. 
Here is a maximal filling with $k=1$, where the three cells in the south-west do not form a chutable rectangle, since the south-west corner is missing:
\begin{equation*}
\young(:\mbox{$\bullet$}\hfil\mbox{$\bullet$},%
\mbox{$\bullet$}\hfil\hfil\hfil,%
\mbox{$\bullet$}\hfil\hfil\mbox{$\bullet$},%
:\mbox{$\bullet$}\x)%
\end{equation*}
However, we can weaken this assumption in a different way:
\begin{lem}\label{lem:chutable-rectangle-2}
  Consider a maximal filling of a moon polyomino. Suppose that there is a rectangle with at least two columns and at least two rows such that all cells of its top row and its right column are contained in the polyomino. Assume furthermore that all cells of the rectangle that are in the polyomino are empty except the north-west, south-east and possibly the south-west corners. Finally, suppose that there is no maximal chain for the cell in the north-east corner strictly north-east of it. Then the cell in the south-west corner is indeed in the polyomino and non-empty, {\it i.e.}, the rectangle is chutable.
\end{lem}

\begin{proof}[Proof of Lemma~\ref{lem:chutable-rectangle}]
  Suppose on the contrary that the cell in the south-west corner is empty, too. Then the situation is as in the following picture:
  \begin{center}
    \setlength{\unitlength}{0.5cm}
    \begin{picture}(6,4)%
      \put(0,0){\framebox(6,4){}} %
      \put(0,0){\framebox(1,1){$\delta$}} %
      \put(5,0){\framebox(1,1){$\mbox{$\bullet$}$}} %
      \put(5,3){\framebox(1,1){$\epsilon$}}%
      \put(0,3){\framebox(1,1){$\mbox{$\bullet$}$}} %
    \end{picture}
  \end{center}
  Since the filling is maximal but the cells $\delta$ and $\epsilon$ are empty, there must be maximal chains for these cells. The corresponding rectangles must not cover either of the two cells containing ones, since that would imply the existence of a $(k+1)$-chain. Thus, any maximal chain for $\delta$ must be strictly south-west of $\delta$, and any maximal chain for $\epsilon$ must be strictly north-east of $\epsilon$.
Since the polyomino is intersection-free, the top row of the rectangle containing the maximal chain for $\epsilon$ is either contained in the bottom row of the rectangle containing the maximal chain for $\delta$, or vice versa. In both cases, we have a contradiction.
\end{proof}

The next lemma parallels the main Lemma~3.6 in the article by Nantel Bergeron and Sara Billey~\cite{MR1281474}:
\begin{lem}\label{lem:two-column-chute}
  Consider a maximal filling of a moon polyomino. Suppose that there is a cell $\gamma$ containing a $1$ with an empty cell $\epsilon$ in the neighbouring cell to its right, such that there are at least as many cells above $\gamma$ as above $\epsilon$. Then the filling contains a chutable rectangle.

  Similarly, suppose that there is a cell $\gamma$ containing a $1$ with an empty cell $\epsilon$ in the neighbouring cell below it, such that there are at least as many cells right of $\gamma$ as right of $\epsilon$. Then the filling contains a chutable rectangle.
\end{lem}
\begin{proof}
  Suppose that all of the cells in the column containing $\epsilon$ that are below $\epsilon$ and weakly above the bottom cell of the column containing $\gamma$ are empty. Let $\delta$ be the lowest cell in this region. There must then be a maximal chain for $\delta$ that is north-east of $\delta$. By Lemma~\ref{lem:chain-induction}, we conclude that there is also a maximal chain for $\epsilon$ north-east of $\epsilon$. However, then the $1$ in the cell left of $\epsilon$ together with this chain yields a $(k+1)$-chain, since the rectangle containing the maximal chain for $\epsilon$ extends by hypothesis to the column left of $\epsilon$.

  This contradiction shows that not all cells in this region can be empty. We can thus apply Lemma~\ref{lem:chutable-rectangle} to the following rectangle: the south-east corner is the top non-empty cell below $\epsilon$, and the north-west corner is the lowest cell containing a $1$ in the column of $\gamma$ strictly above the chosen south-east corner.
\end{proof}

Finally, the main statement follows from a careful analysis of fillings different from $D_{bot}(M, k)$, repeatedly applying the previous lemmas to exclude obstructions to the existence of a chutable rectangle:
\begin{thm}\label{thm:maximal-fillings-chutable}
  Any maximal filling other than $D_{bot}(M,k)$ admits a chute move such that the result is again a filling of $M$. Any maximal filling other than $D_{top}(M,k)$ admits an inverse chute move such that the result is again a filling of $M$.
\end{thm}
\begin{proof}
  Suppose that all cells in the top-right quarter of $M$ that contain a $1$ in $D_{bot}(M,k)$ also contain a $1$ in the filling $F$ at hand. It follows that all cells that are empty in $D_{bot}(M,k)$ are empty in $F$, too, because there is a maximal chain for each of them. Thus, in this case $F=D_{bot}(M,k)$.

  Otherwise, consider the set of left-most cells in the top-right quarter that contain a $1$ in $D_{bot}(M,k)$ but are empty in $F$, and among those the top cell, $\epsilon$. If its left or lower neighbour contains a $1$, we can apply Lemma~\ref{lem:two-column-chute} and are done. Otherwise, we have to find a rectangle as in the hypothesis of Lemma~\ref{lem:chutable-rectangle}. The difficulty in this undertaking is to prove that the lower-left corner is indeed part of the polyomino.
To ease the understanding of the argument, we will frequently refer to the following sketch: \begin{center} \setlength{\unitlength}{0.5cm} \begin{picture}(21,12) \put(0,9){\line(1,0){2}} \put(2,9){\line(0,1){1}} \put(2,10){\line(1,0){1.5}} \multiput(3.5,10)(0.5,0){35}{\line(1,0){0.1}} \put(21,10.1){$R$} % \put(3.8,10.3){{.\hspace{1pt}\raisebox{2pt}{.}\hspace{1pt}\raisebox{4pt}{.}}} \put(5,11){\line(1,0){1}} \put(6,11){\line(0,1){1}} \multiput(6,0)(0,0.5){24}{\line(0,1){0.25}} \multiput(6,12)(0.5,0){8}{\line(1,0){0.25}} \put(1,7){\framebox(1,1){$\mbox{$\bullet$}^\alpha$}} \put(2,7){$\overbrace{\makebox(4,1){}}^\ell$} \put(1,7){\dashbox{0.3}(15,1){}} \put(10,7){\framebox(1,1){$\epsilon$}} \put(10,8){\framebox(4,4){$k\times k$}} \put(10,8){\makebox(1,1){$\mbox{$\bullet$}$}} \put(10,11){\makebox(1,1){$\mbox{$\bullet$}$}} \put(13,8){\makebox(1,1){$\mbox{$\bullet$}$}} \put(13,11){\makebox(1,1){$\mbox{$\bullet$}$}} % \put(10,0){\framebox(1,1){$\mbox{$\bullet$}^\beta$}} \put(10,0){\dashbox{0.3}(1,7){}} % \put(3,5){\framebox(1,1){$\mbox{$\bullet$}^{\alpha'}$}} \put(12,2){\framebox(1,1){$\mbox{$\bullet$}^{\beta'}$}} \put(3,2){\framebox(1,1){$\omega$}} \put(12,5){\framebox(1,1){$\delta$}} % \put(16,7){\framebox(4.7,3.7){$X$}} \put(16,7){\makebox(1,1){$\mbox{$\bullet$}$}} \put(16,9.7){\makebox(1,1){$\mbox{$\bullet$}$}} \put(19.7,7){\makebox(1,1){$\mbox{$\bullet$}$}} \put(19.7,9.7){\makebox(1,1){$\mbox{$\bullet$}$}} \put(14,8){\makebox(1,1){$\mbox{$\bullet$}$}} \put(15,8){\makebox(1,1){$\mbox{$\bullet$}$}} \put(14,9.7){\makebox(1,1){$\mbox{$\bullet$}$}} \put(15,9.7){\makebox(1,1){$\mbox{$\bullet$}$}} \end{picture} \end{center} By construction, there is a $k\times k$ square filled with ones just above $\epsilon$, and there is no $k$-chain strictly north-east of $\epsilon$. 
This implies in particular that the top cell in the left-most column of the polyomino must be lower than the top row of the $k\times k$ square, because otherwise there could not be any maximal chain for $\epsilon$. By Lemma~\ref{lem:chain-induction}, there must therefore be a non-empty cell left of $\epsilon$, which we label $\alpha$, and a non-empty cell below $\epsilon$, which we label $\beta$.

Note that there may be cells to the right of $\epsilon$, in the same row, that are non-empty. However, we can assume that to the right of the first such cell all other cells in this row are non-empty, too, because otherwise we could apply Lemma~\ref{lem:two-column-chute}.

We can now construct a chutable rectangle: let $\beta'$ be the top cell containing a $1$ below an empty cell weakly to the right of $\epsilon$, and if there are several, the left-most. Also, let $\alpha'$ be the lowest among the right-most cells containing a $1$ that are weakly below $\alpha$ but strictly above $\beta'$. Let $\delta$ be the cell in the same row as $\alpha'$ and the same column as $\beta'$. Let $\omega$ be the cell in the same column as $\alpha'$ and the same row as $\beta'$ -- a priori, however, we do not know that $\omega$ is a cell in the polyomino. We then apply Lemma~\ref{lem:chutable-rectangle-2} to the rectangle defined by $\alpha'$ and the first non-empty cell to the right of $\omega$, in the same row.

To achieve our goal, we show that there cannot be a maximal chain for $\delta$ north-east of $\delta$. Suppose on the contrary that there is such a chain. At least its top-right element must be in a row (denoted $R$ in the sketch) strictly above the top cell of the column containing $\alpha'$: otherwise, $\alpha'$ together with this chain would form a $(k+1)$-chain. In the sketch, the non-empty cells that are implied are indicated by the rectangle denoted $X$, which must be of size at least $k\times k$.
If the size of the region to the right of $\alpha$ indicated by $\ell$ in the sketch is $0$, we let $\sigma$ be the bottom-left cell of a maximal chain for $\epsilon$. Otherwise, define $\sigma$ to be the bottom-left cell of a maximal chain for the right neighbour of $\alpha$. In both cases, $\sigma$ must be in a column weakly west of $\alpha$ -- in the latter case because there are fewer than $k$ cells above the right neighbour of $\alpha$. By intersection-freeness applied to the column containing $\sigma$ and the columns containing cells of $X$, the latter columns must all extend at least down to the row containing $\sigma$. But in this case the maximal chain with bottom-left cell $\sigma$ can be extended to a $(k+1)$-chain using the cells in $X$. We have shown that a maximal chain for $\delta$ must have some elements south-east of $\delta$. We can now apply Lemma~\ref{lem:chutable-rectangle-2}. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:filling-dream}] All pipe dreams in $\Set{RC}(w)$ contained in $M$ are maximal $0$-$1$-fillings of $M$, since they can be generated by applying sequences of chute moves to $D_{top}(M,k)$. Since we can apply chute moves to any maximal $0$-$1$-filling of $M$ except $D_{bot}(M,k)$, all such fillings arise in this fashion. (Note that if the pipe dream associated with some filling is not reduced, applying chute moves eventually exhibits that the filling was not maximal.) Together with Lemma~\ref{lem:chute-moon-closure}, this implies that all fillings in $\Set F_{01}^{ne}(M, k)$ have the same associated permutation. Note that this procedure implies, as a by-product, that all maximal $0$-$1$-fillings of $M$ have the same number of entries equal to zero, {\it i.e.}, the simplicial complex of $0$-$1$-fillings is pure. 
\end{proof} \section{Applying the Edelman-Greene correspondence} \label{sec:Edelman-Greene} Using the identification described in the previous section, we can apply a correspondence due to Paul Edelman and Curtis Greene~\cite{MR871081}, that associates pairs of tableaux to reduced factorisations of permutations. This in turn will yield the desired bijective proof of Jakob Jonsson's result at least for stack polyominoes. The main result of this section was obtained for Ferrers shapes earlier by Luis Serrano and Christian Stump~\cite{SerranoStump2010} using the same proof strategy. For stack polyominoes the description of the $P$-tableau is different, thus we believe it is useful to repeat the arguments here. The following theorem is a collection of results from Paul Edelman and Curtis Greene~\cite{MR871081}, Richard Stanley~\cite{MR782057} and Alain Lascoux and Marcel-Paul Sch\"utzenberger~\cite{MR686357}, and describes properties of the \Dfn{Edelman-Greene} correspondence: \begin{thm}\label{thm:edelman-greene} Let $w$ be a permutation and $s_i$ be the elementary transposition $(i, i+1)$. Consider pairs of words $(u,v)$ of the same length $\ell$, such that $s_{v_1},s_{v_2},\dots,s_{v_\ell}$ is a reduced factorisation of $w$ and $u_i\leq u_{i+1}$, with $u_i=u_{i+1}$ only if $v_i>v_{i+1}$. There is a bijection between such pairs of words and pairs $(P, Q)$ of Young tableaux of the same shape, such that $P$ is column and row strict and whose reading word is a reduced factorisation of $w$, and such that the transpose of $Q$ is semistandard. Moreover, if $w$ is vexillary, {\it i.e.}, $2143$-avoiding, the tableau $P$ is the same for all reduced factorisations of $w$. \end{thm} This correspondence can be defined via row insertion. We insert a letter $x$ into row $r$ of a tableau $P$ whose last letter is different from $x$ as follows: if $x$ is (strictly) greater than all letters in row $r$, we just append $x$ to row $r$. 
If row $r$ contains both the letters $x$ and $x+1$ we insert $x+1$ into row $r+1$. Otherwise, let $y$ be the smallest letter in row $r$ that is strictly greater than $x$, replace $y$ in row $r$ by $x$ and insert $y$ into row $r+1$. We can now construct the pair of tableaux $(P, Q)=(P_\ell, Q_\ell)$ from a pair of words $(u, v)$ as in the statement of Theorem~\ref{thm:edelman-greene}: let $P_0$ and $Q_0$ be empty tableaux. Insert the letter $v_i$ into the first row of $P_{i-1}$ to obtain $P_i$, and place the letter $u_i$ into the cell of $Q_i$ determined by the condition that $P_i$ and $Q_i$ have the same shape. It turns out that the permutations associated to moon polyominoes are indeed vexillary: \begin{prop}\label{prop:vexillary} For any moon-polyomino $M$ and any $k$ the permutation $w(M, k)$ is vexillary. \end{prop} \begin{rmk} There are vexillary permutations which do not correspond to moon polyominoes. For example, the only two reduced pipe dreams for the permutation $4,2,5,1,3$ are \begin{equation*} \begin{tikzpicture} \content{0.475}{(0, 0)}{% 0/0/+,1/0/+,2/0/+,3/0/\mbox{$\bullet$},4/0/\mbox{$\bullet$},% 0/1/+,1/1/\mbox{$\bullet$},2/1/+,3/1/\mbox{$\bullet$},% 0/2/+,1/2/\mbox{$\bullet$},2/2/\mbox{$\bullet$},% 0/3/\mbox{$\bullet$},1/3/\mbox{$\bullet$},% 0/4/\mbox{$\bullet$}}% \content{0.475}{(0, 0.475)}{% 0/0/$1$,1/0/$2$,2/0/$3$,3/0/$4$,4/0/$5$, -1/1/$4$,-1/2/$2$,-1/3/$5$,-1/4/$1$,-1/5/$3$}% \end{tikzpicture} \quad\raisebox{40pt}{\text{and}}\quad \begin{tikzpicture} \content{0.475}{(0, 0)}{% 0/0/+,1/0/+,2/0/+,3/0/\mbox{$\bullet$},4/0/\mbox{$\bullet$},% 0/1/+,1/1/\mbox{$\bullet$},2/1/\mbox{$\bullet$},3/1/\mbox{$\bullet$},% 0/2/+,1/2/+,2/2/\mbox{$\bullet$},% 0/3/\mbox{$\bullet$},1/3/\mbox{$\bullet$},% 0/4/\mbox{$\bullet$}}% \content{0.475}{(0, 0.475)}{% 0/0/$1$,1/0/$2$,2/0/$3$,3/0/$4$,4/0/$5$, -1/1/$4$,-1/2/$2$,-1/3/$5$,-1/4/$1$,-1/5/$3$}% \end{tikzpicture} \end{equation*} \end{rmk} \begin{proof} It is sufficient to prove the claim for $k=0$, since the empty 
cells in the filling $D_{top}(M, k)$ for any $k$ again form a moon polyomino. Thus, suppose that the permutation associated to $M$ is not vexillary. Then we have indices $i<j<k<\ell$ such that $w(j)<w(i)<w(\ell)<w(k)$. It follows that the pipes entering in columns $i$ and $j$ from above cross, and so do the two pipes entering in columns $k$ and $\ell$, and thus correspond to cells of the moon polyomino. Since any two cells in the moon polyomino can be connected by a path of neighbouring cells changing direction at most once, there is a third cell where either the pipes entering from $i$ and $\ell$ or from $j$ and $k$ cross, which is impossible. \end{proof} \begin{thm}[for Ferrers shapes, Luis Serrano and Christian Stump \cite{SerranoStump2010}]\label{thm:ne-se} Consider the set $\Set F_{01}^{ne}(S, k, \Mat r)$, where $S$ is a stack polyomino. Let $\mu_i$ be the number of cells the $i$\textsuperscript{th} row of $S$ is indented to the right, and suppose that $\mu_1=\dots =\mu_k=\mu_{k+1}=0$. Let $u$ be the word $1^{\Mat r_1}, 2^{\Mat r_2},\dots$ and let $v$ be the reduced factorisation of $w$ associated to a given pipe dream. Then the Edelman-Greene correspondence applied to the pair of words $(u, v)$ induces a bijection between $\Set F_{01}^{ne}(S, k, \Mat r)$ and the set of pairs $(P, Q)$ of Young tableaux satisfying the following conditions: \begin{itemize} \item the common shape of $P$ and $Q$ is the multiset of column heights of the empty cells in $D_{top}(S, k)$, \item the first row of $P$ equals $(k+1, k+2+\mu_{k+2}, k+3+\mu_{k+3},\dots)$, and the entries in columns are consecutive, \item $Q$ has type $\{1^{\Mat r_1}, 2^{\Mat r_2},\dots\}$, and entries in column $i$ are at most $i+k$. \end{itemize} Thus, the common shape of $P$ and $Q$ encodes the row lengths of $S$, the entries of the first row of $P$ encode the left border of $S$, and the entries of $Q$ encode the filling. 
\end{thm} \begin{rmk} In particular, this theorem implies an explicit bijection between the sets $\Set F_{01}^{ne}(S_1, k, \Mat r)$ and $\Set F_{01}^{ne}(S_2, k, \Mat r)$, given that the multisets of column heights of $S_1$ and $S_2$ coincide. Curiously, the most natural generalisation of the above theorem to moon polyominoes is not true. Namely, one may be tempted to replace the condition on $Q$ by requiring that the entries of $Q$ are between $Q_{top}$ and $Q_{bot}$ component-wise. However, this fails already for $k=1$ and the shape \begin{equation*} \Yvcentermath0 \young(:\hfil\hfil,% :\hfil\hfil,% \hfil\hfil\hfil,% \hfil\hfil\hfil)\,\,, \end{equation*} with $P=\young(345,5)$, $Q_{top}=\young(123,3)$ and $Q_{bot}=\young(234,4)$. In this case, the tableau $Q=\young(124,3)$ has preimage \begin{equation*} \Yvcentermath0 \young(:\mbox{$\bullet$}\hfil,% :\mbox{$\bullet$}\x\hfil,% \mbox{$\bullet$}\hfil\mbox{$\bullet$},% \mbox{$\bullet$}\hfil\mbox{$\bullet$}). \end{equation*} \end{rmk} \begin{rmk} One might hope to prove Conjecture~\ref{cnj:lattice} by applying the Edelman-Greene correspondence, and checking that the poset is a lattice on the tableaux. However, at least for the natural component-wise order on tableaux, the correspondence is not order preserving, not even for the case of Ferrers shapes. \end{rmk} \begin{proof} In view of Proposition~\ref{prop:vexillary}, to obtain the tableau $P$ it is enough to insert the reduced word given by the filling $D_{top}(S, k)$ using the Edelman-Greene correspondence, which is not hard for stack polyominoes. It remains to prove that the entries in column $i$ of $Q$ are at most $i+k$ precisely if $(u,v)$ comes from a filling in $\Set F_{01}^{ne}(S, k)$. To this end, observe that the shape of the first $i$ columns of $P$ equals the shape of the tableau obtained after inserting the pair of words $\left((u_1, u_2,\dots,u_\ell),(v_1, v_2,\dots,v_\ell)\right)$, where $\ell$ is such that $u_\ell\leq k+i$ and $u_{\ell+1}>k+i$. 
Namely, this is the case if and only if the first $i+k+\mu_{i+k+1}$ positions of the permutation corresponding to $(v_1, v_2,\dots,v_\ell)$ coincide with those of the permutation $w$ corresponding to $v$ itself, as can be seen by considering $D_{top}(w)$, whose empty cells form again a stack polyomino. This in turn is equivalent to all letters $v_m$ being at least $k+i+1+\mu_{k+i+1}$ for $m>\ell$, {\it i.e.}, whenever the corresponding empty cell of the filling occurs in a row below the $(i+k)$\textsuperscript{th} of $S$, and thus, when it is inside $S$. \end{proof} \section*{Acknowledgements} I am very grateful to my wife for encouraging me to write this note, and for her constant support throughout. I would also like to thank Thomas Lam and Richard Stanley for extremely fast replies concerning questions about Theorem~\ref{thm:edelman-greene}. I would like to acknowledge that Christian Stump provided a preliminary version of \cite{Stump2010}. Luis Serrano and Christian Stump informed me privately that they were able to prove that all $k$-fillings of Ferrers shapes yield the same permutation $w$, however, their ideas would not work for stack polyominoes. I was thus motivated to attempt the more general case. \providecommand{\cocoa} {\mbox{\rm C\kern-.13em o\kern-.07em C\kern-.13em o\kern-.15em A}} \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{% \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2}
\section{Stratified Sampling Properties and Storage Overhead} \label{sec:analysis} In this section we prove the two properties stated in Section~\ref{sec:stratified-samples} and give the storage overhead for the Zipf distribution. \input{stratified-sample-properties.tex} \input{stratified-sample-overhead.tex} \section{System Overview} \label{sec:overview} As it is built on top of Hive~\cite{hive}, {\system} supports a hybrid programming model that allows users to write SQL-style declarative queries with custom user-defined functions (UDFs). In addition, for aggregation queries (\ie \texttt{AVG, SUM, PERCENTILE} etc.), users can annotate queries with either a maximum error or maximum execution time constraint. Based on these constraints, {\system} selects an appropriately sized data sample at runtime on which the query operates (see~\xref{sec:example} below for an example). Specifically, to specify an error bound, the user supplies a bound of the form $(\epsilon,C)$, indicating that the query should return an answer that is within $\pm\epsilon$ of the true answer with a confidence $C$. As an example, suppose we have a table {\it Sessions}, storing the sessions of users browsing a media website, with five columns: \emph{Session}, \emph{Genre}, \emph{OS} (running on the user's device), \emph{City}, and \emph{URL} (of the site visited by the user). Then the query: {\small \begin{verbatim} SELECT COUNT(*) FROM Sessions WHERE Genre = `western' GROUP BY OS ERROR WITHIN 10% AT CONFIDENCE 95% \end{verbatim} } \noindent will return the number of sessions looking at media from the ``western'' \emph{Genre} for each OS to within a relative error of $\pm 10\%$ at a $95\%$ confidence level. Users can also specify absolute errors. Alternatively, users can instead request a time bound. 
For example, the query: {\small \begin{verbatim} SELECT COUNT(*), RELATIVE ERROR AT 95% CONFIDENCE FROM Sessions WHERE Genre = `western' GROUP BY OS WITHIN 5 SECONDS \end{verbatim} } \noindent will return the most accurate answer it can compute within $5$ seconds, and will report the estimated count along with an estimate of the relative error at $95\%$ confidence. This enables a user to perform rapid exploratory analysis on massive amounts of data, wherein she can progressively tweak the query bounds until the desired accuracy is achieved. \subsection{Settings and Assumptions} \label{sec:assumptions} In this section, we discuss several assumptions we made in designing \system{}. \noindent \textbf{Queries with Joins.} Currently, \system~supports two types of joins. (i) Arbitrary joins\footnote{Note that this does not contradict the theoretical results on the futility of uniform sampling for join queries~\cite{Chaudhuri:1999}, since \system~employs stratified samples for joins.} are allowed (self-joins or joins between two tables) as long as there is a stratified sample on one of the join tables that contains the join key in its column-set\footnote{In general, \system{} supports arbitrary $k$-way joins (i.e. joins between $k$ tables) as long as there are at least $k-1$ stratified samples (each corresponding to one of the join operands) that all contain the join key in their column-set.}. (ii) In the absence of any suitable stratified sample, the join is still allowed as long as one of the two tables fits in memory (since \system~does not sample tables that fit in memory). The latter is, however, more common in practice, as data warehouses typically consist of one large de-normalized ``fact'' table (\eg ad impressions, click streams, pages served) that may need to be joined with other ``dimension'' tables using foreign keys. Dimension tables (\eg representing customers, media, or locations) are often small enough to fit in the aggregate memory of cluster nodes. 
\noindent \textbf{Workload Characteristics.} Since our workload is targeted at ad-hoc queries, rather than assuming that exact queries are known a priori, we assume that the \emph{query templates} (\ie the set of columns used in {\tt WHERE} and {\tt GROUP-BY} clauses) remain fairly stable over time. We make use of this assumption when choosing which samples to create. This assumption has been empirically observed in a variety of real-world production workloads~\cite{rope, recurring-scope} and is also true of the query trace we use for our primary evaluation (a $2$-year query trace from Conviva Inc.). We do not, however, assume any prior knowledge of the specific values or predicates used in these clauses. Note that, although {\system} creates a set of stratified samples based on past query templates, at runtime, it can still use the set of available samples to answer any query, even if it is not from one of the historical templates. In Section~\ref{sec:optimal-view-creation}, we show that our optimization framework takes into account the distribution skew of the underlying data in addition to templates, allowing it to perform well even when presented with previously unseen templates. \noindent \textbf{Closed-Form Aggregates.} In this paper, we focus on a small set of aggregation operators: {\tt COUNT}, {\tt SUM}, {\tt MEAN}, and {\tt MEDIAN/QUANTILE}. We estimate the error of these functions using standard closed-form error estimates (see Table~\ref{tab:closedform}). However, using techniques proposed in~\cite{kai-paper}, closed-form estimates can be easily derived for any combination of these basic aggregates, as well as for any algebraic function that is \emph{mean-like and asymptotically normal} (see~\cite{kai-paper} for formal definitions). \noindent \textbf{Offline Sampling.} {\system} computes samples of input data and reuses them across many queries. 
One challenge with any system like \system{} based on offline sampling is that there is a small but non-zero probability that a given sample is non-representative of the true data, \eg that it substantially over- or under-represents the frequency of some value in an attribute compared to the actual distribution, such that a particular query $Q$ may not satisfy the user-specified error target. Furthermore, because we do not generate new samples for each query, the error target will not be met no matter how many times $Q$ is asked, meaning the system can fail to meet the user-specified confidence bounds for $Q$. Were we to generate a new sample for every query (\ie perform {\it online} sampling), our confidence bounds would hold, because our error estimates ensure that the probability of such non-representative events is controlled by the user-specified confidence bound. Unfortunately, such re-sampling is expensive and would significantly impact query latency. Instead, our solution is to periodically replace samples with new ones in the background, as described in~\xref{sec:sample-maintenance}. \subsection{Architecture} \label{sec:arch} Fig.~\ref{arch} shows the overall architecture of \system. {\system} builds on the Apache Hive framework~\cite{hive} and adds two major components to it: (1) an offline sampling module that creates and maintains samples over time, and (2) a run-time sample selection module that creates an {\it Error-Latency Profile (ELP)} for ad-hoc queries. The ELP characterizes the rate at which the error (or response time) decreases (or increases) as the size of the sample on which the query operates increases. This is used to select a sample that best satisfies the user's constraints. {\system} augments the query parser, optimizer, and a number of aggregation operators to allow queries to specify constraints on accuracy or execution time. 
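The shape of the ELP is what makes this extrapolation tractable: for the closed-form aggregates discussed above, the statistical error shrinks roughly as $1/\sqrt{n}$ with the sample size $n$, so a pilot run on a small sample suffices to predict the size needed for a target error. The following sketch only illustrates this idea; it is not \system{}'s implementation, and the function name is hypothetical.

```python
import math

def required_sample_size(pilot_error, pilot_n, target_error):
    """Closed-form ELP extrapolation: assuming error ~ c / sqrt(n),
    calibrate the constant c on a pilot run over pilot_n rows, then
    solve for the sample size n that meets the target error."""
    c = pilot_error * math.sqrt(pilot_n)
    return math.ceil((c / target_error) ** 2)

# A pilot run on 10,000 rows gave an absolute error of +/-4.0;
# to reach +/-1.0 we need a sample roughly 16x larger.
n = required_sample_size(pilot_error=4.0, pilot_n=10_000, target_error=1.0)  # -> 160000
```

Halving the target error quadruples the required sample size, which is exactly the trade-off the ELP exposes to the sample selection module.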
\begin{figure}[htbp] \begin{center} \includegraphics*[width=2.5in]{figures/architecture.pdf} \vspace{-.15in} \caption{{\system} architecture.} \label{arch} \end{center} \vspace{-.2in} \end{figure} \subsubsection{Offline Sample Creation and Maintenance} \label{sec:sample-creation} This component is responsible for creating and maintaining a set of uniform and stratified samples. We use uniform samples over the entire dataset to handle queries on groups of columns with relatively uniform distributions, and stratified samples (on one or more columns) to handle queries on groups of columns with less uniform distributions. This component consists of three sub-components: \begin{asparaenum} \item \textbf{Offline Sample Creation.} Based on statistics collected from the data (\eg average row sizes, key skews, column histograms, etc.) and historical query templates, {\system} computes a set of uniform samples and multiple sets of stratified samples from the underlying data. We rely on an optimization framework described in \xref{sec:optimal-view-creation}. Intuitively, the optimization framework builds stratified samples over column(s) that are (a) most useful for the query templates in the workload, and (b) most skewed, \ie they have long-tailed distributions where rare values are more likely to be excluded by a uniform sample. \item \textbf{Sample Maintenance.} As new data arrives, we periodically update the initial set of samples. Our update strategy is designed to minimize performance overhead and avoid service interruption. A monitoring module observes overall system performance, detects any significant changes in data distribution (or workload), and triggers periodic sample replacements, as well as updates, deletions, or creations of new samples. 
\item \textbf{Storage optimization.} In addition to caching samples in memory, to maximize disk throughput, we partition each sample into many small files and leverage the block distribution strategy of HDFS~\cite{hdfs} to spread those files across the nodes in a cluster. Additionally, we optimize the storage overhead by recursively building larger samples as unions of smaller samples that are built on the same set of columns. \end{asparaenum} \subsubsection{Run-time Sample Selection} \label{sec:sample-slection} Given a query, we select an optimal sample at runtime so as to meet its accuracy or response time constraints. We do this by dynamically running the query on smaller samples to estimate the query's selectivity, error rate, and response time, and then extrapolating to a sample size that will satisfy user-specified error or response time goals. \xref{solution:selection} describes this procedure in detail. \subsection{An Example} \label{sec:example} To illustrate how {\system}~operates, consider a table derived from a log of downloads by users from a media website, as shown in Figure~\ref{fig:startified-samples-example}. The table consists of five columns: \emph{Session}, \emph{Genre}, \emph{OS}, \emph{City}, and \emph{URL}. Assume we know the query templates in the workload, and that $30\%$ of the queries had \emph{City} in their {\tt WHERE/GROUP BY} clause, $25\%$ of the queries had \emph{Genre}~{\tt AND}~\emph{City} in their {\tt WHERE/GROUP BY} clause, and so on. \begin{figure}[htbp] \begin{center} \includegraphics*[width=225pt]{figures/stratified-samples-example.pdf} \caption{An example showing the samples for a table with five columns, and a given query workload.} \label{fig:startified-samples-example} \end{center} \vspace{-.2in} \end{figure} Given a storage budget, {\system} creates several multi-dimensional and multi-resolution samples based on past query templates and the data distribution. 
These samples are organized in \emph{sample families}, where each family contains multiple samples of different granularities. One family consists of uniform samples, while the other families consist of stratified samples biased on a given set of columns. In our example, {\system} decides to create two sample families of stratified samples: one on \emph{City}, and another one on $(\emph{OS}, \emph{URL})$. Note that despite \emph{Genre} being a frequently queried column, we do not create a stratified sample on this column. This could be due to storage constraints, or because \emph{Genre} is uniformly distributed, such that queries that only use this column are already well served by the uniform sample. Similarly, {\system} does not create stratified samples on the columns $(\emph{Genre},~\emph{City})$, in this case because queries on these columns are well served by the stratified samples on the \emph{City} column. {\system} also creates several instances of each sample family, each with a different size, or {\it resolution}. For instance, {\system} may build three biased samples on the columns $(\emph{OS},~\emph{URL})$ with $1$M, $2$M, and $4$M tuples, respectively. In \xref{sec:optimal-view-creation}, we present an algorithm for optimally picking these sample families. For every query, at run time, {\system} selects the appropriate sample family and the appropriate sample resolution to answer the query, based on the user-specified error or response time bounds. In general, the columns in the {\tt WHERE/GROUP BY} clause of a query may not exactly match any of the existing stratified samples. For example, consider a query, $Q$, whose {\tt WHERE} clause is (\emph{OS}$=$\emph{'Win7'} {\tt AND} \emph{City}$=$\emph{'NY'} {\tt AND} \emph{URL}$=$'\url{www.cnn.com}'). In this case, it is not clear which sample family to use. 
To get around this problem, {\system} runs $Q$ on the smallest resolutions of the other candidate sample families, and uses these results to select the appropriate sample, as described in detail in~\xref{solution:selection}. \section{Conclusion}\label{sec:conclusion} In this paper, we presented~\system, a parallel, sampling-based approximate query engine that provides support for ad-hoc queries with error and response time constraints. \system~is based on two key ideas: (i) a multi-dimensional, multi-granularity sampling strategy that builds and maintains a large variety of samples, and (ii) a run-time dynamic sample selection strategy that uses smaller samples to estimate query selectivity and choose the best samples for satisfying query constraints. Evaluation results on real data sets and on deployments of up to $100$ nodes demonstrate the effectiveness of \system{} at handling a variety of queries with diverse error and time constraints, allowing us to answer a range of queries within $2$ seconds on $17$ TB of data with $90$--$98\%$ accuracy. \section{Evaluation}\label{evaluation} In this section, we evaluate {\system}'s performance on a $100$-node EC2 cluster using two workloads: a workload from Conviva Inc.~\cite{Conviva} and the well-known TPC-H benchmark~\cite{tpch}. First, we compare {\system} to query execution on full-sized datasets to demonstrate how even a small trade-off in the accuracy of final answers can result in orders-of-magnitude improvements in query response times. Second, we evaluate the accuracy and convergence properties of our optimal multi-dimensional, multi-granular stratified-sampling approach against both random sampling and single-column stratified-sampling approaches. Third, we evaluate the effectiveness of our cost models and error projections at meeting the user's accuracy/response time requirements. Finally, we demonstrate {\system}'s ability to scale gracefully with increasing cluster size. 
\subsection{Evaluation Setting} The Conviva and the TPC-H datasets were $17$ TB and $1$ TB (\ie a scale factor of $1000$) in size, respectively, and were both stored across $100$ Amazon EC2 extra large instances (each with $8$ CPU cores ($2.66$ GHz), $68.4$ GB of RAM, and $800$ GB of disk). The cluster was configured to utilize $75$ TB of distributed disk storage and $6$ TB of distributed RAM cache. \begin{figure*}[ht] \vspace{-.2in} \centering \subfigure[Sample Families (Conviva)]{ \includegraphics*[width=160pt]{figures/StorageOverhead-Conviva.pdf} \label{fig:storageconviva} } \subfigure[Sample Families (TPC-H)]{ \includegraphics*[width=160pt]{figures/StorageOverhead-TPCH.pdf} \label{fig:storagetpch} } \subfigure[{\system} Vs. No Sampling]{ \includegraphics*[width=160pt]{figures/qs-vs-hive-vs-shark.pdf} \label{fig:qs-vs-hive-vs-shark} } \vspace{-.1in} \caption{\ref{fig:storageconviva} and~\ref{fig:storagetpch} show the relative sizes of the set of stratified sample(s) created for $50$\%, $100$\% and $200$\% storage budget on Conviva and TPC-H workloads respectively.~\ref{fig:qs-vs-hive-vs-shark} compares the response times (in log scale) incurred by Hive (on Hadoop), Shark (Hive on Spark) -- both with and without input data caching, and {\system}, on simple aggregation.} \label{fig:optimization} \vspace{-.1in} \end{figure*} \vspace{.1in} \noindent \textbf{Conviva Workload.} The Conviva data represents information about video streams viewed by Internet users. We use query traces from their SQL-based ad-hoc querying system, which is used for problem diagnosis and data analytics on a log of media accesses by Conviva users. These access logs are $1.7$ TB in size and constitute a small fraction of the data collected across $30$ days. Based on their underlying data distribution, we generated a $17$ TB dataset for our experiments and partitioned it across $100$ nodes. The data consists of a single large \emph{fact} table with $104$ columns, such as customer ID, city, media URL, genre, date, time, user OS, browser type, request response time, etc. The $17$ TB dataset has about $5.5$ billion rows. The raw query log consists of $19,296$ queries, from which we selected different subsets for each of our experiments. We ran our optimization function on a sample of about $200$ queries representing $42$ query templates. 
We repeated the experiments with different storage budgets for the stratified samples -- $50\%$, $100\%$, and $200\%$. A storage budget of $x\%$ indicates that the cumulative size of all the samples will not exceed $\frac{x}{100}$ times the original data. So, for example, a budget of $100\%$ indicates that the total size of all the samples should be less than or equal to the original data. Fig.~\ref{fig:storageconviva} shows the set of sample families that were selected by our optimization problem for the storage budgets of $50\%$, $100\%$ and $200\%$, respectively, along with their cumulative storage costs. Note that each stratified sample family has a different size due to the variable number of distinct keys in the columns on which the sample is biased. Within each sample family, each successive resolution is twice as large as the previous one, and the value of $K$ in the stratified sampling is set to $100,000$. \vspace{.1in} \noindent \textbf{TPC-H Workload.} We also ran a smaller number of experiments on TPC-H to demonstrate the generality of our results with respect to a standard benchmark. All the TPC-H experiments ran on the same $100$-node cluster, on $1$ TB of data (\ie a scale factor of $1000$). The $22$ benchmark queries in TPC-H were mapped to $6$ unique query templates. Fig.~\ref{fig:storagetpch} shows the set of sample families selected by our optimization problem for the storage budgets of $50\%$, $100\%$ and $200\%$, along with their cumulative storage costs. Unless otherwise specified, all the experiments in this paper are done with a $50\%$ additional storage budget (\ie samples could use additional storage of up to $50\%$ of the original data size). \subsection{\systemheader{} vs. No Sampling} We first compare the performance of \system{} versus frameworks that execute queries on complete data. In this experiment, we ran on two subsets of the Conviva data, with $7.5$ TB and $2.5$ TB respectively, spread across $100$ machines. 
We chose these two subsets to demonstrate some key aspects of the interaction between data-parallel frameworks and modern clusters with high-memory servers. While the smaller $2.5$ TB dataset can be completely cached in memory, datasets larger than $6$ TB in size have to be (at least partially) spilled to disk. To demonstrate the significance of sampling even for the simplest analytical queries, we ran a simple query that computed the {\tt average} of user session times, with a filtering predicate on the date column ($dt$) and a {\tt GROUP BY} on the $city$ column. We compared the response time of the full (accurate) execution of this query on Hive~\cite{hive} on Hadoop MapReduce~\cite{hadoopmr} and on Hive on Spark (called Shark~\cite{shark}) -- both with and without caching -- against its (approximate) execution on \system~with a $1\%$ error bound for each {\tt GROUP BY} key at $95\%$ confidence. We ran this query on both data sizes (\ie corresponding to $5$ and $15$ days' worth of logs, respectively) on the aforementioned $100$-node cluster. We repeated each query $10$ times, and report the average response time in Figure~\ref{fig:qs-vs-hive-vs-shark}. Note that the Y axis is log scale. In all cases, \system{} significantly outperforms its counterparts (by a factor of $10$--$100\times$), because it is able to read far less data to compute a fairly accurate answer. For both data sizes, \system{} returned the answers in a few seconds, as compared to thousands of seconds for the others. In the $2.5$ TB run, Shark's caching capabilities considerably help, bringing the query runtime down to about $112$ seconds. However, with the $7.5$ TB data size, a considerable portion of the data is spilled to disk and the overall query response time is considerably longer. 
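The per-group error bounds in this experiment follow the closed-form normal approximation of Table~\ref{tab:closedform}: for each {\tt GROUP BY} key, the sample mean carries a standard error of $s/\sqrt{n}$, scaled by the normal quantile of the requested confidence. The sketch below illustrates such a grouped estimate; it is not \system{}'s code, and the function name is made up.

```python
import math
from collections import defaultdict

Z95 = 1.96  # normal quantile for 95% confidence

def grouped_avg_with_ci(rows, key, value):
    """Estimate AVG(value) per GROUP BY key from sampled rows,
    with a closed-form 95% confidence half-width z * s / sqrt(n)."""
    groups = defaultdict(list)
    for row in rows:
        groups[row[key]].append(row[value])
    estimates = {}
    for g, vals in groups.items():
        n = len(vals)
        mean = sum(vals) / n
        # Sample variance; single-row groups get a degenerate zero
        # width (a real system would flag or widen these).
        var = sum((v - mean) ** 2 for v in vals) / (n - 1) if n > 1 else 0.0
        estimates[g] = (mean, Z95 * math.sqrt(var / n))
    return estimates
```

For instance, a group sampled as {\tt [10.0, 14.0]} yields the estimate $12.0 \pm 3.92$ at $95\%$ confidence.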
\begin{figure*}[ht] \vspace{-.2in} \centering \subfigure[Error Comparison (Conviva)]{ \includegraphics*[width=160pt]{figures/optimization-conviva.pdf} \label{fig:opt-conviva} } \subfigure[Error Comparison (TPC-H)]{ \includegraphics*[width=160pt]{figures/optimization-tpch.pdf} \label{fig:opt-tpch} } \subfigure[Error Convergence (Conviva) ]{ \includegraphics*[width=160pt]{figures/ola.pdf} \label{fig:ola-comparison} } \vspace{-.1in} \caption[]{\ref{fig:opt-conviva} and~\ref{fig:opt-tpch} compare the average statistical error per template when running a query with fixed time budget for various sets of samples.~\ref{fig:ola-comparison} compares the rates of error convergence with respect to time for various sets of samples.} \label{fig:optproblem} \vspace{-.1in} \end{figure*} \eat{ \begin{figure}[ht] \begin{center} \includegraphics*[width=225pt]{figures/qs-vs-hive-vs-shark.pdf} \caption{A comparison of response times (in log scale) incurred by Hive (on Hadoop), Shark (Hive on Spark)-- both with and without full input data caching, and {\system}, on simple aggregation.} \label{fig:qs-vs-hive-vs-shark} \end{center} \end{figure} } \subsection{Multi-Dimensional Stratified Sampling} \label{sec:multi-dimensional-stratified-samples} Next, we ran a set of experiments to evaluate the error~(\xref{sec:multi-dimension-sampling-exp}) and convergence~(\xref{sec:convergence-exp}) properties of our optimal multi-dimensional, multi-granular stratified-sampling approach against both simple random sampling, and one-dimensional stratified sampling (\ie stratified samples over a single column). For these experiments we constructed three sets of samples on both Conviva and TPC-H data with a $50\%$ storage constraint: \begin{asparaenum} \item \textbf{Multi-Dimensional Stratified Samples}. 
The sets of columns to stratify on were chosen using \system{}'s optimization framework (\xref{sec:optimal-view-creation}), restricted so that samples could be stratified on no more than $3$ columns (considering four or more column combinations caused our optimizer to take more than a minute to complete). \item \textbf{Single-Dimensional Stratified Samples}. The column to stratify on was chosen using the same optimization framework, restricted so that a sample is stratified on exactly one column. \item \textbf{Uniform Samples}. A sample containing $50\%$ of the entire data, chosen uniformly at random. \end{asparaenum} \subsubsection{Error Properties} \label{sec:multi-dimension-sampling-exp} In order to illustrate the advantages of our multi-dimensional stratified sampling strategy, we compared the average statistical error at $95\%$ confidence while running a query for $10$ seconds over the three sets of samples, all of which were constrained to be of the same size. For our evaluation using Conviva's data we used a set of $40$ queries (with $5$ unique query templates) and $17$ TB of uncompressed data on $100$ nodes. We ran a similar set of experiments on the standard TPC-H queries. The queries we chose were on the $lineitem$ table, and were modified to conform with HiveQL syntax. In Figures~\ref{fig:opt-conviva} and \ref{fig:opt-tpch}, we report results per query template, with numbers in parentheses indicating the percentage of queries with a given template. For common query templates, multi-dimensional samples produce smaller statistical errors than either one-dimensional or random samples. The optimization framework attempts to minimize expected error, rather than per-query errors, and therefore for some specific query templates single-dimensional stratified samples perform better than multi-dimensional samples. Overall, however, our optimization framework significantly improves performance versus single-column samples.
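As a toy illustration of why the stratified samples evaluated above avoid the missing-subgroup problem that uniform samples suffer on skewed data (the synthetic table, group sizes, and cap are our own choices, not the Conviva workload):

```python
import random
from collections import defaultdict

random.seed(1)
# Synthetic skewed table: group 0 is huge, later groups are increasingly rare.
rows = []
for g in range(50):
    size = max(10, 100_000 // (g + 1) ** 2)
    rows += [(g, random.gauss(100.0, 10.0)) for _ in range(size)]

def uniform_sample(rows, frac):
    # Bernoulli sampling: keep each row independently with probability frac.
    return [r for r in rows if random.random() < frac]

def stratified_sample(rows, cap):
    """Keep at most `cap` rows per group -- the role of K in S(phi, K)."""
    by_group = defaultdict(list)
    for row in rows:
        by_group[row[0]].append(row)
    out = []
    for grp in by_group.values():
        random.shuffle(grp)
        out += grp[:cap]
    return out

uni = uniform_sample(rows, 0.01)
strat = stratified_sample(rows, cap=200)
groups_in_strat = {g for g, _ in strat}
groups_in_uni = {g for g, _ in uni}
```

By construction every group survives in the stratified sample, while a $1\%$ uniform sample can drop groups with only a handful of rows entirely, which is exactly the subset error discussed earlier.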
\subsubsection{Convergence Properties} \label{sec:convergence-exp} We also ran experiments to demonstrate the convergence properties of multi-dimensional stratified samples used by \system{}. We use the same set of three samples as in \xref{sec:multi-dimensional-stratified-samples}, taken over $17$ TB of Conviva data. Over this data, we ran multiple queries to calculate average session time for a particular ISP's customers in $5$ US cities and determined the latency for achieving a particular error bound with $95\%$ confidence. Results from this experiment (Figure~\ref{fig:ola-comparison}) show that error bars from running queries over multi-dimensional samples converge orders-of-magnitude faster than random sampling, and are significantly faster to converge than single-dimensional stratified samples. \eat{ \begin{figure}[ht] \begin{center} \includegraphics*[width=225pt]{figures/ola.pdf} \caption{Comparison of Convergence Properties.} \label{fig:ola-comparison} \end{center} \end{figure} } \subsection{Time/Accuracy Guarantees} \label{sec:time-accuracy} In this set of experiments, we evaluate \system{}'s effectiveness at meeting different time/error bounds requested by the user. To test time-bounded queries, we picked a sample of $20$ Conviva queries, and ran each of them $10$ times, with a time bound from $1$ to $10$ seconds. Figure~\ref{fig:time-sla-bounds} shows the results run on the same $17$ TB data set, where each bar represents the minimum, maximum and average response times of the $20$ queries, averaged over $10$ runs. From these results we can see that \system{} is able to accurately select a sample to satisfy a target response time. Figure~\ref{fig:error-sla-bounds} shows results from the same set of queries, also on the $17$ TB data set, evaluating our ability to meet specified error constraints. In this case, we varied the requested error bound from $2\%$ to $32\%$.
The bars again represent the minimum, maximum and average errors across different runs of the queries. Note that the measured error is almost always at or below the requested error. However, as we increase the error bound, the measured error becomes closer to the bound. This is because, at higher error bounds, the selected sample is quite small and the resulting error bounds are correspondingly wider. \begin{figure*}[ht] \vspace{-.2in} \centering \subfigure[Response Time Bounds]{ \includegraphics*[width=160pt]{figures/time-sla-bounds.pdf} \label{fig:time-sla-bounds} } \subfigure[Relative Error Bounds]{ \includegraphics*[width=160pt]{figures/error-sla-bounds.pdf} \label{fig:error-sla-bounds} } \subfigure[Scaleup]{ \includegraphics*[width=160pt]{figures/scaleup.pdf} \label{fig:scaleup} } \vspace{-.1in} \caption[]{\ref{fig:time-sla-bounds} and~\ref{fig:error-sla-bounds} plot the actual vs. requested response time and error bounds in {\system}.~\ref{fig:scaleup} plots the query latency across $2$ different query workloads (with cached and non-cached samples) as a function of cluster size.} \label{fig:bounds} \vspace{-.1in} \end{figure*} \eat{ \begin{figure}[htbp] \begin{center} \includegraphics*[width=225pt]{figures/time-sla-bounds.pdf} \caption{Actual vs. requested query response time in {\system}} \label{fig:time-sla-bounds} \end{center} \end{figure} \begin{figure}[t] \begin{center} \includegraphics*[width=225pt]{figures/error-sla-bounds.pdf} \caption{Actual vs. requested query error bounds in {\system}} \label{fig:error-sla-bounds} \end{center} \end{figure} } \eat{ Figure~\ref{fig:error-vs-time-micro-bench} shows how, for a simple average operator running on the $17$ TB data set, the response time and error bounds vary as we vary the sample size. Other operators also exhibit a very similar behavior. Note that while error bars are sufficiently wider for smaller samples, the answers get fairly accurate as the sample size increases.
\eat{ \begin{figure}[t] \begin{center} \includegraphics*[width=225pt]{figures/error-vs-time-micro-bench.pdf} \caption{This figure depicts the variation of statistical error and response time with respect to sample sizes.} \label{fig:error-vs-time-micro-bench} \end{center} \end{figure} } } \subsection{Scaling Up} Finally, in order to evaluate the scalability properties of \system{} as a function of cluster size, we created two query workload suites, each consisting of $40$ unique Conviva queries. The first set (marked as $selective$) consists of highly selective queries -- i.e., those queries that only operate on a small fraction of input data. These queries occur frequently in production workloads and consist of one or more highly selective {\tt WHERE} clauses. The second set (marked as $bulk$) consists of those queries that are intended to crunch huge amounts of data. While the former set's input is generally striped across a small number of machines, the latter set of queries generally runs on data stored on a large number of machines, incurring a higher communication cost. Figure~\ref{fig:scaleup} plots the query latency for each of these workloads as a function of cluster size. Each query operates on $100n$ GB of data (where $n$ is the cluster size). So for a $10$-node cluster, each query operates on $1$ TB of data and for a $100$-node cluster each query operates on around $10$ TB of data. Further, for each workload suite, we evaluate the query latency for the case when the required samples are completely cached in RAM or when they are stored entirely on disk. Since in reality any sample will likely reside partly on disk and partly in memory, these results indicate the min/max latency bounds for any query.
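The scale-up behaviour above can be sketched with a back-of-the-envelope model. The bandwidth figures below are our own assumptions, not measurements from this experiment: since each query touches $100n$ GB on an $n$-node cluster, per-node input is constant, so the sample scan time should stay roughly flat as the cluster grows, with cached samples bounding latency from below and on-disk samples from above:

```python
# Hypothetical per-node bandwidths (GB/s); invented for illustration only.
DISK_GBPS = 0.1   # assumed sequential disk bandwidth per node
MEM_GBPS = 5.0    # assumed memory scan bandwidth per node

def scan_seconds(cluster_size, sample_frac, cached):
    """Idealized scan time: per-node share of the sample / assumed bandwidth."""
    data_gb = 100 * cluster_size * sample_frac  # sample actually read, in GB
    per_node = data_gb / cluster_size           # constant: 100 * sample_frac
    return per_node / (MEM_GBPS if cached else DISK_GBPS)

# A 1% sample at three cluster sizes, cached vs. on disk.
latencies_disk = [scan_seconds(n, 0.01, cached=False) for n in (10, 50, 100)]
latencies_mem = [scan_seconds(n, 0.01, cached=True) for n in (10, 50, 100)]
```

Under this crude model the on-disk latency is flat across cluster sizes and the cached latency is uniformly lower, mirroring the min/max bounds interpretation above; real runs additionally pay communication costs that grow with the number of machines touched.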
\eat{ \begin{figure}[htbp] \begin{center} \includegraphics*[width=225pt]{figures/scaleup.pdf} \caption{Query latency across 2 different query workloads (with cached and non-cached samples) as a function of cluster size} \label{fig:scaleup} \end{center} \end{figure} } \if{0} \begin{figure}[htbp] \begin{center} \includegraphics*[width=225pt]{figures/query-response-time-cdf.pdf} \caption{A CDF of the response time of all queries in the work-suite ) took 20 queries ran them 10x each, with the target of 5\% error connecting the means \notesameer{Optional}} \label{fig:query-response-time-cdf} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics*[width=225pt]{figures/statisticalerror-vs-samplingratio.pdf} \caption{(\notesameer{Optional}) This graph plots the statistical error ($+$) and effective sampling ratio ($X$) with respect to the query response times. This figure depicts our entire optimization space.} \label{fig:statisticalerror-vs-samplingratio} \end{center} \end{figure} \fi \section{Implementation}\label{implementation} Fig.~\ref{fig:implementation} describes the entire {\system} ecosystem. {\system} is built on top of the Hive Query Engine~\cite{hive}, supports both Hadoop MapReduce~\cite{hadoopmr} and Spark~\cite{spark} (via Shark~\cite{shark}) at the execution layer, and uses the Hadoop Distributed File System~\cite{hdfs} at the storage layer. Our implementation required changes in a few key components. We added a shim layer of {\it {\systeminitalics} Query Interface} to the HiveQL parser that enables queries with response time and error bounds. Furthermore, it detects new data input, which causes the {\it Sample Creation and Maintenance} module to create or update the set of random and multi-dimensional samples at multiple granularities as described in \xref{solution:sampling}.
We further extend the HiveQL parser to implement a {\it Sample Selection} module that re-writes the query and iteratively assigns it an appropriately sized biased or random sample as described in \xref{solution:selection}. We also added an {\it Uncertainty Propagation} module to modify the pre-existing aggregation functions summarized in Table~\ref{tab:closedform} to return error bars and confidence intervals in addition to the result. Finally, we extended the SQLite-based Hive Metastore to create the {\it {\systeminitalics} Metastore} that maintains a transparent mapping between the non-overlapping logical samples and physical HDFS data blocks as shown in Fig.~\ref{fig:sampleselection}. \begin{figure}[tbp] \begin{center} \includegraphics*[width=150pt]{figures/implementation.pdf} \caption{\system's Implementation Stack} \label{fig:implementation} \vspace{-0.3in} \end{center} \end{figure} We also extend Hive to add support for sampling from tables. This allows us to leverage Hive's parallel execution engine for sample creation in a distributed environment. Furthermore, our sample creation module optimizes block size and data placement for samples in HDFS. \eat{We extended Hive with support for sampling; this allows us to build samples by leveraging the Hive parallel query execution engine.} \if{0} In order to incur little overhead in creating and replacing samples as new data is being added to the system, \system{} has a parallel sample creation framework that creates in-place binomial samples for data stored in HDFS. This is achieved by leveraging the Hive query execution engine to create and maintain samples in {\system}. Specifically, we augmented HiveQL to add {\tt SAMPLE ON [RANDOM | Column Name(s)]} operators. The {\tt SAMPLE ON RANDOM} operator shares the same semantics as a {\tt WHERE} clause and outputs each input row with a probability $p$.
The parameterized {\tt SAMPLE ON Column Name(s)} operator takes one or more column parameters and aggregates the input data on the unique values (\emph{key}) in those column combinations. It then applies the {\tt SAMPLE ON RANDOM} operator on each unique \emph{key} to create stratified samples. \fi In \system{}, uniform samples are generally created in a few hundred seconds. This is because the time taken to create them only depends on the disk/memory bandwidth and the degree of parallelism. On the other hand, creating stratified samples on a set of columns takes anywhere between $5$ and $30$ minutes depending on the number of unique values to stratify on, which determines the number of reducers and the amount of data shuffled. \eat{Please note that using binomial sampling in place of reservoir sampling does introduce some statistical bias in the system which is appropriately corrected.} \if{0} \subsection{Storage Optimizations} \label{sec:data-partition} Query response times in \system~are usually dominated by the time to access stored samples on the disk. As such, there are two storage-related implementation questions that have a significant impact on the sample access time: (1) What is the size of the file system block?, and (2) How is a sample stored in the underlying file system? We answer these questions next in the context of an HDFS-like file system. Smaller block sizes lead to higher parallelism as HDFS can do a better job spreading a file across the nodes in a cluster. On the other hand, large blocks reduce file system overhead~\cite{namenode-pressure} and improve the disk throughput. In our implementation, we balance this trade-off by picking $3$ MB blocks, such that each block can be read in a fraction of a second. While these blocks are significantly smaller than the ones used in most HDFS deployments, our experience so far has not revealed a significant impact on the read throughput and the system scalability for our deployment.
\eat{ To answer the first question, consider a cluster consisting of $M$ disk drives, and let $R$ be the disk throughput. Then, we can read a file of size $X$ in as little as $X/(M R)$ time. This assumes that the blocks of the file are perfectly load balanced across all disks. To achieve a good level of load balancing across a large variety of file sizes we want the blocks to be as small as possible. Unfortunately, small blocks have a negative impact on disk throughput, and incur a high file system overhead. For these reasons and because they target batch workloads, the typical deployments of HDFS use block sizes ranging from $64$ MB, all the way up to $1$ GB. Unfortunately, such large blocks would significantly impact query response times. Assuming the sequential throughput of a disk is $R = 50$ MB/s, the time it takes to read a sample larger than $1$ GB is bounded below by $1.3$s for $64$ MB blocks, and as much as $20$s for $1$ GB blocks. As a result, in our deployment, we use much smaller blocks of a few MBs. While these blocks are significantly smaller than the ones used in most HDFS deployments, our experience so far has not revealed a significant impact on the read throughput and the system scalability. } We consider two answers to the second question: maintain the entire sample in a single file or partition the sample into many small files. First, assume we store the sample in a single file. Recall that a stratified sample, $S(\phi, K)$, is clustered by the values in $\phi$. In order to take advantage of this clustering, we need to know where in the file each key starts. This requires us to maintain metadata associated with each file. The alternative is to split the sample into smaller files along the key values in $\phi$. In this case, the query plan will identify the file(s) in which the keys are stored and then execute the query only on these files.
We picked the second approach because it significantly reduces the {\it {\systeminitalics} Metastore}'s complexity. \eat{ Since maintaining non-overlapping samples requires us to split a sample into multiple files anyway, in this paper we use the second approach and map each sample to many small files. In particular, we pick the size of a file to be roughly $B \times M$, where $B$ is the block size. In the best case scenario, this design allows us to read a file as fast as reading a block. This approach simplifies the design as it does not require maintaining additional metadata per file, as in the case of storing each sample in a large file. } \begin{figure}[htbp] \begin{center} \includegraphics*[width=225pt]{figures/sample2file-mapping.pdf} \caption{Example of mapping the stratified sample in Figure~\ref{fig:stratified-sample} to many small files.} \label{fig:sample2file-mapping} \end{center} \end{figure} Figure~\ref{fig:sample2file-mapping} shows an example where the sample $S(\phi, K_1)$ in Figure~\ref{fig:stratified-sample} is partitioned into small files. In particular, the lower row, corresponding to $S(\phi, K_3)$, is partitioned into files $\{F_{31}, F_{32}, F_{33}, F_{34}\}$. Similarly, the second row, which together with the first row corresponds to $S(\phi, K_2)$, is partitioned into files $\{F_{21}, F_{22}, F_{23}\}$. Finally, the last row, which together with the previous two rows corresponds to $S(\phi, K_1)$, is partitioned into $\{F_{11}, F_{12}, F_{13}, F_{14}\}$. Now consider query $Q$ whose {\tt WHERE} clause is $(\phi = x)$. If we use sample $S(\phi, K_3)$ to answer $Q$, then $Q$ needs to read only file $F_{3,2}$. If we use sample $S(\phi, K_2)$, then $Q$ will read both $F_{3,2}$ and $F_{2,2}$. Finally, if we use the largest sample, $S(\phi, K_1)$, then $Q$ will read all files that contain value $x$,~\ie~$F_{3,2}$, $F_{2,2}$, and $F_{1,2}$.
\fi \section{Introduction}\label{introduction} Modern data analytics applications involve computing aggregates over a large number of records to ``roll up'' web clicks, online transactions, content downloads, phone calls, and other features along a variety of different dimensions, including demographics, content type, region, and so on. Traditionally, such queries have been answered via sequential scans of large fractions of a database to compute the appropriate statistics. Increasingly, however, these applications demand near real-time response rates. Examples include: (i) in a search engine, recomputing what ad(s) on websites to show to particular classes of users as content and product popularity changes on a daily or hourly basis (e.g., based on trends on social networks like Twitter or real-time search histories); (ii) in a financial trading firm, quickly comparing the prices of securities to fine-grained historical averages to determine items that are under- or over-valued; or (iii) in a web service, determining the subset of users who are affected by an outage or are experiencing poor quality of service based on the service provider or region. \if{0} Modern web services generate huge amounts of data that ultimately derives its value from being analyzed in a timely manner. Companies and their analysts often explore this data to improve their products, increase their retention rate and customer engagement, and diagnose problems in their service. Examples of such exploratory queries include: \vspace{.05in} \noindent{\textbf{Root cause analysis and problem diagnosis:}} Imagine a subset of users of a video site (\eg Netflix) experience quality issues, such as a high level of buffering or long start-up times. Diagnosing such problems needs to be done quickly to avoid lost revenue. The causes can be varied: an ISP or edge cache in a certain geographic region may be overloaded, a new OS or player release may be buggy, or a particular piece of content may be corrupt.
Diagnosing such problems requires segmenting data across tens of dimensions to find the particular attributes (\eg client OS, browser, firmware, device, geolocation, ISP, CDN, content) that best characterize the users experiencing the problem. \vspace{.05in} \noindent {\textbf{Advertising and Marketing:}} Consider a business that wants to adapt its policies and decisions in near real time to maximize its revenue. This might again involve aggregating across multiple dimensions to understand how an ad performs given a particular group of users, content, site, and time of the day. Performing such analysis quickly is essential, especially when there is a change in the environment, \eg new ads, new content, new page layout. Indeed, it can lead to a material difference if one is able to re-optimize the ad placement every minute, rather than every day or week. \vspace{.05in} \noindent {\textbf{A/B testing:}} Consider an online service company that aims to optimize its business by improving the retention of its users. Often this is done by using A/B testing to experiment with anything from new products to slight changes in the web page layout, format, or colors.\eat{\footnote{A much publicized example is Google's testing with $41$ shades of blue for their home page~\cite{nytimes-ab-testing}.}} The number of combinations and changes that one can test is daunting, even for a company as large as Google. Furthermore, such tests need to be conducted carefully as they may negatively impact the user experience. Again, in this usage scenario, it is more important to identify the trend quickly than to accurately characterize the impact on every user. \vspace{.1in} \fi In these and many other analytic applications, queries are unpredictable (because the exact problem or query is not known in advance) and quick response time is essential as data is changing quickly, and the potential profit (or loss in profit in the case of service outages) is proportional to response time.
Unfortunately, the conventional way of answering such queries requires scanning the entirety of several terabytes of data. This can be quite inefficient. For example, computing a simple average over $10$ terabytes of data stored on $100$ machines can take on the order of $30-45$ minutes on Hadoop if the data is striped on disks, and up to $5-10$ minutes even if the entire data is cached in memory. This is unacceptable for rapid problem diagnosis, and frustrating even for exploratory analysis. As a result, users often employ ad-hoc heuristics to obtain faster response times, such as selecting small slices of data (\eg an hour) or arbitrarily sampling the data~\cite{minitables, qubole}. These efforts suggest that, at least in many analytic applications, users are willing to forgo accuracy to achieve better response times. In this paper, we introduce \system, a new distributed parallel approximate query-processing framework that runs on Hive/Hadoop~\cite{hive} as well as Shark~\cite{shark} (i.e., ``Hive on Spark~\cite{spark}'', which supports caching inputs and intermediate data). \system{} allows users to pose SQL-based aggregation queries over stored data, along with response time or error bound constraints. Queries over multiple terabytes of data can be answered in seconds, accompanied by meaningful error bounds relative to the answer that would be obtained if the query ran on the full data. The basic approach taken by \system~ is to precompute and maintain a carefully chosen set of random samples of the user's data, and then select the best sample(s) at runtime to answer the query, while providing error bounds using statistical sampling theory. While uniform samples provide a reasonable approximation for uniformly or near-uniformly distributed data, they work poorly for skewed distributions (\eg exponential or zipfian).
In particular, estimators over infrequent subgroups (\eg smartphone users in Berkeley, CA compared to New York, NY) converge more slowly when using uniform samples, since a larger fraction of the data needs to be scanned to produce high-confidence approximations. Furthermore, uniform samples may not contain instances of certain subgroups, leading to missing rows in the final output of queries. Instead, {\it stratified} or {\it biased} samples~\cite{sampling-book}, which over-represent the frequency of rare values in a sample, better represent rare subgroups in such skewed datasets. Therefore \system~maintains both a set of uniform samples and a set of stratified samples over different combinations of attributes. As a result, when querying rare subgroups, \system~ (i) provides faster-converging estimates (\ie tighter {\it approximation errors}), and thus lower processing times, compared to uniform samples~\cite{online-agg-mr}, and (ii) significantly reduces the number of missing subgroups in the query results (\ie {\it subset error}~\cite{subset-error}), enabling a wider range of applications (\eg more complex joins that would not be possible otherwise~\cite{Chaudhuri:1999}). However, maintaining stratified samples over all combinations of attributes is impractical. On the other hand, computing stratified samples only on columns used in past queries limits the ability to handle new ad-hoc queries. Therefore, we formulate the problem of sample creation as an optimization problem. Given a collection of past query {\it templates} (query templates contain the set of columns appearing in {\tt WHERE} and {\tt GROUP BY} clauses without specific values for constants) and their historical frequencies, we choose a collection of stratified samples with total storage costs below some user-configurable storage threshold. These samples are designed to efficiently answer any instantiation of past query templates, and to provide good coverage for future queries and unseen query templates.
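A greedy sketch of the sample-selection optimization just described follows. The templates, frequencies, sample sizes, and the benefit-per-GB heuristic itself are all our own illustrative assumptions, not the paper's actual formulation:

```python
# Hypothetical query templates (column sets) with historical frequencies.
templates = {("dt", "city"): 0.5, ("isp",): 0.3, ("dt", "isp", "city"): 0.2}
# Candidate stratified samples with made-up storage costs in GB.
candidates = {("dt", "city"): 40, ("isp",): 25,
              ("dt", "isp", "city"): 90, ("dt",): 15}
BUDGET_GB = 100  # e.g. 50% of a 200 GB table

def covers(sample_cols, template_cols):
    # A sample stratified on a superset of a template's columns can serve it.
    return set(template_cols) <= set(sample_cols)

def greedy_select(candidates, templates, budget):
    chosen, remaining, spent = [], dict(templates), 0
    while remaining:
        def gain(cols):
            return sum(f for t, f in remaining.items() if covers(cols, t))
        feasible = [c for c in candidates if c not in chosen
                    and spent + candidates[c] <= budget and gain(c) > 0]
        if not feasible:
            break
        # Pick the best frequency-weighted coverage per GB of storage.
        best = max(feasible, key=lambda c: gain(c) / candidates[c])
        chosen.append(best)
        spent += candidates[best]
        remaining = {t: f for t, f in remaining.items() if not covers(best, t)}
    return chosen, spent

chosen, spent = greedy_select(candidates, templates, BUDGET_GB)
```

On these invented inputs the greedy pass picks the cheap high-frequency samples and skips the large three-column sample that no longer fits the budget, illustrating how the storage threshold shapes which column sets get stratified.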
In this paper, we refer to these stratified samples, constructed over different sets of columns (dimensions), as \emph{multi-dimensional} samples. In addition to multi-dimensional samples, \system~also maintains \emph{multi-resolution} samples: for each multi-dimensional sample, we maintain several samples of progressively larger sizes. Given a query, \system~picks the best sample to use at runtime. Having samples of different sizes allows us to efficiently answer queries of varying complexity with different accuracy (or time) bounds, while minimizing the response time (or error). A single sample would hinder our ability to provide as fine-grained a trade-off between speed and accuracy. Finally, when the data distribution or the query load changes, \system~ refines the solution while minimizing the number of old samples that need to be discarded or new samples that need to be generated. Our approach is substantially different from related sampling-based approximate query answering systems. One line of related work is {\it Online Aggregation}~\cite{control, online-agg, online-agg-joins} (OLA) and its extensions~\cite{online-agg-mr, db-online, ola-mr-pansare}. Unlike in OLA, the pre-computation and maintenance of samples allows \system~to store each sample on disk or in memory in a way that facilitates efficient query processing (\eg clustered by primary key and/or other attributes), whereas online aggregation has to access the data in random order to provide its statistical error guarantees. Additionally, unlike OLA, \system{} has prior knowledge of the sample size(s) on which the query runs (based on its response time or accuracy requirements). This additional information both helps us better assign cluster resources (\ie degree of parallelism and input disk/memory locality), and better leverage a number of standard distributed query optimization techniques~\cite{join-comparison-in-mr}.
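The runtime choice among multi-resolution samples can be sketched as follows. All constants here (the resolutions, scan rate, and the variance estimate) are invented for illustration: error shrinks roughly as $1/\sqrt{n}$ and scan time grows roughly linearly in $n$, so for an error bound we take the smallest resolution predicted to meet it, and for a time bound the largest resolution that fits:

```python
import math

RESOLUTIONS = [100_000 * 2 ** i for i in range(8)]  # each twice the previous
ROWS_PER_SEC = 2_000_000  # assumed effective scan rate (rows/second)

def predicted_error(n, sigma, z=1.96):
    """Normal-approximation error bound for a mean over n sampled rows."""
    return z * sigma / math.sqrt(n)

def pick_for_error(target_rel_err, mean=100.0, sigma=40.0):
    # Smallest (fastest) resolution whose predicted error meets the bound.
    for n in RESOLUTIONS:
        if predicted_error(n, sigma) <= target_rel_err * mean:
            return n
    return RESOLUTIONS[-1]  # fall back to the biggest sample available

def pick_for_time(budget_sec):
    # Largest (most accurate) resolution that scans within the time budget.
    fitting = [n for n in RESOLUTIONS if n / ROWS_PER_SEC <= budget_sec]
    return max(fitting) if fitting else RESOLUTIONS[0]

n_err = pick_for_error(0.001)  # 0.1% relative error target
n_time = pick_for_time(2.0)    # 2-second time budget
```

The two selection directions are why a single sample size would force a single point on the speed/accuracy curve, while the doubling ladder of resolutions exposes the whole trade-off.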
There is some related work that proposes pre-computing (sometimes stratified) samples of input data based on past query workload characteristics~\cite{surajit-optimized-stratified, babcock-dynamic,sciborq}. As noted above, \system~computes both multi-dimensional samples (\ie samples over multiple attributes) and multi-resolution samples (\ie samples at different granularities), which no prior system does. We discuss related work in more detail in~\xref{related}. In summary, we make the following contributions: \begin{itemize*}\vspace{-.1in} \item We develop a multi-dimensional, multi-granular stratified sampling strategy that provides faster convergence, minimizes missing results in the output (\ie subset error), and provides error/latency guarantees for ad-hoc workloads. (\xref{sec:stratified-samples}, \xref{sec:convergence-exp}) \item We cast the decision of what stratified samples to build as an optimization problem that takes into account: (i) the skew of the data distribution, (ii) query templates, and (iii) the storage overhead of each sample. (\xref{sec:optimal-view-creation}, \xref{sec:multi-dimension-sampling-exp}) \item We develop a run-time dynamic sample selection strategy that uses multiple smaller samples to quickly estimate query selectivity and choose the best samples for satisfying the response time and error guarantees. (\xref{sec:select-samplefamily}, \xref{sec:time-accuracy}) \vspace{-.1in} \end{itemize*} \system{} is a massively parallel query processing system that incorporates these ideas. We validate the effectiveness of \system's design and implementation on a $100$-node cluster, using both the TPC-H benchmarks and a real-world workload derived from Conviva Inc~\cite{Conviva}. Our experiments show that \system~ can answer a range of queries within $2$ seconds on $17$ TB of data with 90--98\% accuracy.
Our results show that our multi-dimensional sampling approach, versus just using single-dimensional samples (as was done in previous work), can improve query response times by up to three orders of magnitude, and is a further factor of $2\times$ better than approaches that apply online sampling at query time. Finally, \system~is open source\footnote{\url{http://blinkdb.org}} and several on-line service companies have expressed interest in using it. Next, we describe the architecture and the major components of \system. \section*{Acknowledgements} The authors would like to thank Anand Iyer, Joe Hellerstein, Michael Franklin, Surajit Chaudhuri and Srikanth Kandula for their invaluable feedback and suggestions throughout this project. The authors are also grateful to Ali Ghodsi, Matei Zaharia, Shivaram Venkataraman, Peter Bailis and Patrick Wendell for their comments on an early version of this manuscript. Sameer Agarwal and Aurojit Panda are supported by the Qualcomm Innovation Fellowship during 2012-13. This research is also supported in part by gifts from Google, SAP, Amazon Web Services, Blue Goji, Cloudera, Ericsson, General Electric, Hewlett Packard, Huawei, IBM, Intel, MarkLogic, Microsoft, NEC Labs, NetApp, Oracle, Quanta, Splunk, VMware and by DARPA (contract \#FA8650-11-C-7136). \let\oldthebibliography=\thebibliography \let\endoldthebibliography=\endthebibliography \renewenvironment{thebibliography}[1]{% \begin{oldthebibliography}{#1}% \setlength{\parskip}{0ex}% \setlength{\itemsep}{0ex}% }% {% \end{oldthebibliography}% } {\small \bibliographystyle{abbrv} \section{Related Work} \label{related} Prior work on interactive parallel query processing frameworks has broadly relied on two different sets of ideas. One set of related work has focused on using additional resources (\ie memory or CPU) to decrease query processing time. Examples include \emph{Spark}~\cite{spark}, \emph{Dremel}~\cite{dremel} and \emph{Shark}~\cite{shark}.
While these systems deliver low-latency response times when each node has to process a relatively small amount of data (\eg when the data can fit in the aggregate memory of the cluster), they become slower as the data grows unless new resources are constantly being added in proportion. Additionally, a significant portion of query execution time in these systems involves shuffling or repartitioning massive amounts of data over the network, which is often a bottleneck for queries. By using samples, \system{} is able to scale better as the quantity of data grows. Additionally, being built on Spark, {\system} is able to effectively leverage the benefits provided by these systems while using limited resources. Another line of work has focused on providing approximate answers with low latency, particularly in database systems. Approximate Query Processing (AQP) for decision support in relational databases has been the subject of extensive research, and can either use samples or other non-sampling-based approaches, both of which we describe below. \textbf{Sampling Approaches.} There has been substantial work on using sampling to provide approximate responses, including work on stratified sampling techniques similar to ours (see~\cite{aqp-survey} for an overview). Especially relevant are: \begin{asparaenum} \item \emph{STRAT}~\cite{surajit-optimized-stratified} relies on a single stratified sample, chosen based on the exact tuples accessed by each query. In contrast, \system~uses a set of samples computed using query templates, and is thus more amenable to ad-hoc queries. \item \emph{SciBORQ}~\cite{sciborq} is a data-analytics framework designed for scientific workloads, which uses special structures, called \emph{impressions}. Impressions are biased samples where tuples are picked based on past query results. SciBORQ targets exploratory scientific analysis. In contrast to \system, SciBORQ only supports time-based constraints.
SciBORQ also does not provide any guarantees on the error margin. \item {\it Babcock et al.}~\cite{babcock-dynamic} also describe a stratified sampling technique where biased samples are built on a single column, in contrast to our multi-column approach. In their approach, queries are executed on all biased samples whose biased column is present in the query, and the union of the results is returned as the final answer. Instead, \system~runs on a single sample, chosen based on the current query. \end{asparaenum} \textbf{Online Aggregation.} Online Aggregation (OLA)~\cite{online-agg} and its successors~\cite{online-agg-mr, ola-mr-pansare} proposed the idea of providing approximate answers which are constantly refined during query execution. OLA provides users with an interface to stop execution once a sufficiently good answer is found. The main disadvantage of Online Aggregation is that it requires data to be streamed in a random order, which can be impractical in distributed systems. While~\cite{ola-mr-pansare} proposes some strategies for implementing OLA on Map-Reduce, their strategy involves significant changes to the query processor. Furthermore, {\system}, unlike OLA, can store data clustered by a primary key, or other attributes, and take advantage of this ordering during data processing. Additionally, \system~can use knowledge about sample sizes to better allocate cluster resources (parallelism/memory) and leverage standard distributed query optimization techniques~\cite{join-comparison-in-mr}. \textbf{Non-Sampling Approaches.} There has been a great deal of work on ``synopses'' for answering specific types of queries (e.g., wavelets, histograms, sketches, etc.)\footnote{Please see~\cite{aqp-survey} for a survey}. Similarly, materialized views and data cubes can be constructed to answer specific query types efficiently. While offering fast responses, these techniques require specialized structures to be built for every operator, or in some cases for every type of query, and are hence impractical when processing arbitrary queries. Furthermore, these techniques are orthogonal to our work, and \system~could be modified to use any of these techniques for better accuracy on certain types of queries, while resorting to samples on others. \section{Experimental Results}\label{results} \subsection{Synthetic Workload} To test our system, we rely on synthetic data generated by a tool and designed to have characteristics similar to those of the user access logs from Conviva Inc. Our generator produces data where some fields, for instance zip code, city and state, are closely correlated, while others, for instance the site visited and the session length, are loosely correlated. The distribution of most of the rows is derived from data found on the internet about machine and browser popularity, ISP popularity, and assumptions about the population of different cities. We believe that such synthetic data allows us to test many aspects of our system; in particular, it allows us to test the efficiency of our sample selection strategy, its behavior in the face of selective filtering clauses, the performance of the overall system, etc. In addition to testing on synthetic data, we also plan on testing our system on real, anonymized data acquired from Conviva Inc, and other sources.
\begin{table}[htdp] \caption{Fields in Synthetic Data} \begin{center} \begin{tabular}{|l|l|l|} \hline \bf{Name} & \bf{Type} & \bf{Description} \\ \hline \texttt{session\_id} & \texttt{int} & Session ID \\ \texttt{session\_start} & \texttt{double} & Start time \\ \texttt{session\_end} & \texttt{double} & End time \\ \texttt{session\_state} & \texttt{int} & HTTP status\\ \texttt{error\_code} & \texttt{int} & Error code, in case of errors\\ \texttt{user\_id} & \texttt{int} & User ID\\ \texttt{country} & \texttt{string} & User country\\ \texttt{city} & \texttt{string} & User city\\ \texttt{state} & \texttt{string} & User state\\ \texttt{zipcode} & \texttt{string} & Zipcode\\ \texttt{content\_id} & \texttt{int} & Content-ID\\ \texttt{ip\_address} & \texttt{string} & IP address\\ \texttt{provider} & \texttt{string} & Provider \\ \texttt{os\_type} & \texttt{string} & Operating System \\ \texttt{os\_version} & \texttt{string} & OS version \\ \texttt{flash\_type} & \texttt{string} & Is Flash present\\ \texttt{flash\_version} & \texttt{string} & Flash Player version\\ \hline \end{tabular} \end{center} \label{default} \end{table} In addition to this table, we use a synthetic workload generator to generate a variety of queries, and use these queries to test both the system and the effectiveness of our sample selection strategy. Our current queries are based on the kinds of generic queries one would expect to run on data structured as above, and on information from Conviva Inc about the kinds of queries they run. We plan on eventually running some of the TPC-H workloads for further testing.
\subsection{Benchmarks} \begin{figure}[] \begin{center} \includegraphics*[width=225px]{figures/ErrorBarsWithTimeForAverage.pdf} \caption{Variation of error (with a 95\% confidence interval) and query latency with respect to sample sizes for a query that calculates the average session duration: \texttt{select avg(session\_end - session\_start) from syntheticvistordata}} \label{avg} \end{center} \end{figure} \begin{figure}[] \begin{center} \includegraphics*[width=225px]{figures/count.pdf} \caption{Variation of error (with a 95\% confidence interval) and query latency with respect to sample sizes for a query that calculates the number of entries in the database: \texttt{select count(*) from syntheticvistordata}} \label{count} \end{center} \end{figure} We evaluated \system~by storing 9GB of our synthetic workload data in HDFS running across 3 Amazon EC2 large instances (7.5GB RAM, 4 EC2 Compute Units and high I/O performance). In order to compare the tradeoffs between sample size, error bars and query latency, we then created sixteen samples of varying sizes ranging from 2KB to 64MB in multiples of two. Figure~\ref{avg} plots the query latency and the error bars at a 95\% confidence level as a function of sample size for the \texttt{average} operator. For the purpose of this experiment, all the input tables were cached in memory as an RDD, and the query computed the average of the user session duration (\texttt{session\_end - session\_start}). Figure~\ref{count} similarly plots the query latency and the error bars at a 95\% confidence level as a function of sample size for the \texttt{count} operator. This query computed the number of entries in the table and also operated on in-memory samples. To evaluate the efficiency of our statistical method, we ran a few benchmarks outside of the cluster. These benchmarks were run on data drawn from a gamma distribution, from which we drew a limited-size sample.
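The statistical benchmarks that follow use the Bag of Little Bootstraps (BLB). As a rough, self-contained sketch of the procedure (illustrative Python with parameter values of our own choosing, not \system{} code): each small subsample is resampled with replacement up to the full sample size $n$, the estimator's variance is computed per subsample, and the per-subsample estimates are averaged.

```python
import random

def blb_variance_of_mean(sample, n_subsamples=5, subsample_size=200,
                         n_resamples=50):
    """Bag of Little Bootstraps sketch: estimate Var(mean) of `sample`.

    Each b-row subsample is resampled with replacement up to the full
    size n, so every resampled estimator has the statistical scale of
    an n-row sample; the per-subsample variance estimates are averaged.
    The parameter defaults here are illustrative, not BlinkDB's.
    """
    n = len(sample)
    per_subsample = []
    for _ in range(n_subsamples):
        sub = random.sample(sample, subsample_size)
        # Means of n-sized resamples drawn (with replacement) from `sub`.
        means = [sum(random.choices(sub, k=n)) / n
                 for _ in range(n_resamples)]
        mu = sum(means) / n_resamples
        per_subsample.append(sum((m - mu) ** 2 for m in means)
                             / (n_resamples - 1))
    return sum(per_subsample) / n_subsamples
```

For the variance of the mean, this estimate closely tracks the closed-form $S_n^2/n$, while only ever materializing small subsamples.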
In the graph shown in Figure~\ref{blb-perf-fig1}, we evaluated the performance of BLB in accurately computing the variance of the k-means operator when applied over a fixed-size sample. We evaluated this quality by first computing the variance of k-means over a large set of equally sized samples drawn from the underlying distribution, on each of which we ran the k-means algorithm. We then compared the variance returned by the Bag of Little Bootstraps against this baseline. As the graph shows, the error in our variance calculation decreases rapidly, and BLB is in fact an effective method for calculating the variance in the distribution of a complex function. To evaluate the quality of BLB versus a more traditional analytical approach, we ran a set of experiments comparing the standard error of the mean with both BLB and the traditional bootstrap. All the methods were used to calculate the variance of the mean for a sample drawn from the gamma distribution. As can be seen in Figure~\ref{blb-perf-fig2}, the performance of all three methods is similar, and BLB does in fact behave in a manner very similar to traditional closed-form approaches. We would have expected BLB to perform slightly better, given that the bootstrap should in theory show faster convergence than other methods; however, for the variance, the bootstrap can perform no better than the traditional closed-form version. \begin{figure}[] \begin{center} \includegraphics*[width=225px]{figures/kmeans.pdf} \caption{Evaluating BLB performance in predicting variance in K-Means} \label{blb-perf-fig1} \end{center} \end{figure} \begin{figure}[] \begin{center} \includegraphics*[width=225px]{figures/AlgorithmComparison.pdf} \caption{Evaluating BLB vs bootstrap vs standard error of mean for computing variance of mean} \label{blb-perf-fig2} \end{center} \end{figure} \subsection{Sample Maintenance}\label{sec:sample-maintenance} {\system}'s reliance on offline sampling can result in situations where a sample is not representative of the underlying data.
Since statistical guarantees are given across repeated resamplings, such unrepresentative samples can adversely affect decisions made using \system. Such problems are unavoidable when using offline sampling, and affect all systems relying on such techniques. As explained in \xref{implementation}, \system~uses a parallel binomial sampling framework to generate samples when data is first added. We rely on the same framework for sample replacement, reapplying the process to existing data, and replacing samples when the process is complete. To minimize the overhead of such recomputation, \system~uses a low-priority, background task to compute new samples from existing data. The task is designed to run when the cluster is underutilized, and to be suspended at other times. Furthermore, the task utilizes no more than a small fraction of unutilized scheduling slots, thus ensuring that any other jobs observe little or no overhead. \section{BlinkDB Runtime} \label{solution:selection} In this section, we provide an overview of query execution in \system{} and our approach to online sample selection. Given a query $Q$, the goal is to select one (or more) sample(s) at~\emph{run-time} that meet the specified time or error constraints and then compute answers over them. Selecting a sample involves first selecting a \emph{sample family} (\ie~dimension), and then selecting a \emph{sample resolution} within that family. The selection of a sample family depends on the set of columns in $Q$'s clauses, the selectivity of its selection predicates, and the data distribution. In turn, the selection of the resolution within a sample family depends on $Q$'s time/accuracy constraints, its computational complexity, and the physical distribution of data in the cluster. As with traditional query processing, accurately predicting the selectivity is hard, especially for complex {\tt WHERE} and {\tt GROUP-BY} clauses.
This problem is compounded by the fact that the underlying data distribution can change with the arrival of new data. Accurately estimating the query response time is even harder, especially when the query is executed in a distributed fashion. This is (in part) due to variations in machine load and network throughput, as well as a variety of non-deterministic (sometimes time-dependent) factors that can cause wide performance fluctuations. Rather than trying to model selectivity and response time, our sample selection strategy takes advantage of the large variety of non-overlapping samples in {\system} to estimate the query error and response time at \emph{run-time}. In particular, upon receiving a query, \system~``probes'' the smaller samples of one or more sample families in order to gather statistics about the query's selectivity, complexity and the underlying distribution of its inputs. Based on these results, {\system} identifies an optimal sample family and resolution to run the query on. In the rest of this section, we explain query execution by first discussing our mechanism for selecting a sample family (\xref{sec:select-samplefamily}) and a sample size (\xref{sec:select-samplesize}). We then discuss how to produce unbiased results from stratified samples (\xref{sec:unbiased-biased}), followed by how \system{} re-uses intermediate data (\xref{sec:reusing}). \subsection{Selecting the Sample Family} \label{sec:select-samplefamily} Choosing an appropriate sample family for a query primarily depends on the set of columns used for \emph{filtering} and/or \emph{grouping}. The {\tt WHERE} clause itself may either consist of conjunctive predicates ({\tt condition1 AND condition2}), disjunctive predicates ({\tt condition1 OR condition2}) or a combination of the two. Based on this, {\system} selects one or more suitable sample families for the query as described in~\xref{sec:conjunctive-predicates} and~\xref{sec:disjunctive-predicates}.
\begin{table*}[t] \vspace{-.1in} {\small \hfill{} \begin{tabular}{| p{0.08\textwidth} | p{0.42\textwidth} | p{0.42\textwidth} |} \hline \bf Operator & \bf Calculation & \bf Variance\\ \hline \hline \texttt{Avg} & $\frac {\sum{X_i}}{n}$ [$X_i$: observed values; $n$: sample size] & $\frac{S_n^2}{n}$ [$S_n^2$: sample variance] \\\hline \texttt{Count} & $\frac{N}{n}\sum{\mathbf{I}_{K}}$ [$\mathbf{I}_{K}$: matching tuple indicator; $N$: total rows] & $\frac{N^2}{n} c(1-c)$ [$c$: fraction of items which meet the criterion] \\\hline \texttt{Sum} & $\left(\frac{N}{n}\sum{\mathbf{I}_{K}} \right) \bar{X}$ & $N^2\frac{S_n^2}{n} c(1-c)$\\\hline \texttt{Quantile} & $x_{\lfloor h\rfloor}+ (h - \lfloor h\rfloor)(x_{\lceil h\rceil} -x_{\lfloor h\rfloor})$ [$x_i$: $i^{th}$ ordered element & $\frac {1}{f(x_p)^2}\frac{p(1-p)}{n}$ [$f$: pdf for data] \\ & in sample; $p$: specified quantile; $h$: $p\times n$] &\\\hline \end{tabular} \hfill{} } \vspace{-.1in} \caption{Error estimation formulas for common aggregate operators.} \vspace{-.1in} \label{tab:closedform} \end{table*} \subsubsection{Queries with Conjunctive Predicates} \label{sec:conjunctive-predicates} Consider a query $Q$ whose {\tt WHERE} clause contains only conjunctive predicates. Let $\phi$ be the set of columns that appear in these predicates. If $Q$ has multiple {\tt WHERE} and/or {\tt GROUP BY} clauses, then $\phi$ represents the union of the columns that appear in each of these predicates. If \system{} finds one or more stratified sample families built on a set of columns $\phi_i$ such that $\phi \subseteq \phi_i$, we simply pick the $\phi_i$ with the smallest number of columns and run the query on $SFam(\phi_i)$. However, if there is no stratified sample on a column set that is a superset of $\phi$, we run $Q$ in parallel on the smallest sample of all sample families currently maintained by the system.
Then, out of these samples, we select the one that corresponds to the highest ratio of (i) the number of rows \emph{selected} by $Q$, to (ii) the number of rows \emph{read} by $Q$ (\ie number of rows in that sample). Let $SFam(\phi_i)$ be the family containing this sample. The intuition behind this choice is that the response time of $Q$ increases with the number of rows it reads, while the error decreases with the number of rows $Q$'s {\tt WHERE} clause selects. A natural question is why we probe all sample families, instead of only those built on columns that are in $\phi$. The reason is that the columns in $\phi$ that are missing from a family's column set, $\phi_i$, can be negatively correlated with the columns in $\phi_i$. In addition, we expect the smallest sample of each family to fit in the aggregate memory of the cluster, and thus running $Q$ on these samples is very fast. \subsubsection{Queries with Disjunctive Predicates} \label{sec:disjunctive-predicates} Consider a query $Q$ with disjunctions in its {\tt WHERE} clause. In this case, we rewrite $Q$ as a union of queries $\{Q_1, Q_2,~\ldots, Q_p\}$, where each query $Q_i$ contains only conjunctive predicates. Let $\phi_i$ be the set of columns in $Q_i$'s predicates. Then, we associate with every query $Q_i$ an error constraint (\eg standard deviation $s_i$) or time constraint, such that we can still satisfy $Q$'s error/time constraints when aggregating the results over $Q_i$ $(1 \leq i \leq p)$ in parallel. Since each query $Q_i$ consists of only conjunctive predicates, we select its corresponding sample family using the selection procedure described in~\xref{sec:conjunctive-predicates}. \subsection{Selecting the Sample Size} \label{sec:select-samplesize} Once a sample family is decided, {\system} needs to select an appropriately sized sample in that family based on the query's response time or error constraints. We accomplish this by constructing an {\it Error-Latency Profile} (ELP) for the query. The ELP characterizes the rate at which the error decreases (and the query response time increases) with increasing sample sizes, and is built simply by running the query on smaller samples to estimate the selectivity and project latency and error for larger samples.
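Before turning to sample sizes, the family-selection rule of~\xref{sec:select-samplefamily} can be sketched as follows. This is illustrative Python, not \system{} code; the names \texttt{families} and \texttt{probe} are our own: prefer the smallest covering column set, otherwise probe the smallest sample of every family and keep the best selected/read ratio.

```python
def pick_sample_family(query_columns, families, probe):
    """Pick a sample family for a query with conjunctive predicates.

    `families`   : dict mapping frozenset-of-columns -> family handle
    `probe(fam)` : runs the query on the family's smallest sample and
                   returns (rows_selected, rows_read)
    Both names are illustrative; they are not BlinkDB's actual API.
    """
    qcols = frozenset(query_columns)
    # 1. Prefer a stratified family whose column set covers the query's,
    #    breaking ties toward the fewest columns.
    covering = [cols for cols in families if qcols <= cols]
    if covering:
        return families[min(covering, key=len)]
    # 2. Otherwise probe the smallest sample of every family and keep the
    #    one with the highest selected/read ratio (low error, low cost).
    def ratio(cols):
        selected, read = probe(families[cols])
        return selected / read
    return families[max(families, key=ratio)]
```

For instance, with families on \{city\} and \{city, browser\}, a query filtering only on city is routed to the \{city\} family; a query on a column with no covering family falls through to the probing step.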
For a distributed query, its runtime scales with sample size, with the scaling rate depending on the exact query structure ({\tt JOIN}s, {\tt GROUP BY}s, etc.), the physical placement of its inputs and the underlying data distribution~\cite{rope}. As shown in Table~\ref{tab:closedform}, the variation of error (or the variance of the estimator) primarily depends on the variance of the underlying data distribution and the actual number of tuples processed in the sample, which in turn depends on the selectivity of a query's predicates. \vspace{.1in} \noindent \textbf{Error Profile:} An error profile is created for all queries with error constraints. If $Q$ specifies an error (\eg standard deviation) constraint, the {\system} error profile tries to predict the size of the smallest sample that satisfies $Q$'s error constraint. Table~\ref{tab:closedform} shows the formulas of the variances for the most common aggregate operators.
Note that in all these examples, the variance is proportional to $\sim 1/n$, and thus the standard deviation (or the statistical error) is proportional to $\sim 1/\sqrt{n}$, where $n$ is the number of rows from a sample of size $N$ that match $Q$'s filter predicates. The ratio $n/N$ is called the \emph{selectivity} $s_q$ of the query. Let $n_{i, m}$ be the number of rows selected by $Q$ when running on the smallest sample of the selected family, $S(\phi_i, K_m)$. Furthermore, \system{} estimates the query selectivity $s_q$, the sample variance $S_n^2$ (for {\tt Avg/Sum}) and the input data distribution $f$ (for {\tt Quantiles}) as it runs on this sample. Using these parameter estimates, we calculate the number of rows $n$ required to meet $Q$'s error constraints using the equations in Table~\ref{tab:closedform}. Then we select the sample $S(\phi_i, K_q)$ where $K_q$ is the smallest value in $SFam(\phi_i)$ that is larger than $n*(K_m/n_{i,m})$. This ensures that the expected number of rows selected by $Q$ when running on sample $S(\phi_i, K_q)$ is $\geq n$. As a result, the answer of $Q$ on $S(\phi_i, K_q)$ is expected to meet $Q$'s error constraint. \vspace{.1in} \noindent{\textbf{Latency Profile:}} Similarly, a latency profile is created for all queries with response time constraints. If $Q$ specifies a response time constraint, we select the sample family on which to run $Q$ the same way as above. Again, let $SFam(\phi_i)$ be the selected family and let $n_{i,m}$ be the number of rows that $Q$ reads when running on $S(\phi_i, K_m)$. In addition, let $n$ be the maximum number of rows that $Q$ can read without exceeding its response time constraint. $n$ depends on the physical placement of the input data (disk vs. memory), the query structure and complexity, and the degree of parallelism (or the resources available to the query).
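As a minimal sketch of the error-profile step for an {\tt AVG} query (illustrative Python; the helper names, and using $\geq$ rather than strictly larger, are our simplifications, not \system{} code):

```python
import bisect
import math

def rows_needed_for_avg(sample_variance, target_stdev):
    """Invert the Avg error formula: the standard error is
    sqrt(S_n^2 / n), so n must be at least S_n^2 / target_stdev^2."""
    return math.ceil(sample_variance / target_stdev ** 2)

def pick_sample_cap(cap_values, k_m, n_im, n_required):
    """Smallest cap K_q expected to yield >= n_required matching rows.
    n_im rows matched on the smallest sample (cap k_m), so a cap K is
    expected to yield about n_im * K / k_m matching rows."""
    target_cap = n_required * k_m / n_im
    caps = sorted(cap_values)
    i = bisect.bisect_left(caps, target_cap)
    return caps[i] if i < len(caps) else caps[-1]  # fall back to largest
```

For example, a sample variance of $400$ and a target standard deviation of $1$ require $n = 400$ matching rows; if the $1000$-row sample matched only $100$ rows, the $4000$-row sample of the family is chosen.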
As a simplification, {\system} simply predicts $n$ by assuming that latency scales linearly with the size of the input data, as is commonly done in parallel distributed execution environments~\cite{mantri-osdi, late-osdi}. To avoid non-linearities that may arise when running on very small in-memory samples, \system{} runs a few smaller samples until performance starts to grow linearly, and then estimates the appropriate linear scaling constants (\ie {\it data processing rate(s), disk/memory I/O rates, etc.}) for the model. These constants are used to estimate a value of $n$ that is just below what is allowed by the time constraints. Once $n$ is estimated, {\system} picks the sample $S(\phi_i, K_q)$ where $K_q$ is the largest value in $SFam(\phi_i)$ that is smaller than $n*(K_m /n_{i,m})$ and executes $Q$ on it in parallel. \begin{figure}[htbp] \begin{center} \includegraphics*[width=225pt]{figures/SampleSelection.pdf} \caption{Mapping of {\system}'s non-overlapping samples to HDFS blocks} \label{fig:sampleselection} \end{center} \vspace{-.1in} \end{figure} \subsection{Query Answers from Stratified Samples} \label{sec:unbiased-biased} Consider the \emph{Sessions} table, shown in Table~\ref{tab:toy-table}, and the following query against this table.
{\small
\begin{verbatim}
SELECT City, SUM(SessionTime)
FROM Sessions
GROUP BY City
WITHIN 5 SECONDS
\end{verbatim}
}
If we have a uniform sample of this table, estimating the query answer is straightforward. For instance, suppose we take a uniform sample with 40\% of the rows of the original \emph{Sessions} table.
In this case, we simply scale the final sums of the session times by $1/0.4=2.5$ in order to produce an unbiased estimate of the true answer\footnote{Here we use the terms \emph{biased} and \emph{unbiased} in a statistical sense, meaning that although the estimate might vary from the actual answer, its \emph{expected value} will be the same as the actual answer.}. Using the same approach on a stratified sample may produce a biased estimate of the answer for this query. For instance, consider a stratified sample of the \emph{Sessions} table on the \emph{Browser} column, as shown in Table~\ref{tab:toy-sample}. Here, we have a cap value of $K=1$, meaning that we keep all rows whose \emph{Browser} appears only once in the original \emph{Sessions} table (e.g., \emph{Safari} and \emph{IE}), but when a browser has more than one row (\ie \emph{Firefox}), only one of its rows is chosen, uniformly at random. In this example, we have chosen the row that corresponds to \emph{Yahoo.com}. Here, we cannot simply scale the final sums of the session times because different values were sampled at different rates. Therefore, to produce unbiased answers, \system~keeps track of the effective sampling rate applied to each row; e.g., in Table~\ref{tab:toy-sample}, this rate is $0.33$ for the \emph{Firefox} row, while it is $1.0$ for the \emph{Safari} and \emph{IE} rows, since those rows were kept in full. Given these per-row sample rates, obtaining an unbiased estimate of the final answer is straightforward; e.g., in this case the sum of session times is estimated as $\frac{1}{0.33}*20+\frac{1}{1}*82$ for \emph{New York} and as $\frac{1}{1}*22$ for \emph{Cambridge}. Note that here we will not produce any output for \texttt{Berkeley} (this would not happen if we had access to a stratified sample over \emph{City}, for example). In general, the query processor in \system~performs a similar correction when operating on stratified samples.
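This per-row rate correction amounts to Horvitz--Thompson style inverse-rate weighting. A toy sketch (illustrative Python mirroring the rows of Table~\ref{tab:toy-sample}, not \system{} code):

```python
from collections import defaultdict

# Rows of the stratified sample (URL column omitted):
# (city, session_time, effective_sampling_rate)
SAMPLE = [
    ("New York",  20, 0.33),   # Firefox stratum: 1 of 3 rows kept
    ("New York",  82, 1.0),    # Safari stratum: kept in full
    ("Cambridge", 22, 1.0),    # IE stratum: kept in full
]

def unbiased_group_sums(rows):
    """Weight each row by the inverse of its effective sampling rate
    before summing per group (Horvitz-Thompson style correction)."""
    sums = defaultdict(float)
    for city, session_time, rate in rows:
        sums[city] += session_time / rate
    return dict(sums)

# New York: 20/0.33 + 82, roughly 142.6; Cambridge: 22.0.
# Berkeley is absent, since its only row was dropped from the sample.
```

Naively summing the sampled rows instead (102 for New York) would systematically under-count cities dominated by heavily down-sampled strata.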
\begin{table}[!h] {\small \hfill{} \begin{tabular}{| c | c | c | c |} \hline \bf URL & \bf City & \bf Browser & \bf SessionTime\\ \hline cnn.com & New York & Firefox & 15 \\ \hline yahoo.com & New York & Firefox & 20 \\ \hline google.com & Berkeley & Firefox & 85\\ \hline google.com & New York & Safari & 82 \\ \hline bing.com & Cambridge & IE & 22 \\ \hline \end{tabular} \hfill{} } \caption{\texttt{Sessions} Table.} \label{tab:toy-table} \end{table} \begin{table}[!h] \vspace{-.2in} {\small \hfill{} \begin{tabular}{| c | c | c | c | c | } \hline \bf URL & \bf City & \bf Browser & \bf SessionTime & \bf SampleRate\\ \hline yahoo.com & New York & Firefox & 20 & 0.33 \\ \hline google.com & New York & Safari & 82 & 1.0\\ \hline bing.com & Cambridge & IE & 22 & 1.0\\ \hline \end{tabular} \hfill{} } \caption{A sample of \texttt{Sessions} Table stratified on \texttt{Browser} column.} \label{tab:toy-sample} \end{table} \subsection{Re-using Intermediate Data} \label{sec:reusing} Although {\system} requires a query to operate on smaller samples to construct its ELP, the intermediate data produced in the process is effectively utilized when the query runs on larger samples. Fig.~\ref{fig:sampleselection} distinguishes the logical and physical views of the non-overlapping samples maintained by {\system}, as described in~\xref{sec:stratified-samples}. Physically, each progressively bigger logical sample ($A$, $B$ or $C$) consists of all the data blocks of the smaller samples in the same family. {\system} maintains a transparent mapping between logical samples and data blocks, \ie $A$ maps to (I), $B$ maps to (I, II) and $C$ maps to (I, II, III). Now, consider a query $Q$ on this data. First, {\system} creates an ELP for $Q$ by running it on the smallest sample $A$, \ie it operates on data block (I) to estimate the various query parameters described above, and caches all intermediate data in this process.
Subsequently, if sample $C$ is chosen based on $Q$'s error/latency requirements, {\system} only operates on the additional data blocks, utilizing the previously cached intermediate data. \section{Sample Creation}\label{solution:sampling} As described in~\xref{sec:sample-creation}, \system~creates a set of multi-dimensional, multi-resolution samples to accurately and quickly answer ad-hoc queries. In this section, we describe sample creation in detail. First, in~\xref{sec:stratified-samples}, we discuss the creation of a sample family, a set of stratified samples of different sizes, on the same set of columns. In particular, we show how the choice of stratified samples impacts the query's accuracy and response time, and evaluate the storage overhead for skewed distributions. Next, in~\xref{sec:optimal-view-creation}, we formulate and solve an optimization problem to decide on the sets of columns on which we build sample families.
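The block-reuse mechanism of~\xref{sec:reusing} can be made concrete with a minimal Python sketch. All names here (the class, the block identifiers, the per-block computation) are illustrative stand-ins, not {\system}'s actual implementation:

```python
# Sketch of incremental block reuse: logical samples map to prefixes of the
# physical block list, and per-block partial results are cached so that a run
# on a larger sample only touches the blocks not yet processed.
BLOCK_MAP = {"A": ["I"], "B": ["I", "II"], "C": ["I", "II", "III"]}

class IncrementalExecutor:
    def __init__(self):
        self.cache = {}  # block id -> cached intermediate (partial) result

    def process_block(self, block):
        # Placeholder for the real per-block aggregation work.
        return f"partial({block})"

    def run(self, sample):
        """Run a query on `sample`, computing only uncached blocks."""
        results = []
        for block in BLOCK_MAP[sample]:
            if block not in self.cache:        # new block: compute and cache
                self.cache[block] = self.process_block(block)
            results.append(self.cache[block])  # otherwise reuse cached partial
        return results

ex = IncrementalExecutor()
ex.run("A")          # ELP construction processes block I only
out = ex.run("C")    # the later run reuses I and adds only II and III
```

Merging the cached partials with the newly computed ones mirrors how {\system} avoids re-reading the smaller samples' blocks when the error/latency requirements select a larger sample.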
\input{stratified-samples} \subsection{Optimization Framework} \label{sec:optimal-view-creation} We now describe the optimization framework we developed to select subsets of columns on which to build sample families. Unlike prior work, which focuses on single-column stratified samples~\cite{babcock-dynamic}, \system{} creates multi-dimensional (\ie multi-column) stratified samples. Having stratified samples on multiple columns that are frequently queried together can lead to significant improvements in both query accuracy and latency, especially when the set of columns has a skewed joint distribution. However, these samples lead to an increase in the storage overhead because (1) samples on multiple columns can be larger than single-column samples, since multiple columns often contain more unique values than individual columns, and (2) there are an exponential number of subsets of columns, not all of which may fit within our storage budget. As a result, we need to be careful in choosing the set of columns on which to build stratified samples. Hence, we formulate the trade-off between storage and query accuracy/performance as an optimization problem, described next. \subsubsection{Problem Formulation} \label{sec:formulation} The optimization problem takes three factors into account in determining the sets of columns on which stratified samples should be built: the \emph{non-uniformity/skew of the data}, \emph{workload characteristics}, and the \emph{storage cost of samples}. \vspace{.1in} \noindent \textbf{Non-uniformity (skew) of the data.} Intuitively, the greater the skew for a set of columns, the more important it is to have a stratified sample on those columns. If there is no skew, the uniform sample and stratified sample will be identical. Formally, for a subset of columns $\phi$ in table $T$, let $D(\phi)$ denote the set of all distinct values appearing in $\phi$. Recall from Table~\ref{tab:notations} that $F(\phi, T, v)$ is the frequency of value $v$ in $\phi$.
Let $\Delta(\phi)$ be a non-uniformity metric on the distribution of the values in $\phi$. The higher the non-uniformity in $\phi$'s distribution, the higher the value of $\Delta(\phi)$. When $\phi$'s distribution is uniform (\ie when $F(\phi, T, v)=\frac{|T|}{|D(\phi)|}$ for $v\in D(\phi)$), $\Delta(\phi)=0$. In general, $\Delta$ could be any metric of the distribution's skew (e.g., \emph{kurtosis}). In this paper, for the sake of simplicity, we use a more intuitive notion of non-uniformity, defined as: \begin{displaymath} \Delta(\phi)=|\{v\in D(\phi) | F(\phi,T,v)<K\}| \end{displaymath} where $K$ represents the cap corresponding to the largest sample in the family, $S(\phi, K)$ (see~\xref{sec:stratified-samples}). Intuitively, this metric captures the length of $\phi$'s tail, i.e., the number of unique values in $\phi$ whose frequencies are less than $K$. While the rest of this paper uses this metric, our framework allows other metrics to be used. \vspace{.1in} \noindent \textbf{Workload.} The utility of a stratified sample increases if the set of columns it is biased on occurs together frequently in queries. One way to estimate such co-occurrence is to use the frequency with which columns have appeared together in past queries. However, we wish to avoid over-fitting to a particular set of queries since future queries may use different columns. Hence, we use a \emph{query workload} defined as a set of $m$ query templates and their weights: $$\langle \phi_{1}^T, w_{1}\rangle, \cdots, \langle \phi_{m}^T, w_{m}\rangle$$ where $0<w_{i}\leq 1$ is the weight (normalized frequency or importance) of the $i$'th query template and $\phi_{i}^T$ is the set of columns appearing in the $i$'th template's {\tt WHERE} and {\tt GROUP BY} clauses\footnote{Here, {\tt HAVING} clauses are treated as columns in the {\tt WHERE} clauses.}.
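As a concrete illustration, the tail-length metric $\Delta(\phi)$ above can be computed in a few lines. This is our own sketch, assuming rows are represented as Python dictionaries; the function name is illustrative:

```python
from collections import Counter

def non_uniformity(rows, phi, K):
    """Delta(phi): the number of distinct values on the column set `phi`
    whose frequency in the table is below the cap K (the tail length).
    `rows` is a list of dicts; `phi` is a tuple of column names."""
    freq = Counter(tuple(r[c] for c in phi) for r in rows)
    return sum(1 for f in freq.values() if f < K)

# Toy check: "Browser" is skewed (Firefox dominates), so with K = 2 the
# tail consists of the two rare browsers, Safari and IE.
rows = [{"Browser": b} for b in
        ["Firefox", "Firefox", "Firefox", "Safari", "IE"]]
assert non_uniformity(rows, ("Browser",), 2) == 2
```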
\vspace{.1in} \noindent \textbf{Storage cost.} Storage is the main constraint against building too many multi-dimensional sample families, and thus, our optimization framework takes the storage cost of different samples into account. We use $Store(\phi)$ to denote the storage cost (say, in MB) of building a sample family on a set of columns $\phi$. Given these three factors defined above, we now introduce our optimization formulation. Let the overall storage budget be $\mathbb{S}$. Consider the set of $\alpha$ column combinations that are {\it candidates} for building sample families on, say $\phi_{1},\cdots,\phi_{\alpha}$. For example, this set can include all column combinations that co-appeared in at least one of the query templates. Our goal is to select $\beta$ subsets among these candidates, say $\phi_{i_1},\cdots,\phi_{i_{\beta}}$, such that $$\sum_{k=1}^{\beta}Store(\phi_{i_k}) \leq \mathbb{S}$$ and these subsets can ``\emph{best}'' answer our queries. Specifically, in \system, we maximize the following objective, formulated as a mixed integer linear program (MILP): \begin{equation} G=\sum_{i=1}^{m} w_{i} \cdot y_{i} \cdot \Delta(\phi^{T}_{i}) \label{eq:goal} \end{equation} \vskip -0.1in subject to \vskip -0.1in \begin{equation} \sum_{j=1}^{\alpha} Store(\phi_{j}) \cdot z_{j} \leq \mathbb{S} \label{eq:storage} \end{equation} \vskip -0.1in and \vskip -0.1in \begin{equation} \forall 1\leq i\leq m: ~~ y_{i} \leq \underset{\phi_{j}\subseteq \phi_{i}^T }{\max} \frac{|D(\phi_{j})|}{|D(\phi_{i}^T)|} \cdot z_{j} \label{eq:coverage} \end{equation} \noindent where $0\leq y_{i}\leq 1$ and $z_{j}\in\{0,1\}$. Here, the $z_{j}$ variables determine whether a sample family is built or not, i.e., when $z_{j}=1$, we build a sample family on $\phi_{j}$; otherwise, when $z_{j}=0$, we do not. The goal function (\ref{eq:goal}) aims to maximize the weighted sum of the coverage of the query templates.
The degree of coverage of query template $\phi^{T}_i$ with a set of columns $\phi_j \subseteq \phi^{T}_i$ is the probability that a given value in $\phi^{T}_i$ is also present in the stratified sample associated with $\phi_j$, i.e., $S(\phi_j, K)$. Since this probability is hard to compute in practice, in this paper we approximate it by the value $y_i$, which is determined by constraint (\ref{eq:coverage}). The $y_i$ value is in $[0, 1]$, with $0$ meaning no coverage, and $1$ meaning full coverage. The intuition behind (\ref{eq:coverage}) is that when we build a stratified sample on a subset of columns $\phi_{j}\subseteq \phi^{T}_{i}$, namely when $z_{j}=1$, we have partially covered $\phi^{T}_{i}$ too. We compute this coverage as the ratio of the number of unique values between the two sets, i.e., $|D(\phi_{j})|/|D(\phi_{i}^T)|$. When the numbers of unique values in $\phi_j$ and $\phi^T_i$ are the same, we are guaranteed to see all the unique values of $\phi^T_i$ in the stratified sample over $\phi_j$, and therefore the coverage will be $1$. Finally, we need to weigh the coverage of each set of columns by its importance: a set of columns $\phi^{T}_{i}$ is more important to cover when (1) it has a higher frequency, which is represented by $w_{i}$, or (2) when the joint distribution of $\phi^{T}_{i}$ is more skewed (non-uniform), which is represented by $\Delta(\phi^{T}_{i})$. Thus, the best solution is when we maximize the sum of $w_{i} \cdot y_{i} \cdot \Delta(\phi_{i}^{T})$ for all query templates, as captured by our goal function (\ref{eq:goal}). Having presented our basic optimization formulation, we now address the problem of choosing the initial candidate sets, namely $\phi_{1},\cdots,\phi_{\alpha}$. In~\xref{sec:drift} we discuss how this problem formulation handles changes in the data distribution as well as changes in the workload.
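The structure of this optimization can be illustrated with a brute-force sketch that enumerates assignments of the $z_j$ variables directly. This is practical only for toy inputs; a real deployment would hand the MILP to a solver such as GLPK, as we do. The data layout below (tuples of column sets, distinct-value counts, weights, and skews) is our own illustrative encoding:

```python
from itertools import product

def best_families(cands, store, S, templates):
    """Brute-force sketch of the sample-selection MILP (Eqs. goal/storage/
    coverage): choose z in {0,1}^alpha with total storage <= S, maximizing
    sum_i w_i * y_i * Delta_i, where y_i is capped at 1 and bounded by the
    best coverage ratio |D(phi_j)| / |D(phi_i^T)| over built subsets.
    `cands`: list of (column frozenset, #distinct); `store`: storage costs;
    `templates`: list of (column frozenset, #distinct, weight w, skew Delta)."""
    best, best_z = -1.0, None
    for z in product([0, 1], repeat=len(cands)):
        if sum(s for s, zz in zip(store, z) if zz) > S:
            continue                                    # storage budget
        goal = 0.0
        for cols_t, d_t, w, delta in templates:
            cover = [d / d_t for (cols, d), zz in zip(cands, z)
                     if zz and cols <= cols_t]           # built subsets only
            y = min(1.0, max(cover, default=0.0))        # coverage bound
            goal += w * y * delta                        # objective term
        if goal > best:
            best, best_z = goal, z
    return best_z, best

# Toy example: one template over columns {a, b} with skew Delta = 3;
# a cheap single-column candidate and an expensive two-column one.
cands = [(frozenset({"a"}), 4), (frozenset({"a", "b"}), 10)]
store = [1, 5]
templates = [(frozenset({"a", "b"}), 10, 1.0, 3.0)]
z, goal = best_families(cands, store, S=5, templates=templates)
# With budget 5 the two-column family wins: z = (0, 1), goal = 3.0.
```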
\subsubsection{Scaling the Solution} \label{sec:candidates} Naively, one can use the power set of all the columns as the set of candidate column-sets. However, this leads to an exponential number of variables in the MILP formulation and thus becomes impractical for tables with more than $O(20)$ columns. To reduce this exponential search space, we restrict the candidate subsets to only those that have appeared together in at least one of the query templates (namely, $\{\phi|\exists~i,~\phi\subseteq\phi_{i}^T\}$). This does not affect the optimality of the solution, because a column $A$ that has not appeared with the rest of the columns in $\phi$ can be safely removed without affecting any of the query templates. In our experiments, we have been able to solve our MILP problems with $O(10^6)$ variables within $6$ seconds using an open-source solver \cite{glpk}, on a commodity server. However, when the {\tt WHERE/GROUP BY} clauses of the query templates exceed $O(20)$ columns, the number of variables can exceed $10^6$. In such cases, to cope with this combinatorial explosion, we further limit candidate subsets to those consisting of no more than a fixed number of columns, say $3$ or $4$ columns. This too has proven to be a safe restriction since, in practice, subsets with a large number of columns have many unique values, and thus, are not chosen by the optimization framework due to their high storage cost. \subsubsection{Handling Data/Workload Variations} \label{sec:drift} Since \system~is designed to handle ad-hoc queries, our optimization formulation is designed to avoid {\it over-fitting} samples to past queries by: (i) only looking at the set of columns that appear in the query templates instead of optimizing for specific constants in queries, and (ii) considering infrequent subsets with a high degree of skew (captured by $\Delta$ in~\xref{sec:formulation}).
In addition, \system~periodically (currently, daily) updates data and workload statistics to decide whether the current set of sample families is still effective or if the optimization problem needs to be re-solved based on the new input parameters. When re-solving the optimization, \system{} tries to find a solution that is robust to workload changes by favoring sample families that require fewer changes to the existing set of samples, as described below. Specifically, \system~ allows the administrator to decide what percentage of the sample families (in terms of storage cost) can be discarded/added to the system whenever \system~triggers the sample creation module as a result of changes in data or workload distribution. The administrator makes this decision by manually setting a parameter $0\leq r\leq 1$, which is incorporated into an extra constraint in our MILP formulation: \vskip -0.1in \begin{equation} \sum_{j=1}^{\alpha} (\delta_{j}- z_{j})^{2} \cdot Store(\phi_{j}) \leq r\cdot \sum_{j=1}^{\alpha} \delta_{j} \cdot Store(\phi_{j}) \label{eq:drift} \end{equation} Here, the $\delta_{j}$'s are additional input parameters stating whether $\phi_{j}$ already exists in the system (when $\delta_{j}=1$) or not ($\delta_{j}=0$). In the extreme case, when the administrator chooses $r=1$, the constraint (\ref{eq:drift}) holds trivially and thus the sample creation module is free to create/discard any sample families, based on the other constraints discussed in~\xref{sec:formulation}. On the other hand, setting $r=0$ will completely disable this module in~\system, i.e., no new samples will be created/discarded because $\delta_{j}=z_{j}$ will be enforced for all $j$'s.
For values of $0<r<1$, we ensure that the total size of the samples that need to be created/discarded is at most a fraction $r$ of the total size of existing samples in the system (note that we have to {\it create} a new sample when $z_{j}=1$ but $\delta_{j}=0$ and need to {\it delete} an existing sample when $z_{j}=0$ but $\delta_{j}=1$). When \system~runs the optimization problem for the first time, $r$ is always set to $1$. \subsection{Overhead} \noindent \textbf{Storage overhead for the Zipf distribution.} We evaluate the storage overhead of maintaining a stratified sample, $S(\phi, K)$, for a Zipf distribution, one of the most popular heavy-tailed distributions for real-world datasets. Without loss of generality, assume $F(\phi, T, x) = M/rank(x)^{s}$, where $rank(x)$ represents the rank of $x$ in $F(\phi, T, x)$ (\ie value $x$ with the highest frequency has rank $1$), and $s \geq 1$. Table~\ref{tab:zipf-overhead} shows the overhead of $S(\phi, K)$ as a percentage of the original table size for various values of Zipf's exponent, $s$, and for various values of $K$. The highest frequency in the original table is $M = 10^9$. For $s = 1.5$, the storage required by $S(\phi, K)$ is only $2.4\%$ of the original table for $K=10^4$, $5.2\%$ for $K=10^5$, and $11.4\%$ for $K=10^6$.
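The entries of Table~\ref{tab:zipf-overhead} can be reproduced with a short numerical sketch (our own calculation, following the Zipf model above, with the lowest frequency equal to $1$ so that there are $M^{1/s}$ distinct values):

```python
# Sketch: storage ratio |S(phi, K)| / |T| for a Zipf-distributed column set
# with F(rank) = M / rank**s; the stratified cap keeps at most K rows per value.
def zipf_storage_ratio(M, s, K):
    n_values = int(round(M ** (1.0 / s)))   # ranks with frequency >= 1
    total = sampled = 0.0
    for rank in range(1, n_values + 1):
        f = M / rank ** s
        total += f                          # rows in the original table
        sampled += min(f, K)                # rows kept in S(phi, K)
    return sampled / total

ratio = zipf_storage_ratio(M=10**9, s=1.5, K=10**4)
print(f"{ratio:.3f}")                       # about 0.024, i.e., 2.4%
```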
\begin{table}[htbp] {\small \begin{center} \begin{tabular}{|c|r|r|r|} \hline $s$ & $K = 10,000$ & $K = 100,000$ & $K = 1,000,000$\\ \hline\hline $1.0$ & $0.49$ & $0.58$ & $0.69$ \\\hline $1.1$ & $0.25$ & $0.35$ & $0.48$ \\\hline $1.2$ & $0.13$ & $0.21$ & $0.32$ \\\hline $1.3$ & $0.07$ & $0.13$ & $0.22$ \\\hline $1.4$ & $0.04$ & $0.08$ & $0.15$ \\\hline $1.5$ & $0.024$ & $0.052$ & $0.114$ \\\hline $1.6$ & $0.015$ & $0.036$ & $0.087$ \\\hline $1.7$ & $0.010$ & $0.026$ & $0.069$ \\\hline $1.8$ & $0.007$ & $0.020$ & $0.055$ \\\hline $1.9$ & $0.005$ & $0.015$ & $0.045$ \\\hline $2.0$ & $0.0038$ & $0.012$ & $0.038$ \\\hline \end{tabular} \end{center} } \caption{The storage required to maintain sample $S(\phi, K)$ as a fraction of the original table size. The distribution of $\phi$ is Zipf with exponent $s$, and the highest frequency ($M$) of $10^9$. } \label{tab:zipf-overhead} \end{table} \eat{ \begin{figure}[htbp] \begin{center} \includegraphics*[width=150pt]{figures/zipf-example.pdf} \caption{The computation of the cardinality of a $K$-sample for a Zipf distribution. The stratified sample is represented by the gray areas. The frequency of $k^{*}$ is $K$, while the frequency of $m$ is $1$.} \label{fig:zipf-example} \end{center} \end{figure} } \eat{ Note that $M$ represents the highest frequency, and the lowest frequency is $1$. Then, the cardinality of $F(\phi, T, x)$ (\ie the number of tuples in the original table) is \begin{equation} H(s, m) = \sum_{x=1}^{m} \frac{1}{x^{s}}, \end{equation} \noindent where $H(s, m)$ is the generalized harmonic mean, and $m = M^{1/s}$. Let $k^{*} = (M/K)^{1/s}$, that is, the frequency of $k^{*}$ is $K$. Then, the cardinality of $S(\phi, K)$, is $K \times k^{*} + M \times (H(s, m) - H(s, k^{*}))$, as shown in Figure~\ref{fig:zipf-example}. 
Thus, the ratio of $S(\phi, K)$'s cardinality to the cardinality of $T$ is: \begin{equation} R(s, M, K) = \frac{K \times k^{*} + M \times (H(s, m) - H(s, k^{*}))}{M \times H(s, m)} \end{equation} } \section{Properties} \vskip 0.1in \noindent \textbf{Performance properties.} Recall that $S(\phi, K^{opt})$ represents the smallest possible stratified sample on $\phi$ that satisfies the error or response time constraints of query $Q$, while $S(\phi, K')$ is the closest sample in $SFam(\phi)$ that satisfies $Q$'s constraints. Then, we have the following results. \begin{lemma} Assume an I/O-bound query $Q$ that specifies an error constraint. Let $r$ be the response time of $Q$ when running on the optimal-sized sample, $S(\phi, K^{opt})$. Then, the response time of $Q$ when using sample family $SFam(\phi) = \{S(\phi, K_i)\}, (0 \leq i < m)$ is at most $c + 1/K^{opt}$ times larger than $r$. \end{lemma} \begin{proof} Let $i$ be such that \begin{equation} \left\lfloor \frac{K}{c^{i+1}} \right\rfloor < K^{opt} \leq \left\lfloor \frac{K}{c^{i}} \right\rfloor. \label{eq:query-error-1} \end{equation} \noindent Assuming that the error of $Q$ decreases monotonically with the increase in the sample size, $S(\phi, \lfloor K/c^{i} \rfloor)$ is the smallest sample in $SFam(\phi)$ that satisfies $Q$'s error constraint. Furthermore, from Eq.~(\ref{eq:query-error-1}) it follows that \begin{equation} \left\lfloor \frac{K}{c^{i}} \right\rfloor < c K^{opt} + 1. \label{eq:query-error-bounds} \end{equation} \noindent In other words, in the worst case, $Q$ may have to use a sample whose cap is at most $c + 1/K^{opt}$ times larger than $K^{opt}$. Let $K' = c K^{opt} + 1$, and let $A = \{a_1, a_2, ..., a_k\}$ be the set of values in $\phi$ selected by $Q$. By construction, both samples $S(\phi, K')$ and $S(\phi, K^{opt})$ contain all values in the fact table, and therefore in set $A$.
Then, from the definition of the stratified sample, it follows that the frequency of any $a_i \in A$ in sample $S(\phi, K')$ is at most $K'/K^{opt}$ times larger than the frequency of $a_i$ in $S(\phi, K^{opt})$. Since the tuples matching the same value $a_i$ are clustered together in both samples, they are accessed sequentially on the disk. Thus, the access time of all tuples matching $a_i$ in $S(\phi, K')$ is at most $c + 1/K^{opt}$ times larger than the access time of the same tuples in $S(\phi, K^{opt})$. Finally, since we assume that $Q$'s execution is I/O-bound, it follows that $Q$'s response time is at most $c + 1/K^{opt}$ times worse than $Q$'s response time on the optimal sample, $S(\phi, K^{opt})$. \end{proof} \begin{lemma} Assume a query, $Q$, that specifies a response time constraint, and let $S(\phi, K^{opt})$ be the largest stratified sample on column set $\phi$ that meets $Q$'s constraint. Assume the standard deviation of $Q$'s answer is $\sim 1/\sqrt{n}$, where $n$ is the number of tuples selected by $Q$ from $S(\phi, K^{opt})$. Then, the standard deviation of $Q$ when using sample family $SFam(\phi)$ increases by a factor of at most $1/\sqrt{1/c - 1/K^{opt}}$. \end{lemma} \begin{proof} Let $i$ be such that \begin{equation} \left\lfloor \frac{K}{c^{i}} \right\rfloor \leq K^{opt} < \left\lfloor \frac{K}{c^{i-1}} \right\rfloor. \label{eq:query-time-1} \end{equation} \noindent Assuming that the response time of $Q$ decreases monotonically with the sample size, $S(\phi, \lfloor K/c^{i} \rfloor)$ is the largest sample in $SFam(\phi)$ that satisfies $Q$'s response time. Furthermore, from Eq.~(\ref{eq:query-time-1}) it follows that \begin{equation} \label{eq:query-time-bounds} \left\lfloor \frac{K}{c^{i}} \right\rfloor > \frac{K^{opt}}{c} - 1.
\end{equation} \noindent Assuming that the number of tuples selected by $Q$ is proportional to the sample size, the standard deviation of running $Q$ on $S(\phi, \lfloor K/c^{i} \rfloor )$ increases by a factor of at most $1/\sqrt{1/c - 1/K^{opt}}$. \end{proof} \subsection{Multi-resolution Stratified Samples} \label{sec:stratified-samples} In this section, we describe our techniques for constructing a family of stratified samples from input tables. We describe how we maintain these samples in~\xref{sec:sample-maintenance}. Table~\ref{tab:notations} contains the notation used in the rest of this section. \begin{table} \begin{center} {\small \begin{tabular}{| l | l |} \hline {\bf Notation} & {\bf Description} \\ \hline\hline $T$ & fact (original) table \\ \hline $\phi$ & set of columns in $T$ \\ \hline $R(p)$ & random sample of $T$, where each row in $T$ \\ & is selected with probability $p$ \\ \hline $S(\phi, K)$ & stratified sample associated with $\phi$, where \\ & frequency of every value $x$ in $\phi$ is capped by $K$ \\ \hline $SFam(\phi)$ & family (sequence) of multi-dimensional multi-\\ & resolution stratified samples associated with $\phi$ \\ \hline $F(\phi, S, x)$ & frequency of value $x$ in set of columns $\phi$ in \\ & sample/table $S$\\ \hline \end{tabular} } \end{center} \vspace{-.1in} \caption{Notation in \xref{sec:stratified-samples}} \vspace{-.1in} \label{tab:notations} \end{table} Queries on uniform samples converge quickly to the true answer when the original data is distributed uniformly or near-uniformly. This convergence is, however, much slower for uniform samples over highly skewed distributions (\eg exponential or Zipfian) because a much larger fraction of the entire data set needs to be scanned to produce high-confidence estimates on infrequent values. A second, perhaps more important, problem is that uniform samples may not contain any instances of certain subgroups, leading to missing rows in the final output of queries.
The standard approach for dealing with such distributions is to use stratified sampling~\cite{sampling-book}, which ensures that rare subgroups are {\it sufficiently} represented in such skewed datasets. This both provides faster convergence of answer estimates and avoids missing subgroups in results. In this section, we describe the use of stratified sampling in \system. Let $\phi = \{c_1, c_2, \ldots, c_k\}$ be a subset of columns in the original table, $T$. For any such subset, we define a \emph{sample family} as a sequence of stratified samples over $\phi$ (see Table~\ref{tab:notations} for our notations): \vskip -0.1in \begin{equation} \label{eq:sample-family-def} SFam(\phi) = \{S(\phi, K_i) \mid 0 \leq i < m \}, \end{equation} \noindent where $m$ is the number of samples in the family. By maintaining multiple stratified samples for the same column subset $\phi$, we allow a finer-granularity trade-off between query accuracy and response time. In the remainder of this paper, we use the term ``set'', instead of ``subset'' (of columns), for brevity. In~\xref{sec:optimal-view-creation}, we describe how to select the sets of columns on which sample families are built. \begin{figure}[htbp] \begin{center} \vspace{-.1in} \includegraphics*[width=225pt]{figures/stratified-sample.pdf} \caption{Example of stratified samples associated with a set of columns, $\phi$.} \label{fig:stratified-sample} \end{center} \vspace{-.1in} \end{figure} A stratified sample $S(\phi, K_i)$ on the set of columns $\phi$ caps the frequency of every value $x$ in $\phi$ to $K_i$.\footnote{Although stratification is done on column set $\phi$, the sample that is stored contains all of the columns from the original table.} More precisely, consider tuple $x = \langle x_1, x_2, \ldots, x_k\rangle$, where $x_i$ is a value in column $c_i$, and let $F(\phi, T, x)$ be the frequency of $x$ in column set $\phi$ in the original table, $T$.
If $F(\phi, T, x) \leq K_i$, then $S(\phi, K_i)$ contains all rows containing $x$ in $T$. Otherwise, if $F(\phi, T, x) > K_i$, then $S(\phi, K_i)$ contains $K_i$ randomly chosen rows from $T$ that contain $x$. Figure~\ref{fig:stratified-sample} shows a sample family associated with column set $\phi$. There are three stratified samples $S(\phi, K_1)$, $S(\phi, K_2)$, and $S(\phi, K_3)$, where $K_1$ is the largest cap and $K_3$ the smallest. Note that since each sample is a subset of a bigger sample, in practice there is no need to independently allocate storage for each sample. Instead, we can construct smaller samples from the larger ones, and thus need an amount of storage equivalent to maintaining only the largest sample. This way, in our example we only need storage for the sample corresponding to $K_1$, modulo the metadata required to maintain the smaller samples. Each stratified sample $S(\phi, K_i)$ is stored sequentially sorted according to the order of columns in $\phi$. Thus, the records with the same or consecutive $x$ values are stored contiguously on the disk, which, as we will see, significantly improves the execution times of range queries on the set of columns $\phi$. Consider query $Q$ whose {\tt WHERE} or {\tt GROUP BY} clause contains $(\phi = x)$, and assume we use $S(\phi, K)$ to answer this query. If $F(\phi, T, x) \leq K$, the answer is exact as the sample contains all rows from the original table. On the other hand, if $F(\phi, T, x) > K$, we answer $Q$ based on $K$ random rows in the original table. For the basic aggregate operators {\tt AVG}, {\tt SUM}, {\tt COUNT}, and {\tt QUANTILE}, $K$ directly determines the error of $Q$'s result. In particular, for these aggregate operators, the standard deviation is inversely proportional to $\sqrt{K}$, as shown in Table~\ref{tab:closedform}. In this paper, we choose the samples in a family so that they have exponentially decreasing sizes.
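The construction of a single stratified sample $S(\phi, K)$ can be sketched as follows. This is illustrative Python assuming rows are dictionaries, not {\system}'s actual implementation:

```python
import random
from collections import defaultdict

def stratified_sample(rows, phi, K, seed=0):
    """Sketch of building S(phi, K): keep every row whose value on the
    column set `phi` occurs at most K times in the table; otherwise keep
    K randomly chosen rows for that value.
    `rows` is a list of dicts; `phi` is a tuple of column names."""
    rng = random.Random(seed)
    groups = defaultdict(list)
    for r in rows:
        groups[tuple(r[c] for c in phi)].append(r)
    sample = []
    for rows_v in groups.values():
        if len(rows_v) <= K:
            sample.extend(rows_v)                 # rare value: keep all rows
        else:
            sample.extend(rng.sample(rows_v, K))  # frequent value: cap at K
    return sample

# Toy check: value "a" occurs 5 times and is capped at 2; "b" is kept whole.
rows = [{"c": "a"} for _ in range(5)] + [{"c": "b"}]
assert len(stratified_sample(rows, ("c",), 2)) == 3
```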
In particular, $K_i = \lfloor K_1/c^{i} \rfloor$ for $(1 \leq i \leq m)$, and $m = \lfloor {\log}_c K_1 \rfloor$. Thus, the cap of samples in the sequence decreases by a factor of $c$. \noindent \textbf{Properties.} A natural question is how ``good'' a sample family, $SFam(\phi)$, is for a specific query, $Q$, that executes on column set $\phi$. In particular, let $S(\phi, K^{opt})$ be the smallest possible stratified sample on $\phi$ that satisfies the error or response time constraints of $Q$. Since $SFam(\phi)$ contains only a finite number of samples, $S(\phi, K^{opt})$ is not guaranteed to be among those samples. Assume $K_1 \geq K^{opt} \geq \lfloor K_1/c^m \rfloor$, and let $S(\phi, K')$ be the closest sample in $SFam(\phi)$ that satisfies $Q$'s constraints. Then we would like $Q$'s performance when running on $S(\phi, K')$ to be as close as possible to its performance when running on the optimal-sized sample, $S(\phi, K^{opt})$. Then, for $K^{opt} \gg c$ the following two properties hold (see Appendix \ref{sec:analysis} for proofs): \vspace{-0.1in} \begin{enumerate*} \item For a query with response time constraints, the response time of the query running on $S(\phi, K')$ is within a factor of $c$ of the response time of the query running on the optimal-sized sample, $S(\phi, K^{opt})$. \item For a query with error constraints, the standard deviation of the query running on $S(\phi, K')$ is within a factor of $\sqrt{c}$ of the standard deviation of the query running on $S(\phi, K^{opt})$. \end{enumerate*} \vspace{-0.1in} \noindent \textbf{Storage overhead.} Another consideration is the overhead associated with maintaining these samples, especially for heavy-tailed distributions. In Appendix \ref{sec:analysis} we provide numerical results for a Zipf distribution, one of the most common heavy-tailed distributions. Consider a table with $1$ billion tuples and a column set with a Zipf distribution with an exponent of $1.5$.
Then, the storage required by a family of samples $S(\phi, K)$ is only $2.4\%$ of the original table for $K_0=10^4$, $5.2\%$ for $K_0=10^5$, and $11.4\%$ for $K_0=10^6$. These results are consistent with real-world data from Conviva Inc., where for $K_0 = 10^5$, the overheads incurred for sample families on popular columns such as city, customer, and autonomous system number (ASN) are all less than $10\%$.
\section{INTRODUCTION} Autonomous navigation of quadrotors necessitates sensing \cite{huang2017visual,stevens2018vision,sun2018robust,yang2019cubeslam}, planning \cite{camci2019planning,morrell2018comparison,lai2018optimal,camci2019learning}, and control \cite{greeff2018flatness,tal2018accurate,tang2018learning,mehndiratta2018automated}. Current state-of-the-art navigation methods keep these tasks separate: each task is performed by an individual module, which makes modularity easy to attain. Nevertheless, modularity comes at the cost of possible incompatibility, especially in the presence of erroneous modules. An erroneous module in the pipeline could easily cause the other modules to fail as well. Therefore, in this work, the unification of these tasks is attempted within a single, reliable module using deep reinforcement learning (RL) \cite{sampedro2018image,everett2018motion,do2018learning,gschwindt2019can}. The proposed method aims to solve the following navigation problem: A quadrotor is deployed for navigation in a partially known environment. A rough path to the goal position is known, but without any information about the locations of obstacles along it. The quadrotor is supposed to generate local motion plans using this information and its online sensory data for safe and quick navigation. To this end, a deep RL agent is proposed that uses raw depth images from a front-facing camera to generate desirable motion primitive sequences. Particularly, a deep Q-network (DQN) with around 75,000 parameters is trained that takes a raw depth image and relative position information as input and yields a motion primitive selection as output. \section{APPROACH} In RL, it is critical to design the main elements properly: state, action, and reward. Each of these elements is explained in the following subsections and depicted as part of the proposed RL system in Fig.~\ref{overview}. \begin{figure}[t!]
\centering \includegraphics[width=\columnwidth]{overall_rl.png} \caption{Overview of the proposed RL system.} \label{overview} \end{figure} \subsection{State} The state is the agent's situation in the environment. It is designed as a pair of a depth image and relative position information. The depth image has a size of 32$\times$32 and is obtained from the front-facing camera. It is included in the state definition in order to inform the agent about surrounding obstacles. The relative position, on the other hand, is a 3$\times$1 vector. It is calculated by subtracting the quadrotor's position vector from the moving setpoint's position vector, and transforming the resultant vector into the quadrotor's body frame ($x_B$, $y_B$, $z_B$). It is included in the state definition in order to inform the agent about its motion along the rough path towards the goal. \subsection{Action} An action is the move the agent makes at each time step, following some policy to maximize its reward. It is in the form of motion primitives whose basis is formed by B\'{e}zier curves. They are parametric curves based on Bernstein polynomials: \begin{equation} \label{eq_bezier} C:[0,1]\longrightarrow\mathbb{R},\;\;\;C(t) = \sum\limits_{i=0}^{n} \binom{n}{i}P_i B_{n,i}(t), \end{equation} where $P_i$ are control points and $B_{n,i}(t)$ are Bernstein polynomials of $n^{th}$ degree, which are given as: \begin{equation} \label{eq_bernstein} B_{n,i}(t)=\left(1-t\right)^{\left(n-i\right)}t^{i}. \end{equation} Smooth motion primitives in the position domain are generated by utilizing cubic B\'{e}zier curves ($n=3$) for each finite time step. At each time step, the agent selects an action from an action set that consists of 18 different primitives (Fig.~\ref{fig_action_set}). This set is designed so that the agent's motion is biased forward, because its only exteroceptive sensor is a front-facing depth camera. \begin{figure}[t!]
\centering \includegraphics[width=\columnwidth]{primitives.png} \caption{Motion primitives as action set.} \label{fig_action_set} \end{figure} \subsection{Reward} The reward is the signal which assesses the quality of the agent's actions. It is designed based upon the quadrotor's relative motion with respect to the moving setpoint on the initial rough path. It is equal to the change in the Euclidean distance ($\Delta d$) between the moving setpoint and the quadrotor position over a time step, discounted by the Euclidean distance between the moving setpoint and the quadrotor position at the end of the current time step ($d_t$). The reward function also incorporates a basic logic for excessive deviations from the initial path and collisions with obstacles. In this vein, higher rewards are obtained for motions close to the initial path, and lower rewards for motions away from it. Collisions result in a drastic punishment, while excessive deviations from the initial path (those larger than 5m) result in a milder punishment. This reward logic is designed so that the agent learns obstacle avoidance and quick navigation towards the goal at the same time in unknown environments. It is mathematically defined as: \begin{equation} \label{eq_reward} R= \begin{dcases} \frac{R_l}{d_t}, & \text{for}\;\; \Delta d_u < \Delta d \\ \frac{R_l + \left(R_u-R_l\right)\frac{\Delta d_u-\Delta d}{\Delta d_u-\Delta d_l}}{d_t}, & \text{for}\;\; \Delta d_l \leq \Delta d \leq \Delta d_u \\ \frac{R_u}{d_t}, & \text{for}\;\; \Delta d < \Delta d_l \\ R_{dp}, & \text{for excessive deviation,} \\ R_{cp}, & \text{for collision.} \end{dcases} \end{equation} The variables $R_l$ and $R_u$ are the reward boundaries, equal to 0 and 0.5, respectively. The terms $\Delta d_l$ and $\Delta d_u$ are the reward saturation bounds on $\Delta d$, equal to -1m and 1m, respectively. The term $R_{dp}$ is the excessive deviation punishment and is equal to -0.5. 
The term $R_{cp}$ is the collision punishment and is equal to -1. Most of these parameters are determined by trial and error. They can be flexibly modified based upon the case study of interest. \subsection{Algorithmic Details} The idea behind many RL algorithms is to estimate the action-value function $Q(s,a)$. This function can become intricate in the case of real robots due to the extremely large number of state-action pairs. Therefore, function approximators have emerged as a common choice to estimate the action-value function while adding a certain level of generalization \cite{mnih2015human}. In this work, a DQN is utilized for estimating the complex action-value function, which is given as \cite{mnih2015human}: \begin{equation} \label{eq_qfunction} Q(s,a)=\mathbb{E} \left(R_{t+1}+\gamma \displaystyle \mathop{\max}_{a'} Q(s_{t+1},a') \big| s_t=s, a_t=a \right) \end{equation} where $s_t$ is the current state of the agent at time step $t$. The term $a_t$ is the action taken by the agent. The term $s_{t+1}$ is the next state that the agent reaches after taking the action $a_t$. The term $R_{t+1}$ is the reward that the agent receives as a result of its action. The term $\gamma$ is the discount factor, which determines the present value of future rewards \cite{sutton1998reinforcement}. The DQN considered in this work is a combination of convolutional and fully connected neural networks which fuses two different inputs in two main lanes. The first lane uses hierarchical layers of convolutional filters to reason about correlations between local spatial portions of the depth image, while the second lane utilizes fully connected layers to incorporate the relative position information. The architecture is depicted in detail in Fig.~\ref{fig_nn_arch}. The first convolutional layer in the first lane takes a 32$\times$32 raw depth image as input and convolves it with 8 filters of size 10$\times$10 with stride 2. It then applies a rectified linear unit (ReLU). 
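As a concrete aside, the piecewise reward of Eq.~(\ref{eq_reward}) can be sketched in a few lines of Python. This is only a schematic using the parameter values quoted above, not the authors' implementation:

```python
# Schematic of the piecewise reward in Eq. (3); the parameter values are
# the ones quoted in the text.  Illustration only, not the authors' code.
R_L, R_U = 0.0, 0.5          # reward boundaries R_l, R_u
DD_L, DD_U = -1.0, 1.0       # saturation bounds on Delta d (m)
R_DP, R_CP = -0.5, -1.0      # deviation / collision punishments

def reward(delta_d, d_t, deviated=False, collided=False):
    """delta_d: change in distance to the moving setpoint over one step;
    d_t: distance to the setpoint at the end of the step (d_t > 0)."""
    if collided:
        return R_CP
    if deviated:                      # deviation from the path > 5 m
        return R_DP
    if delta_d > DD_U:                # moved away too fast: lower bound
        return R_L / d_t
    if delta_d < DD_L:                # closed in quickly: upper bound
        return R_U / d_t
    # linear interpolation between the two bounds, discounted by d_t
    return (R_L + (R_U - R_L) * (DD_U - delta_d) / (DD_U - DD_L)) / d_t
```

Note how the $1/d_t$ discount makes the same distance change worth more when the quadrotor is already close to the moving setpoint.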
The second and third convolutional layers use 16 filters of size 6$\times$6 and 32 filters of size 3$\times$3 with stride 1, respectively. They also apply ReLU after convolution. Subsequently, a fully connected layer of size 800 takes the output of the third convolutional layer and feeds it to a following fully connected layer of size 64. In the second lane, the relative position of the moving setpoint with respect to the quadrotor is fed to fully connected layers in three sub-lanes with sizes 16\footnote{The first sub-lane is designed to have a larger layer size compared to the other two in order to put more emphasis on the relative position information in $x_B$. This information is possibly more important for successful navigation to the goal, considering the forward-biased motion of the quadrotor.}, 8, 8. These are then fully connected to a single layer of size 16. Subsequently, the outputs of the first lane (64) and the second lane (16) are concatenated and fed to the final fully connected layers of sizes 64, 32, and 18. All of the fully connected layers in this DQN structure apply ReLU except for the last one, which yields the Q-value estimate for each motion primitive. This network is trained using the Huber loss \cite{huber1964robust} with the Adam optimizer \cite{kingma2014adam} in PyTorch with the default settings. \begin{figure*}[t!] \centering \includegraphics[width=\textwidth]{dqn.png} \caption{DQN architecture composed of convolutional and fully connected layers.} \label{fig_nn_arch} \end{figure*} \section{EXPERIMENTS} The experiments for validating the method consist of training and testing stages. First, the agent is trained in seven different environments in AirSim in order to acquire diversified experience. Then, it is tested in these seven environments as well as in three other environments which are unseen during the training stage. 
Lastly, the agent is also deployed in preliminary real flight tests to demonstrate the applicability of the proposed method to real vehicles. \subsection{Training in AirSim} The training environments in AirSim are depicted in Fig.~\ref{airsim_training}. The environments are diversified in terms of their complexity, from an obstacle-free environment (Env. 1) and obstacle-free corridors with different widths (Env. 2, Env. 3) to left-right (Env. 4, Env. 5) and up-down (Env. 6, Env. 7) slalom environments. All of these environments are merged in a single AirSim session, and they are visited by the agent randomly. Merging them is particularly helpful because it improves the diversity of data samples during learning, enhancing sample efficiency. Since the main aim of this work is to develop a simple RL system with relatively early convergence for quadrotor navigation, high sample efficiency is substantially useful. In the same vein, it is also attempted to avoid the common need in RL for a large number of interactions, because high-fidelity AirSim simulations run in real time. In AirSim, it may take weeks or months to obtain the amount of data that accelerated, simple simulation environments could provide in a single day. The convergence pace in RL depends on many factors, such as data randomization, network size and architecture, $\varepsilon$, $\gamma$, the learning rate of the back-propagation algorithm used for the DQN, etc. For the sake of brevity, only a single set of hyperparameters is considered, which was observed to work well throughout the training trials in this work. The minimum number of episodes required for convergence is determined by fixing the network structure, $\varepsilon$, and $\gamma$ for five training sessions consisting of 100, 200, 500, 1000, and 2000 episodes. 
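The $\varepsilon$ and $\gamma$ schedules used in these sessions are simple linear ramps over the first 80\% of episodes, after which the parameters are held constant; a minimal helper sketch (illustrative only, not the authors' code):

```python
# Linear parameter schedules: epsilon decays 1.0 -> 0.1 and gamma rises
# 0.01 -> 0.99 over the first 80% of episodes, then both are held constant.
# Illustrative helper, not the authors' implementation.
def linear_schedule(episode, total_episodes, start, end, ramp_fraction=0.8):
    ramp = int(total_episodes * ramp_fraction)
    if episode >= ramp:
        return end                    # held constant for the last 20%
    return start + (end - start) * episode / ramp

epsilon = linear_schedule(800, 2000, start=1.0, end=0.1)   # exploration rate
gamma = linear_schedule(800, 2000, start=0.01, end=0.99)   # discount factor
```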
In each session, a linear $\varepsilon$ decay from 1.0 to 0.1 and a linear $\gamma$ gain from 0.01 to 0.99 are utilized for the first 80\% of the total episodes, while these parameters are kept constant for the remaining 20\%. \begin{figure}[t!] \includegraphics[width=\columnwidth]{training_envs.png} \caption{Training environments: obstacle-free, corridor, and slalom tracks.} \label{airsim_training} \end{figure} The average rewards obtained in each session are depicted in Fig.~\ref{average_rewards}. The average reward increases with the maximum number of episodes, since the agent has more chances to interact with the environment and learn. During the first training session with 100 episodes, which takes around 1-2 hours, the agent can barely reach positive average reward values; learning is not visible in this case. In the second session with 200 episodes, a similar trend is observed, without a visible increase in average rewards. In the third case with 500 episodes, the reward increase starts to become visible, but there are still relatively low values towards the end. In the fourth and fifth cases, with 1000 and 2000 episodes respectively, the increasing reward trend is quite prominent. Final average rewards are higher in these cases, with the agent trained for 2000 episodes performing slightly better. The maximum number of episodes is limited to 2000 because training takes around 20-22 hours in this case, beyond which the main motivation of this work (a practical RL system with early convergence) would be lost. \begin{figure}[t!] 
\centering \begin{minipage}{0.49\columnwidth} \centering \includegraphics[width=\columnwidth]{reward100} \end{minipage} \begin{minipage}{0.49\columnwidth} \centering \includegraphics[width=\columnwidth]{reward200} \end{minipage} \begin{minipage}{0.49\columnwidth} \centering \includegraphics[width=\columnwidth]{reward500} \end{minipage} \begin{minipage}{0.49\columnwidth} \centering \includegraphics[width=\columnwidth]{reward1000} \end{minipage} \begin{minipage}{0.49\columnwidth} \centering \includegraphics[width=\columnwidth]{reward2000} \end{minipage} \caption{Average rewards for five different training sessions with maximum numbers of episodes of 100, 200, 500, 1000, 2000.} \label{average_rewards} \end{figure} \subsection{Testing in AirSim} In this section, the performance of the agent trained for 2000 episodes is evaluated through navigation from the start to the goal in ten different environments in AirSim. The first seven environments are the same as in the training sessions, while the last three environments are previously unseen; they are depicted in Fig.~\ref{airsim_test}. While the former group examines whether the agent has learned a desirable policy within the training environments, the latter group challenges the agent in terms of generalization to different environments. They consist of obstacles of different shapes and sizes, and corridors with different widths but the same length\footnote{Tracks with the same straight length are considered for consistent benchmarking. However, the proposed method can easily be employed in different environments with different lengths of straight lines, a concatenation of straight lines, or turns. Once intermediate goal points are given to create straight path segments as initial rough paths, it is trivial to utilize the method in these environments, since the underlying RL system is designed with respect to the quadrotor's body frame.} of 60m. \begin{table}[t!] 
\renewcommand{\tabularxcolumn}[1]{>{\centering}m{#1}} \caption{Evaluative test results in AirSim.} \label{test_in_envs} \centering \begin{tabular}{ccccc} \hline\noalign{\smallskip} \multirow{2}{*}{\textbf{Env.$\#$}} & \textbf{Navigation} & \textbf{Navigation} & \multirow{2}{*}{\textbf{Crash}} & \textbf{Total} \\ & \textbf{distance (m)} & \textbf{time (s)} & & \textbf{reward}\\ \noalign{\smallskip}\hline\noalign{\smallskip} \multirow{5}{*}{\textbf{Env.1}} & 60.00 & 57 & N & 12.15\\ & 60.00 & 57 & N & 11.89 \\ & 60.00 & 57 & N & 10.94 \\ & 60.00 & 58 & N & 12.66 \\ & 60.00 & 57 & N & 11.51 \\ \hline\noalign{\smallskip} \multirow{5}{*}{\textbf{Env.2}} & 60.00 & 58 & N & 11.93\\ & 60.00 & 57 & N & 12.49 \\ & 60.00 & 58 & N & 12.43 \\ & 60.00 & 57 & N & 12.28 \\ & 60.00 & 58 & N & 11.83 \\ \hline\noalign{\smallskip} \multirow{5}{*}{\textbf{Env.3}} & 60.00 & 58 & N & 10.35\\ & 60.00 & 58 & N & 11.93 \\ & 60.00 & 57 & N & 12.16 \\ & 60.00 & 58 & N & 11.96 \\ & 60.00 & 58 & N & 12.14 \\ \hline\noalign{\smallskip} \multirow{5}{*}{\textbf{Env.4}} & 60.00 & 79 & N & 3.82\\ & 60.00 & 71 & N & 4.44 \\ & 48.85 & 120 & N & 3.61 \\ & 29.96 & 42 & Y & 1.32 \\ & 28.15 & 33 & Y & 1.25 \\ \hline\noalign{\smallskip} \multirow{5}{*}{\textbf{Env.5}} & 60.00 & 78 & N & 3.41\\ & 60.00 & 75 & N & 3.58 \\ & 60.00 & 71 & N & 3.53 \\ & 21.35 & 28 & Y & 0.51 \\ & 48.57 & 120 & N & 3.39 \\ \hline\noalign{\smallskip} \multirow{5}{*}{\textbf{Env.6}} & 60.00 & 61 & N & 4.85\\ & 44.44 & 120 & N & 4.35 \\ & 60.00 & 70 & N & 4.46 \\ & 60.00 & 61 & N & 4.74 \\ & 60.00 & 61 & N & 4.81 \\ \hline\noalign{\smallskip} \multirow{5}{*}{\textbf{Env.7}} & 50.42 & 120 & N & 3.26\\ & 41.51 & 58 & Y & 0.53 \\ & 15.90 & 77 & Y & 0.14 \\ & 60.00 & 95 & N & 3.27 \\ & 47.33 & 120 & N & 3.04 \\ \hline\noalign{\smallskip} \multirow{5}{*}{\textbf{Env.8}} & 60.00 & 59 & N & 7.69\\ & 60.00 & 59 & N & 7.61 \\ & 45.47 & 44 & Y & 5.82 \\ & 45.39 & 45 & Y & 4.52 \\ & 60.00 & 58 & N & 9.01 \\ \hline\noalign{\smallskip} 
\multirow{5}{*}{\textbf{Env.9}} & 60.00 & 57 & N & 9.24\\ & 60.00 & 58 & N & 9.31 \\ & 60.00 & 57 & N & 8.78 \\ & 60.00 & 57 & N & 8.24 \\ & 60.00 & 58 & N & 8.60 \\ \hline\noalign{\smallskip} \multirow{5}{*}{\textbf{Env.10}} & 60.00 & 61 & N & 5.35\\ & 60.00 & 58 & N & 5.29 \\ & 60.00 & 61 & N & 4.95 \\ & 60.00 & 60 & N & 5.35 \\ & 60.00 & 60 & N & 5.88 \\ \hline \end{tabular} \end{table} \begin{figure}[t!] \centering \includegraphics[width=0.9\columnwidth]{test_envs.png} \caption{Test environments with obstacles of different shapes and sizes.} \label{airsim_test} \end{figure} The agent is tested in each environment five times to obtain generalized evaluative results. Table~\ref{test_in_envs} reports the reward, navigation distance, navigation time, and crash rate. Although the initial rough path in these environments (a straight path of 60m from the start to the goal) can be navigated within 60s in the obstacle-free configuration, the maximum number of time steps is set to 120 during the testing stage in order to account for the desired deviations from the path for obstacle avoidance. In the first three environments, the agent is able to learn a policy which is close to optimal. Successful navigation from the start to the goal is observed in every trial in these relatively simple environments. The navigation time performance is also desirable, being close to 60s at the 1m/s speed setting. Had the quadrotor's common position controllers been used without any local motion planning, they would have taken a similar amount of time to follow the path from the start to the goal. The crash rate is 0$\%$ in these environments, even in the narrow corridor of Env. 3. Therefore, the agent's performance is favorable for the first group. In the next four environments, the agent's performance is slightly degraded by the added complexity. Still, the agent successfully reaches the goal position in 67\% of the trials. 
In terms of safety, the agent completes 73\% of the trials without a crash in these dense environments. Moreover, there is not a single head-on crash. All the crashes occur when the quadrotor moves sideways or up-down without going forward and obstacles are out of sight of the front-facing camera. This limitation of the proposed method results from the lack of a history of depth images (which can be part of the future work). On the other hand, there are some trials, such as the third one in Env. 4, in which the agent cannot reach the goal position but does not make decisions that might cause a crash either. It simply keeps its position around a particular obstacle-free portion of the environment in order to maintain safety. Overall, the agent's performance is promising considering the density of these environments. In the last three environments, the agent yields better performance compared to the former group. It successfully reaches the goal position in 87\% of the trials. The other 13\% corresponds to two trials in Env. 8, in which a large sphere blocks the narrow corridor towards the end. The agent cannot avoid it fully, and the landing gear touches the sphere in these two trials. Apart from these two incidents, the agent is able to avoid obstacles of different shapes and sizes successfully and reach the goal position within a decent navigation time. Therefore, the agent's performance can be considered desirable in terms of generalizing to previously unseen environments. \subsection{Testing in Real Flights} For the real flight tests, the agent trained for 2000 episodes is deployed directly on a DJI F330 quadrotor equipped with a PX4 FMU flight controller, an Nvidia Jetson TX2 companion computer, and an Intel RealSense D435 exteroceptive sensor. All the planning code runs on the TX2, utilizing its GPU for the DQN in PyTorch with the CUDA option. 
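A generic sketch of how such a trained policy can be queried onboard at each time step is given below; the network class and the sensor-access details are hypothetical placeholders, not APIs from this work or from PX4/RealSense:

```python
# Greedy primitive selection with a trained PyTorch DQN.  "model" is any
# hypothetical torch.nn.Module taking (depth image, relative position) and
# returning 18 Q-values; this is an illustrative sketch, not flight code.
import torch

def select_primitive(model, depth_image, rel_position, device="cuda"):
    """Return the index of the motion primitive with the highest Q-value."""
    model.eval()
    with torch.no_grad():
        depth = torch.as_tensor(depth_image, dtype=torch.float32,
                                device=device).view(1, 1, 32, 32)
        rel = torch.as_tensor(rel_position, dtype=torch.float32,
                              device=device).view(1, 3)
        q_values = model(depth, rel)          # shape (1, 18)
        return int(q_values.argmax(dim=1).item())
```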
The D435 provides live depth images to the DQN, while the PX4 is responsible for the low-level control. The odometry information is obtained using a motion capture system. Two different environments are created for the real flights: one obstacle-free and the other with two large rectangular obstacles (Fig.~\ref{real_trajectory_results}). Since the lab space is limited to a roughly 3m by 3m area, the initial path of 3.5m is set diagonally. In order not to cause dangerous movements of the quadrotor, the approximate speed of the vehicle is set to 0.5m/s. Accordingly, the length of the motion primitives is decreased to a maximum movement of 0.5m in each axis. The maximum number of time steps is set to 30 in order to account for both the desired deviations from the initial path and the decreased length of the primitives. The rest of the parameters are exactly the same as in AirSim. Table~\ref{real_test_in_envs} summarizes the real flight test results. In the first environment, without obstacles, the agent's performance is desirable. There is not a single crash, and the agent reaches the goal point with 100\% success out of five trials, as can be seen in Table~\ref{real_test_in_envs}. The navigation time is relatively high compared to the results in AirSim, though. This is due to the motion primitive selections, which cause deviations from the initial path generally in the z axis, and to the control ability of the quadrotor in the real flights. Regarding the former, since the agent gets the depth image from the D435, which yields relatively noisy data compared to AirSim, the agent's task is more difficult in the real flights; it has to conduct sufficient reasoning by handling these relatively unclear images implicitly. As regards the latter, the PX4's position controller with default parameters for the DJI F330 quadrotor is used in order to execute the motion primitive sequences governed by the DQN. 
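The primitive execution just described amounts to sampling setpoints along a cubic B\'{e}zier curve whose control points are scaled to the 0.5m reach; a minimal sketch of Eqs.~(\ref{eq_bezier})-(\ref{eq_bernstein}), where the control-point values are made-up placeholders:

```python
# Evaluate a cubic Bezier motion primitive per axis:
#   C(t) = sum_i binom(n, i) * P_i * (1 - t)^(n - i) * t^i
# The control points below are made-up placeholders, scaled so that no
# axis exceeds the 0.5 m reach used in the real flights.
from math import comb

def bezier(control_points, t, n=3):
    """control_points: n+1 points as (x, y, z) tuples; returns C(t) per axis."""
    return [sum(comb(n, i) * axis[i] * (1 - t) ** (n - i) * t ** i
                for i in range(n + 1))
            for axis in zip(*control_points)]

pts = [(0.0, 0.0, 0.0), (0.2, 0.0, 0.0), (0.35, 0.1, 0.0), (0.5, 0.2, 0.0)]
setpoints = [bezier(pts, k / 10) for k in range(11)]  # discretized setpoints
```

Since a B\'{e}zier curve stays within the convex hull of its control points, scaling the control points is enough to bound the primitive's reach per axis.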
Compared to the ideal conditions in AirSim, where delay-free, perfect odometry information and a precise controller for executing the high-level commands are available without any motor-ESC-propeller efficiency issues, the control ability in real flights is obviously lower. Still, the agent performs adequate end-to-end reasoning and completes the task with 100\% success in the first environment. In the second environment, with two large rectangular obstacles, the agent's performance is degraded in terms of navigation to the goal: it cannot reach the goal position in any of the five trials. The furthest navigation point on the initial rough path is recorded as 1.55m, in the third trial, as can be seen in Table~\ref{real_test_in_envs}. On the other hand, the agent yields safe flights in 80\% of the trials in this environment. Only in the second trial does it crash into the first obstacle. Again, it is not a head-on crash; the propeller just touches the obstacle from the side when the obstacle is out of sight of the D435. In the other four trials, the agent avoids the first obstacle but then maintains its location in obstacle-free portions of the environment. Therefore, the agent's response in this environment can be considered conservative. \begin{table}[t!] 
\renewcommand{\tabularxcolumn}[1]{>{\centering}m{#1}} \caption{Evaluative test results in real flights.} \label{real_test_in_envs} \centering \begin{tabular}{ccccc} \hline\noalign{\smallskip} \multirow{2}{*}{\textbf{Env.$\#$}} & \textbf{Navigation} & \textbf{Navigation} & \multirow{2}{*}{\textbf{Crash}} & \textbf{Total} \\ & \textbf{distance (m)} & \textbf{time (s)} & & \textbf{reward} \\ \hline\noalign{\smallskip} \multirow{5}{*}{\textbf{Env.1}} & 3.50 & 21 & N & 0.44\\ & 3.50 & 18 & N & 0.46 \\ & 3.50 & 21 & N & 0.44 \\ & 3.50 & 19 & N & 0.43 \\ & 3.50 & 22 & N & 0.43 \\ \hline\noalign{\smallskip} \multirow{5}{*}{\textbf{Env.2}} & 0.65 & 30 & N & 0.34\\ & 1.03 & 15 & Y & -0.70 \\ & 1.55 & 30 & N & 0.32 \\ & 1.38 & 30 & N & 0.34 \\ & 0.80 & 30 & N & 0.37 \\ \hline \end{tabular} \end{table} \begin{figure}[t!] \centering \begin{minipage}{0.49\columnwidth} \centering \includegraphics[width=\columnwidth]{Env1_real.png} \end{minipage} \begin{minipage}{0.49\columnwidth} \centering \includegraphics[width=\columnwidth]{Env2_real.png} \end{minipage} \caption{Real flight environments. \textbf{Left:} Environment without obstacles. \textbf{Right:} Environment with two large rectangular obstacles.} \label{real_trajectory_results} \end{figure} \section{DISCUSSION and FUTURE WORK} As can be seen from the results, the proposed method demonstrates sufficient and promising performance for quadrotor navigation in partially known environments. The agent yields fairly safe flights, with a collision-free flight percentage of 86\% over the trials in AirSim. This number is 90\% for the real flights, in relatively simpler environments. It reaches the goal point successfully in 88\% of these safe flights in AirSim, while this percentage is 56\% for the real flights. Moreover, the method generalizes to previously unseen environments, which is particularly important for demonstrating possible employability in a broad class of environments. 
Overall, this work serves as a proof of concept for employing the proposed end-to-end RL-based motion planning system for quadrotor navigation in dense environments. It only includes specific case studies for brevity. Future work will include a comprehensive benchmarking study by considering a wider set of hyperparameters, different network structures, and more extensive experimental tests. Although the preliminary real flight test results demonstrate relatively safe navigation, the ultimate aim is to enhance the proposed method's performance on navigation to the goal, and present a simple and useful tool for the robotics community. \section*{ACKNOWLEDGMENT} We would like to acknowledge the Air Lab members Rogerio Bonatti, Wenshan Wang, and Sebastian Scherer at Carnegie Mellon University, PA, USA for substantially helpful discussions. This work is financially supported by the Singapore Ministry of Education (RG185/17). \addtolength{\textheight}{-12cm} \bibliographystyle{IEEEtran}
\section{INTRODUCTION} Kaon condensation and its implications for neutron stars at low temperature have been extensively studied for years \cite{lee}. In 1994 Brown and Bethe proposed the low-mass black hole (BH) scenario, based on the large softening of the EOS due to kaon condensation \cite{bro}. The BH is produced as a consequence of the {\it delayed collapse} of a protoneutron star, different from the usual BH formation. This scenario should be very attractive in light of recent observations of neutron-star masses, SN1987A, and future observations of neutrinos associated with supernova explosions. Some numerical simulations based on general relativity have already been performed for the delayed collapse \cite{bau,pon}. In ref.~\cite{bau} the delayed collapse was studied using the EOS of the kaon-condensed phase at $T=0$, even though the temperature is greatly increased there. Neutrino trapping is another important factor to be considered for protoneutron stars. There have been some attempts to treat thermal fluctuations in relation to kaon condensation \cite{pra}, but there seems to be no successful theory based on chiral symmetry. Recently we have proposed a formalism to treat this problem \cite{tat}, in relation to protoneutron stars. Here we briefly explain how our formalism gives the thermodynamic potential, and show some results for the EOS and the structure of protoneutron stars. \section{THERMODYNAMIC POTENTIAL} The kaon-condensed state can be described as a chiral-rotated one from the meson vacuum \cite{tatb}; actually, we can discuss kaon condensation in an almost model-independent way within the mean-field approximation. However, if we intend to study the phenomenon further by taking into account the effect of fluctuations, it is useful to invoke an effective Lagrangian such as the nonlinear sigma model. 
\subsection{Path integral} We start with the partition function $Z_{chiral}$ for the nonlinear sigma model ${\cal L}_{chiral}$, \begin{equation} Z_{chiral}=N\int [dU][dB][d\bar B] \exp[S_{chiral}^{eff}], \label{aa} \end{equation} with the effective action, \begin{equation} S_{chiral}^{eff}=\int_0^\beta d\tau\int d^3x \left[{\cal L}_{chiral}(U, B)+\delta{\cal L}(U, B)\right], \end{equation} where $\delta{\cal L}(U, B)$ is the newly-appeared symmetry-breaking (SB) term due to the introduction of chemical potentials \cite{tat}. In evaluating the integral (\ref{aa}), we introduce a local coordinate around the condensate on the chiral manifold, which is equivalent to the following parametrization \cite{tat}, \begin{equation} U\equiv \xi^2=\zeta U_f\zeta(\xi=\zeta U_f^{1/2} u^\dagger=u U_f^{1/2}\zeta), \quad \zeta=\exp(\sqrt{2}i\langle M\rangle/f), \end{equation} where $\langle M\rangle$ is the condensate, $\langle M\rangle=V_+\langle K^+\rangle+V_-\langle K^-\rangle$, with $K^{\pm}=(\phi_4\pm i\phi_5)/\sqrt{2}$, $\theta^2\equiv 2K^+K^-/f^2$ and $V_\pm=F_4\pm iF_5$, while $U_f =\exp[2iF_a\phi_a/f]$ denotes the fluctuation field. Accordingly, defining a new baryon field $B'$ by way of $ B'=u^\dagger B u, $ we can see that \begin{eqnarray} {\cal L}_{chiral}(U, B)={\cal L}_0(U, B)+{\cal L}_{SB}(U, B)&\longrightarrow& {\cal L}_0(U_f, B')+{\cal L}_{SB}(\zeta U_f\zeta,u B' u^\dagger)\nonumber\\ \delta{\cal L}(U, B)&\longrightarrow&\delta{\cal L}(\zeta U_f\zeta,u B' u^\dagger) \end{eqnarray} Thus all the dynamics of kaons and baryons in the condensed phase is completely prescribed by the terms ${\cal L}_{SB}, \delta{\cal L}$ that are non-invariant under chiral transformations; it is to be noted that the meson mass is included in ${\cal L}_{SB}$ and the Tomozawa-Weinberg term in $\delta{\cal L}$. 
\subsection{Dispersion relation} The effective action for the kaon-nucleon sector can be represented as \begin{equation} S^{eff}_{chiral}=S_c+S_K+S_N+S_{int}, \end{equation} where $S_c$ is the previous classical-kaon action \cite{lee} and $S_N$ the nucleon action discarded in the HBL. The sum $S_K+S_{int}$ gives the effective kaon action, \begin{equation} S_K^{eff}=-\frac{1}{2}\sum_{n, {\bf p}}(\phi_4(-n,-{\bf p}), \phi_5(-n,-{\bf p})) D^{eff}\left( \begin{array}{c} \phi_4(n,{\bf p})\\ \phi_5(n,{\bf p}) \end{array} \right) +..., \end{equation} with the inverse thermal Green function $D^{eff}$. Looking for the zeros of $D^{eff}$, we find two solutions $E_\pm$; $E_-$ corresponds to the Goldstone mode and exhibits the Bogoliubov spectrum, \begin{equation} E_-^2\sim \frac{c_3}{2C^2}p^2+\frac{p^4}{4C^2}+... \end{equation} where $C$ denotes an effective mass for kaons and $c_3$ the product of the charge density and the $KK$ scattering length \cite{tat}. We shall see the importance of the thermal kaon loops due to the Goldstone mode. The origin of this Goldstone mode is easily understood by observing that the kaon-condensed state is no longer invariant with respect to the $V$-spin rotation, while the effective Lagrangian is still invariant. In other words, we can schematically say that the newly-appeared SB term $\delta{\cal L}$ completely cancels the original SB term ${\cal L}_{SB}$, and gives rise to a {\it new} spontaneous symmetry breaking (SSB) instead. \section{RESULTS} First we show the EOS thus obtained for the isothermal and isentropic cases (Fig.~1). \begin{figure}[htb] \epsfxsize=1.0\textwidth \epsffile{3figu.eps} \caption{Sketch of EOS and graphical construction for the equilibrium curve in the isothermal case (a). 
EOS for the isothermal case (b) and the isentropic case (c).} \label{fig:largenenough} \end{figure} Since it exhibits a first-order phase transition (FOPT), we adopt, for simplicity, the Maxwell construction by connecting the equal-pressure densities for the normal $(N)$ and condensed $(K)$ states. This means that there is no coexistent phase for the isothermal matter. On the other hand, the coexistent phase appears in the isentropic matter due to the variation of temperature from density to density. The magnitude of the FOPT is so large that we shall see the existence of a gravitationally unstable region in the branch of neutron stars (see Fig. 2(b)). The effect of thermal fluctuations should be self-evident there. For protoneutron stars, the isentropic situation should be more relevant. In Fig.~2(a) we present the temperature profile inside protoneutron-star matter for the entropies $s=1, 2$. It is a monotonically increasing function of density, and takes a maximum value of several tens of MeV at the center. The difference from the no-kaon case is rather small. In Fig.~2(b) we depict the mass-radius relation for protoneutron stars with isentropic structure. The branch of protoneutron stars is clearly separated into two parts by the gravitationally unstable region. We can see that thermal effects are insufficient to support a higher mass; on the contrary, the maximum mass is slightly reduced at $T\neq 0$. \begin{figure}[htb] \begin{minipage}{0.49\textwidth} \epsfsize=0.95\textwidth \epsffile{usr.eps} \end{minipage}% \hfill~% \begin{minipage}{0.49\textwidth} \epsfsize=0.95\textwidth \epsffile{MRr.eps} \end{minipage}% \caption{(a)Temperature profile of the kaon-condensed matter. (b)Mass-radius relation for protoneutron stars (solid lines). The positive-slope region corresponds to the gravitationally unstable branch. The symbols $N, C, K$ correspond to those in Fig. 1(a). 
} \label{fig:toosmall} \end{figure} \section{SUMMARY AND CONCLUDING REMARKS} A systematic formulation to include fluctuations around the condensate is presented by introducing a local coordinate on the chiral manifold. This procedure makes the aspect of chiral rotation prominent for kaon condensation. Using it we obtained the dispersion relations for the kaonic modes; one is the Goldstone mode, a consequence of the SSB of the $V$-spin symmetry, while the other is a very massive mode. The EOS is obtained, in the HBL, for the isothermal and isentropic cases, where the role of thermal kaons should be noted. The EOS exhibits a FOPT, and it might be interesting to see the appearance of the coexistent phase in the isentropic case. On the other hand, the temperature profile is little changed from that in no-kaon matter. These results may be relevant for the delayed collapse during the initial cooling era, where neutrinos are no longer trapped. Hence it is interesting to observe, through dynamical simulations, how these results affect the collapsing process and the profile of the neutrino luminosity. The maximum mass of protoneutron stars is around $1.6M_\odot$ and is not larger than that of cold neutron stars in this calculation, which suggests that a more elaborate study, including nucleon dynamics and neutrino trapping, is needed to observe the delayed collapse.
\section{Introduction} \label{sec:intro} A fundamental goal in neuroscience is to understand how information is represented, stored and modified in cortical networks. New experimental methods in neuroscience not only enable chronic, minimally invasive, recordings of large populations of neurons with cellular level resolution, but also allow recordings from identified neuronal subtypes~\cite{Shepherd2013}. The ability to acquire complex large-scale detailed behavioral and neuronal datasets calls for the development of advanced data analysis tools, as commonly used techniques do not suffice to capture the spatio-temporal network complexity. Such a framework should deal effectively with the challenging characteristics of neuronal and behavioral data, namely connectivity structures between neurons and dynamic patterns at multiple time-scales. Due to natural and physical constraints, the accessible high-dimensional data often exhibit geometric structures and lie on a low-dimensional manifold. Manifold learning is a class of data driven methods; these methods aim to find meaningful geometry-based non-linear representations that parametrize the manifold underlying the data~\cite{Tenenbaum:2000, Roweis:2000, Belkin2003, Donoho2003,Coifman2006}. Only very recently have we begun to witness seeds of its applicability to real biological data, and, in particular, to neuroscience (e.g.,~\cite{Vogelstein2014,Cunningham2014}). Yet, most existing manifold learning methods are unable to deal with the complex data sets arising in neuroscience, since they do not account for several fundamental characteristics of the structures and patterns underlying such data. First, current methods are sensitive to noise and interferences. Second, to a large extent, they do not accommodate the dynamical patterns underlying the neuronal activity. Third, manifold learning does not take into account co-dependencies between neuronal connectivity structures and dynamical patterns. 
Previous work has addressed analysis of data exhibiting such co-dependencies. To exemplify the generality of such a problem, consider the Netflix Prize~\cite{Bennett2007}, where it is desired to provide systematic suggestions and recommendations to viewers. A co-organization enables one both to group viewers together based on their similar tastes and, at the same time, to group movies together based on their similar ratings across viewers. This clustering of viewers or of movies can be highly dependent on the particular viewer, and on the particular movie; two viewers may be similar under one metric, since they both like similar adventure movies, but, at the same time, quite different under another, since they do not like the same comedies. Thus, the suggestion system needs different metrics for recommending different types of movies to different viewers. Data arising in such settings can be viewed as a 2D matrix, where in the Netflix Prize the first dimension is the viewers (observations) and the second is the movies (variables). The need for matrix co-organization arises when observations are not independent and identically distributed, i.e., correlations exist among both observations and variables of the data matrix. Similar settings also arise in analysis of documents, psychological questionnaires, gene expression data, etc., where there is no particular reason to prefer treating one dimension as independent, while the other is treated as dependent~\cite{Cheng2000, Tang2001, Busygin2008, Chi2014}. To address problems of this sort, Gavish and Coifman~\cite{Gavish2012} proposed a methodology for matrix organization relying on the construction of a tree-based bi-organization of the data. 
The analysis of natural data poses an even greater challenge, since such data may also depend on a massive number of marginally relevant variables, including distortions and unrelated measurements, requiring metrics that are not sensitive to such variability and that are capable of suppressing noise or irrelevant factors. In particular, it is insufficient to represent neuronal activity recordings that were acquired in repetitive trials as a 2D matrix; representing observations (time samples in one dimension) and variables (neurons in the other dimension) does not take into account the multiple scales of the time samples, since time exhibits both local (within trial) and global (across trials) scales. Therefore, the data are viewed as a 3D database whose dimensions are the neurons, the local time frames and the global trial indices. In this paper, to accommodate the three-dimensional nature of this data, we extend~\cite{Gavish2012} to a triple-geometry analysis, obtaining a nonparametric model for data tensors. We propose a completely data-driven analysis of a given rank-3 tensor that provides a co-organization of the data, i.e., each of the dimensions is organized so that it varies smoothly along the other two dimensions. Specifically, we focus on trial-based neuronal data; however, our approach is general and can be used to analyze other types of 3D datasets. In addition to the challenge of organizing the data, applying manifold learning methods requires a ``good'' metric between points, which conveys local similarity, as in the Netflix example. Regular metrics do not perform well due to the high dimensionality and hierarchical structure of the trial-based data, as well as the inability of such metrics to encompass the 3D nature of the data. 
For example, using the Euclidean distance or cosine similarity between two sensors treats the neuronal recordings as a 1D vector, and does not take into account the combined local and global nature of the trial-based experiments. Using more sophisticated metrics such as the Mahalanobis or PCA-based distances proposed in~\cite{Singer2008, Talmon2013, Talmon2014,Haddad2014,Mishne2014b} requires a notion of locality, which is non-trivial in the given application, as the data do not necessarily follow a regular Euclidean 3D grid. Therefore, we also address the problem of defining a new multiscale metric that incorporates the coupling between the dimensions based on the hierarchical structure of the data in each dimension. Broadly, our focus is on finding a good description of the data; our analysis enables us to build intrinsic data-driven multiscale hierarchical structures. In particular, our analysis builds three types of data structures, conveying a local-to-global representation, from hierarchical clustering of the data to a multiscale metric to a global embedding. These three structures are constructed in an iterative refinement procedure for each dimension, based on the other two dimensions. Thus, we exploit the coupling between the dimensions. At the micro-scale, we learn a multiscale organization of the data, so that each point is organized in a bottom-up hierarchical structure, using a partition tree. Thus, each point belongs to a set of nested folders, where each folder defines a coarser and coarser sense of locality/neighborhood. At the intermediate scale, the hierarchical organization of the data is then used to define a novel 2D multiscale metric between points. This metric enables us to organize each dimension based on a coarse-to-fine decomposition of the other two dimensions. Thus, the metric respects the hierarchy and compares points not only based on the raw measurements, but also based on their values across scales. 
It is based on a mathematical foundation, stemming from the approximation of the earth-mover's distance (EMD) proposed by Leeb~\cite{Leeb2015}. We show that this metric is equivalent to the $l_1$ distance between points after applying a data-adaptive filter bank. We extend the tree-based metric to a bi-tree multiscale metric and corresponding 2D filter bank. At the macro scale, the local organization of the data and the multiscale metric enable the calculation of a global manifold representation of the data, via the construction of an affinity kernel and its eigendecomposition~\cite{Coifman2006}. This representation can then be used to provide a single smooth organization of each dimension. The data can also be clustered based on this representation into meaningful groups. Our tri-geometry approach is applied to neuronal recordings and is used for exploratory analysis, interpretability and visualization of the data. This organization is needed to identify latent variables that control the activity in the brain and to develop the automated infrastructure necessary to recover complex structures, with less external information and without expert guidance. Our experimental results on neuronal recordings of head-fixed mice demonstrate the capability of isolating and filtering regional activities and relating them to specific stimuli and physical actions, and of automatically extracting pathological dysfunction. Specifically, neuronal groupings, temporal activity groupings and experimental condition scenarios are simultaneously extracted from the database, in a data-driven, model-free and network-oriented manner. The remainder of the paper is organized as follows. Section~\ref{sec:related} briefly reviews related work regarding state-of-the-art methods in neuroscience data analysis. In Section~\ref{sec:problem}, we formulate the problem. 
In Section~\ref{sec:quest3d}, the proposed methodology for tri-organization of trial-based data is presented, detailing the three components of our approach. Section~\ref{sec:results} presents analysis of experimental neuronal data, in a motor forepaw reach task in mice. \section{Related Work} \label{sec:related} Current network analysis approaches in neuroscience can be divided into two main classes~\cite{Friston2013,Sporns2014}. The first class comprises methods that aim to determine functional connectivity, defined in terms of statistical dependencies between measured elements (e.g., neurons or networks), by constructing direct statistical models from the data (e.g., Granger causality, transfer entropy, point process modeling and graph-based methods~\cite{DinCheBre06,Schreiber00,TruEdeFel05,Sporns2014}). The second class of methods is often based on Latent Dynamical Systems (LDS), which accommodates effective connectivity characterizing the causal relations between elements through an underlying hidden dynamical system \cite{Friston2013,Shenoy2013,Archer2014}. Non-linear and non-Gaussian extensions of the Kalman filter, contemporary sequential Monte Carlo methods and particle filters, have also been introduced in neuroscience~\cite{Wu04,Ahmadian2011}. \begin{figure*}[t!] \centering{\includegraphics[width=0.88\linewidth]{data3d_slices}} \caption{Visualization of the 3D database. The data are visualized here as 2D slices from multiple viewpoints: for several trials $\mathbf{X}_{\cdot \cdot T}$ (left), time frames $\mathbf{X}_{\cdot t \cdot}$ (center), and neurons $\mathbf{X}_{r \cdot \cdot}$ (right). The neuronal activity is represented by the intensity level of the image (blue – no activity, red – high activity). } \label{fig:data3d_slices} \end{figure*} These methods share significant drawbacks, as they are mostly heuristic, providing only an approximation of a largely unknown system, and their quality is often hard to assess~\cite{Cunningham2014}. 
More importantly, they are all prone to the ``curse of dimensionality''. On the one hand, designing a parametric generative model for truly complex high-dimensional data, such as neuronal/behavioral recordings, requires considerable flexibility, resulting in a model with a large number of tunable parameters. On the other hand, estimating a large number of parameters, and fitting a predefined class of dynamical models to high-dimensional data, is practically infeasible, thereby leading to poor data representations. Our approach is better designed to deal with dynamical systems and aims to alleviate the shortcomings present in current analysis methods. The proposed framework deviates from methods recently in common use in neuroscience, as it makes only very general smoothness assumptions, rather than postulating a-priori specific structural models. In addition, we show that it takes into consideration the high-dimensional spatio-temporal neuronal network structure. \section{Problem Formulation} \label{sec:problem} In the sequel we denote the three axes of the 3D data with a trial-based experiment in mind. However, our methodology can be applied to general 3D coordinates. Consider data acquired in a set of fixed-length trials, composed of measurements from a large number of sensors (specifically neurons). Mathematically, we have a database $\mathbf{X}[r,t,T]$, depending on three variables. We collect at each neuron, or identified region of interest (ROI), denoted by $r$, a time series of the neuronal activity (e.g., fluorescence intensity levels in identified ROIs along time). In general trial-based data, this dimension corresponds to the sensors that acquire the data, such as in EEG~\cite{Talmon2015}. On a short time scale, denoted by $t$, these time series can be viewed as a dynamic window profiling the neuron. 
These profiles vary on a long time scale as well, characterized by repetitive trials and denoted by $T$, and should be organized according to global trends and similarity between trials that are not necessarily consecutive. This database can be separately organized into a triple of geometries involving each variable, $r$, $t$, and $T$. However, the \emph{joint} organization of all three variables leads to an organization of dynamic neuronal activity regimes, using a global representation via the diffusion maps embedding~\cite{Coifman2006}. Let $\mathbf{X} \in \mathbb{R}^{n_r \times n_t \times n_T}$ be a rank-3 tensor, where $n_r$ is the number of neurons, $n_t$ is the number of time frames in an individual trial and $n_T$ is the number of trials. A point $x[r,t,T]\in \mathbf{X}$ is indexed by the neuron $r$, the fast (short scale) time index $t$, and the trial (long scale time) index $T$. Note that although both $t$ and $T$ are coupled as indicating time, there is no assumption on a connection between the two. Let $\mathbf{X}_{r\cdot\cdot} \in \mathbb{R}^{n_t \times n_T}$ denote the two-dimensional matrix (slice) $\mathbf{X}[r,1\leq t \leq n_t, 1 \leq T \leq n_T]$ of all measurements for a given neuron $r$ throughout all trials. In a similar fashion, $\mathbf{X}_{\cdot t \cdot} \in \mathbb{R}^{n_r \times n_T}$ is the 2D matrix of all measurements of all neurons, for a given time $t$ for all $n_T$ trials. Finally, $\mathbf{X}_{\cdot \cdot T} \in \mathbb{R}^{n_r \times n_t}$ is the 2D matrix of all measurements of all neurons throughout a single trial $T$. A visualization of a 3D dataset and examples of 2D slices in each dimension are presented in Fig.~\ref{fig:data3d_slices}. Considering trial-based data, we assume the within-trial time index $t$ is smooth, and all trials are of the same length $n_t$. It is easy to define neighbors in this dimension, as it is associated with a regular fixed-length grid. 
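To make the slicing notation concrete, here is a minimal sketch of the three slice types using numpy indexing; the tensor sizes, axis order and random values are assumptions chosen only for illustration:

```python
import numpy as np

# Toy rank-3 tensor X[r, t, T]: n_r neurons x n_t time frames x n_T trials.
# Sizes and the axis order are assumptions made for illustration only.
n_r, n_t, n_T = 5, 8, 3
X = np.random.default_rng(0).standard_normal((n_r, n_t, n_T))

neuron_slice = X[2, :, :]   # X_{r..}: one neuron across time frames and trials
time_slice = X[:, 4, :]     # X_{.t.}: all neurons at time t, across trials
trial_slice = X[:, :, 1]    # X_{..T}: all neurons over a single trial

print(neuron_slice.shape)   # (8, 3) = (n_t, n_T)
print(time_slice.shape)     # (5, 3) = (n_r, n_T)
print(trial_slice.shape)    # (5, 8) = (n_r, n_t)
```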
We assume the trials follow a repetitive protocol, controlled by the experimenters, yet the trial indices $T$ are discrete, not necessarily contiguous and describe a longer span of time, i.e., trials occurring on different dates. Thus, although the trial index $T$ is associated with the notion of time, which is supposedly smooth, two consecutive indices are not necessarily similar. In the experimental results in Sec.~\ref{sec:results} we show that trials are grouped logically based on behavioral similarity and not based on consecutive experiments. A global trend in the organization of the trials is evident only when introducing a pathological inhibitor, which has a long term effect on the test subject. Finally, we assume that a neuron index $r$ may be assigned randomly to the neuron; therefore, it does not impose any smoothness or structure, and two consecutive indices do not imply any similarity between neurons. Thus, although the trial-based measurements are organized as a 3D database so they are supposedly associated with a regular Euclidean grid, in practice the data suffer from non-uniform sampling, and consecutive indices do not indicate actual proximity as in time-series data (temporal smoothness) or a 2D image (spatial smoothness). Thus, conventional analysis methods, such as multiscale representations via wavelets, are not straightforward in the given application. In order to define a multiscale analysis of the data, it is necessary to be able to define neighborhoods and a sense of locality between points. The notations in this paper follow these conventions: matrices and tensors are denoted by bold uppercase, sets are denoted by uppercase calligraphic, and vectors are denoted by uppercase italic. 
\section{Tri-geometry analysis} \label{sec:quest3d} Our analysis is based on the assumption that an underlying ``good'' organization of the data exists, such that under a permutation of the indices in each dimension of the data, the resulting tensor is smooth in the three dimensions. Our aim is to recover this organization of the data, through a local to global processing of the data. We begin by learning the hierarchical structure of the data in each dimension via partition trees, which convey local clustering of the data. We then construct a new multiscale bi-tree metric for one dimension based on the coupled geometry between the other two dimensions. Finally, the tree-based metric enables us to define an affinity between points, from which we derive a global representation via manifold learning. Thus, our approach does not treat each dimension separately, but introduces a strong coupling between the dimensions. The three-phase organization of each dimension is carried out in an iterative procedure, where each dimension is organized in turn, based on the other two. An advantage of our approach is that it is based on modular components. We describe three methods fulfilling the motivation of each stage, but these methods can be replaced with others. For example, we propose flexible trees for the partition tree construction, but binary trees can be used instead. We expand on the three components of our approach in detail in the following subsections. \subsection{Partition trees and Flexible trees} \label{sec:trees} Following the assumption that under a proper organization the dataset is smooth, we aim to build a fine-to-coarse set of neighborhoods for each point, by constructing partition trees in each dimension. In the tri-geometric organization, the neighborhoods are 3D cubes. 
Permuting the points in each dimension based on the constructed partition tree will recover the smooth structure respecting the coupling between the neurons and the time dimensions of the data. Given a set of high-dimensional points, we construct a hierarchical partitioning of the points, defined by a tree. In our setting, for each dimension, the points are the 2D slices of the data in that dimension. Without loss of generality, we will define the partition trees in this section with respect to partitioning the neurons, but this process is performed in the remaining two dimensions as well. Let $\mathcal{X}_r = \{\mathbf{X}_{i \cdot\cdot}\}_{i=1}^{n_r}$ be the set of all 2D neuron slices. We define a finite partition tree $\mathcal{T}_r$ on $\mathcal{X}_r$ as follows. The partition tree is composed of $L+1$ levels, where a partition of the points $\mathcal{P}_l$ is defined for each level $0 \leq l \leq L$. The partition $\mathcal{P}_l$ at level $l$ consists of $n(l)$ mutually disjoint non-empty subsets of indices in $\{1,...,n_r\}$, termed folders and denoted by $I_{l,i}$, $i\in\{1,...,n(l)\}$: \begin{equation} \mathcal{P}_l = \{I_{l,1},I_{l,2},...,I_{l,n(l)}\}. \end{equation} Note that we define the folders on the indices of the data points and not on the points themselves. The partition tree $\mathcal{T}_r$ has the following properties: \begin{itemize} \item The finest partition ($l = 0$) is composed of $n(0) = n_r$ singleton folders, termed the ``leaves'', where $I_{0,i} = \{i\}$. \item The coarsest partition ($l= L$) is composed of a single folder, $I_{L,1} = \{1,...,n_r\}$, termed the ``root'' of the tree. \item The partitions are nested such that if $I \in \mathcal{P}_l$, then $I \subseteq J$ for some $J \in \mathcal{P}_{l+1}$, i.e., each folder at level $l$ is a subset of a folder at level $l+1$. 
\end{itemize} The partition tree is the set of all folders at all levels $\mathcal{T} = \{I_{l,i} \;\vert\; 0 \leq l \leq L,\; 1 \leq i \leq n(l)\}$, and the number of all folders in the tree is denoted by $\vert \mathcal{T} \vert$. Given a dataset, there are many methods to construct a hierarchical tree, including deterministic, random, agglomerative and divisive~\cite{Gavish2010,Chi2014,Breiman2001}. Partition trees can be constructed in a top-down or bottom-up approach. In a top-down approach, the data are divided into few folders, then each of these folders is divided into sub-folders, and so on until all folders at the bottom level consist of only one point. In a bottom-up approach, we begin with the lowest level of the tree, clustering the points into small folders. Then these folders are merged into larger folders at higher levels of the tree, until all folders are merged at the root of the tree. A simple approach to bottom-up construction is a $k$-means based construction. The leaves of the tree are clustered via $k$-means into $k$ folders. Each folder is then assigned a centroid, and these centroids are then clustered again using $k$-means, with smaller $k$. This process is repeated until all points are merged at the root with $k=1$. More sophisticated approaches take into account the geometric structure and multiscale nature of the data by incorporating affinity matrices on the data, and manifold embeddings. Gavish et al.~\cite{Gavish2010} propose constructing a partition tree via bottom-up hierarchical clustering, given a symmetric affinity matrix $\mathbf{W}$ describing a weighted graph on the dataset. Ankenman~\cite{Ankenman2014} proposed ``flexible trees'', whose construction requires an affinity matrix on the data, and is based on a low-dimensional diffusion embedding of the data, and not on the high-dimensional points. 
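The simple bottom-up $k$-means construction described above can be sketched as follows. This toy implementation uses a naive $k$-means and simplified folder bookkeeping; it is an illustrative assumption, not the flexible-trees algorithm:

```python
import numpy as np

def kmeans(points, k, iters=10, seed=0):
    """Naive k-means returning a cluster label per point (illustration only)."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        dists = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels

def bottom_up_tree(points, ks=(4, 2, 1)):
    """Nested partitions: cluster points, then centroids, up to a single root.
    Each level is a list of folders (sets of original point indices)."""
    partitions = [[{i} for i in range(len(points))]]  # level 0: the leaves
    reps = points
    for k in ks:
        k = min(k, len(reps))          # guard against too few representatives
        labels = kmeans(reps, k)
        folders, new_reps = [], []
        for j in range(k):
            members = np.flatnonzero(labels == j)
            if len(members) == 0:
                continue
            # Merge the child folders of the previous level into one folder.
            folders.append(set().union(*(partitions[-1][m] for m in members)))
            new_reps.append(reps[members].mean(axis=0))
        partitions.append(folders)
        reps = np.array(new_reps)
    return partitions

X = np.random.default_rng(1).standard_normal((12, 6))  # 12 toy "neuron" slices
tree = bottom_up_tree(X)
assert tree[-1] == [set(range(12))]    # the root folder holds every index
```

By construction the folders are nested from level to level, mirroring the partition-tree properties listed above.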
The advantage of this construction, which uses the embedding rather than the high-dimensional space is that distances between points in the diffusion space are meaningful and robust to noise, as opposed to high-dimensional Euclidean distances. This tree construction enables us to integrate both the multiscale metric and the resulting global embedding. Since our approach is based on an iterative procedure of all three components, the tree construction is refined from iteration to iteration. Another important advantage of flexible trees is that there are relatively few levels and the level at which folders are joined is meaningful across the entire dataset. Thus, the tree structure is logically multiscale and follows the structure of the data. This also reduces the computational complexity of the metric calculation. The construction is controlled by a constant $\epsilon$ which affects the number of levels in the trees. Higher values of $\epsilon$ result in ``tall'' trees, while small values lead to flatter trees. We briefly describe the flexible trees algorithm, given the set $\mathcal{X}_r$ and an affinity matrix on the neurons denoted $\mathbf{W}_r$. For a detailed description see~\cite{Ankenman2014}. \begin{enumerate} \item Input: The set of points $\mathcal{X}_r$, an affinity matrix $\mathbf{W}_r\in \mathbb{R}^{n_r \times n_r}$, and a constant $\epsilon$. \item Init: Set partition $I_{0,i} = \{i\} \; \forall \; 1 \leq i \leq n_r$, set $l=1$. \item Given an affinity on the data, we construct a low-dimensional embedding on the data~\cite{Coifman2006}. \item \label{item:dist} Calculate the level-dependent pairwise distances $d^{(l)}(i,j) \; \forall \; 1 \leq i,j \leq n_r$ in the embedding space. \item Set a threshold $\frac{p}{\epsilon}$, where $p=\textrm{median}\left(d^{(l)}(i,j)\right)$. \item For each point $i$ which has not yet been added to a folder, find its minimal distance $d^{\min}(i)=\min_j\{d^{(l)}(i,j)\}$. 
\begin{itemize} \item If $d^{\min}(i)<\frac{p}{\epsilon}$, $i$ and $j$ form a new folder if $j$ also does not belong to a folder. If $j$ is already part of a folder $I$, then $i$ is added to that folder if $d^{\min}(i)<\frac{p}{\epsilon} 2^{-\vert I \vert + 1}$. Thus, the threshold on the distance for adding an element to an existing folder is divided by 2 for each added element. \item If $d^{\min}(i) > \frac{p}{\epsilon}$, $i$ remains as a singleton folder. \end{itemize} \item \label{item:partition} The partition $P_l$ is set to be all the formed folders. \item For $l>1$ and while not all points have been merged together in a single folder, steps \ref{item:dist})-\ref{item:partition}) are repeated. Instead of iterating over points, we iterate over all the folders $I_{l-1,i} \in P_{l-1}$. The distances between folders depend on the level $l$, and on the points in the folder. \end{enumerate} The trees yield a hierarchical multiscale organization of the data, which then enables us to apply signal processing methods. For example, we can apply non-local means to each point based on its neighborhood, to denoise the data, or multiscale analysis via tree based wavelets~\cite{Buades2005,Gavish2010,Ram}. However, we aim at a global analysis of the data. To this end, we define a bi-tree multiscale metric, which compares two points, based on their decomposition via the trees. \subsection{Data-adaptive bi-tree multiscale metric} Applying manifold learning requires an appropriate metric between points. As we cannot associate a sense of locality based on the indexing of the dimensions, we treat the data as vertices in a graph and develop a metric that is based on the multi-scale neighborhoods constructed in the partition tree. Given the partition trees in two of the dimensions, our aim is to define a distance $d$ between two 2D slices in the remaining dimension. This distance should incorporate the multiscale nature of the data. 
Following Leeb~\cite{Leeb2015}, we propose a tree-based metric in one dimension that incorporates the coupling of the other two dimensions. For a two-dimensional matrix, Leeb~\cite{Leeb2015} defines a tree-based metric between two points in one dimension based on a partition tree in the other dimension. We will present this metric in our context and then propose a new metric incorporating two partition trees in the case of a 3D dataset. Consider a single 2D slice of the trial data $\mathbf{X}_{\cdot \cdot T}$, and the partition tree on the neurons $\mathcal{T}_r$. A point $\mathbf{X}_{\cdot t_i T}$ in the time dimension is a vector of length $n_r$, consisting of all the neuronal measurements at the time frame $t_i$ during a given trial $T$. The tree metric $d_{\mathcal{T}_r}(\mathbf{X}_{\cdot t_i T},\mathbf{X}_{\cdot t_j T})$ between two times $t_i$ and $t_j$ within this trial, given the tree $\mathcal{T}_r$, is defined as \begin{equation} \label{eq:emd} d_{\mathcal{T}_r}(\mathbf{X}_{\cdot t_i T},\mathbf{X}_{\cdot t_j T}) = \sum_{I \in \mathcal{T}_r} \vert m(\mathbf{X}_{\cdot t_i T} - \mathbf{X}_{\cdot t_j T}, I)\vert \omega(I), \end{equation} where $\omega(I)>0$ is a weight function, depending on the folder $I$. The value $m(\mathbf{X}_{\cdot t_i T}, I)$ is the mean of the vector $\mathbf{X}_{\cdot t_i T}$ in $I$: \begin{equation} m(\mathbf{X}_{\cdot t_i T}, I) = \frac{1}{\vert I \vert} \sum_{k\in I} \mathbf{X}[k,t_i,T], \end{equation} where $\vert I \vert$ denotes the number of points in folder $I$, i.e., its cardinality. The metric encompasses the ability to weight the data based on its multiscale decomposition since each folder is assigned a weight via $\omega$. The weights can incorporate prior smoothness assumptions on the data, and also enable enhancing either coarse or fine structures in the similarity between points. We generalize this metric to a distance between 2D matrices, given two partition trees. 
We define this distance for the trial dimension, given trees on the time and neuron dimensions, but the same applies in the other dimensions as well. Given a partition tree $\mathcal{T}_r$ on the neurons and a partition tree $\mathcal{T}_t$ on the time frames, the distance between two trials $T_i$ and $T_j$ is defined as \begin{equation} \label{eq:2demd} d_{\mathcal{T}_r,\mathcal{T}_t}(\mathbf{X}_{\cdot \cdot i} , \mathbf{X}_{\cdot \cdot j}) = \sum_{\substack{I \in \mathcal{T}_r \\ J \in \mathcal{T}_t}} \vert m(\mathbf{X}_{\cdot \cdot i} - \mathbf{X}_{\cdot \cdot j}, I \times J)\vert \omega(I,J), \end{equation} where $\omega(I,J)>0$ is a weight function depending on folders $I \in \mathcal{T}_r $ and $J \in \mathcal{T}_t$. We term this distance a bi-tree metric. The value $m(\mathbf{X}_{\cdot \cdot i}, I \times J)$ is the mean value of a matrix $\mathbf{X}_{\cdot \cdot i}$ on the bi-folder $I \times J = \{ (k,n) \vert k\in I, n\in J \}$: \begin{equation} m(\mathbf{X}_{\cdot \cdot i}, I \times J) = \frac{1}{\vert I \vert \vert J \vert } \sum_{k \in I, n\in J} \mathbf{X}[k,n,i], \end{equation} i.e., for a given trial $T$, we are averaging the sub-matrix of the 2D slice $\mathbf{X}_{\cdot \cdot i}$ defined by the subset of neurons in $I$ and the subset of time frames in $J$. We present a new interpretation of the tree-based metrics~(\ref{eq:emd}) and~(\ref{eq:2demd}). These metrics are equivalent to the $l_1$ distance between points, after applying a multiscale transform to the data, where the tree metric~(\ref{eq:emd}) corresponds to a 1D transform and the bi-tree metric~(\ref{eq:2demd}) corresponds to a 2D transform. For the sake of simplicity we begin with describing the 1D transform in the case of a single 2D slice of the trial data $\mathbf{X}_{\cdot \cdot T}$, and then generalize to the 2D transform. The partition tree $\mathcal{T}_r$ can be seen as inducing a multiscale decomposition on the data, via the construction of a data-adaptive filter bank. 
Define the filter $f_I \in \mathbb{R}^{n_r}$ as \begin{equation} \label{eq:filter} f_I=\frac{\omega(I)}{\vert I \vert} \mathds{1}_I, \end{equation} such that $\mathds{1}_I$ is the indicator function on the neurons $i\in\{1,...,n_r\}$ belonging to folder $I \in \mathcal{T}_r$. For each filter we calculate the inner product between the filter $f_I$ induced by folder $I$ and the measurement vector $\mathbf{X}_{\cdot t T} \in \mathbb{R}^{n_r}$, yielding a scalar coefficient $g_I$: \begin{equation} \label{eq:coef} \begin{split} g_I(\mathbf{X}_{\cdot t T}) & = \langle f_I,\mathbf{X}_{\cdot t T} \rangle \\ & = \frac{\omega(I)}{ \vert I\vert} \sum_{k\in I} \mathbf{X}[k,t,T]=m(\mathbf{X}_{\cdot t T},I)\omega(I). \end{split} \end{equation} The tree $\mathcal{T}_r$ defines a multiscale transform by applying the filter bank $f_{\mathcal{T}_r} = \{f_I\}_{I\in\mathcal{T}_r}$ to the measurement vector $\mathbf{X}_{\cdot t T}$, resulting in the set of coefficients $g_{\mathcal{T}_r} = \{g_I\}_{I\in\mathcal{T}_r}$. The filters of each level $l$ of the tree output $n(l)$ coefficients, such that $g_{\mathcal{T}_r} : \mathbb{R}^{n_r} \to \mathbb{R}^{\vert \mathcal{T}_r \vert}$. This is demonstrated in Fig.~\ref{fig:transform}. In the middle, a 2D slice $\mathbf{X}_{\cdot \cdot T}$ is viewed as a 2D matrix, and on the left is a partition tree $\mathcal{T}$ defined on the rows of the matrix. We assume that the rows of the matrix have been permuted so that they correspond to the leaves of the tree (level 0). In applying the transform $g_\mathcal{T}$, each folder $I$ defines an element in the new vector $g_\mathcal{T}(X_{\cdot i})$ (right), proportional to the average of the entries in the original vector $X_{\cdot i}$ on the support defined by the folder $I$. The new entries in the vector are colored according to the corresponding folders in the tree. 
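A small numerical sketch of this filter bank: the toy tree, weights and vectors below are assumptions for illustration, and the final assertion checks numerically that the tree metric coincides with the $l_1$ distance between the coefficient vectors:

```python
import numpy as np

# Toy partition tree on n_r = 4 "neurons": (folder, weight) pairs for the
# leaves, one intermediate level, and the root. Weights are illustrative.
tree = [({0}, 1.0), ({1}, 1.0), ({2}, 1.0), ({3}, 1.0),
        ({0, 1}, 0.5), ({2, 3}, 0.5),
        ({0, 1, 2, 3}, 0.25)]

def tree_transform(x, tree):
    """Coefficients g_I = omega(I) * (mean of x over folder I)."""
    return np.array([w * np.mean([x[i] for i in I]) for I, w in tree])

def tree_metric(x, y, tree):
    """Tree metric: weighted absolute folder means of the difference x - y."""
    return sum(w * abs(np.mean([x[i] - y[i] for i in I])) for I, w in tree)

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([4.0, 3.0, 2.0, 1.0])

d_tree = tree_metric(x, y, tree)
d_l1 = np.abs(tree_transform(x, tree) - tree_transform(y, tree)).sum()
assert np.isclose(d_tree, d_l1)   # metric == l1 between coefficient vectors
```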
\begin{theorem} Given a partition tree on the neurons $\mathcal{T}_r$, the tree metric~(\ref{eq:emd}) between two times $t_i$ and $t_j$ for a given trial $T$ is equivalent to the $l_1$ distance between the multiscale transform defined by the tree and applied to the two vectors: \begin{equation} d_{\mathcal{T}_r}(\mathbf{X}_{\cdot t_i T},\mathbf{X}_{\cdot t_j T}) = \Vert g_{\mathcal{T}_r}(\mathbf{X}_{\cdot t_i T}) - g_{\mathcal{T}_r}(\mathbf{X}_{\cdot t_j T}) \Vert_1. \end{equation} \end{theorem} \begin{proof} \begin{align} d_{\mathcal{T}_r}(\mathbf{X}_{\cdot t_i T} &, \mathbf{X}_{\cdot t_j T}) = \sum_{I \in \mathcal{T}_r} \vert m(\mathbf{X}_{\cdot t_i T} - \mathbf{X}_{\cdot t_j T}, I)\vert \omega(I) \nonumber \\ & = \sum_{I \in \mathcal{T}_r} \vert m(\mathbf{X}_{\cdot t_i T},I)\omega(I) - m(\mathbf{X}_{\cdot t_j T}, I) \omega(I)\vert \nonumber \\ & = \sum_{I \in \mathcal{T}_r} \vert g_I(\mathbf{X}_{\cdot t_i T}) - g_I(\mathbf{X}_{\cdot t_j T}) \vert \nonumber \\ & = \sum_{n=1}^{\vert \mathcal{T}_r \vert} \vert g_{\mathcal{T}_r}(\mathbf{X}_{\cdot t_i T})[n] - g_{\mathcal{T}_r}(\mathbf{X}_{\cdot t_j T})[n] \vert \nonumber \\ & = \Vert g_{\mathcal{T}_r}(\mathbf{X}_{\cdot t_i T}) - g_{\mathcal{T}_r}(\mathbf{X}_{\cdot t_j T}) \Vert_1. \end{align} \end{proof} This result can be generalized to a multiscale 2D transform applied to 2D matrices as in our setting. Define the 2D filter $f_{I\times J}$ by: \begin{equation} \label{eq:filter2d} f_{I\times J}=\frac{\omega(I,J)}{\vert I \vert\vert J \vert} \mathds{1}_I \otimes \mathds{1}_J^T, \end{equation} where $\otimes$ denotes the Kronecker product between the two indicator vectors. 
Then the elements of the 2D matrix $g_{\mathcal{T}_r,\mathcal{T}_t} \in \mathbb{R}^{\vert \mathcal{T}_r \vert \times \vert \mathcal{T}_t \vert }$ are the coefficients obtained from applying the 2D filter bank $f_{\mathcal{T}_r,\mathcal{T}_t} = \{f_{I\times J}\}_{I \in \mathcal{T}_r,J \in \mathcal{T}_t}$ defined by the bi-tree $\mathcal{T}_r \times \mathcal{T}_t$. \begin{corollary} The bi-tree metric~(\ref{eq:2demd}) between two matrices given a partition tree $\mathcal{T}_r$ on the neurons and a partition tree $\mathcal{T}_t$ on the time frames is equal to the $l_1$ distance between the 2D multiscale transforms of the two matrices: \begin{equation} d_{\mathcal{T}_r,\mathcal{T}_t}(\mathbf{X}_{\cdot \cdot i} , \mathbf{X}_{\cdot \cdot j}) = \Vert g_{\mathcal{T}_r,\mathcal{T}_t} (\mathbf{X}_{\cdot \cdot i}) - g_{\mathcal{T}_r,\mathcal{T}_t}(\mathbf{X}_{\cdot \cdot j}) \Vert_1. \end{equation} \end{corollary} \begin{figure}[t] \centering{\includegraphics[width=0.99\linewidth]{transform_rows3}} \caption{Multiscale 1D tree-transform applied to a 2D slice from Fig.~\ref{fig:data3d_slices}, viewed here as a 2D matrix (middle). On the left is a given partition tree $\mathcal{T}$ on the rows of the 2D matrix, and we assume the rows have been permuted so the leaves of the tree correspond to the rows. The partition tree $\mathcal{T}$ defines a multiscale transform on the columns of the matrix $X_{\cdot i}$, resulting in new vectors $g_\mathcal{T}(X_{\cdot i})$. In applying the transform $g_\mathcal{T}$, the entries in $X_{\cdot i}$ corresponding to each folder in the tree are averaged and weighted according to~(\ref{eq:coef}). This yields new scalar coefficients which form the output vector $g_\mathcal{T}(X_{\cdot i})$ (right). For visualization, each new entry $g_I$ is colored by the corresponding folder $I$ in the tree. } \label{fig:transform} \end{figure} This interpretation of the metric as the $l_1$ distance between multiscale transforms has two computational advantages.
First, given large datasets, it is inefficient to calculate full affinity matrices on the points; instead, sparse matrices are used by finding the $k$-nearest neighbors of each point. Thus, we can apply the multiscale transform to our data, yielding a new feature vector for each point, and then apply approximate nearest-neighbor search in the $l_1$ distance to the new vectors~\cite{Arya:1998,Yi2000}. Second, we can relax the $l_1$ norm to other norms such as $l_2$ or $l_\infty$. In future work, we intend to establish the properties of this transform and its application to other tasks. Note that we claimed that regular metrics are inappropriate for processing the data due to its high dimensionality in each dimension of the 3D dataset, i.e., each 2D slice of the data contains a large number of elements. This interpretation of the metric via the transform implies that the proposed metric is equivalent to the $l_1$ distance between vectors/matrices of even higher dimensionality, seemingly contradicting our aim for a good metric. However, because of the weights on the folders, the effective size of the new vectors is smaller than the original dimensionality, as the weights are chosen such that they rapidly decrease to zero with decreasing folder size. We note that by using full binary trees in each of the two dimensions, the output of applying the multiscale transform is similar to that of applying the Gaussian pyramid representation, popular in image processing~\cite{Burt1983}, to each 2D matrix $\mathbf{X}_{\cdot \cdot i}$, $1 \leq i \leq n_T$. Instead of applying the $5 \times 5$ Gaussian filter proposed by Burt and Adelson, our transform applies a $2 \times 2$ averaging filter, weighted by $\omega(I,J)$, and the resolution at each level is reduced by 2 as in the Gaussian pyramid.
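The identity in the theorem above is easy to check numerically. The sketch below uses an illustrative two-level tree and random vectors (a toy construction of ours, not the experimental data) and verifies that the tree metric coincides with the $l_1$ distance between the transforms:

```python
import numpy as np

def tree_transform(x, folders, weights):
    # g_I(x) = omega(I) * m(x, I), stacked over all folders.
    return np.array([w * x[idx].mean() for idx, w in zip(folders, weights)])

def tree_metric(x, y, folders, weights):
    # d_T(x, y) = sum_I |m(x - y, I)| * omega(I)
    return sum(w * abs((x - y)[idx].mean()) for idx, w in zip(folders, weights))

rng = np.random.default_rng(0)
x, y = rng.normal(size=8), rng.normal(size=8)

# Toy tree on 8 points: 4 pairs, 2 quadruples, 1 root.
folders = [np.arange(i, i + 2) for i in range(0, 8, 2)]
folders += [np.arange(0, 4), np.arange(4, 8), np.arange(8)]
weights = [(len(f) / 8.0) ** 2 for f in folders]

d_tree = tree_metric(x, y, folders, weights)
d_l1 = np.abs(tree_transform(x, folders, weights)
              - tree_transform(y, folders, weights)).sum()
```

Since $g_I$ is linear in $x$, the two quantities agree term by term, which is exactly the argument of the proof.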
\paragraph*{Relationship to EMD} The Earth mover's distance (EMD) is a metric used to compare probability distributions or discrete histograms, and is popular in computer vision~\cite{Rubner1998}. It is fairly insensitive to perturbations since it does not suffer from the fixed binning problems of most distances between histograms. EMD quantifies the difference between the two histograms as the amount of mass one needs to move (flow) between histograms, with respect to a ground distance, so they coincide. In its discrete form, the EMD between two normalized histograms $h_1$ and $h_2$ is defined as the minimal total ground distance ``traveled'' weighted by the flow: \begin{align*} \textrm{EMD}(h_1,h_2) = \min \sum_{i,j}g_{ij}d_{ij} \\ \textrm{s.t.} \sum_i g_{ik} - \sum_j g_{kj} = h_1(k)-h_2(k), \end{align*} where $d_{ij}\geq 0$ is the ground distance, and $g_{ij}$ is the flow from bin $i$ to bin $j$. It was shown~\cite{Leeb2013} that a proper choice of the weight $\omega(I)$ makes the tree metric~(\ref{eq:emd}) equivalent to EMD, i.e., the ratio of EMD to the tree-based metric is always between two constants. The proof follows the Kantorovich-Rubinstein theorem regarding the dual representation of the EMD problem. The weight $\omega(I)$ in~(\ref{eq:emd}) is chosen to depend on the tree structure: \begin{equation} \label{eq:weight} \omega(I) = \left(\frac{\vert I \vert}{M}\right)^{\beta+1}, \end{equation} where $\beta$ weights the folder by its relative size. Positive values of $\beta$ correspond to higher weights on coarser scales of the data, whereas negative values emphasize differences in fine structures in the data. For trees with varying-sized folders, unlike a balanced binary tree, $\beta$ helps to normalize the weights on folders. For $\beta=0$, the filter $f_I$ defined in~(\ref{eq:filter}) is a uniform averaging filter whose support is determined by $I$. In EMD the histograms are associated with a fixed grid and bins quantizing this grid.
In our setting, where the data does not follow a fixed grid, the folders take the place of the bins, and by incorporating their multiscale structure via the weights, they can be seen as bins of varying sizes on the data. Shirdhonkar and Jacobs~\cite{Shirdhonkar2008} proposed a wavelet-based metric (wavelet EMD) shown to be equivalent to EMD. The wavelet EMD is calculated as the weighted $l_1$ distance between the wavelet coefficients of the difference between the two histograms. Following~\cite{Shirdhonkar2008}, Leeb~\cite{Leeb2013} proposed a second metric based on the $l_1$ distance between the coefficients of the difference of distributions expanded in the tree-based Haar-like basis~\cite{Gavish2010}, which was also shown to be equivalent to EMD. Our interpretation of the metric~(\ref{eq:emd}) as an $l_1$ distance between a multiscale filter bank applied to the data simplifies the calculation further, as it does not require calculating the Haar-like basis defined by the tree, and instead requires only low-pass averaging filters on the support of each folder. This generalizes the wavelet EMD~\cite{Shirdhonkar2008} to high-dimensional data that is not restricted to a Euclidean grid. For the bi-tree metric~(\ref{eq:2demd}), the weight on a bi-folder $I \times J$ can be chosen in a manner equivalent to~(\ref{eq:weight}) as \begin{equation} \label{eq:omega2d} \omega(I,J) = \left(\frac{\vert I \vert}{n_r}\right)^{\beta_r+1} \left(\frac{\vert J \vert}{n_t}\right)^{\beta_t+1}, \end{equation} where $\beta_r$ weights the bi-folder $I \times J$ based on the relative size of folder $I\in\mathcal{T}_r$ and $\beta_t$ weights the bi-folder based on the relative size of $J \in \mathcal{T}_t$. The values should be set according to the smoothness of each dimension and whether we intend to enhance coarse or fine structures in the data.
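A short sketch of the bi-folder weight~(\ref{eq:omega2d}) and the corresponding 2D coefficient; the function names and toy sizes below are our own illustration:

```python
import numpy as np

def bifolder_weight(size_I, n_r, size_J, n_t, beta_r=0.0, beta_t=0.0):
    # omega(I, J) = (|I|/n_r)^(beta_r+1) * (|J|/n_t)^(beta_t+1)
    return (size_I / n_r) ** (beta_r + 1) * (size_J / n_t) ** (beta_t + 1)

def coeff_2d(X, I, J, beta_r=0.0, beta_t=0.0):
    # g_{IxJ}: average of X over the bi-folder I x J, scaled by omega(I, J).
    n_r, n_t = X.shape
    w = bifolder_weight(len(I), n_r, len(J), n_t, beta_r, beta_t)
    return w * X[np.ix_(I, J)].mean()

X = np.ones((4, 6))                       # toy 2D slice
root = coeff_2d(X, range(4), range(6))    # coarsest bi-folder
fine = bifolder_weight(1, 4, 1, 6, beta_r=1.0, beta_t=1.0)
```

Positive $\beta_r,\beta_t$ shrink the weights of small bi-folders quickly, which is what keeps the effective dimensionality of the transform low.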
\subsection{Global Embedding} The intrinsic global representation of the data is attained by an integration process of local affinities, often termed ``diffusion geometry''. Specifically, the encoding of local variability and structure from different locations (e.g., cortical regions, or trials) is aggregated into a single comprehensive representation through the eigendecomposition of an affinity kernel~\cite{Coifman2006}. This global embedding preserves local structures in the data, thus enabling us to exploit the fine spatio-temporal variations and inter-trial variability typical of biological data, in contrast to other methods based on averaging and smoothing the data~\cite{Pfau2013}. Given the bi-tree multiscale distance between two points (\ref{eq:2demd}), we can construct an affinity on the data along each dimension. We choose an exponential function, but other kernels can be considered, depending on the application. Without loss of generality, we describe the embedding calculation with respect to the dimension of the neurons, but this procedure is applied to the time and trials as well, within our iterative framework. Given the multiscale distance $d_{\mathcal{T}_t,\mathcal{T}_T}(\mathbf{X}_{i \cdot \cdot}, \mathbf{X}_{j \cdot \cdot})$ between two neurons $r_i$ and $r_j$, the affinity is defined as: \begin{equation} a(r_i, r_j) = \exp\{ - d_{\mathcal{T}_t,\mathcal{T}_T}(\mathbf{X}_{i \cdot \cdot}, \mathbf{X}_{j \cdot \cdot}) / \sigma_r \}, \end{equation} where $\sigma_r$ is a scale parameter that depends on the currently considered dimension of the 3D data, i.e., each dimension uses a different scale in its affinity. Typically, $\sigma_r$ is chosen to be the mean of the distances within the data. The exponential function enhances locality, as points with distance larger than $\sigma_r$ have negligible affinity.
The affinity is used to calculate a low-dimensional embedding of the data, using manifold learning techniques, specifically diffusion maps~\cite{Coifman2006}. Defining an affinity matrix $\mathbf{A}[i,j] = a(r_i, r_j),\; \mathbf{A} \in \mathbb{R}^{n_r \times n_r}$, we derive a corresponding row-stochastic matrix by normalizing its rows: \begin{equation} \mathbf{P} = \mathbf{D}^{-1}\mathbf{A}, \end{equation} where $\mathbf{D}$ is a diagonal matrix whose elements are given by $\mathbf{D}[i,i] = \sum_j \mathbf{A}[i,j]$. The eigendecomposition of $\mathbf{P}$ yields a sequence of positive decreasing eigenvalues: $1 = \lambda_0\geq\lambda_1\geq ...$, and right eigenvectors $\{\psi_\ell\}_\ell$. Retaining only the first $d$ eigenvalues and eigenvectors, the mapping $\Psi_r$ embeds the data set $X$ into the Euclidean space $\mathbb{R}^{d}$: \begin{equation} \label{eq:diffusion_map} \Psi_r:\mathbf{X}_{r_i \cdot \cdot}\rightarrow \big( \lambda_1\psi_1(i), \lambda_2\psi_2(i),..., \lambda_{d}\psi_{d}(i)\big)^T. \end{equation} Note that for simplicity of notation we omit denoting the eigenvalues and eigenvectors by the relevant dimension $r$, $t$ or $T$. The embedding provides a global low-dimensional representation of the data, which preserves local structures. Euclidean distances in this space are more meaningful than in the original high-dimensional space, as they have been shown to be robust to noise. For these reasons, the flexible tree construction is based on the embedding. Finally, the embedding integrates the local connections found in the data into a global representation, which enables visualization of the data, reveals overlying temporal trends, organizes the data into meaningful clusters, and identifies outliers and singular points. For more details on diffusion maps, see~\cite{Coifman2006}.
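A compact sketch of this embedding step ($\mathbf{P} = \mathbf{D}^{-1}\mathbf{A}$, eigendecomposition, and the map $\Psi$), assuming a symmetric affinity matrix; computing the eigenvectors through the symmetric conjugate of $\mathbf{P}$ is a standard numerical device of our choosing, not something prescribed by the text:

```python
import numpy as np

def diffusion_map(A, d=3):
    """Return the d nontrivial diffusion coordinates lambda_l * psi_l."""
    deg = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(deg)
    # Symmetric matrix with the same spectrum as P = D^{-1} A.
    S = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(S)
    order = np.argsort(vals)[::-1]          # decreasing eigenvalues
    vals, vecs = vals[order], vecs[:, order]
    psi = d_inv_sqrt[:, None] * vecs        # right eigenvectors of P
    return vals[1:d + 1] * psi[:, 1:d + 1]  # drop the trivial psi_0

# Toy affinity: exponential kernel with sigma = mean pairwise l1 distance.
pts = np.random.default_rng(1).normal(size=(20, 5))
dist = np.abs(pts[:, None, :] - pts[None, :, :]).sum(axis=-1)
A = np.exp(-dist / dist.mean())
emb = diffusion_map(A, d=3)                # 20 points embedded in R^3
```

If $S v = \lambda v$, then $\mathbf{P}(D^{-1/2}v) = \lambda (D^{-1/2}v)$, so the rescaled columns are indeed right eigenvectors of $\mathbf{P}$.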
\subsection{Algorithm} \label{sec:implement} \begin{algorithm}[t] \caption{Hierarchical tri-geometric analysis} \label{alg:3d} \algsetup{indent=2em} \begin{algorithmic}[1] \item[\algorithmicinit] \item[\algorithmicinput] 3D data matrix $\mathbf{X}$ \STATE Starting with the neuron dimension $r$ \STATE \label{step:aff} \hspace{0.5cm} Calculate initial affinity matrix $\mathbf{A}_r^{(0)}(i,j)$ \STATE \label{step:embed} \hspace{0.5cm} Calculate initial neuron embedding $\Psi_r^{(0)}$. \STATE \label{step:tree} \hspace{0.5cm} Calculate initial flexible tree $\mathcal{T}_r^{(0)}$. \STATE \label{step:time} For time dimension $t$ repeat steps \ref{step:aff}-\ref{step:tree} and obtain $\mathcal{T}_t^{(0)}$. \item[\algorithmiciter] \item[\algorithmicinput] Flexible trees $\mathcal{T}_r^{(0)}$ and $\mathcal{T}_t^{(0)}$ \FOR{$n \geq 1$} \STATE \label{step:dist} Calculate multiscale bi-tree distance between two trials $d(T_i,T_j)=d_{\mathcal{T}_r^{(n-1)},\mathcal{T}_t^{(n-1)}}(\mathbf{X}_{\cdot \cdot i}, \mathbf{X}_{\cdot \cdot j})$ \STATE Calculate trial affinity matrix \\ $\mathbf{A}_T^{(n)}(i,j) = \exp\left\{-d(T_i,T_j) / \sigma_T \right\}$ \STATE \label{step:embed2} Calculate trial embedding $\Psi_T^{(n)}$ \STATE \label{step:tree2} Calculate flexible tree on the trials $\mathcal{T}_T^{(n)}$. \STATE For the neuron dimension $r$, repeat steps \ref{step:dist}-\ref{step:tree2}, given the trees on the time and trials, $\mathcal{T}_t^{(n-1)}$ and $\mathcal{T}_T^{(n)}$ respectively, and obtain $\mathcal{T}_r^{(n)}$. \STATE For the time dimension $t$, repeat steps \ref{step:dist}-\ref{step:tree2}, given the trees on the trials and neurons, $\mathcal{T}_T^{(n)}$ and $\mathcal{T}_r^{(n)}$ respectively, and obtain $\mathcal{T}_t^{(n)}$. \ENDFOR \end{algorithmic} \end{algorithm} Our iterative analysis algorithm, comprising all three components (tree construction, metric building, embedding), is summarized in Algorithm~\ref{alg:3d}.
Note that the order in which the dimensions are processed is arbitrary, and it may affect the final results. In addition, since the algorithm is iterative and each component relies on the previous components, an initialization is required. Specifically, calculation of the bi-tree metric for one dimension requires that partition trees be calculated on the other two dimensions. One option is to calculate an initial affinity matrix based on a general distance such as the Euclidean distance or cosine similarity. Here, we use the cosine similarity: \begin{equation} \label{eq:cos} a^{\cos}(r_i,r_j) = \frac{\sum_{t,T} x[i,t,T]x[j,t,T]}{\sqrt{\sum_{t,T} \left(x[i,t,T] \right)^2} \sqrt{\sum_{t,T} \left( x[j,t,T] \right)^2}}. \end{equation} Note that although the affinity is nominally between two matrices, it is effectively equivalent to reshaping the matrices as 1D vectors and calculating the affinity using 1D distances. In other words, a general affinity does not take into account the 2D structure of the slices of the 3D data, in contrast to our bi-tree metric. In addition, these distances are uninformative, as the data are extremely high-dimensional. For example, in each dimension of the dataset in the experimental results in Sec.~\ref{sec:results}, the dimension of the measurements is of order $10^4$. Given the initial affinity, an embedding and flexible tree are calculated for the neuron dimension $r$ (steps \ref{step:embed}-\ref{step:tree}). This is then repeated for the time dimension (step~\ref{step:time}). A second option is to initialize the partition tree for the time dimension to be a binary tree, since the intra-trial time $t$ is a smooth variable. Given the trees in two of the dimensions, we can calculate the multiscale metric~(\ref{eq:2demd}) in the trial dimension $T$ (step~\ref{step:dist}). A corresponding embedding and flexible tree are then calculated (steps \ref{step:embed2}-\ref{step:tree2}).
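The initialization step~(\ref{eq:cos}) amounts to flattening each 2D slice and computing cosine similarities; a minimal sketch (the array shapes are illustrative):

```python
import numpy as np

def cosine_affinity(X):
    """Initial affinity between slices along the first axis of a 3D
    array: each slice X[i] is flattened and compared by cosine similarity."""
    V = X.reshape(X.shape[0], -1)                    # n x (n_t * n_T)
    U = V / np.linalg.norm(V, axis=1, keepdims=True)
    return U @ U.T

X = np.random.default_rng(2).normal(size=(6, 10, 4))  # toy (neurons, t, T)
A0 = cosine_affinity(X)
```

This ignores the 2D structure of each slice, which is precisely the shortcoming the bi-tree metric addresses in later iterations.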
We now have a partition tree in each dimension, so we continue in an iterative fashion, going over each dimension and calculating the multiscale metric, diffusion embedding and flexible tree in each iteration, based on the other two dimensions. The resulting output of the algorithm can be used to analyze the data both in terms of its hierarchical structure and through visualization of the embedding. Furthermore, each dimension can be organized by calculating a smooth trajectory in its embedding space. This yields a permutation on the indices of the given dimension. Permuting all three dimensions recovers the smooth structure of the data, respecting the coupling between the neuron and time dimensions of the data. Python code implementing Algorithm~\ref{alg:3d} will be released open-source on publication. \section{Results} \label{sec:results} \subsection{Experimental Setup} \begin{figure}[t!] \centering \includegraphics[width=0.75\linewidth]{roi} \caption{Two-photon imaging in the primary motor cortex (M1). The neuronal measurements are gathered into regions of interest (ROIs), consisting of ellipses, and preprocessed as in (\ref{eq:F})-(\ref{eq:dFF}).} \label{fig:exp} \end{figure} Our experimental data consists of repeated trials of a complex motor forepaw reach task in awake mice. The animals were trained to reach for a food pellet upon hearing an auditory cue~\cite{Whishaw1992}. This complex and versatile task exploits the capability of rodents to use their forepaw very similarly to distal hand movements in primates~\cite{Whishaw1992}. The hand reach task is typically learnt by mice over a period of a few weeks to become ``experts'' (success rate of $\sim 70\%-80\%$ after training over 2-3 weeks). Neuronal activity in the motor cortex during task performance was measured using two photon in-vivo calcium imaging with recently developed genetically encoded calcium indicators (GECIs)~\cite{Chen2013}.
In addition, the network was silenced using DREADDS~\cite{Rogan2011}, activated by an intraperitoneal (IP) injection of the inert agonist clozapine-N-oxide (CNO). The analyzed neuronal measurements are of optical calcium fluorescent activity collected from a large population of identified neurons from cortical regions of interest, acquired using two photon microscopy imaging (Fig.~\ref{fig:exp}). In conjunction, high-resolution behavioral recordings of the subject are acquired using a camera (400 Hz). This serves to label the time frames and to determine whether the subject performed the task successfully during the trial. The fluorescent measurements are manually grouped into elliptical regions of interest (ROIs) (Fig.~\ref{fig:exp}), and preprocessing is applied as follows. The spatial average fluorescence of each ROI $k$ per time frame $t$ in a single trial is \begin{equation} \label{eq:F} F_k(t) = \frac{1}{\vert \textrm{ROI}_k \vert}\sum_{i,j\in \textrm{ROI}_k} I[i,j,t], \end{equation} where $I$ is the fluorescence image, $i$ and $j$ are the pixel row and column indices in the image, respectively, and $\vert \textrm{ROI}_k \vert$ is the area of the $k$-th ROI. The baseline fluorescence for ROI $k$ in a single trial $T$ is calculated by averaging over the subset of time frames $S_k$ whose fluorescence averages $F_k(t)$ have the $10\%$ lowest values: $\bar{F}_k = \frac{1}{\vert S_k \vert}\sum_{t\in S_k} F_k(t)$. Finally, the neuron measurement at each time frame $\mathbf{X}[k,t,T]$ is set using $\frac{\Delta F}{F}$: \begin{equation} \label{eq:dFF} \mathbf{X}[k,t,T] = \frac{F_k(t) - \bar{F}_k}{ \bar{F}_k}. \end{equation} For simplicity, we refer to the ROIs as neurons in our analysis. \subsection{Data} We focus on neuronal measurements from the primary motor cortex region (M1), taken from a specific mouse in a single day of experimental training sessions.
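The preprocessing in~(\ref{eq:F})--(\ref{eq:dFF}) can be sketched as follows; we read the baseline $\bar{F}_k$ as the mean over the $10\%$ of frames with the lowest fluorescence, which is our interpretation of the definition above:

```python
import numpy as np

def delta_f_over_f(F):
    """dF/F per ROI. F has shape (n_rois, n_frames); the baseline of
    each ROI is the mean of its 10% lowest fluorescence values."""
    n_frames = F.shape[1]
    k = max(1, int(0.10 * n_frames))        # size of the subset S_k
    low = np.sort(F, axis=1)[:, :k]         # k lowest values per ROI
    baseline = low.mean(axis=1, keepdims=True)
    return (F - baseline) / baseline

# Toy trace for a single ROI: baseline around 10 with a transient peak at 30.
F = np.array([[10.0, 11.0, 10.0, 20.0, 30.0, 10.0, 12.0, 10.0, 15.0, 10.0]])
dff = delta_f_over_f(F)
```

For this toy trace the baseline is $10$, so the peak frame at $F=30$ yields $\Delta F/F = 2$.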
The data is composed of 59 consecutive trials, where the first 19 trials are considered ``control'', followed by 40 trials in which the activity of the somatosensory region was silenced by injection of CNO, thus activating DREADDS. Each trial lasts 12 seconds, during which activity in 121 neurons is measured for 119 time frames. Thus, the data can be seen as 3-dimensional, measuring a vector of neurons at each time frame within each trial. The data is visualized as 2D slices for several neurons, time frames and trials in Fig.~\ref{fig:data3d_slices}. The time frame (1-119) within the trial is a local time scale, and the trial index represents a global time scale (1-59). Along with the neuron measurements, we also have binary data labeling an event for each time and trial (Fig.~\ref{fig:label}). The labeling is performed using a modified version of the machine-learning-based JAABA software, annotating discrete behavioral events~\cite{Kabra2013}. There are 11 labeled events that provide additional prior information helpful in verifying our analysis. An auditory cue (``tone'' event) is activated after 4 seconds (frames 40-42) and the food pellet comes into position at 4.4 seconds (frames 44-46). The ``tone'' event is typically followed either by a successful ``grab'' event and an ``at mouth'' event, which lasts until the end of the trial, or by several failed ``grab'' events, in which case the trial is labeled as a ``miss'' event, i.e., the subject failed to grab the food pellet and bring it to its mouth. \begin{figure}[t!] \centering{\includegraphics[width=0.95\linewidth]{trial_labels}} \caption{Binary event labels for two trials. (left) Successful trial in which the subject grabs and eats the food pellet. (right) Failure in which the subject makes several failed attempts to grab the food.} \label{fig:label} \end{figure} The control data consists of 19 trials, 11 of which were successful, i.e., the mouse managed to grab and eat the food pellet.
After 19 trials, CNO was injected IP to silence the sensory cortex (S1), which sends feedback information to M1. The next 40 trials, referred to as ``silencing trials'', included 10 successful trials. During these trials, the behavior of the mouse changes, demonstrated by a decrease in ``at mouth'' (chewing) events and an increase in ``miss'' events (in which the mouse does not manage to grab the food). Note that not all silencing trials are ``miss'' and not all control trials are successful. \subsection{Tri-geometric analysis} The activity of the neurons is such that they are correlated at certain times, but completely unrelated at others, and certain neurons are sensitive to the auditory trigger, whereas others completely disregard it. The goal is to automatically extract co-active communities of neurons, as they relate to the activity of the mouse. We first analyze all 59 trials together, using Algorithm~\ref{alg:3d}. For the weights~(\ref{eq:omega2d}) used in the multiscale metric~(\ref{eq:2demd}), we choose $\beta_r=1,\beta_t=1,\beta_T=0$. We describe in the following how the analysis is used to derive meaningful results for each dimension. \begin{figure*}[th] \centering{\includegraphics[width=0.9\linewidth]{time_embedding_eigenvec}} \caption{Embedding of time frames. (a-b) 3-dimensional embedding of all the 2D time frame slices (as in Fig.~\ref{fig:data3d_slices}(center)), constructed by our tri-geometry analysis, where each time sample ($t\in\{1,...,119\}$) is a 3D point. In (a) the points are colored by the time frame index, and in (b) they are colored according to pre-tone frames (blue) and post-tone frames (red). The tone, played at sample $t=42$ (marked by an arrow), is distinctively recovered from the data. (c) First 11 eigenvectors of the time embedding. Each column is an eigenvector $\psi_{t,\ell}\in\mathbb{R}^{119}, \; \ell \in \{1,...,11\}$. In general, the eigenvectors take the form of harmonic functions at different scales.
Time $t=42$ (the tone) is apparent (marked by the box). Some eigenvectors correspond to harmonic functions over the entire trial (e.g., $\psi_{t,1}$), while some are localized in the pre-tone region (e.g., $\psi_{t,9}$), and some in the post-tone region (e.g., $\psi_{t,11}$). } \label{fig:time_embed} \end{figure*} Figure~\ref{fig:time_embed} presents the 3D embedding of the time frames, where each 3D point is colored by the time index $t \in \{1,...,119\}$ (a). The embedding clearly organizes the time frames through the various repetitive experiments into two dominant clusters: ``pre-tone'' and ``post-tone'' frames (Fig.~\ref{fig:time_embed}(b)), where the tone signifies the cue for the animal to begin the hand reach movement. We emphasize that this prior information was not used in the data-driven analysis. The embedding in effect isolates the time where the auditory tone is activated for the subject to reach for food. Figure~\ref{fig:time_embed}(c) presents the first eleven non-trivial eigenvectors $\{\psi_d(t)\}_{d=1}^{11}$ obtained by the decomposition of the affinity matrix on the time dimension. Some eigenvectors correspond to harmonic functions over the entire interval $t \in [1,119]$. However, some are localized either on the pre-tone region (e.g., $\psi_{t,9}$) or on the post-tone region (e.g., $\psi_{t,8}$ and $\psi_{t,11}$). In addition, each eigenvector captures the time at varying scales. This result demonstrates the power of our analysis; it shows that in a completely data-driven manner, a Fourier-like (harmonic) basis is attained. However, in contrast to the ``generic'' Fourier basis, which is fixed, the obtained basis is data adaptive and captures and characterizes true hidden phenomena related to external stimuli (the tone) and to different patterns of behavior (before and after the tone).
Thus, the embedding provides a verification of the knowledge we have regarding the time dimension in terms of regions of interest, and enables us to pinpoint specific times of interest, essentially capturing the ``script'' of the trial. We do not present the local decomposition of the time frames via the flexible tree since it is not of interest, as this dimension is smooth, and is therefore simply decomposed into local temporal neighborhoods. We next examine the analysis of the trial dimension, i.e., the organization of the global time. \begin{figure}[th] \centering{\includegraphics[width=0.95\linewidth]{trial_embedding.pdf}} \caption{The 3D embedding of the 2D trial slices (Fig.~\ref{fig:data3d_slices}(left)) of all the trials $T\in\{1,...,59\}$. Each trial slice is represented by a single 3D point, colored by the trial index (here as well, the trial index was not taken into account in the analysis). (a) Initial trial embedding based on the cosine affinity. (b) Trial embedding derived from the bi-tree multiscale metric. Trials are clustered in three main groups, where the red and blue clusters are closer together. } \label{fig:trial_embed} \end{figure} In Fig.~\ref{fig:trial_embed}, we compare the embedding of the trials, obtained from the initial cosine affinity (a), and from the bi-tree multiscale metric (b). The points are colored by the trial index, where blue corresponds to the control trials (1-19), green-orange corresponds to the first silencing trials (20-44), and red corresponds to the last silencing trials (45-59). Our tri-geometry analysis yields an embedding (Fig.~\ref{fig:trial_embed}(b)) in which the blue and red points, corresponding to the first and last trials, respectively, are grouped together. This clearly indicates the temporal effect of silencing the somatosensory cortex on the activity of the motor cortex.
This is a promising result: based solely on the neuronal activity, the data is functionally self-organized according to the brain-activity manipulation we performed, without this information being provided during the analysis. This result leads us to hypothesize that our silencing manipulation has a lag, and also that it wears off over the duration of the experiment. Our analysis recovers hidden biological cues and enables accurate indication of pathological dysfunction driven by neuronal activity evidence. To highlight the contribution of our approach in the analysis of such data, we compare our embedding to the 3D diffusion maps obtained by the initial cosine affinity (Fig.~\ref{fig:trial_embed}(a)), which does not exhibit any particular organization. Thus, the refinement via iterative application of the algorithm is essential. The multiscale local organization via the trees and the coupling of the dimensions via the metric contribute to deriving a meaningful global embedding. \begin{figure}[t] \centering{\includegraphics[width=0.80\linewidth]{trial_tree.pdf}} \caption{Flexible tree of trials ($T\in\{1,...,59\}$). The leaves are colored by trial index. (a) Tree corresponding to the initial trial embedding in Fig.~\ref{fig:trial_embed}(a). (b) Tree corresponding to the bi-tree multiscale metric embedding in Fig.~\ref{fig:trial_embed}(b). This tree better captures the nature of the trials, separating the pathological dysfunction caused by the silencing from the normal trials. } \label{fig:trial_tree} \end{figure} \begin{figure*}[t!] \centering{\includegraphics[width=0.9\linewidth]{sensor_tree_nodes.pdf}} \caption{Neuron tree for the silencing trials for iteration $n=2$. To demonstrate the organization obtained by the tree, we highlight several interesting tree folders from level $l=3$, marked with different colors and letters. Neurons belonging to the highlighted folders are grouped together, with a colored border corresponding to the folder color.
Each neuron has been reorganized as a 2D matrix of $n_T \times n_t$ ($40 \times 119$). The neurons are grouped together according to similar properties. (a) Yellow folder: 8 neurons that are active only at or after the tone (vertical separation), and mostly in trials under the effect of the silencing (horizontal separation). The first three are associated with the tone itself; the five on the right are associated with post-tone activity. (b) Orange folder: 8 neurons that were dominant mostly in trials under the effect of the silencing (horizontal separation), but are not sensitive to the tone. (This node is joined with the yellow node at level $l=5$). (c) Purple folder: 11 neurons that were mostly active during trials not under the effect of the silencing, 8 of which are active after the tone (vertical separation). (d) Green folder: 5 neurons that were silenced by the manipulation (horizontal separation). } \label{fig:sensors} \end{figure*} The improved clustering of the trials achieved by the bi-tree multiscale metric is also apparent when examining the flexible trees obtained from the two embeddings (Fig.~\ref{fig:trial_tree}). The leaves are colored by the trial index as in the embedding. The tree obtained from the new embedding better separates the trials in which the pathological dysfunction caused by the silencing is evident from the normal trials. Remembering that flexible trees are constructed bottom-up using the embedding coordinates, this validates the claim that proximity in the embedding space captures the global temporal trend in the data. To analyze the neurons, we split the data into two parts and analyze each separately, as this enables us to discover both behavioral patterns and pathological dysfunction. First, we examine the 40 trials composing the silencing trials. The neurons were preprocessed by subtracting the mean of each neuron over all trials, and normalizing by its standard deviation across all trials.
This enables us to examine the increase and decrease of activity in each neuron without being sensitive to the intensity of the measurements. Figure~\ref{fig:sensors} presents the multiscale hierarchical organization of the 2D slices of all the neurons in a flexible tree $\mathcal{T}_r^{(2)}$, obtained after two iterations of our analysis, highlighting several interesting tree folders from level $l=3$. Neurons composing four folders from this level are presented. The folders are marked in different colors on the tree, and the neurons belonging to each folder are grouped together, with a border in the color corresponding to the folder. Each neuron has been reorganized as a 2D matrix of size $n_T \times n_t$ ($40 \times 119$). The neurons are grouped together according to similar properties, and the displayed folders clearly relate to pathological dysfunction. For example, the orange folder consists of neurons that are active only during trials under the effect of the somatosensory silencing (horizontal separation). The yellow folder consists of neurons that are active only at or after the tone (vertical separation), and mostly in trials under the effect of the silencing (horizontal separation). In contrast, the purple folder consists of neurons that are active after the tone but during trials without the silencing effect. Finally, the green folder consists of neurons that were silenced by the manipulation. This leads us to hypothesize, as with the trial analysis, that the effect of the somatosensory silencing has a slight delay, and in addition that it wears off after a certain number of trials, since the experiment was very long. Our analysis groups neurons demonstrating the same activity patterns together in an automatic data-driven manner without manual intervention. The silencing trials enable us to analyze the neurons in terms of how they are affected by the introduced virus.
We now treat the 19 control trials, which allow us to analyze the behavioral aspect of the neurons without external intervention. In Fig.~\ref{fig:sensors_control}, we display the neuron tree, $\mathcal{T}_r^{(1)}$, obtained after one iteration of our analysis, and examine folders from levels $l=2,3,4$. Neurons composing five folders from these levels are presented. The folders are marked in different colors on the tree and the neurons belonging to each folder are grouped together, with a border in the color corresponding to the folder. Each neuron has been reorganized as a 2D matrix of size $n_T \times n_t$ ($19 \times 119$). We use the labeled ``at mouth'' event and the prior information on the time of the auditory tone to analyze the results. The binary labels indicating ``at mouth'' activity have also been reordered as a 2D matrix of size $n_T \times n_t$ ($19 \times 119$), and are displayed within the black border. The results indicate that neurons are grouped by similarity, clearly related to the behavioral data. The upper two folders (red and orange) show increased activity before and during the auditory tone. The next three folders show increased activity after the tone. The yellow folder is composed of neurons that are active during different trials, regardless of ``at mouth'' activity. The purple folder, on the other hand, contains neurons that are active post-tone, almost entirely during trials which were successful, i.e., the subject managed to eat the food pellet, as indicated by continuous ``at mouth'' labeling till the end of the trial. Finally, the green folder is composed of two neurons with the opposite activity. They are most active post-tone during trials in which the subject failed to eat the food pellet. Note that this analysis is data-driven, i.e., no prior information on the event labels is used in grouping the neurons.
\begin{figure*}[th] \centering{\includegraphics[width=0.9\linewidth]{sensor_tree_control.pdf}} \caption{Neuron tree for control trials for iteration $n=1$. We highlight several interesting tree folders from levels $l=2$--$4$, marked with different colors and letters. Each neuron has been reorganized as a 2D matrix of size $n_T \times n_t$ ($19 \times 119$). Neurons belonging to the highlighted folders are grouped together, with a colored border corresponding to the folder color. (a-b) Red folder (8 neurons) and orange folder (8 neurons) in level $l=3$ are active before the tone. (Note these nodes are joined at level $l=5$). (c) Yellow folder: 8 neurons that are active post-tone. (d) Purple folder ($l=4$): 10 neurons that are active post-tone only during trials which were labeled as ``at mouth". (e) Green folder: 2 neurons with significant activity post-tone only during trials which were labeled as ``miss". (*) The black border contains the binary labeling of the ``at mouth" event, ordered as an $n_T \times n_t$ matrix. } \label{fig:sensors_control} \end{figure*} In the analysis of the neurons, the main contribution is the produced partition tree. The global embedding did not yield meaningful results, and the examination of local folders in the tree was most informative. Note that we are looking at a limited set of ``sensors'' since the neurons were manually grouped together into ROIs. In future work we intend to analyze a larger group of sensors, by examining all pixels acquired from the 2-photon imaging video separately. We know from previous work that increasing the number of sensors is typically beneficial to the iterative analysis. This will remove any biases introduced by the pre-processing and enable us to identify spatial structures not limited to ellipses.
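Since flexible trees are constructed bottom-up from the embedding coordinates, the idea can be illustrated in spirit with generic agglomerative clustering. This is a hedged sketch on synthetic coordinates, not the authors' flexible-tree algorithm, and the two synthetic groups are only stand-ins for, e.g., two trial types:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical 2D embedding coordinates for 19 trials (synthetic values):
# two well-separated groups standing in for two kinds of trials.
rng = np.random.default_rng(1)
embedding = np.vstack([rng.normal(0.0, 0.2, size=(10, 2)),
                       rng.normal(3.0, 0.2, size=(9, 2))])

# Bottom-up merging: nearby points in the embedding join first, so the
# folders of the resulting tree reflect proximity in the embedding space.
Z = linkage(embedding, method='average')

# Cutting the dendrogram into two clusters recovers the two groups.
folders = fcluster(Z, t=2, criterion='maxclust')
```

Cutting the tree at different levels yields folders at different scales, mirroring how the tree folders above are examined at several levels $l$.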
Our experimental results demonstrate that our approach identifies for the first time (to the best of our knowledge), solely from observations and in a purely data-driven manner: (i) functional subsets of neurons, (ii) activity patterns associated with particular behaviors, and (iii) the dynamics and variability in these subsets and patterns as they relate to context and to different time scales (from the variability within a trial, to a global trend across trials induced by the silencing method). In analyzing the intra-trial time dimension, we pinpoint the time of the auditory trigger, and separate the time frames into multiscale local regions, before and after the trigger. Finally, in organizing the trials, we are able both to separate the trials into ``success'' and ``failure'' cases, and to determine a global trend that relates to an introduced external intervention. Thus, these methods lay a solid foundation for modeling the sensory-motor system by providing sufficiently fine structures and an accurate view of the data to test our hypotheses, within an integrated computational theory of sensory-motor perception and action. We note that conventional manifold learning tools did not yield any meaningful data organization for this case. Thus, organizing the neurons or the time samples separately by a 1D geometry using conventional manifold learning methods is inappropriate for this complex data. The fact, demonstrated here, that the neuronal activity of different types of neurons is correlated only during specific times, and might be random otherwise, verifies the need for a coupled organization analysis which simultaneously organizes time, trials and neurons into tri-geometries. \section{Conclusions} In this paper we presented a new data-driven methodology for the analysis of trial-based data, specifically trials of neuronal measurements.
Our approach relies on an iterative local-to-global refinement procedure, which organizes the data in coupled hierarchical structures and yields a global embedding in each dimension. Our analysis enabled the extraction of hidden biological cues and an accurate indication of pathological dysfunction, solely from the measurements. We identified neuronal activity patterns and variability in these patterns related to external triggers and behavioral events, at different time scales, from recovering the local ``script'' of the trial, to a global trend across trials. In this paper we focused on neuronal measurements, but our approach is general and can be applied to other types of trial-based experimental data, and even to general high-dimensional datasets such as video, temporal hyperspectral measurements, and more. In future work, we intend to analyze the neuronal measurements from the two-photon imaging without clustering them into ROIs. This significantly increases the number of ``sensors'' and should enable learning complex spatial structures in the cortex. Furthermore, our analysis can be extended to higher dimensions, e.g., incorporating behavioral data as a fourth dimension in the neuronal measurements. \bibliographystyle{IEEEtran}
\section{Introduction} A shadow is a two-dimensional dark region in the observer's sky corresponding to light rays that fall into an event horizon when propagated backwards in time. It has been shown that the shape and size of the shadow carry characteristic information about the geometry around the celestial body \cite{sha1,sha2,sha3}, which means that the shadow can be regarded as a useful tool to probe the nature of the celestial body and to further check various theories of gravity. The investigations \cite{sha2,sha3} indicate that the shadow is a perfect disk for a Schwarzschild black hole and that it changes into an elongated silhouette for a rotating black hole due to its dragging effect. The cusp silhouette of the shadow is found in the spacetime of a Kerr black hole with Proca hair \cite{fpos2} and of a Konoplya-Zhidenko rotating non-Kerr black hole \cite{sb10} as the black hole parameters lie in a certain range. Moreover, the shadows of black holes with other characterizing parameters have been studied recently \cite{sha4,sha5,sha6,sha7,sha9,sha10,sha11,sha12,sha13,sha14,sha14a,sha15,sha16, sb1,sha17,sha19,shan1} (for details, see also the review \cite{shan1add}), which indicates that these parameters bring richer silhouettes to the shadows cast by black holes. However, most of the above investigations have focused only on cases where the null geodesics are variable-separable and the corresponding dynamical systems are integrable. When the dynamical systems are non-integrable, the motion of photons can be chaotic, which could lead to some novel features of the black hole shadow. Recently, it has been shown that due to such chaotic lensing, multi-disconnected shadows with fractal structures emerge for a Kerr black hole with scalar hair \cite{sw,swo,astro,chaotic} or a binary black hole system \cite{binary,sha18}.
Further analyses show that these novel patterns with fractal structures in shadows are determined by the non-planar bound orbits \cite{fpos2} and the invariant phase space structures \cite{BI} of the photon motion in the black hole spacetimes. Similar analyses have also been done for the cases of ultra-compact objects \cite{bstar1,bstar2}. It is well known that there exist enormous magnetic fields around large astrophysical black holes, especially in the nuclei of galaxies \cite{Bm1,Bm2,Bm3,Bm4}. These strong magnetic fields could be induced by currents in accretion disks near the supermassive galactic black holes. Based on such strong magnetic fields, many current theoretical models account for black hole jets, which are among the most spectacular astronomical events in the sky \cite{Blandford1,Blandford2,Punsly}. In general relativity, one of the most important solutions with magnetic fields is the Ernst solution \cite{Ernst}, which describes the gravity of a black hole immersed in an external magnetic field. Interestingly, for an Ernst black hole, the polar circumference of the event horizon increases with the magnetic field, while the equatorial circumference decreases. Bonnor's metric \cite{mmd1} is another important solution of the Einstein-Maxwell field equations, which describes a static massive object with a dipole magnetic field in which two static extremal magnetic black holes with charges of opposite signs are situated symmetrically on the symmetry axis. For the Bonnor black dihole spacetime, the area of the horizon is finite, but the proper circumference of the horizon surface is zero. In particular, it is not a member of the Weyl electromagnetic class and it cannot reduce to the Schwarzschild spacetime in the limit of vanishing magnetic dipole. The new properties of the spacetime structure originating from the magnetic dipole lead to chaos in the motion of particles \cite{mmd,mmd10,bbon1}.
Since the shadow of a black hole is determined by the propagation of light rays in the spacetime, it is expected that the chaotic lensing caused by the new spacetime structure will yield some new effects on the black hole shadow. Therefore, in this paper, we focus on studying the shadow of Bonnor black dihole \cite{mmd1} and probe the effect of the magnetic dipole parameter on the black hole shadow. The paper is organized as follows. In Sec. II, we review briefly the metric of Bonnor black dihole and then analyze the propagation of light rays in this background. In Sec. III, we investigate the shadows cast by Bonnor black dihole. In Sec. IV, we discuss the invariant phase space structures of photon motion and the formation of the shadow cast by Bonnor black dihole. Finally, we present a summary. \section{Spacetime of Bonnor black dihole and null geodesics} Let us now review briefly the spacetime of Bonnor black dihole. In the 1960s, Bonnor obtained an exact solution \cite{mmd1} of the Einstein-Maxwell equations which describes a static massive source carrying a magnetic dipole. In the standard coordinates, the metric of this spacetime has the form \cite{mmd1} \begin{eqnarray} \label{xy} ds^{2}= -\bigg(\frac{P}{Y}\bigg)^{2}dt^{2}+\frac{P^{2}Y^{2}}{Q^{3}Z}(dr^{2}+Zd\theta^{2}) +\frac{Y^{2}Z\sin^{2}\theta}{P^{2}}d\phi^{2}, \end{eqnarray} where \begin{equation} P=r^{2}-2mr-b^{2}\cos^{2}\theta,\;\;Q=(r-m)^{2}-(m^{2}+b^{2})\cos^{2}\theta, \;\;Y=r^{2}-b^{2}\cos^{2}\theta,\;\;Z=r^{2}-2mr-b^{2}. \end{equation} The corresponding vector potential $A_{\mu}$ is given by \begin{eqnarray} A_{\mu}= (0,0,0,\frac{2mbr\sin^{2}\theta}{P}), \end{eqnarray} where $\mu=0,1,2,3$ correspond to the elements of $A_{\mu}$ associated with the coordinates $t, r, \theta, \phi$, respectively. It is a static axially-symmetric solution characterized by two independent parameters $m$ and $b$, which are related to the total mass of Bonnor black dihole $M$ by $M=2m$ and to the magnetic dipole moment $\mu$ by $\mu=2mb$.
Obviously, this spacetime is asymptotically flat since, as the polar coordinate $r$ approaches infinity, the metric tends to the Minkowski one. The event horizon of the spacetime (\ref{xy}) is the null hypersurface $f$ satisfying \begin{eqnarray} g^{\mu\nu}\frac{\partial f}{\partial x^{\mu}}\frac{\partial f}{\partial x^{\nu}}=0, \end{eqnarray} which yields \begin{eqnarray} r^{2}-2mr-b^{2}=0. \end{eqnarray} It is obvious that there exists only one horizon and the corresponding horizon radius is $r_h=m+\sqrt{m^2+b^2}$. The area of the horizon is $\mathcal{A}=16\pi m^2r^2_h/(m^2+b^2)$, but the proper circumference of the horizon surface is zero since $g_{\phi\phi}=0$ on the horizon. This implies that the $Z=0$ surface is not a regular horizon since there exist conical singularities at $r=r_h$. The singularity along the segment $r=r_h$ can be eliminated by selecting a proper period $\Delta\phi=2\pi[b^2/(m^2+b^2)]^2$, but such a choice yields a conical deficit running along the axes $\theta=0, \;\pi$, from the endpoints of the dipole to infinity \cite{mmd101,mmd102}. The defects outside the dipole can be treated as open cosmic strings, and then Bonnor black dihole is held apart by the cosmic strings that pull from its endpoints. Since the angular coordinate $\phi$ is periodic, an azimuthal curve $\gamma=\{t=Constant, r=Constant, \theta=Constant\}$ is a closed curve with invariant length $s^2_{\gamma}=g_{\phi\phi}(2\pi)^2$. Then the integral curve with $(t, r, \theta)$ fixed is a closed timelike curve when $g_{\phi\phi}<0$. Thus, there exist closed timelike curves inside the horizon. However, the region outside the horizon is regular and there are no closed timelike curves. Moreover, the spacetime (\ref{xy}) possesses complicated singular behaviour at $P=0$, $Q=0$ and $Y=0$, but there is no singularity outside the horizon.
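These horizon quantities are simple to check numerically; the following is a minimal sketch in units where $m=1$, with $b=1.4$ chosen only as a sample value:

```python
import numpy as np

m, b = 1.0, 1.4

# Horizon: the larger root of r^2 - 2 m r - b^2 = 0.
r_h = m + np.sqrt(m**2 + b**2)

# Horizon area, and the period of phi that removes the conical
# singularity at r = r_h (at the price of axis defects).
area = 16.0 * np.pi * m**2 * r_h**2 / (m**2 + b**2)
dphi = 2.0 * np.pi * (b**2 / (m**2 + b**2))**2

# Z = r^2 - 2 m r - b^2 must vanish on the horizon.
Z_h = r_h**2 - 2.0 * m * r_h - b**2
```

Note that $\Delta\phi < 2\pi$ for any $b$, which reflects the conical defects discussed above.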
For $b=0$, one can find that it does not reduce to the Schwarzschild spacetime, but to the Zipoy-Voorhees one with $\delta=2$ \cite{mmd12,mmd11}, which describes a monopole of mass $2m$ together with higher mass multipoles depending on the parameter $m$. These special spacetime properties affect the propagation of photons and further change the shadow of Bonnor black dihole (\ref{xy}). The Hamiltonian of photon motion along null geodesics in the spacetime (\ref{xy}) can be expressed as \begin{equation} \label{hami} H(x,p)=\frac{1}{2}g^{\mu\nu}(x)p_{\mu}p_{\nu}=0. \end{equation} Since the metric functions in the spacetime (\ref{xy}) are independent of the coordinates $t$ and $\phi$, it is easy to obtain two conserved quantities $E$ and $L_{z}$ of the following forms \begin{eqnarray} \label{EL} E=-p_{t}=-g_{00}\dot{t},\;\;\;\;\;\;\;\;\;\;\;\;\;\; L_{z}=p_{\phi}=g_{33}\dot{\phi}, \end{eqnarray} which correspond to the energy and the $z$-component of the angular momentum of a photon moving in the background spacetime.
With these two conserved quantities, we can obtain the equations of motion of a photon along null geodesics \begin{eqnarray} \label{cdx} \ddot{r}&=&-\frac{1}{2}\frac{\partial }{\partial r}\bigg[\ln\bigg(\frac{P^2Y^2}{Q^3Z}\bigg)\bigg]\dot{r}^{2}-\frac{\partial }{\partial \theta}\bigg[\ln\bigg(\frac{P^2Y^2}{Q^3Z}\bigg)\bigg]\dot{r}\dot{\theta} +\frac{Z}{2}\frac{\partial }{\partial r}\bigg[\ln\bigg(\frac{P^2Y^2}{Q^3}\bigg)\bigg]\dot{\theta}^{2}\nonumber\\ &&-\frac{Q^3Z}{2}\bigg[\frac{E^2}{P^4}\frac{\partial}{\partial r}\ln\bigg(\frac{P^2}{Y^2}\bigg)- \frac{L^2_z}{Y^4Z\sin\theta}\frac{\partial}{\partial r}\ln\bigg(\frac{Y^2Z\sin\theta}{P^2}\bigg)\bigg], \nonumber\\ \ddot{\theta}&=&\frac{1}{2Z}\frac{\partial }{\partial \theta}\bigg[\ln\bigg(\frac{P^2Y^2}{Q^3}\bigg)\bigg]\dot{r}^{2}-\frac{\partial }{\partial r}\bigg[\ln\bigg(\frac{P^2Y^2}{Q^3Z}\bigg)\bigg]\dot{r}\dot{\theta} +\frac{1}{2}\frac{\partial }{\partial \theta}\bigg[\ln\bigg(\frac{P^2Y^2}{Q^3}\bigg)\bigg]\dot{\theta}^{2}\nonumber\\ &&-\frac{Q^3}{2}\bigg[\frac{E^2}{P^4}\frac{\partial}{\partial \theta}\ln\bigg(\frac{P^2}{Y^2}\bigg)- \frac{L^2_z}{Y^4Z\sin\theta}\frac{\partial}{\partial \theta}\ln\bigg(\frac{Y^2Z\sin\theta}{P^2}\bigg)\bigg], \end{eqnarray} with the constraint condition \begin{eqnarray} \label{lglr} H=\frac{1}{2}\bigg(\frac{Q^{3}Z}{P^{2}Y^{2}}p_{r}^{2}+\frac{Q^{3}}{P^{2}Y^{2}}p_{\theta}^{2} +V\bigg)=0, \end{eqnarray} where $p_{r}$ and $p_{\theta}$ are the components of the momentum of the photon, $p_{r}=g_{11}\dot{r}$ and $p_{\theta}=g_{22}\dot{\theta}$, and $V$ is the effective potential of the form \begin{eqnarray} \label{vv} V=-(\frac{Y}{P})^{2}E^{2}+\frac{P^{2}}{Y^{2}Z\sin^{2}\theta}L_{z}^{2}. \end{eqnarray} Obviously, in the case with a magnetic dipole (i.e., $b\neq0$), we find that the equations of motion (\ref{cdx}) and (\ref{lglr}) are not variable-separable and the corresponding dynamical system is non-integrable because it admits only two integrals of motion, $E$ and $L_z$.
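As a numerical sanity check on this Hamiltonian system, one can integrate a single null ray and verify that the constraint $H=0$ of Eq. (\ref{lglr}) is preserved along the motion. The sketch below is illustrative only: it assumes $m=1$, $E=1$, a sample $\eta=L_z/E=-6$, and computes the gradients of $H$ by finite differences rather than using the explicit expressions (\ref{cdx}):

```python
import numpy as np
from scipy.integrate import solve_ivp

m, b, E, Lz = 1.0, 1.4, 1.0, -6.0   # sample parameters, eta = Lz/E = -6

def hamiltonian(r, th, pr, pth):
    c2 = np.cos(th)**2
    P = r*r - 2*m*r - b*b*c2
    Q = (r - m)**2 - (m*m + b*b)*c2
    Y = r*r - b*b*c2
    Z = r*r - 2*m*r - b*b
    V = -(Y/P)**2 * E**2 + P**2 * Lz**2 / (Y**2 * Z * np.sin(th)**2)
    return 0.5 * (Q**3*Z/(P**2*Y**2) * pr**2 + Q**3/(P**2*Y**2) * pth**2 + V)

def rhs(lam, s, eps=1e-6):
    # Hamilton's equations, with dH computed by central finite differences.
    grad = np.empty(4)
    for i in range(4):
        e = np.zeros(4)
        e[i] = eps
        grad[i] = (hamiltonian(*(s + e)) - hamiltonian(*(s - e))) / (2.0*eps)
    return [grad[2], grad[3], -grad[0], -grad[1]]

# Equatorial ray starting at r = 10, moving inward; p_r is fixed by H = 0.
r0, th0, pth0 = 10.0, np.pi/2, 0.0
P0 = r0*r0 - 2*m*r0; Q0 = (r0 - m)**2; Y0 = r0*r0; Z0 = r0*r0 - 2*m*r0 - b*b
V0 = -(Y0/P0)**2 * E**2 + P0**2 * Lz**2 / (Y0**2 * Z0)
pr0 = -np.sqrt(-V0 * P0**2 * Y0**2 / (Q0**3 * Z0))

sol = solve_ivp(rhs, (0.0, 3.0), [r0, th0, pr0, pth0], rtol=1e-10, atol=1e-10)
H_end = hamiltonian(*sol.y[:, -1])   # stays ~0 along the null ray
```

Along the integration $H$ remains zero to within integration error, the numerical counterpart of the constraint (\ref{lglr}); no third constant of motion is available when $b\neq 0$.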
This implies that the motion of the photon could be chaotic in the spacetime (\ref{xy}), which will bring some new features to the shadow cast by Bonnor black dihole. \section{Shadow cast by Bonnor black dihole} In this section, we study the shadow cast by Bonnor black dihole with the method called ``backward ray-tracing'' \cite{sw,swo,astro,chaotic}, in which the light rays are assumed to evolve from the observer backward in time. In this method, we must solve numerically the null geodesic equations (\ref{EL}) and (\ref{cdx}) for each pixel in the final image with the corresponding initial condition. The image of the shadow in the observer's sky is composed of the pixels corresponding to the light rays falling into the horizon of the black hole. Since the spacetime of Bonnor black dihole (\ref{xy}) is asymptotically flat, we can define the same observer's sky at spatial infinity as in the usual static cases. The observer basis $\{e_{\hat{t}},e_{\hat{r}},e_{\hat{\theta}},e_{\hat{\phi}}\}$ can be expanded in the coordinate basis $\{ \partial_t,\partial_r,\partial_{ \theta},\partial_{\phi} \}$ as \cite{sw,swo,astro,chaotic} \begin{eqnarray} \label{zbbh} e_{\hat{\mu}}=e^{\nu}_{\hat{\mu}} \partial_{\nu}, \end{eqnarray} where $e^{\nu}_{\hat{\mu}}$ satisfies $g_{\mu\nu}e^{\mu}_{\hat{\alpha}}e^{\nu}_{\hat{\beta}} =\eta_{\hat{\alpha}\hat{\beta}}$, and $\eta_{\hat{\alpha}\hat{\beta}}$ is the usual Minkowski metric. For a static spacetime, it is convenient to choose the decomposition \begin{eqnarray} \label{zbbh1} e^{\nu}_{\hat{\mu}}=\left(\begin{array}{cccc} \zeta&0&0&0\\ 0&A^r&0&0\\ 0&0&A^{\theta}&0\\ 0&0&0&A^{\phi} \end{array}\right), \end{eqnarray} where $\zeta$, $A^r$, $A^{\theta}$, and $A^{\phi}$ are real coefficients. From the Minkowski normalization, one can find that the observer basis obeys \begin{eqnarray} e_{\hat{\mu}}e^{\hat{\nu}}=\delta_{\hat{\mu}}^{\hat{\nu}}.
\end{eqnarray} Therefore, we have \begin{eqnarray} \label{xs} \zeta=\frac{1}{\sqrt{-g_{00}}},\;\;\;\;\;\;\;\;\;\;\;\;\;\; A^r=\frac{1}{\sqrt{g_{11}}},\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; A^{\theta}=\frac{1}{\sqrt{g_{22}}},\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; A^{\phi}=\frac{1}{\sqrt{g_{33}}}, \end{eqnarray} and then the locally measured four-momentum $p^{\hat{\mu}}$ of a photon can be obtained by the projection of its four-momentum $p^{\mu}$ onto $e_{\hat{\mu}}$, \begin{eqnarray} \label{dl} p^{\hat{t}}=-p_{\hat{t}}=-e^{\nu}_{\hat{t}} p_{\nu},\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;p^{\hat{i}}=p_{\hat{i}}=e^{\nu}_{\hat{i}} p_{\nu}. \end{eqnarray} In the spacetime of Bonnor black dihole (\ref{xy}), the locally measured four-momentum $p^{\hat{\mu}}$ can be further written as \begin{eqnarray} \label{smjt} p^{\hat{t}}&=&\frac{1}{\sqrt{-g_{00}}}E,\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; p^{\hat{r}}=\frac{1}{\sqrt{g_{11}}}p_{r} ,\nonumber\\ p^{\hat{\theta}}&=&\frac{1}{\sqrt{g_{22}}}p_{\theta}, \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; p^{\hat{\phi}}=\frac{1}{\sqrt{g_{33}}}L_z. \end{eqnarray} After some operations similar to those in Refs.\cite{sw,swo,astro,chaotic}, we can obtain the position of the photon's image in the observer's sky \cite{sb10} \begin{eqnarray} \label{xd1} x&=&-r_{obs}\frac{p^{\hat{\phi}}}{p^{\hat{r}}}=-r_{obs}\frac{L_{z}}{\sqrt{g_{11} g_{33}}\dot{r}}, \nonumber\\ y&=&r_{obs}\frac{p^{\hat{\theta}}}{p^{\hat{r}}}= r_{obs}\frac{\sqrt{g_{22}}\dot{\theta}}{\sqrt{g_{11}}\dot{r}}. \end{eqnarray} \begin{figure}[ht] \center{\includegraphics[width=6cm ]{sfig1.eps} \caption{The spherical light source marked by four quadrants of different colors and the brown grid of longitude and latitude lines. The white reference spot lies at the intersection of the four colored quadrants.} \label{gy}} \end{figure} \begin{figure} \center{\includegraphics[width=14cm ]{sfig2.eps} \caption{The shadow cast by Bonnor black dihole (\ref{xy}) with different $b$.
Here we set $m=1$ and the observer is set at $r_{obs}=30m$ with the inclination angle $\theta_{0}=90\degree$.} \label{shb}} \end{figure} Following the procedure in \cite{sw,swo,astro,chaotic,binary,sha18}, one can divide the celestial sphere into four quadrants, each marked with a different color (green, blue, red and yellow, as shown in FIG.\ref{gy}). The longitude and latitude grid lines are marked with adjacent brown lines separated by $10^\circ$. The observer is placed off-centre within the celestial sphere at some radius $r_{obs}$. For the sake of simplicity, it is placed at the intersection of the four colored quadrants on the celestial sphere, i.e., $r_{obs}=r_{sphere}$, which is not shown in Fig.\ref{gy}. The white reference spot in Fig.\ref{gy} lies at the other intersection of the four colored quadrants, which could provide a direct demonstration of the Einstein ring \cite{sw,swo,astro,chaotic,binary,sha18}. We integrate these null geodesics with different initial conditions until they either reach a point on the celestial sphere or fall into the horizon of the compact object; the latter defines the shadow. \begin{figure} \begin{center} \includegraphics[width=14cm ]{sfig3.eps} \end{center} \caption{The fractal structure in the shadow of Bonnor black dihole (\ref{xy}) for fixed $b=1.0$. Here we set $m=1$ and the observer is set at $r_{obs}=30m$ with the inclination angle $\theta_{0}=90\degree$. } \label{fx} \end{figure} In Fig. \ref{shb}, we present the shadow cast by Bonnor black dihole (\ref{xy}) with different $b$. Here we set $m=1$ and the observer is set at $r_{obs}=30m$ with the inclination angle $\theta_{0}=90\degree$. Our numerical results show that there exists a critical value $b_c\sim 0.404$ for the shadow. As $b<b_c$, we find that the shadow is a black disk, which is similar to those in the usual static compact object spacetimes with a horizon. Moreover, we find that the size of the shadow decreases with the parameter $b$ in this case.
However, for the case $b>b_c$, there exist two anchor-like bright zones embedded symmetrically in the black disk shadow, so that the shadow looks like a concave disk with four larger eyebrows, as shown in Fig. \ref{shb} (c)-(d). The eyebrow-like features of the shadow are also found in Refs.\cite{sw,swo,astro,chaotic,binary,sha18}. Actually, many other smaller eyebrow-like shadows can be detected in the two anchor-like bright zones, as shown in Fig.\ref{fx}. This hints that the shadow possesses a self-similar fractal structure, which is caused by chaotic lensing. It is an interesting property of shadows, which is qualitatively different from those in the spacetimes where the equations of motion are variable-separable and the corresponding dynamical system is integrable. With the increase of the magnetic dipole parameter $b$, the eyebrows become longer and the fractal structure becomes richer. Moreover, we find that the two anchor-like bright zones increase with the parameter $b$, but for arbitrary $b$, the two anchor-like bright zones are disconnected since they are always separated by a black region. In other words, for Bonnor black dihole, the two larger shadows and the smaller eyebrow-like shadows are joined together by the middle black zone. Moreover, the white circles in Figs. \ref{shb} and \ref{fx} denote the Einstein ring, which is consistent with the prediction of multiple images of a source due to gravitational lensing. \section{Invariant phase space structures and formation of the shadow cast by Bonnor black dihole} In this section, we discuss the formation of the shadow cast by Bonnor black dihole by analysing the invariant phase space structures, as in Ref. \cite{BI}.
The invariant phase space structures, including fixed points, periodic orbits and invariant manifolds, are among the important features of dynamical systems, and they are applied extensively in the design of space trajectories for various spacecraft, such as a low energy transfer from the Earth to the Moon and a ``Petit Grand Tour'' of the moons of Jupiter \cite{BI17,BI18,BI19,BI20,BI22}. Recent investigations \cite{BI} show that these invariant structures play an important role in the emergence of black hole shadows. For the spacetime of Bonnor black dihole (\ref{xy}), a fixed point $x_0=(r_{0},\theta_{0},0,0)$ in the phase space $(r,\theta,p_r,p_{\theta})$ satisfies the condition \begin{eqnarray} \label{bdd} \dot{x}^{\mu}=\frac{\partial H}{\partial p_{\mu}}=0,\;\;\;\;\;\;\;\;\;\;\;\;\;\; \dot{p}_{\mu}=-\frac{\partial H}{\partial x^{\mu}}=0, \end{eqnarray} which means \begin{eqnarray} \label{bdd1} V\bigg|_{r_{0},\theta_{0}}=0,\;\;\;\;\;\;\;\;\;\;\;\;\;\;\frac{\partial V}{\partial r}\bigg|_{r_{0},\theta_{0}}=0,\;\;\;\;\;\;\;\;\;\;\;\;\;\; \frac{\partial V}{\partial \theta}\bigg|_{r_{0},\theta_{0}}=0. \end{eqnarray} The local stability of the fixed point $x_0=(r_{0},\theta_{0},0,0)$ can be obtained by linearizing the equations (\ref{bdd}), \begin{eqnarray} \label{xxh} \mathbf{\dot{X}}=J\mathbf{X}, \end{eqnarray} where $\mathbf{X}=(\tilde{x}^{\mu},\tilde{p}_{\mu})$ and $J$ is the Jacobian. The circular photon orbits in the equatorial plane, named light rings, are fixed points of the dynamics of the photon motion \cite{BI,fpos2}.
After linearizing the equations (\ref{bdd}) near the fixed point $(r_{0},\pi/2,0,0)$ and setting $m=1$, we obtain the Jacobian \begin{equation} \label{jjj} J=\left[ \begin{array}{cccc} 0 & 0 & 2A & 0 \\ 0 & 0 & 0 & 2B \\ -2C & 0 & 0 & 0 \\ 0 & -2D & 0 & 0 \end{array} \right], \end{equation} with \begin{eqnarray} \label{jjt} A&=&\frac{(r_{0}-1)^{6}(r_{0}^{2}-2r_{0}-b^{2})}{r_{0}^{6}(r_{0}-2)^{2}},\\ \nonumber B&=&\frac{(r_{0}-1)^{6}}{r_{0}^{6}(r_{0}-2)^{2}},\\ \nonumber C&=&\frac{\eta^{2}[3r_{0}^{2}(r_{0}-4)(r_{0}-2)^{3}+b^{2}r_{0} (r_{0}-2)^{2}(16+r_{0})-4b^{4}(r_{0}-3)]}{r_{0}^{4} (r_{0}^{2}-2r_{0}-b^{2})^3}-4\frac{r_{0}+1}{(r_{0}-2)^{4}},\\ \nonumber D&=&\frac{\eta^{2}(r_{0}-2)(r_{0}^{3}-2r_{0}^{2}-4b^{2})}{r_{0}^{4} (r_{0}^{2}-2r_{0}-b^{2})}-\frac{4b^{2}}{(r_{0}-2)^{3}},\\ \nonumber r_{0}&=&\frac{1}{3}\bigg[(3\sqrt{3}\sqrt{108b^4-112b^2-225}-54b^2+28)^{1/3}+7+ \frac{19}{(3\sqrt{3}\sqrt{108b^4-112b^2-225}-54b^2+28)^{1/3}}\bigg], \end{eqnarray} where $\eta\equiv L_z/E$. \begin{figure}[ht] \center{\includegraphics[width=9cm ]{sfig4.eps} \caption{Light rings (dots) and the corresponding family of periodic Lyapunov orbits (solid line) in the spacetime of Bonnor black dihole with $b=1.4$. Here we set $m=1$.} \label{zqt}} \end{figure} Let us now adopt the case $m=1$ and $b=1.4$ as an example to analyse the formation of the shadow of Bonnor black dihole (\ref{xy}) shown in Fig.\ref{shb} (d). In this special case, we find that there exist two fixed points. Their positions in phase space coincide at ($4.07,\pi/2,0,0$), but their impact parameters are $\eta_1=-9.83$ and $\eta_2=9.83$, respectively. This special distribution of the two fixed points is attributed to the fact that the considered magnetic dipole spacetime (\ref{xy}) is non-rotating. The eigenvalues of the Jacobian (\ref{jjj}) are $\pm \lambda$, $\pm \nu i$, where $\lambda=0.46$ and $\nu=0.60$.
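The closed-form radius $r_{0}$ above can be evaluated numerically and checked against the quoted light-ring values $r_0\approx 4.07$ and $\eta\approx\pm 9.83$. A sketch with $m=1$ follows; complex arithmetic is needed because the inner square root is imaginary for $b=1.4$, although the final result is real:

```python
import numpy as np

b = 1.4   # m = 1 throughout

# r0 from the closed-form expression; the discriminant is negative here,
# so the cube root is taken in the complex plane (principal branch).
disc = 108*b**4 - 112*b**2 - 225
u = (3*np.sqrt(3)*np.emath.sqrt(disc) - 54*b**2 + 28) ** (1.0/3.0)
r0 = ((u + 7 + 19/u) / 3).real    # the imaginary parts cancel

# Impact parameter of the light ring from V = 0 at theta = pi/2:
# eta^2 = Y^4 Z / P^4 in the equatorial plane (with m = 1).
P = r0*r0 - 2*r0
Y = r0*r0
Z = r0*r0 - 2*r0 - b*b
eta = np.sqrt(Y**4 * Z) / P**2
```

The two values $\pm\eta$ obtained this way correspond to the two overlapping fixed points, consistent with the non-rotating character of the spacetime.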
According to the Lyapunov central theorem, each purely imaginary eigenvalue gives rise to a one-parameter family $\gamma_{\epsilon}$ of periodic orbits, the so-called Lyapunov family \cite{BI}, and the orbit $\gamma_{\epsilon}$ collapses into the fixed point as $\epsilon\rightarrow0$. We show the Lyapunov family for the above fixed points (light rings) in Fig. \ref{zqt}. The two thick dots represent the two light rings, and the solid lines denote a family of periodic Lyapunov orbits arising from these two light rings. These periodic orbits can be parameterized by the impact parameter $\eta$ in the interval $[-9.83,9.83]$. All of these periodic Lyapunov orbits are nearly spherical orbits with radius $r=4.07$, which are responsible for determining the boundary of the shadow of Bonnor black dihole, as in Refs. \cite{fpos2,BI}. \begin{figure}[ht] \center{\includegraphics[width=8cm ]{sfig5.eps} \caption{Projection of the unstable invariant manifolds (green lines) associated with the periodic orbit for $\eta=-6$ (red line). The dark regions are the forbidden regions of photon motion and the black dot represents the position of the observer.} \label{sn}} \end{figure} The positive (negative) real eigenvalue $+\lambda$ ($-\lambda$) implies that there is an unstable (stable) invariant manifold, in which points exponentially approach the fixed point in backward (forward) time. For each Lyapunov orbit, the corresponding invariant manifolds are two-dimensional surfaces forming tubes in the three-dimensional reduced phase space $(r, \theta, p_{\theta})$. In Fig. \ref{sn}, we show a projection of the unstable invariant manifolds associated with the periodic orbits for $\eta=-6$ in the plane ($X,\theta$), where $X$ is a compactified radial coordinate defined as $X=\sqrt{r^{2}-r_{h}^{2}}/(1+\sqrt{r^{2}-r_{h}^{2}})$ \cite{BI}. The orbits inside the unstable invariant manifold tube can reach the horizon of Bonnor black dihole.
Moreover, we note that the periodic orbit touching the boundary of the black region approaches the boundary $V(r,\theta)=0$ perpendicularly, as in Ref.\cite{binary}. In order to probe the shape of the invariant manifolds, as in Ref.\cite{BI}, we present in Fig.\ref{pjl} the Poincar\'{e} section in the plane ($\theta, p_{\theta}$) for the unstable manifolds of Lyapunov orbits at the observer's radial position with $\eta=-6$ and $\eta=0$. \begin{figure} \includegraphics[width=13cm ]{sfig6.eps} \caption{The Poincar\'{e} section at $r=r_{obs}$ for the unstable manifolds (green) of Lyapunov orbits in the spacetime of Bonnor black dihole (\ref{xy}) with $b=1.4$. Figures (a)-(c) show the fractal-like structure for $\eta=-6$, and figure (d) corresponds to $\eta=0$. Here we set $m=1$. } \label{pjl} \end{figure} All photons starting within the green regions always move only in the unstable manifold tube. Moreover, we also note that there exist some white regions, which correspond to photons moving outside the unstable manifolds. In Fig.\ref{pjl}, the intersection of the dashed line $\theta=\frac{\pi}{2}$ with these manifolds denotes the trajectories which can be detected by the observer on the equatorial plane. This can be generalized to the cases with other values of $\theta$. \begin{figure} \center{\includegraphics[width=8cm ]{sfig7.eps}} \caption{Intersections of the unstable manifolds with the image plane for the lines with constant $\eta=-9.83$, $\eta=-6$, and $\eta=0$ in the spacetime of Bonnor black dihole (\ref{xy}) with $b=1.4$.} \label{jx} \end{figure} Actually, these intersection points also determine the positions of the photons with a certain angular momentum on the image plane. In Fig. \ref{jx}, we present the lensing image marking the intersection points for fixed $\eta=-9.83$, $\eta=-6$, and $\eta=0$. The boundary of the shadow of Bonnor black dihole is determined entirely by the intersection points derived from these fixed points.
The anchor-like bright zones in Fig. \ref{shb} (d) originate from the top, middle and bottom parts of the $S$-shaped white region in the Poincar\'{e} section (see Fig. \ref{pjl} (a)), and the fractal-like structure shown in Figs. \ref{pjl} (a)-(c) is responsible for the fractal shadow structure in Fig. 3. For the case $\eta=0$, there is no white region in the Poincar\'{e} section (see Fig. \ref{pjl} (d)), which explains why the two anchor-like bright zones are separated by the black shadow in the middle region in Fig. \ref{shb} (d). \begin{figure} \center{\includegraphics[width=9cm ]{sfig8a.eps} \includegraphics[width=7cm] {sfig8b.eps}} \caption{The Poincar\'{e} section (left) and the intersections of the unstable manifolds with the image plane (right) for $\eta=-6$ in the spacetime of Bonnor black dihole (\ref{xy}) with $b=0.4$. } \label{jx04} \end{figure} In order to make a comparison, in Fig. \ref{jx04} we also plot the Poincar\'{e} section and the intersections of the unstable manifolds with the image plane for $\eta=-6$ in the spacetime of Bonnor black dihole (\ref{xy}) with $b=0.4$. Obviously, there is no white region in the Poincar\'{e} section, which is consistent with the fact that the shadow of Bonnor black dihole is a black disk and no bright zones appear in the shadow in this case. Finally, we make a comparison between the shadows cast by the equal-mass and non-spinning Majumdar-Papapetrou binary black holes \cite{binary,sha18} and by Bonnor black dihole (\ref{xy}). In Fig. 9, we present the shadows for the Majumdar-Papapetrou binary black holes \cite{binary,sha18} with two equal-mass black holes separated by the parameter $a=0.5$, $a=1$ and $a=2$ (see figures (a)-(c)) and for the cases of Bonnor black dihole (\ref{xy}) separated by the parameter $b=0.5$, $b=1$ and $b=2$ (see figures (d)-(f)).
From Fig. 9, one can find that the shadows of Bonnor black dihole possess some properties closely resembling those of Majumdar-Papapetrou binary black holes, which is understandable since there exist similar black hole configurations in both cases. However, there are essential differences between the shadows for the chosen parameters in these two cases. From Fig. 9, we find that the two larger shadows and the smaller eyebrow-like shadows are joined together by the middle black zone for Bonnor black dihole, but they are disconnected in the case of the equal-mass and non-spinning Majumdar-Papapetrou binary black holes \cite{binary,sha18}. \begin{figure} \includegraphics[width=10cm ]{sfig99.eps} \caption{The comparison between the shadows of Majumdar-Papapetrou binary black holes and of Bonnor black dihole (\ref{xy}). Figures (a), (b) and (c) correspond to the Majumdar-Papapetrou binary case \cite{binary,sha18} with two equal-mass black holes separated by the parameter $a=0.5$, $a=1$ and $a=2$, respectively. Figures (d), (e) and (f) denote the shadow for the cases of Bonnor black dihole (\ref{xy}) separated by the parameter $b=0.5$, $b=1$ and $b=2$, respectively. Here we set the inclination angle of observer $\theta_{0}=90\degree$ and $m=1$. } \end{figure} Moreover, with the increase of the magnetic dipole parameter, we find that the middle black zone connecting the main shadows and the eyebrow-like shadows becomes narrower for Bonnor black dihole. From the previous discussion, we know that, due to the existence of the singularity on the symmetric axis, Bonnor black dihole is held apart by a cosmic string with tension $\mu=\frac{1}{4}[1-b^4/(m^2+b^2)^2]$ \cite{mmd101,mmd102}, which decreases with the parameter $b$. Therefore, we conclude that the width of the middle black zone increases with the tension of the cosmic string.
This behavior is consistent with the case of a Kerr black hole pierced by a cosmic string, in which the size of the black hole shadow increases with the string tension \cite{sha14a}. Therefore, the appearance of the middle black zone in the shadow of Bonnor black dihole can be attributed to the existence of the conical singularity on the symmetric axis in the background spacetime. In the case of Majumdar-Papapetrou binary black holes \cite{binary,sha18}, there is no such conical singularity since the configuration is supported by the balance between the gravitational force and the Coulomb force. Thus, the difference in shadow shape in these two spacetimes is caused by the existence of the singularity on the symmetric axis in Bonnor's spacetime. \section{Summary} In this paper we have studied the shadows of the black dihole described by Bonnor's exact solution of the Einstein-Maxwell equations. The presence of the magnetic dipole implies that the equation of photon motion is not variable-separable and the corresponding dynamical system is non-integrable. With the technique of backward ray-tracing, we present numerically the shadow of Bonnor black dihole. For smaller values of the magnetic dipole parameter $b$, the shadow is a black disk, as in the usual static compact object spacetimes with a horizon. The size of the shadow decreases with the parameter $b$. For larger values of the magnetic dipole parameter $b$, we find that there exist two anchor-like bright zones embedded symmetrically in the black disk shadow, so that the shadow looks like a concave disk with four large eyebrows. The anchor-like bright zones grow and the eyebrows become longer with the increase of $b$. Moreover, many other smaller eyebrow-like shadows can be detected in the two anchor-like bright zones, and the shadow possesses a self-similar fractal structure, which is caused by chaotic lensing.
This interesting property of shadows is qualitatively different from those in spacetimes in which the equations of motion are variable-separable and the corresponding dynamical system is integrable. Finally, we analyse the invariant manifolds of certain Lyapunov orbits near the fixed points and discuss further the formation of the shadow of Bonnor black dihole, which indicates that all of the structures in the shadow originate naturally from the dynamics near the fixed points. Our results show that the spacetime properties arising from the magnetic dipole yield some interesting patterns for the shadow cast by Bonnor black dihole. Compared with the case of Majumdar-Papapetrou binary black holes, we find that the two larger shadows and the smaller eyebrow-like shadows are joined together by the middle black zone for Bonnor black dihole, but they are disconnected in the Majumdar-Papapetrou case. The appearance of the middle black zone in the shadow of Bonnor black dihole can be attributed to the existence of the conical singularity on the symmetric axis in the background spacetime. It is of interest to study the effects of such a conical singularity on the Lyapunov orbits, the shadow edge, etc. Work in this direction will be reported in the future \cite{wchen}. \section{\bf Acknowledgments} We wish to thank the anonymous referees very much for their useful comments and suggestions, which improved our paper considerably. This work was partially supported by the Scientific Research Fund of Hunan Provincial Education Department Grant No. 17A124. J. Jing's work was partially supported by the National Natural Science Foundation of China under Grant No. 11475061.
\section{Introduction} \noindent For two decades, there has been increased interest in the statistical modelling of functional data. Among the methods for dealing with such data, functional linear regression, discussed in \cite{cardot99,cardot03}, constitutes an essential tool and has a large number of applications in several fields (see, e.g., \cite{ramsay97}). The corresponding model has seen some recent developments. Indeed, in order to enhance prediction power, \cite{mas_pumo09} added a component that includes a derivative. Furthermore, other works considered the so-called semi-functional partially linear regression model, which combines a functional linear model with a nonparametric regression model. Estimation based on this class of models has been investigated (see \cite{lingetal19}, \cite{zhou_chen12}, \cite{zhouetal16}, and \cite{zhuetal20}), but only for purely non-spatial data. Despite the interest of processing spatial data, there are few works dealing with this type of data compared to the non-spatial case. However, \cite{giraldoetal11} proposed a methodology to make spatial linear predictions at non-data locations when the data are functions, which was applied to predict a temperature curve; and \cite{bouka3} considered a spatial functional regression model with a derivative, which can be applied to predict ozone pollution at non-visited sites. On the other hand, fixed design nonparametric regression has also received special attention in the spatial literature (see \cite{bouka}, \cite{mach_stoica10}, \cite{wang_wang}). Some results from this class of models can be applied to image analysis (see for instance \cite{mach_stoica10}). However, to the best of our knowledge, little attention has been paid to estimation in the semi-functional linear regression model for spatially dependent observations (see for instance \cite{huetal20}, \cite{li_ying21} and \cite{benallouetal21}).
Consider $G:=L^2[0,1]$ and the Sobolev space $H=\{x\in G, x^\prime\in G \}$, where $x^\prime$ is the first derivative of $x$; these are Hilbert spaces with inner products $\left\langle .,.\right\rangle_{H}$ and $\left\langle .,.\right\rangle_{G}$ defined by: \begin{eqnarray*} \left\langle f, g\right\rangle_{G}=\int^{1}_{0}f(t)g(t)dt,\,\,\,\left\langle f, g\right\rangle_{H}=<f,g>_G+<f^\prime,g^\prime>_G. \end{eqnarray*} We are interested in the model: \begin{eqnarray}\label{1.2} Y_{\mathbf{i}}=\left\langle \phi, X_{\mathbf{i}}\right\rangle_{H}+\left\langle \gamma, X^{\prime}_{\mathbf{i}}\right\rangle_{G}+r\left(\frac{i_1}{n_1+1},\cdots,\frac{i_d}{n_d+1}\right)+\epsilon_{\mathbf{i}}\ , \end{eqnarray} where $\mathbf{i}=(i_1,\cdots,i_d)\in\mathcal{I}_{\mathbf{n}}:=\{\mathbf{i}=(i_1,\cdots,i_d)\in\mathbb{Z}^d: 1\leq i_k\leq n_k, k=1,\cdots,d\},\ d\ge 2$, $\phi$ and $\gamma$ are functions, $r(.)$ is a nonparametric spatial function defined on $[0,1]^d$, to be estimated by local linear smoothing, and the errors $\epsilon_{\mathbf{i}}$ are spatially correlated, with a covariance function as assumed in Assumption \ref{rega1}. The random functions $X_{\mathbf{i}}$ and $X^{\prime}_{\mathbf{i}}$ are assumed to be centered and independent of the noise $\epsilon_{\mathbf{i}}$, which is also centered. The triplet $(X_{\mathbf{i}},Y_{\mathbf{i}},X^{\prime}_{\mathbf{i}})$ has the same probability distribution as the random vector $(X,Y,X^{\prime})$. We are interested in prediction at a non-visited site, obtained from the estimation of the unknown parameter $(\phi, \gamma, r)$ defined in (\ref{1.2}). The special case with $\phi=0$ and $\gamma=0$ is considered in \cite{mach_stoica10},\cite{bouka}, whereas that with $r=0$ is studied in \cite{bouka3}. The article is organized as follows. Our estimation procedure is given in Section \ref{s2}. Section \ref{s3} is devoted to the assumptions and the main results.
In Section \ref{s4}, a simulation study is presented, whereas an application to ozone pollution forecasting at non-visited sites is given in Section \ref{s5}. A discussion of the results is made in Section \ref{Discussion}, while the proofs of the asymptotic results are postponed to Section \ref{s6}. \section{Estimation}\label{s2} \noindent Our estimation procedure combines the method of moments proposed in \cite{mas_pumo09} with the local linear approximation used in \cite{bouka}. For estimating the pair ($\phi$, $\gamma$), we adopt a method similar to that of \cite{mas_pumo09}. We first multiply both sides of (\ref{1.2}) by $\left\langle X_{\mathbf{i}}, . \right\rangle_{H}$ and take the expectation of the resulting term. Secondly, we repeat this procedure, multiplying by $\left\langle X^{\prime}_{\mathbf{i}}, . \right\rangle_{G}$. Since $X_{\mathbf{i}}$ and $X^{\prime}_{\mathbf{i}}$ are assumed to be centered, we finally obtain the following system: \begin{eqnarray}\label{1.3} \left\lbrace \begin{array}{l} \Delta=\Gamma \phi+\Gamma^{\prime}\gamma\\ \Delta^{\prime}=(\Gamma^{\prime } )^\ast\phi+\Gamma^{\prime\prime}\gamma \end{array}, \right. \end{eqnarray} where $\Gamma=\mathbb{E}(X\otimes_{H}X)$, $\Gamma^{\prime}=\mathbb{E}(X^{\prime}\otimes_{G}X)$, $\Gamma^{\prime\prime}=\mathbb{E}(X^{\prime}\otimes_{G}X^{\prime})$, $\Delta=\mathbb{E}(YX)$, $\Delta^{\prime}=\mathbb{E}(Y X^{\prime})$, $(\Gamma^{\prime } )^\ast$ is the adjoint of $\Gamma^{\prime }$, and where $\otimes_H$ (resp. $\otimes_G$) denotes the tensor product defined by $(u\otimes_H v)(h)=\left\langle u,h\right\rangle_H v$ (resp. $(u\otimes_G v)(h)=\left\langle u,h\right\rangle_G v$).
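For completeness, the system (\ref{1.3}) can be solved by a Schur-complement argument: expressing $\gamma$ from the second equation and substituting it into the first (assuming $\Gamma$ and $\Gamma^{\prime\prime}$ are invertible, consistently with Assumption \ref{as1}) gives

```latex
\gamma=\Gamma^{\prime\prime-1}\left(\Delta^{\prime}-\Gamma^{\prime *}\phi\right)
\quad\Longrightarrow\quad
\left(\Gamma-\Gamma^{\prime}\Gamma^{\prime\prime-1}\Gamma^{\prime *}\right)\phi
=\Delta-\Gamma^{\prime}\Gamma^{\prime\prime-1}\Delta^{\prime},
```

so that the Schur complement $S_{\phi}=\Gamma-\Gamma^{\prime}\Gamma^{\prime\prime-1}\Gamma^{\prime*}$ appears on the left-hand side; the equation for $\gamma$, with $S_{\gamma}=\Gamma^{\prime\prime}-\Gamma^{\prime*}\Gamma^{-1}\Gamma^{\prime}$, is obtained symmetrically.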
The solution of the system (\ref{1.3}) is given by $\phi=S_{\phi}^{-1}\left[\Delta-\Gamma^{\prime}\Gamma^{\prime\prime-1}\Delta^{\prime}\right]$ and $ \gamma=S_{\gamma}^{-1}\left[\Delta^{\prime}-\Gamma^{\prime *}\Gamma^{-1}\Delta\right]$, where $S_{\phi}=\Gamma-\Gamma^{\prime}\Gamma^{\prime\prime-1}\Gamma^{\prime*}$, $S_{\gamma}=\Gamma^{\prime\prime}-\Gamma^{\prime*}\Gamma^{-1}\Gamma^{\prime}$. \noindent We consider the empirical versions of these operators and functions, given by \begin{eqnarray*}\label{emp1} \Gamma_{\mathbf{n}}&=&\frac{1}{\widehat{\mathbf{n}}}\sum_{\mathbf{i}\in{\mathcal I}_{\mathbf{n}}}X_{\mathbf{i}}\otimes_{H}X_{\mathbf{i}},\ \Gamma^{\prime}_{\mathbf{n}}=\frac{1}{\widehat{\mathbf{n}}}\sum_{\mathbf{i}\in{\mathcal I}_{\mathbf{n}}}X^{\prime}_{\mathbf{i}}\otimes_{G}X_{\mathbf{i}},\ \Delta_{\mathbf{n}}=\frac{1}{\prod_{k=1}^d(n_k-1)}\sum_{\mathbf{i}\in{\mathcal I}^{1}_{\mathbf{n}}}Y_{\mathbf{i}}X_{\mathbf{i}},\\ \Gamma^{\prime *}_{\mathbf{n}}&=&\frac{1}{\widehat{\mathbf{n}}}\sum_{\mathbf{i}\in{\mathcal I}_{\mathbf{n}}}X_{\mathbf{i}}\otimes_{H}X^{\prime}_{\mathbf{i}},\ \Gamma^{\prime\prime}_{\mathbf{n}}=\frac{1}{\widehat{\mathbf{n}}}\sum_{\mathbf{i}\in{\mathcal I}_{\mathbf{n}}}X^{\prime}_{\mathbf{i}}\otimes_{G}X^{\prime}_{\mathbf{i}},\ \Delta^{\prime}_{\mathbf{n}}=\frac{1}{\prod_{k=1}^d(n_k-1)}\sum_{\mathbf{i}\in{\mathcal I}^{1}_{\mathbf{n}}}Y_{\mathbf{i}}X^{\prime}_{\mathbf{i}}, \end{eqnarray*} where ${\mathcal I}^{1}_{\mathbf{n}}=\prod_{k=1}^d\{2,\cdots,n_k\}$, $\mathbf{n}=(n_1,\cdots,n_d)$, $\widehat{\mathbf{n}}=\prod_{k=1}^dn_k$. We write $\mathbf{n}\rightarrow +\infty$ if $\min_{1\le k\le d}(n_k)\longrightarrow +\infty$ and we assume that $\dfrac{n_j}{n_k}\le C$ for $1\le j,\ k\le d$ and $0<C<+\infty$.
Invertible empirical operators are obtained by a regularization method: $$ \widetilde{\Gamma}^{-1}_{\mathbf{n}}=(\Gamma_{\mathbf{n}}+w_{\mathbf{n}}I)^{-1},\qquad \widetilde{\Gamma}^{\prime\prime -1}_{\mathbf{n}}=(\Gamma^{\prime\prime}_{\mathbf{n}}+w_{\mathbf{n}}I)^{-1}, $$ where $w_{\mathbf{n}}$ is a sequence, from $\mathbb{N}^d$ to $\mathbb{R}$, converging to $0$ as $\mathbf{n}\rightarrow +\infty$ and $I$ denotes the identity operator. We put $$S_{\mathbf{n},\phi}=\Gamma_{\mathbf{n}}-\Gamma^{\prime}_{\mathbf{n}}(\widetilde{\Gamma}^{\prime\prime -1}_{\mathbf{n}})\Gamma^{\prime*}_{\mathbf{n}},\ S_{\mathbf{n},\gamma}=\Gamma^{\prime\prime}_{\mathbf{n}}-\Gamma^{\prime*}_{\mathbf{n}}(\widetilde{\Gamma}^{-1}_{\mathbf{n}})\Gamma^{\prime}_{\mathbf{n}},$$ $$\ \ u_{\mathbf{n},\phi}=\Delta_{\mathbf{n}}-\Gamma^{\prime}_{\mathbf{n}}(\widetilde{\Gamma}^{\prime\prime -1}_{\mathbf{n}})\Delta^{\prime}_{\mathbf{n}},\ u_{\mathbf{n},\gamma}=\Delta^{\prime}_{\mathbf{n}}-\Gamma^{\prime*}_{\mathbf{n}}(\widetilde{\Gamma}^{-1}_{\mathbf{n}})\Delta_{\mathbf{n}},$$ and we estimate $(\phi,\gamma)$ by the pair $(\widehat{\phi}_{\mathbf{n}},\widehat{\gamma}_{\mathbf{n}})$ given, as in \cite{bouka3}, by: \begin{equation*}\label{2.2} \widehat{\phi}_{\mathbf{n}}=(S_{\mathbf{n},\phi}+\psi_{\mathbf{n}}I)^{-1}u_{\mathbf{n},\phi},\ \ \widehat{\gamma}_{\mathbf{n}}=(S_{\mathbf{n},\gamma}+\psi_{\mathbf{n}}I)^{-1}u_{\mathbf{n},\gamma}, \end{equation*} where $\psi_{\mathbf{n}}$ is a sequence, from $\mathbb{N}^d$ to $\mathbb{R}$, converging to $0$ as $\mathbf{n}\rightarrow +\infty$.
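In a finite-dimensional proxy, obtained by truncating $X_{\mathbf i}$ to its first $J$ basis coefficients, this ridge-type regularization simply adds $w_{\mathbf n}I$ to a (typically ill-conditioned) empirical covariance matrix before inversion. A minimal Python sketch, where the sample size, truncation level, eigenvalue decay and regularization value are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Empirical covariance of X after truncation to J basis coefficients:
# Gamma_n = (1/n) sum_i x_i x_i^T; eigenvalues decay fast, so it is ill-conditioned.
n, J = 50, 20
scores = rng.normal(size=(n, J)) * (0.5 ** np.arange(J))   # illustrative decay
Gamma_n = scores.T @ scores / n

w_n = 1e-3                                                 # regularization value (illustrative)
Gamma_reg_inv = np.linalg.inv(Gamma_n + w_n * np.eye(J))

# Adding w_n * I makes the inversion numerically stable
assert np.linalg.cond(Gamma_n + w_n * np.eye(J)) < np.linalg.cond(Gamma_n)
```

The same device is applied to $\Gamma^{\prime\prime}_{\mathbf n}$ and, with $\psi_{\mathbf n}$, to $S_{\mathbf n,\phi}$ and $S_{\mathbf n,\gamma}$.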
Secondly, for estimating the nonparametric regression function, we rewrite (\ref{1.2}) as \begin{eqnarray}\label{1.4} T_{\mathbf{i}}=r\left(\frac{\mathbf{i}}{\mathbf{n+1}}\right)+\xi_{\mathbf{i}}, \end{eqnarray} where $\dfrac{\mathbf{i}}{\mathbf{n+1}}=\left(\dfrac{i_1}{n_1+1},\cdots,\dfrac{i_d}{n_d+1}\right)$, $T_{\mathbf{i}}=Y_{\mathbf{i}}-\left\langle \widehat{\phi}_{\mathbf{n}}, X_{\mathbf{i}}\right\rangle_{H}-\left\langle \widehat{\gamma}_{\mathbf{n}}, X^{\prime}_{\mathbf{i}}\right\rangle_{G}$ and $\xi_{\mathbf{i}}=\epsilon_{\mathbf{i}}+\left\langle \phi-\widehat{\phi}_{\mathbf{n}}, X_{\mathbf{i}}\right\rangle_{H}+\left\langle \gamma-\widehat{\gamma}_{\mathbf{n}}, X^{\prime}_{\mathbf{i}}\right\rangle_{G}$, and we locally approximate $r$ by a linear regression function using a Taylor expansion in the neighbourhood of $\mathbf{s}_{0}\in[0,1]^{d}$, i.e. $r(\frac{\mathbf{i}}{\mathbf{n+1}})\approx\beta_{0}+\beta^{T}_{1}(\frac{\mathbf{i}}{\mathbf{n+1}}-\mathbf{s}_{0})$, where $\beta_{0}=r(\mathbf{s}_{0})$ and $\beta_{1}=\nabla r(\mathbf{s}_{0})$ is the gradient vector of $r$ at $\mathbf{s}_{0}$. From the model defined in (\ref{1.4}), an estimator of $\beta_{0}$ is given by the solution of the following least squares minimization problem: \begin{eqnarray*} \min_{\beta_{0},\beta_{1}}\sum_{\mathbf{i}\in{\mathcal I}_{\mathbf{n}}}\left\{T_{\mathbf{i}}-\beta_{0}-\beta^{T}_{1}\left(\frac{\mathbf{i}}{\mathbf{n+1}}-\mathbf{s}_{0}\right)\right\}^2\frac{1}{h^{d}}K\left(\frac{\frac{\mathbf{i}}{\mathbf{n+1}}-\mathbf{s}_{0}}{h}\right), \end{eqnarray*} where $K:\mathbb{R}^{d}\longrightarrow\mathbb{R}_{+}$ is a kernel function, $h$ is a bandwidth and $u^{T}$ denotes the transpose of $u$.
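This is a standard weighted least squares problem in $(\beta_0,\beta_1)$. As a sanity check, a minimal Python sketch of the resulting local linear fit on a regular $d=2$ grid with noise-free responses; the grid size, bandwidth, Epanechnikov-type product kernel and target function are illustrative, not those of the paper:

```python
import numpy as np

def local_linear(s0, grid, T, h):
    """Local linear estimate of r at s0 from responses T on a regular grid (d = 2).

    Solves min over (b0, b1) of sum_i [T_i - b0 - b1^T (s_i - s0)]^2 K((s_i - s0)/h)
    and returns b0, the estimate of r(s0)."""
    U = (grid - s0) / h
    w = np.prod(np.maximum(1.0 - U**2, 0.0), axis=1)   # product kernel, support [-1, 1]^2
    Xd = np.hstack([np.ones((len(grid), 1)), grid - s0])
    beta = np.linalg.solve(Xd.T @ (w[:, None] * Xd), Xd.T @ (w * T))
    return beta[0]

n = 30
axis = np.arange(1, n + 1) / (n + 1)
g1, g2 = np.meshgrid(axis, axis, indexing="ij")
grid = np.column_stack([g1.ravel(), g2.ravel()])
r = lambda s: np.exp(-np.max(np.abs(s), axis=-1))      # illustrative regression function
T = r(grid)                                            # noise-free responses

s0 = np.array([0.3, 0.7])
assert abs(local_linear(s0, grid, T, h=0.2) - r(s0[None, :])[0]) < 0.01
```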
Then, the local linear estimator $\widehat{r}$ of $r$ at $\mathbf{s}_0\in[0,1]^d$ is given, as in \cite{bouka}, by: \begin{eqnarray*} \widehat{r}(\mathbf{s}_{0})=(1, \mathbf{0}^{T})\left(\frac{1}{\widehat{\mathbf{n}}}{\mathcal X}^{T}W_{0}{\mathcal X}\right)^{-1}\left(\frac{1}{\widehat{\mathbf{n}}}{\mathcal X}^{T}W_{0}{\mathcal Y}\right)=\mathcal{S}^T_{\mathbf{s}_{0}}{\mathcal Y}, \end{eqnarray*} where $(1,\mathbf{0}^T)\in\mathbb{R}^{d+1}$, $\mathcal{X}$, $\mathcal{Y}$ and $W_0$ are, respectively, the $\widehat{\mathbf{n}}\times (d+1)$, $\widehat{\mathbf{n}}\times 1$ and $\widehat{\mathbf{n}}\times \widehat{\mathbf{n}}$ matrices given by: \[ \mathcal{X}= \begin{pmatrix} 1&\left(\dfrac{\dfrac{\mathbf{1}}{\mathbf{n+1}}-\mathbf{s}_{0}}{h}\right)^{T}\\ \vdots&\vdots\\ 1&\left(\dfrac{\dfrac{\mathbf{n}}{\mathbf{n+1}}-\mathbf{s}_{0}}{h}\right)^{T}\\ \end{pmatrix} ,\,\,\,\,\,\, \mathcal{Y}=\left(T_{\mathbf{1}},\cdots,T_{\mathbf{n}}\right)^{T} , \] where $\mathbf{1}=(1,\cdots,1)\in\mathbb{R}^d$, $\mathbf{n}=(n_1,\cdots,n_d)$ and $$ W_{0}=\textrm{diag}\left\{\frac{1}{h^{d}}K\left(\dfrac{\dfrac{\mathbf{1}}{\mathbf{n+1}}-\mathbf{s}_{0}}{h}\right),\cdots,\frac{1}{h^{d}}K\left(\dfrac{\dfrac{\mathbf{n}}{\mathbf{n+1}}-\mathbf{s}_{0}}{h}\right)\right\}.$$ \section{Assumptions and results}\label{s3} \noindent We first state the assumptions needed to establish our results. For $\beta>0$, $L>0$ and $M>0$ we denote by $\mathcal{H}(\beta, L)$ (resp. $\mathcal{G}(L)$) the H\"older class (resp. the class) of functions $f$ satisfying $|f(x)-f(y)|\le L\left\|x-y\right\|^\beta_{\infty}$ (resp. $|f(x)-f(y)|\le L\left\|x-y\right\|_{\infty}$), and we consider the class: \begin{eqnarray*}\label{def} \Sigma(\beta, L, M)=\left\lbrace \begin{array}{l} \left\{f\in \mathcal{H}(\beta, L):\left\|f\right\|_{\infty}\leq M\right\}\ \text{if}\ \frac{d}{4}<\beta\leq 1\\ \\ \left\{f\in\mathcal{G}(L):\left\|f\right\|_{\infty}\leq M, \left\|\nabla f\right\|_{\infty}\leq M\right\}\ \text{if}\ \beta>1 \end{array}.
\right. \end{eqnarray*} \begin{Assumption}\label{as0} The function $r$ belongs to $\Sigma(\beta, L, M)$ with $\beta>d/4$, $L>0$ and $M>0$. \end{Assumption} \begin{Assumption}\label{as1} $Ker(\Gamma)=Ker(\Gamma^{\prime\prime})=\{0\}$, where $Ker (A)=\{x \,: A x=0\}$. \end{Assumption} \begin{Assumption}\label{as2} $(\phi,\gamma)\notin\{(f,g)\in H\times G: f+D^{*}g=0\}$ where $D^{*}$ is the adjoint of the ordinary differential operator $D$. \end{Assumption} \begin{Assumption}\label{as3} $\left\|\Gamma^{-1/2}\phi\right\|_{H}<+\infty$,\ \ $\left\|(\Gamma^{\prime\prime})^{-1/2}\gamma\right\|_{G}<+\infty$, where $\Vert\cdot\Vert_H$ and $\Vert\cdot\Vert_G$ are the norms induced by $<\cdot ,\cdot>_H$ and $<\cdot ,\cdot>_G$ respectively. \end{Assumption} \begin{Assumption}\label{as3.1} The process $\{Z_{\mathbf{i}}=(X_{\mathbf{i}}, Y_{\mathbf{i}}, X^{\prime}_{\mathbf{i}}),\; \mathbf{i}\in\mathbb{Z}^d\}$ is $\alpha$-mixing dependent. That is\\ $\lim_{m\rightarrow +\infty}\alpha_{1,\infty}(m)=0$, where \begin{eqnarray}\label{ar2} \alpha_{1,\infty}(m)=\sup_{\{\mathbf{i}\}, E\subset\mathbb{Z}^{d}, \rho(E,\{\mathbf{i}\})\geq m}\{\sup_{A\in\sigma(Z_{\mathbf{i}}),B\in\sigma(Z_{\mathbf{j}};\mathbf{j}\in{E})}\{|\mathbb{P}(A\cap B)-\mathbb{P}(A)\mathbb{P}(B)|\}\}, \end{eqnarray} $\rho$ is the distance defined for any subsets $E_{1}$ and $E_{2}$ of $\mathbb{Z}^{d}$, by $\rho(E_{1},E_{2})=\min\{\|\mathbf{i}-\mathbf{j}\|, \mathbf{i}\in E_{1}, \mathbf{j}\in E_{2}\}$ with $\|\mathbf{i}-\mathbf{j}\|=\max_{1\leq s\leq d}|i_{s}-j_{s}|$. \end{Assumption} \begin{Assumption}\label{as5} $\left\|X_{\mathbf{i}} \right\|_{H}\leq C$ a.s. where $C$ is some positive constant. \end{Assumption} \begin{Assumption}\label{rega1} $Cov(\epsilon_\mathbf{i},\epsilon_\mathbf{j})=\sigma^2\exp(-a\|\mathbf{i}-\mathbf{j}\|)$ for all $\mathbf{i}, \mathbf{j}\in{\mathcal I}_{\mathbf{n}}$, where $a$ and $\sigma^2$ are known positive constants. 
\end{Assumption} \begin{Assumption}\label{rega2} The kernel function $K(.)$ is symmetric, Lipschitz continuous and bounded. The support of $K(.)$ is $[-1,1]^{d}$, $\int K(\mathbf{u})d{\mathbf{u}}=1$, $\int {\mathbf{u}}K(\mathbf{u})d{\mathbf{u}}=\mathbf{0}$, $\int {\mathbf{u}}{\mathbf{u}}^{\tau}K(\mathbf{u})d{\mathbf{u}}=\nu_{2}I_d$, where $I_d$ is the $d\times d$ identity matrix and $\nu_{2}\ne0$. \end{Assumption} \begin{remark} \label{cor1} \noindent Assumptions \ref{as1}--\ref{as5} are technical conditions ensuring the consistency of $\widehat{\phi}_{\mathbf{n}}$ and $\widehat{\gamma}_{\mathbf{n}}$ (see \cite{bouka3}). However, Assumption \ref{as5} can be replaced by $\mathbb{E}\left(\left\|X\right\|^8_H\right)<C$ (see \cite{mas_pumo09}), but this finite-moment assumption would lead to longer and more intricate proofs. Assumptions \ref{rega1} and \ref{rega2} are conditions needed to establish the consistency of the estimator $\widehat{r}$ of $r$; they have also been used in \cite{fransisco} and in \cite{bouka}. However, other spatial covariance models, such as the Mat\'{e}rn or the Gaussian covariance model, can also be used. \end{remark} \bigskip \noindent Let $\Lambda_{\mathbf{n}}$ (resp. $\Lambda$) be one of the following: $\Gamma_{\mathbf{n}}$, $\Gamma^{\prime}_{\mathbf{n}}$, $\Gamma^{\prime*}_{\mathbf{n}}$, $\Gamma^{\prime\prime}_{\mathbf{n}}$, $\Delta_{\mathbf{n}}$ and $\Delta^{\prime}_{\mathbf{n}}$ (resp. $\Gamma,\Gamma^{\prime},\Gamma^{\prime*},\Gamma^{\prime\prime}$, $\Delta$ and $\Delta^{\prime}$). Rates of convergence of $\Lambda_{\mathbf{n}}$ with respect to the norms $\|.\|_{\infty}$ and $\|.\|_{L^{2}({\mathcal HS})}$ are given in Theorem \ref{th1}, whereas those of $\widehat{\phi}_{\mathbf{n}}$ and $\widehat{\gamma}_{\mathbf{n}}$ are given in Corollary \ref{coro1}. \bigskip \begin{thm}\label{th1} Let $(v_{j})_{j\geq1}$ (resp. $(v_{j}^{\star})_{j\geq1}$) be a complete orthonormal system in $H$ (resp.
$G$) and $(\lambda_{j})_{j\geq1}$ be the sequence of characteristic numbers of $\Lambda$ (i.e. the square roots of the eigenvalues of $\Lambda^{*}\Lambda$) with $\lambda_{j}=O(u^{j})$, $0<u<1$, $j\geq1$. Under Assumptions \ref{as1}-\ref{as5} with $\alpha_{1,\infty}(t)=O(t^{-\theta})$, $\theta>2d$, we have, for all $\tau>0$: \begin{eqnarray} &&\mathbb{P}\left(\left\|\Lambda_{\mathbf{n}}-\Lambda\right\|_{\infty}>\tau\right)=O\left(\dfrac{\log \widehat{\mathbf{n}}}{\widehat{\mathbf{n}}}\right);\label{3.2}\\ &&\left\|\Lambda_{\mathbf{n}}-\Lambda\right\|_{L^{2}({\mathcal HS})}=O\left(\dfrac{\log \widehat{\mathbf{n}}}{\sqrt{\widehat{\mathbf{n}}}}\right) \label{3.3}. \end{eqnarray} \end{thm} \bigskip \noindent From Theorem \ref{th1} and arguing as in \cite{mas_pumo09}, we derive the following corollary. \begin{corollary}\label{coro1} Under the assumptions of Theorem \ref{th1}, we have: $$\mathbb{E}\left(\left\|\phi-\widehat{\phi}_{\mathbf{n}}\right\|^2_{\Gamma}\right)=O\left(\frac{\psi^{2}_{\mathbf{n}}}{w^{2}_{\mathbf{n}}}\right)+O\left(\frac{(\log \widehat{\mathbf{n}})^{2}}{w^{2}_{\mathbf{n}}\psi^{2}_{\mathbf{n}}\widehat{\mathbf{n}}}\right);$$ $$ \mathbb{E}\left(\left\|\gamma-\widehat{\gamma}_{\mathbf{n}}\right\|^2_{\Gamma^{\prime\prime}}\right)=O\left(\frac{\psi^{2}_{\mathbf{n}}}{w^{2}_{\mathbf{n}}}\right)+O\left(\frac{(\log \widehat{\mathbf{n}})^{2}}{w^{2}_{\mathbf{n}}\psi^{2}_{\mathbf{n}}\widehat{\mathbf{n}}}\right),$$ where $\|.\|_{\Gamma}:=\|\Gamma^{1/2}(.)\|_{H}$ and $\|.\|_{\Gamma^{\prime\prime}}:=\|\Gamma^{\prime\prime1/2}(.)\|_{G}$ are two semi-norms. \end{corollary} \bigskip \noindent It remains to obtain bounds for the estimator $\widehat{r}$ of $r$. The following theorem gives bounds on the bias and the variance. \bigskip \begin{theorem}\label{regl2} Assume that Assumptions \ref{as0}--\ref{rega2} are satisfied with $\alpha_{1,\infty}(t)=O(t^{-\theta})$, $\theta> 2(d+1)$.
Moreover, assume that the eigenvalues $\lambda_j$ of the operator $\Gamma$ are such that $\lambda_j=O(u^j)$ with $0<u<1$, $j\ge1$, and that $h\to 0$ and $\min_{k=1,\cdots,d}\{n_k\}h\to +\infty$ as $\mathbf{n}\to+\infty$. Then: \item[(i)]$$ \sup_{\mathbf{s}_{0}\in[0,1]^d}\left[\mathbb{E}(\widehat{r}(\mathbf{s}_{0}))-r(\mathbf{s}_{0})\right]^2=O\left(h^4\right)+O\left(\frac{(\log \widehat{\mathbf{n}})^2}{w_{\mathbf{n}}^2\psi_{\mathbf{n}}^2\widehat{\mathbf{n}}}\right)+O\left(\psi_{\mathbf{n}}^2\right);$$ \item[(ii)]$$ \sup_{\mathbf{s}_{0}\in[0,1]^d}Var(\widehat{r}(\mathbf{s}_{0}))=O\left(\frac{\log \widehat{\mathbf{n}}}{\widehat{\mathbf{n}}h^{d}}\right).$$ \end{theorem} \bigskip \begin{remark}\label{cor2} An immediate consequence of Theorem \ref{regl2} is that, for all $\mathbf{s}_{0}\in[0,1]^d$, $$\left|\widehat{r}(\mathbf{s}_{0})-r(\mathbf{s}_{0})\right|=O_p\left(\dfrac{(\log \widehat{\mathbf{n}})^{2}}{w^{2}_{\mathbf{n}}\psi^{2}_{\mathbf{n}}\widehat{\mathbf{n}}}+h^4+\psi_{\mathbf{n}}^2+\dfrac{\log \widehat{\mathbf{n}}}{\widehat{\mathbf{n}}h^d}\right)$$ and so the optimal bandwidth $h$ is controlled by the trade-off between the variance and the square of the bias.
If $\psi_{\mathbf{n}}\propto \dfrac{(\log \widehat{\mathbf{n}})^{1/2}}{\widehat{\mathbf{n}}^{2/(4+d)}}$, $h=\dfrac{1}{\widehat{\mathbf{n}}^{1/(4+d)}}$ with $d\ge 2$, and $\dfrac{1}{w_{\mathbf{n}}^2}\propto\log\widehat{\mathbf{n}}$, then for all $\mathbf{s}_{0}\in[0,1]^d$ we have \[\left|\widehat{r}(\mathbf{s}_{0})-r(\mathbf{s}_{0})\right|=O_p\left(\max\left(\frac{\left(\log \widehat{\mathbf{n}}\right)^2}{\widehat{\mathbf{n}}^{d/(4+d)}},\frac{\log \widehat{\mathbf{n}}}{\widehat{\mathbf{n}}^{4/(4+d)}}\right)\right)\] which is, for $d\ge4$, the optimal convergence rate in the spatial nonparametric regression setting when the data are $\alpha$-mixing dependent and the correlation of the errors is long-range; for $d=3$, it is better than the rate $O_p\left(\left(\frac{\log \widehat{\mathbf{n}}}{\widehat{\mathbf{n}}^{4/(4+d)}}\right)^{1/2}\right)$ given in \cite{carbonetal07}, and it is quite close to that of \cite{carbonetal07} for $d=2$. Besides, when the correlation of the errors is short-range, the optimal convergence rate of $\widehat{r}$ to $r$ is $O_p\left(\dfrac{1}{\widehat{\mathbf{n}}^{4/(4+d)}}\right)$ \cite[p. 36]{liu01}. \end{remark} \bigskip \noindent In addition, from Remark \ref{cor1}, Theorem \ref{regl2}, Lemma \ref{l1} (see Section \ref{s5}) and Theorem \ref{th1}, together with the same arguments as in \cite{mas_pumo09}, we deduce the following corollary.
\bigskip \begin{corollary} Under the assumptions of Theorem \ref{regl2}, we have, for each $\mathbf{i}_0\in{\mathcal I}_{\mathbf{n+1}}\setminus{\mathcal I}_{\mathbf{n}}$:$$\mathbb{E}\left[\left(\widehat{Y}_{\mathbf{i}_0}-Y_{\mathbf{i}_0}^*\right)^2\right]=O\left(\frac{\psi^{2}_{\mathbf{n}}}{w^{2}_{\mathbf{n}}}+\frac{(\log \widehat{\mathbf{n}})^{2}}{w^{2}_{\mathbf{n}}\psi^{2}_{\mathbf{n}}\widehat{\mathbf{n}}}+h^4+\frac{\log \widehat{\mathbf{n}}}{\widehat{\mathbf{n}}h^d}\right),$$ where $\widehat{Y}_{\mathbf{i}_0}=\left\langle \widehat{\phi}_{\mathbf{n}}, X_{\mathbf{i}_0}\right\rangle_H+\left\langle \widehat{\gamma}_{\mathbf{n}}, X^{\prime}_{\mathbf{i}_0}\right\rangle_G+\widehat{r}\left(\dfrac{\mathbf{i}_0}{\mathbf{n+1}}\right)$ and $Y_{\mathbf{i}_0}^*=\left\langle \phi, X_{\mathbf{i}_0}\right\rangle_H+\left\langle \gamma, X^{\prime}_{\mathbf{i}_0}\right\rangle_G+r\left(\dfrac{\mathbf{i}_0}{\mathbf{n+1}}\right)$. \end{corollary} \bigskip \begin{remark}\label{rem3} The methodology proposed in this paper can also be applied when the design of the nonparametric regression function is random, that is, $$Y_{\mathbf{i}}=\int_{0}^{1}\phi(t)X_{\mathbf{i}}(t)dt+\int_{0}^{1}\gamma(t)X_{\mathbf{i}}^{\prime}(t)dt+r(Z_{\mathbf{i}})+\epsilon_{\mathbf{i}}\ ,\, \mathbf{i}\in{\mathcal I}_{\mathbf{n}},$$ where $Z_{\mathbf{i}}$ is a random vector independent of $X_{\mathbf{i}}$ and $X_{\mathbf{i}}^{\prime}$. An estimator $\widehat{r}$ of $r$ is then obtained by using, in our estimation procedure (see Section \ref{s2}), the product kernels (see \cite[chapter 3]{camille}) $K_1\left(\dfrac{x_{\mathbf{s}_0}-Z_{\mathbf{i}}}{h_1}\right)K_2\left(\dfrac{\mathbf{s}_0-\mathbf{i}}{(\mathbf{n+1})h_2}\right)$ instead of $K\left(\dfrac{\dfrac{\mathbf{i}}{\mathbf{n+1}}-\mathbf{s}_0}{h}\right)$.
Besides, if the correlation of the errors is short-range, with $Cov(\epsilon_{\mathbf{i}},\epsilon_{\mathbf{j}})=\sigma^2\exp(-a\widehat{\mathbf{n}}\|\mathbf{i}-\mathbf{j}\|)$, then we refer to \cite{fransisco} or \cite{liu01} for the convergence of $\widehat{r}$ to $r$. Moreover, if $\gamma=0$, we obtain a model which extends to the spatial case the one considered in \cite{zhou_chen12} for independent data. \end{remark} \section{A simulation study}\label{s4} \noindent In this section, we present a simulation study conducted in order to assess the finite sample performance of our proposal. We consider an equivalent form of the model (\ref{1.2}) given by \begin{eqnarray}\label{simu1} Y_{\mathbf{i}}=\int_{0}^{1}\phi(t)X_{\mathbf{i}}(t)dt+\int_{0}^{1}\gamma(t)X_{\mathbf{i}}^{\prime}(t)dt+r\left(\frac{\mathbf{i}}{n+1}\right)+\epsilon_{\mathbf{i}}, \end{eqnarray} where $d=2$, $n_1=n_2=n$ and $\mathbf{i}\in\{1,\cdots,n\}^2$. Using the lexicographic order in $\mathbb{Z}^2$, we simulated a sample $\{(X_{\mathbf{i}_\ell},Y_{\mathbf{i}_\ell})\}_{1\leq \ell\leq n^2}$ such that: \[ X_{\mathbf{i}_\ell}(t)=\sum_{k=1}^{15}\Lambda_{\mathbf{i}_\ell,k}\,F_k(t) \] where $F_1,\cdots,F_{15}$ are the first $15$ elements of the Fourier basis, and the random vector $(\Lambda_{\mathbf{i}_1,k},\cdots,\Lambda_{\mathbf{i}_{n^2},k})^T$ is drawn from a multivariate truncated normal distribution with zero mean, $n^2\times n^2$ covariance matrix $\Sigma^1$ with general term $\Sigma^1_{ij}=\exp(-a\|\mathbf{i}_i-\mathbf{i}_j\|_2)$, where $a= 0.1,\ 1,\ 3,\ 200$, and with lower truncation limit $(0,\cdots,0)\in\mathbb{R}^{n^2}$ and upper truncation limit $(1,\cdots,1)\in\mathbb{R}^{n^2}$. When $a=200$, there is approximately no spatial autocorrelation in the process. The process is said to be strongly correlated when $a=0.1,\ 1$ and weakly correlated when $a=3$.
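The covariance matrix $\Sigma^1$ above can be sketched directly; a minimal Python construction of the exponential spatial covariance $\Sigma^1_{ij}=\exp(-a\|\mathbf{i}_i-\mathbf{i}_j\|_2)$ over the lexicographically ordered $n\times n$ grid, together with correlated Gaussian draws via a Cholesky factor (the values of $n$ and $a$ are illustrative, and the truncation step used in the paper is omitted):

```python
import numpy as np

def exp_cov_matrix(n, a, sigma2=1.0):
    """Sigma[i, j] = sigma2 * exp(-a * ||s_i - s_j||_2) over the n x n grid,
    with sites listed in lexicographic order."""
    ii, jj = np.meshgrid(np.arange(1, n + 1), np.arange(1, n + 1), indexing="ij")
    sites = np.column_stack([ii.ravel(), jj.ravel()])      # lexicographic order
    diff = sites[:, None, :] - sites[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    return sigma2 * np.exp(-a * dist)

Sigma = exp_cov_matrix(n=5, a=1.0)
assert Sigma.shape == (25, 25)
assert np.all(np.linalg.eigvalsh(Sigma) > 0)   # valid (positive definite) covariance

# Gaussian errors with this spatial covariance, via the Cholesky factor
rng = np.random.default_rng(0)
eps = np.linalg.cholesky(Sigma) @ rng.normal(size=25)
```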
The process $Y_{\mathbf{i}_\ell}$ is obtained from the model (\ref{simu1}), in which $X_{\mathbf{i}_\ell}^{\prime}$ is computed by the function "fdata.deriv" of the R fda.usc package, $\phi(t)=[\sin(2\pi t^3)]^3$, $\gamma(t)=(0.6-t)^2$, $t\in[0,1]$, $r(\mathbf{x})=\exp(-\|\mathbf{x}\|_{\infty})$, $\mathbf{x}\in[0,1]^2$, integrals are approximated by the rectangular method applied at $366$ equispaced points of the interval $[0,1]$, and $(\epsilon_{\mathbf{i}_1},\cdots,\epsilon_{\mathbf{i}_{n^2}})^T$ is a random vector having a normal distribution ${\mathcal N}(0,\Sigma^2)$, where $\Sigma^2$ is a $n^2\times n^2$ covariance matrix with general term $\Sigma^2_{ij}=0.01\Sigma^1_{ij}$ for $i\ne j$ and $\Sigma^2_{ii}=\Sigma^1_{ii}$. The estimate ($\widehat{\phi}_{\mathbf{n}}$, $\widehat{\gamma}_{\mathbf{n}}$) of the pair ($\phi$,$\gamma$) depends on the regularization sequences $\psi$ and $w$. These sequences are obtained by cross validation based on the mean squared error of prediction: $$CVMSEP(\psi,w)=\dfrac{1}{n^2}\sum^{n^2}_{\ell=1}\left(Y_{\mathbf{i}_{\ell}}-\widetilde{Y}^{(-\ell)}_{\mathbf{i}_{\ell}}(\psi,w)\right)^2,$$ where $\widetilde{Y}^{(-\ell)}_{\mathbf{i}_{\ell}}(\psi,w)=\left\langle \widehat{\phi}_{\mathbf{n}}, X_{\mathbf{i}_{\ell}}\right\rangle_G+\left\langle \widehat{\gamma}_{\mathbf{n}}, X^{\prime}_{\mathbf{i}_{\ell}}\right\rangle_G$ with $\widehat{\phi}_{\mathbf{n}}$ and $\widehat{\gamma}_{\mathbf{n}}$ computed with the $\ell$-th observation removed.
The estimate $\widehat{r}$ of $r$ depends on the bandwidth $h$, which is selected by minimizing the following generalized cross-validation (GCV) function \cite{fransisco}: $$GCV (h)=\dfrac{1}{n^2}\sum_{\ell=1}^{n^2}\left(\dfrac{T_{\mathbf{i}_{\ell}}-\widehat{r}\left(\frac{\mathbf{i}_{\ell}}{n};h\right)}{1-\frac{1}{n^2}tr\left(\mathbf{SC}\right)}\right)^2,$$ where $T_{\mathbf{i}_{\ell}}=Y_{\mathbf{i}_{\ell}}-\left\langle \widehat{\phi}_{\mathbf{n}}, X_{\mathbf{i}_{\ell}}\right\rangle_H-\left\langle \widehat{\gamma}_{\mathbf{n}}, X^{\prime}_{\mathbf{i}_{\ell}}\right\rangle_G$, with $\widehat{\phi}_{\mathbf{n}}$ and $\widehat{\gamma}_{\mathbf{n}}$ computed from the optimal regularization parameters $\psi_{opt}$ and $w_{opt}$, $\widehat{r}\left(\frac{\mathbf{i}_{\ell}}{n};h\right)$ is computed with the Epanechnikov kernel defined by $K(x)=\frac{2}{\pi}\max\left\{(1-\|x\|_2^2),0\right\}$, $\mathbf{S}$ is the $n^2\times n^2$ matrix whose $\ell$-th row is equal to $\mathcal{S}^T_{\mathbf{i}_{\ell}/n}$, and $\mathbf{C}$ is the correlation matrix of the observations. We assess the performance of our method through the calculation of the Mean Squared Errors ($MSE_1$ and $MSE_2$), based on $100$ replications with $n=5, 10$ and $a=0.1,\ 1,\ 3,\ 200$, and defined by: $$MSE_1=\frac{1}{n^2}\sum_{j=1}^{n}\sum_{i=1}^{n}\left[r\left(\frac{i}{n+1},\frac{j}{n+1}\right)-\widehat{r}\left(\frac{i}{n+1},\frac{j}{n+1}\right)\right]^2,$$ \begin{eqnarray*} MSE_2&=&\frac{1}{n^2}\sum_{j=1}^{n}\sum_{i=1}^{n}\left[\left\langle \phi-\widehat{\phi}_{\mathbf{n}}, X_{(i,j)}\right\rangle_H+ \left\langle \gamma-\widehat{\gamma}_{\mathbf{n}}, X^{\prime}_{(i,j)}\right\rangle_G\right.\\ &&\left.+ r\left(\frac{i}{n+1},\frac{j}{n+1}\right)-\widehat{r}\left(\frac{i}{n+1},\frac{j}{n+1}\right)\right]^2. \end{eqnarray*} We denote by $m$ the mean and by $sd$ the standard deviation. The results are reported in Table \ref{tab:1}. \begin{table}[ht!]
\begin{center} \begin{tabular}{cccccccccc} \hline \multicolumn{1}{c}{}&\multicolumn{1}{c}{}&\multicolumn{2}{c}{$a=0.1$}&\multicolumn{2}{c}{$a=1$}&\multicolumn{2}{c}{$a=3$}&\multicolumn{2}{c}{$a=200$}\\ \cline{3-10} $n^2$& error criterion & $m$& $sd$& $m$& $sd$& $m$& $sd$& $m$& $sd$ \\\hline 25&$MSE_1$ & 0.51& 0.58& 0.29& 0.17& 0.16& 0.10& 0.20& 0.09\\ &$MSE_2$ & 0.98& 1.24& 0.31& 0.18& 0.18& 0.11& 0.21& 0.11\\ & & & & & & & & &\\ 100&$MSE_1$ & 0.38& 0.40& 0.12& 0.08& 0.07& 0.04& 0.05& 0.03\\ &$MSE_2$ & 0.62& 0.73& 0.15& 0.10& 0.06& 0.03& 0.06& 0.03\\\hline \end{tabular} \caption{Mean ($m$) and standard deviation ($sd$) of $MSE_1$ and $MSE_2$, based on 100 replications; $n^2=25,\ 100$ and $a =0.1,\ 1,\ 3,\ 200$.} \label{tab:1} \end{center} \end{table} \noindent In Table \ref{tab:1}, both error criteria ($MSE_1$ and $MSE_2$) show a general decreasing tendency as the sample size increases. Thus our estimation procedure fits the spatial semi-functional linear regression model with derivatives well. Also, for each fixed $n$, the values of each error criterion for weakly correlated processes ($a=3$) are similar to those of approximately non-correlated processes ($a=200$), whereas those of strongly correlated processes ($a=0.1,\ 1$) decrease as $a$ increases, showing the interest of considering spatially dependent observations in this study. Besides, for each fixed $n$, $MSE_1$ and $MSE_2$ are similar for values of $a\geq 1$. This means that the presence of functional data in the model (\ref{1.2}) does not modify the convergence rate of the estimated nonparametric regression function, as stated in \cite[Remark 2]{zhou_chen12}. \section{Application to ozone pollution forecasting at non-visited sites} \label{s5} \noindent In this section, our methodology is applied to predict the level of ozone pollution at non-visited sites of the state of California. For that, we use the data available on the website https://www.epa.gov/outdoor-air-quality-data.
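As in the simulation study, the predictions below rely on a local-linear smoother with the multivariate Epanechnikov kernel $K(x)=\frac{2}{\pi}\max\left\{(1-\|x\|_2^2),0\right\}$ defined in Section \ref{s4}. A minimal Python sketch of the corresponding kernel weights (the function names are ours, not part of the estimation code):

```python
import math

def epanechnikov(x):
    """Multivariate Epanechnikov kernel K(x) = (2/pi) * max(1 - ||x||_2^2, 0)."""
    return (2.0 / math.pi) * max(1.0 - sum(v * v for v in x), 0.0)

def kernel_weights(sites, s0, h):
    """Local weights K((s - s0)/h) / h^d at rescaled sites s in [0, 1]^d."""
    d = len(s0)
    return [epanechnikov([(a - b) / h for a, b in zip(s, s0)]) / h ** d
            for s in sites]

# weights at s0 = (0.5, 0.5) with bandwidth h = 0.32 (the value used in Table 2)
sites = [(0.5, 0.5), (0.6, 0.5), (0.9, 0.9)]
print([round(w, 3) for w in kernel_weights(sites, (0.5, 0.5), 0.32)])
```

Sites farther than $h$ from $\mathbf{s}_0$ receive a zero weight, so the prediction at a non-visited site is a local average over its rescaled neighbours only.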
The explanatory functional variables $$\{X_{s_i}(t),\ t=1,\cdots,100,\ s_i=(Latitude, Longitude)_i,\ i=1,\cdots,51\}$$ correspond to the ozone concentrations measured over the first $p=100$ days, from January 1st, 2021 to April 12th, 2021, at each of the $n^2=51$ sites. The response variables $$\{Y_{s_i},\ s_i=(Latitude, Longitude)_i,\ i=1,\cdots,35\}$$ correspond to the ozone concentrations measured on April 13th, 2021 at each of the first $35$ stations. $\{Y_{s_i},\ s_i=(Latitude, Longitude)_i,\ i=1,\cdots,35\}$ and $\{X_{s_i}(t),\ t=1,\cdots,100,\ s_i=(Latitude, Longitude)_i,\ i=1,\cdots,35\}$ are related through the spatial semi-functional linear regression model with derivatives (SSFLRD) defined by \begin{eqnarray*} Y_{s_{i}}&=&\int_{0}^1\phi(t)X_{s_i}(t)dt+\int_{0}^1\gamma(t)X^{\prime}_{s_i}(t)dt\\ &&+r\left(\frac{Latitude}{\max_{j=1,\cdots,51}(Latitude[j])},\frac{Longitude}{\max_{j=1,\cdots,51}(Longitude[j])}\right)+\epsilon_{s_i}. \end{eqnarray*} To evaluate the performance of our method, we compare the prediction error obtained from the SSFLRD model with that of the spatial functional linear regression model with derivatives (SFLRD) studied in \cite{bouka3} and defined by $$Y_{s_{i}}=\int_{0}^1\phi(t)X_{s_i}(t)dt+\int_{0}^1\gamma(t)X^{\prime}_{s_i}(t)dt+\epsilon_{s_i}\ ,$$ where $X^{\prime}_{s_i}$, standing for the first derivative of $X_{s_i}$, is computed with the function "fdata.deriv" of the $R$ fda package. So, from both methodologies, we predict $$\{Y_{s_i},\ s_i=(Latitude, Longitude)_i, i=36,\cdots,51\},$$ which correspond to the measurements of ozone concentration on April 13th, 2021 at these $16$ other sites, assumed to be non-visited at that date. \begin{table}[h!]
\centering \begin{tabular}{ccc} \hline &SSFLRD &SFLRD \\\hline Prediction error (PE) &$0.0320$&$0.0334$\\\hline \end{tabular} \caption{Prediction error computed from both models with $h=0.32$, $\psi=0.01$, $w=0.28$.} \label{tab:2} \end{table} \begin{figure}[h!] \centering \includegraphics[width=0.99\textwidth]{prediction_ozone.pdf} \caption{Centered predicted values of ozone concentration, from SSFLRD (left) and SFLRD (right), versus the centered measured values. }\label{fig1} \end{figure} \noindent The two graphics of Figure \ref{fig1} present very minor differences, which is confirmed by the computation (see Table \ref{tab:2}) of the prediction error (PE) given by $$PE=\sqrt{\sum_{i=36}^{51}\left(Y_{s_i}-\widehat{Y}_{s_i}\right)^2}.$$ We see a very small advantage for the prediction obtained from the SSFLRD model studied in this paper. \section{Discussion}\label{Discussion} \noindent In this paper, we study the asymptotic properties of a prediction at non-visited sites computed from an estimator of the nonparametric regression function in a spatial semi-functional linear regression model with derivatives. The originality of the proposed method is to consider spatially dependent data in this new model. We established the convergence rates of the estimation and prediction errors when the considered processes are $\alpha$-mixing dependent. The main contributions of this work concern the convergence rates of the empirical covariance operators and of the estimator $\widehat{r}$ of $r$, constructed from $\alpha$-mixing dependent data satisfying the general model defined in (\ref{1.2}), allowing prediction at non-visited sites. Its convergence rate is optimal for $d\ge4$, is better than that of \cite{carbonetal07} for $d=3$, and is quite close to that of \cite{carbonetal07} for $d=2$. Besides, the simulation study revealed that the presence of functional data in the SSFLRD model does not modify the convergence rate of the estimated nonparametric regression function.
Notice, however, that the problems are different: here the question is that of prediction at non-visited sites evaluated from estimation, while \cite{carbonetal07} is only interested in estimation. The application to ozone pollution revealed that the proposed prediction fits the spatial semi-functional linear regression model with derivatives well. Moreover, the SSFLRD model produces predictions equivalent to those of the SFLRD model. However, the methodology presented in this paper has more advantages than that of the SFLRD model. \section{Proofs of asymptotic results}\label{s6} \subsection{Technical Lemmas} \noindent The proof of the following Lemma \ref{lemme1} is similar to the one of Lemma 1 in \cite{deo}. \begin{lemma}\label{lemme1} Assume that (\ref{ar2}) holds. For $p\geq1$, let ${\mathcal L}_p({\mathcal A})$ be the class of ${\mathcal A}$-measurable random variables $X$ satisfying $\|X\|_p=\left(\mathbb{E}(|X|^p)\right)^{1/p}<+\infty$. Let $p,\ s,\ h$ be positive constants such that $p^{-1}+s^{-1}+h^{-1}=1$, $X\in {\mathcal L}_p({\mathcal B}(S))$ and $Y\in {\mathcal L}_s({\mathcal B}(S'))$. Then $$|\mathbb{E}(XY)-\mathbb{E}(X)\mathbb{E}(Y)|\leq K\left\|X\right\|_p\left\|Y\right\|_s\left\{\alpha_{1,\infty}(\rho(S,S'))\right\}^{1/h}.$$ \end{lemma} \noindent The following Lemma \ref{lemme2} adapts Proposition 8 in \cite{mas_pumo09}. \begin{lemma}\label{lemme2} We have: $$\left\|\left(S_{\mathbf{n},\phi}+\psi_{\mathbf{n}}I\right)^{-1}\right\|_{\infty}\le \dfrac{1}{\psi_{\mathbf{n}}},$$ where $\left\|R\right\|_{\infty}=\sup_{x\in H}\dfrac{\left\|Rx\right\|_H}{\left\|x\right\|_H}.$ \end{lemma} \noindent From Lemmas 9 and 10 of \cite{mas_pumo09} together with Corollary 3.1 of \cite{bouka3}, we obtain the following Lemma \ref{lemme3}.
\begin{lemma}\label{lemme3} We have: \begin{eqnarray*} \|u_{\mathbf{n},\phi}-u_{\phi}\|_{L^2({\mathcal HS})}=O\left(\frac{\log\widehat{\mathbf{n}}}{w_{\mathbf{n}}\widehat{\mathbf{n}}^{1/2}}\right) \,\,\textrm{ and }\,\, \|S_{\mathbf{n},\phi}-S_{\phi}\|_{L^2({\mathcal HS})}=O\left(\frac{\log\widehat{\mathbf{n}}}{w_{\mathbf{n}}\widehat{\mathbf{n}}^{1/2}}\right)\label{e2}, \end{eqnarray*} where ${\mathcal HS}$ stands for the space of Hilbert-Schmidt operators endowed with the inner product $\left\langle R, T\right\rangle_{\mathcal HS}=\sum^{+\infty}_{i=1}\left\langle R(w_i),T(w_i)\right\rangle_H$, where $(w_i)_{i\geq1}$ is an orthonormal basis of $H$, and $\left\|R\right\|_{L^2({\mathcal HS})}=\left\{\mathbb{E}\left(\left\|R\right\|_{\mathcal HS}^2\right)\right\}^{1/2}.$ \end{lemma} \begin{lemma}\label{l1} We have: $$\left\|\widehat{\phi}_{\mathbf{n}}-\phi \right\|_H=O_p(1)\ \ \text{and}\ \ \left\|\widehat{\gamma}_{\mathbf{n}}-\gamma \right\|_G=O_p(1).$$ \end{lemma} \noindent\textit{Proof}. Putting $u_{\phi}=\Delta-\Gamma^{\prime}\Gamma^{\prime\prime-1}\Delta^{\prime}$, we have $$\widehat{\phi}_{\mathbf{n}}-\phi=(S_{\mathbf{n},\phi}+\psi_{\mathbf{n}}I)^{-1}(u_{\mathbf{n},\phi}-u_{\phi})+(S_{\mathbf{n},\phi}+\psi_{\mathbf{n}}I)^{-1}(S_{\phi}-S_{\mathbf{n},\phi}-\psi_{\mathbf{n}}I)\phi.$$ \noindent From Lemma \ref{lemme2}, we obtain \begin{eqnarray*} \left\|\widehat{\phi}_{\mathbf{n}}-\phi \right\|_H^2&\leq& 3\left(\left\|u_{\mathbf{n},\phi}-u_{\phi} \right\|_H^2\left\|(S_{\mathbf{n},\phi}+\psi_{\mathbf{n}}I)^{-1} \right\|_\infty^2\right)\\ &&+3\left\|\phi \right\|_H^2\left(\left\|S_{\phi}-S_{\mathbf{n},\phi} \right\|_\infty^2\left\|(S_{\mathbf{n},\phi}+\psi_{\mathbf{n}}I)^{-1} \right\|_\infty^2\right)\\ &&+3\psi_{\mathbf{n}}^2\left\|\phi \right\|_H^2\left(\left\|(S_{\mathbf{n},\phi}+\psi_{\mathbf{n}}I)^{-1} \right\|_\infty^2\right)\\ &\le&\frac{3}{\psi_{\mathbf{n}}^2}\left(\left\|u_{\mathbf{n},\phi}-u_{\phi} \right\|_H^2+\left\|S_{\phi}-S_{\mathbf{n},\phi}
\right\|_\infty^2\left\|\phi\right\|_H^2\right)+3\left\|\phi\right\|_H^2. \end{eqnarray*} \noindent For $\tau>0$, from the Markov inequality, we have \begin{eqnarray*} &&\mathbb{P}\left(\left\|\widehat{\phi}_{\mathbf{n}}-\phi \right\|_H>\tau\right)\\ &=&\mathbb{P}\left(\left\|\widehat{\phi}_{\mathbf{n}}-\phi \right\|_H^2>\tau^2\right)\\ &\le&\mathbb{P}\left(\frac{3}{\psi_{\mathbf{n}}^2}\left(\left\|u_{\mathbf{n},\phi}-u_{\phi} \right\|_H^2+\left\|S_{\phi}-S_{\mathbf{n},\phi} \right\|_\infty^2\left\|\phi\right\|_H^2\right)+3\left\|\phi\right\|_H^2>\tau^2\right)\\ &\le&\mathbb{P}\left(\frac{3}{\psi_{\mathbf{n}}^2}\left\|u_{\mathbf{n},\phi}-u_{\phi} \right\|_H^2>\dfrac{\tau^2}{3}\right)+\mathbb{P}\left(\frac{3\left\|\phi\right\|_H^2}{\psi_{\mathbf{n}}^2}\left\|S_{\phi}-S_{\mathbf{n},\phi} \right\|_\infty^2>\dfrac{\tau^2}{3}\right)\\ &&+\,\mathbb{P}\left(3\left\|\phi\right\|_H^2>\dfrac{\tau^2}{3}\right)\\ &\le&\dfrac{9\mathbb{E}\left(\left\|u_{\mathbf{n},\phi}-u_{\phi} \right\|_H^2\right)}{\tau^2\psi_{\mathbf{n}}^2}+\dfrac{9\left\|\phi\right\|_H^2\mathbb{E}\left(\left\|S_{\phi}-S_{\mathbf{n},\phi} \right\|_\infty^2\right)}{\tau^2\psi_{\mathbf{n}}^2}+\mathbb{I}_{]0,3\|\phi\|_H[}(\tau), \end{eqnarray*} where $\mathbb{I}_{]0,3\|\phi\|_H[}(.)$ is the indicator function. Finally, from Lemma \ref{lemme3}, we conclude that $$\forall\delta>0,\, \exists\tau\ge3\|\phi\|_H,\,N_{\delta}\in\mathbb{N}\, \, \text{such that}\, \, \forall\mathbf{n}>N_{\delta}\mathbf{1},\, \mathbb{P}\left(\left\|\widehat{\phi}_{\mathbf{n}}-\phi \right\|_H>\tau\right)\le\delta,$$ where $\mathbf{n}>N_{\delta}\mathbf{1}$ means that $\min_{1\le k\le d}\{n_k\}>N_{\delta}$. Similarly, we obtain that $\left\|\widehat{\gamma}_{\mathbf{n}}-\gamma \right\|_G=O_p(1)$. \subsection{Proof of Theorem \ref{th1}} \noindent Proof of (\ref{3.2}): For the sake of simplicity, we only give the proof for the empirical operator $\Gamma_{\mathbf{n}}$, since similar arguments apply to the other estimators.
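Before the formal argument, the concentration of the empirical covariance operator established by this proof can be illustrated numerically (a minimal Python sketch, with independent two-dimensional Gaussian data rather than the $\alpha$-mixing random field of the theorem; function names are ours):

```python
import math
import random

def emp_cov(xs):
    """Empirical covariance operator Gamma_n = (1/n) sum X_i (x) X_i,
    as a 2 x 2 matrix for centered two-dimensional observations."""
    n = len(xs)
    return [[sum(x[i] * x[j] for x in xs) / n for j in range(2)] for i in range(2)]

def sup_basis_norm(A, B):
    """max_k ||(A - B) v_k|| over the canonical basis, as in the proof."""
    return max(math.hypot(A[0][k] - B[0][k], A[1][k] - B[1][k]) for k in range(2))

random.seed(0)
Gamma = [[1.0, 0.0], [0.0, 0.25]]  # true covariance of X = (Z_1, Z_2 / 2)
for n in (100, 10000):
    xs = [(random.gauss(0.0, 1.0), random.gauss(0.0, 0.5)) for _ in range(n)]
    print(n, round(sup_basis_norm(emp_cov(xs), Gamma), 4))
```

The deviation shrinks as the sample size grows, in line with the bound on the deviation probability proved in this subsection.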
Recall that $\Lambda_{\mathbf{n}}=\Gamma_{\mathbf{n}}$ with $\Gamma_{\mathbf{n}}=\dfrac{1}{\widehat{\mathbf{n}}}\sum_{\mathbf{i}\in{\cal I}_{\mathbf{n}}}X_{\mathbf{i}}\otimes_{H}X_{\mathbf{i}}$ and $\Lambda=\Gamma=\mathbb{E}(X\otimes_{H}X)$. For all $\tau>0$, from the Markov inequality, we have: \begin{eqnarray} \mathbb{P}\left(\left\|\Gamma_{\mathbf{n}}-\Gamma\right\|_{\infty}>\tau\right)&=&\mathbb{P}\left(\sup_{k}\left\|\Gamma_{\mathbf{n}}v_k-\Gamma v_k\right\|_{H}>\tau\right)\nonumber\\ &\leq&\sum^{+\infty}_{k=1}\mathbb{P}\left(\left\|\Gamma_{\mathbf{n}}v_k-\Gamma v_k\right\|_{H}>\tau\right)\nonumber\\ &\leq&\dfrac{1}{\tau^2}\sum^{+\infty}_{k=1}\mathbb{E}\left(\left\|\Gamma_{\mathbf{n}}v_k-\Gamma v_k\right\|_{H}^2\right)\label{6.1}. \end{eqnarray} Now, setting \begin{gather*} L_{\mathbf{ij}}^k=\left\langle\left\langle X_{\mathbf{i}}, v_k\right\rangle_{H}X_{\mathbf{i}}-\mathbb{E}(\left\langle X, v_k\right\rangle_{H}X),\left\langle X_{\mathbf{j}}, v_k\right\rangle_{H}X_{\mathbf{j}}-\mathbb{E}(\left\langle X, v_k\right\rangle_{H}X) \right\rangle_{H}, \end{gather*} we have \begin{eqnarray*} \mathbb{E}\left[\|\Gamma_{\mathbf{n}}v_k-\Gamma v_k\|^{2}_{H}\right]&=&\mathbb{E}\left[\left\|\frac{1}{\widehat{\mathbf{n}}}\sum_{\mathbf{i}\in{\cal I}_{\mathbf{n}}}\left(\left\langle X_{\mathbf{i}}, v_k\right\rangle_{H}X_{\mathbf{i}}-\mathbb{E}(\left\langle X, v_k\right\rangle_{H}X)\right)\right\|^{2}_{H}\right]\\ &=&\dfrac{1}{\widehat{\mathbf{n}}^{2}}\sum_{\mathbf{i}\in{\cal I}_{\mathbf{n}}}\mathbb{E}\left[\left\|\left\langle X_{\mathbf{i}}, v_k\right\rangle_{H}X_{\mathbf{i}}-\mathbb{E}(\left\langle X, v_k\right\rangle_{H}X)\right\|^{2}_{H}\right]\\ &&+\dfrac{1}{\widehat{\mathbf{n}}^{2}}\sum_{\mathbf{i}\neq\mathbf{j}}\mathbb{E} \left(L_{\mathbf{ij}}^k\right)\\ &:=&A_k+B_k.
\end{eqnarray*} On the one hand, since the $X_{\mathbf{i}}$ are strictly stationary with the same law as $X$, from Assumption \ref{as5}, we have: \begin{eqnarray} A_k&=&\dfrac{1}{\widehat{\mathbf{n}}^{2}}\sum_{\mathbf{i}\in{\cal I}_{\mathbf{n}}}\mathbb{E}\left[\left\|\left\langle X_{\mathbf{i}}, v_k\right\rangle_{H}X_{\mathbf{i}}-\mathbb{E}(\left\langle X, v_k\right\rangle_{H}X)\right\|^{2}_{H}\right]\nonumber\\ &\leq&\dfrac{2}{\widehat{\mathbf{n}}^{2}}\sum_{\mathbf{i}\in{\cal I}_{\mathbf{n}}}\left(\mathbb{E}\left(\left\langle X_{\mathbf{i}}, v_k\right\rangle^{2}_{H}\|X_{\mathbf{i}}\|^{2}_{H}\right)+\mathbb{E}\left(\left\langle X, v_k\right\rangle^{2}_{H}\|X\|^{2}_{H}\right)\right)\nonumber\\ &\leq&\dfrac{4C^2\lambda_k}{\widehat{\mathbf{n}}}.\label{6.2} \end{eqnarray} On the other hand, we have: \begin{eqnarray*} B_k&=&\dfrac{1}{\widehat{\mathbf{n}}^{2}}\sum_{0<\|\mathbf{i}-\mathbf{j}\|\leq C_{\mathbf{n}}}\mathbb{E} (L_{\mathbf{ij}}^k)+\dfrac{1}{\widehat{\mathbf{n}}^{2}}\sum_{\|\mathbf{i}-\mathbf{j}\|>C_{\mathbf{n}}}\mathbb{E} (L_{\mathbf{ij}}^k):=B_{k1}+B_{k2}, \end{eqnarray*} where $0<C_{\mathbf{n}}<\widehat{\mathbf{n}}$ and $C_{\mathbf{n}}\rightarrow+\infty$ as $\mathbf{n}\rightarrow+\infty$. However, from the Cauchy-Schwarz inequality and by the same arguments as those of relation (\ref{6.2}), we have \begin{eqnarray*} \mathbb{E}(|L_{\mathbf{ij}}^k|)\leq\mathbb{E}(\sqrt{L_{\mathbf{ii}}^k}\sqrt{L_{\mathbf{jj}}^k})\leq (\mathbb{E} (L_{\mathbf{ii}}^k))^{1/2}(\mathbb{E} (L_{\mathbf{jj}}^k))^{1/2}\leq 2C^2\lambda_k \end{eqnarray*} because $\mathbb{E} (L_{\mathbf{ii}}^k)=\mathbb{E}\left(\left\|\left\langle X_{\mathbf{i}}, v_k\right\rangle_{H}X_{\mathbf{i}}-\mathbb{E}\left(\left\langle X, v_k\right\rangle_{H}X\right)\right\|^{2}_{H}\right)$ are the terms of $A_k$.
Then \begin{eqnarray*} |B_{k1}|&\leq&\dfrac{2C^2\lambda_k}{\widehat{\mathbf{n}}^2}\sum^{C_{\mathbf{n}}}_{\ell=1}\sum_{\stackrel{\mathbf{i,j}\in{\mathcal I}_{\mathbf{n}}}{\ell\leq\|\mathbf{i-j}\|=t<\ell+1}}1\\ &\leq&\dfrac{2C^2\lambda_k}{\widehat{\mathbf{n}}^2}\sum^{C_{\mathbf{n}}}_{\ell=1}\sum_{\mathbf{j}\in{\mathcal I}_{\mathbf{n}}}\sum_{\stackrel{\mathbf{i}\in{\mathcal I}_{\mathbf{n}}}{\ell\leq\|\mathbf{i-j}\|=t<\ell+1}}1\\ &\leq&\dfrac{2C^2\lambda_k}{\widehat{\mathbf{n}}}\sum^{C_{\mathbf{n}}}_{\ell=1}\sum_{\stackrel{\mathbf{i}\in{\mathcal I}_{\mathbf{n}}}{\ell\leq\|\mathbf{i}\|=t<\ell+1}}1\\ &\leq&\dfrac{2C^2\lambda_k}{\widehat{\mathbf{n}}}\sum^{C_{\mathbf{n}}}_{t=1}t^{d-1}\leq\dfrac{2C^2\lambda_kC_{\mathbf{n}}^d}{\widehat{\mathbf{n}}}. \end{eqnarray*} Taking $C_{\mathbf{n}}=\lfloor(\log\widehat{\mathbf{n}})^{1/d}\rfloor$ (where $\lfloor x\rfloor$ stands for the integer part of $x$), we obtain \begin{eqnarray}\label{6.3} |B_{k1}|\leq\dfrac{2C^2\lambda_k\log\widehat{\mathbf{n}}}{\widehat{\mathbf{n}}}. \end{eqnarray} Since the $X_{\mathbf{i}}$ are strictly stationary with the same law as $X$, $\|X\|_{H}<C$ a.s., and $L^k_{\mathbf{ii}}$ is the squared norm of the centered variable, it follows that $$\mathbb{E}\left[\left(L^k_{\mathbf{ii}}\right)^2\right]=\mathbb{E}\left(\|\left\langle X_{\mathbf{i}}, v_k\right\rangle_{H}X_{\mathbf{i}}-\mathbb{E}\left(\left\langle X, v_k\right\rangle_{H}X\right)\|_{H}^4\right)\leq8\mathbb{E}\left(\left\langle X,v_k\right\rangle_{H}^4\|X\|_{H}^4\right)<8\lambda_{k}C^{6}.$$ Applying Lemma \ref{lemme1} with $p=s=4$ and $h=2$, we obtain \begin{eqnarray*} |\mathbb{E} (L_{\mathbf{ij}}^k)|\leq K\left(\mathbb{E}\left[\left(L^k_{\mathbf{ii}}\right)^2\right]\right)^{1/2}\{\alpha_{1,\infty}(\|\mathbf{i-j}\|)\}^{1/2}\leq 2KC^3\sqrt{2\lambda_k}\{\alpha_{1,\infty}(\|\mathbf{i-j}\|)\}^{1/2}.
\end{eqnarray*} Since $\alpha_{1,\infty}(t)=O(t^{-\theta})$ with $\theta>2d$, we obtain \begin{eqnarray} |B_{k2}|&\leq&\dfrac{2KC^3\sqrt{2\lambda_k}}{\widehat{\mathbf{n}}^{2}}\sum^{+\infty}_{\ell=C_{\mathbf{n}}+1}\sum_{\stackrel{\mathbf{i,j}\in{\mathcal I}_{\mathbf{n}}}{\ell\leq\|\mathbf{i-j}\|=t<\ell+1}}\{\alpha_{1,\infty}(t)\}^{1/2}\nonumber\\ &\leq&\dfrac{2KC^3\sqrt{2\lambda_k}}{\widehat{\mathbf{n}}}\sum^{+\infty}_{\ell=C_{\mathbf{n}}+1}\sum_{\stackrel{\mathbf{i}\in{\mathcal I}_{\mathbf{n}}}{\ell\leq\|\mathbf{i}\|=t<\ell+1}}\{\alpha_{1,\infty}(t)\}^{1/2}\nonumber\\ &\leq&\dfrac{2KC^3\sqrt{2\lambda_k}}{\widehat{\mathbf{n}}}\sum^{+\infty}_{t=1}t^{d-1}\{\alpha_{1,\infty}(t)\}^{1/2}\nonumber\\ &\leq&\dfrac{2KC^3\sqrt{2\lambda_k}}{\widehat{\mathbf{n}}}\sum^{+\infty}_{t=1}t^{d-1-\theta/2}.\label{6.4} \end{eqnarray} From inequalities (\ref{6.1}), (\ref{6.2}), (\ref{6.3}) and (\ref{6.4}), with $\lambda_k=O(u^k)$, $0<u<1$, we conclude that \begin{eqnarray*} \mathbb{P}\left(\left\|\Gamma_{\mathbf{n}}-\Gamma\right\|_{\infty}>\tau\right)=O\left(\dfrac{\log\widehat{\mathbf{n}}}{\widehat{\mathbf{n}}}\right). \end{eqnarray*} \noindent Proof of (\ref{3.3}): Take $\Lambda_{\mathbf{n}}=\Gamma_{\mathbf{n}}$ with $\Gamma_{\mathbf{n}}=\dfrac{1}{\widehat{\mathbf{n}}}\sum_{\mathbf{i}\in{\cal I}_{\mathbf{n}}}X_{\mathbf{i}}\otimes_{H}X_{\mathbf{i}}$ and $\Lambda=\Gamma=\mathbb{E}(X\otimes_{H}X)$. By definition, we have $$ \left\|\Lambda_{\mathbf{n}}-\Lambda\right\|_{L^{2}({\cal HS})}=\left\{\mathbb{E}\left[\left\|\Lambda_{\mathbf{n}}-\Lambda\right\|^{2}_{{\cal HS}}\right]\right\}^{1/2}.
$$ Since $(v_{j})_{j\geq1}$ is an orthonormal basis of $H$, we have \begin{eqnarray*} &&\mathbb{E}\left[\left\|\Lambda_{\mathbf{n}}-\Lambda\right\|^{2}_{{\cal HS}}\right]=\sum^{+\infty}_{i=1}\mathbb{E}\left[\left\|\Lambda_{\mathbf{n}}(v_{i})-\Lambda(v_{i})\right\|^{2}_{H}\right]\\ &&\ \ \ =\sum^{Q}_{i=1}\mathbb{E}\left[\left\|\Lambda_{\mathbf{n}}(v_{i})-\Lambda(v_{i})\right\|^{2}_{H}\right]+\sum_{i>Q}\mathbb{E}\left[\left\|\Lambda_{\mathbf{n}}(v_{i})-\Lambda(v_{i})\right\|^{2}_{H}\right]:=A+B. \end{eqnarray*} Applying Theorem \ref{th1} and taking $Q=\lfloor K\log\widehat{\mathbf{n}}\rfloor$, we have $$ A\leq\mathbb{E}\left[\left\|\Lambda_{\mathbf{n}}-\Lambda\right\|^{2}_{\infty}\right]\sum^{Q}_{i=1}\left\|v_{i}\right\|^{2}_{H}\leq C_1\dfrac{(\log\widehat{\mathbf{n}})^{2}}{\widehat{\mathbf{n}}} $$ where $C_1$ is some positive constant. On the other hand, we have \begin{eqnarray*} B&=&\sum_{i>Q}\mathbb{E}\left[\sum^{+\infty}_{j=1}\left\langle \Lambda_{\mathbf{n}}(v_{i})-\Lambda(v_{i}), v_{j}\right\rangle^{2}_{H}\right]\\ &=&\sum_{i>Q}\sum^{+\infty}_{j=1}\mathbb{E}\left\{\left[\frac{1}{\widehat{\mathbf{n}}}\sum_{\mathbf{k}\in{\cal I}_{\mathbf{n}}}\left\langle \left\langle X_{\mathbf{k}}, v_{i}\right\rangle_{H}X_{\mathbf{k}}-\mathbb{E}(\left\langle X, v_{i}\right\rangle_{H}X), v_{j}\right\rangle_{H}\right]^{2}\right\}\\ &\leq&\dfrac{1}{\widehat{\mathbf{n}}}\sum_{i>Q}\sum^{+\infty}_{j=1}\sum_{\mathbf{k}\in{\cal I}_{\mathbf{n}}}\mathbb{E}\left[\left\langle \left\langle X_{\mathbf{k}}, v_{i}\right\rangle_{H}X_{\mathbf{k}}-\mathbb{E}(\left\langle X, v_{i}\right\rangle_{H}X), v_{j}\right\rangle^{2}_{H}\right]\\ &\leq&\frac{2}{\widehat{\mathbf{n}}}\sum_{i>Q}\sum^{+\infty}_{j=1}\sum_{\mathbf{k}\in{\cal I}_{\mathbf{n}}}\left[\mathbb{E}\left(\left\langle X_{\mathbf{k}}, v_{i}\right\rangle^{2}_{H}\left\langle X_{\mathbf{k}}, v_{j}\right\rangle^{2}_{H}\right)+\mathbb{E}\left(\left\langle X, v_{i}\right\rangle^{2}_{H}\left\langle X, v_{j}\right\rangle^{2}_{H}\right)\right]\\ 
&\leq&4\sum_{i>Q}\left[\mathbb{E}\left(\left\langle X, v_{i}\right\rangle^{4}_{H}\right)\right]^{1/2}\sum^{+\infty}_{j=1}\left[\mathbb{E}\left(\left\langle X, v_{j}\right\rangle^{4}_{H}\right)\right]^{1/2}. \end{eqnarray*} Since $\left\langle X, v_{j}\right\rangle^{4}_{H}\leq\|X\|^{2}_{H}\|v_{j}\|^{2}_{H}\left\langle X, v_{j}\right\rangle^{2}_{H}<C^{2}\left\langle X, v_{j}\right\rangle^{2}_{H}$ a.s. and $\mathbb{E}\left(\left\langle X, v_{j}\right\rangle^{2}_{H}\right)=\lambda_{j}$ with $\lambda_{j}=O(u^{j})$, $0<u<1$, $j\geq1$, and $Q=\lfloor K\log\widehat{\mathbf{n}}\rfloor$ with $K=\dfrac{3}{\log\left(\dfrac{1}{u}\right)}$, then $$ B\leq C_{2}\exp(-K(\log\widehat{\mathbf{n}})(\log(1/u))/2)=\dfrac{C_{2}}{\widehat{\mathbf{n}}^{3/2}} $$ where $C_{2}$ is a positive constant. This finishes the proof of $(\ref{3.3})$. \subsection{Proof of Theorem \ref{regl2}} \noindent Put $K_{\mathbf{i}}:=\dfrac{1}{h^{d}}K\left(\dfrac{\dfrac{\mathbf{i}}{\mathbf{n+1}}-\mathbf{s}_{0}}{h}\right)$, $\Gamma(\mathbf{i}):=\left(\dfrac{\mathbf{i}}{\mathbf{n+1}}-\mathbf{s}_{0}\right)^{T}r''(\mathbf{s}_{0})\left(\dfrac{\mathbf{i}}{\mathbf{n+1}}-\mathbf{s}_{0}\right)$, where $r''(\mathbf{s}_{0})$ stands for the matrix of second order partial derivatives of $r$ at $\mathbf{s}_{0}$, \begin{eqnarray*} A_{\mathbf{n}}&=&\frac{1}{\widehat{\mathbf{n}}}{\mathcal X}^{T}W_{0}{\mathcal X}\\ &=& \begin{pmatrix} \dfrac{1}{\widehat{\mathbf{n}}}\sum_{\mathbf{i}\in{\mathcal I}_{\mathbf{n}}}K_{\mathbf{i}}&\dfrac{1}{\widehat{\mathbf{n}}}\sum_{\mathbf{i}\in{\mathcal I}_{\mathbf{n}}}\left(\frac{\frac{\mathbf{i}}{\mathbf{n+1}}-\mathbf{s}_{0}}{h}\right)^{T}K_{\mathbf{i}}\\ \dfrac{1}{\widehat{\mathbf{n}}}\sum_{\mathbf{i}\in{\mathcal I}_{\mathbf{n}}}\left(\frac{\frac{\mathbf{i}}{\mathbf{n+1}}-\mathbf{s}_{0}}{h}\right)K_{\mathbf{i}}&\ \ \ \ \ \ \dfrac{1}{\widehat{\mathbf{n}}}\sum_{\mathbf{i}\in{\mathcal 
I}_{\mathbf{n}}}\left(\frac{\frac{\mathbf{i}}{\mathbf{n+1}}-\mathbf{s}_{0}}{h}\right)\left(\frac{\frac{\mathbf{i}}{\mathbf{n+1}}-\mathbf{s}_{0}}{h}\right)^{T}K_{\mathbf{i}}\\ \end{pmatrix} \end{eqnarray*} and \[B_{\mathbf{n}}:=\frac{1}{\widehat{\mathbf{n}}}{\mathcal X}^{T}W_{0}{\mathcal Y}= \begin{pmatrix} \dfrac{1}{\widehat{\mathbf{n}}}\sum_{\mathbf{i}\in{\mathcal I}_{\mathbf{n}}}K_{\mathbf{i}}T_{\mathbf{i}}\\ \dfrac{1}{\widehat{\mathbf{n}}}\sum_{\mathbf{i}\in{\mathcal I}_{\mathbf{n}}}\left(\dfrac{\dfrac{\mathbf{i}}{\mathbf{n+1}}-\mathbf{s}_{0}}{h}\right)K_{\mathbf{i}}T_{\mathbf{i}}\\ \end{pmatrix},\ W_{\mathbf{n}}:=B_{\mathbf{n}}-A_{\mathbf{n}}\begin{pmatrix} \beta_{0}\\ h\beta_{1}\\ \end{pmatrix}. \] We have \[\lim_{\mathbf{n}\to +\infty}A_{\mathbf{n}}= \begin{pmatrix} 1&\mathbf{0}^{T}\\ \mathbf{0}&\ \ \ \int \mathbf{u}\mathbf{u}^{T}K(\mathbf{u})d\mathbf{u}\\ \end{pmatrix}=\begin{pmatrix} 1&\mathbf{0}^{T}\\ \mathbf{0}&\ \ \ \nu_2I_d\\ \end{pmatrix}=A, \] and $ \textrm{det}(A)=\nu_2^d \ne 0$; then $A$ is invertible and \[A^{-1}= \begin{pmatrix} 1&\mathbf{0}^{T}\\ \mathbf{0}&\ \ \ \nu_2^{-1}I_d\\ \end{pmatrix}. \] Therefore, putting \[ A^{-1}_{\mathbf{n}} =\begin{pmatrix} u^{\mathbf{n}}_{11}&u^{\mathbf{n}}_{12}\\ u^{\mathbf{n}}_{21}&u^{\mathbf{n}}_{22}\\ \end{pmatrix}, \] we have $\lim_{\mathbf{n}\rightarrow +\infty}\left(u^{\mathbf{n}}_{11}\right)=1$ and $\lim_{\mathbf{n}\rightarrow +\infty}\left(u^{\mathbf{n}}_{12}\right)=\mathbf{0}^{T}$, and the sequences $\left(u^{\mathbf{n}}_{11}\right)_{\mathbf{n}}$ and $\left(u^{\mathbf{n}}_{12}\right)_{\mathbf{n}}$ are bounded.
\noindent $(i)$ For all $\mathbf{s}_{0}\in[0,1]^d$, we have $ \widehat{r}(\mathbf{s}_{0})-r(\mathbf{s}_{0})=(1,\mathbf{0}^{T})A^{-1}_{\mathbf{n}}W_{\mathbf{n}}$ and \[\mathbb{E}\left(\widehat{r}(\mathbf{s}_{0})-r(\mathbf{s}_{0})\right)=(1,\mathbf{0}^{T})A^{-1}_{\mathbf{n}} \begin{pmatrix} \dfrac{1}{\widehat{\mathbf{n}}}\sum_{\mathbf{i}\in{\mathcal I}_{\mathbf{n}}}K_{\mathbf{i}}(\Gamma(\mathbf{i})+\mathbb{E}(\epsilon^*_{\mathbf{i}}))\\ \dfrac{1}{\widehat{\mathbf{n}}}\sum_{\mathbf{i}\in{\mathcal I}_{\mathbf{n}}}\left(\dfrac{\dfrac{\mathbf{i}}{\mathbf{n+1}}-\mathbf{s}_{0}}{h}\right)K_{\mathbf{i}}(\Gamma(\mathbf{i})+\mathbb{E}(\epsilon^*_{\mathbf{i}}))\\ \end{pmatrix}, \] where $ \mathbb{E}(\epsilon^*_{\mathbf{i}})=\mathbb{E}\left(\left\langle \phi-\widehat{\phi}_{\mathbf{n}}, X_{\mathbf{i}}\right\rangle_{H}\right)+\mathbb{E}\left(\left\langle \gamma-\widehat{\gamma}_{\mathbf{n}}, X^{\prime}_{\mathbf{i}}\right\rangle_{G}\right)$. Since $$\mathbb{E}\left(\left\langle(S_{\mathbf{n},\phi}+\psi_{\mathbf{n}}I)^{-1}\phi, X\right\rangle_H\right)\to\mathbb{E}\left(\left\langle S_{\phi}^{-1}\phi, X\right\rangle_H\right)<+\infty,$$ from Assumption \ref{as5} and from Lemmas \ref{lemme2} and \ref{lemme3}, we have \begin{eqnarray*} \mathbb{E}\left(\left\langle\widehat{\phi}_{\mathbf{n}}-\phi,X\right\rangle_H\right)&=&\mathbb{E}\left(\left\langle(S_{\mathbf{n},\phi}+\psi_{\mathbf{n}}I)^{-1}(u_{\mathbf{n},\phi}-u_{\phi}),X\right\rangle_H\right)\\ &&-\mathbb{E}\left(\left\langle(S_{\mathbf{n},\phi}+\psi_{\mathbf{n}}I)^{-1}\phi,\psi_{\mathbf{n}} X\right\rangle_H\right)\\ && +\mathbb{E}\left(\left\langle(S_{\mathbf{n},\phi}+\psi_{\mathbf{n}}I)^{-1}(S_{\phi}-S_{\mathbf{n},\phi})\phi, X\right\rangle_H\right)\\ &=& O\left(\dfrac{\log \widehat{\mathbf{n}}}{w_{\mathbf{n}}\psi_{\mathbf{n}}\widehat{\mathbf{n}}^{1/2}}\right)+O\left(\psi_{\mathbf{n}}\right). 
\end{eqnarray*} Therefore, $$\left(\dfrac{1}{\widehat{\mathbf{n}}}\sum_{\mathbf{i}\in{\mathcal I}_{\mathbf{n}}}K_{\mathbf{i}}\mathbb{E}\left[\left(\left\langle \phi-\widehat{\phi}_{\mathbf{n}}, X_{\mathbf{i}}\right\rangle_{H}\right)\right]\right)^2=O\left(\dfrac{(\log \widehat{\mathbf{n}})^2}{w_{\mathbf{n}}^2\psi_{\mathbf{n}}^2\widehat{\mathbf{n}}}\right)+O\left(\psi_{\mathbf{n}}^2\right).$$ Similarly, we have $$\left(\dfrac{1}{\widehat{\mathbf{n}}}\sum_{\mathbf{i}\in{\mathcal I}_{\mathbf{n}}}K_{\mathbf{i}}\mathbb{E}\left[\left\langle \gamma-\widehat{\gamma}_{\mathbf{n}}, X^{\prime}_{\mathbf{i}}\right\rangle_{G}\right]\right)^2=O\left(\dfrac{(\log \widehat{\mathbf{n}})^2}{w_{\mathbf{n}}^2\psi_{\mathbf{n}}^2\widehat{\mathbf{n}}}\right)+O\left(\psi_{\mathbf{n}}^2\right)$$ and $$ \left(\dfrac{1}{\widehat{\mathbf{n}}}\sum_{\mathbf{i}\in{\mathcal I}_{\mathbf{n}}}\left(\frac{\dfrac{\mathbf{i}}{\mathbf{n+1}}-\mathbf{s}_{0}}{h}\right)K_{\mathbf{i}}\mathbb{E}(\epsilon^*_{\mathbf{i}})\right)^2=O\left(\dfrac{(\log \widehat{\mathbf{n}})^2}{w_{\mathbf{n}}^2\psi_{\mathbf{n}}^2\widehat{\mathbf{n}}}\right)+O\left(\psi_{\mathbf{n}}^2\right).$$ Besides, we have $$\dfrac{1}{\widehat{\mathbf{n}}}\sum_{\mathbf{i}\in{\mathcal I}_{\mathbf{n}}}K_{\mathbf{i}}\Gamma(\mathbf{i})=O(h^2)\, \, \, \text{and}\, \, \, \dfrac{1}{\widehat{\mathbf{n}}}\sum_{\mathbf{i}\in{\mathcal I}_{\mathbf{n}}}\left(\dfrac{\dfrac{\mathbf{i}}{\mathbf{n+1}}-\mathbf{s}_{0}}{h}\right)K_{\mathbf{i}}\Gamma(\mathbf{i})=O(h^2).$$ Thus $$\sup_{\mathbf{s}_0\in[0,1]^d}\bigg\{\mathbb{E}\bigg(\widehat{r}(\mathbf{s}_{0})-r(\mathbf{s}_{0})\bigg)\bigg\}^2=O\left(h^4\right)+O\left(\dfrac{(\log \widehat{\mathbf{n}})^2}{w_{\mathbf{n}}^2\psi_{\mathbf{n}}^2\widehat{\mathbf{n}}}\right)+O\left(\psi_{\mathbf{n}}^2\right).$$ \smallskip \noindent $(ii)$ Since $T_{\mathbf{i}}-\beta_{0}-\left(\dfrac{\dfrac{\mathbf{i}}{\mathbf{n+1}}-\mathbf{s}_{0}}{h}\right)^{T}h\beta_{1}-\Gamma(\mathbf{i})=\xi_{\mathbf{i}}$ and putting 
$k_{\mathbf{n}}(\mathbf{i})=u^{\mathbf{n}}_{11}+u^{\mathbf{n}}_{12}\left(\dfrac{\dfrac{\mathbf{i}}{\mathbf{n+1}}-\mathbf{s}_{0}}{h}\right)$, we obtain for all $\mathbf{s}_0\in[0,1]^d$ that \begin{eqnarray*} \widehat{r}(\mathbf{s}_{0})-\mathbb{E}\left(\widehat{r}(\mathbf{s}_{0})\right)&=&\dfrac{1}{\widehat{\mathbf{n}}}\sum_{\mathbf{i}\in{\mathcal I}_{\mathbf{n}}}K_{\mathbf{i}}\left(\xi_{\mathbf{i}}-\mathbb{E}\left(\xi_{\mathbf{i}}\right)\right)k_{\mathbf{n}}(\mathbf{i})\\ &=&\dfrac{1}{\widehat{\mathbf{n}}h^{d}}\sum_{\mathbf{i}\in{\mathcal I}_{\mathbf{n}}}K\left(\dfrac{\dfrac{\mathbf{i}}{\mathbf{n+1}}-\mathbf{s}_{0}}{h}\right)\left(\xi_{\mathbf{i}}-\mathbb{E}\left(\xi_{\mathbf{i}}\right)\right)k_{\mathbf{n}}(\mathbf{i}). \end{eqnarray*} Also, since Lemma \ref{l1} yields $\mathbb{E}(\xi_{\mathbf{i}}^2)=O(1)$, it follows that \begin{eqnarray*} \mathbb{E}\bigg(\bigg(\widehat{r}(\mathbf{s}_{0})-\mathbb{E}\left(\widehat{r}(\mathbf{s}_{0})\right)\bigg)^{2}\bigg)\leq\dfrac{C_1}{\left(\widehat{\mathbf{n}}h^d\right)^{2}}\sum_{\mathbf{i}\in{\mathcal I}_{\mathbf{n}}}K\left(\dfrac{\dfrac{\mathbf{i}}{\mathbf{n+1}}-\mathbf{s}_{0}}{h}\right)^{2}k^{2}_{\mathbf{n}}(\mathbf{i})+F, \end{eqnarray*} where $C_1$ is some positive constant and $$F=\dfrac{1}{\left(\widehat{\mathbf{n}}h^d\right)^{2}}\sum_{\mathbf{i}\ne\mathbf{j}}K\left(\dfrac{\dfrac{\mathbf{i}}{\mathbf{n+1}}-\mathbf{s}_{0}}{h}\right)K\left(\dfrac{\dfrac{\mathbf{j}}{\mathbf{n+1}}-\mathbf{s}_{0}}{h}\right)k_{\mathbf{n}}(\mathbf{i})k_{\mathbf{n}}(\mathbf{j})Cov\left(\xi_{\mathbf{i}},\xi_{\mathbf{j}}\right).$$ Since the sequences $\left(u^{\mathbf{n}}_{11}\right)_{\mathbf{n}}$ and $\left(u^{\mathbf{n}}_{12}\right)_{\mathbf{n}}$ are bounded, $\left|\left(\dfrac{\dfrac{\mathbf{i}}{\mathbf{n+1}}-\mathbf{s}_{0}}{h}\right)\right|\le\mathbf{1}$, and $$Cov\left(\xi_{\mathbf{i}},\xi_{\mathbf{j}}\right)=Cov\left(\epsilon_{\mathbf{i}},\epsilon_{\mathbf{j}}\right)+Cov\left(\epsilon^*_{\mathbf{i}},\epsilon^*_{\mathbf{j}}\right),$$ where
$\epsilon^*_{\mathbf{i}}=\left\langle \phi-\widehat{\phi}_{\mathbf{n}}, X_{\mathbf{i}}\right\rangle_{H}+\left\langle \gamma-\widehat{\gamma}_{\mathbf{n}}, X^{\prime}_{\mathbf{i}}\right\rangle_{G},$ it follows that $F=F_1+F_2$, where \begin{eqnarray*} F_1&=&\dfrac{1}{\left(\widehat{\mathbf{n}}h^d\right)^{2}}\sum_{\mathbf{i}\ne\mathbf{j}}K\left(\dfrac{\dfrac{\mathbf{i}}{\mathbf{n+1}}-\mathbf{s}_{0}}{h}\right)K\left(\dfrac{\dfrac{\mathbf{j}}{\mathbf{n+1}}-\mathbf{s}_{0}}{h}\right)k_{\mathbf{n}}(\mathbf{i})k_{\mathbf{n}}(\mathbf{j})Cov\left(\epsilon_{\mathbf{i}},\epsilon_{\mathbf{j}}\right);\\ F_2&=&\dfrac{1}{\left(\widehat{\mathbf{n}}h^d\right)^{2}}\sum_{\mathbf{i}\ne\mathbf{j}}K\left(\dfrac{\dfrac{\mathbf{i}}{\mathbf{n+1}}-\mathbf{s}_{0}}{h}\right)K\left(\dfrac{\dfrac{\mathbf{j}}{\mathbf{n+1}}-\mathbf{s}_{0}}{h}\right)k_{\mathbf{n}}(\mathbf{i})k_{\mathbf{n}}(\mathbf{j})Cov\left(\epsilon_{\mathbf{i}}^*,\epsilon_{\mathbf{j}}^*\right). \end{eqnarray*} Under Assumption \ref{rega2}, we have \begin{eqnarray*} &&\widehat{\mathbf{n}}h^dF_1\\ &\le&\dfrac{c}{\widehat{\mathbf{n}}h^d}\sum_{\mathbf{i}\ne\mathbf{j}}\left|K\left(\dfrac{\dfrac{\mathbf{i}}{\mathbf{n+1}}-\mathbf{s}_{0}}{h}\right)-K\left(\dfrac{\dfrac{\mathbf{j}}{\mathbf{n+1}}-\mathbf{s}_{0}}{h}\right)\right|K\left(\dfrac{\dfrac{\mathbf{j}}{\mathbf{n+1}}-\mathbf{s}_{0}}{h}\right)\\ &&\ \ \ \ \times\ |Cov\left(\epsilon_{\mathbf{i}},\epsilon_{\mathbf{j}}\right)|+\dfrac{c}{\widehat{\mathbf{n}}h^d}\sum_{\mathbf{i}\ne\mathbf{j}}\left[K\left(\dfrac{\dfrac{\mathbf{j}}{\mathbf{n+1}}-\mathbf{s}_{0}}{h}\right)\right]^2|Cov\left(\epsilon_{\mathbf{i}},\epsilon_{\mathbf{j}}\right)|\\ &\le&\dfrac{c\sigma^2}{\widehat{\mathbf{n}}h^d}\sum_{\mathbf{j}\in{\mathcal I}_{\mathbf{n}}}K\left(\dfrac{\dfrac{\mathbf{j}}{\mathbf{n+1}}-\mathbf{s}_{0}}{h}\right)\sum_{\stackrel{\mathbf{i}\in{\mathcal 
I}_{\mathbf{n}}}{\|\mathbf{i-j}\|>0}}\dfrac{\left\|\dfrac{\mathbf{i}}{\mathbf{n+1}}-\dfrac{\mathbf{j}}{\mathbf{n+1}}\right\|}{h}\exp(-a\|\mathbf{i-j}\|)\\ &&+\dfrac{c\sigma^2}{\widehat{\mathbf{n}}h^d}\sum_{\mathbf{j}\in{\mathcal I}_{\mathbf{n}}}\left[K\left(\dfrac{\dfrac{\mathbf{j}}{\mathbf{n+1}}-\mathbf{s}_{0}}{h}\right)\right]^2\sum_{\stackrel{\mathbf{i}\in{\mathcal I}_{\mathbf{n}}}{\|\mathbf{i-j}\|>0}}\exp(-a\|\mathbf{i-j}\|)\\ &\leq&\dfrac{c\sigma^2}{\widehat{\mathbf{n}}h^d\min_{k=1,\cdots,d}(n_k)h}\sum_{\mathbf{j}\in{\mathcal I}_{\mathbf{n}}}K\left(\dfrac{\dfrac{\mathbf{j}}{\mathbf{n+1}}-\mathbf{s}_{0}}{h}\right)\sum^{+\infty}_{t=1}t^de^{-at}\\ &&+\dfrac{c\sigma^2}{\widehat{\mathbf{n}}h^d}\sum_{\mathbf{j}\in{\mathcal I}_{\mathbf{n}}}\left[K\left(\dfrac{\dfrac{\mathbf{j}}{\mathbf{n+1}}-\mathbf{s}_{0}}{h}\right)\right]^2\sum^{+\infty}_{t=1}t^{d-1}e^{-at}\\ &&\longrightarrow c_1\sigma^2\int K^2(\mathbf{u})d\mathbf{u}, \end{eqnarray*} where $c$ and $c_1$ are positive constants. On the other hand, taking $Q=\lfloor (\log\widehat{\mathbf{n}})^{1/d} \rfloor$ and applying Lemma \ref{lemme1} together with Lemma \ref{l1}, we have \begin{eqnarray*} &&\widehat{\mathbf{n}}h^dF_2\\ &\le&\dfrac{c}{\widehat{\mathbf{n}}h^d}\sum_{\mathbf{i}\ne\mathbf{j}}\left|K\left(\frac{\frac{\mathbf{i}}{\mathbf{n+1}}-\mathbf{s}_{0}}{h}\right)-K\left(\frac{\frac{\mathbf{j}}{\mathbf{n+1}}-\mathbf{s}_{0}}{h}\right)\right|K\left(\frac{\frac{\mathbf{j}}{\mathbf{n+1}}-\mathbf{s}_{0}}{h}\right)|Cov\left(\epsilon_{\mathbf{i}}^*,\epsilon_{\mathbf{j}}^*\right)|\\ &&+\dfrac{c}{\widehat{\mathbf{n}}h^d}\sum_{\mathbf{i}\ne\mathbf{j}}\left[K\left(\frac{\frac{\mathbf{j}}{\mathbf{n+1}}-\mathbf{s}_{0}}{h}\right)\right]^2|Cov\left(\epsilon_{\mathbf{i}}^*,\epsilon_{\mathbf{j}}^*\right)|\\ &\le&\dfrac{c_2}{\widehat{\mathbf{n}}h^d}\sum_{\mathbf{j}\in{\mathcal I}_{\mathbf{n}}}K\left(\frac{\frac{\mathbf{j}}{\mathbf{n+1}}-\mathbf{s}_{0}}{h}\right)\left(\sum_{\stackrel{\mathbf{i}\in{\mathcal 
I}_{\mathbf{n}}}{\|\mathbf{i-j}\|>Q}}\dfrac{\left\|\frac{\mathbf{i}}{\mathbf{n+1}}-\frac{\mathbf{j}}{\mathbf{n+1}}\right\|}{h}\left[\alpha_{1,\infty}(\|\mathbf{i-j}\|)\right]^{\frac{1}{2}}\right.\\ &&\left. \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +\ \frac{Q^{2d}}{\min_{k=1,\cdots,d}(n_k)h}\right)\\ &&+\dfrac{c_2}{\widehat{\mathbf{n}}h^d}\sum_{\mathbf{j}\in{\mathcal I}_{\mathbf{n}}}\left[K\left(\frac{\frac{\mathbf{j}}{\mathbf{n+1}}-\mathbf{s}_{0}}{h}\right)\right]^2\left(\sum_{\stackrel{\mathbf{i}\in{\mathcal I}_{\mathbf{n}}}{\|\mathbf{i-j}\|>Q}}\left[\alpha_{1,\infty}(\|\mathbf{i-j}\|)\right]^{\frac{1}{2}}+Q^d\right) \end{eqnarray*} and by replacing $Q$ by its value, we obtain \begin{eqnarray*} &&\widehat{\mathbf{n}}h^dF_2 \\ &\leq&\dfrac{c_2}{\widehat{\mathbf{n}}h^d\min_{k=1,\cdots,d}(n_k)h}\sum_{\mathbf{j}\in{\mathcal I}_{\mathbf{n}}}K\left(\frac{\frac{\mathbf{j}}{\mathbf{n+1}}-\mathbf{s}_{0}}{h}\right)\left(\sum^{+\infty}_{t=1}t^{-(\theta-2d)/2}+(\log \widehat{\mathbf{n}})^2\right)\\ &&+\dfrac{c_2}{\widehat{\mathbf{n}}h^d}\sum_{\mathbf{j}\in{\mathcal I}_{\mathbf{n}}}\left[K\left(\frac{\frac{\mathbf{j}}{\mathbf{n+1}}-\mathbf{s}_{0}}{h}\right)\right]^2\left(\sum^{+\infty}_{t=1}t^{-(\theta-2d+2)/2}+\log \widehat{\mathbf{n}}\right), \end{eqnarray*} where $c_2$ is some positive constant. Finally, we conclude that \begin{eqnarray*} \lim_{\mathbf{n}\to +\infty}\dfrac{\widehat{\mathbf{n}}h^d}{\log \widehat{\mathbf{n}}}\sup_{\mathbf{s}_0\in[0,1]^d}Var\left(\widehat{r}(\mathbf{s}_{0})\right)\leq c_2\int K^{2}(\mathbf{u})d\mathbf{u}<+\infty. \end{eqnarray*}
\section{Introduction} \label{sec:intro} Solid semiconductor sensors are used for imaging, tracking and calorimetry in different disciplines. Depending on the application, these detectors may have different levels of tolerance for damage caused by different radiation types and doses. When an ionizing particle traverses the elementary cell of the sensor (pixel, strip, pad, etc.), an electric current pulse is produced at the collection node of that cell (the electronic signal). This current is in effect produced from electrons or holes (charge carriers), which are created in pairs along the trajectory of the traversing ionizing particle. The collected charges are processed by the front-end electronics and a ``hit'' is formed if the collected charge is above a certain threshold. The number of charge carriers and the way these propagate to the collection node therefore depend on the specifics of the sensor material. The electronic signal in turn depends strongly on the material, but also on a number of other parameters, such as the sensor fabrication processes, the sensor geometry, the external voltage applied to the sensor, the damage the sensor may have suffered due to continuous exposure to radiation, etc. A key ingredient in characterizing and understanding the propagation of charge carriers in the cell is the electric field inside it. The field governs this propagation process and ultimately determines the size and quality of the electronic signal output of the cell. Hence, the simulation of these sensors relies strongly on knowledge of the electric field. An important task in the overall design process of a sensor is the implementation of the corresponding technology computer-aided-design (TCAD) simulation of the device. This can be done using the ``Sentaurus'' software from Synopsys~\cite{SYNOPSIS} or the ``Atlas'' software from Silvaco~\cite{SILVACO}.
One of the outputs of such a TCAD simulation is the fine-grained 3D electric field mesh inside the cell, which is later used to simulate the charge propagation. Hence, the field is an essential piece in the characterization of the device performance. Implementing the device's growth and fabrication procedures in TCAD is an extremely complicated task. Moreover, the information about these procedures is usually protected commercially. In some cases, the field may be one dimensional, e.g., $\vec{E} = (0,0,E_z)$ (with $z$ being the coordinate along the depth of the cell). Furthermore, this single component may be modeled to a good approximation as a simple function of $z$ only. In these cases, the absence of a detailed TCAD mesh description of the field is not a limiting factor for the simulation chain of the device. However, in other cases, the field may be highly non-linear and, generally, it may have non-trivial components also in the transverse ($x,y$) coordinates, i.e., $\vec{E} = (E_x,E_y,E_z)$. The three components can themselves depend non-trivially on all three coordinates, i.e., $E_i = f_i(x,y,z)$, where $i=x,y,z$ and $f_i$ is some 3D function that is not necessarily analytic. This is particularly important for new monolithic active pixel sensors (MAPS)~\cite{SNOEYS2013125}, which are used more and more extensively in recent years thanks to the development of sub-micron capabilities in different foundries worldwide. Unlike in hybrid sensors, the readout circuitry of MAPS is integrated in the same substrate as the sensing volume. Consequently, the material budget may be substantially reduced, the production costs can also be reduced and the production yield can be higher. MAPS are now becoming popular, e.g., for the next-generation high-energy physics experiments (for tracking, vertexing and calorimetry) and elsewhere. In some of these cases it may happen that users of a specific sensor do not have access to the TCAD mesh of the non-trivial field.
This poses a serious problem for these users, who effectively lack the first building block of the device simulation and therefore cannot properly simulate its response. To overcome this limitation, we show that by combining public beam-test data with very limited public TCAD-based knowledge, we are able to effectively reconstruct the 3D electric field function in the pixel cell of one important and widely used example, namely the ALPIDE monolithic silicon sensor~\cite{ALPIDE1,ALPIDE2,ALPIDE3,ALPIDE4,ALPIDE5}. The ALPIDE (ALice PIxel DEtector) sensor was developed by and for the ALICE~\cite{ALICE} experiment at the LHC together with TowerJazz~\cite{TJ}. The inner tracking system (ITS) of ALICE~\cite{ALICEITS} was successfully upgraded in 2021 with this new technology and it is now the largest pixel detector ever built. The successful ALPIDE technology is also being used in other experiments/facilities, like LUXE~\cite{LUXECDR}, sPHENIX~\cite{PHENIX:2015siv,Dean:2021rlo}, DESY's beam-test facility~\cite{DESYTELESCOPE}, the CSES-2 satellite mission~\cite{Iuppa:2021ozs} in space, Proton Computed Tomography (pCT)~\cite{pCT1,pCT2}, and more. However, despite its wide usage, the TCAD sampling points (referred to hereafter as the ``mesh'') are not available to the community, as they are under proprietary restriction. In this work, we discuss how the 3D effective field function (denoted EFF hereafter) in the ALPIDE pixel sensor is derived. We use the Allpix$^2$\xspace software~\cite{allpix,allpixmanual} to compare the performance in simulation between two cases, once starting from our 3D EFF and once starting from public results that use the actual TCAD mesh, which remains unknown to us. The two cases are denoted hereafter as Allpix$^2$+EFF\xspace and Allpix$^2$+TCAD\xspace, respectively. For the comparison, we rely on the Allpix$^2$+TCAD\xspace results shown in~\cite{DANNHEIM2020163784}.
The authors of~\cite{DANNHEIM2020163784} go further and compare their Allpix$^2$+TCAD\xspace simulation with data collected in a beam-test~\cite{TESTBEAM}. The results in~\cite{DANNHEIM2020163784} and~\cite{TESTBEAM} are obtained using a prototype of the ALPIDE sensor, called ``Investigator''. The pixel design of the Investigator sensor is close to that of the ALPIDE design, except for the pitch (the Investigator has square pixel pitches of different sizes). The performance in the three scenarios is shown to be very similar. This agreement gives confidence in our EFF as the cornerstone for further simulation campaigns where the ALPIDE sensor is used. We comment on the derivation process and provide the EFF as a set of ROOT~\cite{rene_brun_2019_3895860} TFormula-compatible strings. We provide the code that produces our EFF such that further fine-tuning may be done, and such that similar work can be started for other sensors. We also provide the Allpix$^2$\xspace configuration files used in our Allpix$^2$+EFF\xspace simulation. Our EFF generation code and Allpix$^2$\xspace configurations are available in~\cite{gitEField}. \section{The \texorpdfstring{Allpix$^2$\xspace}{Allpix2} setup} \label{sec:allpix} The Allpix$^2$\xspace software~\cite{allpix,allpixmanual} is a lightweight C++ framework that simulates the processes triggered inside different semiconductor devices when ionizing particles traverse them. In particular, it simulates the electron-hole pair creation, the drift or diffusion of these charge carriers in a given electric field, the charge collection by the electrodes and, finally, the digitization of the collected charge along with the electronic noise in the front-end electronics. The task of simulating the energy deposition by the ionizing particles (an input for the charge-carrier generation) is external to Allpix$^2$\xspace.
Usually this is done by \textsc{Geant4}\xspace~\cite{geant1, geant2, geant3}, which is interfaced directly to Allpix$^2$\xspace for convenience, but this can also be provided from other sources. The Allpix$^2$\xspace simulation flow is implemented using several modules, accessed sequentially using a configuration file. While for a full description the reader is referred to the Allpix$^2$\xspace user manual~\cite{allpixmanual}, we briefly summarize the most relevant modules related to the purpose of this work: \begin{itemize} \item\textbf{GeometryBuilderGeant4:} constructs the \textsc{Geant4}\xspace geometry given the description of the sensors and their arrangement (dimension, position and orientation). \item\textbf{DepositionGeant4:} implements the simulation of energy deposition by ionizing particles in the active volume of the detector. \item\textbf{ElectricFieldReader:} implements the electric field inside the sensor volume. The field can be applied using several methods. These include TCAD mesh input files, simple built-in functions (constant, linear, parabolic), and most importantly for this work, any user-defined analytic functions in the form of ROOT TFormula-compatible strings. \item\textbf{GenericPropagation:} simulates the propagation of electrons and holes in the sensor volume given the electric field. \item\textbf{DefaultDigitizer:} translates the collected charge into a digitized signal proportional to the input charge. The noise in the electronics is also simulated here. \end{itemize} These modules are called sequentially in the simulation chain. The output of the chain contains the fired pixels, i.e., pixels which have a collected charge passing some predefined threshold. It is important to stress that an induced charge generation in adjacent pixels (to the pixel where the ionizing particle passes through) is also simulated by Allpix$^2$\xspace. 
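For orientation, the module chain above can be expressed in an Allpix$^2$\xspace configuration file. The sketch below is illustrative only: the section and parameter names follow our reading of the Allpix$^2$\xspace manual~\cite{allpixmanual}, the values are placeholders, and the \texttt{field\_function} shown is a toy one-dimensional string, not the actual EFF of this work (which is provided in~\cite{gitEField}).

```ini
# Illustrative Allpix^2 configuration sketch (placeholder values only);
# the actual configuration files of this work are provided in the
# repository referenced in the text.
[Allpix]
number_of_events = 20000
detectors_file   = "detector.conf"   # sensor geometry and placement

[GeometryBuilderGeant4]

[DepositionGeant4]
particle_type = "pi+"
source_energy = 120GeV

[ElectricFieldReader]
model = "custom"
# User-defined field as a ROOT TFormula-compatible string. This toy,
# purely 1D linear fall-off is a placeholder, NOT the actual EFF
# (which spans thousands of characters per component).
field_function   = "[0]*(1 - z/[1])"
field_parameters = 4800V/cm 25um

[GenericPropagation]
integration_time = 22.5ns

[DefaultDigitizer]
electronics_noise  = 10e
threshold          = 120e
threshold_smearing = 5e
```

The threshold, noise and integration-time values here mirror the numbers quoted later in the text; everything else should be taken from the published configuration files rather than from this sketch.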
The list of fired pixels can then be used in a subsequent analysis, which depends on the specific application. This analysis may include, e.g., calibrating the charge to the energy-loss, counting of hits, clustering of the pixels, tracking, etc.\\ The Allpix$^2$\xspace simulation configuration for this work is kept as close as possible to that of~\cite{DANNHEIM2020163784}. This is necessary for the validation of the EFF by comparing the Allpix$^2$+EFF\xspace and Allpix$^2$+TCAD\xspace performances. Therefore, the total thickness of the sensor in our Allpix$^2$+EFF\xspace runs is taken to be 100~\ensuremath{\mu}m\xspace, with a depletion depth of 25~\ensuremath{\mu}m\xspace, like the sensor used in~\cite{DANNHEIM2020163784}. The pixel pitch of the Investigator sensor used is $28\times 28~\mu{\rm m}^{2}$ in $x\times y$. While we configure the code that generates the 3D EFF with the same parameters as for the Investigator sensor, we make sure that it is easy to reconfigure these parameters for the production version of the ALPIDE. For example, the pixel pitch is defined as a global parameter in the code and hence it can simply be re-set in one place such that the EFF scales naturally. A simulated $\pi^+$ beam with an energy of 120~GeV is set to hit the detector. This is similar to the beam conditions of the beam-test setup of~\cite{TESTBEAM}. The beam axis is defined as the positive $z$ direction, normal to the sensor plane. After the $\pi^+$ beam particles deposit energy and generate the electron-hole pairs along their trajectories in the sensor, the charge carriers are propagated by Allpix$^2$\xspace towards the electrodes along the electric field lines. In the upper part of the sensor (within the depletion depth of 25~\ensuremath{\mu}m\xspace), the charge carriers move by both drift and diffusion, whereas in the region beyond the depletion depth, only diffusion occurs due to the lack of electric field there.
The charges reaching the electrodes within a time window of 22.5~ns (as in~\cite{DANNHEIM2020163784}) are collected. Finally, the collected charge is converted into a digital signal. Equivalent electronic noise is added to the signal by randomly drawing from a Gaussian distribution with a width of 10~e and a mean of 0~e. Different charge collection thresholds are checked in the range between 40~e and 700~e. A threshold smearing option is also added to the simulation by sampling from a Gaussian distribution with a width of 5~e and a mean of 0~e. While we mention some key simulation parameters explicitly above, there are in fact more parameters required for the process. All parameters are kept as in~\cite{DANNHEIM2020163784}. The bias voltage applied to the ALPIDE p-well in~\cite{DANNHEIM2020163784} and~\cite{TESTBEAM} is $-6$~V. A voltage of $+0.8$~V is applied to the collection electrode. When using the TCAD mesh, the voltage settings are dictated by the mesh itself. In the user-defined function case, however, the settings are determined entirely by the function's normalization. This normalization can be constrained, very roughly, by requiring $\sim -6$~V when integrating the EFF along the depletion region, but it strongly depends on the shape as well. \section{The process of deriving the effective field function} \label{sec:reveng} As mentioned before, the detailed TCAD mesh of the electric field inside the ALPIDE sensor is not publicly available. However, its magnitude, \ensuremath{\sqrt{\Sigma E_i^2}}\xspace (with $i=x,y,z$), in the $x$--$z$ plane for $y=0$ is shown in figure~4 of~\cite{DANNHEIM2020163784}. The field lines in this plane are also overlaid in this figure. The magnitude of the field at the faces of the pixel in three dimensions is shown in figure~3 of~\cite{DANNHEIM2020163784}. As discussed above, due to various restrictions, these two figures are given without axis scale labels.
The meaning of the colors in these figures can be loosely interpreted from the field of a similar ``Investigator'' chip shown in figures~8.4 and~8.5 of an older work from 2018~\cite{magdalena}. This work used the CLIC Tracker Detector (CLICTD), a monolithic chip~\cite{Kremastiotis:2020msg} which is also produced in a 180~nm imaging CMOS\footnote{Complementary metal-oxide-semiconductor.} process on a high-resistivity epitaxial layer, like the ALPIDE. The extensive TCAD studies in~\cite{magdalena} show a field behavior similar to that of the ALPIDE, where, unlike~\cite{DANNHEIM2020163784}, more details are provided, in particular the scale of the color map. This information was also used to define our EFF. As can be seen in the figures mentioned above, the shape of the field's magnitude is highly non-linear and the EFF cannot be easily deciphered from the available information. At this step, we therefore only have a very rough idea of what the magnitude of the 3D EFF should look like in a few slices of the pixel. We also know that, besides mimicking the shape visually, its components' normalizations should result in a voltage of $\sim -6$~V when integrated along $z$ in the upper 25~\ensuremath{\mu}m\xspace of the sensor. This information is clearly not enough to derive the field, not even effectively. However, we do have a few more indirect pieces of invaluable information from~\cite{DANNHEIM2020163784} detailing the performance of the sensor with the $\pi^+$ particle beam discussed above. This includes the charge distributions (figure~9), the cluster size distributions (figures~10 and 11) and the position residual distributions in $x$ (figure~15), all given for a specific threshold. The behavior for different thresholds can be seen in the cluster size graphs (figure~13), the spatial resolution graphs in $x$ and $y$ (figure~16) and the efficiency graphs (figure~18).
With this, we can iteratively plug in one EFF ansatz at a time such that \begin{enumerate} \item its 3D magnitude, \ensuremath{\sqrt{\Sigma E_i^2}}\xspace (with $E_i=f_i(x,y,z)$ and $i=x,y,z$), visually resembles the magnitude of the TCAD field shape in the slices from figures~3-4 of~\cite{DANNHEIM2020163784}, including the field lines, \item it results in an integral $\ensuremath{-\int{\vec{E}\cdot{\rm d}\vec{\ell}}}\xspace \simeq -6$~V, where $\vec{\ell}$ runs along the negative $z$-axis direction at the top 25~\ensuremath{\mu}m\xspace of the sensor in its center ($x=y=0$), and \item it gives a good agreement with the performance figures~9-18 of~\cite{DANNHEIM2020163784} after simulating the same scenarios with our Allpix$^2$+EFF\xspace setup. \end{enumerate} For step~1 above, we add the main features of the field one by one and plot the field magnitude in 3D and in a few 2D slices, similar to the ones available from TCAD. This is done initially with simple functional shapes (spheres, arcs, stripes, Gaussian and exponential shapes, etc.). Upon adding a new feature, we verify that it merges properly with the existing features in terms of the relative normalization and the smoothness in the transition regions. In some cases, adding a new prominent feature may lead to irregularities in the transition regions. In these cases, one can add new compensating features, limited to the relevant regions, to remove these irregularities. Regardless of the field magnitude, if the field lines are also available from TCAD in some 2D slices, they must always match in direction. This is adjusted by changing both the feature's relative normalization and its sign in all three components. For step~3 above, we initially simulate 1000 primary $\pi^+$ beam particles to see if the cluster charge and size distributions roughly agree with those in figures~9-10 of~\cite{DANNHEIM2020163784} for a nominal threshold of 120~e.
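Step~2 above amounts to a simple one-dimensional numerical integral over the depleted depth. The following Python sketch illustrates the check with a toy linear stand-in field, a placeholder we introduce purely for illustration (the actual EFF is a far more complex 3D function):

```python
# Illustrative check of criterion 2: integrate a toy stand-in for the
# z-component of the EFF over the depleted depth and compare to ~ -6 V.
# The linear field below is a placeholder, NOT the actual ALPIDE EFF.

DEPLETION_UM = 25.0   # depleted depth [micron]
E0 = 4800.0           # toy peak field [V/cm], chosen so the integral gives ~ -6 V

def toy_Ez(z_um):
    """Linearly decreasing field over the depleted depth, zero below it."""
    if z_um > DEPLETION_UM:
        return 0.0
    return E0 * (1.0 - z_um / DEPLETION_UM)

def bias_voltage(field, depth_um=DEPLETION_UM, n=100000):
    """Trapezoidal estimate of V = -integral(E dz) along the sensor depth."""
    dz = depth_um / n
    total = 0.0
    for i in range(n):
        z0, z1 = i * dz, (i + 1) * dz
        total += 0.5 * (field(z0) + field(z1)) * dz
    return -total * 1e-4  # micron -> cm conversion, so the result is in volts

print(f"integrated bias voltage: {bias_voltage(toy_Ez):.2f} V")  # -6.00 V
```

In the actual derivation this check is performed on the full EFF ansatz at the pixel center, and the normalization of the ansatz is tuned until the integral lands near the applied bias.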
If these distributions agree, we continue to check a wider range of thresholds (40-700~e) with larger statistics (20,000 primary $\pi^+$ beam particles) and check the agreement with figures~9-18 of~\cite{DANNHEIM2020163784}. For the comparison, we digitize the figures from~\cite{DANNHEIM2020163784} using the `WebPlotDigitizer' software~\cite{DIGITIZEPLOTS} to get the distributions/graphs and their uncertainties wherever applicable. A good agreement in that respect is therefore judged according to the ratio between our Allpix$^2$+EFF\xspace and the Allpix$^2$+TCAD\xspace simulations, taking into account the uncertainties. The iterative process of adding field features to the EFF ansatz and testing the performance (per ansatz) may be stopped when all three criteria above are satisfied to a large extent, noting that the first two are only qualitative tests. Specifically for the ALPIDE sensor, more than a hundred such iterations were needed in order to arrive at a satisfactory EFF. The final EFF string, generated automatically by our code~\cite{gitEField} using a set of parametric logical expressions, has $\sim 2600$ characters for the $z$ component and $\sim 5700$ characters for each of the $x$ and $y$ components. This gives a rough idea of how complex the EFF is, particularly in the transverse directions. The procedure discussed above can be regarded as generic as long as there is at least (i) some very basic knowledge of the field's shape and (ii) reference data for the comparison of the sensor performance with simulation. \section{The effective field function of the ALPIDE sensor} \label{sec:elfieldalpide} The results shown in this section are given after the full rigorous procedure described in section~\ref{sec:reveng}. The resulting EFF magnitude of the ALPIDE sensor is shown in figures~\ref{fig:effmag1} and~\ref{fig:effmag2}. These plots should be compared with figures~4 and~3 of~\cite{DANNHEIM2020163784}.
It can be seen that while the EFF magnitude shape is not identical to the TCAD mesh magnitude shape, it captures all important features. The integral \ensuremath{-\int{\vec{E}\cdot{\rm d}\vec{\ell}}}\xspace with $\vec{\ell}$ running along the negative $z$-axis direction at the top 25~\ensuremath{\mu}m\xspace of the sensor in its center ($x=y=0$) results in a bias voltage of -6.08~V, as expected. The analytical expression (in the form of a ROOT TFormula-compatible string) is too complex and long to be included in this manuscript and hence it is saved in~\cite{gitEField}. In addition, the code producing the EFF as well as all the plots below can be found in~\cite{gitEField}. \begin{figure}[!ht] \centering \begin{overpic}[width=0.465\textwidth]{fig/pixel_3D_v18.1.1.3.1.png}\end{overpic} \begin{overpic}[width=0.525\textwidth]{fig/EfieldALPIDE_magnitudes_xzy0_slice_with_arrows.pdf}\end{overpic} \caption{Left: the 3D EFF magnitude, \ensuremath{\sqrt{\Sigma E_i^2}}\xspace (with $E_i=f_i(x,y,z)$ and $i=x,y,z$), at the ALPIDE pixel sensor faces (sides and top). Right: the EFF magnitude in the $z$ vs $x$ plane sliced at $y=0$ overlaid with the field lines shown as white arrows (the arrows are positioned at the bin centers). All shapes are given for the upper 25~\ensuremath{\mu}m\xspace of the sensor, where the EFF is non-zero. These plots should be compared with figures~4 and~3 of~\cite{DANNHEIM2020163784}, respectively.} \label{fig:effmag1} \end{figure} \begin{figure}[!ht] \centering \begin{overpic}[width=0.99\textwidth]{fig/EfieldALPIDE_magnitudes_pixel_faces.pdf}\end{overpic} \caption{The 3D EFF magnitude, \ensuremath{\sqrt{\Sigma E_i^2}}\xspace (with $E_i=f_i(x,y,z)$ and $i=x,y,z$), at the ALPIDE pixel faces overlaid with the field lines (clockwise from top left: the bottom face of the pixel, its top face, its $y$ face and its $x$ face). 
All shapes are given for the upper 25~\ensuremath{\mu}m\xspace of the pixel, where the EFF is non-zero.} \label{fig:effmag2} \end{figure} \section{Sensor performance with the effective field function} \label{sec:performance} The Allpix$^2$\xspace results shown in this section use the EFF discussed in section~\ref{sec:elfieldalpide}. The Allpix$^2$+EFF\xspace results are compared with those of the Allpix$^2$+TCAD\xspace simulation and the beam-test data from~\cite{DANNHEIM2020163784}. The comparison is done at the level of a pixel cluster, in terms of the cluster's charge, size and position. The clustering of fired pixels (pixels with charge above threshold) follows a simple ``packman'' algorithm, where all adjacent fired pixels (in both the $x$ and $y$ directions) are added together to form a cluster. In this simple algorithm, no pixel sharing is allowed between two clusters. The same algorithm is also used in~\cite{DANNHEIM2020163784}. The cluster size is simply the number of pixels associated with the cluster. Likewise, the cluster charge is the sum of the charges of all pixels associated with it. The cluster position in $x-y$ is obtained as a charge-weighted mean of the positions of all pixels of the cluster. This can be compared with the truth position of the incoming $\pi^+$ beam particles to obtain the residuals. Finally, the detection efficiency can be defined as the fraction of truth-matched clusters (i.e., clusters matched with true incident $\pi^+$ beam particles) out of all clusters formed. A successful matching is achieved when the maximum distance between the cluster position and the true incident particle position is smaller than three times the pixel pitch in each dimension. For the comparison, we use a wide range of thresholds, between 40~e and 700~e, where the nominal threshold is 120~e. The number of primary $\pi^+$ beam particles used for the Allpix$^2$+EFF\xspace results below is 20,000 for the nominal as well as the other thresholds.
This gives a statistical error low enough to clearly see the main trends. Whenever the statistical uncertainties are available in the Allpix$^2$+TCAD\xspace results of~\cite{DANNHEIM2020163784}, we use those in the ratio plots shown in the bottom panels. Otherwise, the errors in the ratio panels represent the statistical uncertainty of our Allpix$^2$+EFF\xspace simulation alone. Furthermore, the systematic uncertainty bands for the Allpix$^2$+TCAD\xspace plots shown in~\cite{DANNHEIM2020163784} are not overlaid here because it was not possible to decipher these from the source. However, we do comment qualitatively on the compatibility between the Allpix$^2$+TCAD\xspace and Allpix$^2$+EFF\xspace results with respect to the systematic uncertainties from~\cite{DANNHEIM2020163784}. These uncertainties should be identical between the two simulations, leading to an overall uncertainty higher by a factor of $\sqrt{2}$ for the comparison. This statement holds as long as one excludes systematic variations of the field function itself, a study which we leave for future work. The cluster charge and size distributions for the nominal threshold are shown in figure~\ref{fig:charge_size}. The most probable value (MPV) resulting from a fit of the Allpix$^2$+EFF\xspace charge distribution to a convolution of Landau and Gaussian functions is $1.48$, whereas the MPV result for the Allpix$^2$+TCAD\xspace simulation is $1.42$. A good agreement is also seen in the two ratio plots. \begin{figure}[!ht] \centering \begin{overpic}[width=0.49\textwidth]{fig/ClusterChargeComparisonRatio.pdf}\end{overpic} \begin{overpic}[width=0.49\textwidth]{fig/ClusterSizeComparisonRatio.pdf}\end{overpic} \caption{The comparison of the cluster charge (left) and cluster size (right) distributions between our Allpix$^2$+EFF\xspace simulation and the Allpix$^2$+TCAD\xspace simulation (and the beam-test data) from~\cite{DANNHEIM2020163784}.
The bottom panels show the ratio of the Allpix$^2$+TCAD\xspace (red) or data (grey) results to the Allpix$^2$+EFF\xspace results. The cluster charge distribution for the data either does not have statistical uncertainties at the source or the errors are too small for the digitizer software~\cite{DIGITIZEPLOTS} to decipher. The systematic uncertainty band for the Allpix$^2$+TCAD\xspace charge plot shown in~\cite{DANNHEIM2020163784} is not overlaid here because it was not possible to decipher it from the original plot. However, as one can see in~\cite{DANNHEIM2020163784}, the uncertainty on the Allpix$^2$+TCAD\xspace histogram is as high as 1-10\% in the region where the statistics are high enough. The Allpix$^2$+EFF\xspace uncertainties should be identical to that (giving an overall uncertainty higher by a factor of $\sqrt{2}$).} \label{fig:charge_size} \end{figure} The distributions of the cluster size in $x$ and $y$, and the residuals in $x$ are checked as well for the nominal threshold in figures~\ref{fig:size_xy} and~\ref{fig:residuals}. A similar conclusion about the good agreement can be drawn in these cases as well. Particularly, using figure~\ref{fig:residuals} and following the exact procedure given in~\cite{DANNHEIM2020163784} to calculate the resolution, we find that it is $\sigma_x=3.34$~\ensuremath{\mu}m\xspace for Allpix$^2$+EFF\xspace at a threshold of 120~e, compared with $3.60\pm 0.01({\rm stat})^{+0.24}_{-0.13}({\rm syst})$~\ensuremath{\mu}m\xspace for Allpix$^2$+TCAD\xspace. As mentioned earlier, since the procedures are identical, the overall uncertainty for our comparison will be a factor of $\sqrt{2}$ higher. 
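The simple ``packman'' clustering and the charge-weighted cluster position defined at the beginning of this section can be illustrated with a short stand-alone Python sketch. This is our own illustrative code, using the Investigator $28$~\ensuremath{\mu}m\xspace pitch as a default, not the actual analysis implementation:

```python
from collections import deque

def packman_clusters(fired):
    """Group fired pixels into clusters of side-adjacent (x and y) pixels.

    `fired` maps (col, row) -> collected charge [e]; each pixel joins at
    most one cluster, i.e., no pixel sharing between clusters.
    """
    seen, clusters = set(), []
    for seed in fired:
        if seed in seen:
            continue
        queue, cluster = deque([seed]), []
        seen.add(seed)
        while queue:
            cx, cy = queue.popleft()
            cluster.append((cx, cy))
            # grow the cluster through side-adjacent fired pixels
            for nb in ((cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)):
                if nb in fired and nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
        clusters.append(cluster)
    return clusters

def cluster_position(cluster, fired, pitch_um=28.0):
    """Charge-weighted mean position of a cluster (microns) and its charge."""
    q = sum(fired[p] for p in cluster)
    x = sum(fired[(c, r)] * c * pitch_um for c, r in cluster) / q
    y = sum(fired[(c, r)] * r * pitch_um for c, r in cluster) / q
    return x, y, q

# toy event: a 2-pixel cluster and a separate single-pixel cluster
fired = {(0, 0): 300.0, (1, 0): 100.0, (5, 5): 250.0}
clusters = packman_clusters(fired)
```

The cluster size and cluster charge used in the comparisons below are simply `len(cluster)` and the returned `q`, respectively.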
\begin{figure}[!ht] \centering \begin{overpic}[width=0.49\textwidth]{fig/ClusterSizeXComparisonRatio.pdf}\end{overpic} \begin{overpic}[width=0.49\textwidth]{fig/ClusterSizeYComparisonRatio.pdf}\end{overpic} \caption{The comparison of the cluster size in $x$ (left) and in $y$ (right) distributions between our Allpix$^2$+EFF\xspace simulation and the Allpix$^2$+TCAD\xspace simulation (and the beam-test data) from~\cite{DANNHEIM2020163784}. The bottom panels show the ratio of the Allpix$^2$+TCAD\xspace (red) or data (grey) results to the Allpix$^2$+EFF\xspace results. The data histograms either do not have statistical uncertainties at the source or the errors are too small for the digitizer software~\cite{DIGITIZEPLOTS}. The source also does not provide systematic uncertainties for these plots.} \label{fig:size_xy} \end{figure} \begin{figure}[!ht] \centering \begin{overpic}[width=0.9\textwidth]{fig/ResidualXComparisonRatio.pdf}\end{overpic} \caption{The comparison of the cluster position residuals distribution in $x$ between our Allpix$^2$+EFF\xspace simulation and the Allpix$^2$+TCAD\xspace simulation (and the beam-test data) from~\cite{DANNHEIM2020163784}. The bottom panel shows the ratio of the Allpix$^2$+TCAD\xspace (red) or data (grey) results to the Allpix$^2$+EFF\xspace result. The three resolutions reported are calculated in the same way as discussed in~\cite{DANNHEIM2020163784}. The histograms from~\cite{DANNHEIM2020163784} either do not have statistical uncertainties at the source or the errors are too small for the digitizer software~\cite{DIGITIZEPLOTS}. The systematic uncertainty band for the Allpix$^2$+TCAD\xspace shown in~\cite{DANNHEIM2020163784} is not overlaid here because it was not possible to decipher it from the original plot.
} \label{fig:residuals} \end{figure} Finally, we also see a good agreement when scanning the thresholds in the range mentioned above and plotting the mean cluster size, the detection efficiency and the spatial resolution in $x$ and $y$ vs the threshold. These results are shown in figures~\ref{fig:comparisonThreshold} and~\ref{fig:comparisonResThreshold}. We take the systematic uncertainties of the Allpix$^2$+TCAD\xspace results from~\cite{DANNHEIM2020163784} qualitatively and note again that these should be identical to those of our Allpix$^2$+EFF\xspace simulation. The largest discrepancy between the two results is seen in the spatial resolution at low thresholds, $\lesssim 150$~e. This discrepancy is nevertheless within the uncertainty for the resolution in $y$ (5\%-15\% in 40-120~e), while for $x$, the two results are compatible within the uncertainties ($\sim 7\%$ in 40-200~e) only above $\simeq 80$~e. This threshold lies well below the nominal projected working range of the sensors. The second-largest discrepancy is evident in the efficiency graph at the highest thresholds, $\gtrsim 600$~e. This discrepancy is well within the systematic uncertainty ($\sim 1-2\%$ above 600~e) and, here too, it appears in a range well above the nominal projected working range of the sensors. The performance plots given here show a comprehensive set of tests, each over a wide range of values, indicating good compatibility between our Allpix$^2$+EFF\xspace results and the Allpix$^2$+TCAD\xspace results, particularly in the relevant range of operation. These two sets of simulation results also show the same level of compatibility with the data.
\begin{figure}[!ht] \centering \begin{overpic}[width=0.49\textwidth]{fig/ClusterMeanRatio.pdf}\end{overpic} \begin{overpic}[width=0.49\textwidth]{fig/EfficiencyRatio.pdf}\end{overpic} \caption{Left: the comparison of the mean cluster size behavior for different thresholds between our Allpix$^2$+EFF\xspace simulation and the Allpix$^2$+TCAD\xspace simulation (and the beam-test data) from~\cite{DANNHEIM2020163784}. Right: the same comparison for the detection efficiency. The bottom panels show the ratio of the Allpix$^2$+TCAD\xspace (red) or data (black) results to the Allpix$^2$+EFF\xspace results. The graphs from~\cite{DANNHEIM2020163784} either do not have statistical uncertainties at the source or the errors are too small for the digitizer software~\cite{DIGITIZEPLOTS}. The systematic uncertainty band for the Allpix$^2$+TCAD\xspace shown in~\cite{DANNHEIM2020163784} is not overlaid here because it was not possible to decipher it from the original plots. However, as one can see in~\cite{DANNHEIM2020163784}, the uncertainty on the Allpix$^2$+TCAD\xspace is as high as 5-15\% for the mean cluster size plot and 1-2\% for the efficiency plot. The Allpix$^2$+EFF\xspace uncertainties should be identical to those (giving an overall uncertainty higher by a factor of $\sqrt{2}$). } \label{fig:comparisonThreshold} \end{figure} \begin{figure}[!ht] \centering \begin{overpic}[width=0.49\textwidth]{fig/ResidualXRatio.pdf}\end{overpic} \begin{overpic}[width=0.49\textwidth]{fig/ResidualYRatio.pdf}\end{overpic} \caption{ Left: the comparison of the resolution in $x$ behavior for different thresholds between our Allpix$^2$+EFF\xspace simulation and the Allpix$^2$+TCAD\xspace simulation (and the beam-test data) from~\cite{DANNHEIM2020163784}. Right: the same comparison for the resolution in $y$. The bottom panels show the ratio of the Allpix$^2$+TCAD\xspace (red) or data (black) results to the Allpix$^2$+EFF\xspace results. 
The graphs from~\cite{DANNHEIM2020163784} either do not have statistical uncertainties at the source or the errors are too small for the digitizer software~\cite{DIGITIZEPLOTS}. The systematic uncertainty band for the Allpix$^2$+TCAD\xspace shown in~\cite{DANNHEIM2020163784} is not overlaid here because it was not possible to decipher it from the original plots. However, as one can see particularly at the low end of the threshold range, $\lesssim 150$~e, the uncertainty on the Allpix$^2$+TCAD\xspace is as high as 5-15\% in $y$. In $x$, at the same threshold range, the uncertainty is approximately constant at $\sim 7\%$. The Allpix$^2$+EFF\xspace uncertainty should be identical to that (giving an overall uncertainty higher by a factor of $\sqrt{2}$). Hence, while the results in $y$ are fully compatible throughout the full threshold range, the results in $x$ are compatible only above 80~e. } \label{fig:comparisonResThreshold} \end{figure} \section{Conclusions and outlook} \label{sec:concl} We show that a carefully constructed EFF may replace a highly detailed TCAD mesh in the simulation chain of semiconductor sensors. This is shown for the highly non-linear and non-trivial field of the ALPIDE sensor, where the TCAD mesh is not publicly available. Our EFF shape resembles the one from~\cite{DANNHEIM2020163784} and, moreover, it reproduces the same performance to a very good extent for several quantities in a wide range of values. The procedures discussed can be regarded as general as long as there is at least (i) basic knowledge of the field's shape and (ii) reference data for the comparison of the sensor performance with simulation, e.g. from beam-tests, radioactive sources, cosmic muons, etc. We point out that, while our EFF (available in~\cite{gitEField}) can already be used, it may also be further optimized to reach an even better agreement with the TCAD-based results of~\cite{DANNHEIM2020163784}.
In that sense, the decision of where to stop the optimization process is purely driven by pragmatic considerations, depending on the specific application. Our work allows the many ALPIDE users worldwide, who do not have access to the TCAD mesh, to use our EFF as an important input for their simulation. In fact, the tracking detector simulation of the LUXE experiment~\cite{LUXECDR}, which is to be built from the production version of the ALPIDE sensors, is already using our EFF. The production version of the ALPIDE has different dimensions compared to the ``investigator'' chip used in~\cite{DANNHEIM2020163784}. Namely, the pixel size is $29.24 \times 26.88~\mu{\rm m}^{2}$ in $x\times y$, the sensor is 50~\ensuremath{\mu}m\xspace deep in $z$, and the depletion depth is 25~\ensuremath{\mu}m\xspace. The EFF for this version may therefore be scaled naturally by changing the dimensions of the pixel: effectively only the pitch changes slightly, and the change in thickness has no impact on the field since the field is only defined within the depletion depth. The different thickness does, however, impact the performance that is checked with the Allpix$^2$\xspace software, but this change is unrelated to the EFF. Finally, we note that although we use the Allpix$^2$\xspace software extensively for the derivation and validation of our EFF, the resulting function can also be interfaced with other software, e.g.~\cite{MIKOPAPER}, and is not linked specifically to Allpix$^2$\xspace. \clearpage \newpage \section*{Acknowledgments} We wish to thank the colleagues from the ALICE ITS project for useful discussions and help related to the ALPIDE sensors. We would like to especially thank Luciano Musa, Gianluca Aglieri Rinella, Antonello Di Mauro, Magnus Mager, Corrado Gargiulo, Felix Reidt, Ivan Ravasenga, and Ruben Shahoyan.
We also wish to thank Simon Spannagel and Paul Sch\"utze for the detailed and dedicated help and for the useful discussions about Allpix$^2$\xspace and especially the work in~\cite{DANNHEIM2020163784}.\\ \noindent This work is supported by a research grant from the Estate of Dr. Moshe Gl\"{u}ck, the Minerva foundation with funding from the Federal German Ministry for Education and Research, the ISRAEL SCIENCE FOUNDATION (grant No. 708/20), the Anna and Maurice Boukstein Career Development Chair, the Yeda-Sela SABRA (Supporting Advanced Basic Research) fund.
\section{Introduction} \label{sec:intro} The quantum mechanics employed by physicists every day to make predictions concerning the outcomes of experimental measurements is not, as it is conventionally understood, sufficient to make predictions in physical situations in which a clearly defined concept of ``measurement'' is absent. Prime examples of such physical situations may be found in the early universe, or indeed, any time or place in the universe where measuring (recording) apparatus are not present. How, then, may physical theories offer clear and unambiguous predictions concerning the behavior of the physical universe in the moments surrounding the epoch normally considered to be the big bang? One (possibly partial) answer to this question lies in the consistent (or decoherent) histories formulation of quantum theory pioneered and subsequently developed by Griffiths \cite{griffiths08}, Omnes \cite{omnes94}, Gell-Mann and Hartle \cite{GMH90a,GMH90b,hartle91a,lesH}, Halliwell \cite{halliwell99,hallithor01,hallithor02,halliwall06,halliwell09} and others.\cite{hartlemarolf97,CH04,as05} In this framework quantum theory is supplemented by a \emph{consistency condition} which must be satisfied in any specific physical situation in order for quantum theory to offer definite predictions concerning that situation. This condition amounts to the requirement that there is essentially no interference (overlap) between the branch wave functions corresponding to each of the alternative quantum histories describing that scenario. Only in that case can quantum theory consistently assign definite probabilities to each of those possible alternative histories. 
This condition of ``decoherence''% \footnote{This usage of the term ``decoherence'' is related to, but conceptually distinct from, the broadly understood phenomenon of \emph{environmental decoherence}.\cite{giulini,schlosshauer07,halliwell89,zurek09a,RZZ16a} Physical mechanisms which engender environmental decoherence may thereby lead to the decoherence of the corresponding histories. However, other physical processes such as occur in typical laboratory ``measurement situations'' may also lead to decoherence of histories in the sense the term is applied here. } % or ``consistency'' among the branch wave functions is satisfied in any physical scenario that would normally be thought of as a classical ``measurement situation'',% \footnote{This is epitomized by the two-slit experiment. When the slit through which the particle passes is measured, in the sense that sufficient information is recorded to determine it, the interference between the branch wave functions corresponding to passage through either the upper or lower slit separately is destroyed, and physically meaningful probabilities can be assigned to which slit the particle traversed. Otherwise, the alternative wave functions interfere, and quantum mechanics simply has nothing to say -- i.e.\ cannot assign a probability in a logically consistent manner -- about through which slit the particle passed. } % and hence the framework of consistent histories encompasses the ordinary quantum mechanics of measurements -- sans the additional hypothesis of ``collapse of the wave function'' upon measurement. However, it also extends it in an objective, observer-independent manner to a wide range of physical situations in which observers and measuring apparatus are not present, thus giving quantum theory a voice even when and where physicists themselves do not have one. The central mathematical object in the consistent histories framework is the \emph{decoherence functional}. 
It is a natural generalization of the notion of the quantum state of a system as it arises in the algebraic formulation of quantum theory, and is a sesquilinear functional of the possible quantum histories of the system. The decoherence functional measures the quantum interference between the possible histories, and, when that interference vanishes among all the possible histories -- i.e.\ the family of histories \emph{decoheres} or ``is consistent'' -- the probabilities of each of them. In this way the decoherence functional -- the quantum state itself -- provides definite quantum predictions for the behavior of a system even in the absence of observers, recording apparatus, or ``measurements''.% \footnote{What the consistent histories framework does \emph{not} dictate is which family of quantum histories one interrogates. Different choices of family may provide very different pictures of ``what happens''. This, indeed, is a manifestation of contextuality in the quantum mechanics of history -- quantum mechanics is irreducibly a contextual theory, and thereby paints a contextual picture of reality. } % In this contribution we will summarize recent work on the application of these ideas to loop quantum cosmology, developed in collaboration with Parampreet Singh,\cite{CS10a,CS10b,CS10c,CS12a,CS13a,CS16c} and based on earlier work of J.B.\ Hartle and others.\cite{hartle91a,lesH,halliwell99,hallithor01,hallithor02,halliwall06,halliwell09,hartlemarolf97,CH04,as05} All of the discussion will be formulated in the context of a flat, homogeneous and isotropic Friedmann-Lema\^{i}tre-Robertson-Walker (FLRW) cosmological model sourced by a massless, minimally coupled scalar field. In section \ref{sec:gqt} we describe the consistent histories formalism in general terms. 
We then discuss in section \ref{sec:chqc} the implementation of this formalism in both a conventional so-called ``Wheeler-DeWitt'' quantization of this cosmology, as well as a distinct \emph{loop} quantization of the same model. In section \ref{sec:app} we illustrate the application of this formalism to show how it may be used to make predictions concerning quantum histories of various physical quantities. As an important illustration, we show that in the Wheeler-DeWitt quantization, all quantum states are invariably singular, as they are in the classical theory. By contrast, in the loop quantization all quantum states remain non-singular, and indeed ``bounce'' from large volume to large volume, approaching well-defined states of the corresponding Wheeler-DeWitt quantization. In this way we illustrate how consistent histories formulations of quantum cosmological theories may be defined and then deployed to make consistent quantum predictions in these theories in the absence of the observers or measurements typically viewed as essential to prediction in quantum mechanics. \section{Generalized Consistent Histories Quantum Theory} \label{sec:gqt} A ``generalized quantum mechanics'', as originally defined by Hartle\cite{hartle91a,lesH}, consists in the specification of four ingredients: (i) The \emph{\textbf{fine-grained histories}} $h$ of the system, the most refined descriptions of the possible alternative physical histories of the system it is possible to give. In Lagrangian mechanics, for example, for a set of generalized coordinates $\{q_i\}$, the fine-grained histories are all possible paths $\{q_i(t)\}$. The fine-grained histories may be collected into exhaustive (complete) sets of mutually exclusive alternative histories which we will sometimes denote by ${\mathcal S} =\{h\}$. 
Depending upon the theory, there may be many distinct (possibly incompatible) such exclusive, exhaustive sets of histories, as happens for example in ordinary Hilbert space quantum mechanics because of the existence of non-commuting operators. (Explicit examples of histories in Hilbert space quantum theory will be given below.) (ii) The allowed \emph{\textbf{coarse-grainings}} (partitions) of exclusive, exhaustive sets of alternative histories into more coarse-grained descriptions of the system. For example, in a diffeomorphism-invariant theory it would be typical to require coarse-grainings to be themselves diffeomorphism-invariant. (It is common to denote by $\bar{h}$ a coarse-grained history that contains the history $h$ when it is convenient to refer back to $h$ specifically. It is also common to allow $h$ alone to represent both fine-grained and coarse-grained histories as is convenient in context.) It is assumed that all exclusive, exhaustive sets of fine-grained histories ${\mathcal S} =\{h\}$ have a common complete coarse-graining $u=\cup_{h\in{\mathcal S}}\,h$. (iii) The \emph{\textbf{decoherence functional}} $d$. The decoherence functional is a complex-valued function on pairs of histories that is \begin{description} \item[(i)] \emph{Hermitian}: \quad $d(h,h')=d(h',h)^*$ \item[(ii)] \emph{Positive}: \quad $d(h,h)\geq 0$ \item[(iii)] \emph{Additive} (``Principle of Superposition''): \quad $d(\bar{h},\bar{h}') = \sum_{h\in\bar{h}}\sum_{h'\in\bar{h'}} d(h,h')$ \item[(iv)] \emph{Normalized}: \quad $\sum_{h,h'\in{\mathcal S}}d(h,h')=1$ \end{description} Note these conditions imply $d(u,u)=1$. Finally, the decoherence functional is used to define (iv) the \emph{\textbf{decoherence condition}} that determines when probabilities may be assigned to the histories in an exclusive, exhaustive set ${\mathcal S}$.
The simplest and most common decoherence condition (usually called ``medium decoherence'' in the literature\cite{GMH93,diosi04}) simply requires that the decoherence functional be diagonal on a set of histories ${\mathcal S}$ in order for probabilities to be assigned to the histories in ${\mathcal S}$. In that case the diagonal elements of $d$ are the corresponding probabilities, and ${\mathcal S}$ is said to be a \emph{consistent} or \emph{decoherent} set of histories: \begin{equation} d(h,h') = p(h)\, \delta_{h,h'} \qquad \forall h,h'\in{\mathcal S} \label{eq:dfndmtl} \end{equation} if ${\mathcal S}$ is a consistent set. Only when ${\mathcal S}$ is consistent may the diagonal elements of $d$ be interpreted as (Kolmogorov) probabilities, for which $p(h)\geq 0$ and $\sum_{h\in{\mathcal S}} p(h)=1$ (whence the term ``consistent histories''.) In particular, if decoherence does not obtain, the putative ``probabilities'' $p(h)$ do not add consistently in the sense that $p(h_1+h_2)\neq p(h_1) + p(h_2)$, as happens for example in the two-slit experiment when the slit traversed is not measured. It is the consistency condition (\ref{eq:dfndmtl}) that alone determines whether or not probabilities may be meaningfully assigned in any family of histories, and not any notion of ``observer'' or ``measurement'' (though it does reproduce the predictions of ordinary Copenhagen quantum theory when quantum measuring apparatus are included in the system.) It is an objective criterion that depends only on the system's quantum state and dynamics -- and, of course, on the set of histories in question. In ordinary Hilbert space quantum theory, fine-grained histories may be specified by sequences of eigenvalues $a^{\alpha_i}_{k_i}$ of operators $A^{\alpha_i}$ at a sequence of times $\{t_i\}$: $h=(a^{\alpha_1}_{k_1},a^{\alpha_2}_{k_2},\cdots)$.% \footnote{To help clarify the confusing but necessary notation, note that the spectrum of the operator $A^{\alpha}$ is $\{\cup_k a^{\alpha}_k\}$.
In other words, the \emph{superscript} labels the \emph{observable}, while the \emph{subscript} labels the \emph{eigenvalues} of that observable. } % Corresponding to each history $h$ may be defined the operator \begin{equation} C_h = \Projsupb{\alpha_1}{a_{k_1}}(t_1) \Projsupb{\alpha_2}{a_{k_2}}(t_2) \cdots \Projsupb{\alpha_n}{a_{k_n}}(t_n), \label{eq:classopdef-fg} \end{equation} where if $U(t)$ is the theory's propagator, $P_a(t_i)=U(t_i,t_0)^{\dagger}\ketbra{a}{a}U(t_i,t_0)$ is a Heisenberg picture projection operator. $C_h$ is called the ``class operator'' for the history $h$, and is typically identified with $h$ itself. Coarse-grainings of these histories then correspond to the operator sums \begin{equation} C_{\bar{h}} = \sum_{a_{k_{1}}\in\Delta a_{\bar{k}_1}} \sum_{a_{k_{2}}\in\Delta a_{\bar{k}_2}}\cdots \sum_{a_{k_{n}}\in\Delta a_{\bar{k}_n}} \, \Projsupb{\alpha_1}{a_{k_{1}}}(t_1) \Projsupb{\alpha_2}{a_{k_{2}}}(t_2) \cdots \Projsupb{\alpha_n}{a_{k_{n}}}(t_n) \label{eq:classopdef-cg} \end{equation} corresponding to the history $\bar{h}=(\Delta a^{\alpha_1}_{\bar{k}_1},\Delta a^{\alpha_2}_{\bar{k}_2},\cdots)$ in which the value of $A^{\alpha_1}$ is in the range $\Delta a^{\alpha_1}_{\bar{k}_1}$ labeled by $\bar{k}_1$ (so the spectrum of $A^{\alpha}$ is the union of the intervals $\{\cup_k \Delta a^{\alpha}_{k} \}$), and so on. The completely coarse-grained history $u$ then corresponds to \begin{equation} C_u = \sum_{h\in{\mathcal S}}C_h = \mathds{1}. 
\label{eq:classopdefCu} \end{equation} The decoherence functional corresponding to ordinary quantum mechanics is \begin{equation} d(h,h') = \tri{C_h^{\dagger}}{C_{h'}}, \label{eq:dfdefqm} \end{equation} where $\rho$ is the initial density matrix.% \footnote{In this form it should be clear in what sense the decoherence functional is a natural generalization of the notion of the ``quantum state'' of the system as it arises in the algebraic formulation of quantum theory, to measure interference between histories as well as their probabilities.\cite{ILS94a,dac97} } % If the initial state is pure, $\rho=\ketbra{\psi}{\psi}$, this reduces simply to \begin{equation} d(h,h') = \bracket{\psi_{h'}}{\psi_h}, \label{eq:dfdefstd} \end{equation} where \begin{equation} \ket{\psi_h(t)} = U(t,t_0)C_h^{\dagger}\ket{\psi} \label{eq:bwfdef} \end{equation} is the so-called ``branch wave function'' corresponding to the history $h$. Up to normalization, it is simply the wave function at time $t$ that a system that began in the state $\ket{\psi}$ would have, were the system observed to have had values of observable $A^{\alpha_1}$ in $\Delta a^{\alpha_1}_{k_1}$ at time $t_1$, % $A^{\alpha_2}$ in $\Delta a^{\alpha_2}_{k_2}$ at time $t_2$, % and so on, i.e.\ for the system to have been observed to ``follow'' the particular history $h$. However, in consistent histories quantum theory we do \emph{not} (necessarily) have observers as an essential element of the predictive scheme, and consequent wave function ``collapse''.
Here, it appears simply as a description of one of the many possible alternative histories of the system, and it is up to decoherence to decide if the corresponding family of histories is consistent, and if so, what the probability of each such decohering history may be.% \footnote{See Refs.~\refcite{CS10c,CS13a} for some discussion of the leading factor of the propagator $U$, which is not strictly necessary -- it cancels in the decoherence functional -- and is introduced only in order to enable us to think of $\ket{\psi_h}$ as an evolving solution of the Schr\"{o}dinger equation for all $t$. } % If there is also a ``final condition'' $\rho_{\omega}$ in addition to an ``initial condition'' $\rho_{\alpha}$, then the decoherence functional may be defined similarly by $d(h,h') = \tr{\rho_{\omega}C_h^{\dagger}\rho_{\alpha}C_{h'}}$. If $\rho_{\alpha}=\sum_i \alpha_i\ketbra{\Psi_i}{\Psi_i}$ and $\rho_{\omega}=\sum_i \omega_i\ketbra{\Phi_i}{\Phi_i}$, this reduces to \begin{equation} d(h,h') = \sum_{i,j}\alpha_i\omega_j \melt{\Psi_i}{C_h}{\Phi_j}^* \melt{\Psi_i}{C_{h'}}{\Phi_j}. \label{eq:dfdefTS} \end{equation} Finally, given a \emph{path integral} formulation of a quantum theory, if the propagator is given by \begin{equation} U(t,t_0) = \int\delta q\, e^{i\int_{t_0}^{t}dt\, S[q]}, \label{eq:propdefPI} \end{equation} for some action $S[q]$, then class operators may be defined by \begin{equation} C_h^{\dagger}(t,t_0) = \int_{q(t)\in h}\delta q\, e^{i\int_{t_0}^{t}dt\, S[q]}. \label{eq:classopdefPI} \end{equation} In other words, in a path integral formulation, coarse-grained histories are defined by partitions of the space of paths which appear in the path integral. Considerable further detail concerning the construction of the decoherence functional in theories whose amplitudes are defined by path integrals, both relativistic and non-relativistic, may be found in Refs.~\refcite{hartle91a,lesH,hallithor01,hallithor02,halliwall06,halliwell09,CH04,CS13b}. 
In this way, in conjunction with Eq.\ (\ref{eq:dfdefTS}), a consistent histories formulation may be given to spin foam models of quantum gravity\cite{schroer13a} and loop quantum cosmology,\cite{CS13b} though we will not discuss these here. \subsection{Prediction in consistent histories quantum theory} \label{sec:prediction} The process of making a physical prediction in this formulation of quantum theory proceeds as follows. Given a quantum state $\ket{\psi}$ and its associated decoherence functional, one constructs the class operators for the family of histories corresponding to the physical question in which one is interested -- for example, what (coarse-grained) path does a quantum particle follow through a two-slit apparatus. The decoherence functional is then calculated to determine whether or not that family of histories decoheres, i.e.\ is consistent. If the family is consistent, the decoherence functional determines the probability of each history in the family, and the quantum question has been answered. However, if the family of histories is \emph{not} consistent, then no probabilities can be assigned, and quantum mechanics says that the question being asked \emph{has no consistent answer within the theory.} Familiar examples abound in ordinary quantum theory -- e.g.\ the precise values of the position and momentum of a particle at the same moment, or which slit a particle passed through in the two slit experiment if no information was gathered to determine it. This important point is perhaps worth some emphasis: the inability of quantum mechanics to make definite predictions concerning what are classically perfectly sensible physical statements is not in any way a unique feature of consistent histories quantum theory.
What the consistent histories formulation provides is simply an \emph{observer-independent criterion}, derived from the quantum state itself, for determining when that is the case, rather than relying on the Copenhagen notion of a wave function-collapsing ``measurement'', which is in any case impossible to make sense of in environments such as the early universe. We will see how this scheme works in quantum cosmology in the sequel. \section{Homogeneous and Isotropic Scalar Cosmologies} \label{sec:hicosmo} We consider what must be the simplest possible cosmological model, namely, a flat, homogeneous and isotropic Friedmann-Lema\^{i}tre-Robertson-Walker cosmology sourced by a single massless, minimally coupled scalar field. This model has two relevant advantages. First, it is simple enough that it admits two distinct, exactly solvable quantizations: a conventional ``Wheeler-DeWitt'' quantization, and a \emph{unitarily inequivalent} loop quantization, the exactly solvable version of which is dubbed sLQC. Moreover, the loop quantization can be cast in the form of a spin-foam (quasi-``path integral'') quantization of flat scalar FLRW, with an explicitly solvable vertex expansion. It is thus a perfect playground within which to explore and contrast the consistent histories formulations of each of these three quantizations in a context in which mathematically explicit expressions and exact calculations are possible. Second, in this model the scalar field emerges as an internal physical ``clock''. While not essential to the consistent histories formulation, the presence of an internal clock variable does make it easier to conceptualize and interpret the results. Thus, this model is rich enough to supply an exceptional proving ground for the exploration of the technical and conceptual issues associated with the consistent histories formulation of quantum gravitational models, in a technically manageable context. 
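Before turning to the cosmological models, the prediction scheme of section \ref{sec:gqt} can be illustrated numerically with a minimal two-time qubit family of histories. All model details here (the Hamiltonian $\sigma_x$, the $\sigma_z$ projectors, the times and the initial state) are illustrative assumptions, not taken from the text:

```python
import numpy as np

# Toy two-time qubit model: histories record the sigma_z eigenvalue at
# times t1 and t2, with class operators C_h = P_{a1}(t1) P_{a2}(t2)
# built from Heisenberg-picture projectors.
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
P = {+1: np.diag([1.0, 0.0]), -1: np.diag([0.0, 1.0])}  # sigma_z projectors

def U(t):
    """Propagator exp(-i sx t) for the assumed Hamiltonian H = sx (hbar = 1)."""
    return np.cos(t) * np.eye(2) - 1j * np.sin(t) * sx

def heisenberg(Pa, t):
    return U(t).conj().T @ Pa @ U(t)

t1, t2 = 0.3, 0.9
psi = np.array([1.0, 0.0], dtype=complex)        # pure initial state

histories = [(a1, a2) for a1 in (+1, -1) for a2 in (+1, -1)]
C = {h: heisenberg(P[h[0]], t1) @ heisenberg(P[h[1]], t2) for h in histories}

# Branch wave functions |psi_h> = C_h^dagger |psi> and the decoherence
# functional d(h, h') = <psi_h' | psi_h> for a pure state.
branch = {h: C[h].conj().T @ psi for h in histories}
d = np.array([[np.vdot(branch[hp], branch[h]) for hp in histories]
              for h in histories])

assert np.allclose(d, d.conj().T)                # Hermitian
assert np.isclose(d.sum(), 1.0)                  # normalized
off = max(abs(d[i, j]) for i in range(4) for j in range(4) if i != j)
print("largest off-diagonal:", round(off, 3))
# Nonzero off-diagonal elements: this family does not decohere, so its
# diagonal entries d(h, h) cannot be interpreted as probabilities.
```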
\subsection{Classical scalar cosmologies} \label{sec:classcosmo} The FLRW metric for a homogeneous isotropic universe may be expressed as \begin{equation} g_{ab} = - N(t)^2dt_a dt_b + a(t)^2 \mathring{q}_{ab}, \label{eq:FLRWmetric} \end{equation} where $a(t)$ is the scale factor, $N(t)$ is the lapse, and $\mathring{q}_{ab}$ is a fixed, flat fiducial metric on the spatial slices $\Sigma$, which we take to be topologically $\mathbb{R}^3$.\cite{ashsingh11} To construct a Hamiltonian formulation of the dynamics, spatial integrals over finite volumes are required. Therefore, one introduces a fixed fiducial cell ${\mathcal V}$ with volume $\mathring{V}$ relative to $\mathring{q}_{ab}$, the physical volume of which is therefore $V=a^3\mathring{V}$. The Einstein-Hilbert action for a flat ($k=0$) FLRW universe sourced by a massless, minimally coupled scalar field becomes, after integration over the fiducial spatial volume ${\mathcal V}$, \begin{equation} S = \mathring{V}\int dt\, \left\{ -\frac{3}{8\pi G}\frac{a\dot{a}^2}{N} + \frac{1}{2}a^3\frac{\dot{\phi}^2}{N} \right\} , \label{eq:FLRWaction} \end{equation} where the dot denotes a derivative with respect to $t$. Choosing phase space variables with Poisson brackets $\{a,p_a\}=1$ and $\{\phi,p_{\phi}\}=1$, the canonical momenta are \begin{equation} p_a = -\frac{3}{4\pi G}\mathring{V} a \frac{\dot{a}}{N}, \qquad p_{\phi} = \mathring{V} a^3 \frac{\dot{\phi}}{N}. \label{eq:FLRWmom} \end{equation} The Hamiltonian is then \begin{equation} H = \frac{1}{\mathring{V}}\left\{ -\frac{2\pi G}{3}\frac{N}{a}p_a^2 + \frac{1}{2}\frac{N}{a^3}p_{\phi}^2 \right\}. \label{eq:FLRWHam} \end{equation} For comparison with loop quantum cosmology it turns out to be convenient to make a canonical transformation to a different set of variables, a volume variable% \footnote{Note we have adopted the conventions used in Refs.~\refcite{dac13a,CS13a}, which differ slightly from the normalization chosen in Ref.\ \refcite{CS10c}.
} % \begin{equation} \nu = \varepsilon\, \frac{1}{2\pi{l}_{\mathrm{P}}^2}\,\frac{\mathring{V}}{\gamma}\,a^3 \label{eq:nudef} \end{equation} and \begin{eqnarray} b & = & -\varepsilon\, \frac{4\pi G}{3}\, \frac{\gamma}{\mathring{V}}\,\frac{p_a}{a^2} \nonumber\\ & = & \varepsilon\, \gamma\, \frac{\dot{a}/N}{a}. \label{eq:bdef} \end{eqnarray} Here $G$ is Newton's constant, ${l}_{\mathrm{P}} = \sqrt{G\hbar}$ is the Planck length (with $c=1$), and $\gamma$ is the Barbero-Immirzi parameter of loop quantum gravity; its presence is purely for convenience of comparison to LQC and it can otherwise be set to 1. $\varepsilon = \pm 1$ is a factor that determines the orientation relative to a fiducial triad in terms of which $\mathring{q}_{ab}$ is expressed. Its presence is necessary for a consistent quantization because it ensures $-\infty < \nu < \infty$ as well as $-\infty < b < \infty$. These variables satisfy $\{b,\nu\}=2/\hbar$. Physically, the volume of ${\mathcal V}$ is $V=2\pi{l}_{\mathrm{P}}^2\gamma|\nu|$, and from (\ref{eq:bdef}), $b$ is $\varepsilon\gamma\times (\mathrm{Hubble\ rate})$. Equivalently, since for flat FLRW the Ricci scalar is $R=-6(\dot{a}/Na)^2=-6(b/\gamma)^2$, $b$ is also a measure of spacetime curvature. The classical dynamical trajectories given by the solution of Hamilton's equations show that $p_{\phi}$ is a constant of the motion, and \begin{equation} \phi = \pm \frac{1}{\sqrt{12\pi G}}\, \ln\left|\frac{\nu}{\nu_0}\right| + \phi_0, \qquad V = V_0 e^{\pm\sqrt{12\pi G} (\phi-\phi_0)}, \label{eq:FLRWclasssoln} \end{equation} where $\nu_0$ and $\phi_0$ are constants of integration. Thus the classical solutions split into a pair of disjoint expanding ($+$) and contracting ($-$) solutions, regarding the value of the scalar field $\phi$ as an internal emergent physical ``clock''.
\emph{All} classical solutions are therefore singular either in the ``past'' ($\phi\rightarrow-\infty$) or the ``future'' ($\phi\rightarrow+\infty$).% \footnote{Actually, if one expresses (\ref{eq:FLRWclasssoln}) in terms of $\nu$, we see there are \emph{four} distinct solutions, two with $\nu>0$ and two with $\nu<0$. These solutions are physically degenerate. } % Solving the Hamiltonian constraint $H\approx 0$ gives the Friedmann equation \begin{equation} \left(\frac{b}{\gamma}\right)^2=\frac{8\pi G}{3}\rho \label{eq:Friedmann} \end{equation} for a flat universe, where the scalar matter energy density on the spatial slices $\Sigma$ is given by \begin{equation} \rho = \frac{p_{\phi}^2}{2V|_{\phi}^2}\, . \label{eq:rhodef} \end{equation} Here $V|_{\phi}$ is the volume at scalar field value $\phi$ given by (\ref{eq:FLRWclasssoln}). To make the connection with loop quantum cosmology, we note that in LQC the natural phase space variables specify the symmetric connection $c$ and its conjugate triad $p$, related to the Ashtekar-Barbero $SU(2)$ connection $A^i_a$ and the densitized triad $E^a_i$ by \begin{equation} A^i_a = c \mathring{V}^{-1/3}\mathring{\omega}^i_a \ , \qquad E^a_i = p \sqrt{\mathring{q}} \mathring{V}^{-2/3} \mathring{e}^a_i\, . \label{eq:LQCvardefs} \end{equation} Here $\mathring{e}^a_i$ and $\mathring{\omega}^i_a$ are the fiducial triad and co-triad respectively, $\mathring{e}^a_i \mathring{\omega}^j_a = \delta^j_i$, in terms of which $\mathring{q}_{ab}=\mathring{\omega}^i_a\mathring{\omega}^j_b\delta_{ij}$. In these variables the physical volume of ${\mathcal V}$ is given by $V=|p|^{3/2}$, and \begin{equation} b = \frac{c}{|p|^{1/2}} \ , \qquad \nu = \varepsilon\, \frac{|p|^{3/2}}{2\pi\gamma{l}_{\mathrm{P}}^2}\, . \label{eq:LQCbnudefs} \end{equation} It is easy to check that $\{c,p\}= (8\pi G/3)\gamma$.
The variables $c$ and $p$ are related to the original geometrodynamic phase space variables by \begin{eqnarray} c & = & -\varepsilon\, \frac{4\pi G}{3}\, \gamma\, \mathring{V}^{-2/3}\, \frac{p_a}{a} \;=\; \varepsilon\, \gamma\, \mathring{V}^{1/3}\, \frac{\dot{a}}{N} \nonumber\\ p & = & \varepsilon\, \mathring{V}^{2/3}a^2\, . \label{eq:LQCcanonrel} \end{eqnarray} \subsection{Quantization of scalar cosmologies} \label{sec:quantcosmo} We proceed to describe two \emph{physically inequivalent}\cite{ashsingh11} quantizations of this classical model, a conventional, so-called ``Wheeler-DeWitt'' quantization, and a distinct loop quantum gravity-inspired quantization called ``sLQC'' (for ``solvable LQC''). In order to quantize, we must choose a gauge, i.e.\ fix the lapse $N(t)$. The classical ``harmonic'' gauge $N(t) = a(t)^3$ simplifies the Hamiltonian (\ref{eq:FLRWHam}), and it is this choice that leads to the exact solvability of the loop quantization. In this gauge the Hamiltonian may be written in the variables $(b,\nu)$ as \begin{equation} H = \frac{1}{2\mathring{V}}\left\{ -3\pi G\hbar^2\, b^2\nu^2 + p_{\phi}^2 \right\}. \label{eq:LQCHam} \end{equation} This is the Hamiltonian we will quantize in the sequel. It should be noted that in spite of the fact that $-\infty < \nu < \infty$, in both the Wheeler-DeWitt and loop quantizations the $\nu \lessgtr 0$ sectors are both disjoint and physically identical in the absence of fermions, differing only in triad orientation. Physical states may be assumed to be symmetric in $\nu$, and we will typically restrict attention to the $\nu > 0$ sector. (See Ref.~\refcite{ashsingh11} for further discussion of this point.)
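Both the Friedmann equation (\ref{eq:Friedmann}) and the classical solutions (\ref{eq:FLRWclasssoln}) can be verified mechanically from (\ref{eq:LQCHam}). The following sketch (sympy; the symbol names are ours and purely illustrative) checks that the constraint surface $H\approx 0$ reproduces (\ref{eq:Friedmann}), and that Hamilton's equations in the harmonic gauge give $(d\ln\nu/d\phi)^2 = 12\pi G$, i.e.\ precisely the exponential trajectories of (\ref{eq:FLRWclasssoln}).

```python
# Symbolic checks (sympy) of the classical FLRW relations quoted above, using
# the harmonic-gauge Hamiltonian H = (1/2V0){-3 pi G hbar^2 b^2 nu^2 + p_phi^2},
# with V = 2 pi lP^2 gamma nu, lP^2 = G hbar, and rho = p_phi^2/(2 V^2).
# Symbol names are ours, for illustration only.
import sympy as sp

G, hbar, gamma, nu, p_phi, V0 = sp.symbols('G hbar gamma nu p_phi V0', positive=True)
b = sp.symbols('b', positive=True)              # expanding branch: b > 0
lP2 = G*hbar                                    # Planck length squared
V = 2*sp.pi*lP2*gamma*nu                        # physical volume of the fiducial cell
rho = p_phi**2/(2*V**2)                         # scalar-field energy density

# (1) H ~ 0  =>  b^2 = p_phi^2/(3 pi G hbar^2 nu^2); this must give Friedmann:
b2_onshell = p_phi**2/(3*sp.pi*G*hbar**2*nu**2)
assert sp.simplify(b2_onshell/gamma**2 - sp.Rational(8, 3)*sp.pi*G*rho) == 0

# (2) With {b, nu} = 2/hbar:  dnu/dt = -(2/hbar) dH/db,  dphi/dt = dH/dp_phi.
H = (sp.Rational(1, 2)/V0)*(-3*sp.pi*G*hbar**2*b**2*nu**2 + p_phi**2)
nu_dot = -(2/hbar)*sp.diff(H, b)
phi_dot = sp.diff(H, p_phi)
slope = (nu_dot/(phi_dot*nu)).subs(b, sp.sqrt(b2_onshell))   # d(ln nu)/d(phi)
assert sp.simplify(slope**2 - 12*sp.pi*G) == 0               # slope = +/- sqrt(12 pi G)
```

The second assertion is exactly the statement that $\ln\nu$ is linear in $\phi$ with slope $\pm\sqrt{12\pi G}$.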
\subsubsection{Wheeler-DeWitt quantization} \label{sec:WdWqc} The conventional quantization proceeds as usual by defining the conjugate operators \begin{align} \hat{\nu} &= \nu & \hat{b} & = 2i\partial_{\nu} \nonumber\\ \hat{\phi} &= \phi & \hat{p}_{\phi} &= -i\hbar\partial_{\phi} \label{eq:physopdefs} \end{align} which satisfy $[\hat{b},\hat{\nu}]=2i$ and $[\hat{\phi},\hat{p}_{\phi}]=i\hbar$, in accord with the Poisson brackets given above. The Hamiltonian constraint, $H\approx 0$, becomes the Wheeler-DeWitt equation $\hat{H}\Psi^{{\scriptscriptstyle\text{WdW}}} =0$, \begin{eqnarray} \partial_{\phi}^2\Psi^{{\scriptscriptstyle\text{WdW}}}(\nu,\phi) & = & 12\pi G\, \frac{1}{\sqrt{|\nu|}}\, \nu\partial_{\nu}(\nu\partial_{\nu}(\sqrt{|\nu|}\Psi^{{\scriptscriptstyle\text{WdW}}}(\nu,\phi))) \nonumber\\ & \equiv & -\Theta^{{\scriptscriptstyle\text{WdW}}}\Psi^{{\scriptscriptstyle\text{WdW}}}(\nu,\phi), \label{eq:WdWEOM} \end{eqnarray} with a choice of operator ordering corresponding to the Laplace-Beltrami operator of the DeWitt metric on the configuration space $(\nu,\phi)$.% \footnote{This is the choice adopted in Refs.~\refcite{dac13a,CS13a}, and is a slightly different representation from that employed in Ref.~\refcite{CS10c}. In contrast to that reference, our states carry an additional factor of $\sqrt{\lambda/|\nu|}$ in order to simplify the form of the inner product. } % Group averaging leads to a solvable quantum theory in which physical states may be chosen to be ``positive frequency'' solutions to the quantum constraint \begin{equation} +\hat{p}_{\phi}\Psi^{{\scriptscriptstyle\text{WdW}}}(\nu,\phi) = \hbar \sqrt{\Theta^{{\scriptscriptstyle\text{WdW}}}}\Psi^{{\scriptscriptstyle\text{WdW}}}(\nu,\phi). \label{eq:pdef} \end{equation} (The positive and negative frequency sectors are disjoint and physically equivalent.)
Thus \begin{equation} \Psi^{{\scriptscriptstyle\text{WdW}}}(\nu,\phi) = U^{{\scriptscriptstyle\text{WdW}}}(\phi-\phi_0)\,\Psi^{{\scriptscriptstyle\text{WdW}}}(\nu,\phi_0), \label{eq:WdWevolve} \end{equation} where the ``propagator'' in internal ``time'' $\phi$ is given by \begin{equation} U^{{\scriptscriptstyle\text{WdW}}}(\phi) = e^{i\sqrt{\Theta^{{\scriptscriptstyle\text{WdW}}}}\phi} \ . \label{eq:WdWpropdef} \end{equation} The gravitational ``evolution'' operator $\Theta^{{\scriptscriptstyle\text{WdW}}}$ is positive and self-adjoint in the group-averaged, Schr\"{o}dinger-like inner product% \footnote{It is perhaps worth some emphasis that the form of the inner product in both the Wheeler-DeWitt and loop quantizations depends on the chosen representation of states. For example, in some variables the inner product assumes a Klein-Gordon type form; see Refs.~\refcite{acs:slqc,CS10c} for some discussion and examples. } % \begin{equation} \bracket{\Psi^{{\scriptscriptstyle\text{WdW}}}}{\Phi^{{\scriptscriptstyle\text{WdW}}}} = \int_{-\infty}^{\infty}d\nu\, \Psi^{{\scriptscriptstyle\text{WdW}}}(\nu,\phi)^*\Phi^{{\scriptscriptstyle\text{WdW}}}(\nu,\phi). \label{eq:WdWip} \end{equation} Solutions may be conveniently expressed in terms of the eigenfunctions of $\Theta^{{\scriptscriptstyle\text{WdW}}}$, \begin{equation} e^{{\scriptscriptstyle\text{WdW}}}_k(\nu) = \frac{1}{\sqrt{4\pi|\nu|}}\,e^{ik\ln\left|\frac{\nu}{\lambda}\right|}, \label{eq:WdWedef} \end{equation} where \begin{equation} \Theta^{{\scriptscriptstyle\text{WdW}}}e^{{\scriptscriptstyle\text{WdW}}}_k = \omega_k^2\, e^{{\scriptscriptstyle\text{WdW}}}_k. \label{eq:WdWTdef} \end{equation} Here $\omega_k = \sqrt{12\pi G}\, |k|$, and we choose $\lambda = \sqrt{\Delta} \cdot {l}_{\mathrm{P}} =\sqrt{4\sqrt{3}\pi\gamma} \cdot {l}_{\mathrm{P}}$. % ($\lambda^2$ is the so-called ``area gap'' of loop quantum gravity.)
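The eigenvalue relation (\ref{eq:WdWTdef}) is easy to confirm symbolically. In the sketch below (our notation; $\nu>0$ branch, adopting the sign convention in which $\Theta^{{\scriptscriptstyle\text{WdW}}} = -12\pi G\,|\nu|^{-1/2}(\nu\partial_\nu)^2\,|\nu|^{1/2}$ is the positive operator), sympy verifies that $e^{{\scriptscriptstyle\text{WdW}}}_k$ of (\ref{eq:WdWedef}) is an eigenfunction with eigenvalue $\omega_k^2 = 12\pi G\, k^2$.

```python
# sympy check that e_k(nu) = exp(i k ln(nu/lam))/sqrt(4 pi nu) satisfies
# Theta_WdW e_k = omega_k^2 e_k with omega_k^2 = 12 pi G k^2 on the nu > 0
# branch, where Theta_WdW = -12 pi G (1/sqrt(nu)) (nu d/dnu)^2 (sqrt(nu) . ).
import sympy as sp

G, nu, lam = sp.symbols('G nu lam', positive=True)
k = sp.symbols('k', real=True)

e_k = sp.exp(sp.I*k*sp.log(nu/lam))/sp.sqrt(4*sp.pi*nu)

def nu_dnu(f):
    # the dilation operator nu d/dnu appearing in Theta_WdW
    return nu*sp.diff(f, nu)

Theta_e = -12*sp.pi*G*nu_dnu(nu_dnu(sp.sqrt(nu)*e_k))/sp.sqrt(nu)
omega2 = 12*sp.pi*G*k**2

assert sp.simplify(Theta_e - omega2*e_k) == 0
```

The calculation is one line by hand: $\sqrt{\nu}\,e_k$ depends on $\nu$ only through $e^{ik\ln(\nu/\lambda)}$, on which $\nu\partial_\nu$ acts as multiplication by $ik$.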
This specific choice for the constant $\lambda$ -- necessary for dimensional reasons -- plays no physical role in the Wheeler-DeWitt theory, but is a convenient normalization for comparison with LQC. In terms of the $e^{{\scriptscriptstyle\text{WdW}}}_k(\nu)$, physical states may be written \begin{eqnarray} \Psi^{{\scriptscriptstyle\text{WdW}}}(\nu,\phi) & = & \int_{-\infty}^{\infty}dk\, \tilde{\Psi}^{{\scriptscriptstyle\text{WdW}}}(k)\,e^{{\scriptscriptstyle\text{WdW}}}_k(\nu)\,e^{i\omega_k(\phi-\phi_0)} \nonumber \\ & = & \Psi^{{\scriptscriptstyle\text{WdW}}}_{R}(\nu,\phi) + \Psi^{{\scriptscriptstyle\text{WdW}}}_{L}(\nu,\phi), \label{eq:WdWQWFdef} \end{eqnarray} where \begin{eqnarray} \Psi^{{\scriptscriptstyle\text{WdW}}}_{R}(\nu,\phi) & = & \frac{1}{\sqrt{4\pi|\nu|}}\, \int_{-\infty}^{0}dk\, \tilde{\Psi}^{{\scriptscriptstyle\text{WdW}}}(k)\,e^{ik[\ln\left|\frac{\nu}{\lambda}\right|-\sqrt{12\pi G}(\phi-\phi_0)]} \nonumber\\ \Psi^{{\scriptscriptstyle\text{WdW}}}_{L}(\nu,\phi) & = & \frac{1}{\sqrt{4\pi|\nu|}}\, \int_{0}^{\infty}dk\, \tilde{\Psi}^{{\scriptscriptstyle\text{WdW}}}(k)\,e^{ik[\ln\left|\frac{\nu}{\lambda}\right|+\sqrt{12\pi G}(\phi-\phi_0)]}. \label{eq:WdWPsiLRdef} \end{eqnarray} These ``right-'' and ``left-''moving (in a plot of $\phi$ vs.\ $\nu$) states clearly correspond to the expanding and contracting branches of the classical solution. In the quantum theory, the $R$ and $L$ sectors are orthogonal and a priori independent of one another. A state may be purely $R$- (or $L$-) moving, or a superposition of the two. For considerable further detail concerning the quantization and references to the earlier literature see Ref.~\refcite{ashsingh11}. \subsubsection{Observables} \label{sec:observe} The physical (``Dirac'') observables must be represented by operators that commute with the constraint $\Theta^{{\scriptscriptstyle\text{WdW}}}$.
The scalar momentum $\hat{p}_{\phi}$ clearly commutes with $\sqrt{\Theta^{{\scriptscriptstyle\text{WdW}}}}$ via (\ref{eq:WdWEOM}) and is therefore a constant of the motion as in the classical theory. The volume $\hat{\nu}$ is not. However, the corresponding ``relational'' observable $\hat{\nu}|_{\phi^*}$, which gives the value of the volume at a fixed value $\phi^*$ of the scalar field, is. The ``Heisenberg-picture'' operator $\hat{\nu}|_{\phi^*}(\phi)$ that acts on states at $\phi$ is given by \begin{equation} \hat{\nu}|_{\phi^*}(\phi) = U(\phi^*-\phi)^{\dagger}\hat{\nu} U(\phi^*-\phi), \label{eq:nurelopdef} \end{equation} where $U(\phi)$ is given by (\ref{eq:WdWpropdef}). Thus, for example, the physical volume of the fiducial cell ${\mathcal V}$ at $\phi^*$ is given by the operator $\hat{V}|_{\phi^*}(\phi)$ whose action is \begin{equation} \hat{V}|_{\phi^*}(\phi)\, \Psi(\nu,\phi) = 2\pi\gamma{l}_{\mathrm{P}}^2\, e^{i\sqrt{\Theta^{{\scriptscriptstyle\text{WdW}}}}(\phi-\phi^*)} |\nu|\,\Psi(\nu,\phi^*). \label{eq:nurelopaction} \end{equation} FLRW universes are, of course, classically singular -- every classical solution (\ref{eq:FLRWclasssoln}) begins or ends in a zero-volume singularity with diverging energy density. In Ref.~\refcite{acs:slqc} it is shown that the same is true of the Wheeler-DeWitt quantum theory in the sense that the expectation value of the volume observable vanishes for generic right-moving (expanding) states as $\phi\rightarrow -\infty$, and for generic left-moving states as $\phi\rightarrow +\infty$. (This is actually evident from Eq.\ (\ref{eq:WdWPsiLRdef}) by simply invoking the Riemann-Lebesgue lemma.) Since $\hat{p}_{\phi}$ is a constant of the motion this implies the matter energy density (\ref{eq:rhodef}) diverges in those limits for generic states.
By contrast, Ref.~\refcite{acs:slqc} showed that in sLQC the (expectation value of the) volume for generic states remains bounded away from zero, and the (expectation value of the) matter energy density is bounded above for generic states.% \footnote{Prior work\cite{aps,aps:improved} employing lapse $N=1$ had shown this numerically for sharply peaked states (later confirmed for generic states\cite{dgs14a,dgms14a}). The exact solvability of the theory in the harmonic gauge makes an analytic result possible in sLQC. } % Here we will sharpen these results and demonstrate the singularity of the Wheeler-DeWitt quantization and concomitant finiteness of LQC using the consistent histories framework for quantum theory. \subsubsection{Loop quantization} \label{sec:loopqc} The loop quantization proceeds by quantizing the Hamiltonian (\ref{eq:LQCHam}) on a different kinematical Hilbert space. (For details and references to the earlier literature see Ref.~\refcite{ashsingh11}.) If $\ket{\nu}$ denotes the eigenstates of the multiplicative volume operator $\hat{\nu}$, in this inequivalent quantization one finds that $\widehat{\exp(i\lambda b)}\ket{\nu}=\ket{\nu-2\lambda}$ acts as a translation. Quantizing the Hamiltonian constraint once again leads to \begin{equation} \hat{p}_{\phi}^2\,\Psi(\nu,\phi) = \hbar^2\, \Theta\, \Psi(\nu,\phi), \label{eq:LQCEOM} \end{equation} but now $\Theta$ is a second-order difference operator \begin{eqnarray} \Theta\, \Psi(\nu,\phi) & = & -\frac{3\pi G}{4\lambda^2} \left\{ \sqrt{|\nu(\nu+4\lambda)|} |\nu+2\lambda| \Psi(\nu+4\lambda,\phi) - 2 \nu^2 \Psi(\nu,\phi) \right. \nonumber\\ & \qquad & \qquad \qquad \qquad \qquad \left. + \sqrt{|\nu(\nu-4\lambda)|} |\nu-2\lambda| \Psi(\nu-4\lambda,\phi) \right\}. \label{eq:LQCTopdef} \end{eqnarray} Solutions to the full quantum constraint $\hat{C}=-[\partial_{\phi}^2+\Theta]$ therefore decompose into disjoint sets of solutions on the $\epsilon$-lattices given by $\nu=4\lambda n + \epsilon$, where $\epsilon \in [0,4\lambda)$.
Without loss of generality we will restrict attention to the sector of the theory on the $\epsilon=0$ lattice. Thus, volume in this model is discrete, \begin{equation} \nu = 4\lambda n \ , \quad n \in \mathbb{Z}. \label{eq:nulattice} \end{equation} Group averaging leads to the physical inner product \begin{equation} \bracket{\Psi}{\Phi} = \sum_{\nu=4\lambda n} \Psi(\nu,\phi_0)^*\Phi(\nu,\phi_0) \label{eq:LQCipdef} \end{equation} at some fiducial (but physically irrelevant) $\phi_0$. Once again the theory splits into disjoint, physically degenerate positive- and negative-frequency sectors, and as before we restrict our attention to the positive frequency sector \begin{equation} -i\partial_{\phi} \Psi(\nu,\phi) = \sqrt{\Theta}\, \Psi(\nu,\phi). \label{eq:LQCEOMroot} \end{equation} Also as in the Wheeler-DeWitt theory, states in sLQC may be ``propagated'' by $U(\phi)=\exp(i\sqrt{\Theta}\phi)$. Relational Dirac observables in sLQC are then defined in precisely the same manner as in the Wheeler-DeWitt theory. Similarly, quantum states may be represented in terms of the symmetric eigenfunctions $e^{{\scriptscriptstyle (s)}}_k(\nu)$ of $\Theta$, \begin{equation} \Theta e^{{\scriptscriptstyle (s)}}_k(\nu) = \omega_k^2\, e^{{\scriptscriptstyle (s)}}_k (\nu) \ , \label{eq:LQCedef} \end{equation} as \begin{equation} \Psi(\nu,\phi) = \int_{-\infty}^{\infty}dk\, \tilde{\Psi}(k)\,e^{{\scriptscriptstyle (s)}}_k(\nu)\,e^{i\omega_k(\phi-\phi_0)}. \label{eq:LQCQWFdef} \end{equation} Explicit expressions for the eigenfunctions $e^{{\scriptscriptstyle (s)}}_k(\nu)$, based on the earlier work of Refs.~\refcite{ach09,ach10a,ach10b}, are given in \refcite{dac13a}. (See Eq. (3.14) of that reference.) Their salient properties are \begin{description}[leftmargin=0cm] \item[(i)] They are oscillatory functions of both $k$ and $\nu$, oscillating increasingly rapidly in $k$ as $\nu$ increases. They are symmetric in both $k$ and $\nu$.
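The structure of the difference operator is easy to explore numerically. The sketch below (numpy; our own construction, a finite truncation of the $\nu>0$ part of the $\epsilon=0$ lattice in units where the overall prefactor $3\pi G/4\lambda^2$ is set to 1) builds $\Theta$ of (\ref{eq:LQCTopdef}) as a matrix and confirms that it is symmetric, i.e.\ formally self-adjoint in the discrete inner product (\ref{eq:LQCipdef}).

```python
# Illustrative numerical construction of the sLQC difference operator Theta,
# truncated to the first N sites of the nu = 4*lam*n lattice (n = 1..N),
# in units where 3*pi*G/(4*lam^2) = 1.  The coefficients are read directly
# off the three terms of the difference equation quoted in the text.
import numpy as np

def theta_matrix(N, lam=1.0):
    nu = 4.0*lam*np.arange(1, N + 1)      # lattice nu = 4*lam*n, nu > 0 sector
    T = np.zeros((N, N))
    for i, v in enumerate(nu):
        T[i, i] = 2.0*v**2                # diagonal term  +2 nu^2
        if i + 1 < N:                     # coupling nu <-> nu + 4*lam
            T[i, i + 1] = -np.sqrt(abs(v*(v + 4*lam)))*abs(v + 2*lam)
        if i - 1 >= 0:                    # coupling nu <-> nu - 4*lam
            T[i, i - 1] = -np.sqrt(abs(v*(v - 4*lam)))*abs(v - 2*lam)
    return T

T = theta_matrix(200)
assert np.allclose(T, T.T)                # symmetric: <Psi|Theta Phi> = <Theta Psi|Phi>
```

The symmetry follows analytically from $\sqrt{|\nu(\nu+4\lambda)|}\,|\nu+2\lambda|$ being invariant under $\nu\rightarrow\nu+4\lambda$ combined with the exchange of the two coupled sites.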
\item[(ii)] At large volume (specifically, for $|\nu| \gg 2\lambda\, |k|$), they rapidly approach a specific linear combination of corresponding Wheeler-DeWitt eigenfunctions, namely \begin{eqnarray} e^{{\scriptscriptstyle (s)}}_k(\nu) & \simeq & \sqrt{\frac{2\lambda}{\pi |\nu|}}\, \cos(|k|\ln\left|\frac{\nu}{\lambda}\right|+\alpha(|k|)) \qquad |\nu| \gg 2\lambda |k| \nonumber\\ & = & \sqrt{2\lambda}\, \left(e^{{\scriptscriptstyle\text{WdW}}}_{+|k|}(\nu)e^{+i\alpha(|k|)} + e^{{\scriptscriptstyle\text{WdW}}}_{-|k|}(\nu)e^{-i\alpha(|k|)} \right), \label{eq:escasslim} \end{eqnarray} where $\alpha(k) = k\ln(1-\ln k) + \frac{\pi}{4}$. \item[(iii)] Regarded as functions of $k$, the $e^{{\scriptscriptstyle (s)}}_k(\nu)$ exhibit a sharp exponential ultraviolet cutoff that sets in when $2\lambda |k| \approx |\nu|$. In other words, the eigenfunctions have support only in the wedge \begin{equation} |k| \lesssim \frac{|\nu|}{2\lambda}. \label{eq:kwedge} \end{equation} \end{description} All of these features are evident in Fig.\ \ref{fig:esWedge}. \begin{figure}[!htbp] \begin{center} \includegraphics[width=1.0\textwidth]{fig1} \end{center} \vspace{-15pt} \caption{Plot of the gravitational functions $e^{{\scriptscriptstyle (s)}}_k(\nu=4\lambda n)$ in the $(k,n)$ plane. The volume variable $\nu=4\lambda n$ is fundamentally discrete; the values of the eigenfunctions are plotted as continuous in both variables $k$ and $n$ for reasons of visual clarity only. The exponential ultraviolet cutoff along the lines $|k|=2|n|=|\nu/2\lambda|$ is clearly evident. The eigenfunctions $e^{{\scriptscriptstyle (s)}}_k(\nu)$ may therefore be regarded to an excellent approximation as having support only in the ``wedge'' $|k|\lesssim 2|n|$. It is this feature of the eigenfunctions that is ultimately responsible for the existence of a universal upper bound to the matter density. 
}% \label{fig:esWedge}% \end{figure} Property (ii) is worthy of note because it directly implies that \emph{all} states in LQC approach a specific, \emph{symmetric} linear combination of expanding and contracting Wheeler-DeWitt states \begin{equation} \Psi(\nu,\phi) \approx \Psi^{{\scriptscriptstyle\text{WdW}}}_{R}(\nu,\phi) + \Psi^{{\scriptscriptstyle\text{WdW}}}_{L}(\nu,\phi) \qquad \mbox{\scriptsize $ \begin{pmatrix} \text{large}\\ \text{volume} \end{pmatrix} $} \label{eq:LQCLRlim} \end{equation} at large volume.\cite{dac13a} (See Refs.~\refcite{dac13a,CS13a} for considerable further detail and discussion of this point. By contrast, recall in the Wheeler-DeWitt theory the expanding and contracting branches of a generic state are completely independent of one another.) This is one signature of the quantum ``bounce'' characteristic of loop quantum states. Indeed, it is well known that in LQC quasiclassical states (as described in the sequel) remain peaked on a solution of the quantum-corrected Friedmann equation\cite{singh06a,taveras08a} \begin{equation} \left(\frac{\dot{a}}{a}\right)^2 = \frac{8\pi G}{3}\rho\left(1-\frac{\rho}{\rho_{\mathrm{max}}}\right) \label{eq:LQCFriedmann} \end{equation} for all values of the volume, ``bouncing'' from one large volume state to another rather than being swallowed by the singularity as in the classical or Wheeler-DeWitt theory; see Fig.~\ref{fig:msstraj}. Here $\rho_{\mathrm{max}}$ is the maximum matter energy density defined in (\ref{eq:rhocritdef}) below. Equation (\ref{eq:LQCLRlim}) is an indication of the remarkable fact, demonstrated in further detail in the sequel, that \emph{generic} quantum states in sLQC are nonsingular\cite{aps,aps:improved,acs:slqc,ashsingh11,dac13a,CS13a,dgs14a,dgms14a} -- \emph{all} states ``bounce'', not merely quasiclassical ones. In this regard property (iii) of the eigenfunctions is especially interesting, because it is directly responsible for the non-singular nature of sLQC. 
It is a manifestation of the quantum gravitational repulsion that appears at the Planck scale in LQG. In fact, one may argue heuristically that the matter energy density remains bounded in LQC by \begin{equation} \rho_{\mathrm{max}} = \frac{\sqrt{3}}{32\pi^2\gamma^3}\, \rho_p, \label{eq:rhocritdef} \end{equation} where $\rho_p=1/G{l}_{\mathrm{P}}^2$ is the Planck density. The argument proceeds as follows. The matter density is given classically by (\ref{eq:rhodef}). It is argued in Ref.~\refcite{acs:slqc} that correspondingly \begin{equation} \expct{\rho|_{\phi}} = \frac{1}{2}\, \frac{\expct{p_{\phi}^2}}{\expct{\hat{V}|_{\phi}^2}} \label{eq:rhoopdef} \end{equation} in the quantum theory. (Variations on this definition have also been discussed.\cite{acs:slqc,dac13a}) Now, the eigenvalues of $\hat{p}_{\phi}$ are $\hbar\sqrt{12\pi G} k$, and the UV cutoff (\ref{eq:kwedge}) on the eigenfunctions $e^{{\scriptscriptstyle (s)}}_k(\nu)$ requires $|k| \lesssim |\nu|/2\lambda$, so that with $\hat{V}=2\pi{l}_{\mathrm{P}}^2\gamma|\hat{\nu}|$, \begin{eqnarray} \expct{\rho|_{\phi}} & \sim & \frac{1}{2}\, \frac{(\hbar\sqrt{12\pi G} |k|)^2}{(2\pi{l}_{\mathrm{P}}^2 \gamma |\nu|)^2} \nonumber\\ & \lesssim & \frac{1}{2} \left(\frac{\hbar\sqrt{12\pi G}}{2\lambda}\right)^2\left( \frac{|\nu|}{2\pi\gamma{l}_{\mathrm{P}}^2 |\nu|}\right)^2, \label{eq:rholim} \end{eqnarray} which gives precisely the bound (\ref{eq:rhocritdef}). Thus, the linear scaling of the UV cutoff on the eigenfunctions with volume leads to a universal upper bound on the expectation value (hence spectrum) of the matter energy density. It is satisfying that this simple heuristic argument based on the UV cutoff (\ref{eq:kwedge}) reproduces precisely the value of the bound on the density found by a careful analytical argument in Ref.~\refcite{acs:slqc}. A quite different analytical argument may be found in Ref.~\refcite{dac13a}.
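The final arithmetic step -- that the cutoff value of (\ref{eq:rholim}) is exactly (\ref{eq:rhocritdef}) -- can be checked symbolically. The sympy sketch below (our notation) substitutes $|k| = |\nu|/2\lambda$, $\lambda^2 = 4\sqrt{3}\pi\gamma\,{l}_{\mathrm{P}}^2$, and ${l}_{\mathrm{P}}^2 = G\hbar$ into the density estimate; the $\nu$-dependence cancels, leaving $\rho_{\mathrm{max}}$.

```python
# sympy check that the heuristic density bound evaluates to
# rho_max = sqrt(3)/(32 pi^2 gamma^3) * rho_p, independent of nu.
import sympy as sp

G, hbar, gamma, nu = sp.symbols('G hbar gamma nu', positive=True)
lP2 = G*hbar                                  # Planck length squared
lam = sp.sqrt(4*sp.sqrt(3)*sp.pi*gamma*lP2)   # lambda^2 = area gap
rho_p = 1/(G*lP2)                             # Planck density

k_max = nu/(2*lam)                            # UV cutoff |k| <~ |nu|/(2 lambda)
p_phi = hbar*sp.sqrt(12*sp.pi*G)*k_max        # p_phi eigenvalue at the cutoff
V = 2*sp.pi*lP2*gamma*nu                      # V = 2 pi lP^2 gamma |nu|
rho_bound = p_phi**2/(2*V**2)                 # rho = p_phi^2/(2 V^2) at the cutoff

rho_max = sp.sqrt(3)/(32*sp.pi**2*gamma**3)*rho_p
assert sp.simplify(rho_bound - rho_max) == 0          # exact agreement
assert sp.simplify(sp.diff(rho_bound, nu)) == 0       # nu cancels, as claimed
```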
\section{Consistent Histories Formulation of Canonical Quantum Cosmology} \label{sec:chqc} Both the Wheeler-DeWitt quantization and sLQC are canonical quantum theories, in the sense that they describe quantum universes by states in well-defined Hilbert spaces, with physical observables represented by (Dirac) operators on those Hilbert spaces. This means that each may be given a consistent histories formulation that closely resembles that of non-relativistic quantum theory. Indeed, the presence in these models of a variable that behaves as an emergent internal matter ``clock'', the scalar field $\phi$ (via $\ket{\Psi(\phi)}=U(\phi)\ket{\Psi}$), only encourages that association. We emphasize, however, that this association is not \emph{necessary} for the formulation of the consistent histories framework. What matters is that well-defined quantum states and observable operators are present which permit the definition of class operators and branch wave functions out of which a decoherence functional may be constructed. The decoherence functional can then be employed to make predictions concerning patterns of \emph{correlation} between any of the observable quantities, whether or not they are necessarily ordered in some ``time'' variable. The existence of an internal time, however, does supply a convenient context for the physical interpretation of the resulting theory. \subsection{Decoherence functional} \label{sec:df} In order to define the decoherence functional for these theories we must first construct class operators and branch wave functions for quantum histories defined by histories of values of the theory's Dirac observables. The constructions are identical in each theory so long as they are expressed in terms of the gravitational constraint operator $\Theta$ and its eigenfunctions. The differing physical predictions then arise because of the sharply distinct behavior of these objects in the two theories.
We therefore proceed to formulate the decoherence functional in terms of $\Theta$ and its eigenfunctions alone, and apply the construction to contrast the physical predictions of the two theories in the sequel. In this simple FLRW model our operators are $\{\hat{\phi},\hat{p}_{\phi},\hat{b}\, (\text{or }\widehat{\exp(i\lambda b)}),\hat{\nu}\}$ with corresponding relational Dirac observables given by the equivalents of (\ref{eq:nurelopdef}). Given its relevance to the question of the singularity of the universe we will concentrate most of our attention on $\hat{\nu}$, but parallel constructions are available for any observable. The definition of class operators is based on the spectral decomposition of observables. In the case of the volume operator \begin{eqnarray} \hat{\nu}^{{\scriptscriptstyle\text{WdW}}} & = & \int_{-\infty}^{\infty}d\nu'\, \nu'\, \ketbra{\nu'}{\nu'} \nonumber\\ & = & \int_{-\infty}^{\infty}d\nu'\, \nu'\, P^{\nu}_{\nu'}\ , \label{eq:nuWdWopspectral} \end{eqnarray} with \begin{eqnarray} \bracket{\nu}{\nu'} & = & \delta(\nu-\nu') \nonumber\\ e^{{\scriptscriptstyle\text{WdW}}}_k(\nu) & = & \bracket{\nu}{k^{{\scriptscriptstyle\text{WdW}}}} \label{eq:nuketdefwdw} \end{eqnarray} and $P^{\nu}_{\nu'}$ a volume projection on the eigenket $\ket{\nu'}$. (We work with a normalization appropriate to the full range $-\infty < \nu < \infty$ and take all states to be symmetric in $\nu$.) By contrast, in sLQC volume is discrete: \begin{eqnarray} \hat{\nu} & = & \sum_{\nu=4\lambda n} \nu\, \ketbra{\nu}{\nu} \nonumber\\ & = & \sum_{\nu'=4\lambda n'} \nu'\, P^{\nu}_{\nu'}. \label{eq:nuopspectral} \end{eqnarray} and \begin{eqnarray} \bracket{\nu}{\nu'} & = & \delta_{n,n'} \nonumber\\ e^{{\scriptscriptstyle (s)}}_k(\nu) & = & \bracket{\nu}{k^{{\scriptscriptstyle (s)}}}\ . \label{eq:nuketdefslqc} \end{eqnarray} In the sequel $\hat{\nu}$ will refer to whichever theory is appropriate in context, with integrals in the Wheeler-DeWitt theory and sums in sLQC. 
For example, the projection onto a range of values $\Delta\nu$ will be one of \begin{equation} P^{\nu}_{\Delta\nu} = \begin{cases} \int_{\nu\in\Delta\nu}d\nu\, \ketbra{\nu}{\nu} & \text{Wheeler-DeWitt} \\ \sum_{\nu\in\Delta\nu}\ketbra{\nu}{\nu} & \text{sLQC} \ . \end{cases} \label{eq:nuprojdef} \end{equation} ``Heisenberg'' projections may be defined via the propagator $U(\phi)=\exp(i\sqrt{\Theta}\phi)$ as \begin{equation} P^{\alpha}_{\Delta a^{\alpha}_{k}}(\phi) = U(\phi-\phi_0)^{\dagger} P^{\alpha}_{\Delta a^{\alpha}_{k}} U(\phi-\phi_0), \label{eq:heisprojdef} \end{equation} where $\phi_0$ is a fiducial (but physically irrelevant) value of the scalar field at which the quantum state is defined. Class operators may then be defined as in Eqs.~(\ref{eq:classopdef-fg}-\ref{eq:classopdef-cg}). For example, the class operator for the relational history in which the volume is in $\Delta\nu$ when the scalar field has value $\phi^*$ is \begin{equation} C_{\Delta\nu|_{\phi^*}} = U(\phi^*-\phi_0)^{\dagger} P^{\nu}_{\Delta\nu} U(\phi^*-\phi_0). \label{eq:nuclassopdef} \end{equation} It turns out such class operators offer an illuminating perspective on relational observables such as $\nu|_{\phi^*}(\phi)$, a point to which we shall return later. Similarly, the class operator describing a coarse-grained trajectory $\nu(\phi)$, for which the volume is in $\Delta\nu_1$ at scalar field value $\phi_1$, $\Delta\nu_2$ at $\phi_2$, and so on, is \begin{equation} C_{\Delta\nu_1|_{\phi_1};\Delta\nu_2|_{\phi_2};\cdots;\Delta\nu_n|_{\phi_n}} = \Projsupb{\nu}{\Delta \nu_1}(\phi_1) \Projsupb{\nu}{\Delta \nu_2}(\phi_2) \cdots \Projsupb{\nu}{\Delta \nu_n}(\phi_n).
\label{eq:nutrajclassopdef} \end{equation} Branch wave functions corresponding to a history $h$ are then defined as in (\ref{eq:bwfdef}) by \begin{equation} \ket{\Psi_h(\phi)} = U(\phi-\phi_0) C_h^{\dagger}\ket{\Psi}, \label{eq:qcbwfdef} \end{equation} which is everywhere a solution of the theory's Wheeler-DeWitt equation (i.e.\ annihilated by the quantum constraint). The decoherence functional in the case of a pure ``initial'' state defined on a minisuperspace surface of constant $\phi$\footnote{See \refcite{CS10c} for brief discussion of this point.} may then be defined simply as \begin{equation} d(h,h') = \bracket{\Psi_{h'}}{\Psi_h}, \label{eq:qcdfdef} \end{equation} using the group-averaged inner product on states. Consistent or decoherent families of histories then satisfy (\ref{eq:dfndmtl}) on all pairs of histories $h$ in some exclusive, exhaustive family $\{h\}$. As discussed above, the decoherence functional provides an objective, observer-independent measure of the interference among histories in the family. When that interference vanishes, each history in the family may be assigned the probability $p(h)=d(h,h)$. Otherwise, quantum mechanics says that the physical question posed by the family of histories simply has no sensible answer within quantum theory. Let us see how this is accomplished in a series of examples in keeping with the plan laid out in section \ref{sec:prediction}.
\subsection{Consistent histories in quantum cosmology: other models} \label{sec:other} sLQC has also been used as the basis for a spin-foam-like ``path integral'' formulation of loop quantum cosmology.\cite{ach09,ach10a,ach10b,ashsingh11} (See also related work.\cite{hrvwe11}) As remarked at the end of section \ref{sec:gqt}, a consistent histories formulation may also be given for theories defined via path-integrals, as has been done (for example) for Bianchi IX cosmologies and other models.\cite{hartle91a,dac07,CH04,halliwell99,hallithor01,hallithor02,halliwall06,halliwell09} These constructions serve as the template for a consistent histories formulation of spin foam loop quantum cosmology\cite{CS16a,CS16b,CS16c} and of spin foam loop quantum gravity (as in Ref.~\refcite{schroer13a}, which is directly modeled on Refs.~\refcite{hartle91a,lesH,CH04}.) We do not, however, have the space to describe these path-integral consistent histories theories here. It should also be mentioned that the physical predictions of the consistent histories formulation have been compared to a de Broglie-Bohm quantization of FLRW that closely parallels the work reviewed herein.\cite{fpps12,pf13a} See, however, Ref.~\refcite{CS13a} for some pertinent brief discussion. \section{Applications} \label{sec:app} \subsection{Probabilities, histories, and relational observables} \label{sec:relnlobs} Relational (``Dirac'') observables are the physical quantities about which theories with constraints make predictions, and (by definition) must commute with those constraints. It might have been thought that the class operators for volume given in (\ref{eq:nuclassopdef}-\ref{eq:nutrajclassopdef}) should have been constructed directly from the Dirac observable (\ref{eq:nurelopdef}).
While that indeed is an option, an alternative point of view is that histories provide a natural framework within which to understand the \emph{emergence} of such relational observables in theories with an emergent ``time'' evolution.\cite{CS10c} % To see how they arise naturally in a histories framework, consider a self-adjoint operator $\hat{A}$ that does not commute with the constraint -- for example, $\hat{\nu}$ in the models we have been discussing -- with spectral decomposition (assuming for definiteness that $\hat{A}$ has a purely discrete spectrum) \begin{equation} \hat{A} = \sum_a a\, P_a, \label{eq:Aspectral} \end{equation} where $P_a=\ketbra{a}{a}$. % The class operators corresponding to histories in which $\hat{A}$ has values in one of the complete set of disjoint intervals of eigenvalues $\{\Delta a\}$ at $\phi=\phi^*$ are \begin{equation} C_{\Delta a|_{\phi^*}} = U(\phi^*-\phi_0)^{\dagger} \Projsupb{A}{\Delta a} U(\phi^*-\phi_0), \label{eq:classopA} \end{equation} with the corresponding branch wave functions defined as in (\ref{eq:qcbwfdef}). Because the class operators are simply projections at a single ``time'' $\phi$, the branch wave functions \emph{always} decohere -- the family is consistent: \begin{eqnarray}\label{eq:drlnl} d(\Delta a,\Delta a') & = & \bracket{\Psi_{\Delta a'|_{\phi^*}}}{\Psi_{\Delta a|_{\phi^*}}} \nonumber\\ & = & \melt{\Psi}{C_{\Delta a'|_{\phi^*}}C_{\Delta a|_{\phi^*}}^{\dagger}}{\Psi} \nonumber\\ & = & \melt{\Psi}{U(\phi^*-\phi_0)^{\dagger} \Projsupb{A}{\Delta a'} U(\phi^*-\phi_0) U(\phi^*-\phi_0)^{\dagger} \Projsupb{A}{\Delta a} U(\phi^*-\phi_0)}{\Psi} \nonumber\\ & = & \melt{\Psi(\phi^*)}{\Projsupb{A}{\Delta a'}\Projsupb{A}{\Delta a}}{\Psi(\phi^*)} \nonumber\\ & = & \melt{\Psi(\phi^*)}{\Projsupb{A}{\Delta a}}{\Psi(\phi^*)}\,\delta_{\Delta a',\Delta a} \nonumber\\ & = & p_{\Delta a}(\phi^*)\,\delta_{\Delta a',\Delta a}\ , \end{eqnarray} where $p_{\Delta a}(\phi^*)$ is the probability that $a\in\Delta a$ when $\phi=\phi^*$.
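The algebra in (\ref{eq:drlnl}) is independent of the details of $\Theta$, and is easy to exhibit in a toy finite-dimensional model. In the numpy sketch below (purely illustrative: a random positive matrix stands in for the constraint operator, and a rank-3 projection and its complement stand in for the family $\{\Delta a\}$), the decoherence functional built from single-``time'' class operators comes out exactly diagonal, with probabilities summing to unity.

```python
# Toy finite-dimensional illustration of the single-"time" decoherence
# calculation: class operators C = U^dag P U, decoherence functional
# d(h,h') = <Psi| C_{h'} C_h^dag |Psi>.  Everything here is illustrative.
import numpy as np

rng = np.random.default_rng(0)
N = 8
A = rng.normal(size=(N, N)) + 1j*rng.normal(size=(N, N))
Theta = A @ A.conj().T                          # random positive "constraint"
w, v = np.linalg.eigh(Theta)
U = v @ np.diag(np.exp(1j*np.sqrt(w)*0.7)) @ v.conj().T  # U(phi) = exp(i sqrt(Theta) phi)

# projections onto a 3-dim subspace ("Delta a") and its complement
P1 = np.zeros((N, N)); P1[:3, :3] = np.eye(3)
P2 = np.eye(N) - P1

psi = rng.normal(size=N) + 1j*rng.normal(size=N)
psi /= np.linalg.norm(psi)

C = [U.conj().T @ P @ U for P in (P1, P2)]      # class operators C_{Delta a}
d = np.array([[psi.conj() @ (Cq @ Cp.conj().T) @ psi for Cp in C] for Cq in C])

assert abs(d[0, 1]) < 1e-10 and abs(d[1, 0]) < 1e-10   # exact decoherence
assert abs(d[0, 0] + d[1, 1] - 1) < 1e-10              # probabilities sum to 1
```

The propagators cancel pairwise just as in the third line of (\ref{eq:drlnl}), so the off-diagonal entries vanish identically (to machine precision) for \emph{any} state.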
For example, since we have assumed $\hat{A}$ has a discrete spectrum, if we set $\Delta a = \{a\}$, a single eigenvalue, then \begin{equation} p_a(\phi^*) = |\bracket{a}{\Psi(\phi^*)}|^2 \label{eq:Aprob} \end{equation} as one might expect,% \footnote{Precisely because of this expectation, it is crucial to emphasize that the simple form of this result, which we have derived from the decoherence functional, is directly connected with the simple form of the Schr\"{o}dinger-like form of the inner product in this representation. As noted above, in other representations the inner product can take on e.g.\ a Klein-Gordon type form, and therefore the formula for the probability does as well. This illustrates the importance of placing quantum prediction in a coherent, self-consistent frame.\cite{CH04,halliwell91:qcbu} } % with a similarly expected expression if $\hat{A}$ has a continuous spectrum; see Ref.~\refcite{CS10c} for details in that case. To see the connection with relational observables, let us calculate the average value of $\hat{A}$ at $\phi^*$: \begin{eqnarray} \expct{\hat{A}}|_{\phi^*} & = & \sum_a a\, p_a(\phi^*) \nonumber\\ & = & \melt{\Psi(\phi^*)}{\sum_a a\, P^A_a}{\Psi(\phi^*)} \nonumber\\ & = & \melt{\Psi}{U(\phi^*-\phi_0)^{\dagger}\hat{A}U(\phi^*-\phi_0)}{\Psi} \nonumber\\ & = & \melt{\Psi}{\hat{A}|_{\phi^*}}{\Psi}, \label{eq:Aexpct} \end{eqnarray} the expectation value of the relational observable $\hat{A}|_{\phi^*}$ corresponding to $\hat{A}$ in the state $\ket{\Psi}$. Probabilities for histories of values of $\hat{A}$ (which does not commute with the constraint) are naturally expressed in terms of the corresponding relational observable $\hat{A}|_{\phi^*}$ (which does). Had we not known about Dirac observables the histories formulation would have led us to consider them.
\subsection{Scalar momentum} \label{sec:scalarmom} The scalar field momentum $p_{\phi}$ is a constant of the motion in the classical theory, where $\{p_{\phi},H\}=0$, and similarly in the quantum theory, $[\hat{p}_{\phi},\Theta]=0$. How does this constancy manifest in the consistent histories formulation? If we are only interested in the distribution of probabilities for values of $\hat{p}_{\phi}$ at a single $\phi=\phi^*$, we might construct the relational observable $\hat{p}_{\phi}|_{\phi^*}$. However, because $\hat{p}_{\phi}$ commutes with the constraint, $\hat{p}_{\phi}|_{\phi^*}=\hat{p}_{\phi}$, and correspondingly the probability $p_{\Delta p_{\phi}}(\phi^*)$ for $\hat{p}_{\phi}$ to have values in $\Delta p_{\phi}$ at $\phi^*$ is independent of $\phi^*$, in keeping with $p_{\phi}$ being a constant of the motion. Similarly, to the question ``what is the likelihood $p_{\phi}$ is in $\Delta p_{\phi;1}$ at $\phi_1$, $\Delta p_{\phi;2}$ at $\phi_2$, \ldots (etc.)'' corresponds the class operator \begin{equation} C_{\Delta p_{\phi;1}|_{\phi_1};\Delta p_{\phi;2}|_{\phi_2};\cdots;\Delta p_{\phi;n}|_{\phi_n}} = \Projsupb{p_{\phi}}{\Delta p_{\phi;1}}(\phi_1) \Projsupb{p_{\phi}}{\Delta p_{\phi;2}}(\phi_2) \cdots \Projsupb{p_{\phi}}{\Delta p_{\phi;n}}(\phi_n). \label{eq:pphiclassopdef} \end{equation} Since $P^{p_{\phi}}_{\Delta p_{\phi}}(\phi) =P^{p_{\phi}}_{\Delta p_{\phi}}$, all such class operators are zero unless all of the intervals $\Delta p_{\phi;i}$ are equal, in which case each such class operator is the simple projection $P^{p_{\phi}}_{\Delta p_{\phi;i}}$. The family of all such class operators clearly decoheres for any initial state, with the corresponding probabilities $p_{\Delta p_{\phi;i}}$ all constant (i.e.\ independent of $\phi$). This is the meaning of the statement that $p_{\phi}$ is a constant of the motion in the quantum theory.
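The mechanism can be illustrated in the same toy fashion. In the numpy sketch below (illustrative only; a diagonal matrix stands in for $\Theta$ and one of its spectral projections for $P^{p_{\phi}}_{\Delta p_{\phi}}$, which therefore commutes with it), the Heisenberg projections are $\phi$-independent and the multi-``time'' class operator collapses to the single projection, exactly as described.

```python
# Toy illustration: a projection commuting with the constraint has
# phi-independent Heisenberg projections P(phi) = U(phi)^dag P U(phi),
# so multi-"time" class operators collapse to a single projection.
import numpy as np

N = 6
w = np.linspace(1.0, 4.0, N)**2                 # eigenvalues of a diagonal "Theta"
P = np.diag([1, 1, 0, 0, 0, 0]).astype(complex) # spectral projection: [Theta, P] = 0

def heis(P, phi):
    U = np.diag(np.exp(1j*np.sqrt(w)*phi))      # U(phi) = exp(i sqrt(Theta) phi)
    return U.conj().T @ P @ U

for phi in (0.3, 1.7, -2.2):
    assert np.allclose(heis(P, phi), P)         # P(phi) = P for all phi

# class operator for "P at phi1, P at phi2, P at phi3" is just P itself
C = heis(P, 0.3) @ heis(P, 1.7) @ heis(P, -2.2)
assert np.allclose(C, P)
```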
(Constants of motion in generalized consistent histories quantum theory are discussed further in Ref.~\refcite{hartlemarolf97}.) \subsection{Volume at a single value of $\phi$} \label{sec:volphi} Classically, one may express the volume $\nu$ of the fiducial cell as a function $\nu(\phi)$ as in (\ref{eq:FLRWclasssoln}). It might seem natural then to ask the quantum question ``what is the probability the volume $\nu$ is in the interval $\Delta\nu$ at scalar field value $\phi^*$?'' The class operator corresponding to this question was given in (\ref{eq:nuclassopdef}). According to the calculation of (\ref{eq:Aprob}) these single-$\phi$ histories decohere with corresponding probabilities \begin{equation} p_{\Delta\nu}(\phi^*) = \bracket{\Psi_{\Delta\nu|_{\phi^*}}}{\Psi_{\Delta\nu|_{\phi^*}}}. \label{eq:nuprobdef} \end{equation} In the Wheeler-DeWitt case this is, explicitly, \begin{equation} p^{{\scriptscriptstyle\text{WdW}}}_{\Delta\nu}(\phi^*) = \int_{\Delta\nu}d\nu\, |\psi^{{\scriptscriptstyle\text{WdW}}}(\nu,\phi^*)|^2. \label{eq:nuprobWdW} \end{equation} This probability is calculated explicitly for semi-classical Wheeler-DeWitt states (i.e.\ states peaked on a particular classical trajectory) in Ref.~\refcite{CS10c}. In the case of a superposition of right- (expanding) and left- (contracting) moving states (as in (\ref{eq:psiLRcatdef})), an example of the result is shown in Figure \ref{fig:pvolplot}.\cite{CS10b,CS10c} In loop quantum cosmology the equivalent expression is \begin{equation} p^{{\scriptscriptstyle\text{LQC}}}_{\Delta\nu}(\phi^*) = \sum_{\nu\in\Delta\nu} |\psi(\nu,\phi^*)|^2. \label{eq:nuprobLQC} \end{equation} This is again calculated for the case of ``quasiclassical'' loop quantum states that approach a symmetric superposition of semiclassical Wheeler-DeWitt states at large volume in Ref.~\refcite{CS13a}. (For discussion of the usage of the term ``quasiclassical'' in this context see Sec.~\ref{sec:quasiclassical}.) \begin{figure}[hbt!] 
\begin{center} \includegraphics[width=0.80\textwidth]{fig2} \end{center} \vspace{-15pt} \caption{ The behavior of $p_{\Delta\nu^*}(\phi)$, the probability that the quantum universe will be found in the interval $\Delta\nu^* =[0,\nu^*]$ (\emph{i.e.\ }at small volume) for a sample superposition of expanding $(R)$ and contracting $(L)$ semiclassical states both peaked at large volume near $\phi=0$. $p_L$ and $p_R$ give the relative ``amount'' of each component in the superposition, so that $p_L + p_R =1$ (cf.~Eq.~(\ref{eq:psiLRcatdef}).) This plot may appear to imply the possibility of a ``quantum bounce'', since at any given $\phi$ there is a non-zero probability that the universe may be found with volume $\nu > \nu^*$. A more careful consistent histories analysis shows that this possibility is not realized: the probability that the universe has large volume in \emph{both} the ``past'' and ``future'' is zero.\cite{CS10b,CS10c } % \label{fig:pvolplot} \end{figure} \subsection{Cosmological trajectories} \label{sec:traj} The class operator specifying the volume of the fiducial cell at a sequence of values of $\phi$ given in (\ref{eq:nutrajclassopdef}) describes a (coarse-grained) cosmological trajectory $\nu(\phi)$. (See Fig.\ \ref{fig:trajcg} for some examples.) Because such class operators are not simply projections, the corresponding branch wave functions will not in general decohere, and in these simple models probabilities can \emph{not} in general be assigned to the family of trajectories they describe, as is typical in quantum theory. Nonetheless, there are several physically important examples for which they do decohere. \begin{figure}[hbtp!] \begin{center} \subfloat[Coarse-grained trajectories]{ \includegraphics[width=0.485\textwidth]{fig3} \label{fig:trajcg} }% \subfloat[Coarse-graining by singularity]{ \includegraphics[width=0.485\textwidth]{fig4} \label{fig:smallvol}% }% \end{center} \caption{Coarse-grainings of the FLRW minisuperspace. 
The left-hand plot shows an expanding (R-moving) and collapsing (L-moving) classical trajectory as well as the corresponding ``classical'' loop quantum trajectory, i.e.\ the solution to the effective Friedmann equation (\ref{eq:LQCFriedmann}). Also depicted are coarse-grainings by ranges of values of the volume at different values of the scalar field for two histories. The first is a coarse-grained history $(\Delta\nu_{{cl}_1},\Delta\nu_{{cl}_2},\Delta\nu_{{cl}_3})$ describing a quantum universe peaked on an expanding classical trajectory. The second history $(\Delta\nu_{\gamma_{\text{sc}}^1},\Delta\nu_{\gamma_{\text{sc}}^2},\ldots)$ describes a loop quantum trajectory characterized by a bounce, which is peaked on symmetrically related expanding and collapsing classical trajectories at large $|\phi|$. The right-hand plot shows the same trajectories as well as a coarse-graining suitable for studying the probability that the universe assumes large or small volume. The volume $\nu$ is partitioned into the range $\Delta\nu^*=[0,\nu^*]$ (the shaded region in the figure) and its complement $\overline{\Delta\nu^*}=(\nu^*,\infty)$. The quantum universe may be said to attain small volume if the probability for the branch wave function $\ket{\Psi_{\smash{\Delta\nu^*}}(\phi)}$ is near unity while that for $\ket{\Psi_{\smash{\overline{\Delta\nu^*}}}(\phi)}$ is near zero for arbitrary choices of $\nu^*$. Conversely, the universe may be said to attain arbitrarily large volume over some range of $\phi$ if the probability for $\ket{\Psi_{\smash{\overline{\Delta\nu^*}}}(\phi)}$ is near unity for arbitrary choice of $\nu^*$ over that range of $\phi$.
}% \label{fig:msstraj}% \end{figure} The simplest way in which decoherence of cosmological trajectories obtains is if the quantum state remains peaked along a particular individual trajectory, here imagined as a classical trajectory for convenience -- where ``classical'' means a bouncing solution of the effective Friedmann equation (\ref{eq:LQCFriedmann}) in the case of sLQC, which approaches the corresponding general relativistic solutions at large volume. (See the remarks in Section \ref{sec:quasiclassical} immediately below.) Consider a coarse-graining on a set of slices $\{\phi_1,\phi_2,\ldots,\phi_n\}$ by a set of intervals in volume $\{\Delta\nu_{{cl}_k},k=1\ldots n\}$ chosen in such a way that on each slice $\phi_k$ one of the ranges $\Delta\nu_{{cl}_k}$ straddles the trajectory. If $\Delta\nu_{{cl}_k}$ is wider than the width of the quantum state at $\phi_k$, then essentially the only non-zero branch wave function is \begin{equation} \ket{\Psi_{cl}} = \Projsupb{\nu}{\Delta\nu_{{cl}_n}}(\phi_n) \cdots \Projsupb{\nu}{\Delta\nu_{{cl}_2}}(\phi_2) \Projsupb{\nu}{\Delta\nu_{{cl}_1}}(\phi_1)\ket{\Psi}. \label{eq:classtrajbwf} \end{equation} The family of histories (classical, non-classical) therefore decoheres and quasiclassical behavior for such a state is predicted with probability one. Attempts to specify the trajectory too finely (relative to the width of the quantum state) destroy the decoherence. This is important: quantum theory \emph{has no predictions} for the trajectory followed \emph{even for quasiclassical states} if it is too precisely specified. (This is merely the uncertainty principle manifesting itself in the context of quantum cosmology.) A more detailed discussion of what it means for a state to ``follow a trajectory'' in generalized quantum theory is given in Ref.~\refcite{CS13a}. A similar but more sophisticated analysis may be given for WKB states; see Refs.~\refcite{lesH,CH04} for details.
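The decoherence of sufficiently coarse-grained trajectory histories for a peaked state, Eq.~(\ref{eq:classtrajbwf}), can be illustrated with a deliberately simple toy: a narrow packet translated rigidly by a cyclic-shift ``evolution'' (a stand-in for $U(\Delta\phi)$, not the actual minisuperspace dynamics). Windows wider than the packet yield a single non-vanishing branch, while a window narrower than the packet does not:

```python
import numpy as np

# Minimal sketch of Eq. (eq:classtrajbwf): a state sharply peaked on a
# deterministic "trajectory" decoheres for sufficiently coarse windows.
# phi-evolution is modeled as a cyclic shift -- illustrative only.
d = 64
shift = np.roll(np.eye(d), 1, axis=0)      # one site per unit of "phi"

# State: narrow (3-site) packet centered at site 10, normalized
psi = np.zeros(d)
psi[9:12] = [0.5, np.sqrt(0.5), 0.5]

def window_proj(center, half_width):
    """Projector onto sites within half_width of center (mod d)."""
    P = np.zeros((d, d))
    sites = [(center + k) % d for k in range(-half_width, half_width + 1)]
    P[sites, sites] = 1.0
    return P

def heis_proj(center, half_width, t):
    Ut = np.linalg.matrix_power(shift, t)
    return Ut.T @ window_proj(center, half_width) @ Ut

# Coarse windows (half-width 4 >> packet width) straddling the trajectory
slices = [5, 15, 30]                       # "phi_k"
centers = [10 + t for t in slices]         # trajectory center at each slice

C_cl = np.eye(d)
for t, c in zip(slices, centers):
    C_cl = heis_proj(c, 4, t) @ C_cl       # class operator, "classical" history

branch_cl = C_cl @ psi
branch_rest = psi - branch_cl              # complementary branch

assert np.isclose(np.linalg.norm(branch_cl), 1.0)    # p(classical) = 1
assert np.isclose(np.linalg.norm(branch_rest), 0.0)  # other branch vanishes
# A window narrower than the packet destroys this:
fine = heis_proj(10 + 5, 0, 5) @ psi
assert 0 < np.linalg.norm(fine) < 1
```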
In this review we will describe several important examples of scenarios involving quantum histories of volume evolving with the scalar field $\phi$. \subsection{Quasiclassical trajectories} \label{sec:quasiclassical} We intend by ``quasiclassical state'' to mean a quantum state that is peaked on some classical trajectory (\ref{eq:FLRWclasssoln}) at large volume. (An explicit example of such a state is given in Eq.~(3.37) of Ref.~\refcite{CS10c}.) We characterize quasiclassical states in this way for the following reason. Recall that in the Wheeler-DeWitt quantization, generic quantum states (\ref{eq:WdWQWFdef}) are a superposition of orthogonal and independent right- and left-moving (expanding and contracting) branches. In loop quantum cosmology, on the other hand, \emph{all} quantum states approach a specific symmetric superposition of right- and left-moving Wheeler-DeWitt states at large volume, corresponding to the quantum ``bounce'' of loop quantum states. A ``quasiclassical'' state in the Wheeler-DeWitt quantization is one which is peaked on either an expanding or contracting classical solution at large volume for all values of $\phi$. In the generic case that the state is a superposition of right- and left-moving branches, such a state will be peaked on a contracting classical solution as $\phi\rightarrow-\infty$, and an expanding solution as $\phi\rightarrow\infty$. If the state is purely right- or left-moving, then only one of these cases will hold. By contrast, since loop quantum states \emph{always} approach a superposition of right- and left-moving branches, ``quasiclassical'' states in LQC will be peaked on a solution of the ``effective'' Friedmann equation (\ref{eq:LQCFriedmann}), which approaches a classical contracting solution at large volume as $\phi\rightarrow-\infty$ and an expanding solution as $\phi\rightarrow+\infty$, as in the generic Wheeler-DeWitt case.
By the analysis of states which remain peaked on a trajectory given above, then, a sufficiently coarse-grained family of trajectory histories will decohere for such ``quasiclassical'' states, predicting quasiclassical behavior at large volume with unit probability, so long as the trajectory is not too finely specified. (See Refs.~\refcite{CS10c,CS13a} for a more in-depth discussion.) (Note that this usage therefore does \emph{not} imply that a ``quasiclassical'' state necessarily follows a classical (i.e.\ general relativistic) trajectory throughout its evolution. In the Wheeler-DeWitt quantum theory, such states will in fact track classical solutions all the way to the singularity, while in the case of sLQC, states which track classical general relativistic trajectories at large volume are connected by the ``bounce'' in the deep quantum regime along a solution to the effective Friedmann equation.) \subsection{Volume singularity} \label{sec:volsing} In order to assess whether or not a given quantum cosmological model is singular, a specific criterion for singularity must be given. In these simple homogeneous isotropic models there are relatively few available, and they are clearly connected: does the volume $\nu$ of the fiducial cell become 0? Does the curvature $b$ diverge? Does the matter density $\rho$ (given classically by (\ref{eq:rhodef})) % diverge? The last definition of ``quantum singularity'' seems the most physical, but it does require adopting a specific definition for the matter energy density operator. Several choices are considered in Ref.~\refcite{acs:slqc}.
For each of these choices essentially the same underlying physics comes into play: as $p_{\phi}$ is a constant of the motion in both the classical and the quantum theory, and the density $\rho|_{\phi}$ is essentially a ratio of (the square of) $p_{\phi}$ to (the square of) volume, $\rho|_{\phi}$ will remain bounded above if the volume remains bounded below, and will diverge if the volume becomes zero. These arguments are made precise in Refs.~\refcite{aps,aps:improved,acs:slqc}, where it is shown that the expectation value of the volume inevitably becomes 0 for generic states in the Wheeler-DeWitt theory, and therefore the expectation value of the matter energy density diverges. By contrast, in sLQC the expectation value of the density is bounded above by the critical value (\ref{eq:rhocritdef}) given by the ``moral'' argument of Ref.~\refcite{dac13a} and Eq.~(\ref{eq:rholim}) due to the linear scaling of the ultraviolet cutoff on the constraint eigenfunctions; see Refs.~\refcite{aps,aps:improved,acs:slqc,ashsingh11}, and Ref.~\refcite{dac13a} for a different proof. In Refs.~\refcite{CS10c,CS13a} the question of the singularity of the universe is addressed in the consistent histories framework, in which quantum questions concerning multiple values of $\phi$ -- i.e.\ concerning the singularity of quantum \emph{histories} of the universe -- may be framed and confidently answered. The most direct course would be to consider ranges of eigenvalues of the density operator and show that histories for which the density diverges have probability zero (or unity, in the Wheeler-DeWitt theory.) However, this course is not available as the spectrum of the density operator is not currently known in either theory. 
Fortunately, as discussed above, at least in these models the behavior of the matter density is directly tied to the behavior of the volume of the fiducial cell, and so the analysis of the singularity of these quantum universes can instead be given in terms of the volume.% \footnote{Alternatively, an analysis analogous to what we give here can be performed for the curvature $b$ conjugate to the volume. } % We will show that in the Wheeler-DeWitt quantum theory \emph{all} states are inevitably singular, while in the loop quantization, generic quantum states -- quasiclassical or not -- ``bounce'' from large volume to large volume. This, of course, was already known. However, the consistent histories analysis brings new, potentially sharper tools to the problem, and sheds additional light on the theory, as will be discussed in the conclusion. The analysis proceeds as follows. The question of whether the universe becomes singular is, in these models, equivalent to the question of whether the volume becomes 0. Therefore, partition $|\nu|$ (recall all quantum states are symmetric in $\nu$) into complementary, disjoint ranges $\{\Delta\nu^*,\overline{\Delta\nu^*}\}$, where $\Delta\nu^*=[0,\nu^*]$ and its complement $\overline{\Delta\nu^*}=(\nu^*,\infty)$, with $\nu^*$ any arbitrary volume. (See Fig.\ \ref{fig:smallvol}.) The universe has small volume at some scalar field value $\phi^*$ if $|\nu|\in\Delta\nu^*$ at $\phi^*$, and large volume if $|\nu|\in\overline{\Delta\nu^*}$.
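The probabilities attached to the partition $\{\Delta\nu^*,\overline{\Delta\nu^*}\}$ are straightforward to compute for model wave functions. As an illustrative sketch (a Gaussian profile in $\ln\nu$ peaked on the expanding trajectory $\nu=e^{\kappa\phi}$ is assumed here; it is not an exact Wheeler-DeWitt solution), one can watch the small-volume probability tend to unity in the ``past'' and to zero in the ``future'', the behavior summarized below in the purely right-moving case:

```python
import numpy as np

# Numerical sketch of small-volume probabilities for a packet peaked on an
# expanding classical trajectory nu(phi) = exp(kappa*phi).  The Gaussian
# profile in x = ln(nu) is illustrative only, not an exact solution.
kappa = np.sqrt(12 * np.pi)        # units with G = 1; only the sign matters
x = np.linspace(-60, 60, 20001)    # x = ln(nu)
dx = x[1] - x[0]

def p_small_volume(phi, nu_star=1.0, width=2.0):
    """Probability that |nu| lies in [0, nu_star] at scalar field value phi."""
    psi2 = np.exp(-(x - kappa * phi) ** 2 / (2 * width ** 2))
    psi2 /= psi2.sum() * dx                       # normalize |psi|^2
    return psi2[x <= np.log(nu_star)].sum() * dx  # mass in [0, nu_star]

assert p_small_volume(-5.0) > 0.999   # expanding branch singular in the "past"
assert p_small_volume(+5.0) < 0.001   # ... and at large volume in the "future"
```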
The corresponding branch wave functions are \begin{equation} \ket{\Psi_{\Delta\nu^*|_{\phi^*}}(\phi)}= U(\phi-\phi^*)P^{\nu}_{\Delta\nu^*}\ket{\Psi(\phi^*)} \label{eq:volbwfdef} \end{equation} and its complement; the probability the volume is in $\Delta\nu^*$ given the state $\ket{\Psi}$ is then \begin{eqnarray} p_{\Delta\nu^*}(\phi) & = & \bracket{\Psi_{\Delta\nu^*|_{\phi^*}}(\phi)}{\Psi_{\Delta\nu^*|_{\phi^*}}(\phi)} \nonumber\\ & = & \begin{cases} \int_{0}^{\nu^*}\!d\nu\, |\psi^{{\scriptscriptstyle\text{WdW}}}(\nu,\phi)|^2 & \text{Wheeler-DeWitt} \\ \sum_{|\nu|\in\Delta\nu^*} |\psi(\nu,\phi)|^2 & \text{sLQC} \ , \end{cases} \label{eq:volprobdef} \end{eqnarray} with $p_{\Delta\nu^*}(\phi)=1-p_{\overline{\Delta\nu^*}}(\phi)$. These probabilities are evaluated explicitly for quasiclassical Wheeler-DeWitt and loop quantum states in Refs.~\refcite{CS10c,CS13a}. We are interested here, however, in the question of the singularity of such states. We begin with the Wheeler-DeWitt theory. It is shown in detail in Ref.~\refcite{CS10c} that for purely left- or right-moving states \begin{align} \lim_{\phi\rightarrow-\infty} p^L_{\Delta\nu^*}(\phi) &= 0 & \lim_{\phi\rightarrow+\infty} p^L_{\Delta\nu^*}(\phi) &= 1 \nonumber\\ \lim_{\phi\rightarrow-\infty} p^R_{\Delta\nu^*}(\phi) &= 1 & \lim_{\phi\rightarrow+\infty} p^R_{\Delta\nu^*}(\phi) &= 0 \label{eq:pLR} \end{align} for any fixed $\nu^*$. This result may be understood most easily by examining Eqs.~(\ref{eq:WdWPsiLRdef}). From the Riemann-Lebesgue lemma it is clear that \emph{all} right-/left-moving states are ``sucked in'' to the singularity along the classical trajectories (\ref{eq:FLRWclasssoln}). This means, perhaps unsurprisingly, that generic expanding states are inevitably singular as $\phi\rightarrow -\infty$, and contracting states are inevitably singular as $\phi\rightarrow+\infty$. Two points merit emphasis. 
First, while the result is expected, it has been here \emph{derived} within a well-defined framework for quantum prediction. Second, there is the question of the role of the limits $|\phi|\rightarrow\infty$. It may indeed be the case that some quantum states become singular at finite $\phi$. The limit $|\phi|\rightarrow\infty$ serves to show that the singularity is a generic prediction for \emph{all} states in the theory. It is noteworthy, however, that generic states in the Wheeler-DeWitt theory are actually \emph{superpositions} of expanding and contracting states. What are the corresponding probabilities then? Indeed, if one writes \begin{equation} \ket{\Psi} = \sqrt{p_L}\,\ket{\Psi_L} + \sqrt{p_R}\,\ket{\Psi_R}\ , \label{eq:psiLRcatdef} \end{equation} one finds \begin{equation} p_{\Delta\nu^*}(\phi) = p_L\, p^L_{\Delta\nu^*}(\phi) + p_R\, p^R_{\Delta\nu^*}(\phi), \label{eq:pvolcat} \end{equation} with $p^{L,R}_{\Delta\nu^*}(\phi)$ given as above, and \begin{equation} \lim_{\phi\rightarrow -\infty} p_{\Delta\nu^*}(\phi) = p_R \qquad \mathrm{and}\qquad \lim_{\phi\rightarrow +\infty} p_{\Delta\nu^*}(\phi) = p_L. \label{eq:pvolcatlim} \end{equation} See Fig.\ \ref{fig:pvolplot} for an example (and Ref.~\refcite{CS10c} for an explicit formula for quasiclassical states.) By contrast, in loop quantum cosmology one finds\cite{CS13a} instead that, for generic states \begin{align} \lim_{\phi\rightarrow-\infty} p_{\Delta\nu^*}(\phi) &= 0 & \lim_{\phi\rightarrow+\infty} p_{\Delta\nu^*}(\phi) &= 0 \nonumber\\ \lim_{\phi\rightarrow-\infty} p_{\overline{\Delta\nu^*}}(\phi) &= 1 & \lim_{\phi\rightarrow+\infty} p_{\overline{\Delta\nu^*}}(\phi) &= 1, \label{eq:probvolslim} \end{align} from which it is clear that loop quantum states invariably bounce. The argument is once again based on the behavior of the eigenfunctions $e^{{\scriptscriptstyle (s)}}_k(\nu)$ inserted into (\ref{eq:LQCQWFdef}).
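The weighted combination of branch probabilities and its limits can be checked in the same illustrative Gaussian model (assumed profiles, not exact Wheeler-DeWitt solutions): with the left- and right-moving packets peaked far apart, cross terms are negligible and the small-volume probability interpolates between $p_R$ and $p_L$:

```python
import numpy as np

# Sketch: for a superposition of expanding (R) and contracting (L) packets
# with weights p_R, p_L, the small-volume probability tends to p_R as
# phi -> -infty and to p_L as phi -> +infty.  Gaussian profiles in ln(nu)
# are illustrative stand-ins for the actual wave functions.
kappa = np.sqrt(12 * np.pi)
x = np.linspace(-60, 60, 20001)    # x = ln(nu)
dx = x[1] - x[0]

def branch_prob(phi, sign, nu_star=1.0, width=2.0):
    """Small-volume probability for a packet on nu = exp(sign*kappa*phi)."""
    psi2 = np.exp(-(x - sign * kappa * phi) ** 2 / (2 * width ** 2))
    psi2 /= psi2.sum() * dx
    return psi2[x <= np.log(nu_star)].sum() * dx

p_L, p_R = 0.3, 0.7                  # weights of the superposition

def p_small(phi):
    # The L/R packets are peaked far apart, so cross terms are negligible
    return p_L * branch_prob(phi, -1) + p_R * branch_prob(phi, +1)

assert abs(p_small(-5.0) - p_R) < 1e-3   # limit phi -> -infty gives p_R
assert abs(p_small(+5.0) - p_L) < 1e-3   # limit phi -> +infty gives p_L
```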
The UV cutoff in the eigenfunctions ensures that for \textbf{any} finite volume $\nu^*$, the small-volume probability $p_{\Delta\nu^*}(\phi)$ vanishes for \emph{all} states in sLQC as $|\phi|\rightarrow\infty$, again on account of Riemann-Lebesgue. In addition, essentially because the eigenfunctions vanish at $\nu=0$, the probability the volume of the fiducial cell is precisely 0 is 0 for \emph{all} $\phi$: $p_{\nu=0}(\phi) = 0$. These universes never become singular for any value of $\phi$. \subsection{Quantum bounce} \label{sec:qbounce} We have shown how, within a consistent histories framework, it may be demonstrated that the probability that the universe achieves zero volume is zero, while the probability that generic quantum states achieve arbitrarily large volume in both limits $|\phi|\rightarrow\infty$ is unity: \emph{all} quantum states in sLQC ``bounce'' from large volume to large volume. By contrast, in the Wheeler-DeWitt quantum theory, we showed that all right-moving (expanding) states inevitably assume zero volume as $\phi\rightarrow -\infty$, and contracting states do so as $\phi\rightarrow +\infty$, and are therefore singular. However, a \emph{superposition} of expanding and contracting states leads to a probability for the universe having small volume that is in general never unity, just as it is never zero. This suggests that, in the Wheeler-DeWitt quantum theory, such a superposition state leads to a non-zero probability that it, too, might ``bounce'' from large volume to large volume (with probability $p_{\mathrm{bounce}}=p_L\cdot p_R$). This possibility is not realized, however -- \emph{all} quantum states in the Wheeler-DeWitt quantum theory are singular, just as all states in sLQC are non-singular.
To understand why this is so it is important to recognize that the question of whether or not a quantum universe bounces \emph{is not a question about a single value of $\phi$} -- it is a question about a coarse-grained \emph{\textbf{trajectory}}: does the universe have a large volume at (at least) \emph{two} values of $\phi$, one in the ``past'' and one in the ``future''? This is a \emph{genuinely quantum} question, and only has a definite quantum answer in the instance that the corresponding family of histories decoheres.\cite{CS10a,CS10c,CS13a} We shall show, in fact, that in an appropriate limit it does, and that indeed loop quantum universes are generically non-singular and Wheeler-DeWitt universes are singular. To pose the question concretely, consider partitions of the volume $\{\Delta\nu^*_1,\overline{\Delta\nu^*_1}\}$ and $\{\Delta\nu^*_2,\overline{\Delta\nu^*_2}\}$ on two minisuperspace $\phi$-slices $\phi_1$ and $\phi_2$. The class operator describing a ``bounce'' is then \begin{equation} C_{\mathrm{bounce}}(\phi_1,\phi_2) \,=\, C_{\overline{\Delta\nu^*_1};\overline{\Delta\nu^*_2}} \, = \, \Projsupb{\nu}{\overline{\Delta\nu^*_1}}(\phi_1) \Projsupb{\nu}{\overline{\Delta\nu^*_2}}(\phi_2). \label{eq:Cbouncedef}\\ \end{equation} The class operator for the complementary history in which the universe is at arbitrarily small volume at $\phi_1$, $\phi_2$, or both, is then \begin{eqnarray} C_{\mathrm{sing}}(\phi_1,\phi_2) & = & \mathds{1} - C_{\mathrm{bounce}}(\phi_1,\phi_2) \nonumber\\ & = & C_{\smash{\Delta\nu^*_1;\Delta\nu^*_2}} + C_{\smash{\Delta\nu^*_1;\overline{\Delta\nu^*_2}}} + C_{\smash{\overline{\Delta\nu^*_1};\Delta\nu^*_2} }. 
\label{eq:Csingdef} \end{eqnarray} By arguments essentially similar to those leading to (\ref{eq:pLR}) above, one shows that in the Wheeler-DeWitt quantum theory the branch wave functions corresponding to bouncing vs.\ singular cosmologies are \begin{eqnarray} \ket{\Psi_{\text{sing}}(\phi)} & = & U(\phi-\phi_o) \lim_{\substack{\phi_1\rightarrow -\infty\\ \phi_2\rightarrow +\infty }} C_{\mathrm{sing}}^\dagger(\phi_1,\phi_2)\ket{\Psi} = \ket{\Psi(\phi)} \nonumber\\ \ket{\Psi_{\text{bounce}}(\phi)} & = & U(\phi-\phi_o) \lim_{\substack{\phi_1\rightarrow -\infty\\ \phi_2\rightarrow +\infty }} C_{\text{bounce}}^{\dagger} (\phi_1,\phi_2) \ket{\Psi} = 0 ~. \label{eq:bwfdefb-s} \end{eqnarray} Thus the alternative histories (bounce,singular) \emph{decohere} in this limit, $d(\mathrm{bounce},\mathrm{singular})= \bracket{\Psi_{\mathrm{sing}}}{\Psi_{\mathrm{bounce}}}=0$, and \begin{eqnarray} p_{\mathrm{sing}} & = & \bracket{\Psi_{\mathrm{sing}}}{\Psi_{\mathrm{sing}}} \nonumber\\ & = & \bracket{\Psi}{\Psi} \nonumber\\ & = & 1. \label{eq:psing} \end{eqnarray} Wheeler-DeWitt quantum cosmological models are \emph{invariably} singular, regardless of state, in spite of the potential promise of Fig.\ \ref{fig:pvolplot} that they sometimes might not be. Similarly, in loop quantum cosmology one finds instead that \begin{eqnarray} \ket{\Psi_{\mathrm{bounce}}} & = & C_{\mathrm{bounce}}\ket{\Psi} \nonumber\\ & = & \ket{\Psi} \label{eq:psibounce} \end{eqnarray} while \begin{eqnarray} \ket{\Psi_{\mathrm{sing}}} & = & C_{\mathrm{sing}}\ket{\Psi} \nonumber\\ & = & 0. \label{eq:psising} \end{eqnarray} Again, these histories decohere, but now $p_{\mathrm{bounce}} =1$, and the probability arbitrary loop quantum states are singular is 0. 
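The logic of the two-slice bounce/singular histories can be mimicked with rigidly translating packets (purely illustrative kinematics: the packet center follows $c(\phi)=\phi$ for a Wheeler-DeWitt expanding branch and $c(\phi)=|\phi|$ for a bouncing one; this is not the actual sLQC evolution). For rigid translation the Heisenberg projector product reduces to a product of indicator functions, making $p_{\mathrm{bounce}}$ easy to evaluate:

```python
import numpy as np

# Toy two-slice "bounce" histories for a rigidly translating packet
# psi(x, phi) = psi0(x - c(phi)).  c(phi) = phi mimics a Wheeler-DeWitt
# expanding branch (center reaches "zero volume" in the past); c(phi) =
# |phi| mimics a bounce.  Illustrative kinematics only, not sLQC dynamics.
x = np.linspace(-40, 40, 8001)
dx = x[1] - x[0]
psi0 = np.exp(-x**2 / 8.0)
psi0 /= np.sqrt((abs(psi0) ** 2).sum() * dx)

x_star = 2.0                    # boundary between "small" and "large" volume
phi1, phi2 = -10.0, +10.0       # early and late slices

def p_bounce(c):
    """Norm^2 of C_bounce|psi> = P_large(phi1) P_large(phi2) |psi>.

    For rigid translation, the Heisenberg projector P_large(phi) acts on
    psi0 as the indicator x > x_star - c(phi)."""
    branch = psi0 * (x > x_star - c(phi1)) * (x > x_star - c(phi2))
    return (abs(branch) ** 2).sum() * dx

wdw = lambda phi: phi            # expanding: small volume in the "past"
lqc = lambda phi: abs(phi)       # bouncing: large volume at both slices

assert p_bounce(wdw) < 1e-6      # Wheeler-DeWitt: p_sing = 1 - p_bounce ~ 1
assert p_bounce(lqc) > 0.9999    # sLQC analogue: p_bounce ~ 1, non-singular
```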
\section{Discussion} \label{sec:disc} We have here reviewed the formulation of a consistent histories approach to quantum theory for loop quantum cosmology, showing how the framework provides the structure necessary -- the decoherence functional -- to enable the theory to make consistent quantum predictions in the absence of measurements, external observers, or other similar apparatus typically required in quantum theory before one can make definite predictions. We have illustrated the application of the framework in showing how loop quantum cosmologies are non-singular for generic states, contrasting that striking result with the inevitable singularity of the Wheeler-DeWitt quantization of the same family of cosmological models. We have also made an effort to point to some other work on consistent histories formulations of quantum cosmological models, even though there was not sufficient space to review them here. Discussion of a few important points is in order. It is no secret that while physicists generally agree on how to \emph{do} quantum mechanics, the story is quite different when one has the temerity to inquire what it \emph{means}. We wish to emphasize that it is not necessary to commit to any particular ontology for quantum mechanics to acknowledge the centrality of the role of interference among alternative outcomes in arriving at quantum predictions. Indeed, destruction of this interference is the fundamental \emph{technical} role played by the classical idea of ``measurement'' in quantum theory via the postulated ``collapse'' of the wave function upon measurement of a property by an external agent.
Consistent histories quantum theory supplies, more or less, the \textbf{minimal} additional structure one requires to provide quantum theory with an objective, observer independent, purely \emph{internal} measure of this interference, that \emph{reproduces the predictions of ordinary measurement-based quantum theory} in traditional ``measurement situations'', but also \emph{extends it} to physical circumstances in which there is no meaningful notion of an ``external observer'', thus enabling quantum theory to make predictions concerning questions of fundamental physical interest such as in the early universe. In particular, we emphasize that apart from introducing the decoherence functional, the objective measure of interference derived from the quantum state (an object already present in the conventional theory), consistent histories quantum theory \emph{is still quantum theory}, with all of the interpretational challenges that implies.% \footnote{Some of the foundational questions the consistent histories framework does not by itself attempt to directly answer include the problem of outcomes, the basis problem, and the meaning of ``probability'' -- among others. } % It is neither the place nor our intention here to analyze the limitations of the consistent histories framework as a complete answer to the question, ``What is the quantum mechanical account of reality?'' It is our view that while the concepts of consistent histories -- even supplemented by physical mechanisms and ideas such as environmental decoherence, envariance, and ``quantum Darwinism''\cite{giulini,schlosshauer07,zurek09a,RZZ16a} -- are at best only a \emph{partial} answer to this question, we nonetheless believe it likely they will play a role in an eventual picture. We have just alluded to the fact that the consistent histories framework does not offer a fresh answer to the ``true'' meaning of the probabilities quantum theory supplies. 
Nonetheless, we take it as uncontroversial that the meaning of a probability that is unity or zero is not ambiguous -- the theory predicts that the thing either does, or does not, happen, with certainty.\cite{hartle88a,hartle91a,sorkin94,sorkin97a} These are the \textbf{definite} predictions of a theory. It is for this reason we have emphasized the $|\phi|\rightarrow\infty$ limit in our assessment of the probabilities that Wheeler-DeWitt or loop quantum universes are (or are not) singular. It is in this limit that we are \emph{guaranteed} that \textbf{all} Wheeler-DeWitt states are singular, and that \textbf{all} loop quantum states bounce, quasiclassical or not. It may be natural to inquire whether one could have argued that Wheeler-DeWitt cosmologies are necessarily singular even for superposition states such as (\ref{eq:psiLRcatdef}) by arguing that the amplitude $\bracket{\Psi_L}{\Psi_R}=0$, without invoking the consistent histories formulation, and in spite of the fact that the ``single-$\phi$'' probability illustrated in Fig.\ \ref{fig:pvolplot} suggests otherwise. (Indeed, one finds such arguments in the classical literature on quantum cosmology.\cite{halliwell91:qcbu,kiefer12}) This is tempting, and is certainly the thrust of the consistent histories calculation itself. However, doing so \emph{assumes} that this amplitude may be interpreted as a probability for a sequence of events, and thereby glosses over one of the fundamental messages of quantum theory: \emph{\textbf{quantum amplitudes do not represent quantum probabilities unless interference among the alternative outcomes vanishes.}}% \footnote{It may be worth reiterating that this obtains generally when one considers amplitudes for \emph{sequences} of quantum events -- such as which slit a particle passes through in a two slit apparatus.
But such amplitudes for quantum histories are precisely the sort in which one is often interested in quantum cosmology, such as the amplitudes for a quantum bounce discussed in this review.\cite{CS10a,CS10b,CS10c,CS13a} } % Simply put, quantum mechanics says that \textbf{some questions do not have physically meaningful answers.} In ordinary laboratory applications of quantum mechanics, the act of gathering information about a system -- ``measurement'' -- supplies the physical mechanisms that destroy that interference. Amplitudes for outcomes which are measured do not interfere, and therefore may be interpreted as probabilities for those measured outcomes. If the measurement is not made, those amplitudes may \textbf{not} be consistently interpreted as probabilities for the unmeasured outcomes. The consistent histories perspective on quantum theory simply recognizes that amplitudes are not probabilities unless such interference vanishes, and supplies an objective measure of that interference that may be applied in the absence of laboratories and measurements in environments such as the very early universe, where there were certainly no observers present. Even so, physical mechanisms may exist which destroy interference among possible alternatives, and thereby enable quantum theory to assign definite probabilities to those alternatives. Indeed, additional degrees of freedom may supply a resource which effectively ``gathers information'' (i.e.\ creates records\cite{GMH90a,GMH90b,hartle91a,GMH93,halliwell99,RZZ16a}) about the physical alternatives of interest, leading to decoherence of those alternative histories and consequent ability to assign them meaningful probabilities. One would expect this to be the case in the actual physical universe, which even in a globally homogeneous and isotropic cosmology carries both geometric and matter degrees of freedom that may imprint information about quantum alternatives and lead to decoherence.
Matter density perturbations in the early universe are just such degrees of freedom, and a consistent histories analysis of such perturbations should lead to a coherent picture of the ``quantum-to-classical'' transition of inflationary perturbations.\cite{CS17a} Thus, the methods described in this review provide the tools to tell a consistent quantum story of the predictions of quantum gravitational theories in the early universe. In forthcoming work these methods will be applied to provide a similar analysis of spin-foam loop quantum cosmological models.\cite{CS13b} \section*{Acknowledgments} The author would like to thank P.\ Singh for teaching him loop quantum cosmology and for a fruitful collaboration. Portions of this work were supported by a grant from FQXi, for which we thank the Institute.
Our first algorithm modifies the training set by randomly perturbing each training point according to a certain distribution (see Section~\ref{sec:erm} for details). It then trains a (non-robust) PAC-learner (such as ERM) on the perturbed training set to find a hypothesis $h$. Finally, it outputs a smooth version of $h$. The smoothing step replaces $h(x)$ at each point $x$ with the majority label output by $h$ on the points around $x$. We show that for metric spaces of a fixed doubling dimension, this algorithm successfully learns in the tolerant setting~\modify{assuming tolerant realizability}. \begin{theorem}[Informal version of Theorem~\ref{thm:tpas_guarantee}] Let $(X, \d)$ be a metric space with doubling dimension $d$ and $\mathcal{H}$ a hypothesis class. Assuming~\modify{tolerant} realizability, $\mathcal{H}$ can be learned tolerantly in the adversarially robust setting using $O\left(\frac{(1+1/\gamma)^{O(d)}\mathrm{VC}(\mathcal{H})}{\varepsilon}\right)$ samples, where $\gamma$ encodes the amount of allowed tolerance, and $\varepsilon$ is the desired accuracy. \end{theorem} An interesting feature of the above result is the linear dependence of the sample complexity on $\mathrm{VC}(\mathcal{H})$. This is in contrast to the best known upper bound for the non-tolerant adversarial setting~\citep{MontasserHS19}, which depends on the \emph{dual VC-dimension} of the hypothesis class and in general is exponential in $\mathrm{VC}(\mathcal{H})$. Moreover, this is the first PAC-type guarantee for the general perturb-and-smooth paradigm, indicating that tolerant adversarial learning is the ``right'' learning model for studying these approaches. While the above method enjoys simplicity and can be computationally efficient, one downside is that its sample complexity grows exponentially with the doubling dimension. For instance, such an algorithm cannot be used on high-dimensional data in Euclidean space.
Another limitation is that the guarantee holds only in the (robustly) realizable setting. In the second main part of our submission (Section \ref{sec:agn}) we show that, surprisingly, these limitations can be overcome by incorporating ideas from the tolerant framework and perturb-and-smooth algorithms into a novel compression scheme for robust learning. The resulting algorithm improves the dependence on the doubling dimension, and works in the general agnostic setting. \begin{theorem}[Informal version of Corollary~\ref{Thm:compress_metric}] Let $(X, \d)$ be a metric space with doubling dimension $d$ and $\mathcal{H}$ a hypothesis class. Then $\mathcal{H}$ can be learned tolerantly in the adversarially robust setting using $\widetilde{O}\left(\frac{O(d)\mathrm{VC}(\mathcal{H})\log(1+1/\gamma)}{\varepsilon^2}\right)$ samples, where $\widetilde{O}$ hides logarithmic factors, $\gamma$ encodes the amount of allowed tolerance, and $\varepsilon$ is the desired accuracy. \end{theorem} This algorithm exploits the connection between sample compression and adversarially robust learning~\cite{MontasserHS19}. However, unlike~\cite{MontasserHS19}, our new compression scheme sidesteps the dependence on the dual VC-dimension (refer to the discussion at the end of Section~\ref{sec:agn} for more details). As a result, we get an exponential improvement over the best known (nontolerant) sample complexity in terms of dependence on VC-dimension. \section{Related work} PAC-learning for adversarially robust classification has been studied extensively in recent years~\citep{cullina2018pac, awasthi2019robustness, MontasserHS19, feige2015learning, attias2018improved, montasser2020efficiently, ashtiani2020black}. These works provide learning algorithms that guarantee low generalization error in the presence of adversarial perturbations in various settings. The most general result is due to~\cite{MontasserHS19}, and is proved for general hypothesis classes and perturbation sets. 
All of the above results assume that the learner knows the kinds of perturbations allowed for the adversary. Some more recent papers have considered scenarios where the learner does not even need to know that. \cite{goldwasser2020beyond} allow the adversary to perturb test data in unrestricted ways and are still able to provide learning guarantees. The catch is that their approach only works in the transductive setting and only if the learner is allowed to abstain from making a prediction on some test points. \cite{montasser2021adversarially} consider the case where the learner needs to infer the set of allowed perturbations by observing the actions of the adversary. Tolerance was introduced by~\cite{ashtiani2020black} in the context of certification. They provide examples where certification is not possible unless we allow some tolerance. \cite{montasser2021transductive} study transductive adversarial learning and provide a ``tolerant'' guarantee. Note that unlike our work, the main focus of that paper is on the transductive setting. Moreover, they do not specifically study tolerance with respect to metric perturbation sets. Without a metric, it is not meaningful to expand perturbation sets by a factor $(1+\gamma)$ (as we do in our definition of tolerance). Instead, they expand their perturbation sets by applying two perturbations in succession, which is akin to setting $\gamma = 1$. In contrast, our results hold in the more common inductive setting, and capture a more realistic setting where $\gamma$ can be any (small) real number larger than zero. \modify{Subsequent to our work,~\cite{bhattacharjee2022robust} study adversarially robust learning with tolerance for ``regular'' VC-classes and show that a simple modification of robust ERM achieves a sample complexity polynomial in both VC-dimension and doubling dimension.
In a similar vein,~\cite{raman2022probabilistically} identify a more general property of hypothesis classes for which robust ERM is sufficient for adversarially robust learning with tolerance.} Like many recent adversarially robust learning algorithms~\citep{feige2015learning, attias2018improved}, our first algorithm relies on calls to a non-robust PAC-learner. \cite{montasser2020reducing} formalize the question of reducing adversarially robust learning to non-robust learning and study finite perturbation sets of size $k$. They show a reduction that makes $O(\log^2{k})$ calls to the non-robust learner and also prove a lower bound of $\Omega(\log{k})$. It will be interesting to see if our algorithms can be used to obtain better bounds for the tolerant setting. Our first algorithm makes one call to the non-robust PAC-learner at training time, but needs to perform potentially expensive smoothing for making actual predictions (see Theorem~\ref{thm:tpas_guarantee}). \modify{A related line of work studies the smallest achievable robust loss for various distributions and hypothesis classes. For example, \cite{bubeck2021universal} show that hypothesis classes with low robust loss must be overparametrized. \cite{yang2020closer} explore real-world datasets and provide evidence that they are separable and therefore there must exist locally Lipschitz hypotheses with low robust loss. Note that the existence of such hypotheses does not immediately imply that PAC-learning is possible.} The techniques of randomly perturbing the training data and smoothing the output classifier have been extensively used in practice and have shown good empirical success. Augmenting the training data with some randomly perturbed samples was used for handwriting recognition as early as the work of~\cite{yaeger1996effective}. More recently, ``stability training'' was introduced by~\cite{zheng2016improving} for state-of-the-art image classifiers where training data is augmented with Gaussian perturbations.
Empirical evidence was provided that the technique improved the accuracy against naturally occurring perturbations. Augmentations with non-Gaussian perturbations of a large variety were considered by~\cite{hendrycks2019augmix}. Smoothing the output classifier using random samples around the test point is a popular technique for producing \emph{certifiably} robust classifiers. A certification, in this context, is a guarantee that given a test point $x$, all points within a certain radius of $x$ receive the same label as $x$. Several papers have provided theoretical analyses to show that smoothing produces certifiably robust classifiers~\citep{cao2017mitigating, CohenRK19, lecuyer2019certified, li2019certified, liu2018towards, SalmanLRZZBY19, levine2020robustness},~\modify{whereas others have identified cases where smoothing does not work~\cite{yang2020randomized, blum2020random}}. However, to the best of our knowledge, a PAC-type guarantee has not been shown for any algorithm that employs training data perturbations or output classifier smoothing, and our paper provides the first such analysis. \section{Notations and setup} \label{s:notations} We denote by $X$ the input domain and by $Y=\{0,1\}$ the binary label space. We assume that $X$ is equipped with a metric $\d$. A hypothesis $h:X\to Y$ is a function that assigns a label to each point in the domain. A hypothesis class $\H$ is a set of such hypotheses. For a sample $S = ((x_1, y_1), \ldots, (x_n, y_n))\in (X\times Y)^n$, we use the notation $S^X = \{x_1, x_2, \ldots, x_n \}$ to denote the collection of domain points $x_i$ occurring in $S$. The binary (also called 0-1) loss of $h$ on data point $(x,y)\in X\times Y$ is defined by \[ \ell^{0/1}(h, x, y) = \indct{h(x) \neq y}, \] where $\indct{\cdot}$ is the indicator function. Let $P$ be a probability distribution over $X\times Y$.
Then the \emph{expected binary loss} of $h$ with respect to $P$ is defined by \[ \bLo{P} (h) = \mathbb{E}_{(x,y)\sim P} [\ell^{0/1}(h, x, y)]. \] Similarly, the \emph{empirical binary loss} of $h$ on sample $S = ((x_1, y_1), \ldots, (x_n, y_n))\in (X\times Y)^n$ is defined as $\bLo{S}(h) = \frac{1}{n}\sum_{i=1}^n \ell^{0/1}(h, x_i, y_i)$. We also define the \emph{approximation error} of $\mathcal{H}$ with respect to $P$ as $\bLo{P} (\mathcal{H}) = \inf_{h\in \mathcal{H}}\bLo{P} (h)$. A \emph{learner} ${\mathcal A}$ is a function that takes in a finite sequence of labeled instances $S = ((x_1, y_1), \ldots, (x_n, y_n))$ and outputs a hypothesis $h = {\mathcal A}(S)$. The following definition abstracts the notion of PAC-learning~\cite{vapnikcherv71, Valiant84}. \begin{definition}[PAC-learner]\label{def:learn} Let $\P$ be a set of distributions over $X\times Y$ and $\H$ a hypothesis class. We say ${\mathcal A}$ PAC-learns $(\H, \P)$ with $m_{\mathcal A}: (0,1)^2\to \mathbb{N}$ samples if the following holds: for every distribution $P\in \P$ over $X\times Y$, and every $\varepsilon,\delta \in (0,1)$, if $S$ is an i.i.d.\ sample of size at least $m_{\mathcal A}(\varepsilon, \delta)$ from $P$, then with probability at least $1-\delta$ (over the randomness of $S$) we have \[ \Lo{P}({\mathcal A}(S)) \leq \Lo{P}(\mathcal{H}) + \varepsilon. \] ${\mathcal A}$ is called an \emph{agnostic learner} if $\P$ is the set of all distributions on $X\times Y$, and a \emph{realizable learner} if $\P=\{P:\Lo{P}(\mathcal{H}) = 0\}$. \end{definition} The smallest function $m: (0,1)^2\to \mathbb{N}$ for which there exists a learner ${\mathcal A}$ that satisfies the above definition with $m_{{\mathcal A}} = m$ is referred to as the (realizable or agnostic) \emph{sample complexity} of learning $\mathcal{H}$. The existence of sample-efficient PAC-learners for VC classes is a standard result~\cite{vapnikcherv71}. We state the results formally in Appendix~\ref{app_pac_learning}.
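To make Definition~\ref{def:learn} concrete, the following sketch implements a realizable ERM learner for the toy class of one-dimensional thresholds $h_t(x)=\indct{x\geq t}$. The function names and the data-generating threshold are illustrative only and not part of the formal setup.

```python
import random

def erm_threshold(sample):
    """ERM for 1-D thresholds h_t(x) = 1[x >= t]: scan candidate cut
    points and keep one with the smallest empirical 0/1 loss."""
    xs = sorted({x for x, _ in sample})
    candidates = [float("-inf")] + xs + [float("inf")]
    best_t, best_loss = None, float("inf")
    for t in candidates:
        loss = sum(int(x >= t) != y for x, y in sample) / len(sample)
        if loss < best_loss:
            best_t, best_loss = t, loss
    return lambda x: int(x >= best_t)

# A realizable distribution: labels generated by the true threshold t = 0.5.
random.seed(0)
S = [(x, int(x >= 0.5)) for x in (random.random() for _ in range(200))]
h = erm_threshold(S)
empirical_loss = sum(h(x) != y for x, y in S) / len(S)
```

Since the sample is realizable, the scan always finds a threshold with zero empirical loss, matching the realizable case of Definition~\ref{def:learn}.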
\subsection{Tolerant adversarial PAC-learning} Let $\mathcal{U}:X\to 2^X$ be a function that maps each point in the domain to the set of its ``admissible'' perturbations. We call this function the \emph{perturbation type}. The adversarial loss of $h$ with respect to $\mathcal{U}$ on $(x,y)\in X\times Y$ is defined by \[ \rlo{\mathcal{U}}(h, x, y) = \max_{z\in\mathcal{U}(x)} \{\ell^{0/1}(h, z, y)\}. \] The \emph{expected adversarial loss} with respect to $P$ is defined by $\rLo{\mathcal{U}}{P}(h)=\mathbb{E}_{(x,y)\sim P}\rlo{\mathcal{U}}(h, x, y)$. The \emph{empirical adversarial loss} of $h$ on sample $S = ((x_1, y_1), \ldots, (x_n, y_n))\in (X\times Y)^n$ is defined by $\rLo{\mathcal{U}}{S}(h) = \frac{1}{n}\sum_{i=1}^n \rlo{\mathcal{U}}(h, x_i, y_i)$. Finally, the \emph{adversarial approximation error} of $\mathcal{H}$ with respect to $\mathcal{U}$ and $P$ is defined by $\rLo{\mathcal{U}}{P} (\mathcal{H}) = \inf_{h\in \mathcal{H}}\rLo{\mathcal{U}}{P} (h)$. The following definition generalizes the setting of PAC adversarial learning to what we call the \emph{tolerant} setting, where we consider two perturbation types $\mathcal{U}$ and $\mathcal{V}$. We say $\mathcal{U}$ is \emph{contained in} $\mathcal{V}$ and write it as $\mathcal{U} \prec \mathcal{V}$ if $\mathcal{U}(x)\subset\mathcal{V}(x)$ for all $x\in X$. \begin{definition}[Tolerant Adversarial PAC-learner]\label{def:adv_learn_tol} Let $\P$ be a set of distributions over $X\times Y$, $\mathcal{H}$ a hypothesis class, and $\mathcal{U} \prec \mathcal{V}$ two perturbation types.
We say ${\mathcal A}$ \emph{tolerantly} PAC-learns $(\mathcal{H}, \P, \mathcal{U}, \mathcal{V})$ with $m_{\mathcal A}: (0,1)^2\to \mathbb{N}$ samples if the following holds: for every distribution $P\in\P$ and every $\varepsilon,\delta \in (0,1)$, if $S$ is an i.i.d.\ sample of size at least $m_{\mathcal A}(\varepsilon, \delta)$ from $P$, then with probability at least $1-\delta$ (over the randomness of $S$) we have \[ \rLo{\mathcal{U}}{P}({\mathcal A}(S)) \leq \rLo{\mathcal{V}}{P}(\mathcal{H}) + \varepsilon. \] We say ${\mathcal A}$ is a tolerant PAC-learner in the \emph{agnostic setting} if $\P$ is the set of all distributions over $X\times Y$, and in the \emph{tolerantly realizable setting} if $\P=\{P:\rLo{\mathcal{V}}{P}(\mathcal{H}) = 0\}$. \end{definition} In the above context, we refer to $\mathcal{U}$ as the \emph{actual perturbation type} and to $\mathcal{V}$ as the \emph{reference perturbation type}. The case where $\mathcal{U}(x)=\mathcal{V}(x)$ for all $x\in X$ corresponds to the usual adversarial learning scenario (with no tolerance). \subsection{Tolerant adversarial PAC-learning in metric spaces} If $X$ is equipped with a metric $\d(\cdot,\cdot)$, then $\mathcal{U}(x)$ can be naturally defined by a ball of radius $r$ around $x$, i.e., $\mathcal{U}(x)={\mathcal B}_r(x) = \{z\in X ~\mid~ \d(x,z) \leq r\}$. To simplify the notation, we sometimes use $\rlo{r}(h, x, y)$ instead of $\rlo{{\mathcal B}_r}(h, x, y)$ to denote the adversarial loss with respect to ${\mathcal B}_r$. In the tolerant setting, we consider the perturbation sets $\mathcal{U}(x) = {\mathcal B}_r(x)$ and $\mathcal{V}(x) = {\mathcal B}_{(1+\gamma)r}(x)$, where $\gamma>0$ is called the \emph{tolerance parameter}. Note that $\mathcal{U} \prec \mathcal{V}$. We now define PAC-learning with respect to the metric space.
\begin{definition}[Tolerant Adversarial Learning in metric spaces]\label{def:adv_learn_metric} Let $(X, \d)$ be a metric space, $\mathcal{H}$ a hypothesis class, and $\P$ a set of distributions over $X\times Y$. We say $(\mathcal{H}, \P, \d)$ is tolerantly PAC-learnable with $m:(0,1)^3\to \mathbb{N}$ samples when for every $r, \gamma>0$ there exists a PAC-learner ${\mathcal A}_{r, \gamma}$ for $(\mathcal{H},\P, {\mathcal B}_r, {\mathcal B}_{r(1+\gamma)})$ that uses $m(\varepsilon, \delta, \gamma)$ samples. \end{definition} \begin{remark} In this definition the learner receives $\gamma$ and $r$ as input but its sample complexity does not depend on $r$ (but can depend on $\gamma$). Also, as in Definition~\ref{def:adv_learn_tol}, the tolerantly realizable setting corresponds to $\P=\{P:\rLo{r(1+\gamma)}{P}(\H) = 0\}$ while in the agnostic setting $\P$ is the set of all distributions over $X\times Y$. \end{remark} The doubling dimension and the doubling measure of the metric space will play important roles in our analysis. We refer the reader to Appendix~\ref{app_sec_metric} for their definitions. We will use the following lemma in our analysis, whose proof can be found in Appendix~\ref{app_sec_metric}: \begin{lemma}\label{lemma:simple_doubling} For any family $\mathcal{M}$ of complete, doubling metric spaces, there exist constants $c_1, c_2 > 0$ such that for any metric space $(X, \d)\in{\mathcal M}$ with doubling dimension $d$, there exists a measure $\mu$ such that if a ball ${\mathcal B}_r$ of radius $r>0$ is completely contained inside a ball ${\mathcal B}_{\alpha r}$ of radius $\alpha r$ (with potentially a different center) for any $\alpha > 1$, then $0<\mu({\mathcal B}_{\alpha r})\leq (c_1\alpha)^{c_2 d}\mu({\mathcal B}_r)$. Furthermore, if we have a constant $\alpha_0 > 1$ such that we know that $\alpha \geq\alpha_0$ then the bound can be simplified to $0 < \mu({\mathcal B}_{\alpha r})\leq \alpha^{\zeta d}\mu({\mathcal B}_r)$, where $\zeta$ depends on ${\mathcal M}$ and $\alpha_0$.
\end{lemma} Later, we will set $\alpha = 1+1/\gamma$, where $\gamma$ is the tolerance parameter. Since we are mostly interested in small values of $\gamma$, suppose we decide on some loose upper bound $\Gamma\gg\gamma$. This corresponds to saying that there exists some $\alpha_0 > 1$ such that $\alpha \geq \alpha_0$. It is worth noting that in the special case of Euclidean metric spaces, we can set both $c_1$ and $c_2$ to be 1. In the rest of the paper, we will assume we have a loose upper bound $\Gamma\gg\gamma$ and use the simpler bound from Lemma~\ref{lemma:simple_doubling} extensively. Given a metric space $(X, \d)$ and a measure $\mu$ defined over it, for any subset $Z\subseteq X$ for which $\mu(Z)$ is non-zero and finite, $\mu$ induces a \emph{probability} measure $P_Z^\mu$ over $Z$ as follows. For any set $Z'\subseteq Z$ in the $\sigma$-algebra over $Z$, we define $P_Z^\mu(Z') = \mu(Z')/\mu(Z)$. With a slight abuse of notation, we write $z\sim Z$ to mean $z\sim P_Z^\mu$ whenever we know $\mu$ from the context. Our learners rely on being able to sample from $P_Z^\mu$. Thus we define the following oracle, which can be implemented efficiently for $\ell_p$ spaces. \begin{definition}[Sampling Oracle] Given a metric space $(X, \d)$ equipped with a doubling measure $\mu$, a \emph{sampling oracle} is an algorithm that when queried with a set $Z\subseteq X$ such that $\mu(Z)$ is finite, returns a sample drawn from $P_Z^\mu$. We will use the notation $z\sim Z$ for queries to this oracle. \end{definition} \section{The perturb-and-smooth approach for tolerant adversarial learning} \label{sec:erm} In this section we focus on tolerant adversarial PAC-learning in metric spaces (Definition~\ref{def:adv_learn_metric}), and show that VC classes are tolerantly PAC-learnable in the tolerantly realizable setting. Interestingly, we prove this result using an approach that resembles the ``perturb-and-smooth'' paradigm which is used in practice (for example by~\cite{CohenRK19}).
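Before presenting the algorithm, here is a sketch of how the sampling oracle of the previous section can be implemented for Euclidean ($\ell_2$) balls, where the doubling measure $\mu$ can be taken to be the Lebesgue measure. The recipe (Gaussian direction, radius rescaled by $U^{1/d}$) is standard; the function name is ours.

```python
import math
import random

def sample_l2_ball(center, r, rng=random):
    """Sampling oracle for Z = B_r(center) in (R^d, l2) with mu = Lebesgue:
    a Gaussian vector gives a uniformly random direction, and scaling the
    radius by U^(1/d) matches the volume of sub-balls."""
    d = len(center)
    g = [rng.gauss(0.0, 1.0) for _ in range(d)]
    norm = math.sqrt(sum(v * v for v in g)) or 1.0  # guard the null event
    radius = r * rng.random() ** (1.0 / d)
    return [c + radius * v / norm for c, v in zip(center, g)]
```

For $\ell_\infty$ balls the oracle is even simpler (independent uniform coordinates); for a general doubling metric, the oracle is an assumption of the model rather than something we implement.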
The overall idea is to ``perturb'' each training point $x$, train a classifier on the ``perturbed'' points, and ``smooth out'' the final hypothesis using a certain majority rule. We employ three perturbation types: $\mathcal{U}$ and $\mathcal{V}$ play the role of the \emph{actual} and the \emph{reference} perturbation type respectively. Additionally, we consider a perturbation type $\mathcal{W}:X\to 2^X$, which is used for smoothing. We assume $\mathcal{U} \prec \mathcal{V}$ and $\mathcal{W} \prec \mathcal{V}$. For this section, we will use metric balls for the three types. Specifically, if $\mathcal{U}$ consists of balls of radius $r$ for some $r>0$, then $\mathcal{W}$ will consist of balls of radius $\gamma r$ and $\mathcal{V}$ will consist of balls of radius $(1+\gamma)r$. \begin{definition}[Smoothed classifier]\label{def:smooth} For a hypothesis $h:X\to \{0,1\}$, and perturbation type $\mathcal{W}: X \to 2^X$, we let $\bar{h}_{\mathcal{W}}$ denote the classifier resulting from replacing the label $h(x)$ with the majority label over $\mathcal{W}(x)$, that is \[ \bar{h}_{\mathcal{W}}(x) = \indct{\mathbb{E}_{x'\sim \mathcal{W}(x)}h(x')\geq 1/2}. \] For metric perturbation types, where $\mathcal{W}$ is a ball of some radius $r$, we also use the notation $\bar{h}_{r}$ and when the type $\mathcal{W}$ is clear from context, we may omit the subscript altogether and simply write $\bar{h}$ for the smoothed classifier. \end{definition} \paragraph{The tolerant perturb-and-smooth algorithm} We propose the following learning algorithm, TPaS\xspace, for tolerant learning in metric spaces. Let the perturbation radius be $r>0$ for the actual type $\mathcal{U} = {\mathcal B}_r$, and let $S = ((x_1, y_1), \ldots, (x_m, y_m))$ be the training sample. For each $x_i\in S^X$, the learner samples a point $x'_i\sim{\mathcal B}_{r\cdot(1+\gamma)}(x_i)$ (using the sampling oracle) from the expanded reference perturbation set $\mathcal{V}(x_i) = {\mathcal B}_{(1+\gamma)r}(x_i)$.
Let $S' = ((x'_1, y_1), \ldots, (x'_m, y_m))$. TPaS\xspace then invokes a (standard, non-robust) PAC-learner ${\mathcal A}_\H$ for the hypothesis class ${\mathcal H}$ on the perturbed data $S'$. We let $\hat{h} = {\mathcal A}_{\H}(S')$ denote the output of this PAC-learner. Finally, TPaS\xspace outputs the $\mathcal{W}$-smoothed version $\bar{h}_{\gamma r}$ of $\hat{h}$, where $\mathcal{W}= {\mathcal B}_{\gamma r}$. That is, $\bar{h}_{\gamma r}(x)$ is simply the majority label of $\hat{h}$ in a ball of radius $\gamma r$ around $x$ with respect to the distribution defined by $\mu$; see also Definition~\ref{def:smooth}. We will prove below that this $\bar{h}_{\gamma r}$ has a small $\mathcal{U}$-adversarial loss. Algorithm~\ref{alg:tpas} below summarizes our learning procedure. \begin{algorithm} \caption{Tolerant Perturb and Smooth (TPaS\xspace)}\label{alg:tpas} \begin{algorithmic} \STATE {\bf Input:} Radius $r$, tolerance parameter $\gamma$, data $S = ((x_1, y_1), \ldots, (x_m, y_m))$, access to sampling oracle ${\mathcal O}$ for $\mu$ and PAC-learner ${\mathcal A}_{\H}$. \STATE Initialize $S' = \emptyset$ \FOR{$i = 1$ to $m$} \STATE Sample $x'_i \sim {\mathcal B}_{(1+\gamma) r}(x_i)$ \STATE Add $(x'_i, y_i)$ to $S'$ \ENDFOR \STATE Set $\hat{h} = {\mathcal A}_{\H}(S')$ \STATE {\bf Output:} $\bar{h}_{\gamma r}$ defined by \STATE \qquad\qquad $\bar{h}_{\gamma r}(x) = \indct{\mathbb{E}_{x'\sim{\mathcal B}_{\gamma r}(x)} \hat{h}(x') \geq 1/2}$ \end{algorithmic} \end{algorithm} The following is the main result of this section. \begin{theorem}\label{thm:tpas_guarantee} Let $(X,\d)$ be any metric space with doubling dimension $d$ and doubling measure $\mu$. Let $\mathcal{O}$ be a sampling oracle for $\mu$. Let $\mathcal{H}$ be a hypothesis class and $\P$ a set of distributions over $X\times Y$. Assume ${\mathcal A}_\mathcal{H}$ PAC-learns $\mathcal{H}$ with $m_\mathcal{H}(\varepsilon, \delta)$ samples in the realizable setting.
Then there exists a learner ${\mathcal A}$, namely TPaS\xspace, that \begin{itemize} \item Tolerantly PAC-learns $(\mathcal{H}, \P, \d)$ in the tolerantly realizable setting with sample complexity bounded by $m(\varepsilon, \delta, \gamma)=O\left( m_\mathcal{H}(\varepsilon, \delta)\cdot (1+1/\gamma)^{\zeta d}\right) = O\left(\frac{\mathrm{VC}({\mathcal H}) + \log{1/\delta}}{\varepsilon}\cdot (1+1/\gamma)^{\zeta d}\right)$, where $\gamma$ is the tolerance parameter and $d$ is the doubling dimension. \item Makes only one query to ${\mathcal A}_\mathcal{H}$. \item Makes $m(\varepsilon, \delta, \gamma)$ queries to the sampling oracle $\mathcal{O}$. \end{itemize} \end{theorem} The proof of this theorem uses the following key technical lemma (its proof can be found in Appendix~\ref{app_lemma}): \begin{lemma}\label{lem:majorities} Let $r>0$ be a perturbation radius, $\gamma>0$ a tolerance parameter, and $g:X\to Y$ a classifier. For $x\in X$ and $y\in Y = \{0,1\}$, we define $$\Sigma_{g, y}(x) = \mathbb{E}_{z\sim{\mathcal B}_{r(1+\gamma)}(x)}\indct{g(z)\neq y} \quad\text{and}\quad \sigma_{g,y}(x) = \mathbb{E}_{z\sim{\mathcal B}_{r\gamma}(x)}\indct{g(z)\neq y}.$$ Then $\Sigma_{g,y}(x)\leq\frac{1}{3}\cdot\left(\frac{1+\gamma}{\gamma}\right)^{-\zeta d}$ implies that $\sigma_{g,y}(z)\leq 1/3$ for all $z\in{\mathcal B}_r(x)$. \end{lemma} \begin{proof}[Proof of Theorem \ref{thm:tpas_guarantee}] Let $\epsilon_0 >0$ and $0<\delta < 1$ be given (we will pick a suitable value of $\epsilon_0$ later), and assume the PAC-learner ${\mathcal A}_\H$ was invoked on the perturbed sample $S'$ of size at least $m_\mathcal{H}(\epsilon_0,\delta)$. According to Definition~\ref{def:learn}, this implies that with probability $1-\delta$, the output $\hat{h} = {\mathcal A}_\H(S')$ has (binary) loss at most $\epsilon_0$ with respect to the data-generating distribution.
Note that the relevant distribution here is the two-stage process of the original data generating distribution $P$ and the perturbation sampling according to $\mathcal{V} = {\mathcal B}_{(1+\gamma)r}$. Since $P$ is $\mathcal{V}$-robustly realizable, the two-stage process yields a realizable distribution with respect to the standard $0/1$-loss. Thus, we have \[ \mathbb{E}_{(x, y)\sim P}\mathbb{E}_{z\sim{\mathcal B}_{r(1+\gamma)}(x)}\indct{\hat{h}(z)\neq y} \leq\epsilon_0. \] In the notation of Lemma~\ref{lem:majorities}, this reads $\mathbb{E}_{(x, y)\sim P} \Sigma_{\hat{h}, y}(x) \leq\epsilon_0$. For $\lambda >0$, Markov's inequality then yields: \begin{align} \mathbb{E}_{(x, y)\sim P}\indct{\Sigma_{\hat{h}, y}(x)\leq\lambda} \geq 1-\epsilon_0/\lambda\label{eqn:markov} \end{align} Thus, setting $\lambda = \frac{1}{3}\cdot\left(\frac{1+\gamma}{\gamma}\right)^{-\zeta d}$ and plugging the result of Lemma~\ref{lem:majorities} into equation~(\ref{eqn:markov}), we get $$\mathbb{E}_{(x,y)\sim P}\indct{\forall z \in{\mathcal B}_r(x), \sigma_{\hat{h}, y}(z)\leq 1/3}\geq 1-\epsilon_0/\lambda.$$ Since $\sigma_{\hat{h}, y}(z)\leq 1/3$ implies that $\indct{\mathbb{E}_{z'\sim{\mathcal B}_{\gamma r}(z)}\hat{h}(z')\geq 1/2}=y,$ using the definition of the smoothed classifier $\bar{h}_{\gamma r}$ we get \begin{align} \mathbb{E}_{(x,y)\sim P}\indct{\exists z\in{\mathcal B}_r(x), \bar{h}_{\gamma r}(z)\neq y}\leq\epsilon_0/\lambda, \nonumber \end{align} which implies $\rLo{r}{P}(\bar{h}_{\gamma r}) \leq \epsilon_0/\lambda$. Thus, for the robust learning problem, if we are given a desired accuracy $\varepsilon$ and we want $\rLo{r}{P}(\bar{h}_{\gamma r})\leq\varepsilon$, we can pick $\epsilon_0 = \lambda\varepsilon$. Putting it all together, we get sample complexity $m \leq O(\frac{\mathrm{VC}({\mathcal H}) + \log{1/\delta}}{\epsilon_0})$ where $\epsilon_0 = \lambda\varepsilon$ and $\lambda = \frac{1}{3}\cdot\left(\frac{1+\gamma}{\gamma}\right)^{-\zeta d}$.
Therefore, $m\leq O\left(\frac{\mathrm{VC}({\mathcal H}) + \log{1/\delta}}{\varepsilon}\cdot (1+1/\gamma)^{\zeta d}\right)$. \end{proof} {\bf Computational complexity of the learner.} Assuming we have access to $\mathcal{O}$ and an efficient algorithm for non-robust PAC-learning in the realizable setting, we can compute $\hat{h}$ efficiently. Therefore, the learning can be done efficiently in this case. However, at the prediction time, we need to compute $\bar{h}(x)$ on new test points which requires us to compute an expectation. We can instead \emph{estimate} the expectations using random samples from the sampling oracle. For a single test point $x$, if the number of samples we draw is $\Omega(\log{1/\delta})$ then with probability at least $1-\delta$ we get the same result as that of the optimal $\bar{h}(x)$. Using more samples we can boost this probability to guarantee a similar output to that of $\bar{h}$ on a larger set of test points. {\bf The traditional non-tolerant framework does not justify the use of perturb-and-smooth-type approaches.} The introduction of the tolerance in the adversarial learning framework is crucial for being able to prove guarantees for perturb-and-smooth-type algorithms. To see why, consider a simple case where the domain is the real line, the perturbation set is an open Euclidean ball of radius 1, and the hypothesis class is the set of all thresholds. Assume that the underlying distribution is supported only on two points: $\Pr(x=-1,y=1)=\Pr(x=1, y=0)=0.5$. This distribution is robustly realizable, but the threshold should be set exactly to $x=0$ to get a small error. However, the perturb-and-smooth method will fail because the only way the PAC-learner ${\mathcal A}_{\mathcal H}$ sets the threshold to $x=0$ is if it receives a (perturbed) sample exactly at $x=0$, whose probability is 0. 
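To summarize the section, the following is a self-contained sketch of TPaS\xspace on the real line, where balls are intervals and the sampling oracle is just a uniform draw. As a stand-in for the PAC-learner ${\mathcal A}_\H$ we use a 1-nearest-neighbour rule (an illustrative choice, not tied to a specific class), and the smoothing step estimates the expectation in Definition~\ref{def:smooth} by Monte-Carlo sampling, as discussed in the computational-complexity remark above.

```python
import random

def nn_learner(sample):
    """Stand-in for the (non-robust) PAC-learner A_H: 1-nearest-neighbour."""
    pts = list(sample)
    return lambda x: min(pts, key=lambda p: abs(p[0] - x))[1]

def tpas(sample, r, gamma, base_learner, rng=random, n_smooth=101):
    """Tolerant Perturb-and-Smooth on (R, |.|), with B_s(x) = [x - s, x + s]."""
    # 1) Perturb: one draw from the reference ball B_{(1+gamma)r}(x_i).
    s = (1.0 + gamma) * r
    perturbed = [(rng.uniform(x - s, x + s), y) for x, y in sample]
    # 2) Fit the non-robust learner on the perturbed sample S'.
    h = base_learner(perturbed)
    # 3) Smooth: Monte-Carlo majority vote of h over B_{gamma r}(x).
    w = gamma * r
    def h_bar(x):
        votes = sum(h(rng.uniform(x - w, x + w)) for _ in range(n_smooth))
        return int(2 * votes >= n_smooth)
    return h_bar

# Two well-separated clusters: robustly realizable even at radius (1+gamma)r.
random.seed(0)
S = [(-10 + random.uniform(-0.2, 0.2), 0) for _ in range(10)] \
  + [(10 + random.uniform(-0.2, 0.2), 1) for _ in range(10)]
h_bar = tpas(S, r=1.0, gamma=0.5, base_learner=nn_learner)
```

With this separation, the smoothed classifier labels every point within radius $r$ of either cluster correctly, illustrating the small $\mathcal{U}$-adversarial loss of $\bar{h}_{\gamma r}$.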
\section{Improved tolerant learning guarantees through sample compression} \label{sec:agn} The perturb-and-smooth approach discussed in the previous section offers a general method for tolerant robust learning. However, one shortcoming of this approach is the exponential dependence of its sample complexity on the doubling dimension of the metric space. Furthermore, the tolerant robust guarantee relied on the data generating distribution being tolerantly realizable. In this section, we propose another approach that addresses both of these issues. The idea is to adopt the perturb-and-smooth approach within a sample compression argument. We introduce the notion of a $(\mathcal{U},\mathcal{V})$-tolerant sample compression scheme and present a learning bound based on such a compression scheme, starting with the realizable case. We then show that this implies learnability in the agnostic case as well. Remarkably, this tolerant compression-based analysis will yield bounds on the sample complexity that avoid the exponential dependence on the doubling dimension. For a compact representation, we will use the general notation $\mathcal{U}, \mathcal{V}, $ and $\mathcal{W}$ for the three perturbation types (actual, reference and smoothing type) in this section and will assume that they satisfy Property \ref{assmt:UVWsmoothing} below for some parameter $\beta >0$. Lemma \ref{lem:majorities} implies that, in the metric setting, for any radius $r$ and tolerance parameter $\gamma$, the perturbation types $\mathcal{U} = {\mathcal B}_r$, $\mathcal{V} = {\mathcal B}_{(1+\gamma)r}$, and $\mathcal{W} = {\mathcal B}_{\gamma r}$ have this property for $\beta = \frac{1}{3}\left(\frac{1+\gamma}{\gamma} \right)^{-\zeta d}$.
\begin{property}\label{assmt:UVWsmoothing} For a fixed $0 < \beta <1/2$, we assume that the perturbation types $\mathcal{V}, \mathcal{U}$ and $\mathcal{W}$ are such that for any classifier $h$, any $x\in X$ and any $y\in\{0,1\}$: if \[ \mathbb{E}_{z\sim \mathcal{V}(x)}\indct{h(z) = y} \geq 1-\beta \] then the $\mathcal{W}$-smoothed classifier $\bar{h}_{\mathcal{W}}$ satisfies $\bar{h}_{\mathcal{W}}(z) = y$ for all $z\in\mathcal{U}(x)$. \end{property} A compression scheme of size $k$ is a pair of functions $(\kappa, \rho)$, where the compression function $\kappa: \bigcup_{i=1}^{\infty}(X\times Y)^i \to \bigcup_{i=1}^{k}(X\times Y)^i$ maps samples $S = ((x_1, y_1), (x_2, y_2), \ldots, (x_m, y_m))$ of arbitrary size to sub-samples of $S$ of size at most $k$, and $\rho:\bigcup_{i=1}^{k}(X\times Y)^i \to Y^X$ is a decompression function that maps samples to classifiers. The pair $(\kappa, \rho)$ is a sample compression scheme for loss $\ell$ and class $\H$, if for any sample $S$ realizable by $\H$, we recover the correct labels for all $(x,y)\in S$; that is, $\Lo{S}(\H)=0$ implies that $\Lo{S}(\rho \circ \kappa(S))=0$. For tolerant learning, we introduce the following generalization of compression schemes: \begin{definition}[Tolerant sample compression scheme] A sample compression scheme $(\kappa, \rho)$ is a \emph{$(\mathcal{U}, \mathcal{V})$-tolerant sample compression scheme} for class $\H$, if for any sample $S$ that is $\rlo{\mathcal{V}}$-realizable by $\H$, that is $\rLo{\mathcal{V}}{S}(\H)=0$, we have $\rLo{\mathcal{U}}{S}(\rho \circ \kappa(S))=0$. \end{definition} The next lemma establishes that the existence of a sufficiently small tolerant compression scheme for the class $\H$ yields bounds on the sample complexity of tolerantly learning $\H$. The proof of the lemma is based on a modification of a standard compression-based generalization bound. Appendix Section \ref{app_sec_compression} provides more details.
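To make the $(\kappa, \rho)$ formalism concrete, here is a minimal (non-tolerant) compression scheme of size 2 for one-dimensional thresholds; it is our own toy illustration and not a construction from the text:

```python
# A minimal (non-tolerant) compression scheme of size 2 for 1-D thresholds
# h_t(x) = 1[x <= t]: kappa keeps at most two informative points and rho
# reconstructs a consistent threshold classifier. Purely illustrative; the
# tolerant scheme of this section is far more involved.
def kappa(S):
    """Compress: keep the rightmost positive and the leftmost negative point."""
    pos = [x for x, y in S if y == 1]
    neg = [x for x, y in S if y == 0]
    kept = []
    if pos:
        kept.append((max(pos), 1))
    if neg:
        kept.append((min(neg), 0))
    return kept

def rho(kept):
    """Decompress: return the threshold classifier h(x) = 1[x <= t]."""
    pos = [x for x, y in kept if y == 1]
    neg = [x for x, y in kept if y == 0]
    if pos and neg:
        t = (max(pos) + min(neg)) / 2
    elif pos:
        t = float("inf")
    else:
        t = float("-inf")
    return lambda x: 1 if x <= t else 0

# Any realizable sample (positives to the left of all negatives) is recovered:
S = [(0.1, 1), (0.4, 1), (0.9, 0), (1.3, 0)]
h = rho(kappa(S))
```

On any sample realizable by thresholds, $\rho(\kappa(S))$ labels all of $S$ correctly, which is exactly the correctness requirement in the definition above.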
\begin{lemma}\label{lem:tolerant_compression_generalization} Let $\H$ be a hypothesis class and $\mathcal{U}$ and $\mathcal{V}$ be perturbation types with $\mathcal{U}$ included in $\mathcal{V}$. If the class $\H$ admits a $(\mathcal{U}, \mathcal{V})$-tolerant compression scheme of size bounded by $k\ln(m)$ for samples of size $m$, then the class is $(\mathcal{U},\mathcal{V})$-tolerantly learnable in the realizable case with sample complexity bounded by $m(\varepsilon, \delta) = \tilde{O}\left(\frac{k + \ln(1/\delta)}{\varepsilon}\right)$. \end{lemma} We next establish a bound on the tolerant compression size for general VC-classes, which will then immediately yield the improved sample complexity bounds for tolerant learning in the realizable case. The proof is sketched here; the full version is deferred to the Appendix. \begin{lemma}\label{lem:tolerant_compression_bound} Let $\H\subseteq Y^X$ be some hypothesis class with finite VC-dimension $\mathrm{VC}(\H) <\infty$, and let $\mathcal{U}, \mathcal{V}, \mathcal{W}$ satisfy the conditions in Property \ref{assmt:UVWsmoothing} for some $\beta >0$. Then there exists a $(\mathcal{U},\mathcal{V})$-tolerant sample compression scheme for $\H$ of size $\tilde{O}\left(\mathrm{VC}(\H)\ln(\frac{m}{\beta})\right)$. \end{lemma} \begin{proof}[Proof Sketch] We will employ a boosting-based approach to establish the claimed compression sizes. Let $S = ((x_1, y_1), (x_2, y_2), \ldots, (x_m, y_m))$ be a data-set that is $\rlo{\mathcal{V}}$-realizable with respect to $\H$. We let $S_{\mathcal{V}}$ denote an ``inflated data-set'' that contains all domain points in the $\mathcal{V}$-perturbation sets of the $x_i\in S^X$, that is, $S_{\mathcal{V}}^X := \bigcup_{i=1}^{m} \mathcal{V}(x_i)$. Every point $z\in S_{\mathcal{V}}^X$ is assigned the label $y = y_i$ of the minimally-indexed $(x_i, y_i)\in S$ with $z\in \mathcal{V}(x_i)$, and we set $S_\mathcal{V}$ to be the resulting collection of labeled data-points.
We then use the boost-by-majority method to encode a classifier $g$ that (roughly speaking) has error bounded by $\beta/m$ over (a suitable measure over) $S_\mathcal{V}$. This boosting method outputs a $T$-majority vote $g(x) = \indct{\frac{1}{T}\sum_{i=1}^T h_i(x) \geq 1/2}$ over weak learners $h_i$, which in our case will be hypotheses from $\H$. We prove that this error can be achieved with $T = 18\ln(\frac{2m}{\beta})$ rounds of boosting. We further show that each weak learner that is used in the boosting procedure can be encoded with $n = \tilde{O}(\mathrm{VC}(\H))$ many sample points from $S$. The resulting compression size is thus $n\cdot T = \tilde{O}\left(\mathrm{VC}(\H)\ln(\frac{m}{\beta})\right)$. Finally, the error bound $\beta/m$ of $g$ over $S_\mathcal{V}$ implies that the error in each perturbation set $\mathcal{V}(x_i)$ of a sample point $(x_i, y_i)\in S$ is at most $\beta$. Property \ref{assmt:UVWsmoothing} then implies $\rLo{\mathcal{U}}{S} (\bar{g}_{\mathcal{W}}) = 0$ for the $\mathcal{W}$-smoothed classifier $\bar{g}_{\mathcal{W}}$, establishing the $(\mathcal{U},\mathcal{V})$-tolerant correctness of the compression scheme. \end{proof} This yields the following result: \begin{theorem} Let $\H$ be a hypothesis class of finite VC-dimension and $\mathcal{V}, \mathcal{U}, \mathcal{W}$ be three perturbation types (actual, reference and smoothing) satisfying Property \ref{assmt:UVWsmoothing} for some $\beta>0$.
Then the sample complexity (omitting log-factors) of $(\mathcal{U},\mathcal{V})$-tolerantly learning $\H$ is bounded by \[ m(\varepsilon, \delta) = \tilde{O}\left(\frac{\mathrm{VC}(\H)\ln({1}/{\beta}) + \ln({1}/{\delta})}{\varepsilon}\right) \] in the realizable case, and in the agnostic case by \[ m(\varepsilon, \delta) = \tilde{O}\left(\frac{\mathrm{VC}(\H)\ln({1}/{\beta}) + \ln({1}/{\delta})}{\varepsilon^2}\right)\,. \] \end{theorem} \begin{proof} The bound for the realizable case follows immediately from Lemma \ref{lem:tolerant_compression_bound} and the subsequent discussion (in the Appendix). For the agnostic case, we employ a reduction from agnostic robust learnability to realizable robust learnability~\citep{MontasserHS19, moran2016sample}. The reduction is analogous to the one presented in Appendix C of \cite{MontasserHS19} for usual (non-tolerant) robust learnability with some minor modifications. Namely, for a sample $S$, we choose the largest subsample $S'$ that is $\rlo{\mathcal{V}}$-realizable (this will result in competitiveness with a $\rlo{\mathcal{V}}$-optimal classifier), and we will use the boosting procedure described there for the $\rlo{\mathcal{U}}$ loss. For the sample sizes employed for the weak learners in that procedure, we can use the sample complexity for $\varepsilon = \delta = 1/3$ of an optimal $(\mathcal{U},\mathcal{V})$-tolerant learner in the realizable case (note that each learning problem during the boosting procedure is a realizable $(\mathcal{U},\mathcal{V})$-tolerant learning task). These modifications result in the stated sample complexity for agnostic tolerant learnability.
\end{proof} In particular, for the doubling measure scenario (as considered in the previous section), we obtain \begin{corollary}\label{Thm:compress_metric} For metric tolerant learning with tolerance parameter $\gamma$ in doubling dimension $d$, the sample complexity of adversarially robust learning with tolerance in the realizable case is bounded by $m(\varepsilon, \delta) = \tilde{O}\left(\frac{\mathrm{VC}(\H)\zeta d \ln(1 + 1/{\gamma}) + \ln({1}/{\delta})}{\varepsilon}\right)$ and in the agnostic case by $m(\varepsilon, \delta) = \tilde{O}\left(\frac{\mathrm{VC}(\H)\zeta d \ln(1 + 1/{\gamma}) + \ln({1}/{\delta})}{\varepsilon^2}\right)$. \end{corollary} \paragraph{Discussion of linear dependence on $\mathrm{VC}(\H)$} Earlier general compression-based sample complexity bounds for robust learning with arbitrary perturbation sets exhibit a dependence on the dual VC-dimension of the hypothesis class and therefore potentially an exponential dependence on $\mathrm{VC}(\H)$ \citep{MontasserHS19}. In our setting, we show that it is possible to avoid the dependence on the dual VC-dimension by exploiting both the metric structure of the domain set and the tolerant framework. In the full proof of Lemma \ref{lem:tolerant_compression_bound}, we show that \emph{if we can encode a classifier with small error} (exponentially small with respect to the doubling dimension for the metric case) on the perturbed distribution \emph{w.r.t. larger perturbation sets}, then we can \emph{use smoothing to get a classifier that correctly classifies every point in the inner inflated sets}. And, as for TPaS, the tolerant perspective is crucial to exploit a smoothing step in the compression approach (through the guarantee from Property \ref{assmt:UVWsmoothing} or Lemma \ref{lem:majorities}). More specifically, we define a tolerant compression scheme (Definition 12) that naturally extends the classic definition of compression to the tolerant framework.
The compression scheme we establish in the proof of Lemma \ref{lem:tolerant_compression_bound} then borrows ideas from our perturb-and-smooth algorithm. Within the compression argument, we define the perturbed distribution over the sample that we want to compress with respect to the larger perturbation sets. We then use boosting to build a classifier with very small error with respect to this distribution. The nice property of boosting is that its error decreases exponentially with the number of iterations. As a result, we also get linear dependence on the doubling dimension. This classifier can be encoded using $\tilde{O}\left(T\,\mathrm{VC}(\H) \right)$ samples ($T$ rounds of boosting, and each weak classifier can be encoded using $\tilde{O}(\mathrm{VC}(\H))$ samples, since we can here use simple $\varepsilon$-approximations rather than invoking VC-theory in the dual space). Our decoder receives the description of these weak classifiers, combines them, and performs a final smoothing step. The smoothing step translates the exponentially small error with respect to the perturbed distribution to zero error with respect to the (inner) inflated set, thereby satisfying the requirement of a tolerant compression scheme.
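As a rough numerical illustration of the gap between the two bounds (with hypothetical values for $\gamma$, $\zeta d$ and $m$ chosen by us), one can compare the exponential factor $(1+1/\gamma)^{\zeta d}$ of the perturb-and-smooth bound with the factor $\zeta d\ln(1+1/\gamma)$ of the compression-based bound, and evaluate the number of boosting rounds $T = 18\ln(2m/\beta)$:

```python
import math

def perturb_and_smooth_factor(gamma, zeta_d):
    # Exponential factor in the perturb-and-smooth sample complexity bound.
    return (1 + 1 / gamma) ** zeta_d

def compression_factor(gamma, zeta_d):
    # Logarithmic factor in the compression-based bound.
    return zeta_d * math.log(1 + 1 / gamma)

def boosting_rounds(m, beta):
    # T = 18 ln(2m / beta) rounds suffice for error beta/m over S_V.
    return math.ceil(18 * math.log(2 * m / beta))

gamma, zeta_d, m = 0.1, 20, 10_000                     # hypothetical values
beta = (1 / 3) * ((1 + gamma) / gamma) ** (-zeta_d)    # beta from Lemma (majorities)

exp_factor = perturb_and_smooth_factor(gamma, zeta_d)  # astronomically large
log_factor = compression_factor(gamma, zeta_d)         # modest
T = boosting_rounds(m, beta)                           # ~ a thousand rounds
```

Even for this moderate tolerance the exponential factor exceeds $10^{18}$ while the logarithmic one stays below $10^2$, which is the point of the compression-based analysis.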
\section{Introduction}\label{Introduction} While new physics (NP) coupling to quarks or gluons is strongly constrained by direct LHC searches (see e.g. Refs.~\cite{Butler:2017afk,Masetti:2018btj} for an overview), there is much more parameter space left for models with new particles possessing only electroweak (EW) interactions. In this context, vector-like leptons (VLLs), which are heavy fermions that are neutral under QCD and can mix with SM leptons via Higgs interactions, are very interesting. VLLs are predicted in many SM extensions, such as Grand Unified Theories~\cite{Hewett:1988xc,Langacker:1980js,delAguila:1982fs}, composite models or models with extra dimensions~\cite{Antoniadis:1990ew,ArkaniHamed:1998kx,Csaki:2004ay,ArkaniHamed:2001nc,ArkaniHamed:2002qy,Perelstein:2005ka,delAguila:2010vg,Carmona:2013cq} and, last but not least, are involved in the type I~\cite{Minkowski:1977sc,Lee:1977tib} and type III~\cite{Foot:1988aq} seesaw mechanisms. In fact, LEP~\cite{Achard:2001qw} and LHC~\cite{Aad:2019kiz,Sirunyan:2019ofn}\footnote{For a recent dedicated theoretical analysis of VLLs at colliders, see e.g.~\cite{Chala:2020odv,Das:2020gnt, Das:2020uer,deJesus:2020upp}.} searches still allow for VLLs with masses far below the TeV scale. Therefore, it is quite possible that VLLs are the lightest states within a NP model superseding the SM, thus providing the dominant NP effects in the EW sector of the SM. Note that even simply adding any VLL to the SM by hand yields a consistent UV-complete (renormalizable and anomaly-free) extension of it, which can thus be studied on its own. \smallskip Since VLLs can couple to SM leptons and the Higgs, they mix with the former after EW symmetry breaking~\cite{Langacker:1988ur}. This mixing modifies the couplings of the SM leptons to EW gauge bosons ($W$ and $Z$), which are tightly constrained by LEP measurements~\cite{Schael:2013ita,ALEPH:2005ab}.
In particular, any modification of the $W\ell\nu$ coupling is always accompanied by an effect in the $Z\ell\ell$ and/or $Z\nu\nu$ couplings. Furthermore, $W\mu\nu$ and $W e\nu$ couplings affect the extraction of the Fermi constant $G_F$ from muon decay. Therefore, their impact on different observables is clearly correlated and in order to consistently study them, it is necessary to perform a global fit to all the EW observables. This was done previously in Ref.~\cite{delAguila:2008pw} for all the VLL representations and in Refs.~\cite{Antusch:2014woa,deGouvea:2015euy,Fernandez-Martinez:2016lgt,Chrzaszcz:2019inj,Crivellin:2020lzu} for the VLLs corresponding to the type I or type III seesaw. However, since the publication of Ref.~\cite{delAguila:2008pw} the experimental situation has changed significantly. In particular, the Higgs mass is now known~\cite{Aaboud:2018wps,Sirunyan:2017exp} and the top~\cite{TevatronElectroweakWorkingGroup:2016lid,Khachatryan:2015hba,Sirunyan:2018gqx} and $W$~\cite{Aaltonen:2012bp,D0:2013jba,Aaboud:2017svj} mass measurements have become much more precise. \smallskip Furthermore, recently the ``Cabibbo Angle Anomaly'' (CAA) has emerged with a significance of up to $4\,\sigma$~\cite{Belfatto:2019swo,Grossman:2019bzp,Coutinho:2019aiy,Crivellin:2020lzu,Endo:2020tkb}. This anomaly is due to the disagreement between the CKM element $V_{us}$ extracted from kaon and tau decays, and the one determined from beta decays, in particular super-allowed beta decays (using CKM unitarity). One can consider this discrepancy to be a sign of (apparent) CKM unitarity violation~\cite{Belfatto:2019swo,Cheung:2020vqm}. However, a sizable violation of CKM unitarity is in general difficult to generate due to the strong bounds from flavour-changing neutral currents, such as kaon mixing (see e.g.\ Ref.~\cite{Bobeth:2016llm}). 
Alternatively, one can consider the CAA as a sign of lepton flavour universality (LFU) violation (LFUV)~\cite{Coutinho:2019aiy,Crivellin:2020lzu,Capdevila:2020rrl,Endo:2020tkb}. In fact, flavour-dependent modified neutrino couplings to the $W$ and $Z$ gauge bosons provide a very good fit to the data~\cite{Coutinho:2019aiy}, and this view seems natural since experiments have accumulated intriguing hints for the violation of LFU in recent years. In particular, the measurements of the ratios $R(D^{(*)})$~\cite{Lees:2012xj,Aaij:2017deq,Abdesselam:2019dgh} and $R(K^{(*)})$~\cite{Aaij:2017vbb,Aaij:2019wad} deviate from the SM expectation of LFU by more than $3\,\sigma$~\cite{Amhis:2019ckw,Murgui:2019czp,Shi:2019gxi,Blanke:2019qrx,Alok:2019uqc} and $4\,\sigma$~\cite{Alguero:2019ptt,Aebischer:2019mlg,Ciuchini:2019usw,Arbey:2019duh}, respectively. The anomalous magnetic moments $(g-2)_\ell$ of the charged leptons are also a measure of LFU violation as they vanish in the massless limit. Here, the long-standing discrepancy of about $3.7\,\sigma$~\cite{Bennett:2006fi,Aoyama:2020ynm} in the anomalous magnetic moment of the muon, $(g-2)_\mu$,~\footnote{Recently, the BMWc released a lattice calculation of the hadronic vacuum polarisation contribution to $(g-2)_\mu$ whose result would bring theory and experiment for $(g-2)_\mu$ into agreement. However, this result disagrees with $e^+e^-\to$ hadrons data~\cite{Davier:2017zfy,Keshavarzi:2018mgv,Davier:2019can,Keshavarzi:2019abf,Colangelo:2018mtw,Ananthanarayan:2018nyx} and would increase the tension in the EW fit~\cite{Crivellin:2020zul,Keshavarzi:2020bfy}, as hadronic vacuum polarisation contributes to the running of $\alpha$, which, at the scale $M_Z$, is a crucial input for the EW fit.
We checked that modified gauge boson couplings to leptons are not capable of reducing this tension significantly and we therefore use the result from $e^+e^-\to$ hadrons.} and the more recently emerging deviation of $2.5\,\sigma$ in the anomalous magnetic moment of the electron, $(g-2)_e$, interestingly with the opposite sign, could have a common origin~\cite{Davoudiasl:2018fbb,Crivellin:2018qmi}. In fact, it has been shown in Refs.~\cite{Czarnecki:2001pv,Kannike:2011ng,Dermisek:2013gta,Freitas:2014pua,Aboubrahim:2016xuz,Kowalska:2017iqv,Raby:2017igl,Megias:2017dzd,Calibbi:2018rzv,Crivellin:2018qmi,Arnan:2019uhr} that $(g-2)_\mu$ can be explained by VLLs, and in Refs.~\cite{Gripaios:2015gra,Arnan:2016cpy,Raby:2017igl,Arnan:2019uhr,Kawamura:2019rth} VLLs are involved in the explanation of $b\to s\ell^+\ell^-$ via loop effects. \smallskip We take these developments as a motivation to perform an updated global EW fit~\cite{Haller:2018nnx,deBlas:2016ojx} to the modified EW gauge boson couplings to leptons. In particular, we want to assess the impact of including the $V_{us}$ and $V_{ud}$ measurements in the fit and see if an explanation of the CAA is possible. We will do this first in a model-independent way by performing a fit to the dimension-6 operators which (directly) change the leptons' gauge boson couplings. Then we perform a fit to all six representations of VLLs. Here, also contributions to flavour-changing decays of charged leptons (such as $\mu\to e\gamma$, $\mu\to3e$, the analogous tau decays, and $\mu\to e$ conversion) can arise, which we calculate and analyse as well. \smallskip This article is structured as follows: in the next section we will establish our setup, before calculating the contributions to the relevant observables and discussing the experimental situation in Sec.~\ref{observables}.
In Sec.~\ref{analysis} we will perform our global fit, first in a model-independent fashion including dimension-6 operators, and afterwards for each of the six representations of VLLs separately. Finally, we conclude in Sec.~\ref{Conclusions}. \section{Setup} Let us establish our setup by first considering the effective dimension-6 operators (in the Warsaw basis) that generate modified $W\ell\nu$, $Z\nu\nu$ and $Z\ell\ell$ couplings after EW symmetry breaking. We will then turn to the six possible representations of VLLs under the SM gauge group and perform the matching onto the effective operators. \subsection{EFT} Disregarding magnetic operators, whose effect vanishes at zero momentum transfer and which can only be generated at the loop level, there are three operators (not counting flavour indices) in the $SU(3)_c\times SU(2)_L\times U(1)_Y$-invariant SM EFT which (directly) modify the couplings of neutrinos and charged leptons to the EW gauge bosons~\cite{Buchmuller:1985jz,Grzadkowski:2010es}:
\begin{equation} \mathcal{L} = \mathcal{L}_{SM} + \dfrac{1}{\Lambda^2}\left( C_{\phi \ell}^{\left( 1 \right) ij} Q_{\phi \ell}^{\left( 1 \right) ij} + C_{\phi \ell}^{\left( 3\right) ij} Q_{\phi \ell }^{\left( 3 \right) ij} + C_{\phi e}^{ij} Q_{\phi e }^{ij}\right)\,, \label{Lagrangian} \end{equation} with \begin{equation} \begin{aligned} Q_{\phi \ell }^{\left( 1 \right)ij} &= {\phi ^\dag }i{{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\leftrightarrow$}} \over D} }_\mu }\phi \, {{\bar \ell_L}^i}{\gamma ^\mu }{\ell_L^j}\,,\\ Q_{\phi \ell }^{\left( 3 \right)ij} &= {\phi ^\dag }i\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\leftrightarrow$}} \over D} _\mu ^I\phi \, {{\bar \ell_L}^i}{\tau ^I}{\gamma ^\mu }{\ell_L^j}\,,\\ Q_{\phi e}^{ij} &= {\phi ^\dag }i{{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\leftrightarrow$}} \over D} }_\mu }\phi \, {{\bar e_R}^i}{\gamma ^\mu }{e_R^j}\,, \end{aligned} \label{eq:ops} \end{equation} where \begin{equation} D_{\mu}=\partial_{\mu}+ig_2W_{\mu}^a \tau^a+ig_1B_{\mu}Y\,.\label{CovD} \end{equation} Here $i$ and $j$ are flavour indices and the Wilson coefficients $C$ are dimensionless. 
The operators defined in \eq{eq:ops} result in the following modifications of the $Z$ and $W$ boson couplings to leptons after EW symmetry breaking \begin{equation} \mathcal{L}_{W,Z}^{\ell,\nu}=\bigg({{\bar \ell }_f}\Gamma _{fi}^{\ell\nu}{\gamma ^\mu }{P_L}{\nu _i}\,{W_\mu } + h.c.\bigg)+ \left[ {{{\bar \ell }_f}{\gamma ^\mu }\left( {\Gamma _{fi}^{\ell L}{P_L} + \Gamma _{fi}^{\ell R}{P_R}} \right){\ell _i} + {{\bar \nu }_f}\Gamma _{fi}^\nu {\gamma ^\mu }{P_L}{\nu _i}} \right]{Z_\mu }\,, \label{definitionZll} \end{equation} with \begin{equation} \begin{aligned} \Gamma _{fi}^{\ell L} &= \frac{{{g_2}}}{{2{c_W}}}\left[ {\left( {1 - 2s_W^2} \right){\delta _{fi}} + \frac{{v^2}}{{\Lambda^2}}\left( {C_{\phi \ell }^{\left( 1 \right)fi} + C_{\phi \ell }^{\left( 3 \right)fi}} \right)} \right]\,,\\ \Gamma _{fi}^{\ell R} &= \frac{{{g_2}}}{{2{c_W}}}\left[ { - 2s_W^2{\delta _{fi}} + \frac{{v^2}}{{\Lambda^2}}C_{\phi e}^{fi}} \right]\,,\\ \Gamma _{fi}^\nu &= - \frac{{{g_2}}}{{2{c_W}}}\left( {{\delta _{fi}} + \frac{{v^2}}{{\Lambda^2}}\left( {C_{\phi \ell }^{\left( 3 \right)fi} - C_{\phi \ell }^{\left( 1 \right)fi}} \right)} \right)\,,\\ \Gamma _{fi}^{\ell\nu} &= - \frac{{{g_2}}}{{\sqrt 2 }}\left( {{\delta _{fi}} + \frac{{v^2}}{{\Lambda^2}}C_{\phi \ell }^{\left( 3 \right)fi}} \right)\,, \end{aligned} \label{Gammas} \end{equation} Here we used the convention $v/\sqrt{2}\approx 174\,$GeV. Eqs.~\eqref{definitionZll} and \eqref{Gammas} agree with Ref.~\cite{Dedes:2017zog}. The terms proportional to the Kronecker delta correspond to the (unmodified) SM couplings. \medskip \subsection{Vector-Like Leptons} Moving beyond the model independent approach of the last subsection, we now consider models with VLLs. 
As mentioned in the introduction, these particles modify the $Z$ and $W$ couplings to leptons already at tree-level and can therefore give dominant effects in the corresponding observables entering the global EW fit, in particular in the determination of $V_{us}$ and $V_{ud}$, related to the CAA. \smallskip We define VLLs as fermions whose left- and right-handed components transform in the same representation of $SU(2)_L\times U(1)_Y$, are singlets under QCD and can couple to the SM Higgs and SM leptons via Yukawa-like couplings. The possible representations under the SM gauge group are given in Table~\ref{VLLrep}. Since these fermions are vectorial, they can have bare mass terms (already before EW symmetry breaking) and interact with SM gauge bosons via the covariant derivative which was defined in \eq{CovD}.\footnote{In the case $\psi$ equals $N$ or $\Sigma_0$, which are Majorana fermions, i.e. $N_R=N_L^c$ or $\Sigma_{0,R}=\Sigma_{0,L}^c$, \eq{Lquad} should be defined with a factor ${1}/{2}$ to ensure a canonical normalisation.} \begin{align} \mathcal{L}^{\rm VLL} = \sum_\psi \, i \, \bar{\psi} \gamma_{\mu}D^{\mu}\,\psi - M_{\psi}\,\bar{\psi}\psi\,, \label{Lquad} \end{align} with $\psi=N,E,\Delta_1,\Delta_3,\Sigma_0,\Sigma_1$. The interactions of the VLLs with the SM leptons are given by \begin{align} -\mathcal{L}_{NP}^{\rm int} =&\, \lambda_N^i\, \bar{\ell}_i\,\tilde{\phi}\, N + \lambda_E^i\, \bar{\ell}_i\,\phi\, E + \lambda_{\Delta_1}^i\, \bar{\Delta}_1\,\phi\, e_i + \label{Lint} \\ &\lambda_{\Delta_3}^i\, \bar{\Delta}_3\,\tilde{\phi}\, e_i + \lambda_{\Sigma_0}^i\, \tilde{\phi}^{\dagger}\,\bar{\Sigma}_0^I\,\tau^I\, \ell_i + \lambda_{\Sigma_1}^i\, \phi^{\dagger}\,\bar{\Sigma}_1^I\,\tau^I\, \ell_i +{\rm h.c.}\,,\nonumber \end{align} where $i$ is a flavour index and $\tau^I=\sigma^I/2$ are the generators of $SU(2)_L$.
Here we neglected interactions of two different VLL representations with the Higgs\footnote{These couplings, which would induce mixing among the VLLs, are in general not important with respect to the modified $Z$ and $W$ couplings studied in this article, as they only give rise to dim-8 effects here. However, they can have important phenomenological consequences in magnetic dipole operators, allowing for an explanation of $(g-2)_{\mu,e}$ via chiral enhancement~\cite{Czarnecki:2001pv,Kannike:2011ng,Dermisek:2013gta,Freitas:2014pua,Aboubrahim:2016xuz,Kowalska:2017iqv,Raby:2017igl,Megias:2017dzd,Calibbi:2018rzv,Crivellin:2018qmi,Arnan:2019uhr}.}. Our conventions for the VLL triplets after EW symmetry breaking are \begin{align} \Sigma_0 = \frac{1}{2}\begin{pmatrix} \Sigma_0^0 & \sqrt{2}\Sigma_0^+ \\ \sqrt{2}\Sigma_0^- & -\Sigma_0^0 \end{pmatrix}, \quad \Sigma_1 = \frac{1}{2}\begin{pmatrix} \Sigma_1^- & \sqrt{2}\Sigma_1^0 \\ \sqrt{2}\Sigma_1^{--} & -\Sigma_1^- \end{pmatrix}\,, \end{align} where the superscript labels the electric charge. \smallskip \begin{table}[t!] \centering \begin{tabular}{l | c c c } & $SU(3)$& {$SU(2)_L$}&$U(1)_Y$\\ \hline $\ell$ &1 & 2 & -1/2 \\ $e$ &1 & 1 & -1 \\ $\phi$ &1 & 2 & 1/2 \\ \hline $N$ &1 & 1 & 0 \\ $E$ & 1& 1 & -1 \\ $\Delta_1= (\Delta_1^0, \Delta_1^-)$ & 1 & 2 & -1/2\\ $\Delta_3 = (\Delta_3^-, \Delta_3^{--})$ & 1 & 2 &-3/2 \\ $\Sigma_0 = (\Sigma_0^+, \Sigma_0^0, \Sigma_0^- )$ & 1 & 3 & 0 \\ $\Sigma_1= (\Sigma_1^0, \Sigma_1^-, \Sigma_1^{--} )$& 1 & 3 & -1 \end{tabular} \caption{Representations of the SM leptons ($\ell,e$), the SM Higgs doublet ($\phi$) and the VLLs under the SM gauge group.
}\label{VLLrep} \end{table} Integrating out the VLLs at tree-level (see Fig.~\ref{FeynmanDiagrams}), we find the following expressions for the Wilson coefficients defined in \eq{Lagrangian}: \begin{align} \begin{split} \frac{C_{\phi \ell}^{(1)ij}}{\Lambda^2} &= \frac{\lambda_N^{i}\lambda_N^{j\dagger}}{4M_N^2} -\frac{\lambda_E^{i}\lambda_E^{j\dagger}}{4M_E^2}+\frac{3}{16}\frac{\lambda_{\Sigma_0}^{i\dagger}\lambda_{\Sigma_0}^{j}}{M_{\Sigma_0}^2} -\frac{3}{16}\frac{\lambda_{\Sigma_1}^{i\dagger}\lambda_{\Sigma_1}^{j}}{M_{\Sigma_1}^2}\,, \\ \frac{C_{\phi \ell}^{(3)ij}}{\Lambda^2} &= -\frac{\lambda_N^{i}\lambda_N^{j\dagger}}{4M_N^2} -\frac{\lambda_E^{i}\lambda_E^{j\dagger}}{4M_E^2} + \frac{1}{16}\frac{\lambda_{\Sigma_0}^{i\dagger}\lambda_{\Sigma_0}^{j}}{M_{\Sigma_0}^2} + \frac{1}{16}\frac{\lambda_{\Sigma_1}^{i\dagger}\lambda_{\Sigma_1}^{j}}{M_{\Sigma_1}^2}\,,\\ \frac{C_{\phi e}^{ij}}{\Lambda^2} &= \frac{\lambda_{\Delta_1}^{i\dagger}\lambda_{\Delta_1}^{j}}{2M_{\Delta_1}^2} - \frac{\lambda_{\Delta_3}^{i\dagger}\lambda_{\Delta_3}^{j}}{2M_{\Delta_3}^2}\,, \end{split} \label{VLLmatch} \end{align} which agree with Refs.~\cite{delAguila:2008pw,deBlas:2017xtg}. \begin{figure}[t] \centering \includegraphics[width=0.42\textwidth]{./Plots/FD1.pdf} \\ \includegraphics[width=0.42\textwidth]{./Plots/FD2.pdf} \hspace{0.9cm} \includegraphics[width=0.42\textwidth]{./Plots/FD3.pdf} \caption{Feynman diagrams giving rise to the operators $Q_{\phi \ell }^{\left( 1 \right)ij}$, $Q_{\phi \ell }^{\left( 3 \right)ij}$ and $Q_{\phi e }^{ij}$, where $X$ denotes any of the six VLLs. Note that the first diagram does not give a contribution for $N$ and $E$.
\label{FeynmanDiagrams}} \end{figure} Here and in the following this notation is to be understood as \begin{align} \begin{aligned} \frac{\lambda_X^i\lambda_X^{j\dagger}}{M_X^2}&=\sum_n\lambda_{X_n}^i M_{X_n}^{-2}\lambda_{X_n}^{j\dagger}\;{\rm for}\; X=N,E\,,\\ \frac{\lambda_{X}^{i\dagger}\lambda_{X}^j}{M_X^2}&=\sum_n\lambda_{X_n}^{i\dagger} M_{X_n}^{-2}\lambda_{X_n}^j\;{\rm for}\; X=\Delta_1,\,\Delta_3,\,\Sigma_0\,,\Sigma_1\,, \end{aligned} \end{align} in the case where more than one generation of VLLs is present. Without loss of generality, we assume that the mass matrices $M_X$ of the VLLs can be made real and diagonal by an appropriate choice of basis. \smallskip Importantly, the different representations give rise to specific patterns for the modifications of the $SU(2)_L$ gauge boson couplings to the SM leptons. In particular, the diagonal elements even have a fixed sign: \begin{align} \begin{aligned}[c] \text{N}:&\qquad C_{\phi \ell}^{(3)ii} =\; \,-C_{\phi \ell}^{(1)ii} < \;0,\\ \text{E}:&\qquad C_{\phi \ell}^{(3)ii} = \;\;\;\;\,C_{\phi \ell}^{(1)ii}<\;0,\\ \Delta_1:&\qquad C_{\phi \ell}^{(3)ii} =\; \;\;\;\,C_{\phi \ell}^{(1)ii}=\;0,\\ \Delta_3:&\qquad C_{\phi \ell}^{(3)ii} = \;\;\;\;\,C_{\phi \ell}^{(1)ii}=\;0,\\ \Sigma_0:&\qquad C_{\phi \ell}^{(3)ii} =\; \;\,\frac{1}{3}C_{\phi \ell}^{(1)ii}>\;0,\\ \Sigma_1:&\qquad C_{\phi \ell}^{(3)ii} = -\frac{1}{3}C_{\phi \ell}^{(1)ii} >\;0, \end{aligned} \qquad \begin{aligned}[c] C_{\phi e}^{ij}\,&=& 0,\\[3pt] C_{\phi e}^{ij}\,&=& 0, \\[3pt] C_{\phi e}^{ij}\,&>& 0,\\[3pt] C_{\phi e}^{ij}\,&<& 0,\\[3pt] C_{\phi e}^{ij}\,&=& 0,\\[3pt] C_{\phi e}^{ij}\,&=& 0. \end{aligned}\label{VLCcorr} \end{align} The resulting modified $Z$ and $W$ couplings after EW symmetry breaking are given in Table~\ref{modSMWZcouplings} in Appendix~\ref{ModWZcouplings}.
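The sign pattern above follows directly from the tree-level matching in \eq{VLLmatch}; the following sketch (our own illustration, for a single generation with a hypothetical coupling and mass) tabulates the flavour-diagonal coefficients $C/\Lambda^2$ per representation and makes the fixed-sign relations explicit:

```python
# Flavour-diagonal Wilson coefficients C / Lambda^2 from the tree-level
# matching of Eq. (VLLmatch), for one generation of each VLL with a
# hypothetical coupling lam and mass M (in GeV); purely an illustrative
# cross-check of the sign pattern in Eq. (VLCcorr).
lam, M = 0.1, 1000.0
y = abs(lam) ** 2 / M ** 2  # |lambda|^2 / M^2

# representation -> (C_phi_l^(1), C_phi_l^(3), C_phi_e), all over Lambda^2
match = {
    "N":      ( y / 4,      -y / 4,   0.0),
    "E":      (-y / 4,      -y / 4,   0.0),
    "Delta1": ( 0.0,         0.0,     y / 2),
    "Delta3": ( 0.0,         0.0,    -y / 2),
    "Sigma0": ( 3 * y / 16,  y / 16,  0.0),
    "Sigma1": (-3 * y / 16,  y / 16,  0.0),
}
```

Each representation thus leaves a distinct fingerprint: the singlets only shift left-handed couplings, the doublets only right-handed ones, and the triplets fix the ratio $C_{\phi\ell}^{(3)}/C_{\phi\ell}^{(1)} = \pm 1/3$.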
Note that if the VLLs $N$ and $\Sigma_0$ are Majorana fermions, $N$ corresponds to the right-handed neutrino in the type~I seesaw~\cite{Minkowski:1977sc,Mohapatra:1979ia}, while $\Sigma_0$ corresponds to the mediator in the type~III seesaw mechanism~\cite{Foot:1988aq,Bajc:2006ia,Bajc:2007zf}. In this case $N$ and $\Sigma_0$ generate the neutrino mass matrices \begin{align} \begin{split} N:&\quad m_{\nu}=\frac{\lambda_{N}\lambda_{N}^T}{2M_{N}}\;v^2\,,\\ \Sigma_0:&\quad m_{\nu}=\frac{\lambda_{\Sigma_0}^{\dagger}\lambda_{\Sigma_0}^*}{8M_{\Sigma_0}}\;v^2\,. \end{split} \end{align} In general, the upper limits on the active neutrino masses set extremely stringent limits on the corresponding couplings (for a given mass of the VLLs). However, by requiring lepton number conservation~\cite{Kersten:2007vk}, as in the inverse seesaw \cite{Mohapatra:1986bd}, the effect in the neutrino masses can be avoided. In fact, it has been shown in an effective picture that such scenarios correspond to a specific pattern of the couplings $\lambda$ that allows the active neutrino masses to be small while the Dirac mass can be sizeable~\cite{Coy:2018bxr}. In the phenomenological analysis we will assume that such a mechanism is at work~\cite{Ingelman:1993ve,delAguila:2005ssc}, or simply that the VLLs $N$ and $\Sigma_0$ are Dirac fermions, meaning that the effects in modified $W$ and $Z$ couplings can be sizeable. \medskip \section{Observables} \label{observables} In this section we summarise the relevant observables for which the SM predictions are altered by the modified $W$ and $Z$ couplings, both in the EFT case and with VLLs. \subsection{Flavour} Already in the EFT, modified $W$ and $Z$ couplings to leptons give rise to processes like $\ell\to \ell^\prime\gamma$ at the one-loop level and can even generate $\ell\to3\ell$ and $\mu\to e$ conversion at tree level.
For the latter two, the expressions are the same in the full theory (with VLLs) and in the effective theory, while for $\ell\to \ell^\prime\gamma$ the expressions differ. We report the expressions for the EFT in appendix \ref{ModWZcouplings}. All VLLs except the $N$ give rise to modified couplings of charged leptons to the $Z$ boson (see Eqs.~(\ref{Gammas}) and (\ref{VLLmatch})) and therefore contribute to $\mu\to e$ conversion, $\mu\to 3e$, $\tau\to 3\mu$, etc.\ already at tree level. However, the three-body decays are phase-space suppressed, so that the radiative lepton decays, although induced only at the loop level, give competitive bounds for tau decays. Nonetheless, the off-diagonal elements are strongly constrained experimentally, both for the EFT~\cite{Crivellin:2013hpa,Pruna:2014asa,Crivellin:2017rmk} and the VLLs~\cite{Tommasini:1995ii,Abada:2007ux,Raidal:2008jk}. Furthermore, since the flavour-changing elements do not generate amplitudes which interfere with the SM contributions to flavour-conserving observables, their effect on the latter is suppressed. Therefore, it is sufficient to consider the flavour-diagonal elements $C_{\phi \ell}^{\left( 1 \right) ii}$, $C_{\phi \ell}^{\left( 3 \right) ii}$ and $C_{\phi e}^{ii}$ within the EW fit. The flavour effects that are inevitably present if a single generation of VLLs couples simultaneously to at least two generations of SM leptons will be calculated in the following. However, note that these effects can in principle be avoided by introducing multiple generations of VLLs and assuming that each SM generation mixes with at most one vector-like generation. \subsubsection{$\ell\to3\ell$ Processes} In $\ell\to3\ell$ processes, we can neglect multiple flavour changes, and thereby contributions to exotic decays such as $\tau^-\to e^-\mu^+e^-$, and focus on the decays involving only one flavour change. 
The corresponding experimental limits (at 90\% CL~\cite{Amhis:2019ckw,Bellgardt:1987du,Lees:2010ez,Hayasaka:2010np,Aaij:2014azz}) are given by \begin{align} \begin{split} \operatorname{Br}(\mu \rightarrow e e e)&\leq 1.0 \times 10^{-12}\,,\\ \operatorname{Br}(\tau \rightarrow \mu \mu \mu) &\leq 1.1 \times 10^{-8}\,,\\ \operatorname{Br}(\tau \rightarrow e e e)&\leq 1.4 \times 10^{-8}\,, \\ \operatorname{Br}(\tau \rightarrow e \mu \mu) &\leq 1.6 \times 10^{-8}\,,\\ \operatorname{Br}(\tau \rightarrow \mu e e) &\leq 8.4 \times 10^{-9}\,. \end{split} \end{align} The branching ratios for $\mu\to 3e$ and $\tau \to e \mu\mu$ (given here for concreteness; the other combinations are obtained trivially by adjusting flavour indices) read \begin{align} \begin{split} {\rm Br}(\mu\to 3e) = \frac{m_{\mu}^5}{1536\pi^3m_Z^4\Gamma_{\mu}}(&2|\Gamma^{\ell L}_{e\mu}\Gamma^{\ell L}_{ee}|^2+2|\Gamma^{\ell R}_{e\mu}\Gamma^{\ell R}_{ee}|^2+|\Gamma^{\ell R}_{e\mu}\Gamma^{\ell L}_{ee}|^2+|\Gamma^{\ell L}_{e\mu}\Gamma^{\ell R}_{ee}|^2)\,,\\ {\rm Br}(\tau\to e\mu\mu) = \frac{m_{\tau}^5}{1536\pi^3m_Z^4\Gamma_{\tau}}(&|\Gamma^{\ell L}_{e\tau}\Gamma^{\ell L}_{\mu\mu}|^2+|\Gamma^{\ell R}_{e\tau}\Gamma^{\ell R}_{\mu\mu}|^2+|\Gamma^{\ell R}_{e\tau}\Gamma^{\ell L}_{\mu\mu}|^2+|\Gamma^{\ell L}_{e\tau}\Gamma^{\ell R}_{\mu\mu}|^2)\,, \end{split} \end{align} with $\Gamma^{\ell L(R)}_{ij}$ given in Eq.~(\ref{Gammas}) and Eq.~(\ref{VLLmatch}) as well as in Table~\ref{modSMWZcouplings} in appendix \ref{ModWZcouplings}; $\Gamma_\mu$ and $\Gamma_\tau$ denote the muon and tau total decay widths. 
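As a rough numerical cross-check of the prefactor in these expressions, the following Python sketch evaluates ${\rm Br}(\mu\to 3e)$ for illustrative coupling values (the numbers passed for $\Gamma^{\ell L}$ are placeholders chosen for illustration, not fit results):

```python
import math

m_mu   = 0.10566      # muon mass in GeV
m_Z    = 91.1876      # Z mass in GeV
Gam_mu = 2.996e-19    # muon total width in GeV (hbar / 2.197 microseconds)

def br_mu3e(GL_emu, GR_emu, GL_ee, GR_ee):
    """Tree-level Z-exchange Br(mu -> 3e), following the expression above."""
    pref = m_mu**5 / (1536.0 * math.pi**3 * m_Z**4 * Gam_mu)
    return pref * (2.0 * abs(GL_emu * GL_ee)**2 + 2.0 * abs(GR_emu * GR_ee)**2
                   + abs(GR_emu * GL_ee)**2 + abs(GL_emu * GR_ee)**2)

# Illustrative: a flavour-diagonal coupling of roughly SM size (~0.2) and an
# off-diagonal one as small as 1e-6 already give a rate of the order of the
# experimental limit of 1e-12, showing how constraining this bound is.
print(br_mu3e(1e-6, 0.0, -0.2, 0.0))
```

Since the rate is quartic in the couplings, the bound on the off-diagonal coupling scales only as the fourth root of the experimental limit.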
\subsubsection{Radiative Lepton Decays} The branching ratio for $\ell_i \to \ell_f \gamma$ can be written as \begin{align} \label{Brmuegamma} \text{Br}[\ell_i \to \ell_f \gamma]&=\frac{m_{\ell_i}^3}{4\pi \, \Gamma_{i}} \big(|c_{fi}^{R} |^{2}+ |c_{if}^{R} |^{2}\big), \end{align} where the coefficients $c_{fi}^{R}$ are given by \begin{align} \begin{split} c^{RN}_{fi}=&\frac{e }{16 \pi ^2} \,m_{\ell_i}\,\left[\lambda_N\lambda_N^\dagger\;\frac{\tilde{f}_V\left(x_N\right)-\tilde{f}_V(0)}{M_N^2}\right]_{fi}\,, \\ c^{RE}_{fi}=&\frac{e }{32\pi ^2 }\,m_{\ell_i}\, \left[\lambda_E\lambda_E^{\dagger }\left(\frac{\tilde{F}_\Phi\left(y_E\right)}{M_H^2} + \frac{-2\tilde{f}_V(0)+\tilde{F}_V\left(c_W^2 x_E\right)-2(1-2s_W^2)\,\tilde{F}_V(0)}{M_E^2}\right)\right]_{fi} \,,\\ c^{R\Delta_1}_{fi}=&\frac{e}{32 \pi ^2}\,m_{\ell_f}\,\left[\lambda_{\Delta_1}^{\dagger }\lambda_{\Delta_1} \left( \frac{\tilde{F}_\Phi\left(y_{\Delta_1}\right)}{M_H^2} +\frac{\tilde{F}_V\left(c_W^2 x_{\Delta_1}\right)-4 s_W^2 \tilde{F}_V(0)}{ M_{\Delta_1}^2}\right)\right]_{fi} \,,\\ c^{R\Delta_3}_{fi}=&\frac{e}{32 \pi ^2}\,m_{\ell_f} \,\left[\lambda_{\Delta_3}^{\dagger }\lambda_{\Delta_3}\left(\frac{ \tilde{F}_\Phi\left(y_{\Delta_3}\right) }{M_H^2} +\frac{\tilde{F}_V\left(c_W^2 x_{\Delta_3}\right) +4 s_W^2 \tilde{F}_V(0)}{M_{\Delta_3}^2}\right)\right]_{fi}\,,\\ c^{R\Sigma_0}_{fi}=&\frac{e}{64 \pi ^2 } \, m_{\ell_i}\,\Bigg[\lambda_{\Sigma_0}^{\dagger }\lambda_{\Sigma_0}\Bigg(\frac{ \tilde{F}_\Phi\left(y_{\Sigma_0}\right)}{M_H^2} \\ &\qquad\qquad\quad +\frac{\tilde{f}_V\left(x_{\Sigma_0}\right)+ \tilde{f}_V(0) + \tilde{F}_V\left(c_W^2 x_{\Sigma_0}\right)\; +2\left(1-2 s_W^2\right) \tilde{F}_V(0)}{M_{\Sigma_0}^2} \Bigg)\Bigg]_{fi}\,, \label{mu_egamma_WCs1} \end{split} \end{align} \begin{align} \begin{split} c^{R\Sigma_1}_{fi}=&\frac{e}{128 \pi ^2 }\, m_{\ell_i} \,\Bigg[\lambda_{\Sigma_1}^{\dagger }\lambda_{\Sigma_1}\Bigg(\frac{\tilde{F}_\Phi\left(y_{\Sigma_1}\right)}{M_H^2} \\ &\qquad\qquad\quad +
\frac{2\tilde{f}_V(0)\,+\tilde{F}_V\left(c_W^2 x_{\Sigma_1}\right)-2\left(1-2 s_W^2\right)\tilde{F}_V(0)}{M_{\Sigma_1}^2}\Bigg) \Bigg]_{fi}\,, \label{mu_egamma_WCs2} \end{split} \end{align} \noindent with the loop functions being \begin{align} \begin{split} \tilde f_\Phi(x)&=\frac{2x^3+3x^2-6x+1-6x^2\log x}{24(x-1)^4},\quad \tilde f_\Phi(0)=\frac{1}{24},\\ \tilde g_\Phi(x)&=\frac{x^2-1-2x\log x}{8(x-1)^3},\quad \tilde g_\Phi(0)=\frac{1}{8},\\ \tilde{F}_\Phi(x)&=\tilde f_\Phi(x)-\tilde g_\Phi(x),\quad \tilde F_\Phi(0)=-\frac{1}{12},\\ \tilde f_V(x) &= \frac{-4x^4+49x^3-78x^2 + 43x -10 - 18x^3\log x}{24(x - 1)^4},\quad \tilde f_V(0)=-\frac{5}{12},\\ \tilde g_V(x) &= \frac{-3(x^3-6x^2 + 7x -2 + 2x^2\log x)}{8(x - 1)^3},\quad \tilde g_V(0)=-\frac{3}{4},\\ \tilde{F}_V(x)&=\tilde f_V(x)-\tilde g_V(x),\quad \tilde F_V(0)=\frac{1}{3}. \end{split} \end{align} Here we have expanded the expressions up to second order in $v/M_X$ and defined \begin{align*} x_X\equiv \frac{M_X^2}{M_W^2},\quad y_X\equiv\frac{M_X^2}{M_H^2},\qquad {\rm with}\qquad X=N,\,E,\,\Delta_1,\,\Delta_3,\,\Sigma_0,\,\Sigma_1. \end{align*} In the presence of more than one generation of VLLs, the expressions in Eqs. (\ref{mu_egamma_WCs1})-(\ref{mu_egamma_WCs2}) are to be understood as \begin{align} c^{RN}_{fi}=&\frac{e }{16 \pi ^2} \,m_{\ell_i}\sum_n\,\lambda_{N_n}^f \lambda_{N_n}^{i*}\frac{\tilde{f}_V\left(x_{N_n}\right)-\tilde{f}_V(0)}{M_{N_n}^2} \end{align} (and similarly for the other representations), where $n$ runs over the generations of VLLs. We use the following experimental bounds on radiative leptonic decays (at 90\% C.L.)~\cite{TheMEG:2016wtm,Aubert:2009ag} \begin{align*} \operatorname{Br}(\mu \rightarrow e \gamma)&\leq 4.2 \times 10^{-13}\,,\\ \operatorname{Br}(\tau \rightarrow e \gamma)&\leq 3.3 \times 10^{-8}\,,\\ \operatorname{Br}(\tau \rightarrow \mu \gamma)&\leq 4.4 \times 10^{-8}\,. 
\end{align*} \medskip \subsubsection{$\mu\to e$ Conversion In Nuclei} The induced $Ze\mu$ couplings lead to $\mu\to e$ conversion already at tree-level. These processes have stringent experimental bounds. Taking into account just this leading contribution, it is sufficient to consider the following effective Lagrangian: \begin{align} \mathcal{L}_{\text{eff}}=\sum_{q=u,d}\left(C_{qq}^{V\,LL}O_{qq}^{V\,LL}+C_{qq}^{V\,LR}O_{qq}^{V\,LR}\right)+L\leftrightarrow R+\text{h.c}.\,, \end{align} with \begin{align} O_{qq}^{V\,LL}&=(\bar{e}\gamma^\mu P_L\mu)(\bar{q}\gamma_\mu P_L q)\,,\qquad O_{qq}^{V\,LR}=(\bar{e}\gamma^\mu P_L\mu)(\bar{q}\gamma_\mu P_R q)\,, \end{align} and \begin{align} C_{qq}^{V\,LL}&=\Gamma_{e\mu}^{\ell L}\;\frac{1}{M_Z^2} \;\Gamma_{qq}^{L},\qquad\, C_{qq}^{V\,LR}=\Gamma_{e\mu}^{\ell L}\;\frac{1}{M_Z^2} \;\Gamma_{qq}^{ R}\,, \end{align} where $\Gamma_{e\mu}^{\ell L/R}$ is defined in \eq{definitionZll} and given in Table~\ref{modSMWZcouplings}. The corresponding $Z$ couplings to quarks in the SM are given by \begin{align} \begin{aligned} \Gamma_{uu}^L&=-\frac{g_2}{c_W}\left(\frac{1}{2}-\frac{2}{3}s_W^2\right)\,,\qquad \Gamma_{uu}^R=\frac{2}{3}\frac{g_2 \,s_W^2}{c_W}\,,\\ \Gamma_{dd}^L&=-\frac{g_2}{ c_W}\left(-\frac{1}{2}+\frac{1}{3}s_W^2\right)\,,\quad\;\Gamma_{dd}^R=-\frac{1}{3}\frac{g_2 \,s_W^2}{c_W}\,. 
\end{aligned} \end{align} \smallskip Hence the transition rate $\Gamma_{\mu\to e}^N\equiv \Gamma(\mu N\to eN)$ is given by (see e.g.~\cite{Cirigliano:2009bz,Crivellin:2014cta,Crivellin:2017rmk}) \begin{align} \Gamma_{\mu\to e}^N =4 m_\mu^5 \,\Bigg \vert \sum_{q=u,d}\left(C_{qq}^{V\;RL}+C_{qq}^{V\;RR}\right)\left(f_{Vp}^{(q)}V_N^p\, +\, f_{Vn}^{(q)}V_N^n\right) \Bigg\vert^2+L\leftrightarrow R\,, \end{align} with the nucleon vector form factors $f_{Vp}^{(u)}=2,\;f_{Vn}^{(u)}=1,\;f_{Vp}^{(d)}=1,\;f_{Vn}^{(d)}=2$ and the overlap integrals for which we use the numerical values for gold~\cite{Kitano:2002mt} \begin{align} V_{\text{Au}}^p=0.0974\,, \quad V_{\text{Au}}^n=0.146\,. \end{align} This conversion rate needs to be normalised by the capture rate~\cite{Suzuki:1987jf} \begin{align} \Gamma_{\text{Au}}^\text{capt}=8.7\times 10^{-18}\; \text{GeV}\,, \end{align} in order to be compared to the experimental 90\% C.L. limit on $\mu\to e$ conversion in gold of~\cite{Bertl:2006up} \begin{align} \frac{\Gamma_\text{Au}^\text{conv}}{\Gamma_\text{Au}^\text{capt}}&<7.0\times 10^{-13}\,, \end{align} \medskip which renders the off-diagonal $Z\mu e$ couplings negligible in our analysis. \subsection{LFU Test} \begin{table}[t!] \centering \begin{tabular}{l c c } \hline\hline Observable & Ref. 
& Measurement \\ \hline \\[-0.5cm] $R\left[\frac{K\rightarrow\mu\nu}{K\rightarrow e\nu}\right]\simeq|1+\frac{{v^2}}{{\Lambda^2}}C_{\phi \ell }^{\left( 3 \right)\mu\mu}-\frac{{v^2}}{{\Lambda^2}}C_{\phi \ell }^{\left( 3 \right)ee}|$ &~\cite{Pich:2013lsa} &$0.9978 \pm 0.0020$ \\[0.2 cm] $R\left[\frac{\pi\rightarrow\mu\nu}{\pi\rightarrow e\nu}\right]\simeq|1+\frac{{v^2}}{{\Lambda^2}}C_{\phi \ell }^{\left( 3 \right)\mu\mu}-\frac{{v^2}}{{\Lambda^2}}C_{\phi \ell }^{\left( 3 \right)ee}|$&~\cite{Aguilar-Arevalo:2015cdf,Tanabashi:2018oca} & $1.0010 \pm 0.0009$ \\[0.2 cm] $R\left[\frac{\tau\rightarrow\mu\nu\bar{\nu}}{\tau\rightarrow e\nu\bar{\nu}}\right]\simeq|1+\frac{{v^2}}{{\Lambda^2}}C_{\phi \ell }^{\left( 3 \right)\mu\mu}-\frac{{v^2}}{{\Lambda^2}}C_{\phi \ell }^{\left( 3 \right)ee}|$&~\cite{Amhis:2019ckw,Tanabashi:2018oca} & $1.0018 \pm 0.0014$ \\[0.2 cm] $R\left[\frac{K\rightarrow\pi\mu\bar{\nu}}{K\rightarrow\pi e\bar{\nu}}\right]\simeq|1+\frac{{v^2}}{{\Lambda^2}}C_{\phi \ell }^{\left( 3 \right)\mu\mu}-\frac{{v^2}}{{\Lambda^2}}C_{\phi \ell }^{\left( 3 \right)ee}|$&~\cite{Pich:2013lsa} & $1.0010 \pm 0.0025$ \\[0.2 cm] $R\left[\frac{W\rightarrow\mu\bar{\nu}}{W\rightarrow e\bar{\nu}}\right]\simeq|1+\frac{{v^2}}{{\Lambda^2}}C_{\phi \ell }^{\left( 3 \right)\mu\mu}-\frac{{v^2}}{{\Lambda^2}}C_{\phi \ell }^{\left( 3 \right)ee}|$&~\cite{Pich:2013lsa,Schael:2013ita} & $0.996 \pm 0.010$ \\ [0.2 cm] $R\left[\frac{\tau\rightarrow e\nu\bar{\nu}}{\mu\rightarrow e\bar{\nu}\nu}\right]\simeq|1+\frac{{v^2}}{{\Lambda^2}}C_{\phi \ell }^{\left( 3 \right)\tau\tau}-\frac{{v^2}}{{\Lambda^2}}C_{\phi \ell }^{\left( 3 \right)\mu\mu}|$&~\cite{Amhis:2019ckw,Tanabashi:2018oca} & $1.0010 \pm 0.0014$ \\[0.2 cm] $R\left[\frac{\tau\rightarrow \pi\nu}{\pi\rightarrow \mu\bar{\nu}}\right]\simeq|1+\frac{{v^2}}{{\Lambda^2}}C_{\phi \ell }^{\left( 3 \right)\tau\tau}-\frac{{v^2}}{{\Lambda^2}}C_{\phi \ell }^{\left( 3 \right)\mu\mu}|$&~\cite{Amhis:2019ckw}& $0.9961 \pm 0.0027$ \\[0.2 cm] 
$R\left[\frac{\tau\rightarrow K\nu}{K\rightarrow \mu\bar{\nu}}\right]\simeq|1+\frac{{v^2}}{{\Lambda^2}}C_{\phi \ell }^{\left( 3 \right)\tau\tau}-\frac{{v^2}}{{\Lambda^2}}C_{\phi \ell }^{\left( 3 \right)\mu\mu}|$&~\cite{Amhis:2019ckw} & $0.9860 \pm 0.0070$ \\[0.2 cm] $R\left[\frac{W\rightarrow \tau\bar{\nu}}{W\rightarrow \mu\bar{\nu}}\right]\simeq|1+\frac{{v^2}}{{\Lambda^2}}C_{\phi \ell }^{\left( 3 \right)\tau\tau}-\frac{{v^2}}{{\Lambda^2}}C_{\phi \ell }^{\left( 3 \right)\mu\mu}|$&~\cite{Pich:2013lsa,Schael:2013ita,ATLAS:2020wvq} & \begin{tabular}{@{}c @{}} $1.034 \pm 0.013|_{\text{LEP}}$\\ $0.992\pm 0.013|_{\text{ATLAS}}$ \end{tabular} \\[0.2 cm] $R\left[\frac{\tau\rightarrow \mu\nu\bar{\nu}}{\mu\rightarrow e\nu\bar{\nu}}\right]\simeq|1+\frac{{v^2}}{{\Lambda^2}}C_{\phi \ell }^{\left( 3 \right)\tau\tau}-\frac{{v^2}}{{\Lambda^2}}C_{\phi \ell }^{\left( 3 \right)ee}|$&~\cite{Amhis:2019ckw,Tanabashi:2018oca} & $1.0029 \pm 0.0014$ \\[0.2 cm] $R\left[\frac{W\rightarrow \tau\bar{\nu}}{W\rightarrow e\bar{\nu}}\right]\simeq|1+\frac{{v^2}}{{\Lambda^2}}C_{\phi \ell }^{\left( 3 \right)\tau\tau}-\frac{{v^2}}{{\Lambda^2}}C_{\phi \ell }^{\left( 3 \right)ee}|$&~\cite{Pich:2013lsa,Schael:2013ita} & $1.031 \pm 0.013$\\[0.2 cm] $R\left[\frac{B\rightarrow D^{(*)}\mu\nu}{B\rightarrow D^{(*)}e\nu}\right]\simeq|1+\frac{{v^2}}{{\Lambda^2}}C_{\phi \ell }^{\left( 3 \right)\mu\mu}-\frac{{v^2}}{{\Lambda^2}}C_{\phi \ell }^{\left( 3 \right)ee}|$&~\cite{Jung:2018lfu} & $0.989 \pm 0.012$\\[0.2 cm] \hline\hline \end{tabular} \caption{Ratios testing LFU together with their dependence on the Wilson coefficients $C_{\phi \ell }^{\left( 3 \right)ij}$ and the corresponding experimental values. Note that here deviations from unity measure LFU violation.}\label{ObsLFU} \end{table} Violation of LFU in the charged current, i.e. modifications of the $W\ell\nu$ couplings, can be tested by ratios of $W$, kaon, pion and tau decays with different leptons in the final state. 
These ratios constrain LFU-violating effects and have reduced experimental and theoretical uncertainties. They are given by \begin{align} R(Y)=\frac{\mathcal{A}[Y]}{\mathcal{A}[Y]_{SM}} \,, \end{align} where $\mathcal{A}$ is the amplitude; each ratio $R(Y)$ is defined such that it equals unity in the limit of no mixing between the SM leptons and the VLLs. Here $Y$ labels the different observables included in our global fit, which are reported in Table~\ref{ObsLFU} together with their dependence on the Wilson coefficients (see \eq{Lagrangian}) and their experimental values. Note that in all these ratios the dependence on $g_2$, the Fermi constant, etc.~drops out. In principle, the CAA could be included here via the ratio $R(V_{us})$ proposed in Ref.~\cite{Crivellin:2020lzu}. However, since we are performing a global fit, including $V_{ud}$ from beta decays and $V_{us}$ from kaon and tau decays is equivalent. Therefore, we will discuss the CAA separately later. \medskip \subsection{EW Precision Observables} \begin{table}[t!] \centering \begin{tabular}{c c} \hline\hline \begin{tabular}{c c c } Observable & Ref. 
& Measurement \\ \hline $M_W\,[\text{GeV}]$ & ~\cite{Tanabashi:2018oca} & $80.379(12)$ \\ $\Gamma_W\,[\text{GeV}]$ & ~\cite{Tanabashi:2018oca} & $2.085(42)$ \\ $\text{BR}(W\to \text{had})$ & ~\cite{Tanabashi:2018oca} & $0.6741(27)$ \\ $\text{sin}^2\theta_{\rm eff(CDF)}^{\rm e}$ & ~\cite{Aaltonen:2016nuy} & $0.23248(52)$ \\ $\text{sin}^2\theta_{\rm eff(D0)}^{\rm e}$ & ~\cite{Abazov:2014jti} & $0.23146(47)$ \\ $\text{sin}^2\theta_{\rm eff(CDF)}^{\rm \mu}$ & ~\cite{Aaltonen:2014loa} & $0.2315(20)$ \\ $\text{sin}^2\theta_{\rm eff(CMS)}^{\rm \mu}$ & ~\cite{Chatrchyan:2011ya} & $0.2287(32)$ \\ $\text{sin}^2\theta_{\rm eff(LHCb)}^{\rm \mu}$ & ~\cite{Aaij:2015lka} & $0.2314(11)$ \\ $P_{\tau}^{\rm pol}$ &~\cite{ALEPH:2005ab} &$0.1465(33)$ \\ $A_{e}$ &~\cite{ALEPH:2005ab} &$0.1516(21)$ \\ $A_{\mu}$ &~\cite{ALEPH:2005ab} &$0.142(15)$ \\ $A_{\tau}$ &~\cite{ALEPH:2005ab} &$0.136(15)$ \\ $\Gamma_Z\,[\text{GeV}]$ &~\cite{ALEPH:2005ab} &$2.4952(23)$ \\ \end{tabular} & \begin{tabular}{c c c} Observable & Ref. 
& Measurement \\ \hline $\sigma_h^{0}\,[\text{nb}]$ &~\cite{ALEPH:2005ab} &$41.541(37)$ \\ $R^0_{\ensuremath{\mathrm{e}}}$ &~\cite{ALEPH:2005ab} &$20.804(50)$ \\ $R^0_{\mu}$ &~\cite{ALEPH:2005ab} &$20.785(33)$ \\ $R^0_{\tau}$ &~\cite{ALEPH:2005ab} &$20.764(45)$ \\ $A_{\rm FB}^{0, e}$&~\cite{ALEPH:2005ab} &$0.0145(25)$ \\ $A_{\rm FB}^{0, \mu}$&~\cite{ALEPH:2005ab} &$0.0169(13)$ \\ $A_{\rm FB}^{0, \tau}$&~\cite{ALEPH:2005ab} &$0.0188(17)$ \\ $R_{b}^{0}$ &~\cite{ALEPH:2005ab} &$0.21629(66)$\\ $R_{c}^{0}$ &~\cite{ALEPH:2005ab} &$0.1721(30)$ \\ $A_{\rm FB}^{0,b}$ &~\cite{ALEPH:2005ab} &$0.0992(16)$\\ $A_{\rm FB}^{0,c}$ &~\cite{ALEPH:2005ab} &$0.0707(35)$ \\ $A_{b}$ &~\cite{ALEPH:2005ab} &$0.923(20)$ \\ $A_{c}$ &~\cite{ALEPH:2005ab} &$0.670(27)$ \\ \end{tabular}\\ \hline\hline \end{tabular} \caption{EW observables included in our global fit together with their current experimental values.\label{ObsEW}} \end{table} The EW sector of the SM was tested with high precision at LEP \cite{Schael:2013ita,ALEPH:2005ab} and the $W$ mass has been measured with high accuracy both at Tevatron~\cite{Aaltonen:2013iut} and at the LHC~\cite{Aaboud:2017svj}. The EW sector can be completely parameterised by three Lagrangian parameters. We choose the set with the smallest experimental error: the Fermi constant ($G_F$), the fine structure constant ($\alpha$) and the mass of the $Z$ boson ($M_Z$). All other quantities and observables shown in Table~\ref{ObsEW} can be expressed in terms of these parameters and their measurements allow for consistency tests. In addition, the Higgs mass ($M_H$), the top mass ($m_t$) and the strong coupling constant ($\alpha_s$) need to be included as fit parameters, since they enter EW observables indirectly via loop effects. For the theoretical predictions we rely on the calculations of Ref.~\cite{Sirlin:1980nh} as implemented in HEPfit~\cite{deBlas:2019okz}; the input parameters of our global fit are reported in Table~\ref{ParamEW} along with their priors. 
\smallskip \begin{table}[t!] \centering \begin{tabular}{l c} \hline\hline Parameter & Prior \\ \hline $G_F\,\,[{\rm GeV}^{-2}]$~\cite{Tanabashi:2018oca} & $1.1663787(6) \times 10^{-5}$ \\ $\alpha$~\cite{Tanabashi:2018oca} & $7.2973525664(17) \times 10^{-3}$ \\ $\Delta\alpha_{\rm had}$~\cite{Tanabashi:2018oca} & $276.1(11) \times 10^{-4}$ \\ $\alpha_s(M_Z)$~\cite{Tanabashi:2018oca} & $0.1181(11)$\\ $M_Z\,\,[{\rm GeV}]$~\cite{ALEPH:2005ab} & $91.1875\pm0.0021$\\ $M_H\,\,[{\rm GeV}]$~\cite{Aaboud:2018wps,CMS:2019drq} & $125.16\pm0.13$ \\ $m_{t}\,\,[{\rm GeV}]$~\cite{TevatronElectroweakWorkingGroup:2016lid,Aaboud:2018zbu,Sirunyan:2018mlv}& $172.80\pm 0.40$ \\ \hline\hline \end{tabular} \caption{Parameters of the EW fit together with their (Gaussian) priors. \label{ParamEW}} \end{table} The modifications of the $W$ and $Z$ boson couplings in \eq{Gammas} do not affect the measurements of $\alpha$ and of $M_Z$, while they do shift the value of $G_F$, which is extracted with very high precision from the decay $\mu\to e\nu\nu$. \noindent Taking into account that Br($\mu^+\rightarrow\ensuremath{\mathrm{e}}^+\nu_e\bar{\nu}_{\mu})\sim1$ we have that \begin{align} \frac{1}{\tau_{\mu}}=\frac{(G_F^{\mathcal{L}})^2m_{\mu}^5}{192\pi^3}(1+\Delta q)(1+C_{\phi \ell }^{\left( 3 \right)\mu\mu}+C_{\phi \ell }^{\left( 3 \right)ee})^2\,, \end{align} where $G_F^{\mathcal{L}}$ is the Fermi constant appearing in the Lagrangian and $\Delta q$ includes phase space, QED and hadronic radiative corrections~\cite{Fael:2020tow, Kinoshita:1958ru, vanRitbergen:1999fi, Ferroglia:1999tg}. Thus we find \begin{align} \begin{split} G_F^{}&=G_F^{\mathcal{L}}(1+C_{\phi \ell }^{\left( 3 \right)\mu\mu}+C_{\phi \ell }^{\left( 3 \right)ee})\,. 
\end{split} \label{GFmod} \end{align} Note that within the standard set of EW observables, which is given in Table~\ref{ObsEW} and was included in our global fit, most observables are indirectly modified by \eq{GFmod} while only some of them are directly affected by the anomalous lepton-gauge boson couplings given in \eq{Gammas}. \medskip \subsection{Cabibbo Angle Anomaly} As outlined in the introduction, the CAA is the disagreement between the value of $V_{ud}$ determined from beta decays and that of $V_{us}$ extracted from kaon and tau decays, once they are compared via CKM unitarity. The most precise determination of $V_{ud}$ is currently the one extracted from super-allowed $\beta$ decays~\cite{Hardy:2018zsb} and is given by \begin{equation} |V_{ud}|^2=\frac{2984.432(3)s}{\mathcal{F}t(1+\Delta_R^V)}\,. \end{equation} For the $\mathcal{F}t$-value we consider both the case of $\mathcal{F}t=3072.07(63)s$~\cite{Hardy:2018zsb} and that of $\mathcal{F}t = 3072(2)s$ including the ``new nuclear corrections'' (NNCs) that were proposed in Refs.~\cite{Seng:2018qru,Gorchtein:2018fxl}. The NNCs are included in addition to the universal electroweak corrections $\Delta_R^V$. Furthermore, there are two sets of nucleus-independent radiative corrections \begin{align} \Delta_R^V\big|_\text{SFGJ}&=0.02477(24)\qquad \text{\cite{Seng:2020wjq}}\,,\\ \Delta_R^V\big|_\text{CMS}&=0.02426(32)\qquad \text{\cite{Czarnecki:2019mwq}}\,. \end{align} Due to the smaller uncertainties in the SFGJ value, which is obtained by combining lattice QCD with dispersion relations, we will use this number in the following. Therefore, we have \begin{align} V_{ud}^\beta&=0.97365(15)\,, & V_{us}^\beta &=0.2281(7)\,,\notag\\ V_{ud}^\beta\big|_\text{NNC}&=0.97366(33)\,, & V_{us}^\beta|_{\text{NNC}} &=0.2280(14)\,, \label{Vusbeta} \end{align} where we employed CKM unitarity with $|V_{ub}|=0.003683$~\cite{CKMfitter:2019,Charles:2004jd} even though the precise value of $|V_{ub}|$ is immaterial for our purpose. 
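The central values in Eq.~(\ref{Vusbeta}) can be reproduced directly from the master formula above; a minimal Python sketch for the case without the NNCs:

```python
import math

Ft      = 3072.07   # Ft-value in s from superallowed beta decays
DeltaRV = 0.02477   # SFGJ nucleus-independent radiative correction
Vub     = 0.003683

# master formula: |Vud|^2 = 2984.432 s / (Ft (1 + Delta_R^V))
Vud2 = 2984.432 / (Ft * (1.0 + DeltaRV))
Vud  = math.sqrt(Vud2)

# first-row CKM unitarity: |Vus|^2 = 1 - |Vud|^2 - |Vub|^2
Vus = math.sqrt(1.0 - Vud2 - Vub**2)

print(Vud, Vus)   # ~0.9736 and ~0.2280, consistent with the values quoted above
```

The same two lines with $\mathcal{F}t = 3072$ reproduce the NNC column, with the larger $\mathcal{F}t$ uncertainty dominating the error there.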
\smallskip Note that $V_{us}$ can be directly determined from the semi-leptonic kaon decays $K_{\ell 3}$. Using the compilation from Ref.~\cite{Moulson:Amherst} (updating Ref.~\cite{Moulson:2017ive}) as well as the form factor normalisation $f_+(0) = 0.9698(17)$~\cite{Carrasco:2016kpy,Bazavov:2018kjg,Moulson:Amherst}, we have that \begin{align} \begin{split} V_{us}^{K_{\mu 3}}&=0.22345(54)(39)=0.22345(67)\,,\\ V_{us}^{K_{e 3}}&=0.22320(46)(39)=0.22320(61)\,, \end{split} \label{VusKl3} \end{align} where the first error refers to experiment and the second to the form factor. Here we include the determination of $V_{us}$ from the muon mode in the global fit, while the electron mode is already taken into account via the LFU ratios in Table~\ref{ObsLFU}. \smallskip The NP modifications to $V_{us}^{K_{\mu 3}}$ and $V_{us}^{\beta}$, including the modified couplings in \eq{Gammas} and the indirect effect of $G_F$, are \begin{align} \begin{split} |V_{us}^{K_{\mu 3}}|\simeq& \; \bigg|V_{us}^{\mathcal{L}}\bigg(1-\frac{v^2}{\Lambda^2}C_{\phi \ell }^{\left( 3 \right)ee}\bigg)\bigg|\,, \\ |V_{us}^{\beta}|\simeq& \;\sqrt{1-|V_{ud}^{\mathcal{L}}|^2\bigg(1-\frac{v^2}{\Lambda^2}C_{\phi \ell }^{\left( 3 \right)\mu\mu}\bigg)^2}\,, \end{split} \end{align} where $V_{us}^{\mathcal{L}}$ and $V_{ud}^{\mathcal{L}}$ are the elements of the (unitary) CKM matrix of the Lagrangian. \smallskip \begin{figure}[t] \centering \includegraphics[width=0.85\textwidth]{./Plots/LFUPlot.pdf} \caption{Global Fit in the LFU scenario with $C^{(3)}_{\phi\ell}\,,C^{(1)}_{\phi\ell}\,,C_{\phi e}$. The green dashed lines correspond to the standard fit (not including CKM elements), while the red regions include $V_{us}$ and $V_{ud}$ by assuming CKM unitarity. The blue dashed lines indicate the region obtained if the additional NNCs are included. 
\label{LFUPlot}} \end{figure} Regarding the purely leptonic kaon decays $K_{\ell2}$, one usually considers the ratio $K\!\to\!\mu\nu$ over $\pi\!\to\!\mu\nu$ to cancel the absolute dependence on the decay constants. This allows one to directly determine $V_{us}/V_{ud}$ once the ratio of decay constants $f_{K^\pm}/f_{\pi^{\pm}}$ is known and the treatment of the isospin-breaking corrections is specified~\cite{Cirigliano:2011tm,DiCarlo:2019thl}. Here, we use the recent results from lattice QCD~\cite{DiCarlo:2019thl} and at the same time adjust the FLAG average~\cite{Aoki:2019cca} back to the isospin limit $f_{K^\pm}/f_{\pi^{\pm}}=1.1967(18)$~\cite{Dowdall:2013rya,Carrasco:2014poa,Bazavov:2017lyh}, to obtain \begin{align} V_{us}^{K_{\mu 2}}=0.22534(42)\,. \label{VusKl2} \end{align} Note that this determination is insensitive to the modified $W\ell\nu$ couplings. \smallskip \begin{figure}[t] \centering \includegraphics[width=1.\textwidth]{./Plots/4ParamPlot.pdf} \caption{Global Fit for the 6-dimensional scenario with $C^{(3)ee}_{\phi\ell}$, $C^{(3)\mu\mu}_{\phi\ell}$, $C^{(3)\tau\tau}_{\phi\ell}$, $C^{(1)ee}_{\phi\ell}$, $C^{(1)\mu\mu}_{\phi\ell}$ and $C^{(1)\tau\tau}_{\phi\ell}$ as free parameters. Here we marginalised over the Wilson coefficients involving taus and do not show them explicitly, since there is no preference for non-zero values in this case. The dashed lines indicate the impact of including the NNCs and the star refers to the SM point. \label{4dFit}} \end{figure} Alternatively, $|V_{us}|$ can also be determined from hadronic $\tau$ decays. Here the current average value for inclusive determinations is~\cite{Amhis:2019ckw} \begin{align} |V_{us}^\tau| = 0.2195\pm0.0019\,. \end{align} Both this inclusive determination as well as the exclusive ones depend on $V_{us}/V_{ud}$, which means that there is no dependence on the modified $W$ couplings at leading order. 
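For orientation, the size of the CAA tension can be estimated by naively comparing the beta-decay determination in Eq.~(\ref{Vusbeta}) with the $K_{\mu2}$ value in Eq.~(\ref{VusKl2}); the sketch below treats the two errors as uncorrelated Gaussians, which ignores the correlations accounted for in the global fit:

```python
import math

# central values and uncertainties quoted in the equations above
Vus_beta, sig_beta = 0.2281, 0.0007    # superallowed beta decays + CKM unitarity
Vus_Kmu2, sig_Kmu2 = 0.22534, 0.00042  # K -> mu nu over pi -> mu nu

diff  = Vus_beta - Vus_Kmu2
sigma = math.hypot(sig_beta, sig_Kmu2)  # errors combined in quadrature

print(diff / sigma)   # ~3.4: the naive significance of the deficit
```

This simple estimate already reproduces the few-sigma tension usually quoted for the CAA; the global fit assesses it consistently together with the other observables.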
Even though in the case of hadronic $\tau$ decays the determination of the CKM elements is not modified by NP effects, they have an impact on the global fit as they increase the significance of the CAA. However, the exclusive modes are already included in the LFU ratios and therefore we do not include them as measurements of the CKM elements. \medskip \section{Analysis}\label{analysis} Now we are in a position to perform a global analysis of all the observables discussed in the last section. We do this within a Bayesian framework using the publicly available HEPfit package~\cite{deBlas:2019okz}, whose Markov Chain Monte Carlo determination of posteriors is powered by the Bayesian Analysis Toolkit (\texttt{BAT})~\cite{Caldwell:2008fw}. With this setup we find an Information Criterion (IC)~\cite{Kass:1995} value of $\simeq 93$ for the SM. \subsection{Model Independent Analysis} In a first step we update the global fit assuming LFU and assess the impact of including the different determinations of $V_{us}$ on the fit. Therefore, we have only three (additional) parameters at our disposal: $C^{(3)}_{\phi\ell}\,,C^{(1)}_{\phi\ell}$ and $C_{\phi e}$. The results in all possible two-dimensional planes are given in Fig.~\ref{LFUPlot}. Interestingly, even under the assumption of LFU, including $V_{us}$ into the fit has a significant impact. In fact, without the NNCs, the 68\% C.L. regions for $C^{(3)}_{\phi\ell}$ and $C^{(1)}_{\phi\ell}$ including $V_{us}$ do not overlap with the 68\% C.L. regions for which $V_{us}$ is not included. This behaviour can be traced back to the fact that beta decays have a sensitivity to modified $W\mu\nu$ couplings, which is enhanced by $|V_{ud}|^2/|V_{us}|^2$~\cite{Crivellin:2020lzu}. Also note that while there is some preference for non-zero values of $C^{(3)}_{\phi\ell}$, $C^{(1)}_{\phi\ell}$ and $C_{\phi e}$, they are still compatible with 0 at $\simeq 2 \sigma$. 
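This $|V_{ud}|^2/|V_{us}|^2$ enhancement can be made explicit numerically by propagating a small shift of the $W\mu\nu$ coupling through the beta-decay extraction of $V_{us}$; in the sketch below, $\epsilon \equiv \frac{v^2}{\Lambda^2}C^{(3)\mu\mu}_{\phi\ell} = 10^{-4}$ is an illustrative value, not a fit result:

```python
import math

Vud = 0.97370                 # illustrative Lagrangian-level CKM element
Vus = math.sqrt(1.0 - Vud**2)
eps = 1e-4                    # illustrative v^2/Lambda^2 * C3_mumu

# Vus as extracted from beta decays when the W-mu-nu coupling is shifted:
Vus_beta  = math.sqrt(1.0 - Vud**2 * (1.0 - eps)**2)
rel_shift = (Vus_beta - Vus) / Vus

print(rel_shift / eps)        # ~18, i.e. the (Vud/Vus)^2 enhancement factor
```

A per-mille-level coupling modification thus moves the extracted $V_{us}$ by almost two percent, which is why beta decays carry so much weight in the fit.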
Having checked explicitly that the impact of $C_{\phi e}$ on the fit is negligible, we exclude it from the following analyses, which allow for LFU violation. \smallskip Allowing for LFU violation, we have six free parameters in our fit, since we can neglect the flavour off-diagonal elements, which are not only constrained by flavour processes but also do not lead to interference with the SM in the other observables. Furthermore, since all Wilson coefficients related to tau leptons turn out to be compatible with zero, we do not include them in Fig.~\ref{4dFit}. Here we again depict both the case where the NNCs are included and the case where they are neglected, finding an IC value of $83$ in both 6-dimensional scenarios, while the IC value reduces to $77$ when the tau couplings are set to zero from the outset. From these plots one can see that the pattern $C^{(1)}_{\phi\ell}=-C^{(3)}_{\phi\ell}$, already presented in Ref.~\cite{Coutinho:2019aiy}, gives a very good fit to data. This result is confirmed by an IC value of $76$ for the 3-dimensional scenario shown in Fig.~\ref{fig:C3-C1} (both for the case with NNCs and without). There we also show the case of $C^{(3)}_{\phi\ell}$ only, which likewise provides a better fit than the SM. Here we find ${\rm IC}\simeq88$ for the scenario without NNCs and ${\rm IC}\simeq83$ with NNCs. \medskip \begin{figure} \centering \includegraphics[width=1\textwidth]{./Plots/C3Plots.pdf} \caption{Global fit for the 3-dimensional scenarios $C^{(1)}_{\phi\ell}=-C^{(3)}_{\phi\ell}$ (left) and $C^{(3)}_{\phi\ell}$ only (right). As in Fig.~\ref{4dFit}, the dashed lines indicate the effect of including the additional NNCs, the star indicates the SM point, and the regions correspond to 68\% and 95\% C.L..} \label{fig:C3-C1} \end{figure} \subsection{Vector Like Leptons} Now we turn to the patterns for the modified $W$ and $Z$ couplings to leptons obtained with VLLs. 
We first consider each representation separately and show the preferred regions in parameter space for each representation in Fig.~\ref{VLFit1}, Fig.~\ref{VLFit2} and Fig.~\ref{VLFit3}. Here also the bounds from $\tau\to 3\mu$ and $\tau\to 3e$ are depicted as dashed black lines. Note that the bounds from $\tau\to\mu\gamma$ and $\tau\to e\gamma$ are weaker and lie outside the displayed area. Also the flavour bounds from $\mu\to e$ processes are not shown in Fig.~\ref{VLFit1}, Fig.~\ref{VLFit2} and Fig.~\ref{VLFit3} since they are very stringent and thus would hardly be visible. Therefore, they are shown separately in Fig.~\ref{Combined_mu_e_Plot}. It is important to keep in mind that the bounds from flavour-violating processes only necessarily apply if just one generation of VLLs is present and that they can be completely avoided in the presence of three or more generations of VLLs. Concerning the overall goodness of the fit, note that none of the representations alone can describe the data much better than the SM. This can also be seen from the obtained IC values of $93(79)$, $99(84)$, $96(82)$, $98(84)$, $95(83)$ and $92(84)$ for N, E, $\Delta_1$, $\Delta_3$, $\Sigma_0$ and $\Sigma_1$, respectively, without (with) the NNCs. \smallskip Therefore, let us search for a simple and minimal way to combine different representations in order to obtain a good fit to data. These criteria are best met by the combination of $N$ and $\Sigma_1$, with $N$ only coupling to electrons and $\Sigma_1$ only coupling to muons. The results of the corresponding two-dimensional fit are depicted in Fig.~\ref{2DPlot}, which shows that this case is in much better agreement with data than the SM, as quantified by the IC values of 73 both in the scenario with and in the scenario without the NNCs. Since this combination of VLLs describes the data so well, we also give the posteriors for the most relevant observables in Table~\ref{Posteriors}. 
\begin{figure}[t] \centering \begin{tabular}{cc} \includegraphics[width=1.0\textwidth]{./Plots/NandEPlots.pdf} \end{tabular} \caption{Preferred regions in parameter space for the VLLs $N$ and $E$. The color coding is the same as in Fig.~\ref{4dFit} and the black line indicates the exclusion by $\tau\to3\mu$ or $\tau\to3e$ in case of one generation of VLLs. The exclusions from $\mu\to e$ processes are very stringent and for better visibility are shown in Fig.~\ref{Combined_mu_e_Plot}.\label{VLFit1}} \end{figure} \begin{figure}[t] \centering \begin{tabular}{cc} \includegraphics[width=1.0\textwidth]{./Plots/D1andD3Plots.pdf} \end{tabular} \caption{Preferred regions in parameter space for the VLLs $\Delta_1$ and $\Delta_3$. The color coding is the same as in Fig.~\ref{4dFit} and the black line indicates the exclusion by $\tau\to3\mu$ or $\tau\to3e$ in case of one generation of VLLs. The exclusions from $\mu\to e$ processes are very stringent and for better visibility are shown in Fig.~\ref{Combined_mu_e_Plot}.\label{VLFit2}} \end{figure} \begin{figure}[t] \centering \begin{tabular}{cc} \includegraphics[width=1.0\textwidth]{./Plots/S0andS1Plots.pdf} \end{tabular} \caption{Preferred regions in parameter space for the VLLs $\Sigma_0$ and $\Sigma_1$. The color coding is the same as in Fig.~\ref{4dFit} and the black line indicates the exclusion by $\tau\to3\mu$ or $\tau\to3e$ in case of one generation of VLLs. The exclusions from $\mu\to e$ processes are very stringent and for better visibility are shown in Fig.~\ref{Combined_mu_e_Plot}.\label{VLFit3}} \end{figure} \begin{figure} \centering \includegraphics[scale=0.49]{./Plots/CombinedPlot.pdf} \caption{Upper bounds on the product $|\lambda_X^e\lambda_X^\mu|$ from the lepton flavour violating processes $\mu\to e\gamma$ (continuous lines), $\mu\to eee$ (dashed) and $\mu \to e$ conversion (dotted) as a function of the mass of the VLLs $X\,=\,N,\,E,\,\Delta_1,\,\Delta_3,\,\Sigma_0,\,\Sigma_1$. 
Here we assumed that only one generation of a single VLL representation is present at a time. For $\mu\to eee$ and $\mu \to e$ conversion we only included the dominant tree-level effects induced by the modified $Z\ell\ell$ couplings. Note that, as a consequence, $N$ only contributes to $\mu\to e\gamma$, while all other representations lead to $\mu\to3e$ and $\mu\to e$ conversion as well. Since, for better visibility, only the results for the processes $\ell \to 3\ell'$ are depicted, we list the conversion factors from ${\rm Br}(\ell\to \ell ' \ell''\ell'')$ to ${\rm Br}(\ell\to 3\ell')$ (involving just one flavour off-diagonal coupling) in Table~\ref{l3l_Conversion_Factors}.}\label{Combined_mu_e_Plot} \end{figure} \section{Conclusions}\label{Conclusions} Possible modifications of the SM $Z$ and $W$ boson couplings to leptons can be most accurately constrained or determined by performing a global fit to all the available EW data. This usually includes LEP data, as well as $W$, top, and Higgs mass measurements. However, it was recently pointed out that also the CKM element $V_{us}$ (or equivalently $V_{ud}$, if CKM unitarity is employed) is affected by modified $W\ell\nu$ couplings. In fact, the interesting CAA, pointing towards an (apparent) violation of first-row CKM unitarity, can be viewed as a sign of LFUV. Therefore, this anomaly not only falls into the pattern of other hints for LFUV observed in semi-leptonic $B$ decays, but can even be explained by modified $W\ell\nu$ couplings. \smallskip \begin{figure}[t!] \centering \includegraphics[width=0.84\textwidth]{./Plots/2DPlot.pdf} \caption{Global fit in case (one generation of) the VLL $N$ couples to electrons and the VLL $\Sigma_1$ couples to muons, only. The red regions are preferred at the 68\%, 95\% and 99.7\% C.L. and the lines indicate the effect of including the NNC.\label{2DPlot}} \end{figure} We take this as a motivation to update the global EW fit to modified gauge boson couplings to leptons.
We first study the model-independent approach, where gauge-invariant dim-6 operators (directly) affect the $Z$ and $W$ couplings, and find that even in the LFU case the inclusion of CKM elements in the fit significantly impacts the results. Furthermore, for specific NP patterns like $C^{(3)}_{\phi\ell}=-C^{(1)}_{\phi\ell}$ or $C^{(3)}_{\phi\ell}$ only, the CAA leads to a preference of non-zero modifications over the SM hypothesis. \smallskip Moving on to UV-complete models, we studied all six representations of VLLs which can mix, after EW symmetry breaking, with SM leptons. These different representations (under the SM gauge group) of heavy leptons lead to distinct patterns in the modifications of the $W$ and $Z$ couplings. We performed a global fit to all VLL representations separately, showing the preferred regions in parameter space, which can be used to test models with VLLs against the data. In the case of a single generation of VLLs, the effects on EW precision observables are correlated with the charged lepton flavour violating observables. The resulting LFV bounds (which can be avoided in the presence of multiple generations of VLLs) are complementary to the regions obtained from the EW fit for $\tau-\mu$ and $\tau-e$ couplings. For $\mu-e$ couplings the bounds from flavour-violating processes are much stronger than the fit results and we show them separately in Fig.~\ref{Combined_mu_e_Plot}. Finally, while no single representation of VLLs gives a fit far better than that of the SM, we were able to identify a simple combination of VLLs which can achieve this: $N$ coupling only to electrons and $\Sigma_1$ coupling only to muons avoids the LFV constraints and agrees much better with the data than the SM (by more than $4\,\sigma$). \smallskip Several experimental developments are foreseen which can improve the LFU tests in Table~\ref{ObsLFU}. The J-PARC E36 experiment aims at measuring $K\to\mu\nu/K\to e\nu$~\cite{Shimizu:2018jgs}.
A sensitivity similar to that of $R(V_{us})$ may be possible for $\tau\to \mu\nu\bar\nu/\tau\to e\nu\bar\nu$ at Belle II~\cite{Kou:2018nap}, where approximately one order of magnitude more $\tau$ leptons will be produced than at Belle or BaBar. The most promising observable is probably $\pi\to\mu\nu/\pi\to e\nu$, for which the PEN experiment anticipates an improvement by more than a factor of three~\cite{Glaser:2018aat}, which would also bring the limit on $W\mu\nu$ vs $W e\nu$ modifications well below $O(10^{-3})$. Interestingly, here our $N+\Sigma_1$ model predicts a deviation from the SM expectation of more than $4\,\sigma$ which can therefore be tested in the near future. \smallskip \begin{table}[t!] \centering \begin{tabular}{c c c c c} \hline\hline Observable & Measurement & SM Posterior & NP Posterior & Pull\\ \hline $M_W\,[\text{GeV}]$ & $80.379(12)$ & $80.363(4)$& $80.369(6)$ & \cellcolor{Gray}$0.56$\\ \hline \\[-0.5cm] $R\left[\frac{K\rightarrow\mu\nu}{K\rightarrow e\nu}\right]$ &$0.9978 \pm 0.0020$ & $1$& $1.00168(39)$ & $-0.80$ \\[0.2 cm] $R\left[\frac{\pi\rightarrow\mu\nu}{\pi\rightarrow e\nu}\right]$& $1.0010 \pm 0.0009$& $1$ & $1.00168(39)$ &\cellcolor{Gray} $0.42$ \\[0.2 cm] $R\left[\frac{\tau\rightarrow\mu\nu\bar{\nu}}{\tau\rightarrow e\nu\bar{\nu}}\right]$ & $1.0018 \pm 0.0014$& $1$ & $1.00168(39)$ &\cellcolor{Gray} $1.2$ \\[0.2 cm] $|V_{us}^{K_{\mu3}}|$ &$0.22345(67)$ & $0.22573(35)$ & $0.22519(39)$ & \cellcolor{Gray}$0.77$\\[0.2 cm] $|V_{ud}^{\beta}|$ & $0.97365(15)$& $0.97419(8)$ & $0.97378(13)$ & \cellcolor{Gray}$2.52$\\[0.2 cm] \hline\hline \end{tabular} \caption{Posteriors for the observables with the largest pulls with respect to the SM in our model in which $N$ mixes with electrons and $\Sigma_1$ with muons. Note that $|V_{us}^{K_{\mu3}}|$ and $|V_{ud}^{\beta}|$ are not the Lagrangian parameters but the predictions for these CKM elements as extracted from data assuming the SM.
\label{Posteriors}} \end{table} Clearly, modified $W\ell\nu$ couplings always come together with modified $Z\ell\ell$ and/or $Z\nu\nu$ couplings. The LEP bounds on $Z\ell\ell$ couplings are already at the per mille level~\cite{Schael:2013ita}, and the bounds on the invisible $Z$ width (corresponding to $Z\nu\nu$ in the SM) are also excellent. These bounds could be significantly improved by future $e^+e^-$ colliders such as the ILC~\cite{Baer:2013cma}, CLIC~\cite{deBlas:2018mhx}, or the FCC-ee~\cite{Abada:2019lih,Abada:2019zxq}. Furthermore, $W$ pair production will allow for a direct determination of $W\to\mu\nu/W\to e\nu$. In particular, the FCC-ee could produce up to $10^8$ $W$ bosons (compared to LEP, which produced $4\times 10^4$ $W$ bosons), leading to an increase in precision that would render a direct discovery of LFUV in $W\ell\nu$ couplings conceivable. Furthermore, since VLLs can explain the anomalous magnetic moment of the muon (electron) and can be involved in the explanation of $b\to s\ell^+ \ell^-$ data, they are prime candidates for an extension of the SM and could also be discovered directly at the HL-LHC~\cite{ApollinariG:2017ojx} or future $e^+e^-$ colliders. \acknowledgments We thank Antonio Coutinho for useful discussions and help with HEPfit. This work is supported by a Professorship Grant (PP00P2\_176884) of the Swiss National Science Foundation.
\section*{APPENDIX} \bibliographystyle{bibliography/IEEEtran} \section{Conclusion} \label{sec:conclusion} In this paper we have presented a method for determining the median value of measurements for a group of agents communicating over a scheduled acoustic communication channel. The convergence of the protocol towards the median value is proven theoretically. In order to validate the presented protocol we tested it in a simulation setting by creating a model of the multi-agent system using scheduled communication. This model was used to gather simulation results of the presented dynamic median method for different numbers of agents, different connectivity matrices and tuning parameters. We have also tested the presented method on the aMussel underwater robotic platforms, which are equipped with acoustic communication units. Simulation, as well as experimental, results have shown that the presented method converges towards the median of the measurements and that the parameters of the consensus protocol can be tuned so that the speed of convergence and the accuracy of the system output reach desired values. Results also show that the protocol works correctly even under a higher percentage of communication losses. \section{Preliminaries} \label{sec:prelim} \subsection{Problem description} We consider a network of $n$ agents communicating over a single communication channel. The underlying communication graph is defined as a directed graph $\mathbf{G}=(\mathbf{V},\mathbf{E})$, where the set of nodes $\mathbf{V}= \{1,2,\ldots,n\}$ corresponds to agents, and the set of edges $\mathbf{E}\subseteq \mathbf{V} \times \mathbf{V}$ corresponds to communication links between agents. $\mathbf{E}$ is usually described with a corresponding adjacency matrix $\textbf{A}$ such that $a_{ij}=1$ if there is a communication link from agent $j$ to agent $i$, and $a_{ij}=0$ otherwise. Each agent $i$ has a local reference signal, denoted $z_i \in \mathbb{R}$, $\mathbf{z}=[z_1\, z_2\, \ldots\, z_n]$.
In this paper this signal is a measurement made by the agent, but in general it could be the value of some other internal or external variable. The internal state of agent $i$ in step $k$ is denoted as $x_i^k \in \mathbb{R}$. Let us introduce the following notation. The \textit{median value} of a vector $\mathbf{z}=[z_1\, z_2\, \ldots\, z_n]$, whose elements are listed in ascending order, can be defined as (\cite{Franceschelli2017}): \begin{equation} m(\mathbf{z}) \in \begin{cases} \{z_{\frac{n+1}{2}}\} & \text{if $n$ is odd} \\ [z_{\frac{n}{2}},z_{\frac{n}{2}+1}] & \text{if $n$ is even} \\ \end{cases} \label{mediandef} \end{equation} Agent dynamics in the discrete-time domain can in general be written as: \begin{equation} x_i^{k+1} = x_i^k + u_i^k, \quad i \in \{1,2,\ldots,n\} \label{eq-dynamics} \end{equation} where $u_i^k \in \mathbb{R}$ takes into account both the agent's internal state and the states of neighbouring agents, which are obtained through communication. We say that the system running (\ref{eq-dynamics}) converges to a consensus value $c$ (denoted $x_i^k \xrightarrow{} c$) iff: \begin{equation} \exists \delta>0,\exists k_0>0, |x_i^k-c|<\delta, \forall i\in \{1,2,\ldots,n\}, \forall k>k_0 \label{convergence} \end{equation} The most common consensus protocols of this kind are designed to converge to the \textit{average} value of the local reference signals ($z_i$) of all agents. In this paper we deal with a \textit{median} consensus protocol, that is, we propose and analyse a protocol such that the internal states $x_i^k$ of all agents converge to the median of their current measurements, $x_i^k \xrightarrow{} m(\mathbf{z})$. Further, since the proposed protocol allows individual agents to track a \textit{time-varying} median of the reference signals (measurements), it belongs to the class of \textit{dynamic} consensus protocols.
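As a concrete illustration of the median definition above, consider the following minimal Python sketch (ours, not part of the deployed protocol code; for even $n$ any value in the interval is a valid median, and returning the midpoint is just one representative choice):

```python
import numpy as np

def median_value(z):
    """Median m(z) following the definition above: the middle element
    for odd n; for even n any value in [z_{n/2}, z_{n/2+1}] qualifies,
    and the midpoint is returned as one representative choice."""
    zs = np.sort(np.asarray(z, dtype=float))
    n = len(zs)
    if n % 2 == 1:
        return zs[n // 2]
    return 0.5 * (zs[n // 2 - 1] + zs[n // 2])
```

Note that, unlike the average, the median is insensitive to a single outlier measurement, which is the key property exploited in this paper.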
\begin{table}[] \caption{List of variables used} \label{tab:variables} \centering { \begin{tabular}{|c|c|} \hline Variable & Explanation\\ \hline\hline $x_i^k$& internal state of $i$th agent at step $k$\\ & (communicated over the network)\\ \hline $y_i^k$ & additional internal state of $i$th agent at step $k$ \\ & (not communicated over the network)\\ \hline $z_i^k$ & measurement of agent $i$ at step $k$\\ \hline $a_{ij}$ & $1$ if agent $i$ is neighbour of agent $j$, $0$ otherwise\\ \hline $\beta$, $\alpha$, $\gamma$, $\kappa$ & algorithm tuning parameters\\ \hline $n$ & number of agents\\ \hline $r_i$ & number of neighbours of agent $i$\\ \hline \end{tabular}} \end{table} \subsection{Communication scheme} \label{sec:comm} The communication scheme is such that agents, due to the characteristics of acoustic signals, must take turns transmitting messages. In other words, in order to avoid interference of acoustic signals, one must ensure that agents do not transmit messages at the same time. We opt for the simplest solution, which is a sequential assignment of time-slots to agents, repeated cyclically (\emph{round-robin} \cite{OSStallings}). Each time-slot is reserved for message transmission by a single agent. \begin{figure}[h!] \centering \includegraphics[width=\linewidth,trim=4 4 4 4,clip]{figures/topologies.pdf} \caption{Scheduled acoustic communication - graph $G_i$ for each time-slot and the underlying graph $G$. Sending agent (blue), receiving agents (green).} \label{fig:comm_topology} \end{figure} The result of the described scheme is a switching communication topology, as given in Figure \ref{fig:comm_topology}. The underlying communication graph $\mathbf{G}$ is a union of the communication graphs $\mathbf{G_i}$, $i \in \{1,\ldots, n\}$ over one cycle of the described round-robin schedule. The same communication scheme was used in \cite{Arbanas20182} and \cite{arbanas2018}, where it was more thoroughly elaborated.
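The round-robin slot assignment can be sketched as follows (a Python illustration of ours, with hypothetical function names): given the underlying adjacency matrix $\mathbf{A}$ of graph $\mathbf{G}$, each time-slot $k$ induces a graph $\mathbf{G_k}$ in which only agent $k$ transmits and its neighbours receive, and the union over one cycle recovers $\mathbf{G}$:

```python
def round_robin_slots(A):
    """Per-time-slot adjacency matrices: in slot k only agent k sends,
    so the only nonzero entries are A_k[i][k] for the neighbours i of k."""
    n = len(A)
    slots = []
    for k in range(n):
        Ak = [[0] * n for _ in range(n)]
        for i in range(n):
            if i != k and A[i][k] == 1:
                Ak[i][k] = 1  # link from sender k to receiver i
        slots.append(Ak)
    return slots
```

This makes explicit why the resulting system is a switching system: the effective topology changes from step to step even though the underlying graph is fixed.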
Other approaches might be used as well and do not affect the outcome of the proposed protocol. \section{Dynamic median consensus} \label{sec:consensus} We propose the following decentralized protocol, defined using a local update rule of each agent: \begin{equation} \begin{split} x_i^{k+1}&=x_i^k+a^k_{ij}(\beta(x_j^k-x_i^k)+\frac{\alpha}{r_i} \text{sign}(z_i^k-x_i^k)+y_i^k)\\ y_i^{k+1}&=y_i^k+a^k_{ij}(\gamma((x_j^k-x_i^k)-\kappa y_i^k))\\ \end{split} \label{eq:states} \end{equation} where $\beta$, $\alpha$, $\gamma$ and $\kappa$ are tuning parameters, $y_i^k$ is an additional internal state of agent $i$ in step $k$, $x_j^k$ is the value of agent $j$ which is transmitting information over the communication channel in step $k$ (when $a^k_{ij} \neq 0$) and $r_i$ is the total number of neighbours of agent $i$, obtained from graph $\mathbf{G}$ (including agent $i$ itself). A list of the variables used is given in Table \ref{tab:variables}. The initial value is $x_i^0 = z_i^0$. With respect to other median-reaching algorithms (\cite{Franceschelli2014}, \cite{Franceschelli2017}), the novelty is the introduction of the internal state variable $y_i$, which is needed to ensure convergence for the underlying switching topology. The signum component of the local rule, characteristic of median-reaching consensus protocols, is here specified to account for the number of neighbouring agents ($r_i$), which is also needed to ensure stability due to the switching nature of the system. \begin{theorem} \label{theorem1} For a local rule of each agent defined as \eqref{eq:states}, and if $y_i \in [-\frac{2\alpha}{r_i},\frac{2\alpha}{r_i}]$, $\gamma<\beta<\frac{1}{n^2}$, and $\kappa\gamma<1$, all of the agents converge to the value $m(\mathbf{z})$.
\end{theorem} \begin{proof} In order to simplify the equations, let us introduce the following notation: $\max\limits_{i \in \mathbf{V}}x_i^k =x^k_{max}$, $\min\limits_{i \in \mathbf{V}}x_i^k =x^k_{min}$. Similarly to \cite{Franceschelli2017}, we consider a discrete non-smooth Lyapunov candidate: \begin{equation} V(\mathbf{x^k},\mathbf{y^k})=x^k_{max}-x^k_{min}+y^k_{max}-y^k_{min} \end{equation} In order for the system to be stable, the following conditions need to be met: \begin{equation} \begin{split} V(0)&=0\\ V(\mathbf{x}^k,\mathbf{y}^k)&>0, \forall \mathbf{x}^k, \mathbf{y}^k \neq \mathbf{0} \\ dV=V(\mathbf{x}^{k+1},\mathbf{y}^{k+1})-V(\mathbf{x}^k, \mathbf{y}^k )&\leq 0, \forall \mathbf{x}^k, \mathbf{y}^k \end{split} \label{eq:lyapunov} \end{equation} The first two conditions in \eqref{eq:lyapunov} are always satisfied; the second condition could only reach the value $0$ in the case when all $x_i$ and all $y_i$ have the same value. However, this case never occurs because, by equation \eqref{eq:states}, for the $x_i$ to be in consensus the $y_i$ cannot all have the same value.
The third condition in \eqref{eq:lyapunov} yields: \begin{equation} \begin{split} dV = &x^{k+1}_{max}-x_{min}^{k+1} +y_{max}^{k+1}-y_{min}^{k+1}-\\ -&(x^{k}_{max}-x_{min}^{k} +y_{max}^{k}-y_{min}^{k}) \end{split} \label{eq:condition} \end{equation} By taking into account the local interaction rule \eqref{eq:states}, the worst case from the stability point of view can be written as: \begin{equation} \begin{split} x_{max}^{k+1}&=x^{k}_{max}+\beta(x_j^k-x^{k}_{max})+\frac{\alpha}{r_{min}}+y^{k}_{max}\\ x_{min}^{k+1}&=x^{k}_{min}+\beta(x_j^k-x^{k}_{min})-\frac{\alpha}{r_{min}}+y^{k}_{min}\\ y_{max}^{k+1}&=y^{k}_{max}+\gamma((x_j^k-x^{k}_{min})-\kappa y^{k}_{max})\\ y_{min}^{k+1}&=y^{k}_{min}+\gamma((x_j^k-x^{k}_{max})-\kappa y^{k}_{min})\\ \end{split} \label{eq:raspisano} \end{equation} By substituting \eqref{eq:raspisano} in \eqref{eq:condition}, we obtain the following: \begin{equation} dV=(1-\gamma\kappa)(y^{k}_{max}-y^{k}_{min}) -(\beta-\gamma)(x^{k}_{max}-x^{k}_{min}) +2\frac{\alpha}{r_{min}} \label{eq:skraceno} \end{equation} For the system to be asymptotically stable, $dV<0$ has to be satisfied. Given $\beta > \gamma$ and, as stated in Theorem \ref{theorem1}, that $y_{max}$ and $y_{min}$ are limited to $[-\frac{2\alpha}{r_{min}},\frac{2\alpha}{r_{min}}]$, with $\kappa\gamma<1$, we get: \begin{equation} x^{k}_{max}-x^{k}_{min}>\frac{6\alpha-4\kappa\gamma\alpha}{r_{min}(\beta-\gamma)} \label{eq:uvjet} \end{equation} The condition \eqref{eq:uvjet} describes the worst possible case, that is, the right side of the inequality is the lowest theoretical bound on the values of $x^{k}_{max}-x^{k}_{min}$. When \eqref{eq:uvjet} holds, the energy in the system dissipates until the condition \eqref{eq:uvjet} becomes false. At that moment $x_{max}-x_{min}$ starts increasing until \eqref{eq:uvjet} becomes true again. As the system approaches consensus, $y_i^k$ converges to either $\frac{\alpha}{r_i}$ or $-\frac{\alpha}{r_i}$.
Hence, the system is stable under the given condition; however, the system \eqref{eq:states} does not reach a steady state in the classical sense. We assume that the steady state is reached when the values over one communication cycle of length $n$ become stationary, that is, when $x_i^{k+n}=x_i^k, y_i^{k+n}=y_i^k$, $\forall i$, $\forall k > k_0$. From \eqref{eq:states} we get: \begin{equation} \label{eq:steadystate} \begin{split} x_i^{k+n}=x_i^k &+\beta\sum_{j=1}^{n}{a_{ij}(x_j^{k+j}-x_i^{k+j})} +\sum_{j=1}^{n}{a_{ij}y_i^{k+j}}\\ &+\frac{\alpha}{r_i}\sum_{j=1}^{n}{a_{ij}\text{sign}(z_i-x_i^{k+j})}\\ y_i^{k+n}=y_i^k&+\gamma\sum_{j=1}^{n}{a_{ij}(x_j^{k+j}-x_i^{k+j})} -\gamma \kappa \sum_{j=1}^{n}{a_{ij}y_i^{k+j}} \end{split} \end{equation} By summing the expressions for $y_i^{k+n}$ in \eqref{eq:steadystate} over all agents $i$ in the stationary state, we get: \begin{equation} \begin{split} \sum_{i=1}^n\sum_{j=1}^{n}{a_{ij}(x_j^{k+j}-x_i^{k+j})} =\\ \kappa \sum_{i=1}^n \frac{-\alpha}{r_i(1+\beta\kappa)} \sum_{j=1}^{n}{a_{ij}\text{sign}(z_i-x_i^{k+j})} \end{split} \label{eq:sumsteadystate} \end{equation} If we assume that the value of $\text{sign}(z_i-x_i^{k+j})$ does not change for any $i$ during the $n$ steps, equation \eqref{eq:sumsteadystate} reduces to: \begin{equation} \begin{split} \sum_{i=1}^n\sum_{j=1}^{n}{a_{ij}(x_j^{k+j}-x_i^{k+j})} = \frac{-\alpha\kappa}{1+\beta\kappa}\sum_{i=1}^n {\text{sign}(z_i-x_i^k)} \end{split} \label{eq:sumsteadystate2} \end{equation} Under the assumption that the system reaches consensus of all agents, the following holds: \begin{equation} \sum_{i=1}^{n} \sum_{j=1}^{n}{a_{ij}(x_j^{k+j}-x_i^{k+j})}\approx 0 \label{eq:assumption} \end{equation} which in turn gives: \begin{equation} \sum_{i=1}^n {\text{sign}(z_i-x_i^{k})} \approx0 \label{eq:sumsignum} \end{equation} In the case when all agents have exactly the same value, $c=x_i^k$ $\forall i$, it is clear from \eqref{eq:sumsignum} that $c$ must be the median value of vector
$\mathbf{z}$. However, according to the definition of consensus \eqref{convergence}, not all $x_i^k$ need to have exactly the same values. For that reason further analysis is needed. Without loss of generality we can sort the elements under the sum in equation \eqref{eq:sumsignum} in ascending order of the elements $z_i$: \begin{equation} \sum_{i=1}^n {\text{sign}(z_{l(i)}-x_{l(i)}^k)} \approx0 \label{eq:sortsumsignum} \end{equation} where $l(i)$ rearranges the indices from \eqref{eq:sumsignum} in such a way that $z_{l(i)}<z_{l(i+1)},\forall i$. It is important to note that each element in the sum in equation \eqref{eq:sumsignum} has a corresponding element in the sum in equation \eqref{eq:sortsumsignum}. After that we can split the sum into two sums: \begin{equation} \sum_{i=1}^{n/2}\text{sign}(z_{l(i)}-c+\delta)+\sum_{i=n/2+1}^{n}\text{sign}(z_{l(i)}-c-\delta)\approx 0 \label{eq:splitsumsign2} \end{equation} where $x_{l(i)}^k\in [c-\delta,c+\delta]$, as defined in \eqref{convergence}. The exact value of $c$ depends on the values in vector $\mathbf{z}$, but it is clear that for \begin{equation} c \in [m(z)-\delta,m(z)+\delta] \label{eq:rangec} \end{equation} the system converges to the median value of the measurements $\mathbf{z}$. Finally, since matrix $\mathbf{A}$ is symmetric, every element $(x_j^{k+j_1}-x_i^{k+j_1})$ in the sum in \eqref{eq:sumsteadystate} has its pair element $(x_i^{k+j_2}-x_j^{k+j_2})$, except the elements on the diagonal, which by default have value 0.
Hence, from equations \eqref{eq:steadystate} and \eqref{eq:sumsteadystate}, one can determine $\beta$ from the worst case scenario: \begin{equation} \frac{1}{2}\sum_{i=1}^n\sum_{j=1}^n \frac{\beta \kappa \alpha- (-\beta \kappa \alpha)}{1+\beta \kappa}<\frac{\kappa\alpha}{1+\beta\kappa} \end{equation} which leads to: \begin{equation} \beta<\frac{1}{n^2} \end{equation} \end{proof} \section{Experimental results} \label{sec:experimental} In this section we present the results of experiments performed with aMussels (Fig. \ref{fig:amussel}). The aMussel platform (described in \cite{Loncar2019}) is a robotic unit intended for long-term monitoring of underwater areas. It is designed for low-power operation on the sea-bed, where it takes measurements at regular intervals and goes into low-power mode in between. It is equipped with various sensors used to perceive the environment, and two underwater communication devices to share acquired data and control signals with other robots. It is capable only of 1D movement, as it can change its depth using a simple yet effective buoyancy system. The aMussel is powered by two single-cell LiPo batteries, each of which powers different elements of the system. As a communication channel between agents (aMussels) in the presented experiments, we have used a nanomodem acoustic unit. For successful communication, only one of the aMussels in the group is allowed to transmit data using a nanomodem at any given moment. For this reason we have developed a time-scheduling scheme, where each of the aMussels has an assigned time-slot in which it is allowed to transmit information using acoustics. Details about the time-scheduling implementation are given in \cite{Loncar2019} and in Section \ref{sec:comm}.
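Before running experiments, the tuning parameters can be checked against the sufficient conditions of Theorem \ref{theorem1}; a small helper of ours (not part of the deployed aMussel code) makes the conditions explicit:

```python
def check_tuning(n, beta, gamma, kappa):
    """Sufficient conditions of Theorem 1 on the tuning parameters:
    gamma < beta < 1/n^2 and kappa * gamma < 1."""
    return gamma < beta < 1.0 / n**2 and kappa * gamma < 1.0
```

Note how restrictive the $\beta<1/n^2$ bound becomes for larger networks: for $n=31$ it already requires $\beta$ of the order of $10^{-3}$.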
\begin{figure}[t] \centering \includegraphics[width=0.55\linewidth,trim=0 90 0 90, clip]{figures/jarun4.pdf} \caption{Experiment setup using aMussel platforms} \label{fig:amussel} \end{figure} When on the surface, aMussels can communicate using WiFi for data transfer, Bluetooth for diagnostic purposes, and SMS for sending short messages (like GPS coordinates) over greater distances. Each aMussel is also equipped with two underwater communication devices: a short-range green-light device based on modulated light, and a nanomodem, a long-range acoustic communication device. In this work, we use only the latter. \subsection{Experiment Setup} We executed three experiments, one with 3 aMussels and two with 5 aMussels, with the corresponding parameters in Table \ref{tab:parametri}. Since the aMussels were close to each other, it is assumed that the graph is complete, corresponding to the connection matrix $\mathbf{A^1}$, but it is possible that some individual messages did not reach all the agents. In all experiments one communication step lasts 5 seconds. For the purpose of the experiment with 3 aMussels, measurements are kept constant until moment $k_{Step}$, when aMussel3 changes its measured value from 100 to 180, changing the system median. This experiment was conducted with the aMussels next to each other, where for each aMussel the measured value is a preprogrammed number that does not correspond to any physical value. The experiments with 5 aMussels were conducted in Jarun lake in Zagreb. Each aMussel was tied to a rope of different length, so each of them was at a different depth (Figure \ref{fig:amussel}). Since the aMussels were placed close to each other at different depths, a significant loss of communication packets was expected. During the experiment, the aMussels measured pressure in hPa, where one hPa corresponds approximately to one cm of depth. The surface pressure is around 1000 hPa.
To enforce a change in the measured values, the aMussel with the longest rope was moved towards the surface in the middle of the experiment, and thus became the aMussel with the smallest depth. The experiment goal was for the aMussels to agree on the median of their depths (which corresponds to their measured pressure). An additional experiment with 5 aMussels was conducted to show the influence of communication loss on the results. As in the previous experiments, the aMussels were tied to ropes at different depths. After they reached consensus, one of them was removed from the water for 5 minutes and then returned to the same depth. After that, two aMussels were removed from the water and left to communicate with each other. After some time, they were returned to the water and placed at different depths. \subsection{Results} The results of the experiments with 3 aMussels (Figure \ref{fig:exp3}) show that the values of individual aMussels converge towards the median value. After the measurements of aMussel3 change, the median value of the group changes, which leads to convergence of the individual aMussel values towards the new median value. The results are slightly oscillatory compared to the simulation, because all values transferred over acoustics are quantized. The experiment was run 4 times with similar results. In the experiment with 5 aMussels, they first reach a consensus which corresponds to the measurement of aMussel27 (Figure \ref{fig:exp5}). After aMussel39 changes its depth, the aMussels manage to reach consensus around the measurement of aMussel18, which represents the new median value. The measured total loss of packets during the experiment was around 15\%, with aMussel39 having a loss of around 30\%. This experiment was run 2 times with similar results. The results of the communication loss experiment are shown in Figure \ref{fig:expcommloss}.
After 600 steps aMussel27 was removed from the water for 5 minutes, and the results show that this short-term communication loss did not influence the system. When aMussel31 and aMussel39 were removed from the water and left to communicate only with each other, they started to agree on a consensus between their own measurements only. After returning them into the water at different depths, all aMussels in the water started converging to the new median value. This experiment was run once. \begin{figure}[h] \centering \includegraphics[width=0.7\linewidth,trim=100 300 110 300, clip]{figures/exp3_v2.pdf} \caption{Experimental results for 3 aMussels} \label{fig:exp3} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.8\linewidth,trim=100 250 110 240, clip]{figures/exp5_jarun.pdf} \caption{Experimental results for 5 aMussels} \label{fig:exp5} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.8\linewidth,trim=100 240 110 260, clip]{figures/exp5_jarun_lost_comm_v3.pdf} \caption{Experimental results for communication loss experiment} \label{fig:expcommloss} \end{figure} \section{Introduction} \IEEEPARstart{O}{ne} of the envisioned goals of the subCULTron project was to develop a marine multi-robot system for intelligent long-term monitoring of underwater ecosystems \cite{subcultron}. The underwater system is comprised of 3 different types of robots (Figure \ref{fig:subCULTron_robots}). Artificial mussels (aMussels) are sensor hubs attached to the sea bottom, which monitor the natural habitat, including biological agents like algae, bacterial incrustation, and fish. They serve as the collective long-term memory of the system, allowing information to persist beyond the runtime of other agents and enabling the system to continue developing from previously learned states \cite{Arbanas20182}, \cite{arbanas2018}.
On the water surface, artificial lily pads (aPads) interface with humans, delivering energy and information influx from ship traffic or satellite data \cite{babic2018}. Between those two layers, artificial fish (aFish) move, monitor and explore the environment and exchange information with aMussels and aPads. \begin{figure} \centering \subfloat{ \includegraphics[height=0.25\linewidth] {figures/amussels_3.pdf}} \subfloat{ \includegraphics[height=0.25\linewidth]{figures/afish_v4.pdf}} \subfloat{ \includegraphics[height=0.25\linewidth,trim=270 200 200 200, clip]{figures/apad_v4.pdf}} \caption{Robots of the subCULTron swarm -- aMussels (left), aFish (middle) and aPad (right)} \label{fig:subCULTron_robots} \end{figure} An underwater swarm consisting of a large number of units (100 aMussels, 10 aFish and 5 aPads) was developed within the project. The ability of such a system to compute (in a decentralized manner) common estimates of unknown quantities (such as measurements), and to agree on a common view of the world, is critical. Consensus protocols (algorithms) are particularly compelling for implementation in multi-agent systems due to their simplicity and wide range of applications. The foundation of consensus protocols in multi-agent systems lies in the field of distributed computing. In networks of agents, \emph{consensus} means to "reach an agreement regarding a certain quantity of interest that depends on the state of all agents" \cite{Olfati2007}. A consensus protocol is a series of rules that define the information exchange between an agent and all of its neighbors on the network, as well as the internal processing of the obtained information by each agent. A good overview of consensus protocols can be found in \cite{Olfati2007}, \cite{Ren2011}, \cite{Qin2017}. Our primary target is the subCULTron system or, more precisely, a network of underwater robots measuring environment parameters such as oxygen or turbidity, with sensors prone to faults/errors (outliers).
Consensus protocols can be exploited to increase the reliability of such a system by i) developing protocols for the detection of faulty agents (such as trust consensus protocols \cite{mazdin2016trust}, \cite{haus2014trust}), or ii) developing consensus protocols that implicitly account for and nullify the influence of outlier values. In this paper we apply the latter approach - the goal is for each agent to reach a consensus on the real value of the measured signal. The most common type of consensus problem - average consensus (\cite{Jadbabaie2003}, \cite{ReanBeard2008}) - is not appropriate for this application, since even one faulty agent with abnormal values might skew the consensus to incorrect values, which do not represent the real value of the measured signal. On the other hand, the authors in \cite{Franceschelli2014} and \cite{Franceschelli2017} use the \textit{median} as the chosen measure, which is inherently robust to outliers, and we build upon their work in this paper. Further, due to the time-varying nature of environment parameters and the requirement for spatial distribution of sensors, measurements are performed by a multi-robot system applying a \textit{dynamic} median consensus protocol, where agents track the median of locally available \textit{time-varying} signals, executing local computations and communicating with neighboring agents only \cite{Kia2019}. A dynamic protocol for the average case is available for both static and varying communication topologies (\cite{Zhu2008}). The work presented in this paper is based upon \cite{Franceschelli2014} and \cite{Franceschelli2017}, where the authors present a dynamic median consensus protocol. In this paper we modify their approach so that the protocol is functional under a scheduled communication scheme, like the one using acoustics for underwater communication. Systems with such a communication scheme are called \emph{switching systems} and their analysis is more complex than that of static systems.
Our previous work \cite{Arbanas20182} and \cite{arbanas2018} deals with consensus in switching systems, but the work presented in this paper reaches the median value rather than the average. Another difference, compared to our previous work, is the addition of a dynamic component to the consensus protocol. To conclude, \textbf{the main contribution} of the paper is the analysis of a novel consensus protocol that is dynamic, works on a switching communication topology and converges to the median value, a combination that (to the best of our knowledge) no other papers study. There have not been many advancements in the area of underwater consensus protocols. As far as we know, the only underwater consensus applications include formation control of tethered and untethered underwater vehicles \cite{Joordens2010, Putranti2016, Mirzaei2016} and tracking of underwater targets using acoustic sensor networks (ASN) \cite{Yan2017}. Both of those approaches have been validated only in a simulation environment, using ideal communication channels. Among other things, this paper contributes to the field by providing the first experimental application of a consensus protocol in an underwater multi-agent system acting as a distributed sensor network. The paper is organized as follows. In the next section, we give preliminary definitions and notation regarding the systems we study. We present the implemented method for dynamic median consensus over scheduled acoustic communication in Section \ref{sec:consensus}. Simulation results and analysis are presented in Section \ref{sec:simulation}, while Section \ref{sec:experimental} describes the robotic platform we used for experiments and shows the achieved results. Finally, we give a conclusion in Section \ref{sec:conclusion}. \section{Simulation results} \label{sec:simulation} \subsection{Convergence Analysis} In this section we present simulation results of the proposed dynamic median consensus protocol.
We conducted simulations for systems with 3, 5 and 31 agents. In the experimental section, we conducted tests with 3 and 5 agents. The more extensive network of 31 agents was chosen to illustrate the algorithm's performance on a larger scale. For each number of agents, two different communication topologies were used, a complete and a chain topology, as given in Figure \ref{fig:sim-E}. These topologies correspond to the best and worst case, since in order to propagate information from agent $i$ to agent $j$ we need only one step for Figure \ref{fig:sim-E1} (best case) and $n$ steps for Figure \ref{fig:sim-E2} (worst case). The graphs in this figure correspond to the overall communication topology, while the topology in each step behaves as given in Subsection \ref{sec:comm}. \begin{figure}[h] \centering \subfloat[Complete topology]{ \includegraphics[width=0.33\linewidth, trim=25 25 25 25, clip]{figures/complete.pdf} \label{fig:sim-E1} } \subfloat[Chain topology]{ \includegraphics[width=0.6\linewidth, trim=25 25 25 25, clip] {figures/chain.pdf} \label{fig:sim-E2} } \caption{Communication topologies used in simulation} \label{fig:sim-E} \end{figure} Adjacency matrices for the above topologies are: \begin{gather*} \mathbf{A^1}= \begin{bmatrix} 1 & 1 & \dots & 1\\ 1 & 1 & \dots & 1\\ \vdots & \vdots & \ddots & \vdots\\ 1 & 1 & \dots & 1\\ \end{bmatrix} \\ \mathbf{A^2}= \begin{bmatrix} 1 & 1 & 0 & 0 & 0 & \dots & 0 & 0\\ 1 & 1 & 1 & 0 & 0 & \dots & 0 & 0\\ 0 & 1 & 1 & 1 & 0 & \dots & 0 & 0\\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\ 0 & 0 & 0 & 0 & 0 & \dots & 1 & 1\\ \end{bmatrix} \end{gather*} Table \ref{tab:parametri} shows the number of agents, the consensus tuning parameters (as given in Eq. (\ref{eq:states})) and the results for six different simulation setups.
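For reference, the overall adjacency matrices $\mathbf{A^1}$ and $\mathbf{A^2}$ above can be generated for any $n$; a small NumPy sketch:

```python
import numpy as np

def complete_adjacency(n: int) -> np.ndarray:
    """A^1: every agent communicates with every other agent (and itself)."""
    return np.ones((n, n), dtype=int)

def chain_adjacency(n: int) -> np.ndarray:
    """A^2: agent i communicates only with its chain neighbours i-1 and i+1."""
    A = np.eye(n, dtype=int)
    idx = np.arange(n - 1)
    A[idx, idx + 1] = 1  # link to the next agent
    A[idx + 1, idx] = 1  # link back (the graph is undirected)
    return A

print(chain_adjacency(4))
```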
The numerical results are described by 3 parameters: $t_s$, the settling time (the number of steps needed for all agents to reach a value within 5\% of the median of the measurements); $t_c$, the convergence time (the number of steps needed for the maximal distance between two agents to fall within 5\% of the median of the measurements); and $\epsilon_{ss}$, the steady-state error (the maximal distance of the agents from the median of the measurements, relative to the median value). The influence of each tuning parameter on the system behaviour is analysed later in this section. \begin{table}[h!] \centering \caption{Tuning parameters and results in each simulation run}\label{tab:parametri} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Parameter & Sim 1 & Sim 2 & Sim 3 & Sim 4 & Sim 5 & Sim 6\\ \hline \hline $n$ & 3 & 3 & 5 & 5 & 31 & 31 \\ \hline $\alpha$ & 9 & 9 & 3 & 3 & 2 & 2\\ \hline $\beta$ & 0.08 & 0.08 & 0.04 & 0.04 & 0.001 & 0.001\\ \hline $\gamma$ & 0.003 & 0.003 & 0.0015 & 0.0015 & 0.0005 & 0.0005\\ \hline $\kappa$ & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1\\ \hline $\mathbf{A}$ & $\mathbf{A^1}$ & $\mathbf{A^2}$ & $\mathbf{A^1}$ & $\mathbf{A^2}$ & $\mathbf{A^1}$ & $\mathbf{A^2}$\\ \hline $t_s$ & 51 & 238 & 110 & 729 & 26856 & 342520 \\ \hline $t_c$ & 53 & 230 & 111& 753 & 11811 & 440231 \\ \hline $\epsilon_{ss}$ & 2.25\% & 2.46\% & 0.39\% & 0.63\%& 0.04\% & 1.6\% \\ \hline \end{tabular} \end{table} We set an initial measured value for each agent, and then make a step change of individual measurements so that the overall median changes stepwise. Results for 3 agents are shown in Figures \ref{fig:n3e1} and \ref{fig:n3e2}. The figures show the responses of the individual agent values $\mathbf{x}$ and their additional internal states $\mathbf{y}$ for the different communication topologies. Results for 5 and 31 agents are given in Figures \ref{fig:n5e1} - \ref{fig:n31e2}.
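Given a simulated trace of agent values, the metrics $t_s$ and $\epsilon_{ss}$ defined above can be computed as in the following sketch (the trace here is a synthetic exponential response, not an output of the protocol):

```python
import numpy as np

def settling_time(x, median_value, tol=0.05):
    """First step after which every agent stays within `tol` (relative)
    of the measurements' median; `x` has shape (steps, n_agents)."""
    err = np.max(np.abs(x - median_value), axis=1) / abs(median_value)
    inside = err <= tol
    for t in range(len(inside)):
        if inside[t:].all():
            return t
    return None

def steady_state_error(x, median_value, tail=10):
    """Maximal relative distance from the median over the last `tail` steps."""
    return np.max(np.abs(x[-tail:] - median_value)) / abs(median_value)

# Synthetic trace: three agents decaying exponentially towards the median 4.0.
steps = np.arange(200)[:, None]
x = 4.0 + np.array([2.0, -1.5, 1.0]) * np.exp(-0.05 * steps)
print(settling_time(x, 4.0), steady_state_error(x, 4.0))
```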
The results show that, after a transient period, the system converges to the correct median value and is able to dynamically track changes in the median. The convergence time is almost seven times shorter for the complete topology than for the chain topology. The stationary values of $\mathbf{y}$ for each agent $i$ are $\pm\frac{\alpha}{r_i}$, as given in Eq. \eqref{eq:states}. \begin{figure}[h] \centering \includegraphics[ width=0.75\linewidth,trim=80 255 80 260, clip]{figures/n3e1_v4.pdf} \caption{Simulation results for 3 agents (complete topology)} \label{fig:n3e1} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.75\linewidth,trim=80 255 80 260, clip] {figures/n3e2_v4.pdf} \caption{Simulation results for 3 agents (chain topology)} \label{fig:n3e2} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=0.75\linewidth,trim=80 255 80 260, clip]{figures/n5e1_v4.pdf} \caption{Simulation results for 5 agents (complete topology)} \label{fig:n5e1} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=0.75\linewidth,trim=80 255 80 260, clip]{figures/n5e2_v4.pdf} \caption{Simulation results for 5 agents (chain topology)} \label{fig:n5e2} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=0.75\linewidth,trim=80 285 80 290, clip]{figures/n31e1_v6.pdf} \caption{Simulation results for 31 agents (complete topology)} \label{fig:n31e1} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=0.75\linewidth,trim=80 285 80 290, clip] {figures/n31e2_v6.pdf} \caption{Simulation results for 31 agents (chain topology)} \label{fig:n31e2} \end{figure} The results also show that the settling (convergence) time increases with the number of agents. The range of values around the median also influences the convergence time: in general, a larger range leads to slower convergence, which is a consequence of the sign term in the local interaction rule.
The method was also tested for the case of lost communication packets. Table \ref{tab:lostcomm} shows the influence of a random loss of transmitted packets on the settling time $t_s$. The results were obtained by simulation for 5 agents with the complete topology. For each percentage of lost packets, 100 simulations were run, and the table presents the mean settling time $t_{savg}$ and its standard deviation $\sigma(t_{s})$, as well as the mean steady-state error $\epsilon_{avg}$ and its standard deviation $\sigma(\epsilon)$. The results show that even with a higher percentage of lost packets the system still converges to the median of the measured values, but the convergence takes longer. The simulations have also shown that lost messages have no influence on the steady-state error. \begin{table}[] \centering \caption{Influence of the random loss of transmitted packets on the settling time, tested over 100 simulations for the case of 5 agents with the complete topology} \begin{tabular}{|c|c|c|c|c|} \hline Lost packets & $t_{savg}$ & $\sigma(t_s)$ & $\epsilon_{avg}$ & $\sigma(\epsilon)$ \\ \hline\hline 0\% & 110 & 0 & 0.43\% & 0\%\\\hline 10\% & 964 & 1942 & 0.39\% & 0.035\%\\\hline 20\% & 1527 & 2627 & 0.38\% & 0.037\%\\\hline 30\% & 1982 & 3303 & 0.38\% & 0.034\%\\\hline 40\% & 2773 & 3831 & 0.37\% & 0.038\%\\\hline 50\% & 4587 & 5508 & 0.38\% & 0.041\%\\\hline \end{tabular} \label{tab:lostcomm} \end{table} The system was further tested with sine-shaped measurements for each agent, representing a more dynamic measurement signal. The results ($x$, $y$), together with the measurements of each of the 5 agents ($z$), are shown in Figure \ref{fig:sin}. After a transient period the system reaches consensus on the median value and is able to continuously track the median of all measurements. The second half of the response shows the case of a faster-changing sinusoidal signal.
The results show that the system is still able to track the median, but with a delay. The presented method works for both an odd and an even number of agents. However, the results are presented only for an odd number of agents, since in this case the median is a single value. For an even number of agents the median is a range of values: the convergence value lies in the interval defined by \eqref{convergence}, and the exact value depends on the protocol parameters and the values of the measured signals. \begin{figure}[h] \centering \includegraphics[width=0.65\linewidth,trim=110 230 110 230, clip] {figures/siunus_v4.pdf} \caption{Simulation results for sine measurements} \label{fig:sin} \end{figure} \subsection{Analysis of tuning parameters} Each of the algorithm tuning parameters in Table \ref{tab:variables} influences the behaviour of the system: \begin{itemize} \item Increasing the parameter $\alpha$ causes faster convergence but, according to Eq. \eqref{eq:uvjet}, increases the area of instability around the median value. \item Increasing the parameter $\beta$ shortens the time needed for the agents to reach a common value (which initially does not have to correspond to the median). \item The parameter $\gamma$ influences the rate of change of the state $\mathbf{y}$, which in turn influences the convergence speed of $\mathbf{x}$. At the same time, increasing it towards $\beta$ increases the instability of the system. \item $\kappa$ is the forgetting factor of the state $\mathbf{y}$: too large a value causes instability of the system, while too small a value makes the system converge to the wrong value.
\end{itemize} The influence of the parameters on the system response is shown in Figure \ref{fig:tuningparams}, which contains two graphs for each parameter change: i) the initial part of the response, which shows how the agents converge towards the median value, and ii) the steady-state response, which shows the influence of the parameter on the steady state. Numerical results are shown in Table \ref{tab:tuning}. These results were obtained with 5 agents, using the tuning parameters of Sim 4 from Table \ref{tab:parametri}, where only one parameter at a time changed its value. \begin{table} \centering \caption{Influence of the tuning parameters on the responses} \begin{tabular}{|c|c|c|c|} \hline Parameter & $t_c$ & $t_s$ & $\epsilon_{ss}$ \\ \hline \hline $\alpha=1$ & 1082 & 949 & 0.36\%\\ \hline $\alpha=3$ & 753 & 729 & 1.12\%\\ \hline $\beta=0.01$ & 2659 & 1669 & 1.07\%\\ \hline $\beta=0.08$ & 660 & 463 & 1.12\%\\ \hline $\gamma=0.0015$ & 753 & 729 & 1.12\%\\ \hline $\gamma=0.01$ & 1415 & 1044 & 1.12\%\\ \hline $\kappa=0.02$ & 1047 & 679 & 0.69\%\\ \hline $\kappa=0.4$ & 1639 & 794 & 4.12\%\\ \hline \end{tabular} \label{tab:tuning} \end{table} \begin{figure}[h] \centering \includegraphics[width=0.49\linewidth,trim=50 230 50 250, clip] {figures/alfa_v3.pdf} \includegraphics[width=0.49\linewidth,trim=50 230 50 250, clip] {figures/beta_v3.pdf} \includegraphics[width=0.49\linewidth,trim=50 230 50 250, clip] {figures/gama_v3.pdf} \includegraphics[width=0.49\linewidth,trim=50 230 50 250, clip] {figures/kapa_v3.pdf} \caption{Influence of the tuning parameters on the response} \label{fig:tuningparams} \end{figure}
\section{Introduction} \IEEEPARstart{C}{ollision}-free trajectory planning plays an essential role in missions performed by a swarm of robots in a shared environment, such as cooperative inspection and transportation \cite{Chung2018,Zhou2022}. Currently, optimization-based methods, such as model predictive control \cite{Luis2019} and sequential convex programming \cite{Augugliaro2012}, are widely employed to handle collision avoidance by introducing different kinds of constraints. However, the constrained optimization problem may suffer from infeasibility, leading to the failure of replanning, and such a phenomenon occurs more frequently in a crowded environment. Furthermore, in obstacle-dense situations, the robots in the swarm are more likely to get stuck with each other, a phenomenon also known as deadlock \cite{Alonso2018}. Concerning the multi-robot trajectory planning problem in an obstacle-dense environment, we propose a novel method to ensure optimization feasibility and handle the deadlock problem simultaneously. In this work, the modified buffered Voronoi cell with warning band (MBVC-WB) \cite{Chen2022} is utilized to deal with inter-robot collision avoidance, and the adaptive right-hand rule is brought in for deadlock resolution. Furthermore, in order to avoid obstacles in the environment, a safe corridor is generated to provide a feasible space for trajectory replanning. Specifically, we first adopt a sampling-based path planning method, ABIT$^*$ \cite{Strub2020}, to determine an approximate path in the complex environment. Then, separating hyperplanes are constructed between the imminent obstacles and the existing planned trajectory based on a quadratic program. The main contributions of this work are summarized as follows.
\begin{itemize} \item[$\bullet$] A novel safe corridor constituted by a sequence of polyhedra is proposed for obstacle avoidance, and it is formed via an online method, different from the offline ones in \cite{Toumieh2022,Gao2020}. In contrast to the rectangular safe corridors in \cite{Park2020,Park2022,Li2021}, the presented corridor provides a more reasonable planning space by considering the motion tendency of the robots and the distribution of obstacles in the environment. \item[$\bullet$] Different from \cite{Chen2022}, where deadlock resolution is performed in free space, this work considers an obstacle-dense environment, which is evidently more challenging. In addition, the penalty term related to the warning band is replaced by a quadratic one, which considerably reduces the computation time. \item[$\bullet$] Comparisons with state-of-the-art results \cite{Park2022,Zhou2021,Jesus2021} are made in several cluttered scenarios, illustrating that the proposed method has a better performance in terms of guaranteeing collision avoidance as well as handling the deadlock problem. \item[$\bullet$] Hardware experiments are executed to validate the method in real-world cases, including eight Crazyflies passing through a 3D framework (Fig.~\ref{8-cube-framework}), four Crazyflies transiting a polygonal environment, and six and eight robots going through ``H''- and ``n''-shaped narrow passages. \end{itemize} \begin{figure}[t!] \centering \includegraphics[width=0.75\linewidth]{figs/8-cube-framework} \caption{Eight nano-quadrotors fly through a cubic framework.} \label{8-cube-framework} \end{figure} \section{Related Work} \subsection{Optimization-based Trajectory Planning} In optimization-based multi-robot planners, trajectory planning is formulated as a numerical optimization problem, where inter-robot collision avoidance is leveraged by adding convex \cite{Luis2019,Augugliaro2012} or non-convex \cite{Jesus2021,Kamel2017} constraints.
Nonetheless, most of the existing methods encounter the challenge that the optimization may be infeasible under these constraints. To overcome this drawback, the method in \cite{Zhou2021} adopts soft constraints instead of hard ones, which, however, means that the safety of the planned trajectory cannot be ensured. Other methods \cite{Park2022} solve the feasibility problem by using a relative safe corridor to guarantee feasibility. Unfortunately, they compute the trajectories sequentially instead of in a concurrent way, e.g., \cite{Luis2019,Zhou2017}. This indicates that a robot would waste a large amount of time waiting for the others to replan. Besides optimization feasibility, another problem in trajectory planning is deadlock, which refers to the fact that robots get trapped by each other during collision avoidance \cite{Alonso2018,Park2022,Wang2017}. A common solution is the right-hand rule \cite{Zhou2017} based on an artificial perturbation \cite{Toumieh2022}, but then inter-robot collision avoidance cannot be ensured. Our previous work \cite{Chen2022} performs well in deadlock resolution via an adaptive right-hand rule, but it is only applicable in obstacle-free space. In conclusion, feasibility guarantees and deadlock resolution in an obstacle-dense environment remain an open problem for multi-robot trajectory planning. \subsection{Safe corridor} An early work related to the safe corridor is \cite{Deits2015}, which generates the corridor through semi-definite programming. The work \cite{Liu2017} produces the safe corridor in two steps, namely, sampling-based path planning and geometry-based corridor construction, and achieves high-speed replanning for quadrotors. Unfortunately, the constraints introduced by the corridor cannot always be satisfied, which implies the optimization may be infeasible, and the same problem can also be observed in \cite{Toumieh2022,Senbaslar2021}.
In addition, the authors in \cite{Park2020,Park2022,Li2021} propose a construction method by expanding rectangular corridors in a grid map. Although this method is computationally efficient, the constructed corridors are restricted to a rectangular shape, which may have a lower space efficiency. The method in \cite{Honig2018} generates a safe corridor by using support vector machines, which has a higher utilization rate of space but is centralized and offline. Another way to construct a safe corridor is voxel expansion \cite{Toumieh2022,Gao2020}, where the corridor is constructed based on a grid map. However, the construction there is achieved offline as well, and cannot handle dynamic missions such as changing targets. \section{Problem Statement} \label{section problem statement} This section formulates the optimization-based trajectory planning problem in a cluttered environment with dimension $d \in \{2,3\}$. The goal is to drive $N$ robots from their initial positions to their respective destinations in an environment with obstacles. During this period, a robot cannot collide with \emph{any} other robot or \emph{any} obstacle. Although every robot can only determine its own control input, the information of the others can be obtained via wireless communication. The trajectory is replanned and executed at every sampling time step, and the replanning is formulated as a numerical optimization with finitely many variables. \subsection{Trajectory Representation} Let $h$ denote the sampling time step. At the time step $t$ of replanning, the planned trajectory for robot $i$ is defined as $\mathcal{P}^i(t)=[p^i_1(t),p^i_2(t),\ldots,p^i_K(t)]$ where $p^i_k(t)$, $k\in \mathcal{K} := \{1,2,\cdots,K\}$, is the planned position at time $t + k h$ and $K$ is the length of the horizon. Similarly, let $v^i_k(t)$, $k \in \mathcal{K}$, denote the velocity at time $t + kh$ and let $u^i_k(t)$, $k \in \{0,1,\cdots,K-1\}$, denote the control input.
The dynamics of the robot are formulated as \begin{equation} \label{eq: dynamic constraint} x_{k}^{i}(t)=\mathbf{A} x_{k-1}^{i}(t)+\mathbf{B} u_{k-1}^{i}(t), \; k \in \mathcal{K}, \end{equation} where $x^i_k(t)=[p^i_k(t),v^i_k(t)]$ is the planned state at time $t+kh$, $x^i_0(t)=x^i(t)$ and \begin{equation} \label{def: A B} \mathbf{A} = \left[ \begin{array}{ccc} \mathbf{I}_d & h \mathbf{I}_d \\ \mathbf{0}_d & \mathbf{I}_d \\ \end{array} \right] , \quad \mathbf{B} = \left[ \begin{array}{ccc} \frac{h^2}{2} \mathbf{I}_d \\ h \mathbf{I}_d \end{array} \right]. \end{equation} Additionally, the velocity and input constraints are given as \begin{equation} \label{eq: input-constraint} \| \Theta_a u^i_{k-1}(t) \|_2 \le a_{\text{max}} ,\ k \in \mathcal{K}, \end{equation} \begin{equation} \label{eq: state-constraint} \| \Theta_v v_k^i \|_2 \le v_{\text{max}}, \ k \in \mathcal{K}, \end{equation} where $\Theta_v, \Theta_a$ are positive-definite matrices, and $v_{\text{max}}, a_{\text{max}} $ denote the maximum velocity and acceleration, respectively. Assume that once the planned trajectory is updated, the lower-level feedback controller can perfectly track it in the time interval $\left[ t, t+h \right]$. As a result, when replanning at time $t+h$, the current state of the robot satisfies $x^i(t+h)=x^i_1(t)$. Existing tracking controllers, e.g., \cite{Mellinger2011}, can fulfill this assumption; one such controller is adopted in our hardware experiments. In cooperative navigation, the information of different robots is exchanged by wireless communication. Moreover, it is assumed that a modified version of the previously planned trajectory is shared, called the predetermined trajectory and defined as $\overline{P}^i(t)=[\overline{p}_{1}^{i}(t),\overline{p}_{2}^{i}(t),\ldots,\overline{p}_{K}^{i}(t)]$, where $\overline{p}_{k}^{i}(t) = p^i_{k+1}(t-h)$, $k \in \tilde{\mathcal{K}} := \{1,2,\ldots,K-1\}$, and $\overline{p}^i_{K}(t)=p^i_{K}(t-h)$.
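The double-integrator model of Eqs.~\eqref{eq: dynamic constraint} and \eqref{def: A B} is straightforward to simulate; the sketch below rolls a state forward through a horizon (the step $h$ and the inputs are arbitrary illustrative values):

```python
import numpy as np

h = 0.1  # sampling time step (illustrative value)
d = 2    # planar case

# System matrices A and B of the double integrator.
I, Z = np.eye(d), np.zeros((d, d))
A = np.block([[I, h * I], [Z, I]])
B = np.vstack([h**2 / 2 * I, h * I])

def rollout(x0, inputs):
    """Propagate the state x = [p, v] through the horizon, one input per step."""
    xs = [x0]
    for u in inputs:
        xs.append(A @ xs[-1] + B @ u)
    return np.array(xs)

x0 = np.zeros(2 * d)            # at rest at the origin
U = [np.array([1.0, 0.0])] * 5  # constant unit acceleration along x
traj = rollout(x0, U)
print(traj[-1])  # final state: p = (0.125, 0), v = (0.5, 0)
```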
\subsection{Collision Avoidance} \subsubsection{Inter-robot Collision Avoidance} To avoid collisions among robots, the minimum distance allowed between any pair of robots is set to $r_{\text{min}}>0$, implying that a collision happens when $ \| p^i - p^j \|_2 \leq r_{\text{min}}$ \footnote{For the sake of simplicity, the time index $t$ will be omitted whenever ambiguity is not caused. For example, $p^i(t)$ will be rewritten as $p^i$. }. Moreover, the replanned trajectories of robots $i$ and $j$ are collision-free if the distance between the two robots is larger than $r_\text{min}$ not only at the sampling times $t+kh$, $ k \in \mathcal{K}$, but also during the intervals between them. \subsubsection{Obstacle Avoidance} Let $\mathcal{O}$ denote the collection of obstacles. Then obstacle avoidance requires that the circle or sphere occupied by a robot in the configuration space does not have contact with any obstacle, i.e., $ \left(p^i \oplus r_{a} \mathbf{I}_d \right) \cap \mathcal{O} = \emptyset $, where $\oplus$ is the Minkowski sum and $r_a$ is the radius of the agents. Furthermore, a planned trajectory $\mathcal{P}$ is collision-free if its position is collision-free not only at the sampling times but also during the intervals between them. Similar to \cite{Senbaslar2021,Ma2016}, we assume that the obstacles are convex-shaped. \section{Trajectory Planning Method} The trajectory planning method is provided in this section. We first deal with inter-robot collision avoidance and deadlock resolution, and then with obstacle avoidance. Subsequently, the complete trajectory planning method is summarized, followed by the proof of the feasibility guarantee of the proposed method. \subsection{Inter-robot Collision Avoidance} For inter-robot collision avoidance, we introduce the modified buffered Voronoi cell with warning band (MBVC-WB)~\cite{Chen2022}, as depicted in Fig.~\ref{MBVC-WB}.
Define the following parameters: \begin{equation} \label{a b c} a_{k}^{i j}=\frac{ \overline{p}_{k}^{i}-\overline{p}_{k}^{j} } { \|\overline{p}_{k}^{i}-\overline{p}_{k}^{j}\|_{2} },\quad b_{k}^{i j}=a_{k}^{i j^{T}} \frac{\overline{p}_{k}^{i} + \overline{p}_{k}^{j}}{2}+\frac{r_{\min }^{\prime}}{2}, \end{equation} where $ r^{\prime}_{\text{min}} = \sqrt{r_{\text{min} }^{2}+h^{2} v_{\text{max} }^{2}} $ denotes the extended minimum distance. Then, the MBVC-WB can be formulated by \begin{subequations} \label{eq: inter-constraint} \begin{align} &{a_{k}^{i j}}^{T} p_{k}^{i} \geq b_{k}^{i j}, \ \forall j\neq i, k \in \tilde{\mathcal{K}}, \label{eq: a p > b 1} \\ &{a_{K}^{i j}}^{T} p_{K}^{i} \geq b_{K}^{i j} + w^{i j}, \forall j \neq i \label{eq: a p > b 2}, \end{align} \end{subequations} where $w^{i j}$ is an additional variable added to the optimization and satisfies \begin{equation} \label{eq: w-constraint} 0 \leq w^{i j} \leq \epsilon. \end{equation} In (\ref{eq: w-constraint}), $\epsilon$ is referred to as the maximum width of the warning band. The additional penalty term added to the cost function is given by \begin{equation} \label{eq: C^i_w} C_{w}^{i}=\sum_{j \neq i} \frac{1}{\epsilon \gamma^{i j}} \rho_{i j} {w^{i j}}^2, \end{equation} where $\gamma^{i j}(t)=(1-\beta) \gamma^{i j}(t-h) + \beta w^{i j} (t-h)$, $\beta \in (0,1)$ and $\gamma^{i j}(t_{0})~=~\epsilon$. Notably, the modified function \eqref{eq: C^i_w} is a quadratic term and is computationally efficient in the sense that $\gamma^{i j}(t)$, which depends on the value of $w^{i j}$ at the last time step $t-h$, is utilized to adjust the weight of this penalty. Additionally, $\rho_{i j}$ is an important coefficient designed to execute the adaptive right-hand rule and will be presented later. \subsection{Deadlock Resolution} To deal with the possible deadlock problem in inter-robot collision avoidance, we propose a detection-resolution mechanism.
For deadlock detection, the notion of terminal overlap is introduced as follows: if $p^i_K(t) \ne p^i_\text{target}$, $p^i_K(t)=p^i_K(t-h)$, and $p^i_K(t)=p^i_{K-1}(t)$ hold, we say that a terminal overlap happens, denoted by $b^i_\text{TO}=True$. Regarding deadlock resolution, the adaptive right-hand rule is carried out by adjusting $\rho^{i j}$ as follows \begin{equation}\label{eq: rho^i j} \rho^{i j} = \rho_0\, e^{ \eta^i(t) \, \sin \theta^{i j} }, \end{equation} where \begin{equation}\label{eq: eta^i} \eta^i(t) = \left\{ \begin{array} {rll} &\eta^i(t-h) + \Delta \eta &\mbox{if}\; b^i_\text{TO} = True, \\ &0 &\mbox{if}\; \ w^{i j} = 0, \forall j \ne i,\\ &\eta^i(t-h) &\text{else}. \end{array} \right. \end{equation} In (\ref{eq: rho^i j}) and (\ref{eq: eta^i}), $\rho_{0} > 0$ and $\Delta \eta >0$ are coefficients; the initial condition of $\eta^i$ is $\eta^i(t_{0})=0$; the parameter $\theta^{i j}$ is defined as the angle in the $x$-$y$ plane between the projections of the vectors from $p^i_K$ to $p^i_{\text{target}}$ and from $p^i_K$ to $p^j_K$. \begin{figure}[t!] \centering \includegraphics[width=0.85\linewidth]{figs/MBVC-WB} \caption{Illustration of the MBVC-WB, where the green area is the feasible space and the orange one is the warning band. The shared space is split at each horizon (\textbf{Left}). In particular, for the terminal horizon, a warning band is added (\textbf{Right}).} \label{MBVC-WB} \end{figure} \subsection{Obstacle Avoidance} \label{subsection: obstacle collision avoidance} Obstacle avoidance is realized by restricting the planned trajectory to a safe corridor. The corridor is constituted by a sequence of convex polyhedra whose edges separate the planned positions $p^i_k$ from the inflated obstacles $\tilde{\mathcal{O}}$, i.e., the obstacles inflated by $r_\text{a}$ (the blue polygons around the green obstacles in Fig.~\ref{corridor}). In our method, the corridor is formed upon the predetermined trajectory.
As an example in Fig.~\ref{get_corridor}, three intersecting convex polyhedra make up a corridor. Based on the safe corridor, the obstacle avoidance constraint can be written as \begin{equation} \label{eq: obstacle-constraint} {a_{k}^{i,o}}^{T} p_{k}^{i} \geq b_{k}^{i,o}, \end{equation} where $a_{k}^{i,o}$ and $b_{k}^{i,o}$ constitute the edge of the corridor for robot~$i$ at horizon $k \in \mathcal{K}$. In the following subsections, the method of constructing these planes will be clarified in detail. \subsubsection{Path Planning} To begin with, a path is required to indicate an approximate direction for a robot to its destination. This path is a collision-free polyline connecting the terminal position of the predetermined trajectory, $\overline{p}^i_K$, and the target, $p^i_{\text{target}}$, as shown in Fig.~\ref{get_path}. RRT$^*$-based methods can find such a feasible path; among them, Advanced Batch Informed Trees (ABIT$^*$) \cite{Strub2020} achieves a higher path quality. Thus, ABIT$^*$ is utilized to find the path, and the length of the path is chosen as the objective to be minimized. Once the path is found, a point $p^i_{\text{tractive}}$, called the tractive point, is chosen on this path. It is determined as the closest point on the path to the target such that the line segment between this point and $\overline{p}^i_K$ is collision-free. A demo is presented in Fig.~\ref{get_segment_list}. Then, if a terminal overlap does not happen, i.e., $b^i_\text{TO} = False$, we add the tractive point to the end of the predetermined trajectory to form the extended predetermined trajectory (EPT). In practice, we find that in most situations a tractive point can be obtained from the previously planned path; in other words, the path does not need to be updated at every replanning. Thus, path planning is triggered only when needed, e.g., when the target changes or no tractive point can be found.
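The tractive-point selection described above can be sketched as follows, using circular obstacles and a sampled line-of-sight test as simplifying stand-ins for the convex obstacles and an exact collision check:

```python
import numpy as np

def segment_clear(p, q, obstacles, n_samples=50):
    """Sampled check that the segment p-q avoids all circular obstacles,
    each given as a (center, radius) pair."""
    for s in np.linspace(0.0, 1.0, n_samples):
        pt = (1 - s) * p + s * q
        if any(np.linalg.norm(pt - c) <= r for c, r in obstacles):
            return False
    return True

def tractive_point(path, p_end, obstacles):
    """Walk the path from the target backwards; return the point closest
    to the target that has a collision-free segment to p_end."""
    for pt in reversed(path):
        if segment_clear(p_end, pt, obstacles):
            return pt
    return None

# Path from the robot (0,0) to the target (3,0), detouring over an obstacle.
path = [np.array([0.0, 0.0]), np.array([1.0, 1.0]),
        np.array([2.0, 1.0]), np.array([3.0, 0.0])]
obstacles = [(np.array([1.5, 0.0]), 0.4)]
p_end = np.array([0.0, 0.0])  # stands in for the terminal position of the trajectory
print(tractive_point(path, p_end, obstacles))  # (2, 1): the target itself is occluded
```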
\subsubsection{Segment Division} After obtaining the EPT, its points are divided into several segments so as to decrease the computational requirements, as shown in Fig.~\ref{get_segment_list}. First, the end point of the EPT is chosen as the start of the first segment. Then, proceeding from the second-to-last point of the EPT towards its beginning, points are added to the current segment one by one until the convex hull of the contained points is no longer collision-free. Then, a new segment begins from the end point of the last one. This process is repeated until the beginning point $\overline{p}^i_1$ is added to a segment. \begin{figure}[t!] \centering \subfigure[Path planning.]{ \includegraphics[width=0.45\linewidth]{figs/get_path} \label{get_path} } \subfigure[The process of choosing a tractive point and splitting the segments $S1$, $S2$, $S3$.]{ \includegraphics[width=0.45\linewidth]{figs/get_segment_list} \label{get_segment_list} } \quad \subfigure[Form the separating hyperplane via optimization.]{ \includegraphics[width=0.45\linewidth]{figs/get_separating_plane} \label{get_separating_plane} } \subfigure[For a given predetermined trajectory, form a safe corridor to provide a feasible space for replanning.]{ \includegraphics[width=0.45\linewidth ]{figs/get_corridor} \label{get_corridor} } \caption{Process of forming a safe corridor.} \label{corridor} \end{figure} \begin{algorithm} \label{AL: GetCorridor} \caption{\text{GetCorridor()}} \SetKwInOut{Input}{input}\SetKwInOut{Output}{output} \Input{ $\mathcal{O}$,\ $\overline{\mathcal{P}}^i(t)$ } \Output{ $ObCons$ } \eIf{ Need Path Planning }{ $Path^i(t) \leftarrow ABIT^{*} (\overline{p}^i_K,p^i_{\text{target}},\mathcal{O})$\; }{ $Path^i(t) \leftarrow Path^i(t-h)$\; } $ObCons \leftarrow \emptyset$\; $Segmentlist \leftarrow \text{SegmentDivision} (\mathcal{O},\overline{\mathcal{P}}^i(t),Path^i(t))$\; \For{$Segment \ \in \ 
Segmentlist$}{ $Corridor \leftarrow \text{GetCorridor} (Segment,\mathcal{O})$\; $ObCons \leftarrow ObCons \cup \text{AddConstraints} (Corridor)$\; } \end{algorithm} \subsubsection{Separating Plane} \label{susbsubsection get separating plane} After dividing the points of the EPT into several segments, we construct separating planes between each segment and the obstacles. Since the convex hull formed by the points in each segment is obstacle-free and the obstacles are convex, a separating plane exists according to the separating hyperplane theorem \cite{Boyd2004}. Then, as shown in Fig.~\ref{get_separating_plane}, it can be found by the following optimization: \begin{equation} \label{QP1} \begin{aligned} & \max _{ a, b , \gamma } \; \gamma , \\ {\rm s.t.},\ & { a }^T p^{\text{free}} \geq \gamma + b, \\ & { a}^T p^{\text{obstacle}} \leq b , \\ & {\left \| a \right \|}_{2} = 1 , \\ & \gamma \geq 0, \end{aligned} \end{equation} where $a$ and $b$ determine the separating plane; $p^{\text{free}}$ and $p^{\text{obstacle}}$ denote the points of the segments and obstacles, respectively; $\gamma$ is the margin variable. This optimization can be further transformed into the following quadratic program (QP): \begin{equation} \label{QP2} \begin{aligned} & \min _{ a^{\prime}, b^{\prime} } \ \left\| a^{\prime} \right\|_2^2 \\ {\rm s.t.},\ & { a^{\prime} }^T p^{\text{free}} \geq 1 + b^{\prime}, \\ & { a^{\prime} }^T p^{\text{obstacle}} \leq b^{\prime}, \\ \end{aligned} \end{equation} where $a^{\prime}=\frac{a}{\gamma}$ and $b^{\prime}=\frac{b}{\gamma}$. By solving the QP \eqref{QP2}, the separating plane is obtained as $a=\frac{a^{\prime}}{\|a^{\prime}\|_2}$ and $b=\frac{b^{\prime}}{\|a^{\prime}\|_2}$, which forms one edge of a convex polyhedron. Moreover, $a_{k}^{i,o}$ and $b_{k}^{i,o}$ are chosen as $a$ and $b$, respectively, to formulate the constraints \eqref{eq: obstacle-constraint}.
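A minimal numerical sketch of QP~\eqref{QP2} (a hedged illustration using scipy's SLSQP in place of a dedicated QP solver; the function name and interface are ours, not from the released implementation):

```python
import numpy as np
from scipy.optimize import minimize

def separating_plane(free_pts, obs_pts):
    """Solve min ||a'||^2  s.t.  a'^T p >= 1 + b' (segment points) and
    a'^T q <= b' (obstacle points), then rescale to a unit normal,
    returning the plane a^T x = b with ||a||_2 = 1."""
    free_pts = np.asarray(free_pts, float)
    obs_pts = np.asarray(obs_pts, float)
    d = free_pts.shape[1]
    # decision vector z = [a', b']
    cons = [{'type': 'ineq', 'fun': (lambda z, p=p: z[:d] @ p - z[d] - 1.0)}
            for p in free_pts]
    cons += [{'type': 'ineq', 'fun': (lambda z, q=q: z[d] - z[:d] @ q)}
             for q in obs_pts]
    res = minimize(lambda z: z[:d] @ z[:d], np.zeros(d + 1),
                   method='SLSQP', constraints=cons)
    a, b = res.x[:d], res.x[d]
    norm = np.linalg.norm(a)
    return a / norm, b / norm
```

For example, separating the segment points $\{(2,0),(3,1)\}$ from the obstacle points $\{(0,0),(-1,1)\}$ yields a plane with normal close to $(1,0)$, with all segment points on the positive side.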
In implementation, a separating plane between a segment and an obstacle is unnecessary when the obstacle lies outside the planning region. We therefore simplify the planning process as follows. Specifically, separating planes are computed from the nearest obstacle to the farthest one. For each obstacle, a separating plane is built only if the obstacle intersects the currently formed convex polyhedron; otherwise, the obstacle is omitted. In addition, if the distance between the obstacle and the robot is larger than $K h v_{\text{max}}$, the obstacle is not considered either. \subsubsection{Safe Corridor Construction} Altogether, the algorithm for forming a safe corridor is summarized in Alg.~\ref{AL: GetCorridor}, whose detailed steps have been illustrated in the previous sub-subsections. This safe corridor generation method has some critical properties, which are summarized as follows. \begin{lemma} \label{lemma-1} The safe corridor formulation method provided in Alg.~\ref{AL: GetCorridor} has the following three properties: \begin{enumerate} \item If the predetermined trajectory $\overline{\mathcal{P}}$ is obstacle-free, a safe corridor can be generated. \item The underlying predetermined trajectory $\overline{\mathcal{P}}$ satisfies the formulated constraints \eqref{eq: obstacle-constraint}, i.e., ${a_{k}^{i,o}}^{T} \overline{p}_{k}^{i} \geq b_{k}^{i,o}$, $k \in \mathcal{K}$. \item If the constraints \eqref{eq: obstacle-constraint} formulated by the safe corridor are satisfied, the planned trajectory $\mathcal{P}$ is obstacle-free. \end{enumerate} \end{lemma} \begin{proof} 1) Based on the above-mentioned method, a segment division can be found if the predetermined trajectory is obstacle-free. This is because the line between the tractive point and $\overline{p}^i_K$ is collision-free, which yields a collision-free EPT.
In the worst (most conservative) case, each segment is formed by only two adjacent points of the EPT. Additionally, since separating planes exist between the points of each segment and every obstacle, the convex polyhedra can be formed. Consequently, the safe corridor can be constituted by a sequence of polyhedra. 2) Since the constraints \eqref{eq: obstacle-constraint} are obtained from the optimization \eqref{QP1} with $p^{\text{free}}$ chosen as the positions from the predetermined trajectory, it is clear that the predetermined trajectory $\overline{\mathcal{P}}$ satisfies these constraints. 3) As aforementioned, for a collision-free predetermined trajectory, a segment division can be found. Then, due to the segment division rule, adjacent points of the predetermined trajectory, e.g., $\overline{p}^i_k$ and $\overline{p}^i_{k+1}$, must be contained in a common segment. Thus, if the constraints are enforced, the corresponding planned points $p^i_k$ and $p^i_{k+1}$ must be restricted to a common convex polyhedron, and the line segment between them must be obstacle-free. Moreover, a completed segment division includes all points of the predetermined trajectory, since the division terminates only after the points have been processed sequentially up to the last one. Thus, all line segments of the trajectory are covered by a collision-free safe corridor. \end{proof} \subsection{Trajectory Planning Algorithm} In the previous subsections, inter-robot and robot-obstacle collision avoidance were formulated as constraints of the optimization problem. Furthermore, another constraint is introduced to ensure the feasibility of the underlying optimization, that is, \begin{equation} \label{eq: terminal-constraint} v^i_K=\mathbf{0}_d. \end{equation} Moreover, we enforce $x^i_{k}=x^i_{K}$ and $u^i_k=\mathbf{0}_d$ for $k>K$.
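The segment-division rule invoked in part 3) of the proof can be sketched as a greedy loop (an illustrative sketch, not the implementation; \texttt{hull\_free} stands in for the convex-hull collision check, and adjacent point pairs are assumed to pass it, matching the worst case above):

```python
def divide_segments(points, hull_free):
    """Greedily split the EPT points (ordered start -> end) into segments
    whose point sets pass the hull_free check; consecutive segments share
    one point, so adjacent EPT points always lie in a common segment."""
    segments = []
    i = len(points) - 1            # begin at the end point of the EPT
    current = [points[i]]
    while i > 0:
        i -= 1
        candidate = current + [points[i]]
        if hull_free(candidate):
            current = candidate
        else:                      # start a new segment at the last end point
            segments.append(current)
            current = [points[i + 1], points[i]]
    segments.append(current)
    return segments
```

Because every returned segment begins at the end point of the previous one, each pair of adjacent EPT points is covered by a common segment, which is exactly the property used in the proof.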
In addition to the constraints, the proposed cost function for robot~$i$ is given by \begin{equation} \label{eq: C^i} C^i = C^i_p + C^i_w, \end{equation} where $C^i_w$ is provided in \eqref{eq: C^i_w} and $C^i_p$ is given by \begin{equation} \label{eq: C^i_p} C^i_p = \frac{1}{2} Q_K \|p_{K}^{i}-p_{\text {tractive}}^{i}\|_2^2 + \frac{1}{2} \sum_{k=1}^{K-1}Q_{k}\|p_{k+1}^{i}-p_{k}^{i}\|_2^2. \end{equation} Note that $C^i_p$ is employed to drive the robots to the current tractive point, and $Q_k$, $k \in \mathcal{K}$ in \eqref{eq: C^i_p} are the weight parameters. Therefore, the optimization can be reformulated as follows. \begin{subequations} \label{eq:final-mpc} \begin{align} &\min _{\mathrm{u}^{i}, x^{i}, w^{i j} } C^{i} \notag \\ \text { s.t. } \eqref{eq: dynamic constraint},\eqref{eq: input-constraint}, &\eqref{eq: state-constraint}, \eqref{eq: inter-constraint}, \eqref{eq: w-constraint}, \eqref{eq: obstacle-constraint}, \eqref{eq: terminal-constraint} \notag. \end{align} \end{subequations} Based on this optimization, the proposed trajectory planning method is summarized in Algorithm~\ref{AL:IMPC-OB}. The inputs are the initial position $p^i(t_0)$, the target position $p^i_\text{target}$ and the obstacles $\mathcal{O}$. To begin with, the predetermined trajectory is initialized as $\overline{\mathcal{P}}^i(t_0)=[p^{i}(t_0), \ldots, p^{i}(t_0)]$ and $b^i_\text{TO}$ is set to False. After this initialization, in the main loop, each robot runs its algorithm in parallel (Line~\ref{algline:impc-each-robot}). At the beginning, the predetermined trajectory is communicated among the robots (Line~\ref{algline:impc-commu}), followed by the construction of the inter-robot and robot-obstacle collision-avoidance constraints (Lines~\ref{algline:impc-inter-cons}-\ref{algline:impc-obstacle-cons}). After obtaining the current state, the optimization \eqref{eq:final-mpc} is formulated and solved (Line~\ref{algline:convex-programming}).
Afterwards, deadlock detection is performed (Line~\ref{algline:get-boolean}) based on the optimization result, and the predetermined trajectory for the next step is derived (Line~\ref{algline:get-PT}). Finally, the planned trajectory is executed (Line~\ref{algline:impc-execute}). \begin{algorithm}[t] \label{AL:IMPC-OB} \caption{The Complete Algorithm}\label{algorithm} \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{$p^i(t_0)$, $p^i_\text{target}$, $\mathcal{O}$} $\overline{\mathcal{P}}^i(t_0) \leftarrow \text{InitialPredTraj}(p^i(t_0))$ \label{algline:impc-init}\; $b^i_\text{TO} \leftarrow \textbf{False}$\; \While{not all robots at target \label{algline:impc-while}} { \For{$i\in \mathcal{N}$ \label{algline:impc-each-robot} concurrently }{ $\overline{\mathcal{P}}^j (t) \leftarrow \text{Communicate}(\overline{\mathcal{P}}^i(t))$ \label{algline:impc-commu} \; $cons^i \leftarrow \text{GetInterCons}(\overline{\mathcal{P}}^i(t), \overline{\mathcal{P}}^j (t))$\label{algline:impc-inter-cons}\; $cons^i \leftarrow cons^i \cup \text{GetCorridor}(\mathcal{O},\overline{\mathcal{P}}^i(t))$ \label{algline:impc-obstacle-cons}\; $x^i(t) \leftarrow \text{GetCurrentState}()$\label{algline:impc-current}\; $\mathcal{P}^i(t), w^{i j} \leftarrow \text{Optimization}(cons^i,x^i(t))$ \label{algline:convex-programming}\; $b^i_\text{TO} \leftarrow \text{DeadlockDetection}(\mathcal{P}^i(t), w^{i j})$ \label{algline:get-boolean}\; $\overline{\mathcal{P}}^i(t+h) \leftarrow \text{GetPredTraj}( \mathcal{P}^i(t) )$ \label{algline:get-PT} \; $\text{ExecuteTrajectory}(\mathcal{P}^i(t)) $\label{algline:impc-execute}\; } $t \leftarrow t+h$\; } \end{algorithm} \subsection{Feasibility Guarantee} Different from most existing optimization-based methods, the proposed planner guarantees the feasibility of the optimization problem, as established in the following theorem.
\begin{theorem} \label{recursive feeasible} If the initial positions of all robots are collision-free, the robots will never collide with each other or with any obstacle. \end{theorem} \begin{proof} At the initial time $t_0$, the predetermined trajectory is initialized as $\overline{\mathcal{P}}^i(t_0)=[ p^{i}(t_0), \ldots, p^{i}(t_0) ]$. Choosing it as the planned trajectory, i.e., $p^i_k(t_0)=\overline{p}^i_k(t_0)$, this planned trajectory naturally satisfies all constraints listed in optimization \eqref{eq:final-mpc}. Thus, at time $t_0$, the optimization is feasible for all robots. Thereafter, we prove that our algorithm is recursively feasible, i.e., once the optimization at time step $t-h$ is feasible, we can find a feasible solution at time $t$. From Lemma~\ref{lemma-1}, we already know that if the planned trajectory at $t-h$ is feasible, it must be obstacle-free, and a safe corridor can be found in the current replanning. Afterwards, the final optimization \eqref{eq:final-mpc} can be formulated. Given the feasible solution at the last time step, $u^i_{k-1}(t-h)$ and $x^i_k(t-h)$ for $k \in \mathcal{K}$, we can provide a feasible solution $x^i_k(t)=x^i_{k+1}(t-h)$, $u^i_k(t)=u^i_{k+1}(t-h)$ and $w^{i j} (t) = \min \{ \epsilon, \; {a_{K}^{i j}}^{T}(t) p_{K}^{i}(t) - b_{K}^{i j}(t) \}$, where we enforce that $x^i_{K}(t)=x^i_{K+1}(t-h)=x^i_{K}(t-h)$ and $u^i_K(t)=u_e$. First, as the result of the optimization at time step $t-h$, $x^i_{k+1}(t-h)$ and $u^i_{k}(t-h)$ with $k \in \tilde{\mathcal{K}}$ naturally satisfy the constraints in~\eqref{eq: dynamic constraint}, \eqref{eq: input-constraint}-\eqref{eq: state-constraint}. In addition, since $x^i_{K}(t)=x^i_{K+1}(t-h)=x^i_{K}(t-h)$ and $u^i_{K-1}(t)=u^i_K(t-h)=u_e$ hold, $x^i_{K}(t)$ and $u^i_{K-1}(t)$ also satisfy these constraints. In the meantime, as $x^i_{K}(t)=x^i_{K+1}(t-h)=x^i_{K}(t-h)=x^i_{K-1}(t)$ holds, the constraint \eqref{eq: terminal-constraint} holds as well.
Then, for the constraints related to MBVC-WB, i.e., \eqref{eq: inter-constraint} and \eqref{eq: w-constraint}, the feasibility of the provided solution has been proved in our previous work \cite{Chen2022}. Lastly, regarding constraint \eqref{eq: obstacle-constraint}, by properties~1) and 3) of Lemma~\ref{lemma-1}, for the given feasible solution at time $t-h$, the previously planned trajectory is obstacle-free and a safe corridor can be formulated at the current time. Then, according to property~2), the provided solution, i.e., the predetermined trajectory, satisfies these constraints. Thus, the constraint \eqref{eq: obstacle-constraint} is feasible and the optimization is feasible recursively. Since the initial optimization as well as the successive ones are feasible, the included constraints are satisfied. According to property~3) of Lemma~\ref{lemma-1} and the property of MBVC-WB in \cite{Chen2022}, obstacle and inter-robot collisions are avoided. \end{proof} \section{Numerical Simulations and Experiments} In this section, we validate the proposed algorithm and analyze its performance via numerical simulations and hardware experiments. The algorithm is implemented on an Intel Core i9 @3.2GHz computer in Python 3 and is publicly available at https://github.com/PKU-MACDLab/IMPC-OB. We use CVXOPT \cite{cvxopt} for quadratic programming and trajectory optimization, and OMPL \cite{ompl} for ABIT$^*$ path planning. Furthermore, comparisons with Ego-swarm \cite{Zhou2021}, MADER \cite{Jesus2021} and LSC \cite{Park2022} are carried out. \subsection{Numerical Simulations and Comparison} The main parameters of the robots are chosen as follows: the minimum inter-agent distance $r_{\text{min}}=0.6{\rm m}$; the radius of the agents $r_a=0.3{\rm m}$; the maximum acceleration $a_{\text{max}}=2{\rm m/s^2}$; the maximum velocity $v_{\text{max}}=3{\rm m/s}$.
\begin{figure} \centering \subfigure[LSC \cite{Park2022}]{ \includegraphics[width=0.35\linewidth]{figs/L-passage-LSC} } \subfigure[Ours]{ \includegraphics[width=0.35\linewidth]{figs/L-passage-IMPC-OB} } \caption{The ``L"-passage scenario for LSC \cite{Park2022} and our method.} \label{L-passage} \end{figure} \begin{figure} [t] \centering \includegraphics[width=0.74\linewidth]{figs/corridor-contrast} \caption{The generated corridor for a given pre-planned trajectory (\textbf{Top}: LSC \cite{Park2022}; \textbf{Bottom}: ours). Note that the corridor shown is the feasible space for the centroid of the robot. } \label{generated-corridor} \end{figure} \begin{figure} [t] \centering \subfigure[Ego-swarm \cite{Zhou2021}]{ \includegraphics[width=0.4\linewidth]{figs/forest-ego} } \subfigure[MADER \cite{Jesus2021}]{ \includegraphics[width=0.4\linewidth]{figs/forest-MADER} } \quad \subfigure[LSC \cite{Park2022}]{ \includegraphics[width=0.4\linewidth]{figs/forest-LSC} } \subfigure[Ours]{ \includegraphics[width=0.4\linewidth]{figs/forest-IMPC-OB} } \caption{Four planners evaluated in the forest scenario.} \label{Forest-transition} \end{figure} \begin{figure} \centering \subfigure[Ego-swarm \cite{Zhou2021}]{ \includegraphics[width=0.4\linewidth]{figs/H-ego} } \subfigure[MADER \cite{Jesus2021}]{ \includegraphics[width=0.4\linewidth]{figs/H-MADER} } \quad \subfigure[LSC \cite{Park2022}]{ \includegraphics[width=0.4\linewidth]{figs/H-LSC} } \subfigure[Ours]{ \includegraphics[width=0.4\linewidth]{figs/H-IMPC-OB} } \caption{Four planners evaluated in the ``H" scenario.} \label{H-transition} \end{figure} \begin{table} [t] \caption{Comparison with state-of-the-art methods. (Safety: no collision occurs. $T_t[{\rm s}]$: transition time. $L_t[{\rm m}]$: length of the transition. $T_c[{\rm ms}]$: mean computation time per replanning.
} \label{table: comparison} \begin{tabular}{llllll} \toprule &method &Safety &$T_t$ &$L_t$ &$T_c$ \\ \midrule \multirow{4}{*}{forest}&Ego-swarm \cite{Zhou2021} &No &9.2 &105.8 &9.6 \\ &MADER \cite{Jesus2021} &No &22.3 &111.1 &104.0 \\ &LSC \cite{Park2022} &Yes &22.3 &114.34 &53.2 \\ &Ours &Yes &8.1 &102.4 &93.0 \\ \midrule \multirow{4}{*}{``H"} &Ego-swarm \cite{Zhou2021} &No &7.5 &66.6 &10.2 \\ &MADER \cite{Jesus2021} &No &14.2 &71.5 &116.7 \\ &LSC \cite{Park2022} &Yes &- &- &62.3 \\ &Ours &Yes &9.3 &67.7 &86.8 \\ \bottomrule \end{tabular} \end{table} To begin with, we provide a single-robot simulation to show the proposed safe corridor, where the environment is an ``L"-passage. Fig.~\ref{L-passage} shows the comparative simulation results of the method proposed in \cite{Park2022} and our method. Notably, our method achieves a higher speed and a smoother trajectory, in the sense that our planner takes $4.1{\rm s}$ in comparison with $7.5{\rm s}$ for LSC. The difference between the safe corridors is shown in Fig.~\ref{generated-corridor}. Compared with the rectangular corridor in LSC, our corridor is a trapezoid, which provides more feasible space for the upcoming turn. Furthermore, comparisons with Ego-swarm, MADER and LSC are made in the forest-transition and ``H"-transition scenarios. Trajectories for these scenarios are illustrated in Fig.~\ref{Forest-transition} and Fig.~\ref{H-transition}, respectively, and the results are shown in Table~\ref{table: comparison}. Regarding computation time, the proposed method is relatively slow since it is implemented in Python 3, which cannot take full advantage of multi-core computation. In a prospective implementation, the computation can be distributed over multiple processors concurrently, which would decrease the time considerably. Ego-swarm has an impressive computation time in addition to relatively smooth and fast trajectories.
Unfortunately, it cannot guarantee collision avoidance, since it adopts unconstrained optimization in planning. For MADER, the optimization under strict constraints guarantees avoidance between robots. However, regarding obstacle avoidance, MADER appears maladaptive in an obstacle-dense environment, as several collisions appear. LSC has a feasibility guarantee which ensures the safety of the robots, but a deadlock occurs in the ``H"-transition. Although a heuristic deadlock-resolution method is adopted in LSC, it is ineffective in this scenario. Though we similarly utilize linear constraints to handle inter-agent collisions, the extra warning band introduces an elastic interaction instead of a hard one; on this basis, the adaptive right-hand rule is leveraged, resulting in right-hand rotations in the bottleneck of the ``H". Moreover, in terms of transition time and length, the proposed planner shows considerable superiority, i.e., faster and smoother trajectories. \begin{figure}[t!] \centering \subfigure[\textbf{Left}: the trajectories of this swarm, where the ellipsoids indicate the minimum inter-robot distance. \textbf{Right}: the distances between different pairs of robots. ]{ \includegraphics[width=0.40\linewidth]{figs/8_cube_real_trajectory} \includegraphics[width=0.50\linewidth]{figs/8-cube-inter} \label{cubic-framework} } \subfigure[\textbf{Left}: four Crazyflies are deployed in a polyhedron-shaped environment. \textbf{Right}: when a Crazyflie goes through the narrow passage, another quadrotor makes way to let it pass.]{ \includegraphics[width=0.45\linewidth,height=0.45\linewidth]{figs/complex-enviroment} \includegraphics[width=0.45\linewidth,height=0.45\linewidth]{figs/complex-trajectory} \label{complex-scenario} } \subfigure[\textbf{Left}: the hardware experiment of the ``H"-transition. \textbf{Right}: the ``n"-transition.
]{ \includegraphics[width=0.45\linewidth,height=0.45\linewidth]{figs/8-H-real-trajectory} \includegraphics[width=0.45\linewidth,height=0.45\linewidth]{figs/6-n-real-trajectory} \label{other-exp} } \caption{The real-world experiments.} \label{experiment} \end{figure} \subsection{Hardware Experiments} Hardware experiments are executed on the Crazyswarm platform \cite{crazyswarm}, where multiple nano-quadrotors fly under an OptiTrack motion-capture system. The computation for all robots is done on a central computer at a frequency of $5$ Hz, to comply with the sampling time step $h=0.2$ s. For each Crazyflie, a feedback controller \cite{Mellinger2011} is adopted to track the planned trajectory. The first experiment is shown in Fig.~\ref{cubic-framework}, where $8$ Crazyflies fly through a $0.6{\rm m}$ cubic framework. Considering the air turbulence, a Crazyflie is represented in the inter-robot avoidance as an ellipsoid with diameter $0.24$ m in the $x$-$y$ plane and $0.6$ m along the $z$ axis. Owing to this ellipsoidal shape, the inter-robot constraints are accordingly adjusted by modifying $a^{i j}_k$ and $b^{i j}_k$ as \begin{equation*} a_{k}^{i j}=E \frac{ E (\overline{p}_{k}^{i}- \overline{p}_{k}^{j}) } { \|E (\overline{p}_{k}^{i}-\overline{p}_{k}^{j}) \|_{2} },\quad b_{k}^{i j}=a_{k}^{i j^{T}} \frac{E (\overline{p}_{k}^{i} + \overline{p}_{k}^{j})}{2}+\frac{r_{\min }^{\prime}}{2}, \end{equation*} where $E={\rm diag}(1.0,1.0,\frac{0.24}{0.6})$; $r_{\min }^{\prime}=\sqrt{r_{\text{min}}^2+v_{\text{max}}^2}$ and $r_\text{min}=0.24$ m. The radius of a Crazyflie is set as $r_a=0.12$ m. From the result given in Fig.~\ref{cubic-framework}, it is apparent that the Crazyflies achieve this transition. In addition, four Crazyflies moving in a cluttered environment are shown in Fig.~\ref{complex-scenario}. Given the initial positions, the targets are randomly chosen. After a robot arrives at its target, a new one is published immediately, and this process is repeated $5$ times.
In this scenario, the feasible space consists of irregularly shaped passages between the polygonal obstacles, with widths ranging from $0.4$ m to $0.7$ m. With the help of MBVC-WB, whose warning band introduces an elastic interaction between the robots, a robot can squeeze through between the wall and the other robots, avoiding deadlock, as shown in Fig.~\ref{complex-scenario}. Such inter-robot coordination solves the deadlock problem in this scenario. Finally, two further experiments, illustrated in Fig.~\ref{other-exp}, are carried out in the ``H"-transition and ``n"-transition scenarios, respectively. In these navigations, safety is guaranteed and coordination in narrow passages is properly achieved. In the ``H"-transition, the right-hand rotation appears as in the previous simulation. Regarding the ``n"-transition, the intersection of two groups of quadrotors at the top passage is the main challenge of this mission. The proposed method resolves it, as the quadrotors fly through the passage without sacrificing any speed. When encountering an oncoming agent, a quadrotor can rapidly pick a side to avoid collision by utilizing MBVC-WB. \section{Conclusion} This work has proposed a novel multi-robot trajectory planning method for obstacle-dense environments, wherein collision avoidance is guaranteed and deadlocks among robots are resolved. In contrast to state-of-the-art works, the safety and deadlock-resolution performance of the proposed method in cluttered scenarios is ensured by theoretical proof and validated by comprehensive simulations and hardware experiments. \bibliographystyle{IEEEtran}
\section{Parameter settings} \label{sec:appendix:defaults} \subsection{Algebraic part} The settings for the algebraic part are listed and explained in detail in the section ``Code Writer/Make Package" in the documentation, as well as in the section ``Loop Integral" for settings which are specific to loop integrals. \subsubsection{Loop package} The parameters for {\tt loop\_package} are: \begin{description} \item[name]: string. The name of the {\it C++} namespace and the output directory. \item[loop\_integral]: The loop integral to be computed, defined via \\ {\tt pySecDec.loop\_integral.LoopIntegral} (see below). \item[requested\_orders]: integer. The expansion in the regulator will be computed to this order. \item[real\_parameters]: iterable of strings or sympy symbols, optional. Parameters to be interpreted as real numbers, e.g. Mandelstam invariants and masses. \item[complex\_parameters]: iterable of strings or sympy symbols, optional. Parameters to be interpreted as complex numbers, e.g. masses in a complex mass scheme. \item[contour\_deformation (True)]: bool, optional. Whether or not to produce code for contour deformation. \item[additional\_prefactor (1)]: string or sympy expression, optional. An additional factor to be multiplied to the loop integral. It may depend on the regulator, the real parameters and the complex parameters. \item[form\_optimization\_level (2)]: integer out of the interval [0,3], optional. The optimization level to be used in {\texttt{FORM}}{}. \item[form\_work\_space ('500M')]: string, optional. The {\texttt{FORM}}{} WorkSpace. \item[decomposition\_method]: string, optional. The strategy for decomposing the polynomials. The following strategies are available: \begin{itemize} \item `iterative' (default) \item `geometric' \item `geometric\_ku' \end{itemize} \item[normaliz\_executable (`normaliz')]: string, optional. The command to run normaliz. 
normaliz is only required if {\tt decomposition\_method} is set to `geometric' or `geometric\_ku'. \item[enforce\_complex (False)]: bool, optional. Whether or not the generated integrand functions should have a complex return type even though they might be purely real. The return type of the integrands is automatically complex if {\tt contour\_deformation} is True or if there are complex parameters. In other cases, the calculation can typically be kept purely real. Most commonly, this flag is needed if the logarithm of a negative real number can occur in one of the integrand functions. However, py{\textsc{SecDec}}{} will suggest setting this flag to True in that case. \item[split (False)]: bool, optional. Whether or not to split the integration domain in order to map singularities at 1 back to the origin. Set this option to True if you have singularities when one or more integration variables are equal to one. \item[ibp\_power\_goal (-1)]: integer, optional. The {\tt power\_goal} that is forwarded to the integration by parts routine. Using the default setting, integration by parts is applied until no linear or higher poles remain in the integral. We refer to the documentation for more detailed information. \item[use\_dreadnaut (True)]: bool or string, optional. Whether or not to use {\tt dreadnaut} to find sector symmetries. \end{description} The main keywords to define loop integrals from a ``graphical representation" ({\tt LoopIntegralFromGraph}) are: \begin{description} \item[internal\_lines]: list defining the propagators as connections between labelled vertices, where the first entry of each element denotes the mass of the propagator, e.g. {\tt [[`m', [1,2]], [`0', [2,1]]]}. \item[external\_lines]: list of external line specifications, consisting of a string for the external momentum and a string or number labelling the vertex, e.g. {\tt [[`p1', 1], [`p2', 2]]}. \item[replacement\_rules]: symbolic replacements to be made for the external momenta, e.g. 
definition of Mandelstam variables. Example: {\tt [(`p1*p2', `s'), (`p1**2', 0)]} where {\tt p1} and {\tt p2} are external momenta. It is also possible to specify vector replacements, e.g. {\tt [(`p4', `-(p1+p2+p3)')]}. \item[Feynman\_parameters ('x')]: iterable or string, optional. The symbols to be used for the Feynman parameters. If a string is passed, the Feynman parameter variables will be consecutively numbered starting from zero. \item[regulator ($\epsilon$)]: string or sympy symbol, optional. The symbol to be used for the dimensional regulator. Note: If you change this symbol, you have to adapt the dimensionality accordingly. \item[regulator\_power (0)]: integer. The numerator will be multiplied by the regulator ($\epsilon$) raised to this power. This can be used to ensure that the numerator is finite in the limit $\epsilon\to 0$. \item[dimensionality (4-2$\epsilon$)]: string or sympy expression, optional. The dimensionality of the loop momenta. \item[powerlist]: iterable, optional. The powers of the propagators, possibly dependent on the regulator. The ordering must match the ordering of the propagators given in {\tt internal\_lines}. \end{description} For {\tt LoopIntegralFromPropagators}: \begin{description} \item[propagators]: iterable of strings or sympy expressions. The propagators in momentum representation, e.g. {\tt [`k1**2', `(k1-k2)**2 - m1**2']}. \item[loop\_momenta]: iterable of strings or sympy expressions. The loop momenta, e.g. {\tt [`k1','k2']}. \item[external\_momenta]: iterable of strings or sympy expressions, optional. The external momenta, e.g. {\tt [`p1','p2']}. Specifying the external momenta is only required when a numerator is to be constructed. \item[Lorentz\_indices]: iterable of strings or sympy expressions, optional. Symbols to be used as Lorentz indices in the numerator. \item[numerator (1)]: string or sympy expression, optional. The numerator of the loop integral. Scalar products must be passed in index notation, \\ e.g. 
{\tt k1(mu)*k2(mu)+p1(mu)*k2(mu)}. All Lorentz indices must be explicitly defined using the parameter {\tt Lorentz\_indices}. \item[metric\_tensor ('g')]: string or sympy symbol, optional. The symbol to be used for the (Minkowski) metric tensor $g^{\mu\nu}$. \end{description} Note: The parameters {\tt replacement\_rules, regulator, dimensionality, powerlist, regulator\_power} are available for both {\tt LoopIntegralFromGraph} and {\tt LoopIntegralFromPropagators}. \subsubsection{Make package} The parameters for {\tt make\_package} are: \begin{description} \item[name]: string. The name of the {\it C++} namespace and the output directory. \item[integration\_variables]: iterable of strings or sympy symbols. The variables that are to be integrated from 0 to 1. \item[regulators]: iterable of strings or sympy symbols. The (UV/IR) regulators of the integral. \item[requested\_orders]: iterable of integers. Compute the expansion in the regulators to these orders. \item[polynomials\_to\_decompose]: iterable of strings or sympy expressions. The polynomials to be decomposed. \item[polynomial\_names]: iterable of strings. Assign symbols for the polynomials to decompose. These can be referenced in the {\tt other\_polynomials}. \item[other\_polynomials]: iterable of strings or sympy expressions. Additional polynomials where no decomposition is attempted. The symbols defined in {\tt polynomial\_names} can be used to reference the {\tt polynomials\_to\_decompose}. This is particularly useful when computing loop integrals where the numerator can depend on the first and second Symanzik polynomials. Note that the {\tt polynomial\_names} refer to the {\tt polynomials\_to\_decompose} without their exponents. \item[prefactor]: string or sympy expression, optional. A factor that does not depend on the integration variables. It can depend on the regulator(s) and the kinematic invariants. The result returned by py{\textsc{SecDec}}{} will contain the expanded prefactor.
\item[remainder\_expression]: string or sympy expression, optional. An additional expression which will be considered as a multiplicative factor. \item[functions]: iterable of strings or sympy symbols, optional. Function symbols occurring in {\tt remainder\_expression}. Note: The power function pow and the logarithm log are already defined by default. The log uses the nonstandard continuation from a negative imaginary part on the negative real axis (e.g. $\log(-1) = -i\,\pi$). \item[form\_insertion\_depth (5)]: non-negative integer, optional. How deep FORM should try to resolve nested function calls. \item[contour\_deformation\_polynomial]: string or sympy symbol, optional. The name of the polynomial in {\tt polynomial\_names} that is to be continued to the complex plane according to a $-i\delta$ prescription. For loop integrals, this is the second Symanzik polynomial F, and this will be done automatically in {\tt loop\_package}. If not provided, no code for contour deformation is created. \item[positive\_polynomials]: iterable of strings or sympy symbols, optional. The names of the polynomials in {\tt polynomial\_names} that should always have a positive real part. For loop integrals, this applies to the first Symanzik polynomial U. If not provided, no polynomial is checked for positiveness. If {\tt contour\_deformation\_polynomial} is None, this parameter is ignored. \end{description} Note: All parameters (except {\tt loop\_integral}) described under {\tt loop\_package} are also available in {\tt make\_package}. \subsection{C++ part} The default settings for the numerical integration are listed in the section ``Integral Interface" in the documentation. We also list the defaults and a short description for the main parameters here. The values in brackets behind the keywords denote the defaults. \subsubsection{Contour deformation parameters and general settings} \begin{description} \item[real\_parameters]: iterable of float. The real parameters of the library (e.g. 
kinematic invariants in the case of loop integrals). \item[complex\_parameters]: iterable of complex. The complex parameters of the library (e.g. complex masses). \item[together (True)]: bool. Determines whether to integrate the sum of all sectors or to integrate the sectors separately. \item[number\_of\_presamples (100000)]: unsigned int, optional. The number of samples used for the contour optimization. This option is ignored if the integral library was created with contour deformation set to `False'. \item[deformation\_parameters\_maximum (1.0)]: float, optional. The maximal value the deformation parameters $\lambda_i$ can obtain. If number\_of\_presamples=0, all $\lambda_i$ are set to this value. This option is ignored if the integral library was created without deformation. \item[deformation\_parameters\_minimum ($10^{-5}$)]: float, optional. The minimal value for the deformation parameters $\lambda_i$. This option is ignored if the integral library was created without deformation. \item[deformation\_parameters\_decrease\_factor (0.9)]: float, optional. If the sign check (the imaginary part always must be negative) with the optimized $\lambda_i$ fails, all $\lambda_i$ are multiplied by this value until the sign check passes. This option is ignored if the integral library was created without deformation. \item[real\_complex\_together (False)]: If true, real and imaginary parts are evaluated simultaneously. If the grid should be optimally adapted to both real and imaginary part, it is more advisable to evaluate them separately. \end{description} \subsubsection{{\sc Cuba} parameters} \begin{table}[h!] 
\caption{Default settings for integrator-specific parameters.\label{tab:cuba}} \begin{tabular}{|l|l|l|l|} \hline Vegas&Suave&Divonne&Cuhre\\ \hline nstart (1000)&nnew (1000)&key1 (2000)&key (0)\\ nincrease (500)&nmin (10)&key2 (1), key3 (1)&\\ nbatch (1000)&flatness (25.0)&maxpass (4)&\\ &&border (0.0)&\\ &&maxchisq (1.0)&\\ &&mindeviation (0.15)&\\ \hline \end{tabular} \end{table} Common to all integrators: \begin{description} \item[epsrel (0.01)]: The desired relative accuracy for the numerical evaluation. \item[epsabs ($10^{-7}$)]: The desired absolute accuracy for the numerical evaluation. \item[flags (0)]: Sets the {\sc Cuba} verbosity flags. The flags=2 means that the {\sc Cuba} input parameters and the result after each iteration are written to the log file of the numerical integration. \item[seed (0)]: The seed used to generate random numbers for the numerical integration with Cuba. \item[maxeval (1000000)]: The maximal number of evaluations to be performed by the numerical integrator. \item[mineval (0)]: The number of evaluations which should at least be done before the numerical integrator returns a result. \end{description} For the description of the more specific parameters, we refer to the {\sc Cuba} manual. Our default settings are given in Table~\ref{tab:cuba}. When using Divonne, we strongly advise to use a non-zero value for {\tt border}, e.g. $10^{-8}$. \subsection{One-loop box} \label{subsec:examples:one-loop} This example is located in the folder {\tt box1L}. It calculates a 1-loop box integral with one off-shell leg ($p_1^2\not=0$) and one massive propagator connecting the external legs $p_1$ and $p_2$. The user has basically two possibilities to perform the numerical integrations: \\ (a) using the {\texttt{python}}{} interface to call the library or \\ (b) using the {\it C++} interface by inserting the numerical values for the kinematic point into {\tt integrate\_box1L.cpp}. 
The commands to run this example in case (a) above are\\ {\tt python generate\_box1L.py\\ make -C box1L\\ python integrate\_box1L.py } In case (b) above the commands are\\ {\tt python generate\_box1L.py\\ <edit kinematic point in box1L/integrate\_box1L.cpp>\\ make -C box1L integrate\_box1L\\ ./box1L/integrate\_box1L } The {\tt make} command can optionally be passed the jobs ({\tt -j}) command to run multiple {\texttt{FORM}}{} jobs and then multiple compilation jobs in parallel, for example \\ {\tt make -j 8 -C box1L}\\ would run 8 jobs in parallel where possible. Other simple examples can be run in their corresponding folders by replacing the name {\tt box1L} in the above commands with the name of the example. In more detail, running the {\texttt{python}}{} script {\tt generate\_box1L.py} will create a folder called {\tt box1L} (the ``name'' specified in {\tt generate\_box1L.py}) which will contain the following files and subdirectories: \begin{center} \texttt{ \begin{tabular}{l l l l } box1L.hpp & integrate\_box1L.cpp & Makefile & pylink/ \\ box1L.pdf & src/ \qquad codegen/ & Makefile.conf & README \\ \end{tabular} } \end{center} Inside the generated {\tt box1L} folder typing `{\tt make}' will create the source files for the integrand and the libraries `{\tt libbox1L.a}' and `{\tt box1L\_pylink.so}', which can be linked to an external program calling these integrals. Note that py{\textsc{SecDec}}{} automatically creates a {\tt pdf} file with the diagram picture if {\tt LoopIntegralFromGraph} is used as input format. In case (a) the {\texttt{python}}{} file {\tt integrate\_box1L.py} is used to perform the numerical integration, the user may edit the kinematic point and integration parameter settings directly at {\texttt{python}}{} level. 
In case (b), the user has to insert the values for the kinematic point in\\ {\tt box1L/integrate\_box1L.cpp} at the line \\ \qquad{\tt const std::vector<box1L::real\_t> real\_parameters = \{\}; }\\ which can be found at the beginning of {\tt int main()}. Complex parameters should be given as a list of the real and imaginary part. For box1L, the complex numbers {\tt 1+2\,i} and {\tt 2+1\,i} are written as \\ \qquad {\tt const std::vector<box1L::complex\_t> complex\_parameters=\{{ \{1.0,2.0\}, \{2.0,1.0\} }\};}\\ If no complex parameters are present, the list {\tt complex\_parameters} should be left empty. The command `{\tt make -C box1L box1L/integrate\_box1L}' produces the executable {\tt integrate\_box1L} which can be run to perform the numerical integration using the {\it C++} interface. \vspace*{2mm} {\bf Loop over multiple kinematic points} The file {\tt integrate\_box1L\_multiple\_points.py} shows how to integrate a number of kinematic points sequentially. The points are defined in the file {\tt kinematics.input}. They are read in by the {\texttt{python}}{} script (line by line).\\ The first entry of each line in the kinematics data file {\tt kinematics.input} should be a string, the ``name'' of the kinematic point, which can serve to label the results for each point. The results are written to the file {\tt results\_box1L.txt}. \subsection{Two-loop three-point function with massive propagators} The example {\tt triangle2L} calculates the two-loop diagram shown in Fig.~\ref{fig:P126}. The steps to perform to run this example (using the python interface) are analogous to the ones given in the previous section:\\ {\tt python generate\_triangle2L.py \&\& make -C triangle2L \&\& }\\ {\tt python integrate\_triangle2L.py} Results for this diagram can be found e.g. in \cite{Fleischer:1997bw,Davydychev:2003mv,Bonciani:2003hc,Ferroglia:2003yj}. 
\begin{figure}[htb] \begin{center} \includegraphics[width=5.5cm]{P126.pdf} \end{center} \caption{A two-loop vertex graph containing a massive triangle loop. Solid lines are massive, dashed lines are massless. The vertices are labeled to match the construction of the integrand from the topology.} \label{fig:P126} \end{figure} The result of py{\textsc{SecDec}}{} is shown in Tab.~\ref{tab:triangle2L}. We would like to point out that the default accuracy in this example is set to $10^{-2}$ in order to keep the runtimes low. This does not reflect the accuracy py{\textsc{SecDec}}{} can actually reach. \begin{table}[htb] \caption{Result for the two-loop triangle {\tt P126} at $p_3^2=9$ and $m^2=1$ compared to the analytic result of Ref.~\cite{Davydychev:2003mv}. } \begin{center} \begin{tabular}{|l|l|c|c|} \hline $\epsilon$ order & py{\textsc{SecDec}}{} result\\ \hline $\epsilon^{-2}$ & (-0.0379735 - i\,0.0747738) $\pm$ (0.000375449 + i\,0.000695892) \\ $\epsilon^{-1}$ & (0.2812615 + i\,0.1738216) $\pm$ (0.003117778 + i\,0.002358655) \\ $\epsilon^{0}$ & (-1.0393673 + i\,0.2414135) $\pm$ (0.011940978 + i\,0.004604699) \\ \hline & analytic result\\ \hline $\epsilon^{-2}$& -0.038052884394 - i\,0.0746553844162\\ $\epsilon^{-1}$& \,0.279461083591 + i\,0.1746609123993\\ $\epsilon^{0}$ &-1.033851309109 + i\,0.2421265865644\\ \hline \end{tabular} \end{center} \label{tab:triangle2L} \end{table} A comparison of the timings with {\textsc{SecDec}~$3$}{} and \textsc{Fiesta}~$4.1${} for the evaluation of the finite part can be found in Table~\ref{tab:timings}. \subsection{Two-loop four-point function with numerators} \label{subsec:examples:numerator} The example {\tt box2L\_numerator} shows how numerators can be treated in py{\textsc{SecDec}}{}. 
It calculates a massless planar on-shell two-loop 7-propagator box in two different ways: \\ (a) with the numerator defined as an inverse propagator ({\tt box2L\_invprop.py}), \\ (b) with the numerator defined in terms of contracted Lorentz vectors\\ ({\tt box2L\_contracted\_tensor.py}). This example is run with the following commands\\ \begin{tabular}{ll} {\tt make } &{\it (will run both {\texttt{python}}{} scripts and compile the libraries)}\\ {\tt./integrate\_box2L} & {\it (will calculate the integral in both ways and }\\ & {\it print the results as well as the difference between }\\ & {\it the two results, which should be numerically zero).} \end{tabular} The result for $s=-3$ and $t=-2$ is listed in Tab.~\ref{tab:box2l}. The integral is given by \begin{align} &I_{a_1\ldots a_8}=\int\frac{d^Dk_1}{i\pi^\frac{D}{2}}\frac{d^Dk_2}{i\pi^\frac{D}{2}} \frac{1}{[D_1]^{a_1}[D_2]^{a_2}[D_3]^{a_3}[D_4]^{a_4}[D_5]^{a_5}[D_6]^{a_6}[D_7]^{a_7}[D_8]^{a_8}}\\ &D_1=k_1^2, D_2=(k_1+p_2)^2, D_3=(k_1-p_1)^2,D_4=(k_1-k_2)^2,\nonumber\\ & D_5=(k_2+p_2)^2,D_6=(k_2-p_1)^2, D_7=(k_2+p_2+p_3)^2,D_8=(k_1+p_3)^2.\nonumber \end{align} In case (a), the integral $I_{1111111-1}$ is specified by {\tt powerlist = [1,1,1,1,1,1,1,-1]} in {\tt box2L\_invprop.py}.\\ In case (b), the same integral is specified (in {\tt box2L\_contracted\_tensor.py}) \\ by {\tt powerlist = [1,1,1,1,1,1,1,0]} and \\{\tt numerator = `k1(mu)*k1(mu) + 2*k1(mu)*p3(mu) + p3(mu)*p3(mu)'}. \begin{table}[htb] \caption{Result for the two-loop four-point function with numerators at the kinematic point $s=-3, t=-2$.
} \begin{center} \begin{tabular}{|c|c|} \hline $\epsilon$ order & py{\textsc{SecDec}}{} result \\ \hline $\epsilon^{-4}$ & -0.2916 $\pm$ 0.0022 \\ $\epsilon^{-3}$ & 0.7410 $\pm$ 0.0076 \\ $\epsilon^{-2}$ & -0.3056 $\pm$ 0.0095 \\ $\epsilon^{-1}$ & -2.2966 $\pm$ 0.0313 \\ $\epsilon^{0}$ & 1.1460 $\pm$ 0.0504 \\ \hline \end{tabular} \end{center} \label{tab:box2l} \end{table} \subsection{Three-loop triangle integral} \label{subsec:examples:3loop} \begin{figure}[h] \begin{center} \includegraphics[width=5.3cm]{formfactor3L.pdf} \end{center} \caption{Three-loop massless 7-propagator graph. \label{fig:A75}} \end{figure} The example {\tt triangle3L} demonstrates how the symmetry finder can reduce the number of sectors. We consider the seven-propagator 3-loop 3-point integral depicted in Fig.~\ref{fig:A75}, which is the figure that is automatically created by the code. This integral has been calculated to order $\epsilon$ in Ref.~\cite{Heinrich:2007at} and to order $\epsilon^4$ in Ref.~\cite{vonManteuffel:2015gxa}. Here we also calculate it to order $\epsilon^4$. This example is run as usual by the commands\\ {\tt python generate\_triangle3L.py \&\& make -C triangle3L \&\& }\\ {\tt python integrate\_triangle3L.py} It shows that the symmetry finder reduces the number of primary sectors to calculate from 7 to 3, and the total number of sectors from 212 to 122. For comparison, {\textsc{SecDec}~$3$}{} produces 448 sectors using strategy X. \subsection{Integrals containing elliptic functions} \label{subsec:examples:elliptic} In the examples {\tt elliptic2L\_euclidean} and {\tt elliptic2L\_physical} an integral is calculated which is known from Refs.~\cite{Bonciani:2016qxi,Primo:2016ebd} to contain elliptic functions. 
We consider the integrals $I_{a_1\ldots a_9}$ \begin{align} &I_{a_1\ldots a_9}=\int\frac{d^Dk_1}{i\pi^\frac{D}{2}}\frac{d^Dk_2}{i\pi^\frac{D}{2}} \frac{D_8^{-a_8}D_9^{-a_9}}{[D_1]^{a_1}[D_2]^{a_2}[D_3]^{a_3}[D_4]^{a_4}[D_5]^{a_5}[D_6]^{a_6}[D_7]^{a_7}}\\ &D_1=k_1^2-m^2, D_2=(k_1+p_1+p_2)^2-m^2, D_3=k_2^2-m^2,\nonumber\\ & D_4=(k_2+p_1+p_2)^2-m^2,D_5=(k_1+p_1)^2-m^2,D_6=(k_1-k_2)^2,\nonumber\\ & D_7=(k_2-p_3)^2-m^2,D_8=(k_2+p_1)^2,D_9=(k_1-p_3)^2.\nonumber \end{align} The topology for $I_{110111100}$ is depicted in Fig.~\ref{fig:ellipticI1}. Here we calculate the integral $f^A_{66}=\left(-s/m^2\right)^\frac{3}{2}\,I_{110111100}$ discussed in Ref.~\cite{Bonciani:2016qxi}. In {\tt elliptic2L\_euclidean} we calculate the kinematic point $s=-4/3, t= -16/5, p_4^2=-100/39, m=1$ (Euclidean point) with the settings {\tt epsrel=}$10^{-5}$, {\tt maxeval=}$10^7$ and obtain \begin{align} f^A_{66}&=0.2470743601 \pm 6.9692\times 10^{-6} \;. \end{align} The analytic result\footnote{We thank Francesco Moriello and Hjalte Frellesvig for providing us with the result.} is given by \begin{align} f^A_{66,{\mathrm{analytic}}}&=0.247074199140732131068066 \;.\nonumber \end{align} In {\tt elliptic2L\_physical} we calculate the non-Euclidean point $s=90, t= -2.5, p_4^2=1.6, m^2=1$ and find with {\tt epsrel=}$10^{-4}$, {\tt maxeval=}$10^7$: \begin{align} \left(\frac{-s}{m^2}\right)^{-\frac{3}{2}}\,f^A_{66}&=-0.04428874+i\,0.01606818 \pm (2.456+i\,2.662)\times 10^{-5}\;.\nonumber \end{align} \begin{figure}[htb] \begin{center} \includegraphics[width=5.cm]{elliptic_box_6prop-eps-converted-to.pdf} \end{center} \caption{Two-loop 6-propagator graph leading to elliptic functions. Curly lines denote massless particles. The box contains massive propagators with mass $m$. One leg ($p_4$) has $p_4^2\not=0$.
\label{fig:ellipticI1}} \end{figure} \subsection{Two-loop vertex diagram with special kinematics} In the example {\tt triangle2L\_split} we calculate an integral entering the two-loop corrections to the $Zb\bar{b}$ vertex, calculated in Refs.~\cite{Fleischer:1998nb,Dubovyk:2016zok}, where it is called $N_3$. This example is run as usual by the commands\\ {\tt python generate\_triangle2L\_split.py \&\& make -C triangle2L\_split \&\&}\\ {\tt python integrate\_triangle2L\_split.py} The diagram produced by py{\textsc{SecDec}}{} is shown in Fig.~\ref{fig:Zbb}. \begin{figure}[htb] \begin{center} \includegraphics[width=0.4\textwidth]{N3_graph.png} \end{center} \caption{The integral $N_3$ with one massive propagator ($m_Z$) and $s=p_3^2=m_Z^2$. \label{fig:Zbb}} \end{figure} The kinematic condition $s=M_Z^2$ leads to an integrand which is particularly difficult for the sector decomposition method because it does not have a Euclidean region. As a consequence, the integral has both endpoint singularities as well as singularities due to the fact that the second Symanzik polynomial ${\cal F}$ can vanish on some hyperplane in Feynman parameter space, rather than only at the origin. The remappings done by the standard sector decomposition algorithm would turn this into singularities at $x_i=1$. In {\textsc{SecDec}~$3$}, singularities at $x_i=1$ were treated by a split of the integration domain at $x_i=0.5$ and subsequent remapping to the unit hypercube. However, this can lead to an infinite recursion of the problem. py{\textsc{SecDec}}{} can detect and remap such ``hyperplane singularities'' into singularities at the origin by a dedicated splitting procedure, where a splitting at the symmetric point $x_i=0.5$ is avoided. The results obtained for this example are listed in Table~\ref{tab:N3}.
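The split at $x_i=0.5$ with subsequent remapping to the unit interval, as used in {\textsc{SecDec}~$3$}{}, amounts to the identity $\int_0^1 f(x)\,\mathrm{d}x=\tfrac{1}{2}\int_0^1 f(u/2)\,\mathrm{d}u+\tfrac{1}{2}\int_0^1 f(1-u/2)\,\mathrm{d}u$, which moves a singularity at $x=1$ to $u=0$ in the second term. A minimal numerical sketch of this identity (illustrative only, not py{\textsc{SecDec}}{} code; the integrand and the helper {\tt midpoint} are our own choices):

```python
import math

# Splitting [0,1] at x = 1/2 and remapping both halves back to the unit
# interval: x = u/2 on the lower half, x = 1 - u/2 on the upper half,
# so that a singularity at x = 1 would reappear at u = 0.
def midpoint(g, n=200000):
    """Composite midpoint-rule quadrature on [0,1]."""
    h = 1.0 / n
    return h * sum(g((k + 0.5) * h) for k in range(n))

f = lambda x: 1.0 / (2.0 - x)   # illustrative integrand, regular on [0,1]

full  = midpoint(f)                                  # direct integral = log(2)
lower = midpoint(lambda u: 0.5 * f(0.5 * u))         # x = u/2
upper = midpoint(lambda u: 0.5 * f(1.0 - 0.5 * u))   # x = 1 - u/2

assert abs(full - (lower + upper)) < 1e-9
assert abs(full - math.log(2.0)) < 1e-6
```

For an integrable singularity at $x=1$, the second remapped term exposes it at $u=0$, where the standard subtraction machinery applies.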
\begin{table}[htb] \caption{Numerical result from py{\textsc{SecDec}}{} for the integral $N_3$.} \begin{center} \begin{tabular}{|c|l|} \hline $\epsilon$ order & py{\textsc{SecDec}}{} result \\ \hline $\epsilon^{-2}$ & (1.23370112 + i\,5.76 $\times 10^{-7}$) $\pm$ (0.00003623 + i\,0.00003507) \\ $\epsilon^{-1}$ & (2.89050847 + i\,3.87659429) $\pm$ (0.00060165 + i\, 0.00070525) \\ $\epsilon^0$ & (0.77923028 + i\,4.13308243) $\pm$ (0.00815782 + i\, 0.00923315)\\ \hline \end{tabular} \end{center} \label{tab:N3} \end{table} \subsection{Hypergeometric function $_5F_4$} \label{subsec:examples:hypergeo} An example of a general dimensionally regulated parameter integral, which can also have endpoint singularities at $z_i=1$, can be found in {\tt hypergeo5F4}. We consider the hypergeometric function $_5F_4(a_1,...,a_5;b_1,...,b_4;\beta)$. This example is run by the usual commands\\ {\tt python generate\_hypergeo5F.py \&\& make -C hypergeo5F4 \&\& }\\ {\tt python integrate\_hypergeo5F.py} The considered function has the integral representation $$\prod_{i=1}^4 \left[\frac{\Gamma[b_i]}{\Gamma[a_i]\Gamma[b_i-a_i]}\int_0^1dz_i\,(1 - z_i)^{-1 - a_i + b_i}z_i^{-1 + a_i}\right](1 - \beta z_1 z_2 z_3 z_4)^{-a_5}\;.$$ The potential singularities at $z_i=1$ are automatically detected by the program and remapped to the origin if the flag {\tt split=True} is set. Results for values $a_1=-\epsilon,a_2=-\epsilon,a_3=-3\epsilon,a_4=-5\epsilon,a_5=-7\epsilon$, $b_1=2\epsilon,b_2=4\epsilon,b_3=6\epsilon,b_4=8\epsilon, \beta=0.5$ are shown in Tab.~\ref{tab:hyp5f4}. \begin{table}[h!] \caption{Comparison of the exact result for $_5F_4$ with the evaluation of py{\textsc{SecDec}}{}, maximally using $10^9$ integrand evaluations.
} \begin{center} \begin{tabular}{|c|c|c|} \hline $\epsilon$ order & Exact result (using HypExp~\cite{Huber:2005yg}) & py{\textsc{SecDec}}{} result \\ \hline $\epsilon^0$ & 1 & $1\pm 10^{-15}$ \\ $\epsilon^1$ & 0.1895324 & 0.18953239 $\pm$ 0.0002 \\ $\epsilon^2$ & - 2.2990427 & -2.2990377 $\pm$ 0.0016 \\ $\epsilon^3$ & 55.469019 & 55.468712 $\pm$ 0.084 \\ $\epsilon^4$ & - 1014.3924 & -1014.3820 $\pm$ 0.89 \\ \hline \end{tabular} \end{center} \label{tab:hyp5f4} \end{table} \subsection{Function with two different regulators} \label{subsec:examples:scet} The example {\tt two\_regulators} demonstrates the sector decomposition and integration of a function with multiple regulators. We consider the integral\footnote{We thank Guido Bell and Rudi Rahn for providing this example.} \begin{eqnarray} I&=e^{-\gamma_E(2\epsilon+\alpha)}\int_0^1 dz_0 \int_0^1 dz_1 \,z_0^{-1-2\epsilon-\alpha}(1-z_0)^{-1+2\epsilon+\alpha} z_1^{-1+\frac{\alpha}{2}}\,e^{-z_0/(1-z_0)}\\ &=\frac{2}{\alpha}\Gamma(-2\epsilon-\alpha)\,e^{-\gamma_E(2\epsilon+\alpha)}\nonumber\\ &=-\frac{1}{\alpha\epsilon}+\frac{1}{2\epsilon^2}-\frac{\pi^2}{6}+{\cal O}(\alpha,\epsilon)\;.\nonumber \end{eqnarray} This example can be run using the usual commands\\ {\tt python generate\_two\_regulators.py \&\& make -C two\_regulators \&\&}\\ {\tt python integrate\_two\_regulators.py} The regulators are specified in a list as {\tt regulators = [`alpha',`eps']}. The orders to be calculated in each regulator are defined in a list where the position of each entry matches the one in the regulator list. For example, if the integral should be calculated up to the zeroth order in the regulator $\alpha$ and to first order in $\epsilon$, the corresponding input would be {\tt requested\_orders = [0,1]}.
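In the $z_0$ integration above, the substitution $t=z_0/(1-z_0)$ maps $z_0^{-1-x}(1-z_0)^{-1+x}\,e^{-z_0/(1-z_0)}\,\mathrm{d}z_0$ to $t^{-1-x}e^{-t}\,\mathrm{d}t$ with $x=2\epsilon+\alpha$, so the $z_0$ integral equals $\Gamma(-x)$. A quick numerical check at the convergent test value $x=-1/2$, where $\Gamma(-x)=\Gamma(1/2)=\sqrt{\pi}$ (a stdlib-only sketch, not part of the example's code):

```python
import math

# Numerical check of  int_0^inf t^(-1-x) exp(-t) dt = Gamma(-x)
# at the test value x = -1/2, where the integrand is t^(-1/2) e^(-t).
# The substitution t = u^2 removes the endpoint singularity at t = 0:
#   int_0^inf t^(-1/2) e^(-t) dt = 2 * int_0^inf exp(-u^2) du.
def gamma_via_integral(umax=10.0, n=100000):
    """Midpoint-rule approximation of 2*int_0^umax exp(-u^2) du."""
    h = umax / n
    return 2.0 * h * sum(math.exp(-((k + 0.5) * h) ** 2) for k in range(n))

x = -0.5
numeric = gamma_via_integral()
assert abs(numeric - math.gamma(-x)) < 1e-6   # Gamma(1/2) = sqrt(pi)
```

For physical (infinitesimal) regulators the same $z_0$ integral is divergent at $z_0=0$, which is exactly the endpoint singularity that the sector decomposition and subtraction machinery regulates.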
\subsection{User-defined additional functions} \label{subsec:examples:dummy} The user has several possibilities to define functions which are not included in the decomposition procedure itself and which can therefore be non-polynomial or be defined by an arbitrary {\it C++} function, for example a jet algorithm or the definition of an event shape variable. Three examples ({\tt dummyI}, {\tt dummyII} and {\tt thetafunction}) which demonstrate the use of user defined functions are contained in the subdirectory {\tt userdefined\_cpp}. \subsubsection{Analytic functions not entering the decomposition} The example {\tt dummyI} demonstrates how a result can be multiplied by an analytic function of the integration variables which should not be decomposed. The example can be run with the usual commands\\ {\tt python generate\_dummyI.py \&\& make -C dummyI \&\& \\ python integrate\_dummyI.py} The functions which are to be multiplied onto the result are listed in {\tt generate\_dummyI.py} on the line {\tt functions = [`dum1', `dum2']}. The user can give functions any name which is not a reserved {\texttt{python}}{} function name.\\ The dependence of these functions on a number of arguments is given on the line \\ {\tt remainder\_expression =\\ `(dum1(z0,z1,z2,z3) + 5*eps*z0)**(1+eps) *\\ dum2(z0,z1,alpha)**(2-6*eps)'}.\\ Note that the {\tt remainder\_expression} is an explicitly defined function of the integration variables but that the functions {\tt dum1} and {\tt dum2} are left implicit in the {\texttt{python}}{} input file. Any functions left implicit in the {\texttt{python}}{} input (in this example {\tt dum1} and {\tt dum2}) are to be defined later in the file {\tt <name>/src/functions.hpp}. A template for this file will be created automatically together with the process directory. In our example, for the user's convenience, the appropriate functions are copied to the process directory in the last line of {\tt generate\_dummyI.py}. 
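To make the conventions concrete, suppose (purely hypothetically; this is not the definition used in the example, which lives in {\tt dummyI/src/functions.hpp}) that {\tt dum1} were $z_0 z_1+z_2 z_3$. A python sketch of such a function and of its derivative with respect to the first argument, which would be supplied under the name {\tt ddum1d0}:

```python
# Hypothetical implementation of the extra function dum1 and of its
# derivative with respect to the first argument, named ddum1d0 following
# the pattern d<function>d<argument>. The choice of dum1 is purely
# illustrative, not the definition shipped with the example.
def dum1(z0, z1, z2, z3):
    return z0 * z1 + z2 * z3

def ddum1d0(z0, z1, z2, z3):
    return z1  # d(dum1)/d(z0) for the illustrative choice above

# Central finite difference as a consistency check of the derivative:
h = 1e-6
args = (0.3, 0.7, 0.2, 0.9)
fd = (dum1(args[0] + h, *args[1:]) - dum1(args[0] - h, *args[1:])) / (2.0 * h)
assert abs(fd - ddum1d0(*args)) < 1e-8
```

The corresponding {\it C++} definitions in {\tt functions.hpp} follow the same argument order and naming scheme.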
Note that the arguments in {\tt functions.hpp} are the ones that occur in the argument list of the function in {\tt generate\_dummyI.py}, in the same order. The function arguments can be both integration variables and parameters. Derivatives of the functions are needed if higher than logarithmic poles appear in the decomposition of the integrand. The definitions of the derivatives are named following the pattern {\tt d<function>d<argument>}, for example `{\tt ddum1d0}' means the first derivative of the function with name `{\tt dum1}' with respect to its first argument. Alternatively, if the extra functions are simple, they can be defined explicitly in the {\texttt{python}}{} input file in {\tt remainder\_expression =} `{\it define explicit function here}'. The example {\tt dummyII} demonstrates this. It can be run with the usual commands\\ {\tt python generate\_dummyII.py \&\& make -C dummyII \&\& \\ python integrate\_dummyII.py} In this case, the definition of functions like {\tt dum1,dum2} is not needed. The definitions given in {\tt remainder\_expression} will be multiplied verbatim onto the polynomials to be decomposed. \subsubsection{Non-analytic or procedural functions not entering the decomposition} The user can also multiply the result by {\it C++} functions which are not simple analytic functions, for example they may contain {\tt if} statements, {\tt for} loops, etc., as may be needed to define measurement functions or observables. An example of this is given in {\tt generate\_thetafunction.py} which shows how a theta-function can be implemented in terms of a {\it C++} {\tt if} statement. This example can be run with the usual commands\\ {\tt python generate\_thetafunction.py \&\& make -C thetafunction \&\& \\ python integrate\_thetafunction.py} In the {\texttt{python}}{} input file the name of the {\it C++} function is given on the line {\tt functions = ['cut1']}.
The line\\ {\tt remainder\_expression = `cut1(z1,delt)'}\\ instructs py{\textsc{SecDec}}{} to multiply the function onto the result, without decomposition. Note that the implementation of the function {\tt cut1} is not given in the {\texttt{python}}{} input file. Once the process directory is created, the function {\tt cut1} should be defined in {\tt <name>/src/functions.hpp}. In our example, the appropriate function is copied to the process directory in the last line of {\tt generate\_thetafunction.py} for the user's convenience. The theta-function may be defined as follows:\\ \lstset{language=C++, basicstyle=\ttfamily, keywordstyle=\ttfamily, stringstyle=\ttfamily, commentstyle=\ttfamily } \begin{lstlisting} template<typename T0, typename T1> integrand_return_t cut1(T0 arg0, T1 arg1) { if (arg0 < arg1) { return 0.; } else { return 1.; } }; \end{lstlisting} The first argument ({\tt arg0}) corresponds to {\tt z1}, the second one ({\tt arg1}) is the cut parameter {\tt delta}. \subsection{Four-photon amplitude} \label{sec:examples:4gamma} This example, contained in {\tt 4photon1L\_amplitude}, calculates the one-loop four-photon amplitude ${\cal M}^{++--}$. The example may be run using the commands:\\ {\tt make \&\& ./amp} The {\tt Makefile} will produce the libraries for the two-point and four-point functions entering the amplitude and compile the file {\tt amp.cpp} which defines the amplitude. Executing `{\tt ./amp}' evaluates the amplitude numerically and prints the analytic result for comparison. The amplitude for 4-photon scattering via a massless fermion loop can be expressed in terms of three independent helicity amplitudes, ${\cal M}^{++++}$, ${\cal M}^{+++-}$, ${\cal M}^{++--}$, out of which the remaining helicity amplitudes forming the full amplitude can be reconstructed using crossing symmetry, Bose-symmetry and parity. 
Omitting an overall factor of $\alpha^2$, the analytic expressions read (see e.g.~\cite{Binoth:2002xg}) \begin{eqnarray} {\cal M}^{++++} &=& 8 \quad , \quad {\cal M}^{+++-} = -8 \;,\nonumber \\ {\cal M}^{++--} &=& -8 \Bigl[ 1 + \frac{t-u}{s} \log\left(\frac{t}{u}\right) + \frac{t^2+u^2}{2 s^2} \Bigl( \log\left(\frac{t}{u}\right)^2 + \pi^2 \Bigr)\Bigr] \;\; . \end{eqnarray} Up to an overall phase factor, the amplitude ${\cal M}^{++--}$ can be expressed in terms of one-loop integrals as \begin{align} {\cal M}^{++--} =-8 \left\{ 1 + \frac{t^{2}+u^{2}}{s} I_{4}^{D+2}(t,u) + \frac{t-u}{s}\left( I_{2}^{D}(u)-I_{2}^{D}(t) \right) \right\}\,. \end{align} The purpose of our simple example is to show how py{\textsc{SecDec}}{} can be used to calculate the master integrals occurring in the amplitude ${\cal M}^{++--}$. \subsection{Comparison of timings} \begin{table}[htb] \caption{Comparison of timings (algebraic, numerical) using py{\textsc{SecDec}}{}, {\textsc{SecDec}~$3$}{} and \textsc{Fiesta}~$4.1${}.} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline & py{\textsc{SecDec}}{} time\,(s) & {\textsc{SecDec}~$3$}{} time\,(s) & \textsc{Fiesta}~$4.1${} time\,(s) \\ \hline \texttt{triangle2L} & (40.5, 9.6) & (56.9, 28.5) & (211.4, 10.8) \\ \texttt{triangle3L} & (110.1, 0.5) & (131.6, 1.5) & (48.9, 2.5) \\ \texttt{elliptic2L\_euclidean} & (8.2, 0.2) & (4.2, 0.1) & (4.9, 0.04) \\ \texttt{elliptic2L\_physical} & (21.5, 1.8) & (26.9, 4.5) & (115.3, 4.4) \\ \texttt{box2L\_invprop} & (345.7, 2.8) & (150.4, 6.3) & (21.5, 8.8) \\ \hline \end{tabular} \end{center} \label{tab:timings} \end{table} We compare the timings for several of the above mentioned examples between py{\textsc{SecDec}}{}, {\textsc{SecDec}~$3$}{} and \textsc{Fiesta}~$4.1${}, where we distinguish between the time needed to perform the algebraic and the numeric part. In Tab.~\ref{tab:timings}, the compilation of the generated {\it C++} functions is included in the algebraic part, because it needs to be done only once. 
The timings for the numerical part are the wall clock times for the evaluation of the {\it C++} functions. The timings were taken on a four-core (eight hyper-thread) Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz machine. We set the parameter \texttt{number\_of\_presamples} in py{\textsc{SecDec}}{}, \texttt{optlamevals} in {\textsc{SecDec}~$3$}{} and \texttt{LambdaIterations} in \textsc{Fiesta}~$4.1${}, which controls the number of samples used to optimise the contour deformation, to the \textsc{Fiesta}~$4.1${} default of \texttt{1000}. The default decomposition strategy of each tool was used, \texttt{STRATEGY\_S} for \textsc{Fiesta}~$4.1${} and \texttt{X} for py{\textsc{SecDec}}{} and {\textsc{SecDec}~$3$}{}. The integrands were summed before integrating in the following way: setting \texttt{together=True} in py{\textsc{SecDec}}{} and \texttt{togetherflag=1} in {\textsc{SecDec}~$3$}{} sums all integrands contributing to a certain pole coefficient before integrating. \texttt{SeparateTerms=False} in \textsc{Fiesta}~$4.1${} sums the integrands in each sector which appears after pole resolution before integrating. For the examples considered on our test platform these settings were found to be optimal for all three tools. The integration is performed using the default settings of py{\textsc{SecDec}}{} and the same settings in {\textsc{SecDec}~$3$}{} and \textsc{Fiesta}~$4.1${}. In particular, this implies a rather low desired relative accuracy of $10^{-2}$. The numerical integration times in py{\textsc{SecDec}}{} are generally reduced with respect to {\textsc{SecDec}~$3$}{}, which is mostly due to a better optimization during the algebraic part, and partly also due to a more efficient deformation of the integration contour in py{\textsc{SecDec}}{}. For the test cases considered we found that \textsc{Fiesta}~$4.1${} is the fastest to perform the algebraic (decomposition) step when contour deformation is not required. 
We would like to stress that although we endeavoured to keep all relevant settings identical across the tools we are not experts in the use of \textsc{Fiesta}~$4.1${} and we expect that it is possible to obtain better timings by adjusting settings away from their default values. Furthermore, which tool is fastest strongly depends on the case considered and whether one prefers faster decomposition or numerical evaluation of the resulting functions. \subsection{Algebraic part} The algebraic part consists of several modules that provide functions and classes for the purpose of the generation of the integrand, performing the sector decomposition, contour deformation, subtraction and expansion in the regularization parameter(s). The algebra modules contained in py{\textsc{SecDec}}{} use both {\tt sympy} and {\tt numpy}, but also contain the implementation of a computer algebra system tailored to the sector decomposition purposes, in order to be competitive in speed with the previous implementation in Mathematica. For example, since sector decomposition is an algorithm that acts on polynomials, a key class contained in {\tt pySecDec.algebra} is the class {\tt Polynomial}. Acting on general polynomials, py{\textsc{SecDec}}{} is not limited to loop integrals. It can take as an integrand any product of polynomials, raised to some power, provided that the endpoint singularities are regulated by regularisation parameters, and that integrable singularities away from the integration endpoints can be dealt with by a deformation of the integration path into the complex plane. We should point out that py{\textsc{SecDec}}{} can perform the subtraction and expansion in several regulators, see the description of example \ref{subsec:examples:scet}. For loop integrals, the program contains the module {\tt pySecDec.loop\_integral}. 
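A basic decomposition step splits the integration domain along $x\geq y$ and $y\geq x$ and remaps each sector to the unit square, $\int_0^1\!\int_0^1 f(x,y)\,\mathrm{d}x\,\mathrm{d}y=\int_0^1\!\int_0^1 x f(x,xt)\,\mathrm{d}x\,\mathrm{d}t+\int_0^1\!\int_0^1 y f(yt,y)\,\mathrm{d}y\,\mathrm{d}t$, so that overlapping singularities at the origin factorize into monomial prefactors. A minimal numerical sketch of this step (illustrative only, not py{\textsc{SecDec}}{} code):

```python
import math

# Sector decomposition of  int_0^1 int_0^1 dx dy / (x+y),
# which is singular (but integrable) at the origin. Splitting along
# x >= y (substitute y = x*t) and y >= x (substitute x = y*t) and
# remapping each sector to the unit square gives the regular integrand
# x * f(x, x*t) = 1/(1+t).
def midpoint2d(g, n=400):
    """Product midpoint-rule quadrature on the unit square."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * h
        for j in range(n):
            total += g(u, (j + 0.5) * h)
    return total * h * h

f = lambda x, y: 1.0 / (x + y)
sector1 = midpoint2d(lambda x, t: x * f(x, x * t))   # region x >= y
sector2 = midpoint2d(lambda y, t: y * f(y * t, y))   # region y >= x

# Analytically, int_0^1 int_0^1 dx dy / (x+y) = 2 log 2.
assert abs(sector1 + sector2 - 2.0 * math.log(2.0)) < 1e-5
```

For $f(x,y)=1/(x+y)$ each remapped sector integrand is simply $1/(1+t)$: the overlapping singularity at the origin has been factorized away, which is precisely the purpose of the decomposition.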
There are two ways to define a loop integral in py{\textsc{SecDec}}{}: (a) from the list of propagators, and (b) from the adjacency list defining the graph, which is roughly the list of labels of vertices connected by propagators. Examples of both alternatives to define loop integrals are given in Sections \ref{subsec:usage} and \ref{sec:examples}. The availability of {\texttt{python}}{} functions which can be called individually by the user allows for a very flexible usage of py{\textsc{SecDec}}{}. The {\tt html} documentation describes all the available modules and functions in detail, and also contains a ``quick search'' option. \subsubsection{Sector decomposition strategies} When using \texttt{loop\_package}, which facilitates the definition and calculation of Feynman integrals, one can choose between the following sector decomposition strategies: \begin{itemize} \item \texttt{iterative}: Default iterative method~\cite{Binoth:2000ps,Heinrich:2008si}. \item \texttt{geometric}: Algorithm based on algebraic geometry (\texttt{G2} in {\textsc{SecDec}~$3$}{}). Details can be found in Refs.~\cite{Borowka:2015mxa,Schlenk:2016cwf,Schlenk:2016a}. \item \texttt{geometric\_ku}: Original algorithm based on algebraic geometry introduced by Kaneko and Ueda (\texttt{G1} in {\textsc{SecDec}~$3$}{})~\cite{Kaneko:2009qx,Kaneko:2010kj}. \end{itemize} The recommended sector decomposition algorithm based on algebraic geometry is \texttt{geometric}, since it improves on the original geometric algorithm. For general parametric integrals there are additionally the following options which can be set in \texttt{make\_package}: \begin{itemize} \item \texttt{iterative\_no\_primary}: Iterative method without primary sector decomposition. \item \texttt{geometric\_no\_primary}: Geometric decomposition according to Kaneko and Ueda without primary sector decomposition.
\end{itemize} \subsection{Producing C++ code and numerical results} The module {\tt pySecDec.code\_writer} is the main module to create a {\it C++} library. It contains the function {\tt pySecDec.code\_writer.make\_package} which can decompose, subtract and expand any polynomial expression and return the produced set of finite functions as a {\it C++} package, where {\texttt{FORM}}{} has been employed to write out optimised expressions. Simple examples of how to use {\tt make\_package} are described in Sections \ref{subsec:examples:hypergeo} and \ref{subsec:examples:scet}. A more advanced example is given in Section \ref{subsec:examples:dummy}, which shows how the user can define any number of additional finite functions. These functions need not be polynomial. Furthermore, the user is free to define arbitrary {\it C++} code (for example, a jet clustering routine) to be called by the integrand during the numerical integration. Templates for such user-defined functions will be created automatically if the field {\tt functions}, where the names of such functions are given, is non-empty in {\tt make\_package}. If a loop integral is to be calculated, the function {\tt loop\_package} can be used; it contains methods specific to loop integrals, for example the construction of the Symanzik polynomials ${\cal F}$ and ${\cal U}$ from the list of propagators or from the adjacency list of a graph. Examples of how to use {\tt loop\_package} are given in Sections \ref{subsec:examples:one-loop} to \ref{subsec:examples:elliptic}. Both {\tt make\_package} and {\tt loop\_package} will create a directory (with the name given by the user in the field {\tt name}) which contains the main {\it C++} integration files and a Makefile to generate the {\it C++} source code and the libraries (static and dynamic) containing the integrand functions.
The library can be linked against a user-specific code, or it can be called via a {\texttt{python}}{} interface, as described in Section \ref{subsec:examples:one-loop}. \subsection{New features} In addition to the complete re-structuring, which opens up new possibilities of usage, there are various new features compared to {\textsc{SecDec}~$3$}{}: \begin{itemize} \item The functions can have any number of different regulators, not only the dimensional regulator $\epsilon$. \item The treatment of numerators is much more flexible. Numerators can be defined in terms of contracted Lorentz vectors or inverse propagators or a combination of both. \item The distinction between ``general functions" and ``loop integrands" is removed in the sense that all features are available for both general polynomial functions and loop integrals (except those which only make sense in the loop context). \item The inclusion of additional ``user-defined" functions which do not enter the decomposition has been facilitated and extended. \item The treatment of poles which are higher than logarithmic has been improved. \item A procedure has been implemented to detect and remap singularities at $x_i=1$ which cannot be cured by contour deformation. \item A symmetry finder~\cite{2013arXiv1301.1493M} has been added which can detect isomorphisms between sectors. \item Diagrams can be drawn (optionally, based on {\tt neato}~\cite{graphviz}; the program will however run normally if {\tt neato} is not installed). \item The evaluation of multiple integrals or even amplitudes is now possible, using the generated {\it C++} library, as shown in Example \ref{sec:examples:4gamma}. 
\end{itemize} \section{Introduction} \label{sec:intro} \input{intro} \section{Description of py{\textsc{SecDec}}{}} \label{sec:program} \input{program} \section{Installation and usage} \label{sec:usage} \input{usage} \section{Examples} \label{sec:examples} \input{examples} \vspace*{3mm} \section{Conclusions} \label{sec:conclusion} We have presented a new version of the program {\textsc{SecDec}}{}, called py{\textsc{SecDec}}{}, which is publicly available at {\tt http://secdec.hepforge.org}. The program py{\textsc{SecDec}}{} is entirely based on open source software ({\texttt{python}}{}, {\texttt{FORM}}{}, {\sc Cuba}) and can be used in various contexts. The algebraic part can isolate poles in any number of regulators from general polynomial expressions, of which Feynman integrals are a special case. For the numerical part, a library of {\it C++} functions is created, which allows very flexible usage, and in general outperforms {\textsc{SecDec}~$3$}{} in numerical evaluation times. In particular, it extends the functionality of the program from the evaluation of individual (multi-)loop integrals to the evaluation of larger expressions containing multiple analytically unknown integrals, as for example two-loop amplitudes. Such an approach has already been used successfully for the two-loop integrals entering the full NLO corrections to Higgs boson pair production. Therefore, py{\textsc{SecDec}}{} can open the door to the evaluation of higher order corrections to multi-scale processes which are not accessible by semi-analytical approaches. \section*{Acknowledgements} We would like to thank Viktor Papara, Rudi Rahn and Andries Waelkens for useful comments and Hjalte Frellesvig and Francesco Moriello for providing numbers for comparison. This research was supported in part by the Research Executive Agency (REA) of the European Union under the Grant Agreement PITN-GA2012316704 (HiggsTools) and the ERC Advanced Grant MC@NNLO (340983). S.
Borowka gratefully acknowledges financial support by the ERC Starting Grant ``MathAm'' (39568). \renewcommand \thesection{\Alph{section}} \renewcommand{\theequation}{\Alph{section}.\arabic{equation}} \setcounter{section}{0} \setcounter{equation}{0} \input{appendix} \bibliographystyle{JHEP} \subsection{Installation} The program can be downloaded from {\tt http://secdec.hepforge.org}. It relies on {\texttt{python}}{} and runs with versions 2.7 and 3. Furthermore, the packages {\tt numpy} ({\tt http://www.numpy.org}) and {\tt sympy} ({\tt http://www.sympy.org}) are required. The former is a package for scientific computing with {\texttt{python}}{}, the latter is a {\texttt{python}}{} library for symbolic mathematics. To install py{\textsc{SecDec}}{}, perform the following steps {\tt tar -xf pySecDec-<version>.tar.gz \\ cd pySecDec-<version> \\ make \\ <copy the highlighted output lines into your .bashrc> } The \texttt{make} command will automatically build further dependencies in addition to py{\textsc{SecDec}}{} itself. These are the {\sc Cuba} library~\cite{Hahn:2004fe,Hahn:2014fua}, needed for multi-dimensional numerical integration, {\texttt{FORM}}{}~\cite{Vermaseren:2000nd,Kuipers:2013pba}, used for the algebraic manipulation of expressions and to produce optimized {\it C++} code, and {\sc Nauty}~\cite{2013arXiv1301.1493M}, used to find sector symmetries, thereby reducing the total number of sectors to be integrated. The lines to be copied into the \texttt{.bashrc} define environment variables which make sure that py{\textsc{SecDec}}{} is found by {\texttt{python}}{} and finds its aforementioned dependencies. By shipping external dependencies with our program, we aim to make the installation as easy as possible for the user. The py{\textsc{SecDec}}{} user is strongly encouraged to cite the additional dependencies when using the program.
\subsubsection{Geometric sector decomposition strategies} The program {\sc Normaliz}~\cite{2012arXiv1206.1916B,Normaliz} is needed for the geometric decomposition strategies {\tt geometric} and {\tt geometric\_ku}. In py{\textsc{SecDec}}{} version 1.0, the versions 3.0.0, 3.1.0 and 3.1.1 of {\sc Normaliz} are known to work. Precompiled executables for different systems can be downloaded from \\ {\tt https://www.normaliz.uni-osnabrueck.de}. We recommend exporting its path to the environment of the terminal such that the {\it normaliz} executable is always found. Alternatively, the path can be passed directly to the functions that call it; see the manual for more information. The strategy {\tt iterative} can be used without having {\sc Normaliz} installed. \subsection{Usage} \label{subsec:usage} Due to its highly modular structure, the modules of the program py{\textsc{SecDec}}{} can be combined in such a way that they are completely tailored to the user's needs. The individual building blocks are described in detail in the manual. The documentation is shipped with the tarball in {\tt pdf} ({\tt doc/pySecDec.pdf}) and {\tt html} ({\tt doc/html/index.html}) format. We provide {\texttt{python}}{} scripts for the two main application directions of the program. One is to use py{\textsc{SecDec}}{} in a ``standalone'' mode to obtain numerical results for individual integrals. This corresponds to a large extent to the way previous {\textsc{SecDec}}{} versions were used. The other allows the generation of a library which can be linked to the calculation of amplitudes or other expressions, to evaluate the integrals contained in these expressions. The different use cases are explained in detailed examples in Section~\ref{sec:examples}. To get started, we recommend reading the section ``getting started'' in the online documentation.
The basic steps can be summarized as follows: \begin{enumerate} \item Write or edit a {\texttt{python}}{} script to define the integral, the replacement rules for the kinematic invariants, the requested order in the regulator and some other options (see e.g. the one-loop box example {\tt box1L/generate\_box1L.py}). \item Run the script using {\texttt{python}}{}. This will generate a subdirectory according to the {\tt name} specified in the script. \item Type {\tt make -C <name>}, where {\tt <name>} is your chosen name. This will create the {\it C++} libraries. \item Write or edit a {\texttt{python}}{} script to perform the numerical integration using the {\texttt{python}}{} interface (see e.g. {\tt box1L/integrate\_box1L.py}). \end{enumerate} Further usage options such as looping over multiple kinematic points are described in the documentation as well as in Section~\ref{subsec:examples:one-loop}. \vspace*{5mm} The algebra package can be used for symbolic manipulations on integrals. This can be of particular interest when dealing with non-standard loop integrals, or if the user would like to intervene at intermediate stages of the algebraic part. For example, the Symanzik polynomials ${\cal F}$ and ${\cal U}$ resulting from the list of propagators can be accessed as follows (example one-loop bubble): {\tt >>> from pySecDec.loop\_integral import * }\\ {\tt >>> propagators = ['k**2', '(k - p)**2']}\\ {\tt >>> loop\_momenta = ['k']}\\ {\tt >>> li = LoopIntegralFromPropagators(propagators,loop\_momenta) } Then the functions ${\cal U}$ and ${\cal F}$ including their powers can be called as: {\tt >>> li.exponentiated\_U\\ ( + x0 + x1)**(2*eps - 2)\\ >>> li.exponentiated\_F\\ ( + (-p**2)*x0*x1)**(-eps) } Numerators can be included in a much more flexible way than in {\textsc{SecDec}~$3$}{}, see the example in Section \ref{subsec:examples:numerator} and the manual.
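The output of {\tt li.exponentiated\_U} and {\tt li.exponentiated\_F} shown above can be cross-checked numerically by completing the square in the loop momentum. The following toy snippet (not py{\textsc{SecDec}}{} code; one-dimensional ``momenta'' are used so that dot products reduce to ordinary products) verifies that ${\cal U}\,D - ({\cal U}k + Q)^2 = -{\cal F}$ with ${\cal F} = (-p^2)\,x_0 x_1$, independently of the loop momentum $k$:

```python
# Toy numerical cross-check of the Symanzik construction for the
# massless one-loop bubble (NOT pySecDec code).  One-dimensional
# "momenta" are used so that dot products are plain products.

def symanzik_bubble(x0, x1, p):
    """U and F as printed by li.exponentiated_U / li.exponentiated_F."""
    U = x0 + x1
    F = -(p * p) * x0 * x1
    return U, F

def completed_square_check(x0, x1, p, k):
    """For D = x0*k^2 + x1*(k-p)^2, completing the square gives
    U*D - (U*k + Q)^2 = -F with Q = sum_i x_i r_i (here r_0 = 0,
    r_1 = -p); the result is independent of k."""
    D = x0 * k * k + x1 * (k - p) * (k - p)
    U, F = symanzik_bubble(x0, x1, p)
    Q = -x1 * p
    return U * D - (U * k + Q) ** 2, -F
```

Evaluating both return values at a few numeric points for the Feynman parameters and for different $k$ confirms the factorised form quoted in the interactive session above.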
An example where ${\cal F}$ and ${\cal U}$ are calculated from the adjacency list defining a graph looks as follows (for a one-loop triangle with two massive propagators): {\tt >>> from pySecDec.loop\_integral import *\\ >>> internal\_lines = [['0',[1,2]], ['m',[2,3]], ['m',[3,1]]]\\ >>> external\_lines = [['p1',1],['p2',2],['-p12',3]]\\ >>> li = LoopIntegralFromGraph(internal\_lines, external\_lines) } Finally, we should point out that the conventions for additional prefactors defined by the user have been changed between {\textsc{SecDec}~$3$}{} and py{\textsc{SecDec}}{}. The prefactor will now be multiplied automatically to the result. For example, if the user defines {\tt additional\_prefactor=}$\Gamma(3-2\epsilon)$, this prefactor will be expanded in $\epsilon$ and included in the numerical result returned by py{\textsc{SecDec}}{}.
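As a small illustration of what ``expanded in $\epsilon$'' means (plain {\texttt{python}}{}, not py{\textsc{SecDec}}{} code): from $\Gamma'(3)=\Gamma(3)\,\psi(3)$ and $\psi(3)=3/2-\gamma_E$ one finds $\Gamma(3-2\epsilon) = 2 + (4\gamma_E - 6)\,\epsilon + {\cal O}(\epsilon^2)$, which can be checked numerically:

```python
import math

# Toy numerical check (NOT pySecDec code) of the epsilon expansion of
# the prefactor Gamma(3 - 2*eps):
#     Gamma(3 - 2*eps) = 2 + (4*gamma_E - 6)*eps + O(eps^2),
# which follows from Gamma'(3) = Gamma(3)*psi(3), psi(3) = 3/2 - gamma_E.

GAMMA_E = 0.5772156649015329   # Euler-Mascheroni constant

def prefactor(eps):
    return math.gamma(3.0 - 2.0 * eps)

def prefactor_expansion(eps):
    return 2.0 + (4.0 * GAMMA_E - 6.0) * eps
```

For a small value such as $\epsilon = 10^{-5}$ the difference between the exact prefactor and its truncated expansion is of order $\epsilon^2$, far below the linear term.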
\section{Introduction} \label{sec:motivation} With the appearance of more and better near-IR instruments, studies of the stellar populations in galaxies in different environments and at various redshifts, exploring the rest-frame wavelength region between 1 and 2.5~$\mu$m, have become more frequent \citep[e.g.][]{silva2008, lancon08,hyvonen09,esther09,cesetti09,martins10,riffel11b}. Thus a reliable framework to interpret these observations is needed. The comparison of observations with stellar population models is an approach that has proved successful in the optical wavelength regime over the last decades. However, the near-IR range of the evolutionary population synthesis (EPS) models still lacks a proper empirical calibration. Recent advances in theoretical as well as empirical calibration include in particular work on the asymptotic giant branch (AGB) evolutionary phase, whose contribution is crucial to the near-IR part of the models \citep[e.g.][]{marigo08,girardi10,maraston98,maraston05}. In \citet[][hereafter Paper~I]{az10a} we described our efforts to create a near-IR library of integrated spectra of globular clusters to serve as a calibrator and test bench for existing and future EPS models \citep[e.g.][]{bruz03,maraston05,con2009,maraston11}. For this pilot project we chose six globular clusters in the LMC, for which detailed information about the age and chemical composition exists, based on their resolved light. Our sample consists of three old ($>$~10~Gyr) and metal poor ([Fe/H]~$\simeq -1.4$) clusters, namely NGC\,1754, NGC\,2005, and NGC\,2019, as well as three intermediate age $(1 - 2$\,Gyr) and more metal rich ([Fe/H]~$\simeq-0.4$) clusters: NGC\,1806, NGC\,2162, and NGC\,2173 (see Table~\ref{tab:lmc_gc}).
\begin{table*}[tdp] \caption{\label{tab:lmc_gc}Target globular clusters in the LMC.} \centering \begin{tabular}{c c c c c c c c c c c} \hline \hline Name & Age (Gyr) & [Fe/H] & $L_{mosaic}$ ($L_{\sun}$) & $r_{h}$ & $r_{t}$ & S/N$_{J}$ & S/N$_{H}$\\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8)\\ \hline NGC\,1806 & 1.1\tablefootmark{1}, 1.7\tablefootmark{8}, 1.5\tablefootmark{9}, 1.9\tablefootmark{2}, 0.5\tablefootmark{e} & -0.23\tablefootmark{b}, -0.71\tablefootmark{e} & $3.4\times10^{4}$ & -- & -- & $>$60\tablefootmark{*} & $>$45\tablefootmark{*}\\ NGC\,2162 & 1.1\tablefootmark{1}, 1.3\tablefootmark{3}, 2\tablefootmark{2}, 0.9\tablefootmark{4}, 1.25\tablefootmark{5} & -0.23\tablefootmark{b}, -0.46\tablefootmark{f} & $0.5\times10^{4}$ & 21\farcs37 & 197\farcs2 & $>$45\tablefootmark{*} & $>$40\tablefootmark{*}\\ NGC\,2173 & 2\tablefootmark{1}, 2.1\tablefootmark{3}, 1.6\tablefootmark{6}, 4.1\tablefootmark{2}, 1.5\tablefootmark{7}, 1.1\tablefootmark{4}, 1.6\tablefootmark{5} & -0.24\tablefootmark{b}, -0.42\tablefootmark{f}, -0.51\tablefootmark{g} & $0.9\times10^{4}$ & 34\farcs35 & 393\farcs5 & $>$80\tablefootmark{*} & $>$45\tablefootmark{*}\\ NGC\,2019 & 10\tablefootmark{1}, 16\tablefootmark{10}, 13.3\tablefootmark{11}, 16.3\tablefootmark{a}, 17.8\tablefootmark{b} & -1.23\tablefootmark{a}, -1.18\tablefootmark{b}, -1.37\tablefootmark{c}, -1.10\tablefootmark{d} & $6.7\times10^{4}$ & 9\farcs72 & 121\farcs6 & 95 & 150\\ NGC\,2005 & 10\tablefootmark{1}, 6.3\tablefootmark{10}, 16\tablefootmark{11}, 15.5\tablefootmark{a}, 16.6\tablefootmark{b} & -1.35\tablefootmark{a}, -1.92\tablefootmark{b}, -1.80\tablefootmark{c}, -1.33\tablefootmark{d} & $3.9\times10^{4}$ & 8\farcs65 & 98\farcs8 & 140 & -- \\ NGC\,1754 & 10\tablefootmark{1}, 7\tablefootmark{10}, 14\tablefootmark{11}, 15.6\tablefootmark{a,b} & -1.42\tablefootmark{a}, -1.54\tablefootmark{b} & $2.7\times10^{4}$ & 11\farcs2 & 142\farcs9 & 80 & --\\ \hline \hline \end{tabular} \tablefoot{ (1) Cluster name, (2) age of the 
cluster in Gyr, derived using different methods: \tablefoottext{1} {\citet{frogel90} -- based on the SWB type;} \tablefoottext{2} {\citet{leonardi03} -- integrated spectroscopy;} \tablefoottext{3} {\citet{geisler97},} \tablefoottext{4} {\citet{girardi95},} \tablefoottext{5} {\citet{kerber07},} \tablefoottext{6}{\citet{bertelli03},} \tablefoottext{7}{\citet{woo03},} \tablefoottext{8}{\citet{mackey08},} \tablefoottext{9}{\citet{milone09} -- all CMDs,} \tablefoottext{10}{\citet{beasley02} -- integrated spectroscopy (H$\beta$-Mg$b$),} \tablefoottext{11}{\citet{beasley02} -- integrated spectroscopy (H$\gamma$-$\langle$Fe$\rangle$),} (3) [Fe/H] derived using different methods: \tablefoottext{a}{\citet{olsen98} -- slope of the RGB,} \tablefoottext{b}{\citet{olszewski91} -- low-resolution Ca\,II triplet,} \tablefoottext{c}{\citet{johnson06} -- high-resolution Fe\,I,} \tablefoottext{d}{\citet{johnson06} -- high-resolution Fe\,II,} \tablefoottext{e}{\citet{dirsch00} -- Str\"{o}mgren photometry,} \tablefoottext{f}{\citet{groch06} -- low-resolution Ca\,II triplet,} \tablefoottext{g}{\citet{muc08} -- high-resolution spectroscopy} (4) sampled bolometric luminosity within the clusters central mosaics in $L_{\sun}$, (5) half-light radius and (6) tidal radius of the King-model cluster fit from the catalogue of \citet{mclaughlin05}, (7) and (8) S/N of the integrated spectra in $J$- and $H$-band, respectively; \tablefoottext{*}{lower threshold, see Sect.~\ref{sec:spec_int} for an explanation}. } \end{table*} In Paper~I\/, using the integrated $K$-band spectra of the globular clusters, we demonstrated the feasibility of our observational approach, as well as discussed some discrepancies that arise between recent EPS models and the data. We argued that the main reason for these discrepancies is the incompleteness of the current stellar spectral libraries of asymptotic giant branch (AGB) stars that are used to create the models. 
Also, we illustrated how the presence of luminous carbon-rich AGB stars significantly changes the observed spectrophotometric properties in the $K$-band, especially the $^{12}$CO\,(2--0)\/ 2.29 $\mu$m feature, as predicted by \citet{maraston05}, despite the observational discrepancies. This paper complements Paper~I\/ by presenting results based on the integrated $J$- and $H$-band spectra of the same globular clusters. To date there are very few studies of the spectral properties of globular clusters in this wavelength regime \citep[e.g.][]{riffel11a}. The paper is organised as follows: in Sect.~\ref{sec:obs} we briefly recall our observing strategy and data reduction methods. In Sect.~\ref{sec:sed} we compare the observed overall spectral energy distributions of the globular clusters to stellar population models. Sect.~\ref{sec:c2} is devoted to the C$_{2}$\/ absorption feature in the $H$-band and its behaviour in the spectra of globular clusters. In Sect.~\ref{sec:discussion} we discuss potential reasons for the disagreement between model predictions and our observations, based on stellar libraries. Finally, in Sect.~\ref{sec:conclusions} we present our concluding remarks. \section{Observations and data reduction} \label{sec:obs} \subsection{Observations} Our sample of globular clusters, whose basic properties are listed in Table~\ref{tab:lmc_gc}, was observed with VLT/SINFONI \citep{eis03,bonnet04} in poor seeing conditions without adaptive optics in the period October -- December 2006 (Prog. ID 078.B-0205, PI: Kuntschner). Detailed descriptions of the observing strategy, target selection and data reduction are given in Paper~I\/, where we presented the $K$-band spectra from our project. In the present paper we explore the data obtained in the $J$- and $H$-bands, where minor differences in the data reduction exist.
Briefly, in order to sample at least the half-light radius of the clusters, we observed a $3\times3$ mosaic of the largest field-of-view that SINFONI offers ($8\arcsec \times 8\arcsec$). Thus, the resulting coverage was $24\arcsec \times 24\arcsec$, with a spatial sampling of 0\farcs125 per pixel. For the observations we used the standard near-IR nodding technique. We observed each mosaic tile three times with individual integration times of 50\,s and dithering of 0\farcs25 to ensure better bad pixel rejection. We took sky exposures between each mosaic tile. Finally, within each cluster's tidal radius, we selected up to eight bright stars with colours and magnitudes typical for the red giant branch (RGB) and AGB stars in the LMC (see Table 3 in Paper~I), which were observed in addition to the cluster mosaics in order to better sample the short-lived AGB phase. The integration times for these stars were 10\,s. After each science target and at a similar airmass, we observed an A-B dwarf star for telluric absorption correction (Table 4 in Paper~I). \begin{figure*} \resizebox{\hsize}{!}{\includegraphics[angle=0]{./Lyubenova_fig01.eps}} \caption{\label{fig:jh_spec} {\it Left panel:} $J$-band, integrated spectra of the six LMC globular clusters. The shaded areas indicate the regions with contamination by strong OH and O$_{2}$ sky line residuals. {\it Right panel:} $H$-band, integrated spectra of four of the LMC clusters. Due to unstable atmospheric conditions, the $H$-band spectra of NGC\,1754 and NGC\,2005 are heavily contaminated by sky line residuals and thus exhibit a very low S/N ratio and are not shown. Each spectrum is normalised to its median value. Line identifications are based on the stellar spectral atlases by \citet{lancon92}, \citet{wallace00} and \citet{rayner09}.
} \end{figure*} \subsection{Data reduction} \label{sec:data_red} We used the SINFONI pipeline version 2.0.5 to perform the basic data reduction on the three exposures per mosaic tile plus two bracketing sky exposures. In brief, the pipeline extracts the raw data, applies distortion, bad pixel and flat-field corrections, wavelength calibration, and stores the combined sky-subtracted spectra in a 3-dimensional data cube. Then on each resulting data cube we ran the {\tt lac3d} code \citep{davies10}, whose purpose is to detect and correct residual bad pixels that are identified using a 3D Laplacian edge detection method. This code not only removes residual bad pixels, but also produces an error data cube for each input science data cube, which is very useful, because the SINFONI pipeline does not provide an estimate of the error propagation during data reduction. The derived signal-to-noise ratio, using these error spectra, is in agreement with the signal-to-noise ratio derived using an empirical method described by \citet{stoehr07}. In the $J$-band we had to switch off the pipeline option to correct for sky line residuals based on the algorithm described in \citet[][i.e. {\tt --objnod-sky\_cor = FALSE}]{davies07}. This was needed due to the complicated pattern of overlapping OH and O$_{2}$ sky emission lines in this wavelength region \citep[see e.g.][]{rousselot00}, which leads to their over- or under-subtraction when applying the sky residual correction. Therefore, we applied simple sky subtraction and indicated the regions where the sky lines are the strongest in Fig.~\ref{fig:jh_spec} (i.e. 1.21\,--\,1.236 $\mu$m and 1.26\,--\,1.295 $\mu$m). We reduced the telluric star data in the same way as the science frames. Then for each telluric star we extracted a one-dimensional spectrum, removed the hydrogen absorption lines fitted with a Lorentzian profile, and divided the star spectrum by a black body spectrum with the same temperature as the star. 
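The blackbody division used in the telluric correction can be sketched as follows. This is a toy snippet with an assumed effective temperature for the standard star; the actual reduction operates on full data cubes rather than one-dimensional lists:

```python
import math

# Toy sketch of the blackbody division used in the telluric correction
# (the effective temperature of the standard star is an assumed input
# here; the actual reduction works on full data cubes).

H_PLANCK = 6.62607015e-34   # Planck constant [J s]
C_LIGHT = 2.99792458e8      # speed of light [m/s]
K_BOLTZ = 1.380649e-23      # Boltzmann constant [J/K]

def planck(wavelength_m, temperature_k):
    """Planck spectral radiance B_lambda; only the spectral shape
    matters for the telluric correction, not the absolute scale."""
    x = H_PLANCK * C_LIGHT / (wavelength_m * K_BOLTZ * temperature_k)
    return (2.0 * H_PLANCK * C_LIGHT**2 / wavelength_m**5) / (math.exp(x) - 1.0)

def telluric_correction(wave_um, star_flux, teff):
    """Divide the (hydrogen-line-removed) standard-star spectrum by a
    blackbody of the star's temperature, as described in the text."""
    bb = [planck(w * 1.0e-6, teff) for w in wave_um]
    return [f / b for f, b in zip(star_flux, bb)]
```

Dividing out the blackbody continuum leaves, to a good approximation, only the telluric absorption features of the atmosphere, which are then divided out of the science data.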
The last step in preparing the telluric correction spectrum was to apply small shifts ($<$0.05 pixels) and scalings to minimise the residuals of the telluric features. Finally, we divided each science data cube by the corresponding telluric spectrum. In this way we also obtained an approximate relative flux calibration. An absolute flux calibration was not possible due to non-photometric conditions. The telluric star HD\,44533, used for the telluric correction of the observations for NGC\,2019 and its surrounding stars, had an unusual shape of the hydrogen lines. It seems that this star shows some hydrogen emission together with absorption. Thus, to remove the hydrogen lines, we interpolated linearly over the affected regions, which do not overlap with any indices used in this study. However, we add a word of caution to treat these regions with care in case the spectra of NGC\,2019 are used in the future for other purposes. \subsection{Spectra integration} \label{sec:spec_int} To obtain the final, integrated $J$- and $H$-band spectra for the six globular clusters, we estimated the noise level in each reconstructed image from the central mosaic data cubes. We assumed that this noise is due to residuals after the sky background correction. Thus, we computed the median residual sky noise level and its standard deviation, after clipping all data points with intensities of more than 3$\sigma$. We then selected all spaxels that have an intensity more than three times the standard deviation above the median residual sky noise level. We summed them and normalised the resulting spectrum to 1\,s of exposure time. The same approach was used to obtain the spectra for the additional bright RGB and AGB stars.
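The spaxel selection described above can be sketched as follows. This is toy code acting on a one-dimensional list of spaxel intensities rather than a reconstructed image, and is not the actual reduction script; the clipping and selection thresholds mirror the text:

```python
import statistics

# Toy sketch of the spaxel selection: sigma-clip the residual sky
# level, then keep spaxels significantly above it (not the actual
# reduction script, which operates on reconstructed images).

def integrate_spaxels(intensities):
    # First pass: clip points more than 3 sigma from the median.
    med = statistics.median(intensities)
    std = statistics.pstdev(intensities)
    clipped = [v for v in intensities if abs(v - med) <= 3.0 * std]
    # Residual sky statistics from the clipped sample.
    sky_med = statistics.median(clipped)
    sky_std = statistics.pstdev(clipped)
    # Keep spaxels more than three standard deviations above the
    # median residual sky noise level.
    threshold = sky_med + 3.0 * sky_std
    return [v for v in intensities if v > threshold], threshold
```

The retained spaxels are then summed and the resulting spectrum normalised to 1\,s of exposure time, as described in the text.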
The final step in preparing the luminosity-weighted, integrated spectra for the six globular clusters was to add the additionally observed bright stars outside the central mosaics (for a detailed discussion of cluster membership and which stars were included, see Sect.~4 in Paper~I\/ and Table~\ref{tab:bs_observations}). Fig.~\ref{fig:jh_spec} shows the final, integrated $J$- and $H$-band spectra for the six LMC globular clusters. The spectra of the three intermediate age clusters are shown at the top (NGC\,1806, NGC\,2162, and NGC\,2173). These spectra display numerous absorption features due to carbon-based molecules, like CO, C$_{2}$, and CN, which make them appear very noisy, while in fact they are not. The spectra of the old and metal poor clusters (NGC\,2019, NGC\,2005, and NGC\,1754) are shown at the bottom. These are dominated by late-type giant stars, where the metallic lines are more prominent. To identify the spectral absorption lines displayed in Fig.~\ref{fig:jh_spec}, we used the stellar spectral atlases of \citet{lancon92}, \citet{wallace00}, and \citet{rayner09}. The weather conditions during the observations of the $H$-band spectra for NGC\,1754 and NGC\,2005 were not favourable, which resulted in a signal-to-noise ratio that was too low and contamination by multiple sky line residuals. Thus we decided to exclude these two spectra from our analysis. The spectral resolution of our spectra, as measured from arc lamp frames, is 6.7\,\AA\/ and 6.6\,\AA\/ (FWHM) in the $J$- and $H$-band, respectively. Finally, we estimated the S/N ratio of each integrated globular cluster spectrum using the method of \citet{stoehr07} and listed the values in Table~\ref{tab:lmc_gc}. This method allows us to compute the S/N from the spectrum itself. However, due to the numerous absorption features mimicking noise in the spectra of globular clusters containing carbon-rich AGB stars, i.e.
the three intermediate age clusters marked with an asterisk in Table~\ref{tab:lmc_gc}, we are able to give only a lower threshold. Based on the integration times and luminosities of the clusters, we conclude that the quality of these intermediate age spectra is as good as, if not better than, that of the old globular clusters. \section{The overall near-IR spectral energy distribution} \label{sec:sed} \begin{figure*} \resizebox{\hsize}{!}{\includegraphics[angle=0]{./Lyubenova_fig02.eps}} \caption{\label{fig:lmc_sed} $J$, $H$, $K$-band spectral energy distributions of our sample of six LMC globular clusters. The black solid lines indicate the most closely matching (in age and metallicity) stellar population model of \citet{maraston05}; i.e. age~=~13\,Gyr, $[Z/H]=-1.35$ for NGC\,1754, NGC\,2005, and NGC\,2019, age~=~2\,Gyr, $[Z/H]=-0.33$ for NGC\,2173, and age~=~1.5\,Gyr, $[Z/H]=-0.33$ for NGC\,1806 and NGC\,2162. The spectra are normalised to the flux at 1.25~$\mu$m. Note the prominent spectral shape differences in the $J$- and $H$-bands between old and intermediate age clusters. The shaded areas indicate the regions with contamination by strong OH and O$_{2}$ sky line residuals. } \end{figure*} Having completed the primary goal of our project, namely to provide an empirical library of integrated near-IR spectra of globular clusters, we show in Fig.~\ref{fig:lmc_sed} the overall $J$, $H$ and $K$-band spectra as derived from our SINFONI observations, compared to the spectral energy distributions from the \citet{maraston05} stellar population models. The observed spectral segments were scaled individually relative to the model providing the closest match in age and metallicity and were normalised to unity at 1.25~$\mu$m (see Fig. caption). In the figure the models are represented with solid black lines.
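The normalisation step can be sketched as follows (a toy snippet; the actual comparison also rescales each observed spectral segment relative to the model before normalising):

```python
# Toy sketch of normalising a spectrum to unity at 1.25 micron, as
# done for the SEDs shown in the figure (not the actual analysis code).

def normalise_at(wave, flux, ref_wave=1.25):
    # Pick the pixel closest to the reference wavelength and divide
    # the whole spectrum by the flux in that pixel.
    i = min(range(len(wave)), key=lambda k: abs(wave[k] - ref_wave))
    ref = flux[i]
    return [f / ref for f in flux]
```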
With the exception of the $J$-band part in old globular clusters, where we were unable to achieve good sky background removal, as explained in Sect.~\ref{sec:data_red}, the models agree well with the general spectral shape of our spectra. Remarkably, the distinct spectral energy distribution (the ``sawtooth'' pattern) in the $J$- and $H$-bands caused by the contribution from the carbon-rich thermally pulsing asymptotic giant branch (TP-AGB) phase is visible in the spectral regions covered by SINFONI for the intermediate age clusters. As expected, the pattern is less prominent for the cluster NGC\,2173 due to its slightly older age. Also, the most prominent features, such as C$_{2}$\/ at 1.77~$\mu$m and the CO bandhead at 2.29~$\mu$m, are well matched. Thus, when they become available, it will be very interesting to make a comparison with higher spectral resolution EPS models. Weaker features, such as the ones indicated in Fig.~\ref{fig:jh_spec}, could then be studied in detail. In this figure, in addition to carbon-based molecular absorption bands, several other absorption features are indicated. They are mainly due to metallic lines, such as \fe\/, \ion{Mg}{I}\/, \ion{Si}{I}\/, and \ion{Al}{I}\/. However, due to their relative weakness and the associated difficulty of measuring them in galaxies, we decided not to discuss them further in this paper. \section{The $H$-band $C_{2}$\/ index} \label{sec:c2} One of the most prominent and easy to quantify features in the $H$-band is the C$_{2}\,(0-0)$ bandhead at 1.77\,$\mu$m \citep{ballik63}. The existence of the C$_{2}$\/ molecule is typical for carbon-rich stars, where the ratio of carbon to oxygen (C/O) atoms is larger than 1. This type of star is the main contributor to the near-IR light of intermediate age (1-3\,Gyr) stellar populations \citep[e.g.][]{ferraro95,girardi98,maraston98,maraston05}.
Thus, it is expected that the C$_{2}$\/ absorption feature will be strong in globular clusters and galaxies with stellar populations in this age interval \citep{lancon99,maraston05}. Also, more metal poor stellar populations exhibit a stronger $C_{2}$\/ index on average, because dredge-up more easily leads to C/O$>$1 in that regime \citep{wagenhuber98,mouhcine03,weiss09}. The $C_{2}$\/ index reflects the depth of the C$_{2}$\/ absorption feature at 1.77\,$\mu$m \citep{alvarez00}. In our work we used the definition employed by \citet[][hereafter we refer to this definition as ``classical'']{maraston05}. The index is defined in magnitudes, as the flux ratio between a central passband ($1.768 - 1.782\,\mu$m) and a continuum passband ($1.752 - 1.762\,\mu$m), and is normalised to Vega by subtracting a zero point of 0.038$^{m}$. In this definition of the index, the continuum band is on top of H$_{2}$O and $^{12}$CO features (see Fig.~\ref{fig:c2_c22_comp}, top panel). While carbon-rich stars are not expected to have H$_{2}$O absorption, these two features may vary with the pulsation period for oxygen-rich stars \citep{loidl01}. \begin{figure} \resizebox{\hsize}{!}{\includegraphics[angle=0]{./Lyubenova_fig03.eps}} \caption{\label{fig:c2_c22_comp} {\it Top panel:} Spectrum of NGC\,1806 with overplotted central passband (red) and the classical continuum region (blue) for the $C_{2}$\/ index \citep[][]{maraston05}. In orange we overplot the new continuum region for the $C_{2}$\/ index defined in this paper (see Sect.~\ref{sec:c2}). {\it Bottom panel:} Comparison between the classical and the redefined $C_{2}$\/ index, measured on our own LMC star observations (solid black and red symbols), as well as the stars from the Milky Way spectral libraries of \citet[][orange]{lancon02} and \citet[][black diamonds, green squares, red asterisks]{rayner09}. 
The dashed line shows the one-to-one relation, the solid line represents a linear least squares fit to all data points.} \end{figure} Because of these additional contributions to the continuum passband, we explored a modified index definition by shifting the continuum passband to shorter wavelengths relative to the classical definition ($1.74 - 1.75\,\mu$m, shown in orange in Fig.~\ref{fig:c2_c22_comp}). The Vega zero point to be subtracted is 0.037$^{m}$. We measured the two indices for the LMC stars from our own observations, the carbon-rich averaged stellar spectra of \citet{lancon02}, and the stars from the library of \citet{rayner09}. The comparison is shown in Fig.~\ref{fig:c2_c22_comp} (bottom panel). We found that for these samples the scatter is not significant, but larger scatter might be expected in a bigger sample, in particular one that includes AGB stars in different pulsation phases. For the present study we decided to use the classical index definition for direct comparison to the models. The measurements for the stars in our library are listed in Table~\ref{tab:bs_observations} and for the globular clusters in Table~\ref{tab:lmc_c2}. \begin{figure} \resizebox{\hsize}{!}{\includegraphics[angle=0]{./Lyubenova_fig04.eps}} \caption{\label{fig:c2_dco_model} Comparison between $C_{2}$\/ index values and the models of \citet{maraston05}. The ages of the clusters are the mean ones from Table~\ref{tab:lmc_gc}, with error bars reflecting the r.m.s. For guidance, in the top panel we reproduce the $K$-band $D_{CO}$\/ index from Paper~I (their Fig.~9). In both panels, black arrows indicate how the spectrum of NGC\,2173 changes when different fractions of LMC carbon star light are added to it. The red arrows indicate the changes in the same spectrum when different fractions of the averaged Milky Way carbon star spectrum of \citet{lancon02} are added.
See text for a detailed description.} \end{figure} \begin{center} \begin{table*}[tdp] \centering \caption{\label{tab:bs_observations}Near-IR colours and spectral indices of the additional bright RGB and AGB stars.} \begin{tabular}{c c c c c c c } \hline \hline Name & $K$ (mag) & $(J-K)$ & $D_{CO}$ & $C_{2}$ (mag) & Notes & Cluster \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) \\ \hline 2MASS\,J04540127-7026341 & 11.40 & 1.13 & 1.208$\pm$0.013 & 0.034$\pm$0.010 & & NGC\,1754 \\ 2MASS\,J04540771-7024398 & 11.86 & 1.26 & 1.223$\pm$0.021 & 0.152$\pm$0.013 & & '' \\ 2MASS\,J04543522-7027503 & 12.26 & 1.07 & 1.143$\pm$0.015 & -0.011$\pm$0.018 & & '' \\ 2MASS\,J04542864-7026142 & 12.62 & 0.91 & 1.112$\pm$0.018 & 0.052$\pm$0.025 & & '' \\ 2MASS\,J04540935-7024566 & 12.73 & 0.96 & 1.088$\pm$0.021 & -0.048$\pm$0.020 & & '' \\ 2MASS\,J04541188-7028201 & 13.04 & 0.91 & 1.177$\pm$0.030 & 0.045$\pm$0.009 & & '' \\ 2MASS\,J04540536-7025202 & 13.36 & 0.93 & 1.122$\pm$0.029 & 0.156$\pm$0.012 & & '' \\ 2MASS\,J05302221-6946124 & 10.65 & 1.21 & 1.258$\pm$0.008 & -0.021$\pm$0.010 & & NGC\,2005 \\ 2MASS\,J05300708-6945327 & 11.66 & 1.04 & 1.213$\pm$0.012 & -0.044$\pm$0.018 & & '' \\ 2MASS\,J05295466-6946014 & 11.68 & 1.12 & 1.246$\pm$0.010 & -0.045$\pm$0.020 & & '' \\ 2MASS\,J05300704-6944031 & 11.80 & 1.03 & 1.251$\pm$0.010 & -0.039$\pm$0.027 & & '' \\ 2MASS\,J05300360-6944311 & 12.02 & 0.97 & 1.216$\pm$0.013 & 0.063$\pm$0.036 & & '' \\ 2MASS\,J05295822-6944445 & 12.05 & 1.22 & 1.233$\pm$0.016 & -0.040$\pm$0.041 & & '' \\ 2MASS\,J05320670-7010248 & 10.49 & 1.13 & 1.136$\pm$0.014 & -0.018$\pm$0.005 & & NGC\,2019 \\ 2MASS\,J05313862-7010093 & 10.56 & 1.39 & 1.196$\pm$0.017 & 0.339$\pm$0.006 & C & '' \\ 2MASS\,J05315232-7010083 & 10.61 & 0.94 & 1.200$\pm$0.016 & -0.022$\pm$0.005 & & '' \\ 2MASS\,J05321152-7010535 & 10.87 & 1.06 & 1.176$\pm$0.018 & -0.036$\pm$0.005 & & '' \\ 2MASS\,J05321647-7008272 & 10.95 & 1.13 & 1.183$\pm$0.016 & -0.008$\pm$0.006 & & '' \\ 2MASS\,J05320418-7008151 & 
11.22 & 1.02 & 1.204$\pm$0.024 & -0.047$\pm$0.007 & & '' \\ 2MASS\,J05021232-6759369 & 10.23 & 1.84 & 1.051$\pm$0.013 & 0.458$\pm$0.007 & +,C & NGC\,1806 \\ 2MASS\,J05020536-6800266 & 10.63 & 1.61 & 1.139$\pm$0.015 & 0.500$\pm$0.009 & +,C & '' \\ 2MASS\,J05015896-6759387 & 10.69 & 1.76 & 1.065$\pm$0.013 & 0.481$\pm$0.010 & +,C & '' \\ 2MASS\,J05021623-6759332 & 11.02 & 1.06 & 1.211$\pm$0.012 & -0.047$\pm$0.007 & + & '' \\ 2MASS\,J05021870-6758552 & 11.32 & 1.11 & 1.267$\pm$0.015 & -0.071$\pm$0.011 & + & '' \\ 2MASS\,J05021846-6759048 & 11.74 & 1.00 & 1.243$\pm$0.018 & -0.053$\pm$0.012 & & '' \\ 2MASS\,J05021121-6759295 & 11.97 & 0.96 & 1.197$\pm$0.026 & -0.045$\pm$0.024 & & '' \\ 2MASS\,J05021137-6758401 & 11.98 & 1.02 & 1.199$\pm$0.023 & -0.064$\pm$0.016 & & '' \\ 2MASS\,J06002748-6342222 & 9.60 & 1.80 & 1.075$\pm$0.012 & 0.437$\pm$0.005 & +,C & NGC\,2162 \\ 2MASS\,J06003156-6342581 & 11.64 & 1.03 & 1.202$\pm$0.011 & -0.043$\pm$0.011 & + & '' \\ 2MASS\,J06003316-6342131 & 12.24 & 0.99 & 1.197$\pm$0.016 & -0.127$\pm$0.017 & + & '' \\ 2MASS\,J06003869-6341393 & 12.26 & 0.99 & 1.184$\pm$0.020 & -0.038$\pm$0.017 & & '' \\ 2MASS\,J05563892-7258155 & 11.77 & 1.03 & 1.227$\pm$0.012 & -0.091$\pm$0.013 & & NGC\,2173 \\ 2MASS\,J05575667-7258299 & 12.07 & 0.95 & 1.166$\pm$0.014 & -0.047$\pm$0.017 & + & '' \\ 2MASS\,J05570233-7257449 & 12.13 & 1.04 & 1.159$\pm$0.016 & -0.008$\pm$0.019 & & '' \\ 2MASS\,J05575784-7257548 & 12.18 & 1.03 & 1.203$\pm$0.016 & -0.048$\pm$0.019 & + & '' \\ 2MASS\,J05563368-7257402 & 12.43 & 1.00 & 1.146$\pm$0.016 & 0.010$\pm$0.027 & & '' \\ 2MASS\,J05581142-7258328 & 12.45 & 1.04 & 1.154$\pm$0.019 & -0.021$\pm$0.023 & + & '' \\ 2MASS\,J05583257-7258499 & 12.48 & 0.92 & 1.136$\pm$0.015 & -0.070$\pm$0.022 & & '' \\ 2MASS\,J05572334-7256006 & 12.56 & 1.01 & 1.227$\pm$0.019 & -0.016$\pm$0.027 & & '' \\ 2MASS\,J05565761-7254403 & 12.84 & 1.01 & 1.169$\pm$0.020 & -0.099$\pm$0.031 & & '' \\ \hline \hline \end{tabular} \tablefoot{ (1) 2MASS catalogue star 
name, (2) extinction-corrected $K$-band magnitude and (3) $(J-K)$ colour from the 2MASS Point Source Catalogue \citep{2mass}, (4) $K$-band $D_{CO}$\/ index value, (5) $H$-band $C_{2}$\/ index value, (6) Notes on individual stars: ``C'' -- a carbon-rich star, ``+'' -- the star was used for the integrated spectrum of the cluster, (7) globular cluster next to which the star was observed. The $D_{CO}$\/ index is defined in \citet{esther08} and the $C_{2}$\/ index is taken from \citet{maraston05}.} \end{table*} \end{center} \begin{table}[tbp] \caption{\label{tab:lmc_c2} Measurements of $D_{CO}$\/ and $C_{2}$\/ indices for the LMC globular clusters.} \centering \begin{tabular}{c c c} \hline \hline Name & $D_{CO}$ & C$_{2}$ (mag) \\ \hline NGC\,1806 & 1.129$\pm$0.005 & 0.214$\pm$0.010\\ NGC\,2162 & 1.108$\pm$0.010 & 0.235$\pm$0.022 \\ NGC\,2173 & 1.186$\pm$0.005 & 0.068$\pm$0.012 \\ NGC\,2019 & 1.068$\pm$0.003 & 0.006$\pm$0.004 \\ NGC\,2005 & 1.086$\pm$0.003 & --\\ NGC\,1754 & 1.082$\pm$0.005 & --\\ \hline \hline \end{tabular} \tablefoot{ The $D_{CO}$\/ index is defined in \citet{esther08} and the $C_{2}$\/ index is taken from \citet{maraston05}.} \end{table} Fig.~\ref{fig:c2_dco_model} shows a comparison between our measurements of the $C_{2}$\/ index for the LMC globular clusters and the SSP model predictions of \citet{maraston05}. In the top panel, for guidance, we repeat Fig.~9 from Paper~I\/, showing the $K$-band $D_{CO}$\/ index. This index is sensitive to the metallicity at older ages, but also shows a dependence on the presence of carbon-rich stars at intermediate ages. We measured the $D_{CO}$\/ and $C_{2}$\/ indices at the nominal spectral resolution of the data. Current models have lower spectral resolution than our data. However, due to the broadness and intrinsic strength of these spectral features, the differences in spectral resolution will not affect our conclusions.
We tested this by broadening our globular cluster spectra to spectral resolutions ranging from 50 to 400~$\mbox{km s}^{-1}$\/ in steps of 50~$\mbox{km s}^{-1}$\/. The largest offset we measured was $-0.03$ for both the $D_{CO}$\/ and $C_{2}$\/ indices, which is much smaller than the observed differences between observations and models. For old stellar populations ($>$3\,Gyr) the models predict an approximately constant, zero value of the $C_{2}$\/ index (Fig.~\ref{fig:c2_dco_model}, bottom panel). The $C_{2}$\/ index of the old globular cluster NGC\,2019, whose metallicity is best matched by the model with [Z/H]~$=-1.35$ (solid blue line), is consistent with the model predictions. We note that we were unable to measure the $C_{2}$\/ index in the other two old and metal poor clusters in our sample, due to the very low quality of their $H$-band spectra. The group of the three intermediate age and more metal rich clusters is best compared with the SSP model with [Z/H]~$=-0.33$ (dotted blue line). The $C_2$ index measurement for NGC\,2173 (2\,Gyr, red solid symbol) is consistent with the model prediction of an increasing $C_2$ index towards younger ages. The youngest globular clusters at $\sim$1.3\,Gyr (NGC\,1806 and NGC\,2162) follow this trend, although their $C_2$ index is significantly larger than the model prediction. The latter observation supports our findings based on the $K$-band $D_{CO}$\/ index in Paper~I. There, we argued that the discrepancy between the CO index predictions of the models and the data (see Fig.~\ref{fig:c2_dco_model}, top panel) is due to the presence of carbon-rich stars, which influence the spectrophotometric properties of stellar populations at intermediate ages.
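The classical $C_{2}$\/ measurement and the resolution test described earlier in this section can be sketched in a few lines of code. This is a simplified illustration only (uniform wavelength grid, plain pixel means over the passbands, Gaussian smoothing truncated at four sigma), not our actual reduction pipeline; the passband limits and Vega zero point are those of the classical definition, and the sign convention is chosen so that absorption in the central band yields a positive index.

```python
import math

CENTRAL = (1.768, 1.782)    # classical central passband (micron)
CONTINUUM = (1.752, 1.762)  # classical continuum passband (micron)
ZERO_POINT = 0.038          # Vega zero point (mag)

def band_mean(wave, flux, band):
    """Mean flux of the pixels falling inside a passband."""
    lo, hi = band
    vals = [f for w, f in zip(wave, flux) if lo <= w <= hi]
    return sum(vals) / len(vals)

def c2_index(wave, flux):
    """Classical C2 index: magnitude ratio of central to continuum flux,
    normalised to Vega; absorption in the central band gives C2 > 0."""
    ratio = band_mean(wave, flux, CENTRAL) / band_mean(wave, flux, CONTINUUM)
    return -2.5 * math.log10(ratio) - ZERO_POINT

def broaden(flux, sigma_pix):
    """Gaussian smoothing (kernel truncated at 4 sigma), mimicking a lower
    spectral resolution; edges are renormalised."""
    half = int(math.ceil(4.0 * sigma_pix))
    kern = [math.exp(-0.5 * (k / sigma_pix) ** 2) for k in range(-half, half + 1)]
    out = []
    for i in range(len(flux)):
        num = den = 0.0
        for k, g in zip(range(-half, half + 1), kern):
            if 0 <= i + k < len(flux):
                num += g * flux[i + k]
                den += g
        out.append(num / den)
    return out
```

On a toy spectrum with a flat continuum and a depressed central band, the index changes only mildly under such broadening, in line with the small offsets quoted above.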
The empirical calibration of the models is based on spectra of the Milky Way carbon stars from the library of \citet{lancon02}, combined with a photometric calibration based on the LMC globular clusters, which led to inconsistent results. The presence of LMC carbon-rich stars in the spectra leads to a decrease of the $D_{CO}$\ index, but to an increase of the $C_{2}$\ index. To test this claim, we used the spectrum of NGC\,2173, which is the ``oldest'' intermediate-age globular cluster in our sample and therefore the one with the least contribution from carbon-rich stars \citep{muc06}, as a baseline for a simplistic stellar population modelling test. Increasing the contribution from TP-AGB stars in this cluster would mimic a younger age, such as seen in NGC\,1806 and NGC\,2162. In this way we are able to test the effects of different ratios of carbon- to oxygen-rich stars in a given stellar population on its spectral features. When we added 40\% to 70\% of the averaged Milky Way stellar spectrum contained in the 3rd bin of \citet{lancon02} to the spectrum of NGC\,2173, the resulting $C_{2}$\/ and $D_{CO}$\/ indices increased, as predicted by the models for the younger clusters (red arrows in Fig.~\ref{fig:c2_dco_model}). When performing the same test, but adding the spectrum of a carbon star in the LMC (in this case the star 2MASS\,J06002748-6342222 in NGC\,2162) to the cluster, the resultant $D_{CO}$\/ index {\em decreased}\/ (black arrows in the same figure) and fitted the observations of the younger clusters better. Contrary to the $D_{CO}$\/ index dependence on C-star content, the resultant $C_{2}$\/ index increased, consistent with the model predictions, regardless of whether the C-star is taken from the Milky Way or the LMC. The results of the above tests depend on the actual carbon star spectrum that is used to perform the test. Carbon stars exhibit large variations in their properties for a given environment, as we discuss further in Sect.~\ref{sec:discussion} \citep[see also e.g.][]{lancon02}.
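The mixing step of this test can be summarised in a few lines of code. This is a schematic version only: the function name is ours, and normalising both spectra to unit mean flux is a simplifying assumption (the test above weights cluster and star by their observed fluxes). The composite $C_{2}$\/ and $D_{CO}$\/ indices are then remeasured for star fractions between 0.4 and 0.7.

```python
def mix_spectra(cluster, star, fraction):
    """Composite spectrum in which the star contributes `fraction` of the
    total light.  Both inputs (flux arrays on a common wavelength grid) are
    first normalised to unit mean flux -- a simplifying assumption; the
    actual test weights by the observed fluxes of cluster and star."""
    def unit_mean(spec):
        m = sum(spec) / len(spec)
        return [f / m for f in spec]
    c, s = unit_mean(cluster), unit_mean(star)
    return [(1.0 - fraction) * fc + fraction * fs for fc, fs in zip(c, s)]
```

As stressed above, the outcome of such a test depends on the particular carbon star spectrum that is mixed in.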
This caveat illustrates the intrinsic problem of stochastic fluctuations of AGB stars in globular clusters \citep[see e.g. review by][]{lancon11}. AGB stars are rare in stellar populations, with typically only one per 10$^{4}$ stars in total. Nevertheless, a single such star can produce up to 80\% of the stellar population's near-IR light \citep{maraston05}. Here we see that it can also play a very important role in shaping the integrated spectral properties of the host stellar population. This will inevitably lead to significant deviations between the model predictions and observations of individual clusters. Various authors have made extensive studies of the influence of such stochastic fluctuations on the integrated colours of stellar clusters \citep[e.g.][]{piskunov09,popescu10,fouesneau10}. A similar study for the spectral features in integrated spectra of globular clusters would be beneficial for the interpretation of our results. \section{Carbon-rich AGB stars in the LMC and the Milky Way} \label{sec:discussion} A possible explanation for the differences between models and observations described in the previous section may be the different metallicities or carbon and oxygen abundances of the sample stars. In more metal poor stars (i.e. the LMC stars), the C/O ratio is higher on average. \citet{matsuura02} argue for a systematically larger C/O ratio in LMC carbon stars compared to Galactic ones. Models of carbon-rich stars with a higher C/O ratio have weaker CO features, but stronger C$_{2}$\/ features \citep{loidl01,aringer09}. We note that a variation in the molecular bands of carbon stars during their pulsation cycles is not expected to be strong \citep{loidl01}. In Paper~I\/ we supported this scenario, i.e. that the difference in metallicity is the cause of the discrepancy, by showing that stars in the LMC and the Milky Way with similar $(J-K)$ colours can have substantially different $D_{CO}$\/ index values, as \citet{cohen81} and \citet{frogel90} showed earlier. Based on the limited stellar samples discussed there, we speculated that the LMC carbon-rich stars have weaker $D_{CO}$\/ indices when compared to carbon stars in the Milky Way. In Fig.~\ref{fig:dco_c2_jk_stars} (left panel) we repeat the diagram and add stars from the SpeX library \citep{rayner09}. This library contains spectra of mostly near-solar metallicity K and M stars with luminosity classes between I and V, carbon-rich stars, and S-stars (for which C/O$\simeq$1). For half of the carbon-rich stars (four) we could find iron abundance estimates in the PASTEL database \citep{soubiran10}, ranging from $-0.3$ to 0.2 dex. Their mean iron abundance is slightly sub-solar, which is on average more metal-rich than the sample of LMC carbon stars with [Fe/H]~$\simeq-0.4$. The carbon-rich stars from the SpeX sample are located between the two relations found in Paper~I for LMC (solid line) and Milky Way (dashed line) carbon stars from the library of \citet{lancon02}, thus smearing out any clear relation. Moreover, if instead of the averaged Milky Way carbon star spectra the individual stars of \citet{lanconW00} were plotted, the dispersion would be even more prominent. However, we do not observe a $D_{CO}$\/ index in any of the Milky Way stars as weak as in three of the LMC stars. Surprisingly, in the $(J-K)$ -- $C_{2}$\/ plot (Fig.~\ref{fig:dco_c2_jk_stars}, right panel) there is no obvious difference between the stars in the two galaxies. Given the lower metallicity of the LMC and the higher C/O ratio argued for by \citet{matsuura02}, the $C_{2}$\/ index is expected to be stronger in the LMC than in the Milky Way carbon stars.
Instead we find the opposite: two of the Galactic stars have a stronger $C_{2}$\/ index than the LMC stars. This could be due to the C/O ratio changing at every dredge-up episode, thus leading to a dispersion in the $C_{2}$\/ index larger than the metallicity effect. Based on the three stellar samples explored here, we cannot confirm that there is a marked difference between the Milky Way and LMC carbon stars in terms of the C/O ratio. To reach conclusive results about the dependence of the $C_{2}$\/ index on the C/O ratio, different element abundances, and the metallicity of the stars, detailed abundance measurements for a larger sample of LMC and Milky Way giant stars are necessary. Fig.~\ref{fig:dco_c2_stars} shows the behaviour of the $K$-band CO versus the $H$-band C$_{2}$\/ features in stellar spectra. The complete sample of stars displays a large spread of $D_{CO}$\ index values, while the $C_{2}$\ index is significantly greater than zero only in carbon-rich stars. There are also a number of M-type stars (M6-9III) whose $C_{2}$\/ index is larger than zero but still systematically smaller than the value of $C_{2}$\/ in C-type stars. In these stars there is significant H$_{2}$O absorption in the range $1.7 - 2.05\,\mu$m \citep[e.g.][]{matsuura99,rayner09} that leads to a steep decrease of the continuum at the location of the C$_{2}$\/ feature. Thus, when measuring the $C_{2}$\/ index in these M giants we obtained a non-zero index value, while in reality there is no C$_{2}$\/ absorption. Based on this figure, there is no clear separation between LMC and Milky Way carbon stars, although the LMC stars have on average lower $D_{CO}$\/ indices, which is consistent with their lower metallicity. However, the same LMC stars do not show a stronger $C_{2}$\/ index, as would be expected from the discussion above.
There is one C-type star from the catalogue of \citet{rayner09} that shows $D_{CO}$\/ and $C_{2}$\/ indices and a $(J-K)$ colour consistent with K and M-type stars (see Fig.~\ref{fig:dco_c2_jk_stars}). This is HD\,76846, classified as C-R2 \citep{keenan00}. As \citet{tanaka07} have shown, the C$_{2}$\/ feature is intrinsically weak in early, hotter C-R stars. In Fig.~\ref{fig:dco_c2_stars}, for comparison, we show the measured $D_{CO}$\/ and $C_{2}$\/ indices from the integrated LMC globular cluster spectra (coloured triangles). The two clusters that are dominated by carbon stars, NGC\,1806 and NGC\,2162, follow the stellar trend towards higher $C_{2}$\/ index values. Such a near-IR spectroscopic index diagnostic plot can be very helpful in revealing the age of a given integrated stellar population, be it a stellar cluster or a galaxy. As shown above, the C$_{2}$\/ feature is present and its strength is significant only in carbon stars and in stellar populations harbouring such stars. \citet{lancon99} have already proposed a similar C$_{2}$\/ index that can be used to detect the presence of intermediate age stellar populations. Even when smeared out by the line-of-sight velocity distribution of stars in galaxies, of the order of 400\,$\mbox{km s}^{-1}$\/, the feature will remain identifiable and measurable. By observing an increased strength of the $C_{2}$\/ index, one can conclude that the object is in the carbon-star phase and thus date its stellar population to an intermediate age of 1-2\,Gyr. \begin{figure*} \resizebox{\hsize}{!}{\includegraphics[angle=0]{./Lyubenova_fig05.eps}} \caption{\label{fig:dco_c2_jk_stars} $D_{CO}$\/ {\it (left panel)} and $C_{2}$\/ {\it (right panel)} indices versus $(J-K)$ colour for stars in the Milky Way and the LMC.
The oxygen- and carbon-rich LMC stars (filled black and red dots, respectively) are from our own SINFONI observations, the K (green open squares), M (black diamonds), C (red asterisks), and S (red asterisks with square around) stars are taken from the SpeX library of \citet{rayner09}. The C-rich spectra in the Milky Way (orange crosses) are taken from the library of \citet{lancon02}. Typical error bars are shown in the upper right corner of the left panel. In the left panel, the solid line denotes the relation for LMC carbon-rich stars from \citet[][Eq.~3]{az10a}, the dashed line (Eq.~5 in the same paper) represents the relation for the Milky Way spectra of \citet{lancon02}.} \end{figure*} \begin{figure} \resizebox{\hsize}{!}{\includegraphics[angle=0]{./Lyubenova_fig06.eps}} \caption{\label{fig:dco_c2_stars} $D_{CO}$\/ vs. $C_{2}$\/ index for stellar and globular cluster spectra. Origin of spectra, symbols and colours as in Fig.~\ref{fig:dco_c2_jk_stars}. Error bars are about the size or smaller than the symbols, thus they are not explicitly shown. Coloured triangles show the measurements for the integrated spectra of LMC globular clusters.} \end{figure} \section{Concluding remarks} \label{sec:conclusions} In this paper we concluded our pilot project of providing an empirical near-IR library of integrated spectra of globular clusters in the LMC by adding $J$- and $H$-band spectra to the $K$-band ones from \citet[][Paper~I]{az10a}. We provided full near-IR spectral coverage (as observable from the ground) for four globular clusters: the old and metal poor cluster NGC\,2019, and the intermediate age and approximately half-solar metallicity clusters NGC\,1806, NGC\,2162, and NGC\,2173. For the old and metal poor clusters NGC\,1754 and NGC\,2005 the $H$-band part is missing due to bad observing conditions and unacceptable sky emission subtraction residuals. 
Using our sample of globular clusters we tested predictions of current evolutionary population synthesis models in the near-IR and discussed some of the problems that arise from this comparison. Although still affected by small-number statistics, our spectra show that the near-IR spectral domain is a useful source of information when applied to the study of integrated stellar populations. One of the main goals of our project was to provide a small empirical library of near-IR spectra of globular clusters to serve as a test bench for current and future stellar population models. In Fig.~\ref{fig:lmc_sed} we presented a comparison between the overall near-IR spectral energy distribution of the sample clusters and the stellar population models of \citet{maraston05}. The models agree remarkably well with the data in terms of the spectral shape and the most prominent absorption features. In particular, the predicted distinct signatures of the TP-AGB phase in intermediate age (1-2\,Gyr) stellar populations are nicely reproduced. However, the agreement is not as good when exploring individual spectral features in more detail, where more subtle differences in absorption strength can be quantified. Here, as well as in Paper~I\/, we concentrated our study on the strongest absorption features in the near-IR, namely C$_{2}$\/ at 1.77\,$\mu$m and $^{12}$CO\,(2--0)\/ at 2.29\,$\mu$m, due to the limited spectral resolution of the current models. We described their behaviour using two indices: $C_{2}$\/ and $D_{CO}$\/, respectively. The first reflects the presence of carbon-rich AGB stars in the stellar population, while the second is typical of both carbon- and oxygen-rich AGB and RGB stars. We refer the reader to Sect.~\ref{sec:c2}, where we discussed in detail the observed differences between our observations and the EPS models of \citet{maraston05}.
In Sect.~\ref{sec:discussion} we explored some possible reasons for these differences, based on stellar libraries with Milky Way and LMC carbon-rich stars (see Fig.~\ref{fig:dco_c2_jk_stars}). These stars can contribute up to 80\% of the total $K$-band light of a globular cluster stellar population, thus their proper inclusion in the models is crucial. Our results were inconclusive about the proposed difference in the C/O ratio between stars in the Galaxy and the LMC. To confirm or reject this, accurate abundance measurements of iron, oxygen, and carbon are necessary. Therefore, there is a clear need for larger spectral libraries of carbon-rich stars with a range of metallicities before their contribution can be properly included in stellar population models. The occurrence of a strong $C_{2}$\/ index in the near-IR spectra of globular clusters, and hence in galaxies, is an indication that they are in the carbon star phase, which dates the host stellar population to $1-2$\,Gyr, as previously suggested by \citet{lancon99}. In contrast, the $D_{CO}$\/ index is much less straightforward to interpret, since the magnitude of this index is driven by a complex combination of metallicity, surface gravity (luminosity), and effective temperature, as illustrated by Fig.~\ref{fig:dco_c2_jk_stars}. In Fig.~\ref{fig:dco_c2_stars} we propose a diagnostic plot based only on these two near-IR indices. Both of them are sufficiently broad and strong that they can be easily measured in galaxy spectra, even when smeared out by the line-of-sight velocity distribution of the stars. For galaxies up to a redshift of 0.007, e.g. in the Virgo and Fornax clusters, the C$_{2}$\/ feature remains within the $H$-band atmospheric window. For redshifts between 0.007 and 0.13 the feature is hidden by the atmospheric cutoff, but it remains accessible from space. During the next decades new facilities, such as the JWST and the E-ELT, will widen the discovery space in extragalactic research.
Their enhanced sensitivity will be in the infrared portion of the spectrum. However, the stellar population analysis methods for the integrated light are not yet fully developed in the near-IR wavelength regime. The methods discussed here offer a step towards a better understanding of galaxy evolution and observations are already feasible with current facilities. For example, current adaptive optics systems, working mostly in the near-IR, offer a possibility to resolve the innermost parts of nearby galaxies. New information about the stellar population properties at these spatial scales can give us important clues about how galaxies have formed and evolved. \acknowledgements{We are grateful to the many ESO astronomers who obtained the data presented in this paper in service mode operations at La Silla Paranal Observatory. We thank Livia Origlia for useful discussions on near-IR spectral synthesis and Claudia Maraston for her comments. ML would like to thank ESO's Visitors Program as well as the staff at the Astronomical Observatory of the University of Sofia for their hospitality, where major parts of this research have been carried out. Finally, we thank the referee for her/his helpful suggestions which certainly made this paper more complete. This paper is dedicated to Ralitsa Mitkova, a tiny star that evolved together with this manuscript and ever since provides an endless source of inspiration.} \bibliographystyle{aa}
\section{Introduction} The importance of random matrix theory (RMT) can hardly be overstated. Since its initial development by Wigner and Dyson to deal with the spectrum of many-body quantum systems \cite{wigner,dyson}, it has found applications in areas of physics as diverse as disordered systems, chaos, and quantum gravity, to name just a few \cite{mehta,review,fgz,physrep}. Most of the time RMT has been used as a very powerful tool for the study of the energy-level fluctuations of quantum systems. In this case the matrix that RMT is modeling is of course the quantum Hamiltonian of the system. However, there is a different context where RMT can be very useful, namely the study of the statistical properties of classical disordered systems. By disordered systems we mean not only those cases where quenched disorder is directly present in the Hamiltonian, as in spin-glasses, random field models or neural networks, but also systems whose physical behaviour at low temperatures is heavily influenced by the self-induced disorder of their typical configurations, as, for example, supercooled liquids and structural glasses. In all these systems the properties of the energy landscape, or energy surface, are known to be far from trivial. In particular, the presence of many local minima of the potential energy is one of the most distinctive features of this class of systems \cite{spin-glass,supercooled}. An obvious consequence of this fact is that the energy surface displays many extensive regions with unstable negative curvature and therefore has very non-trivial stability properties \cite{laloux,noiselle}. In this context a key object becomes the matrix of the second derivatives of the Hamiltonian, normally called Hessian, which encodes all the stability attributes of the energy landscape. The study of the statistical properties of the Hessian has been an important issue both in the theory of mean-field spin-glasses and in liquid theory. 
In the former case it is often possible to analyze the Hessian in the stationary points of the free-energy, thereby obtaining important information on the shape and stability of the thermodynamic states \cite{spin-glass}. In liquids, on the other hand, the Hessian of the potential energy is the key object in the context of the instantaneous normal modes approach \cite{stratt,keyes}, where the average spectrum of the Hessian is directly connected to many physical observables of the system. In particular, it has been argued that there exists a deep relation between the diffusion properties of a liquid and the negative unstable eigenvalues of the average Hessian \cite{keyes}. It is evident that in the above context an application of RMT to the study of the statistical properties of the Hessian can be potentially very useful. An important remark is the following: the Hessian is a matrix which in general depends on the configuration of the system and possibly also on the quenched disorder, when this is present. The basic idea is to derive from the distribution of the configurations and from the distribution of the disorder an effective probability distribution for the Hessian, which can then be studied in the context of RMT (a recent example of this strategy can be found in \cite{noi-inm}). Besides, it is clear from the former discussion that an important issue is the analysis of the negative eigenvalues of the Hessian, since their presence is related to regions of unstable negative curvature of the energy surface and thus possibly to the boundaries of different basins of attraction in the phase space. In particular, the number of negative eigenvalues of the Hessian, called the {\it index}, is the first and easiest measure of instability. As a consequence, all the tools devised for the investigation of the index in RMT are particularly relevant in the context of statistical mechanics of disordered systems.
The average value of the index is trivially related to the average spectrum of the Hessian by a simple integration. On the other hand, a more interesting and less trivial quantity is the {\it probability distribution} of the index. Indeed, while the average index gives a measure of the overall degree of instability of the energy surface, the knowledge of the fluctuations of the index around its average value allows a more profound and complete geometric description of the energy landscape. In this paper we compute the probability distribution of the index for an ensemble of Gaussian random matrices with a diagonal shift. This ensemble provides the simplest possible model for the Hessian of a disordered system at a given energy and represents the ideal context in which to develop the technical aspects of this kind of computation. Moreover, in the Gaussian context we are able to give non-trivial physical interpretations of our results. In order to compute the index distribution we use a fermionic replica method. In the past the replica method has been applied to recover standard results in RMT, with variable success. Recently the interest of the community has focused again on this method \cite{meka1,meka2,yl} and some indications of the mathematical consistency of the method have been provided, even if some strong criticisms still persist \cite{zirn}. The present computation offers an interesting example where the replica method can be applied to obtain exact results which are not easily available in the standard RMT literature. There is also another important reason for using the replica method in the computation of the index, which is related to the physical relevance of the Hessian discussed above. As we have seen, RMT can be used once an effective probability distribution for the Hessian has been worked out from the distribution of the configurations and from the distribution of the quenched disorder.
This effective distribution will not be Gaussian in general (unless we consider some very particular models) and typically it will not belong to the standard ensembles considered by ordinary RMT. By means of the replica method we have in principle no need to assume any specific form of the distribution. The paper is organized as follows. In Sec. II we compute the average determinant for matrices of the Gaussian Orthogonal Ensemble as a warm-up exercise to fix notation and ideas. We then proceed in Sec. III to the main part of the paper, where we calculate the average index distribution by means of the replica method, in the limit of large matrices. In Sec. IV we apply the previous analysis to the specific case of a mean-field spin-glass model, where the Hessian is exactly a Gaussian random matrix. Finally, in Sec. V we discuss the general relevance of our results and state our conclusions. Technical details of the calculation and the contribution of replica symmetry broken solutions are contained in two appendices. \section{A Preliminary Calculation} Consider the matrix \begin{equation} M_{ij} = J_{ij} - E \, \delta_{ij}, \label{eq:mat} \end{equation} where $J_{ij}$ is an $N$-dimensional real and symmetric random matrix with the Gaussian distribution function \begin{equation} {\cal P}[J] = 2^{-N/2} \left( \frac{N}{\pi} \right)^{N^2/2} \exp\left(-\frac{N}{4} {\rm Tr} \, J^2 \right) . \label{dist} \end{equation} We have introduced a diagonal shift $E$ in order to mimic what in general happens in disordered systems, where $M$ represents the Hessian of the Hamiltonian. In this context we expect to find very few negative eigenvalues of $M$ at low energies, because of the dominance of minima at very low energies. This is the effect of the shift $E$ in (\ref{eq:mat}), and we shall therefore refer to $E$ in the following as the {\it energy}.
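The ensemble defined by (\ref{eq:mat}) and (\ref{dist}) is straightforward to sample numerically. The following sketch (using NumPy; an illustration, not part of the derivation) reads the entry variances off the weight $\exp(-\frac{N}{4}\,{\rm Tr}\, J^2)$: variance $1/N$ for the off-diagonal entries of $J$ and $2/N$ for the diagonal ones.

```python
import numpy as np

def sample_shifted_goe(N, E, rng):
    """Draw M = J - E*I, with J distributed as exp(-(N/4) Tr J^2):
    off-diagonal entries of J have variance 1/N, diagonal entries 2/N."""
    A = rng.normal(scale=np.sqrt(0.5 / N), size=(N, N))
    J = A + A.T  # symmetrisation of an iid matrix yields exactly those variances
    return J - E * np.eye(N)
```

For large $N$ the eigenvalues of $M$ fill an interval of half-width 2 centered at $-E$.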
The average density of eigenvalues, or spectrum, of $M$ is defined by \begin{equation} \rho(\lambda;E) = - \frac{1}{\pi N}\, {\rm Im} \, \overline{ {\rm Tr} \left( \lambda - M + i\, \epsilon \right)^{-1}} = - \frac{1}{\pi N}\, {\rm Im}\; \frac{\partial}{\partial\lambda}\; \overline{ \log \det ( \lambda - M + i\, \epsilon) } \ , \label{rho1} \end{equation} where the bar indicates the average over distribution (\ref{dist}). It is well known that for the Gaussian ensembles the spectrum $\rho$ in the limit ${N \to \infty}$ is given by a semi-circle centered around $\lambda=-E$, that is \begin{equation} \rho(\lambda;E)= \frac{1}{2\pi} \sqrt{4 - (\lambda+E)^2} \ , \label{semi} \end{equation} while $\rho$ is zero outside the semi-circle support \cite{mehta}. In order to fix our notation and to acquire some familiarity with the method we will use, we compute in this section the average determinant of $M$. In general this is {\it not} a self-averaging quantity, in the sense that fluctuations around the mean value do not decrease in the limit $N\to\infty$. The correct object to average is in principle the logarithm of the determinant, as it appears in the definition of $\rho$, since this is an extensive quantity. However, it is a particular property of the Gaussian case that the determinant {\it is} self-averaging at the leading order, so that the calculation of $\overline{\det M}$ is an interesting and simple warm-up exercise for what we want to show later.
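As a quick numerical check (an illustration added here, with arbitrary parameters), one can diagonalize a single large sample of $M$ and compare the empirical fraction of eigenvalues in a window with the corresponding integral of the semi-circle law (\ref{semi}):

```python
import numpy as np

rng = np.random.default_rng(1)
N, E = 1000, 0.5
A = rng.standard_normal((N, N))
J = (A + A.T) / np.sqrt(2.0 * N)        # GOE with P[J] ~ exp(-N Tr J^2 / 4)
lam = np.linalg.eigvalsh(J - E * np.eye(N))

def rho(x, E):
    # semi-circle density centered at -E, zero outside the support
    s = 4.0 - (x + E) ** 2
    return np.sqrt(np.maximum(s, 0.0)) / (2.0 * np.pi)

# fraction of eigenvalues in [-1, 0] versus the semi-circle prediction
emp = float(np.mean((lam > -1.0) & (lam < 0.0)))
x = np.linspace(-1.0, 0.0, 20001)
th = float(rho(x, E).sum() * (x[1] - x[0]))
print(emp, th)   # both close to 0.315 for E = 0.5
```

Already at $N=1000$ the agreement is at the percent level, since the fluctuations of the number of eigenvalues in a fixed interval are only of order one.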
We can write the determinant by means of a Gaussian integral over $N$-dimensional fermionic vectors $(\overline{\psi},\psi)$ \begin{equation} \det M = \int d\overline{\psi} \, d\psi \, \exp\left[ - \sum_{i,j=1}^N \overline{\psi}_i \psi_j \left( J_{ij} - E \delta_{ij} \right) \right] \ . \end{equation} We now average over the symmetric matrix $J_{kl}$ \begin{equation} \overline{\det M} = \int d\overline{\psi} \, d\psi \, \exp \left( E \, \sum_{i=1}^N \overline{\psi}_i \psi_i - \frac{1}{2N} \sum_{i,j=1}^N \overline{\psi}_i \psi_i \overline{\psi}_j \psi_j \right) . \end{equation} To decouple the quartic term in the fermions we perform a Hubbard-Stratonovich transformation \begin{equation} \overline{\det M} = \int d\overline{\psi} \, d\psi \, dq \, \exp \left( E \, \sum_{i=1}^N \overline{\psi}_i \psi_i - \frac{N}{2} q^2 + i \, q \, \sum_i \overline{\psi}_i \psi_i \right) , \end{equation} and after integrating out the fermions we obtain \begin{equation} \overline{\det M} = \int dq \, e^{N S(q,E)} \ , \label{int1} \end{equation} with \begin{equation} S(q,E) = -\frac{1}{2} q^2 + \log(-E -iq) \ . \label{azione} \end{equation} This integral can be solved exactly in the limit $N\to\infty$ by means of the steepest descent method. The procedure is quite standard \cite{bender}, but we briefly summarize it for the sake of clarity. In order to calculate integral (\ref{int1}) in the large $N$ limit we must select a path of integration $\gamma$ in the complex plane, which satisfies the following conditions: \vskip 0.3 truecm \noindent {\it i)} The integral along $\gamma$ must be equal to the integral along the original integration path (in our case the real axis). \noindent {\it ii)} The imaginary part of the action $S(z,E)$ (or phase) must be constant along $\gamma$. \noindent {\it iii)} The path $\gamma$ must pass through {\it at least} one of the saddle points of the action $S(z,E)$.
\vskip 0.3 truecm The integral along $\gamma$ can then be computed using the Laplace method \cite{bender} and it is given, at the leading order, by the integrand evaluated in the maximum of the real part of $S$ along $\gamma$, that is, in the saddle point of the whole action. In the case where many maxima lie on $\gamma$, only those with the largest real part of $S$ contribute to the total integral. In our case the action $S$ has two saddle points in the complex plane, given by \begin{equation} q_\pm = \frac{i}{2} E \pm \frac{1}{2} \sqrt{4 - E^2} \ . \label{qpm} \end{equation} The regions of constant phase passing through $q_+$ and $q_-$ are defined by \begin{eqnarray} \gamma_+ : \,\, {\rm Im }\, S(z)&=&{\rm Im }\, S(q_+)\nonumber \\ \gamma_- : \,\, {\rm Im }\, S(z)&=&{\rm Im }\, S(q_-) \ . \end{eqnarray} These regions satisfy by definition conditions {\it (ii)} and {\it (iii)} and thus the correct path of integration $\gamma$ must be built by using the different branches of $\gamma_+$ and $\gamma_-$ in such a way as to satisfy condition {\it (i)}. We can distinguish three different regimes: \vskip 0.3 truecm $\bullet \ E < -2$: For these values of the energy the imaginary part of the action is the same for the two saddle points. The constant phase region is shown in Fig.1: it is clear that there is only one path $\gamma$ satisfying condition {\it (i)} which can be built by means of the different branches of the constant phase region. This path is almost parallel to the real axis and passes through $q_+$, but {\it not} through $q_-$. Indeed, the path parallel to the imaginary axis, which passes through both the stationary points, does not conserve the original integral. The only stationary point contributing to the integral is therefore $q_+$ and we have \begin{equation} \overline{\det{M}} = e^{N S(q_+,E)} = 2^{-N} \, \left( |E| - \sqrt{E^2 - 4} \right)^N \, e^{ N \, \left( |E| + \sqrt{E^2 - 4} \right)^2 / \, 8} \ .
\end{equation} In this energy regime the spectrum $\rho$ has support completely contained in the positive semi-axis and we thus expect the average determinant to be positive, as it is. \begin{figure} \begin{center} \leavevmode \epsfxsize=5in \epsffile{fig1.eps} \caption{$E < -2$: The region of constant phase (dashed line) and the real axis (full line). The two small circles indicate the positions of the two saddle points, $q_+$ (up) and $q_-$ (down). The only suitable path of integration $\gamma$ passes just through $q_+$, since the original integral is not conserved on the orthogonal path. The case $E>+2$ is specular to this one.} \label{fig1} \end{center} \end{figure} $\bullet \ E > 2$: The support of the spectrum is now entirely contained in the negative semi-axis, so we expect all eigenvalues of the matrix to be negative. In this case the path $\gamma$ passes only through the saddle point $q_-$, and we thus find for the determinant \begin{equation} \overline{\det{M}} = e^{N S(q_-,E)} = (-1)^N \, 2^{-N} \, \left( |E| - \sqrt{E^2 - 4} \right)^N \, e^{ N \, \left( |E| + \sqrt{E^2 - 4} \right)^2 / \, 8} \ , \end{equation} with the correct prefactor $(-1)^N$ indicating that all eigenvalues are negative. $\bullet \ -2 < E < +2$: In this regime the situation is very different. In Fig.2 we plot the region of constant phase: the only path $\gamma$ which satisfies condition {\it (i)} now passes through {\it both} the saddle points $q_+$ and $q_-$. It must be noted that in this case the imaginary part of $S$ is different in $q_+$ and $q_-$, so that actually the global region of constant phase plotted in Fig.2 is the union of two different regions, $\gamma_+$ and $\gamma_-$. On the other hand, the real part of $S$ is the same in the two stationary points, and therefore they both contribute to the integral.
We have \begin{equation} \overline{\det{M}} = e^{N S(q_+,E)} + e^{N S(q_-,E)} = (-1)^{N \alpha(E)} \, e^{N (E^2 - 2)/4 + \log 2} \ , \end{equation} where \begin{equation} \alpha(E)= \frac{1}{\pi} {\rm arctg}\left(\frac{- \sqrt{4-E^2}}{E}\right) +\frac{1}{4\pi} E\sqrt{4-E^2} \ , \label{medio} \end{equation} \begin{figure} \begin{center} \leavevmode \epsfxsize=5in \epsffile{fig2.eps} \caption{$-2<E<2$: The region of constant phase (dashed line) and the real axis (full line). The two small circles indicate the positions of the two saddle points, $q_+$ (right) and $q_-$ (left). In this case the correct path of integration $\gamma$ passes through both the saddle points. } \label{fig2} \end{center} \end{figure} At these values of the energy the spectrum of $M$ is partly contained in the negative semi-axis, so that a non-trivial fraction of the eigenvalues is negative. The interesting point is that the interplay between the two saddle points gives rise to the correct sign of the determinant. Indeed, it is easy to check that $\alpha(E)$ is exactly the mean fraction of negative eigenvalues of $M$, that is \begin{equation} \alpha(E)=\int_{-\infty}^0 d\lambda \; \rho(\lambda;E) \ . \end{equation} Note that the mechanism we have described above, given by the interplay between the two saddle points and the paths of integration, is crucial in order to obtain the correct result for the determinant of $M$. \section{The index distribution} The index ${\cal I}_M$ of a matrix $M$, defined as the number of its negative eigenvalues, can be computed from the following formula \cite{Jorge} \begin{equation} {\cal I}_M= \frac{1}{2 \pi i} \lim_{\epsilon\to 0} \ \left[ \log \det (M - i \epsilon) - \log \det (M + i \epsilon) \right] \ . 
\label{index} \end{equation} The meaning of this relation is quite clear: the function $f(z) = \log \det (M-z)$ has a cut on the real axis at each eigenvalue of $M$, so that by taking the limit in (\ref{index}) we cross as many cuts as there are negative eigenvalues. Moreover, this formula can simply be obtained by integrating the non-averaged spectrum (\ref{rho1}) from minus infinity up to zero. In the case we are considering, the index is a function of the energy $E$ and its average value is given by $N \alpha(E)$ (Eq.(\ref{medio})). We are interested in calculating the average probability distribution of the index at a given energy $E$, that is, the probability $P(K; E)$ of having a matrix $M$ with index ${\cal I}_M$ equal to $K$ at energy $E$, \begin{equation} P(K; E) = \overline{ \delta( K - {\cal I}_M(E) ) } \ . \label{pepe} \end{equation} In the following it will be important to distinguish between the {\it extensive} index $K$, which is an integer between $0$ and $N$, and the {\it intensive} one $k=K/N$, which takes values in the continuous interval $[0,1]$, and whose probability distribution is \begin{equation} p(k;E) = \overline{ \delta( k - {\cal I}_M(E)/N ) }= N P(Nk;E) \ . \end{equation} Note that the limit $N\to\infty$ is well defined only for $p(k;E)$. From Eqs.(\ref{index}) and (\ref{pepe}) we get \begin{equation} P(K;E) = \frac{1}{2 \pi} \int_{-\infty}^{\infty} d\mu \; e^{-i \mu K} \ \overline{G(\mu, E)} \ , \label{pi} \end{equation} where \begin{equation} G(\mu,E) = {\det}^{\mu/2\pi} (M - i \epsilon) \; {\det}^{-\mu/2\pi}(M + i \epsilon) \ . \label{dets} \end{equation} We now make use of the replica method to represent the powers of the determinants in $G(\mu,E)$ as analytic continuations of integer powers \begin{equation} {\det}^{\pm\mu/2\pi} (M \mp i \epsilon) = \lim_{n_\pm\to\pm\mu/2\pi} {\det}^{n_\pm} (M \mp i \epsilon) \ .
\end{equation} By introducing two different sets of $N$-dimensional fermionic vectors $(\bar\chi_\pm^r, \chi_\pm^r)$ with $r=1,\dots,n_\pm$, we can rewrite the determinants as \begin{equation} {\det}^{\pm\mu/2\pi} (M \mp i \epsilon) = \lim_{n_\pm\to\pm\mu/2\pi} \int D\bar\chi_\pm^r D\chi_\pm^r \exp \left( - \sum_{r=1}^{n_\pm} \bar{\chi}_\pm^r (M \mp i\epsilon) \chi_\pm^r \right) \ , \label{fer} \end{equation} where the sums over the matrix indices $i,j$ are hereafter always understood. We can write everything in a more compact fashion by introducing the Grassmann vectors $(\bar\psi_a,\psi_a)$, with $a = 1, \ldots, (n_+ + n_-)$, defined as (see also \cite{nomo}) \begin{equation} (\psi_1, \ldots, \psi_{(n_+ + n_-)}) \equiv (\chi_+^1, \ldots, \chi_+^{n_+}, \chi_-^1, \ldots, \chi_-^{n_-}) \ , \end{equation} together with the matrix \begin{equation} \epsilon_{ab} \equiv \mbox{} {\rm diag} ( \underbrace{\epsilon,\ldots,\epsilon}_{n_+}, \underbrace{-\epsilon,\ldots,-\epsilon}_{n_-}) \ . \end{equation} Note that both $\psi_a$ and $\epsilon_{ab}$ have replica dimension $n \equiv (n_+ + n_-) \to 0$. In this way we have for $G$ \begin{equation} G(\mu,E) = \lim_{n_\pm \to \pm \mu/2\pi} \int D\overline{\psi} D\psi \exp \left[ - \sum_{ab=1}^n \overline{\psi}_a \left( M \delta_{ab}- i \epsilon_{ab} \right) \psi_b \right] \ . \label{gg} \end{equation} The average of $G$ over the distribution of $J$ can be computed by a generalization of the procedure of the previous section, the main difference being the fact that we have an extra replica dimension, so that the variable $q$ must be replaced by a matrix $Q_{ab}$. For the sake of completeness the details of the computation are in Appendix A. We obtain \begin{equation} \overline{G(\mu, E)} = \int DQ \ e^{N S(Q,E)} \ , \label{smodel} \end{equation} with \begin{equation} S(Q,E) = - \frac{1}{2} {\rm Tr} \, Q^2 + \log \det (- \hat E -iQ) \ , \label{sq} \end{equation} and $\hat E_{ab}= E \delta_{ab} + i \, \epsilon_{ab}$. 
Note the similarity with equations (\ref{int1}) and (\ref{azione}). The matrix $Q$ is an $n \times n$ self-dual real-quaternion matrix \cite{elk,meka2} (see Appendix A). It has $2 n^2 - n$ degrees of freedom, and is diagonalized by transformations of the symplectic group $Sp(n)$. In (\ref{sq}) we see for the first time the role of $\epsilon$ as a symmetry breaking field. The matrix $\hat E$ has an upper block of size $n_+$ which contains $+i\epsilon$ and a lower one of size $n_-$ with $-i\epsilon$, so that the action is only invariant under $Sp(n_+) \times Sp(n_-)$, and the full invariance under $Sp(n)$ is only recovered in the limit $\epsilon \to 0$. However, how {\it exactly} the symmetry breaking affects the calculation will become clearer below. We can evaluate the integral (\ref{smodel}) by means of the steepest descent, or saddle point, method, which becomes exact for large $N$. The saddle point equation for the matrix $Q$ reads \[ Q = i (\hat{E} + i Q)^{-1} \ . \] This equation can be solved assuming for $Q$ a diagonal form, $Q_{ab} = z_a \delta_{ab}$. We have two different sets of equations, one set for the elements belonging to the upper block, $z_a^{(u)}$, and a second set for the elements of the lower block, $z_a^{(l)}$. The only difference between the two sets is, of course, the sign of $\epsilon$, \begin{eqnarray} z_a^{(u)} &=& i \, \left( E + i \epsilon + i z_a^{(u)} \right)^{-1} \qquad \mbox{(upper block)} \ , \nonumber \\ z_a^{(l)} &=& i \, \left( E - i \epsilon + i z_a^{(l)} \right)^{-1} \qquad \mbox{(lower block)} \ . \end{eqnarray} Each one of these two sets of equations has two solutions, $z_\pm^{(u)}$ for the upper block, $z_\pm^{(l)}$ for the lower one, namely \begin{eqnarray} z^{(u)}_\pm &=& \frac{i}{2} (E + i\epsilon) \pm \frac{1}{2} \sqrt{4 - (E + i\epsilon)^2} \ , \nonumber \\ z^{(l)}_\pm &=& \frac{i}{2} (E - i\epsilon) \pm \frac{1}{2} \sqrt{4 - (E - i\epsilon)^2} \ .
\end{eqnarray} For all values of the energy such that $|4-E^2| \gg \epsilon$ these solutions can be expanded in powers of $\epsilon$ and read \begin{eqnarray} z_\pm^{(u)} &=& q_\pm - \epsilon\left( \frac{1}{2} \pm i\,\frac{E}{2\sqrt{4-E^2}} \right) + O(\epsilon^2) \ , \nonumber\\ z_\pm^{(l)} &=& q_\pm + \epsilon\left( \frac{1}{2} \pm i\,\frac{E}{2\sqrt{4-E^2}} \right) + O(\epsilon^2) \ , \label{speq} \end{eqnarray} where $q_\pm$ are given in equation (\ref{qpm}). There are some important things to note here, related to the fact that the presence of $\epsilon$ crucially modifies the mutual relevance of the different saddle points. We have seen in the previous section that in the regime $-2 < E < 2$ the correct integration path $\gamma$ passes through both the saddle points $q_+$ and $q_-$ (see Fig.2). This is true also in the present case, when a value $\epsilon\neq 0$ is considered: for each $z_a$ the path $\gamma$ passes through $z_+$ and $z_-$ and in principle both the saddle points must be taken into account. However, when we look at the real part of the action $S$, we now discover that the contribution of one saddle point is exponentially dominant over the other by a factor $\exp(-N\epsilon)$. This is in contrast with the case of the previous section, where the real part of $S$ was the same in the two saddle points. The crucial point is that, due to the opposite sign of $\epsilon$ in the upper and lower blocks, the real part of the action is tilted in opposite ways in the two blocks and, as a consequence, the dominant saddle point becomes $z_+$ for the upper block and $z_-$ for the lower one. We now start to understand the way in which $\epsilon$ works as a symmetry breaking field: without $\epsilon$ the two saddle points have the same weight in the integral and we have to consider both of them. With $\epsilon$, the weights are modified in opposite ways for the upper and lower blocks.
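The tilting mechanism can be made explicit with a short numerical check (our own illustration, not part of the original argument): evaluating the action (\ref{azione}), with $E$ replaced by $E\pm i\epsilon$, at the exact saddle points, the real parts split by $\pm 2\pi\epsilon\rho_0(E)$ at order $\epsilon$, where $\rho_0(E)$ is the semi-circle density at $\lambda=0$; this is the suppression factor of Eq.(\ref{supp}).

```python
import numpy as np

def S(z, Ehat):
    # action S(q, E) with the energy shifted into the complex plane, Ehat = E +/- i*eps
    return -0.5 * z ** 2 + np.log(-Ehat - 1j * z)

def saddles(Ehat):
    r = np.sqrt(4.0 - Ehat ** 2 + 0j)
    return 0.5j * Ehat + 0.5 * r, 0.5j * Ehat - 0.5 * r    # z_+, z_-

E, eps = 0.5, 1e-4
rho0 = np.sqrt(4.0 - E ** 2) / (2.0 * np.pi)               # semi-circle density at lambda = 0

zp_u, zm_u = saddles(E + 1j * eps)                         # upper block (+i*eps)
zp_l, zm_l = saddles(E - 1j * eps)                         # lower block (-i*eps)

gap_u = (S(zp_u, E + 1j * eps) - S(zm_u, E + 1j * eps)).real
gap_l = (S(zp_l, E - 1j * eps) - S(zm_l, E - 1j * eps)).real
print(gap_u / (2 * np.pi * eps * rho0), gap_l / (2 * np.pi * eps * rho0))
```

Both ratios approach $+1$ and $-1$ respectively as $\epsilon\to 0$, confirming that $z_+$ dominates in the upper block and $z_-$ in the lower one.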
In order to apply the steepest descent method we must perform the limit $N\to\infty$ {\it before} the limit $\epsilon\to 0$, and this selects {\it just one} saddle point for each of the two blocks, completely suppressing the non-dominant contribution. As a result, when finally $\epsilon\to 0$, we have selected $q_+$ for the upper block and $q_-$ for the lower one. This is very reminiscent of what happens in statistical physics, where, in order to break a symmetry by means of an external field, the thermodynamic limit must be performed before sending the field to zero. On the other hand, for energies $|E| > 2$, the effect of $\epsilon$ is harmless: there is no qualitative change from the situation described in the previous section and the same kind of saddle point for the upper and lower block contributes to the integral. We can now proceed in our computation. We will focus first on the region $-2 < E < 2$, where the typical spectrum is not positive definite and where we thus expect a more interesting index distribution. According to the above discussion on the dominant saddle points, we must consider the following form for $Q_{\rm SP}$: \begin{equation} Q_{\rm SP} = \mbox{diag} ( \underbrace{z_+^{(u)}, \ldots, z_+^{(u)}}_{n_+}, \underbrace{z_-^{(l)}, \ldots, z_-^{(l)}}_{n_-} ) \label{sp} \ . \end{equation} This form is invariant under the unbroken group $Sp(n_+) \times Sp(n_-)$ of replica symmetry transformations, and in this sense we shall refer to it as a replica symmetric (RS) saddle point \cite{noiselle,meka1}. We note that Eq.(\ref{dets}) is invariant under the simultaneous action of complex conjugation and inversion of $\mu$, which after replicating becomes $n_\pm \to n_\mp$, and that our saddle point satisfies this invariance.
If we plug expression (\ref{sp}) into Eq.(\ref{smodel}), we obtain after taking the limit $\epsilon\to 0$ \begin{equation} \overline{G(\mu,E)} = \left(-E-iq_+ \right)^{N \mu/2\pi} \, \left(-E-iq_- \right)^{-N \mu/2\pi} \, e^{-N \, \mu \, (q_+^2-q_-^2)/4\pi} \, = \exp\left[i\mu N \alpha(E)\right] \ , \end{equation} where $\alpha(E)$ is the average fraction of negative eigenvalues given by Eq.(\ref{medio}). From (\ref{pi}) we finally get the probability $p(k,E)$ in the limit $N \to \infty$, \begin{equation} p(k,E)= \delta \left[ k - \alpha(E) \right] \ . \label{delta} \end{equation} This result is very reasonable, but also rather trivial: the probability distribution of the intensive index is a $\delta$-function peaked on its average value in the limit $N\to\infty$. In order to observe a non-trivial behaviour we need to consider the scaling with $N$, that is, the distribution of the index for large but finite $N$. This is particularly important if we are interested in the distribution of the extensive index, as for example in the case of disordered systems, where we want to know the change in the probability of different stationary points when variations of the index of order one, not of order $N$, are considered. To go beyond result (\ref{delta}), we must consider fluctuations around the saddle point (\ref{sp}). The general procedure is discussed in Appendix B. As expected there are three kinds of fluctuations: within the upper block, within the lower block, and those which mix the two blocks. 
Their corresponding eigenvalues and degeneracies are, \begin{equation} \begin{array}{lllcccl} \omega_{\rm u} & = 1 + z_+^{(u)} \, z_+^{(u)} & = (1 + q_+^2) - \frac{\epsilon \, q_+^2}{\sqrt{1 - E^2/4}} + O(\epsilon^2)& & & & d_{\rm u} = 2 n_+^2 - n_+ \ , \\ \omega_{\rm l} & = 1 + z_-^{(l)} \, z_-^{(l)} & = (1 + q_-^2) - \frac{\epsilon \, q_-^2}{\sqrt{1 - E^2/4}} + O(\epsilon^2)& & & & d_{\rm l} = 2 n_-^2 - n_- \ , \\ \omega_{\rm m} & = 1 + z_+^{(u)} \, z_-^{(l)} & = \frac{\epsilon}{\sqrt{1 - E^2/4}} + O(\epsilon^2) & & & & d_{\rm m} = 4 n_+ n_- \ . \\ \end{array} \label{omega} \end{equation} The first two sets of eigenmodes are massive modes, in the sense that their eigenvalues are $O(1)$. The third set consists of soft modes: for vanishing $\epsilon$ they would correspond to zero modes associated with the restoration of the $Sp(n_+ + n_-)$ symmetry; for small non-zero $\epsilon$ they become soft vibrations. Integrating over the fluctuations, we obtain \begin{equation} \overline{G(\mu, E)} = \omega_{\rm u}^{-(n_+^2 - n_+/2)} \, \omega_{\rm l}^{-(n_-^2 - n_-/2)} \, \omega_{\rm m}^{-2 n_+ n_-} \, \exp\left[i\mu N \alpha(E)\right] \ . \end{equation} In the replica limit $n_\pm\to\pm\mu/2\pi$ this quantity becomes \begin{equation} \overline{G(\mu, E)} = \exp\left[ i\mu N \alpha(E) - \frac{\mu^2}{2 \pi^2} \log \left( \frac{\sqrt{\omega_{\rm u} \omega_{\rm l}}}{\omega_{\rm m}} \right) + \frac{\mu}{4 \pi} \log \left( \frac{\omega_{\rm u}}{\omega_{\rm l}} \right) \right] \ . \label{gege} \end{equation} From Eq.(\ref{pi}) we obtain the distribution for the extensive and intensive index for finite but large $N$: \begin{eqnarray} P(K,E) &=& \sqrt{\frac{1}{2 \pi \Delta(E)}} \exp \left( - \frac{\left[ K - N \alpha(E) + \beta(E)\right]^2} {2 \Delta(E)} \right) \ , \label{pK} \\ p(k,E) &=& \sqrt{\frac{N^2}{2 \pi \Delta(E)}} \exp \left( - \frac{N^2\left[ k - \alpha(E) + \beta(E)/N \right]^2} {2 \Delta(E)} \right) \ .
\label{pk} \end{eqnarray} These are Gaussian distributions peaked on the average value $\alpha(E)$. Indeed the shift, \begin{equation} \beta(E)=\frac{1}{2\pi}{\rm arctg}\left( \frac{E}{\sqrt{4-E^2}} \right) \ , \label{shift} \end{equation} is of order one and is not relevant at large enough values of $N$. The variance $\Delta(E)$ is given by \begin{equation} \Delta(E) = \frac{1}{\pi^2} \log \left( \frac{\sqrt{\omega_{\rm u} \omega_{\rm l}}}{\omega_{\rm m}} \right) \ , \end{equation} that is \begin{equation} \Delta(E) = \frac{1}{\pi^2} \log \left[ 2 \pi^2 \epsilon^{-1} \rho_0(E)^2 \right] \ , \label{var} \end{equation} where we have defined $\rho_0(E) \equiv \rho(\lambda=0; E)$ (see Eq.(\ref{semi})). This result for the variance can also be obtained by the method of orthogonal polynomials, where $\epsilon$ plays the role of a high frequency cutoff \cite{ap2,ap4,review}. The fact that expression (\ref{var}) still depends on $\epsilon$ can seem rather unphysical, especially when we consider the fact that the limit $\epsilon\to 0$ has to be performed. However, we have to remember that we are looking at finite $N$ corrections, and this very fact makes the parameters $\epsilon$ and $N$ no longer independent. In this way the presence of $\epsilon$ translates into a more physical $N$ dependence and this allows us to compute the scaling of the index distribution with the matrix size $N$. Before discussing the result we have obtained for the index distribution, we therefore have to address the problem of the relation between $\epsilon$ and $N$. There are mainly two different reasons why $\epsilon$ and $N$ are related. First, as we have previously noted, there is a precise interplay between the two limits, $N\to\infty$ and $\epsilon\to 0$, when the saddle point approximation is used in order to solve integral (\ref{smodel}): the symmetry breaking due to $\epsilon$ works only if $\epsilon\to 0$ {\it after} $N\to\infty$, as in any thermodynamic calculation.
If $N$ is kept finite, we need a value of $\epsilon$ big enough to guarantee the dominance of one saddle point over the other. We have seen that the role of $\epsilon$ is to modify the real part of the action in such a way that along the integration path $\gamma$ one saddle point is weighted more than the other. However, if $\epsilon$ is too small, the non-dominant saddle point may also give a non-negligible contribution to the integral. To avoid this, we need the secondary contribution to be suppressed also at finite $N$ and to vanish when the limit $N\to \infty$ is considered. The suppression factor is given, at order $\epsilon$, by \begin{equation} e^{- N [S(z_+^{(u)}) - S(z_-^{(u)}) ]} = e^{- 2 \pi N \epsilon \rho_0(E) } \ , \label{supp} \end{equation} for the upper block (for the lower block an analogous expression is valid). In order for the suppression factor to vanish we must have \begin{equation} \epsilon N \to \infty \ , \ \ \ \ \ \ \ \ N \to \infty \ . \label{grande} \end{equation} This imposes a lower bound for $\epsilon$ when $N$ is finite. A natural general choice is therefore to assume \begin{equation} \epsilon=\frac{1}{N^{1-\delta(N)}} \ , \label{scale} \end{equation} where the exponent $\delta(N)$ has to satisfy the relation $\delta(N) \log N \to \infty$. The simplest possibility is, of course, a constant value of $\delta$. However, as we shall argue immediately below, this would not be consistent with the second condition we have to impose on $\epsilon$. The second bound for $\epsilon$ comes from the following observation. When we perform our calculation with a finite value of $N$ and of $\epsilon$, there are of course two different kinds of corrections to the asymptotic exact result: the first kind is related to the saddle point approximation and brings corrections which scale as inverse powers of $N$. The second is related to the non-zero value of $\epsilon$ and brings corrections which scale as powers of $\epsilon$.
Consistency requires that in the final result the error introduced by considering a finite value of $\epsilon$ must be of the same order as the terms we discard in the expansion in $1/N$. It can easily be shown that the corrections to the index distribution (\ref{pK}) for finite $\epsilon$ are of order $\epsilon^2$, that is \begin{equation} \overline{G(\mu)}=\exp\left[ i\mu N\alpha(E) + i\mu \beta(E) - \frac{\mu^2}{2}\,\Delta(E) + O(\epsilon^2) \right] \ . \label{lbound} \end{equation} On the other hand, by considering the Gaussian fluctuations around the saddle point, we are discarding terms of order $1/N^2$ in the exponent of (\ref{lbound}). Thus, we must impose the condition \begin{equation} \epsilon^2 \sim \frac{1}{N^2} \ . \label{chico} \end{equation} Equation (\ref{chico}) is consistent with equations (\ref{scale}) and (\ref{grande}) only if, \begin{equation} \delta(N)\to 0 \ , \ \ \ \ \ \delta(N)\log N\to \infty \ , \ \ \ \ \ \ N\to\infty \ . \label{patty} \end{equation} In this way we finally get for the variance the result, \begin{equation} \Delta(E,N) = \frac{1}{\pi^2} \log \left[ 4 \pi^2 N^{(1-\delta(N))} \rho_0(E)^2 \right] = \frac{1-\delta(N)}{\pi^2}\log N + \frac{2}{\pi^2}\log(2\pi\rho_0) \ , \label{varn} \end{equation} where we have taken $\epsilon=1/(2N^{1-\delta})$, the factor $1/2$ being consistent with equation (\ref{supp}) at $E=0$. This result agrees very well with numerical simulations: in Fig.3 we plot the variance $\Delta$ as a function of $\log N$, obtained by exact numerical diagonalization. A linear fit gives \begin{equation} \Delta = \frac{a}{\pi^2} \log N + \frac{b}{\pi^2} \log(2\pi\rho_0) \ , \ \ \ \ \ a=1.005\pm 0.006 \ \ , \ \ b= 1.993\pm 0.003 \ . \end{equation} This same scaling for the variance has also been found in \cite{ap2}, where a completely different method based on the invariance properties of the Gaussian Orthogonal Ensemble and the dominance of intrinsic binary correlations was used.
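The scaling (\ref{varn}) is also easy to reproduce with a small Monte Carlo experiment (our illustration; the matrix dimension and sample size are arbitrary): for $E=0$, where $\rho_0 = 1/\pi$, the predicted variance is $\Delta \simeq (\log N + 2\log 2)/\pi^2$, and a direct diagonalization of a few thousand GOE samples gives a compatible value.

```python
import numpy as np

rng = np.random.default_rng(2)
N, samples, E = 80, 3000, 0.0

idx = np.empty(samples)
for s in range(samples):
    A = rng.standard_normal((N, N))
    J = (A + A.T) / np.sqrt(2.0 * N)
    idx[s] = np.sum(np.linalg.eigvalsh(J - E * np.eye(N)) < 0.0)  # index K

rho0 = np.sqrt(4.0 - E ** 2) / (2.0 * np.pi)                      # = 1/pi at E = 0
delta_th = (np.log(N) + 2.0 * np.log(2.0 * np.pi * rho0)) / np.pi ** 2
print(idx.mean(), idx.var(), delta_th)   # mean ~ N/2; both variances ~ 0.58
```

Note that the fluctuations of $K$ are only of order $\sqrt{\log N}$, so even at $N=80$ the empirical variance already sits very close to the asymptotic prediction.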
In Appendix B we show in detail that the contributions of the other possible saddle point solutions of the whole integral (\ref{smodel}) to the index distribution are smaller by inverse powers of $\log N$ in this energy region, therefore the scaling with $N$ is correctly reproduced by equations (\ref{pK}),(\ref{pk}) and (\ref{varn}). \begin{figure} \begin{center} \leavevmode \epsfxsize=3in \epsffile{fig3.eps} \caption{The variance $\Delta$ as a function of $\log N$ for $E=0$, obtained by means of exact numerical diagonalization on the Gaussian Orthogonal Ensemble. The full line is the linear fit.} \label{fig3} \end{center} \end{figure} We can finally analyze the significance of our result, equations (\ref{pK}),(\ref{pk}) and (\ref{varn}), in the energy regime $-2<E<2$. What we see is that the variance of the probability distribution of the {\it intensive} index goes to zero in the limit $N\to\infty$ and this was quite expected, given our former result (\ref{delta}). On the other hand, the variance of the distribution of the {\it extensive} index diverges logarithmically for $N \to \infty$. The meaning of this result is the following: on the one hand the probability of finding a matrix with an index density different from the average one, that is with an extensive index ${\cal I} \sim N\alpha + O(N)$, is zero in the limit $N\to\infty$. But, on the other hand, the probability of having a matrix whose index differs from the average one by a number of negative eigenvalues of order one, i.e. ${\cal I} \sim N\alpha + O(1)$, is exactly the same as the probability of having a matrix with the average index, in the limit $N\to\infty$. As we shall see in the next section, this fact has some very interesting physical consequences in the context of disordered systems. Let us now look at the other energy regions. First of all we note that the derivation of equations (\ref{speq}), (\ref{omega}) and (\ref{supp}) holds as long as the energy is such that $\rho_0(E)$ is of $O(1)$.
But this condition breaks down when the energy gets close to $\pm 2$ and $\rho_0(E) \ll 1$. In this region the procedure previously adopted to compute the index distribution has to be modified. Indeed, when $\rho_0(E)$ becomes too small the suppression mechanism (\ref{supp}) becomes inefficient, and the saddle point (\ref{sp}) is no longer the only one contributing to the integral. At some point the excitations which were treated as soft modes in (\ref{omega}) must be considered as zero modes connecting equivalent saddle points: there exists a manifold $Sp(n_+ + n_-) / Sp(n_+) \times Sp(n_-)$ of saddle points and the original replica symmetry under $Sp(n_+ + n_-)$ is restored. At this stage $\epsilon$ no longer plays any role and can be taken to zero. The massive modes are the same as in Eq. (\ref{omega}), and after integrating over them, and exactly over the degrees of freedom associated with the zero modes, we obtain (up to trivial factors in the replica limit) \begin{equation} \overline{G(\mu, E)} = N^{2 n_+ n_-} {\cal V}^{n_+}_{n_+ + n_-} \, \omega_{\rm u}^{-(n_+^2 - n_+/2)} \, \omega_{\rm l}^{-(n_-^2 - n_-/2)} \, \exp\left[i\mu N \alpha(E)\right] \ , \end{equation} where ${\cal V}^{n_+}_{n_+ + n_-}$ corresponds to the volume of the manifold of saddle point solutions (see Appendix B). At this point one has to analytically continue the previous expression for $n_{+} \to \mu/2\pi , \, n \to 0$. The volume ${\cal V}^{n_+}_{n_+ + n_-}$ is finite for $n_+=0$ and it is zero for positive integers \cite{meka1}. Its analytic continuation is an oscillatory function of $n_+$, with exponentially increasing amplitude \cite{zirn}, so that the presence of this factor in the former equation makes the index distribution non-Gaussian.
However, as long as this analytic continuation is finite for non-integer $n_+\sim 1$, the distribution can be approximated by a Gaussian with variance \begin{equation} \Delta(E \sim \pm 2) = \frac{1}{\pi^2} \log \left[ 8 \pi^4 \, N \, \rho_0(E)^3 \right] \ . \label{delta2} \end{equation} We can see from (\ref{delta2}) that the variance still scales as $\log N$. However, when $|E-2| \sim 1/N^{\frac{2}{3}}$, we have $\rho_0(E)\sim 1/N^{\frac{1}{3}}$ and a further crossover takes place: the variance $\Delta(E)$ becomes of order one, meaning that the index distribution is dramatically more peaked around its typical value as we approach $E=\pm 2$. Note also that when $E \sim -2 + 1/N^{\frac{2}{3}}$ the typical index $N\alpha(E)$ becomes of order 1, meaning that in this region matrices with $O(1)$ negative eigenvalues are dominant. Summarizing, in the energy regime where the average number of negative eigenvalues is of order one, the fluctuations around the mean value become of order one too. When the energy is exactly at the threshold values $E = \pm 2$ we have a special case since the saddle point equations for the eigenvalues have a single degenerate solution, and the harmonic terms in the expansion around the saddle point vanish. It is not difficult to show that the distributions here become \begin{equation} P(K,E \to -2^+) = N^{-1} \, \delta(K) \ , \;\;\;\;\;\;\; P(K,E \to 2^-) = N^{-1} \, \delta(K - N) \ . \label{limit} \end{equation} The calculation in the regions $|E| > 2$ is completely straightforward since $\epsilon$ plays no role from the beginning. As mentioned before, the same kind of saddle point has to be used in both blocks, so that we have \begin{equation} Q_{\rm SP} = \mbox{diag} ( \underbrace{z_\pm^{(u)}, \ldots, z_\pm^{(u)}}_{n_+}, \underbrace{z_\pm^{(l)}, \ldots, z_\pm^{(l)}}_{n_-} ) \label{spout} \ , \end{equation} where the plus (minus) sign corresponds to negative (positive) energies.
There is only one kind of massive fluctuation with degeneracy $2 n^2 - n$, which goes to zero in the replica limit, and thus the integration over fluctuations gives a trivial prefactor. The final result for the distribution of $K$ is \begin{equation} P(K,E) = \left\{ \begin{array}{lc} N^{-1} \, \delta(K) & E < -2 \\ N^{-1} \, \delta(K - N) & E > 2 \ , \\ \end{array} \right. \label{below} \end{equation} which coincides with the limiting behaviour (\ref{limit}) of the distribution in the region $-2<E<2$. Thus, while in the energy region $-2<E<2$ values of the index with an $O(1)$ difference from the typical one have a finite probability, here the index distribution is so sharply peaked on the typical value that even small changes in the index have zero probability. \section{An application to disordered systems} In this section we consider a mean-field spin-glass model that has been extensively studied in recent years and whose thermodynamical as well as dynamical features are very well known, namely the $p$-spin spherical model \cite{crisa1,crisatap,crisa2,kpz,ck1}. Our aim is to use the results of the calculation we have carried out in the previous section in order to gain a better understanding of the statistical and geometrical properties of the energy landscape of this model. This problem is relevant in itself, because both the static properties and the peculiar off-equilibrium dynamical behaviour of mean field spin-glasses, and in particular of this model, are known to be deeply related to the distribution of the minima and of the saddles of the Hamiltonian \cite{spin-glass,laloux,noiselle,nomo}. Moreover, it is now commonly accepted that the $p$-spin spherical model shares many common features with structural glasses, which are presently one of the major challenges for statistical mechanics.
Indeed, notwithstanding the completely different form of the Hamiltonians, some structural glasses (in particular fragile glasses) and the $p$-spin spherical model have a very similar structure of the energy landscape \cite{francesi}. Therefore, a thorough investigation of the energy landscape of the $p$-spin spherical model is important also for a better understanding of structural glasses. As already stated in the Introduction, knowing the index distribution of the Hessian at various energies is equivalent to knowing the fluctuations in the stability of the energy surface. In other words, the index distribution tells us what the dominant stationary points of the Hamiltonian (or saddles) are at a given energy and, more importantly, what the probability distribution around the typical saddles is, thus providing insight into the mutual entropic accessibility of different stationary points. This is what we are going to describe in this last section. The reason why the $p$-spin spherical model is particularly appropriate for an application of the above calculation and concepts is the following: when we look at the stationary points of the Hamiltonian of this system, we find that the Hessian matrix $M$ at such stationary points behaves as a Gaussian random matrix of the same kind as the ones considered in the calculations above. More specifically, if we classify the stationary points of the Hamiltonian in terms of their energy density $E$, we find that the Hessian $M(E)$ at these stationary points is a random matrix of the form (see for instance \cite{kpz}) \begin{equation} M_{ij}(E) = J_{ij} - E \, \delta_{ij}, \end{equation} where $J_{ij}$ is an $N$-dimensional real and symmetric random matrix with the same Gaussian distribution as (\ref{dist}), and where $N$ is the size of the system.
The spectrum of the Hessian at the stationary points is therefore \begin{equation} \rho(\lambda;E)= \frac{1}{2\pi} \sqrt{E_{th}^2 - (\lambda+E)^2} , \end{equation} where $E_{th}$ is the so-called {\it threshold energy}, which depends on the parameters of the model (in the previous sections it was $|E_{th}|=2$). Given the particular shape of the Hessian, we can completely disregard the details of the $p$-spin spherical model and take the results obtained in our calculation as the starting point, interpreting them in terms of probability distributions of the stationary points of the Hamiltonian. Let us begin our geometric analysis of the energy landscape from very low energies. When $E<-|E_{th}|$ the semi-circle is entirely contained in the positive semi-axis and the average determinant of the Hessian is positive: this is the region dominated by minima, as the index distribution (\ref{below}) shows. Moreover, as we have already noted in the previous section, the probability of finding a stationary point with an index different from the typical one (i.e. $0$) is zero. Minima are strongly dominant in this energy regime. A more careful analysis \cite{noiselle} shows that even in this regime there are saddles with non-zero index, but the probability of these objects is exponentially small in $N$, that is \begin{equation} P(K,E) \sim e^{-KN\Omega(E)} \ \ \ \ \ \ K=1,2,3, \dots \ . \end{equation} This result is obtained by considering non-symmetric contributions to the saddle point equations (see \cite{noiselle} and Appendix B) and, consistently with equation (\ref{below}), it gives a contribution too small to be captured by simply analyzing fluctuations around the dominant saddle point. The above result shows that at low energies minima are exponentially dominant over saddles of order one, and even more dominant over saddles with extensive index.
In this sense we shall call this region the {\it decoupling regime}, since at any energy below the threshold only one kind of stationary points, namely minima, dominates. When we raise the energy, we finally arrive at $E=-|E_{th}|$: here the semi-circle touches zero and the decoupling between different stationary points no longer holds. Indeed, it can be proved \cite{noiselle} that $\Omega(E_{th})=0$, meaning that at the threshold energy minima and saddles of order one have the same probability. Thanks to the calculation of the previous section we are now in the position to answer the following question: what happens {\it above} the threshold energy? From a simple inspection of the semi-circle law it is clear that above the threshold saddles become important, since many negative eigenvalues appear and the average index $N\alpha(E)$ is non-zero. Yet, in order to have information on the degree of decoupling of the stationary points, the typical index $N\alpha(E)$ alone is not enough. The reason is the following: the knowledge of the typical index does not tell us whether at that same energy other stationary points, different from the typical ones, do or do not have non-zero probability. In this sense the mutual entropic accessibility of different stationary points is encoded in the index distribution $P(K,E)$, which reveals to what extent the typical saddles are dominant over the non-typical ones. From equation (\ref{pK}) we see not only that in this regime the dominant stationary points are saddles of order $N$, but also that the probability of finding a minimum is of order $e^{-N^2}$. The decoupling between minima and dominant saddles is therefore much more dramatic than the one we found below the threshold.
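The random-matrix statements used here, a mean index tracking $N\alpha(E)$ with fluctuations of order $\log N$ rather than $N$, are easy to check by direct sampling. The sketch below (the matrix size $N$, the number of samples, and the standard GOE normalization with semicircle support $[-2,2]$ are our own illustrative choices, not taken from the text) counts eigenvalues below a given energy $E$:

```python
import numpy as np

rng = np.random.default_rng(0)

def sampled_index(N, E, n_samples=200):
    """Index K = #{eigenvalues of J below E} for real symmetric Gaussian
    matrices J normalized so the spectrum follows the semicircle on [-2, 2]."""
    ks = np.empty(n_samples)
    for i in range(n_samples):
        A = rng.normal(size=(N, N))
        J = (A + A.T) / np.sqrt(2 * N)   # GOE scaling: semicircle edge at +/- 2
        ks[i] = np.count_nonzero(np.linalg.eigvalsh(J) < E)
    return ks

def alpha(E):
    """Typical intensive index: integral of the semicircle density up to E."""
    return 0.5 + E * np.sqrt(4.0 - E**2) / (4 * np.pi) + np.arcsin(E / 2) / np.pi

N, E = 200, -0.5
ks = sampled_index(N, E)
print(ks.mean() / N, alpha(E))  # sample mean of K/N tracks alpha(E)
print(ks.var())                 # fluctuations stay O(log N), far below O(N)
```

The sampled variance comes out of order one for $N=200$, growing only logarithmically with $N$, which is the behaviour the next paragraph exploits.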
On the other hand, because of the divergence of the variance $\Delta$ with $N$ (equation (\ref{varn})), we see that there is a mixing among saddles with the same {\it intensive} index: the probability of having a saddle whose index differs from the average by a number of order one is the same as the probability of the typical saddles \cite{nota}. In other words, the main result is that there is no decoupling among saddles with the same intensive index, so that a mixing of different stationary points occurs, while a decoupling still exists between dominant saddles and minima. Summarizing, we can therefore distinguish two energy regimes where the probability distribution of the stationary points, and therefore the geometric structure of the energy landscape, is very different: a decoupled regime for $E<E_{th}$ and a mixed regime for $E>E_{th}$. Interestingly enough, the threshold energy $E_{th}$ is exactly the asymptotic energy where a purely dynamical transition occurs: below a critical temperature $T_d$, ergodicity is broken and the system is no longer able to visit the entire phase space in its time evolution, remaining confined to an energy level higher than the equilibrium one. This `dynamical energy' is equal to $E_{th}$ \cite{crisa2,kpz,ck1}. This suggests relating the information we have on the distribution of the stationary points, following from the index distribution, to the dynamical physical behaviour of the system. Above $T_d$ the equilibrium energy $E$ of the system is higher than the threshold value $E_{th}$ and therefore belongs to the mixed regime: the equilibrium landscape explored by the system is dominated by saddles of order $N$ which, as we have shown, are all equally relevant up to variations of the index of order one. This means that all these unstable stationary points are equally accessible to the system in its time evolution.
As $T_d$ is approached the equilibrium energy $E$ gets closer and closer to $E_{th}$, and the properties of the equilibrium landscape change according to the behaviour of $P(K,E)$ we have discussed in the previous section: when $E\sim -|E_{th}| + 1/N^{2/3}$, saddles with index of order one become the most relevant and the variance of the index distribution is now finite. This means that minima start having a finite probability in this energy regime. The range of temperatures where this behaviour takes place is of order $1/N^{2/3}$ and shrinks to zero in the thermodynamic limit. Below $T_d$, the equilibrium energy belongs to the decoupled regime, that is $E<E_{th}$: minima are now dominant and saddles of any order have exponentially vanishing probability. We can therefore interpret $T_d$ as the temperature where a geometric transition occurs from a regime of strong mixing of the stationary points to a regime of equally strong decoupling. \section{Conclusions} In this paper we computed the average index distribution for an ensemble of Gaussian random matrices. We find a result which is in excellent agreement with exact numerical diagonalization. This computation is, in our opinion, an interesting example where the fermionic replica method, together with a careful asymptotic expansion of the integrals, gives correct results. We hope that the present work can therefore contribute to clarifying the role of the replica method in the context of RMT. Besides, and this was our main purpose, the index distribution provides a very useful tool for investigating the geometric structure of the energy landscape in disordered systems. In the previous section we applied this tool to the simple case of the $p$-spin spherical model and discussed the physical consequences of our results. In general, the task of computing the distribution of the index of the Hessian is not as simple as in the $p$-spin model.
The main reason is that the Hessian usually does not behave as a Gaussian random matrix, because, as noted in the Introduction, its distribution is determined both by the distribution of the quenched disorder and by the distribution of the configurations. However, the same procedure we adopted in this paper can also be applied to these more complicated cases, with the appropriate modifications: to compute the index distribution at a given energy $E$, one has to average over the distribution of the disorder and integrate over the relevant configurations belonging to the manifold of energy $E$ \cite{noiselle}. This is the reason why the method presented in this paper is particularly suitable for this task, since it addresses the problem without assuming any particular form for the distribution of the Hessian. Finally, there have recently been some attempts to find a connection between the occurrence of a thermodynamical phase transition and the change in the topology of the configuration space visited by the system at equilibrium \cite{lapo1}. For various non-disordered models which present a second order phase transition it has been shown via numerical simulations that the fluctuations of the curvature of the configuration space exhibit a singular behaviour at the transition point. This is similar to the behaviour described in the previous section for the $p$-spin spherical model, where the average fluctuations of the index (\ref{var}) at the equilibrium energy undergo a dramatic change as the dynamical transition is approached \cite{noiselle}. This suggests, first of all, that a connection between thermodynamical behaviour and the topology of the configuration space exists in disordered systems as well.
Besides, the case of the $p$-spin is also peculiar in this sense: it presents a static phase transition which is thermodynamically of second order, but discontinuous in the order parameter \cite{crisa1}, and it exhibits a purely dynamical transition at a higher temperature \cite{crisa2}. As we have shown, in this case a dramatic change of geometrical properties occurs at the dynamical transition, indicating that a more complex situation probably holds for disordered systems which present this sort of behaviour. \acknowledgements It is a pleasure to thank Jorge Kurchan for many suggestions and discussions. We also thank Kurt Broderix for a careful reading of the manuscript and for some key remarks on a preliminary version of this work. Finally, we gratefully acknowledge the kind hospitality of the Department of Physics of the ENS of Lyon, where part of this work was done. A.C. and I.G. were supported by EPSRC Grant GR/K97783, and J.P.G. by EC Grant ARG/B7-3011/94/27.
\section{Introduction} An analysis of the spectra of particles produced in heavy ion collisions at CERN and AGS indicates that the excited matter created in these reactions spends most of its lifetime close to the QCD phase transition \cite{Sta_96}. Under these circumstances, we cannot expect that the phenomena observed in these collisions can be understood in terms of a perturbative plasma of quarks and gluons occupying the reaction zone. Instead, we have to address the non-perturbative aspects of QCD associated with the phase transition itself. Furthermore, it is these phenomena that can really teach us something about the structure of the QCD vacuum at zero temperature and baryon density. In this contribution, I will try to summarize our current theoretical understanding of the QCD phase transition. In particular, I will report on recent progress in understanding the chiral phase transition in the instanton liquid model. Lattice results on the QCD phase transition are discussed in E.~Laermann's contribution, and I will not go into detail here. Nevertheless, I would like to emphasize one important point. QCD has a very rich phase structure as a function of the number of colors, the number of flavors and their masses. In particular, pure gauge QCD has a first order deconfinement transition, while QCD with $N_f=3$ massless flavors has a first order chiral transition, see figure \ref{fig_phase}. These two transitions are not connected. When the mass of the fermions is varied from $m=0$ to $m=\infty$ (corresponding to the pure gauge theory), the two first order transitions are separated by a region in the phase diagram in which there is no true phase transition, just a rapid crossover. Indeed, the order of the phase transition for real QCD, with two light and one intermediate-mass flavor, is still not established with certainty. The distinction between the pure gauge deconfinement and the light quark chiral phase transition is not just purely academic.
In fact, the two transitions have completely different energy scales. The chiral phase transition takes place at $T_c\simeq 150$ MeV, while the pure gauge transition occurs at $T_c\simeq 260$ MeV (where the scale is set by the rho meson mass or the string tension). In terms of energy density (and tax dollars!), this is an order-of-magnitude difference. Also, the latent heat associated with the pure gauge transition is rather large, on the order of $1.5\,{\rm GeV}/{\rm fm}^3$, while the chiral decondensation energy is $\Delta\epsilon\simeq 250\,{\rm MeV}/{\rm fm}^3$. This has important consequences for the non-perturbative gluon condensate. While most of the gluon condensate is removed across the pure gauge transition, there is evidence that a significant part of it remains above the chiral transition. \begin{figure}[t] \begin{minipage}[b]{80mm} \epsfxsize=7cm \epsffile{kanaya_fig.ps} \end{minipage} \hspace{\fill} \begin{minipage}[b]{75mm} \caption{\label{fig_phase} Schematic phase diagram for QCD in the $m_{u,d}-m_s$ plane, from \protect\cite{IKK_96}. The points show the type of transition found in lattice simulations with Wilson fermions. Note that QCD (*) appears to be in the first order region, while earlier simulations with staggered fermions placed QCD in the crossover region \protect\cite{BBC_90}.} \end{minipage} \end{figure} \section{Vacuum engineering} In Monterey T.D. Lee reminded us that the ultimate goal of relativistic heavy ion collisions is vacuum engineering, the removal of the quark and gluon condensates present in the $T= \mu=0$ vacuum. Here in Germany, vacuum engineering has a long tradition. More than three hundred years ago, Otto v.~Guericke, after inventing a suitable pump, demonstrated the existence of air pressure by evacuating a pair of hollow semi-spheres. In QCD, we have to overcome the non-perturbative vacuum pressure in order to produce a perturbative vacuum state.
The vacuum pressure is determined by the trace anomaly \begin{eqnarray} \label{trace_anom} p = -\frac{1}{4}\langle T_{\mu\mu} \rangle \;=\; \frac{b}{32\pi} \left\langle\alpha_s G^2 \right\rangle - \frac{1}{4} \sum_f m_f\langle \bar q_fq_f\rangle, \end{eqnarray} where $b=11N_c/3-2N_f/3$ is the first coefficient of the beta function. Using the canonical values of the condensates, this relation gives $p=500\,{\rm MeV}/{\rm fm}^3$. At low temperature, the $T$-dependence of the condensates is determined by chiral perturbation theory \cite{Leu_96} \begin{eqnarray} \langle\bar qq\rangle &=& \langle\bar qq\rangle_0 \left\{ 1- \frac{N_f^2-1}{3N_f}\left(\frac{T^2}{4f_\pi^2}\right) - \frac{N_f^2-1}{18N_f^2} \left(\frac{T^2}{4f_\pi^2}\right)^2 + \ldots \right\}, \\ \langle \alpha_s G^2\rangle &=& \langle \alpha_s G^2\rangle_0 -\frac{4\pi^4}{405b}N_f^2(N_f^2-1)\frac{T^8}{f_\pi^4} \left(\log\left(\frac{\Lambda}{T}\right)-\frac{1}{4}\right) -\ldots\; . \end{eqnarray} To leading order, the $T$-dependence of the quark condensate has a very simple interpretation in terms of the number of thermal pions times the pion matrix element of the quark condensate, $\langle \pi|\bar qq|\pi \rangle = -\langle\bar qq\rangle/f_\pi^2$ \cite{GL_87}. This means that pions act as a vacuum cleaner for the chiral condensate, each thermal pion removes $\sim 5$ $\bar qq$ pairs. On the other hand, the gluon content of the pion is rather small. Naively extrapolating these results to larger temperatures, one expects chiral symmetry restoration to occur at $T\simeq 260$ MeV, while the gluon condensate is essentially $T$-independent. This estimate is not strongly affected by higher loop corrections in ChPT. Clearly, chiral perturbation theory was not meant to be used near the transition. Nevertheless, the result indicates that something more than just pions is needed to restore the symmetry at the expected temperature.
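The $T\simeq 260$ MeV figure follows from asking where the extrapolated condensate would vanish; a quick numerical sketch, assuming $N_f=2$ and the standard value $f_\pi = 93$ MeV (which the text does not state explicitly):

```python
import math

f_pi = 93.0  # MeV, pion decay constant (assumed standard value)
Nf = 2

# Leading order: <qq>(T)/<qq>_0 = 1 - c1*x, with x = T^2/(4 f_pi^2)
c1 = (Nf**2 - 1) / (3 * Nf)
T_lo = 2 * f_pi / math.sqrt(c1)

# Including the (T^2/4f_pi^2)^2 term: solve 1 - c1*x - c2*x^2 = 0
c2 = (Nf**2 - 1) / (18 * Nf**2)
x = (-c1 + math.sqrt(c1**2 + 4 * c2)) / (2 * c2)
T_nlo = 2 * f_pi * math.sqrt(x)

print(T_lo, T_nlo)  # ~263 MeV and ~246 MeV, i.e. T ~ 260 MeV as quoted
```

The two-loop term lowers the extrapolated temperature only mildly, consistent with the remark that the estimate is not strongly affected by higher loop corrections.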
It also shows that even if the chiral expansion is apparently convergent, the neglected (exponentially small) terms need not be small. The thermodynamics of the phase transition is usually described in terms of a bag model equation of state. This EOS directly incorporates the idea that the transition takes place as soon as the perturbative pressure from quarks and gluons can overcome the non-perturbative bag pressure in the QCD vacuum. For $N_f=2$ and $B=500\,{\rm MeV}/{\rm fm}^3$ from (\ref{trace_anom}) this gives the estimate $T_c=[(90B)/(37\pi^2)]^{1/4}\simeq 180$ MeV. Analyzing lattice thermodynamics in more detail, one finds that this estimate is too large, because only $\sim 1/2$ of the bag pressure is removed across the phase transition \cite{Den_89,AHZ_91,KB_93}. An even simpler approach to estimate the critical temperature is based on the idea that the transition occurs when thermal hadrons begin to overlap. The density of hadrons becomes of order $1\,{\rm fm}^{-3}$ near $T\simeq 200$ MeV. This is reasonably close to $T_c$ (although, if $T_c$ is really 150 MeV then $n_{had} (T_c)\simeq 0.15\,{\rm fm}^{-3}$). Nevertheless, this kind of argument is not correct in general. A good example is pure gauge QCD, where $T_c \simeq 250$ MeV, but the lightest state is at $\sim 1.7$ GeV, so the particle density below $T_c$ is small, on the order of $n\simeq 0.005\,{\rm fm}^{-3}$. The lesson is that a first order transition occurs if the high $T$ (QGP) and low $T$ (hadronic) phases have the same free energy; the phase transition point cannot be inferred from looking at the low temperature phase only. \section{QCD near $T_c$} Chiral perturbation theory is based on a non-linear effective lagrangian in which the $\sigma$, the chiral partner of the pion, is eliminated. Near $T_c$, this description is not expected to be useful.
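Before turning to the behavior near $T_c$, the bag-model estimate $T_c=[(90B)/(37\pi^2)]^{1/4}$ quoted above is simple arithmetic; a sketch with the inputs taken from the text (only the standard conversion constant $\hbar c = 197.327$ MeV fm is added here):

```python
import math

hbarc = 197.327       # MeV fm, conversion constant
B = 500.0 * hbarc**3  # bag pressure 500 MeV/fm^3 expressed in MeV^4
g = 37.0              # effective quark and gluon degrees of freedom, N_f = 2

Tc = (90 * B / (g * math.pi**2)) ** 0.25
print(Tc)  # ~175 MeV, the rough 180 MeV estimate quoted above
```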
However, in the vicinity of a second order phase transition, universality implies that critical phenomena are governed by an effective Landau-Ginzburg action for the chiral order parameter. In the case of QCD with two (massless) flavors the order parameter is a four-vector $\phi^a=(\sigma,\vec\pi)$. Universality makes definite predictions for the critical behavior of $\langle\bar qq\rangle$, the chiral susceptibility and the specific heat near $T_c$ \cite{PW_84,Wil_92,RW_93}. At the moment, these predictions appear to be in agreement with lattice gauge results \cite{KL_94}, but the issue has not been completely settled \cite{KK_95}. I would like to make a few comments concerning the role of universality arguments. First, it is important to clearly distinguish between the low energy chiral lagrangian (or the linear $\sigma$-model used as an effective lagrangian, see e.g. \cite{BK_96}) and the effective action for the order parameter near $T_c$. The Landau-Ginzburg action is a three dimensional action for static modes only. It is applicable only near $T_c$. In particular, the parameters in the effective action, the $\pi,\sigma$ masses and couplings, are completely independent of the parameters used in the linear $\sigma$-model at $T=0$. My second point concerns the thermodynamics of the phase transition. The effective action describes the singular part of the free energy only. In QCD we expect a large change in the free energy that corresponds to the release of 37 (quark and gluon) degrees of freedom. This means that in practice, the non-universal, regular, part of the free energy will most likely dominate the universal, singular, contribution. Universality predicts the behavior of three dimensional (screening) correlation functions near $T_c$. The corresponding screening masses have also been studied in some detail on the lattice, see section 5. 
In practice, however, we are more interested in dynamical (temporal) masses, corresponding to poles of the spectral function in energy, not momentum. These quantities are hard to extract on the lattice, although some exploratory studies have been made \cite{BGK_94}. In addition to that, we have made significant progress in studying temporal correlation functions in the instanton liquid model (see below). The only general approach to the problem that we have available at the moment is QCD sum rules. The general strategy is easily explained. It is based on matching phenomenological information contained in hadronic spectral functions with perturbative QCD, using the operator product expansion. At finite temperature, the sum rules are of the type \begin{eqnarray} c_0 \log\omega^2 +c_1\langle O_1\rangle \frac{1}{\omega^4} + c_2\langle O_2\rangle \frac{1}{\omega^6} +\ldots = \int du^2\frac{\rho(u^2)}{u^2-\omega^2}, \end{eqnarray} where $\rho(\omega^2)$ is the spectral function at $\vec p=0$, $c_i$ are temperature-independent coefficients that can be calculated in perturbative QCD and $\langle O_i\rangle$ are temperature-dependent condensates. If there is a range of energies in which both the OPE has reasonable accuracy and the spectral representation is dominated by the ground state, we can use the sum rules to make predictions about ground state properties. In practice, this is a difficult game, even at zero temperature. At $T\neq 0$, additional problems arise because we do not know the $T$-dependence of the condensates and there is little phenomenological guidance concerning the form of the spectral functions. For this reason, reliable predictions can only be made at small temperature. The most systematic studies have been made in the vector meson channels $\rho$ and $a_1$ \cite{EI_93,EI_95,HKL_93}. To order $T^2$, there is no shift in the resonance masses.
The only effect is mixing between the $\rho$ and $a_1$ channels, which is caused by scattering off thermal pions. At order $T^4$, masses start to drop. It is interesting to note that at this order, the mass shift is not controlled by the quark condensate, but by the energy-momentum tensor of a thermal pion gas. \section{The instanton liquid at finite temperature} In order to make progress we need a more detailed picture of the chiral phase transition. In particular, we would like to understand the mechanism of the transition and the behavior of the condensates and hadronic correlation functions in the transition region. In the following, I will argue that important progress in this direction has been made in the context of the instanton liquid model. In this model, chiral symmetry breaking is caused by the delocalization of quark zero modes associated with instantons. Chiral restoration takes place when instantons and antiinstantons form molecules: the quark modes become localized and the quark condensate vanishes. The essential assumption underlying the instanton model is that the (euclidean) QCD partition function \begin{eqnarray} Z= \int DA_\mu \exp(-S)\prod_f^{N_f} \det(iD\!\!\!\!/\,+im_f) , \end{eqnarray} is dominated by classical gauge configurations called instantons. Instantons describe tunneling events between degenerate vacua. As usual, tunneling lowers the ground state energy. This is why instantons contribute to the vacuum energy density and pressure in the QCD vacuum. The instanton solution is characterized by 12 parameters: position (4), color orientation (7), and size (1). An ensemble of interacting instantons is described by the partition function \begin{eqnarray} \label{Z} Z= \sum_{N_+ N_-} {1 \over N_+ !
N_- !}\int \prod_i^{N_+ + N_-} [d^4z_idU_id\rho_i\; d(\rho_i)] \exp(-S_{int})\prod_f^{N_f} \det(iD\!\!\!\!/\,+im_f) \; , \end{eqnarray} where $N_+$ and $N_-$ are the numbers of instantons and antiinstantons and $d(\rho)$ is the semi-classical instanton density calculated by 't Hooft. There are two important pieces of evidence that suggest that instantons play an important role in the QCD vacuum. One is provided by extensive calculations of hadronic correlation functions in the instanton liquid model \cite{SV_93b,SSV_94,SS_96b}. These correlators agree both with phenomenological information \cite{Shu_93} and lattice calculations \cite{CGHN_93b}. The second comes from direct studies of instantons on the lattice. An example is shown in figure \ref{fig_cool}. Using a procedure called ``cooling" one can relax any given gauge field configuration to the closest classical component of the QCD vacuum. These configurations were found to be ensembles of instantons and antiinstantons. The MIT group concludes that the instanton density is $(N/V)\simeq (1.4-1.6)\,{\rm fm}^{-4}$ while the typical size is about $\rho\simeq 0.35$ fm \cite{CGHN_94}. These numbers are in good agreement with the instanton liquid parameters $(N/V)= 1\,{\rm fm}^{-4}$, $\rho=1/3\,{\rm fm}$ proposed by Shuryak a long time ago \cite{Shu_82a}. What is even more important is that hadronic correlation functions remain practically unchanged during the cooling process. This suggests that instantons play a dominant role in generating the spectrum of light hadrons. \begin{figure}[t] \epsfxsize=14cm \epsffile{negele_fig.ps} \vspace*{-0.5cm} \caption{\label{fig_cool} Instanton content of a typical $T=0$ gauge configuration, from \protect\cite{CGHN_94}. Figs. (a) and (c) show the field strength while (b) and (d) show the topological charge density. 
The upper panel shows the original configuration and the lower panel the same configuration after 25 cooling sweeps.} \end{figure} Recently, the role of instantons at finite temperature has been reevaluated. The semi-classical expression for the instanton density at finite temperature contains a suppression factor $\sim\exp( -(2N_c/3+N_f/3)(\pi\rho T)^2)$ \cite{GPY_81} which comes mostly from Debye screening of the instanton field. From this expression we would expect that instantons are significantly suppressed ($\sim 0.2$) near $T_c$. This is not correct. The perturbative expression for the Debye mass is not applicable until the temperature is significantly above $T_c$. We have already seen that the gluon condensate is essentially temperature independent at small $T$. A similar calculation can also be performed for the instanton density, with the same conclusion \cite{SV_94}. This result was confirmed in a number of lattice calculations (in quenched QCD) \cite{CS_95,IMM_95,ADD_96}, see figure \ref{fig_chu}. The figure shows the topological susceptibility which, in quenched QCD, is roughly equal to the instanton density. The Pisarski-Yaffe prediction (labeled P-Y) clearly underpredicts the topological susceptibility near $T_c$. The dashed curve which fits the data above $T_c$ corresponds to the PY-prediction with a shifted temperature $T\to(T-T_c)$. If instantons are not suppressed around $T_c$, then the chiral phase transition has to be caused by a rearrangement of the instanton liquid. A mechanism for such a rearrangement, the formation of polarized instanton-antiinstanton molecules, was proposed in \cite{IS_94,SSV_95}. In the presence of light fermions, instantons interact via the exchange of $2N_f$ quarks (see fig. \ref{fig_schem}). The amount of correlations in the instanton ensemble depends on the competition between maximum entropy, which favors randomness, and minimum action, which favors the formation of instanton anti-instanton pairs. 
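The "significant suppression ($\sim 0.2$)" that the naive semi-classical formula would predict near $T_c$ is easy to reproduce numerically; a sketch assuming typical inputs ($\rho = 1/3$ fm from the instanton liquid parameters quoted earlier, $T = 150$ MeV near the chiral transition, $N_c = 3$, $N_f = 2$):

```python
import math

hbarc = 197.327  # MeV fm, conversion constant
Nc, Nf = 3, 2
rho = 1.0 / 3.0  # fm, typical instanton size (assumed)
T = 150.0        # MeV, temperature near the chiral transition (assumed)

x = math.pi * rho * T / hbarc                # pi*rho*T in natural units
s = math.exp(-(2 * Nc / 3 + Nf / 3) * x**2)  # semi-classical suppression factor
print(s)  # ~0.18
```

As stressed above, this estimate is misleading near $T_c$ because the perturbative Debye mass entering it is not yet applicable there.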
At low temperatures the instanton system is random and chiral symmetry is broken. At high temperatures the interaction in the spacelike direction becomes screened, whereas the periodicity of the fields in the timelike direction causes the interaction in that direction to be enhanced. Schematically, the fermion determinant for one pair looks like \begin{eqnarray} \det(iD\!\!\!\!/\,)\sim |\sin(\pi T \tau)/\cosh(\pi T r)|^{2N_f}, \end{eqnarray} where $\tau$ and $r$ are the separations in the temporal and spatial direction. This interaction is maximal for $r=0$ and $\tau=\beta/2=(1/2T)$, which is the most symmetric configuration of the $I\bar I$ pair on the Matsubara torus. In numerical simulations, we find the transition to a correlated system in which chiral symmetry is restored at $T_c\simeq 130$ MeV \cite{SS_96}. Typical instanton configurations below and above $T_c$ are shown in figure \ref{fig_configs}. The plots are projections of a four-dimensional box onto the $x\tau$-plane. At high temperature, the imaginary time axis is short. The locations of instantons and antiinstantons are denoted by $\pm$ signs, while the lines connecting them indicate the strength of the fermionic ``hopping" matrix elements. Below $T_c$, there is no clear pattern. Instantons are either isolated or part of larger clusters. Following the hopping matrix elements, quarks can propagate over large distances and form a condensate. Above $T_c$, instantons are bound into pairs. The propagation of quarks in the spatial direction is suppressed and no condensate is formed.
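That the schematic pair determinant above is maximal for $r=0$ and $\tau=\beta/2$ can be verified directly on a grid (the units with $T=1$ and the grid resolution are illustrative choices):

```python
import numpy as np

T, Nf = 1.0, 2
beta = 1.0 / T  # Matsubara period

def pair_weight(tau, r):
    """Schematic one-pair fermion determinant |sin(pi T tau)/cosh(pi T r)|^(2 Nf)."""
    return np.abs(np.sin(np.pi * T * tau) / np.cosh(np.pi * T * r)) ** (2 * Nf)

taus = np.linspace(0.0, beta, 201)
rs = np.linspace(0.0, 3.0 * beta, 201)
w = pair_weight(taus[:, None], rs[None, :])
i, j = np.unravel_index(np.argmax(w), w.shape)
print(taus[i], rs[j])  # maximum at tau = beta/2, r = 0
```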
\begin{figure}[t] \begin{minipage}[b]{80mm} \epsfxsize=7cm \epsffile{chu_fig3.ps} \vspace*{-1.0cm} \caption{\label{fig_chu} $\chi_{top}$ as a function of temperature in quenched QCD, from \protect\cite{CS_95}.} \vspace*{0.5cm} \epsfxsize=7cm \epsffile{liquid.ps} \caption{\label{fig_schem} Schematic picture of the phase transition in the interacting instanton liquid.} \end{minipage} \hspace{\fill} \begin{minipage}[b]{75mm} \epsfxsize=7cm \epsffile{conf300.ps} \vspace*{1.5cm} \epsfxsize=7cm \epsffile{conf140.ps} \caption{\label{fig_configs} Instanton configurations below and above the phase transition.} \end{minipage} \end{figure} More details are provided in figure \ref{fig_dens}. The quark condensate is practically $T$-independent at small temperature, but drops rapidly above $T=100$ MeV. At small $T$, the instanton density rises slightly\footnote{This might very well be an artefact. The $T=0$ point is not a true zero temperature calculation.}. It drops above $T=100$ MeV, but retains about half of its $T=0$ value near $T_c$. Similarly, the instanton related free energy\footnote{Figure \protect\ref{fig_dens}b only shows the instanton-related part of the free energy, the full free energy $F=-p$ has to be a monotonically decreasing function of $T$.} does not vanish at $T_c$. Instantons still contribute to the energy density and pressure above $T_c$. It is interesting to study the phase diagram in more detail. For realistic QCD, the transition appears to be weakly first order (the discontinuity in the free energy is smoothed out in a finite volume and cannot be seen in figure \ref{fig_dens}). In the case of two flavors, we see a second order phase transition with critical exponents consistent with the $O(4)$ universality class. For more flavors, the transition temperature drops until (around $N_f=5$ massless flavors) chiral symmetry is restored in the ground state even at $T=0$. 
\section{Hot hadrons} We have studied both temporal (related to the spectral function in energy) and spatial (related to screening masses) correlation functions at finite temperature \cite{SS_96b}. Figure \ref{fig_scr} shows the spectrum of spacelike screening masses. First, we clearly observe that chiral symmetry is restored at $T\simeq 130$ MeV. Chiral partners, like the $(\pi,\sigma)$ and $(\rho,a_1)$ become degenerate at $T_c$. Second, even above $T_c$, the scalars $\pi$ and $\sigma$ are significantly lighter than the vectors $\rho$ and $a_1$. This result is in agreement with lattice calculations \cite{TK_87,Goc_91}. The spectrum of screening masses is also in qualitative agreement with the predictions from dimensional reduction (DR) \cite{EI_88,HZ_92,KSB_92}, but DR fails to account for the attraction seen in the scalar channels. \begin{figure}[t] \begin{minipage}[b]{80mm} \epsfxsize=7cm \epsffile{dens.ps} \caption{\label{fig_dens} Instanton density, free energy and quark condensate as a function of $T$.} \end{minipage} \hspace{\fill} \begin{minipage}[b]{75mm} \epsfxsize=7cm \epsffile{scr.ps} \caption{\label{fig_scr} Screening masses in the instanton liquid model.} \end{minipage} \end{figure} More interesting is the behavior of temporal correlation functions, shown in figure \ref{fig_tempcor}. The correlation functions are normalized to free quark propagation at the same temperature, so all correlators start at 1 for $\tau=0$. Below $T_c$, the data points are denoted by open squares ($T=0$), pentagons, etc., while the high temperature points are denoted by solid squares and pentagons. Above $T_c$, the period of the correlation functions becomes short, $\beta=T_c^{-1}\sim 1.5$ fm. This means that all the spectral information is contained in a very short interval $0<\tau<\beta/2\sim 0.7$ fm. This fact makes it very difficult to extract thermal masses from the correlation functions (or to decide whether there are narrow poles in the spectral function at all). 
Again, we observe that above $T_c$ the correlation functions of chiral partners become equal. What is more remarkable is the fact that in the regime $\tau<(\beta/2)$, which is not (directly) affected by the periodic boundary conditions, the pion\footnote{ The $\sigma$ correlation is even larger than the $\pi$ below $T_c$ because it receives a disconnected contribution from the quark condensate.} correlation function is almost as large as it is at $T=0$. This suggests that there is still a $(\pi,\sigma)$-mode above $T_c$ \cite{HK_85,SS_95b}. No such effect is seen in the vector channels. The small resonance contribution seen in the $\rho$ channel at small temperature quickly melts, and above $T_c$ the correlation function is consistent with the propagation of two independent quarks with a small residual chiral mass (a mass term in the vector part of the propagator that does not violate chiral symmetry). What effect causes the resonance behavior seen in the scalar channels? At zero temperature, the instanton induced interaction between quarks is conveniently discussed in terms of an effective four fermion interaction\footnote{Strictly speaking, a flavor antisymmetric $2N_f$ fermion interaction. However, for $N_f=3$ and broken chiral symmetry (either spontaneous or explicit), we can absorb two zero modes into the instanton density.} \cite{tHo_76b,SVZ_80b} \begin{eqnarray} \label{leff} {\cal L}= \int d(\rho)d\rho\, \frac{(2\pi\rho)^4}{8(N_c^2-1)} \left\{ \frac{2N_c-1}{2N_c}\left[ (\bar\psi\tau^{-}_a\psi)^2 + (\bar\psi\tau^{-}_a\gamma_5\psi)^2 \right] - \frac{1}{4N_c} (\bar\psi\tau^{-}_a\sigma_{\mu\nu}\psi)^2 \right\} , \end{eqnarray} where $d(\rho)$ denotes the density of instantons. Here, $\psi$ is an isodoublet of quark fields and the four vector $\tau^{-}_a$ has components $(\vec\tau,i)$ with $\vec\tau$ equal to the Pauli matrices acting in isospace.
The interaction (\ref{leff}) successfully explains many properties of the ($T$=0) QCD correlation functions, most importantly the strong attraction seen in the pion channel. The effective lagrangian (\ref{leff}) comes from the $2N_f$ zero modes associated with an individual instanton. It is derived under the assumptions that instantons are sufficiently dilute and completely uncorrelated. Above $T_c$, the collective coordinates of instantons and antiinstantons are no longer random, but become strongly correlated. The four fermion interaction induced by polarized instanton-antiinstanton molecules is given by \cite{SSV_95} \begin{eqnarray} \label{lmol} {\cal L}_{mol\,sym}&=& G \left\{ \frac{2}{N_c^2}\left[ (\bar\psi\tau^a\psi)^2-(\bar\psi\tau^a\gamma_5\psi)^2 \right]\right. \nonumber \\ & & - \;\,\frac{1}{2N_c^2}\left. \left[ (\bar\psi\tau^a\gamma_\mu\psi)^2+(\bar\psi\tau^a\gamma_\mu\gamma_5 \psi)^2 \right] + \frac{2}{N_c^2} (\bar\psi\gamma_\mu\gamma_5\psi)^2 \right\} + {\cal L}_8, \end{eqnarray} where the coupling constant $G$ is determined by the number of correlated pairs (and their overlap matrix element) and ${\cal L}_8$ is the color octet part of the interaction. $G$ is not strong enough to cause quarks to condense. Nevertheless, molecules produce a significant attractive interaction in the $\pi$ and $\sigma$ channels. \begin{figure}[t] \epsfxsize=14cm \epsffile{temp_cor.ps} \vspace*{-1cm} \caption{\label{fig_tempcor} Temporal correlation functions in the instanton liquid model.} \end{figure} A problem that has received a lot of attention recently is the fate of the $U(1)_A$ anomaly at finite temperature \cite{Shu_94,KKL_96,HW_96,Coh_96,LH_96,EHS_96}. Given the large $\eta'-\pi$ splitting, any tendency towards $U(1)_A$ restoration could lead to rather dramatic effects in heavy ion collisions.
Two experimental signatures that have been discussed are the $\eta/\pi$ ratio \cite{HW_96} measured by the WA80 collaboration \cite{WA80_94} and the possibility that the $\eta'$ Dalitz decay \cite{KKL_96} contributes to the enhancement in low mass dileptons seen by CERES \cite{CER_95c}. The anomaly is related to the presence of zero modes in the spectrum of the Dirac operator. Instantons can absorb $N_f$ left handed quarks of different flavors and turn them into right handed quarks, violating axial charge by $2N_f$ units. This is the process described by the 't Hooft vertex (\ref{leff}). Inserting the 't Hooft interaction into the $\eta'$ correlation function splits the $\eta'$ from the pion\footnote{In QCD, flavor symmetry is broken and part of the splitting comes from the strange quark mass. However, in the absence of the anomaly we would expect the $\eta-\eta'$ mixing to be almost ideal (similar to the $\rho-\phi$ system). In this case, there is a non-strange $\eta$ which is almost degenerate with the $\pi$.}. Above $T_c$ isolated instantons disappear and instanton-antiinstanton molecules do not violate $U(1)_A$. However, the 't Hooft operator can induce a tunneling event (instanton) all by itself. This can be seen as follows. If we keep a small current quark mass, the density of isolated instantons above $T_c$ is proportional to $m^{N_f}$. When we calculate a $U(1)_A$ violating observable, there are $N_f$ propagators\footnote{In the (academic) case of three massless flavors, $U_A(1)$ violation does not affect the $\eta'$ correlation function above $T_c$ because the third quark in the 't Hooft vertex cannot be absorbed. In real QCD, the strange quark can be absorbed by a mass insertion.}, each of which has a zero mode contributing a factor $1/m_f$. As a result, $U(1)_A$ is broken in the chiral limit $m_f\to 0$. 
\begin{figure}[t] \epsfxsize=14cm \epsffile{anomaly.ps} \caption{\label{fig_anomaly} Flavor mixing in the $\eta-\eta'$ system below and above the chiral phase transition.} \end{figure} At temperatures significantly above $T_c$, we expect screening to reduce the strength of the $U(1)_A$ violating interaction. Near $T_c$, screening is not important, but chiral symmetry restoration affects the structure of flavor mixing in the $\eta-\eta'$ system \cite{Sch_96}, see figure \ref{fig_anomaly}. Below $T_c$, there is strong flavor mixing between $u,d$ and $s$ quarks. Flavor symmetry is broken, $\langle\bar uu\rangle \neq\langle\bar ss\rangle$, but the $\eta'$ state is almost a pure singlet. Above $T_c$, there is no flavor mixing between non-strange and strange quarks. As a result, the eigenstates in the $\eta-\eta'$ system are the non-strange and strange eta components $\eta_{NS}$ and $\eta_S$. The anomaly acts only on the non-strange $\eta_{NS}$, so the strange $\eta_S$ can become light. This effect is of phenomenological interest, because it might enhance strangeness production in heavy ion collisions. \section{Summary} There is substantial evidence that non-perturbative effects are important in QCD, even above the phase transition. This evidence comes from an analysis of lattice results for the equation of state, the spectrum of screening masses and temporal correlation functions above $T_c$. This suggests that, in order to understand the transition region, we need a more detailed picture of the transition itself. We have shown that the chiral phase transition can be understood as a transition from a disordered instanton liquid to a correlated phase of instanton-antiinstanton molecules. This picture is consistent both with the observation that not all of the gluon condensate is removed across the phase transition and with the observed spectrum of screening masses. It provides interesting predictions for the behavior of hadronic modes near $T_c$.
For example, we suggest that $(\pi,\sigma)$-like modes survive the phase transition and that the structure of flavor mixing in the $\eta-\eta'$ sector changes near $T_c$. \section{Acknowledgements} The material presented here is based on work done in collaboration with E. Shuryak and J. Verbaarschot. I would also like to acknowledge many discussions with G. E. Brown, V. Koch and R. Venugopalan.
\section{Introduction} \label{sec:intro} Segmentation of fine-scale structures such as vessels, neurons and membranes is an important task, especially in biomedical applications~\cite{tetteh2020deepvesselnet,fakhry2016deep, hu2019topology}. Accurate delineation of these structures is crucial for downstream analysis and for understanding biomedical functionality. Classic segmentation algorithms \cite{long2015fully,he2017mask,chen2014semantic,chen2018deeplab,chen2017rethinking} are prone to structural errors, e.g., broken connections, as they are mostly trained on pixel-wise losses such as cross-entropy. In recent years, new topology-relevant losses have been proposed to improve the structural accuracy of deep segmentation networks~\cite{hu2019topology,hu2021topology,shit2021cldice,mosinska2018beyond,clough2020topological}. These methods identify topologically critical locations at which the network is error-prone, and force the network to memorize these hard locations through increased loss weights. However, we argue that these loss-based methods are \emph{only learning pixel-wise representations}, and thus will inevitably make structural errors, especially at the inference stage. See Fig.~\ref{fig:teaser}(c) for an illustration. In order to fundamentally address the problem, we argue that it is essential to directly model and reason about the structures. In this paper, we propose the first deep neural network method that directly learns the structural representation of images. Given an input image properly denoised by a neural network, we adopt classic (discrete) Morse theory~\cite{milnor1963morse,forman2002user} to extract Morse complexes consisting of pieces of 0D, 1D and 2D structures. These Morse structures are singularities of the gradient field of the input function. Their combinations constitute a space of structures arising from the input function. See Fig.~\ref{fig:structure_space}(c) for an illustration.
\begin{figure*}[t] \centering \subfigure[Original]{ \includegraphics[width=0.18\textwidth]{figure/patch1_ori.pdf}} \subfigure[GT]{ \includegraphics[width=0.18\textwidth]{figure/patch1_gt.pdf}} \subfigure[DMT]{ \includegraphics[width=0.18\textwidth]{figure/patch1_pred.pdf}} \subfigure[Sample1]{ \includegraphics[width=0.18\textwidth]{figure/patch1_sample1.pdf}} \subfigure[Sample2]{ \includegraphics[width=0.18\textwidth]{figure/patch1_sample3.pdf}} \vspace{-.05in} \caption{From left to right: \textbf{(a)} Original image, \textbf{(b)} Ground truth, \textbf{(c)} Segmentation map generated by standard DMT~\cite{hu2021topology}, \textbf{(d)} and \textbf{(e)} two possible structure-preserving segmentation maps generated by our method. Compared with loss-function-based segmentation methods, our method can generate diverse and truly structure-preserving segmentation maps. The red rectangles show regions whose structure is corrected by our method, and the yellow rectangle shows a region of diversity.} \vspace{-.15in} \label{fig:teaser} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=1\textwidth]{figure/structure_space.pdf} \vspace{-.25in} \caption{The probabilistic structural representation. \textbf{(a)} is a sample input, \textbf{(b)} is the predicted likelihood map, \textbf{(c)} is the whole structural space obtained by running the discrete Morse theory algorithm on the likelihood map, \textbf{(d)} the 1-$d$ structural family parametrized by the persistence threshold $\epsilon$, as well as a Gaussian distribution over $\epsilon$, \textbf{(e)} a sampled skeleton, \textbf{(f)} the final segmentation map generated using the skeleton sample, and \textbf{(g)} the uncertainty map generated by multiple segmentations.} \vspace{-.2in} \label{fig:structure_space} \end{figure*} For further reasoning with structures, we propose to learn a probabilistic model over the structural space.
The challenge is that the space consists of exponentially many branches and is thus of very high dimension. To reduce the learning burden, we introduce the theory of persistent homology~\cite{2011MNRAS,DRS15,WWL15} for structure pruning. Each branch has its own persistence measuring its relative saliency. By continuously thresholding the complete Morse complex in terms of persistence, we obtain a sequence of Morse complexes parameterized by the persistence threshold, $\epsilon$. See Fig.~\ref{fig:structure_space}(d). By learning a Gaussian over $\epsilon$, we learn a parametric probabilistic model over these structures. This parametric probabilistic model over the structural space allows us to make direct structural predictions via sampling (Fig.~\ref{fig:structure_space}(e)), and to estimate structure-wise uncertainty via sampling (Fig.~\ref{fig:structure_space}(g)). The benefit is two-fold: First, direct prediction of structures ensures that the model outputs always have structural integrity, even at the inference stage. This is illustrated in Fig.~\ref{fig:teaser}(d) and (e). Samples from the probabilistic model are all feasible structural hypotheses based on the input image, with certain variations at uncertain locations. This is in contrast to state-of-the-art methods using pixel-wise representations (Fig.~\ref{fig:teaser}(c)). Note that the original output structure (Fig.~\ref{fig:structure_space}(e), also called a skeleton) is only 1-pixel wide and may not serve as a good segmentation output. In the inference stage, we use a postprocessing step to grow the structures without changing topology as the final segmentation prediction (Fig.~\ref{fig:structure_space}(f)). More details are provided in Sec.~\ref{sec:inference} and Fig.~\ref{fig:inference}. Second, the probabilistic structural model can be seamlessly incorporated into human-in-the-loop annotation workflows to facilitate large-scale annotation of these complex structures.
This is especially important in the biomedical domain where fine-scale structures are notoriously difficult to annotate, due to the complex 2D/3D morphology and low contrast near extremely thin structures. Our probabilistic model makes it possible to identify uncertain structures for efficient human quality control. Note that the structural space is crucial for uncertainty reasoning. As shown in Fig.~\ref{fig:structure_space}(g), our proposed model uncertainty is only focusing on structures, whereas traditional pixel-wise uncertainty estimations~\cite{kendall2015bayesian} are much less informative without structural representation. The main contributions of this paper are: \begin{enumerate}[topsep=0pt, partopsep=0pt,itemsep=0pt,parsep=0pt] \item We propose the first deep segmentation network that learns a structural representation, based on discrete Morse theory and persistent homology. \item We learn a probabilistic model over the structural space, which facilitates different tasks such as segmentation and uncertainty estimation. \item We validate our method on various biomedical datasets with \textit{rich and complex structures}. It outperforms state-of-the-art methods in both deterministic and probabilistic categories. \end{enumerate} \vspace{-.05in} \section{Related Work} \vspace{-.05in} \label{sec:related} \textbf{Structure/Topology-aware deep image segmentation.} A number of recent works have tried to segment with correct topology with additional topology-aware losses~\cite{mosinska2018beyond,hu2019topology,clough2020topological,hu2021topology,shit2021cldice}, which are close to the problem we are trying to address in this paper. Specifically, UNet-VGG~\cite{mosinska2018beyond} detects linear structures with pretrained filters, and clDice~\cite{shit2021cldice} introduces additional Dice loss for extracted skeleton structures. 
TopoLoss~\cite{hu2019topology,clough2020topological} explicitly learns to segment with correct topology via a differentiable loss by leveraging the concept of persistent homology. Similarly, DMT-loss~\cite{hu2021topology} tries to identify the topologically critical structures via discrete Morse theory. All these methods propose additional topology-aware losses which are minimized if the topology of the segmented map is perfect. Though the models may fit the training set very well in terms of topology, so that the topology-aware losses are minimized, it is difficult for the models to reason about the correct topology during the inference stage, as the models are essentially topology-agnostic. Topological priors have also been combined with encoder-decoder deep networks for semantic segmentation of microscopic neuroanatomical data~\cite{banerjee2020semantic}. Additionally, discrete Morse theory has been used for image analysis~\cite{DRS15,RWS11,WWL15,DWW19}, but only as a preprocessing step. Different from all these loss-function-based segmentation methods, our method makes structural predictions directly by using discrete Morse theory and persistent homology. \textbf{Segmentation uncertainty.} Instead of traditional deterministic models with a single prediction, a set of works have tried to generate multiple segmentations and explore the uncertainty in image segmentation tasks~\cite{kendall2015bayesian, kohl2018probabilistic}. By using dropout, some methods~\cite{kendall2015bayesian,kendall2017uncertainties} learn a probability distribution instead of a single deterministic number for pixel classification. Though these methods are able to measure pixel-level uncertainty, they may generate inconsistent outputs, as the probability for each pixel is estimated independently. A possible way to obtain consistent outputs is by ensembling the results of different models~\cite{lakshminarayanan2017simple}.
However, as the composed models are trained separately, the final ensemble outputs are usually not diverse enough. Another solution is to train a common network with $M$ heads~\cite{rupprecht2017learning,ilg2018uncertainty}. Though multi-head methods, compared to deep ensemble approaches, have the ability to generate diverse results, the major issue with both ensembling and multi-head models is that they are not scalable: both require a fixed number of models/branches during the training stage. To overcome the scalability issue, Probabilistic-UNet~\cite{kohl2018probabilistic} learns a distribution over the segmentation map given an input, and it is able to generate an infinite number of possible outputs efficiently. We basically follow the logic of Probabilistic-UNet to design our probabilistic model, whereas we target a quite different task: \textit{how to generate diverse plausible structure-preserving segmentation maps given an image with rich structures?} As far as we know, none of the existing works has tried to explore structure-level uncertainty. Underlying the probabilistic model is the classical discrete Morse theory, which is used to construct the structural space from noisy likelihood maps. \vspace{-.05in} \section{Method} \vspace{-.05in} Our method starts by taking an input image, processing it with a neural network to obtain a reasonable likelihood map, and then using discrete Morse theory to construct a space of structures. These structures are the hypothesis structures one can infer from the input image. Next, we use persistent-homology-based thresholding to filter these structures, obtaining a linear-size family of structures, parameterized by a threshold $\epsilon$. We learn a 1D Gaussian distribution for $\epsilon$ as our probabilistic model. Details will be provided below in Sec.~\ref{sec:structure_space}.
In Sec.~\ref{sec:method}, we will provide details on how our deep neural network is constructed, as illustrated in Fig.~\ref{fig:pipeline}. \vspace{-.05in} \subsection{Constructing the Structural Space} \vspace{-.05in} \label{sec:structure_space} In this section, we focus on how to construct a structural representation space using discrete Morse theory. We will then discuss how to prune the structural space using persistent homology. The resulting structural representation space will be used to build a probabilistic model. Given a reasonably clean input (e.g., the likelihood map of a deep neural network, Fig.~\ref{fig:structure_space}(b)), we treat the function as a terrain function, and Morse theory~\cite{milnor1963morse} can help capture the structures even in weak or blurry regions (Fig.~\ref{fig:discrete}). The weak part of a line in the continuous map can be viewed as a local dip in the mountain ridge of the terrain (see Fig.~\ref{fig:discrete} for an illustration). In the language of Morse theory, the lowest point of this dip is a saddle point ($S$ in Fig.~\ref{fig:discrete}(b)), and the mountain ridges which are connected to the saddle point ($M_1S$ and $M_2S$) compose the stable manifold of the saddle point. We mainly focus on 2D images in this paper, although extending to 3D images is natural. We consider a two-dimensional continuous function $f: \mathbb{R}^{2} \rightarrow \mathbb{R}$. For a point $x\in \mathbb{R}^2$, the gradient can be computed as $\nabla f(x) = [\frac{\partial f}{ \partial x_{1}},\frac{\partial f}{\partial x_{2}}]^T$. We call a point $x = (x_{1},x_{2})$ \emph{critical} if $\nabla f(x) = 0$. For a Morse function defined on $\mathbb{R}^2$, a critical point can be a minimum, a saddle or a maximum. Consider a continuous line (the red rectangle region in Fig.~\ref{fig:discrete}(a)) in a 2D likelihood map. Imagine we put a ball on one point of the line; then $-\nabla f(x)$ indicates the direction in which the ball will flow down.
By definition, the ball will eventually flow to the critical points where $\nabla f(x) = 0$. The collection of points whose ball eventually flows to $p$ ($\nabla f(p) = 0$) is defined as the stable manifold (denoted as $S(p)$) of point $p$. Intuitively, for a 2D function $f$, the stable manifold $S(p)$ of a minimum $p$ is the entire valley of $p$ (similar to the watershed algorithm); similarly, the stable manifold $S(q)$ of a saddle point $q$ consists of the whole ridge line that connects two local maxima and goes through the saddle point. See Fig.~\ref{fig:discrete}(b) for an illustration. \begin{wrapfigure}{r}{0.55\textwidth} \centering \vspace{-.25in} \includegraphics[width=0.55\textwidth]{figure/DMT_illu.pdf} \vspace{-.1in} \caption{\textbf{(a)} shows a sample likelihood map from the deep neural network, and \textbf{(b)} is the terrain view of the red patch in \textbf{(a)} and illustrates the stable manifold of a saddle point in the 2D case for a line-like structure.} \vspace{-.2in} \label{fig:discrete} \end{wrapfigure} For vessel data, the stable manifolds of saddle points contain the topological structures (line-like) of the continuous likelihood map predicted by deep neural networks, and they are exactly what we want to recover from noisy images. In practice, we adopt the discrete version of Morse theory for cubical data (images). \myparagraph{Discrete Morse theory.} We take a 2D image as a 2-dimensional cubical complex. A 2-dimensional cubical complex contains $0$-, $1$-, and $2$-dimensional cells, which correspond to vertices (pixels), edges and squares, respectively. In the setting of discrete Morse theory (DMT)~\cite{forman,forman2002user}, a pair of adjacent cells, termed a discrete gradient vector, plays the role of a gradient vector. Critical cells are those cells not paired in any discrete gradient vector; they play the role of critical points ($\nabla f(x) = 0$).
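For the continuous picture above, critical points can be classified by the sign pattern of the Hessian eigenvalues: all positive for a minimum, all negative for a maximum, mixed for a saddle. The quadratic test functions in the sketch below are made-up examples with one known critical point each; this is not part of the paper's pipeline, only an illustration of the classification.

```python
import numpy as np

def classify_critical_point(f, x, y, h=1e-4):
    """Classify a critical point of f via the Hessian's eigenvalue signs."""
    # Second derivatives by central finite differences.
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h**2)
    eigs = np.linalg.eigvalsh(np.array([[fxx, fxy], [fxy, fyy]]))
    if np.all(eigs > 0):
        return "minimum"
    if np.all(eigs < 0):
        return "maximum"
    return "saddle"

print(classify_critical_point(lambda x, y: x**2 + y**2, 0.0, 0.0))   # minimum
print(classify_critical_point(lambda x, y: x**2 - y**2, 0.0, 0.0))   # saddle
print(classify_critical_point(lambda x, y: -x**2 - y**2, 0.0, 0.0))  # maximum
```

The saddle case is the one that matters here: its stable manifold is the ridge line that the Morse-based skeleton extraction recovers.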
In the 2D domain, a minimum, a saddle and a maximum correspond to a critical vertex, a critical edge and a critical square, respectively. A 1-stable manifold (the stable manifold of a saddle point) in 2D corresponds to a \emph{V-path}, i.e., a path connecting two local maxima through a saddle. See Fig.~\ref{fig:discrete}(b). In this way, by using discrete Morse theory, for a likelihood map from the deep neural network, we can extract all the stable manifolds of saddles, whose combinations constitute the whole structural space. \textbf{Formally, we call any combination of these stable manifolds a structure}. Fig.~\ref{fig:structure_space}(c) illustrates 5 different structures. This structural space, however, is of exponential size. Assume there are $N$ pieces of stable manifolds/branches (we also call these stable manifolds branches for convenience). Any combination of these stable manifolds/branches is a potential structure, so we have $2^{N}$ possible structures. This can be computationally prohibitive to construct and to model. We need a principled way to prune structures so that the structural representation space has a controllable size. \myparagraph{Persistent homology for structural pruning.} We propose to use the theory of persistent homology~\cite{2011MNRAS,DRS15,WWL15} to prune the structural space. Persistent homology is an important tool for topological data analysis~\cite{edelsbrunner2010computational,edelsbrunner2000topological}. Intuitively, we grow a Morse complex by gradually including more and more discrete elements (called cells), starting from the empty complex. A branch of the Morse complex is a special type of cell. Other types include vertices, patches, etc. Cells are continuously added to the complex. New branches are born and existing branches die. The persistence algorithm~\cite{edelsbrunner2000topological} pairs up all these critical cells as birth and death pairs.
The difference of their function values is essentially the life time of the specific topological structure, which is called the \emph{persistence}. The importance of a branch is associated with its persistence. Intuitively, the longer the persistence of a specific branch is, the more important the branch is. Recall that our original construction of the structural space considers all possible combinations of branches, and thus can have exponentially many combinations. Instead, we propose to only select branches with high persistence as important ones. By doing this, we are able to prune the less important/noisy branches very efficiently, and recover the branches with true signals. Specifically, the structure pruning is done via the \emph{Morse cancellation} operation (more details are included in the Supplementary Material). The persistence thresholding provides us with a linear-size structural space. We start with the complete Morse complex and continuously grow the threshold $\epsilon$. At each threshold, we obtain a structure by filtering with $\epsilon$ and only keeping the branches whose persistence is above $\epsilon$. This gives a sequence of structures parametrized by $\epsilon$. As shown in Fig.~\ref{fig:structure_space}(d), the family of structures represents different structural densities. The one-parameter space allows us to easily learn a probabilistic model and carry out various inference tasks such as segmentation, sampling, and uncertainty estimation. Specifically, we will learn a Gaussian distribution over the persistence threshold $\epsilon$, $\epsilon\sim N(\mu, \sigma)$. More details will be provided in Sec.~\ref{sec:method}. \textbf{Approximation of Morse structures for volume data.} Finally, we provide some additional technical details on the construction of Morse complexes.
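The $\epsilon$-parametrized family can be sketched with a toy list of branches and persistence values (all names and numbers below are made up): thresholding at increasing $\epsilon$ yields a nested, linear-size sequence of structures rather than $2^N$ arbitrary subsets.

```python
# Toy branches: (name, persistence). Values are illustrative only.
branches = [("b1", 0.9), ("b2", 0.75), ("b3", 0.4), ("b4", 0.2), ("b5", 0.05)]

def structure(eps):
    """Keep only branches whose persistence exceeds the threshold eps."""
    return {name for name, pers in branches if pers > eps}

# Sweeping eps gives a nested one-parameter family with at most
# len(branches) + 1 distinct members, instead of 2**len(branches) subsets.
family = [structure(eps) for eps in (0.0, 0.1, 0.3, 0.5, 0.8, 1.0)]
for s in family:
    print(sorted(s))
```

The nestedness is what makes a one-dimensional Gaussian over $\epsilon$ a sensible probabilistic model: each sampled threshold picks out exactly one member of this family.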
In the 2D setting, the stable manifolds of saddles compose the line-like structures, and the captured Morse structures essentially contain the \textit{non-boundary edges}, which fits well with the vessel data. However, the output structures should always be \textit{boundary edges} for volume data, which cannot be handled by the original discrete Morse theory. Consequently, we approximate the Morse structures of 2D volume data with the boundaries of the stable manifolds of local minima. As mentioned above, the stable manifold of a local minimum $p$ in the 2D setting corresponds to the whole valley, and the boundaries of these valleys form the approximation of the Morse structures for volume data. Similar to the original discrete Morse theory, we also introduce a persistence threshold parameter $\epsilon$ and use persistent homology to prune the less important branches. The details of the proposed persistent-homology filtered topology watershed algorithm are given in the Supplementary Material. \subsection{Neural Network Architecture} \label{sec:method} \begin{figure*}[t] \centering \includegraphics[width=1\textwidth]{figure/pipeline.pdf} \vspace{-.25in} \caption{The overall workflow of the training stage. The red arrows indicate supervision.} \vspace{-.15in} \label{fig:pipeline} \end{figure*} In this section, we introduce our neural network that learns the probabilistic model over the structural representation. See Fig.~\ref{fig:pipeline} for an illustration of the overall pipeline. Since the structural reasoning needs a sufficiently clean input function to construct discrete Morse complexes, our method first obtains such a likelihood map by training a segmentation branch supervised by the standard segmentation loss, the cross-entropy loss; formally, $L_{seg} = L_{bce}(Y, S(X;\omega_{seg}))$, in which $S(X;\omega_{seg})$ is the output likelihood map and $\omega_{seg}$ denotes the segmentation branch's weights.
The output likelihood map, $S(X;\omega_{seg})$, is used as the input for the discrete Morse theory algorithm (DMT), which generates a discrete Morse complex consisting of all possible Morse branches from the likelihood map. Thresholding these branches using persistent homology with different $\epsilon$'s will produce different structures. We refer to the DMT computation and the persistent homology thresholding operation as $f_{DMT}$ and $f_{PH}$, respectively. So given a likelihood map $S(X;\omega_{seg})$ and a threshold $\epsilon$, we can generate a structure (which we call a skeleton): \begin{equation} S_{skeleton}(\epsilon) = f_{PH}(f_{DMT}(S(X; \omega_{seg})); \epsilon) \end{equation} Next, we discuss how to learn the probabilistic model. Recall that we want to learn a Gaussian distribution over the persistent homology threshold, $\epsilon\sim N(\mu, \sigma)$. The parameters $\mu$ and $\sigma$ are learned by a neural network called the \emph{posterior network}. The network takes the input image $X$ and the corresponding ground truth mask $Y$ as input, and outputs the parameters $\mu(X,Y;\omega_{post})$ and $\sigma(X,Y;\omega_{post})$, where $\omega_{post}$ denotes the parameters of the network. During training, at each iteration, we draw a sample $\epsilon$ from the distribution ($\epsilon \sim N(\mu, \sigma)$). Using the sample $\epsilon$, together with the likelihood map, we can generate the corresponding sample structure, $S_{skeleton}(\epsilon)$. This skeleton will be compared with the ground truth for supervision. To compare a sampled skeleton, $S_{skeleton}(\epsilon)$, with ground truth $Y$, we use the skeleton to mask both $Y$ and the likelihood map $S(X;\omega_{seg})$, and then compare the skeleton-masked ground truth and the likelihood using cross-entropy loss: $L_{bce}(Y \circ S_{skeleton}(\epsilon), S(X; \omega_{seg}) \circ S_{skeleton}(\epsilon))$. 
To learn the distribution, we use the expected loss: \begin{equation} L_{skeleton} = \mathbb{E}_{\epsilon \sim N(\mu, \sigma)} L_{bce}(Y \circ S_{skeleton}(\epsilon), S(X; \omega_{seg}) \circ S_{skeleton}(\epsilon)) \end{equation} The loss can be backpropagated through the posterior network via the reparameterization trick~\cite{kingma2013auto}. More details are provided in Supplementary Material. Note that this loss also provides supervision to the segmentation network through the likelihood map. \myparagraph{Learning a prior network from the posterior network.} Although our posterior network can learn the distribution well, it relies on the ground truth mask $Y$ as input, which is not available at the inference stage. To address this issue, inspired by Probabilistic-UNet~\cite{kohl2018probabilistic}, we use another network to learn the distribution of $\epsilon$ with only the image $X$ as input. We call this network the \textit{prior net}. We denote by $P$ the distribution using parameters predicted by the prior network, and denote by $Q$ the distribution predicted by the posterior network. During training, we want the prior net to mimic the posterior net; then, at the inference stage, we can use the prior net to obtain a reliable distribution over $\epsilon$ with only the image $X$. Thus, we incorporate the Kullback-Leibler divergence of these two distributions, \begin{equation} D_{KL}(Q||P) = \mathbb{E}_{\epsilon \sim Q} (\log \frac{Q}{P}) \end{equation} which measures the difference between the prior distribution $P$ ($N(\mu_{prior}, \sigma_{prior})$) and the posterior distribution $Q$ ($N(\mu_{post}, \sigma_{post})$). 
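To make the objective concrete, the two probabilistic ingredients above (reparameterized sampling of $\epsilon$ and the KL term between two one-dimensional Gaussians) admit short closed-form implementations. The following is a minimal sketch, not the paper's actual code; it assumes numpy, and the helper names (\texttt{reparameterized\_sample}, \texttt{kl\_gaussians}, \texttt{masked\_bce}) are ours:

```python
import numpy as np

def reparameterized_sample(mu, sigma, rng):
    # epsilon = mu + sigma * z keeps the sample differentiable w.r.t. (mu, sigma)
    return mu + sigma * rng.standard_normal()

def kl_gaussians(mu_q, sigma_q, mu_p, sigma_p):
    # Closed-form KL( N(mu_q, sigma_q) || N(mu_p, sigma_p) ) for 1-D Gaussians.
    return (np.log(sigma_p / sigma_q)
            + (sigma_q**2 + (mu_q - mu_p)**2) / (2.0 * sigma_p**2) - 0.5)

def masked_bce(y, likelihood, skeleton, eps=1e-7):
    # Skeleton-masked cross-entropy: compare ground truth and likelihood
    # only on the pixels covered by the sampled skeleton.
    y_m = y[skeleton > 0]
    p_m = np.clip(likelihood[skeleton > 0], eps, 1 - eps)
    return float(-np.mean(y_m * np.log(p_m) + (1 - y_m) * np.log(1 - p_m)))
```

In this sketch, a training step would draw $\epsilon$ with \texttt{reparameterized\_sample}, evaluate \texttt{masked\_bce} on the resulting skeleton, and add the \texttt{kl\_gaussians} term with weight $\beta$.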
\textbf{Training the neural network.} The final loss is composed of the standard segmentation loss, the skeleton loss $L_{skeleton}$, and the KL divergence loss, with two hyperparameters $\alpha$ and $\beta$ to balance the three terms: \begin{equation} L(X, Y) = L_{seg} + \alpha L_{skeleton} + \beta D_{KL}(Q||P) \end{equation} The network is trained to jointly optimize the segmentation branch and the probabilistic branch (containing both the prior and posterior nets). During the training stage, the KL divergence loss ($D_{KL}$) pushes the prior distribution towards the posterior distribution. The training scheme is also illustrated in Fig.~\ref{fig:pipeline}. \textbf{Inference stage: generating structure-preserving segmentation maps.} \label{sec:inference} In the inference stage, given an input image, we are able to produce an unlimited number of plausible structure-preserving skeletons via sampling. We use a postprocessing step to grow the 1-pixel-wide structures/skeletons, without changing their topology, into the final segmentation prediction. Specifically, the skeletons are overlaid on the binarized initial segmentation map (Fig.~\ref{fig:inference}(c)), and only the connected components that appear in the skeletons are kept in the final segmentation maps. In this way, each plausible skeleton generates one final segmentation map with exactly the same topology as the corresponding skeleton. The pipeline of the procedure is illustrated in Fig.~\ref{fig:inference}. \section{Experiments} \label{sec:experiment} \myparagraph{Datasets.} We use three datasets to validate the efficacy of the proposed method: \textbf{ISBI13}~\cite{arganda20133d} (volume), \textbf{CREMI} (volume), and \textbf{DRIVE}~\cite{staal2004ridge} (vessel). More details are included in Supplementary Material. \textbf{Evaluation metrics.} We use four different evaluation metrics: \textbf{Dice score}, \textbf{ARI}, \textbf{VOI}, and \textbf{Betti number error}. 
Dice is a popular pixel-wise segmentation metric, and the other three are structure/topology-aware segmentation metrics. More details are included in Supplementary Material. \textbf{Baselines.} We compare the proposed method with two kinds of baselines: 1) Standard segmentation baselines: \textbf{{DIVE}}~\cite{fakhry2016deep}, \textbf{{UNet}}~\cite{ronneberger2015u}, \textbf{{UNet-VGG}}~\cite{mosinska2018beyond}, \textbf{TopoLoss}~\cite{hu2019topology} and \textbf{DMT}~\cite{hu2021topology}. 2) Probabilistic segmentation methods: \textbf{Dropout UNet}~\cite{kendall2015bayesian} and \textbf{Probabilistic-UNet}~\cite{kohl2018probabilistic}. More details about these baselines are included in Supplementary Material. \textbf{Illustration of generating final structure-preserving segmentation maps and the human-in-the-loop annotation workflow.} In the inference stage, we are able to generate a continuous likelihood map (Fig.~\ref{fig:inference}(b)) and a set of structure-preserving skeletons (Fig.~\ref{fig:inference}(d)) simultaneously for a given image. By growing the structure-preserving skeletons, we finally generate the \textit{true structure-preserving} segmentation map (Fig.~\ref{fig:inference}(e)). Note that Fig.~\ref{fig:inference}(d) and Fig.~\ref{fig:inference}(e) have exactly the same topology, and both improve considerably over the initial segmentation (Fig.~\ref{fig:inference}(c)) in terms of topology/structure. The proposed method can also be used as an image annotation workflow for biomedical images with rich structures. As mentioned in the motivation section and illustrated in Fig.~\ref{fig:teaser}(c), the binary mask generated by standard segmentation methods may still have topological errors and noise, and thus cannot be used directly in practice. 
With the proposed method, given an image (Fig.~\ref{fig:inference}(a)), the user can run the inference a few times (e.g., 10) to generate a set of structure-preserving segmentation masks (Fig.~\ref{fig:inference}(f)). By choosing the one that looks most reasonable, the human-in-the-loop annotator can start from a good initialization. By removing unnecessary structures and redrawing missing ones, we can efficiently annotate an image with rich structures. The whole inference pipeline and human-in-the-loop structure-aware image annotation workflow is illustrated in Fig.~\ref{fig:inference}. \begin{figure*}[t] \centering \includegraphics[width=1\textwidth]{figure/Figure_4_5_all_enlarge.pdf} \vspace{-.1in} \caption{The inference and human-in-the-loop pipeline.} \vspace{-.2in} \label{fig:inference} \end{figure*} \myparagraph{Quantitative and qualitative results.} Table~\ref{table:quantitative} shows the quantitative results compared with several baselines. Note that for deterministic methods, the numbers are computed directly based on the outputs; while for probabilistic methods, we generate five segmentation masks and report the averaged numbers over the five segmentation masks for each image (for both the baselines and the proposed method). We use a t-test to determine statistical significance and highlight the significantly better results. From the table, we can observe that the proposed method achieves significantly better performance in terms of topology-aware metrics (ARI, VOI and Betti Error). 
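For completeness, the Betti number error used above can be illustrated with a small sketch. Assuming the metric compares the number of connected components ($\beta_0$) of prediction and ground truth on binary patches (the exact evaluation protocol is in Supplementary Material), a dependency-free version is:

```python
def connected_components(mask):
    """Count 4-connected foreground components in a binary mask (list of lists)."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                count += 1
                stack = [(i, j)]
                seen[i][j] = True
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
    return count

def betti0_error(pred, gt):
    # |beta_0(pred) - beta_0(gt)|; the reported metric averages such errors over patches.
    return abs(connected_components(pred) - connected_components(gt))
```

The 1-dimensional Betti number (loops) can be handled analogously; we omit it here.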
\begin{table*}[t] \begin{center} \caption{Quantitative results for different models on three different biomedical datasets.} \label{table:quantitative} \begin{tabular}{ccccc} \hline Method & Dice $\uparrow$ & ARI $\uparrow$ & VOI $\downarrow$ & Betti Error $\downarrow$\\ \hline\hline \multicolumn{5}{c}{ISBI13 (Volume)} \\ \hline DIVE~\cite{fakhry2016deep} & 0.9658 $\pm$ 0.0020 & 0.6923 $\pm$ 0.0134 & 2.790 $\pm$ 0.025 & 3.875 $\pm$ 0.326\\ UNet~\cite{ronneberger2015u} & 0.9649 $\pm$ 0.0057 & 0.7031 $\pm$ 0.0256 & 2.583 $\pm$ 0.078 &3.463 $\pm$ 0.435\\ UNet-VGG~\cite{mosinska2018beyond} & 0.9623 $\pm$ 0.0047 & 0.7483 $\pm$ 0.0367 & 1.534 $\pm$ 0.063 & 2.952 $\pm$ 0.379\\ TopoLoss~\cite{hu2019topology} & 0.9689 $\pm$ 0.0026 & 0.8064 $\pm$ 0.0112 & 1.436 $\pm$ 0.008& 1.253 $\pm$ 0.172\\ DMT~\cite{hu2021topology} & \textbf{0.9712 $\pm$ 0.0047} & 0.8289 $\pm$ 0.0189 & 1.176 $\pm$ 0.052 & 1.102 $\pm$ 0.203\\ \hline Dropout UNet~\cite{kendall2015bayesian} & 0.9591 $\pm$ 0.0031 & 0.7127 $\pm$ 0.0181 & 2.483 $\pm$ 0.046 & 3.189 $\pm$ 0.371 \\ Prob.-UNet~\cite{kohl2018probabilistic} &0.9618 $\pm$ 0.0019 & 0.7091 $\pm$ 0.0201 & 2.319 $\pm$ 0.041 & 3.019 $\pm$ 0.233\\ \hline \textbf{Ours} & 0.9637 $\pm$ 0.0032 & \textbf{0.8417 $\pm$ 0.0114} & \textbf{1.013 $\pm$ 0.081} & \textbf{0.972 $\pm$ 0.141} \\ \hline\hline \multicolumn{5}{c}{CREMI (Volume)} \\ \hline DIVE~\cite{fakhry2016deep} & 0.9542 $\pm$ 0.0037 & 0.6532 $\pm$ 0.0247 & 2.513 $\pm$ 0.047 & 4.378 $\pm$ 0.152\\ UNet~\cite{ronneberger2015u} & 0.9523 $\pm$ 0.0049 & 0.6723 $\pm$ 0.0312 & 2.346 $\pm$ 0.105 & 3.016 $\pm$ 0.253\\ UNet-VGG~\cite{mosinska2018beyond} & 0.9489 $\pm$ 0.0053 & 0.7853 $\pm$ 0.0281 & 1.623 $\pm$ 0.083 & 1.973 $\pm$ 0.310\\ TopoLoss~\cite{hu2019topology} & 0.9596 $\pm$ 0.0029 & 0.8083 $\pm$ 0.0104 & 1.462 $\pm$ 0.028 & 1.113 $\pm$ 0.224\\ DMT~\cite{hu2021topology} & \textbf{0.9653 $\pm$ 0.0019} & 0.8203 $\pm$ 0.0147 & 1.089 $\pm$ 0.061 & 0.982 $\pm$ 0.179\\ \hline Dropout 
UNet~\cite{kendall2015bayesian} & 0.9518 $\pm$ 0.0018 & 0.6814 $\pm$ 0.0202 & 2.195 $\pm$ 0.087 & 3.190 $\pm$ 0.198 \\ Prob.-UNet~\cite{kohl2018probabilistic} & 0.9531 $\pm$ 0.0022 & 0.6961 $\pm$ 0.0115 & 1.901 $\pm$ 0.107 & 2.931 $\pm$ 0.177\\ \hline \textbf{Ours} & 0.9541 $\pm$ 0.0031 & \textbf{0.8509 $\pm$ 0.0054} & \textbf{0.918 $\pm$ 0.074} & \textbf{0.906 $\pm$ 0.085}\\ \hline\hline \multicolumn{5}{c}{DRIVE (Vessel)} \\ \hline DIVE~\cite{fakhry2016deep} & 0.7543 $\pm$ 0.0008 & 0.8407 $\pm$ 0.0257 & 1.936 $\pm$ 0.127 & 3.276 $\pm$ 0.642 \\ UNet~\cite{ronneberger2015u} & 0.7491 $\pm$ 0.0027 & 0.8343 $\pm$ 0.0413 & 1.975 $\pm$ 0.046 & 3.643 $\pm$ 0.536\\ UNet-VGG~\cite{mosinska2018beyond} & 0.7218 $\pm$ 0.0013 & 0.8870 $\pm$ 0.0386 & 1.167 $\pm$ 0.026 & 2.784 $\pm$ 0.293\\ TopoLoss~\cite{hu2019topology} & 0.7621 $\pm$ 0.0036 & 0.9024 $\pm$ 0.0113 & 1.083 $\pm$ 0.006 & 1.076 $\pm$ 0.265\\ DMT~\cite{hu2021topology} & \textbf{0.7733 $\pm$ 0.0039} & 0.9077 $\pm$ 0.0021 & 0.876 $\pm$ 0.038 & 0.873 $\pm$ 0.402\\ \hline Dropout UNet~\cite{kendall2015bayesian} & 0.7410 $\pm$ 0.0019 & 0.8331 $\pm$ 0.0152 & 2.013 $\pm$ 0.072 & 3.121 $\pm$ 0.334 \\ Prob.-UNet~\cite{kohl2018probabilistic} & 0.7429 $\pm$ 0.0020 & 0.8401 $\pm$ 0.1881 & 1.873 $\pm$ 0.081 & 3.080 $\pm$ 0.206\\ \hline \textbf{Ours} & 0.7545 $\pm$ 0.0043 & \textbf{0.9141 $\pm$ 0.0036} & \textbf{0.813 $\pm$ 0.051} & \textbf{0.735 $\pm$ 0.104}\\ \hline \vspace{-.4in} \end{tabular} \end{center} \end{table*} \begin{figure*}[t] \centering \subfigure{ \includegraphics[width=0.13\textwidth]{figure/img39_4_4_ori.pdf}} \subfigure{ \includegraphics[width=0.13\textwidth]{figure/img39_4_4_gt.pdf}} \subfigure{ \includegraphics[width=0.13\textwidth]{figure/img39_4_4_pred.pdf}} \subfigure{ \includegraphics[width=0.13\textwidth]{figure/img39_4_4_pred_binary.pdf}} \subfigure{ \includegraphics[width=0.13\textwidth]{figure/img39_4_4_sample1.pdf}} \subfigure{ \includegraphics[width=0.13\textwidth]{figure/img39_4_4_sample3.pdf}} 
\subfigure{ \includegraphics[width=0.13\textwidth]{figure/img39_4_4_sample7.pdf}} \vspace{-.1in} \subfigure{ \includegraphics[width=0.13\textwidth]{figure/img18_4_0_ori.pdf}} \subfigure{ \includegraphics[width=0.13\textwidth]{figure/img18_4_0_gt.pdf}} \subfigure{ \includegraphics[width=0.13\textwidth]{figure/img18_4_0_pred.pdf}} \subfigure{ \includegraphics[width=0.13\textwidth]{figure/img18_4_0_pred_binary.pdf}} \subfigure{ \includegraphics[width=0.13\textwidth]{figure/img18_4_0_sample1_final.pdf}} \subfigure{ \includegraphics[width=0.13\textwidth]{figure/img18_4_0_sample2_final.pdf}} \subfigure{ \includegraphics[width=0.13\textwidth]{figure/img18_4_0_sample7_final.pdf}} \vspace{-.1in} \subfigure{ \includegraphics[width=0.13\textwidth]{figure/img5_1_2_ori.png}} \subfigure{ \includegraphics[width=0.13\textwidth]{figure/img5_1_2_gt.png}} \subfigure{ \includegraphics[width=0.13\textwidth]{figure/img5_1_2_pred.png}} \subfigure{ \includegraphics[width=0.13\textwidth]{figure/img5_1_2_pred_binary.png}} \subfigure{ \includegraphics[width=0.13\textwidth]{figure/img5_1_2_sample1_final.png}} \subfigure{ \includegraphics[width=0.13\textwidth]{figure/img5_1_2_sample3_final.png}} \subfigure{ \includegraphics[width=0.13\textwidth]{figure/img5_1_2_sample7_final.png}} \vspace{-.1in} \subfigure{ \stackunder{\includegraphics[width=0.13\textwidth]{figure/img11_0_1_ori.png}}{(a) Image}} \subfigure{ \stackunder{\includegraphics[width=0.13\textwidth]{figure/img11_0_1_gt.png}}{(b) GT}} \subfigure{ \stackunder{\includegraphics[width=0.13\textwidth]{figure/img11_0_1_pred.png}}{(c) LH}} \subfigure{ \stackunder{\includegraphics[width=0.13\textwidth]{figure/img11_0_1_pred_binary.png}}{(d) DMT}} \subfigure{ \stackunder{\includegraphics[width=0.13\textwidth]{figure/img11_0_1_sample1_final.png}}{(e) Sample1}} \subfigure{ \stackunder{\includegraphics[width=0.13\textwidth]{figure/img11_0_1_sample3_final.png}}{(f) Sample2}} \subfigure{ 
\stackunder{\includegraphics[width=0.13\textwidth]{figure/img11_0_1_sample4_final.png}}{(g) Sample3}} \vspace{-.1in} \caption{Qualitative results of the proposed method compared to DMT-loss~\cite{hu2021topology}. From left to right: \textbf{(a)} sample image, \textbf{(b)} ground truth, \textbf{(c)} continuous likelihood map and \textbf{(d)} thresholded binary mask for DMT~\cite{hu2021topology}, and \textbf{(e-g)} three sampled segmentation maps generated by our method.} \vspace{-.15in} \label{fig:qualitative} \end{figure*} Fig.~\ref{fig:qualitative} shows qualitative results. Compared with DMT~\cite{hu2021topology}, our method is able to produce a set of true structure-preserving segmentation maps, as illustrated in Fig.~\ref{fig:qualitative}(e-g). Note that, compared with existing topology-aware segmentation methods, our method is better at recovering weak connections by using Morse skeletons as hints. More qualitative results are included in Supplementary Material. \myparagraph{Ablation study of loss weights.} We observe that the performance of our method is quite robust to the loss weights $\alpha$ and $\beta$. As the learned distribution over the persistence threshold might affect the final performance, we conduct an ablation study on the weight of the KL divergence loss ($\beta$) on the DRIVE dataset. The results are reported in Fig.~\ref{fig:ablation}. When $\beta=10$, the model achieves slightly better performance in terms of VOI (0.813 $\pm$ 0.051, the smaller the better) than other choices. Note that, for all the experiments, we set $\alpha=1$. 
\begin{wrapfigure}{r}{0.4\textwidth} \centering \vspace{-.25in} \includegraphics[width=0.4\textwidth]{figure/bar_plot_with_error_bars.png} \vspace{-.3in} \caption{Ablation study results for $\beta$.} \vspace{-.05in} \label{fig:ablation} \end{wrapfigure} \myparagraph{Illustration of the structure-level uncertainty.} In this section, we would like to explore the structure-level uncertainty based on the sampled segmentation masks. We show three sampled masks (Fig.~\ref{fig:uncertain}(c-e)) generated in the inference stage for a given image (Fig.~\ref{fig:uncertain}(a)), and the structure-wise uncertainty map (Fig.~\ref{fig:uncertain}(f)). Note that the uncertainty map is generated by taking the variance across all the samples (10 for this specific case). Different from pixel-level uncertainty, with our method each small branch has a single uncertainty value. If we look at the original image, we find that the uncertainties are usually caused by the weak signals (small branches) in the original image. These weak signals make it difficult for the deep model to predict the corresponding locations correctly and confidently, especially structure-wise. This also matches practice: different from natural images, even experts cannot always reach a consensus on biomedical image annotation~\cite{armato2011lung,clark2013cancer}. It is beneficial that our model can generate both a set of plausible segmentation results and the uncertainty map as hints for further quality control. More explorations of structure-level uncertainty are included in Supplementary Material. 
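The variance-based uncertainty map described above can be computed directly from the sampled masks. A minimal sketch, assuming numpy (the function name is ours):

```python
import numpy as np

def uncertainty_map(sampled_masks):
    """Per-pixel variance over a stack of sampled binary segmentation masks.

    Because each sample keeps or drops whole Morse branches, all pixels of a
    branch flip together across samples, so the variance is constant along
    each branch -- the uncertainty is structure-level rather than pixel-level.
    """
    stack = np.stack(sampled_masks).astype(float)  # shape: (num_samples, H, W)
    return stack.var(axis=0)
```

For example, a branch present in half of the samples receives variance $0.25$ everywhere along its length.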
\begin{figure*}[t] \centering \subfigure[Image]{ \includegraphics[width=0.145\textwidth]{figure/img10_1_2_ori.png}} \subfigure[GT]{ \includegraphics[width=0.145\textwidth]{figure/img10_1_2_gt.png}} \subfigure[Sample1]{ \includegraphics[width=0.145\textwidth]{figure/img10_1_2_sample1_final.png}} \subfigure[Sample2]{ \includegraphics[width=0.145\textwidth]{figure/img10_1_2_sample4_final.png}} \subfigure[Sample3]{ \includegraphics[width=0.145\textwidth]{figure/img10_1_2_sample7_final.png}} \subfigure[Uncertainty]{ \includegraphics[width=0.188\textwidth]{figure/img10_1_2_heatmap.png}} \vspace{-.1in} \caption{An illustration of structure-level uncertainty.} \vspace{-.2in} \label{fig:uncertain} \end{figure*} \myparagraph{The advantage of joint training and optimization.} A straightforward alternative to the proposed approach is to use discrete Morse theory to postprocess the continuous likelihood map obtained from a standard segmentation network. In this way, we can still obtain structure-clean segmentation maps, but there are two main issues: 1) if the segmentation network itself is structure-agnostic, we will not be able to generate satisfactory results even with the postprocessing, and 2) we would have to manually choose the persistence threshold to prune unnecessary branches for each image, which is tedious and impractical. The proposed joint training strategy overcomes both issues. First, during training we incorporate the structure-aware loss ($L_{skeleton}$); consequently, the trained segmentation branch itself is essentially structure-aware. Second, with the prior and posterior nets, we are able to learn a reliable distribution of the persistence threshold ($\epsilon$) given an image at the inference stage. Sampling from the distribution makes it possible to generate satisfactory structure-preserving segmentation maps within a few trials (the inference does not take long), which is much more efficient. 
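The inference-stage postprocessing described earlier (keeping only the connected components of the binarized segmentation that intersect a sampled skeleton) can be sketched as follows; this is an illustrative, dependency-free version, not the released code:

```python
def grow_skeleton(binary_seg, skeleton):
    """Keep only the 4-connected components of the binarized segmentation
    that contain at least one skeleton pixel; everything else is dropped."""
    h, w = len(binary_seg), len(binary_seg[0])
    label = [[0] * w for _ in range(h)]
    keep, cur = set(), 0
    for i in range(h):
        for j in range(w):
            if binary_seg[i][j] and not label[i][j]:
                cur += 1
                stack = [(i, j)]
                label[i][j] = cur
                while stack:
                    y, x = stack.pop()
                    if skeleton[y][x]:
                        keep.add(cur)  # this component touches the skeleton
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and binary_seg[ny][nx] and not label[ny][nx]:
                            label[ny][nx] = cur
                            stack.append((ny, nx))
    return [[1 if label[i][j] in keep else 0 for j in range(w)] for i in range(h)]
```

Since the kept components cover the skeleton and the dropped ones are disjoint from it, the output inherits the skeleton's topology.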
\section{Conclusion} \label{sec:conslusion} Instead of learning a pixel-wise representation, we propose to learn a structural representation with a probabilistic model in order to segment with correct topology. Specifically, we construct the structural space by leveraging classical discrete Morse theory, and then build a probabilistic model to learn a distribution over structures. The model is trained and optimized jointly. At the inference stage, we are able to generate a set of structure-preserving segmentation maps and explore structure-level uncertainty, which is beneficial for human-in-the-loop quality control. Extensive experiments demonstrate the efficacy of the proposed method. \textbf{Limitation.} This paper is the first work to learn a structural representation for the image segmentation task. Though addressing the problem from a novel view, pruning the structures with a global threshold (the sampled persistence threshold $\epsilon$) is somewhat crude. Ideally, we should be able to adaptively prune unnecessary branches locally, which is left for future work.
\section*{Small Oscillations of a Vortex Ring: Hamiltonian Formalism and Quantization} \begin{center} {S.V. TALALOV} {Department of Applied Mathematics, Togliatti State University, \\ 14 Belorusskaya str., Tolyatti, Samara region, 445020 Russia.\\ svt\[email protected]} \end{center} \begin{abstract} This article investigates small oscillations of a vortex ring of zero thickness that evolves under the Local Induction Equation (LIE). We derive the differential equation that describes the dynamics of these oscillations. We suggest a new approach to the Hamiltonian description of this dynamical system, based on extending the set of dynamical variables by adding the circulation $\Gamma$ as a dynamical variable. The constructed theory is invariant under transformations of the Galilei group. The appearance of this group allows a new viewpoint on the energy of a vortex filament of zero thickness. We quantize this dynamical system and calculate the spectrum of the energy and the acceptable circulation values. The physical states of the theory are constructed with the help of coherent states for the Heisenberg-Weyl group. \end{abstract} {\bf keywords:} vortex ring, constrained Hamiltonian systems, quantization {\bf PACS numbers:} 47.10.Df 47.32.C \vspace{5mm} \section{Introduction} The study of various vortex structures has a long history. In spite of this fact, the dynamics of such objects continues to attract interest\cite{Moff}. Without attempting a review of the literature on this topic, we mention only some works that are directly relevant to our research. Thus, in the work \cite{Hasim} the vortex filament (in the LIE approximation) was first described in terms of solutions of the non-linear Schr\"odinger equation. It should also be mentioned that a gauge equivalence between the non-linear Schr\"odinger equation and the continuous Heisenberg spin chain exists\cite{TakFad}. 
It is well-known that quantization of similar non-linear systems is a complicated problem (see, for example, \cite{Sklyanin}). As a consequence, the connection with the initial hydrodynamical system (the vortex ring in our case) may seem not obvious. That is why investigations of small perturbations of certain stable initial configurations are interesting, both classical and quantum. Let us note here the work \cite{Majda_Bertozzi}, where, in particular, a system of weakly perturbed straight vortex filaments with certain interactions was investigated. In the work \cite{AbhGuh}, the quantization of such filaments was considered. It is also necessary to mention the direction of the study of quantum vortices in superfluid helium \cite{Donn,Aarts,Andr}. Since our research lies in a different plane, we will not dwell on this in detail. In this work, we consider the closed evolving curve ${\boldsymbol{r}}(\tau,\xi)$ that is defined by the formula \begin{equation} \label{involve} {\boldsymbol{r}}(\tau,\xi) = \boldsymbol{q} + {R_0}\, \int\limits_{0}^{2\pi} \left[\, {\xi - \eta}\,\right] {\boldsymbol j}(\tau,\eta) d\eta\,. \end{equation} Both the parameter $\xi$ that parametrizes the curve ${\boldsymbol{r}}(\cdot,\xi)$ and the evolution parameter $\tau$ are dimensionless. The constant $R_0$ defines the scale of length. The notation $[\,x\,]$ means the integer part of the number $x/{2\pi}$, and the variables $\boldsymbol{q} = \boldsymbol{q}(\tau)$ may be $\tau$-dependent (conditionally, these are the coordinates of the ''mass center''). The $2\pi$-periodic vector function ${\boldsymbol j}(\xi) \in E_3$ defines the unit tangent vector. We postulate that the function ${\boldsymbol{r}}(\tau,\xi)$ satisfies the LIE equation \begin{equation} \label{LIE_eq} \partial_\tau {\boldsymbol{r}}(\tau ,\xi) = \frac{1}{R_0}\, \partial_\xi{\boldsymbol{r}}(\tau ,\xi)\times\partial_\xi^{\,2}{\boldsymbol{r}}(\tau ,\xi)\,. 
\end{equation} Consequently, the function ${\boldsymbol j}(\xi) $ satisfies the equation for the continuous Heisenberg spin chain \begin{equation} \label{CHSCeq} \partial_\tau {\boldsymbol{j}}(\tau ,\xi) = \,{\boldsymbol{j}}(\tau ,\xi)\times\partial_\xi^{\,2}{\boldsymbol{j}}(\tau,\xi)\,. \end{equation} The following equalities are fulfilled too: \begin{equation} \label{constr_j_0} \int\limits_{0}^{2\pi}{j}_k(\xi)\,d\xi = 0\, \qquad (k=x,y,z)\,. \end{equation} The original space-time symmetry group for this system is the group $E(3)\times E_\tau$, where the group $E(3)$ is the group of motions of the space $E_3$ and $E_\tau$ is the group of ''translations'' $\tau \to \tau +c$. The simplest standard configuration here is the vortex ring of radius $R_0$ that moves parallel to the $z$-axis with some constant velocity: \begin{equation} \label{ring_1} {\boldsymbol{r}_0}(\tau,\xi) = \boldsymbol{q}_0 + {R_0}\, \int\limits_{0}^{2\pi} \left[\, {\xi - \eta}\,\right] {\boldsymbol j}_0(\tau,\eta) d\eta\,. \end{equation} The tangent vector ${\boldsymbol j}_0$ has the following coordinates: \begin{equation} \label{tang_v} {\boldsymbol{j}_0}(\xi) = \{ - \sin\xi\,, \quad \cos\xi\,, \quad 0\}\,. \end{equation} The quantities $ {q_0}_i = const$ for the indexes $i = x,y$ here; the quantity ${q_0}_z$ is some linear function of the variable $\tau$ that will be specified later. We consider the small perturbation $(\varepsilon \ll 1)$ of the tangent vector (\ref{tang_v}) and coordinates $ \boldsymbol{q}_0$: \begin{equation} \label{tang_v_per} \boldsymbol{q} = \boldsymbol{q}_0 + \varepsilon\, \boldsymbol{q}_{prt}\,,\qquad \boldsymbol{j}(\tau,\xi) = {\boldsymbol{j}_0}(\xi) + \varepsilon {\boldsymbol{j}_{prt}}(\tau,\xi) \,. 
\end{equation} Therefore, we have the representation $${\boldsymbol{r}}(\tau ,\xi) = {\boldsymbol{r}_0}(\tau,\xi) + \varepsilon {\boldsymbol{r}_{prt}}(\tau ,\xi)\,.$$ Let us substitute the representation (\ref{tang_v_per}) of the vector ${\boldsymbol{j}}(\xi)$ into equation (\ref{CHSCeq}). Taking into account the equality $\partial^{\,2}_\xi {\boldsymbol{j}_0}(\xi) = - {\boldsymbol{j}_0}(\xi)$ and neglecting the terms of order $\varepsilon^2$, we deduce the equation \begin{equation} \label{lin_eq} \partial_\tau {\boldsymbol{j}_{prt}}(\tau ,\xi) = {\boldsymbol{j}_0}(\tau ,\xi) \times \left[\, {\boldsymbol{j}_{prt}}(\tau ,\xi) + \partial_\xi^{\,2}{\boldsymbol{j}_{prt}}(\tau,\xi)\, \right]\,. \end{equation} Thus, we have a linear equation that describes the dynamics of small perturbations of the vortex ring (\ref{ring_1}). In what follows, we will not write the ${''prt''}$ index explicitly, hoping that this will not lead to misunderstandings. It must be emphasized that equalities (\ref{constr_j_0}) must hold exactly for all configurations (perturbed or not). The symmetry of the initial configuration ${\boldsymbol{j}_0}(\tau ,\xi)$ makes it natural to use cylindrical coordinates $\{\rho, \phi, z\}$. The $z$-axis of such a system coincides with the axis of the unperturbed vortex ring. Let the three vectors $\{ {\boldsymbol{e}_\rho}\,, {\boldsymbol{e}_\phi}\,, {\boldsymbol{e}_z}\,\}$ denote the local basis of the cylindrical system. Obviously, in the special case $ \boldsymbol{j} \equiv {\boldsymbol{j}_0}$, the parameter $\xi = \phi$. Therefore, $$ {\boldsymbol{j}_0} = {\boldsymbol{e}_\phi}\,.$$ Because we consider only small perturbations of the function ${\boldsymbol{j}_0}$, we assume $\xi = \phi$ for all configurations. 
Thus, equation (\ref{lin_eq}) takes the following form in the cylindrical basis: \begin{equation} \label{lin_eq_cyl} \partial_\tau {\boldsymbol{j}}(\tau ,\xi) = \Bigl({{j}_z}(\tau ,\xi) + \partial_\xi^{\,2}{{j}_z}(\tau,\xi)\Bigr) {\boldsymbol{e}_\rho} - \Bigl( \partial_\xi^{\,2}{{j}_\rho}(\tau,\xi) -2\, \partial_\xi {j}_\phi (\tau,\xi) \Bigr) \boldsymbol{e}_z \,. \end{equation} This equation demonstrates that $ \partial_\tau {j_\phi}(\tau ,\xi) \equiv 0\,.$ For example, the initial data ${j_\phi}(0 ,\xi) \equiv j_\phi^{\,0} $, where $ \partial_\xi j_\phi^{\,0} \equiv 0$, lead to the equality $${j_\phi}(\tau ,\xi) \equiv j_\phi^{\,0}\,, \qquad j_\phi^{\,0} = const\,. $$ Obviously, the perturbation ${\boldsymbol{j}_0}(\xi) \to {\boldsymbol{j}_0}(\xi) + \varepsilon j_\phi^{\,0}{\boldsymbol{e}_\phi}$ conserves the form of the original vortex ${\boldsymbol{r}_0}$. In what follows, we consider only the case $j_\phi^{\,0} = const$. Therefore, the perturbation amplitude ${\boldsymbol j}(\tau ,\xi)$ of the vector ${\boldsymbol{j}_0}(\tau ,\xi)$ has the following form in the cylindrical basis: \begin{equation} \label{j-two} {\boldsymbol j}(\tau ,\xi) = j_\rho(\tau,\xi){\boldsymbol{e}}_\rho + j_\phi^{\,0} \boldsymbol{e}_\phi + j_z(\tau ,\xi) {\boldsymbol{e}_z}\,. \end{equation} The following identity takes place: \begin{equation} \label{iden_1} {\boldsymbol j}(\tau ,\xi){\boldsymbol{j}_0}(\tau ,\xi) \equiv j_\phi^{\,0}\,. \end{equation} The notation ${\boldsymbol j}{\boldsymbol{j}_0}$ means an inner product of two vectors. In what follows, it is more convenient for us to consider the non-trivial components of the vector ${\boldsymbol j}$ in a complex-valued form. We introduce the notation ${\sf j}$ for the complex perturbation amplitude to avoid any ambiguity. 
Thus, $$ {\sf j} = j_\rho + i j_z\,.$$ Equation (\ref{lin_eq_cyl}) is written in complex form as follows: \begin{equation} \label{lin_eq_com} \partial_\tau {\sf j} = - i \partial_\xi^{\,2} {\sf j} - \frac{i}{2}\Bigl({\sf j} - \overline{\,\sf j\,}\, \Bigr) \,. \end{equation} This simple equation can be solved explicitly: \begin{equation} \label{sol_1} {\sf j}(\tau ,\xi) = \sum_{n} {\sf j}_{\,n}\, e^{\,i\,[n\xi + n \sqrt{n^2 - 1}\,\tau\,]} \,, \end{equation} where ${\sf j}_{\,n} \equiv const$ and the coefficients $ \overline{\, {\sf j}\,}_{\,-n}$ and ${\sf j}_{\,n}$ are related to each other as follows: \begin{equation} \label{jn_jn-conj} \overline{\, {\sf j}\,}_{\,-n} = 2 \left[n\sqrt{n^2 -1} - n^2 + \frac{1}{2} \right] {\sf j}_{\,n} \,. \end{equation} It is clear that this solution is stable as $\tau \to \infty$. Note that the stability of real vortex rings is a non-trivial problem in general; for example, this problem was investigated in the work \cite{Protas} for the Norbury rings. Restrictions (\ref{constr_j_0}) are rewritten in terms of the cylindrical coordinates as follows: \begin{equation} \label{cons_cyl_1} \int_0^{2\pi} j_\rho(\xi)\,e^{\pm i\xi}d\xi =0\,,\qquad \int_0^{2\pi} j_z(\xi)\,d\xi =0\,. \end{equation} As regards the coefficients ${\sf j}_{\,0}$, ${\sf j}_{\pm 1}$ in the solution (\ref{sol_1}), constraints (\ref{cons_cyl_1}) lead to the formulas \begin{eqnarray} \label{j_0} {\sf j}_{\,0} & = & \frac{1}{2\pi} \int_0^{2\pi}\Bigl(j_\rho + i j_z \Bigr)d\xi = \frac{1}{2\pi} \int_0^{2\pi}j_\rho \,d\xi\,, \\[2mm] {\sf j}_{\,\pm 1} & = & \frac{1}{2\pi}\int_0^{2\pi}\Bigl(j_\rho + i j_z \Bigr)\,e^{\mp i\xi} d\xi = \frac{1}{2\pi}\int_0^{2\pi}j_z (\pm\sin\xi + i\cos\xi)\,d\xi\,. \label{j_pm} \end{eqnarray} So, the last equalities lead to the restrictions on the coefficients ${\sf j}_{\pm 1}$: \begin{equation} \label{j_pm_rstr} {\sf j}_{\, 1} = - \overline{\,{\sf j}\,}_{\,-1}\,. 
\end{equation} As regards the coefficient ${\sf j}_{\,0}$, the equalities (\ref{j_0}) lead to the restriction ${\sf j}_{\,0} = \,\overline{\,{\sf j}\,}_{\,0}$. Although these restrictions are deduced from the constraints (\ref{constr_j_0}), they are consistent with formula (\ref{jn_jn-conj}). \section{Dynamical invariants and extended set of the variables} Considering the vortex structure in terms of equation (\ref{LIE_eq}) alone is apparently too formal. In addition to the LIE equation, we postulate in our theory the canonical formulas for the momentum and angular momentum that were deduced in fluid dynamics \cite{Batche}: \begin{equation} \label{p_and_m_st} \tilde{\boldsymbol{p}} = \frac{1}{2 }\,\int\,\boldsymbol{r}\times\boldsymbol{\omega}(\boldsymbol{r})dV\,,\qquad \tilde{\boldsymbol{s}} =\frac{1}{3}\, \int\,\boldsymbol{r}\times \bigl(\boldsymbol{r}\times\boldsymbol{\omega}(\boldsymbol{r})\bigr)dV\,. \end{equation} The vector $\boldsymbol{\omega}(\boldsymbol{r})$ denotes the vorticity. The fluid density is $\varrho \equiv 1$ here. As is well known, the vorticity of a closed vortex filament is calculated by means of the formula \begin{equation} \label{vort_w} \boldsymbol{\omega}(\boldsymbol{r}) = \Gamma \int\limits_{0}^{2\pi}\,\hat\delta(\boldsymbol{r} - \boldsymbol{r}(\xi))\partial_\xi{\boldsymbol{r}}(\xi)d\xi\,, \end{equation} where the symbol $\Gamma$ denotes the circulation and the symbol $\hat\delta(\xi)$ denotes the $2\pi$-periodic $3D$ $\delta$-function. Taking into account the formulae (\ref{involve}), (\ref{constr_j_0}) and (\ref{vort_w}), the following expression for the canonical momentum is deduced: \begin{equation} \label{impuls_def} \tilde{\boldsymbol{p}} = {R_0}^2 \Gamma {\boldsymbol f} \,, \qquad {\boldsymbol f} = \frac{1}{2}\iint\limits_{0}^{2\pi} \left[\, {\xi - \eta}\,\right]\,{\boldsymbol j}(\eta)\times{\boldsymbol j}(\xi)d\xi d\eta\,. 
\end{equation} A similar formula can be written for the angular momentum $\tilde{\boldsymbol{s}}$; however, we omit the relevant details here. As opposed to the formulae for the values $\tilde{\boldsymbol{p}}$ and $\tilde{\boldsymbol{s}}$, the canonical formula for the energy ${\mathcal E}$ gives an unsatisfactory result because the integral diverges. We will return to this question later. Let us substitute the expansion (\ref{tang_v_per}) into the expression for the vector ${\boldsymbol f}$ (see (\ref{impuls_def})). Taking into account identity (\ref{iden_1}), the following formula for the vector ${\boldsymbol f}$ holds: \begin{equation} \label{f_gen} {\boldsymbol f} = \pi(1 + 2\,\varepsilon j_\phi^{\,0}) {\boldsymbol e}_z - \varepsilon {\boldsymbol f}_{\!\!\perp} \,,\qquad {\boldsymbol f}_{\!\!\perp} = \int_0^{2\pi} j_z(\xi) {\boldsymbol e}_\phi \,d\xi\,. \end{equation} The quantity ${\boldsymbol f}$ includes both an unperturbed ($\pi{\boldsymbol e}_z$) and a perturbed ($ 2\pi\,\varepsilon j_\phi^{\,0} {\boldsymbol e}_z - \varepsilon {\boldsymbol f}_{\!\!\perp}$) part. Correspondingly, the full momentum $\tilde{\boldsymbol{p}}$ has the following form: $$ \tilde{\boldsymbol{p}} = - \varepsilon \tilde{\boldsymbol{p}}_{\!\perp} + \tilde{p}_{\,\parallel} {\boldsymbol{e}_z}\,,$$ where \begin{equation} \label{p_z1} \tilde{p}_{\,\|} = \pi (1 + 2\,\varepsilon j_\phi^{\,0}) {R_0}^2 \Gamma \,. \end{equation} As for the perturbation amplitudes of the momentum, we have the following formulas: \begin{eqnarray} \label{p_z_pert} \tilde{p}_z & = & 2\pi {R_0}^2 \Gamma j_\phi^{\,0} \,, \\ \tilde{\boldsymbol{p}}_{\!\perp} & = & {R_0}^2 \Gamma {\boldsymbol f}_{\!\!\perp}\,. \label{imp_perp} \end{eqnarray} Next, we intend to construct a Hamiltonian dynamical system that corresponds to equation (\ref{lin_eq}). For the general case (\ref{LIE_eq}), the corresponding approach was proposed by the author in \cite{Tal_1}. 
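Before proceeding, the dispersion relation in (\ref{sol_1}) and the coefficient relation (\ref{jn_jn-conj}) can be checked numerically. The following sketch (purely illustrative; the mode number $n=2$ and the amplitude value are made up) verifies that a single mode pair built from these relations satisfies the complex linearized equation (\ref{lin_eq_com}):

```python
import numpy as np

# Check that j(tau, xi) = j_n e^{i(n xi + w_n tau)} + j_{-n} e^{i(-n xi - w_n tau)},
# with w_n = n*sqrt(n^2 - 1) and conj(j_{-n}) = 2*[w_n - n^2 + 1/2]*j_n,
# satisfies d_tau j = -i d_xi^2 j - (i/2)(j - conj(j)).
n = 2
w = n * np.sqrt(n**2 - 1.0)
j_pos = 0.3 + 0.1j                               # arbitrary amplitude j_n
j_neg = np.conj(2.0 * (w - n**2 + 0.5) * j_pos)  # j_{-n} from the relation above

rng = np.random.default_rng(0)
tau, xi = rng.uniform(0, 2 * np.pi, size=(2, 50))  # random sample points

e_pos = np.exp(1j * (n * xi + w * tau))
e_neg = np.exp(1j * (-n * xi - w * tau))

j = j_pos * e_pos + j_neg * e_neg
dtau_j = 1j * w * (j_pos * e_pos - j_neg * e_neg)   # analytic d/d tau
dxi2_j = -n**2 * j                                   # analytic d^2/d xi^2
rhs = -1j * dxi2_j - 0.5j * (j - np.conj(j))

err = np.max(np.abs(dtau_j - rhs))
print(err)  # at the level of machine precision
```

The residual vanishes to machine precision, which also confirms the self-consistency of the relation between $\overline{\,{\sf j}\,}_{\,-n}$ and ${\sf j}_{\,n}$ when it is applied to both members of the mode pair.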
We will briefly recall here the main points of the proposed theory, making the necessary modifications along the way. In our opinion, the following steps must be taken to construct a physically interpretable dynamical system in our case: \begin{enumerate} \item As mentioned above, we need to supplement equation (\ref{LIE_eq}) with formulas for the momentum $\tilde{\boldsymbol{p}}$ and the angular momentum $\tilde{\boldsymbol{s}}$; \item The ''fluid velocity'' variable is absent from our theory. We propose to take the dynamics of the surrounding fluid into account in a minimal way: we declare the value $\Gamma$ a dynamical variable, in addition to the variables $\boldsymbol{j}(\xi)$ and $\boldsymbol{q}$. We denote by ${\mathcal A}$ this (extended) set of dynamical variables $\{\,\boldsymbol{q}\,,{\boldsymbol j}(\xi)\,,\Gamma\,\}$ constrained by the conditions (\ref{constr_j_0}). \item The theory must contain a sufficient number of dimensional constants. So far, we have used the dimensionless ''time'' $\tau$. Consequently, the additional dimensional constant $t_0$, which defines the time scale, must be added to the theory in some way. Subsequently, we will express this constant in terms of other dimensional constants that have a clear physical meaning in our model. In addition to the constants $R_0$ and $t_0$, the theory should contain a ''mass constant'' $m_0$. This constant will appear in the theory in a completely natural way later on. \end{enumerate} As a subtotal, we have the following \begin{prop} The set ${\mathcal A}$ parametrizes the considered dynamical system - the closed vortex filament ${\boldsymbol{r}}(\xi) $ that evolves in accordance with the LIE equation. This dynamical system has a momentum $\tilde{\boldsymbol{p}}$ and an angular momentum $\tilde{\boldsymbol{s}}$ that are calculated as prescribed above. \end{prop} To perform the hamiltonization of our system, we are going to describe the set ${\mathcal A}$ in terms of other variables. 
The reasons are as follows: \begin{itemize} \item we intend to expand the symmetry group $E(3)\times E_\tau$ to the Galilei group ${\mathcal G}_3$ and use the group-theoretical approach for the definition of the energy of our system; \item the new variables will be more suitable for the subsequent quantization, which will be carried out later in this article. \end{itemize} In addition, the new variables will give an obvious interpretation of the considered dynamical system as a structured particle. It is probably appropriate to mention here the pioneering work \cite{Thom}, in which the observed particles are modeled by vortex structures. As a first step, we extend the set ${\mathcal A}$. Let us denote by ${\mathcal A}^{\,\prime}$ the set of the independent variables $({\boldsymbol q}\,;\, \tilde{\boldsymbol p}\,; {\boldsymbol j}(\xi)\,)$. The formula (\ref{impuls_def}) defines the injection $F$: $$F:\quad {\mathcal A} \quad \mathrel{\mathop{\longrightarrow}^{F} } \quad {\mathcal A}^{\,\prime}\,, \qquad {\rm Ran}\, F \subset {\mathcal A}^{\,\prime}\,. $$ On the set ${\mathcal A}^{\,\prime}$ the action of the centrally extended Galilei group $\widetilde{\mathcal G}_3 $ is defined in a natural way. We parametrize the elements $g \in \widetilde{\mathcal G}_3 $ as follows: $$ g:\qquad \Bigl( {\mathcal R}\,, {\boldsymbol v}\,, {\boldsymbol a}\,, c\,; m_0 \Bigr)\,$$ where ${\mathcal R}\in SO(3)$, ${\boldsymbol v}\,, {\boldsymbol a} \in E_3$, $c \in {\sf R}$ and the central charge $m_0 \in {\sf R}$. Traditionally, the last parameter is interpreted as the ''mass of the particle''. 
Before determining the action of this group on the set ${\mathcal A}^{\,\prime}$, we introduce the factor $m_0/R_0^3$ in the standard hydrodynamical formulas (\ref{p_and_m_st}) to give the values $\tilde{\boldsymbol{p}}$ and $\tilde{\boldsymbol{s}}$ the same dimensions as in classical mechanics: \begin{equation} \label{rep_factor} \tilde{\boldsymbol{p}} \longrightarrow {\boldsymbol{p}} = (m_0/R_0^3) \tilde {\boldsymbol{p}}\,,\qquad \tilde{\boldsymbol{s}} \longrightarrow {\boldsymbol{s}} = (m_0/R_0^3) \tilde {\boldsymbol{s}}\,. \end{equation} Taking into account this redefinition, the group action $\circ$ on the set ${\mathcal A}^{\,\prime}$ is defined as follows: $$ g\circ ({\boldsymbol q} \,;\, {\boldsymbol p}\,; {\boldsymbol j}(\xi)\,) = ({\mathcal R}{\boldsymbol q} + {\boldsymbol v}t_0\tau + {\boldsymbol a}\,;\, {\mathcal R}{\boldsymbol p} + m_0 {\boldsymbol v}\,; {\mathcal R}{\boldsymbol j}(\xi)\,)\,,$$ and $g\circ (t_0\tau) = t_0\tau + c$. Next, we introduce the variables $$ q_i(0) = q_i - {\tau}(t_0/ m_0) p_i \,,\qquad i=x,y,z\,, $$ which will sometimes be convenient to use. The curve ${\boldsymbol{r}}(\tau ,\xi)$ is reconstructed from the variables $({\boldsymbol q}(0)\,;\, {\boldsymbol p}\,; {\boldsymbol j}(\xi)\,)$ in accordance with the formula \begin{equation} \label{z_funct} {\boldsymbol{r}}(\tau,\xi) = {\boldsymbol {q}(0)} + \tau (t_0/ m_0) {\boldsymbol{p}} + {R_0} \int\limits_{0}^{2\pi} \left[\,{\xi - \eta}\,\right] {\boldsymbol j}(\tau,\eta) d\eta\,. \end{equation} As a second step, we must introduce certain constraints on the set ${\mathcal A}^{\,\prime}$ that define a set $\Omega \subset {\mathcal A}^{\,\prime} $. The criterion for introducing these constraints is the one-to-one correspondence $$ {\mathcal A} \longleftrightarrow \Omega\,.$$ It is clear that we must define two constraints, because we introduce three variables $({\boldsymbol p})$ instead of one variable $\Gamma$. 
First of all, we must require that the vectors ${\boldsymbol p}_{\!\perp}$ and ${\boldsymbol f}_{\!\!\perp}$ be proportional. In general, this is not true on the set ${\mathcal A}^{\,\prime}$, because these vectors are independent on this set. For convenience, let us introduce the complex values: $$ {\sf p} = (p_{\!\perp})_x + i (p_{\!\perp})_y\,, \qquad {\sf f} = ({f}_{\!\perp})_{\,x} + i({f}_{\!\perp})_{\,y} \,.$$ In accordance with both the definition of the vector ${\boldsymbol f}_{\!\perp}$ (see (\ref{f_gen})) and the formulas (\ref{j_pm}), we have the equalities: \begin{equation} \label{f_perp} {\sf f} = -\int_0^{2\pi}\! j_z \sin\xi d\xi + i \int_0^{2\pi}\! j_z \cos\xi d\xi = 2\pi {\sf j}_{-1} \,. \end{equation} Taking into account the replacement (\ref{rep_factor}), formula (\ref{imp_perp}) takes the following form: \begin{equation} \label{complex_p} {\sf p} = \frac{m_0 \Gamma}{R_0}\, {\sf f} = \frac{2\pi m_0 \Gamma}{R_0}\,{\sf j}_{-1}\,. \end{equation} Therefore, by virtue of the formula (\ref{imp_perp}) and the complex-valued notation for the corresponding values, we must demand: \begin{equation} \label{constr_prop} \exists \lambda \in {\sf R}: \qquad \quad {\sf p} = 2\pi \lambda p_{\,0} {\sf j}_{-1}\,. \end{equation} The variables ${\sf p}$ and ${\sf j}_{-1}$ are complex numbers here, and the value $p_{\,0} = m_0 R_0/t_0 $. As we will show further, the constant $p_{\,0}$ has a clear physical meaning, so it seems natural to use it as an {\it input} constant instead of the value $t_0$. However, we will use the constant $t_0$ to simplify some formulas. The condition (\ref{constr_prop}) can also be written in the following form: \begin{equation} \label{constr_0} p_x (f_{\!\perp})_y - p_y (f_{\!\perp})_x =0\,. \end{equation} Of course, this equality (just like equality (\ref{constr_prop})) is fulfilled identically in the case when our theory is parametrized by the set ${\mathcal A}$. 
In complex-valued notation the constraint (\ref{constr_0}) is written as follows: \begin{equation} \label{constr_fin1} \Phi_0 \equiv {\sf p}\,{\overline{\,\sf j\,}_{-1}} - {\overline{\sf p}}\,{{\sf j}_{-1}} = 0 \,. \end{equation} If the quantities ${\sf p}$ and ${\sf j}_{-1} $ take non-zero values and the condition (\ref{constr_fin1}) is fulfilled, the variable $\Gamma$ can be determined unambiguously through the formulas: \begin{equation} \label{Gamma} \Gamma = \frac{\lambda R_0^2}{t_0}\,, \qquad \quad |{\sf p}|^2 - 4\pi^2 p_0^{\,2}\lambda^2 |{\sf j}_{-1}|^2 =0\,. \end{equation} At the corresponding zero points, the value $\Gamma = \Gamma_0$ remains undetermined. Let us substitute the representation (\ref{z_funct}) for the original vortex ${\boldsymbol{r}_0}(\tau,\xi)$ into the LIE equation (\ref{LIE_eq}). This procedure leads to the equality for the momentum of the unperturbed vortex ring: $$ p_z = p_{\,0} = m_0 R_0/t_0\,.$$ To deduce this formula we suppose that $\partial_\tau {\boldsymbol p} = \partial_\tau {\boldsymbol q}(0) =0$. These conditions are consistent with the subsequent hamiltonization of our dynamical system. In accordance with formulas (\ref{p_z1}) and (\ref{rep_factor}), we have in this case: $p_{\,0} = \pi m_0 \Gamma_0 /{R_0}\,.$ Therefore, $\Gamma_0 = R_0^2/\pi t_0 $. Let us return to the perturbed case. In this case the value $\Gamma$ in formula (\ref{p_z_pert}) must be the same value as determined in formula (\ref{Gamma}). That is why we must write the second constraint: \begin{equation} \label{constr_fin2} \Phi_1 \equiv |{\sf j}_{-1}|^2 p_z^2 - (j_\phi^{\,0})^2 |{\sf p}|^2 = 0\,. \end{equation} In the special case when $j_\phi^{\,0} = 0$, we have the constraint \begin{equation} \label{constr_fin2_0} \Phi_1 \equiv p_z = 0\, \end{equation} instead of constraint (\ref{constr_fin2}). This means that the set $\Omega$ describes a planar system here. We will consider the case $j_\phi^{\,0} = 0 $ only. 
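The recovery of $\Gamma$ from the constraints can be sketched numerically. In the following illustration all the values (${\sf p}$, ${\sf j}_{-1}$, the constants $m_0$, $R_0$, $t_0$, and the factor $\lambda$) are made up; the point is only that $\Phi_0 = 0$ makes the ratio in (\ref{constr_prop}) real, after which (\ref{Gamma}) fixes $\Gamma$:

```python
import numpy as np

# Recover lambda and Gamma from made-up data satisfying Phi_0 = 0:
# p = 2*pi*lambda*p0*j_{-1}, Gamma = lambda*R0^2/t0.
m0, R0, t0 = 1.0, 1.0, 1.0
p0 = m0 * R0 / t0

sf_j1 = 0.2 + 0.1j                        # hypothetical value of j_{-1}
lam_true = 0.4                            # hypothetical dimensionless circulation
sf_p = 2 * np.pi * lam_true * p0 * sf_j1  # p aligned with j_{-1}, so Phi_0 = 0

Phi0 = sf_p * np.conj(sf_j1) - np.conj(sf_p) * sf_j1  # constraint Phi_0
lam = (sf_p / (2 * np.pi * p0 * sf_j1)).real          # real because Phi_0 = 0
Gamma = lam * R0**2 / t0

print(abs(Phi0), lam, Gamma)
```

The second equality in (\ref{Gamma}), $|{\sf p}|^2 - 4\pi^2 p_0^{\,2}\lambda^2 |{\sf j}_{-1}|^2 = 0$, then holds automatically for the recovered $\lambda$.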
In the general case, when $j_\phi^{\,0} \not= 0$, the set $\Omega \subset {\mathcal A}^{\,\prime} $ is defined by the constraints (\ref{constr_fin1}) and (\ref{constr_fin2}). Finally, we have the following \begin{prop} The variables ${\boldsymbol j}(\xi)$, $\boldsymbol{q}$, $\boldsymbol{p}$, which are declared the new fundamental variables, uniquely parametrize the considered dynamical system. These variables are constrained by the equalities (\ref{constr_fin1}) and (\ref{constr_fin2}). \end{prop} \section{Energy and Hamiltonian structure } The straightforward calculation of the energy of a vortex filament is usually performed using the canonical formula \cite{Saffm} \begin{equation} \label{can_energy} {\mathcal E} = \frac{1}{8\pi}\,\iint \frac{\boldsymbol{\omega}(\boldsymbol{r})\boldsymbol{\omega}(\boldsymbol{r}^{\prime})}{|\,\boldsymbol{r} - \boldsymbol{r}^{\prime}|}\,dVdV^{\prime}= \frac{{\Gamma}^{\,2}}{8\pi}\iint \frac{\partial_\xi{\boldsymbol{r}}(\xi)\partial_\xi{\boldsymbol{r}}(\xi^{\prime})}{|\,{\boldsymbol{r}}(\xi) - {\boldsymbol{r}}(\xi^{\prime}) |}\,d\xi d\xi^{\prime}\,.\nonumber \end{equation} The result is unsatisfactory if the filament has zero thickness: the integral in this formula diverges. The standard approach to this problem is to take into account the finite thickness $a$ of the filament and subsequently regularize the integral (see, for example, \cite{Zhu}, where the interaction between pairs of quantized vortex rings was studied). In the proposed approach, we have chosen a different method: the energy of an arbitrary configuration in our model will be considered from a group-theoretical viewpoint. 
Indeed, the Lie algebra of the group $\widetilde{\mathcal G}_3$ has three Casimir functions: $$ {\hat C}_1 = m_0 {\hat I}\,,\quad {\hat C}_2 = \left({\hat M}_i - \sum_{k,j=x,y,z}\epsilon_{ijk}{\hat P}_j {\hat B}_k\right)^2\,, \quad {\hat C}_3 = \hat H - \frac{1}{2m_0}\sum_{i=x,y,z}{\hat P}_i^{\,2}\,,$$ where ${\hat I}$ is the unit operator, and ${\hat M}_i$, $\hat H$, ${\hat P}_i$ and ${\hat B}_i$ ($i = x,y,z$) are the generators of rotations, time and space translations, and Galilean boosts, respectively. As is well known, the function ${\hat C}_3 $ can be interpreted as the ''internal energy of the particle''. Because our dynamical system has ''internal degrees of freedom'', the function ${\hat C}_3 $ can depend on the internal variables. We define this function as follows: $${ C}_3 = {\mathcal E}_0 \sum_{n>1} |\,{\sf j}_{\,-n}|^2 n\sqrt{n^2 -1} \,.$$ Here we have introduced the value ${\mathcal E}_0= m_0 R_0^2/t_0^2$, which defines the energy scale in our theory. The choice of the function $C_3$ will be fully justified after the definition of the Hamiltonian structure. As a result, the following function on the set $\Omega$ is a good candidate for the energy\footnote{We are considering here the energy of excitations only.}: \begin{equation} \label{energy_1} {H}_0(p_1,p_2,p_3\,;{\sf j}) = \frac{{\boldsymbol{p}}^{\,2}}{2m_0} + {\mathcal E}_0 \sum_{n>1} |\,{\sf j}_{\,-n}|^2 n\sqrt{n^2 -1} \,. \end{equation} To complete the consideration of energy, we must define Poisson brackets that are compatible with the dynamics and the constraints. In this article we consider the simplest case, which corresponds to the value $j_\phi^{\,0} = 0$. Consequently, the constraint (\ref{constr_fin2_0}) is fulfilled. It is quite natural to add the additional constraint \begin{equation} \label{add_constr} \Phi_2 \equiv q_z - R_0 \tau = 0 \,. \end{equation} Pursuant to Dirac's prescriptions about the primacy of the Hamiltonian structure, we define such a structure axiomatically here. 
The corresponding definitions are as follows. \begin{itemize} \item Phase space ${\mathcal H} = {\mathcal H}_3 \times {\mathcal H}_j $. The space $ {\mathcal H}_3$ is the phase space of a $3D$ free structureless particle. It is parametrized by the variables ${\boldsymbol{q}}$ and ${\boldsymbol{p}}$. The space $ {\mathcal H}_j$ is parametrized by the quantities ${\sf j}_{\,-n}$ ($n = 0, 1, \dots$). \item Poisson structure: \begin{eqnarray} \{p_i\,,q_j\} & = & \delta_{ij}\,,\qquad i,j = x,y,z\,, \nonumber \\ \label{ja_jb} \{ {\sf j}_{\,m}, \overline{\,\sf j\,}_{\,n}\} & = & (i/{\mathcal E}_0 t_0)\, \delta_{mn}\,, \qquad m,n = -1,-2,\dots \end{eqnarray} All other brackets vanish. The variable ${\sf j}_{\,0}$ annihilates all brackets. Thus, the Poisson structure of the theory is degenerate in general: the value ${\sf j}_{\,0}$ marks the symplectic sheets on which the structure is non-degenerate. \item Constraints (\ref{constr_fin1}), (\ref{constr_fin2_0}) and (\ref {add_constr}). It is clear that constraints (\ref{constr_fin2_0}) and (\ref {add_constr}) form a pair of second-class constraints in Dirac's terminology. Moreover, the following equalities hold: $$ \{\Phi_0, \Phi_k\} = 0\,, \qquad k = 1,2\,.$$ Therefore, we can exclude the coordinates $p_z$ and $q_z$ from the phase space $ {\mathcal H}_3$, replacing it with the phase space $ {\mathcal H}_2$ - the phase space of a free structureless particle on a plane. There are no additional constraints here because of the equality $ \{H, \Phi_0\} = 0\,.$ \item Hamiltonian \begin{equation} \label{H_ful} H=H_0+ \ell\Phi_0\,, \end{equation} where the function $H_0 $ is defined by the formula (\ref{energy_1}) with the replacement $\boldsymbol{p} \to \boldsymbol{p}_\perp$. The quantity ${\ell}$ is a Lagrange multiplier. \end{itemize} Let us pay attention to the following point. We have introduced the set of new variables ${\mathcal A}^{\,\prime}$, which is more extensive than the set of original variables. 
Constraints on the set ${\mathcal A}^{\,\prime}$ were postulated. These constraints lead to a certain arbitrariness in the dynamics. That is why the constructed dynamical system is not equivalent to the original one, but is more extensive. Indeed, let us define the physical (dimensional) time $t = t_0\tau$. The following Proposition is true: \begin{prop} The following Hamilton equations are valid: \begin{eqnarray} \frac{\partial \boldsymbol{q}}{\partial t } & = & \{H_0,{\boldsymbol{q}}\} = \frac{\boldsymbol{p}}{m_0} \,, \qquad \frac{\partial \boldsymbol{p}}{\partial t} = \{H_0, \boldsymbol{p}\} = 0\,,\\ \frac{\partial {\sf j}(\tau,\xi) }{\partial t} & = & \{H_0, {\sf j}(\tau,\xi)\} = \frac{i}{t_0} \sum_{|n| >1} {\sf j}_{\,n}n\sqrt{n^2 - 1}\, e^{\,i[\,n\xi + n\sqrt{n^2 - 1}\,\tau\,]} \,. \end{eqnarray} \end{prop} Because the Hamiltonian $H$ differs from $H_0$, the constructed system is equivalent to the original one only if the Lagrange multiplier ${\ell} = 0$. \section{Quantization } Numerous articles devoted to the problem of turbulence show that the understanding of this phenomenon is still not complete. This statement also fully applies to the turbulence of quantum liquids. Without setting out to review the literature on this issue, we note that a number of authors (see, for example, \cite{TsFuYu}) assume that the key to understanding this problem lies in the topological defects of such liquids - vortices. That is why the quantum description of vortices is an important task. Information about the energy spectrum of such defects for concrete problems makes it possible to investigate certain statistical characteristics of the system. In this paper, we aim to develop a new approach to the quantization of a single closed vortex with zero thickness. The author believes that in the future, the results obtained may provide new opportunities for explaining the behavior of quantum liquids. 
The constructed Hamiltonian structure opens up the possibility of quantizing the small perturbations of the vortex ring under study. First, we must define a Hilbert space $\boldsymbol{H}$ of the quantum states of our dynamical system. The structure of the phase space ${\mathcal H}$ leads to the following structure: \begin{equation} \label{space_quant} \boldsymbol{H} = \boldsymbol{H}_2 \otimes \boldsymbol{H}_F\,, \end{equation} where the symbol $\boldsymbol{H}_2$ denotes the Hilbert space of a free structureless particle on a plane (the space $L^2({\sf R}_2)$, for example) and the symbol $\boldsymbol{H}_F $ denotes the Fock space of an infinite number of harmonic oscillators. The creation and annihilation operators defined in the space $\boldsymbol{H}_F $ have the standard commutation relations $$ [\,\hat{a}_m, \hat{a}_n^+] = \delta_{mn}\,\hat{I}_F\,, \qquad \hat{a}_m|\,0\rangle = 0 \,, \qquad m,n = 1,2,\dots \,, \qquad |\,0\rangle \in \boldsymbol{H}_F \,,$$ where the operator $\hat{I}_F$ is the unit operator in the space $\boldsymbol{H}_F $. Let us quantize our theory. We must construct the map $A \to \hat{A}$, where $A$ denotes a classical variable and $\hat{A}$ denotes an operator in the space $\boldsymbol{H}$. Traditionally, we must demand $$ [\hat{A},\hat{B}] = -i\hbar \widehat{\{A,B\}}\,$$ if the quantities $A$, $B$, $\dots$ denote the fundamental variables of our theory. This equality can acquire some ''anomalous terms'' if the ''observables'' $A$, $B$ are functions of the fundamental variables. These terms depend on the rule of ordering of non-commuting operators. We will not discuss these issues here \cite{Berezin}. Let us consider the case when $\boldsymbol{H}_2 = L^2({\sf R}_2)$. This case corresponds to the perturbation of a vortex ring in unbounded space. 
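For a single oscillator mode, the Fock-space operators above admit a simple truncated matrix sketch (illustrative only; the cutoff $N$ is an arbitrary choice), in which the canonical commutator holds exactly away from the truncation edge:

```python
import numpy as np

# Matrix representations of a and a^+ on span{|0>, ..., |N-1>}:
# a|n> = sqrt(n)|n-1>, a^+|n> = sqrt(n+1)|n+1>.
N = 40
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
ad = a.conj().T

comm = a @ ad - ad @ a  # should equal the identity, except at the cutoff
err = np.max(np.abs(comm[:-1, :-1] - np.eye(N - 1)))
print(err)  # at the level of machine precision
```

The only deviation from $[\hat a, \hat a^+] = \hat I$ sits in the last diagonal entry, a standard artifact of truncating the infinite-dimensional Fock space.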
Our quantization postulate is as follows: $$ q_{x,y} \to q_{x,y}\otimes \,\hat{I}_F \,,\qquad p_{x,y} \to - i\hbar\frac{\partial}{\partial q_{x,y}} \otimes \,\hat{I}_F \,,\qquad {\sf j}_{\,-n} \to \sqrt{\frac{\hbar}{t_0{\mathcal E}_0}}\, (\hat{I}_2 \otimes\,\hat{a}_n)\,, $$ where $ n = 1,2,\dots\, $ and the operator $\hat{I}_2$ is the unit operator in the space $\boldsymbol{H}_2$. In what follows, we will not write the index $n = 1$ explicitly: $\hat{a}_{1} = \hat{a}$ and so on. This simplification will be justified later. Moreover, we will not write the constructions $(\dots \otimes \,\hat{I}_F)$ and $ (\hat{I}_2 \otimes\,\dots)$ explicitly, hoping that this will not lead to misunderstandings. As a next step, we should construct the physical subspace $\boldsymbol{H}_{phys} \subset \boldsymbol{H}$. In accordance with Dirac's prescription, the presence of constraint (\ref{constr_fin1}) leads to the equation for the vectors $ |\psi\rangle \in \boldsymbol{H}_{phys} $: $$ {\widehat\Phi_0}|\psi\rangle = 0 \,.$$ Here it is extremely important to pay attention to the following fact. In the classical theory, all forms of the first-class constraints lead to the same theory: in our case, we can assume $\Phi_0 = 0$ or $\Phi_0^{\,2} = 0$ and so on. This is not at all the case in the quantum version of the model. Different forms of first-class constraints correspond to different equations for the ''physical vectors'' in the quantum theory\footnote{Apparently, Dirac's words are still relevant: {\it''\dots methods of quantization are all of the nature of practical rules, whose application depends on consideration of simplicity''}}. For instance, it is clear that the solutions of the equation ${\widehat\Phi_0}|\psi\rangle = 0$ differ from the solutions of the equation ${\widehat\Phi_0^{\,2}}|\psi\rangle = 0$. Consequently, we need to supplement the quantization rules with a specific choice of the form of the constraint in the classical theory. 
Let us investigate this problem in our model in more detail. First of all, we consider the classical constraint in the form (\ref{constr_prop}). Taking into account formula (\ref{f_perp}), we seek vectors $|\psi_{phys}\rangle$ such that \begin{equation} \label{constr_q_pr1} \exists \lambda \in {\sf R}: \qquad (\hat{\sf p} - 2\pi\lambda p_{\,0}\sqrt{\frac{\hbar}{t_0{\mathcal E}_0}}\,\hat{a}) |\psi_{phys}\rangle =0 \,. \end{equation} Let the complex number ${\sf p} = p_x + i p_y$ be the eigenvalue corresponding to the (generalized) eigenvector $|{\sf p}\rangle \in \boldsymbol{H}_2^{\,\prime}$ of the operator $\hat{{\sf p}}$. The notation $\boldsymbol{H}_2^{\,\prime}$ means that the procedure of rigging the space $\boldsymbol{H}_2$ must be carried out to treat the generalized eigenvectors rigorously \cite{BerShu}. As an ansatz for the solutions of equation (\ref{constr_q_pr1}), we use the following form for the ''physical vectors'' $|\psi_{phys}\rangle$: \begin{equation} \label{phys_vect1} |\psi_{phys}({\sf p})\rangle = |{\sf p}\rangle |\psi_p\rangle\,,\qquad |{\sf p}\rangle\in \boldsymbol{H}_2^{\,\prime}\,,\quad |\psi_p\rangle\in\boldsymbol{H}_F\,. \end{equation} Therefore, the equation for the vector $|\psi_p\rangle$ takes the following form: \begin{equation} \label{constr_q_pr2} \exists \lambda \in {\sf R}: \qquad ( {\sf p} - 2\pi\lambda p_{\,0}\sqrt{\frac{\hbar}{t_0{\mathcal E}_0}}\,\hat{a}) |\psi_{p}\rangle =0 \,, \quad |\psi_p\rangle\in\boldsymbol{H}_F\,, \quad {\sf p} \in {\sf C} \, . \end{equation} In other words, the vectors $|\psi_{p}\rangle$ are the eigenvectors of the spectral problem (\ref{constr_q_pr2}). 
\begin{prop} Let the vectors $| {\sf p}/\lambda \rangle \in \boldsymbol{H}_F$ form the system of coherent states\footnote{See, for example, \cite{Perelomov}} for the Heisenberg--Weyl group with algebra $$ [\,\hat{a}, \hat{a}^+] = \hat{I}\,, \qquad [\,\hat{a}^+, \hat{I}] = [\,\hat{a}, \hat{I}] = 0\,$$ as follows: \begin{equation} \label{coherent_1} | {\sf p} /\lambda \rangle = \exp\Bigl[ ({\sf p}\,\hat{a}^+ - {\overline{{\sf p}}}\,\hat{a}) / 2\pi\lambda \sqrt{\frac{\hbar p_{\,0}}{R_{\,0}}}\, \Bigr]\,|0\rangle\,, \qquad \lambda \in {\sf R}\,, \quad {\sf p} \in {\sf C}\,. \end{equation} Then the equation (\ref{constr_q_pr2}) has the solutions \begin{equation} \label{eig_ham} |\psi^\lambda\rangle = |n_1,\dots,n_k; {\sf p}/\lambda\rangle \equiv \hat{a}^+_{n_1}\hat{a}^+_{n_2}\dots\hat{a}^+_{n_k}| {\sf p}/\lambda \rangle \,, \quad \qquad n_j > 1 \,.\nonumber \end{equation} \end{prop} This statement can be verified directly, because the coherent states $| {\sf p}/ \lambda \rangle$ are eigenvectors of the operator $\hat{a}$ with eigenvalue ${\sf p}/(2\pi\lambda \sqrt{{\hbar p_{\,0}}/{R_{\,0}}}) $. Recall that the parameter $\lambda$ has the meaning of a ''dimensionless circulation'' in our model. 
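The eigenvector property underlying this Proposition can be checked numerically in a truncated Fock space. In the sketch below, $\alpha$ is a made-up stand-in for the eigenvalue ${\sf p}/(2\pi\lambda \sqrt{{\hbar p_{\,0}}/{R_{\,0}}})$, and the coherent-state coefficients are built from the standard expansion $|\alpha\rangle = e^{-|\alpha|^2/2}\sum_n \alpha^n/\sqrt{n!}\,|n\rangle$:

```python
import numpy as np

# Truncated annihilation operator: a|n> = sqrt(n)|n-1>.
N = 60
a = np.diag(np.sqrt(np.arange(1, N)), k=1)

# Coherent-state coefficients via the stable recursion c_k = c_{k-1}*alpha/sqrt(k).
alpha = 0.7 + 0.3j  # hypothetical eigenvalue
coh = np.zeros(N, dtype=complex)
coh[0] = np.exp(-abs(alpha)**2 / 2)
for k in range(1, N):
    coh[k] = coh[k - 1] * alpha / np.sqrt(k)

resid = np.linalg.norm(a @ coh - alpha * coh)  # eigenvector residual
norm_dev = abs(np.linalg.norm(coh) - 1.0)       # truncated norm is ~1
print(resid, norm_dev)
```

The residual is limited only by the truncation of the tail $\alpha^n/\sqrt{n!}$, which is negligible for $|\alpha| < 1$ at this cutoff.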
Let us consider the vectors $|\psi_{[n]}^\lambda({\sf p})\rangle \in \boldsymbol{H}^{\,\prime}$: \begin{equation} \label{ent_2} |\psi_{[n]}^\lambda({\sf p})\rangle = C_{[n]} |{\sf p}\rangle |n_1,\dots,n_k; {\sf p}/\lambda\rangle \,, \qquad \lambda \in {\sf R} \,,\quad [n] = n_1,\dots,n_k \,, \end{equation} where the numbers $C_{[n]} $ are normalization coefficients, so that the normalization conditions are fulfilled: $$ \langle \psi_{[n]}^\lambda({\sf p}) |\psi_{[m]}^\lambda({\sf p^{\,\prime}})\rangle = \delta_{k l} \delta_{n_1 m_1}\dots \delta_{n_k m_k} \delta(p_x - p_x^{\,\prime})\delta(p_y - p_y^{\,\prime})\,.$$ Here we should note that the entire set of vectors (\ref{ent_2}) cannot form a physical subspace $\boldsymbol{H}_{phys} \subset \boldsymbol{H}$, because any superposition $$ c_1 |\psi_{[n]}^{\lambda_1}({\sf p}_1)\rangle + c_2 |\psi_{[n]}^{\lambda_2}({\sf p}_2)\rangle \,,\qquad \lambda_1 \not= \lambda_2\,,\quad {\sf p}_1 = {\sf p}_2 $$ is not a solution of the spectral problem (\ref{constr_q_pr2}). Moreover, we cannot postulate ''superselection rules'' here. Indeed, the coherent states $| {\sf p}/ \lambda \rangle$ are not orthogonal for different values of the parameter ${\sf p}/\lambda$. Therefore, the vectors $|\psi_{[n]}^\lambda({\sf p})\rangle$ are not orthogonal for the same value ${\sf p}$ and different values $\lambda$. We will proceed as follows. Let the vector $|\psi_{[n]}^\star({\sf p})\rangle \in \boldsymbol{H}^{\,\prime} $ be the vector (\ref{ent_2}) with the value $\lambda = \lambda_{\,0} =1/\pi$: \begin{equation} \label{ent_star} |\psi_{[n]}^\star ({\sf p})\rangle = C_{[n]} |{\sf p}\rangle |n_1,\dots,n_k; {\sf p}/\lambda_{\,0}\rangle\,, \qquad k = 0,1,2,\dots \,. \end{equation} We call the state (\ref{ent_star}) corresponding to $k = 0$ the ''ground state'' of our theory. Recall that the value $\lambda = \lambda_{\,0} $ corresponds to the circulation $\Gamma_0$ of the unperturbed vortex ring: $\Gamma_0 = {R_0^{\,2}}/{\pi t_0 }$. 
We declare that the physical subspace $\boldsymbol{H}_{phys}$ is spanned by the following vectors: \begin{equation} \label{phys_vect} |\psi_{phys}\rangle = \sum_{n_1,\dots,n_k \atop n_j>1} \int dp_x d p_y\,\varphi_{n_1,\dots,n_k} (p_x,p_y) |\psi_{[n]}^\star ({\sf p})\rangle \,, \end{equation} where the wave functions $\varphi_{n_1,\dots,n_k} (p_x,p_y)$ are normalized. Does the constructed space $\boldsymbol{H}_{phys}$ describe only states with circulation $\Gamma_0$? In our opinion, the properties of coherent states allow us to assume that this is not the case. Indeed, every system of coherent states $|\alpha\rangle$ is an overcomplete system. Because $\langle\alpha_1 |\alpha_2\rangle \not= 0$ even for different complex numbers $\alpha_1$ and $\alpha_2$, we can conclude that any specific coherent state $|\alpha_0\rangle$ contains ''some part of'' all other coherent states $|\alpha\rangle$, $ \alpha \not= \alpha_0$. Returning to our notation, we have the following expression for the amplitude for any $\lambda \in {\sf R}$: \begin{equation} \label{amplitude} \langle \psi_{[n]}^\lambda({\sf p}) |\psi_{phys}\rangle = \varphi_{[n]} (p_x,p_y)\exp\left[- \frac{|{\sf p}|^2 R_{\,0}}{8\, \hbar\, p_{\,0}}\left(\frac{\lambda_{\,0}}{\lambda} - 1\right)^{\!2} \right]\,. \end{equation} Although $|\psi_{[n]}^\lambda({\sf p})\rangle \not\in \boldsymbol{H}_{phys} $ in the case $\lambda \not= 1/\pi$, we suppose that the amplitude (\ref{amplitude}) defines the probability density of finding our dynamical system with circulation $\Gamma = {\lambda R_0^2}/{t_0}$, transverse momentum $ {\sf p} = p_x + i p_y$ and quantum numbers $[n] = \{n_1,\dots,n_k\}$. The numbers $[n] $ define the energy of our system. 
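The non-orthogonality invoked here rests on the standard coherent-state overlap $\langle\beta|\alpha\rangle = \exp(-|\alpha|^2/2 - |\beta|^2/2 + \bar\beta\alpha)$, so that $|\langle\beta|\alpha\rangle| = \exp(-|\alpha-\beta|^2/2) > 0$; it is this Gaussian factor that produces the exponential in (\ref{amplitude}). A numerical sketch (with made-up values of $\alpha$ and $\beta$) confirms the overlap formula:

```python
import numpy as np

def coherent(alpha, N=80):
    """Coherent-state coefficients in a truncated Fock basis."""
    c = np.zeros(N, dtype=complex)
    c[0] = np.exp(-abs(alpha)**2 / 2)
    for k in range(1, N):
        c[k] = c[k - 1] * alpha / np.sqrt(k)
    return c

alpha, beta = 0.9 + 0.2j, -0.4 + 0.6j  # illustrative parameters
overlap = np.vdot(coherent(beta), coherent(alpha))  # <beta|alpha>
closed_form = np.exp(-abs(alpha)**2 / 2 - abs(beta)**2 / 2 + np.conj(beta) * alpha)
err_mod = abs(abs(overlap) - np.exp(-abs(alpha - beta)**2 / 2))
print(abs(overlap - closed_form), err_mod)
```

Since the overlap never vanishes, no two values of $\lambda$ give orthogonal vectors at the same ${\sf p}$, which is exactly why superselection rules cannot be imposed.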
Indeed, in accordance with our classical formulas for the Hamilton function, the quantum expression for the Hamiltonian $\hat{H}$ takes the form \begin{equation} \label{H_quant} \hat{H} = \frac{\hat{{\sf p}}^+\hat{{\sf p}}}{2m_0} + \frac{\hbar}{t_0} \sum_{n>1} \hat{a}^+_{n} \hat{a}_{n} n\sqrt{n^2 - 1} \,.\nonumber \end{equation} The following statement can be proved by direct verification: \begin{prop} The vectors $|\psi_{[n]}^\star ({\sf p})\rangle = C_{[n]} |{\sf p}\rangle|n_1,\dots,n_k; {\sf p}/\lambda_{\,0}\rangle $ are the eigenvectors of the operator $\hat{H}$ with eigenvalues \begin{equation} \label{spectr_ham} {\mathcal E} = \frac{|{\sf p}|^2}{2m_0} + \frac{\hbar}{t_0} \sum_{n>1} \sum_{j=1}^k \delta_{n,n_j} n\sqrt{n^2 - 1} \,. \end{equation} \end{prop} As regards the constraint in the form (\ref{constr_fin1}), the following equality holds: $$ \langle \psi_{phys} | {\widehat\Phi_0} |\psi_{phys}\rangle = 0\,.$$ As a result, the constructed quantum description of the system allows for a visual interpretation of the vortex as a structured particle. The connection between the external degrees of freedom (the space $\boldsymbol{H}_2$) and the internal degrees of freedom (the space $\boldsymbol{H}_F$) is nontrivial due to the constraint (\ref{constr_q_pr1}). As a consequence, the quantum states of such a system are entangled states. Formula (\ref{phys_vect}) demonstrates that the vectors \begin{equation} |\psi_{[0]}^\star ({\sf p})\rangle = |{\sf p}\rangle |{\sf p}/\lambda_{\,0}\rangle\, \nonumber \end{equation} form the ''set of ground states'' of our system. These vectors correspond to a vortex ring with an exact value of the transverse momentum ${\sf p}$, the most probable value of the circulation $\Gamma_0 = {R_0^{\,2}}/{\pi t_0 }$, and the minimal energy ${|{\sf p}|^2}/{2m_0}$. 
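The eigenvalue formula (\ref{spectr_ham}) can be sketched as a small helper function (unit constants and the occupation list are made-up illustrations; each excited quantum $n_j > 1$ contributes $(\hbar/t_0)\, n_j\sqrt{n_j^2 - 1}$ on top of the kinetic term):

```python
import numpy as np

def energy(p_abs2, occupied, m0=1.0, hbar=1.0, t0=1.0):
    """Excitation energy |p|^2/(2 m0) + (hbar/t0) * sum_j n_j*sqrt(n_j^2 - 1)."""
    internal = sum(n * np.sqrt(n**2 - 1.0) for n in occupied)
    return p_abs2 / (2 * m0) + (hbar / t0) * internal

# Two n=2 quanta and one n=3 quantum on top of |p|^2 = 0.5:
E = energy(p_abs2=0.5, occupied=[2, 2, 3])
expected = 0.25 + 2 * 2 * np.sqrt(3.0) + 3 * np.sqrt(8.0)
print(abs(E - expected))
```

Note that the modes $n = 0, \pm 1$ (translations and the circulation degree of freedom) do not contribute to the internal sum, consistent with the restriction $n > 1$ in (\ref{spectr_ham}).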
\section{Concluding remarks} This paper has constructed a model describing the classical and quantum dynamics of small perturbations of a vortex ring that evolves according to the LIE equation. The theory has three dimensionful constants: $R_0$ (the radius of the unperturbed vortex ring), $p_0$ (the momentum of the unperturbed vortex ring), and $m_0$ (the central charge for the central extension of the Galilei group). We quantized our model as an abstract dynamical system, without any connection to the quantum properties of the surrounding liquid. Of course, taking such properties into account is an important direction that is represented in the literature (see \cite{Stamp}, for example). In the case under consideration, when $\boldsymbol{H}_2 = L^2({\sf R}_2)$, the eigenvalues of the momentum operator belong to a continuous set. The same can be said about the energy ${\mathcal E}$ (see (\ref{spectr_ham})) and the circulation $\Gamma$ (see (\ref{amplitude})). However, there is no contradiction with experiment: for example, all the experiments in which quantization of the circulation was observed \cite{Donn} correspond to real situations in which certain boundary conditions are present. To take such boundary conditions into account in our approach, we should consider the case $\boldsymbol{H}_2 = L^2({\sf D})$, where the domain ${\sf D}$ is a certain compact subset of the plane ${\sf R}_2$. We suppose that the corresponding theory leads to discrete values of the energy ${\mathcal E}$ and the circulation $\Gamma$. The author hopes to devote the next article to this issue.
\subsection{Risk-Averse Q-Learning (RAQL) and proof of convergence} \begin{algorithm}[ht] \scriptsize \caption{Risk-Averse Q-Learning (RAQL)~\cite{shen2014risk}} \label{alg:Risk_Averse_QLearning} \begin{algorithmic}[1] \STATE For $\forall (s,a)$, initialize $Q(s,a) = 0$; $N(s,a) = 0$. \FOR{$t=1$ to $T$} \STATE At state $s_t$, choose an action according to the $\epsilon$-greedy strategy. \STATE Observe $s_t, a_t, r_t, s_{t+1}$. \STATE $N(s_t, a_t) = N(s_t, a_t) +1$ \STATE Set learning rate $\alpha_t = \frac{1}{N(s_t, a_t)}$ \STATE Update $Q$: \begin{align}\label{eq:Utility_Update_Rule} &Q_{t+1}(s_t, a_t) = Q_{t}(s_t, a_t) + \alpha_t (s_t, a_t)\cdot \left[u\left(r_t + \gamma\cdot \underset{a}{\max}Q_t(s_{t+1},a) - Q_t (s_t, a_t)\right) -x_0\right] \end{align} where $u$ is a utility function; here we use $u(x) = -e^{\beta x}$ with $\beta<0$ and $x_0 = -1$. \ENDFOR \STATE \textbf{Return} $Q$. \end{algorithmic} \end{algorithm} \subsubsection{Proof of \cref{thm:RARL_Converge}}\label{sec:proof_RARL} This result was originally proved by \cite{shen2014risk}; we describe the proof here in detail because it will be useful for the later proofs for our proposed algorithms.
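To make the update in \cref{alg:Risk_Averse_QLearning} concrete, here is a minimal sketch on a toy deterministic two-state MDP. The MDP itself, the step-size schedule $\alpha_k = k^{-0.7}$, and the exhaustive sweeps standing in for $\epsilon$-greedy exploration are all illustrative assumptions, not part of the algorithm as stated. Since $u(0) = x_0$ for $u(x) = -e^{\beta x}$ with $x_0 = -1$, on a deterministic MDP the fixed point coincides with the ordinary Bellman solution, which gives a convenient sanity check.

```python
import math

# Toy deterministic MDP (illustrative): T[s][a] -> next state, R[s][a] -> reward.
T = {0: {0: 0, 1: 1}, 1: {0: 0, 1: 1}}
R = {0: {0: 0.0, 1: 1.0}, 1: {0: 0.0, 1: 2.0}}
gamma, beta, x0 = 0.5, -1.0, -1.0

def u(x):
    # Risk-averse exponential utility with u(0) = x0.
    return -math.exp(beta * x)

Q = {(s, a): 0.0 for s in T for a in T[s]}
N = {sa: 0 for sa in Q}

for sweep in range(3000):                  # exhaustive sweeps replace epsilon-greedy
    for (s, a) in Q:
        N[(s, a)] += 1
        alpha = N[(s, a)] ** -0.7          # assumed Robbins-Monro step-size schedule
        s_next = T[s][a]
        td = R[s][a] + gamma * max(Q[(s_next, b)] for b in T[s_next]) - Q[(s, a)]
        Q[(s, a)] += alpha * (u(td) - x0)  # the RAQL update

# Risk-neutral Bellman solution of this MDP, for comparison.
Qstar = {(0, 0): 1.5, (0, 1): 3.0, (1, 0): 1.5, (1, 1): 4.0}
```

On this deterministic example the iterates approach `Qstar`, illustrating that the utility transform leaves the Bellman fixed point unchanged when the temporal-difference term is deterministic.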
First, we show the following lemma: \begin{lemma}\label{lem:Convergence_of_RiskAverse_UpdateRule} For the iterative procedure \begin{align} Q_{t+1}(s_t, a_t) = Q_t(s_t,a_t)+\alpha_t(s_t,a_t)\left[u\left(r_t+\gamma\cdot\underset{a}{\max}Q_t(s_{t+1},a)-Q_t(s_t,a_t)\right)-x_0\right] \end{align} where the $\alpha_t\geq 0$ satisfy, for any $(s,a)$, $\sum_{t=0}^{\infty}\alpha_t(s,a)=\infty$ and $\sum_{t=0}^{\infty}\alpha^2_t(s,a)<\infty$, we have $Q_t\rightarrow Q^*$, where $Q^*$ is the solution of the Bellman equation \begin{align} (H^A Q^*)(s,a) = \alpha\cdot\expectation_{s,a}\left[\tilde{u}\left(r_t+\gamma\cdot\underset{a}{\max}Q^*(s_{t+1},a)-Q^*(s,a)\right)\right]+Q^*(s,a) = Q^*(s,a)\qquad \forall (s,a) \end{align} \end{lemma} Given \cref{lem:Convergence_of_RiskAverse_UpdateRule}, it is shown in \cite{shen2014risk} that the corresponding policy optimizes the objective function \cref{eq:Risk_Averse_Objective}. \subsubsection{Proof of \cref{lem:Convergence_of_RiskAverse_UpdateRule}}\label{sec:proof_of_update_operator_converge} Before proving the convergence, we consider a more general update rule \begin{align}\label{eq:update_rule_1} q_{t+1}(i) = (1-\alpha_t (i))q_t(i) + \alpha_t(i)\left[(Hq_t)(i) + w_t(i)\right] \end{align} where $i$ is the independent variable (e.g., in single-agent Q-learning, it is the state--action pair $(s,a)$), $q_t\in{\mathbb{R}}^d$, $H:{\mathbb{R}}^d \rightarrow {\mathbb{R}}^d$ is an operator, $w_t$ denotes a random noise term, and $\alpha_t$ is the learning rate, with the understanding that $\alpha_t(i) = 0$ if $q(i)$ is not updated at time $t$.
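The generic iteration \cref{eq:update_rule_1} can be exercised on a scalar example (all numbers below are illustrative): take the sup-norm contraction $(Hq) = 0.5\,q + 1$ with fixed point $q^* = 2$, bounded zero-mean noise, and $\alpha_t = 1/t$, which satisfies the step-size conditions.

```python
import random

random.seed(0)
H = lambda q: 0.5 * q + 1.0          # contraction with fixed point q* = 2
q = 0.0
for t in range(1, 200_001):
    alpha = 1.0 / t                  # sum(alpha) = inf, sum(alpha^2) < inf
    w = random.uniform(-0.1, 0.1)    # zero-mean noise with bounded variance
    q = (1 - alpha) * q + alpha * (H(q) + w)
```

After the loop, `q` sits close to the fixed point $q^* = 2$, as the proposition below guarantees in general.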
Denote by ${\mathcal{F}}_t$ the history of the algorithm up to time $t$, \begin{align} {\mathcal{F}}_t = \{q_0(i),...,q_t(i),w_0(i),...,w_{t}(i),\alpha_0(i),...,\alpha_{t}(i)\} \end{align} Recall the following essential proposition: \begin{proposition}\cite{Bertsekas2009}\label{prop:convergence_of_contraction} Let $q_t$ be the sequence generated by the iteration \cref{eq:update_rule_1}, and assume the following hold: \begin{enumerate}[label=(\alph*)] \item The learning rates $\alpha_t(i)$ satisfy \begin{align} \alpha_t (i) \geq 0;\qquad \sum_{t=0}^{\infty}\alpha_t(i) = \infty; \qquad \sum_{t=0}^{\infty}\alpha_t^2(i) < \infty;\quad \forall i \end{align} \item The noise terms $w_t(i)$ satisfy \begin{enumerate}[label=(\roman*)] \item $\expectation[w_t(i)|{\mathcal{F}}_t] = 0$ for all $i$ and $t$; \item there exist constants $A$ and $B$ such that $\expectation[w_t^2(i)|{\mathcal{F}}_t]\leq A+B\left\|q_t\right\|^2$ for some norm $\left\|\cdot\right\|$ on ${\mathbb{R}}^d$. \end{enumerate} \item The mapping $H$ is a contraction under the sup-norm. \end{enumerate} Then $q_t$ converges to the unique solution $q^*$ of the equation $Hq^* = q^*$ with probability 1. \end{proposition} In order to apply \cref{prop:convergence_of_contraction}, we reformulate the update rule \cref{eq:Utility_Update_Rule} by letting \begin{align} q_{t+1}(s,a) = \left(1-\frac{\alpha_t(s,a)}{\alpha}\right)q_t(s,a)+\frac{\alpha_t(s,a)}{\alpha}[\alpha\cdot u(d_t) -\alpha\cdot x_0 + q_t(s,a)] \end{align} where $\tilde{u}(x) := u(x)-x_0$ and $d_t := r_t + \gamma\cdot\underset{a}{\max}q_t(s_{t+1},a) - q_t(s,a)$.
And we set \begin{align}\label{eq:def_of_contraction} (Hq_t)(s,a) &= \alpha\cdot\expectation_{s,a}\left[\tilde{u}\left(r_t+\gamma\cdot\underset{a}{\max}q_t(s_{t+1},a)-q_t(s,a)\right)\right]+q_t(s,a)\\ w_t(s,a) &= \alpha\cdot\tilde{u}(d_t)-\alpha\cdot\expectation_{s,a}\left[\tilde{u}(r_t+\gamma\cdot\underset{a}{\max}q_t(s^{\prime},a)-q_t(s,a))\right] \end{align} where $s^{\prime}$ is sampled from ${\mathcal{T}} [\cdot | s,a]$. More explicitly, $Hq$ is defined as \begin{align} (Hq)(s,a) = \alpha\cdot\sum_{s^{\prime}}{\mathcal{T}}[s^{\prime}|s,a]\cdot\tilde{u}\left(r(s,a)+\gamma\cdot\underset{a^{\prime}}{\max}\;q(s^{\prime},a^{\prime})-q(s,a)\right)+q(s,a) \end{align} Next, we show that $H$ is a contraction under the sup-norm. Note that we assume the utility function satisfies the following: \begin{assumption} \label{ass:Utility_Func_Assumption} \begin{enumerate} [label=(\roman*)] \item The utility function $u$ is strictly increasing and there exists some $y_0\in{\mathbb{R}}$ such that $u(y_0) = x_0$. \item There exist positive constants $\epsilon, L$ such that $0<\epsilon\leq \frac{u(x)-u(y)}{x-y}\leq L$ for all $x\ne y\in{\mathbb{R}}$. \end{enumerate} \end{assumption} Note that \cref{ass:Utility_Func_Assumption} seems to exclude several important types of utility functions, such as the exponential function $u(x) = \exp(c\cdot x)$, since it does not satisfy the global Lipschitz condition. This can be handled by a truncation when $x$ is very large and by an approximation when $x$ is very close to 0. For more details see \citeauthor{shen2014risk}~(2014). We also assume that the immediate reward $r_t$ satisfies a sub-Gaussian tail assumption. This allows the reward to be unbounded, which is closer to practical settings with tail events, for example, in financial markets.
\begin{assumption} $r_t$ is uniformly sub-Gaussian over $t$ with variance proxy $\sigma^2$, i.e., \begin{align} \expectation[r_t] &= 0\\ \expectation[\exp(c\cdot r_t)]&\le \exp\left(\frac{\sigma^2 c^2}{2}\right)\qquad \forall c\in {\mathbb{R}} \end{align} \label{ass:immediate_reward_bound} \end{assumption} The above uniform sub-Gaussian assumption is equivalent to the following form, commonly seen in statistics and machine learning: there exist constants $C > 0$ and $\alpha > 0$ such that for every $K > 0$ and every $r_t$, we have \begin{align} \mathbb{P} (|r_t| > K) \leq C e^{-\alpha K^2} \end{align} \begin{proposition}\label{prop:H_is_contraction} Suppose that \cref{ass:Utility_Func_Assumption} and \cref{ass:immediate_reward_bound} hold and $0<\alpha<\min (L^{-1},1)$. Then there exists a real number $\bar{\alpha}\in[0,1)$ such that for all $q,q^{\prime}\in{\mathbb{R}}^d$, $\left\|Hq-Hq^{\prime}\right\|_{\infty}\leq \bar{\alpha}\left\|q-q^{\prime}\right\|_{\infty}$. \end{proposition} \begin{proof} Define $v(s):=\underset{a}{\max}~q(s,a)$ and $v^{\prime}(s):= \underset{a}{\max}~q^{\prime}(s,a)$. Thus, \begin{align} |v(s)-v^{\prime}(s)|\leq \underset{s,a}{\max}|q(s,a)-q^{\prime}(s,a)| = \left\|q-q^{\prime}\right\|_{\infty} \end{align} By \cref{ass:Utility_Func_Assumption} and the monotonicity of $\tilde{u}$, there exists a $\xi_{(x,y)}\in[\epsilon, L]$ such that $\tilde{u}(x)-\tilde{u}(y) = \xi_{(x,y)}\cdot(x-y)$.
Then we can obtain \begin{align} &(Hq)(s,a) - (Hq^{\prime})(s,a) \\ &= \sum_{s^{\prime}}{\mathcal{T}}[s^{\prime}|s,a]\cdot\Big\{\alpha\xi_{(s,a,s^{\prime},q,q^{\prime})}\cdot\left[\gamma v(s^{\prime})-\gamma v^{\prime}(s^{\prime}) - q(s,a)+q^{\prime}(s,a)\right]+(q(s,a)-q^{\prime}(s,a))\Big\}\\ &\leq \left(1-\alpha(1-\gamma)\sum_{s^{\prime}}{\mathcal{T}}[s^{\prime}|s,a]\cdot\xi_{(s,a,s^{\prime},q,q^{\prime})}\right)\left\|q-q^{\prime}\right\|_{\infty}\\ &\leq (1-\alpha(1-\gamma)\epsilon)\left\|q-q^{\prime}\right\|_{\infty} \end{align} Hence, $\bar{\alpha} = 1-\alpha(1-\gamma)\epsilon$ is the required constant. \end{proof} We have now shown that requirements (a) and (c) of \cref{prop:convergence_of_contraction} hold; it remains to check (b). By \cref{eq:def_of_contraction}, $\expectation[w_t(s,a)|{\mathcal{F}}_t] = 0$. Next, we prove (b)(ii). \begin{align} \expectation[w_t^2(s,a)|{\mathcal{F}}_t] &= \alpha^2\expectation[(\tilde{u}(d_t))^2|{\mathcal{F}}_t]-\alpha^2(\expectation[\tilde{u}(d_t)|{\mathcal{F}}_t])^2\\ &\leq \alpha^2\expectation[(\tilde{u}(d_t))^2|{\mathcal{F}}_t] \end{align} By \cref{ass:immediate_reward_bound}, $\expectation |r_t| < (2\sigma^2)^{\frac{1}{2}}\Gamma(\frac{1}{2})$, where $\Gamma(\cdot)$ is the Gamma function (see \cite{SubGaussian80} for details). We denote the upper bound for $\expectation[|r_t|]$ by $R_1$.
Then $\expectation[|d_t|]\leq R_1+2\left\|q_t\right\|_{\infty}$; by \cref{ass:Utility_Func_Assumption}, this implies that \begin{align} \expectation\left[|\tilde{u}(d_t)-\tilde{u}(0)|\right] \leq \expectation\left[L\cdot |d_t|\right]\leq L(R_1+2\left\|q_t\right\|_{\infty}) \end{align} Hence, by the triangle inequality, \begin{align} \expectation[|\tilde{u}(d_t)|]\leq |\tilde{u}(0)|+LR_1+2L\left\|q_t\right\|_{\infty} \end{align} Since \begin{align} (a+b)^2 \leq 2 a^2 + 2 b^2\qquad \forall a,b\in{\mathbb{R}} \end{align} we have \begin{align} (|\tilde{u}(0)|+LR_1 + 2L\left\|q_t\right\|_{\infty})^2\leq 2(|\tilde{u}(0)|+LR_1)^2 + 8L^2\left\|q_t\right\|_{\infty}^2 \end{align} Moreover, \begin{align} \expectation\left[\left(\tilde{u}(d_t)-\tilde{u}(0)\right)^2 | {\mathcal{F}}_t\right] &\le\expectation\left[L\cdot d_t^2\right]\\ &= \expectation\left[L\cdot\left(r_t + \gamma\cdot\underset{a}{\max}q_t(s^{\prime},a)-q_t(s,a)\right)^2\right]\\ &= \expectation\left[L\cdot\left(r_t^2 + 2r_t\cdot(\gamma\cdot\underset{a}{\max}q_t(s^{\prime},a)-q_t(s,a))+(\gamma\cdot\underset{a}{\max}q_t(s^{\prime},a)-q_t(s,a))^2\right)\right]\\ &\leq LR_2 + 2LR_1(1-\gamma)\cdot\left\|q_t\right\|_{\infty} + L(1-\gamma)^2\cdot\left\|q_t\right\|_{\infty}^2 \end{align} where $R_2$ is the upper bound on $\expectation[r_t^2]$ implied by \cref{ass:immediate_reward_bound} ($\expectation[r_t^2]\leq 4\sigma^2\cdot \Gamma(1)$ \cite{SubGaussian80}).
Note that here $\tilde{u}(0)=0$; hence we have \begin{align} \alpha^2\expectation[(\tilde{u}(d_t) )^2|{\mathcal{F}}_t]\leq \alpha^2\cdot\left(LR_2 + 2LR_1(1-\gamma)\cdot\left\|q_t\right\|_{\infty} + L(1-\gamma)^2\cdot\left\|q_t\right\|_{\infty}^2\right) \end{align} Hence, \begin{align} \expectation[w_t^2(s,a)|{\mathcal{F}}_t]\leq 2\alpha^2\cdot\left(LR_2 + 2LR_1(1-\gamma)\cdot\left\|q_t\right\|_{\infty} + L(1-\gamma)^2\cdot\left\|q_t\right\|_{\infty}^2\right) \end{align} If $\left\|q_t\right\|_{\infty}\leq 1$, then \begin{align} \expectation[w_t^2(s,a)|{\mathcal{F}}_t]\leq 2\alpha^2\cdot\left(LR_2 + 2LR_1(1-\gamma) + L(1-\gamma)^2\cdot\left\|q_t\right\|_{\infty}^2\right) \end{align} and if $\left\|q_t\right\|_{\infty}> 1$, then \begin{align} \expectation[w_t^2(s,a)|{\mathcal{F}}_t]\leq 2\alpha^2\cdot\left(LR_2 +(2LR_1(1-\gamma)+L(1-\gamma)^2)\cdot\left\|q_t\right\|_{\infty}^2\right) \end{align} We have thus shown that $q_t$ satisfies all of the requirements of \cref{prop:convergence_of_contraction}; hence $q_t\rightarrow q^*$ with probability 1. \iffalse\textbf{Proof of Base Case} : For the given initialization $\bar{Q}_0 = 0$, note that we have $\hat{{\mathcal{T}}}_k(\bar{Q}_{0}) = r_u$ and $\tilde{{\mathcal{T}}}_N(\bar{Q}_{0}) = r_u$. Consequently, we have $\hat{{\mathcal{T}}}_k(\bar{Q}_{0}) - \tilde{{\mathcal{T}}}_N(\bar{Q}_{0}) = 0$, so that the variance reduced updates \cref{eq:Update_Rule} reduce to the case of \cref{eq:Utility_Update_Rule} with step size $\alpha_k=\frac{1}{1+(1-\gamma)k}$.
\fi \iffalse\begin{align} Q_{k+1}\leftarrow Q_k + \lambda_k\cdot\tilde{u}\left(\hat{{\mathcal{T}}}_k(Q_k) - Q_k\right) + \lambda_k\cdot\tilde{u}\left(\hat{{\mathcal{T}}}_k(\bar{Q}_{m-1})-\tilde{{\mathcal{T}}}_N(\bar{Q}_{m-1})\right) \end{align} where $\tilde{u}(x) = u(x) - x_0$ \fi \subsection{Nash-Q Learning Algorithm} \label{sec:Multi_Agent_QLearning} This section describes the Nash-Q Learning Algorithm~\cite{MultiAgentQLearning98} and its convergence guarantees; we restate them here since our \cref{alg:MultiAgent_QLearning_RiskAverse} (RAM-Q\xspace) is designed based on Nash-Q. Note also that \cref{ass:Bimatrix_Nash_Assumption} will be used in RAM-Q\xspace as well. \begin{algorithm}[ht] \caption{Nash Q-Learning for Agent $A$~\cite{MultiAgentQLearning98}} \label{alg:MultiAgent_QLearning} \scriptsize \begin{algorithmic}[1] \STATE For $\forall (s,a_A,a_B)$, initialize $Q^1_A(s,a_A,a_B) = 0$; $Q^2_A(s,a_A,a_B) = 0$; $N_A(s,a_A,a_B) = 0$. \FOR{$t = 1$ to $T$} \STATE At state $s_t$, compute $\pi^1_A(s_t)$, which is a mixed-strategy Nash equilibrium solution of the bimatrix game $(Q^1_A(s_t), Q^2_A(s_t))$. \STATE Choose action $a_t^A$ based on $\pi^1_A(s_t)$ according to the $\epsilon$-greedy strategy. \STATE Observe $r_t^A, r_t^B, a_t^B$ and $s_{t+1}$. \STATE At state $s_{t+1}$, compute $\pi^1_A(s_{t+1})$, $\pi^2_A(s_{t+1})$, which are mixed-strategy Nash equilibrium solutions of the bimatrix game $(Q^1_A(s_{t+1}), Q^2_A(s_{t+1}))$. \STATE $N_A(s_t,a^A_t, a^B_t) = N_A(s_t,a^A_t, a^B_t) + 1$ \STATE Set learning rate $\alpha_t^A = \frac{1}{N_A(s_t,a^A_t, a^B_t)}$.
\STATE Update $Q_A^1, Q_A^2$ such that \begin{align*} Q_A^1(s_t,a^A_t, a^B_t) = (1-\alpha_t^A)\cdot Q_A^1(s_t,a^A_t, a^B_t) + \alpha_t^A\cdot\left[r_t^A + \gamma\cdot\pi^1_A(s_{t+1})Q_A^1(s_{t+1})\pi^2_A(s_{t+1})\right]\\ Q_A^2(s_t,a^A_t, a^B_t) = (1-\alpha_t^A)\cdot Q_A^2(s_t,a^A_t, a^B_t) + \alpha_t^A\cdot\left[r_t^B + \gamma\cdot\pi^1_A(s_{t+1})Q_A^2(s_{t+1})\pi^2_A(s_{t+1})\right] \end{align*} \ENDFOR \end{algorithmic} \end{algorithm} \begin{assumption}\cite{MultiAgentQLearning98}\label{ass:Bimatrix_Nash_Assumption} A Nash equilibrium $(\pi^1(s),\pi^2(s))$ for any bimatrix game $(Q^1(s),Q^2(s))$ encountered during the training process satisfies one of the following properties: \begin{enumerate} \item The Nash equilibrium is globally optimal: \begin{align} \pi^1(s)Q^k(s)\pi^2(s)\geq \hat{\pi}^1(s)Q^k(s)\hat{\pi}^2(s)\qquad \forall\hat{\pi}^1 (s),\hat{\pi}^2 (s),\;\text{and}\;k=1,2 \end{align} \item If the Nash equilibrium is not globally optimal, then an agent receives a higher payoff when the other agent deviates from its Nash equilibrium strategy: \begin{align} \pi^1(s)Q^1(s)\pi^2(s)\leq \pi^1(s)Q^1(s)\hat{\pi}^2(s)\qquad \forall \hat{\pi}^2(s)\\ \pi^1(s)Q^2(s)\pi^2(s)\leq \hat{\pi}^1(s)Q^2(s)\pi^2(s)\qquad \forall \hat{\pi}^1(s) \end{align} \end{enumerate} \end{assumption} \begin{theorem} (Theorem 4, \citeauthor{MultiAgentQLearning98}~1998) Under \cref{ass:Bimatrix_Nash_Assumption}, the coupled sequences $Q_A^1, Q_A^2$ updated by \cref{alg:MultiAgent_QLearning} converge to the Nash equilibrium Q values $(Q^{1}_{*}, Q^{2}_{*})$, with $Q^{k}_{*}\;(k=1,2)$ defined as \begin{align} Q^{1}_*(s,a^A,a^B) = r^A(s,a^A,a^B) + \gamma\cdot\expectation_{s^{\prime}\sim{\mathcal{P}}(\cdot|s,a^A,a^B)}\left[J^A(s^{\prime},\pi^{A}_{*}, \pi^{B}_{*})\right]\\ Q^{2}_*(s,a^A,a^B) = r^B(s,a^A,a^B) + \gamma\cdot\expectation_{s^{\prime}\sim{\mathcal{P}}(\cdot|s,a^A,a^B)}\left[J^B(s^{\prime},\pi^{A}_{*}, \pi^{B}_{*})\right] \end{align} where $(\pi^{A}_{*}, \pi^{B}_{*})$ is a Nash equilibrium solution for
this stochastic game $(J^A, J^B)$ and \begin{align} J^A(s^{\prime},\pi^{A}_{*}, \pi^{B}_{*}) = \sum_{t=0}^{\infty}\gamma^t\expectation\left[r_t^A|\pi^A_*, \pi^B_*, s_0 = s^{\prime}\right]\\ J^B(s^{\prime},\pi^{A}_{*}, \pi^{B}_{*}) = \sum_{t=0}^{\infty}\gamma^t\expectation\left[r_t^B|\pi^A_*, \pi^B_*, s_0 = s^{\prime}\right] \end{align} \end{theorem} \subsection{Proof of \cref{thm:Convergence_of_RAA}}\label{sec:proof_of_convergence_RAA} The Poisson masks $M\sim \mathrm{Poisson}(1)$ provide parallel learning, since $\mathrm{Binomial}(T, \frac{1}{T})\rightarrow \mathrm{Poisson}(1)$ as $T\rightarrow\infty$, so each Q table $Q^i$ is trained in parallel. The proof of convergence of $Q^i$ for each $i\in\{1,..., k\}$ is exactly the same as in \cref{sec:proof_RARL}. Hence $\frac{1}{k}\sum_{i=1}^{k}Q^i\rightarrow Q^*$ w.p. 1. \subsection{Proof of convergence of \cref{alg:MultiAgent_QLearning_RiskAverse} (RAM-Q\xspace)} In this section, we prove the convergence of \cref{alg:MultiAgent_QLearning_RiskAverse} under \cref{ass:Bimatrix_Nash_Assumption}. The convergence proof is based on the following lemma. \begin{lemma}[Conditional Averaging Lemma~\cite{ValueBasedRL99}]\label{lem:Conditional_Averate_lemma} Assume the learning rate $\alpha_t$ satisfies \cref{prop:convergence_of_contraction}(a). Then the process $Q_{t+1}(i) =(1-\alpha_t(i))Q_t(i)+\alpha_t(i) w_t(i)$ converges to $\expectation[w_t(i)|h_t, \alpha_t]$, where $h_t$ is the history at time $t$. \end{lemma} We take the proof of convergence of $Q^P$ as an example; the proof of convergence of $Q^A$ is exactly the same.
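The conditional averaging effect in \cref{lem:Conditional_Averate_lemma} is easy to see numerically: with $\alpha_t = 1/t$, the process $Q_{t+1} = (1-\alpha_t)Q_t + \alpha_t w_t$ is exactly the running sample mean of the noise terms, so it converges to $\expectation[w_t]$. The noise distribution below is an arbitrary illustration.

```python
import random

random.seed(1)
Q = 0.0
for t in range(1, 100_001):
    w = random.uniform(0.0, 1.0)              # i.i.d. noise with mean 0.5
    Q = (1 - 1.0 / t) * Q + (1.0 / t) * w     # conditional averaging step
```

With this step-size schedule, after $t$ steps `Q` equals the average of the first $t$ noise samples, so it settles near $0.5$.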
We first reformulate the update rule \cref{eq:UpdateRule_MultiAgent_RiskAverse_1} as \begin{align} &Q^P(s_t,a^P_t, a^A_t) = (1-\frac{\alpha_t}{\alpha})\cdot Q^P(s_t,a^P_t, a^A_t) +\\ &\frac{\alpha_t}{\alpha}\cdot \left[\alpha\cdot u^P\left(r_t^P + \gamma\cdot\pi^P(s_{t+1})Q^P(s_{t+1})\pi^A(s_{t+1})-Q^P(s_t,a^P_t, a^A_t)\right) - \alpha\cdot x_0 + Q^P(s_t,a^P_t, a^A_t)\right] \end{align} and set \begin{align}\label{eq:def_of_wt_2} (H^P Q^P)(s_t,a^P_t, a^A_t) &= \alpha\cdot u^P\left(r_t^P + \gamma\cdot\pi^P(s_{t+1})Q^P(s_{t+1})\pi^A(s_{t+1})-Q^P(s_t,a^P_t, a^A_t)\right) - \alpha\cdot x_0 + Q^P(s_t,a^P_t, a^A_t) \end{align} $H^A Q^A$ is defined symmetrically as \begin{align} (H^A Q^A)(s_t,a^P_t, a^A_t) &= \alpha\cdot u^A\left(r_t^A + \gamma\cdot\pi^P(s_{t+1})Q^A(s_{t+1})\pi^A(s_{t+1})-Q^A(s_t,a^P_t, a^A_t)\right) - \alpha\cdot x_1 + Q^A(s_t,a^P_t, a^A_t) \end{align} It is shown in \cite{MultiAgentQLearning98} that the operator $(M^P_t,M^A_t)$ is a $\gamma$-contraction mapping, where $(M^P_t,M^A_t)$ is defined as \begin{align} M^P_t Q^P (s) = r^P_t + \gamma\cdot\pi^P(s)Q^P(s)\pi^A(s)\\ M^A_t Q^A (s) = r^A_t + \gamma\cdot\pi^P(s)Q^A(s)\pi^A(s) \end{align} Next, we show that $(H^P, H^A)$ is a contraction under the sup-norm (under \cref{ass:Utility_Func_Assumption}): \begin{align} H^P Q^P - H^P \hat{Q}^P &=\alpha\cdot\left[\xi^P_{Q^P,\hat{Q}^P}\cdot\left(M^P Q^P - M^P \hat{Q}^P -(Q^P-\hat{Q}^P)\right)\right] + (Q^P-\hat{Q}^P)\\ &\le \alpha\cdot\left[\xi^P_{Q^P,\hat{Q}^P}\cdot(\gamma-1)\left\|Q^P-\hat{Q}^P\right\|_{\infty}\right] + \left\|Q^P-\hat{Q}^P\right\|_{\infty}\\ &\leq \left(1-\alpha\epsilon(1-\gamma)\right)\cdot\left\|Q^P-\hat{Q}^P\right\|_{\infty} \end{align} Similarly, $ H^A Q^A - H^A \hat{Q}^A\leq \left(1-\alpha\epsilon(1-\gamma)\right)\cdot\left\|Q^A-\hat{Q}^A\right\|_{\infty}$. Hence $(H^P, H^A)$ is a $\left(1-\alpha\epsilon(1-\gamma)\right)$-contraction under the sup-norm.
Hence, by \cref{lem:Conditional_Averate_lemma}, the update rules \cref{eq:UpdateRule_MultiAgent_RiskAverse_1,eq:UpdateRule_MultiAgent_RiskAverse_2} converge, respectively, to \begin{align} Q^P(s_t,a^P_t, a^A_t)\rightarrow\expectation\left[\alpha\cdot u^P\left(r_t^P + \gamma\cdot\pi^P(s_{t+1})Q^P(s_{t+1})\pi^A(s_{t+1})-Q^P(s_t,a^P_t, a^A_t)\right) - \alpha\cdot x_0 + Q^P(s_t,a^P_t, a^A_t)\right]\\ Q^A(s_t,a^P_t, a^A_t)\rightarrow\expectation\left[\alpha\cdot u^A\left(r_t^A + \gamma\cdot\pi^P(s_{t+1})Q^A(s_{t+1})\pi^A(s_{t+1})-Q^A(s_t,a^P_t, a^A_t)\right) - \alpha\cdot x_1 + Q^A(s_t,a^P_t, a^A_t)\right] \end{align} i.e., \cref{eq:UpdateRule_MultiAgent_RiskAverse_1,eq:UpdateRule_MultiAgent_RiskAverse_2} converge respectively to $Q^*_P,Q^*_A$, where $Q^*_P, Q^*_A$ are the solutions of the Bellman equations \begin{align}\label{eq:Bellman_Q_MultiAgent_1} \expectation_{s,a^P,a^A}\left[u^P\left(r^P(s,a^P,a^A) + \gamma\cdot\pi^{P*}(s^{\prime})Q_P^*(s^{\prime})\pi^{A*}(s^{\prime}) - Q_P^*(s,a^P,a^A)\right)\right] = x_0\\ \expectation_{s,a^P,a^A}\left[u^A\left(r^A(s,a^P,a^A) + \gamma\cdot\pi^{P*}(s^{\prime})Q_A^*(s^{\prime})\pi^{A*}(s^{\prime}) - Q_A^*(s,a^P,a^A)\right)\right] = x_1 \end{align} where $(\pi^{P*},\pi^{A*})$ is the Nash equilibrium solution of the bimatrix game $(Q_P^*, Q_A^*)$. Next, we show that $(\pi^{P*},\pi^{A*})$ is a Nash equilibrium solution for the game with equilibrium payoffs $\left(\tilde{J}^P(s,\pi^{P*},\pi^{A*}), \tilde{J}^A(s,\pi^{P*},\pi^{A*})\right)$.
As in \cite{shen2014risk}, for any $X\in{\mathbb{R}}$, let ${\mathcal{U}}^P(X|s,a^P,a^A) :{\mathbb{R}} \times {\mathcal{S}}\times{\mathcal{A}}\times{\mathcal{A}}\rightarrow{\mathbb{R}}$ be the mapping (for brevity, written ${\mathcal{U}}^P_{s,a^P,a^A}(X)$) defined by \begin{align} {\mathcal{U}}^P_{s,a^P,a^A}(X) = \sup \Big\{m\in{\mathbb{R}}\,|\,\expectation_{s,a^P,a^A}\left[u^P(X-m)\right]\geq x_0\Big\} \end{align} with ${\mathcal{U}}^A_{s,a^P,a^A}$ defined analogously in terms of $u^A$ and $x_1$. Similar to \cite{shen2014risk,RiskSensitiveshen13}, suppose $(\pi^P,\pi^A)$ is a Nash equilibrium solution to the game $\left(\tilde{J}^P(s,\pi^P,\pi^A), \tilde{J}^A(s,\pi^P,\pi^A)\right)$; then the payoffs $\tilde{J}^P(s,\pi^P,\pi^A),\; \tilde{J}^A(s,\pi^P,\pi^A)$ are the solution of the risk-sensitive Bellman equations \begin{align}\label{eq:State_Value_Optimize} \tilde{J}^P(s,\pi^P,\pi^A) = \pi^P(s){\mathcal{U}}^P_{s,a^P,a^A}\left(r^P(s,:,:) + \gamma\cdot \tilde{J}^P(s^{\prime},\pi^P,\pi^A)\right)\pi^A(s)\qquad\forall s\in{\mathcal{S}}\\ \tilde{J}^A(s,\pi^P,\pi^A) = \pi^P(s){\mathcal{U}}^A_{s,a^P,a^A}\left(r^A(s,:,:) + \gamma\cdot \tilde{J}^A(s^{\prime},\pi^P,\pi^A)\right)\pi^A(s)\qquad\forall s\in{\mathcal{S}} \end{align} The corresponding Q tables satisfy \begin{align}\label{eq:Q_Value_Optimize_Equilibrium} Q_P (s,a^P,a^A) = {\mathcal{U}}^P_{s,a^P,a^A}\left(r^P(s,a^P,a^A) + \gamma \tilde{J}^P(s^{\prime},\pi^P,\pi^A)\right)\\ Q_A (s,a^P,a^A) = {\mathcal{U}}^A_{s,a^P,a^A}\left(r^A(s,a^P,a^A) + \gamma \tilde{J}^A(s^{\prime},\pi^P,\pi^A)\right) \end{align} Note that ${\mathcal{U}}^P_{s,a^P,a^A}$ (and likewise ${\mathcal{U}}^A_{s,a^P,a^A}$) is a monotone one-to-one mapping, so as shown in [\textbf{Theorem 4.6.5}~\cite{filar-competitiveMDP}], $(\pi^P, \pi^A)$ is the Nash equilibrium solution to the bimatrix game $(Q_P, Q_A)$.
If we can then show that $Q_P = Q_P^*$ and $Q_A = Q_A^*$ (i.e., that $Q_P$ and $Q_A$ solve \cref{eq:Bellman_Q_MultiAgent_1}), then the Nash solution of the bimatrix game $(Q_P^*,Q_A^*)$ returned by \cref{alg:MultiAgent_QLearning_RiskAverse} will be the Nash solution for the game $(\tilde{J}^P, \tilde{J}^A)$. \cite{shen2014risk} showed that \cref{eq:Q_Value_Optimize_Equilibrium} is equivalent to \begin{align} \expectation_{s,a^P,a^A}\left[u^P\left(r^P(s,a^P,a^A) + \gamma \tilde{J}^P(s^{\prime},\pi^P,\pi^A) - Q_P (s,a^P,a^A)\right)\right] = x_0 \\ \expectation_{s,a^P,a^A}\left[u^A\left(r^A(s,a^P,a^A) + \gamma \tilde{J}^A(s^{\prime},\pi^P,\pi^A) - Q_A (s,a^P,a^A)\right)\right] = x_1 \end{align} Plugging \cref{eq:State_Value_Optimize} in, we get \begin{align} \expectation_{s,a^P,a^A}\left[u^P\left(r^P(s,a^P,a^A) + \gamma\cdot \pi^P Q_P(s^{\prime})\pi^A - Q_P (s,a^P,a^A)\right)\right] = x_0 \\ \expectation_{s,a^P,a^A}\left[u^A\left(r^A(s,a^P,a^A) + \gamma\cdot \pi^P Q_A(s^{\prime})\pi^A - Q_A (s,a^P,a^A)\right)\right] = x_1 \end{align} which is exactly \cref{eq:Bellman_Q_MultiAgent_1}. Hence we have shown that, under \cref{ass:Bimatrix_Nash_Assumption}, \cref{eq:State_Value_Optimize} and \cref{eq:Bellman_Q_MultiAgent_1} are equivalent. Hence \cref{alg:MultiAgent_QLearning_RiskAverse} converges to $(Q_P^*, Q_A^*)$ such that the Nash equilibrium solution $(\pi^{P*}, \pi^{A*})$ for the bimatrix game $(Q_P^*, Q_A^*)$ is the Nash equilibrium solution to the game, with equilibrium payoffs $\tilde{J}^P(s,\pi^{P*}, \pi^{A*})$ and $\tilde{J}^A(s,\pi^{P*}, \pi^{A*})$. \subsection{Discussion of RA3-Q} \label{sec:Discussion_of_RAAA} \begin{algorithm*}[t!]
\footnotesize \caption{Risk-Averse Adversarial Averaged Q-Learning (\RAAA)} \label{alg:Risk_Averse_Adversarial_Averaged_QLearning_fullversion} \textbf{Input :} Training steps $T$; Exploration rate $\epsilon$; Number of models $k$; Risk control parameters $\lambda_P, \lambda_A$; Utility function parameters $\beta^P < 0; \beta^A > 0$. \begin{spacing}{0.8} \begin{algorithmic}[1] \STATE Initialize $Q_P^i(s,a_P,a_A)= 0$; $Q_A^i(s,a_P,a_A)= 0$ for $\forall i = 1,..., k \;$and$\;(s,a_A, a_P)$; $N= \mathbf{0}\in{\mathbb{R}}^{|{\mathcal{S}}|\times|{\mathcal{A}}|\times|{\mathcal{A}}|}$; \STATE Randomly sample action choosing head integers $H_P, H_A\in\{1,...,k\}$. \FOR{$t = 1$ to $T$} \STATE $Q_{P} = Q_{P}^{H_P}$ \STATE Compute $\hat{Q}_{P}$ by \begin{align} \hat{Q}_{P}(s,a_P,a_A) = Q_{P}(s,a_P,a_A) - \lambda_P\cdot \frac{\sum_{i=1}^{k}(Q_P^i(s,a_P,a_A) - \bar{Q}_P(s,a_P,a_A))^2}{k-1} \qquad \lambda_P>0 \end{align} where $\bar{Q}_P(s,a_P,a_A) = \frac{1}{k}\sum_{i=1}^{k}Q_P^i(s,a_P,a_A)$ \STATE $Q_{A} = Q_{A}^{H_A}$ \STATE Compute $\hat{Q}_{A}$ by \begin{align} \hat{Q}_{A}(s,a_P,a_A) = Q_{A}(s,a_P,a_A) + \lambda_A\cdot \frac{\sum_{i=1}^{k}(Q_A^i(s,a_P,a_A) - \bar{Q}_A(s,a_P,a_A))^2}{k-1}\qquad \lambda_A>0 \end{align} where $\bar{Q}_A(s,a_P,a_A) = \frac{1}{k}\sum_{i=1}^{k}Q_A^i(s,a_P,a_A)$ \STATE The optimal actions $(a_P^{\prime}, a_A^{\prime})$ are defined as \begin{align} \hat{Q}_P(s_t, a_P^{\prime}, a_A^0) = \underset{a_P,a_A}{\max}\hat{Q}_P(s_t,a_P,a_A)\qquad\text{for some $a_A^0$}\\ \hat{Q}_A(s_t, a_P^0, a_A^{\prime}) = \underset{a_P,a_A}{\max}\hat{Q}_A(s_t,a_P,a_A)\qquad\text{for some $a_P^0$} \end{align} \STATE Select actions $a_P, a_A$ according to $\hat{Q}_{P},\hat{Q}_{A}$ by applying $\epsilon$-greedy strategy. 
\STATE Two agents respectively execute actions $a_P,a_A$ and observe $(s_t, a_P,a_A, r^A_t, r^P_t, s_{t+1})$ \STATE Generate mask $M\in {\mathbb{Z}}_{\geq 0}^{k}$ with i.i.d.\ $\mathrm{Poisson}(1)$ entries \STATE $N(s_t,a_P,a_A)= N(s_t,a_P,a_A) + 1$ \STATE $\alpha(s_t,a_P,a_A) =\frac{1}{N(s_t,a_P,a_A)}$ \FOR{$i=1,...,k$} \IF{$M_i = 1$} \STATE Update $Q_P^i$ by \begin{align}\label{eq:RA3QProtagonist_UpdateRule} Q_P^i(s_t,a_P, a_A) = Q_P^i(s_t,a_P,a_A) + \alpha(s_t,a_P,a_A)\cdot\left[u^P\left(r^P_t + \gamma\cdot\underset{a_P,a_A}{\max} Q_P^i(s_{t+1}, a_P,a_A) - Q_P^i(s_t, a_P,a_A)\right)-x_0\right] \end{align} where $u^P$ is a utility function; here we use $u^P(x) = -e^{\beta^P x}$ with $\beta^P<0$ and $x_0 = -1$. \ENDIF \ENDFOR \FOR{$i=1,...,k$} \IF{$M_i = 1$} \STATE Update $Q_A^i$ by \begin{align}\label{eq:RA3QAdversary_UpdateRule} Q_A^i(s_t,a_P,a_A) = Q_A^i(s_t,a_P,a_A) + \alpha(s_t,a_P,a_A)\cdot\left[u^{A}\left(r^A_t + \gamma\cdot\underset{a_P,a_A}{\max} Q_A^i(s_{t+1}, a_P,a_A) - Q_A^i(s_t,a_P, a_A)\right)-x_1\right] \end{align} where $u^A$ is a utility function; here we use $u^A(x) = e^{\beta^A\cdot x}$ with $\beta^A>0$ and $x_1 = 1$. \ENDIF \ENDFOR \STATE Update $H_P$ and $H_A$ by randomly sampling integers from $1$ to $k$ \ENDFOR \STATE \textbf{Return} $\frac{1}{k}\sum_{i=1}^{k}Q_P^i$; $\frac{1}{k}\sum_{i=1}^{k}Q_A^i$ \end{algorithmic} \end{spacing} \end{algorithm*} We have presented a short version of \RAAA in \cref{alg:Risk_Averse_Adversarial_Averaged_QLearning}; a detailed version is presented in \cref{alg:Risk_Averse_Adversarial_Averaged_QLearning_fullversion}. In this section, we discuss convergence issues of \RAAA. First, we discuss a simplified setting in which we show that if the adversary's policy is a \emph{fixed} policy $\pi^A_0$, the update rule for the protagonist \cref{eq:RA3QProtagonist_UpdateRule} converges to the optimum of $J^P(s,:,\pi^A_0)$.
Similarly, if the protagonist's policy is a \emph{fixed} policy $\pi^P_0$, the update rule for the adversary \cref{eq:RA3QAdversary_UpdateRule} converges to the optimum of $J^A(s,\pi^P_0,:)$. The Poisson masks $M\sim \mathrm{Poisson}(1)$ provide parallel learning, since $\mathrm{Binomial}(T, \frac{1}{T})\rightarrow \mathrm{Poisson}(1)$ as $T\rightarrow\infty$, so the Q tables of the protagonist and the adversary, $Q^i_P$ and $Q^i_A$, are trained in parallel. Similar to \cref{sec:proof_RARL}, we need to prove the convergence of the iterative procedure. We take the protagonist as an example; the proof for the adversary is similar. Fix the policy of the adversary; then, according to [\cite{shen2014risk} \textbf{Proposition 3.1}], for any random variable $X$ the following statements are equivalent: $$\text{(i) } \frac{1}{\beta^P}\log \expectation_{\mu}\left[\exp\left(\beta^P\cdot X\right)\right] = m^* $$ $$\text{(ii) } \expectation_{\mu}\left[u^P(X-m^*)\right] = x_0$$ We will use this proposition below to show that the convergence point is the optimum of the objective function $\tilde{J}^P(s,:,\pi^A_0)$. Compared to \cref{alg:Risk_Averse_QLearning} (RAQL), \RAAA uses a multi-agent extension of the MDP (where the transition function is ${\mathcal{P}}:{\mathcal{S}}\times{\mathcal{A}}\times{\mathcal{A}}\rightarrow{\mathbb{R}}^{{\mathcal{S}}}$).
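The equivalence in Proposition 3.1 of \cite{shen2014risk} can be checked numerically: for $u^P(x) = -e^{\beta^P x}$ and $x_0 = -1$, the certainty equivalent $m^* = \frac{1}{\beta^P}\log \expectation[\exp(\beta^P X)]$ satisfies $\expectation[u^P(X-m^*)] = x_0$ identically. A sketch on an arbitrary discrete distribution (the support and probabilities below are purely illustrative):

```python
import math

beta_P, x0 = -0.5, -1.0
xs = [-1.0, 0.0, 2.0, 5.0]     # illustrative support of X
ps = [0.1, 0.4, 0.3, 0.2]      # illustrative probabilities (sum to 1)

# Certainty equivalent: m* = (1/beta^P) log E[exp(beta^P X)]
m_star = math.log(sum(p * math.exp(beta_P * x) for x, p in zip(xs, ps))) / beta_P

# E[u^P(X - m*)] with u^P(x) = -exp(beta^P x); equals x0 = -1 up to rounding.
lhs = sum(p * -math.exp(beta_P * (x - m_star)) for x, p in zip(xs, ps))
```

This works because $\expectation[-e^{\beta^P (X-m^*)}] = -e^{-\beta^P m^*}\,\expectation[e^{\beta^P X}] = -1$ by construction of $m^*$.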
We reformulate the update rule \cref{eq:RA3QProtagonist_UpdateRule} by letting \begin{align} &q_{t+1}^{P}(s,a_P,a_A) = \left(1- \frac{\alpha_t(s,a_P,a_A)}{\alpha}\right)q^P_t(s,a_P,a_A) + \frac{\alpha_t(s,a_P,a_A)}{\alpha}\cdot\left[\alpha\cdot u(d_t) - \alpha\cdot x_0 + q^P_t(s,a_P,a_A) \right]\\ &\text{where } d_t := r_t^P + \gamma\cdot\underset{a_P,a_A}{\max}\;q^P_t(s^{\prime},a_P,a_A) - q^P_t(s,a_P,a_A)\qquad x_0 = -1\qquad \alpha\in(0,\min(L^{-1},1)] \end{align} and set \begin{align} (H^P q_t^P)(s,a_P,a_A) &= \alpha\cdot\expectation_{s,a_P,a_A}\left[\tilde{u}\left(r_t^P + \gamma\cdot\underset{a_P,a_A}{\max}\;q^P_t(s^{\prime},a_P,a_A) - q^P_t(s,a_P,a_A)\right)\right] + q_t^P(s,a_P,a_A)\\ w_t(s,a_P,a_A) &=\alpha\cdot\tilde{u}(d_t) - \alpha\cdot\expectation_{s,a_P,a_A}\left[\tilde{u}\left(r_t^P + \gamma\cdot\underset{a_P,a_A}{\max}\;q^P_t(s^{\prime},a_P,a_A) - q^P_t(s,a_P,a_A)\right)\right]\label{eq:def_of_wt}\\ \tilde{u}(x) &= u(x) - x_0 \end{align} Next, we show that $H^P$ is a $(1-\alpha(1-\gamma)\epsilon)$-contraction under \cref{ass:Utility_Func_Assumption}: for any two Q tables $q,q^{\prime}$, define $v^P(s):= \underset{a_P,a_A}{\max}\;q(s,a_P,a_A)$ and $v^{P^\prime}(s):= \underset{a_P,a_A}{\max}\;q^{\prime}(s,a_P,a_A)$.
Thus, \begin{align} |v^{P}(s)-v^{P^\prime}(s)|\leq \underset{s,a_P,a_A}{\max}|q(s,a_P,a_A) - q^{\prime}(s,a_P,a_A)| = \left\|q-q^{\prime}\right\|_{\infty}\; \end{align} By \cref{ass:Utility_Func_Assumption} and the monotonicity of $\tilde{u}$, for given $x,y\in{\mathbb{R}}$ there exists $\xi_{(x,y)}\in[\epsilon,L]$ such that $$\tilde{u}(x) - \tilde{u}(y) = \xi_{(x,y)}\cdot (x-y).$$ Then we can obtain \begin{align} &(H^P q)(s,a_P,a_A) - (H^P q^{\prime})(s,a_P,a_A)\\ &= \sum_{s^{\prime}}{\mathcal{P}}[s^{\prime}|s,a_P,a_A]\cdot\Big\{\alpha\xi_{(s,a_P,a_A,s^{\prime},q,q^{\prime})}\cdot[\gamma\cdot v^{P}(s^{\prime}) - \gamma\cdot v^{P^{\prime}}(s^{\prime}) - q(s,a_P,a_A) + q^{\prime}(s,a_P,a_A)] + (q(s,a_P,a_A) - q^{\prime}(s,a_P,a_A))\Big\}\\ &\leq \left(1-\alpha(1-\gamma)\sum_{s^{\prime}}{\mathcal{P}}[s^{\prime}|s,a_P,a_A]\cdot\xi_{(s,a_P,a_A,s^{\prime},q,q^{\prime})}\right)\left\|q-q^{\prime}\right\|_{\infty}\\ &\leq (1-\alpha(1-\gamma)\epsilon)\left\|q-q^{\prime}\right\|_{\infty} \end{align} Hence $H^P$ is a contraction. By \cref{eq:def_of_wt}, $\expectation\left[w_t(s,a_P,a_A)|{\mathcal{F}}_t\right] = 0$. Hence it remains to prove (b)(ii) of \cref{prop:convergence_of_contraction}. \begin{align} \expectation\left[w_t^2(s,a_P,a_A)|{\mathcal{F}}_t\right] = \alpha^2\cdot\expectation\left[(\tilde{u}(d_t))^2|{\mathcal{F}}_t\right] - \alpha^2(\expectation\left[\tilde{u}(d_t)|{\mathcal{F}}_t\right])^2\leq \alpha^2\cdot\expectation\left[(\tilde{u}(d_t))^2|{\mathcal{F}}_t\right] \end{align} Following the same steps as in \cref{sec:proof_RARL}, condition (b)(ii) of \cref{prop:convergence_of_contraction} also holds in this case.
Recall that the learning rate satisfies condition a; hence, by \cref{prop:convergence_of_contraction}, $q\rightarrow q^*$, where $q^*$ is the solution to the Bellman equation \begin{align} \expectation_{s,a_P,a_A}\left[u^P\left(r_t^P + \gamma\cdot\underset{a_P,a_A}{\max}\;q(s^{\prime},a_P,a_A) - q(s,a_P,a_A)\right)\right] = x_0\qquad \pi^A_0\text{ is fixed} \end{align} for all $(s,a_P,a_A)$, where $s^{\prime}$ is sampled from ${\mathcal{P}}[\cdot|s,a_P,a_A]$. Similarly, we can show that for a fixed protagonist policy, the update rule \cref{eq:RA3QAdversary_UpdateRule} guarantees that $q_A\rightarrow q_A^*$, where $q_A^*$ is the solution to the Bellman equation \begin{align} \expectation_{s,a_P,a_A}\left[u^A\left(r^A_t + \gamma\cdot\underset{a_P,a_A}{\max}\;q(s^{\prime},a_P,a_A) - q(s,a_P,a_A)\right)\right] = x_1 \qquad \pi^P_0\text{ is fixed} \end{align} for all $(s,a_P,a_A)$, where $s^{\prime}$ is sampled from ${\mathcal{P}}[\cdot|s,a_P,a_A]$. Note that this does not imply a convergence guarantee for \RAAA, because of the assumption that the \emph{protagonist's/adversary's policy is fixed}. Only if one of the agents (say, the protagonist) stops learning at some point (so that its policy becomes fixed) will the other agent (the adversary) also converge. This is a standing challenge in general multi-agent learning, where it is often hard to strike a balance between theoretical algorithms (with convergence guarantees) and practical algorithms (losing guarantees but showing good empirical results); see our experimental results in \cref{sec:risk_and_robustness_evaluation} and related literature~\cite{bowling2002multiagent,weinberg2004best,littman2001value}. \subsection{Meta-game payoff examples and EGT plots} \label{sec:meta_game_examples} \begin{table}[h!]
\scriptsize \caption{Payoff Table of Rock-Paper-Scissors} \label{table:Payoff_RPS} \centering \begin{tabular}{c c c|c c c} \toprule $N_{Rock}$ & $N_{Paper}$ & $N_{Scissors}$ & $R_{Rock}$ & $R_{Paper}$ & $R_{Scissors}$\\ \hline 2 & 0 & 0 & 0 & 0 & 0\\ 1 & 1 & 0 & -1 & 1 & 0\\ 0 & 2 & 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 1 & 0 & -1\\ 0 & 0 & 2 & 0 & 0 & 0\\ 0 & 1 & 1 & 0 & -1 & 1\\ \bottomrule \end{tabular} \end{table} \begin{figure}[!h] \centering \includegraphics[width=5cm]{RPS.png} \caption{Directional Field of Rock-Paper-Scissors} \label{fig:directional_example_rps} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=5cm]{RPS_Traject.png} \caption{Trajectory Plot of Rock-Paper-Scissors} \label{fig:trajectory_example_rps} \end{figure} The payoff table of the well-known game Rock-Paper-Scissors is shown in \cref{table:Payoff_RPS}, its corresponding directional field in \cref{fig:directional_example_rps}, and its trajectory plot in \cref{fig:trajectory_example_rps}. It can be observed from \cref{fig:directional_example_rps,fig:trajectory_example_rps} that the equilibrium of Rock-Paper-Scissors is the centroid of the strategy simplex. \begin{table}[h!]
\scriptsize \caption{An example of a meta-game payoff table with 2 players and 3 strategies.} \label{table:MetaPayoff} \centering \begin{tabular}{c c c|c c c} \toprule $N_{i1}$ & $N_{i2}$ & $N_{i3}$ & $R_{i1}$ & $R_{i2}$ & $R_{i3}$\\ \hline 2 & 0 & 0 & 0.5 & 0 & 0\\ 1 & 1 & 0 & 0.3 & 0.7 & 0\\ 0 & 2 & 0 & 0 & 0.9 & 0 \\ 1 & 0 & 1 & 0.35 & 0 & 0.45\\ 0 & 0 & 2 & 0 & 0 & 0.6\\ 0 & 1 & 1 & 0 & 0.66 & 0.38\\ \bottomrule \end{tabular} \end{table} \begin{figure}[!h] \centering \includegraphics[width=5cm]{ExampleDirectionField.png} \caption{Directional Field of \cref{table:MetaPayoff}} \label{fig:directional_example} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=5cm]{ExamplewithTraject.png} \caption{Trajectory Plot of \cref{table:MetaPayoff}} \label{fig:trajectory_example} \end{figure} Another example of a 2-player meta-game payoff table with 3 strategies is given in \cref{table:MetaPayoff}, with its corresponding directional field shown in \cref{fig:directional_example} and its trajectory plot in \cref{fig:trajectory_example}, where the white circles denote unstable equilibria (saddle points) and the black solid circles denote globally stable equilibria. \subsection{Proof of \cref{thm:approx_equili_riskaversegame}}\label{sec:approx_equili_proof} \begin{theorem} For a Normal Form Game with $p$ players, where each player $i$ chooses a strategy $\pi^i$ from a set of strategies $S^i = \{\pi^i_1, ..., \pi^i_k\}$ and receives a risk-averse payoff $h^i(\pi^1, ..., \pi^p):S^1\times...\times S^p\rightarrow{\mathbb{R}}$ satisfying \cref{ass:stochastic_reward_bounded}: if $\mathbf{x}$ is a Nash Equilibrium for the game $\hat{h}^i (\pi^1, ..., \pi^p)$, then it is a $2\epsilon$-Nash equilibrium for the game $h^i (\pi^1, ..., \pi^p)$ with probability $1-\delta$ if we play the game $n$ times, where \begin{align} n \ge \max \left\{ -\frac{8R^2}{\epsilon^2}\log\left[\frac{1}{4}\left(1-(1-\delta)^{\frac{1}{|S^1|\times ...\times |S^p|\times p}}\right)\right], \right. \left.
\frac{64\beta^2\omega^2\cdot\Gamma(2)}{\epsilon^2\left[1-(1-\delta)^{\frac{1}{|S^1|\times...\times |S^p|\times p}}\right]}\right\} \end{align} \end{theorem} \begin{assumption}\label{ass:stochastic_reward_bounded} The stochastic return $h$ (for each player and each strategy) of each simulation has a sub-Gaussian tail, i.e., there exists $\omega > 0$ s.t. \begin{align} \expectation\left[exp\left(c\cdot(h-\expectation[h])\right)\right]&\leq exp\left(\frac{\omega^2 c^2}{2}\right) \qquad \forall c\in {\mathbb{R}} \end{align} We also select $R>0$ s.t. $h\in[-R, R]$ almost surely. \end{assumption} \begin{proof} Note that we have the following relation: \begin{align}\label{eq:Equilibrium_Approximation_1} \mathbb{E}_{\pi\sim \mathbf{x}}\left[h^i (\pi)\right] = \mathbb{E}_{\pi\sim \mathbf{x}}\left[\hat{h}^i (\pi)\right] + \mathbb{E}_{\pi\sim \mathbf{x}}\left[h^i (\pi) - \hat{h}^i (\pi)\right] \end{align} Then \begin{align} &\mathbb{E}_{\pi^{-i}\sim\mathbf{x}^{-i}}\left[h^{i}(\pi^i, \mathbf{\pi}^{-i})\right] = \mathbb{E}_{\pi^{-i}\sim\mathbf{x}^{-i}}\left[\hat{h}^{i}(\pi^i, \mathbf{\pi}^{-i})\right] + \mathbb{E}_{\pi^{-i}\sim\mathbf{x}^{-i}}\left[h^{i}(\pi^i, \mathbf{\pi}^{-i}) - \hat{h}^{i}(\pi^i, \mathbf{\pi}^{-i})\right]\\ &\underset{\pi^i }{\max}~\mathbb{E}_{\pi^{-i}\sim\mathbf{x}^{-i}}\left[h^{i}(\pi^i, \mathbf{\pi}^{-i})\right] \le \underset{\pi^i }{\max}~\mathbb{E}_{\pi^{-i}\sim\mathbf{x}^{-i}}\left[\hat{h}^{i}(\pi^i, \mathbf{\pi}^{-i})\right] +\underset{\pi^i }{\max}~\mathbb{E}_{\pi^{-i}\sim\mathbf{x}^{-i}}\left[h^{i}(\pi^i, \mathbf{\pi}^{-i}) - \hat{h}^{i}(\pi^i, \mathbf{\pi}^{-i})\right] \end{align} Hence, \begin{align}\label{eq:Equilibrium_Approximation_2} &\underset{\pi^i }{\max}~\mathbb{E}_{\pi^{-i}\sim\mathbf{x}^{-i}}\left[h^{i}(\pi^i, \mathbf{\pi}^{-i})\right] - \mathbb{E}_{\pi\sim\mathbf{x}}\left[h^i (\pi)\right]\\ \le & \underbrace{\underset{\pi^i }{\max}\;\mathbb{E}_{\pi^{-i}\sim\mathbf{x}^{-i}}\left[\hat{h}^{i}(\pi^i, \mathbf{\pi}^{-i})\right] - 
\mathbb{E}_{\pi\sim\mathbf{x}}\left[\hat{h}^i (\pi)\right]}_{=0 \text{ since } \textbf{x} \text{ is a Nash Equilibrium for }\hat{h}^{i}} + \underbrace{\underset{\pi^i }{\max}\;\mathbb{E}_{\pi^{-i}\sim\mathbf{x}^{-i}}\left[h^{i}(\pi^i, \mathbf{\pi}^{-i}) - \hat{h}^{i}(\pi^i, \mathbf{\pi}^{-i})\right]}_{\le \epsilon} + \underbrace{\mathbb{E}_{\pi\sim\mathbf{x}}\left[\hat{h}^{i}(\pi) - h^{i}(\pi)\right]}_{\le\epsilon} \end{align} Hence, if we can control the difference $|h^i (\pi)-\hat{h}^{i}(\pi)|$ uniformly over players and actions, then an equilibrium for the empirical game is almost an equilibrium for the game defined by the reward function. The question is then how many samples $n$ we need so that, for a fixed confidence $\delta$ and a fixed $\epsilon$, a Nash equilibrium for $\hat{h}$ is a $2\epsilon$-Nash equilibrium for $h$. In the following, we fix player $i$ and the joint strategy $\pi = (\pi^1,..., \pi^p)$ of the $p$ players, and write, for short, $h^i = h^i(\pi)$, $\hat{h}^i = \hat{h}^i(\pi)$. By Hoeffding's inequality, \begin{align}\label{eq:bound_hoeffding} \mathbb{P}\left[\left|\bar{R^i} -\mathbb{E}[R^i] \right|\geq \frac{\epsilon}{2}\right]\leq 2\cdot exp\left(-\frac{\epsilon^2 n}{8R^2}\right) \end{align} It remains to bound the deviation of the unbiased estimator of the variance penalty term. Denote $V^2_n = \frac{1}{n-1}\sum_{j=1}^{n}\left(R^i_j - \bar{R^i}\right)^2$; then $\mathbb{E}[V^2_n] = \mathbb{V}ar[R^i] = \sigma^2$, i.e., it is an unbiased estimator of the game variance. We first compute the variance of $V^2_n$. Let $Z^i_j = R^i_j - \expectation[R^i]$; then $\expectation[Z^i_j] = 0$ and $Z^i_1, ..., Z^i_n$ are independent, and we have \begin{align} \expectation[V^2_n] = \mathbb{V}ar[R^i] = \mathbb{V}ar[Z^i].
\end{align} \begin{align}\label{eq:variance_of_samplevariance} &\mathbb{V}ar[V^2_n] = \mathbb{E}[V^4_n] - (\mathbb{E}[V^2_n])^2\\ &= \expectation\left[\frac{n^2(\sum_{j=1}^{n}(Z_j^i)^2)^2 - 2n(\sum_{j=1}^{n}(Z_j^i)^2) (\sum_{j=1}^{n}Z_j^i)^2 + (\sum_{j=1}^{n}Z_j^i)^4}{n^2(n-1)^2}\right] - \sigma^4\\ &= \frac{n^2\expectation\left[\left(\sum_{j=1}^n(Z_j^i)^2\right)^2\right] - 2n\expectation\left[\left(\sum_{j=1}^n (Z_j^i)^2\right)\left(\sum_{j=1}^n Z_j^i\right)^2\right] + \expectation\left[\left(\sum_{j=1}^n Z_j^i\right)^4\right]}{n^2(n-1)^2} - \sigma^4 \end{align} Since $Z_1^i, ..., Z_n^i$ are independent, we have that for distinct $j,k,m$, \begin{align} \expectation[Z^i_j Z^i_k] = 0; \quad \expectation[(Z^i_j)^3 Z^i_k] = 0;\quad \expectation[(Z^i_j)^2 Z^i_k Z^i_m] = 0. \end{align} We denote \begin{align} \expectation[(Z^i_j)^2 (Z^i_k)^2] = \mu_2^2 = \sigma^4;\quad \expectation[(Z^i_j)^4] =\mu_4. \end{align} Then, with some algebraic manipulation, \cref{eq:variance_of_samplevariance} simplifies to \begin{align} \mathbb{V}ar[V_n^2] &= \frac{n^2\left(n\mu_4 + n(n-1)\mu_2^2\right) - 2n(n\mu_4 + n(n-1)\mu_2^2) + n\mu_4 + 3n(n-1)\mu_2^2}{n^2(n-1)^2} - \sigma^4\\ &= \frac{(n-1)\mu_4 +(n^2-2n+3)\sigma^4}{n(n-1)} - \sigma^4\\ &= \frac{\mu_4}{n} - \frac{\sigma^4 (n-3)}{n(n-1)}.
\end{align} By Chebyshev's inequality, \begin{align} \mathbb{P}\left[\left|V_n^2 - \mathbb{V}ar[R^i]\right|\geq \frac{\epsilon}{2\beta}\right]&\leq \frac{\mathbb{V}ar[V_n^2]}{(\frac{\epsilon}{2\beta})^2}\\ &\leq \frac{ 4\beta^2\left(\frac{\mu_4}{n} - \frac{\sigma^4 (n-3)}{n(n-1)}\right)}{\epsilon^2} \end{align} By \cref{ass:stochastic_reward_bounded}, \begin{align} \mu_4\leq 16\omega^2\cdot\Gamma(2) \end{align} By the triangle inequality, \begin{align} {\mathbb{P}}\left[\left|h^i - \hat{h}^i\right|\geq \epsilon\right]&\le {\mathbb{P}}\left[\left|\expectation[R^i]-\bar{R}^i\right|+\beta\cdot\left|V_n^2 - \mathbb{V}ar[R^i]\right|\geq \epsilon\right]\\ &\le {\mathbb{P}}\left[\left|\expectation[R^i]-\bar{R}^i\right|\ge \frac{\epsilon}{2}\text{ or }\beta\cdot\left|V_n^2 - \mathbb{V}ar[R^i]\right|\geq \frac{\epsilon}{2}\right]\\ &\le {\mathbb{P}}\left[\left|\expectation[R^i]-\bar{R}^i\right|\ge \frac{\epsilon}{2}\right] + {\mathbb{P}}\left[\left|V_n^2 - \mathbb{V}ar[R^i]\right|\geq \frac{\epsilon}{2\beta}\right]\\ &\le 2\cdot exp\left(-\frac{\epsilon^2 n}{8R^2}\right) + \frac{ 4\beta^2\left(\frac{16\omega^2\cdot\Gamma(2)}{n} - \frac{\sigma^4 (n-3)}{n(n-1)}\right)}{\epsilon^2}\\ &\le 2\cdot exp\left(-\frac{\epsilon^2 n}{8R^2}\right) + \frac{64\beta^2\omega^2\cdot\Gamma(2)}{n\epsilon^2}\\ &= f(n, \epsilon).
\end{align} Hence, applying this bound to every joint strategy $\pi$ and every player $i$, we have \begin{align} {\mathbb{P}}\left[\underset{\pi,i}{\sup}\left|h^i(\pi) - \hat{h}^i (\pi)\right|<\epsilon\right]\geq \left(1-f(n,\epsilon)\right)^{|S^1|\times ...\times |S^p|\times p} \end{align} Hence, for \begin{align} n\ge \max\left\{-\frac{8R^2}{\epsilon^2}\log\left[\frac{1}{4}\left(1-(1-\delta)^{\frac{1}{|S^1|\times ...\times |S^p|\times p}}\right)\right]\;; \;\frac{64\beta^2\omega^2\cdot\Gamma(2)}{\epsilon^2\left[1-(1-\delta)^{\frac{1}{|S^1|\times...\times |S^p|\times p}}\right]}\right\} \end{align} we have ${\mathbb{P}}\left[\underset{\pi,i}{\sup}\left|h^i(\pi) - \hat{h}^i (\pi)\right|<\epsilon\right]\ge 1-\delta$. Plugging this result into \cref{eq:Equilibrium_Approximation_2}, we have \begin{align}\label{eq:Equilibrium_Approximation_3} &\underset{\pi^i }{\max}~\mathbb{E}_{\pi^{-i}\sim\mathbf{x}^{-i}}\left[h^{i}(\pi^i, \mathbf{\pi}^{-i})\right] - \mathbb{E}_{\pi\sim\mathbf{x}}\left[h^i (\pi)\right]\le 2\epsilon \end{align} \end{proof} \section{Introduction} Reinforcement learning (RL) has moved from toy domains to real-world applications such as games~\cite{berner2019dota}, navigation~\cite{bellemare2020autonomous}, software engineering~\cite{bagherzadeh2020reinforcement}, industrial design~\cite{mirhoseini2020chip}, and finance~\cite{li2017deep}. Each of these applications poses long-standing fundamental challenges in RL, such as limited training time, costly exploration, and safety considerations. In finance in particular, RL has been applied to stochastic control problems such as option pricing~\cite{li2009learning}, market making~\cite{spooner2018market}, and optimal execution~\cite{ning2018double}.
However, the most well-known finance application is algorithmic trading, where the goal is to design algorithms capable of automatically making trading decisions based on a set of mathematical rules computed by a machine~\cite{theate2021application}. In algorithmic trading the environment represents the market (and the rest of the actors). The agent's task is to take actions related to how and how much to trade, and the objective is usually to maximize profit while considering risk. There are diverse challenges in this setting, such as partial observability, a large action space, and the difficulty of defining rewards and learning objectives~\cite{theate2021application}. In our work we focus on two sought-after properties for learning agents in realistic scenarios: risk assessment and robustness. Risk assessment is a cornerstone of financial applications. A well-known approach is to consider risk while assessing the performance (profit)\footnote{Although the usual financial term for profit is \emph{return}, that term could be confused with the usual definition of return in RL (the cumulative sum of discounted rewards).} of a trading strategy. Here, risk is a quantity related to the variance (or standard deviation) of the profit, and it is commonly referred to as ``volatility''. In particular, the Sharpe ratio~\cite{sharpe1994sharpe} considers both the generated profit and the risk (variance) associated with a trading strategy. Note that this objective function (the Sharpe ratio) is different from that of traditional RL, where the goal is to optimize the expected return, usually without consideration of risk. Existing works have proposed risk-sensitive RL algorithms~\cite{mihatsch2002risk,di2012policy} and variance reduction techniques~\cite{anschel2017averaged}. In a similar spirit, our proposed algorithms aim to reduce variance while also having convergence guarantees and improved robustness via adversarial learning.
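To make the risk-adjusted view concrete, the following sketch computes a bare-bones Sharpe ratio (zero risk-free rate, no annualization) for two hypothetical profit streams with identical mean profit but different volatility; all numbers are invented for illustration.

```python
import statistics

def sharpe_ratio(profits, risk_free=0.0):
    """Mean excess profit divided by its standard deviation (volatility)."""
    excess = [p - risk_free for p in profits]
    return statistics.mean(excess) / statistics.stdev(excess)

calm = [0.9, 1.0, 1.1, 1.0, 1.0]     # low volatility around a mean of 1.0
choppy = [-1.0, 3.0, 0.0, 2.0, 1.0]  # same mean of 1.0, high volatility
```

Both streams deliver the same average profit, but `sharpe_ratio(calm)` is far larger than `sharpe_ratio(choppy)`, which is exactly the distinction the expected return alone cannot express.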
Deep RL has been shown to be brittle in many scenarios~\cite{henderson2018deep}. Therefore, improving robustness is essential for deploying agents in realistic scenarios. A line of work has improved the robustness of RL agents via adversarial perturbations~\cite{morimoto2005robust,pinto2017robust}. In particular, the framework assumes an adversary (who is also learning) that is allowed to take over control at regular intervals. This approach has shown good experimental results in robotics~\cite{pan2019risk}, and our proposed algorithms extend this idea while providing convergence guarantees. Since our motivation is to use RL agents in trading markets (which can be seen as multi-agent interactions), we also evaluate these agents from the perspective of game theory. However, such interactions may be too difficult to analyze in the standard game-theoretic framework, since there is no normal-form representation (commonly used to analyze games). Fortunately, empirical game theory~\cite{walsh2002analyzing,wellman2006methods} overcomes this limitation by using information from several rounds of repeated interactions and treating agents' policies as higher-level strategies. These modifications have made it possible to analyze multi-agent interactions in complex scenarios such as markets~\cite{bloembergen2015trading} and multi-agent games~\cite{tuyls2020bounds}. However, these works have not studied the interactions under risk metrics (such as the Sharpe ratio) as we do in this work. In summary, we take inspiration from previous works to combine \emph{risk-awareness, variance reduction and robustness} techniques in four different algorithms. Risk-Averse Averaged Q-Learning (\RAA) and Variance Reduced Risk-Averse Q-Learning (\RAAV) use risk-averse functions and variance reduction techniques. Then, we extend the framework to a multi-agent scenario where we assume an adversary that can perturb the learning process.
We propose Risk-Averse Multi-Agent Q-Learning (RAM-Q\xspace), a multi-agent version of adversarial learning with strong assumptions and theoretical guarantees. Risk-Averse Adversarial Averaged Q-Learning (\RAAA) relaxes those assumptions and is a more practical algorithm that keeps the multi-agent adversarial component to improve robustness. Lastly, we present a theoretical result using empirical game-theory analysis on games with risk-sensitive payoffs. \section{Preliminaries} \subsection{Single-Agent Reinforcement Learning} A Markov Decision Process is defined by a set of states $\mathcal{S}$ describing the possible configurations, a set of actions $\mathcal{A}$, and a set of observations $\mathcal{O}$. The agent chooses actions according to a stochastic policy $\pi_{\theta} : \mathcal{O}\times \mathcal{A} \rightarrow [0,1]$ parameterized by $\theta$; actions produce the next state according to the state transition function $\mathcal{T}: \mathcal{S}\times\mathcal{A}\rightarrow \mathcal{S}$. The agent obtains rewards as a function of the state and the agent's action, $r:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}$, and receives a private observation correlated with the state, $\mathbf{o} : \mathcal{S}\rightarrow\mathcal{O}$. The initial states are determined by a distribution $d_0 : \mathcal{S}\rightarrow [0,1]$. \subsection{Multi-Agent Reinforcement Learning} In multi-agent RL, each agent $i$ aims to maximize its own total expected return; e.g., for a Markov game with two agents and a given initial state distribution $d_0$, the discounted returns are, respectively: \begin{align} J^1(d_0, \pi^1, \pi^2)= \sum_{t=0}^{\infty}\gamma^t\expectation\left[r_t^1 | \pi^1, \pi^2, d_0\right]\\ J^2(d_0, \pi^1,\pi^2)=\sum_{t=0}^{\infty}\gamma^t\expectation\left[r_t^2 | \pi^1, \pi^2, d_0\right] \end{align} where $\gamma$ is a discount factor and $r_t^1, r_t^2$, $t = 1,2,...$, are the immediate rewards of agents 1 \& 2, respectively.
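The discounted returns $J^1, J^2$ above can be estimated by averaging discounted reward sums over simulated episodes. The sketch below does this for a deliberately minimal stand-in, a stateless repeated matching-pennies game with uniform random policies; the game, the policies, and all constants are hypothetical choices for illustration.

```python
import random

def discounted_return(rewards, gamma):
    # sum_t gamma^t * r_t
    return sum(gamma**t * r for t, r in enumerate(rewards))

def estimate_J(policy1, policy2, reward_fn, gamma=0.95,
               horizon=200, episodes=2000, seed=0):
    """Monte Carlo estimate of J^1 for fixed stochastic policies."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(episodes):
        rewards = [reward_fn(policy1(rng), policy2(rng)) for _ in range(horizon)]
        total += discounted_return(rewards, gamma)
    return total / episodes

# Agent 1 wins (+1) on a match and loses (-1) otherwise; both play uniformly.
reward = lambda a1, a2: 1.0 if a1 == a2 else -1.0
uniform = lambda rng: rng.choice([0, 1])
J1 = estimate_J(uniform, uniform, reward)
```

Under uniform play every stage reward has zero mean, so the estimate of $J^1$ hovers near zero; in a zero-sum game the corresponding estimate of $J^2$ would be its negative.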
A Nash equilibrium for a two-agent Markov game is defined as follows. \begin{definition}\cite{MultiAgentQLearning98} A Nash equilibrium point of the game $(J^1, J^2)$ is a pair of strategies $(\pi_*^1, \pi_*^2)$ such that for all $s\in{\mathcal{S}}$, \begin{align} J^1(s, \pi^1_*, \pi^2_*)\geq J^1(s, \pi^1, \pi^2_*)\quad \forall \pi^1\\ J^2(s, \pi^1_*, \pi^2_*)\geq J^2(s, \pi^1_*, \pi^2)\quad \forall \pi^2 \end{align} \end{definition} \subsubsection{Multi-agent Extension of MDP} A Markov game for $N$ agents is defined by a set of states $\mathcal{S}$ describing the possible configurations of all agents, a set of actions $\mathcal{A}_1, ..., \mathcal{A}_{N}$, and a set of observations $\mathcal{O}_1, ..., \mathcal{O}_N$ for each agent. To choose actions, each agent $i$ uses a stochastic policy $\pi_{\theta_i} : \mathcal{O}_i\times \mathcal{A}_i \rightarrow [0,1]$ parameterized by $\theta_i$; the joint actions produce the next state according to the state transition function $\mathcal{P}: \mathcal{S}\times\mathcal{A}_1\times...\times\mathcal{A}_N\rightarrow \mathcal{S}$. Each agent $i$ obtains rewards as a function of the state and the agents' actions, $r_i:\mathcal{S}\times\mathcal{A}_1\times...\times\mathcal{A}_N\rightarrow\mathbb{R}$, and receives a private observation correlated with the state, $\mathbf{o}_i : \mathcal{S}\rightarrow\mathcal{O}_i$. The initial states are determined by a distribution $d_0 : \mathcal{S}\rightarrow [0,1]$. In multi-agent Q-learning, the Q tables are defined over joint actions for each of the agents. Each agent receives rewards according to its own reward function, with transitions dependent on the actions chosen jointly by the set of agents. \subsection{Empirical Game Theory} We analyze the multi-agent behaviours in a trading market using empirical game theory, where a \emph{player} corresponds to an agent and a \emph{strategy} corresponds to a learning algorithm.
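The Nash condition defined above can be verified mechanically in its simplest, stateless special case, a two-agent matrix game: a profile is an equilibrium exactly when no agent gains by a unilateral deviation. A minimal sketch with a made-up zero-sum payoff matrix (matching pennies):

```python
# Payoffs for a 2x2 zero-sum matrix game (matching pennies, illustrative).
A = [[1, -1], [-1, 1]]   # agent 1's payoff J^1
B = [[-1, 1], [1, -1]]   # agent 2's payoff J^2

def expected_payoff(M, x, y):
    # Expected payoff of mixed strategies x (rows) and y (columns) under M.
    return sum(x[i] * y[j] * M[i][j] for i in range(2) for j in range(2))

def deviation_gain(x, y):
    """Largest gain any agent obtains by a unilateral pure-strategy deviation.
    It is zero at a Nash equilibrium (and at most eps at an eps-Nash one)."""
    pures = ([1.0, 0.0], [0.0, 1.0])
    gain1 = max(expected_payoff(A, e, y) for e in pures) - expected_payoff(A, x, y)
    gain2 = max(expected_payoff(B, x, e) for e in pures) - expected_payoff(B, x, y)
    return max(gain1, gain2)

uniform = [0.5, 0.5]
```

`deviation_gain(uniform, uniform)` is zero, confirming the uniform mixture as the equilibrium of this game, while a biased profile such as `([0.8, 0.2], uniform)` leaves a strictly positive gain for the opponent.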
Then, in a $p$-player game, players are involved in a single-round strategic interaction. Each player $i$ chooses a strategy $\pi^i$ from a set of $k$ strategies $S^i = \{\pi_1^i, ..., \pi_k^i \}$ and receives a stochastic payoff $R^i (\pi^1, ..., \pi^p ): S^1\times S^2\times...\times S^p\rightarrow \mathbb{R}$. The underlying game that is usually studied is $r^i (\pi^1, ..., \pi^p) = \mathbb{E}[R^i (\pi^1, ..., \pi^p)]$. In general, we denote the payoff of player $i$ as $\mu^i$ and $\mathbf{x}^{-i}$ as the joint strategy of all players except for player $i$. \begin{definition} A joint strategy $\mathbf{x} = (x^1, ..., x^p) = (x^i, \mathbf{x}^{-i})$ is a Nash equilibrium if for all $i$: \begin{align} \mathbb{E}_{\bf{\pi}\sim\mathbf{x}}\left[\mu^i (\pi)\right] = \underset{\pi^i} \max~\mathbb{E}_{\pi^{-i}\sim\mathbf{x}^{-i}}\left[\mu^i (\pi^i, \mathbf{\pi}^{-i})\right] \end{align} \end{definition} \begin{definition} A joint strategy $\mathbf{x} = (x^1, ..., x^p) = (x^i, \mathbf{x}^{-i})$ is an $\epsilon$-Nash equilibrium if for all $i$: \begin{align}\label{eq:NashEquilibrium} \underset{\pi^i} \max~\mathbb{E}_{\pi^{-i}\sim\mathbf{x}^{-i}}\left[\mu^i (\pi^i, \mathbf{\pi}^{-i})\right]-\mathbb{E}_{\bf{\pi}\sim\mathbf{x}}\left[\mu^i (\pi)\right]\le \epsilon \end{align} \end{definition} Evolutionary dynamics have been used to analyze multi-agent interactions. A well-known model is replicator dynamics (RD)~\cite{weibull1997evolutionary}, which describes how a population evolves over time under evolutionary pressure (in our analysis, a population is composed of learning algorithms). RD assumes that reproductive success is determined by interactions and their outcomes. For example, the population share of a certain type increases if it has a higher \emph{fitness} (in our case, the expected return in a certain interaction) than the population average; otherwise that population share decreases.
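The replicator dynamics themselves fit in a few lines. The sketch below implements the single-population dynamics $\dot{x}_i = x_i\left[(Ax)_i - x^\top A x\right]$ for the Rock-Paper-Scissors payoffs and Euler-integrates them; the step size and starting point are arbitrary choices for illustration.

```python
# Rock-Paper-Scissors payoff matrix for the row player.
A = [[0, -1, 1],
     [1, 0, -1],
     [-1, 1, 0]]

def rd_velocity(x):
    # x_i' = x_i * ((Ax)_i - x^T A x)
    Ax = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
    avg = sum(x[i] * Ax[i] for i in range(3))
    return [x[i] * (Ax[i] - avg) for i in range(3)]

def euler_trajectory(x, dt=0.01, steps=5000):
    # Forward-Euler integration of the replicator dynamics.
    for _ in range(steps):
        v = rd_velocity(x)
        x = [x[i] + dt * v[i] for i in range(3)]
    return x

centroid = [1 / 3, 1 / 3, 1 / 3]
x_end = euler_trajectory([0.5, 0.3, 0.2])
```

The velocity vanishes at the centroid, matching the equilibrium visible in \cref{fig:directional_example_rps}, while trajectories started elsewhere cycle around it; since the velocity components sum to zero, the state stays on the simplex up to floating-point error.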
To view the dominance of different strategies, it is common to plot the directional field of the payoff tables under the replicator dynamics for a number of strategy profiles $\mathbf{x}$ in the simplex strategy space~\cite{tuyls2020bounds}. In \cref{sec:risk_and_robustness_evaluation} we present results in this format evaluating our proposed algorithms. \section{Related Work} Our work is mainly situated in the broad area of safe RL~\cite{garcia2015comprehensive}. In particular, a subgroup of works aims to improve the robustness of learned policies by assuming two opposing learning processes: one that aims to disturb as much as possible and another that tries to control the perturbations~\cite{morimoto2005robust}. This approach has recently been adapted to work with neural networks in the context of deep RL~\cite{pinto2017robust}. Moreover, Risk-Averse Robust Adversarial Reinforcement Learning (RARL)~\cite{pan2019risk} extended this idea by combining it with Averaged DQN~\cite{anschel2017averaged}, an algorithm that averages the previous $k$ estimates to stabilize the training process. RARL trains two agents -- a protagonist and an adversary -- in parallel; their goals are, respectively, to maximize/minimize the expected return and to minimize/maximize the variance of the expected return. RARL showed good experimental results, but lacked theoretical guarantees and insight into its variance reduction and robustness. Multi-agent Q-learning \cite{MultiAgentQLearning98} is useful for finding the optimal strategy when there exists a unique Nash equilibrium in general-sum stochastic games, and this approach could also be used in adversarial RL. \citeauthor{wainwright2019variance}~(2019) proposed a variance-reduced Q-learning algorithm (V-QL), which can be seen as a variant of the SVRG algorithm in stochastic optimization~\cite{NIPS2013_ac1dd209}.
Given an algorithm that converges to $Q^*$, one of its iterates $\bar{Q}$ can be used as a proxy for $Q^*$, and the ordinary Q-learning updates are then recentered by the quantity $-\hat{{\mathcal{T}}}_k(\bar{Q}) + {\mathcal{T}}(\bar{Q})$, where $\hat{{\mathcal{T}}}_k$ is an empirical Bellman operator and ${\mathcal{T}}$ is the population Bellman operator; the latter is not computable, but an unbiased approximation of it can be used instead. This algorithm is shown to be convergent and enjoys minimax optimality up to a logarithmic factor. Lastly, another group of works proposed the use of risk-averse objective functions~\cite{mihatsch2002risk} with the Q-learning algorithm. Since these ideas are closely related to our proposed algorithms, we describe them in greater detail in the next sections. \subsection{Risk Averse Q Learning} \label{sec:RAQL} \citeauthor{shen2014risk}~(2014) proposed a Q-learning algorithm that is shown to converge to the optimum of a risk-sensitive objective function. The training scheme is the same as in Q-learning, except that in each iteration a utility function is applied to the TD error (see \cref{alg:Risk_Averse_QLearning} in Appendix). Since the goal is to optimize the expected return as well as to minimize its variance, an expected utility of the return can be used as the objective function instead: \begin{align} \label{eq:Risk_Averse_Objective} \tilde{J}_{\pi}= \frac{1}{\beta}\log\mathbb{E}_{\pi}\left[exp\left(\beta\sum_{t=0}^{\infty}\gamma^t r_t\right)\right]. \end{align} By a straightforward Taylor expansion, \cref{eq:Risk_Averse_Objective} yields \begin{align*} \expectation[\sum_{t=0}^{\infty}\gamma^t r_t] + \frac{\beta}{2}{\mathbb{V}} ar[\sum_{t=0}^{\infty}\gamma^t r_t] + O(\beta^2) \end{align*} where for $\beta<0$ the objective function is risk-averse, for $\beta=0$ (taken as a limit) it is risk-neutral, and for $\beta>0$ it is risk-seeking.
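A quick Monte Carlo check makes the mean-variance reading tangible: for the logarithmic form $\frac{1}{\beta}\log\mathbb{E}[\exp(\beta G)]$ of the exponential-utility objective and a Gaussian return $G$ (for which the expansion is exact), the objective matches $\mathbb{E}[G]+\frac{\beta}{2}\mathbb{V}ar[G]$. The return distribution and the constants below are invented for illustration.

```python
import math
import random

rng = random.Random(0)
mu, sigma, beta = 1.0, 0.5, -0.1   # hypothetical return distribution, beta < 0

samples = [rng.gauss(mu, sigma) for _ in range(100_000)]

# (1/beta) * log E[exp(beta * G)], estimated from return samples
objective = (1.0 / beta) * math.log(
    sum(math.exp(beta * g) for g in samples) / len(samples))

# Mean-variance expansion E[G] + (beta/2) * Var[G]
approximation = mu + 0.5 * beta * sigma**2
```

With $\beta<0$ the objective sits below the plain expected return, which is the risk-averse behaviour described above; flipping the sign of $\beta$ would push it above (risk-seeking).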
\citeauthor{shen2014risk}~(2014) proved that by applying a monotonically increasing concave utility function $u(x) = -exp(\beta x)$ with $\beta<0$ to the TD error, \cref{alg:Risk_Averse_QLearning} converges to the optimal point of \cref{eq:Risk_Averse_Objective}. More precisely: \begin{theorem} (Theorem 3.2, \citeauthor{shen2014risk}~2014) \label{thm:RARL_Converge} Running \cref{alg:Risk_Averse_QLearning} from an initial Q table, $Q\rightarrow Q^*$ w.p. 1, where $Q^*$ is the unique solution to \begin{align*} \expectation_{s^{\prime}}\left[u\left(r(s, a) + \gamma\cdot\underset{a}{\max}Q^*(s^{\prime},a) - Q^*(s,a)\right)\right]-x_0 = 0 \end{align*} for all $(s,a)$, where $s^{\prime}$ is sampled from ${\mathcal{T}}[\cdot|s,a]$. The corresponding policy $\pi^*$ of $Q^*$ satisfies $\tilde{J}_{\pi^*}\geq \tilde{J}_{\pi}$ for all $\pi$. \end{theorem} \subsection{Multi-Agent Q-Learning} \citeauthor{MultiAgentQLearning98}~(1998) proposed Nash-Q, a multi-agent Q-learning algorithm (\cref{alg:MultiAgent_QLearning} in Appendix) in the framework of general-sum stochastic games. When there exists a unique Nash equilibrium in the game, this algorithm is useful for finding the optimal strategy. Nash-Q assumes an agent can observe the other agents' immediate rewards and previous actions during learning. Each learning agent maintains two Q tables: one for its own Q values and one for the other agents'. \citeauthor{MultiAgentQLearning98}~(1998) showed that under strong assumptions (\cref{ass:Bimatrix_Nash_Assumption} in Appendix), Nash-Q converges to the Nash equilibrium. We leave the full version of the algorithm and the convergence theorem to \cref{sec:Multi_Agent_QLearning}. \begin{table}[ht] \caption{Comparison of related algorithms.
Our proposed algorithms are marked with \textbf{bold} and are described in \cref{sec:algorithms}.} \label{tab:comparison_algs} \scriptsize \centering \begin{tabular}{@{}p{2cm}|p{3.0cm}|p{2.5cm}@{}} \toprule \bf Algorithm & \bf Description & \bf Guarantees \\ \hline Risk averse Q-Learning~\cite{shen2014risk}& Q-Learning with a utility function applied to the TD error in the Q update & Convergence to the optimum of a risk-averse objective function\\ Variance reduced Q-learning~\cite{wainwright2019variance} & Uses an averaged estimate over multiple $Q$ tables in the Q-table updates to reduce variance & Converges to the optimum of expected return; the convergence rate is minimax-optimal up to a logarithmic factor.\\ Nash Q-learning~\cite{MultiAgentQLearning98} & Two-agent Q-Learning in the multi-agent MDP setting & Convergence to the Nash equilibrium of the two-agent game (if it exists) \\ Risk-Averse Robust Adversarial Reinforcement Learning (RARL)~\cite{pan2019risk} & Q-Learning with risk-averse/risk-seeking behaviors of protagonist/adversary with multiple $Q$ tables & No convergence guarantee \\ \bf Risk-Averse Averaged Q-Learning (\RAA) & Q-Learning with a utility function + a more stable choice of actions with multiple Q tables & Convergence to the optimum of a risk-averse objective function and reduced training variance.\\ \bf Variance Reduced Risk-Averse Q-Learning (\RAAV) & Uses an averaged estimate over multiple $Q$ tables in Q updates; applies a utility function in Q updates & No convergence guarantee \\ \bf Risk-Averse Multiagent Q-learning (RAM-Q\xspace) & Multi-agent Nash Q-Learning with a utility function + risk-averse/risk-seeking behaviors of protagonist/adversary + multiple Q tables & Convergence to the Nash equilibrium (if it exists) of the two-agent game (with risk-averse/risk-seeking payoffs, respectively)\\ \bf Risk-Averse Adversarial Averaged Q-Learning (\RAAA) & Multi-agent Q-Learning with a utility function + risk-averse/risk-seeking behaviors of protagonist/adversary + multiple Q tables & No convergence
guarantee\\ \bottomrule \end{tabular} \end{table} \section{Proposed Algorithms} \label{sec:algorithms} Here we describe our proposed algorithms, building on the results discussed in the previous sections. We first present two algorithms, \RAA and \RAAV, which use a risk-averse utility function and reduce variance by training multiple Q tables in parallel. Then, we present RAM-Q\xspace, a multi-agent algorithm that assumes an adversary which can perturb the learning process. While RAM-Q\xspace is proven to have convergence guarantees, it also needs strong assumptions that might not hold in practice. Therefore, our last proposal, \RAAA, keeps the adversarial component to improve robustness while relaxing the strong assumptions. As a summary, \cref{tab:comparison_algs} compares closely related works with our proposed algorithms. \subsection{Risk-Averse Averaged Q-Learning (\RAA) }\label{sec:RA2-Q} \begin{algorithm*}[ht] \scriptsize \caption{Risk-Averse Averaged Q-Learning (RA2-Q)} \label{alg:Risk_Averse_Averaged_QLearning} \textbf{Input :} Training steps $T$; Exploration rate $\epsilon$; Number of models $k$; risk control parameter $\lambda_P$; Utility function parameter $\beta$.\begin{spacing}{0.8} \begin{algorithmic}[1] \STATE Initialize $Q^i= \mathbf{0}$, $N^i= \mathbf{0}$, $\alpha^i = \mathbf{1}$ for all $i = 1,..., k$. \STATE Initialize Replay Buffer $RB= \emptyset$; Randomly sample an action-choosing head $H\in\{1,...,k\}$ \FOR{$t=1$ to $T$} \STATE $Q = Q^{H}$ \STATE Compute $\hat{Q}$ by \begin{align} \hat{Q}(s,a) = Q(s,a) - \lambda_P\cdot \frac{\sum_{i=1}^{k}(Q^i(s,a) - \bar{Q}(s,a))^2}{k-1} \end{align} where $\lambda_P>0$ is a constant and $\bar{Q}(s,a) = \frac{1}{k}\sum_{i=1}^{k}Q^i(s,a)$ \STATE Select action $a_t$ according to $\hat{Q}$ by applying an $\epsilon$-greedy strategy.
\STATE Execute actions and get $(s_t, a_t, r_t, s_{t+1})$, append to the replay buffer $RB = RB\cup \{(s_t, a_t, r_t, s_{t+1})\}$ \STATE Generate mask $M\in {\mathbb{R}}^k$ with i.i.d. entries $M_i\sim \mathrm{Poisson}(1)$ \FOR{$i=1,...,k$} \IF{$M_i = 1$} \STATE Update $Q^i$ by \begin{align} \label{eq:RAA_Q_Update} Q^i(s_t,a_t) = Q^i(s_t,a_t) + \alpha^i(s_t,a_t)\cdot\left[u\left(r(s_t,a_t) + \gamma\cdot\underset{a}{\max} Q^i(s_{t+1}, a) - Q^i(s_t, a_t)\right)-x_0\right] \end{align}where $u$ is a utility function; here we use $u(x) = -e^{\beta x}$ with $\beta<0$ and $x_0 = -1$ \STATE $N^i(s_t,a_t) = N^i(s_t,a_t) + 1$; Update learning rate $\alpha^i(s_t,a_t) = \frac{1}{N^i(s_t,a_t)}$. \ENDIF \ENDFOR \STATE Update $H$ by randomly sampling an integer from 1 to $k$. \ENDFOR \STATE \textbf{Return} $\frac{1}{k}\sum_{i=1}^{k}Q^i$ \end{algorithmic} \end{spacing} \end{algorithm*} Although for RAQL (\cref{alg:Risk_Averse_QLearning}) we discussed convergence to the optimal of the risk-sensitive objective function with probability 1, the proof assumes that every state is visited infinitely many times, whereas the actual training time is finite. Our main idea is that we can further reduce the training variance by choosing more risk-averse actions during the finite training process. Averaged DQN~\cite{anschel2017averaged} reduces training variance by averaging multiple Q tables in the update. In a similar spirit, our proposed \RAA also trains multiple Q tables in parallel. However, we do not directly use the same update rule, since that would break the convergence guarantee; instead, we train $k$ Q tables in parallel using \cref{eq:RAA_Q_Update} as the update rule. To select more \emph{stable} actions, we use the sample variance of the $k$ Q tables as an approximation to the true variance and then compute a risk-averse $\hat{Q}$ table and select actions according to it. A detailed description is presented in \cref{alg:Risk_Averse_Averaged_QLearning}.
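The risk-averse action-selection step above can be sketched in a few lines; the names, shapes, and toy values below are illustrative assumptions, not taken from the paper's implementation:

```python
import numpy as np

def risk_averse_q(q_tables, head, lam_p):
    """Penalize the head table by the sample variance over the k tables:
    Q_hat(s, a) = Q^H(s, a) - lambda_P * sum_i (Q^i - Q_bar)^2 / (k - 1)."""
    q = np.asarray(q_tables)             # shape (k, |S|, |A|)
    var = q.var(axis=0, ddof=1)          # unbiased sample variance, / (k - 1)
    return q[head] - lam_p * var

def select_action(q_hat, state, eps, rng):
    """Epsilon-greedy selection on the risk-averse table."""
    if rng.random() < eps:
        return int(rng.integers(q_hat.shape[1]))
    return int(np.argmax(q_hat[state]))

rng = np.random.default_rng(0)
q_tables = [rng.normal(size=(4, 3)) for _ in range(5)]   # k = 5 toy tables
q_hat = risk_averse_q(q_tables, head=0, lam_p=0.5)
action = select_action(q_hat, state=2, eps=0.1, rng=rng)
```

High-variance actions are penalized before the $\epsilon$-greedy step, so ties between similar-value actions are broken toward the more stable one.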
The objective function here is also \cref{eq:Risk_Averse_Objective}, and it can be shown that \cref{alg:Risk_Averse_Averaged_QLearning} also converges to the optimal. \begin{theorem}\label{thm:Convergence_of_RAA} Running \cref{alg:Risk_Averse_Averaged_QLearning} from an initial Q table, for all $i\in\{1,...,k\}$ we have $Q^i\rightarrow Q^*$ w.p. 1; hence the returned table $\frac{1}{k}\sum_{i=1}^{k}Q^i\rightarrow Q^*$ w.p. 1, where $Q^*$ is the unique solution to \begin{align*} &\expectation_{s^{\prime}}\left[u\left(r(s, a) + \gamma\cdot\underset{a}{\max}Q^*(s^{\prime},a) - Q^*(s,a)\right)\right]-x_0 = 0 \end{align*} for all $(s,a)$, where $s^{\prime}$ is sampled from ${\mathcal{T}}[\cdot|s,a]$. The corresponding policy $\pi^*$ of $Q^*$ satisfies $\tilde{J}_{\pi^*}\geq \tilde{J}_{\pi}\;\forall \pi$. \end{theorem} \cref{thm:Convergence_of_RAA} follows directly from \cref{thm:RARL_Converge} (see \cref{sec:proof_of_convergence_RAA} for details). \subsection{Variance Reduced Risk-Averse Q-Learning (\RAAV)}\label{sec:RAAV} \begin{algorithm*}[t] \scriptsize \caption{Variance Reduced Risk-Averse Q-Learning (\RAAV)} \label{alg:Variance_Reduced_RAQL} \textbf{Input :} Training epochs $T$; Exploration rate $\epsilon$; Number of models $k$; Epoch length $K$; Recentering sample size $N$; Utility function parameter $\beta<0$; \begin{spacing}{0.8} \begin{algorithmic}[1] \STATE Initialize $\bar{Q}_0 = \mathbf{0}$; $m = 1$; $RB = \emptyset$. \FOR{$m = 1$ to $T$} \STATE Select action according to $\bar{Q}_{m-1}$ by applying an $\epsilon$-greedy strategy \STATE Execute action and get $(s,a,r(s,a),s^{\prime})$ and update the replay buffer $RB = RB\cup (s,a,r(s,a),s^{\prime})$.
\FOR{$i = 1,..., N$} \STATE Define the empirical Bellman operator $\ddot{\mathcal{T}_i}$ as $$\ddot{\mathcal{T}_i}(Q)(s,a)=u\left(r(s,a)+\gamma\cdot\underset{a^{\prime}}{\max}\;Q(s_i,a^{\prime})\right) - x_0$$ \qquad where $s_i$ is randomly sampled from ${\mathcal{T}}[\cdot|s,a]$; $u$ is the utility function $u(x) = -e^{\beta x}$ with $\beta<0$, and $x_0 = -1$ \ENDFOR \STATE Define $\tilde{\mathcal{T}}_{N}(\bar{Q}_{m-1})=\frac{1}{N}\sum_{i\in\mathcal{D}_{N}}\ddot{\mathcal{T}_i}(\bar{Q}_{m-1})$, where $\mathcal{D}_{N}$ is a collection of $N$ i.i.d. samples (i.e., matrices with samples for each state-action pair $(s,a)$ from $RB$). \STATE Define $Q_1 = \bar{Q}_{m-1}$. \FOR{$k = 1,..., K$} \STATE Compute stepsize $\lambda_k = \frac{1}{1+(1-\gamma)k}$ \STATE \begin{align}\label{eq:Update_Rule} Q_{k+1} = (1-\lambda_k)\cdot Q_{k} + \lambda_k\cdot\left[\ddot{\mathcal{T}_k}(Q_k)-\ddot{\mathcal{T}_k}(\bar{Q}_{m-1}) + \tilde{\mathcal{T}}_{N}(\bar{Q}_{m-1})\right]. \end{align} where $\ddot{\mathcal{T}_k}$ is an empirical Bellman operator constructed using a sample not in $\mathcal{D}_N$, so the random operators $\ddot{\mathcal{T}_k}$ and $\tilde{\mathcal{T}}_N$ are independent \ENDFOR \STATE $\bar{Q}_{m} = Q_{K+1}$; $m$ = $m +1$ \ENDFOR \STATE \textbf{Return} $\bar{Q}_{m}$ \end{algorithmic} \end{spacing} \end{algorithm*} \citeauthor{wainwright2019variance}~(2019) proposed Variance-Reduced Q-learning, which trains multiple Q tables in parallel and uses the averaged Q table in the update rule; it is shown to guarantee a minimax-optimal convergence rate. Inspired by that work, we propose our \RAAV (\cref{alg:Variance_Reduced_RAQL}), which applies a utility function to the TD error during Q updates to further reduce variance. To select more \emph{stable} actions during training, we use the sample variance of the $k$ Q tables as an approximation to the true variance and then compute a risk-averse $\hat{Q}$ table and select actions according to it.
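A single inner iteration of \cref{eq:Update_Rule} can be sketched as follows; the tabular sampler and parameter values are stand-in assumptions, not the paper's code:

```python
import numpy as np

def empirical_bellman(q, r, next_states, beta=-0.5, gamma=0.9, x0=-1.0):
    """One-sample empirical operator: u(r + gamma * max_a' Q(s', a')) - x0,
    with u(x) = -exp(beta * x), beta < 0; next_states[s, a] is the sampled s'."""
    target = r + gamma * q.max(axis=1)[next_states]
    return -np.exp(beta * target) - x0

def vr_step(q_k, q_bar, t_tilde_bar, r, next_states, k, gamma=0.9):
    """Q_{k+1} = (1 - l_k) Q_k + l_k [T_k(Q_k) - T_k(Q_bar) + T_tilde_N(Q_bar)],
    with stepsize l_k = 1 / (1 + (1 - gamma) k)."""
    lam = 1.0 / (1.0 + (1.0 - gamma) * k)
    recentred = (empirical_bellman(q_k, r, next_states)
                 - empirical_bellman(q_bar, r, next_states)
                 + t_tilde_bar)
    return (1.0 - lam) * q_k + lam * recentred
```

When $Q_k = \bar{Q}_{m-1}$ the two one-sample terms cancel, so the update is driven by the lower-variance recentred estimate $\tilde{\mathcal{T}}_N(\bar{Q}_{m-1})$.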
We discuss more details in \cref{sec:Discussion}. \subsection{Multi-Agent Risk-Averse Q-Learning (RAM-Q\xspace)} \begin{algorithm*}[ht] \scriptsize \caption{Risk-Averse Multi-Agent Q-Learning (RAM-Q)} \label{alg:MultiAgent_QLearning_RiskAverse} \textbf{Input :} Training steps $T$; Exploration rate $\epsilon$; Number of models $k$; Utility function parameters $\beta^P<0; \beta^A > 0$. \begin{spacing}{0.8} \begin{algorithmic}[1] \STATE For all $(s,a_P,a_A)$, initialize $Q^P(s,a_P,a_A) = 0$; $Q^A(s,a_P,a_A) = 0$; $N(s,a_P,a_A) = 0$. \FOR{$t = 1$ to $T$} \STATE At state $s_t$, compute $\pi^P(s_t)$, $\pi^A(s_t)$, which form a mixed-strategy Nash equilibrium solution of the bimatrix game $(Q^P(s_t), Q^A(s_t))$. \STATE Choose action $a_t^P$ based on $\pi^P(s_t)$ according to an $\epsilon$-greedy strategy, and similarly choose action $a_t^A$ based on $\pi^A(s_t)$ \STATE Observe $r_t^P, r_t^A$ and $s_{t+1}$. \STATE At state $s_{t+1}$, compute $\pi^P(s_{t+1})$,$\pi^A(s_{t+1})$, which form a mixed-strategy Nash equilibrium solution of the bimatrix game $(Q^P(s_{t+1}), Q^A(s_{t+1}))$. \STATE $N(s_t,a^P_t, a^A_t)= N(s_t,a^P_t, a^A_t) + 1$ \STATE Set learning rate $\alpha_t = \frac{1}{N(s_t,a^P_t, a^A_t)}$. \STATE Update $Q^P, Q^A$ such that \begin{align}\label{eq:UpdateRule_MultiAgent_RiskAverse_1} Q^P(s_t,a^P_t, a^A_t) = Q^P(s_t,a^P_t, a^A_t) + \alpha_t\cdot \left[u^P\left(r_t^P + \gamma\cdot\pi^P(s_{t+1})Q^P(s_{t+1})\pi^A(s_{t+1})-Q^P(s_t,a^P_t, a^A_t)\right) - x_0\right] \end{align} where $u^P$ is a utility function; here we use $u^P(x) = -e^{\beta^P x}$ with $\beta^P<0$ and $x_0 = -1$. \begin{align}\label{eq:UpdateRule_MultiAgent_RiskAverse_2} Q^A(s_t,a^P_t, a^A_t) = Q^A(s_t,a^P_t, a^A_t) + \alpha_t\cdot\left[ u^A\left(r_t^A + \gamma\cdot\pi^P(s_{t+1})Q^A(s_{t+1})\pi^A(s_{t+1})-Q^A(s_t,a^P_t, a^A_t)\right) - x_1\right] \end{align} where $u^A$ is a utility function; here we use $u^A(x) = e^{\beta^A x}$ with $\beta^A>0$ and $x_1 = 1$.
\ENDFOR \STATE \textbf{Return} $(Q^P, Q^A)$ \end{algorithmic} \end{spacing} \end{algorithm*} In complex scenarios such as financial markets, learned RL policies can be brittle. To improve robustness, we adapt ideas from adversarial learning to a multi-agent learning problem similar to \cite{MultiAgentQLearning98}. In the adversarial setting we assume there are two learning processes happening simultaneously, a main protagonist \emph{(P)} and an adversary \emph{(A)}: the goal of the protagonist is to maximize the total return while minimizing its variance; the goal of the adversary is to minimize the protagonist's total return while maximizing its variance. Here, we assume that each agent can observe its opponent's immediate reward. Let $r_t^P$ be the immediate reward received by the protagonist at step $t$, and let $r_t^A$ be the immediate reward received by the adversary at step $t$. Then we choose the objective functions as follows. The objective function for the protagonist is \begin{align}\label{eq:RAM-Q_Objective_protagonist} \tilde{J}_{\pi}^{P} = \frac{1}{\beta^P}\log\expectation_{\pi}\left[\exp\left(\beta^P\sum_{t=0}^{\infty}\gamma^t\cdot r_t^P \right)\right] \qquad \beta^P < 0 \end{align} and a Taylor expansion of \cref{eq:RAM-Q_Objective_protagonist} yields \begin{align*} \tilde{J}_{\pi}^{P} &= \expectation\left[\sum_{t=0}^{\infty}\gamma^t\cdot r_t^P\right] + \frac{\beta^P}{2}{\mathbb{V}} ar\left[\sum_{t=0}^{\infty}\gamma^t\cdot r_t^P\right] + O((\beta^P)^2). \end{align*} Similarly, the objective function for the adversary is \begin{align}\label{eq:RAM-Q_Objective_adversary} \tilde{J}_{\pi}^{A} = \frac{1}{\beta^{A}}\log\expectation_{\pi}\left[\exp\left(\beta^{A}\sum_{t=0}^{\infty}\gamma^t r_t^A\right)\right] \qquad \beta^A >0 \end{align} and a Taylor expansion of \cref{eq:RAM-Q_Objective_adversary} yields \begin{align*} \tilde{J}_{\pi}^{A} =& \expectation\left[\sum_{t=0}^{\infty}\gamma^t\cdot r_t^A\right] +\frac{\beta^{A}}{2}{\mathbb{V}} ar\left[\sum_{t=0}^{\infty}\gamma^t\cdot r_t^A\right] + O((\beta^A)^2).
\end{align*} In the same spirit as \cite{MultiAgentQLearning98}, we propose \cref{alg:MultiAgent_QLearning_RiskAverse}, and the following guarantee holds: \begin{theorem}\label{thm:RAMconvergenceRate} If the two-agent game $(\tilde{J}^P, \tilde{J}^A)$ has a Nash equilibrium solution, then running \cref{alg:MultiAgent_QLearning_RiskAverse} from initial Q tables $Q^P, Q^A$ will converge to $Q_P^*$ and $Q_A^*$ w.p. 1, such that the Nash equilibrium solution $(\pi^{P}_*, \pi^{A}_*)$ of the bimatrix game $(Q_P^*, Q_A^*)$ is the Nash equilibrium solution of the game $(\tilde{J}^P_{\pi}, \tilde{J}^A_{\pi})$, and the equilibrium payoffs are $\tilde{J}^P(s,\pi^{P}_*, \pi^{A}_*)$, $\tilde{J}^A(s,\pi^{P}_*, \pi^{A}_*)$. \end{theorem} Although \cref{thm:RAMconvergenceRate} gives a solid convergence guarantee, it suffers from drawbacks such as high computational cost and idealized assumptions: e.g., in trading markets there may not exist a Nash equilibrium for $(\tilde{J}^P, \tilde{J}^A)$, and during training the assumptions about the Nash equilibrium (\cref{ass:Bimatrix_Nash_Assumption} in the Appendix) break easily~\cite{bowling2000convergence}. Hence, we design another novel algorithm, \RAAA, which relaxes these assumptions (at the expense of losing theoretical guarantees) while enhancing robustness and performing well in practice. \subsection{Risk-Averse Adversarial Averaged Q-Learning (\RAAA)} \label{sec:RA3-Q} \begin{algorithm}[t] \scriptsize \caption{Risk-Averse Adversarial Averaged Q-Learning (\RAAA) \textit{Short Version}} \label{alg:Risk_Averse_Adversarial_Averaged_QLearning} \textbf{Input :} Training steps $T$; Exploration rate $\epsilon$; Number of models $k$; Risk control parameters $\lambda_P, \lambda_A$; Utility function parameters $\beta^P < 0; \beta^A > 0$. \begin{spacing}{0.8} \begin{algorithmic}[1] \STATE Initialize $Q_P^i, Q_A^i\;\forall i = 1,...,k$; $N = \mathbf{0}\in{\mathbb{R}}^{|{\mathcal{S}}|\times|{\mathcal{A}}|\times|{\mathcal{A}}|}$.
Randomly sample action-choosing heads $H_P,H_A\in\{1,...,k\}$. \FOR{$t=1$ to $T$} \STATE Set $Q_P = Q_P^{H_P}$. Then compute $\hat{Q}_P$, the risk-averse protagonist $Q$ table, from the $k$ Q tables $Q_P^i, i = 1,...,k$. \STATE Set $Q_A = Q_A^{H_A}$. Then compute $\hat{Q}_A$, the risk-seeking adversary $Q$ table, from the $k$ Q tables $Q_A^i, i = 1,...,k$. \STATE Select actions $a_P,a_A$ according to $\hat{Q}_P,\hat{Q}_A$ by applying an $\epsilon$-greedy strategy. \STATE Generate mask $M\in{\mathbb{R}}^k$ with i.i.d. entries $M_i\sim \mathrm{Poisson}(1)$ and update $Q_P^i, Q_A^i, i = 1,..., k$ according to mask $M$ using update rules \cref{eq:RA3QProtagonist_UpdateRule} and \cref{eq:RA3QAdversary_UpdateRule}. \STATE Update $H_P$ and $H_A$ \ENDFOR \STATE \textbf{Return} $\frac{1}{k}\sum_{i=1}^{k}Q_P^i$; $\frac{1}{k}\sum_{i=1}^{k}Q_A^i$. \end{algorithmic} \end{spacing} \end{algorithm} We start from the same objective functions for the protagonist, \cref{eq:RAM-Q_Objective_protagonist}, and adversary, \cref{eq:RAM-Q_Objective_adversary}. In order to optimize $\tilde{J}^{P}$ and $\tilde{J}^{A}$, we apply utility functions to TD errors when updating Q tables and, combining the idea of training multiple Q tables in parallel as in \cref{alg:Risk_Averse_Averaged_QLearning} to select actions with low variance, we obtain the novel \cref{alg:Risk_Averse_Adversarial_Averaged_QLearning} (full version \cref{alg:Risk_Averse_Adversarial_Averaged_QLearning_fullversion} in \cref{sec:Discussion_of_RAAA}). Note that \RAAA combines (i) risk aversion via utility functions, (ii) variance reduction by training multiple Q tables, and (iii) robustness via adversarial learning. Intuitively, as the adversary gets stronger, the protagonist faces harder challenges, which enhances robustness.
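The mean-variance reading of the objectives above can be sanity-checked numerically. The sketch below uses the log certainty-equivalent $\frac{1}{\beta}\log\mathbb{E}[e^{\beta X}]$, for which the expansion $\mathbb{E}[X] + \frac{\beta}{2}\mathbb{V}ar[X] + O(\beta^2)$ holds (and is exact for Gaussian $X$); the toy return distribution is an assumption for illustration:

```python
import numpy as np

def certainty_equivalent(returns, beta):
    """(1 / beta) * log E[exp(beta * X)], estimated from samples."""
    return np.log(np.mean(np.exp(beta * returns))) / beta

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=1.0, size=200_000)  # toy discounted returns
beta = -0.1                                       # risk-averse protagonist
lhs = certainty_equivalent(x, beta)
rhs = x.mean() + 0.5 * beta * x.var()             # mean minus variance penalty
```

With $\beta < 0$ the objective sits below the plain expected return, which is exactly the variance penalty the protagonist optimizes; the adversary's $\beta^A > 0$ flips the sign.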
Compared to \cref{alg:MultiAgent_QLearning_RiskAverse}, where the returned policy $(\pi^P, \pi^A)$ is a Nash equilibrium of $(\tilde{J}^P, \tilde{J}^A)$, \cref{alg:Risk_Averse_Adversarial_Averaged_QLearning} does not have a convergence guarantee; however, it has several practical advantages, including computational efficiency, simplicity (no strong assumptions), and more stable actions during training. For a longer discussion see \cref{sec:Discussion} and \cref{sec:Discussion_of_RAAA}. \iffalse \begin{theorem}\label{thm:RAAAconvergenceRate} Running \cref{alg:Risk_Averse_Adversarial_Averaged_QLearning} from an initial Q table, $\frac{1}{k}\sum_{i=1}^{k}Q^{i}_{P}\rightarrow Q^{P*}$ w.p. 1, where $Q^{P*}$ is a solution to \begin{align*} \expectation_{s_t,a_P,a_A} & \left[u^P\left(r_t^P +\gamma\cdot\underset{a}{\max}\;Q^{P*}(s_{t+1},a_P,a_A) - Q^{P*}(s_t,a_P,a_A)\right) \right] = x_0 \end{align*} $\forall (s_t,a_P,a_A)$. Where $s_{t+1}$ is sampled from ${\mathcal{P}}[\cdot|s_t, a_P, a_A]$. And the corresponding policy $\pi^{*}_{P}$ of $Q^{P*}$ satisfies $\tilde{J}_{\pi^{*}_{P}}\geq \tilde{J}_{\pi}\;\forall \pi$. And we also have $\frac{1}{k}\sum_{i=1}^{k}Q^{i}_{A}\rightarrow Q^{A*}$ w.p. 1, where $Q^{A*}$ is a solution to \begin{align*} \expectation_{s_{t},a_P,a_A} & \left[u^A\left( r_t^A + \gamma\cdot\underset{a}{\max}\;Q^{P*}(s_{t+1},a_P,a_A) - Q^{P*}(s_t,a_P,a_A)\right)\right] = x_1 \end{align*} $\forall (s_t,a_P,a_A)$. Where $s_{t+1}$ is sampled from ${\mathcal{P}}[\cdot|s_t, a_P, a_A]$. And the corresponding policy $\pi^{*}_{A}$ of $Q^{A*}$ satisfies $\tilde{J}_{\pi^{*}_{A}}^A\geq \tilde{J}_{\pi}^A\;\forall \pi$. \end{theorem} \fi \section{Performance Evaluated by Empirical Game Theory} When the environment is populated by many learning agents, how do we evaluate their performance and decide which strategy is the best? Although different approaches can be used, we focus on empirical game theory (EGT) to address this question.
In EGT each agent is a player involved in rounds of strategic interaction (games). Through meta-game analysis, we can evaluate the superiority of each strategy. Our contribution is to theoretically prove that the Nash equilibrium of the risk-averse meta-game is an approximation of the Nash equilibrium of the population game; to our knowledge, this is the first work to perform this type of risk-averse analysis. \subsection{Replicator dynamics} In EGT, we can visualize the dominance of strategies by plotting the meta-game payoff tables together with the replicator dynamics. A meta-game payoff table can be seen as a combination of two matrices $(N|R)$: each row $N_i$ contains a discrete distribution of $p$ players over $k$ strategies, i.e., a discrete profile $(n_{\pi_1}, ..., n_{\pi_k})$ indicating exactly how many players play each strategy, with $\sum_{j}n_{\pi_j} = p$; the corresponding strategy profile is $\mathbf{u} = \left(\frac{n_{\pi_1}}{p}, ..., \frac{n_{\pi_k}}{p}\right)$. Each row $R_i$ captures the rewards corresponding to the rows in $N$. For example, for a game $A$ with 2 players and 3 strategies $\{\pi_1, \pi_2, \pi_3\}$ to choose from, the meta-game payoff table can be constructed as follows: on the left side of the table, we list all possible combinations of strategies. If there are $p$ players and $k$ strategies, then there are $\binom{p+k-1}{p}$ rows; hence, in game $A$ there are 6 rows. See \cref{sec:meta_game_examples} for a concrete example. Once we have a meta-game payoff table and the replicator dynamics, a directional field plot is computed where arrows in the strategy space indicate the direction of flow, or change, of the population composition over the strategies (see \cref{sec:meta_game_examples} for two examples of directional field plots in multi-agent problems). In \cref{sec:risk_and_robustness_evaluation} we present trading market experiments and results based on meta-game analysis with the performance of RAQL, \RAA and \RAAV.
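The row count $\binom{p+k-1}{p}$ is the number of multisets of $p$ players over $k$ strategies and is easy to enumerate; a quick check for the 2-player, 3-strategy example:

```python
from itertools import combinations_with_replacement
from math import comb

p, k = 2, 3  # players, strategies
profiles = list(combinations_with_replacement(range(k), p))
# Each row N_i counts how many players picked each strategy.
rows = [[profile.count(j) for j in range(k)] for profile in profiles]

assert len(rows) == comb(p + k - 1, p) == 6
assert all(sum(row) == p for row in rows)
```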
\subsection{Nash Equilibrium with risk neutral payoff} Previously, \citeauthor{tuyls2020bounds}~(2020) showed that for a game $r^i (\pi^1, ..., \pi^p) = \expectation [R^i(\pi^1, ..., \pi^p)]$ with a meta-payoff (empirical payoff) $\hat{r}^i (\pi^1, ..., \pi^p)$, the Nash equilibrium of $\hat{r}$ is an approximation of the Nash equilibrium of $r$. \begin{lemma}\cite{tuyls2020bounds} \label{lem:approx_equili_normalgame} If $\mathbf{x}$ is a Nash Equilibrium for the game $\hat{r}^i (\pi^1, ..., \pi^p)$, then it is a $2\epsilon$-Nash equilibrium for the game $r^i (\pi^1, ..., \pi^p)$, where $\epsilon = \sup_{\pi,i}|\hat{r}^{i}(\pi) - r^i (\pi)|$. \end{lemma} \cref{lem:approx_equili_normalgame} implies that if, for each player, we can bound the estimation error of the empirical payoff, then we can use the Nash equilibrium of the meta-game as an approximation of the Nash equilibrium of the game. \subsection{Risk averse payoff EGT} Recall that our objective is to consider a risk-averse payoff to evaluate strategies. Hence, instead of letting $$r^i (\pi^1, ..., \pi^p) = \mathbb{E}[R^i (\pi^1, ..., \pi^p)],$$ we choose $$h^i(\pi^1, ..., \pi^p) = \mathbb{E}[R^i (\pi^1, ..., \pi^p)] - \beta\cdot\mathbb{V}ar[R^i (\pi^1, ..., \pi^p)]$$ (where $\beta>0$) as the game payoff. Moreover, we use\begin{align}\label{eq:Risk_Averse_Payoff} \hat{h}^i(\pi^1, ..., \pi^p) = \bar{R^i} - \beta\cdot \left[\frac{1}{n-1}\sum_{j=1}^{n}\left(R^i_j - \bar{R^i}\right)^2\right] \end{align} as the meta-game payoff, where $\bar{R^i} = \frac{1}{n}\sum_{j=1}^{n}R^i_j$ and $R_j^i$ is the stochastic payoff of player $i$ in the $j$-th experiment. To our knowledge, there is no previous work on empirical game theory analysis with risk-sensitive payoffs. Below we give the first theoretical analysis showing that, for our risk-averse payoff game, we can still approximate the Nash equilibrium via the meta-game.
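The empirical payoff \cref{eq:Risk_Averse_Payoff} is just a sample mean minus a scaled unbiased sample variance; a direct sketch (the function name is illustrative):

```python
import numpy as np

def risk_averse_payoff(rewards, beta):
    """h_hat = mean(R) - beta * unbiased sample variance (1/(n-1) normalization)."""
    r = np.asarray(rewards, dtype=float)
    return r.mean() - beta * r.var(ddof=1)
```

For constant rewards the variance term vanishes and the payoff reduces to the mean, matching the risk-neutral payoff of \cref{lem:approx_equili_normalgame}.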
\begin{theorem} \label{thm:approx_equili_riskaversegame} Under \cref{ass:stochastic_reward_bounded}, consider a Normal Form Game with $p$ players, where each player $i$ chooses a strategy $\pi^i$ from a set of strategies $S^i = \{\pi^i_1, ..., \pi^i_k\}$ and receives a meta payoff $\hat{h}^i(\pi^1, ..., \pi^p)$ (\cref{eq:Risk_Averse_Payoff}). If $\mathbf{x}$ is a Nash Equilibrium for the game $\hat{h}^i (\pi^1, ..., \pi^p)$, then it is a $2\epsilon$-Nash equilibrium for the game $h^i (\pi^1, ..., \pi^p)$ with probability $1-\delta$ if we play the game $n$ times, where \begin{align} \begin{split} n \ge \max \left\{ -\frac{8R^2}{\epsilon^2}\log\left[\frac{1}{4}\left(1-(1-\delta)^{\frac{1}{|S^1|\times ...\times |S^p|\times p}}\right)\right], \right. \\ \left. \frac{64\beta^2\omega^2\cdot\Gamma(2)}{\epsilon^2\left[1-(1-\delta)^{\frac{1}{|S^1|\times...\times |S^p|\times p}}\right]}\right\} \end{split} \end{align} \end{theorem} \section{Experiments} \subsection{Setup} Our experiments use the open-sourced ABIDES~\cite{byrd2019abides} market simulator in a simplified setting. The environment is generated by replaying publicly available real trading data for a single stock ticker.\footnote{https://lobsterdata.com/info/DataSamples.php} The setting is composed of one non-learning agent that replays the market deterministically~\cite{balch2019evaluate} and learning agents. The learning agents considered are: RAQL, \RAA, \RAAV, and \RAAA. We follow a similar setting to existing implementations in ABIDES\footnote{https://github.com/abides-sim/abides/blob/\\master/agent/examples/QLearningAgent.py} where the state space is defined by two features: current holdings and volume imbalance. Agents take one action at every time step (every second), selecting among \emph{buy/sell} with limit price $base + i\cdot K$, where $i \in\{1,2,\ldots,6\}$, or \emph{do nothing}.
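The resulting discrete action space (six buy levels, six sell levels, plus do nothing) can be enumerated directly; `base` and `K` are placeholders for the actual price parameters used in the simulator:

```python
def build_actions(base, K):
    """Enumerate the 13 trading actions: buy/sell at base + i*K, i = 1..6, plus hold."""
    actions = [("buy", base + i * K) for i in range(1, 7)]
    actions += [("sell", base + i * K) for i in range(1, 7)]
    actions.append(("hold", None))
    return actions
```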
The immediate reward is defined as the change in the value of our portfolio (mark-to-market) relative to the previous time step. Our comparisons are in terms of the Sharpe ratio,\footnote{We could have compared in terms of the objective functions (e.g., \cref{eq:RAA_Q_Update}) but instead we used the Sharpe ratio, which is more common in practice.} which is a widely used measure in trading markets. \subsection{Risk and robustness evaluation} \label{sec:risk_and_robustness_evaluation} \begin{table}[] \scriptsize \caption{Meta-payoff of 2 players, 3 strategies, respectively RAQL~\cite{shen2014risk}, \RAA and \RAAV over 80 simulations. The return used here is the Sharpe ratio.} \label{table:MetaPayoff_oneAgentAlgos} \centering \begin{tabular}{c c c|c c c} \toprule $N_{i1}$ & $N_{i2}$ & $N_{i3}$ & $R_{i1}$ & $R_{i2}$ & $R_{i3}$\\ \hline 2 & 0 & 0 &0.9130 & 0 & 0\\ 1 & 1 & 0 & 0.7311 & 0.7970 & 0\\ 0 & 2 & 0 & 0 &1.0298 & 0 \\ 1 & 0 & 1 & 0.6791 & 0 & 1.0786\\ 0 & 0 & 2 & 0 & 0 & 2.2177 \\ 0 & 1 & 1 & 0 & 0.7766 & 1.4386\\ \bottomrule \end{tabular} \end{table} \begin{figure} \centering \subfigure[]{ \includegraphics[width = 3.85cm]{DirectionalwithSpeed.png} } \subfigure[]{ \includegraphics[width = 3.85cm]{DirectionFieldTraject.png} } \caption{(a) Directional field plot and (b) trajectory plot of the simplex of 3 strategies based on the meta-game payoff from \cref{table:MetaPayoff_oneAgentAlgos}. It can be seen that \RAAV (top) is the strongest attractor. White circles represent equilibria.} \label{fig:DirectionalFieldRiskAverseAlgo} \end{figure} \cref{table:MetaPayoff_oneAgentAlgos} shows the meta-payoff table of a two-player game among three strategies: RAQL, \RAA and \RAAV. The results show that our two proposed algorithms \RAA and \RAAV obtained better results than RAQL.
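For reference, the Sharpe ratio in its simplest form is the mean (excess) return divided by the standard deviation of returns; the exact conventions (annualization, risk-free rate) used in our evaluation may differ from this sketch:

```python
import numpy as np

def sharpe_ratio(returns, risk_free=0.0):
    """Mean excess return over the (unbiased) standard deviation of returns."""
    r = np.asarray(returns, dtype=float) - risk_free
    return r.mean() / r.std(ddof=1)
```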
With those payoffs we obtained the directional and trajectory plots shown in \cref{fig:DirectionalFieldRiskAverseAlgo}, where black solid circles denote globally stable equilibria and white circles denote unstable equilibria (saddle points); in (a) the plot is colored according to the speed at which the strategy mix is changing at each point; in (b) the lines show trajectories for some points over the simplex. \begin{table}[t] \scriptsize \caption{Comparison in terms of Sharpe ratio with two types of perturbations: the trained adversary from \RAAA is used at testing time, or zero-intelligence agents are added to the simulation to perturb the market. \RAAA obtains better results in both cases due to its enhanced robustness.}\label{table:RA2Q_RA3Q} \centering \begin{tabular}{c | c | c} \toprule Algorithm/Setting & Adversarial Perturbation & ZI Agents Perturbation\\ \hline \RAA & 0.5269 & 0.9538\\ \RAAA & 0.9347 & 1.0692\\ \bottomrule \end{tabular} \end{table} Our last experiment compares \RAA and \RAAA in terms of robustness. In this setting we first trained both agents under the same conditions. Then, in the testing phase, we added one of two types of perturbations: an adversarial agent (trained within \RAAA) or noise (a.k.a. zero-intelligence) agents in the environment. In both cases, the agents act in a perturbed environment. The results are presented in \cref{table:RA2Q_RA3Q} in terms of Sharpe ratio using cross-validation with 80 experiments. \section{Discussion} \label{sec:Discussion} Here we briefly discuss some trade-offs between the practical and theoretical results of our proposed algorithms. As mentioned in \cref{sec:RAAV}, we did not show that \cref{alg:Variance_Reduced_RAQL} (\RAAV) has a convergence guarantee; however, it obtained good empirical results (better than RAQL and \RAA).
It is an open question whether \RAAV converges to the optimal of \cref{eq:Risk_Averse_Objective}; furthermore, it would be interesting to study whether it also enjoys a minimax-optimal convergence rate up to a logarithmic factor, as in \cite{wainwright2019variance}. Similarly, \RAAA does not have a convergence guarantee in the multi-agent learning scenario (when protagonist and adversary are learning simultaneously). However, \RAAA obtained better empirical results than \RAA, highlighting its robustness. In \cref{sec:Discussion_of_RAAA} we show a related result: \cref{eq:RA3QProtagonist_UpdateRule} or \cref{eq:RA3QAdversary_UpdateRule} converges to the optimal assuming the policy of the adversary (respectively, the protagonist) is fixed (thus, it is no longer a multi-agent learning setting). On the side of EGT analysis, previous works used the average as payoff~\cite{tuyls2020bounds}, while our work considers a risk-averse measure based on the variance (second moment); studying higher moments and other measures is an interesting open question. \section{Conclusions} We have proposed four Q-learning-style algorithms that augment reinforcement learning agents with risk-awareness, variance reduction, and robustness. \RAA and \RAAV are risk-averse but use slightly different techniques to reduce variance. RAM-Q\xspace and \RAAA are two proposals that extend these by adding an adversarial learning layer, which is expected to improve robustness. On the one hand, our theoretical results show convergence for \RAA and RAM-Q\xspace; on the other hand, \RAAV and \RAAA obtained better empirical results in a simplified trading scenario. Lastly, we contributed a risk-averse analysis of our algorithms using empirical game theory. As future work we want to perform a more extensive set of experiments to evaluate the algorithms under different conditions.
\section{Introduction} \section{Definitions} \label{sec1} \textbf{Receptive Field (RF):} is a local region (including its depth) on the output volume of the previous layer that a neuron is connected to. This term has been prevalent in the neurosciences since the study of Hubel and Wiesel~\cite{hubel1962receptive}, in which they suggested that local features are detected in early layers of the visual cortex and are then progressively combined to create more complex patterns in a hierarchical manner. As an example, assume that the input RGB image to a CNN has size $[32\times32\times3]$. For a filter size of $5\times5$, each neuron in the first convolutional layer will be connected to a $[5\times5\times3]$ region in the input volume. Thus, a total of $5\times5\times3 = 75$ weights (+1 bias parameter) need to be learned. Notice that the RF is a 3D tensor whose depth equals the depth of the volume in the previous layer. Here, for simplicity, we discard the depth in our calculation. \textbf{Effective Receptive Field (ERF):} is the area of the original image that can possibly influence the activation of a neuron. One important point to notice here is that RF and ERF are the same for the first convolutional layer. However, they differ as we move up the CNN hierarchy. The RF simply equals the filter size over the previous layer, but the ERF traces the hierarchy back to the input image and indicates the extent of the input image that can modulate the activity of a neuron. Here, we focus on ERF calculation. It is worth noting that ERF and RF are sometimes used interchangeably (and hence confused) in the computer vision community. \textbf{Projective Field (PF):} is the set of neurons to which a neuron projects its output~\cite{lehky1988network}. Figure~\ref{fig:RFPF} illustrates these definitions.
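The weight count in the example above is simply the filter area times the input depth, plus one bias; a one-line check (a hypothetical helper, not tied to any framework):

```python
def conv_params_per_neuron(filter_size, in_depth):
    """Learnable parameters for one conv neuron: f * f * depth weights + 1 bias."""
    return filter_size * filter_size * in_depth + 1
```

For the $5\times5$ filter over an RGB input this gives 76, i.e., the 75 weights plus the bias.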
\begin{figure}[t] \begin{center} \includegraphics[width=0.7\linewidth]{RF-PF.png} \end{center} \caption{Schematic plot demonstrating receptive and projective fields of a neuron (borrowed from http://www.scholarpedia.org/article/Projective\_field).} \label{fig:RFPF} \end{figure} \section{Calculating the ERF} In convolutional neural networks~\cite{lecun1998gradient}, the ERF of a neuron indicates which area of the input image is considered by the filter. Calculating the size of the ERF helps in choosing suitable filter sizes using domain knowledge, enhancing the performance of CNNs. \\ There are two ways to calculate the ERF size: 1) Bottom-Up, and 2) Top-Down. Both ways produce the same result; the intermediate values, however, have different meanings in each case. \subsection{Bottom-Up Approach} \begin{figure}[t] \begin{center} \includegraphics[width=0.6\linewidth]{BottomUp.png} \end{center} \caption{Example of the Bottom-Up approach for ERF calculation for the network shown in the top row. The red area is the ERF of the lower layer. Yellow and blue are non-overlapped areas, used to indicate how stride affects the calculation of the additional area. In this example, after the first pooling, each additional filter adds 2 pixels to the ERF. After the second pooling, each additional filter adds 4 pixels to the ERF.} \label{fig:BottomUp} \end{figure} The bottom-up approach calculates the ERF of a neuron at layer $k$ projected onto the input image. Let \(R_{k}\) be the ERF of a neuron at layer $k$. Given the ERF of the previous layer \(R_{k-1}\), where \(R_{0} = 1\) is the ERF at the input image layer, the ERF of a neuron at the current layer, \(R_{k}\), can be computed by adding the non-overlapped area \(A\) to \(R_{k-1}\): \begin{equation} R_{k} = R_{k-1} + A \label{eq:eq1} \end{equation} Let \(f_{k}\) denote the filter size of layer $k$. There are \((f_{k} - 1)\) filters that overlap with each other.
Since a filter can be convolved with a stride greater than one, the non-overlapped area can increase significantly. Thus, it is necessary to account for the number of pixels each extra filter contributes to the ERF. Since the stride of a lower layer also affects the ERF of the higher layers, the pixel contributions of all lower layers must be accumulated. Therefore, the non-overlapped area is calculated as: \begin{equation} A = (f_{k} - 1)\prod^{k-1}_{i=1}s_i \label{eq:eq2} \end{equation} where \(s_{i}\) is the stride of layer $i$. Combining equations \ref{eq:eq1} and \ref{eq:eq2}, the ERF can be computed as: \begin{equation} R_{k} = R_{k-1} + (f_{k} - 1)\prod^{k-1}_{i=1}s_i \label{eq:eq3} \end{equation} Figures \ref{fig:BottomUp} and \ref{fig:BottomUp1D} illustrate ERF calculation for a sample architecture. The advantage of the bottom-up approach is that it produces the ERF for all layers in one feed-forward pass. \begin{figure}[t] \begin{center} \includegraphics[width=.9\linewidth]{BottomUp1D.png} \end{center} \caption{1 dimensional example illustrating how each layer expands the ERF.} \label{fig:BottomUp1D} \end{figure} \subsection{Top-Down Approach} In this approach, the ERF is computed by calculating the RF of a neuron at layer $k$ projected on a lower layer $j$, where the RF projected on the input layer is the ERF. Given the RF of a neuron at the higher layer \(R_{k,j+1}\), if there is no overlap (i.e., the stride equals the filter size), then the RF at the current layer is: \begin{equation} R_{k,j} = R_{k,j+1} f_{j+1} \label{eq:eq4} \end{equation} where \(f_{j+1}\) is the filter size of the higher layer. The RF is 1 when \(j = k\). When the filters overlap with each other, the overlapped area must be subtracted from this value. Imagine placing down a filter; every filter placed after it would have an area of overlap with the previously placed filter.
Since the RF of the higher layer is the one being projected down, the number of filters that overlap with each other is simply the RF of the higher layer minus one: \begin{equation} O = R_{k,j+1}-1 \label{eq:eq5} \end{equation} Each subsequent filter placed down is shifted by the stride, so the overlapped area depends on the size of the filter and the stride. Larger strides yield less overlap; larger filters result in more overlap. The overlapped area of each filter is the difference between the filter size and the stride of the higher layer \(s_{j+1}\): \begin{equation} A = f_{j+1}-s_{j+1} \label{eq:eq6} \end{equation} Having the number of overlapped filters and the area that each filter overlaps, the RF at the current layer can be computed by combining equations \ref{eq:eq4}, \ref{eq:eq5}, and \ref{eq:eq6}: \begin{equation} R_{k,j} = R_{k,j+1} f_{j+1} - (R_{k,j+1}-1)(f_{j+1}-s_{j+1}) \label{eq:eq7} \end{equation} Expanding and simplifying the above equation gives the final top-down equation: \begin{equation} R_{k,j} = (R_{k,j+1}-1) s_{j+1} + f_{j+1} \label{eq:eq8} \end{equation} The top-down approach is helpful during analysis as it can be computed relatively quickly. Also, given a point on a filter map, it is possible to identify the nodes that contributed to its output. For deconvolutional networks, the top-down approach can be used to control the resolution of the output image. Thus, instead of using inverted CNN layers, the deconvolutional layers can be designed to incorporate any domain knowledge about the problem. Figures \ref{fig:TopDown} and \ref{fig:TopDown1D} show examples of the progression of the RF being projected back to lower layers. \begin{figure}[t] \begin{center} \includegraphics[width=0.6\linewidth]{TopDown.png} \end{center} \caption{Example of Top-Down approach showing the RF of the last layer ($9\times9$ Convolution) being projected back to the input image.
With stride of 2 and filter size of 2 $\times$ 2, the RF is simply doubled in size.} \label{fig:TopDown} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=.9\linewidth]{TopDown1D.png} \end{center} \caption{1 dimensional example illustrating how the top-down approach expands the RF through each lower layer.} \label{fig:TopDown1D} \end{figure} \subsection{Case Study} Here, we calculate the ERF of neurons for the CNN from Wei et al.~\cite{Wei}. In their paper, Wei et al. proposed a method for pose estimation (known as the Convolutional Pose Machine). Figure~\ref{fig:PoseMachine} shows the original architecture. Here, we focus on calculating the ERF for the part of the network shown in Figure~\ref{fig:Architecture}, with the $1\times1$ filters omitted. \begin{figure*}[t] \centerline{\includegraphics[width=\textwidth]{PoseMachine.png}} \caption{Convolutional Pose Machine by Wei et al.~\cite{Wei} and the ERF of neurons in different layers (Figure taken from~\cite{Wei}).} \label{fig:PoseMachine} \vspace*{-10pt} \end{figure*} \begin{figure*}[t] \centerline{\includegraphics[width=.7\textwidth]{Architecture.png}} \caption{Sample CNN architecture with filter sizes and effective receptive fields shown, reproduced from \cite{Wei}.} \label{fig:Architecture} \vspace*{-10pt} \end{figure*} \noindent \textbf{Bottom-Up approach:} The ERF of each layer is computed progressively while skipping the $1\times1$ filters, as they do not have any effect on the ERF size.
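Both recursions, equation \ref{eq:eq3} and equation \ref{eq:eq8}, take only a few lines to implement. The following Python sketch computes the ERF both ways; the filter sizes and strides below are read off Figure \ref{fig:Architecture} (with $1\times1$ filters skipped), so they should be adjusted for any other architecture.

```python
def erf_bottom_up(filters, strides):
    """ERF via equation (3): R_k = R_{k-1} + (f_k - 1) * prod(s_1 ... s_{k-1})."""
    R, jump = 1, 1  # R_0 = 1; 'jump' accumulates the stride product of the layers below
    for f, s in zip(filters, strides):
        R += (f - 1) * jump
        jump *= s
    return R

def erf_top_down(filters, strides):
    """ERF via equation (8): R_{k,j} = (R_{k,j+1} - 1) * s_{j+1} + f_{j+1}."""
    R = 1  # R_{k,k} = 1 at the top layer
    for f, s in zip(reversed(filters), reversed(strides)):
        R = (R - 1) * s + f
    return R

# Case-study network: conv/pool filter sizes and strides, 1x1 filters skipped.
filters = [9, 2, 9, 2, 9, 2, 5, 9, 11, 11, 11]
strides = [1, 2, 1, 2, 1, 2, 1, 1, 1, 1, 1]
print(erf_bottom_up(filters, strides), erf_top_down(filters, strides))  # 400 400
```

Both functions return an ERF of 400 for this network, matching the step-by-step computations shown in the case study.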
The process of computing the ERF of the architecture in Figure \ref{fig:Architecture}, according to equation \ref{eq:eq3}, is shown below: \begin{equation} \begin{matrix*}[l] R_{0} = 1 \\ R_{1} = 1 + (9 - 1)(1) = 9 \\ R_{2} = 9 + (2 - 1)(1) = 10\\ R_{3} = 10 + (9 - 1)(2) = 26\\ R_{4} = 26 + (2 - 1)(2) = 28\\ R_{5} = 28 + (9 - 1)(2*2) = 60\\ R_{6} = 60 + (2 - 1)(2*2) = 64\\ R_{7} = 64 + (5 - 1)(2*2*2) = 96\\ R_{8} = 96 + (9 - 1)(2*2*2) = 160\\ R_{9} = 160 + (11 - 1)(2*2*2) = 240\\ R_{10} = 240 + (11 - 1)(2*2*2) = 320\\ R_{11} = 320 + (11 - 1)(2*2*2) = 400 \end{matrix*} \label{eq:BottomUp} \end{equation} \noindent \textbf{Top-Down approach:} Here, the ERF is calculated for each layer separately. So for a network with $n$ layers, $n$ passes back to the image are needed; in other words, intermediate values cannot be reused. The process of computing the ERF for the architecture in Figure \ref{fig:PoseMachine}, according to equation \ref{eq:eq8}, is shown below. \(R_{11,0}\) is the ERF of the $11^{th}$ layer in the network. Notice that a separate computation is needed to compute \(R_{12,0}\) or \(R_{10,0}\). \begin{equation} \begin{matrix*}[l] R_{11,11} = 1 \\ R_{11,10} = (1-1)(1) + 11 = 11 \\ R_{11,9} = (11-1)(1) + 11 = 21 \\ R_{11,8} = (21-1)(1) + 11 = 31 \\ R_{11,7} = (31-1)(1) + 9 = 39 \\ R_{11,6} = (39-1)(1) + 5 = 43 \\ R_{11,5} = (43-1)(2) + 2 = 86 \\ R_{11,4} = (86-1)(1) + 9 = 94 \\ R_{11,3} = (94-1)(2) + 2 = 188 \\ R_{11,2} = (188-1)(1) + 9 = 196 \\ R_{11,1} = (196-1)(2) + 2 = 392 \\ R_{11,0} = (392-1)(1) + 9 = 400 \end{matrix*} \label{eq:TopDown} \end{equation} \section{Projective field size} In this section, we discuss the calculation of the PF size of a neuron. For the example in Section~\ref{sec1}, assuming 10 filters of size $5 \times 5$ and stride equal to 5 in the first convolutional layer, the PF of each image pixel (i.e., input neuron, in each of the R, G, or B channels) would be $1\times1\times10$.
Notice that this calculation is independent of the filter size but, as we will show below, depends on the stride. Further, notice that, as in the calculation of the RF, there is a depth component involved as well. For simplicity, we discard the depth in what follows. \begin{figure}[t] \begin{center} \includegraphics[width=0.5\linewidth]{ProjectiveField.png} \end{center} \caption{Projective field size. The gray box is the filter with size of $5 \times 5$. The small circle is the output node of the filter map with the stride of two. The PF of a neuron (green box) is calculated by counting how many filters (circles) are applied within the bounding box. Depending on the location, the PF of a neuron can differ from that of its neighbors.} \label{fig:ProjectiveField} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=0.5\linewidth]{PFCounting.png} \end{center} \caption{Illustration of projective field size calculation. In this image, the blue box is where the filter is being applied. To calculate the PF of a neuron (shown in green), the filter (gray area) is applied around the neuron to determine the PF. For the shown neuron, the PF is $2\times2$. The same procedure can be applied to other cells. The values can be verified with the more detailed process shown in Figure~\ref{fig:Grid1} and Figure~\ref{fig:Grid2}.} \label{fig:PFCounting} \end{figure} The size of the projective field can be calculated by sliding the filter over an area and updating the counter of each neuron when it overlaps with the filter (Figures \ref{fig:Grid1}, \ref{fig:Grid2}). However, sliding the filter is error-prone, and it is difficult to keep track of the counts in the x and y directions. A simpler method to determine the projective field is shown in Figures \ref{fig:ProjectiveField} and \ref{fig:PFCounting}. With a stride of 1, the immediate PF is the same size as the filter size of the next layer.
For example, if the filter size is $3 \times 3$, then a neuron will influence $3 \times 3$ nodes in the output filter map. For the above example, assuming 10 filters in the first convolutional layer, the PF of each image pixel (i.e., input neuron, in each of the R, G, or B channels) would be $3\times3\times10$. The pixels at the corners and the edges would have slightly smaller PFs; here, for simplicity, we assume that the input image has been zero padded. With a stride greater than 1, some neurons will have bigger PFs than others. For example, for a filter size of $5 \times 5$ and stride of 2, the center neuron (see Figure \ref{fig:ProjectiveField}) has a PF of $3 \times 3$. The neurons on the x-axis and y-axis of the center neuron would have PFs of $3 \times 2$ or $2 \times 3$, respectively. The neurons diagonal to the center neuron would have PF sizes of $2 \times 2$. From the above analysis (see Figures \ref{fig:Grid1}, \ref{fig:Grid2}, and \ref{fig:PFCounting}), the projective field of a node at layer $k$ of a CNN is bounded by a set of four pairs of values: \begin{equation} \begin{matrix*}[l] P_k = \bigg \{ \floor*{\frac{f_{k+1}}{s_{k+1}}}\times\floor*{\frac{f_{k+1}}{s_{k+1}}}, \floor*{\frac{f_{k+1}}{s_{k+1}}}\times\ceil*{\frac{f_{k+1}}{s_{k+1}}}, \\\\ \ceil*{\frac{f_{k+1}}{s_{k+1}}}\times\floor*{\frac{f_{k+1}}{s_{k+1}}}, \ceil*{\frac{f_{k+1}}{s_{k+1}}}\times\ceil*{\frac{f_{k+1}}{s_{k+1}}} \bigg \} \end{matrix*} \label{eq:ProjectiveField} \end{equation} where $f_{k+1}$ and $s_{k+1}$ are the filter size and stride of the next layer. According to equation \ref{eq:ProjectiveField}, if $s_{k+1}$ divides $f_{k+1}$ evenly, then all the nodes have equal projective field sizes. Otherwise, depending on their locations, nodes have different projective field sizes; in other words, when the fraction does not yield an integer value, there is disparity in the influence of the nodes on the next layer.
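Equation \ref{eq:ProjectiveField} can be sketched in a few lines of Python. For the $5 \times 5$ filter with stride 2 discussed above, it returns the four PF sizes $2\times2$, $2\times3$, $3\times2$, and $3\times3$:

```python
import math

def pf_side_lengths(f_next, s_next):
    """Possible PF sizes per equation (ProjectiveField): all pairings of
    floor(f/s) and ceil(f/s), where f and s belong to the next layer."""
    lo = f_next // s_next            # floor(f_{k+1} / s_{k+1})
    hi = math.ceil(f_next / s_next)  # ceil(f_{k+1} / s_{k+1})
    return sorted({(a, b) for a in (lo, hi) for b in (lo, hi)})

print(pf_side_lengths(5, 2))  # [(2, 2), (2, 3), (3, 2), (3, 3)]
print(pf_side_lengths(2, 2))  # [(1, 1)] -- stride divides filter size, all PFs equal
```

When the stride divides the filter size evenly, the set collapses to a single pair, reproducing the equal-PF case noted above.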
This is perhaps why researchers tend not to use strides greater than 1 in convolution layers (or use strides equal to the filter size in pooling layers). Nonetheless, it is unclear whether such disparity causes any practical problems. Deconv nets are inverted versions of CNNs; therefore, their projective fields can be calculated using the ERF formulas. Similarly, their ERFs are the same as the projective fields in CNNs. \begin{figure*}[t] \centerline{\includegraphics[width=0.95\textwidth]{Grid1.png}} \caption{Sequence of sliding a filter for calculating the projective field of neurons. Here the filter size is $5 \times 5$ and the stride is 2.} \label{fig:Grid1} \vspace*{-10pt} \end{figure*} \begin{figure*}[t] \centerline{\includegraphics[width=0.95\textwidth]{Grid2.png}} \caption{Sequence of sliding a filter for calculating the projective field of neurons (continued).} \label{fig:Grid2} \vspace*{-10pt} \end{figure*} \section{Discussion} Here, we discussed how the receptive, effective receptive, and projective fields of neurons in CNNs are calculated. Understanding these quantities has important implications for deep learning research. First, it helps in setting parameters such as filter size, number of filters, and stride more effectively. Second, it allows analyzing how objects are represented by CNNs and investigating whether some image information is lost along the CNN hierarchy. \vspace*{-5pt} \section{Acknowledgment} We wish to thank all participants in the Advanced Computer Vision course at UCF who contributed to discussions. \vspace*{-5pt} {\small \bibliographystyle{ieee}
\section{Introduction}\label{Intro} The values of the refractive index most commonly observed in nature are positive and on the order of 1. Finding materials with an unusual refractive index -- very high, zero, or negative -- has long been a subject of interest; of course, to be of practical use, it should be accompanied by low absorption. It was proposed that quantum interference effects could be used to create an enhanced index of refraction with no absorption in $\Lambda$-type atoms \cite{Scully,Dowling-Bowden}. However, it was later noted that at the densities required for such methods, quantum corrections such as cooperative effects must be considered, which diminish the promise of the original proposals \cite{Y-F, F-Y,Fleischhauer}. For example, radiation trapping effectively decreases the effects of coherence. Since then, there have been increasing efforts to modify the index of refraction of a system or to attain a particular value through the design of metamaterials \cite{meta1,meta2,meta3} or control by external fields \cite{Kastel,density1,fieldcontrol,Marina}. Such unusual values for the index have been suggested for use in various applications; for example, a negative index could be used to implement cloaking \cite{cloaking,cloaking2,cloaking3,cloaking4} or to create a ``perfect lens" with infinite resolution \cite{ni,lens}, while a medium with a large index decreases the wavelength of light traveling through it, which could be useful in optical imaging \cite{nanooptics}. There are many parameters that can be used to control an index of refraction in some medium. In theoretical approaches focusing on the use of external fields to change the optical response of a system, these parameters can simply be changed in order to modify the index of refraction. However, there are limits to what values are actually possible. In this paper, we focus on enhancing or increasing the refractive index in atomic systems. 
We are concerned only with linear dispersion, and do not look at absorption. We look at what can be attained in the simplest case to find the basic limits on the index. From this, we look for additional effects, again in the simplest cases in which they occur, that can be used to enhance the index beyond the baseline case. The index of refraction of a medium is defined by $n=\sqrt{\epsilon\mu}$, with relative permittivity $\epsilon=1+\chi_e$ and relative permeability $\mu=1+\chi_m$. The electric susceptibility $\chi_e$ and magnetic susceptibility $\chi_m$ can be complex, resulting in a complex index $n$, for which the real part is the ``traditional" refractive index representing the amount of refraction, while the imaginary part represents the amount of absorption or gain. In this paper we assume that $\mu\approx 1$, since magnetic coupling is typically weaker by a factor of $\alpha^2\approx 1/137^2$ for transitions in the visible spectrum, so that $n\approx\sqrt{1+\chi_e}$ (henceforth we drop the subscript ``$e$"). Although the real part of $n$ depends on a complex $\chi$, in practice, ultimately one would want zero or minimal absorption, where the imaginary part of $\chi$ is negligible. We are here interested only in the best-case scenarios for enhancing the real part of $n$, so we will focus on the real part of $\chi$ in Sections \ref{2L section} and \ref{3L section}, assuming that $n\approx Re(n)\approx \sqrt{1+Re(\chi)}$. We suppose that plane-wave electric fields of the form $\mathbf{E}(z,t)=\frac{1}{2}\hat{\mathbf{\epsilon}}\mathcal{E}(z)e^{i(kz-\nu t)} + \text{c.c.}$ (where c.c. denotes the complex conjugate) interact with ensembles of atoms with two or more levels.
With linear dispersion, the polarization due to such an electric field has the form $\mathbf{P}(z,t)=\frac{1}{2}\hat{\mathbf{\epsilon}}\mathcal{P}(z)e^{i(kz-\nu t)} + \text{c.c.}$, and $\mathcal{P}(z)=\epsilon_0 \chi \mathcal{E}(z)$ if the polarization depends on only one electric field. We first examine a two-level system to see what can be used to modify the electric susceptibility and enhance the refractive index in a medium in the most basic case. Then we consider a three-level system to see if coherence effects can improve the two-level result. Finally, we consider a four-level system to see if frequency dependence in wave mixing can be used to enhance the index. \begin{centering} \section{Two-Level}\label{2L section} \end{centering} We start with a simple two-level atom in order to find baseline values for the refractive index to which other values can be compared. We will find $\chi$ which would be needed to calculate the refractive index, and then look at how $\chi$ can be changed while considering the natural limitations in doing so. The atom has states $\ket{1}$ and $\ket{2}$ with atomic transition frequency $\omega$, and population decay rate $\Gamma$. There can be either an electric or magnetic dipole transition between the levels; in this paper we will consider only electric dipoles, which are typically stronger, with dipole operator $\hat{\mathbf{d}}$. A medium consisting of these two-level atoms has atomic number density $N$, and we assume that there is no interaction among the atoms. The medium is driven by an electric field $\mathbf{E}$ with amplitude $\mathcal{E}$ as in Section \ref{Intro}, with angular frequency $\nu$, detuned from the two-level resonance by $\Delta = \nu-\omega$. The polarization of the medium is $\mathbf{P}=N\langle\hat{\mathbf{d}}\rangle = N\hat{\mathbf{\epsilon}}(d\rho_{12}+d^*\rho_{21})$, so we have \begin{equation} \mathcal{P} = 2Nd^*\rho_{21} = \epsilon_0\chi \mathcal{E}. 
\label{pol} \end{equation} Solving the steady-state Bloch equations gives \begin{equation} \rho_{21}= \frac{i\Gamma-2\Delta}{\Gamma^2+4\Delta^2+2\Omega^2}\Omega. \label{rho21} \end{equation} \begin{figure}[h] \centerline{ \includegraphics[width=\linewidth]{Figures_2LA.png} } \caption[Two-level system]{Two-level system with Rabi frequency $\Omega$, detuning $\Delta$, and population decay rate $\Gamma$, coupled to the external field $\mathbf{E}$.} \label{2 level} \end{figure} To find $\chi$, we combine \eqs{pol} and \noeq{rho21}, and use the Rabi frequency in terms of the field amplitude: $\Omega=d \mathcal{E}/\hbar$. To replace $|d|^2$, we use the expression for spontaneous emission rate: $\Gamma = |d|^2 \omega^3 /(3\pi\hbar\epsilon_0 c^3) \approx 8\pi^2|d|^2/(3\hbar\epsilon_0 \lambda^3)$, where $\lambda$ is the wavelength of the incident light, and where we have assumed that $\Delta \ll \nu, \omega$. The standard result is \cite{text1} \begin{equation} \begin{aligned} \chi &=2N\frac{d^*\rho_{21}}{\epsilon_0\mathcal{E}} \\ &=N\frac{3\lambda^3\Gamma}{4\pi^2}\frac{i\Gamma-2\Delta}{\Gamma^2+4\Delta^2+2\Omega^2}. \end{aligned} \end{equation} For a given $\lambda$, it is evident that the density $N$ is the only parameter that could potentially be varied in order to modify the absolute value of the susceptibility by an appreciable amount. As an example, we will estimate the density needed to give a maximum for the real part of $\chi$ on the order of $1$. This is \begin{equation} Re(\chi)_{max}\approx\frac{3}{8\pi^2}N\lambda^3=\mathcal{O}(1). \end{equation} For optical wavelengths, this would require a density on the order of $10^{14}$ atoms per cm\textsuperscript{3}. The susceptibility could be theoretically increased by increasing the density, but values greater than $10^{14}$ cm\textsuperscript{-3} are not realistically possible \cite{Marina,density4}. 
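As a quick numerical sanity check on this estimate, the required density follows directly from solving $Re(\chi)_{max}=1$ for $N$; the sketch below assumes an illustrative optical wavelength of $\lambda = 500$ nm.

```python
import math

# Density (atoms per cm^3) required for Re(chi)_max = (3/(8*pi^2)) * N * lambda^3 = 1,
# assuming an optical wavelength of 500 nm (an illustrative choice).
lam = 500e-7                       # wavelength in cm
N = 8 * math.pi**2 / (3 * lam**3)  # solve Re(chi)_max = 1 for N
print(f"N = {N:.1e} cm^-3")        # on the order of 10^14 cm^-3
```

Other visible wavelengths shift the prefactor slightly but leave the order of magnitude, $10^{14}$ cm\textsuperscript{-3}, unchanged.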
At such high densities, collisions and nonlinear effects become dominant compared to the linear contribution to the dispersion, so this simple model is no longer valid \cite{Thommen,density1,density2,density3}. The real part of $\chi$ is shown in \fig{2 vs 3 re} (with $\Delta/\Gamma \equiv \Delta_1/\Gamma_1$), along with the corresponding three-level results which will be discussed in the next section. \begin{figure}[h] \centerline{ \includegraphics[width=\linewidth]{2_vs_3_level_susc_re.png} } \caption[Two-level and three-level system susceptibilities]{Real part of susceptibility for the two-level system (solid line) and three-level system (dashed line). $N=10^{12}$ cm\textsuperscript{-3}, $\Omega\equiv\Omega_1=\Gamma_1, \Omega_2=\Gamma_1, \gamma_0=\Gamma_1/100, \omega\equiv \omega_1=2\pi\times 10^{14}$ s\textsuperscript{-1}.} \label{2 vs 3 re} \end{figure} \section{Three-Level} \label{3L section} In the interest of finding ways to modify the susceptibility and refractive index of a system, we consider coherent modification, and thus three-level systems, as done in \cite{Scully, Dowling-Bowden}, and find the limitations on the linear susceptibility. In a three-level V or $\Lambda$ system, one transition couples to the field $\mathbf{E_1}$ and another to $\mathbf{E_2}$, which have the same plane-wave form as before (see \fig{3LA}). The $\ket{3}-\ket{1}$ transition has characteristic frequency $\omega_1$, dipole moment $d_1$, and population decay rate $\Gamma_{1}$, and the $\ket{3}-\ket{2}$ transition has characteristic frequency $\omega_2$, dipole moment $d_2$, and population decay rate $\Gamma_{2}$. The Rabi frequencies are $\Omega_1$ and $\Omega_2$. We can also include a typically small decoherence rate $\gamma_0$ between levels $\ket{1}$ and $\ket{2}$, but find that this does not significantly affect the results.
Each transition has an associated polarization and susceptibility which depend on the coherence density matrix elements corresponding to that transition. The susceptibility due to the $\ket{3}-\ket{1}$ transition depends on $\rho_{31}$ and the susceptibility due to the $\ket{3}-\ket{2}$ transition depends on $\rho_{32}$. For the $\Lambda$ system of \fig{3LA}, in terms of the other density matrix elements these are: \begin{subequations} \begin{align} \rho_{31}&=\frac{\Omega_{1}(\rho_{33}-\rho_{11})-\Omega_{2}\rho_{21}}{i(\Gamma_{1}+\Gamma_{2})+2\Delta_1}, \label{rho31} \\ \rho_{32}&=\frac{\Omega_{2}(\rho_{33}-\rho_{22})-\Omega_{1}\rho_{12}}{i(\Gamma_{1}+\Gamma_{2})+2\Delta_2}. \label{rho32} \end{align} \end{subequations} The main contribution from the third level that differs from the two-level result is the term proportional to $\rho_{21}$ in \eq{rho31} and the term proportional to $\rho_{12}$ in \eq{rho32}. These terms show that although levels $\ket{1}$ and $\ket{2}$ are not directly coupled, the fields create a coherence between levels $\ket{1}$ and $\ket{2}$ which causes cross-coupling between the transition coherences: $\rho_{31}$ depends on $\mathbf{E_2}$ and $d_2$ through $\Omega_{2}$, and $\rho_{32}$ depends on $\mathbf{E_1}$ and $d_1$ through $\Omega_{1}$. This means that the susceptibility from each transition depends on additional parameters such as Rabi frequencies and dipole matrix elements from another transition, which can affect the refractive index experienced by a field. For example, the frequency of $\mathbf{E_2}$ could affect the index experienced by $\mathbf{E_1}$ through the dipole moment of the $\ket{3}-\ket{2}$ transition. Varying these parameters in general also changes $\rho_{21}$ and $\rho_{12}$, which can be prevented in a four-level system, as discussed in the next section. However, we find that the coherences $\rho_{21}$ and $\rho_{12}$ are at least second order in the fields.
Therefore, the cross-coupling does not affect the refractive index when only the linear parts of the dispersion are considered for both fields. Alternatively, we could keep linear dispersion only in $\mathbf{E_1}$ and calculate the refractive index experienced by that field, keeping any order of $\mathbf{E_2}$. In this case, the result for the relevant coherence is \begin{equation} \rho_{31}=\frac{\Omega_1}{2|\Omega_2|^2/(i\gamma_0+\Delta_1-\Delta_2)-[i(\Gamma_{1}+\Gamma_{2})+2\Delta_1]}, \end{equation} and the index for $\mathbf{E_1}$ is $n=\sqrt{1+\chi}$ with \begin{equation} \chi=\frac{2N|d_1|^2}{\hbar\epsilon_0}\frac{\rho_{31}}{\Omega_1}. \end{equation} This depends on the second transition through $\Omega_2$, but not on the amplitude of $\mathbf{E_1}$ or the phases of either field. Varying $|\Omega_2|$ from zero to arbitrarily large values does not result in any significant change in $n$, so this is still no better than the two-level result. At best, for small $\Gamma_2$ compared to $\Gamma_1$, this approaches the two-level result, but as $\Gamma_2$ becomes comparable to or greater than $\Gamma_1$, the maximum possible real part of the susceptibility becomes smaller than what is possible with two levels. This is shown in \fig{2 vs 3 re}. Since coherence effects in a three-level system do not help in enhancing the index, we will consider the effects of four-wave mixing on the index by moving to a four-level system in the next section. \begin{figure}[h] \centerline{ \includegraphics[width=\linewidth]{Figures_3LA.png} } \caption[Three-level system]{Three-level system.
$\mathbf{E_1}$ and $\mathbf{E_2}$ are external fields which couple to different transitions; $\Omega_{1}$ and $\Omega_{2}$ are Rabi frequencies; $\Delta_1$ and $\Delta_2$ are detunings.} \label{3LA} \end{figure} \section{Four-Level} So far, the atomic and field properties seen in Sections \ref{2L section} and \ref{3L section} have not been useful in enhancing the index, except for the density, which has the practical limitations noted above. Field frequencies have appeared in detuning parameters, which cannot significantly enhance the index, but the frequencies may be more directly utilized. In order to introduce direct frequency dependence, we consider the four-level system shown in \fig{4LA}. The addition of a fourth level allows for four-wave mixing with frequency dependence. The $\ket{4}-\ket{1}$ and $\ket{4}-\ket{2}$ transitions are strongly driven on resonance by external fields. The $\ket{3}-\ket{1}$ electric dipole transition has moment $d_{1}$ (assumed real) and is coupled to the probe field $\mathbf{E_1}$ with complex amplitude $\mathcal{E}_1$ and frequency $\nu_1$, and the $\ket{3}-\ket{2}$ transition has moment $d_{2}$ (assumed real) and is driven by the probe field $\mathbf{E_2}$ with amplitude $\mathcal{E}_2$ and frequency $\nu_2$. We assume that the decay rate $\Gamma_1$ is the same for the $\ket{3}-\ket{1}$ and $\ket{4}-\ket{1}$ transitions, and the decay rate $\Gamma_2$ is the same for the $\ket{3}-\ket{2}$ and $\ket{4}-\ket{2}$ transitions, which is valid if levels 3 and 4 are close together. The atomic transition frequency between levels $\ket{3}$ and $\ket{1}$ is $\omega_1$; we assume that $\ket{3}$ and $\ket{4}$ are closely spaced so that the frequency between $\ket{4}$ and $\ket{1}$ is approximately $\omega_1$. The atomic transition frequency between $\ket{3}$ and $\ket{2}$ is $\omega_2$, which is approximately equal to the frequency between $\ket{4}$ and $\ket{2}$. 
\begin{figure}[H] \centerline{ \includegraphics[width=\linewidth]{Figures_4LA.png} } \caption[Four-level system]{Four-level system. $\Omega_{\mathcal{E}_{1}}$, $\Omega_{\mathcal{E}_{2}}$, $\Omega_{1}$, and $\Omega_{2}$ are Rabi frequencies; $\Delta$ is a detuning. The $\ket{3}-\ket{1}$ transition is driven by the probe field $\mathbf{E_1}$ with angular frequency $\nu_1$, and the $\ket{3}-\ket{2}$ transition is driven by the probe field $\mathbf{E_2}$ with angular frequency $\nu_2$.} \label{4LA} \end{figure} As seen in the previous section, the coherences that lead to the cross-coupling between the probe fields $\mathbf{E_1}$ and $\mathbf{E_2}$ are $\rho_{12}$ and $\rho_{21}$. We want this cross-coupling to exist, while also having some way to enhance the linear part of the dispersion of a probe field. This was not possible with a three-level system, but here the fourth level allows for the additional strong driving fields to create and maintain $\rho_{12}$ and $\rho_{21}$, which in turn couple $\mathbf{E_1}$ and $\mathbf{E_2}$. We will show that changing $\omega_2$ while maintaining two-photon resonance on transitions $\ket{3}-\ket{2}$ and $\ket{4}-\ket{2}$ can affect the refractive index experienced by $\mathbf{E_1}$. This amounts to ``moving" level 2 theoretically, which of course is not possible in practice, but this could be used to choose a level scheme in order to produce a potentially large refractive index. There is a polarization due to the $\ket{3}-\ket{1}$ transition and a polarization due to the $\ket{3}-\ket{2}$ transition, which depend on the corresponding coherences, $\rho_{31}$ and $\rho_{32}$. $\rho_{31}$ and $\rho_{32}$ depend on the coherences $\rho_{12}$ or $\rho_{21}$ which are created by the strong fields. 
From the optical Bloch equations, the relevant density matrix elements are: \begin{subequations} \begin{align} \rho_{31}&=\frac{|\Omega_{2}|^2\Omega_{\mathcal{E}_1} - \Omega_{1}\Omega_{2}^*\Omega_{\mathcal{E}_2}}{[2\Delta-i(\Gamma_1+\Gamma_2)](|\Omega_{1}|^2+|\Omega_{2}|^2)}, \\ \rho_{32}&=\frac{- \Omega_{1}^*\Omega_{2}\Omega_{\mathcal{E}_1} + |\Omega_{1}|^2\Omega_{\mathcal{E}_2}}{[2\Delta-i(\Gamma_1+\Gamma_2)](|\Omega_{1}|^2+|\Omega_{2}|^2)}. \end{align} \end{subequations} We find that $\rho_{31}$ and $\rho_{32}$ depend on the Rabi frequencies from each transition and thus on the amplitudes and phases of the four fields. However, in the results that follow, we find that the phases of the two driving fields do not actually affect the refractive index; also, in defining linear susceptibilities, the phases of $\mathbf{E}_1$ and $\mathbf{E}_2$ do not appear, although they must be chosen properly to obtain a particular value for the index, as will be discussed later. We move on by writing the coherences in the following form: \begin{subequations} \begin{align} \rho_{31}&=\alpha_{11}\Omega_{\mathcal{E}_1}+\alpha_{12}\Omega_{\mathcal{E}_2} \nonumber \\ &=\alpha_{11}\frac{d_{1}}{\hbar}\mathcal{E}_1+\alpha_{12}\frac{d_{2}}{\hbar}\mathcal{E}_2, \\ \rho_{32}&=\alpha_{21}\Omega_{\mathcal{E}_1}+\alpha_{22}\Omega_{\mathcal{E}_2} \nonumber \\ &=\alpha_{21}\frac{d_{1}}{\hbar}\mathcal{E}_1+\alpha_{22}\frac{d_{2}}{\hbar}\mathcal{E}_2.
\end{align} \end{subequations} Due to the cross-coupling, $\rho_{31}$ and $\rho_{32}$ and therefore each polarization depend on two fields: \begin{subequations} \begin{align} \mathcal{P}_1&=2Nd_{1}\rho_{31} \label{P1a} \\ &=\epsilon_0\chi_{11}\mathcal{E}_1+\epsilon_0\chi_{12}\mathcal{E}_2, \label{P1b} \end{align} \end{subequations} \begin{subequations} \begin{align} \mathcal{P}_2&=2Nd_{2}\rho_{32} \label{P2a} \\ &=\epsilon_0\chi_{21}\mathcal{E}_1+\epsilon_0\chi_{22}\mathcal{E}_2, \label{P2b} \end{align} \end{subequations} which shows the linear dependence on the amplitudes $\mathcal{E}_1$ and $\mathcal{E}_2$ as was desired. Following a similar procedure as in the two-level section, the four susceptibilities can be found: \begin{equation} \begin{split} &\chi_{11}=2N\frac{d_{1}^2}{\hbar\epsilon_0}\alpha_{11} \\ &=N\frac{6\pi c^3\Gamma_1}{\omega_1^3}\frac{|\Omega_{2}|^2}{[2\Delta-i(\Gamma_1+\Gamma_2)](|\Omega_{1}|^2+|\Omega_{2}|^2)}, \\ &\chi_{12}=2N\frac{d_{1}d_{2}}{\hbar\epsilon_0}\alpha_{12} \\ &=-N\frac{6\pi c^3\sqrt{\Gamma_1\Gamma_2}}{\sqrt{\omega_1^3\omega_2^3}}\frac{\Omega_{1}\Omega_{2}^*}{[2\Delta-i(\Gamma_1+\Gamma_2)](|\Omega_{1}|^2+|\Omega_{2}|^2)}, \\ &\chi_{21}=2N\frac{d_{1}d_{2}}{\hbar\epsilon_0}\alpha_{21} \\ &=-N\frac{6\pi c^3\sqrt{\Gamma_1\Gamma_2}}{\sqrt{\omega_1^3\omega_2^3}}\frac{\Omega_{1}^*\Omega_{2}}{[2\Delta-i(\Gamma_1+\Gamma_2)](|\Omega_{1}|^2+|\Omega_{2}|^2)}, \\ &\chi_{22}=2N\frac{d_{2}^2}{\hbar\epsilon_0}\alpha_{22} \\ &=N\frac{6\pi c^3\Gamma_2}{\omega_2^3}\frac{|\Omega_{1}|^2}{[2\Delta-i(\Gamma_1+\Gamma_2)](|\Omega_{1}|^2+|\Omega_{2}|^2)}. \end{split} \end{equation} $\chi_{12}$ corresponds to the $\ket{3}-\ket{1}$ transition but comes from the cross-coupling with $\mathbf{E}_2$; it depends on the dipole moment of the $\ket{3}-\ket{2}$ transition, which in turn depends on $\omega_2$; likewise, $\chi_{21}$ from the $\ket{3}-\ket{2}$ transition depends on $\omega_1$. 
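For reference, these four expressions are straightforward to evaluate numerically. The following is a minimal sketch (SI units, NumPy assumed; the helper name and any specific parameter values are illustrative only):

```python
import numpy as np

C = 3.0e8  # speed of light (m/s)

def four_level_chis(N, O1, O2, G1, G2, w1, w2, Delta):
    """chi_11, chi_12, chi_21, chi_22 from the expressions above.
    N is the number density (m^-3); O1, O2 are the (possibly complex)
    strong-field Rabi frequencies; G1, G2 are decay rates; w1, w2 are
    the transition frequencies; Delta is the common detuning."""
    D = (2 * Delta - 1j * (G1 + G2)) * (abs(O1) ** 2 + abs(O2) ** 2)
    chi11 = N * 6 * np.pi * C**3 * G1 / w1**3 * abs(O2) ** 2 / D
    chi22 = N * 6 * np.pi * C**3 * G2 / w2**3 * abs(O1) ** 2 / D
    pref = N * 6 * np.pi * C**3 * np.sqrt(G1 * G2) / np.sqrt(w1**3 * w2**3)
    chi12 = -pref * O1 * np.conj(O2) / D
    chi21 = -pref * np.conj(O1) * O2 / D
    return chi11, chi12, chi21, chi22
```

In this form it is easy to verify numerically that $\chi_{11}\chi_{22}=\chi_{12}\chi_{21}$ for any parameter choice, i.e., the matrix of the four susceptibilities is singular.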
These do not depend on the amplitudes or phases of $\mathbf{E_1}$ or $\mathbf{E_2}$ since the polarizations are linear in the probe fields. As with the two-level susceptibility, these are proportional to the density and cannot be significantly increased by changing the Rabi frequencies, decay rates, or detuning. The important difference between this and the two-level system is that $\mathcal{P}_1$ and $\mathcal{P}_2$ now depend on the frequencies of two atomic transitions, so the frequency corresponding to one transition can affect the susceptibility and polarization seen by the field coupled to the other transition. We assume that all fields are on resonance so that $\Delta=0$. This means that we must have $\nu_1=\omega_1$, and as $\omega_2$ is changed, the frequency of $\mathbf{E_2}$ must be chosen so that $\nu_2=\omega_2$. We also assume that for different positions of level 2, the decay rate $\Gamma_2$ remains roughly constant. We now calculate the index of refraction experienced by $\mathbf{E_1}$ and see how it is affected by $\omega_2$. This is no longer as simple as using one susceptibility term in $n=\sqrt{1+\chi}$. The refractive indices that can be attained in this system can be found by solving the Maxwell equations for the electric fields $\mathbf{E_1}$ and $\mathbf{E_2}$, which depend on the susceptibilities found above. Maxwell's equations lead to equations for the amplitudes, $\mathcal{E}_1$ and $\mathcal{E}_2$, which depend on $z$: \begin{subequations} \begin{align} \nabla^2 \mathcal{E}_1 &= -{\left( \frac{\nu_1}{c}\right)}^2 \left( 1+\chi_{11}\right) \mathcal{E}_1 - {\left( \frac{\nu_1}{c}\right)}^2 \chi_{12}\mathcal{E}_2, \\ \nabla^2 \mathcal{E}_2 &= -{\left( \frac{\nu_2}{c}\right)}^2 \chi_{21} \mathcal{E}_1 - {\left( \frac{\nu_2}{c}\right)}^2 \left( 1+\chi_{22}\right) \mathcal{E}_2.
\end{align} \end{subequations} Using the ans{\"a}tze $\mathcal{E}_{1}(z)=\mathcal{E}_{1}(0)e^{\lambda z}$ and $\mathcal{E}_{2}(z)=\mathcal{E}_{2}(0)e^{\lambda z}$, these equations can be solved, which results in two possible eigenvalues for $\lambda$. We are interested in the complex index seen by $\mathbf{E_1}$, which is related to an eigenvalue by $n=c\lambda/\nu_1$. The eigenvalues do not depend on the amplitudes and phases of either probe field, but the amplitudes and phases determine which eigenvalue and therefore which index value will be obtained. The corresponding eigenvectors give the initial amplitudes and phases of the probe fields that are required to obtain each eigenvalue. The results show that the two possible indices for $\mathbf{E_1}$ are affected by the frequency of the other probe field, $\mathbf{E_2}$. This means that for the four-level system, there is a new effect that can be exploited in order to obtain a desired refractive index. An example is shown in \fig{4 level plot re}, which plots the real parts of the possible indices for $\mathbf{E_1}$, representing the amount of refraction experienced by the field. For $n_1$ (solid line), the index begins to increase for increasing $\omega_2/\omega_1$. Incidentally, this also keeps the absorption low for $n_1$, as seen in \fig{4 level plot im}. The trade-off for this is seen in \fig{4 level amplitudes}, which is a plot with logarithmic scaling of the relative field amplitudes needed to obtain each eigenvalue. In order to obtain the index that increases with $\omega_2$, the amplitude of $\mathbf{E_2}$ must be much greater than that of $\mathbf{E_1}$ for larger $\omega_2$. At some point, this could become unrealistic since we are assuming that both fields are small, and for a given $\mathbf{E_2}$, only a much smaller $\mathbf{E_1}$ can be refracted.
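For concreteness, the $2\times2$ eigenvalue problem implied by the ans{\"a}tze can be sketched numerically. Here we write the propagating solution as $e^{ikz}$ (absorbing the factor of $i$ into $k$, so that $n=ck/\nu_1$); the susceptibility values and the function name are illustrative choices of our own:

```python
import cmath

def coupled_indices(chi11, chi12, chi21, chi22, nu1, nu2, c=1.0):
    """Complex indices seen by E_1 in the coupled wave equations.

    With E_j(z) ~ e^{ikz}, k^2 is an eigenvalue of the 2x2 matrix
    [[(nu1/c)^2 (1+chi11), (nu1/c)^2 chi12 ],
     [(nu2/c)^2 chi21,     (nu2/c)^2 (1+chi22)]],
    and n = c*k/nu1. Returns the two eigen-indices.
    """
    a = (nu1 / c) ** 2 * (1 + chi11)
    b = (nu1 / c) ** 2 * chi12
    d = (nu2 / c) ** 2 * chi21
    e = (nu2 / c) ** 2 * (1 + chi22)
    tr, det = a + e, a * e - b * d
    disc = cmath.sqrt(tr * tr - 4 * det)
    return [c * cmath.sqrt((tr + s * disc) / 2) / nu1 for s in (+1, -1)]

# Decoupled sanity check: with chi12 = chi21 = 0, one branch reduces
# to the familiar single-field result n = sqrt(1 + chi11).
n_plus, n_minus = coupled_indices(0.21, 0.0, 0.0, 0.05, nu1=1.0, nu2=1.2)
```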
Also, increasing $\omega_2$ decreases $\chi_{12}$, $\chi_{21}$, and $\chi_{22}$, which eventually will effectively decouple the fields. These observations suggest that there are limits to this behavior in the linear regime, in addition to the limitation imposed by the density. With four levels, there is more freedom in attaining a desired refractive index by choosing a certain level scheme, but this does not result in a significantly enhanced index in this simple case without nonlinear effects. \begin{figure}[h] \centerline{ \includegraphics[width=\linewidth]{4_level_index_plot_re.png} } \caption[Four-level index]{Real parts of the two possible eigenvalues for refractive index seen by $\mathbf{E_1}$ in the four-level system. $\Gamma_2=2\Gamma_1$, $\Omega_{1}=\Omega_{2}=\Gamma_1$, $\Delta=0$, $N=2.5\times10^{14}$ cm\textsuperscript{-3}.} \label{4 level plot re} \end{figure} \begin{figure}[h] \centerline{ \includegraphics[width=\linewidth]{4_level_index_plot_im.png} } \caption[Four-level index]{Imaginary parts of the two possible eigenvalues for refractive index seen by $\mathbf{E_1}$ in the four-level system. Parameters are the same as in \fig{4 level plot re}.} \label{4 level plot im} \end{figure} \begin{figure}[H] \centerline{ \includegraphics[width=\linewidth]{4_level_index_eigenvector_amplitudes.png} } \caption[Eigenvectors]{Plot of the relative amplitudes of the two fields required to obtain the corresponding eigenvalue for refractive index, with logarithmic scaling. Parameters are the same as in \fig{4 level plot re}.} \label{4 level amplitudes} \end{figure} \section{Conclusion} We looked at how the density of two-level systems in a medium places a natural limitation on the electric susceptibility and index of refraction of the medium and concluded that significant modification of the index is not practical with two-level systems. 
Additional transitions and fields must be used to introduce other ways of modifying the refractive index, such as coherence effects in a three-level system, but to find some enhancement in the linear susceptibilities, we need to use at least four levels. Frequency dependence introduced via four-wave mixing in a more versatile four-level system allows for ways of producing a particular refractive index by selecting a system with a suitable atomic level structure. Example results showed how the resulting index experienced by one probe field on a lower transition is affected by the placement of the other lower level, but there are still limitations to this effect; larger enhancements require relatively smaller amplitudes of the field of interest. In addition to the basic (two-level) result, coherence effects, and frequency dependence in the simplest systems in which they occur, more complicated systems could be investigated to find other effects that can enhance the index. Still more possibilities are available if one allows for effects that are nonlinear in the probe field, but this is beyond the scope of this article. We note that we have disregarded absorption and gain here, which are also important in modified-index applications, as these are often limited by high absorption \cite{density2,absorptionsuppressed}. This paper is meant as a very basic starting point in looking for susceptibility-changing applications based on atoms, which can be used to realize a negative, zero, or high refractive index. \section{Acknowledgments} We would like to acknowledge useful discussions with Marina Litinskaya and funding by the National Science Foundation via grants PHY-1912607 and PHY-1607637. \bibliographystyle{apsrev4-1}
\section{Introduction} \label{sec:introduction} Processing cores and the accompanying main memory working in tandem make up the modern processor. It is common to fabricate the cores and memory separately on different packages using 2D packaging technology and then connect them using off-chip interconnects. However, the limited bandwidth of the interconnect often becomes the performance bottleneck in the 2D processor. Recent advances in semiconductor manufacturing have enabled high-density integration of core and memory wherein the designers place them on the same package using 2.5D packaging technology to improve bandwidth. Designers can now stack memory and core on top of each other as layers using 3D packaging technology for an increase in bandwidth of several orders of magnitude~\cite{loh2007processor}. These advances enable the next generation of high-performing high-density 2.5D and 3D processors. The tighter (and vertical) integration of core and memory into a single package results in the power of both core and memory being dissipated in the same package. However, there is not much increase in the package's corresponding surface area. Consequently, the integration significantly increases the power density of the processor. Therefore, these high-density processors (packaged with stacked memory) face even more severe thermal issues than low-density 2D processors~\cite{coudrain2016experimental}. Promising as these processors are, their associated thermal issues prevent them from going mainstream. Therefore, thermal management for high-density processors is now an active research subject~\cite{Hajkazemi2017, Lo2016, siddhu2019predictncool,siddhu2020leakage}. However, the availability of thermal sensors in real processors is limited, and they often lack the temporal and spatial resolutions needed for thermal management research.
Given the challenges involved in measuring temperatures in real-world processors, thermal simulations play an essential role in enabling thermal management research. However, due to the lack of better open-source tools, existing works on thermal management of high-density processors and most works on thermal management of low-density processors are based on in-house trace-based simulators~\cite{cao2019survey}. Recent advances in Electronic Design Automation~(EDA) have enabled detailed core-only interval thermal simulations using sophisticated open-source toolchains~\cite{pathania2018hotsniper,rohith2018lifesim, Hankin:2021:Hotgauge}. Trace-based simulation relies on first collecting traces (performance, power) of each application running in isolation. It then performs segregated temperature simulations on the merged (independent) traces. In contrast, an interval-based simulation executes all applications in parallel, allowing it to consider contention on shared resources. \textcolor{black}{The following motivational example shows that interval simulations are more detailed and accurate than trace-based simulations.} \input{images/motivational} \textcolor{black}{ \textbf{Motivational Example:} Figure~\ref{fig:motivational}(a) shows a scenario in which two instances of \emph{SPLASH-2 ocean.ncont} are executing in parallel. Both instances of \emph{ocean.ncont} compete for DRAM bandwidth, which leads to a performance reduction of 16\% due to stall cycles; trace-based simulation (I) cannot capture this effect. In addition, stall cycles reduce the power consumption and consequently the temperature. Therefore, trace-based simulation overestimates the temperature (II). One can overcome this overestimation by obtaining traces for all combinations of applications, but such an approach might already be prohibitive. Dynamic Voltage Frequency Scaling (DVFS) technology in processors further aggravates the problem. 
Scaling V/f levels affects the performance of memory- and compute-intensive applications differently. Figure~\ref{fig:motivational}(b) shows the traces of the compute-intensive \emph{PARSEC swaptions} and the memory-intensive \emph{SPLASH-2 ocean.ncont} at 4\,GHz~(top) and 1\,GHz~(bottom). The points in their execution that the two applications reach (after 100\,ms) at 4\,GHz are different from the points they reach when operating at 1\,GHz. Therefore, the trace one obtains at a constant 1\,GHz cannot be used to continue a simulation that switches from 4\,GHz to 1\,GHz after 100\,ms. The trace-based simulation would require traces of all combinations of applications at all V/f levels and all relative shifts of applications. The collection of all these traces is practically infeasible.} Cycle-accurate simulations~\cite{binkert2011gem5} are more accurate than interval simulations. However, they are also extremely slow and difficult to parallelize (often single-threaded). Cycle-accurate simulations are essential for testing the accuracy of new micro-architecture designs, which designers can do in a limited number of processor cycles. In system research, on the other hand, we are required to simulate multiple (many) processors simultaneously for time measured in minutes (hours) rather than cycles to reproduce the necessary system-level behavior. For example, a single-core processor running at 1 GHz goes through a billion processor cycles every second. Cycle-accurate simulations are therefore too slow for system-level research. Interval simulations are several orders of magnitude faster than cycle-accurate simulations and provide a good trade-off between simulation speed and accuracy. Interval simulations are therefore best suited for system-level research that requires simulation of a multi-/many-core processor for a long duration but with high fidelity.
However, existing interval thermal simulation toolchains do not model the main memory and cannot be used to study the high-density processors wherein core and memory are tightly integrated and thermally coupled. In this work, we present the first interval thermal simulation toolchain, called \emph{CoMeT}\xspace, that holistically integrates both core and memory. \emph{CoMeT}\xspace provides performance, power, and temperature values at regular user-defined intervals (epochs) for core and memory. The support for thermal interval simulation for both core and memory using the \emph{CoMeT}\xspace toolchain comes at only $\sim$5\% additional simulation-time overhead over {\em HotSniper}~\cite{pathania2018hotsniper} (state-of-the-art thermal interval simulation toolchain for core-only simulations). \emph{CoMeT}\xspace enables users to evaluate and analyze run-time thermal management policies for various core-memory (integration) configurations as shown in Figure~\ref{fig:SysMemArch}~\cite{coudrain2016experimental,stow2016cost, sodani2015knights, hassan2015near, park2021high, loh20083d}. Figure~\ref{fig:SysMemArch}(a) shows a conventional but the most common configuration with cores and 2D DRAM on separate packages. Figure~\ref{fig:SysMemArch}(b) replaces the 2D DRAM with a 3D memory for faster data access. Figure~\ref{fig:SysMemArch}(c) further bridges the gap between core and 3D memory by putting them side by side within a package. Figure~\ref{fig:SysMemArch}(d) advances the integration by stacking cores over the 3D memory to reduce data access delays further. We refer to configurations shown in Figures~\ref{fig:SysMemArch}(a),~\ref{fig:SysMemArch}(b),~\ref{fig:SysMemArch}(c), and~\ref{fig:SysMemArch}(d) as {\em 2D-ext}, {\em 3D-ext}, {\em 2.5D}, and {\em 3D-stacked}, respectively. \begin{figure}[t] \centering \includegraphics[width=0.72\linewidth]{images/SysMemArch.pdf} \caption{Various core-memory configurations. 
Core also includes caches \textcolor{black}{and can have multiple layers as well.} } \label{fig:SysMemArch} \end{figure} We see \emph{CoMeT}\xspace primarily as a tool for system-level thermal management research. \emph{CoMeT}\xspace, therefore, comes equipped with several features that facilitate system research. \emph{CoMeT}\xspace ships with the \textit{SchedAPI} Application Programming Interface (API) library, which users can use to implement their custom thermal (resource) management policies. We also develop and integrate \emph{HeatView}\xspace into \emph{CoMeT}\xspace, which generates a representative video of the thermal simulation for a quick human-comprehensible visual analysis. It also contains an integrated floorplan generator. \emph{CoMeT}\xspace has an extendable automatic build verification~(smoke) test suite that checks critical functionalities across various core-memory configurations and their underlying architectural parameters for quick validation of code edits. We develop and integrate the \emph{SimulationControl}\xspace framework in \emph{CoMeT}\xspace, with which users can run simulations for various workloads and configurations in batch mode. Using \emph{CoMeT}\xspace, we also illustrate the thermal patterns for different core-memory configurations using benchmarks from several diverse benchmark suites. These experiments helped us develop many insights into the thermal interactions of cores and memory and their influence on each other's temperatures. We also present a thermal-aware scheduling case study wherein we simulate operations of the default {\em on-demand} Governor~\cite{pallipadi2006ondemand} from {\em Linux} operating in conjunction with a Dynamic Thermal Management~(DTM) on a 3D stacked processor. We make several new interesting thermal observations through our case study.
In the same spirit, we envision other researchers will also identify several new thermal behaviors for existing and upcoming core-memory configurations using \emph{CoMeT}\xspace. Consequently, \emph{CoMeT}\xspace will enable them to propose and evaluate novel thermal management policies for these configurations. In particular, we make the following key contributions in this work. \begin{enumerate} \item We introduce an open-source interval thermal simulation toolchain, called \emph{CoMeT}\xspace, that holistically integrates core and memory. It supports the simulation of multi-/many-core processors in several different core-memory configurations. \item We describe several novel features in \emph{CoMeT}\xspace that facilitate system-level thermal management research in processors. \item We perform thermal analysis of different core-memory configurations using \emph{CoMeT}\xspace and present new interesting thermal observations. We also highlight the suitability of \emph{CoMeT}\xspace for studying thermal-aware system scheduling via a case study. \end{enumerate} \textbf{Open Source Contribution:} The source code for \emph{CoMeT}\xspace is released under {\em MIT} license for unrestricted use and is available for download at \href{https://github.com/marg-tools/CoMeT}{https://github.com/marg-tools/CoMeT}. \section{Background and Related Work} \label{sec:related_work} Thermal-aware design of computing systems has been a significant area of research since the early 2000s. With the current technology nodes, the phenomenon of dark silicon (not being able to use the entire chip simultaneously due to thermal issues)~\cite{pathania2018hotsniper, henkel2015darksilicon} is becoming prominent. It is driving the need for fine-grained thermal management to respect the thermal limits of the system. Multi-/many-core processors exhibit unbalanced temperatures and distributed thermal hotspots, making thermal management non-trivial~\cite{khdr2014mdtm}. 
Various works have addressed thermal management for cores using different techniques~\cite{Huang2000, lasbouygues2007temperature, calimera2008thermal, bao2009line, homayoun2010relocate, kumar2010neural, bailis2011dimetrodon, ayoub2013cometc, liu2013layout, cox2013thermal, sironi2013thermos, zapater2013leakage, cochran2013thermal, sridhar20133d, juan2014statistical, khdr2014mdtm, zhang2015hotspot, prakash2016improving, bogdan2016power, Zheng2016, wang2017fast, kumar2017fighting, liu2018thermal, pathania2018hotsniper, hmctherm, sadrosadati2019itap, smartboost}. These works primarily include voltage and frequency scaling, hardware reconfiguration, power and clock gating, and cache throttling as knobs for thermal management. They propose proactive thermal management policies such as task allocation based on future temperature and reactive thermal management policies such as task migration and scheduling for cores. Also, some works have addressed optimizing multiple metrics such as energy, power, and temperature. To name a few, Huang et al.~\cite{Huang2000} proposed a framework for dynamic management of energy and temperature of core in a unified manner. Khdr et al.~\cite{khdr2014mdtm} proposed a technique that employs centralized and distributed predictors to prevent violating temperature thresholds while maintaining the balance between the temperature of different cores. Zapater et al.~\cite{zapater2013leakage} proposed a temperature- and leakage-aware control policy to reduce the energy consumption of data centers by controlling the fan speed. Designing appropriate thermal management policies requires a fast and accurate thermal simulator for quick evaluation, which has resulted in the development of various open-source thermal simulators such as {\em 3D-ICE}~\cite{sridhar20133d} and {\em HotSpot}~\cite{zhang2015hotspot}. 
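Under the hood, simulators like {\em HotSpot} model the chip as a network of lumped thermal resistances and capacitances driven by per-block power. A minimal single-node sketch of this idea (illustrative constants and function name; not {\em HotSpot}'s actual solver or API):

```python
def rc_step(T, P, R, C, T_amb, dt):
    """One forward-Euler step of a single thermal RC node:
        C * dT/dt = P - (T - T_amb) / R
    T: temperature (K), P: power (W), R: thermal resistance (K/W),
    C: thermal capacitance (J/K), dt: time step (s)."""
    return T + (dt / C) * (P - (T - T_amb) / R)

# With constant power, the node settles at T_amb + P * R.
T, P, R, C, T_amb, dt = 300.0, 20.0, 1.5, 50.0, 300.0, 0.1
for _ in range(100_000):
    T = rc_step(T, P, R, C, T_amb, dt)
```

Real simulators solve thousands of such coupled nodes (one or more per floorplan block, plus package and heat-sink layers), but each transient step has this same structure.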
Such thermal simulators~\cite{zhang2015hotspot, sridhar20133d, dramsim3, sultan2020fast, sultan2021variability, wang2017fast} use floorplan and power traces as inputs and generate temperature values as output. {\em 3D-ICE}~\cite{sridhar20133d} is a thermal simulator with a transient thermal model with microchannel cooling for liquid-cooled ICs. {\em HotSpot}~\cite{zhang2015hotspot} provides a fast and accurate thermal model for transient and steady-state simulations. Thermal simulators pave the way for an early-stage understanding of potential thermal issues in chips and facilitate studies to understand the implications of different designs and floorplans and develop cooling solutions. A performance simulator for processors, such as {\em Sniper}~\cite{sniper} or {\em gem5}~\cite{binkert2011gem5}, integrated with a power model (such as {\em McPAT}~\cite{mcpat}) generates the power traces used inside these thermal simulators. The {\em McPAT}~\cite{mcpat} framework can model the power, area, and timing of processor components. It supports technology nodes ranging from 90\,nm to 22\,nm. {\em Sniper}~\cite{sniper} is a multi-/many-core performance simulator that uses interval simulation to simulate a system at a higher level of abstraction than a detailed cycle-accurate simulator. {\em Sniper} achieves simulation speeds several orders of magnitude faster than a cycle-accurate simulator such as {\em gem5}. {\em Sniper} integrates {\em McPAT} and enables regular monitoring of the processor's power consumption. \textcolor{black}{Looking at the memory part, {\em DRAMsim3}~\cite{dramsim3} and {\em HMCTherm}~\cite{hmctherm} are cycle-accurate DRAM simulators and support thermal simulation. While {\em DRAMsim3} models 2D and 3D DRAMs, {\em HMCTherm} models 3D memory based on the Hybrid Memory Cube (HMC) specification.
The detailed cycle-accurate modeling significantly reduces their simulation speed and makes them unsuitable for integration with existing core-only interval-based performance simulators~\cite{sniper}. Further, {\em DRAMsim3} and {\em HMCTherm} focus on off-chip memories and do not consider novel technologies such as 2.5D or 3D integration of cores and memories. {\em CACTI-3DD}~\cite{cacti} is an architecture-level integrated power, area, and timing modeling framework for new memory technologies such as 3D-stacked memories in addition to commodity 2D DRAM and caches. It enables easier integration with an architectural-level core performance simulator.} Several works have used trace-based evaluation for core thermal management policies~\cite{cox2013thermal, Zheng2016, liu2018thermal, deshwal2019moos}. Cox et al.~\cite{cox2013thermal} use trace-based simulation with {\em HotSpot} to obtain temperatures and perform a thermal-aware mapping of streaming applications on 3D processors. Liu et al.~\cite{liu2018thermal} use a trace-based thermal simulation methodology with power traces generated from {\em gem5} + {\em McPAT} for dynamic task mapping on systems with reconfigurable network-on-chip. A thermal-aware design space exploration work, {\em MOOS}~\cite{deshwal2019moos}, generates power traces from {\em gem5} and {\em McPAT} and uses {\em HotSpot} and a custom analytical model for temperature estimation of 3D integrated cores and caches. Such a trace-based approach was sufficient for their work as they did not consider any dynamic policy for thermal management. A key limitation of evaluating thermal management policies using trace-based simulations is that they do not feed the temperature impact into the performance simulator. This restricts the accuracy and scope of the analysis.
Many aspects of thermal management, such as reducing the frequency of heated cores based on temperature or adapting cache partitioning based on temperature, cannot be captured by traces collected in isolation and hence can lead to errors or inaccuracies in the overall evaluation. Further, as motivated in Section~\ref{sec:introduction}, an infeasible number of traces might need to be generated to capture the parameter tuning in both performance and thermal simulators. Addressing these issues associated with trace-based simulators requires integrating performance and thermal simulators in a coherent manner. {\em HotSniper} was the first to provide an integrated infrastructure for interval-based performance and thermal simulation of 2D processor cores. {\em HotSniper}~\cite{pathania2018hotsniper} integrates the {\em Sniper} performance simulator with {\em HotSpot} thermal simulator and provides an infrastructure for core-only interval thermal simulations of multi-/many-core processors. {\em HotSniper} enables a feedback path for temperature from the thermal simulator ({\em HotSpot}) to the performance simulator ({\em Sniper}) to help make thermal-aware decisions for thermal management. {\em LifeSim}~\cite{rohith2018lifesim} is another notable example of a similar attempt with an additional focus on thermals-based reliability. Recently released {\em HotGauge}~\cite{Hankin:2021:Hotgauge} integrates {\em Sniper} with {\em 3D-ICE}. Conventionally, memories have lower power dissipation and thus induce lower heating (compared to high-frequency cores~\cite{bircher2008analysis}), thereby requiring limited thermal management. Therefore, prior works such as {\em HotSniper} supported thermal analysis only for cores. With increasing memory bandwidth requirements of applications, high-density {\em 3D-ext}, {\em 2.5D}, and {\em 3D-stacked} processors are becoming popular but face severe thermal issues~\cite{Hajkazemi2017, Lo2016}. 
Furthermore, high-density processors (and memories within) have significant leakage power dissipation that increases with temperature and forms a positive feedback loop between leakage power and temperature. Therefore, in recent times, memory heating in high-density processors has also received significant research attention~\cite{siddhu2020leakage, siddhu2019predictncool}. {\em FastCool}~\cite{siddhu2020leakage} discusses DTM strategies for 3D memory considering the leakage power dissipation in memory and the positive feedback loop between leakage power and temperature. {\em PredictNCool}~\cite{siddhu2019predictncool} proposes a proactive DTM policy for 3D memories using a lightweight steady-state temperature predictor. \textcolor{black}{ Instead of using detailed command-level 3D memory models~\cite{dramsim3, hmctherm}, both {\em FastCool} and {\em PredictNCool} obtain energy-per-access from {\em CACTI-3DD}~\cite{cacti} to derive memory power based on access traces and use {\em HotSpot} for thermal simulation in a trace-based methodology. While they use the same tools (CACTI-3DD, Sniper, and HotSpot) as \emph{CoMeT}\xspace, their evaluation suffers from the already discussed limitations of a trace-based simulation. Moreover, such a setup cannot provide dynamic feedback to cores, limiting its scope and accuracy.} \section{COMET: Integrated Thermal Simulation for Cores and Memories} \label{sec:proposal} \emph{CoMeT}\xspace integrates a performance simulator~({\em Sniper}~\cite{sniper}), a power model for core~({\em McPAT}~\cite{mcpat}), a power model for memory~({\em CACTI}~\cite{cacti}), and a thermal simulator ({\em HotSpot}~\cite{zhang2015hotspot}) to perform an integrated interval performance, power, and thermal simulation for cores and memories. It also provides many other useful features and utilities to handle multiple core-memory configurations, thermal management, floorplan generation, etc., within the framework.
We present the proposed \emph{CoMeT}\xspace tool flow and features in this section. \subsection{\emph{CoMeT}\xspace Tool Flow} \label{sec:ToolFlow} We first provide an overview of the \emph{CoMeT}\xspace toolchain and then explain each block in detail. \subsubsection{Overview} \label{sec:OverviewToolFlow} \begin{figure*}[t] \centering \begin{minipage}[t]{\linewidth} \centering \includegraphics[width=\linewidth]{images/COMET.pdf} \caption{Overview of \emph{CoMeT}\xspace Flow} \label{fig:Tool-Flow} \end{minipage} \end{figure*} Figure~\ref{fig:Tool-Flow} shows an overall picture of the \emph{CoMeT}\xspace toolchain. Components in blue indicate the key contributions of \emph{CoMeT}\xspace. The toolchain first invokes an \circled{1} interval performance simulator~(e.g., {\em Sniper}~\cite{sniper}) to simulate a workload and tracks access counts to various internal components such as execution units, caches, register files, etc. We also extended the existing performance simulator to monitor memory access counts. The access counts are accumulated and passed to the \circled{2} power model at every epoch (e.g., \SI{1}{\milli\second}). The power model (e.g., {\em McPAT}~\cite{mcpat}) calculates the core and memory power during the corresponding epoch, which the toolchain then feeds, along with the chip \circled{3} floorplan, to a \circled{4} thermal simulator (e.g., {\em HotSpot}~\cite{zhang2015hotspot}) for calculating core and memory temperature. Depending upon the type of core-memory configuration, thermal simulation of core and memory can occur separately (Figures~\ref{fig:SysMemArch}(a), (b)) using two different invocations of the thermal simulator or together (Figures~\ref{fig:SysMemArch}(c), (d)) using a single invocation. As shown in Figure~\ref{fig:Tool-Flow}, the toolchain provides the core and memory temperatures as inputs to the \circled{5} DTM policy.
If the temperature exceeds a threshold, the DTM policy will invoke knobs (e.g., changing the core and memory power state, operating voltage/frequency, task/data mapping, etc.) to manage the temperature. Such knobs would affect the performance simulation, and the above process repeats until the end of the workload. Once the simulation is complete, \emph{CoMeT}\xspace collects various metrics such as IPC, cache hit rate, DRAM bandwidth utilization, etc., from the performance simulator, power traces from the power model, and temperature traces from the temperature simulator. These metrics and traces are processed to generate different plots and statistics, enabling easier and more detailed analysis. We provide details of each of these blocks in the following subsection. \subsubsection{Toolchain Details} \label{sec:DetailsToolFlow} \begin{figure*}[t] \centering \begin{minipage}[t]{\linewidth} \centering \includegraphics[width=\linewidth]{images/COMET-Details.pdf} \caption{\emph{CoMeT}\xspace Detailed Flow} \label{fig:Detailed-Tool-Flow} \end{minipage} \end{figure*} Figure~\ref{fig:Detailed-Tool-Flow} shows different blocks of the \emph{CoMeT}\xspace flow in more detail. Figure~\ref{fig:Detailed-Tool-Flow}(a) illustrates the performance simulation block, which simulates a workload and provides access counts of different core blocks and memory. We updated the {\em Sniper}~\cite{sniper} performance simulator to monitor and accumulate memory read (RD) access and write (WR) access counts (separately for each bank) in each epoch. Modern-day cores use various voltage-frequency levels and scheduling strategies to improve performance, increase energy efficiency, and reduce temperature. We provide an \emph{ondemand} governor and a scheduler for open systems~\cite{open_system} as default policies that users can suitably modify (more details in Section~\ref{sec:Scheduler}) to jumpstart development. 
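The per-epoch loop of Figure~\ref{fig:Tool-Flow} can be sketched as follows; every function here is a stub of our own naming that only mirrors the structure of the flow, not \emph{CoMeT}\xspace's actual API:

```python
# Stubs standing in for the performance simulator (Sniper), the power
# models (McPAT/CACTI), the thermal simulator (HotSpot), and a DTM
# policy. All names and constants are illustrative.
def simulate_epoch(workload, freq_ghz, dt_ms):
    return {"core0": int(1e6 * freq_ghz), "bank0": 4000}  # access counts

def power_model(counts, freq_ghz):
    return {"core0": 1e-7 * counts["core0"],   # W, toy energy-per-access
            "bank0": 5e-4 * counts["bank0"]}

def thermal_model(power, temps, dt_ms, amb=300.0, R=2.0, C=10.0):
    dt = dt_ms / 1e3                           # one RC node per component
    return {k: t + (dt / C) * (power[k] - (t - amb) / R)
            for k, t in temps.items()}

def dtm_policy(temps, freq_ghz, t_dtm=353.0):
    return 1.0 if max(temps.values()) > t_dtm else freq_ghz  # throttle

def run_interval_simulation(workload, epochs, dt_ms=1):
    """Per-epoch loop: access counts -> power -> temperature -> DTM,
    with the DTM decision fed back into the next epoch."""
    freq, temps = 4.0, {"core0": 318.0, "bank0": 315.0}
    log = []
    for _ in range(epochs):
        counts = simulate_epoch(workload, freq, dt_ms)
        power = power_model(counts, freq)
        temps = thermal_model(power, temps, dt_ms)
        freq = dtm_policy(temps, freq)
        log.append(dict(temps, freq=freq))
    return log

log = run_interval_simulation("toy-workload", epochs=10)
```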
The DVFS controller and the scheduler control various aspects of the performance simulation and provide inputs to the performance simulator. The user defines different processor architecture parameters, such as the number of cores, frequency range, cache sizes, etc., as a part of the settings. \emph{CoMeT}\xspace then provides the settings as inputs to the performance simulation block. We now explain the power model block shown in Figure~\ref{fig:Detailed-Tool-Flow}. \emph{CoMeT}\xspace uses the access counts generated from the performance simulation block, the power state, and the operating voltage/frequency of each core to calculate core and memory power at every epoch. Core and memory power are calculated separately for each core and each bank, respectively. The settings provide various technology parameters (e.g., technology node in nano-meters) and architecture parameters (such as cache attributes, number of functional units, etc.) as inputs to the power model. \emph{CoMeT}\xspace calculates the core power using {\em McPAT}~\cite{mcpat}. To calculate the memory power, \emph{CoMeT}\xspace first extracts the energy per access for RD and WR operations from a memory modeling tool~({\em CACTI-3DD}~\cite{cacti}). This energy-per-access data is used within the access-rate-to-power converter block to convert the RD and WR access counts of each bank to the corresponding dynamic power. The next block is the thermal simulation block (Figure~\ref{fig:Detailed-Tool-Flow}(c)), which calculates the temperature of individual cores and memory banks using their power consumption and the chip floorplan/layer information. While a user can provide a manually developed floorplan file, we also implemented an automatic floorplan generator for various regular layouts (details in Section~\ref{sec:Floorplan}). Users provide the floorplan as an input to the thermal simulation block.
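The access-rate-to-power conversion for a memory bank boils down to multiplying per-access energies by access counts and dividing by the epoch duration. The following sketch uses made-up energy values, not CACTI-3DD outputs:

```python
# Illustrative access-count-to-dynamic-power conversion for one memory bank.
# The per-access energies are assumed numbers, not CACTI-3DD results.

E_RD_PJ = 15.0   # assumed read energy per access (picojoules)
E_WR_PJ = 22.0   # assumed write energy per access (picojoules)

def bank_dynamic_power(rd_count, wr_count, epoch_s=1e-3):
    """Dynamic power (watts) = total access energy in the epoch / epoch duration."""
    energy_j = (rd_count * E_RD_PJ + wr_count * E_WR_PJ) * 1e-12
    return energy_j / epoch_s

# e.g., 10^5 reads and 4*10^4 writes accumulated over a 1 ms epoch
p = bank_dynamic_power(100_000, 40_000)
```

Repeating this per bank yields the per-bank dynamic power map that, together with the floorplan, feeds the thermal simulation block.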
We use the fast and popular thermal simulator {\em HotSpot}~\cite{zhang2015hotspot}\footnote{\textcolor{black}{ \emph{CoMeT}\xspace uses {\em HotSpot} as the thermal simulator, but one can extend it to support any other (more accurate) thermal simulator. This extension is possible because most thermal simulators (e.g., HotSpot, 3D-ICE) follow similar input-output interfaces but different formats. During each interval, these simulators require a power trace as an input (generated by a performance simulator like Sniper) and generate a temperature trace as an output. Therefore, the addition of a trace-format converter within \emph{CoMeT}\xspace should suffice to support different thermal simulators. These simulators also require configuration parameters and floorplan information as inputs, which typically remain unchanged during the entire simulation. Thus, different thermal simulators can be supported by generating this information in an appropriate format, either manually or through automation (e.g., \textit{floorplanlib}). A plugin-type integration of various simulators would be useful, and we leave it as future work for now.}} within \emph{CoMeT}\xspace, extended to combine the dynamic power with the temperature-dependent leakage power at each epoch. Section~\ref{sec:leakage-aware} presents details of the temperature-dependent leakage-aware modeling. The user provides the thermal and tool parameters (epoch time, initial temperatures, config file, etc.) to the thermal simulation block. As shown in Figure~\ref{fig:Detailed-Tool-Flow}(d), the DTM policy manages temperature by employing a range of actions, such as using low power states, decreasing the core voltage-frequency (V/F), changing the task/data mapping, and reducing power density. We provide a default throttle-based scheme (Section~\ref{sec:Scheduler}), which can be used as a template to help users develop and evaluate different thermal management schemes.
The DTM policy uses the temperature data provided by the thermal simulation block and controls the power states, V/F settings, or the task/data mapping. The performance simulation and power model blocks make use of these knobs. After the workload simulation completes, using \emph{SimulationControl}\xspace, \emph{CoMeT}\xspace outputs the performance, power, and temperature at various timesteps for both core and memory (not shown in Figure~\ref{fig:Detailed-Tool-Flow}). Such traces are also available in graphical format, enabling quicker and better analysis. In addition, \emph{HeatView}\xspace generates a temperature video showing the thermal map of the various cores and memory banks at different time instances. On the input side, \emph{SimulationControl}\xspace allows users to run simulations in batch mode (Section~\ref{sec:SimulationControl}). We elaborate on the various key features of \emph{CoMeT}\xspace in the following subsections. \subsection{Support for Multiple Core-Memory Configurations} \label{sec:ArchSupport} In this section, we discuss the various core-memory configurations supported in \emph{CoMeT}\xspace. Today's technology supports integrating the core and memory in a processor in multiple ways~\cite{hassan2015near}. As shown in Figure~\ref{fig:SysMemArch}, we support four different kinds of core-memory configurations in \emph{CoMeT}\xspace. Designers can package the core and memory separately (Figure~\ref{fig:SysMemArch}(a), (b)) or on the same package (Figure~\ref{fig:SysMemArch}(c), (d)). The packaged chips are then soldered onto a Printed Circuit Board~(PCB) for mechanical stability and electrical connectivity. Off-chip 2D core-memory configurations~\cite{jacob2009memory}, referred to as \textit{2D-ext} in this work, are the most widely used configurations today. In such core-memory configurations, the core usually has a heat sink for cooling while the DRAM memory is air-cooled (Figure~\ref{fig:SysMemArch}(a)).
The \emph{CoMeT}\xspace toolchain supports studying thermal issues in such core-memory configurations. Off-chip 3D memories are becoming popular in many processors with the rising need for higher memory bandwidth. However, the increased power density causes thermal issues, requiring a heat sink for memory cooling (Figure~\ref{fig:SysMemArch}(b)). We refer to such core-memory configurations as \textit{3D-ext} in this work. \textcolor{black}{The 3D memory contains a logic core layer (not shown in the figure) at the bottom, which manages the routing of requests and data between the various layers of the 3D memory.} The above off-package core-memory configurations (\textit{2D-ext} or \textit{3D-ext}) have a higher interconnect delay. In an alternative core-memory configuration referred to as \textit{2.5D}~\cite{coudrain2016experimental, hassan2015near} (Figure~\ref{fig:SysMemArch}(c)), a 3D memory and a 2D core are placed within the same package, thereby reducing the interconnect delay. An interposer~\cite{coudrain2016experimental} acts as a substrate and helps route connections between the memory and core. However, the thermal behavior becomes more complex as this design places memory and core closer together, so that they influence each other's temperature. In Figure~\ref{fig:SysMemArch}(d), the core and memory are stacked, achieving the lowest interconnect delay. Designers prefer to place the core nearer to the heat sink for better cooling. We refer to such a core-memory configuration as \textit{3D-stacked} in this work. \emph{CoMeT}\xspace supports all four core-memory configurations with various options to configure the number of cores, memory banks, and layers. \textcolor{black}{\emph{CoMeT}\xspace also models the power dissipation from the logic core layer in the \textit{3D-ext} and \textit{2.5D} configurations.} We perform a detailed analysis of thermal patterns for these four core-memory configurations and present the corresponding observations in Section~\ref{sec:ExpStudy}.
We built \emph{CoMeT}\xspace to consider certain aspects of recent emerging memory technologies in which a Non-Volatile Memory~(NVM)~\cite{salkhordeh2016operating}, such as Phase Change Memory~(PCM), acts as the main memory. Unlike conventional DRAMs, the energy consumption of read and write operations in PCM differs considerably. Hence, \emph{CoMeT}\xspace needs to account for reads and writes separately. \emph{CoMeT}\xspace allows the user to specify separate parameters for the read and write energy per access, thereby providing hooks for extending it to support such emerging memory technologies. \textcolor{black}{ However, one limitation in replacing DRAM with NVM within \emph{CoMeT}\xspace is the underlying architectural simulator ({\em Sniper}), which does not accurately model heterogeneous read and write access times for memory. We plan to address this limitation in future work.} \subsection{Leakage-Aware Thermal Simulation for Memories} \label{sec:leakage-aware} \input{images/power_temperature} As the temperature rises, the leakage power consumption increases. This increase raises the temperature further, forming a temperature-leakage positive feedback loop. We model the thermal effects of temperature-dependent leakage power (for memories) similar to~\cite{siddhu2019predictncool} and validate them using \textit{ANSYS Icepak}~\cite{icepak}, a commercial detailed temperature simulator. We use {\em CACTI-3DD}~\cite{cacti} to characterize the variation of memory-bank leakage power dissipation with temperature. We observe that, for memories, the leakage power contributes significantly ($\sim$40\%) to the total power dissipation (at $\sim$70\textdegree C, see Figure~\ref{fig:Leak-Impact}). In \emph{CoMeT}\xspace, we obtain the temperature-dependent leakage power consumption \textcolor{black}{(using exponential curve fitting)} and add it during thermal simulation.
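The exponential curve fitting mentioned above can be reproduced with ordinary least squares on the logarithm of the power samples; the (temperature, power) pairs below are synthetic stand-ins for CACTI-3DD data:

```python
# Sketch of exponential curve fitting for temperature-dependent leakage power.
# The (temperature, power) samples are synthetic, not CACTI-3DD data.
import math

samples = [(40.0, 0.50), (55.0, 0.77), (70.0, 1.18), (85.0, 1.81)]  # (deg C, W)

# Fit P(T) = a * exp(b * T) by linear least squares on log(P) = log(a) + b * T.
n = len(samples)
sx = sum(t for t, _ in samples)
sy = sum(math.log(p) for _, p in samples)
sxx = sum(t * t for t, _ in samples)
sxy = sum(t * math.log(p) for t, p in samples)
b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
a = math.exp((sy - b * sx) / n)

def leakage_power(temp_c):
    """Leakage power (W) predicted by the fitted exponential at temp_c."""
    return a * math.exp(b * temp_c)
```

During thermal simulation, the fitted `leakage_power` would be evaluated at the previous epoch's temperature and added to the dynamic power before the next thermal solve, closing the temperature-leakage feedback loop.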
Following a similar approach, we use {\em McPAT} to extend {\em HotSpot} to account for temperature-dependent leakage power for cores. \subsection{Simulation Control Options} \label{sec:SimulationControl} \begin{figure*}[t] \centering \begin{minipage}[t]{\linewidth} \centering \includegraphics[width=\linewidth]{images/SimulationControl.pdf} \caption{\emph{SimulationControl}\xspace} \label{fig:SimulationControl} \end{minipage} \end{figure*} A common use case with multi-/many-core simulators is to run many simulations that vary in only a few parameters, such as the workload and architectural parameters. These simulation runs are then quantitatively compared. \emph{CoMeT}\xspace's \emph{SimulationControl}\xspace package provides features to facilitate this use case (Figure~\ref{fig:SimulationControl}). It enables running simulations in batch mode and stores the traces in separate folders. The \emph{SimulationControl}\xspace package provides a simple Python API to specify the parameters of each simulation run: the workload and the \emph{CoMeT}\xspace configuration options. After each run, \emph{CoMeT}\xspace stores the generated traces in a separate folder and creates plots (images) for the major metrics (power, temperature, CPU frequency, IPS, CPI stacks, etc.). Optionally, it also automatically creates the thermal video using the \emph{HeatView}\xspace feature (Section~\ref{sec:HeatView}). In addition to an API to run many different simulations, the \emph{SimulationControl}\xspace package provides a Python API to read the generated traces and higher-level metrics (e.g., average response time, peak temperature, and energy). This API enables users to build custom evaluation scripts. The \emph{SimulationControl}\xspace package, for example, can run the same random workload at varying task arrival rates with different thermal management policies. It generates graphs that enable users to check the resulting temperature traces visually.
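The batch-mode use case can be sketched generically as a loop over the cross product of workloads and configurations. All function and field names here are hypothetical illustrations and do not correspond to the actual \emph{SimulationControl}\xspace API:

```python
# Generic sketch of a batch runner over (workload, configuration) pairs.
# Names and the toy metric are hypothetical, not the SimulationControl API.
import itertools

def run_simulation(benchmark, arch):
    """Stand-in for a single simulation run; returns toy summary metrics."""
    peak = 60.0 + 2.0 * len(benchmark) + (8.0 if "3D" in arch else 0.0)
    return {"benchmark": benchmark, "arch": arch, "peak_temp_c": peak}

def run_batch(benchmarks, archs):
    """Run every (benchmark, configuration) pair and collect the results."""
    return [run_simulation(b, a) for b, a in itertools.product(benchmarks, archs)]

results = run_batch(["lbm", "mcf"], ["2D-ext", "3D-stacked"])
hottest = max(results, key=lambda r: r["peak_temp_c"])
```

A custom evaluation script built on such results could, for instance, print a table of peak temperatures per run or select the configuration with the lowest thermal headroom.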
Users can further perform evaluations using the \emph{SimulationControl}\xspace API~(e.g., print a table with the peak temperature of each run). \subsection{SchedAPI: Resource Management Policies for Application Scheduling, Mapping, and DVFS} \label{sec:Scheduler} Run-time thermal management affects the performance, power, and temperature of a multi-/many-core processor~\cite{smartboost}. In turn, the design of run-time thermal management techniques depends on the objective (e.g., performance or energy), the constraints (e.g., temperature), and also on the targeted platform and its characteristics (e.g., micro-architecture or cooling system). Thus, several thermal management techniques exist in the literature, catering to different scenarios. The purpose of \emph{CoMeT}\xspace is to facilitate the development and evaluation of novel run-time thermal management techniques targeting, but not limited to, the new (stacked) core-memory configurations. Thermal management utilizes knobs like application scheduling, mapping and migration, and Dynamic Voltage and Frequency Scaling (DVFS). It then makes decisions on these knobs using observations of the system state: applications and their characteristics, power consumption, core/memory temperature, etc. All these properties need to be tightly integrated into the infrastructure so that these metrics are available to a thermal management policy during the simulation. Thermal management generally targets an open system, where applications arrive at times that are unknown beforehand~\cite{open_system}. {\em HotSniper}~\cite{pathania2018hotsniper} was the first toolchain to explore the concept of scheduling for open systems using {\em Sniper}. The scheduler API (\emph{schedAPI}) in \emph{CoMeT}\xspace extends this feature but with a strong focus on user-friendliness to integrate new policies for mapping, migration, and DVFS. The arrival times of the applications are configurable in several ways.
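For instance, random arrival times for an open system can be drawn from exponential inter-arrival gaps, which yields a Poisson arrival process; this toy generator is an illustration, not CoMeT's implementation:

```python
# Toy generator of open-system arrival times: exponential inter-arrival
# gaps produce a Poisson arrival process. Not CoMeT's actual code.
import random

def poisson_arrivals(rate_per_s, horizon_s, seed=42):
    """Return sorted arrival timestamps in [0, horizon_s) with the given mean rate."""
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(rate_per_s)  # exponential inter-arrival time
        if t >= horizon_s:
            return arrivals
        arrivals.append(t)

# e.g., a mean of 100 application arrivals per second over a 1 s horizon
arrivals = poisson_arrivals(rate_per_s=100.0, horizon_s=1.0)
```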
\emph{CoMeT}\xspace supports uniform arrival times, random arrival times (Poisson distribution), or explicit user-defined arrival times. Task mapping and migration follow the one-thread-per-core model, common in many-core processors~\cite{one_thread_per_core}. The default policy assigns cores to an incoming application based on a static priority list. It is straightforward to extend the default policy to implement more sophisticated policies. DVFS uses freely-configurable voltage/frequency (V/f) levels. \emph{CoMeT}\xspace comes with two reference policies: a policy that assigns static frequency levels (intended to characterize the system with different applications at different V/f levels) and the Linux \textit{ondemand} governor~\cite{pallipadi2006ondemand}. Users can configure the epoch durations (default: \SI{1}{\milli\second}) for scheduling, migration, and DVFS. A common use case of \emph{CoMeT}\xspace is the implementation and study of custom resource management policies. To this end, \emph{schedAPI} provides APIs as abstract {\em C++} classes that users can extend with custom policies, with minimal changes in the codebase. We discuss a case study of using such a policy in \emph{CoMeT}\xspace and the corresponding insights for a stacked architecture in Section~\ref{sec:CaseStudy}. \subsection{\emph{HeatView}\xspace} \label{sec:HeatView} A workload executing on a typical multi-/many-core processor undergoes multiple heating and cooling phases. Such phases might occur due to the workload characteristics themselves or due to the effect of a DTM policy. In such cases, developing deeper insights into the workload behavior and the operation of the DTM policy is essential. However, analyzing various log files can be cumbersome and error-prone. We develop and integrate \emph{HeatView}\xspace within \emph{CoMeT}\xspace to analyze such thermal behavior.
\emph{HeatView}\xspace generates a video to visually present the simulation's thermal behavior, with the temperature indicated through a color map. \emph{HeatView}\xspace takes as input the temperature trace file generated by \emph{CoMeT}\xspace, along with other configurable options, and generates images corresponding to each epoch and a video of the entire simulation. Users can use the videos corresponding to different workloads (or core-memory configurations) to compare heating patterns across workloads (or architectures). \emph{HeatView}\xspace adapts to the core-memory configuration when generating the thermal maps. Depending upon the specified core-memory configuration type among the four choices (\textit{2D-ext}, \textit{3D-ext}, \textit{2.5D}, or \textit{3D-stacked}), \emph{HeatView}\xspace can represent the core and memory stacked over each other or side-by-side. The temperature scale used within \emph{HeatView}\xspace to show the thermal patterns is also configurable. Additionally, to reduce the video generation time, \emph{HeatView}\xspace provides an option to periodically skip frames based on a user-specified sampling rate. \emph{HeatView}\xspace also allows configuring parameters to improve viewability. By default, we present a 3D view of the core and memory (stacked or side-by-side) to the user. Users can specify a core or memory layer number to plot separately as a 2D map (Figures~\ref{fig:hv_3Dmem}, ~\ref{fig:hv_2.5D}, and ~\ref{fig:hv_3D}). As Figure~\ref{fig:hv_2.5D_2D} shows, users can also view each layer separately. \emph{HeatView}\xspace always plots the \textit{2D-ext} architecture as a 2D view (example in Figure~\ref{fig:hv_DDR}). \subsection{Floorplan Generator} \label{sec:Floorplan} Thermal simulation of a 2D processor requires a floorplan that specifies the sizes and locations of various components (cores, caches, etc.) on the silicon die.
A stacked processor additionally requires one floorplan per layer and a layer configuration file that specifies the layer ordering and thermal properties. \emph{CoMeT}\xspace comes with some built-in floorplans and layer configuration files for several different architectures and configurations, as examples. However, in the general case of custom simulations, users must create floorplans and layer configuration files according to the properties of the simulated system. \emph{CoMeT}\xspace comes with an optional helper tool (\textit{floorplanlib}) to generate custom floorplans. The tool supports all four core-memory configurations described in Figure~\ref{fig:SysMemArch}. It supports creating regular grid-based floorplans, where cores and memory banks align in a rectangular grid. The user only needs to specify the number and dimensions of cores and memory banks, \textcolor{black}{the thicknesses of core or memory layers, the distance between core and memory (for 2.5D configurations), etc.} \textcolor{black}{In addition, it is possible to provide a per-core floorplan (e.g., ALU, register file, etc.), which is replicated for each core in the generated floorplan.} Users can still provide more complex (irregular) floorplans to \emph{CoMeT}\xspace manually. \subsection{Automated Build Verification (Smoke Testing)} \label{sec:SmokeTest} While making changes to the code base, one might inadvertently introduce errors in an already working feature of the tool. To efficiently detect such scenarios, we provide an automated test suite with \emph{CoMeT}\xspace that verifies the correct working of the toolchain's key features. We use a combination of different micro-benchmarks to develop a test suite that exercises various tool features. After the tests complete, we summarize the pass/failure status of the test cases and the error logs to help users debug the causes of failure of \emph{CoMeT}\xspace's features.
While the test suite performs a comprehensive smoke test of all \emph{CoMeT}\xspace features, users can limit the testing to only a subset of features to save time. When users extend the toolchain, they can add new test cases corresponding to the added functionalities; the automated build verification then helps them confirm that the critical functionalities of \emph{CoMeT}\xspace still work. It also facilitates quick debugging of new thermal management policies. \section{Experimental Studies} \label{sec:ExpStudy} In this section, we present various experiments that demonstrate the features of \emph{CoMeT}\xspace and discuss the insights developed through these studies. Further, we also quantify the simulation time overhead of \emph{CoMeT}\xspace over the state-of-the-art. \subsection{Experimental Setup} \label{sec:exptSetup} We use a diverse set of benchmark suites -- {\em PARSEC~2.1}~\cite{parsec}, {\em SPLASH-2}~\cite{splash2}, and {\em SPEC~CPU2017}~\cite{spec2017} -- to study the performance, power, and thermal profiles for core and memory. Table~\ref{table:benchmark_list} lists the selected benchmarks from each suite. We classify the benchmarks into compute-intensive~(\textit{blackscholes, swaptions, barnes, radiosity, lu.cont, raytrace, gcc, exchange, x264, nab}), mixed~(\textit{streamcluster, vips, dedup, bodytrack, water.nsq, cholesky}), and memory-intensive~(\textit{lbm}, \textit{mcf}) based on their memory access rate. We compile the source code of the {\em PARSEC~2.1} (input size \textit{simmedium}) and {\em SPLASH-2} (input size \textit{small}) benchmarks to get the binaries for simulation. For the {\em SPEC~CPU2017} benchmarks, we directly use pre-generated traces (Pinballs, with 100M instructions) from~\cite{specTraces}. Table~\ref{table:CoreMemParams} shows the core and memory parameters for the various core-memory configurations that we use in our experiments.
We use \emph{CoMeT}\xspace's automated \textit{floorplanlib} tool to generate the floorplans for the various core-memory configurations. We run simulations using \emph{CoMeT}\xspace and obtain performance, power, and temperature metrics for various workloads. \textcolor{black}{\textit{HotSpot} uses grid-level simulation with an 8x8 grid in the center mode.} Thermal simulation is invoked periodically, every \SI{1}{\milli\second}. \begin{table}[t] \caption{Core and Memory Parameters} \centering \begin{tabular}{|p{3.6cm}|p{7.7cm}|} \hline \textbf{Core Parameter} & \textbf{Value}\tabularnewline \hline \hline Number of Cores & 4\tabularnewline \hline Core Model & 3.6 GHz, 1.2 V, 22 nm, out-of-order, 3-way decode, 84-entry ROB, 32-entry LSQ\tabularnewline \hline L1 I/D Cache & 4/16 KB, 2/8-way, 64B-block \tabularnewline \hline L2 Cache & Private, 64 KB, 8-way, 64B-block\tabularnewline \hline \hline \textbf{Memory Parameter} & \textbf{Value}\tabularnewline \hline \hline 3D Memory ({\em 3D-ext}, {\em 2.5D}, {\em 3D-stacked}) Configuration & 1 GB, 8 layers, 16 channels, 8 ranks, 1 bank per rank, closed page policy, 29/20/15 ns (latency), 7.6 GBps (per channel bandwidth)\tabularnewline \hline 2D Memory Off-chip Configuration & 2 GB, 1 layer, 1 channel, 4 ranks, 4 banks per rank, closed page policy, 45 ns (latency), 7.6 GBps (per channel bandwidth)\tabularnewline \hline \end{tabular} \label{table:CoreMemParams} \end{table} \begin{table}[t] \caption{List of Benchmarks} \centering \begin{tabular}{|p{3.3cm}|p{8.7cm}|} \hline \textbf{Benchmark Suite} & \textbf{Selected Benchmarks}\tabularnewline \hline \hline {\em PARSEC~2.1} & \textit{dedup, streamcluster, vips, bodytrack, swaptions, blackscholes}\tabularnewline \hline {\em SPLASH-2} & \textit{lu.cont, water.nsq, radiosity, raytrace, barnes, cholesky}\tabularnewline \hline {\em SPEC~CPU2017} & \textit{lbm, mcf, gcc, nab, x264, exchange} \tabularnewline \hline \end{tabular} \label{table:benchmark_list} \end{table}
\subsection{Thermal Profile for Various Architecture Configurations} \label{sec:ThermalProfileArchitecture} \textcolor{black}{ We present the thermal behavior of cores and memories for each of the four core-memory configurations supported by \emph{CoMeT}\xspace. We consider the \textit{exchange}, \textit{x264}, \textit{mcf}, and \textit{lbm} benchmarks from the {\em SPEC~CPU2017} suite and map them on Cores 0, 1, 2, and 3, respectively, to exercise a heterogeneous workload containing benchmarks of different memory intensity. Each core maps to a fixed set of 3D memory channels. Core 0 maps to Channels \{0, 1, 4, 5\}, Core 1 maps to Channels \{2, 3, 6, 7\}, Core 2 maps to Channels \{8, 9, 12, 13\}, and Core 3 maps to Channels \{10, 11, 14, 15\}.} \emph{HeatView}\xspace uses the temperature trace generated during the simulation to create a video of the thermal pattern of the various cores and memory banks. The videos for the simulations are available online at \href{https://tinyurl.com/cometVideos}{tinyurl.com/cometVideos}. Figures~\ref{fig:hv_DDR}, ~\ref{fig:hv_3Dmem}, ~\ref{fig:hv_2.5D}, and ~\ref{fig:hv_3D} present snapshots at \SI{15}{\milli\second} of simulation time for each of the four architectures. \textcolor{black}{ Figure~\ref{fig:hv_DDR} presents the temperature profile of the cores and the external DDR memory. We observe that Cores 0 and 1 have relatively higher temperatures than Cores 2 and 3 due to the execution of compute-intensive benchmarks on Cores 0 and 1. Further, Core 1 has a slightly higher temperature than Core 0 as \textit{x264} is more compute-intensive than \textit{exchange}. We do not observe any temperature gradient on the memory side: the 2D memory has a single channel, and accesses from different cores are shared and uniformly distributed among the banks, thereby eliminating any gradient. } Figure~\ref{fig:hv_3Dmem} shows the temperature profile of the cores and an external 8-layer 3D memory.
As the cores and memory banks are physically located on different chips, they do not influence each other's temperature and have different thermal profiles. We see that the memory banks attain significantly higher temperatures. Further, due to the heat sink at the top of the 3D memory, the temperature of the lower layers is higher than that of the upper layers, with a gradual decrease as we move up the memory stack. \textcolor{black}{ Due to the heterogeneous nature of the benchmarks and each core mapping to a fixed set of channels, we observe that different 3D memory channels attain different temperatures. In the cross-section view of a memory layer shown in the figure, Channels 10, 11, 14, and 15 correspond to \textit{lbm}, a highly memory-intensive benchmark, which results in high temperatures in the memory layer. Channels 0, 1, 4, and 5 are relatively cooler as they correspond to Core 0, which executes a compute-intensive benchmark (\textit{exchange}). Channels 2, 3, 6, and 7 also correspond to a compute-intensive benchmark (\textit{x264}), but Channels 6 and 7 have higher temperatures than Channels 2 and 3 due to thermal coupling from the adjacent hot Channels 10 and 11. Different cores also attain different temperatures due to the differing nature of the benchmarks executed. } While this core-memory configuration (\textit{3D-ext}) differs from \textit{2D-ext} only in using an external 3D memory instead of a DDR memory, we observe that the cores in \textit{3D-ext} (Figure~\ref{fig:hv_3Dmem}) are relatively hotter than the cores in \textit{2D-ext} (Figure~\ref{fig:hv_DDR}) because of the faster execution enabled by the 3D memory. \emph{CoMeT}\xspace enables such insights due to its integrated core-memory thermal simulation; they cannot be easily (or accurately) quantified using a standalone trace-based simulation infrastructure.
\begin{figure}[t] \centering \includegraphics[width=0.70\linewidth]{images/hv_ddr.PNG} \caption{\textcolor{black}{Thermal profile of core and memory at \SI{15}{\milli\second} when executing a heterogeneous workload on {\em 2D-ext} core-memory configuration.}} \label{fig:hv_DDR} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.90\linewidth]{images/hv_3Dmem.PNG} \caption{\textcolor{black}{Thermal profile of core and memory at \SI{15}{\milli\second} when executing a heterogeneous workload on {\em 3D-ext} core-memory configuration.}} \label{fig:hv_3Dmem} \end{figure} \begin{figure}[t] \includegraphics[width=0.90\linewidth]{images/hv_2.5D.PNG} \caption{\textcolor{black}{Thermal profile of core and memory at \SI{15}{\milli\second} when executing a heterogeneous workload on {\em 2.5D} core-memory configuration.}} \label{fig:hv_2.5D} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.90\linewidth]{images/hv_3D.PNG} \caption{\textcolor{black}{Thermal profile of core and memory at \SI{15}{\milli\second} when executing a heterogeneous workload on {\em 3D-stacked} core-memory configuration.}} \label{fig:hv_3D} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.90\linewidth]{images/hv_2.5D_2Dview.PNG} \caption{\textcolor{black}{Detailed/2D view of each layer for the {\em 2.5D} configuration, corresponding to Figure~\ref{fig:hv_2.5D}}} \label{fig:hv_2.5D_2D} \end{figure} Figure~\ref{fig:hv_2.5D} shows the temperature profile of cores and 3D memory integrated on the same package in a {\em 2.5D} configuration. \textcolor{black}{ Similar to the previous case of {\em 3D-ext} (Figure~\ref{fig:hv_3Dmem}), we observe that different cores and different memory channels in the {\em 2.5D} core-memory configuration attain different temperatures due to the heterogeneous nature of the workload. 
Also, the core and memory are thermally coupled in the {\em 2.5D} core-memory configuration, resulting in significantly higher temperatures for the same workload. Since the cores are farther from the heat sink than in {\em 3D-ext}, their heat dissipation capability reduces, leading to much higher temperatures. } Figure~\ref{fig:hv_3D} shows the thermal profile for a {\em 3D-stacked} configuration with one layer of four cores stacked over an 8-layer 3D memory. We observe that any layer of the memory is hotter than the corresponding layer in the \textit{3D-ext} or {\em 2.5D} core-memory configuration: the stacking of cores on top of the 3D memory limits the heat dissipation paths, further raising the temperature. \textcolor{black}{ Similar to the other core-memory configurations, we observe that different memory channels attain different temperatures due to the heterogeneous nature of the workload. However, the cores heat almost uniformly given their proximity to the heat sink and their coupling with the memory layers. While Cores 0 and 1, executing compute-intensive benchmarks, should attain higher temperatures than Cores 2 and 3, their corresponding memory channels exhibit lower temperatures due to fewer memory accesses and help absorb the excess heat. \emph{CoMeT}\xspace enables such insights due to its support for various core-memory configurations. } To illustrate \emph{HeatView}\xspace's ability to create thermal maps with detailed layerwise views (2D view), we use the {\em 2.5D} configuration (Figure~\ref{fig:hv_2.5D}) as an example. The corresponding layerwise plot, shown in Figure~\ref{fig:hv_2.5D_2D}, provides more details of each layer. \subsection{Thermal Profile for Various Benchmarks} \label{sec:ThermalProfileBenchmarks} We analyze the performance, power, and thermal profiles for core and memory for various benchmark suites using \emph{CoMeT}\xspace.
Figure~\ref{fig:tss} shows the core and memory temperatures and the execution time for the {\em PARSEC~2.1}, {\em SPLASH-2}, and {\em SPEC~CPU2017} benchmarks running on a four-core system with an off-chip 3D memory (\textit{3D-ext} architecture). \textcolor{black}{In these experiments, we execute multiple instances of the benchmark, one on each CPU core.} A four-core system with a heat sink has sufficient cooling paths. However, we see a significant temperature rise with higher power dissipation in the cores. Most benchmarks in the {\em PARSEC~2.1} and {\em SPLASH-2} suites are multi-threaded and compute-intensive, so the average DRAM access rate remains low for these benchmarks throughout their execution. However, due to the high density of 3D memories, the (temperature-dependent) leakage power contributes significantly to the overall memory power dissipation. Stacking also increases the power density, resulting in memory temperatures of around 71$^\circ$C (increasing with the memory access rate). In the {\em SPEC~CPU2017} suite, \textit{lbm} is a memory-intensive benchmark with a high average DRAM access rate (as high as $10^8$ accesses per second). Therefore, \textit{lbm} results in significantly higher memory temperatures than the other benchmarks. \input{images/benchmark_characterization} \textcolor{black}{\subsection{Thermal Profile with Fine-grained Core Components}} \label{sec:subcore} \textcolor{black}{ We illustrate the ability of \emph{CoMeT}\xspace to simulate a fine-grained core floorplan with the individual components of a core modeled explicitly. As mentioned in Section~\ref{sec:Floorplan}, \textit{floorplanlib} can generate a multi-core floorplan if the user also provides a fine-grained floorplan for a single core as input. We derive the area of each component from \textit{McPAT} to construct the floorplan for a single core. The relative placement of the different components is similar to {\em Intel's Skylake} processor design from HotGauge~\cite{Hankin:2021:Hotgauge}.
\textit{floorplanlib} generates a fine-grained floorplan for four cores using this single-core floorplan as a template. We use the four-core floorplan to simulate workloads in the \textit{3D-ext} configuration, with the same workloads and 3D memory configuration as in our previous experiments in Section~\ref{sec:exptSetup} and a finer grid size of 32$\times$32. Figure~\ref{fig:subcore} shows the corresponding thermal map obtained from \emph{CoMeT}\xspace.\footnote{\textcolor{black} {The thermal map figure is generated outside of \emph{HeatView}\xspace, as \emph{HeatView}\xspace currently supports plotting of uniform blocks only.}} Similar to our previous result for the \textit{3D-ext} configuration shown in Figure~\ref{fig:hv_3Dmem}, we observe that different cores attain different temperatures due to the heterogeneous workloads. In addition, because the fine-grained floorplan models the power consumption of each component, we observe a thermal gradient between different components of the same core. The execution units such as the ALU (arithmetic and logic unit), FPU (floating-point unit), and ROB (reorder buffer) attain higher temperatures than the remaining components such as the ID (instruction decoder), the L1I (L1 instruction cache), or the L2 cache. This feature of \emph{CoMeT}\xspace provides deeper insight into thermal hotspots within a core and thus enables more appropriate thermal management decisions. } \begin{figure}[t] \centering \includegraphics[width=0.99\linewidth]{images/subcore.png} \caption{\textcolor{black}{Thermal profile of cores with a fine-grained floorplan in a \emph{3D-ext} core-memory configuration}} \label{fig:subcore} \end{figure} \textcolor{black}{\subsection{Effect of Thermal Coupling in the 2.5D Architecture}} \input{images/2.5D_coupling} \textcolor{black}{We illustrate the effect of thermal coupling between the core and memory in a \textit{2.5D} core-memory configuration.
As the 3D memory is co-located with the cores on the same package, the memory temperature affects the core temperature and vice-versa. We experiment with the 3D memory power enabled (both leakage and dynamic power taken into account) and disabled (leakage and dynamic power forced to 0) during simulation. We repeat this experiment for eight different workloads to exercise different activity factors for the cores and memory. We use the six homogeneous workloads (Table~\ref{table:benchmark_list}) from the previous experiments and two heterogeneous workloads, each consisting of four independent benchmarks. The \textit{mix1} heterogeneous workload includes a mix of the \textit{lbm}, \textit{x264}, \textit{exchange}, and \textit{mcf} benchmarks, while the \textit{mix2} workload includes \textit{lbm}, \textit{gcc}, \textit{nab}, and \textit{mcf}. Figure~\ref{fig:coupling} shows the maximum core temperature (over the four cores) with memory power enabled and disabled, with workloads ordered by increasing memory intensity. We observe that the thermal coupling increases as we move from compute-intensive workloads (left) to memory-intensive workloads (right). Higher memory activity raises the memory temperature, leading to stronger thermal coupling. Memory-intensive workloads (e.g., \textit{lbm}, \textit{mcf}) induce the strongest thermal coupling, and enabling 3D memory power dissipation raises the temperature of the cores by up to 11~\textdegree C (for \textit{lbm}). } \subsection{Case Study: Thermal-Aware Scheduler and DVFS} \label{sec:CaseStudy} In this section we present a case study of the analyses that are possible with \emph{CoMeT}\xspace and demonstrate some trends that appear in stacked core-memory configurations. We employ the Linux \emph{ondemand} governor~\cite{pallipadi2006ondemand} with DTM. The \emph{ondemand} governor increases or decreases per-core V/f-levels when the core utilization is high or low, respectively.
DTM throttles the chip to the minimum V/f-level when a thermal threshold is exceeded and increases the frequency back to the previous level once the temperature falls below the thermal threshold minus a hysteresis parameter. In this experiment, we set the two thresholds to 80$^\circ$C and 78$^\circ$C. The temperature is initialized to a 70$^\circ$C peak to emulate a prior workload execution. We execute the \emph{PARSEC} benchmark \emph{swaptions} with four threads to fully utilize all processor cores. Figure~\ref{fig:case_study} depicts the temperature and frequency throughout the execution. \emph{Swaptions} is compute-bound, and hence the \emph{ondemand} governor selects the highest available frequency. Consequently, the processor quickly reaches the temperature limit of 80$^\circ$C. DTM then reduces the frequency until the temperature has decreased, leading to thermal cycles, as shown in the figure, where the frequency toggles between a low and a high value. We observe the peak temperature not on the cores but on the memory layers (Section~\ref{sec:ThermalProfileArchitecture}). This simulation uses a {\em 3D-stacked} architecture---enabled by \emph{CoMeT}\xspace---and reveals some interesting trends. The temperature on the core layer is directly affected by DTM: it decreases almost exponentially upon thermal throttling, e.g., at 203\,ms (Point~A). It takes several milliseconds (5\,ms in this example) until the temperature at layers farther away from the core layer decreases, during which the temperature overshoots the thermal threshold. Similarly, when returning to normal operation, the temperature of the hotspot reacts with a significant delay to DTM decisions. This delay arises because the hotspot's location (the lowest memory layer) is far from the layer most affected by DTM (the core layer). This observation is unlike traditional 2D architectures, where the two coincide (thermal hotspots in the cores).
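The throttle/release decision with hysteresis described above can be sketched as follows. This is a simplified illustration only: the function name, signature, and default thresholds are ours (chosen to match the 80$^\circ$C/78$^\circ$C settings of this experiment), not \emph{CoMeT}\xspace's actual implementation.

```python
def dtm_step(temp_c, throttled, t_dtm=80.0, hysteresis=2.0):
    """One DTM decision per epoch: throttle above t_dtm, release only
    once the temperature drops below t_dtm minus the hysteresis."""
    if not throttled and temp_c >= t_dtm:
        return True          # throttle to the minimum V/f-level
    if throttled and temp_c < t_dtm - hysteresis:
        return False         # restore the previous V/f-level
    return throttled         # otherwise keep the current state
```

Applied to a temperature trace that oscillates around the limit, this logic reproduces the qualitative thermal cycles seen in the case study: the chip toggles between a low and a high V/f-level around the 80$^\circ$C threshold.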
Existing state-of-the-art thermal management~\cite{rapp2020neural} and power budgeting algorithms~\cite{niknam2021t} cannot account for these trends. Therefore, such different trends require novel policies (algorithms) that can be easily evaluated on \emph{CoMeT}\xspace using the interfaces presented in Section~\ref{sec:Scheduler}. \input{images/case_study} \subsection{Parameter Variation} \label{sec:Parameter} \subsubsection{Increasing the Number of Cores} We study the performance and thermal effect of increasing the number of cores (and threads) for the {\em PARSEC} workloads running on {\em 3D-ext} configuration. We increase the number of cores from 4 to 16 and observe that some {\em PARSEC} workloads~(such as \textit{bodytrack}, \textit{streamcluster}, \textit{vips}, and \textit{swaptions}) can utilize parallelism more effectively (Figure~\ref{fig:Parsec-16C}). Workloads such as \textit{blackscholes} and \textit{dedup} either have a significant serial phase or imbalanced thread execution time, resulting in a limited speedup with a marginal increase in temperature. For \textit{blackscholes}, we observe a speedup of $\sim$1.5x (compared to a 4x increase in the number of cores) as it spends most of the execution time in the serial phase. \input{images/arch_comparison} \subsubsection{Increasing the Number of Core Layers} Until now, all our experiments have considered cores on a single layer. Here, we demonstrate the ability of \emph{CoMeT}\xspace to perform thermal simulation for multiple layers of cores. We consider the same \textit{3D-stacked} core-memory configuration corresponding to Figure~\ref{fig:hv_3D} but extend it to have two layers of cores and, therefore, a total of 8 cores. \textcolor{black}{ We execute the same set of heterogeneous benchmarks, with the same benchmark mapped to vertically stacked cores. Specifically, \textit{exchange}, \textit{x264}, \textit{mcf}, and \textit{lbm} are mapped to cores \{0,4\}, \{1,5\}, \{2,6\}, and \{3,7\} respectively. 
Figure~\ref{fig:hv_3D_2L} shows the temperature pattern of the various core and memory layers. We observe that, compared to Figure~\ref{fig:hv_3D} with only a single core layer, an additional core layer on top raises the temperatures of the bottom layers significantly. Further, the temperature gradient (the effect of adjacent layers) is more pronounced with two core layers than with one (Figure~\ref{fig:hv_3D_2L}). } This experiment demonstrates the versatility of \emph{CoMeT}\xspace in adapting to different kinds of core-memory configurations, with single or multiple layers of cores integrated with single or multiple layers of memory. Such a capability enables \emph{CoMeT}\xspace to analyze the performance, power, and thermal behavior of various emerging core-memory configurations and to identify optimization opportunities within them. We strongly believe that \emph{CoMeT}\xspace can help identify many new research problems and evaluate their proposed solutions. \begin{figure}[t] \centering \includegraphics[width=0.90\linewidth]{images/hv_3D_2L.PNG} \caption{\textcolor{black}{Thermal profile of core and memory at \SI{15}{\milli\second} when executing a heterogeneous workload on the \textit{3D-stacked} core-memory configuration with 2 layers of cores on top of 8 layers of memory.}} \label{fig:hv_3D_2L} \end{figure} \subsection{Overhead Analysis} \label{sec:Overhead Analysis} \input{images/overhead} Compared to {\em HotSniper}, which runs core-only performance and thermal simulations, \emph{CoMeT}\xspace executes thermal simulations for both core and memory. Figure~\ref{fig:Parsec-Overhead} compares the simulation time for the PARSEC workloads running on a processor with off-chip 2D DRAM (\textit{2D-ext} core-memory configuration) under {\em HotSniper} and \emph{CoMeT}\xspace. For \textit{2D-ext}, \emph{CoMeT}\xspace runs separate thermal simulations for core and memory.
Compared to {\em HotSniper}, we observe only a marginal increase in simulation time ($\sim$5\%, on average) with \emph{CoMeT}\xspace. This is because the performance simulation dominates the total simulation time, so an additional thermal simulation leads to only a marginal increase. Furthermore, we simulated the other configurations ({\em 3D-ext}, {\em 2.5D}, and {\em 3D-stacked}) and observed less than $\sim$2\% variation in simulation times. Overall, \emph{CoMeT}\xspace incurs an acceptable increase of $\sim$5\% in simulation time to provide memory temperatures (in addition to core temperatures) at the epoch level. \section{Conclusion and Future Work} \label{sec:conclusion} High-performance, high-density stacked core-memory configurations for multi-/many-core processors are becoming popular and need efficient thermal management. We present the first work featuring an integrated core and memory interval thermal simulation toolchain, namely \emph{CoMeT}\xspace, supporting various core-memory configurations. \emph{CoMeT}\xspace provides several useful features, such as thermal visualization (video), a user-modifiable DTM policy, a built-in floorplan generator, easy simulation control, and an automated testing framework, to facilitate system-level thermal management research for processors. We discussed various experimental studies performed using \emph{CoMeT}\xspace, which will help researchers identify research opportunities and enable detailed, accurate evaluation of research ideas. Compared to a state-of-the-art core-only interval thermal simulation toolchain~\cite{pathania2018hotsniper}, \emph{CoMeT}\xspace adds only $\sim$5\% simulation-time overhead. The source code of \emph{CoMeT}\xspace has been made publicly available under the {\em MIT} license. \textcolor{black}{ We plan to extend \emph{CoMeT}\xspace to support 3D-stacked SRAM caches and NVM architectures.
We also plan to explore a plugin-based integration of thermal simulators to simplify the use of emerging and more accurate thermal simulators with \emph{CoMeT}\xspace. } \ifdefined\IEEEformat \bibliographystyle{IEEEtran}
\section{INTRODUCTION} In recent years, much of the research in semiconductor physics has been shifting towards {\em spintronics}, \cite{pri,wol-aws-buh} the novel branch of electronics in which the information is carried, at least in part, by the spin of the electrons. The electron spin might be used in the future to build quantum computing devices combining logic and storage based on spin-dependent effects in semiconductors. In order to achieve this goal, much study has been devoted recently to the magnetic and optical \cite{opt} properties of semiconductor quantum dots \cite{tav,los-div} and quantum wells. \cite{gol} One of the most popular spin-based devices was proposed by Datta and Das.\cite{dat-das} Improvements to the original design have been proposed recently by Egues et al.\ \cite{egu-bur-los,sch-egu-los} The Datta-Das device makes use of the Rashba spin-orbit coupling \cite{ras,byc-ras} in order to perform controlled rotations of the spins of electrons passing through the channel of a field-effect transistor (FET), thus creating a spin-FET. The Rashba term is the manifestation of the spin-orbit interaction in quasi-one-dimensional (quasi-1D) semiconductor nanostructures lacking {\it structural} inversion symmetry. Additionally, the lack of {\it bulk} inversion symmetry enables another spin-orbit term in the electronic Hamiltonian, the Dresselhaus term, \cite{dre} which is also taken into account in the spin-FET design introduced in Ref.\ [\onlinecite{sch-egu-los}]. The influence of the Rashba and Dresselhaus Hamiltonians on quantum dots (QDs) has recently been treated in a number of theoretical works.
The most-often studied geometry is that of quasi-two-dimensional dots with parabolic confinement in the plane.\cite{gov,tsi-loz-gog,des-ull-mar} On the other hand, there is growing interest and experimental progress in another type of quantum dot defined inside quasi-1D structures called nanorods or nanowhiskers.\cite{lie} In these structures, additional confinement in the longitudinal direction can be introduced with great precision, thereby allowing the formation of quasi-1D heterostructures, such as multiple quantum dots \cite{sam,bjo-etal} and dot superlattices. \cite{wu-fan-yan} Nanorods can be grown out of numerous semiconductor materials. Their lateral widths can be controlled by selecting the size of the gold nanoparticles which are used to catalyze their growth, and can be made as small as 3 nm.\cite{mor-lie} Recently, the transport properties of these nanorod dots have been measured and the gated control of the number of electrons in them has been demonstrated.\cite{bjo-etal} Motivated by this experimental progress, we study theoretically the electronic structure of quasi-1D coupled double dots including spin-orbit effects. This type of dot system has also attracted interest in the field of quantum control of orbital wave functions due to its simplicity and tunability.\cite{tam-met,zha-zha,chaos,cre-pla} As we will see here, these dots are also well-suited for applications involving control of the spin degrees of freedom, since they allow a great deal of control over the Rashba and Dresselhaus Hamiltonians. In this paper we study the influence of the Rashba and Dresselhaus spin-orbit Hamiltonians on the electronic structure of quasi-1D QDs, akin to those formed in semiconductor nanorods. Our emphasis on the spin-orbit interaction is obviously motivated by the current widespread interest in developing spintronic applications, which require a detailed understanding of the dynamics of the spin degree of freedom in semiconductor nanostructures.
Let us denote by $x$ and $y$ the two transverse directions and by $z$ the longitudinal direction of a quasi-1D nanorod, and let us call $V_z(z)$ the confining potential that defines a pair of coupled QDs along the nanorod. The laterally-confining potentials $V_x(x)$ and $V_y(y)$ are crucial in the determination of the Rashba and Dresselhaus Hamiltonians, and we consider different combinations of these potentials which can arise in our elongated geometry. We calculate the energy spectra and the wave functions by exact numerical diagonalization of the total Hamiltonian and analyze how the energy levels and the effective $g$-factor change as the Rashba and Dresselhaus couplings are modulated by varying the lateral confining potentials. Furthermore, we study the effect of varying the size of one of the dots and the width of the central barrier between them. Since the strength of the spin-orbit interaction varies greatly among semiconductor compounds, we look at several materials such as GaAs, InSb, GaSb, and InAs. Finally, we investigate the effective spin $\left\langle S_{z} \right\rangle$ as a function of the strength of the Rashba-like term for all the eigenfunctions of InSb with two different geometries. The quantization along different directions results in a peculiar spin-momentum dependence. This in turn gives rise to SO effects that depend strongly on the symmetries of the lateral confinement potentials. As such, the observation of SO spin splittings, as we will see, is directly attributable to asymmetry of the confinement and provides an interesting probe of built-in strain fields and/or unbalanced composition gradients. We organize the paper as follows. In Sec.\ \ref{sec:intro} we introduce the effective one-dimensional Hamiltonian and list the simplified forms it takes depending on the choice of confinement potentials. In Sec.\ \ref{sec:energies} we present the results for the energy levels including either the Dresselhaus or the Rashba term.
In Sec.\ \ref{sec:gfactor} we study the effective {\em g}-factor and the expectation value of the {\em z}-component of the spin as a function of the strength of the Rashba term for different semiconductors and eigenstates. In Sec.\ \ref{sec:conclusions} we provide a discussion and conclusion. \section{The one-dimensional Hamiltonian} \label{sec:intro} We start with the complete Hamiltonian for a three-dimensional semiconductor structure in the absence of magnetic field, \begin{equation} H = \frac{p^2}{2m^{*}} + V(\mathbf{r}) + H_D + H_R, \end{equation} where $m^*$ is the conduction-band effective mass, $\mathbf{p}$ is the momentum, $V(\mathbf{r})$ is the confinement potential, and $H_D$ and $H_R$ are the general Dresselhaus and Rashba Hamiltonians.\cite{des-ull-mar-2004b} Here we follow the current practice of calling Rashba terms those spin-orbit contributions to the Hamiltonian that arise due to the structural inversion asymmetry of the nanostructure, as opposed to the Dresselhaus terms which come from the bulk inversion asymmetry of the III-V semiconductors. 
Integrating out the {\em x} and {\em y} variables, we obtain the following effective one-dimensional Hamiltonian: \begin{equation} H_{1d} = \frac{p_{z}^{2}}{2m^{*}}+V_{z}\left(z\right)+H_{1dD}+H_{1dR}, \end{equation} \begin{eqnarray} H_{1dD}&=&\frac{\gamma_{D}}{\hbar^{3}} \{\sigma_{x}\left\langle p_{x}\right\rangle \left(\left\langle p_{y}^{2}\right\rangle -p_{z}^{2}\right) -\sigma_{y}\left\langle p_{y}\right\rangle \left(\left\langle p_{x}^{2}\right\rangle -p_{z}^{2}\right)\nonumber\\ &+& \sigma_{z}p_{z}\left(\left\langle p_{x}^{2}\right\rangle -\left\langle p_{y}^{2}\right\rangle \right)\}, \\ H_{1dR} &=& \frac{\gamma_{R}}{\hbar}\{\sigma_{x}\left(\left\langle \frac{\partial V_y}{\partial y}\right\rangle p_{z}-\frac{\partial V_z}{\partial z}\left\langle p_{y}\right\rangle \right) \nonumber \\ &-& \sigma_{y}\left(\left\langle \frac{\partial V_x}{\partial x}\right\rangle p_{z}-\frac{\partial V_z}{\partial z}\left\langle p_{x}\right\rangle \right) \nonumber \\ &+& \sigma_{z}\left(\left\langle \frac{\partial V_x}{\partial x}\right\rangle \left\langle p_{y}\right\rangle-\left\langle \frac{\partial V_y}{\partial y}\right\rangle \left\langle p_{x}\right\rangle \right)\}, \end{eqnarray} where $\sigma_i$, $i=x,y,z$, are the Pauli matrices, $H_{1dD}$ is the one-dimensional Dresselhaus term, and $H_{1dR}$ is the Rashba-like term enabled by the inversion asymmetry of the laterally confining potentials $V_x$ and $V_y$. $\gamma_{R}$ and $\gamma_{D}$ are material-dependent parameters. The averages $\langle \ldots \rangle$ are taken over the lowest-energy wave functions of the laterally confining potentials, as we assume small nanorod widths. In Table I we present the parameters used in our calculations for different semiconductor materials. An example of the confining potential in the longitudinal direction, $V_{z}(z)$, is shown in Fig.\ \ref{fig:potential_dots}, along with a schematic drawing of the nanorod QDs.
\begin{figure}[tbp] \includegraphics*[width=8cm]{fig1.eps} \caption{Potential-energy profile and schematic drawing of two Al/InSb coupled nanorod quantum dots. For InSb-based systems we take a well height of 100 meV, and for Al/GaAs, 220 meV. In this example, the QD width is $300 \mbox{ \AA}$ and barrier width 30 \AA, with smoothly changing barriers over a width of a few angstroms. The drawings (a)-(c) illustrate the lateral confinement geometries described in the text.} \label{fig:potential_dots} \end{figure} We now list four different possibilities for the confining potentials $V_x(x)$ and $V_y(y)$, based on the degree of symmetry of the structure. The Dresselhaus and Rashba Hamiltonians simplify considerably due to the fact that, in the absence of a magnetic field, the eigenstates can be chosen real, and therefore, expectation values of the momentum are zero.\cite{sha} The four cases are (see Fig.\ 1(a)-(c) for schematic drawings of the potentials in the first three cases): \medskip \noindent {\bf (a)} Circular: $V_{x}\left(x\right), V_{y}\left(y\right)$ have inversion symmetry about the origin and are equal, $V_{x}\left(x\right)=V_{y}\left(y\right)$: \begin{equation} H_{SO} = H_{1dD}+H_{1dR}=0, \end{equation} yields no SO contributions. \medskip \noindent {\bf (b)} $V_{x}\left(x\right), V_{y}\left(y\right)$ have no inversion symmetry but are equal, $V_{x}\left(x\right) = V_{y}\left(y\right)$: \begin{equation} H_{SO} = H_{1dD}+H_{1dR}= \frac{\gamma_{R}}{\hbar}\left\langle \frac{\partial V_x}{\partial x}\right\rangle p_{z}\left(\sigma_{x}-\sigma_{y}\right), \end{equation} so that only Rashba terms are present.
\medskip \noindent {\bf (c)} Elliptical: $V_{x}\left(x\right), V_{y}\left(y\right)$ are inversion symmetric functions and different, $V_x(x) \neq V_y(y)$: \begin{equation} H_{SO} = H_{1dD} + H_{1dR}= \frac{\gamma_{D}}{\hbar^{3}}\sigma_{z}p_{z}\left(\left\langle p_{x}^{2}\right\rangle -\left\langle p_{y}^{2}\right\rangle\right), \end{equation} results in only Dresselhaus terms. \medskip \noindent {\bf (d)} $V_{x}\left(x\right), V_{y}\left(y\right)$ have no inversion symmetry and are different, $V_{x}\left(x\right)\neq V_{y}\left(y\right)$: \begin{widetext} \begin{eqnarray} H_{SO}=H_{1dD}+H_{1dR}=\frac{\gamma_{D}}{\hbar^{3}}\sigma_{z}p_{z} \left(\left\langle p_{x}^{2}\right\rangle -\left\langle p_{y}^{2} \right\rangle \right)+\frac{\gamma_{R}}{\hbar} p_{z} \left(\sigma_{x}\left\langle \frac{\partial V}{\partial y} \right\rangle - \sigma_{y}\left\langle \frac{\partial V}{\partial x} \right \rangle \right), \end{eqnarray} represents the most general case, and both Rashba and Dresselhaus contributions are present. \end{widetext} For the calculation of the effective $g$-factor we introduce a weak magnetic field along the $z$-direction. The field is chosen small so that the $x-y$ orbital wave functions are not perturbed significantly. Thus, we only add a Zeeman term to the Hamiltonian, $H_{Z}=\frac{\mu_B}{2} g_0 \, B \sigma_{z}$, where $\mu_B$ is the Bohr magneton, $B$ is the magnitude of the magnetic field, and $g_0$ is the electron $g$-factor listed in Table I. To calculate the energy levels and eigenfunctions, we expand the total Hamiltonian in a basis set of $300$ wave functions of the quantum box of size $L$, i.e.\ $\phi_{n,s}(z)=\sqrt{\frac{2}{L}}\sin(\frac{n\pi z}{L})\chi\left(s\right)$, where $\chi(s)$ is the spin function, and diagonalize it numerically without further approximations. The size $L$ of the box is such that the whole double-dot structure, including the barriers on the sides of the dots, is enclosed in the box; as such, $L$ is irrelevant to the final eigenstates.
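As an illustration of this diagonalization scheme, the following sketch builds the spin-independent part of the Hamiltonian, $p_z^2/2m^{*}+V_z(z)$, in the particle-in-a-box basis and diagonalizes it numerically. It is a toy version only: it uses dimensionless units ($\hbar=m^{*}=L=1$), a simple square central barrier for $V_z$, and illustrative parameter values that do not correspond to the InSb structures studied in this paper; the full calculation additionally doubles the basis with spin and adds the SO and Zeeman terms.

```python
import numpy as np

# Dimensionless toy model: hbar = m* = 1, box of size L = 1, N basis states.
hbar, m, L, N = 1.0, 1.0, 1.0, 60
z = np.linspace(0.0, L, 2001)
dz = z[1] - z[0]

def phi(n):
    # particle-in-a-box basis functions phi_n(z) = sqrt(2/L) sin(n pi z / L)
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * z / L)

def V(z):
    # toy double-dot potential V_z: a thin central barrier inside the box
    return np.where(np.abs(z - L / 2) < 0.02 * L, 50.0, 0.0)

# Kinetic energy is diagonal in this basis; add potential matrix elements.
H = np.diag([hbar**2 * (n * np.pi / L) ** 2 / (2.0 * m) for n in range(1, N + 1)])
for i in range(N):
    for j in range(i, N):
        Vij = np.sum(phi(i + 1) * V(z) * phi(j + 1)) * dz  # <phi_i|V|phi_j>
        H[i, j] += Vij
        if i != j:
            H[j, i] += Vij  # the Hamiltonian matrix is real and symmetric

E, psi = np.linalg.eigh(H)  # eigenvalues ascending, eigenvectors in columns
```

Since $V_z\geq 0$ here, the ground-state energy lies above the bare-box value $\pi^{2}/2$ and, by the variational principle, below the first-order estimate $\pi^{2}/2+\langle\phi_{1}|V|\phi_{1}\rangle$, which provides a quick sanity check on the numerics.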
We note that the geometry of the dots we study here includes widths of 2-5 nm, while the most common nanorod widths in experiments are of the order of tens of nm. However, as mentioned above, there are no experimental limitations to reducing the nanorod width to the values we consider here. Smaller widths allow us to explore the basic physics and the control of electronic wave functions with only one relevant lateral energy sublevel. Moreover, the charge depletion typically induced by the free surfaces further reduces the effective width of the nanorods, making them more 1D-like. A final comment is that the incorporation of additional transverse levels in the nanorod is straightforward, but results in systems of coupled differential equations. \begin{table}[h] \caption{Parameters for semiconductors\cite{des-ull-mar-2004b}} \label{Tab1}\setlength{\belowcaptionskip}{10pt} \centering \begin{tabular}{ccccc} \hline Parameter&GaAs&GaSb&InAs&InSb \\ \hline $m^{*}/m_{0}$\cite{tab}&$0.067$&$0.041$&$0.0239$&$0.013$\\ $\gamma_{R}$ (\AA$^{2}$)\cite{tabl}&$5.33$&$33$&$110$&$500$\\ $\gamma_{D}$ (eV\,\AA$^{3}$)\cite{tab}&$24$&$187$&$130$&$220$\\ $g_{0}$&$-0.44$&$-7.8$&$-15$&$-51$\\ \hline \end{tabular} \end{table} \section{ENERGY LEVELS} \label{sec:energies} We present results for the energy levels in cases (b) and (c), i.e.\ with only Rashba and only Dresselhaus terms present, respectively. The general case (d) does not present qualitatively different features from (b) or (c), and we therefore concentrate here on the simpler cases. For case (b) we fix the strength of the Rashba term by specifying the structural electric field $\left\langle \frac{\partial V}{\partial x}\right\rangle$. For case (c), we use as confining potentials in the lateral directions two harmonic-oscillator potentials with different frequencies: $V_{q}(q) = \frac{1}{2} m^{*} \omega_{q}^{2} q^2$, $q = x, \: y$. These potentials have associated characteristic lengths $\ell_{q}=\sqrt{\hbar/m^{*}\omega_{q}}$.
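For these harmonic confinements the averages entering $H_{1dD}$ take a simple closed form. Using the ground-state expectation value of the harmonic oscillator, $\left\langle p_{q}^{2}\right\rangle = m^{*}\hbar\omega_{q}/2 = \hbar^{2}/2\ell_{q}^{2}$, the Dresselhaus term of case (c) reduces to \begin{equation} H_{1dD} = \frac{\gamma_{D}}{2\hbar}\,\sigma_{z}p_{z}\left(\frac{1}{\ell_{x}^{2}}-\frac{1}{\ell_{y}^{2}}\right), \end{equation} which makes explicit that the coupling is controlled by the difference of the inverse squared lateral lengths and vanishes in the circular limit $\ell_{x}=\ell_{y}$.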
In Fig.\ \ref{fig2} we plot the two lowest energy levels for the InSb QDs taking $\left\langle \frac{\partial V}{\partial x}\right\rangle= 0.5 \, \mbox{meV}/\mbox{\AA}$ for case (b), and $\ell_x=50 \mbox{ \AA}$, $\ell_y=20 \mbox{ \AA}$ for case (c). The indices on the horizontal axis denote the inclusion of different terms in the Hamiltonian. The figure shows how the energy levels of $H_{0}$ (indices 1 and 4) are changed by the inclusion of a Rashba contribution $H_{1dR}$ (case (b), index 2) and of a Dresselhaus contribution $H_{1dD}$ (case (c), index 5), without magnetic field. With a weak magnetic field we have the total Hamiltonians $H_{0}+H_{1dR}+H_{Z}$ (index 3) and $H_{0}+H_{1dD}+H_{Z}$ (index 6). We have carried out analogous calculations for the semiconductors quoted in Table I; the results are qualitatively similar to the ones shown here. The main general conclusion is that the effect of $H_{1dR}$ is always stronger than that of $H_{1dD}$ for the chosen parameters, which are representative of possible experimental situations. We note that the Rashba and Dresselhaus terms do not remove the spin degeneracy (as expected from the Kramers degeneracy in the absence of magnetic field) but simply shift the levels downwards, the strength of the shifts being controlled by the parameters $\left\langle \frac{\partial V}{\partial x}\right\rangle $ for Rashba and $\ell_{x}$ and $\ell_{y}$ for Dresselhaus. For the parameters chosen here the Rashba shift is of the order of $0.1 \mbox{ meV}$ for InSb and $0.1 \, \mu\mbox{eV}$ for GaAs, while the Dresselhaus shift is of the order of $0.01 \mbox{ meV}$ for InSb and $0.01 \, \mu\mbox{eV}$ for GaAs. \begin{figure}[tbp] \includegraphics*[width=7cm]{fig2.eps} \caption{ Ground-state and first-excited-state energy levels of the InSb nanorod QDs shown in Fig.\ 1.
We compare the eigenenergies of (1,4) $H_{0}= \frac{p_{z}^{2}}{2m^{*}} + V_{z}(z)$ to those of (2) $H_{0}+H_{1dR}$, (5) $H_{0}+H_{1dD}$, (3) $H_{0}+H_{1dR}+H_{Z}$, and (6) $H_{0}+H_{1dD}+H_{Z}$. $B=0.2 \mbox{ T}$. } \label{fig2} \end{figure} As can be seen in Fig.\ \ref{fig3}, the energy shift produced by $H_{1dR}$ varies quadratically with the structural electric field $\left\langle \frac{\partial V}{\partial x}\right\rangle$. In Fig.\ \ref{fig4} we show how the energy levels vary in case (c) as a function of $\ell_{x}$ for the two lowest-energy states for fixed $\ell_{y}=50\mbox{ \AA}$. The functional dependence here is also parabolic. This suggests that the spin-orbit corrections to the energy levels could be calculated fairly accurately with second-order perturbation theory. We performed the second-order perturbative calculation for the case with the Rashba Hamiltonian, with a small magnetic field (0.1 T) applied in order to work with non-degenerate perturbation theory. A comparison between the exact and second-order energies shows, for example, a difference of 17\% for $\left\langle \frac{\partial V}{\partial x}\right\rangle = 1.5 \, \mbox{meV}/\mbox{\AA}$, and increasing differences for larger Rashba fields, as expected. These results agree qualitatively with those of Ref.\ [\onlinecite{tsi-loz-gog}] for quasi-2D circular dots, where differences of up to 30\% between the results of exact calculations and of second-order perturbation theory have been found. \begin{figure}[tbp] \includegraphics*[width=8cm]{fig3.eps} \caption{ Contribution of the Rashba term to the energy levels of InSb (a) and GaAs (b) QDs as a function of $\left\langle \frac{\partial V}{\partial x}\right\rangle$. GS: Ground state, $1$ and $2$: first and second excited states, respectively.
Notice that the effect is much smaller in GaAs (energy given in $\mu\mbox{eV}$), as anticipated.} \label{fig3} \end{figure} \begin{figure}[tbp] \includegraphics*[width=7cm]{fig4.eps} \caption{Contribution of the Dresselhaus term to the energy levels of InSb as a function of $\ell_{x}$ for the ground state (GS) and the first excited state (1) for $\ell_{y}=50 \mbox{ \AA}$. Level splitting in GaAs is barely visible on the same scale as in InSb.} \label{fig4} \end{figure} \section{Effective $g$-factor} \label{sec:gfactor} The small magnetic field ${\mathbf B} = 0.1 \mbox{ T} \, {\mathbf z}$ breaks the spin degeneracy of the ground state and allows the calculation of the effective $g$-factor ($g^*$) as a function of $\left\langle \frac{\partial V}{\partial x}\right\rangle$ (case (b)) for GaAs, InSb, InAs, and GaSb. In the figures we report normalized $g$-factors: \begin{equation} \frac{g^*}{g_0}=\frac{\left(E_{2}-E_{1}\right)}{\frac{\mu_B B g_0}{2}}, \end{equation} where $E_1$ and $E_2$ are the Zeeman-split ground-state levels. Figure \ref{fig5} shows the results for case (b) (i.e.\ with only Rashba contributions) as a function of $\left\langle \frac{\partial V}{\partial x}\right\rangle$. The decreasing trend of $g^*$ is qualitatively similar for all the materials, but the magnitude of this Rashba effect varies greatly among them. The decrease of $g^*$ is strongest for InSb and weakest for GaAs. We now examine what happens to $g^*$ when one modifies the features of the longitudinal potential $V_z(z)$, such as the barrier width $w$ and the size of the QDs (so far we have taken $L_{QD1}=L_{QD2}=300 \mbox{ \AA}$). In Fig.\ \ref{fig6}(a) we show $g^*$ for $w=30, 130$, and $330 \mbox{ \AA}$ as a function of $\left\langle \frac{\partial V}{\partial x}\right\rangle$. We increase the barrier width while reducing at the same time the sizes of the two QDs, so that the total size of the structure remains constant at $630 \mbox{ \AA}$.
We note that increasing $w$ leads gradually to having two uncoupled QDs and to a stronger variation of $g^*$. In Fig.\ \ref{fig6}(b) we set $w= 30 \mbox{ \AA}$ and change the QDs' sizes. We take $L_{QD1}= 100 \mbox{ \AA}$ and $L_{QD2}= 500\mbox{ \AA}$ in one case, and $L_{QD1}=L_{QD2}= 300\mbox{ \AA}$ in the other. We observe here that the {\em symmetric} potential produces a stronger variation of $g^*$ than the {\em asymmetric} one. \begin{figure}[tbp] \includegraphics*[width=8cm]{fig5.eps} \caption{ Effect of the Rashba Hamiltonian on the effective $g$-factor. $g^*/g_{0}$ for the ground state for different semiconductors as a function of $\left\langle \frac{\partial V}{\partial x}\right\rangle$.} \label{fig5} \end{figure} \begin{figure}[tbp] \includegraphics*[width=8cm]{fig6.eps} \caption{Normalized effective $g$-factor for the ground state of InSb structures with Rashba Hamiltonian. (a) For different barrier widths $w=30, 130, 330 \mbox{\AA}$. (b) For different sizes of the QDs. Asymmetric case: $L_{QD1}=100 \mbox{ \AA}$ and $L_{QD2}=500\mbox{ \AA}$; symmetric case $L_{QD1}=L_{QD2}=300 \mbox{ \AA}$.} \label{fig6} \end{figure} We look at these symmetric and asymmetric structures in more detail, and calculate the expectation value $\left\langle S_{z}\right\rangle$ as a function of $\left\langle \frac{\partial V}{\partial x}\right\rangle$ for InSb dots and for the four lowest pairs of states (Zeeman doublets). Again a magnetic field $\mathbf{B} =0.1 \mbox{T}$ is included. As expected, $\left\langle S_{z}\right\rangle = \pm\frac{1}{2}$ in the absence of $\left\langle \frac{\partial V}{\partial x}\right\rangle$. Figure 7 shows the results for a symmetric structure with $L_{QD1}=L_{QD2}=300 \mbox{ \AA}$ and Fig.\ 8 for an asymmetric one with $L_{QD1}=100 \mbox{ \AA}$ and $L_{QD2}=500 \mbox{ \AA}$. The symmetric case shows a crossing in $\left\langle S_{z}\right\rangle$ (Fig.\ 7(a)) while the asymmetric one does not (Fig. 8(a)). 
Using this information we recalculate the effective $g$-factor for the first four pairs of eigenstates for the symmetric (Fig.\ \ref{fig7}(b)) and asymmetric (Fig.\ \ref{fig8}(b)) structures. The effective $g$-factor, given here by the difference in $\left\langle S_{z}\right\rangle$ values for every Zeeman pair, vanishes at the crossing of $\left\langle S_{z}\right\rangle$. This vanishing of $g^*$ is a potentially useful effect in spintronics applications, as it can be achieved as a function of the potentially adjustable Rashba parameter $\left\langle \frac{\partial V}{\partial x}\right\rangle$. It is interesting to note how spatial asymmetry, introduced by the confinement potential along $z$ (i.e.\ different-size dots), has a strong effect on $g^*$, and results in a finite value even at large Rashba fields. \begin{figure}[tbp] \includegraphics*[width=8cm]{fig7.eps} \caption{Mean value of $S_z$ and effective $g$-factor for InSb systems with symmetric $V_z(z)$ (two equal dots with $L_{QD1}=L_{QD2}=300 \mbox{ \AA}$). (a) $\left\langle S_{z}\right\rangle$ as a function of $\left\langle \frac{\partial V}{\partial x}\right\rangle$ for the four lowest-energy doublets (pairs of Zeeman-split states). (b) $g^*/g_{0}$ for the same states.} \label{fig7} \end{figure} \begin{figure}[tbp] \includegraphics*[width=8cm]{fig8.eps} \caption{Same as Fig.\ 7 for asymmetric $V_z(z)$ with $L_{QD1}=100 \mbox{ \AA}$ and $L_{QD2}=500 \mbox{ \AA}$.} \label{fig8} \end{figure} \section{Conclusions} \label{sec:conclusions} We have studied how the spin-orbit Rashba and Dresselhaus terms modify the electronic structure of nanorod quasi-one-dimensional double quantum dots. We have solved the problem by numerical diagonalization of the total Hamiltonian for varying confining potentials, in the lateral as well as in the longitudinal directions.
The main conclusions of our work are the following: \\ (1) For our system, the Rashba and Dresselhaus Hamiltonians shift the energy levels downwards but do not break the spin degeneracy of the electronic levels in the absence of an external magnetic field (as prescribed by Kramers degeneracy). \\ (2) The Rashba effects are in general stronger than the Dresselhaus effects, but the latter are not negligible either.\\ (3) Changing the strength of the spin-orbit terms, which is done by changing the lateral confinement length $\ell_{x}$ or $\ell_{y}$ in the case of Dresselhaus or the structural electric field $\left\langle \frac{\partial V}{\partial x}\right\rangle$ in the case of Rashba, results in energy levels that vary nearly quadratically with the control parameter. This indicates that the SO corrections to the energy levels are close to the second-order corrections in perturbation theory. We verified this result by comparing the exact and the perturbatively calculated energies.\\ (4) By changing the strength of the Rashba term, the size of the central barrier, and the size and symmetry of the two QDs, it is possible to control the value of the effective $g$-factor, which determines the Zeeman splitting. In particular, it is possible to make the effective $g$-factor equal to zero. \begin{acknowledgments} We acknowledge support from the CMSS Program at Ohio University, Proyectos UBACyT 2001-2003 and 2004-2007, Fundaci\'on Antorchas, ANPCyT grant 03-11609, and NSF-CONICET through a US-Argentina-Brazil collaboration grant NSF 0336431. P.I.T.\ is a researcher of CONICET. \end{acknowledgments}
\section{} In this supplemental section, technical details concerning the numerical calculation and the lattice model are provided, as well as additional results. \section{Hamiltonian Model} The Hamiltonian used in our calculations is a multiorbital Hubbard model composed of kinetic and interacting energy terms: $H=H_{\rm kin} + H_{\rm int}$. The kinetic part is \begin{equation} \begin{split} H_{\rm kin} = &-\sum_{i,\sigma,\gamma,\gamma'} t_{\gamma\gamma'} \bigl( c^+_{i, \sigma, \gamma}c_{i+1, \sigma, \gamma'} + \mathrm{H.c.} \bigr) \\ &+ \sum_{i,\sigma,\gamma}\Delta_\gamma n_{i, \sigma, \gamma}, \end{split} \end{equation} where $t_{\gamma \gamma'}$ is a hopping matrix built in the orbital space $\{\gamma\}$ $(\gamma=0,\,1,\,2)$ that connects sites $i$ and $i+1$ of a one-dimensional $L$-site system. The hopping matrix is: \begin{equation} t_{\gamma \gamma'} = \begin{pmatrix} t_0 & 0 & -V \\ 0 & t_1 & -V \\ -V & -V & t_2 \end{pmatrix}. \end{equation} In this matrix only hybridizations between orbitals 0 and 2, and 1 and 2, are considered. $\Delta_\gamma$ defines an orbital-dependent crystal-field splitting. The interacting term of $H$ is \begin{equation} \begin{split} H_{\rm int} =& \; U\sum_{i,\gamma} n_{i,\uparrow,\gamma}n_{i,\downarrow,\gamma} -2J\sum_{i,\gamma<\gamma'} \mathbf{S}_{i,\gamma} \cdot \mathbf{S}_{i,\gamma'} \\ +& \left(U'-J/2\right)\sum_{i,\gamma<\gamma'} n_{i,\gamma}n_{i,\gamma'} \\ +&\, J\sum_{i,\gamma<\gamma'} \left( c^+_{i,\uparrow,\gamma}c^+_{i,\downarrow,\gamma} c_{i,\downarrow,\gamma'}c_{i,\uparrow,\gamma'} + \mathrm{H.c.} \right), \end{split} \end{equation} where $U$ is the intra-orbital Hubbard repulsion and $J$ the Hund's rule coupling. The operator $c_{i,\sigma,\gamma}$ annihilates a particle with spin $\sigma$ in orbital $\gamma$ at site $i$, and $n_{i,\sigma,\gamma}$ is the particle-number operator at site $i$ with quantum numbers $(\sigma,\gamma)$.
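As a cross-check of the model definition, the hopping matrix above can be assembled directly in NumPy. This is an illustrative sketch using the parameter values reported below, not part of the actual DMRG code:

```python
import numpy as np

# Hopping matrix t_{gamma,gamma'} in the three-orbital space.
# Only orbitals 0-2 and 1-2 hybridize, through the off-diagonal element -V.
t0, t1, t2, V = -0.5, -0.5, -0.15, 0.1  # eV, values used in the calculations

t = np.array([[t0, 0.0, -V],
              [0.0, t1, -V],
              [-V, -V, t2]])

# The matrix is real and symmetric, so H_kin is Hermitian as required.
assert np.allclose(t, t.T)
```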
The spin (density) operators are $\mathbf{S}_{i,\gamma}$ ($n_{i,\gamma}$) acting at site $i$ in orbital $\gamma$. In the case of SU(2) systems the following relation holds: $U'=U-2J$. The ratio $J/U=1/4$ was fixed in all the calculations, and $U$ and the filling $n$ were varied. The values of the hoppings are (eV units): $t_{00} = t_{11} = -0.5$, $t_{22} = -0.15$, $t_{02} = t_{12} = V = 0.1$, $t_{01} = 0$, and $\Delta_0=0$, $\Delta_1=0.1$, $\Delta_2=0.8$; with a total bandwidth $W=4.9|t_{00}|$. Both hoppings and $W$ are comparable in magnitude to those used in more realistic pnictide models~\cite{Daghofer10}. These phenomenological parameters also reproduce qualitatively the position of the electron and hole pockets present in these materials. \section{DMRG Details} The density matrix renormalization group (DMRG)~\cite{dmrg1,dmrg2,dmrg3} results reported in this publication were obtained using open chains of lengths from 24 up to 150 orbitals, keeping up to 1300 states per block; up to 19 sweeps were performed in the finite-size algorithm. This choice of parameters gives discarded weights in the range $10^{-6}-10^{-4}$. Even a poorer set of parameters was shown to be sufficient to obtain reliable observables with small finite-size dependence. Indeed, a similar behavior was previously observed in DMRG studies of multiorbital Hubbard models~\cite{Sakamoto02}. Part of this work was done with an open source code~\cite{dmrgpp}. \section{Measurements} For the sake of completeness, we show the explicit form of the observables calculated with the DMRG. The occupation number of each orbital is \begin{equation} n_\gamma = \frac{1}{L}\sum_{i,\sigma} \langle n_{i,\sigma,\gamma}\rangle.
\end{equation} We have calculated correlation functions such as the total charge and spin correlations defined as follows: $\langle n_{i} n_{j} \rangle$, and $\langle \mathbf{S}_{i } \cdot \mathbf{S}_{j} \rangle$, where $n_i = \sum_{\gamma} n_{i, \gamma}$ and $\mathbf{S}_{i}= \sum_{\gamma} \mathbf{S}_{i, \gamma}$. In addition, we also calculated the orbital-dependent charge correlation functions $\langle n_{i,\gamma} n_{j,\gamma} \rangle$. The structure factors for charge and spin are defined as \begin{equation} N_\gamma(q) = \frac{1}{L}\sum_{k,j} e^{-i q (j-k)} \langle (n_{k,\gamma} - n)(n_{j,\gamma} - n) \rangle, \label{Nq} \end{equation} and \begin{equation} S(q) = \frac{1}{L}\sum_{k,j} e^{-i q (j-k)} \langle \mathbf{S}_{k} \cdot \mathbf{S}_{j} \rangle, \label{Sq} \end{equation} respectively; a similar expression holds for $N(q)$. The calculation of the orbital-dependent Luttinger liquid correlation exponent, $K_\gamma$, was done by setting the hybridization to zero, $V=0$. The exponent is extracted by considering the limit of small wave vectors in $N_\gamma(q)$, $q\rightarrow 0$~\cite{Schulz90,GiamarchiBook}: \begin{equation} N_\gamma(q) \rightarrow K_\gamma q / \pi, \qquad q\rightarrow 0. \end{equation} Assuming that the structure factor behaves linearly, its slope is proportional to $K_\gamma$. \section{Additional Results} In this section, we present additional results on the quantum phase transition (QPT) between orbital-selective Mott phases (OSMP) with different degrees of localization. Figure~\ref{fig:S1} shows the orbital occupation $n_\gamma$ as a function of $U/W$ for filling $n=3.5$. As in the cases discussed in the main text, we observe the evolution from a metallic state for small $U$ towards an OSMP with one orbital localized and two itinerant ones, and then, in the strong-$U$ limit, we see the advertised OSMP QPT, where the second phase corresponds to an OSMP with two orbitals localized and one itinerant.
The formation of a robust magnetic moment within the OSMPs is a clear signature of their existence; in this case the moment reaches a value $\langle\mathbf{S}^2\rangle \approx 2.875$ in the strong-$U$ regime. The evolution of $K_\gamma$ for this filling is shown in the main text in Fig.~3. \begin{figure} \includegraphics*[width=.8\columnwidth]{SMnvsUosmps} \caption{Mean value of the orbital occupancy, $n_\gamma$ (open symbols), and mean value of the total spin, $\langle\mathbf{S}^2\rangle$ (closed symbols), vs.~$U/W$, at a fixed $J/U = 1/4$ and $n=3.5$. $\langle\mathbf{S}^2\rangle_2$ is the magnetic moment for $\gamma=2$. The different phases are marked by vertical dashed lines.} \label{fig:S1} \end{figure} \begin{figure}[!b] \includegraphics*[width=.8\columnwidth]{SMKrho} \caption{Orbital-dependent Luttinger correlation exponent, $K_\gamma$, vs.~$U/W$, for $n=4.5$ and $J/U=1/4$. The abrupt changes correspond to quantum phase transitions (see Fig.~2(b) in main text).} \label{fig:S2} \end{figure} In Fig.~\ref{fig:S2}, we plot the Luttinger correlation exponent for each orbital $\gamma$ versus the repulsion $U$ ($V=0$). The corresponding results for $n_\gamma$ are shown in the main text (see Fig.~2(b)). $K_\gamma$ indicates weakly correlated metallic behavior for $U/W < 1$. As the interaction increases, the orbitals become more strongly correlated, as indicated by a decrease in $K_\gamma$. For $U/W\gtrsim 1$, orbital $\gamma=2$ localizes ($K_2=0$) while the itinerant ones have an exponent close to 1/2. Upon further increase of $U$, orbital $\gamma=1$ localizes as well, leaving a metallic orbital ($\gamma=0$) with $K_0=1/2$, signaling the onset of free-spinless-fermion behavior and the OSMP QPT (more details in the main text). Figure~\ref{fig:S3} shows the total-charge structure factor in the strong-$U$ OSMP, where spinless fermions are found. We explore the range $3\leqslant n \leqslant 5$. $N(q)$ clearly displays charge fluctuations typical of free spinless fermions.
Fig.~\ref{fig:S3} shows the dependence of the effective Fermi wave vector of the spinless fermions on the total filling of the system. The $n$-dependence of the Fermi momentum is symmetric around $n=4$. \begin{figure} \includegraphics*[width=.8\columnwidth]{SMNq} \caption{Total-charge structure factor, $N(q)$ vs.~momentum, $q$, for several $n$ at $J/U=1/4$. The values of $U$ have been chosen such that the system is in the strong-$U$ OSMP (either OSMP2 for $n<4$ or OSMP3 for $n>4$). The effective Fermi momentum of the free spinless fermions is $n$-dependent (see main text for details).} \label{fig:S3} \end{figure} \begin{figure}[!b] \centering \includegraphics*[width=.8\columnwidth]{SMphdgNq48} \caption{DMRG phase diagram of the three-orbital model varying $U/W$ and $n$, at $J/U=1/4$ for a 48-orbital lattice. The phases are labeled as in the main text. The boundaries defined by the squares and dashed-dotted lines are for the case $J/U = 0.15$. The lines are guides to the eye.} \label{fig:S4} \end{figure} The finite-size behavior of the three-orbital model studied in this work is now discussed. Figure~\ref{fig:S4} shows the phase diagram for a 48-orbital lattice. By comparing Fig.~\ref{fig:S4} to Fig.~1 in the main text, it is possible to see that the corrections to the phase diagram due to finite-size effects are fairly small. The same phases and the same magnetic and charge patterns are found in both phase diagrams. More importantly, the advertised QPT between OSMP states---with different localization properties---and all the main features of the measurements reported in the main text are found as well in other system sizes. It is worth mentioning that the finite-size behavior exhibited by these quantities is very similar to that previously reported on similar multiorbital Hubbard models~\cite{Sakamoto02,Rincon14}. In addition to studying the OSMP-OSMP transition for the value $J/U = 1/4$, we have also studied the presence of this transition for $J/U = 1/3$ and $0.15$.
The results for the latter ratio are shown in Fig.~\ref{fig:S4}. As expected from the effects of the Hund's coupling, the boundaries of the OSMP QPT occur at lower values than in the prototypical case $J/U=1/4$, shifting the advertised transition to more realistic couplings. In particular, note that for $J/U = 0.15$ and electronic density $n\sim 3.25$, the critical line dividing the OSMP states is at a value of $U/W$ close to 1, namely in a realistic range of couplings. On the other hand, for the weak-coupling transition the critical values increase moderately.
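The extraction of the Luttinger exponent described in the Measurements section reduces to a linear fit of the structure factor at small momenta, $N_\gamma(q) \approx K_\gamma q/\pi$. A schematic Python sketch follows; the synthetic $N(q)$ data below merely stand in for actual DMRG output:

```python
import numpy as np

# Idealized structure factor N(q) = K q / pi for a chain of L sites;
# in practice these values would come from the DMRG charge correlations.
K_true = 0.5   # free-spinless-fermion value
L = 48
q = np.pi * np.arange(1, L + 1) / L
Nq = K_true * q / np.pi

# Fit the slope only over the smallest momenta, where the linear form holds,
# and convert it to the Luttinger exponent: K = pi * slope.
small = q < 0.2 * np.pi
slope = np.polyfit(q[small], Nq[small], 1)[0]
K_est = np.pi * slope
```

On this idealized input the fit recovers the exponent exactly; with real DMRG data the fit window at small $q$ must be chosen with care.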
\section{Introduction} Multi-object tracking (MOT) is a longstanding computer vision problem in which the goal is to keep track of the identities and locations of multiple objects throughout a video. A popular MOT approach is tracking-by-detection~\cite{luo2020multiple}, in which an object detector is first run on every frame, and those detections are fed as input to a MOT algorithm. Convolutional neural networks (CNNs) have led to the creation of highly accurate detectors~\cite{ciaparrone2020deep}, thus spurring the development of approaches that rely heavily on these high-quality detections, e.g.~\cite{bochinski2017high}. Training such highly accurate detectors requires significant labeled data. The majority of the MOT literature has focused on tracking pedestrians and vehicles~\cite{luo2020multiple,wen2020ua}, two settings in which labeled data is plentiful. However, in specialized tracking scenarios we may have considerably less data; for instance, tracking a new species of insect, or tracking fish off the coast of a tropical island. With limited training data, even the best detectors will have limited performance. An ideal tracking algorithm would be able to perform robustly even given an imperfect detector~\cite{solera2015towards}, but it is still not clear how to accomplish this. One alternative which has gained popularity recently is to forgo tracking-by-detection altogether and train an end-to-end approach that simultaneously learns to detect and track objects of interest~\cite{feichtenhofer2017detect,sun2020simultaneous,zhou2020tracking}. Although useful in many situations, this approach requires a large dataset of videos labeled with track information. In the settings we study, there is little to no labeled video data of the kind needed for end-to-end tracking approaches.
Indeed, even properly labeled still image data needed to train a standard object detector may be fairly scarce, greatly increasing the difficulty of the problem compared to the standard MOT setting. We have found that even when (pretrained) CNN detectors are trained on little data, they often are still able to predict the general location of objects in the scene, albeit sometimes with very low confidence and many false positives. However, the traditional MOT pipeline discards most of this information, first filtering out the low-confidence detections, and thereafter discarding the detection confidence values~\cite{wen2020ua}. Ideally, a tracker could make use of the full \textit{unfiltered} set of detections to achieve more robust performance. Unfortunately, removing this filtering step greatly increases the computational burden, and requires algorithms to cope with extremely noisy input. Due to these challenges, we are not aware of any tracking-by-detection approaches capable of efficiently handling an unfiltered set of detections. Therefore, we present Robust Confidence Tracking (RCT), an algorithm which tracks efficiently and robustly given unfiltered detections as input. The key idea behind RCT is that, instead of discarding detection confidence values, we can use these values to guide the tracking process, using lower-confidence detections only to ``fill in gaps'' between higher-confidence detections. Specifically, RCT uses detection confidence in three ways: to determine where to best initialize tracks, to combine probabilistically with a Kalman filter to optimally extend tracks, and to filter out low-quality tracks. Alongside this, RCT incorporates the Median Flow single object tracker (SOT) and some simple heuristics for track trimming and joining to achieve excellent performance, even compared to more complicated and resource-intensive deep tracking methods.
To test trackers such as RCT in challenging scenarios where data is scarce, datasets of common objects do not suffice. Therefore, we present a new, challenging real-world fish tracking dataset, FISHTRAC. We conduct a comprehensive evaluation on both FISHTRAC as well as the UA-DETRAC~\cite{wen2020ua} vehicle dataset (using a low-accuracy detector). \section{Problem Setup} We consider \textbf{offline} multi-object tracking problems within a tracking-by-detection framework, where the goal is to track all objects of a desired class $\ell$ throughout a video sequence. Note that $\ell$ is typically of practical importance, but in the settings we consider it may be rare enough that accurate detection is difficult. Specifically, we assume there exists a video $\mathcal{V}$ with $N$ frames $v_1,\dots,v_N$ and a detector $\mathcal{D}$ which outputs detections on each frame $d_1,\dots,d_N$. Each $d_i$ is a set containing tuples $b = (x,y,w,h,c)$ denoting the detected bounding box and its confidence $0 \leq c \leq 1$ that the box corresponds to an instance of an object of class $\ell$ (here we use $b \in \ell$ to denote the case that a box is a member of the class $\ell$). The goal of the tracking algorithm $\mathbb{T}$ is to produce an optimal set of tracks $T = \{T_1,\dots,T_K\}$ where each track $T_j$ consists of a list of tuples $t_{j,f} = (x,y,w,h,c)$ where $f \in [1,N]$ is the frame number. \section{Related Work} \textbf{Multi-object Tracking Datasets and Codebases:} There are several public MOT datasets, see Table~\ref{tab:datasetcompare}. However, tracking fish in natural underwater scenarios is a challenging and understudied problem which is not well-represented by existing datasets.\footnote{Note that our assessment of 2 real-world fish videos for TAO~\cite{dave2020tao} is based only on examining the TAO train and validation dataset as the test dataset is not yet fully released.
Similarly, our assessment of 2 real-world fish videos for OVIS~\cite{qi2021occluded} is based only on examining the OVIS train dataset as the validation and test datasets are not yet fully released.} Although some datasets do include fish data, the video is usually of artificial settings such as aquarium tanks, which greatly simplifies the tracking problem. The one significant exception is Fish4Knowledge/SeaCLEF~\cite{jager2016seaclef,jager2017visual,kavasidis2012semi,kavasidis2014innovative}; however, that dataset suffers from several problems, including low image quality and low FPS (5 FPS). Indeed, most of the datasets with more variety have sacrificed FPS (e.g. 1 FPS for TAO~\cite{dave2020tao}), and some such as TAO also have incomplete annotations, making comprehensive evaluation difficult. Low FPS is a particularly poor choice for fish tracking, since fish move and change direction rapidly. Our FISHTRAC dataset contains high-quality (at least 1920x1080) video of real-world underwater fish behavior, and is completely annotated at 24 FPS. While not as diverse or large as datasets like TAO~\cite{dave2020tao}, FISHTRAC fills an important gap by helping shed light on a highly challenging real-world application.\footnote{The public release of the dataset is forthcoming.} \begin{table*} \begin{center} \caption{A comparison of public MOT datasets.
FPS refers to the annotation FPS.} \label{tab:datasetcompare} \resizebox{\textwidth}{!}{ \begin{tabular}{p{3.2cm}|p{1cm}|p{2cm}|p{1cm}|p{1.5cm}|p{3cm}|p{1.3cm}|p{2cm}} \textbf{Dataset} & \textbf{Num Videos} & \textbf{\# ``In the wild'' fish videos} & \textbf{FPS} & \textbf{Min resolution} & \textbf{Provides unfiltered detections?} & \textbf{Complete labels?} & \textbf{\# MOT algs in codebase} \\\specialrule{2.5pt}{1pt}{1pt} UA-DETRAC~\cite{wen2020ua} & 100 & 0 & 24 & 960x540 & No & No & 8 \\\hline KITTI~\cite{geiger2013vision} & 40 & 0 & 10 & 1242x375 & No & No & 0 \\\hline TAO~\cite{dave2020tao} & 2,907 & 2 & 1 & 640x480 & N/A & No & 1 \\\hline MOT20~\cite{dendorfer2020mot} & 8 & 0 & 30 & 1173x880 & No & Yes & 0 \\ \hline YTVIS 2021~\cite{yang2019video} & 3,859 & 2 & 5 & 320x180 & N/A & Yes & 0 \\\hline OVIS~\cite{qi2021occluded} & 901 & 2 & 3-6 & 864x472 & N/A & Yes & 0 \\\hline SeaCLEF \cite{jager2017visual,kavasidis2012semi,kavasidis2014innovative} & 10 & 10 & 5 & 320x240 & N/A & No & 2 \\ \specialrule{2.5pt}{1pt}{1pt} FISHTRAC (Ours) & 14 & 14 & 24 & 1920x1080 & Yes & Yes & 16 \\ \hline \end{tabular}} \end{center} \end{table*} Additionally, there is currently a lack of MOT codebases that facilitate comparison to other methods. Leaderboards such as MOTChallenge~\cite{milan2016mot,dendorfer2020mot} are the predominant way to compare trackers, but this does not allow one to compare algorithms on new videos or detections. Running trackers on a new dataset takes substantial implementation time and effort (converting formats, handling very slow trackers, etc.). The UA-DETRAC~\cite{wen2020ua} codebase allows one to compare 8 trackers, but it is intended for only a single dataset and is based on proprietary (paid) software (MATLAB).
We present a heavily modified version of the DETRAC code which is adapted to open-source technologies and contains everything needed to run 16 trackers on fish data, car data, or a new dataset.\footnote{The public release of the codebase is forthcoming.} \textbf{Fish tracking:} Work in real-world fish tracking has been relatively scarce due to lack of suitable datasets. Most attention has focused on artificial settings, such as tracking Zebra Fish in a glass enclosure~\cite{pedersen20203d,romero2019idtracker}. One exception is J{\"a}ger et al.~\shortcite{jager2017visual}, who developed a custom approach to track fish in real-world scenarios. We compare to this tracker in our experiments. \textbf{Detection Confidence:} Virtually all tracking-by-detection methods filter detections based on a confidence threshold $h$ and thereafter discard confidence information, e.g. let $d'_f = \{b \in d_f \textrm{ s.t. } c_b \geq h\}$, where $c_b$ denotes the confidence of the box $b$. Indeed, this is enforced by codebases such as UA-DETRAC~\cite{wen2020ua}. The few exceptions add another threshold to differentiate between high and medium confidence~\cite{bochinski2018extending}, or require modifying detection approaches to expose additional information which may not always be accessible~\cite{verma2003face,breitenstein2009robust}. Bayesian approaches like JPDA~\cite{fortmann1983sonar,rezatofighi2015joint} incorporate a fixed detection probability, but do not utilize the individual detection confidences. We are not aware of any MOT algorithms that make use of the detection confidence values associated with each produced bounding box in a manner more sophisticated than thresholding and then discarding them, a procedure which eliminates the more nuanced information contained in these values.
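The standard filtering step $d'_f = \{b \in d_f \textrm{ s.t. } c_b \geq h\}$, and the unfiltered input RCT consumes instead, can be sketched in a few lines of Python (the detections below are hypothetical):

```python
# Detections on one frame: (x, y, w, h, confidence) tuples.
d_f = [
    (10, 20, 30, 40, 0.90),   # high confidence
    (50, 60, 30, 40, 0.30),   # low confidence: the usual pipeline drops this
    (80, 15, 25, 35, 0.55),
]

# Conventional pipeline: threshold at h, then discard the confidences.
h = 0.5
d_filtered = [b[:4] for b in d_f if b[4] >= h]
assert d_filtered == [(10, 20, 30, 40), (80, 15, 25, 35)]

# Unfiltered input as used by RCT: every box, confidences included.
d_unfiltered = d_f
```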
\section{Robust Confidence Tracking (RCT)} \begin{figure*} \center \includegraphics[width=0.9\textwidth]{figures/RCTAlg.pdf} \caption{Process diagram of our Robust Confidence Tracking (RCT) algorithm.} \label{fig:rct_alg} \end{figure*} Our Robust Confidence Tracking (RCT) algorithm contains four components: initializing tracks based on detection confidence, probabilistically combining detection confidence with motion probability, incorporating a single-object tracker as a fallback when detections alone are insufficient, and track postprocessing. Figure~\ref{fig:rct_alg} gives an overview. \subsection{Initializing tracks based on detection confidence} Unlike other MOT algorithms, our RCT algorithm uses the detection confidence as a key to distilling the detections into coherent tracks. For each track $T_j$, RCT chooses the maximum-confidence detection (across all frames) for its initial box $I_j$ ($I_j = \argmax_{b \in d_1\cup \dots \cup d_N} c_b$); we refer to the frame associated with $I_j$ as $f_{I,j}$. To ensure that this detection does not overlap with a previously-used track, RCT excludes boxes from the max where there exists some $w$ such that $t_{w,f_{I,j}} \in d_{f_{I,j}}$ and $|B(t_{w,f_{I,j}}) \cap B(I_j)| > 0$, where the function $B$ returns the set of all pixel coordinates that fall within the box. Also, to avoid edge cases, we do not select detections that are near the edge of the screen (we enlarge $I_j$ by $\beta$\% and check that it is still onscreen, where $\beta$ is an RCT parameter). RCT works in a track-wise fashion: once the first track is built (described below), the detection with the next-highest confidence is selected to start the next track, and so on, as long as the initial confidence $c_{I,j}$ is above an RCT threshold parameter $h_I$. Note that RCT uses detections with confidence $<h_I$ elsewhere. \subsection{Combining detection confidence and motion} Next, RCT initializes a Kalman filter $k$ with this detection.
The Kalman filter state $s^k$ is a tuple $(x^k,y^k,v^k_x,v^k_y,w^k,h^k)$ where $v^k_x$ and $v^k_y$ are unobserved (latent) velocities which together form a vector $\vec{v}^k = \langle v^k_x,v^k_y\rangle$. Let $b^k = (x^k,y^k,w^k,h^k)$ be the box derived from the state. From the initial box $I_j$, RCT could extend the track either forward or backward in time, but it does not know which will best help estimate velocity. To handle this, RCT initially tries both options, and selects the option with the best score (as defined below). For clarity we describe the forward case; the backward case is analogous. Given a frame $f$, partial track $j$ and a Kalman filter state $s^k_{f,j}$, RCT must score each detection in $d_f$ to find the best candidate to extend the track. The Kalman filter probability of the box based on the track so far, $P(b' = t_{j,f}|t_{j,f-1},\dots,t_{j,1})$, can be used as a motion model score. Similarly, the detector $\mathcal{D}$ assigns a probabilistic score $P(b' \in \ell|v_f) = c_{b'}$ reflecting the probability the detector believes this object is of the desired class. Our goal is to find the joint probability $P(b' = t_{j,f},b' \in \ell|v_f,t_{j,f-1},\dots,t_{j,1})$. If we make the simplifying assumption that the class and the track assignment are conditionally independent given the past sequence of boxes and frame image, we have: \begin{small} \begin{multline} P(b' = t_{j,f},b' \in \ell|v_f,t_{j,f-1},\dots,t_{j,1}) \\= P(b' = t_{j,f}|v_f,t_{j,f-1},\dots,t_{j,1})P(b' \in \ell|v_f,t_{j,f-1},\dots,t_{j,1}) \end{multline} \end{small} Since our detector gives $P(b' \in \ell|v_f,t_{j,f-1},\dots,t_{j,1}) = P(b' \in \ell|v_f) = c_{b'}$, and our Kalman filter assumes $P(b' = t_{j,f}|v_f,t_{j,f-1},\dots,t_{j,1}) = P(b' = t_{j,f}|s^k_{j,f})$, the joint probability can be calculated as: \begin{small} \begin{equation} P(b' = t_{j,f},b' \in \ell|v_f,t_{j,f-1},\dots,t_{j,1}) = c_{b'} P(b' = t_{j,f}|s^k_{j,f}).
\label{eq:rctcore} \end{equation} \end{small} RCT uses equation~\eqref{eq:rctcore} to score a detected box based on both detection confidence and motion model score. However, this does not tell us when none of the detected boxes on a certain frame are a reasonable extension of the track, a situation that arises frequently with an imperfect detector. To detect this, RCT checks two criteria. First, RCT checks whether the center point of the chosen detection $b'$ is contained within the box derived from the Kalman filter state, specifically $C(b') \in B(b^k_{j,f})$ where $C$ is a function that returns the geometrical center of the box. If not, it is likely not a kinematically plausible extension of the track.\footnote{We optimize by only considering detections that overlap $b^k_{j,f}$.} Next, RCT checks if $P(b' = t_{j,f}|s^k_{j,f}) \geq P(b' = t_{j,f-1}|s^k_{j,f-1})$, in other words, whether the candidate is at least as likely under the current Kalman filter state as the previous box was under the previous state: if not, the track is likely moving in the wrong direction. If the detection is rejected due to either of the above reasons, RCT sets $t_{j,f}$ to a placeholder value indicating a missing observation. Otherwise, RCT sets $t_{j,f} = b'$, and marks $b'$ so that it cannot be re-used in another track. After $\delta$ iterations extending the track in both directions ($\delta$ is an RCT parameter), RCT switches to a single direction (forward, then backward), as the estimate of the velocity is by then likely sufficiently accurate. RCT stops this process when the current box is more offscreen than the last box, setting the rest of the $t_j$ to missing since the object is offscreen.
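The scoring and plausibility checks above can be sketched in a few lines of Python; the isotropic Gaussian likelihood is a simplifying stand-in for the actual Kalman filter density, and the boxes below are hypothetical:

```python
import math

def center(box):
    """Geometric center C(b) of a box (x, y, w, h)."""
    return (box[0] + box[2] / 2, box[1] + box[3] / 2)

def center_inside(box, kbox):
    """Plausibility check C(b') in B(b^k): center contained in the Kalman box."""
    cx, cy = center(box)
    return kbox[0] <= cx <= kbox[0] + kbox[2] and kbox[1] <= cy <= kbox[1] + kbox[3]

def motion_likelihood(box, pred_center, sigma=10.0):
    """Toy stand-in for the motion-model score P(b' = t_{j,f} | s^k)."""
    cx, cy = center(box)
    d2 = (cx - pred_center[0]) ** 2 + (cy - pred_center[1]) ** 2
    return math.exp(-d2 / (2 * sigma ** 2))

kalman_box = (95.0, 95.0, 20.0, 20.0)   # box b^k derived from the Kalman state
pred_center = center(kalman_box)
detections = [                           # (x, y, w, h, confidence)
    (100.0, 100.0, 12.0, 12.0, 0.4),     # near the prediction, modest confidence
    (160.0, 160.0, 12.0, 12.0, 0.9),     # confident but implausible: fails the check
]

candidates = [b for b in detections if center_inside(b, kalman_box)]
# Joint score from the equation above: confidence times motion likelihood.
best = max(candidates, key=lambda b: b[4] * motion_likelihood(b, pred_center))
```

Here the high-confidence but kinematically implausible box is rejected by the center-containment test before scoring, so the nearby modest-confidence detection extends the track.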
The Kalman filter is used to perform one final smoothing pass at the end, letting $t_{j,f} = b^k_{j,f}$ to smooth out any noise in the track and replace missing observations with inferred boxes.\footnote{Specifically, RCT runs a (forward-backward) smoothing pass separately on frames $f' < f_{I,j} + \delta$ and $f' > f_{I,j} - \delta$, where the $\delta$ extra frames past the start frame are used as context.} \subsection{Incorporating a single object tracker} The aforementioned approach forms the core of the RCT algorithm; however, the Kalman filter assumes linear motion if we do not find a matching detection, which performs poorly when motion is complex. Therefore, RCT uses a SOT algorithm as a fallback option if no reasonable detections can be found. Specifically, we use the MedianFlow tracker~\cite{kalal2010forward}: MedianFlow has been successfully used in past MOT approaches~\cite{bochinski2018extending}; a strength is that it can determine when it has lost track of an object. As with the Kalman filter, we initialize the MedianFlow tracker on frame $f_{I,j}$ and update it in both the forwards and the backwards directions. We observed that the Kalman filter could overcome a short sequence of missing detections or occlusions, while visual information is critical to overcoming a longer sequence of missing detections. Therefore, RCT switches to MedianFlow only if, when detecting on frame $f$ of track $j$, for all $f' \in \{f,f-1,\dots,f-\delta_m\}$ $t_{j,f'} \not\in d_{f'}$ (i.e. the last $\delta_m$ frames also have no valid detections), where $\delta_m$ is an RCT parameter. Additionally, we require that the MedianFlow track is plausible according to our Kalman filter; specifically, RCT tests that $C(m_{j,f}) \in B(b^k_{j,f})$, where $m_{j,f}$ is the MedianFlow box on frame $f$ of track $j$. If these conditions are met, and MedianFlow did not report a tracking failure, RCT sets $t_{j,f'} = m_{j,f'}$ for $f' \in \{f,f-1,\dots,f-\delta_m\}$.
In the case where both a MedianFlow box $m_{j,f}$ and an acceptable detected box $b' \in d_j$ are available, and the previous box was MedianFlow ($t_{j,f-1} = m_{j,f-1}$), RCT only sets $t_{j,f} = b'$ if $C(b') \in B(m_{j,f})$ and $C(m_{j,f}) \in B(b')$, which tests whether the detection diverges significantly from the MedianFlow prediction. If it does, it is likely a spurious detection and RCT keeps using the MedianFlow boxes. To further reduce the reliance on motion, RCT replaces some boxes with MedianFlow after the track is built. If on some track $j$ and frame $f$, either the detection is a missing placeholder or it overlaps with another track (i.e. there exists some $w \neq j$ such that $t_{w,f} \in d_f$ and $|B(t_{j,f}) \cap B(t_{w,f})| > 0$), we try to see if we can replace $t_{j,f}$ with a better box. First, RCT tries a MedianFlow box: if $|B(m_{j,f}) \cap B(t_{j,f-1})| > 0$, it is a reasonable extension of the track, so we let $t_{j,f} = m_{j,f}$. Otherwise, RCT sets $t_{j,f}$ to indicate a missing observation. \subsection{Track joining, confidence-based filtering, trimming} The approach so far can produce tracks that are fragmented; therefore, RCT joins smaller tracks as a postprocessing step. Instead of computationally expensive matching approaches~\cite{dehghan2015gmmcp}, we use a fast and simple greedy heuristic that joins two tracks if they are similar enough in terms of time and motion. Specifically, RCT examines the time period in which the tracks switched to purely motion-based boxes. Without loss of generality, let $f_j$ be the last non-motion-box frame of track $j$, and $f_w$ be the first non-motion-box frame of track $w$ (we try every possible ordered pairing of tracks). If $f_j \leq f_w$, then RCT computes the temporal distance as $D_{time} = f_w-f_j$. If $f_j > f_w$, then we require there to be at most two frames $f$ where $IoU(t_{f,w}, t_{f,j}) < h_u$, where IoU is the intersection-over-union function and $h_u$ is an RCT parameter.
In other words, the tracks need to overlap on almost every frame in which there are detections; if so, RCT sets $D_{time}=0$. RCT only considers joining pairs of tracks where $D_{time} < D_{max}$, where $D_{max}$ is an RCT parameter. Next, we consider whether the distance covered in that number of frames would be reasonable according to the Kalman Filter. Specifically, let \begin{equation} v_{max} = \max_{i \in \{w,j\}, f \in \{1,\dots,N\}} \sqrt{\left(v^k_{i,f,x}\right)^2 + \left(v^k_{i,f,y}\right)^2}, \end{equation} which gives the fastest speed that is reasonable for these objects under the Kalman filter. Then we test whether \begin{equation} d_{euclid}(C(b_{w,f_w}), C(b_{j,f_j})) \leq D_{time} v_{max}, \end{equation} where $d_{euclid}$ gives the Euclidean distance. If not, the distance between tracks is too large to reasonably join them. Additionally, RCT checks that the object is moving in the right direction: that is, that a Kalman filter initialized on track $w$ and extended to track $j$ would determine that $P(b_{j,f_j}| s^k_{f_j}) \geq P(b_{j,f_j}| s^k_{f_j-1})$. RCT iterates this process, greedily joining tracks until there are no more pairs that meet our join criteria. After each join, RCT re-smooths the tracks. Since the detector is low-accuracy, it may be that long sequences of detections occur on objects outside $\ell$ (e.g. coral instead of fish), necessitating track filtering. RCT relies on detection confidence to filter tracks: detections of smaller objects tend to be naturally lower confidence, but if a track is both exceptionally long and contains many large boxes, at least some of the detections should be fairly high confidence if it is truly the target class. To determine if a track $T_q$ qualifies as large and long, we first define a set $\mathcal{T}_l$ consisting of all the high-quality large long tracks.
Specifically, $T_j \in \mathcal{T}_l$ if two conditions are met: $c_{I,j} > h_q$, where $h_q$ is an RCT parameter, and $S(T_j) \geq \frac{ \sum_{T_i \in T} S(T_i)}{|T|}$, where $S(T_i) = \sum_{f \in \{1,\dots,N\}} |B(t_{i,f})|$, that is, a total size (calculated by summing the box sizes across all frames) at least the mean across all tracks. For each $T_i \in \mathcal{T}_l$, RCT computes $S(T_i)$, producing a set of scalar sizes $\mathcal{S}_l$. RCT then fits a Gaussian distribution using the mean and standard deviation of the elements in $\mathcal{S}_l$, the intuition being that the Gaussian distribution captures what sizes are reasonable for large tracks to have in the dataset. If, for the track in question $T_q$, $S(T_q)$ is above the 95\% Gaussian tail, and it is low confidence ($c_{I,q} < h_q$), $T_q$ is removed from the track set. RCT also removes redundant tracks, i.e. where the average IoU between two tracks is greater than RCT parameter $h_f$. Finally, RCT trims the ends of tracks (which has a large impact on scores; see our ablation study). Specifically, RCT stops tracking objects when the width of the box is offscreen by more than $\omega$ percent of the frame width, and the height of the box is offscreen by more than $\omega$ percent of the frame height, where $\omega$ is an RCT parameter. When an object is moving offscreen, RCT applies constant acceleration of $\alpha\vec{v}^k$ to the Kalman-derived velocity vector $\vec{v}^k$ to move the track swiftly offscreen, where $\alpha$ is an RCT parameter. Additionally, to avoid incorrect extrapolation of the tracks by the Kalman filter, RCT trims all boxes that are based on missing Kalman observations at the tail ends of the tracks, as long as there are at least $\delta_n$ such frames, where $\delta_n$ is an RCT parameter. \section{FISHTRAC Dataset} \subsection{A high-resolution MOT fish dataset} Real-world underwater fish tracking is a particularly challenging MOT problem.
Fish move unpredictably, change appearance, and are frequently occluded. When video is collected by divers, additional complicated motion and parallax effects arise; additionally, fish often intentionally try to swim away or hide from the diver. And yet fish tracking is an important task in marine science, for instance to aid in studies of fish behavior, and also has recreational applications. We present FISHTRAC, which is, to our knowledge, the first high-resolution fish dataset designed for multi-object tracking. FISHTRAC contains 14 videos totaling 3,449 fully-annotated frames of real-world underwater video. Annotators were instructed that, if a fish is unambiguously identifiable in at least one frame of video, it should be annotated for all frames in which it is believed to be within the camera's Field of View (FOV). This results in 131 total individual fish annotated (5-20 per video). Video is in high-resolution 1920x1080 (or higher) format collected at 24 frames per second; see Figure~\ref{fig:example_fishtrac} for an example. The videos were collected off the coast of Hawai{\okina}i island, primarily by a SCUBA diver, although we also include a video collected by a snorkeler and a video from a stationary camera. To simulate tracking with scarce data, just 3 videos are designated for training; the other 11 are reserved for testing. Likewise, when training on UA-DETRAC, we use just 3 videos from the train set (MVI\_41073, MVI\_40732, MVI\_40141). Additionally, we present the FISHTRAC codebase, which includes everything needed (conversion/visualization scripts, etc.) to evaluate 16 tracking algorithms on a new MOT problem. Our code is based entirely on free technologies (GNU Octave and Python) and supports Linux. (The public release of the dataset and codebase is forthcoming.)
\subsection{FISHTRAC object detection} \begin{figure} \centering \includegraphics[width=\columnwidth]{images/RetAndYolo.pdf} \caption{Precision-Recall Curves of RetinaNet compared to YOLOv4 on the FISHTRAC training dataset. } \label{fig:detectorcomparison} \end{figure} \begin{table} \begin{center} \caption{Precision, recall and mean average precision (mAP) of RetinaNet compared to YOLOv4 on the FISHTRAC train set.} \label{tab:detectorcomparison} \resizebox{\columnwidth}{!}{ \begin{tabular}{ p{4cm} |l|l|l} % \textbf{Algorithm} & \textbf{Precision @ 0.5} & \textbf{Recall @ 0.5} & \textbf{mAP} \\\specialrule{2.5pt}{1pt}{1pt} RetinaNet & \textbf{85.83} & \textbf{42.25} & \textbf{60.41} \\ \hline YOLOv4 - 1024x1024 & 76.90 & 16.30 & 25.11 \\ \hline YOLOv4 - 608x608 & 83.89 & 18.51 & 30.40 \\ \hline YOLOv4 Tiny - 608x608 & 69.94 & 23.64 & 39.32 \\ \hline \end{tabular}} \end{center} \end{table} In order to run a tracking-by-detection MOT pipeline on FISHTRAC, we need to train an object detector; however, this requires significant training data (even after pretraining the network on a general-purpose dataset like ImageNet). Although manually annotating images is one option, that is time-consuming and expensive, and for many applications significant training data is available in large public datasets. Therefore, to generate training data for our FISHTRAC detectors, we scraped all human-annotated bounding boxes labeled ``fish'' from the Google Open Images Dataset~\cite{OpenImages}, one of the largest bounding box datasets. However, this resulted in only 1800 images (many of which were not in real-world underwater environments), which is more limited than the data usually used to train a deep learning model. Next, we examine different object detection approaches: although we wish to study cases where detections are low-quality, it is important to select a detection pipeline that maximizes the quality of detections given our limited training data.
Therefore, we compared state-of-the-art detectors on our FISHTRAC train set and selected the one with the best performance. Specifically, we compared the RetinaNet~\cite{lin2017focal} architecture to variants of YOLOv4~\cite{bochkovskiy2020yolov4}. For RetinaNet, we selected a ResNet50 backbone~\cite{he2016deep} pretrained on ImageNet~\cite{deng2009imagenet}, and trained it for 10 epochs. For YOLOv4, we used the officially published model architectures (both full size and Tiny variants). We followed the official guide in the GitHub repo\footnote{\url{https://github.com/AlexeyAB/darknet}} to train the YOLO models on our custom objects, using the pretrained MSCOCO weights. The only slight deviation was in how we set the training steps. By the recommended 6000 training steps, loss was still decreasing and was not lower than 1, so per the instructions we increased the number of steps in an attempt to achieve lower loss; specifically, we trained all YOLO models for approximately 12000 steps. From these steps, we selected the model with the highest train mAP for evaluation. RetinaNet rescales images to between 800 and 1000 pixels on each side, whereas YOLOv4 by default rescales its input to 608x608, so we also tried a variant of YOLOv4 with a larger input data size (1024x1024). We evaluated the various object detection models on the FISHTRAC train dataset, using an IoU threshold of 0.5. As one can see from the precision-recall curves in Figure~\ref{fig:detectorcomparison} and the scores in Table~\ref{tab:detectorcomparison}, the results show that RetinaNet significantly outperforms the more recent YOLOv4 model on FISHTRAC data, hence justifying our choice of it as the source of our detections. This is consistent with past work which has shown that RetinaNet performs especially well with very little training data~\cite{bickel2018automated,weinstein2019individual}.
Despite this, the resulting RetinaNet detector still has mediocre performance on FISHTRAC train: at a 0.5 confidence threshold, it has 85.83\% precision and just 42.25\% recall (60.41 mAP). \noindent\textbf{Detection for UA-DETRAC:} For DETRAC, one can train an accurate detector given the ubiquity of vehicle data, but we intentionally trained on limited data to realistically simulate poor quality detections. Specifically, we trained the same RetinaNet architecture used for FISHTRAC on just 200 car images from Google Open Images. Unsurprisingly, this resulted in mediocre performance on our DETRAC train set: just 62\% precision and 46\% recall (50.3 mAP). \section{Experiment Setup} \begin{figure} \includegraphics[width=\columnwidth]{images/v1_lele.png} \caption{FISHTRAC frame marked with ground truth (GT).} \label{fig:example_fishtrac} \end{figure} \subsection{Evaluation Metrics} \begin{figure} \centering \includegraphics[width=0.5\columnwidth]{figures/diou/diou_vs_iou.pdf} \caption{The scores of DIoU and IoU in a real fish tracking scenario. The orange box is the ground truth and the red is the predicted (tracked) box. The middle image seems best from an end-use perspective, but using IoU there is no way to differentiate it from the bottom image, which is clearly much worse. } \label{fig:diou} \end{figure} As a primary metric we use the recent HOTA metric~\cite{luiten2020hota}, which has gained popularity due to its strong performance in user studies~\cite{luiten2020hota}. We also report more classic CLEAR MOT metrics like MOTA~\cite{bernardin2008evaluating} as secondary metrics. However, one limitation of these MOT evaluation metrics is that they are both based on the IoU (intersection over union) between each box in the predicted track and each box in the ground truth track.
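For concreteness, the IoU between two axis-aligned boxes can be computed as follows. This is a standard textbook formulation, not code from the RCT or evaluation codebases.

```python
def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap region (empty if the boxes do not intersect).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

Note that any pair of non-overlapping boxes scores exactly 0, regardless of how near or far apart they are; this is the limitation discussed next.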
MOTA uses a fixed threshold on IoU to determine whether a ground truth and a predicted box are close enough to match, whereas recent improvements like HOTA take the average score over many possible thresholds. However, regardless of the exact threshold, if the predicted and the ground truth track do not overlap, the IoU value is zero. So a predicted box which does not overlap any ground truth box will count as a false positive. Indeed, in such situations the tracker would have received a better score if it had simply not tracked the object at all. The rationale behind this approach is that a tracker which completely loses track of the object should detect that it has failed and not issue a prediction so as not to confuse the downstream pipeline. Although at first this seems reasonable, in our setting we found that this produced highly counter-intuitive results. Consider Figure~\ref{fig:diou} for instance. The middle image is clearly better than the top image at giving the user a sense of where the fish is: it has roughly the right size box in roughly the right location, whereas with the top image the location and size of the target are both completely inaccurate. Yet with a low enough IoU threshold, the top image will count as a matched detection due to the overlap, while the middle image never will no matter the IoU threshold. Additionally, with the IoU metric, there is no way to differentiate the middle image from the bottom image, even though the middle prediction is clearly much more useful to an end-user than the bottom prediction. With low-accuracy detections, situations similar to the middle image will happen a lot: especially when targets become small (such as fish swimming away from the camera) the tracker may have to rely on a motion model rather than visual information to determine where the object is. 
In these situations, as long as the tracker produces a track that is ``close'' to the original it will still be helpful for downstream applications even if there is no overlap, especially for small targets. Therefore, we instead use Distance-IoU (DIoU)~\cite{zheng2020distance}, a recent metric that combines IoU with the normalized distance between the boxes to give more ``partial credit'' to non-overlapping detections. Specifically, DIoU computes \begin{equation} DIoU(b_1, b_2) = 1 - IoU(b_1,b_2) + \frac{d_{euclid}^2(C(b_1), C(b_2))}{g^2(b_1,b_2)}, \end{equation} where $g$ is the diagonal length of the smallest box enclosing the two boxes and $C$ is the center point operator. DIoU ranges from 0 to 2 (note that, unlike IoU, lower is better), and we wish to have a threshold greater than 1 but less than 2 to admit boxes that may not overlap. If two equally-sized boxes barely touch at a corner, they will have DIoU 1.25, so this is the threshold we use when a match is first established for a track. DETRAC's CLEAR MOT implementation originally allowed 20\% variability in the threshold to allow for more leeway while tracking the object; we followed this approach, allowing the DIoU to rise up to 1.5 while tracking an object. To ensure our HOTA and MOTA metrics were considering a similar range of DIoU values, we modified HOTA to integrate over DIoU values between 1.25 and 1.5. The resulting scores on the training set of FISHTRAC more closely matched our intuition than the IoU-based scores; for instance, DIoU with these thresholds would successfully count the middle scenario in Figure~\ref{fig:diou} as a match while excluding the much worse bottom scenario. \subsection{Evaluation Protocol} Trackers fed low-accuracy detections might take an extremely long time, or might fail to produce any results.
To handle the time issue, our code kills the tracker after 30 minutes have passed on a single video; this is recorded as a \textbf{timeout}. In contrast, a \textbf{failure} occurs when a tracker fails to produce any results at all for an entire video, usually due to an assumption in the original code not being met, e.g. assuming that there are detections on every frame. Additionally, each tracker other than RCT requires setting $h$, the threshold on detection confidence. We set this separately for each tracker. A robust tracker should never time out or fail, so we first select the threshold(s) that minimize the sum of timeouts and failures. In the case of ties, we select based on average HOTA over the DETRAC and FISHTRAC train sets. We then use this threshold on the test videos. After this process, all trackers had zero timeouts/failures on the training data, except for the three slowest methods (D3S, GMMCP, and IHTLS), which often timed out. \subsection{RCT and Baseline Implementations} \noindent\textbf{RCT Implementation Details:} Like most other MOT algorithms, RCT has a number of parameters. In our case, other than $h_I=0.5$, which was set purely based on intuition, we set the other 10 parameters to maximize MOTA and qualitative performance on the 6 FISHTRAC/DETRAC training videos. This resulted in the following settings: $\beta=50\%$, $\delta = 4$, $\delta_m = 2$, $h_u = 0.3$, $D_{max} = 20$, $h_q = 0.8$, $h_f = 0.2$, $\omega=1\%$, $\alpha=1.1$, and $\delta_n = 5$. The majority (7 of 10) of these parameters control the trimming and joining heuristics (see ablation study for the impact of these components). A list of the RCT parameters used in our experiments is provided in Table \ref{tab:rct-params}.
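For reference, the settings above can be collected into a single configuration mapping. This is an illustrative sketch only; the ASCII key names are our own shorthand for the paper's symbols, not identifiers from the RCT codebase.

```python
# Illustrative collection of the RCT parameter settings reported above.
# Keys are our own ASCII shorthand for the paper's symbols.
RCT_PARAMS = {
    "h_I": 0.5,     # detection confidence threshold (set by intuition)
    "beta": 0.50,   # box enlargement fraction for the image-edge check
    "delta": 4,     # previous frames used for Kalman position/velocity
    "delta_m": 2,   # missing-detection frames before MedianFlow fallback
    "h_u": 0.3,     # IoU threshold for same-object detections
    "D_max": 20,    # max missing-detection frames when joining tracklets
    "h_q": 0.8,     # confidence threshold for "high-quality" detections
    "h_f": 0.2,     # IoU threshold for redundant-track filtering
    "omega": 0.01,  # offscreen fraction before trimming a track
    "alpha": 1.1,   # acceleration factor for offscreen motion
    "delta_n": 5,   # missing-observation frames before tail trimming
}
```
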
In terms of Kalman filter parameters, we set the transition and observation covariance matrices to standard 1-diagonal form (with 0 elements for the velocity observations, since they are unobserved), although we did a small amount of tuning on the diagonal velocity transition elements (which were set to 0.2), and the diagonal position observation elements (which were set to 0.5). \begin{table} \begin{center} \caption{RCT parameters and meaning.} \label{tab:rct-params} \resizebox{\columnwidth}{!}{ \begin{tabular}{|l|l|} \hline Parameter& Meaning\\ \hline $h_I$& Detection confidence threshold.\\ \hline $\beta$& Percentage by which a box is enlarged to check whether it is sufficiently far from the image edge.\\ \hline $\delta$& Number of previous frames used to calculate approximate position and velocity using the Kalman filter.\\ \hline $\delta_m$& Number of frames with missing detections needed to justify switching from the Kalman filter to MedianFlow.\\ \hline $h_u$& IoU threshold to determine if two detections are potentially on the same object. \\ \hline $D_{max}$& Maximum number of frames with missing detections to permit joining two tracklets.\\ \hline $h_q$& Detection confidence threshold used to filter ``high-quality'' detections. \\ \hline $h_f$& IoU threshold used to filter redundant tracks. \\ \hline $\omega$& Percentage offscreen an object must be in order to trim its track. \\ \hline $\alpha$& Acceleration factor when objects are moving offscreen. \\ \hline $\delta_n$& Number of frames of missing detections needed before deciding to trim them from the track. \\ \hline \end{tabular}} \end{center} \end{table} \noindent\textbf{Classic and Specialized Baselines:} In total, we compare RCT to 15 trackers. We compare to four classic trackers from the original DETRAC set (\textbf{GOG}~\cite{pirsiavash2011globally}, \textbf{CMOT}~\cite{bae2014robust}, \textbf{RMOT}~\cite{yoon2015bayesian}, and \textbf{IHTLS}~\cite{dicle2013way}).
To this we add \textbf{GMMCP}~\cite{dehghan2015gmmcp}, a tracker used in recent video-based person re-identification systems~\cite{liu2019spatial,jiang2021ssn}. We compare two related improvements of the IOU tracker~\cite{bochinski2017high}, \textbf{KIOU} (which uses a Kalman Filter) and \textbf{VIOU}~\cite{bochinski2018extending} (which uses MedianFlow). We compare \textbf{JPDA\_m}~\cite{rezatofighi2015joint}, an optimization of the classic JPDA approach~\cite{fortmann1983sonar} that, like RCT, incorporates motion model probability. We also compare to Visual Fish Tracking (\textbf{VFT})~\cite{jager2017visual}, which is specially designed to track fish in real-world video. \noindent\textbf{Deep Baselines:} We compare to \textbf{DAN}~\cite{sun2019deep}, which has exceptional performance on UA-DETRAC. We fine-tuned DAN on our train set (pretraining on the provided pedestrian model) to maximize performance. We also compare \textbf{AOA}~\cite{du2020tao}, which won the recent 2020 ECCV TAO challenge and uses an improved version of the popular DeepSORT~\cite{wojke2018deep} algorithm. \noindent\textbf{SOT Baselines:} Comparing to SOT approaches is unusual in the MOT literature; however, SOT approaches rely less on detection quality and thus may be a viable approach in this setting. We adapt these approaches to the MOT setting in a way that mirrors RCT: we initialize the tracker on the highest confidence detection that does not overlap previous tracks, and run the tracker forward and backward from that frame; continuing to add tracks while there are still uncovered detections. As SOT trackers, we try \textbf{MedianFlow}~\cite{kalal2010forward} and \textbf{KCF}~\cite{henriques2014high}, which have shown good performance in MOT pipelines~\cite{bochinski2018extending}. We also compare \textbf{D3S}~\cite{lukezic2020d3s}, a deep segmentation approach which is one of the top performers on the recent real-time VOT-RT 2020 challenge~\cite{kristan2020vot}. 
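The SOT-to-MOT adaptation described above can be sketched as a greedy loop. This is a hypothetical sketch: the callables are stand-ins for the real pipeline pieces (which wrap the individual SOT trackers), and the helpers are assumed to mark a seed detection as covered once its track is built.

```python
def sot_to_mot(detections, make_tracker, overlaps_existing, run_both_ways):
    """Greedy SOT-to-MOT adaptation sketched from the description above.

    `detections` is a list of (frame, box, confidence) tuples; the helper
    callables are hypothetical stand-ins for the real pipeline. Repeatedly
    seed a fresh SOT tracker on the highest-confidence detection not
    covered by an existing track, run it forward and backward from that
    frame, and stop once every detection is covered by some track.
    """
    tracks = []
    remaining = sorted(detections, key=lambda d: d[2], reverse=True)
    while remaining:
        frame, box, conf = remaining[0]
        if not overlaps_existing(tracks, frame, box):
            tracker = make_tracker(frame, box)
            tracks.append(run_both_ways(tracker, frame))
        # Drop detections now covered by some track (including the seed;
        # the returned track is assumed to cover its seed detection).
        remaining = [d for d in remaining
                     if not overlaps_existing(tracks, d[0], d[1])]
    return tracks
```
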
However, even ``real-time'' SOT trackers may be slow when applied to the more complex MOT task. Therefore, we compare to \textbf{GOTURN}~\cite{held2016learning}, a deep tracker which ranked \#1 in terms of speed and \#6 of 39 in accuracy on the large-scale GOT-10k benchmark~\cite{huang2019got}. \section{Results} \subsection{RCT Performance Analysis} One of the key aspects of RCT is its use of the exact detection confidence, instead of the standard method of ``prefiltering'' the detections by a fixed confidence threshold, and then discarding the confidence. Figure~\ref{fig:prefilter} shows a comparison of RCT to a variant with an initial prefilter. No matter how we set the threshold, we cannot reach the original performance, showing the benefit of utilizing the exact detection confidence when tracking. \begin{figure} \centering \includegraphics[width=0.6\columnwidth]{images/prefiltergraph.pdf} \caption{RCT on FISHTRAC train with/without a prefilter.} \label{fig:prefilter} \end{figure}\\ \indent RCT is capable of efficiently searching through an unfiltered set of detections to produce an effective track. Table~\ref{tab:unfiltered} shows that this is not the case for other trackers. Several methods could not cope with the large number of unfiltered detections, being unable to return a result even after three days.\footnote{The methods were either run on a state-of-the-art high-performance computing cluster, or on a modern GPU-capable server, depending on their hardware and software requirements.} The methods that completed showed poor performance. In contrast, RCT was able to quickly produce accurate tracks in this setting. \begin{table} \centering \caption{Performance when trackers are fed unfiltered detections for one DETRAC video (MVI\_40752, 2025 frames). 
} \label{tab:unfiltered} \resizebox{0.8\columnwidth}{!}{ \begin{tabular}{ l |l|l|l|l} % \textbf{Tracker} & \textbf{Time} & \textbf{HOTA} & \textbf{ID switches} & \textbf{MOTA} \\\specialrule{2.5pt}{1pt}{1pt} RCT & 13 min & 51.44 & 8 & 36.78\\ \hline MEDFLOW & 36 min & 45.47 & 88 & -8.21 \\ \hline KCF & 57 min & 44.89 & 307 & 2.13 \\ \hline DAN & 61 min & 13.38 & 843 & -757.63 \\ \hline VIOU & 155 min & 12.49 & 219 & -984.70 \\ \hline KIOU & 334 min & 6.06 & 330 & -4126.46\\ \hline GOG & 452 min & 15.40 & 367 & -828.10 \\ \hline GOTURN & 524 min & 2.84 & 189 & -5499.44 \\ \hline AOA & 677 min & 6.74 & 572 & -2219.49 \\ \hline D3S & 1010 min & 10.09 & 125 & -951.38 \\ \hline JPDA\_m & 2483 min & 9.50 & 835 & -1688.34 \\ \hline VFT & $>$ 3 days & -- & -- & -- \\ \hline CMOT & $>$ 3 days & -- & -- & -- \\ \hline RMOT & $>$ 3 days & -- & -- & -- \\ \hline GMMCP & $>$ 3 days & -- & -- & -- \\ \hline IHTLS & $>$ 3 days & -- & -- & -- \\ \hline \end{tabular}} \end{table} \label{sec:ablate} Given that our method contains several non-essential components, we ran an ablation study to determine the impact of each factor. The results are shown in Table~\ref{tab:ablation}. We see that removing any of the various features of RCT does result in a decrease in the training data HOTA and MOTA, and typically an increase in ID switches. It was surprising that the precise method of track trimming had such a large impact; this may point to a deficiency in the HOTA/MOTA metrics, as the differences in track trimming method often cause little to no visually noticeable change in the results. \setcounter{table}{5} \begin{table*} \centering \caption{Test set results for FISHTRAC (shorthand: Fish) and DETRAC (shorthand: Car). Timeouts and Failures are summed across the datasets, while the Avg HOTA and Avg FPS are averaged. The table is sorted first by the sum of timeouts and failures, and second by the average HOTA.
Bolded values indicate the best scores of trackers that produced results on all sequences. } \label{tab:test} \resizebox{\textwidth}{!}{ \begin{tabular}{ l |p{1cm}|p{0.8cm}|p{1cm}|p{1cm}|p{1cm} |p{1.2cm} |p{1cm} |p{1cm} |p{1cm} |p{1cm} |p{1cm} |p{1cm} |p{1cm} |p{1cm} |p{1cm}|p{1cm} } % \textbf{Tracker} & \textbf{Time-outs} & \textbf{Fail-ures} & \textbf{Avg HOTA} & \textbf{Total ID Sw} & \textbf{Fish HOTA} & \textbf{Fish MOTA} & \textbf{Fish ID Sw} & \textbf{Fish Prcn} & \textbf{Fish Recall} & \textbf{Car HOTA} & \textbf{Car MOTA} & \textbf{Car ID Sw} & \textbf{Car Prcn} & \textbf{Car Recall} & \textbf{Avg FPS} \\\specialrule{2.5pt}{1pt}{1pt} RCT & \textbf{0} & \textbf{0} & \textbf{44.58} & \textbf{553} & \textbf{49.67} & \textbf{45.97} & \textbf{47} & 83.65 & 57.48 & 39.49 & 29.60 & \textbf{506} & \textbf{94.15} & 31.64 & 4.08 \\ \hline KCF & \textbf{0} & \textbf{0} & 43.16 & 3563 & 30.45 & 27.75 & 884 & 69.62 & 58.35 & \textbf{55.87} & \textbf{47.86} & 2679 & 81.62 & 62.33 & 20.68\\ \hline MEDFLOW & \textbf{0} & \textbf{0} & 42.95 & 800 & 32.03 & -58.51 & 108 & 37.01 & \textbf{82.45} & 53.87 & 33.67 & 692 & 68.12 & 63.49 & 2.51\\ \hline DAN & \textbf{0} & \textbf{0} & 42.02 & 17253 & 44.24 & 42.05 & 361 & \textbf{90.73} & 49.17 & 39.80 & 35.00 & 16892 & 76.38 & 54.53 & 5.08\\ \hline GOG & \textbf{0} & \textbf{0} & 39.41 & 15873 & 37.85 & 45.08 & 414 & 87.48 & 55.42 & 40.98 & 39.42 & 15459 & 74.27 & 64.06 & \textbf{94.17}\\ \hline AOA & \textbf{0} & \textbf{0} & 35.21 & 20848 & 39.28 & 13.79 & 593 & 57.57 & 65.53 & 31.14 & 3.60 & 20255 & 52.25 & \textbf{79.00} & 10.40 \\ \hline \rowcolor{Gray} KIOU & 0 & 1 & 46.64 & 5109 & 49.47 & 46.72 & 119 & 88.22 & 54.72 & 43.81 & 31.64 & 4990 & 65.98 & 66.95 & 159.45\\ \hline \rowcolor{Gray} VIOU & 0 & 1 & 45.86 & 2768 & 48.91 & 46.44 & 51 & 93.69 & 50.12 & 42.81 & 35.39 & 2717 & 71.98 & 58.64 & 5.41\\ \hline \rowcolor{Gray} JPDA\_m & 0 & 1 & 37.79 & 1357 & 34.11 & 35.75 & 77 & 94.69 & 38.35 & 41.47 & 32.99 & 1280 & 89.13 & 
37.81 & 17.30 \\ \hline \rowcolor{Gray} GOTURN & 0 & 2 & 20.47 & 1102 & 19.53 & -281.09 & 114 & 16.24 & 67.47 & 21.41 & -88.88 & 988 & 23.00 & 37.80 & 8.61 \\ \hline \rowcolor{Gray} RMOT & 5 & 0 & 38.28 & 1077 & 39.74 & 40.21 & 133 & 90.91 & 45.54 & 36.82 & 25.77 & 944 & 88.84 & 29.65 & 5.05\\ \hline \rowcolor{Gray} CMOT & 5 & 1 & 46.61 & 5731 & 54.40 & 50.30 & 110 & 84.47 & 62.41 & 38.82 & 12.71 & 5621 & 55.75 & 65.92 & 2.38\\ \hline \rowcolor{Gray} VFT & 8 & 0 & 23.49 & 5257 & 30.73 & 33.93 & 449 & 93.27 & 39.39 & 16.25 & 16.45 & 4808 & 90.55 & 19.22 & 12.23\\ \hline \rowcolor{Gray} D3S & 26 & 1 & 34.11 & 82 & 54.72 & 23.20 & 33 & 60.94 & 65.12 & 13.50 & 2.15 & 49 & 64.20 & 4.89 & 0.62 \\ \hline \rowcolor{Gray} GMMCP & 30 & 11 & 17.25 & 138 & 29.76 & 31.10 & 114 & 89.78 & 35.84 & 4.75 & 0.40 & 24 & 87.03 & 0.48 & 0.17\\ \hline \rowcolor{Gray} IHTLS & 45 & 1 & 6.80 & 242 & 13.60 & -2.54 & 242 & 48.40 & 17.16 & 0.00 & 0.00 & 0 & -- & 0.00 & 0.08\\ \hline \end{tabular}} \end{table*} \setcounter{table}{4} \begin{table} \centering \caption{Ablation study. HOTA and MOTA are averaged over the two train datasets; ID switches are summed.} \label{tab:ablation} \resizebox{\columnwidth}{!}{ \begin{tabular}{ p{4cm} |l|l|p{2cm}} % \textbf{Variation} & \textbf{Avg HOTA} & \textbf{Total ID Switches} & \textbf{Avg MOTA} \\\specialrule{2.5pt}{1pt}{1pt} Unmodified & 60.61 & 32 & 45.86 \\ \hline No MedianFlow & 56.02 & 40 & 41.33 \\ \hline No track joining & 58.32 & 46 & 45.27 \\ \hline Not filtering long, large, low confidence tracks & 56.79 & 64 & 29.03 \\ \hline Not trimming when box is offscreen & 25.61 & 38 & -608.37 \\ \hline Trimming as soon as box touches offscreen & 57.06 & 23 & 36.65 \\ \hline Not trimming when box is fully onscreen & 58.37 & 34 & 32.91 \\ \hline \end{tabular}} \end{table} \subsection{Test Results} We ran all 16 trackers across all 11 FISHTRAC test videos and the 40 UA-DETRAC test videos. 
We followed good practice regarding test data; in particular, we did not in any way evaluate RCT on the test videos during its development. Our objective is the same as it was when selecting thresholds: we wish to minimize timeouts and failures for reliability, and then to maximize the average HOTA score. Test results are shown in Table~\ref{tab:test}. Our main result is that, of the trackers which successfully produced results for every sequence (i.e. no timeouts or failures), our RCT algorithm has the best average HOTA across the FISHTRAC and the DETRAC datasets. This demonstrates the advantages of RCT in terms of robust performance (which is notable given that RCT was developed based on examining just 6 videos). Many other trackers were not nearly as robust; for instance, while CMOT has an impressive HOTA score on the FISHTRAC dataset, it cannot cope with the longer DETRAC sequences, resulting in 5 timeouts and 1 failure. In contrast, our adaptation of the KCF single-object tracker does extremely well on DETRAC, but significantly worse on FISHTRAC, likely because fish are significantly more difficult to track based on visual information due to appearance changes and similar factors. The fact that KCF and MEDFLOW perform so well on DETRAC highlights the importance of comparing to SOT algorithms even when attempting to solve a MOT problem. Although KCF is thought to be quite a weak baseline on SOT problems, our experiments indicate that stronger SOT trackers like D3S are too computationally expensive to run on our MOT problems; in fact, D3S timed out on over half the test videos. GOTURN has sufficient speed, but performs poorly, in part due to not adequately handling MOT-specific issues such as track termination. One of the most notable features of our RCT algorithm is that it achieves just 553 identity switches across all 51 test videos; the only algorithms with fewer are D3S, GMMCP, and IHTLS, algorithms that simply did not produce any tracks for the majority of videos.
The other MOT trackers have an order of magnitude more identity switches, even algorithms such as DAN, VIOU, and KIOU, which achieve good HOTA. This is due to RCT's ability to fuse low-confidence detections, a motion model, and a single object tracker to rapidly produce high-quality continuous tracks even when high-confidence detections are sparse. Minimizing ID switches is very important for practical applications; for instance, we intend to use RCT to help divers keep track of individual fish while underwater. Numerous ID switches are likely to confuse the diver and cause them to follow the incorrect fish. In these types of applications, we would much rather miss some objects, but ensure the tracks we do provide are high-quality, with little to no identity switches, even in the face of unreliable detections. We expect RCT to excel in these situations. \section{Conclusion} We have studied the problem of multi-object tracking-by-detection with unreliable detections. To illustrate this, we presented a new MOT dataset, FISHTRAC, with high-resolution videos of underwater fish behavior. We also present RCT, which takes a different approach than other MOT algorithms, using the detection confidence in three different ways to produce high-quality tracks given a completely unfiltered set of input detections. We find that RCT outperforms baselines (including the 2020 TAO challenge winner and a top performer on the VOT-RT 2020 challenge), tracking objects accurately with very few ID switches and no timeouts or failures. The public release of our FISHTRAC dataset and codebase is forthcoming. A next step is adapting RCT to work in an online and real-time fashion in a way that can be deployed in the field. One practical benefit of RCT is that it does not use a GPU, which in edge settings may be fully utilized by the detection network.
Additionally, we found that many high-performing MOT methods work poorly with a low-quality detector, so it would be interesting to explore an adaptive approach which analyzes detection quality and adapts the tracker behavior accordingly. Pursuing these directions will help MOT algorithms be more easily deployed to solve a diverse set of real-world problems. \section{Acknowledgements} We gratefully acknowledge the assistance of Timothy Kudryn, Ilya Kravchik, Dr. Timothy Grabowski, Christopher Hanley, and Sebastian J. Carter on this research project. This work was supported by NSF CAREER Award \#HCC-1942229 and NSF EPSCoR Award \#OIA-1557349.
\section{Introduction}\label{sec:intro} The production of photons at large transverse momenta is studied for a variety of final-state configurations at particle colliders, for example in inclusive photon production, photon pair production or photon-plus-jet production. These observables probe fundamental QCD and QED dynamics, help to constrain the parton content of the colliding hadrons, and yield final states that are also of interest in new particle searches. At the LHC, measurements of single-photon~\cite{ATLAS:2017nah,CMS:2018qao,ATLAS:2019buk,ATLAS:2019iaa} and di-photon~\cite{CMS:2014mvm,ATLAS:2017cvh,ATLAS:2021mbt} observables are now reaching an experimental accuracy of a few per cent, thereby demanding a comparable level of precision for the corresponding theory predictions. The leading-order parton-level production process of photons at large transverse momenta is their radiation off quarks, which is also called prompt or direct production. Another source of final-state photons is their radiation in the hadronisation process of an ordinary jet production event, called fragmentation process. This photon fragmentation process is described by (non-perturbative) fragmentation functions of different partons into photons~\cite{Koller:1978kq,Laermann:1982jr}. The contribution of the fragmentation process to a photon production observable can be minimised by imposing an isolation criterion, which requires the photon to be well-separated from any final-state hadrons in the event. In experimental measurements, the photon isolation is formulated by allowing only a limited amount of hadronic energy in a fixed-size cone around the photon. For a finite-sized cone, this hadronic energy threshold must be non-zero to ensure infrared safety of the resulting observables, consequently leading to a non-vanishing fragmentation contribution that must also be accounted for in the theory predictions. 
An alternative isolation procedure is to use a dynamical cone~\cite{Frixione:1998jh}, which lowers the hadronic energy threshold towards the centre of the cone and fully suppresses the fragmentation contribution. While theory predictions at higher orders frequently employ the dynamical cone isolation due to its simplicity, all experimental measurements to date are based on fixed-cone isolation. The uncertainty resulting from using different isolation prescriptions in theory and experiment forms a systematic source of error that is difficult to quantify. For a fixed-size cone isolation, it is not even possible to disentangle the prompt and fragmentation processes, since the parton-level collinear photon radiation off a final-state quark is kinematically indistinguishable from photon fragmentation. After renormalisation and mass factorisation of the incoming parton distributions, this parton-level process yields a left-over collinear singularity, which is absorbed into the mass factorisation of the photon fragmentation functions~\cite{Koller:1978kq}. Consequently, the next-to-leading order (NLO) corrections for inclusive photon~\cite{Aurenche:1987fs,Baer:1990ra,Aurenche:1992yc,Gordon:1993qc,Gluck:1994iz,Catani:2002ny}, photon-plus-jet~\cite{Aurenche:2006vj} and di-photon production~\cite{Binoth:1999qq} depend on the photon fragmentation functions. These fulfil DGLAP-type evolution equations~\cite{Laermann:1982jr} with an inhomogeneous term from the quark-to-photon splitting, with a priori unknown non-perturbative boundary conditions. Parametrisations of the photon fragmentation functions mainly rely on models for these boundary conditions~\cite{Owens:1986mp,Gluck:1992zx,Bourhis:1997yu}. The only measurements to date were performed at LEP~\cite{Buskulic:1995au,Ackerstaff:1997nha}, enabling a determination of the photon fragmentation functions~\cite{GehrmannDeRidder:1997gf} and a critical assessment~\cite{GehrmannDeRidder:1998ba} of the previously available models.
Calculations of next-to-next-to-leading order (NNLO) QCD corrections for inclusive-photon~\cite{Campbell_2017,Chen_2020}, photon-plus-jet~\cite{Chen_2020,Campbell_2017a}, di-photon~\cite{Catani:2011qz,Campbell:2016yrh,Catani:2018krb,Gehrmann:2020oec,Chawdhry:2021hkp,Badger:2021ohm} or tri-photon production~\cite{Chawdhry:2019bji,Kallweit:2020gcp} have been performed up to now only for dynamical cone isolation (or variations thereof, \cite{Siegert:2016bre}). Theory predictions for fixed-cone isolation have so far not been feasible at NNLO QCD, since none of the available QCD subtraction techniques at NNLO is able to handle fragmentation processes. It is the objective of this paper to extend the antenna subtraction method~\cite{GehrmannDeRidder:2005cm,Daleo:2006xa,Currie:2013vh} to be able to account for photon fragmentation up to NNLO. In section~\ref{sec:dfrag}, we review the mass factorisation of the photon fragmentation functions up to NNLO, which forms the basis for the compensation of collinear singularities between direct and fragmentation processes. The different contributions to photon production cross sections up to NNLO in QCD are described in detail in section~\ref{sec:xsec}, where we construct the antenna subtraction terms that are required to handle collinear photon radiation at NLO and NNLO. These antenna subtraction terms contain novel fragmentation antenna functions for double real radiation at tree level and single real radiation at one loop, which are differential in the final-state photon momentum fraction. The integration of these fragmentation antenna functions over the respective antenna phase spaces is described in sections~\ref{sec:X40int} and \ref{sec:X31int}. Finally, we conclude in section~\ref{sec:conc} with a discussion of possible applications and extensions of the newly developed formalism.
Two appendices document the relevant mass factorisation kernels and all integrated NLO fragmentation antenna functions for identified photons or partons. \section{Mass Factorisation of the Photon Fragmentation Functions}\label{sec:dfrag} Collinear photon radiation off partons leads to singularities in cross sections involving identified final-state photons. These singularities are absorbed into a redefinition (mass factorisation) of the parton-to-photon fragmentation functions. This factorisation is performed at a fragmentation scale $\mu_a$, and the resulting mass-factorised fragmentation functions consequently depend on $\mu_a$. The relation between mass-factorised and bare fragmentation functions can be expressed as \begin{equation} D_{i\to \gamma}(z,\mu_a^2) = \sum_j \mathbf{\Gamma}_{i\to j}(z,\mu_a^2) \otimes D_{j \to \gamma}^{B}(z) \, , \label{eq:DgamDb_component} \end{equation} where the flavours are $i,j \in \{ g, q , \bar{q} , \gamma \}$, and $\mathbf{\Gamma}_{i \to j}$ are the mass factorisation kernels of the fragmentation functions. We use a bold letter to indicate that these kernels carry colour factors. For a compact notation we have introduced a photon-to-photon fragmentation function $D_{\gamma \to \gamma}$. It is given by \begin{equation} D_{\gamma \to \gamma}(z,\mu_a^2) = D_{\gamma \to \gamma}^B(z) = \delta(1-z) \, . \label{eq:Dgamtogam} \end{equation} In the convolution on the right-hand side of \eqref{eq:DgamDb_component}, we indicate the variable $z$ on both components with the implicit understanding that $z$ only emerges after performing the convolution. This prescription will allow us in the subsequent sections to distinguish convolutions in final-state momentum fractions $z$ and in initial-state momentum fractions $x$ related to the parton distribution functions (PDF), which appear simultaneously in some of the higher-order expressions.
Equation \eqref{eq:DgamDb_component} can be written in matrix form, i.e.\ \begin{equation} \mathbf{D}_{\gamma}(z,\mu_a^2) = \mathbf{\Gamma}(z,\mu_a^2) \otimes \mathbf{D}_{\gamma}^B(z). \label{eq:DgamDb_matrix} \end{equation} In the equation at hand $\mathbf{D}_{\gamma}$ and $\mathbf{D}_{\gamma}^B$ are vectors in flavour space and $\mathbf{\Gamma}$ is a matrix in flavour space. The mass factorisation kernel has a perturbative expansion in the strong coupling constant $\alpha_s$ and in the electromagnetic coupling $\alpha$. The bare fragmentation functions can now be expressed in terms of the mass-factorised fragmentation functions by inversion of \eqref{eq:DgamDb_matrix}, \begin{equation} \mathbf{D}_{\gamma}^B(z) = \mathbf{\Gamma}^{-1}(z,\mu_a^2) \otimes \mathbf{D}_{\gamma}(z,\mu_a^2), \end{equation} which can be expanded in $\alpha$ and $\alpha_s$ to obtain the bare fragmentation functions up to a required perturbative order. For the calculation of isolated photon production processes up to NNLO in QCD, this expansion is required to order $\alpha^1\alpha_s^1$. 
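The order-by-order inversion of $\mathbf{\Gamma}$ can be illustrated with a small symbolic check (an illustrative sketch, not part of the paper's formalism): modelling the kernel as a scalar series $1 + a\,\Gamma_1 + a^2\Gamma_2$ in a single expansion parameter $a$, with convolutions treated as ordinary products, the truncated inverse $1 - a\,\Gamma_1 + a^2(\Gamma_1^2 - \Gamma_2)$ reproduces the identity up to higher orders:

```python
import sympy as sp

a = sp.symbols('a')                       # stands for a coupling, e.g. alpha_s/(2*pi)
G1, G2 = sp.symbols('Gamma1 Gamma2')      # toy kernel coefficients (convolutions -> products)

Gamma = 1 + a*G1 + a**2*G2                # kernel truncated at second order
Gamma_inv = 1 - a*G1 + a**2*(G1**2 - G2)  # candidate order-by-order inverse

# The product must equal 1 up to terms of O(a^3).
residual = sp.series(sp.expand(Gamma*Gamma_inv), a, 0, 3).removeO() - 1
print(sp.simplify(residual))  # -> 0
```

The same bookkeeping, promoted to matrices in flavour space and genuine convolutions in $z$, yields the explicit expansions quoted in the following.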
For the quark-to-photon fragmentation function we find \begin{eqnarray} D_{q \to \gamma}^B(z) &=& D_{q \to \gamma}(z,\mu_a^2) - \frac{\alpha}{2 \pi} \mathbf{\Gamma}^{(0)}_{q \to \gamma} \nonumber \\ &&- \frac{\alpha_s}{2\pi} \left( \mathbf{\Gamma}^{(1)}_{q \to q} \otimes D_{q\to \gamma} + \mathbf{\Gamma}^{(1)}_{q \to g} \otimes D_{g\to \gamma} + \frac{\alpha}{2\pi} \mathbf{\Gamma}^{(1)}_{q \to \gamma} - \frac{\alpha}{2\pi} \mathbf{\Gamma}^{(1)}_{q \to q} \otimes \mathbf{\Gamma}^{(0)}_{q \to \gamma} \right), \label{eq:Dqtogambar} \end{eqnarray} while the gluon-to-photon fragmentation function reads \begin{eqnarray} D_{g \to \gamma}^B(z) &=& D_{g\to \gamma}(z,\mu_a^2) -\frac{\alpha_s}{2\pi} \bigg( \mathbf{\Gamma}^{(1)}_{g \to g} \otimes D_{g \to \gamma} + \sum_{q} \mathbf{\Gamma}^{(1)}_{g \to q} \otimes D_{q \to \gamma} \nonumber \\ &&\quad + \frac{\alpha}{2 \pi} \mathbf{\Gamma}_{g\to \gamma}^{(1)} - \frac{\alpha}{2 \pi} \sum_q \mathbf{\Gamma}_{g \to q}^{(1)} \otimes \mathbf{\Gamma}_{q\to \gamma}^{(0)} \bigg) \, , \label{eq:Dgtogambar} \end{eqnarray} where the sum runs over all quark flavours (and also includes anti-quarks) and we have used that $\mathbf{\Gamma}^{(0)}_{g \to \gamma} = 0$. It will prove useful to introduce some additional notation for combinations of terms that are involved in the mass factorisation of the fragmentation functions. 
We define \begin{eqnarray} {\mathbf{F}}^{(0)}_{q \to \gamma} &=& \frac{\alpha}{2 \pi} \mathbf{\Gamma}^{(0)}_{q \to \gamma} \, , \nonumber \\ {\mathbf{F}}^{(1)}_{q \to \gamma} &=& \mathbf{\Gamma}^{(1)}_{q \to q} \otimes \left( D_{q\to \gamma} - \frac{\alpha}{2\pi} \mathbf{\Gamma}^{(0)}_{q \to \gamma} \right) + \mathbf{\Gamma}^{(1)}_{q \to g} \otimes D_{g \to \gamma} + \frac{\alpha}{2\pi} \mathbf{\Gamma}^{(1)}_{q \to \gamma} \, , \nonumber \\ {\mathbf{F}}^{(0)}_{g \to \gamma} &= &\mathbf{\Gamma}^{(0)}_{g \to \gamma} = 0 \, , \nonumber \\ {\mathbf{F}}^{(1)}_{g \to \gamma} &= & \mathbf{\Gamma}^{(1)}_{g \to g} \otimes D_{g \to \gamma} + \sum_{q} \mathbf{\Gamma}^{(1)}_{g \to q} \otimes \left( D_{q \to \gamma} -\frac{\alpha}{2\pi} \mathbf{\Gamma}_{q\to \gamma}^{(0)} \right) + \frac{\alpha}{2 \pi} \mathbf{\Gamma}_{g\to \gamma}^{(1)} \, , \label{eq:kernelbar} \end{eqnarray} and rewrite the relation between the bare and the mass-factorised fragmentation functions as \begin{equation} D_{i \to \gamma}^B(z) = D_{i \to \gamma}(z,\mu_a^2) - {\mathbf{F}}^{(0)}_{i \to \gamma}(z,\mu_a^2) - \frac{\alpha_s}{2\pi} {\mathbf{F}}^{(1)}_{i \to \gamma}(z,\mu_a^2) \, . 
\label{eq:DbartoRcom} \end{equation} We can further decompose $\mathbf{F}^{(1)}_{i \to \gamma}$ into \begin{equation} {\mathbf{F}}^{(1)}_{i \to \gamma} = {\mathbf{F}}^{(1),A}_{i \to \gamma} + {\mathbf{F}}^{(1),B}_{i \to \gamma} + {\mathbf{F}}^{(1),C}_{i \to \gamma} \end{equation} with \begin{eqnarray} {\mathbf{F}}^{(1),A}_{q \to \gamma} &= &\mathbf{\Gamma}^{(1)}_{q \to q} \otimes D_{q\to \gamma} + \mathbf{\Gamma}^{(1)}_{q \to g} \otimes D_{g \to \gamma} \, , \nonumber\\ {\mathbf{F}}^{(1),A}_{g \to \gamma} &= &\mathbf{\Gamma}^{(1)}_{g \to g} \otimes D_{g \to \gamma} + \sum_{q} \mathbf{\Gamma}^{(1)}_{g \to q} \otimes D_{q \to \gamma} \, , \nonumber\\ {\mathbf{F}}^{(1),B}_{q \to \gamma} &= &-\frac{\alpha}{2 \pi} \mathbf{\Gamma}^{(1)}_{q \to q} \otimes \mathbf{\Gamma}^{(0)}_{q \to \gamma} \, ,\nonumber \\ {\mathbf{F}}^{(1),B}_{g \to \gamma} &= &-\frac{\alpha}{2\pi} \sum_{q} \mathbf{\Gamma}^{(1)}_{g \to q} \otimes \mathbf{\Gamma}^{(0)}_{q \to \gamma} \, ,\nonumber \\ {\mathbf{F}}^{(1),C}_{q \to \gamma} &= &\frac{\alpha}{2\pi} \mathbf{\Gamma}^{(1)}_{q \to \gamma} \, , \nonumber\\ {\mathbf{F}}^{(1),C}_{g \to \gamma} &= &\frac{\alpha}{2\pi} \mathbf{\Gamma}^{(1)}_{g \to \gamma} \, . \label{eq:gambarABC} \end{eqnarray} \section{Photon Production Cross Section} \label{sec:xsec} Any isolated photon production cross section at higher orders in QCD consists of a direct and a fragmentation contribution. Its general form reads: \begin{equation} \text{d} \hat{\sigma}^{\gamma + X} = \text{d} \hat{\sigma}_{\gamma} + \sum_{p} \text{d} \hat{\sigma}_{p} \otimes D^B_{p \to \gamma} \, , \label{eq:csgamgeneral} \end{equation} with $p \in \{ q_j, \bar{q}_j , g \}$. $\text{d} \hat{\sigma}_p$ is the cross section for the production of parton $p$ with large transverse momentum and $\text{d} \hat{\sigma}_{\gamma}$ describes the direct contribution to the photon production cross section. 
Beyond the Born approximation, it contains singularities originating from configurations where partons are collinear to the photon. The bare fragmentation contribution in the above equation further decomposes into two parts: a piece in which $\text{d} \hat{\sigma}_p$ is convoluted with the mass-factorised fragmentation functions, and the mass factorisation counterterms of the fragmentation functions, which cancel the parton-photon collinear singularities in the direct contribution. Genuine QCD infrared singularities that do not involve the photon are fully contained inside the direct contribution, where they compensate each other between partonic subprocesses of different multiplicity. By using a dynamical photon isolation, which regulates any parton-photon collinear configurations and discards the fragmentation contribution, these singularities can be handled with generic QCD subtraction methods up to NNLO. Following this procedure, NNLO results have been obtained for photon-plus-jet production~\cite{Campbell_2017,Chen_2020}, di-photon production~\cite{Catani:2011qz,Campbell:2016yrh,Catani:2018krb,Gehrmann:2020oec}, di-photon-plus-jet production~\cite{Chawdhry:2021hkp,Badger:2021ohm} and tri-photon production~\cite{Chawdhry:2019bji,Kallweit:2020gcp}. In the following, it is assumed that the genuine QCD singularities have already been handled using antenna subtraction, such that only the parton-photon collinear singularities remain to be dealt with. The subtractions for infrared-singular genuine QCD and parton-photon collinear configurations are largely independent up to NNLO (except for the occurrence of simple collinear quark-photon singularities in a single type of genuine QCD subtraction terms, discussed in Section~\ref{sec:directRR} below), such that the corresponding subtraction terms can just be combined in an additive manner.
We will thus discuss only the construction of parton-photon collinear subtractions, their interplay with the mass factorisation of the parton-to-photon fragmentation functions, and generic fragmentation function contributions in the following. The cross section $\text{d} \hat{\sigma}_i$ is expanded in powers of $\alpha_s$, i.e. \begin{equation} \text{d} \hat{\sigma}_i = \text{d} \hat{\sigma}^{{\rm LO}}_i + \frac{\alpha_s}{2 \pi} \text{d} \hat{\sigma}^{{\rm NLO}}_i + \left(\frac{\alpha_s}{2 \pi} \right)^2 \text{d} \hat{\sigma}^{{\rm NNLO}}_i + \mathcal{O}(\alpha_s^3) \, . \end{equation} With the power counting of the fragmentation functions given by $D_{q/\bar{q}/g \to \gamma} = \mathcal{O}(\alpha)$, the different contributions to the photon cross section at the different levels of accuracy read \begin{eqnarray} \text{d} \hat{\sigma}^{\gamma+ X,{\rm LO}} &= &\text{d} \hat{\sigma}_{\gamma}^{{\rm LO}}, \label{eq:siggampX0} \\ \text{d} \hat{\sigma}^{\gamma+ X,{\rm NLO}} &= &\text{d} \hat{\sigma}_{\gamma}^{{\rm NLO}} + {\rm d} \hat{\sigma}^{{\rm LO}}_{g} \otimes D_{g \to \gamma} + \sum_q \text{d} \hat{\sigma}^{{\rm LO}}_{q} \otimes D_{q \to \gamma} - \sum_q \text{d} \hat{\sigma}^{{\rm LO}}_{q} \otimes \mathbf{F}^{(0)}_{q \to \gamma},\label{eq:siggampX1} \end{eqnarray} \newpage and \begin{eqnarray} \text{d} \hat{\sigma}^{\gamma+ X,{\rm NNLO}} &= & \text{d} \hat{\sigma}_{\gamma}^{{\rm NNLO}} + \sum_q \text{d} \hat{\sigma}_{q}^{{\rm NLO}} \otimes D_{q \to \gamma} - \sum_q \text{d} \hat{\sigma}_{q}^{{\rm NLO}} \otimes \mathbf{F}^{(0)}_{q \to \gamma} \nonumber \\ &&\quad - \sum_q \text{d} \hat{\sigma}_{q}^{{\rm LO}} \otimes \frac{\alpha_s}{2 \pi}\mathbf{F}^{(1)}_{q \to \gamma} + \text{d} \hat{\sigma}_{g}^{{\rm NLO}} \otimes D_{g \to \gamma} - \text{d} \hat{\sigma}_{g}^{{\rm LO}} \otimes \frac{\alpha_s}{2\pi} \mathbf{F}^{(1)}_{g \to \gamma}. 
\label{eq:siggampX2} \end{eqnarray} We used \eqref{eq:DbartoRcom} to express the bare fragmentation functions in terms of the mass-factorised fragmentation functions and used that $\mathbf{F}^{(0)}_{g \to \gamma} = 0$. The sums in the above equations also run over anti-quarks. In general, the fragmentation functions are flavour-sensitive while the mass factorisation kernels are flavour-blind. In the cross section $\text{d}\hat{\sigma}_i$ the final-state particle $i$ has to be identified. This holds not only for the case $i = \gamma$ but also when $i$ is a parton. Therefore, it is useful to rewrite the higher-order cross section as \begin{eqnarray} \text{d}\hat{\sigma}_i^{{\rm NLO}} &=& \int \text{d} \Phi_{n+1} \left(\text{d} \hat{\sigma}^R_i - \text{d} \hat{\sigma}_{i}^S -\text{d} \hat{\sigma}_{j(i)}^S\right) \nonumber \\ &&+ \int \text{d} \Phi_{n} \left( \text{d} \hat{\sigma}^V_i - \text{d} \hat{\sigma}^T_i - \text{d} \hat{\sigma}^T_{j(i)} \right) \label{eq:NLOsigid} \end{eqnarray} and \begin{eqnarray} \text{d}\hat{\sigma}_i^{{\rm NNLO}} &=& \int \text{d} \Phi_{n+2} \left(\text{d} \hat{\sigma}^{RR}_i - \text{d} \hat{\sigma}_{i}^S -\text{d} \hat{\sigma}_{j(i)}^S\right)\nonumber \\ &&+ \int \text{d} \Phi_{n+1} \left( \text{d} \hat{\sigma}^{RV}_i - \text{d} \hat{\sigma}^T_i - \text{d} \hat{\sigma}^T_{j(i)} \right) \nonumber \\ &&+ \int \text{d} \Phi_{n} \left( \text{d} \hat{\sigma}^{VV}_i - \text{d} \hat{\sigma}^U_i - \text{d} \hat{\sigma}^U_{j(i)} \right) \, , \label{eq:NNLOsigid} \end{eqnarray} where we divided the subtraction terms into a part in which the identified particle remains resolved and a part in which particle $i$ is unresolved, i.e.\ in the reduced matrix element there is no momentum corresponding to particle $i$ alone, but it is part of a cluster of identity $j \in \{q_j, \bar{q}_j, g\}$.
All explicit poles in $\text{d}\hat{\sigma}^T_{j(\gamma)}$ and $\text{d}\hat{\sigma}^U_{j(\gamma)}$ have to cancel against the mass factorisation terms of the fragmentation functions. It should be noted that the composition of these cross sections slightly deviates from the pure QCD case~\cite{Currie:2013vh}, where all counterterms from the mass factorisation of the incoming parton distributions are contained in $\text{d}\hat{\sigma}^T$ and $\text{d}\hat{\sigma}^U$. In $\text{d}\hat{\sigma}^T_{j(\gamma)}$ and $\text{d}\hat{\sigma}^U_{j(\gamma)}$, only counterterms associated with the parton distributions are included, while the mass factorisation counterterms of the photon fragmentation functions are not included, but added explicitly to \eqref{eq:NLOsigid} and \eqref{eq:NNLOsigid}. This distinction will allow a more transparent identification of infrared cancellations associated with the photon fragmentation process in the following. \subsection{Subtraction at NLO} The form of the NLO cross section is given in \eqref{eq:siggampX1}. Using additionally the notation of \eqref{eq:NLOsigid} for $\text{d}\hat{\sigma}^{{\rm NLO}}_{\gamma}$, we have \newpage \begin{eqnarray} \text{d} \hat{\sigma}^{\gamma+ X,{\rm NLO}} &=& \int \text{d} \Phi_{n+1} \left(\text{d} \hat{\sigma}^R_{\gamma} - \text{d} \hat{\sigma}_{\gamma}^S -\sum_q \text{d} \hat{\sigma}_{q(\gamma)}^S\right) \nonumber \\ &&+ \int \text{d} \Phi_{n} \left( \text{d} \hat{\sigma}^V_{\gamma} - \text{d} \hat{\sigma}^T_{\gamma} \right) \nonumber\\ &&+ \int \text{d} \Phi_{n} \sum_q \left(- \text{d} \hat{\sigma}^T_{q(\gamma)} - \text{d} \hat{\sigma}^{B}_{q} \otimes \mathbf{F}^{(0)}_{q \to \gamma} \right) \nonumber \\ &&+ \int \text{d} \Phi_n \sum_q \text{d} \hat{\sigma}^{B}_{q} \otimes D_{q \to \gamma} + \int \text{d} \Phi_n \text{d} \hat{\sigma}^{B}_{g} \otimes D_{g \to \gamma} \, , \label{eq:gampXNLO} \end{eqnarray} where each individual line is free of implicit and explicit divergences.
${\rm d} \hat{\sigma}^S_{q(\gamma)}$ subtracts the quark-photon singular collinear configurations of ${\rm d} \hat{\sigma}^R_{\gamma}$. At NLO, these configurations always yield a quark as a parent cluster so that there is no contribution to ${\rm d} \hat{\sigma}^S_{g(\gamma)}$. The flavour sum runs over $\{ u,d,s,c,b \}$ and does not distinguish between quarks and anti-quarks as $D_{q \to \gamma} = D_{\bar{q} \to \gamma}$. The full $\text{d} \hat{\sigma}^S_{q(\gamma)}$ subtraction term is a sum of contributions of the type, \begin{eqnarray} \text{d} \hat{\sigma}^S_{q(\gamma)} &=& \mathcal{N}^R_{NLO} \sum_{{\rm perm.}} \text{d} \Phi_{n+1}(k_1,\, ...\,,k_n , k_{\gamma}; p_1 , p_2) \frac{1}{S_{n+1}} \nonumber \\ &&\times A^0_3(\check{k}_{\bar{q}}, k_{\gamma}^{{\rm id.}}, k_q) \, Q_q^2 \, M^0_{n+2}( ... \, , k_{(q\gamma)} , \, ...) \emph{J}^{(n)}_{m} ( \{ \tilde{k} \}_n ; z). \label{eq:gernicSqgam} \end{eqnarray} The antenna function in \eqref{eq:gernicSqgam} mimics the singular $q\parallel \gamma$ limit of the real-radiation matrix element. At NLO one can always choose the $A^0_3$ antenna function in its final-final or initial-final crossing to subtract this limit. We indicate the reference momentum with a check-mark and the identified particle with a superscript (id.). The reduced matrix element is a Born-level jet matrix element and it is multiplied with the charge $Q_q$ of the quark, to which the photon becomes collinear. $z$ is the momentum fraction of the photon within the cluster momentum $k_{(q\gamma)}$. It is given by $z = z_3\left(\check{k}_{\bar{q}}, k_{\gamma}^{{\rm id.}}, k_q\right)$ with the general definition for the NLO momentum fraction \begin{equation} z_3\left(\check{k}_a , k_b^{{\rm id.}}, k_c\right) = \frac{s_{ab}}{s_{ab} + s_{ac}} \, . \label{eq:derz3generic} \end{equation} The jet function $J^{(n)}_m$ applies the jet algorithm as well as any cuts on the photon. Consequently, it retains an explicit functional dependence on $z$. 
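As a numerical illustration of \eqref{eq:derz3generic} (a sketch for this paper, not code belonging to the formalism), one can check that for an exactly collinear quark-photon pair sharing a massless parent momentum, $z_3$ reduces to the momentum fraction of the photon within the parent, independently of the reference direction:

```python
import numpy as np

def mdot(p, q):
    """Minkowski product with the (+,-,-,-) metric."""
    return p[0]*q[0] - np.dot(p[1:], q[1:])

def z3(k_ref, k_id, k_other):
    """NLO momentum fraction z_3 = s_ab / (s_ab + s_ac) for massless momenta."""
    s_ab = 2.0 * mdot(k_ref, k_id)
    s_ac = 2.0 * mdot(k_ref, k_other)
    return s_ab / (s_ab + s_ac)

# Photon and quark collinear to a common massless parent direction,
# the photon carrying fraction z of the parent momentum.
z = 0.3
k_parent = np.array([1.0, 0.0, 0.0, 1.0])
k_gamma = z * k_parent
k_quark = (1.0 - z) * k_parent
k_ref = np.array([1.0, 0.0, 1.0, 0.0])   # reference (anti-)quark momentum

print(z3(k_ref, k_gamma, k_quark))       # -> 0.3 (up to float rounding)
```

In this collinear configuration both invariants $s_{\bar{q}\gamma}$ and $s_{\bar{q}q}$ are proportional to the same parent invariant, so their ratio isolates $z$ exactly.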
The antenna functions appearing in \eqref{eq:gernicSqgam}, in which a final-state particle is identified, constitute a new class called fragmentation antenna functions. The limit in which the photon becomes collinear to the reference momentum corresponds to the limit $z \to 0$, and this configuration will be vetoed by the jet function. The reference particle can be either in the final or in the initial state, i.e.\ final-final or initial-final fragmentation antenna functions can be used. In the case of an initial-final fragmentation antenna function we exclusively use the initial-state momentum as the reference direction in the definition of the momentum fraction $z$. Therefore, in this case we have $\check{k}_{\bar{q}}=\check{p}_q$ in \eqref{eq:gernicSqgam}. To integrate the subtraction term we have to factorise the phase space in \eqref{eq:gernicSqgam} and make the integration over $z$ explicit. A different phase space factorisation applies for the cases of initial-final and final-final antenna functions. Using the initial-final phase space factorisation~\cite{Daleo:2006xa}, we obtain \begin{equation} {\rm d} \Phi_{n+1}(\dots,k_q,k_\gamma,\dots;p_q,p_2) = {\rm d} \Phi_n(\dots, k_{(q \gamma)},\dots;\bar{p}_{q},p_2) \frac{{\rm d} x}{x} \frac{Q^2}{2\pi}{\rm d} \Phi_2 \delta\left( z - \frac{s_{ \check{q} \gamma}}{s_{ \check{q} \gamma}+s_{\check{q} q}} \right) {\rm d} z \, , \end{equation} where $q^2 = (p_{q} - k_{\gamma} - k_{q})^2=-Q^2$ and ${\rm d} \Phi_2 = {\rm d} \Phi_2(q,p_q;k_{\gamma},k_q)$. We used that in the case of an initial-final antenna we have $\check{k}_{\bar{q}}=\check{p}_q$.
Using the final-final phase space factorisation~\cite{GehrmannDeRidder:2005cm}, one can rewrite the $n+1$ particle phase space as \begin{equation} {\rm d} \Phi_{n+1}(\dots,k_{\bar{q}},k_\gamma,\dots;p_1,p_2) = {\rm d} \Phi_n(\dots,\tilde{k}_{\bar{q}}, k_{(q \gamma)},\dots;p_1,p_2) {\rm d} \Phi_3 P_2^{-1} \delta\left( z - \frac{s_{ \bar{q} \gamma}}{s_{ \bar{q} \gamma}+s_{\bar{q} q}} \right) {\rm d} z \, , \end{equation} where ${\rm d} \Phi_3 = {\rm d} \Phi_3(k_{\bar{q}},k_{\gamma},k_q;\tilde{k}_{\bar{q}}+k_{(q\gamma)})$ and $P_2$ is the integrated two-body phase space, i.e.\ \begin{equation} P_2 = \int {\rm d} \Phi_2 = 2^{-3+2\epsilon} \pi^{-1+\epsilon} \frac{\Gamma(2-2\epsilon)}{\Gamma(1-\epsilon)} s^{-\epsilon} \, . \end{equation} After factorising the phase space in \eqref{eq:gernicSqgam}, the integration of the subtraction term ${\rm d} \hat{\sigma}^S_{q(\gamma)}$ can be performed: \begin{eqnarray} \text{d}\hat{\sigma}^T_{q(\gamma)} &=& - \mathcal{N}^V_{NLO} Q_q^2 \sum_{{\rm perm.}} \int \frac{\text{d}x}{x} \int_0^1 \text{d} z \, \text{d} \Phi_n(k_1, \, ... \, , k_{q} , \, ... \, , k_n;x p_1, p_2 ) \nonumber \\ &&\times \frac{1}{S_n} \mathcal{A}^{0, \, {\rm id.} \gamma}_{3,\bar{q}}(x , z) \, M^0_{n+2}( ... \, , k_q , \, ...) \, \emph{J}^{(n)}_m( \{ k \}_n ; z) \, , \label{eq:gernicTqgam} \end{eqnarray} where $\mathcal{N}^V_{NLO} = \mathcal{N}^R_{NLO} \, C(\epsilon)$, with $C(\epsilon) = \left( 4\pi e^{-\gamma_E}\right)^{\epsilon}/(8 \pi^2)$. In case a final-final antenna function is used there is no explicit $x$-dependence in the subtraction term. $\mathcal{A}^{0, \, {\rm id.} \gamma}_{3,\bar{q}}$ is the integrated fragmentation antenna function. The subscript corresponds to the reference particle used in the definition of the momentum fraction.
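As a quick sanity check of the normalisation (an illustrative sketch only), the quoted $P_2$ reduces at $\epsilon = 0$ to the familiar four-dimensional massless two-body phase-space volume $1/(8\pi)$:

```python
import sympy as sp

eps, s = sp.symbols('epsilon s', positive=True)

# Integrated two-body phase space as quoted in the text.
P2 = 2**(-3 + 2*eps) * sp.pi**(-1 + eps) \
     * sp.gamma(2 - 2*eps) / sp.gamma(1 - eps) * s**(-eps)

# In d = 4 dimensions (eps -> 0) this is the massless two-body volume.
P2_4d = sp.simplify(P2.subs(eps, 0))
print(P2_4d)  # -> 1/(8*pi)
```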
The integration of a general $X^{0,{\rm id.} j}_{i,jk}$ fragmentation antenna function with identified particle $j$ and reference particle $i$ in the initial-final configuration reads \begin{eqnarray} \mathcal{X}^{0,{\rm id.} j}_{3,i}\left(x,z\right) &=& \frac{1}{C(\epsilon)}\int {\rm d} \Phi_2(q,p_{i};k_{j},k_k) \, X_{i,jk}^{0,{\rm id.} j} \, \frac{Q^2}{2\pi} \, \delta\left( z - \frac{s_{ij}}{s_{ij}+s_{ik}} \right) \nonumber \\ &= &\frac{Q^2}{2} \frac{e^{\gamma_E \epsilon}}{\Gamma(1-\epsilon)} \left(Q^2\right)^{-\epsilon} \mathcal{J}(x,z) \, X_{i,jk}^{0,{\rm id.} j}(x,z) \label{eq:intX30IFfrag} \end{eqnarray} with $q^2 = (p_i-k_j-k_k)^2 = -Q^2<0$ and the Jacobian factor is given by \begin{equation} \mathcal{J}(x,z) = (1-x)^{-\epsilon} x^{\epsilon} z^{-\epsilon} (1-z)^{-\epsilon} \, . \label{eq:JacPhi2} \end{equation} It originates from expressing the integration over the two-body phase space as a single integration over $z$. After expressing the invariants in the antenna function in terms of $x$ and $z$, all terms of the form $(1-x)^{-1-\epsilon}$ and $(1-z)^{-1-\epsilon}$ are expanded in distributions, where we use the notation \begin{equation} \mathcal{D}_n(u) = \left[\frac{\log^n (1-u) }{1-u} \right]_+ \, \, , \, n \in \mathbb{N}_0 \, . \label{eq:Dndef} \end{equation} The integrated $A^0_3$ fragmentation antenna function in the initial-final configuration reads \begin{eqnarray} \mathcal{A}^{0, {\rm id.} \gamma}_{3, \hat{q}}(x,z)&=& \left(Q^2\right)^{-\epsilon} \bigg[ -\frac{1}{2\epsilon}\delta(1-x) p^{(0)}_{\gamma q}(z) + \frac{1}{2}-\frac{x}{2}+\frac{z}{4}+\frac{x z}{4}+\frac{1}{2} z \delta(1-x) \nonumber \\ &&+\left(-\frac{1}{4}-\frac{x}{4}+\frac{1}{2} \mathcal{D}_0(x)+\frac{1}{2} \delta(1-x) \left( \log (1-z)+\log(z)\right)\right) p^{(0)}_{\gamma q}(z) \bigg] + \mathcal{O}(\epsilon) \, , \nonumber \\ \label{eq:A30qgamIF} \end{eqnarray} where $p^{(0)}_{\gamma q}$ denotes the quark-photon splitting function given in \eqref{eq:LOsplittingfunc}. 
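The distributional expansion underlying \eqref{eq:Dndef} is the standard one, $(1-u)^{-1-\epsilon} = -\frac{1}{\epsilon}\,\delta(1-u) + \mathcal{D}_0(u) - \epsilon\,\mathcal{D}_1(u) + \mathcal{O}(\epsilon^2)$. As an illustrative check with the simple test function $f(u)=u$ (chosen here for demonstration, not taken from the paper), the expansion reproduces the exact Beta-function integral through $\mathcal{O}(\epsilon^0)$:

```python
import sympy as sp

u, eps = sp.symbols('u epsilon')
f = u  # simple test function, f(1) = 1

# Exact: int_0^1 f(u) (1-u)^(-1-eps) du = Gamma(2) Gamma(-eps) / Gamma(2-eps)
exact = sp.gamma(2) * sp.gamma(-eps) / sp.gamma(2 - eps)

# Expansion: -f(1)/eps + int_0^1 (f(u)-f(1))/(1-u) du  (the D_0 term)
d0_term = sp.integrate(sp.cancel((f - 1) / (1 - u)), (u, 0, 1))  # = -1
expansion = -1/eps + d0_term

# The difference vanishes through O(eps^0).
diff = sp.series(exact - expansion, eps, 0, 1).removeO()
print(sp.simplify(diff))  # -> 0
```

Keeping further orders of the Laurent expansion of the Gamma functions would likewise reproduce the $\mathcal{D}_1$ term.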
In the final-final configuration the integration of a fragmentation antenna function $X^{0,{\rm id.} j}_{ijk}$ with identified particle $j$ and reference particle $i$ takes the form \begin{eqnarray} \mathcal{X}_{3, i}^{0, \, {\rm id.} j}(z) &=&\frac{1}{C(\epsilon)} \int {\rm d} \Phi_3\left(k_i,k_{j},k_k;\tilde{k}_i+k_{(kj)}\right) P_2^{-1} \delta\left( z - \frac{s_{ij}}{s_{ij}+s_{ik}} \right) X^{0, \, {\rm id.} j}_{ijk} \nonumber \\ &= &\frac{e^{\gamma_E \epsilon}}{\Gamma(1-\epsilon)} s_{ijk}^{-1+2\epsilon} z^{-\epsilon} (1-z)^{-\epsilon} \int_0^{s_{ijk}} {\rm d} s_{jk} (s_{ijk}-s_{jk})^{1-2\epsilon} s_{jk}^{-\epsilon} X^{0, \, {\rm id.} j}_{ijk}(s_{jk},z) \, . \nonumber \\ \end{eqnarray} As in the initial-final integration, we obtain a Jacobian factor from rewriting one of the two non-trivial integrations of the three-body phase space as an integration over $z$. The remaining integration is straightforward for the tree-level $X^0_3$ fragmentation antenna functions. The final-final fragmentation antenna function needed to subtract quark-photon collinear singularities is $A^0_3(\check{\bar{q}},\gamma^{\rm id.},q)$. Its integrated form reads \begin{eqnarray} \mathcal{A}^{0, {\rm id.} \gamma}_{3,\bar{q}}(z)&=& \left(s_{q\gamma \bar{q}}\right)^{-\epsilon} \bigg[ -\frac{1}{2 \epsilon } p^{(0)}_{\gamma q}(z) +\frac{1}{4}+\frac{z}{8}+\left(-\frac{3}{8}+\frac{1}{2} \log (1-z)+\frac{\log (z)}{2}\right) p^{(0)}_{\gamma q}(z) \bigg] \nonumber \\ &&+\mathcal{O}(\epsilon) \, . \label{eq:A30qgamFF} \end{eqnarray} It can be seen from \eqref{eq:A30qgamIF} and \eqref{eq:A30qgamFF} that the quark-photon collinear singularity is manifest as a $1/\epsilon$ pole at the integrated level.
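How this pole disappears can be sketched symbolically. Assuming the standard forms $p^{(0)}_{\gamma q}(z) = \big(1+(1-z)^2\big)/z$ and an $\overline{\rm MS}$-type kernel $\Gamma^{(0)}_{\gamma q}(z) = -p^{(0)}_{\gamma q}(z)/\epsilon$ (both stand-ins for the appendix definitions, which are not reproduced here), the pole part of \eqref{eq:A30qgamFF} cancels against the counterterm $\frac{1}{2}\mu_a^{-2\epsilon}\Gamma^{(0)}_{\gamma q}(z)$, leaving a finite logarithmic remnant:

```python
import sympy as sp

eps, z, s, mu = sp.symbols('epsilon z s mu_a', positive=True)

# Assumed standard forms (stand-ins for the appendix definitions):
p_gq = (1 + (1 - z)**2) / z          # LO quark-to-photon splitting function
Gamma0 = -p_gq / eps                 # MSbar-type kernel Gamma^(0)_{gamma q}(z)

# Pole part of the integrated final-final antenna and the counterterm.
A_pole = s**(-eps) * (-p_gq / (2*eps))
CT = sp.Rational(1, 2) * mu**(-2*eps) * Gamma0

combo = sp.series(A_pole - CT, eps, 0, 1).removeO()
print(sp.simplify(combo.coeff(eps, -1)))   # -> 0: the 1/eps pole cancels
print(sp.simplify(combo))                  # finite remnant ~ p_gq(z)/2 * log(s/mu_a^2)
```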
This pole is cancelled by the mass factorisation contribution of the fragmentation functions, which reads \begin{eqnarray} \text{d} \hat{\sigma}^B_q \otimes \mathbf{F}^{(0)}_{q \to \gamma} &=& \frac{1}{2} \, \mathcal{N}^V_{NLO} \, Q_q^2 \sum_{{\rm perm.}} \int_0^1 \text{d} z \, \text{d} \Phi_n(k_1 , ..., k_q , ... , k_n ; p_1 , p_2 ) \nonumber \\ && \times\frac{1}{S_n} \, \mu_a^{-2\epsilon} \, \Gamma_{\gamma q}^{(0)}(z) M^0_{n+2}( ... \, , k_q , \, ...) \, \emph{J}^{(n)}_m( \{ k \}_n ; z) \, , \label{eq:gernicBqgam} \end{eqnarray} where $\mu_a$ denotes the fragmentation scale and we have expanded out the mass factorisation counterterm $\mathbf{F}^{(0)}_{q \to \gamma}$ in terms of coupling factors and colour-ordered coefficients $\Gamma$, as outlined in \eqref{eq:kernelbar} and detailed in appendix \ref{app:MFkernels}. The Born cross section is \begin{equation} {\rm d} \hat{\sigma}^B_q = \mathcal{N}^B_{{\rm jet}}\sum_{\rm perm.} {\rm d} \Phi_n(\{k\}_n;p_1,p_2)\frac{1}{S_n}M_n^0(\dots,k_q,\dots)J_m^{(n)}(\{k\}_n;z)\, . \end{equation} Here, the jet function depends on $z$ because the quark momentum $k_q$ denotes a quark-photon cluster containing a photon with momentum fraction $z$. The normalisation factors are related by $\mathcal{N}^B_{{\rm jet}} \alpha/({2 \pi} )\left( 4\pi e^{-\gamma_E}\right)^{\epsilon} = \mathcal{N}^V_{NLO}/2$. Adding the integrated subtraction term \eqref{eq:gernicTqgam} and the mass factorisation contribution \eqref{eq:gernicBqgam}, one has \begin{eqnarray} \text{d}\hat{\sigma}^T_{q(\gamma)} + \text{d} \hat{\sigma}^B_q \otimes \mathbf{F}^{(0)}_{q \to \gamma} &=& - \, \mathcal{N}^V_{NLO} \, Q_q^2 \sum_{{\rm perm.}} \int \frac{\text{d}x}{x} \, \int_0^1 \text{d} z \frac{1}{S_n} \text{d} \Phi_n(k_1 , ..., k_q , ... , k_n ;x p_1 ,p_2 ) \nonumber \\ && \times \boldsymbol{J}^{(1), \, {\rm id.} \gamma}_{2,\bar{q}}(\bar{q},q) \, M^0_{n+2}( ... \, , k_q , \, ...) \, \emph{J}^{(n)}_m( \{ k \}_n ; z) \, .
\end{eqnarray} Combination of the integrated initial-final fragmentation antenna function \eqref{eq:A30qgamIF} with the mass factorisation kernel yields the NLO fragmentation dipole \begin{equation} \boldsymbol{J}^{(1), {\rm id.} \gamma}_{2,\hat{q}}(\hat{q},q) = \mathcal{A}^{0,{\rm id. \gamma}}_{3,\hat{q}}(z,x) - \frac{1}{2} \, \mu_a^{-2\epsilon} \, \Gamma^{(0)}_{ \gamma q}(z) \, \delta(1-x) \, . \label{eq:iddipoleqgamIF} \end{equation} In case a final-final $A^0_3$ fragmentation antenna function is used, the dipole reads \begin{equation} \boldsymbol{J}^{(1), \, {\rm id.} \gamma}_{2,\bar{q}}(\bar{q},q) = \mathcal{A}^{0,{\rm id. \gamma}}_{3,\bar{q}}(z) - \frac{1}{2} \, \mu_a^{-2\epsilon} \, \Gamma^{(0)}_{\gamma q}(z) \, , \label{eq:iddipoleqgamFF} \end{equation} where the integrated fragmentation antenna function is given in \eqref{eq:A30qgamFF}. The fragmentation dipoles \eqref{eq:iddipoleqgamIF} and \eqref{eq:iddipoleqgamFF} are $\epsilon$-finite. Therefore, having expressed $\text{d}\hat{\sigma}^T_{q(\gamma)} + \text{d} \hat{\sigma}^B_q \otimes \mathbf{F}^{(0)}_{q \to \gamma}$ in terms of these dipoles, the pole cancellation between the direct part and the mass factorisation contribution is guaranteed. The fragmentation contribution to the photon production cross section at NLO takes the form \begin{eqnarray} \text{d} \hat{\sigma}^{B}_{i} \otimes D_{i \to \gamma} &= &\mathcal{N}^B_{{\rm jet}} \sum_{{\rm perm.}} \int_0^1 \text{d} z \, \text{d} \Phi_n(k_1 , ..., k_i , ... , k_n ; p_1 , p_2 )\nonumber \\ && \times\frac{1}{S_n} \, D_{i \to \gamma}(z,\mu_a^2) M^0_{n+2}( ... \, , k_i , \, ...) \, \emph{J}^{(n)}_m( \{ k \}_n ; z) \, , \end{eqnarray} where $i$ can be a gluon or a quark and $z$ is the momentum fraction that the photon carries away from its parent momentum $k_i$ during the fragmentation process. \subsection{Subtraction at NNLO} The pole cancellation among the different pieces \eqref{eq:siggampX2} at NNLO is more involved.
These pieces can be rearranged according to whether they contain the mass-factorised photon fragmentation functions (fragmentation contribution) or not (direct contribution). The direct contribution contains all photon-parton singular configurations and their associated counterterms from the mass factorisation of the photon fragmentation functions. \subsubsection{Fragmentation Contribution} The fragmentation contribution appears in isolated photon cross sections only from NLO onwards. Consequently, its contribution at NNLO amounts to an NLO-type correction to the production of an identified parton, which subsequently fragments into the photon. This contribution can be further divided into the generic NLO hard subprocess cross sections, \begin{eqnarray} \text{d}\hat{\sigma}^{\gamma +X,{\rm NNLO}}_{f_1} &= &\sum_q \int \text{d} \Phi_{n+1} \left( \text{d} \hat{\sigma}^R_q - \text{d}\hat{\sigma}^S_q - \text{d} \hat{\sigma}^S_{q(q)} - \text{d} \hat{\sigma}^S_{g(q)} \right) \otimes D_{q \to \gamma} \nonumber \\ &&+ \sum_q \int \text{d} \Phi_{n} \left( \text{d}\hat{\sigma}^V_q - \text{d} \hat{\sigma}^T_q - \text{d}\hat{\sigma}^T_{q(q)} - \text{d} \hat{\sigma}^T_{g(q)} \right) \otimes D_{q \to \gamma} \nonumber\\ &&+ \int \text{d} \Phi_{n+1} \, \left( \text{d} \hat{\sigma}^R_g - \text{d} \hat{\sigma}^S_g - \text{d} \hat{\sigma}^S_{g(g)} - \text{d} \hat{\sigma}^S_{q(g)} \right) \otimes D_{g \to \gamma}\nonumber \\ &&+ \int \text{d} \Phi_n \, \left( \text{d} \hat{\sigma}^V_{g} - \text{d} \hat{\sigma}^T_g - \text{d} \hat{\sigma}^T_{g(g)} - \text{d} \hat{\sigma}^T_{q(g)} \right) \otimes D_{g \to \gamma} \, . 
\label{eq:sigf1} \end{eqnarray} and terms resulting from the mass factorisation of the photon fragmentation functions, corresponding to the mass factorisation kernels $\mathbf{F}^{(1),A}_{q\to \gamma}$ and $\mathbf{F}^{(1),A}_{g\to \gamma}$ in \eqref{eq:siggampX2}: \begin{equation} \text{d}\hat{\sigma}^{\gamma +X,{\rm NNLO}}_{f_2} = - \sum_q \text{d} \hat{\sigma}_{q}^{{\rm LO}} \otimes \frac{\alpha_s}{2 \pi}\mathbf{F}^{(1),A}_{q \to \gamma} -\text{d} \hat{\sigma}_{g}^{{\rm LO}} \otimes \frac{\alpha_s}{2\pi} \mathbf{F}^{(1),A}_{g \to \gamma} \, . \end{equation} Expanding these kernels yields the full fragmentation contribution: \begin{eqnarray} \text{d}\hat{\sigma}^{\gamma +X,{\rm NNLO}}_{f} &= & \text{d}\hat{\sigma}^{\gamma +X,{\rm NNLO}}_{f_1} +\text{d}\hat{\sigma}^{\gamma +X,{\rm NNLO}}_{f_2}\nonumber \\ &=&\sum_q \int \text{d} \Phi_{n+1} \left( \text{d} \hat{\sigma}^R_q - \text{d}\hat{\sigma}^S_q - \text{d} \hat{\sigma}^S_{q(q)} - \text{d} \hat{\sigma}^S_{g(q)} \right) \otimes D_{q \to \gamma} \nonumber\\ &&+ \sum_q \int \text{d} \Phi_{n} \left( \text{d}\hat{\sigma}^V_q - \text{d} \hat{\sigma}^T_q - \text{d}\hat{\sigma}^T_{q(q)} - \text{d} \hat{\sigma}^B_q \otimes \frac{\alpha_s}{2 \pi} \mathbf{\Gamma}^{(1)}_{q \to q} \right) \otimes D_{q \to \gamma} \nonumber\\ &&+ \sum_q \int \text{d} \Phi_{n} \left( - \text{d} \hat{\sigma}^T_{g(q)} - \text{d} \hat{\sigma}^B_g \otimes \frac{\alpha_s}{2\pi} \mathbf{\Gamma}^{(1)}_{g \to q} \right) \otimes D_{q\to \gamma} \nonumber\\ &&+ \int \text{d} \Phi_{n+1} \, \left( \text{d} \hat{\sigma}^R_g - \text{d} \hat{\sigma}^S_g - \text{d} \hat{\sigma}^S_{g(g)} - \text{d} \hat{\sigma}^S_{q(g)} \right) \otimes D_{g \to \gamma} \nonumber\\ &&+ \int \text{d} \Phi_{n} \, \left( \text{d} \hat{\sigma}^V_g - \text{d} \hat{\sigma}^T_g - \text{d} \hat{\sigma}^T_{g(g)} - \text{d} \hat{\sigma}^B_{g} \otimes \frac{\alpha_s}{2 \pi} \mathbf{\Gamma}^{(1)}_{g\to g} \right) \otimes D_{g \to \gamma}\nonumber \\ &&+ \int \text{d} \Phi_{n} \, \left( -
\text{d} \hat{\sigma}^T_{q(g)} - \text{d} \hat{\sigma}^B_{q} \otimes \frac{\alpha_s}{2 \pi} \mathbf{\Gamma}^{(1)}_{q\to g} \right) \otimes D_{g \to \gamma} \, , \label{eq:fconNNLO} \end{eqnarray} where we used the expression for the kernels given in \eqref{eq:gambarABC}. Each line in \eqref{eq:fconNNLO} is free of explicit and implicit divergences. The only singularity which is subtracted in $\text{d}\hat{\sigma}^S_{q(q)}$ corresponds to the $q \parallel g$ limit, in which the two partons form a parent quark and the original quark momentum within the quark-gluon cluster is identified. The subtraction term takes the form \begin{eqnarray} \text{d}\hat{\sigma}^S_{q(q)} &=& \mathcal{N}^R_{{\rm jet}} \sum_{{\rm perm.}} \text{d} \Phi_{n+1} (k_1, ... , k_q, ... , k_g, ..., k_{n+1}; p_1,p_2) \frac{1}{S_{n+1}} \nonumber \\ && \times X^0_3(\check{k}_s,k_g,k_q^{{\rm id.}}) M^0_{n+2}(... , k_{(gq)}, ... ) \emph{J}^{(n)}_m( \{ \tilde{k} \}_n ; u) \, , \label{eq:sigSqqid} \end{eqnarray} where we used the same notation as in \eqref{eq:gernicSqgam} to indicate the reference parton with momentum $k_s$ and the identified quark momentum $k_q$. The momentum $k_g$ is the momentum of the gluon which is colour connected to the quark for the specific colour ordering. $u$ is the momentum fraction of the quark in the cluster momentum $k_{(gq)}$. It reads \begin{equation} u = z_3\left(\check{k}_s, k_q^{{\rm id.}} , k_g\right) = \frac{s_{sq}}{s_{sq}+s_{sg}} \, . \end{equation} For the subtraction of a quark-gluon collinear limit either a $D^0_3$ or an $A^0_3$ antenna function can be used and the reference particle can be in the initial or final state. The convolution of the subtraction term with the quark-to-photon fragmentation functions is given by \begin{eqnarray} \text{d}\hat{\sigma}^S_{q(q)} \otimes D_{q \to \gamma} &&=\mathcal{N}^R_{{\rm jet}} \sum_{{\rm perm.}} \int_0^1 \text{d} v \, \text{d} \Phi_{n+1} (k_1, ... , k_q, ...
, k_g, ..., k_{n+1}; p_1,p_2) \frac{1}{S_{n+1}} \nonumber \\ &&\hspace{-6mm} \times X^0_3(\check{k}_s,k_g,k_q^{{\rm id.}}) M^0_{n+2}(... , k_{(gq)}, ... ) \emph{J}^{(n)}_m( \{ \tilde{k} \}_n ; z = uv) \, D_{q \to \gamma}(v,\mu_a^2) \, . \label{eq:sigSqqidconD} \end{eqnarray} Here, two momentum fractions appear: $v$ denotes the fraction of the photon momentum in the quark-photon cluster. The quark itself is part of a quark-gluon cluster, in which it carries a momentum fraction $u$. The jet function $\emph{J}^{(n)}_m$ has to reconstruct the photon momentum from the mapped momentum $ k_{(gq)}$. The momentum fraction of the photon in the quark-gluon-photon cluster momentum $ k_{(gq)}$ is given by $z=uv$, which enters the jet function. The subtraction term $\text{d}\hat{\sigma}^S_{g(q)}$ subtracts the $q \parallel \bar{q}$ limit. In this case the cluster is identified as a gluon. This subtraction term reads \begin{eqnarray} \text{d}\hat{\sigma}^S_{g(q)} &= &\mathcal{N}^R_{{\rm jet}} \bigg\lbrace \sum_{{\rm perm.}} \text{d} \Phi_{n+1} (k_1, ... , k_{q}, ... , k_{\bar{q}}, ..., k_{n+1}; p_1,p_2) \frac{1}{S_{n+1}}\nonumber \\ && \times X^0_3(\check{k}_s,k_{q}^{{\rm id.}},k_{\bar{q}}) M^0_{n+2}(... , k_{(q\bar{q})}, ... ) \emph{J}^{(n)}_m( \{ \tilde{k} \}_n ; u) + ( q \leftrightarrow \bar{q}) \bigg\rbrace \, , \label{eq:sigSgqid} \end{eqnarray} where we have indicated that there is an identical term in which the anti-quark is identified. Both of these terms are convoluted with the same fragmentation function since $D_{q \to \gamma}=D_{\bar{q} \to \gamma}$. The antenna function in \eqref{eq:sigSgqid} can be either an $E^0_3$ or a $G^0_3$ antenna, and either can be used in the initial-final or final-final configuration. The convolution of \eqref{eq:sigSgqid} with the fragmentation function takes the same form as \eqref{eq:sigSqqidconD}.
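The role of the nested momentum fractions can be illustrated with a small numerical sketch (the function name is hypothetical): resolving the constraint $z=uv$ in \eqref{eq:sigSqqidconD} over $v$ turns the double convolution into a single integral $\int_z^1 ({\rm d}u/u)\, f(u)\, D(z/u)$, which can be checked against a closed form for simple toy kernels.

```python
import math

def convolve(f, D, z, n=20000):
    # numerically evaluates the convolution  int_z^1 du/u f(u) D(z/u)
    # with a simple midpoint rule
    h = (1.0 - z) / n
    acc = 0.0
    for i in range(n):
        u = z + (i + 0.5) * h
        acc += f(u) * D(z / u) / u
    return acc * h

# toy kernels f(u) = u, D(v) = v: the integrand is z/u, so the exact
# result is -z*log(z); this only checks the change of variables,
# not any physical input
z = 0.3
num = convolve(lambda u: u, lambda v: v, z)
assert abs(num - (-z * math.log(z))) < 1e-6
```

With $f$ an integrated fragmentation antenna function and $D$ the fragmentation function, this is precisely the integration structure appearing in \eqref{eq:sigTqqid}.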
The integration of the subtraction terms \eqref{eq:sigSqqid} and \eqref{eq:sigSgqid} proceeds in the same manner as the integration of ${\rm d} \hat{\sigma}^S_{q(\gamma)}$. After integrating over the antenna phase space the fragmentation antenna function retains an explicit dependence on the momentum fraction. Performing the convolution with the fragmentation functions, we find \begin{eqnarray} \text{d} \hat{\sigma}^T_{q(q)} \otimes D_{q \to \gamma} &=& - \mathcal{N}^V_{{\rm jet}} \sum_{{\rm perm.}} \int \frac{\text{d}x}{x}\, \int_0^1 \text{d} z \int_z^1 \frac{\text{d} u}{u} \, \, \text{d} \Phi_n(k_1,...,k_q,...,k_n;x p_1 , p_2) \nonumber \\ &&\times \frac{1}{S_n} \mathcal{X}^{0, \, {\rm id.} q}_{3,s}(x , u) \, D_{q \to \gamma}\left(\frac{z}{u},\mu_a^2\right) M^0_{n+2}(...,k_q,...) \emph{J}^{(n)}_m(\{k\}_n;z) \, . \label{eq:sigTqqid} \end{eqnarray} The jet function again only depends on $z=uv$. In case a final-final antenna function is used to subtract the $q \parallel g$ limit, there is no explicit dependence on and integration over $x$. To obtain a fragmentation dipole from the integrated fragmentation antenna function it has to be combined with the mass factorisation contribution from the fragmentation functions. For the case at hand the corresponding mass factorisation contribution is $\mathbf{F}^{(1),A}_{q \to \gamma}$. It reads \begin{eqnarray} \lefteqn{\text{d}\hat{\sigma}^B_q \otimes \frac{\alpha_s}{2\pi} \mathbf{\Gamma}^{(1)}_{q \to q} \otimes D_{q\to \gamma} =}\nonumber \\ && \mathcal{N}^V_{{\rm jet}} \sum_{{\rm perm.}} \int_0^1 \text{d} z \int_z^1 \frac{\text{d} u}{u} \frac{1}{S_n} \text{d} \Phi_n(k_1,...,k_q,...,k_n;p_1 , p_2) \nonumber \\ &&\times \mu_a^{-2\epsilon} \, \Gamma^{(1)}_{q q}(u) D_{q \to \gamma}\left(\frac{z}{u},\mu_a^2\right) \, M^0_{n+2}(...,k_q,...) 
\, \emph{J}^{(n)}_m(\{ k \}_n ; z) \, , \label{eq:sigBqcongam} \end{eqnarray} where we used $\mathcal{N}^V_{{\rm jet}} = \mathcal{N}^B_{{\rm jet}} \left( 4\pi e^{-\gamma_E}\right)^{\epsilon} {\alpha_s N}/({2\pi})$ and where the mass factorisation kernel is expanded according to \eqref{eq:allkernels}. Combining \eqref{eq:sigTqqid} and \eqref{eq:sigBqcongam}, we find \begin{eqnarray} \lefteqn{\text{d}\hat{\sigma}^T_{q(q)} \otimes D_{q \to \gamma} + \text{d} \hat{\sigma}^B_q \otimes \frac{\alpha_s}{2\pi} \mathbf{\Gamma}^{(1)}_{q \to q} \otimes D_{q \to \gamma} }\nonumber \\ &=&- \mathcal{N}^V_{{\rm jet}} \sum_{{\rm perm.}} \int \frac{\text{d}x}{x}\, \int_0^1 \text{d} z \int_z^1 \frac{\text{d} u}{u} \, \text{d} \Phi_n(k_1,...,k_q,...,k_n; x p_1 , p_2) \frac{1}{S_n} D_{q \to \gamma}\left(\frac{z}{u},\mu_a^2\right) \nonumber \\ && \times \left( \mathcal{X}^{0, \, {\rm id.} q}_{3,s}(x,u) - \mu_a^{-2\epsilon} \delta(1-x) \Gamma^{(1)}_{qq}(u) \right) M^0_{n+2}(..., k_q, ...) \emph{J}^{(n)}_m(\{ k\}_n ; z) \, . \label{eq:sigTqidq} \end{eqnarray} If the reference particle in the fragmentation antenna function $X^0_3$ is in the initial state, there is an additional contribution from the mass factorisation terms of the initial-state parton distributions. In this case we would have to replace the integrated fragmentation antenna function by the dipole which includes this contribution. There are two final-final fragmentation dipoles for the limit under consideration, \begin{eqnarray} \boldsymbol{J}^{(1), \, {\rm id.} q}_{2,\bar{q}}(\bar{q},q) &=& \mathcal{A}^{0,{\rm id.} \, q}_{3,\bar{q}}(u) - \mu_a^{-2\epsilon} \, \Gamma^{(1)}_{q q}(u) \, , \nonumber \\ \boldsymbol{J}^{(1), \, {\rm id.} q}_{2,g}(g,q) &=& \mathcal{D}^{0,{\rm id.} \, q}_{3,g}(u) - \mu_a^{-2\epsilon} \, \Gamma^{(1)}_{q q}(u) \, .
\label{eq:iddipolesqqclusterFF} \end{eqnarray} In the initial-final configuration we have \begin{eqnarray} \boldsymbol{J}^{(1) , \, {\rm id.} q}_{2,\hat{q}}(\hat{q},q) &= &\mathcal{A}^{0,{\rm id.} \, q}_{3,\hat{q}}(u,x) - \mu_F^{-2\epsilon} \, \Gamma^{(1)}_{qq}(x) \delta(1-u) - \mu_a^{-2 \epsilon} \, \Gamma^{(1)}_{qq}(u) \delta(1-x) \, , \nonumber \\ \boldsymbol{J}^{(1), \, {\rm id.} q}_{2,\hat{g}}(\hat{g},q) &= &\mathcal{D}^{0,{\rm id.} \, q}_{3,\hat{g}}(u,x) - \frac{1}{2} \mu_F^{-2\epsilon} \, \Gamma^{(1)}_{gg}(x) \delta(1-u) - \mu_a^{-2 \epsilon} \, \Gamma^{(1)}_{q q}(u) \delta(1-x) \, , \label{eq:iddipolesqqclusterIF} \end{eqnarray} where we also included the contribution from the mass factorisation terms of the initial-state parton distributions, which are mass-factorised at the factorisation scale $\mu_F$. The remaining poles of the identified dipoles are all proportional to $\delta(1-x) \, \delta(1-u)$ so that a cancellation with the virtual contribution can take place. The expressions for the $\mathcal{A}^{0,{\rm id.} \, q}_{3,q}$ and $\mathcal{D}^{0,{\rm id.} \, q}_{3,g}$ integrated antenna functions are given in appendix~\ref{app:X30integration}. In case the identified quark is part of a quark-anti-quark cluster resulting from a gluon splitting, we have \newpage \begin{eqnarray} \text{d} \hat{\sigma}^T_{g(q)} \otimes D_{q \to \gamma}&=& -2 \, \mathcal{N}^V_{{\rm jet}} \sum_{{\rm perm.}} \int \frac{\text{d}x}{x}\int_0^1 \text{d} z \int_z^1 \frac{\text{d} u}{u} \, \text{d} \Phi_n(k_1,...,k_g,...,k_n;x p_1 , p_2)\nonumber \\ &&\times \frac{1}{S_n} \mathcal{X}^{0, \, {\rm id.} q}_{3,s}(x,u) \, D_{q \to \gamma}\left(\frac{z}{u}, \mu_a^2\right) M^0_{n+2}(...,k_g,...) \emph{J}^{(n)}_m(\{k\}_n;z)\, , \label{eq:sigTgqid} \end{eqnarray} where $k_g$ is the momentum of the $q\bar{q}$-cluster. The overall factor of 2 results from the summation over quark and anti-quark of flavour $q$, which have identical fragmentation functions into photons.
The additional contribution coming from $\mathbf{F}^{(1),A}_{g \to \gamma}$ in \eqref{eq:fconNNLO} takes the form \begin{eqnarray} \lefteqn{\text{d}\hat{\sigma}^B_g \otimes \frac{\alpha_s}{2\pi} \mathbf{\Gamma}^{(1)}_{g \to q} \otimes D_{q\to \gamma}} \nonumber \\ &=&2 \, \mathcal{N}^V_{{\rm jet}} \sum_{{\rm perm.}} \int_0^1 \text{d} z \int_z^1 \frac{\text{d} u}{u} \text{d} \Phi_n(k_1,...,k_g,...,k_n;p_1 ,p_2) \, \frac{1}{S_n} \nonumber \\ &&\times \mu_a^{-2\epsilon} \, \Gamma^{(1)}_{qg}(u) D_{q \to \gamma}\left(\frac{z}{u},\mu_a^2\right) \, M^0_{n+2}(...,k_g,...) \, \emph{J}^{(n)}_m(\{ k \}_n ; z) \, , \label{eq:sigBgcongam} \end{eqnarray} where we used $\mathcal{N}^V_{{\rm jet}} = \left( 4\pi e^{-\gamma_E}\right)^{\epsilon}{\alpha_s }/({2\pi}) \tilde{\mathcal{N}}^B_{{\rm jet}}$. Usually there is an additional factor of $N$ when going from the Born normalisation to the virtual normalisation factor. This factor is absent in \eqref{eq:sigBgcongam} since the normalisation in \eqref{eq:sigTgqid} refers to a four-quark matrix element, while the Born normalisation factor in \eqref{eq:sigBgcongam} refers to a two-quark matrix element. Combination of both contributions yields \begin{eqnarray} \lefteqn{\text{d}\hat{\sigma}^T_{g(q)} \otimes D_{q \to \gamma} + \text{d} \hat{\sigma}^B_g \otimes \frac{\alpha_s}{2\pi} \mathbf{\Gamma}^{(1)}_{g \to q} \otimes D_{q \to \gamma} } \nonumber \\ &=&- 2 \mathcal{N}^V_{{\rm jet}} \sum_{{\rm perm.}} \int \frac{\text{d}x}{x} \int_0^1 \text{d} z \int_z^1 \frac{\text{d} u}{u}\, \text{d} \Phi_n(k_1,...,k_g,...,k_n;x p_1 ,p_2) \frac{1}{S_n} D_{q \to \gamma}\left(\frac{z}{u},\mu_a^2\right) \nonumber \\ && \times \left( \mathcal{X}^{0, {\rm id.} q}_{3,s}(x,u) - \mu_a^{-2\epsilon} \delta(1-x) \Gamma^{(1)}_{qg} (u)\right) \, M^0_{n+2}(..., k_g, ...) \emph{J}^{(n)}_m(\{ k\}_n ; z) \, .
\end{eqnarray} The term which is convoluted with the fragmentation function defines another fragmentation dipole, containing a $G^0_3$ or $E^0_3$ antenna function for the $q \parallel \bar{q}$ limit. For a final-state reference particle the corresponding dipoles read \begin{eqnarray} \boldsymbol{J}^{(1), \, {\rm id.} q'}_{2,q}(q,g) &=& \mathcal{E}^{0,{\rm id.} \, q'}_{3,q}(u) - \mu_a^{-2 \epsilon} \, \Gamma^{(1)}_{qg}(u) \, ,\nonumber \\ \boldsymbol{J}^{(1), \, {\rm id.} q'}_{2,g}(g,g) &=& \mathcal{G}^{0,{\rm id.} \, q'}_{3,g}(u) - \mu_a^{-2 \epsilon} \, \Gamma^{(1)}_{qg}(u)\, , \label{eq:J21qpidFF} \end{eqnarray} and for an initial-state reference particle we have \begin{eqnarray} \boldsymbol{J}^{(1), \, {\rm id.} q'}_{2,\hat{q}}(\hat{q},g) = \mathcal{E}^{0,{\rm id.} \, q'}_{3,\hat{q}}(u,x) - \mu_a^{-2 \epsilon} \, \Gamma^{(1)}_{qg}(u) \delta(1-x) \, , \nonumber\\ \boldsymbol{J}^{(1), \, {\rm id.} q'}_{2,\hat{g}}(\hat{g},g) = \mathcal{G}^{0,{\rm id.} \, q'}_{3,\hat{g}}(u,x) - \mu_a^{-2 \epsilon} \, \Gamma^{(1)}_{qg}(u) \delta(1-x) \, . \label{eq:J21qpidIF} \end{eqnarray} The integrated antenna functions $\mathcal{E}^{0,{\rm id.} \, q'}_{3,q}=\mathcal{E}^{0,{\rm id.} \, \bar{q}'}_{3,q}$ and $\mathcal{G}^{0,{\rm id.} \, q'}_{3,g}=\mathcal{G}^{0,{\rm id.} \, \bar{q}'}_{3,g}$ are documented in appendix~\ref{app:X30integration}. As the dipoles \eqref{eq:J21qpidFF} and \eqref{eq:J21qpidIF} correspond to a flavour-changing limit, they are $\epsilon$-finite. With these fragmentation dipoles, the NNLO fragmentation contribution \eqref{eq:fconNNLO} can be implemented within the antenna subtraction formalism.
\subsubsection{Direct Contribution: Structure and Final-State Mass Factorisation} The direct contribution to the NNLO photon production cross section reads according to \eqref{eq:siggampX2}: \begin{eqnarray} \text{d}\hat{\sigma}_{\gamma}^{{\rm NNLO}} &=& \int \text{d} \Phi_{n+2} \left(\text{d} \hat{\sigma}^{RR}_{\gamma} - \text{d} \hat{\sigma}_{\gamma}^S - \sum_q \text{d} \hat{\sigma}_{q(\gamma)}^S - \text{d} \hat{\sigma}_{g(\gamma)}^S \right)\nonumber \\ &&+ \int \text{d} \Phi_{n+1} \left( \text{d} \hat{\sigma}^{RV}_{\gamma} - \text{d} \hat{\sigma}^T_{\gamma} - \sum_q \text{d} \hat{\sigma}^T_{q(\gamma)} - \text{d} \hat{\sigma}^T_{g(\gamma)} \right) \nonumber\\ &&+ \int \text{d} \Phi_{n} \left( \text{d} \hat{\sigma}^{VV}_{\gamma} - \text{d} \hat{\sigma}^U_{\gamma} - \sum_q \text{d} \hat{\sigma}^U_{q(\gamma)} - \text{d} \hat{\sigma}^U_{g(\gamma)} \right). \label{eq:sigNNLOdir} \end{eqnarray} This contribution contains final-state parton-photon collinear singularities that are cancelled by the mass factorisation terms of the fragmentation functions. 
The relevant terms at NNLO are as follows: \begin{eqnarray} \text{d}\hat{\sigma}^{\gamma +X,{\rm NNLO}}_{{\rm MF}} &=& - \sum_q \left( \text{d} \hat{\sigma}^{{\rm NLO}}_q \otimes \mathbf{F}^{(0)}_{q \to \gamma} + \text{d} \hat{\sigma}^{{\rm LO}}_q \otimes \frac{\alpha_s}{2\pi} \mathbf{F}^{(1),B}_{q \to \gamma} \right) - \text{d} \hat{\sigma}^{{\rm LO}}_g \otimes \frac{\alpha_s}{2\pi} \mathbf{F}^{(1),B}_{g\to \gamma} \nonumber \\ && - \sum_q \text{d} \hat{\sigma}^{{\rm LO}}_q \otimes \frac{\alpha_s}{2\pi}\mathbf{F}^{(1),C}_{q \to \gamma} - \text{d} \hat{\sigma}^{{\rm LO}}_g \otimes \frac{\alpha_s}{2\pi} \mathbf{F}^{(1),C}_{g\to \gamma} \nonumber \\ &=&\sum_q \int \text{d} \Phi_{n+1} \left( \text{d} \hat{\sigma}^R_q - \text{d}\hat{\sigma}^S_q - \text{d} \hat{\sigma}^S_{q(q)} - \text{d} \hat{\sigma}^S_{g(q)} \right) \otimes \left( - \frac{\alpha}{2\pi} \mathbf{\Gamma}^{(0)}_{q \to \gamma} \right) \nonumber \\ &&+ \sum_q \int \text{d} \Phi_{n} \left( \text{d}\hat{\sigma}^V_q - \text{d} \hat{\sigma}^T_q - \text{d}\hat{\sigma}^T_{q(q)} - \text{d} \hat{\sigma}^B_q \otimes \frac{\alpha_s}{2 \pi} \mathbf{\Gamma}^{(1)}_{q \to q} \right) \otimes \left( - \frac{\alpha}{2\pi} \mathbf{\Gamma}^{(0)}_{q \to \gamma} \right)\nonumber \\ &&+ \sum_q \int \text{d} \Phi_{n} \left( - \text{d} \hat{\sigma}^T_{g(q)} - \text{d} \hat{\sigma}^B_g \otimes \frac{\alpha_s}{2\pi} \mathbf{\Gamma}^{(1)}_{g \to q} \right) \otimes \left( - \frac{\alpha}{2\pi} \mathbf{\Gamma}^{(0)}_{q \to \gamma} \right) \nonumber \\ &&- \sum_q \int \text{d} \Phi_n \text{d} \hat{\sigma}^{B}_q \otimes \frac{\alpha_s}{2 \pi} \frac{\alpha}{2\pi} \mathbf{\Gamma}^{(1)}_{q \to \gamma} - \int \text{d} \Phi_n \text{d} \hat{\sigma}^{B}_g \otimes \frac{\alpha_s}{2\pi} \frac{\alpha}{2\pi} \mathbf{\Gamma}^{(1)}_{g \to \gamma}. \label{eq:sigMF} \end{eqnarray} The individual lines in the above expressions \eqref{eq:sigNNLOdir} and \eqref{eq:sigMF} are free of implicit singularities. 
However, each term contains explicit poles in $\epsilon$, which eventually have to cancel among the different contributions. To guarantee the cancellation, we also have to include additional mass factorisation terms of the initial-state parton distributions at different levels of the calculation. These terms are contained inside the $\text{d} \hat{\sigma}^{T,U}_{i}$ in the above expression and take the general form \begin{eqnarray} \text{d} \hat{\sigma}^{\gamma + X,{\rm NNLO}}_{{\rm ISMF1}} &=& -\int \text{d} \Phi_{n+1} \mathbf{\Gamma}^{(1)}_{{\rm PDF}} \otimes \left(- \text{d} \hat{\sigma}^{S,{\rm NLO}}_{q(\gamma)} \right) \, , \label{eq:sigISMF1} \\ \text{d} \hat{\sigma}^{\gamma + X,{\rm NNLO}}_{{\rm ISMF2}} &= &-\int \text{d} \Phi_{n+1} \mathbf{\Gamma}^{(1)}_{{\rm PDF}} \otimes \left(- \text{d} \hat{\sigma}^{T,{\rm NLO}}_{q(\gamma)} \right) \, . \label{eq:sigISMF2} \end{eqnarray} Finally, products of mass factorisation terms of initial-state parton distributions and final-state fragmentation functions appear at the double virtual level. These mixed terms read: \begin{eqnarray} \text{d} \hat{\sigma}^{\gamma +X,{\rm NNLO}}_{{\rm MF3}} &=& \int \text{d} \Phi_{n} \mathbf{\Gamma}^{(1)}_{{\rm PDF}} \otimes \left( \frac{\alpha}{2\pi} \mathbf{\Gamma}^{(0)}_{q \to \gamma} \right) \otimes \left( \text{d} \hat{\sigma}^{B}_{q} \right) \, . \label{eq:MF3mix} \end{eqnarray} We discuss the cancellation of the implicit and explicit singularities at each level of final-state multiplicity. \subsubsection{Direct Contribution: Double Real Level} \label{sec:directRR} The double-real subtraction terms in which the photon becomes unresolved can be decomposed into the parts $\text{d}\hat{\sigma}^{S,a}, \text{d}\hat{\sigma}^{S,b},\text{d}\hat{\sigma}^{S,c}$ and $\text{d}\hat{\sigma}^{S,d}$, as is done for genuine QCD subtraction terms~\cite{Currie:2013vh}. The notion of colour connection is, however, slightly different here, since the final-state photon cannot become soft.
It is thus only colour connected to one of the hard emitters in its antenna function. The second hard emitter is only used as reference momentum to define the collinear momentum fraction, and does not play a role in any unresolved limit. If this reference momentum is shared with another antenna function, the configuration is still viewed as colour unconnected and thus part of $\text{d}\hat{\sigma}^{S,d}$. The single unresolved subtraction term $\text{d}\hat{\sigma}^{S,a}$ follows the same construction pattern as the NLO real subtraction term. Moreover, in a single unresolved limit the photon can only become part of a quark-photon cluster. Consequently, we have $\text{d} \hat{\sigma}^{S,a}_{g(\gamma)} =0$. The subtraction term for a single collinear $q \parallel \gamma $ limit reads \begin{eqnarray} \text{d} \hat{\sigma}^{S,a}_{q(\gamma)} &=& \mathcal{N}^{RR} \sum_{{\rm perm.}} \text{d} \Phi_{n+2}(k_1,\, ...\,,k_q, \, ...\,,k_{n+1} , k_{\gamma}; p_{\hat{q}} , p_2) \frac{1}{S_{n+2}}\nonumber \\ &&\times A^0_3(\check{p}_{\hat{q}}, k_{\gamma}^{{\rm id.}}, k_q) \, Q_q^2 \, M^0_{n+3}( ... \, , k_{(q\gamma)} , \, ...) \emph{J}^{(n+1)}_{n} \left( \{ \tilde{k} \}_{n+1} ; z\right) \, , \label{eq:gernicSaqgam} \end{eqnarray} where the momentum fraction $z=z_3(\check{p}_{\hat{q}},k_{\gamma}^{\rm id.},k_q)$ is given in \eqref{eq:derz3generic}. The subtraction term \eqref{eq:gernicSaqgam} subtracts the single collinear quark-photon limits of the corresponding double real radiation matrix element. In the construction of the double real subtraction term we use the $A^0_3$ fragmentation antenna function in its initial-final configuration, where it contains only the final-state quark-photon collinear limit. In case there is no initial-state quark in the corresponding double real matrix element, an initial-state gluon momentum is used as reference momentum in the $A^0_3$ antenna function.
The subtraction term at hand introduces spurious additional singularities in almost colour connected and colour disconnected limits as the jet function allows an additional parton to become unresolved. Likewise, the genuine QCD double real subtraction term of type $\text{d}\hat{\sigma}^{S,a}$ (which was originally constructed for a dynamical photon isolation) contains reduced matrix elements that can develop collinear quark-photon singularities once a fixed-cone photon isolation is applied. One has to account for both these types of spurious singularities when constructing the subtraction terms $\text{d}\hat{\sigma}^{S,c}$ and $\text{d}\hat{\sigma}^{S,d}$. Terms of the form of \eqref{eq:gernicSaqgam} are reintroduced at the real-virtual level upon integration over the antenna phase space. They combine with the term $\text{d} \hat{\sigma}^R_q \otimes (- \mathbf{F}^{(0)}_{q \to \gamma})$ in \eqref{eq:sigMF} and form fragmentation dipoles, which were introduced in \eqref{eq:iddipoleqgamIF}. The subtraction terms for colour connected double unresolved limits including the photon are $\text{d} \hat{\sigma}^{S,b_1}_{q(\gamma)}$ and $\text{d} \hat{\sigma}^{S,b_1}_{g(\gamma)}$. In both cases the unresolved limits correspond to triple collinear configurations. The singular limit which is subtracted by $\text{d} \hat{\sigma}^{S,b_1}_{q(\gamma)}$ is the triple collinear $q \parallel g \parallel \gamma$ limit, while the limit subtracted by $\text{d} \hat{\sigma}^{S,b_1}_{g(\gamma)}$ is the triple collinear $q \parallel \gamma \parallel \bar{q}$ limit. Therefore, the only $X^0_4$ antenna functions needed are $\tilde{A}^0_4(q,g,\gamma,\bar{q})$ and $\tilde{E}^0_4(q,q',\gamma,\bar{q}')$~\cite{GehrmannDeRidder:2005cm}. We use these antenna functions exclusively in their initial-final configuration.
The subtraction term for the limit where the photon and a gluon simultaneously become unresolved reads \begin{eqnarray} \text{d} \hat{\sigma}^{S,b_1}_{q(\gamma)} &=& \mathcal{N}^{RR} \sum_{{\rm perm.}} \text{d} \Phi_{n+2}(k_1, ...,k_q , ... , \, k_g , ... ,k_{n+1} , k_{\gamma}; p_{\hat{q}} , p_2) \frac{1}{S_{n+2}}\nonumber \\ &&\times \tilde{A}^0_4(\check{p}_{\hat{q}},k_g, k_{\gamma}^{{\rm id.}}, k_q) \, Q_q^2 \, M^0_{n+2}( ... \, , k_{(q\gamma g)} , \, ...) \, \emph{J}^{(n)}_{n} \left( \{ \tilde{k} \}_n ; z\right) \, , \label{eq:sigSb1qgam} \end{eqnarray} where the momentum $k_g$ is the momentum of the gluon to which the final-state quark is colour connected. The momentum fraction of the photon in the cluster momentum $k_{(q\gamma g)}$ is $z=z_4\left( \check{p}_{\hat{q}}, k_{\gamma}^{{\rm id.}},k_g,k_q\right)$ with the general definition of the NNLO momentum fraction \begin{equation} z_4\left(\check{k}_a,k_b^{{\rm id.}},k_c,k_d\right) = \frac{s_{ab}}{s_{ab} + s_{ac} +s_{ad}} \, . \end{equation} The subtraction term for the $q \parallel \gamma \parallel \bar{q}$ limit reads \begin{eqnarray} \text{d} \hat{\sigma}^{S,b_1}_{g(\gamma)} &=& \mathcal{N}^{RR} \sum_{{\rm perm.}} \text{d} \Phi_{n+2}(k_1, ...,k_{q'} , ... , \, k_{\bar{q}'} , ... ,k_{n+1} , k_{\gamma}; p_{\hat{q}} , p_2) \frac{1}{S_{n+2}} \nonumber \\ &&\times \tilde{E}^0_4(\check{p}_{\hat{q}},k_{q'}, k_{\gamma}^{{\rm id.}}, k_{\bar{q}'}) \, Q_{q'}^2 \, M^0_{n+2}( ... \, , k_{(q'\gamma \bar{q}')} , \, ...) \, \emph{J}^{(n)}_{n} \left( \{ \tilde{k} \}_n ; z\right) \, , \label{eq:sigSb1ggam} \end{eqnarray} where the momentum fraction $z$ is given by $z=z_4\left(\check{p}_{\hat{q}},k_{\gamma}^{{\rm id.}}, k_{q'}, k_{\bar{q}'}\right)$. In case there is no initial-state quark in the corresponding double real matrix element, we use an initial-state gluon momentum in the $X^0_4$ antenna functions in \eqref{eq:sigSb1qgam} and \eqref{eq:sigSb1ggam} as the reference momentum.
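As an aside, the two momentum-fraction definitions can be collected in a short sketch working directly with the invariants $s_{ab}$ (the function names are illustrative, not part of any implementation described here):

```python
def z3(s_ab, s_ac):
    # NLO momentum fraction of identified particle b w.r.t. reference a
    return s_ab / (s_ab + s_ac)

def z4(s_ab, s_ac, s_ad):
    # NNLO momentum fraction of identified particle b w.r.t. reference a
    return s_ab / (s_ab + s_ac + s_ad)

# if the invariant with one of the other two partons vanishes,
# the NNLO fraction collapses to the NLO one
assert z4(0.3, 0.5, 0.0) == z3(0.3, 0.5)
assert 0.0 < z4(0.3, 0.5, 0.2) < 1.0
```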
The integration of these two $X^0_4$ fragmentation antenna functions is explained in detail in section~\ref{sec:X40int}. After integrating over the antenna phase space the contributions \eqref{eq:sigSb1qgam} and \eqref{eq:sigSb1ggam} are added back at the double virtual level. The antenna functions in $\text{d} \hat{\sigma}^{S,b_1}_{j(\gamma)}$ contain single unresolved singular limits which have to be subtracted to guarantee an overall successful subtraction (see~\cite{Currie:2013vh} for details). The single unresolved limits of $\text{d} \hat{\sigma}^{S,b_1}_{q(\gamma)}$ are subtracted by \begin{eqnarray} \text{d} \hat{\sigma}^{S,b_2}_{q(\gamma)} &=& -\mathcal{N}^{RR} \sum_{{\rm perm.}} \text{d} \Phi_{n+2}(k_1, ...,k_q , ... , \, k_g , ... ,k_{n+1} , k_{\gamma}; p_{\hat{q}} , p_2) \frac{1}{S_{n+2}} Q_q^2 \nonumber \\ &&\times \bigg( A^0_3(p_{\hat{q}},k_g, k_q) A^0_3(\check{\bar{p}}_{\tilde{\hat{q}}},k_{\gamma}^{{\rm id.}}, k_{(gq)}) \, M^0_{n+2}( ... \, , k_{((gq) \gamma)} , \, ...) \emph{J}^{(n)}_{n} \left( \{ \tilde{\tilde{k}} \}_n ; z\right) \nonumber \\ &&\quad + A^0_3(\check{p}_{\hat{q}},k_{\gamma}^{{\rm id.}}, k_q) A^0_3(\check{\bar{p}}_{\tilde{\hat{q}}},k_{g}, k^{{\rm id.}}_{(\gamma q)}) \, M^0_{n+2}( ... \, , k_{((\gamma q) g)} , \, ...) \emph{J}^{(n)}_{n} \left( \{ \tilde{\tilde{k}} \}_n ; z= u v\right) \bigg) \, .\nonumber \\ \label{eq:sigSb2qgam} \end{eqnarray} The first term in the above equation subtracts the single unresolved gluon limit while the second term subtracts the single unresolved photon limit of $\tilde{A}^0_4$. 
In the first term the momentum fraction $z$ of the photon in the cluster momentum is calculated from the first mapped momentum set, i.e.\ \begin{equation} z = z_3\left(\check{\bar{p}}_{\tilde{\hat{q}}}, k_{\gamma}^{{\rm id.}},k_{(gq)}\right) = \frac{s_{\tilde{\hat{q}} \gamma}}{s_{\tilde{\hat{q}} \gamma } + s_{\tilde{\hat{q}} (gq) }} = \frac{s_{\hat{q} \gamma}}{s_{\hat{q} \gamma} + s_{\hat{q} q} + s_{\hat{q} g}} = z_4\left(\check{p}_{\hat{q}}, k_{\gamma}^{{\rm id.}} ,k_g , k_q\right), \label{eq:defzwithmap} \end{equation} using an initial-final mapping: \begin{eqnarray} k_{(gq)} &=& k_g + k_q - (1-x) p_{\hat{q}} \, ,\nonumber \\ \bar{p}_{\tilde{\hat{q}}} &=& x p_{\hat{q}} \, , \label{eq:mappingIF} \end{eqnarray} with $x$ being the initial-state momentum fraction. It is crucial that the momentum fractions in \eqref{eq:sigSb1qgam} and in \eqref{eq:sigSb2qgam} coincide in the single unresolved limits to guarantee the cancellation of the single unresolved singularities. For the second term in \eqref{eq:sigSb2qgam} two momentum fractions have to be calculated. In the first antenna function we identify the photon, i.e.\ we calculate its momentum fraction in the $(q\gamma)$-cluster, which reads \begin{equation} u = z_3\left(\check{p}_{\hat{q}}, k_{\gamma}^{{\rm id.}},k_q\right) = \frac{s_{\hat{q} \gamma}}{s_{\hat{q} \gamma} +s_{\hat{q} q}} \, . \end{equation} In the second antenna function we identify the $(q\gamma)$-cluster within the $((q\gamma)g)$-cluster. The corresponding momentum fraction reads \begin{equation} v = z_3\left(\check{\bar{p}}_{\tilde{\hat{q}}},k_{(\gamma q)}^{{\rm id.}}, k_g\right) = \frac{s_{\tilde{\hat{q}} (\gamma q)}}{s_{\tilde{\hat{q}} (\gamma q)} + s_{\tilde{\hat{q}} g}} = \frac{s_{\hat{q} \gamma} + s_{\hat{q} q}}{s_{\hat{q} \gamma} + s_{\hat{q} q} + s_{\hat{q} g}} \, , \end{equation} where we again used \eqref{eq:mappingIF} to rewrite the momentum fraction in terms of the original momentum set. 
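The identity \eqref{eq:defzwithmap} and the multiplicative composition of the two fractions can be verified numerically with explicit massless momenta. The following sketch (all names hypothetical) applies the initial-final mapping \eqref{eq:mappingIF} for an arbitrary $x$ and checks both relations:

```python
import math
import random

def minkowski(p, q):
    # Minkowski product with metric (+,-,-,-)
    return p[0]*q[0] - p[1]*q[1] - p[2]*q[2] - p[3]*q[3]

def s(p, q):
    # two-particle invariant s_pq = 2 p.q for massless momenta
    return 2.0 * minkowski(p, q)

def random_massless(E):
    # random massless four-momentum with energy E
    ct = random.uniform(-1.0, 1.0)
    phi = random.uniform(0.0, 2.0 * math.pi)
    st = math.sqrt(1.0 - ct * ct)
    return [E, E * st * math.cos(phi), E * st * math.sin(phi), E * ct]

random.seed(7)
p_in = [1.0, 0.0, 0.0, 1.0]          # initial-state reference momentum
k_gam = random_massless(1.3)         # photon
k_g = random_massless(0.8)           # gluon
k_q = random_massless(1.1)           # quark

x = 0.73                             # arbitrary initial-state momentum fraction
k_gq = [k_g[i] + k_q[i] - (1.0 - x) * p_in[i] for i in range(4)]  # k_(gq)
p_tilde = [x * c for c in p_in]      # mapped initial-state momentum

denom = s(p_in, k_gam) + s(p_in, k_g) + s(p_in, k_q)
z4 = s(p_in, k_gam) / denom
z3_mapped = s(p_tilde, k_gam) / (s(p_tilde, k_gam) + s(p_tilde, k_gq))
u = s(p_in, k_gam) / (s(p_in, k_gam) + s(p_in, k_q))
v = (s(p_in, k_gam) + s(p_in, k_q)) / denom

assert abs(z3_mapped - z4) < 1e-12   # momentum fraction survives the mapping
assert abs(u * v - z4) < 1e-12       # the two NLO fractions compose to z4
```

Both checks hold for any $x\in(0,1)$, since the initial-state reference momentum only rescales all invariants by $x$ in the mapped set.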
The momentum fraction of the photon within the $(qg\gamma)$-cluster is then given by \begin{equation} z = u \, v = \frac{s_{\hat{q} \gamma}}{s_{\hat{q} \gamma} + s_{\hat{q} g} + s_{\hat{q} q}} = z_4\left(\check{p}_{\hat{q}}, k_{\gamma}^{{\rm id.}} ,k_g , k_q\right), \end{equation} which coincides with the NNLO momentum fraction in \eqref{eq:sigSb1qgam}. Note that the two terms in \eqref{eq:sigSb2qgam} are added back at the real-virtual level after integration over the primary antenna phase space. The term in which the photon is in the primary antenna will combine with the contribution $\text{d} \hat{\sigma}^S_{q(q)} \otimes (-\mathbf{F}^{(0)}_{q \to \gamma})$ in $\text{d} \hat{\sigma}^{\gamma +X,{\rm NNLO}}_{{\rm MF}}$ to form the fragmentation dipole of \eqref{eq:iddipoleqgamIF}. The term in which the photon is part of the secondary antenna will contribute to $\text{d} \hat{\sigma}^{T,b}_{q(\gamma)}$ below and combine with the newly introduced one-loop fragmentation antenna functions. The subtraction of the single unresolved limit of the $\tilde{E}^0_4$ antenna function in \eqref{eq:sigSb1ggam} takes a similar form. However, this antenna function only contains single unresolved limits involving the photon. Therefore, we only obtain terms in which the photon is part of the primary antenna function, i.e.\ \begin{eqnarray} \text{d} \hat{\sigma}^{S,b_2}_{g(\gamma)} &=& -\mathcal{N}^{RR} \sum_{{\rm perm.}} \text{d} \Phi_{n+2}(k_1, ...,k_{q'} , ... , \, k_{\bar{q}'} , ... ,k_{n+1} , k_{\gamma}; p_{\hat{q}} , p_2) \frac{1}{S_{n+2}} Q_{q'}^2\nonumber \\ &&\times \bigg( A^0_3(\check{p}_{\hat{q}},k_{\gamma}^{{\rm id.}}, k_{\bar{q}'}) E^0_3(\check{\bar{p}}_{\tilde{\hat{q}}}, k_{q'},k_{(\bar{q}'\gamma)}^{{\rm id.}}) \, M^0_{n+2}( ... \, , k_{((\bar{q}'\gamma) q')} , \, ...) 
\emph{J}^{(n)}_{n} \left( \{ \tilde{\tilde{k}} \}_n ; z = u\,v\right) \nonumber \\ &&+ A^0_3(\check{p}_{\hat{q}},k_{\gamma}^{{\rm id.}}, k_{q'}) E^0_3(\check{\bar{p}}_{\tilde{\hat{q}}},k_{(q'\gamma)}^{{\rm id.}}, k_{\bar{q}'}) \, M^0_{n+2}( ... \, , k_{((q'\gamma) \bar{q}')} , \, ...) \emph{J}^{(n)}_{n} \left( \{ \tilde{\tilde{k}} \}_n ; z= u \, v\right) \bigg) \, .\nonumber \\ \label{eq:sigSb2ggam} \end{eqnarray} The first term subtracts the $\gamma \parallel \bar{q}'$ limit and the second term the $\gamma \parallel q'$ limit. In both cases two momentum fractions are calculated. In the first term we have $u = z_3\left(\check{p}_{\hat{q}}, k_{\gamma}^{{\rm id.}},k_{\bar{q}'}\right)$ and $v = z_3\left(\check{\bar{p}}_{\tilde{\hat{q}}},k_{(\bar{q}'\gamma)}^{{\rm id.}}, k_{q'}\right)$. The momentum fraction of the photon in the $(q'\gamma \bar{q}')$-cluster is given by the product of the two momentum fractions, i.e.\ \begin{equation} z = u \, v = \frac{s_{\hat{q}\gamma}}{s_{\hat{q} \gamma} + s_{\hat{q} q'} + s_{\hat{q} \bar{q}'}} = z_4\left( \check{p}_{\hat{q}} , k_{\gamma}^{{\rm id.}}, k_{q'} , k_{\bar{q}'} \right)\, , \end{equation} where we expressed the mapped momenta from the first initial-final mapping in terms of the original momentum set. For the second term the construction of the momentum fraction follows the same steps with the replacement $q' \leftrightarrow \bar{q}'$. Since both terms in \eqref{eq:sigSb2ggam} have the photon in the primary antenna function, they both combine with the contribution $\text{d} \hat{\sigma}^S_{g(q)} \otimes (-\mathbf{F}^{(0)}_{q \to \gamma})$ upon integration and form a fragmentation dipole of the form of \eqref{eq:iddipoleqgamIF}. 
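The composition of the two momentum fractions is elementary: the intermediate cluster invariant cancels between $u$ and $v$, so the product telescopes to the photon fraction in the full cluster. A minimal arithmetic sketch with exact rationals, where the invariant values are arbitrary placeholders:

```python
from fractions import Fraction

# arbitrary placeholder values for the invariants s_{qhat,gamma}, s_{qhat,q'}, s_{qhat,qbar'}
s_gam, s_qp, s_qb = Fraction(7, 2), Fraction(3, 1), Fraction(5, 4)

u = s_gam / (s_gam + s_qb)                  # photon fraction in the (qbar' gamma)-cluster
v = (s_gam + s_qb) / (s_gam + s_qb + s_qp)  # cluster fraction in ((qbar' gamma) q')

z = u * v                                   # intermediate invariants cancel exactly
assert z == s_gam / (s_gam + s_qp + s_qb)
```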
To successfully compensate all oversubtractions of photonic limits in $\text{d} \hat{\sigma}^{S,a}$ and $\text{d} \hat{\sigma}^{S,b}$, as well as in the genuine QCD $\text{d} \hat{\sigma}^{S,a}$, one also has to introduce the $\text{d} \hat{\sigma}^{S,c}_{q(\gamma)}$ and $\text{d} \hat{\sigma}^{S,d}_{q(\gamma)}$ subtraction terms. The subtraction terms in $\text{d} \hat{\sigma}^{S,c}_{q(\gamma)}$ consist of a primary QCD antenna function, in which the final-state quark to which the photon becomes collinear acts as a hard radiator, and the fragmentation antenna function $A^0_3$, i.e. \begin{eqnarray} \text{d} \hat{\sigma}^{S,c}_{q(\gamma)} &= &\mathcal{N}^{RR} \sum_{{\rm perm.}} \text{d} \Phi_{n+2}(k_1, ...,k_q , ... , \, k_g , ... ,k_{n+1} , k_{\gamma}; p_{\hat{q}} , p_2) \frac{1}{S_{n+2}} \nonumber \\ &&\times X^0_3(k_{m}, k_g , k_{q}) A^0_3(\check{p}_{\hat{q}}, k_{\gamma}^{{\rm id.}},k_{(qg)})\, Q_q^2\, M^0_{n+2}(... , k_{((q g)\gamma)} , ... ) \emph{J}^{(n)}_n\left( \{ \tilde{\tilde{k}} \}_n ; z \right) \, , \nonumber \\ \label{eq:sigScsqgam} \end{eqnarray} where parton $m$ is colour connected to the gluon. The subtraction terms in $\text{d} \hat{\sigma}^{S,c}_{q(\gamma)}$ are needed to reproduce the correct soft-collinear limits of the real radiation matrix element. The last contribution to the double real subtraction term is $\text{d} \hat{\sigma}^{S,d}_{q(\gamma)}$. It takes care of colour disconnected double unresolved limits. It reads \begin{eqnarray} \text{d} \hat{\sigma}^{S,d}_{q(\gamma)} &=& \mathcal{N}^{RR} \sum_l \sum_{{\rm perm.}} \text{d} \Phi_{n+2}(k_1, ...,k_q , ... , \, k_g , ... ,k_{n+1} , k_{\gamma}; p_{\hat{q}} , p_2) \frac{1}{S_{n+2}} \nonumber \\ &&\times A^0_3(\check{p}_{\hat{q}}, k_{\gamma}^{{\rm id.}},k_q) X^0_3(k_i, k_l , k_m) Q_q^2 M^0_{n+2}(... , k_{(\gamma q)} , ... ) \emph{J}^{(n)}_n\left( \{ \tilde{k} \}_n ; z\right) \, . 
\label{eq:sigSdqggam} \end{eqnarray} Here one of the radiators $i$ or $m$ can correspond to the initial-state quark used in the $A^0_3$ antenna function. Recall that the reconstructed momentum fraction $z_3$ vanishes in the initial-state collinear limit. Therefore, the above subtraction term only subtracts colour disconnected unresolved limits, even if the two antenna functions share the same initial-state radiator. Terms in $\text{d} \hat{\sigma}^{S,d}_{q(\gamma)}$ sharing the same initial-state radiator are added back at the real-virtual level, while terms with distinct radiators are added back at the double virtual level. There are no contributions of the form $\text{d} \hat{\sigma}^{S,c}_{g(\gamma)}$ or $\text{d} \hat{\sigma}^{S,d}_{g(\gamma)}$. \subsubsection{Direct Contribution: Real-Virtual Level} \label{sec:directRV} The real-virtual subtraction terms in which the photon becomes unresolved can be decomposed into the parts $\text{d}\hat{\sigma}^{T,a}, \text{d}\hat{\sigma}^{T,b}$ and $\text{d}\hat{\sigma}^{T,c}$, following the structure used for genuine QCD subtraction terms~\cite{Currie:2013vh}. The first contribution to the real-virtual subtraction term $\text{d} \hat{\sigma}^{T}_{q(\gamma)}$ is given by integrating $\text{d} \hat{\sigma}^{S,a}_{q(\gamma)}$ in \eqref{eq:gernicSaqgam} over the antenna phase space. One finds \begin{eqnarray} -\text{d} \hat{\sigma}^{T,a}_{q(\gamma)} &=& \mathcal{N}^{RV} \int \frac{\text{d} x}{x} \int_0^1 \text{d}z \sum_{{\rm perm.}} \text{d} \Phi_{n+1}(k_1, ... , k_q, ... , k_{n+1} ; x p_{\hat{q}} ,p_2) \nonumber \\ &&\times \frac{1}{S_{n+1}} \mathcal{A}^{0, {\rm id.} \gamma}_{3,\hat{q}}(x,z) \, Q_q^2 \, M^0_{n+3} ( ... , k_q, ...) \, J^{(n+1)}_n(\{k\}_{n+1} ;z) \, , \end{eqnarray} where the integrated fragmentation antenna function is given in \eqref{eq:A30qgamIF}. The reduced matrix element in $\text{d} \hat{\sigma}^{T,a}_{q(\gamma)}$ is a real radiation jet matrix element. 
Therefore, there is a corresponding contribution in $\text{d} \hat{\sigma}^R_q \otimes (- \mathbf{F}^{(0)}_{q \to \gamma} )$, which is part of $\text{d}\hat{\sigma}^{\gamma +X,{\rm NNLO}}_{{\rm MF}}$. It reads \begin{eqnarray} -\text{d} \hat{\sigma}^R_q \otimes \mathbf{F}^{(0)}_{q \to \gamma} &= & -\frac{1}{2} \, \mathcal{N}^{RV} \, Q_q^2 \sum_{{\rm perm.}} \int_0^1 \text{d} z \, \text{d} \Phi_{n+1}(k_1 , ..., k_q , ... , k_{n+1} ; p_{\hat{q}} , p_2 ) \nonumber \\ && \times\frac{1}{S_{n+1}} \, \mu_a^{-2\epsilon} \, \Gamma_{\gamma q}^{(0)}(z) M^0_{n+3}( ... \, , k_q , \, ...) \, \emph{J}^{(n+1)}_n( \{ k \}_{n+1} ; z) \, , \end{eqnarray} where, as at NLO, the factor 1/2 is due to the different normalisations of the photon and jet matrix elements. Combining both contributions, we find \begin{eqnarray} -\text{d} \hat{\sigma}^{T,a}_{q(\gamma)} - \text{d} \hat{\sigma}^R_q \otimes \mathbf{F}^{(0)}_{q \to \gamma} &=& \mathcal{N}^{RV} \, Q_q^2 \sum_{{\rm perm.}} \int \frac{\text{d}x}{x} \int_0^1 \text{d} z \, \text{d} \Phi_{n+1}(k_1 , ..., k_q , ... , k_{n+1} ;x p_{\hat{q}} ,p_2 ) \nonumber \\ && \times \frac{1}{S_{n+1}} \, \boldsymbol{J}^{(1), {\rm id.} \gamma}_{2,\hat{q}}(\hat{q},q) \, M^0_{n+3}( ... \, , k_q , \, ...) \, \emph{J}^{(n+1)}_n( \{ k \}_{n+1} ; z) \, ,\nonumber \\ \label{eq:sigTacomsigRoGam0} \end{eqnarray} where the initial-final fragmentation dipole is given in \eqref{eq:iddipoleqgamIF}. Terms of the form of \eqref{eq:sigTacomsigRoGam0} are $\epsilon$-finite but contain single unresolved limits, which have to be subtracted to guarantee an overall successful cancellation of the singularities. To subtract the $q \parallel \gamma$ limit of the one-loop matrix elements, the following term is required: \begin{eqnarray} \text{d} \hat{\sigma}^{T,b}_{q(\gamma)} &=&\phantom{+}\mathcal{N}^{RV} \sum_{\rm {perm.}} \text{d} \Phi_{n+1}(k_1, ... 
, k_q, ...,k_n, k_{\gamma};p_{\hat{q}} , p_2) \frac{1}{S_{n+1}} \nonumber \\ &&\qquad \times A^0_3(\check{p}_{\hat{q}}, k_{\gamma}^{\rm id.}, k_q) \, Q_q^2 \, M^1_{n+2}(..., k_{(q\gamma)} ,...) J^{(n)}_n\left(\{ \tilde{k} \}_n ; z\right) \nonumber \\ && + \mathcal{N}^{RV} \sum_{\rm {perm.}} \int \frac{\text{d} x}{x} \text{d} \Phi_{n+1}(k_1, ... , k_q , ...,k_n, k_{\gamma};x p_{\hat{q}} , p_2) \frac{1}{S_{n+1}} Q_q^2 \nonumber \\ &&\qquad\times \left( \tilde{A}^1_3(\check{\bar{p}}_{\hat{q}}, k_{\gamma}^{\rm id.}, k_q) \delta(1-x) + \boldsymbol{J}^{(1)}_2(\hat{q}(\bar{p}_{\hat{q}}),q(k_q))(x) \, A^0_3(\check{\bar{p}}_{\hat{q}}, k_{\gamma}^{\rm id.}, k_q) \right) \nonumber \\ &&\qquad\times \, M^0_{n+2}(..., k_{(q\gamma)} ,...) J^{(n)}_n\left(\{ \tilde{k} \}_n ; z\right) \, , \label{eq:sigTb} \end{eqnarray} where we used that the $q \parallel \gamma$ limit of a one-loop matrix element can be subtracted using the one-loop colour-subleading antenna function $\tilde{A}^1_3$~\cite{GehrmannDeRidder:2005cm}. For photon production it is sufficient to use this one-loop fragmentation antenna function in the initial-final configuration, with the initial-state momentum as a reference momentum in the definition of the momentum fraction. The integration of this class of fragmentation antenna functions is discussed in section \ref{sec:X31int}. Note that in case there is no quark in the initial state, we use an initial-state gluon momentum as the reference momentum. The $\boldsymbol{J}^{(1)}_2$ term in \eqref{eq:sigTb} is a QCD dipole, unrelated to the photon. It contains the integral of the first contribution in \eqref{eq:sigSb2qgam}, where the primary antenna is a QCD antenna function, as well as contributions from the mass factorisation of the incoming parton distributions from \eqref{eq:sigISMF1}. Combining these two contributions yields the integrated inclusive QCD dipole factor, which can be found in~\cite{Currie:2013vh}. 
The momentum fraction entering the jet function is the same as at NLO, i.e.\ $z=z_3\left(\check{p}_{\hat{q}}, k_{\gamma}^{{\rm id.}},k_{q}\right)$. The integration over the contribution $\text{d} \hat{\sigma}^{S,c}_{q(\gamma)}$ takes the same form as the last term in \eqref{eq:sigTb}, i.e.\ \begin{eqnarray} \text{d} \hat{\sigma}^{T,c\, (s)}_{q(\gamma)} &=& \mathcal{N}^{RV} \sum_{\rm {perm.}} \int \frac{\text{d} x}{x} \text{d} \Phi_{n+1}(k_1, ... , k_q , ...,k_n, k_{\gamma};x p_{\hat{q}} ,p_2) \frac{1}{S_{n+1}} \nonumber \\ &&\times \boldsymbol{J}^{(1)}_2(m(k_m),q(k_q))(x) \, A^0_3(\check{\bar{p}}_{\hat{q}}, k_{\gamma}^{\rm id.}, k_q) \, Q_q^2 \, M^0_{n+2}(..., k_{(q\gamma)} ,...) J^{(n)}_n\left(\{ \tilde{k} \}_n ; z\right) \, ,\nonumber \\ && \label{eq:sigTcs} \end{eqnarray} where $\boldsymbol{J}^{(1)}_2$ is the inclusive dipole corresponding to the primary antenna function used in $\text{d} \hat{\sigma}^{S,c}_{q(\gamma)}$ in \eqref{eq:sigScsqgam}. The superscript $s$ indicates that the photon enters the secondary, unintegrated antenna function. In case parton $m$ is in the initial state, $\boldsymbol{J}^{(1)}_2$ also contains the mass factorisation counterterms for the incoming parton distribution. In general, it is necessary to include additional terms to correctly subtract the $q \parallel \gamma$ limit of the corresponding real-virtual matrix element without introducing spurious poles in $\epsilon$. These terms take the same form as \eqref{eq:sigTcs}, but in this case the integrated dipole contains momenta from the mapped momentum set $\{ \tilde{k} \}$. These extra terms also form part of $\text{d} \hat{\sigma}^{T,c\, (s)}_{q(\gamma)} $. 
Combining the contributions \eqref{eq:sigTb} and \eqref{eq:sigTcs}, the expression \begin{equation} \int \text{d} \Phi_{n+1} \left( \text{d} \hat{\sigma}^{RV}_{\gamma} - \text{d} \hat{\sigma}^T_{\gamma} - \sum_q \left( \text{d} \hat{\sigma}^{T,b}_{q(\gamma)} - \text{d} \hat{\sigma}^{T,c\, (s)}_{q(\gamma)} \right) \right) \end{equation} is free of explicit and implicit singularities. At this point we have not added back the contribution in $\text{d} \hat{\sigma}^{S,b_2}_{q(\gamma)}$ in which the photon is part of the primary antenna function. To distinguish this contribution from the contribution included in \eqref{eq:sigTb}, we denote it as $\text{d} \hat{\sigma}^{S,b_2(p)}_{q(\gamma)}$, where the superscript $p$ indicates that this contribution originates from the piece of the double real subtraction term $\text{d} \hat{\sigma}^{S,b_2}_{q(\gamma)}$ in which the photon is in the primary antenna. After integrating over the phase space of the primary antenna, this contribution takes the form \newpage \begin{eqnarray} \text{d} \hat{\sigma}^{T,b_2(p)}_{q(\gamma)} &=& \mathcal{N}^{RV} Q_q^2 \sum_{{\rm perm.}} \int_0^1 \text{d} v \int \frac{\text{d} x}{x} \text{d} \Phi_{n+1}(k_1, ... , k_q , ... , k_{n+1}; x p_{\hat{q}} ,p_2) \nonumber \\ &&\times \frac{1}{S_{n+1}} \mathcal{A}^{0, {\rm id.} \gamma}_{3,\hat{q}}(x,v) A^0_3(\check{\bar{p}}_{\hat{q}}, k_g , k_q^{{\rm id.}}) M^0_{n+2} ( ... , k_{(qg)}, ...) J^{(n)}_n\left(\{ \tilde{k} \}_n ; z=uv\right) \, . \nonumber \\ \label{eq:sigTb2qgampri} \end{eqnarray} This expression is very similar to \eqref{eq:sigSqqidconD}. The momentum fraction $v$ is the momentum fraction of the photon in the $(q\gamma)$-cluster. It is an external convolution variable. $u=z_3(\check{\bar{p}}_{\hat{q}},k_q^{{\rm id.}},k_g)$ is the momentum fraction of the quark in the $(qg)$-cluster, which is calculated during the mapping. Therefore, $z=uv$ describes the momentum fraction of the photon within the $(qg\gamma)$-cluster. 
Note that the momenta entering the integrated antenna function are unmapped momenta. Equation \eqref{eq:sigTb2qgampri} combines with the counterterm contribution \begin{eqnarray} \text{d}\hat{\sigma}^S_{q(q)} \otimes \mathbf{F}^{(0)}_{q \to \gamma} &=& \mathcal{N}^R_{{\rm jet}}\frac{\alpha}{2 \pi} \left( 4\pi e^{-\gamma_E}\right)^{\epsilon} Q_q^2 \sum_{{\rm perm.}} \int_0^1 \text{d} v \nonumber \\ &&\times \text{d} \Phi_{n+1} (k_1, ... , k_q, ... , k_g, ..., k_{n+1}; p_{\hat{q}},p_2) \frac{1}{S_{n+1}} \, \mu_a^{-2\epsilon} \, \Gamma^{(0)}_{\gamma q}(v) \nonumber \\ &&\times A^0_3(\check{p}_{\hat{q}},k_g,k_q^{{\rm id.}}) M^0_{n+2}(... , k_{(qg)}, ... ) \emph{J}^{(n)}_n\left( \{ \tilde{k} \}_n ; z=uv\right) \, , \label{eq:sigSqqcongam0} \end{eqnarray} such that \begin{eqnarray} \lefteqn{-\text{d} \hat{\sigma}^{T,b_2(p)}_{q(\gamma)} + \text{d} \hat{\sigma}^S_{q(q)} \otimes \mathbf{F}^{(0)}_{q \to \gamma}} \nonumber \\ &=& -\mathcal{N}^{RV} Q_q^2 \sum_{{\rm perm.}} \int_0^1 \text{d} v \int \frac{\text{d}x}{x} \text{d} \Phi_{n+1}(k_1, ... , k_q ,...,k_g, ... , k_{n+1}; x p_{\hat{q}} ,p_2) \frac{1}{S_{n+1}}\nonumber \\ &&\times \boldsymbol{J}^{(1),\, {\rm id.} \gamma}_{2,\hat{q}}(\hat{q}(\bar{p}_{\hat{q}}),q(k_{q}))\left(x,v\right) A^0_3(\check{\bar{p}}_{\hat{q}}, k_g , k_q^{{\rm id.}}) M^0_{n+2} ( ... , k_{(qg)}, ...) J^{(n)}_n\left(\{ \tilde{k} \}_n ; z=uv\right) \, . \nonumber \\ \label{eq:sigTb2posigSqq} \end{eqnarray} Compared to \eqref{eq:sigTacomsigRoGam0} the fragmentation dipole does not multiply a real radiation matrix element but an unintegrated antenna function and a reduced matrix element. In the case in which the photon becomes unresolved in a gluon type cluster we have \begin{eqnarray} \lefteqn{-\text{d} \hat{\sigma}^{T,b_2(p)}_{g(\gamma)} + \text{d} \hat{\sigma}^S_{g(q)} \otimes \mathbf{F}^{(0)}_{q \to \gamma}} \nonumber \\ &=& -\mathcal{N}^{RV} Q_{q'}^2 \sum_{{\rm perm.}} \int_0^1 \text{d} v \int \frac{\text{d}x}{x} \text{d} \Phi_{n+1}(k_1, ... 
, k_{\bar{q}'}, ...,k_{q'} , ... , k_{n+1}; x p_{\hat{q}} ,p_2) \frac{1}{S_{n+1}}\nonumber \\ &&\quad \quad \times \bigg( \boldsymbol{J}^{(1),\, {\rm id.} \gamma}_{2,\hat{q}}(\hat{q}(\bar{p}_{\hat{q}}),\bar{q}'(k_{\bar{q}'}))\left(x,v\right) E^0_3(\check{\bar{p}}_{\hat{q}}, k_{q'}, k_{\bar{q}'}^{{\rm id.}} ) \nonumber\\ &&\quad \quad + \boldsymbol{J}^{(1),\, {\rm id.} \gamma}_{2,\hat{q}}(\hat{q}(\bar{p}_{\hat{q}}),q'(k_{q'}))\left(x,v\right) E^0_3(\check{\bar{p}}_{\hat{q}}, k_{q'}^{{\rm id.}}, k_{\bar{q}'}) \bigg) \nonumber \\ &&\quad \quad \times M^0_{n+2} ( ... , k_{(q'\bar{q}')}, ...) J^{(n)}_n\left(\{ \tilde{k} \}_n ; z=uv\right) \, . \label{eq:sigb2pggam} \end{eqnarray} Contributions of the form of \eqref{eq:sigTb2posigSqq} and \eqref{eq:sigb2pggam} subtract parts of the single unresolved limits of \eqref{eq:sigTacomsigRoGam0}. In general it is necessary to include additional terms to achieve an overall subtraction of the unresolved limits in \eqref{eq:sigTacomsigRoGam0}. Two classes of these additional terms are distinguished. The first class consists of all subtraction terms in which the unintegrated antenna function is a fragmentation antenna function. We call this contribution $\text{d} \hat{\sigma}^{T,c_1\, (p)}_{i(\gamma)}$. It has the form \begin{eqnarray} \lefteqn{-\text{d} \hat{\sigma}^{T,c_1 \, (p)}_{q(\gamma)} + \text{d} \hat{\sigma}^S_{q(q)} \otimes \mathbf{F}^{(0)}_{q \to \gamma}} \nonumber \\ &= &- \mathcal{N}^{RV} Q_q^2 \sum_{{\rm perm.}} \int_0^1 \text{d} v \int \frac{\text{d}x}{x} \text{d} \Phi_{n+1}(k_1, ... , k_q, ... , k_{n+1}; x p_{\hat{q}} , p_2) \frac{1}{S_{n+1}}\nonumber \\ && \times \boldsymbol{J}^{(1),\, {\rm id.} \gamma}_{2,\hat{q}}(\hat{q}(\bar{p}_{\hat{q}}),q(k_{(qg)}))\left(x,v \right) X^0_3(\check{k}_{l}, k_g , k_q^{{\rm id.}}) M^0_{n+2} ( ... , k_{(qg)}, ...) 
J^{(n)}_n\left(\{ \tilde{k} \}_n ; z =uv\right) \, .\nonumber \\ && \end{eqnarray} In contrast to \eqref{eq:sigTb2posigSqq} the momenta entering the identified dipole belong to the mapped momentum set $\{ \tilde{k} \}$. $-\text{d} \hat{\sigma}^{T,c_1 \, (p)}_{g(\gamma)} + \text{d} \hat{\sigma}^S_{g(q)} \otimes \mathbf{F}^{(0)}_{q \to \gamma} $ takes a similar form. In this case the unintegrated antenna function subtracts a $q\parallel\bar{q}$ limit. The second class of newly introduced subtraction terms does not contain an unintegrated fragmentation antenna function. We denote this contribution by $\text{d} \hat{\sigma}^{T,c_2 \, (p)}_{q(\gamma)}$ and it has the form \begin{eqnarray} \lefteqn{-\text{d} \hat{\sigma}^{T,c_2\, (p)}_{q(\gamma)} + \text{d} \hat{\sigma}^S_{q} \otimes \mathbf{F}^{(0)}_{q \to \gamma} } \nonumber \\ &=& -\mathcal{N}^{RV} Q_q^2 \sum_{l} \sum_{{\rm perm.}} \int_0^1 \text{d} z \int \frac{\text{d}x}{x} \text{d} \Phi_{n+1}(k_1, ... , k_q , ... , k_{n+1}; x p_{\hat{q}} ,p_2) \frac{1}{S_{n+1}}\nonumber \\ &&\times \boldsymbol{J}^{(1), \, {\rm id.} \gamma}_{2,\hat{q}}(\hat{q}(\bar{p}_{\hat{q}}),q(k_q))\left(x,z\right) X^0_3(k_i, k_l , k_m) M^0_{n+2} ( ... k_I, k_M, ..., k_q, ...) J^{(n)}_n\left(\{ \tilde{k} \}_n ; z\right) \, ,\nonumber \\ && \end{eqnarray} where $i$ and $m$ can be any hard radiator but not the final-state quark entering the integrated dipole. This contribution also contains those terms from $\text{d} \hat{\sigma}^{S,d}_{q(\gamma)}$, in which the two antenna functions share the same initial-state radiator. Note that there is no contribution of the type $\text{d} \hat{\sigma}^{T,c_2\, (p)}_{g(\gamma)}$. 
Combining all terms in which the photon is part of the primary (integrated) antenna function, we find \begin{eqnarray} \lefteqn{\int \text{d} \Phi_{n+1} \left( \text{d} \hat{\sigma}^R_q - \text{d}\hat{\sigma}^S_q - \text{d} \hat{\sigma}^S_{q(q)} - \text{d} \hat{\sigma}^S_{g(q)} \right) \otimes \boldsymbol{J}^{(1), \, {\rm id.} \gamma}_{2,\hat{q}} } \nonumber \\ & = &\left(-\text{d} \hat{\sigma}^{T,a}_{q(\gamma)} - \text{d} \hat{\sigma}^R_q \otimes \mathbf{F}^{(0)}_{q \to \gamma} \right) + \left(- \text{d} \hat{\sigma}^{T,b_2\, (p)}_{q(\gamma)} + \text{d} \hat{\sigma}^S_{q(q)} \otimes \mathbf{F}^{(0)}_{q \to \gamma} \right) \nonumber \\ &&+\left( -\text{d} \hat{\sigma}^{T,c_1}_{q(\gamma)} + \text{d} \hat{\sigma}^S_{q(q)} \otimes \mathbf{F}^{(0)}_{q \to \gamma} \right) + \left(-\text{d} \hat{\sigma}^{T,c_1}_{g(\gamma)} + \text{d} \hat{\sigma}^S_{g(q)} \otimes \mathbf{F}^{(0)}_{q \to \gamma} \right) \nonumber \\ &&+ \left( -\text{d} \hat{\sigma}^{T,c_2}_{q(\gamma)} + \text{d} \hat{\sigma}^S_{q} \otimes \mathbf{F}^{(0)}_{q \to \gamma} \right) \, , \end{eqnarray} where we have absorbed the contributions from $\text{d} \hat{\sigma}^{T,c_i \, (p)}_{q(\gamma)}$ into $\text{d} \hat{\sigma}^{T,c_i}_{q(\gamma)}$. None of these terms subtracts explicit poles or unresolved limits of $\text{d} \hat{\sigma}^{RV}_{\gamma}$; they therefore decouple from the remaining subtraction at the real-virtual level. \subsubsection{Direct Contribution: Double Virtual Level} \label{sec:directVV} At the double virtual level all subtraction terms which have not yet been added back are combined. The terms in $\text{d} \hat{\sigma}^U_{j(\gamma)}$ include integrals of subtraction terms in which the photon becomes unresolved. All explicit poles in $\text{d} \hat{\sigma}^{U}_{j(\gamma)}$ cancel against the mass factorisation terms of the fragmentation functions. 
The first contribution in $\text{d} \hat{\sigma}^U_{q(\gamma)}$, $\text{d} \hat{\sigma}^{U,a}_{q(\gamma)}$, is given by the first term of $\text{d} \hat{\sigma}^{T,b}_{q(\gamma)}$ after integrating over the antenna phase space. It is combined with the corresponding contribution $\text{d} \hat{\sigma}^V_q \otimes ( - \mathbf{F}^{(0)}_{q \to \gamma})$ in \eqref{eq:sigMF}: \begin{eqnarray} \lefteqn{-\text{d} \hat{\sigma}^{U,a}_{q(\gamma)} - \text{d} \hat{\sigma}^V_q \otimes \mathbf{F}^{(0)}_{q \to \gamma}} \nonumber \\ &= &\mathcal{N}^{VV} Q_q^2 \sum_{\rm {perm.}} \int_0^1 \text{d} z \int \frac{\text{d} x}{x} \text{d} \Phi_{n}(k_1, ... , k_q , ...,k_n;x p_{\hat{q}} , p_2) \nonumber \\ &&\times \frac{1}{S_{n}} \boldsymbol{J}^{(1), \, {\rm id.} \gamma}_{2,\hat{q}}(\hat{q}(\bar{p}_{\hat{q}}),q(k_q))(x,z) M^1_{n+2}(..., k_{q} ,...) J^{(n)}_n(\{ k \}_n ; z) \, . \label{eq:sigUAcomqgam} \end{eqnarray} This expression still exhibits explicit poles in the dimensional regulator $\epsilon$ coming from the one-loop matrix element. The poles of the one-loop matrix element cancel with one-loop dipoles, resulting from integrated antenna functions in inclusive or fragmentation kinematics. To organise the cancellation of the terms, it is helpful to collect the contributions from $\text{d} \hat{\sigma}^{S,b_1}$ and $\text{d} \hat{\sigma}^{T,b}$ and combine them with mass factorisation contributions from the fragmentation functions alone \eqref{eq:sigMF} as well as with mixed initial-final mass factorisation contributions \eqref{eq:MF3mix}. 
For the case in which the photon becomes unresolved in a quark-type cluster this combination yields \begin{eqnarray} \lefteqn{-\text{d} \hat{\sigma}^{U,b}_{q(\gamma)} + \text{d} \hat{\sigma}^T_{q(q)} \otimes \mathbf{F}^{(0)}_{q \to \gamma} - \frac{\alpha_s}{2\pi} \text{d} \hat{\sigma}^B_q \otimes ( \mathbf{F}^{(1),B}_{q \to \gamma} + \mathbf{F}^{(1),C}_{q \to \gamma}) + \text{d} \hat{\sigma}^B_{q} \otimes \mathbf{\Gamma}_{{\rm PDF}} \otimes \mathbf{F}^{(0)}_{q \to \gamma}} \nonumber \\ &= &\mathcal{N}^{VV} Q_q^2 \sum_{\rm {perm.}} \int_0^1 \text{d} z \int \frac{\text{d} x}{x} \text{d} \Phi_{n}(k_1, ... , k_q , ...,k_n;x p_{\hat{q}} , p_2) \frac{1}{S_{n}} \nonumber \\ &&\times \left( \tilde{\mathcal{A}}^{0,{\rm id.} \gamma}_{4,\hat{q}}(x,z) + \tilde{\mathcal{A}}^{1,{\rm id.} \, \gamma}_{3,\hat{q}}(x,z) - \mu_F^{-2\epsilon} \mathcal{A}^{0,{\rm id.} \gamma}_{3,\hat{q}}(x,z) \otimes \Gamma^{(1)}_{qq}(x) \right. \nonumber\\ &&\left. -\frac{1}{2} \mu_a^{-2\epsilon} \mathcal{A}^{0,{\rm id.} \, q}_{3,\hat{q}}(x,z) \otimes \Gamma^{(0)}_{\gamma q}(z)+ \frac{1}{2} \left( \mu_F \, \mu_a \right)^{-2\epsilon} \Gamma^{(0)}_{\gamma q}(z) \, \Gamma^{(1)}_{qq}(x) \right. \nonumber \\ &&\left. +\frac{1}{2} \left(\mu_a^2\right)^{-2\epsilon} \left( \Gamma^{(0)}_{\gamma q}(z) \otimes \Gamma^{(1)}_{qq}(z) - \Gamma^{(1)}_{\gamma q}(z) \right) \right) M^0_{n+2}(..., k_{q} ,...) J^{(n)}_n(\{ k \}_n ; z) \, . \nonumber \\ \label{eq:sigUBqgam} \end{eqnarray} Note that this combination of antenna functions and mass factorisation kernels can be related to a combination of NNLO coefficient functions for semi-inclusive deep inelastic scattering~\cite{Gehrmann:2021lwb}. 
It is useful to rewrite \eqref{eq:sigUBqgam} as a sum of a finite two-loop fragmentation dipole and a convolution of two fragmentation dipoles, i.e.\ \newpage \begin{eqnarray} \lefteqn{-\text{d} \hat{\sigma}^{U,b}_{q(\gamma)} + \text{d} \hat{\sigma}^T_{q(q)} \otimes \mathbf{F}^{(0)}_{q \to \gamma} - \frac{\alpha_s}{2\pi} \text{d} \hat{\sigma}^B_q \otimes \left( \mathbf{F}^{(1),B}_{q \to \gamma} + \mathbf{F}^{(1),C}_{q \to \gamma} \right) + \text{d} \hat{\sigma}^B_{q} \otimes \mathbf{\Gamma}_{{\rm PDF}} \otimes \mathbf{F}^{(0)}_{q \to \gamma}} \nonumber \\ &=&\mathcal{N}^{VV} Q_q^2 \sum_{\rm {perm.}} \int_0^1 \text{d} z \int \frac{\text{d} x}{x} \text{d} \Phi_{n}(k_1, ... , k_q , ...,k_n;x p_{\hat{q}} , p_2) \frac{1}{S_{n}} \nonumber \\ &&\times \left( \boldsymbol{J}^{(2),{\rm id.} \gamma}_{2,\hat{q}}(\hat{q},q) + \boldsymbol{J}^{(1),{\rm id.} \gamma}_{2,\hat{q}}(\hat{q},q) \otimes \boldsymbol{J}^{(1),{\rm id.} q}_{2,\hat{q}}(\hat{q},q) \right) M^0_{n+2}(..., k_{q} ,...) J^{(n)}_n(\{ k \}_n ; z) \, ,\nonumber \\ \label{eq:sigUBqgamre} \end{eqnarray} where the one-loop dipoles are given in \eqref{eq:iddipoleqgamIF} and in \eqref{eq:iddipolesqqclusterIF} respectively. The two-loop quark-to-photon dipole $\boldsymbol{J}^{(2),{\rm id.} \gamma}_{2,\hat{q}}(\hat{q},q) $ expressed in terms of fragmentation antenna functions and mass factorisation terms reads \begin{eqnarray} \boldsymbol{J}^{(2), {\rm id.} \gamma}_{2,\hat{q}}(\hat{q},q) &= &\tilde{\mathcal{A}}^{0, {\rm id.} \gamma}_{4,\hat{q}}(x,z) + \tilde{\mathcal{A}}^{1, {\rm id.} \gamma}_{3,\hat{q}}(x,z) \nonumber \\ &&- \left(\mathcal{A}^{0,{\rm id.}\gamma}_{3,\hat{q}}(x,z) \otimes ( \mathcal{A}^{0,{\rm id.} q}_{3,\hat{q}}(x,z) - \mu_a^{-2\epsilon} \, \Gamma^{(1)}_{q q}(z)) \right) \nonumber \\ &&- \frac{1}{2} \left(\mu_a^2 \right)^{-2\epsilon} \, \Gamma^{(1)}_{\gamma q}(z) \, . 
\label{eq:J22qgam} \end{eqnarray} Since this two-loop dipole is $\epsilon$-finite, the poles in \eqref{eq:sigUBqgamre} are all contained in the fragmentation dipoles for identified partons $\boldsymbol{J}^{(1),{\rm id.} q}_{2,\hat{q}}(\hat{q},q)$. These poles partly cancel the poles in \eqref{eq:sigUAcomqgam}. For the case in which the photon is clustered into a gluon, we find \begin{eqnarray} \lefteqn{-\text{d} \hat{\sigma}^{U,b}_{g(\gamma)} + \text{d} \hat{\sigma}^T_{g(q)} \otimes \mathbf{F}^{(0)}_{q \to \gamma} - \frac{\alpha_s}{2\pi} \text{d} \hat{\sigma}^B_g \otimes \left( \mathbf{F}^{(1),B}_{g \to \gamma} + \mathbf{F}^{(1),C}_{g \to \gamma} \right)} \nonumber \\ &=& \mathcal{N}^{VV} N_{q'} Q_{q'}^2 \sum_{\rm {perm.}} \int_0^1 \text{d} z \int \frac{\text{d} x}{x} \text{d} \Phi_{n}(k_1, ... , k_g , ...,k_n;x p_{\hat{q}} , p_2) \frac{1}{S_{n}} \nonumber \\ &&\times \left( \tilde{\mathcal{E}}^{0,{\rm id.} \gamma}_{4,\hat{q}}(x,z) - \mu_a^{-2\epsilon} \mathcal{E}^{0,{\rm id.} \, q'}_{3,\hat{q}}(x,z) \otimes \Gamma^{(0)}_{\gamma q}(z) \right. \nonumber\\ && \left. + \left( \mu_a^2 \right)^{-2\epsilon} \left( \Gamma^{(0)}_{\gamma q}(z) \otimes \Gamma^{(1)}_{qg}(z) - \tilde{\Gamma}^{(1)}_{\gamma g}(z) \right) \right) M^0_{n+2}(..., k_{g} ,...) J^{(n)}_n(\{ k \} ; z) \, , \label{eq:sigUBggam} \end{eqnarray} with \begin{equation} \tilde{\Gamma}^{(1)}_{\gamma g} = \frac{1}{2} \left( \frac{1}{2\epsilon^2} p^{(0)}_{q g} \otimes p^{(0)}_{\gamma q} - \frac{1}{2 \epsilon} p^{(1)}_{\gamma g} \right) \,. 
\end{equation} We can rewrite \eqref{eq:sigUBggam} as a sum of a two-loop dipole and the convolution of two fragmentation dipoles, \begin{eqnarray} \lefteqn{-\text{d} \hat{\sigma}^{U,b}_{g(\gamma)} + \text{d} \hat{\sigma}^T_{g(q)} \otimes \mathbf{F}^{(0)}_{q \to \gamma} - \frac{\alpha_s}{2\pi} \text{d} \hat{\sigma}^B_g \otimes \left( \mathbf{F}^{(1),B}_{g \to \gamma} + \mathbf{F}^{(1),C}_{g \to \gamma} \right)} \nonumber \\ &=& \mathcal{N}^{VV} N_{q'} Q_{q'}^2 \sum_{\rm {perm.}} \int_0^1 \text{d} z \int \frac{\text{d} x}{x} \text{d} \Phi_{n}(k_1, ... , k_g , ...,k_n;x p_{\hat{q}} , p_2) \frac{1}{S_{n}} \nonumber \\ &&\times \left( \boldsymbol{J}^{(2), {\rm id.} \gamma}_{2,\hat{q}}(\hat{q},g) + 2 \, \boldsymbol{J}^{(1), {\rm id.} \gamma}_{2,\hat{q}}(\hat{q},q) \otimes \boldsymbol{J}^{(1), {\rm id.} q'}_{2,\hat{q}}(\hat{q},g) \right) M^0_{n+2}(..., k_{g} ,...) J^{(n)}_n(\{ k \} ; z) \, ,\nonumber\\ \label{eq:sigUBggamre} \end{eqnarray} where the one-loop dipoles are given in \eqref{eq:iddipoleqgamIF} and in \eqref{eq:J21qpidIF} respectively. The two-loop gluon-to-photon dipole reads \begin{eqnarray} \boldsymbol{J}^{(2), \, {\rm id.} \gamma}_{2, \hat{q}}(\hat{q},g) &= &\tilde{\mathcal{E}}^{0,{\rm id.} \gamma}_{4,\hat{q}}(x,z) - 2 \left( \mathcal{A}^{0,{\rm id.} \gamma}_{3,\hat{q}}(x,z) \otimes ( \mathcal{E}^{0, {\rm id.} q'}_{3,\hat{q}}(x,z) - \mu_a^{-2\epsilon} \Gamma^{(1)}_{qg}(z) ) \right) \nonumber\\ &&- \left( \mu_a^2 \right)^{-2\epsilon} \tilde{\Gamma}^{(1)}_{\gamma g}(z) \, . \label{eq:J22ggam} \end{eqnarray} Note that all three dipoles in \eqref{eq:sigUBggamre} correspond to flavour-changing limits. Therefore, all three of them are by themselves $\epsilon$-finite. Two more contributions from the double real subtraction terms and real-virtual subtraction terms have to be added back, $\text{d} \hat{\sigma}^{S,d}_{q(\gamma)}$ and $\text{d} \hat{\sigma}^{T,c}_{q(\gamma)}$. 
Among these contributions only the terms in $\text{d} \hat{\sigma}^{T,c_1\, (p)}_{q(\gamma)}$ consist of two fragmentation antenna functions. Integration over the antenna phase space and combination with mass factorisation contributions from \eqref{eq:sigMF} and \eqref{eq:MF3mix} yields \begin{eqnarray} \lefteqn{-\text{d} \hat{\sigma}^{U,c_1}_{q(\gamma)} + \text{d} \hat{\sigma}^T_{q(q)} \otimes \mathbf{F}^{(0)}_{q \to \gamma} - \frac{\alpha_s}{2\pi} \text{d} \hat{\sigma}^B_q \otimes \mathbf{F}^{(1),B}_{q \to \gamma} + \text{d} \hat{\sigma}^B_{q} \otimes \mathbf{\Gamma}_{{\rm PDF}} \otimes \mathbf{F}^{(0)}_{q \to \gamma}} \nonumber \\ &=& \mathcal{N}^{VV} Q_q^2 \sum_{\rm {perm.}} \int_0^1 \text{d} z \int \frac{\text{d} x_1}{x_1} \frac{\text{d}x_2}{x_2} \text{d} \Phi_{n}(k_1, ... , k_q , ...,k_n;x_1 p_{\hat{q}} , x_2 p_2) \frac{1}{S_{n}} \nonumber \\ &&\times \left( \boldsymbol{J}^{(1),{\rm id.} \gamma}_{2,\hat{q}}(\hat{q},q) \otimes \boldsymbol{J}^{(1),{\rm id.} q}_{2,l}(l,q) \right) M^0_{n+2}(..., k_{q} ,...) J^{(n)}_n(\{ k \}_n ; z) \, , \label{eq:sigUC1qgamre} \end{eqnarray} where $l$ can either be a quark or a gluon. If $l$ is in the final state there is no contribution from the mixed initial-final mass factorisation contribution \eqref{eq:MF3mix}. Integrating the subtraction terms of $\text{d} \hat{\sigma}^{T,c\, (s)}_{q(\gamma)}$ over the antenna phase space, we obtain \begin{eqnarray} \lefteqn{-\text{d} \hat{\sigma}^{U,c_2}_{q(\gamma)} + \text{d} \hat{\sigma}^B_{q} \otimes \mathbf{\Gamma}_{{\rm PDF}} \otimes \mathbf{F}^{(0)}_{q \to \gamma}}\nonumber \\ &= &\mathcal{N}^{VV} Q_q^2 \sum_{\rm {perm.}} \int_0^1 \text{d} z \int \frac{\text{d} x_1}{x_1} \frac{\text{d}x_2}{x_2} \text{d} \Phi_{n}(k_1, ... , k_q , ...,k_n;x_1 p_{\hat{q}} , x_2 p_2) \frac{1}{S_{n}} \nonumber \\ &&\times \left( \boldsymbol{J}^{(1),{\rm id.} \gamma}_{2,\hat{q}}(\hat{q},q) \otimes \boldsymbol{J}^{(1)}_{2}(l,q) \right) M^0_{n+2}(..., k_{q} ,...) 
J^{(n)}_n(\{ k \}_n ; z) \, , \label{eq:sigUC2qgamre} \end{eqnarray} where $l$ is either a quark or a gluon in the initial or final state. In contrast to \eqref{eq:sigUC1qgamre}, the secondary dipole in the equation at hand is an inclusive dipole, i.e.\ it has no explicit $z$ dependence, so that the convolution in the final-state momentum fraction is trivial. In the convolution of the two dipoles we have included terms from the mixed initial-final mass factorisation contribution \eqref{eq:MF3mix}. To complete the fragmentation dipoles in \eqref{eq:sigUC2qgamre}, pure final-state mass factorisation terms are also needed. Since these terms cancel after summation of the different terms in \eqref{eq:sigUC2qgamre}, they do not appear on the left-hand side of the equation. The last contribution to the double virtual subtraction collects the terms from $\text{d} \hat{\sigma}^{S,d}_{q(\gamma)}$ and $\text{d} \hat{\sigma}^{T,c_2\, (p)}_{q(\gamma)}$; we denote it $\text{d} \hat{\sigma}^{U,d}_{q(\gamma)}$. Adding the corresponding mass factorisation contributions yields \newpage \begin{eqnarray} \lefteqn{-\text{d} \hat{\sigma}^{U,d}_{q(\gamma)} + \text{d} \hat{\sigma}^T_{q} \otimes \mathbf{F}^{(0)}_{q \to \gamma} + \text{d} \hat{\sigma}^B_{q} \otimes \mathbf{\Gamma}_{{\rm PDF}} \otimes \mathbf{F}^{(0)}_{q \to \gamma}}\nonumber \\ &=& \mathcal{N}^{VV} Q_q^2 \sum_{\rm {perm.}} \int_0^1 \text{d} z \int \frac{\text{d} x_1}{x_1} \frac{\text{d}x_2}{x_2} \text{d} \Phi_{n}(k_1, ... , k_q , ...,k_n;x_1 p_{\hat{q}} , x_2 p_2) \frac{1}{S_{n}}\nonumber \\ &&\times \left( \boldsymbol{J}^{(1),{\rm id.} \gamma}_{2,\hat{q}}(\hat{q},q) \otimes \boldsymbol{J}^{(1)}_{2}(i,m) \right) M^0_{n+2}(..., k_{q} ,...) J^{(n)}_n(\{ k \}_n ; z) \, , \label{eq:sigUDqgamre} \end{eqnarray} where $i$ and $m$ can be any partons in the process except the identified quark in the final state.
We have rewritten all double virtual subtraction terms in which the photon becomes unresolved in terms of two newly introduced two-loop fragmentation dipoles $\boldsymbol{J}^{(2),{\rm id.} \gamma}_{2,\hat{q}}(\hat{q},q)$ and $\boldsymbol{J}^{(2),{\rm id.} \gamma}_{2,\hat{q}}(\hat{q},g)$ and convolutions of two dipoles in which one dipole is always given by $\boldsymbol{J}^{(1),{\rm id.} \gamma}_{2,\hat{q}}(\hat{q},q)$. All two-loop parton-to-photon dipoles are by themselves $\epsilon$-finite. The poles in the convolution terms cancel the explicit poles in $\text{d} \hat{\sigma}^{U,a}_{q(\gamma)}$. \section{Integration of $X^0_4$ Fragmentation Antenna Functions}\label{sec:X40int} $X^0_4$ initial-final antenna functions are kinematically described by a scattering process of the form \begin{equation} q+p \rightarrow k_j + k_l + k_k \, . \end{equation} The final-state momenta and the initial-state momentum $p$ are massless $p^2=k_j^2=k_l^2=k_k^2=0$ and we have $q^2= -Q^2 <0$. The fully inclusive integrated $\mathcal{X}^0_4$ antenna functions are obtained by integration over the corresponding three-body phase space~\cite{Daleo:2009yj}: \begin{equation} \mathcal{X}^0_{i,jkl}(x) = \frac{1}{C(\epsilon)^2} \int \text{d} \Phi_3(k_j,k_k,k_l;p,q) \frac{Q^2}{2 \pi} X^0_{i,jkl} \, , \label{eq:def_ifantenna_qcd} \end{equation} with $x= {Q^2}/({2p \cdot q})$ and the normalisation factor \begin{equation} C(\epsilon) = \frac{\left(4\pi e^{-\gamma_E}\right)^{\epsilon}}{8 \pi^2} \, . \end{equation} For initial-final fragmentation antenna functions the same normalisation as in \eqref{eq:def_ifantenna_qcd} is used but the integration remains differential in the final-state momentum fraction $z$, i.e. \begin{equation} \mathcal{X}^{0, \, {\rm id.} j}_{i,j k l}(x,z) = \frac{1}{C(\epsilon)^2} \int \text{d} \Phi_3(k_j,k_k,k_l;p,q) \, \delta\left(z -x \frac{(p+k_j)^2}{Q^2} \right) \frac{Q^2}{2\pi} X^0_{i,j k l} \, . 
\label{eq:def_intphotonicantenna} \end{equation} The final-state momentum fraction is fixed by the additional $\delta$-distribution and describes the fraction of energy carried by particle $j$ in the unresolved limit. In the definition of the momentum fraction the initial-state momentum $p$ is used as a reference momentum, which can be seen by rewriting its definition as \begin{equation} z = x\frac{(k_j + p)^2}{Q^2} = \frac{s_{jp}}{s_{jp} + s_{kp} + s_{lp}} \, . \end{equation} For an identified photon, i.e.\ $j=\gamma$, there are two fragmentation antenna functions: $\tilde{A}^0_4(\hat{q},g,\gamma^{{\rm id.}},q)$ containing the triple-collinear $q\to qg\gamma$ configuration and $\tilde{E}^0_4(\hat{q},q',\gamma^{{\rm id.}},\bar{q}')$ containing the triple-collinear $g\to q'\bar{q}'\gamma$ configuration. To integrate these fragmentation antenna functions, we use a reduction to master integrals. Using \begin{equation} 2 \pi i \, \delta(k^2) = \frac{1}{k^2- i \epsilon} - \frac{1}{k^2+i \epsilon} \, , \end{equation} we rewrite the phase space integrals as $2\to 2$ three-loop integrals with forward scattering kinematics. The reduction is performed with the program \texttt{Reduze2}~\cite{vonManteuffel:2012np}. For the integration of the two photonic fragmentation antenna functions we find nine master integrals. The master integrals are calculated using their differential equations in the two kinematic variables $x$ and $z$. The boundary conditions are fixed by integrating the solution of the differential equations over $z$ and comparing the result with the inclusive master integrals calculated in~\cite{Daleo:2009yj}. The master integrals take the general form \begin{equation} I(x,z) = (1-x)^{a - 2 \epsilon} \left( z^{-\epsilon} A(x,z) + z^{-2\epsilon} B(x,z) \right) \end{equation} with $a \in \{-1,0,1\}$.
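The second equality in the rewriting of $z$ above follows from momentum conservation, since $2\,p\cdot q = s_{jp}+s_{kp}+s_{lp}$ for a massless reference momentum $p$. A minimal numerical illustration with random massless four-vectors (a standalone sketch; all names are ours and not part of any actual implementation):

```python
import math
import random

def mdot(a, b):
    """Minkowski product with metric (+,-,-,-)."""
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

def random_massless(rng):
    """Random massless four-vector with positive energy."""
    E = rng.uniform(1.0, 10.0)
    ct = rng.uniform(-1.0, 1.0)
    st = math.sqrt(1.0 - ct*ct)
    phi = rng.uniform(0.0, 2.0*math.pi)
    return (E, E*st*math.cos(phi), E*st*math.sin(phi), E*ct)

rng = random.Random(42)
p = random_massless(rng)
kj, kk, kl = (random_massless(rng) for _ in range(3))

# momentum conservation: q + p = kj + kk + kl
q = tuple(kj[i] + kk[i] + kl[i] - p[i] for i in range(4))
Q2 = -mdot(q, q)
x = Q2 / (2.0 * mdot(p, q))

sjp, skp, slp = 2.0*mdot(kj, p), 2.0*mdot(kk, p), 2.0*mdot(kl, p)
kjp = tuple(kj[i] + p[i] for i in range(4))

# the two forms of the momentum fraction z
z_def = x * mdot(kjp, kjp) / Q2
z_inv = sjp / (sjp + skp + slp)
print(z_def, z_inv)
```

Both forms agree to machine precision for any momentum configuration, since the ratio is independent of the sign and size of $Q^2$.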
After being inserted into the antenna functions, the factor $(1-x)^{a - 2 \epsilon}$ can give rise to factors of the form $(1-x)^{-1 -2 \epsilon}$, whose expansion reads \begin{equation} (1-x)^{-1-2 \epsilon} = -\frac{1}{2 \, \epsilon} \delta(1-x) + \sum_{n=0}^{\infty} \frac{(- 2 \, \epsilon)^n}{n!} \mathcal{D}_n(x) \, , \label{eq:distexp} \end{equation} where we used the notation introduced in \eqref{eq:Dndef}. Potential factors of the form $z^{-1 - a \epsilon}$ do not have to be expanded in terms of distributions, since the endpoint $z=0$ corresponds to a soft-photon singularity. This singularity is regulated by the jet function, which requires a minimum $p_T$ of the photon, so that the endpoint $z=0$ does not contribute to any observable with a photon in the final state. However, to check the result of the integrated fragmentation antenna functions we also derive the exact scaling of the master integrals in the limit $z \to 0$. In the scattering \begin{equation} q + p \to p_1(k_{\gamma}) + p_2(k_2) + p_3(k_3) \end{equation} 12 different propagators appear, of which four are cut propagators. Using four-momentum conservation $k_3 = q + p - k_{\gamma} - k_2$, they read \begin{eqnarray} D_1 &=& (q-k_{\gamma})^2 \, , \nonumber \\ D_2 &=& (p+q-k_{\gamma})^2 \, ,\nonumber \\ D_3 &=& (p-k_2)^2 \, ,\nonumber \\ D_4 &=& (q- k_2)^2 \, ,\nonumber \\ D_5 &=& (p+q-k_2)^2 \, ,\nonumber \\ D_6 &=& (q-k_{\gamma} -k_2)^2\, ,\nonumber \\ D_7 &=& (p-k_{\gamma}-k_2)^2 \, , \nonumber \\ D_8 &=& (k_{\gamma} + k_2)^2\, , \nonumber \\ D_9 &=& k_{\gamma}^2\, ,\nonumber \\ D_{10} &=& k_2^2 \, , \nonumber \\ D_{11} &=& (q+p-k_{\gamma}-k_2)^2 \, , \nonumber \\ D_{12} &=& (p-k_{\gamma})^2 + Q^2 \frac{z}{x} \, , \end{eqnarray} where the cut propagators are $D_9$--$D_{12}$.
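The distributional expansion \eqref{eq:distexp} can be validated on a test function: for $f(x)=x$ it predicts $\int_0^1 x\,(1-x)^{-1-2\epsilon}\,\text{d}x = -\tfrac{1}{2\epsilon} - 1 + \mathcal{O}(\epsilon)$, to be compared with the exact Beta-function result $-1/(2\epsilon(1-2\epsilon))$. A minimal numerical sketch (evaluated at $\epsilon<0$, where the integral converges; illustrative only):

```python
def exact_moment(eps):
    """Exact value of int_0^1 x (1-x)^(-1-2*eps) dx = B(2, -2*eps), eps < 0."""
    return -1.0 / (2.0 * eps * (1.0 - 2.0 * eps))

def expanded_moment(eps):
    """Distributional expansion for f(x)=x:
    delta term -f(1)/(2*eps) plus the D_0 term int_0^1 (x-1)/(1-x) dx = -1."""
    return -1.0 / (2.0 * eps) - 1.0

# the difference should shrink linearly with eps
for eps in (-0.04, -0.02, -0.01):
    print(eps, exact_moment(eps) - expanded_moment(eps))
```

The residual difference is $-2\epsilon/(1-2\epsilon)$, i.e.\ exactly the neglected $\mathcal{O}(\epsilon)$ terms of the expansion.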
We label the master integrals by the propagators in the corresponding integral (omitting the cut propagators, which we require in each integral), for example: \begin{equation} I[-3,7] = \frac{Q^2(2\pi)^{-2d+3}}{x} \int \text{d}^d k_{\gamma} \, \text{d}^d k_2 \, \delta\left(D_9\right) \, \delta\left(D_{10}\right) \, \delta\left(D_{11}\right) \delta\left(D_{12}\right) \frac{D_3}{D_7}. \end{equation} The factor $Q^2/x$ originates from rewriting the $\delta$-distribution fixing the momentum fraction $z$ in \eqref{eq:def_intphotonicantenna} in terms of $\delta(D_{12})$. As there are seven linearly independent scalar products the integration families consist of the four cut propagators and three additional propagators. We find three integral families and in total nine master integrals which are summarised in Table\,\ref{tabMI}. \begin{table}[t] \centering \resizebox{\columnwidth}{!}{% \begin{tabular}{c|c|c|c|c} family & master & deepest pole & behaviour at $x=1$ & known to order \\ \hline \multicolumn{1}{l|}{} & $I[0]$ & $\epsilon^0$ & $(1-x)^{1-2\epsilon}$ & all \\ \hline \multirow{2}{*}{A} & $I[5]$ & $\epsilon^{-1}$ & $(1-x)^{-2 \epsilon}$ & all \\ & $I[2,3,5]$ & $\epsilon^{-2}$ & $(1-x)^{-1-2\epsilon}$ & $\epsilon^1$ \\ \hline \multirow{4}{*}{B} & $I[7]$ & $\epsilon^0$ & $(1-x)^{1-2\epsilon}$ & $\epsilon^2$ \\ & $I[-2,7]$ & $\epsilon^0$ & $(1-x)^{1-2\epsilon}$ & $\epsilon^2$ \\ & $I[-3,7]$ & $\epsilon^0$ & $(1-x)^{1-2\epsilon}$ & $\epsilon^2$ \\ & $I[2,3,7]$ & $\epsilon^{-2}$ & $(1-x)^{-2\epsilon}$ & $\epsilon^0$ ($\epsilon^1$ at $x=1$) \\ \hline \multirow{2}{*}{C} & $I[5,7]$ & $\epsilon^{-1}$ & $(1-x)^{-2 \epsilon}$ & $\epsilon^0$ ($\epsilon^1$ at $x=1$) \\ & $I[3,5,7]$ & $\epsilon^{-2}$ & $(1-x)^{- 2\epsilon}$ & $\epsilon^0$ ($\epsilon^1$ at $x=1$) \end{tabular}% } \caption{\label{tabMI} Summary of the double real radiation master integrals.} \end{table} The phase space integral $I[0]$ has been calculated directly by carrying out the three-body phase space integral 
and by solving the differential equation in the kinematic variable $z$ and fixing the boundary condition by comparing to the inclusive three-body phase space. It reads \begin{equation} I[0] = N_{\Gamma} \left(Q^2\right)^{1-2\epsilon} (1-x)^{1-2\epsilon} x^{-1+2\epsilon} z^{-\epsilon} (1-z)^{1-2\epsilon} \, , \end{equation} with the normalisation factor \begin{equation} N_{\Gamma} = \frac{2^{-5+4\epsilon} \pi^{-3+2\epsilon}\, \Gamma^2(2-\epsilon)}{\Gamma^2\left(3- 2\epsilon\right)}\, . \end{equation} The only other master integral which admits a simple closed form solution is the master integral $I[5]$. We find \begin{eqnarray} I[5] &=& N_{\Gamma} \left(\frac{1-2\epsilon}{\epsilon} \right)^2 \left(Q^2 \right)^{-2\epsilon} (1-x)^{-2\epsilon} x^{2\epsilon} \nonumber \\ &&\times \left( z^{-\epsilon} {}_2F_1(\epsilon,2\epsilon,1+\epsilon;z) - z^{-2\epsilon} \frac{\Gamma\left(1-2\epsilon\right)\Gamma(1+\epsilon)}{\Gamma(1-\epsilon)} \right)\, . \end{eqnarray} All other master integrals have been calculated in terms of a Laurent expansion in $\epsilon$. The integrated antenna functions are then obtained by reducing the integrand in \eqref{eq:def_intphotonicantenna} to these master integrals, and applying \eqref{eq:distexp} to extract the end-point contributions in $x=1$. The results for $\tilde{\mathcal{A}}^{0, \, {\rm id.} \gamma}_{q,\gamma qg}(x,z)$ and $\tilde{\mathcal{E}}^{0, \, {\rm id.} \gamma}_{q,\gamma q' \bar{q}'}(x,z)$ are too lengthy to be expressed in the text here, and are included as ancillary files. 
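The structure of $I[0]$ can be cross-checked against the inclusive phase space: integrating its $z$-dependent factor $z^{-\epsilon}(1-z)^{1-2\epsilon}$ over $z$ yields the Beta function $B(1-\epsilon,\,2-2\epsilon)$. A minimal numerical sketch of this check (evaluated at $\epsilon=-0.1$, where the integrand is smooth; illustrative only, not part of the actual derivation):

```python
import math

def beta(a, b):
    """Euler Beta function via Gamma functions."""
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

def z_integral(eps, n=20001):
    """Composite Simpson rule for int_0^1 z^(-eps) (1-z)^(1-2*eps) dz, eps < 0."""
    h = 1.0 / (n - 1)
    total = 0.0
    for i in range(n):
        z = i * h
        f = (z ** (-eps)) * ((1.0 - z) ** (1.0 - 2.0 * eps))
        w = 1.0 if i in (0, n - 1) else (4.0 if i % 2 == 1 else 2.0)
        total += w * f
    return total * h / 3.0

eps = -0.1
print(z_integral(eps), beta(1.0 - eps, 2.0 - 2.0 * eps))
```

The two numbers agree within the quadrature accuracy, consistent with the closed form of $I[0]$.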
\section{Integration of $X^1_3$ Fragmentation Antenna Functions} \label{sec:X31int} The inclusive integrated one-loop antenna functions in the initial-final configuration are defined as~\cite{Daleo:2009yj} \begin{equation} \mathcal{X}^1_{i,jk}(x) = \frac{1}{C(\epsilon)} \int \text{d} \Phi_2(k_j,k_k;p_i,q) \frac{Q^2}{2 \pi} X^1_{i,jk} \, , \label{eq:def_X31_QCD} \end{equation} where $X^1_{i,jk}$ is the unintegrated one-loop antenna function and $\text{d} \Phi_2$ the two-particle phase space. We define the integrated initial-final one-loop fragmentation antenna functions in line with \eqref{eq:def_X31_QCD} as \begin{eqnarray} \mathcal{X}^{1, {\rm id.} j}_{i,j k}(x,z) &=& \frac{1}{C(\epsilon)} \int \text{d} \Phi_2(k_j,k_k;p_i,q) \, \delta \left( z - \frac{s_{i j}}{s_{i j} + s_{i k}} \right) \frac{Q^2}{2 \pi} X^1_{i,j k} \nonumber \\ &=& \frac{Q^2}{2} \frac{e^{\gamma_E \epsilon}}{\Gamma(1-\epsilon)} \left(Q^2\right)^{-\epsilon} \mathcal{J}(x,z) \, X^1_{i,j k} \, . \label{eq:def_X31_integrated_photonic} \end{eqnarray} The integration takes the same form as for the $X^0_3$ initial-final fragmentation antenna functions, see \eqref{eq:intX30IFfrag} above. The Jacobian factor $\mathcal{J}$ is given in \eqref{eq:JacPhi2}. As can be seen from \eqref{eq:def_X31_integrated_photonic}, no actual integration has to be performed to obtain the integrated fragmentation antenna functions $\mathcal{X}^{1, {\rm id.} j}_{i,j k}$. However, to express the integrated fragmentation antenna functions in terms of distributions in $(1-x)$ and in $z$ we first have to cast the unintegrated antenna functions in a form suitable for this expansion. Therefore, deriving the integrated initial-final one-loop fragmentation antenna functions follows the steps of the derivation of the integrated initial-initial one-loop antenna functions presented in~\cite{Gehrmann:2011wi}. 
In contrast to the NLO $X^0_3$ antenna functions which only contain rational terms in the invariants, the one-loop antenna functions $X^1_3$ also contain logarithms and polylogarithms in the invariants. These functions have branch cuts in the limits $x \to 1$ and $z \to 0$. Therefore, the expansion in distributions in $z=0$ and $x=1$ cannot be performed directly. We follow the strategy of~\cite{Gehrmann:2011wi} and express the one-loop antenna functions in terms of one-loop master integrals. The one-loop master integrals appearing in the expressions for the one-loop antenna functions are the one-loop bubble ${\rm Bub}(s_{ij})$ and the one-loop ${\rm Box}(s_{ij},s_{ik})$ in all kinematic crossings. The expression for the one-loop bubble reads \begin{equation} \text{Bub}(s_{ij}) = \left[ \frac{(4\pi)^{\epsilon}}{16 \pi^2} \frac{\Gamma(1+\epsilon) \Gamma^2(1-\epsilon)}{\Gamma(1-2\epsilon)} \right] \frac{i}{\epsilon(1-2\epsilon)} \left( - s_{ij} \right)^{-\epsilon} \equiv A_{2,LO} \left( - s_{ij} \right)^{-\epsilon} \, , \label{eq:definition_bubble} \end{equation} and the expression for the one-loop box is \begin{eqnarray} \lefteqn{\text{Box}(s_{ij},s_{ik})} \nonumber \\ &=& \frac{2 (1-2 \epsilon)}{\epsilon} A_{2,LO} \frac{1}{s_{ij}s_{ik}} \nonumber \\ &&\times \bigg[ \left(\frac{s_{ij} s_{ik}}{s_{ij}-s_{ijk}}\right)^{-\epsilon} {}_2 F_1\left( - \epsilon, -\epsilon; 1- \epsilon ; \frac{s_{ijk} - s_{ij} - s_{ik}}{s_{ijk} - s_{ij}}\right) \nonumber \\ &&+ \left(\frac{s_{ij} s_{ik}}{s_{ik}-s_{ijk}}\right)^{-\epsilon} {}_2 F_1\left( - \epsilon, -\epsilon; 1- \epsilon ; \frac{s_{ijk} - s_{ij} - s_{ik}}{s_{ijk} - s_{ik}}\right) \nonumber \\ &&- \left(\frac{- s_{ijk} s_{ij} s_{ik}}{(s_{ij}-s_{ijk})(s_{ik}- s_{ijk})}\right)^{-\epsilon} {}_2 F_1\left( - \epsilon, -\epsilon; 1- \epsilon ; \frac{s_{ijk}(s_{ijk} - s_{ij} - s_{ik})}{(s_{ijk}- s_{ij}) (s_{ijk} - s_{ik})}\right) \bigg] \, . 
\label{eq:definition_box} \end{eqnarray} For the following discussion we adopt the labelling $p_i \to p_1$, $k_j \to k_3$ and $k_k \to k_2$ in \eqref{eq:def_X31_integrated_photonic}, so that the particle with momentum $k_3$ is identified and the momentum $p_1$ is the reference momentum. Using this convention, the invariants expressed in terms of $x$, $z$ and $Q^2$ read \begin{eqnarray} s_{12} &=& (p_1 - k_2)^2 = - Q^2 \frac{(1-z)}{x} \, , \nonumber \\ s_{13} &=& (p_1- k_3)^2 = - Q^2 \frac{z}{x} \, , \nonumber \\ s_{23} &=& (k_2 + k_3)^2= - Q^2 \frac{(x-1)}{x} \, , \nonumber \\ s_{123} &=& (k_2 + k_3 - p_1)^2 = - Q^2 \, . \end{eqnarray} Both master integrals are well-defined in the Euclidean region, in which all invariants are negative. The master integrals have to be analytically continued from this kinematic region to the kinematic region under consideration, given by \begin{equation} s_{12} < 0 \quad , \quad s_{13} < 0 \quad , \quad s_{23} >0 \quad , \quad s_{123} = - Q^2 < 0 \, . \label{eq:kinregion} \end{equation} The analytic continuation of the bubble master integral is straightforward, taking into account $s_{ij} \to s_{ij} + i\delta$ in \eqref{eq:definition_bubble}. In the analytic continuation of the box integrals, the prefactors in front of the hypergeometric functions as well as the hypergeometric functions themselves have to be considered. In particular, branch cuts of the hypergeometric functions in the kinematic endpoints $x=1$ and $z=0$ have to be avoided: for these values the arguments of the hypergeometric functions must not be unity or $+\infty$. To further avoid explicit imaginary parts from the hypergeometric functions, their arguments are moreover transformed to be less than $+1$ using the well-known transformation rules~\cite{bateman}.
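As a quick consistency check, the invariants above sum to $s_{123}=-Q^2$ for any $(x,z)$, and realise the sign pattern of \eqref{eq:kinregion} throughout the physical region $0<x<1$, $0<z<1$. A standalone numerical sketch (not part of the derivation):

```python
import random

rng = random.Random(0)
Q2 = 1.0
for _ in range(1000):
    x = rng.uniform(1e-3, 1.0 - 1e-3)
    z = rng.uniform(1e-3, 1.0 - 1e-3)
    s12 = -Q2 * (1.0 - z) / x
    s13 = -Q2 * z / x
    s23 = -Q2 * (x - 1.0) / x
    # momentum conservation: the invariants sum to s_123 = -Q^2
    assert abs(s12 + s13 + s23 + Q2) < 1e-9
    # sign pattern of the physical region
    assert s12 < 0.0 and s13 < 0.0 and s23 > 0.0
print("invariant checks passed")
```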
This typically requires partitioning the kinematic region defined by~\eqref{eq:kinregion} into up to four segments~\cite{Graudenz:1993tg,Gehrmann:2002zr}, see Figure~\ref{fig:kinematic_regions_com} below. In the following, we discuss the transformations of the arguments for the different hypergeometric functions appearing in the box master integrals for all kinematic crossings: ${\rm Box}(s_{12},s_{23})$, ${\rm Box}(s_{13},s_{23})$ and ${\rm Box}(s_{12},s_{13})$. In ${\rm Box}(s_{12},s_{23})$ the arguments of the hypergeometric functions read \begin{eqnarray} a_1(s_{12},s_{23}) &=& \frac{s_{123}-s_{12}-s_{23}}{s_{123}-s_{12}} = -\frac{z}{1-x-z} \, ,\nonumber \\ a_2(s_{12},s_{23}) &=& \frac{s_{123}-s_{12}- s_{23}}{s_{123}-s_{23}} = z \, , \nonumber \\ a_3(s_{12},s_{23}) &=& \frac{s_{123} \, s_{13} }{(s_{13}+s_{23}) (s_{12} + s_{13})} = - \frac{x \, z}{1-x-z} \, . \label{eq:a_boss12s23} \end{eqnarray} All arguments vanish in the kinematic endpoint $z=0$. However, $a_1$ and $a_3$ are equal to unity in the kinematic endpoint $x=1$. Therefore, the analytic continuation of the corresponding hypergeometric functions proceeds by expressing these functions as hypergeometric functions in terms of the new arguments: \begin{eqnarray} \tilde{a}_1(s_{12},s_{23}) &=& 1-\frac{1}{a_1(s_{12},s_{23})} = \frac{1-x}{z} \, , \nonumber \\ \tilde{a}_3(s_{12},s_{23}) &=& 1-\frac{1}{a_3(s_{12},s_{23})} = \frac{(1-x)(1-z)}{xz} \, . \end{eqnarray} The arguments $\tilde{a}_1$ and $\tilde{a}_3$ vanish in the endpoint $x=1$ but yield unity in the endpoint $z=0$. Therefore, to obtain an expression for ${\rm Box}(s_{12},s_{23})$ which does not contain hypergeometric functions with branch cuts in $z=0$ and $x=1$, it is necessary to distinguish the two regions \begin{eqnarray} R_1 &=&\{ s_{13} , s_{23} : s_{13} + s_{23} > 0 \Leftrightarrow z < 1-x \} \, ,\nonumber \\ R_2 &=& \{ s_{13} , s_{23} : s_{13} + s_{23} < 0 \Leftrightarrow z > 1-x \} \, .
\label{eq:def_R1_R2} \end{eqnarray} The regions are depicted in Figure~\ref{fig:kinematic_regions_com}. In region $R_1$, which contains the endpoint $z=0$, we use the hypergeometric functions with the arguments given in \eqref{eq:a_boss12s23}, while in region $R_2$, which contains the endpoint $x=1$, we express ${\rm Box}(s_{12},s_{23})$ in terms of hypergeometric functions with arguments $\tilde{a}_1, a_2$ and $\tilde{a}_3$. \begin{figure}[t] \centering \begin{subfigure}{.28\textwidth} \centering \includegraphics[width=\linewidth]{coordinatexz_R.pdf} \end{subfigure}% \begin{subfigure}{0.28\textwidth} \centering \includegraphics[width=\linewidth]{coordinatexz_T.pdf} \end{subfigure}% \begin{subfigure}{.28\textwidth} \centering \includegraphics[width=\linewidth]{coordinatexz_U.pdf} \end{subfigure} \caption{Kinematic regions in the $(x,z)$-plane relevant for the analytic continuation of the box master integrals. The kinematic endpoints are $z=0$ (blue line) and $x=1$ (red line).} \label{fig:kinematic_regions_com} \end{figure} For ${\rm Box}(s_{13},s_{23})$ the arguments of the hypergeometric functions read \begin{eqnarray} a_1(s_{13},s_{23}) &=& \frac{s_{123}-s_{13}-s_{23}}{s_{123}-s_{13}} = -\frac{1-z}{z-x} \, ,\nonumber \\ a_2(s_{13},s_{23}) &=& \frac{s_{123}-s_{13}- s_{23}}{s_{123}-s_{23}} = 1-z \, , \nonumber \\ a_3(s_{13},s_{23}) &=& \frac{s_{123} \, s_{12} }{(s_{12}+s_{23}) (s_{13} + s_{12})} = - \frac{x(1-z)}{z-x} \, . \end{eqnarray} The arguments $a_1$ and $a_3$ are equal to unity for the endpoint $x=1$. Moreover, the arguments $a_2$ and $a_3$ are unity for $z=0$.
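The closed $(x,z)$ forms of the hypergeometric arguments and of their transformed versions can be verified directly from the invariants. A standalone numerical sketch for ${\rm Box}(s_{12},s_{23})$ (illustrative only; the helper names are ours):

```python
import random

def hypergeom_args(x, z, Q2=1.0):
    """Arguments a_1, a_2, a_3 of Box(s12, s23), built from the invariants."""
    s12, s13 = -Q2 * (1.0 - z) / x, -Q2 * z / x
    s23, s123 = Q2 * (1.0 - x) / x, -Q2
    a1 = (s123 - s12 - s23) / (s123 - s12)
    a2 = (s123 - s12 - s23) / (s123 - s23)
    a3 = s123 * s13 / ((s13 + s23) * (s12 + s13))
    return a1, a2, a3

def max_deviation(x, z):
    """Largest mismatch between the invariant form and the closed (x,z) form."""
    a1, a2, a3 = hypergeom_args(x, z)
    devs = (
        abs(a1 + z / (1.0 - x - z)),
        abs(a2 - z),
        abs(a3 + x * z / (1.0 - x - z)),
        # transformed arguments used in region R2
        abs((1.0 - 1.0 / a1) - (1.0 - x) / z),
        abs((1.0 - 1.0 / a3) - (1.0 - x) * (1.0 - z) / (x * z)),
    )
    return max(devs)

rng = random.Random(3)
worst = 0.0
for _ in range(500):
    x, z = rng.uniform(0.05, 0.95), rng.uniform(0.05, 0.95)
    if abs(1.0 - x - z) > 1e-3:   # stay off the R1/R2 boundary z = 1-x
        worst = max(worst, max_deviation(x, z))
print("worst deviation:", worst)
```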
After expressing ${\rm Box}(s_{13},s_{23})$ in terms of hypergeometric functions with arguments \begin{eqnarray} \tilde{a}_1(s_{13},s_{23}) &=& 1-\frac{1}{a_1(s_{13},s_{23})} = \frac{1-x}{1-z} \, , \nonumber\\ \tilde{a}_2(s_{13},s_{23}) &=& 1-a_2(s_{13},s_{23})= z \, , \nonumber\\ \tilde{a}_3(s_{13},s_{23}) &=&1-\frac{1}{a_3(s_{13},s_{23})} = \frac{z (1-x)}{x (1-z)} \, , \label{eq:atildes13s23} \end{eqnarray} none of these functions contain branch cuts in the kinematic endpoints. However, the arguments $\tilde{a}_1$ and $\tilde{a}_3$ are larger than unity for $z>x$. Therefore, the hypergeometric functions with arguments $\tilde{a}_1$ and $\tilde{a}_3$ yield a non-vanishing imaginary part in this region. To separate the imaginary part of ${\rm Box}(s_{13},s_{23})$ from the hypergeometric functions we distinguish the regions \begin{eqnarray} T_1 &=& \{ s_{12}, s_{23}: s_{12} + s_{23} > 0 \Leftrightarrow z > x \} \, , \nonumber \\ T_2 &=& \{ s_{12}, s_{23}: s_{12} + s_{23} < 0 \Leftrightarrow z < x \} \end{eqnarray} and apply the transformations of argument $a_1$ and $a_3$ in \eqref{eq:atildes13s23} only in region $T_2$ and not in region $T_1$. For ${\rm Box}(s_{12},s_{13})$ the arguments of the hypergeometric functions read \begin{eqnarray} a_1(s_{12},s_{13}) &=& \frac{s_{123}-s_{12}-s_{13}}{s_{123}-s_{12}} = \frac{1-x}{1-x-z} \, ,\nonumber \\ a_2(s_{12},s_{13}) &=& \frac{s_{123}-s_{12}-s_{13}}{s_{123}-s_{13}} = \frac{1-x}{z-x} \, , \nonumber\\ a_3(s_{12},s_{13}) &=& \frac{s_{123} \, s_{23}}{(s_{12}+s_{23}) \, (s_{13} +s_{23})} = -\frac{(1-x) \, x}{(z-x) \, (1-x-z)} \, . \end{eqnarray} The argument $a_1$ is equal to unity in the kinematic endpoint $z=0$. To avoid the corresponding branch cut of the hypergeometric function we map this argument to \begin{equation} \tilde{a}_1(s_{12},s_{13}) = 1-\frac{1}{a_1(s_{12},s_{13})} = \frac{z}{1-x} \, . \label{eq:tildea1s12s13} \end{equation} The endpoint $z=0$ is mapped to $\tilde{a}_1=0$. 
However, we have $ \tilde{a}_1 \to + \infty$ as $x$ approaches 1. To avoid this other branch cut of the hypergeometric function we apply \eqref{eq:tildea1s12s13} only in the region $R_1$ and keep the argument $a_1$ in region $R_2$. The hypergeometric function with argument $a_2$ does not yield any branch cuts in the kinematic endpoints. However, the argument $a_2$ is larger than unity in region $T_1$. Therefore, we apply the following mapping in region $T_1$: \begin{equation} \tilde{a}_2(s_{12},s_{13}) = 1-\frac{1}{a_2(s_{12},s_{13})} = \frac{1-z}{1-x} \, . \end{equation} For the analytic continuation of the hypergeometric function in $a_3$ we have to distinguish the regions \begin{eqnarray} U_1 &=& \{s_{12}, s_{13}, s_{23}: s_{13} + s_{23} > 0 \, \wedge \, s_{12} + s_{23} < 0 \Leftrightarrow z < 1-x \, \wedge \, z < x \} \, ,\nonumber \\ U_2 &=& \{s_{12}, s_{13}, s_{23}: s_{13} + s_{23} < 0 \, \wedge \, s_{12} + s_{23} < 0 \Leftrightarrow z > 1-x \, \wedge \, z < x \} \, ,\nonumber \\ U_3 &=& \{s_{12}, s_{13}, s_{23}: s_{13} + s_{23} > 0 \, \wedge \, s_{12} + s_{23} > 0 \Leftrightarrow z < 1-x \, \wedge \, z > x \} \, , \nonumber \\ U_4 &=& \{s_{12}, s_{13}, s_{23}: s_{13} + s_{23} < 0 \, \wedge \, s_{12} + s_{23} > 0 \Leftrightarrow z > 1-x \, \wedge \, z > x \} \, . \end{eqnarray} The different regions are shown in Figure\,\ref{fig:kinematic_regions_com}. For $a_3$, we have \begin{eqnarray} a_3(s_{12}, s_{13} ) \geq 1 \, \, &\text{in}& \, \, U_1 \cup U_4 \, , \\ a_3(s_{12},s_{13} ) \leq 0 \, \, &\text{in}& \, \, U_2 \cup U_3 \, . \end{eqnarray} Moreover, $a_3=1$ for $z=0$. Therefore, we map the argument $a_3$ of the hypergeometric function in region $U_1 \cup U_4$ to \begin{equation} \tilde{a}_3(s_{12},s_{13}) = 1-\frac{1}{a_3(s_{12},s_{13})} = \frac{z(1-z)}{x(1-x)} \end{equation} by means of an appropriate identity for the hypergeometric function. 
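The correspondence between the sign conditions on the invariants and the $(x,z)$ conditions defining $U_1,\dots,U_4$ can be made explicit in a small classifier (an illustrative sketch; function names are ours, not part of any actual implementation):

```python
import random

def region_xz(x, z):
    """Classify a physical point by the (x,z) conditions defining U1..U4."""
    if z < 1.0 - x and z < x:
        return "U1"
    if z > 1.0 - x and z < x:
        return "U2"
    if z < 1.0 - x and z > x:
        return "U3"
    return "U4"

def region_inv(x, z, Q2=1.0):
    """Classify the same point by the signs of the invariant sums."""
    s12 = -Q2 * (1.0 - z) / x
    s13 = -Q2 * z / x
    s23 = Q2 * (1.0 - x) / x
    in_R1 = s13 + s23 > 0.0   # equivalent to z < 1-x
    in_T2 = s12 + s23 < 0.0   # equivalent to z < x
    return {(True, True): "U1", (False, True): "U2",
            (True, False): "U3", (False, False): "U4"}[(in_R1, in_T2)]

rng = random.Random(5)
for _ in range(2000):
    x, z = rng.uniform(0.01, 0.99), rng.uniform(0.01, 0.99)
    assert region_xz(x, z) == region_inv(x, z)
print("region classifications agree")
```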
In the region $U_2$ the hypergeometric function with argument $a_3$ does not have a branch cut in $x=1$ and no mapping of the argument is required. To obtain the analytic continuation of the third hypergeometric function in $U_3$, we take the result in region $U_1 \cup U_4$ and apply the transformation \begin{equation} \tilde{\tilde{a}}_3(s_{12},s_{13}) = \frac{1}{1-\tilde{a}_3(s_{12},s_{13})} = a_3(s_{12},s_{13}) \, , \end{equation} to the argument of the hypergeometric function. Note that even though the arguments of the hypergeometric functions in regions $U_2$ and $U_3$ are the same, the result in $U_3$ contains additional terms originating from the analytic continuation from region $U_1 \cup U_4$ to region $U_3$. After the analytic continuation of the master integrals has been performed in the different parts of the physical region, the expansion in terms of distributions can safely be carried out. We have checked that the expressions in the different regions are continuous at the boundaries. We have cast the hypergeometric functions in the box master integrals in a form such that an expansion in terms of distributions in $z=0$ can be performed. The same does not hold for the endpoint $z=1$. However, at the level of the integrated fragmentation antenna functions we are able to recover any distributions in $z=1$ by exchanging particles 2 and 3, which corresponds to exchanging $z$ with $1-z$. To this end, factors of the form $1/(s_{12}s_{13})$ have to be rewritten using partial fractions. After inserting these bubble and box master integrals in the $X_3^1$ antenna functions, the expansions of factors $z^{-1-\epsilon}$ and $(1-x)^{-1-\epsilon}$ in terms of distributions can be performed. The results in the different segments of the physical region, Figure~\ref{fig:kinematic_regions_com}, can then be recast in a form that ensures that the pole terms and the coefficients of the distributions in $z$ and $(1-x)$ take the same form in all segments.
The $z$-integration of the resulting expressions recovers the known real-virtual initial-final master integrals~\cite{Daleo:2009yj} and enabled us to identify an error in their numerical implementation for jet production in deep-inelastic scattering~\cite{Currie:2017tpe}. The relevant one-loop integrated fragmentation antenna function for photon production is $\mathcal{\tilde{A}}^{1, \, {\rm id.} \gamma}_{3,\hat{q}}(x,z)$. Its expression is very lengthy and is enclosed as an ancillary file together with the expressions for the other integrated one-loop fragmentation antenna functions. \section{Conclusions} \label{sec:conc} In this paper, we extended the antenna subtraction method to account for identified photons in the final state, and derived all required ingredients for the computation of photonic cross sections up to NNLO. This extension required the introduction of novel fragmentation antenna functions, which are differential in the momentum fraction of the final-state photon. The unintegrated forms of the fragmentation antenna functions could be inferred from their inclusive QCD counterparts. They come with novel forms of phase space factorisation at NLO and NNLO, allowing the photon momentum fraction to be retained as a variable at all stages of the event reconstruction. The corresponding integrated fragmentation antenna functions were newly computed for all photon and parton fragmentation processes at NLO and for photon fragmentation up to NNLO. The developments in this paper make it possible to compute the NNLO corrections to processes involving final-state photons (also in association with jets), with a realistic fixed-cone based isolation prescription for the photon. The new subtraction terms are largely separate from previously derived subtraction terms obtained for an idealised dynamical-cone isolation, and can be added to existing NNLO implementations.
First applications could be photon-plus-jet or di-photon production, where NNLO corrections for fixed-cone based isolation will make it possible to accurately quantify the effects of the photon isolation procedure. Moreover, it will then also become possible to compute NNLO-accurate cross sections for alternative photon isolation prescriptions~\cite{Glover:1993xc,Hall:2018jub} (or even without any photon isolation) and to investigate observables that could allow for direct determinations of the photon fragmentation functions at hadron colliders~\cite{Kaufmann:2016nux}. The formalism derived in this paper for fragmentation antenna functions can be further generalised from photons to identified hadrons. Cross sections for identified hadrons are obtained by convoluting cross sections for the production of specific partons with parton-to-hadron fragmentation functions. Their description at higher orders requires fragmentation antenna functions, differential in the momentum fraction of a final-state quark or gluon. The full set of these functions at NLO is already given in appendix \ref{app:X30integration}. An extension to NNLO will require the integration of all double real and real-virtual fragmentation antenna functions, each in initial-final and final-final kinematics. In the initial-final case, no integration is required for the real-virtual functions and the results are obtained directly along the lines of section~\ref{sec:X31int}; they are included as ancillary files. More conceptual work and new master integrals are needed for integrated fragmentation antenna functions for identified partons in the double real case, as well as in final-final kinematics. \section*{Acknowledgements} We would like to thank Alexander Huss and Marius H\"ofer for multiple discussions and comments that helped shape and test the formulation of the method presented in this paper.
In the course of this project, we also benefitted from numerous discussions with Xuan Chen, Jonathan Mo and Giovanni Stagnitto, whom we would like to thank for their input. This work has received funding from the Swiss National Science Foundation (SNF) under contract 200020-204200 and from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme grant agreement 101019620 (ERC Advanced Grant TOPUP). \begin{appendix} \section{Mass Factorisation Kernels} \label{app:MFkernels} The components of the mass factorisation kernels $\mathbf{\Gamma}$ are given in~\cite{GehrmannDeRidder:1997gf}. Adapted to our notation, they read \begin{eqnarray} \mathbf{\Gamma}^{(0)}_{q \to \gamma} &=& Q_q^2 \left( 4\pi e^{-\gamma_E}\right)^{\epsilon} \left( \mu^2 / \mu_a^2\right)^{\epsilon} \Gamma^{(0)}_{\gamma q}(z) \, , \nonumber \\ \mathbf{\Gamma}^{(0)}_{g \to \gamma} &=&0 \, , \nonumber \\ \mathbf{\Gamma}^{(0)}_{\gamma \to \gamma} &=& \delta(1-z) \, , \nonumber \\ \mathbf{\Gamma}^{(0)}_{\gamma \to p} &=& \mathbf{\Gamma}^{(1)}_{\gamma \to p} = 0 \quad \text{for } p \in \{q, \bar{q}, g\} \, , \nonumber \\ \mathbf{\Gamma}^{(0)}_{q \to q} &=& \delta(1-z) \, , \nonumber \\ \mathbf{\Gamma}^{(0)}_{q \to q'} &=& \mathbf{\Gamma}^{(1)}_{q \to q'} = 0 \quad \text{for } q \neq q' \, , \nonumber \\ \mathbf{\Gamma}^{(0)}_{q \to g} &=& 0 \, , \nonumber \\ \mathbf{\Gamma}^{(0)}_{g \to g} &=& \delta(1-z) \, , \nonumber \\ \mathbf{\Gamma}^{(0)}_{g \to q} &=& 0 \, , \nonumber \\ \mathbf{\Gamma}^{(1)}_{q \to \gamma} &=& \left( \frac{N^2-1}{N} \right) Q_q^2 \left( 4\pi e^{-\gamma_E}\right)^{2\epsilon} \left( \mu^2 / \mu_a^2\right)^{2\epsilon} \Gamma^{(1)}_{\gamma q}(z) \, , \nonumber \\ \mathbf{\Gamma}^{(1)}_{g \to \gamma} &=& \left( 4\pi e^{-\gamma_E}\right)^{2\epsilon} \left( \mu^2 / \mu_a^2\right)^{2\epsilon} \Gamma^{(1)}_{\gamma g}(z) \, , \nonumber \\ \mathbf{\Gamma}^{(1)}_{\gamma \to \gamma} &=& 0 \, , \nonumber \\ \mathbf{\Gamma}^{(1)}_{q \to q} &=&
\left( \frac{N^2-1}{N} \right) \left( 4\pi e^{-\gamma_E}\right)^{\epsilon} \left( \mu^2 / \mu_a^2\right)^{\epsilon} \Gamma^{(1)}_{qq}(z) \, , \nonumber \\ \mathbf{\Gamma}^{(1)}_{g \to g} &=& \left( 4\pi e^{-\gamma_E}\right)^{\epsilon} \left( \mu^2 / \mu_a^2\right)^{\epsilon} \left( N \, \Gamma^{(1)}_{gg}(z) + N_f \, \Gamma^{(1)}_{gg,F}(z) \right) \, , \nonumber \\ \mathbf{\Gamma}^{(1)}_{g \to q} &=& \left( 4\pi e^{-\gamma_E}\right)^{\epsilon} \left( \mu^2 / \mu_a^2\right)^{\epsilon} \Gamma^{(1)}_{qg}(z) \, , \nonumber \\ \mathbf{\Gamma}^{(1)}_{q \to g} &=& \left( \frac{N^2-1}{N} \right) \left( 4\pi e^{-\gamma_E}\right)^{\epsilon} \left( \mu^2 / \mu_a^2\right)^{\epsilon} \Gamma^{(1)}_{gq}(z) \, . \label{eq:allkernels} \end{eqnarray} Since we set $D_{g \to \gamma}=\mathcal{O}(\alpha)$, the mass factorisation kernels $\mathbf{\Gamma}^{(1)}_{q\to g}$ and $\mathbf{\Gamma}^{(1)}_{g\to g}$ are non-zero. Moreover, we decomposed the kernels by factors $N$ and $N_f$. The factorisation kernels can be expressed in terms of leading order and next-to-leading order splitting functions, i.e.\ \begin{eqnarray} \Gamma^{(0)}_{\gamma q}(z) &=& - \frac{1}{\epsilon} p^{(0)}_{\gamma q}(z) \, , \nonumber \\ \Gamma^{(1)}_{\gamma q}(z) &=& \frac{1}{2} \left[ \frac{1}{2 \epsilon^2} (p^{(0)}_{q q} \otimes p^{(0)}_{\gamma q })(z) - \frac{1}{2\epsilon} p^{(1)}_{\gamma q}(z) \right] \, , \nonumber \\ \Gamma^{(1)}_{\gamma g}(z) &=& \frac{1}{2} \sum_{q} Q_q^2 \left( \frac{1}{2\epsilon^2} (p^{(0)}_{q g} \otimes p^{(0)}_{\gamma q})(z) - \frac{1}{2 \epsilon} p^{(1)}_{\gamma g}(z) \right) \, , \nonumber \\ \Gamma^{(1)}_{qq}(z) &=& - \frac{1}{2\epsilon} p^{(0)}_{q q}(z) \, , \nonumber \\ \Gamma^{(1)}_{q g}(z) &=& - \frac{1}{2\epsilon} p^{(0)}_{q g}(z) \, , \nonumber \\ \Gamma^{(1)}_{g q}(z) &=& - \frac{1}{2\epsilon} p^{(0)}_{gq}(z) \, , \nonumber \\ \Gamma^{(1)}_{g g,F}(z) &=& - \frac{1}{\epsilon} p^{(0)}_{gg,F}(z) \, , \nonumber \\ \Gamma^{(1)}_{g g}(z) &=& - \frac{1}{\epsilon} p^{(0)}_{gg}(z) \, . 
\label{eq:gamnonvanishpco} \end{eqnarray} The factors of $1/2$ appearing in \eqref{eq:gamnonvanishpco} originate from decomposing the colour factors $C_F=(N^2-1)/(2N)$ and $T_R=1/2$ in~\cite{GehrmannDeRidder:1997gf}. The lowest order splitting functions are given by \begin{eqnarray} p^{(0)}_{qq}(z) &=& \frac{3}{2} \delta(1-z) + 2 \mathcal{D}_0(z) -1 -z \, , \nonumber \\ p^{(0)}_{qg}(z) &=& 1- 2z +2 z^2 \, , \nonumber \\ p^{(0)}_{gq}(z) &=& \frac{2}{z} - 2 + z \, , \nonumber \\ p^{(0)}_{gg}(z) &=& \frac{11}{6} \delta(1-z) + 2 \mathcal{D}_0(z) + \frac{2}{z} - 4 + 2z - 2z^2 \, , \nonumber \\ p^{(0)}_{gg,F}(z) &=& -\frac{1}{3} \delta(1-z) \, , \nonumber \\ p^{(0)}_{\gamma q}(z) &=& \frac{2}{z} - 2 + z \, , \label{eq:LOsplittingfunc} \end{eqnarray} and the next-to-leading quark-to-photon and gluon-to-photon splitting functions read \begin{eqnarray} p^{(1)}_{\gamma q}(z) &=& -\frac{1}{2} + \frac{9}{2} z + \left( -8 + \frac{1}{2} z \right) \log z + 2 z \log (1-z) + \left(1 - \frac{1}{2}z\right) \log^2 z \nonumber \\ &&+ \left[\log^2(1-z) + 4 \log z \log (1-z) + 8 \text{Li}_2(1-z) - \frac{4}{3} z \right] p^{(0)}_{\gamma q}(z) \, , \nonumber \\ p^{(1)}_{\gamma g}(z) &=& -2 + 6z - \frac{82}{9} z^2 + \frac{46}{9z} + \left( 5 + 7z + \frac{8}{3} z^2 + \frac{8}{3z} \right) \log z \nonumber \\ &&+ (1+z) \log^2 z \, . 
\end{eqnarray} \section{Integrated $X^0_3$ Fragmentation Antenna Functions} \label{app:X30integration} We express the integrated fragmentation antenna functions in terms of splitting functions \eqref{eq:LOsplittingfunc} and colour-ordered infrared singularity operators, which read \begin{eqnarray} \mathbf{I}^{(1)}_{q\bar{q}}(\epsilon,s_{q\bar{q}}) &=& - \frac{e^{\epsilon \gamma_E}}{2\Gamma(1-\epsilon)} \left[ \frac{1}{\epsilon^2} + \frac{3}{2\epsilon} \right] {\mathcal R}(-s_{q\bar{q}})^{-\epsilon} \, , \nonumber \\ \mathbf{I}^{(1)}_{qg}(\epsilon,s_{qg}) &=& - \frac{e^{\epsilon \gamma_E}}{2\Gamma(1-\epsilon)} \left[ \frac{1}{\epsilon^2} + \frac{5}{3\epsilon} \right] {\mathcal R}(-s_{qg})^{-\epsilon} \, , \nonumber \\ \mathbf{I}^{(1)}_{gg}(\epsilon,s_{gg}) &=& - \frac{e^{\epsilon \gamma_E}}{2\Gamma(1-\epsilon)} \left[ \frac{1}{\epsilon^2} + \frac{11}{6\epsilon} \right] {\mathcal R}(-s_{gg})^{-\epsilon} \, , \nonumber \\ \mathbf{I}^{(1)}_{q\bar{q},F}(\epsilon,s_{q\bar{q}}) &=& 0 \, , \nonumber \\ \mathbf{I}^{(1)}_{qg,F}(\epsilon,s_{qg}) &=& \frac{e^{\epsilon \gamma_E}}{2\Gamma(1-\epsilon)} \frac{1}{6\epsilon} {\mathcal R}(-s_{qg})^{-\epsilon} \, , \nonumber \\ \mathbf{I}^{(1)}_{gg,F}(\epsilon,s_{gg}) &=& \frac{e^{\epsilon \gamma_E}}{2\Gamma(1-\epsilon)} \frac{1}{3\epsilon} {\mathcal R}(-s_{gg})^{-\epsilon} \, . \end{eqnarray} The invariant masses that appear in these pole terms and in the normalisation factors of the integrated antenna functions are always constructed from three-parton invariants as $q^2=s_{12}+s_{13}+s_{23}$ and $Q^2=-q^2$. \subsection{Initial-Final Configuration} The unintegrated $X^0_3$ antenna functions in the initial-final configuration were introduced in~\cite{Daleo:2006xa}. We recall their expressions here and give the results for their integrated form differential in the final-state momentum fraction. 
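The lowest-order splitting functions in \eqref{eq:LOsplittingfunc} recur in all integrated antenna functions quoted below, so it is worth sanity-checking them numerically. The sketch below implements only their regular parts (the $\delta(1-z)$ and $\mathcal{D}_0(z)$ pieces are omitted, and the function names are our own shorthand) and verifies the $z \leftrightarrow 1-z$ symmetry of $p^{(0)}_{qg}$ as well as the equality of $p^{(0)}_{\gamma q}$ and $p^{(0)}_{gq}$:

```python
# Sanity checks on the lowest-order splitting functions of
# Eq. (eq:LOsplittingfunc).  Only the regular parts are implemented: the
# delta(1-z) and plus-distribution D_0(z) pieces are omitted, and the
# function names are our own shorthand.

def p0_qq_reg(z):      # regular part of p^(0)_qq
    return -1.0 - z

def p0_qg(z):          # p^(0)_qg, free of distributions
    return 1.0 - 2.0 * z + 2.0 * z**2

def p0_gq(z):          # p^(0)_gq
    return 2.0 / z - 2.0 + z

def p0_gg_reg(z):      # regular part of p^(0)_gg
    return 2.0 / z - 4.0 + 2.0 * z - 2.0 * z**2

def p0_gamq(z):        # p^(0)_{gamma q}
    return 2.0 / z - 2.0 + z

for z in (0.1, 0.25, 0.5, 0.8, 0.95):
    # p^(0)_qg is symmetric under z <-> 1-z ...
    assert abs(p0_qg(z) - p0_qg(1.0 - z)) < 1e-12
    # ... and p^(0)_{gamma q} coincides with p^(0)_gq
    assert abs(p0_gamq(z) - p0_gq(z)) < 1e-12
print("splitting-function checks passed")
```

Analogous spot checks can be run on any of the expressions quoted in this appendix.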
The quark-initiated quark-quark antenna function in the initial-final configuration reads \begin{equation} A^0_3(\hat{1}_q,3_g,2_q) =\frac{1}{s_{123}} \left(\frac{2 s_{12}^2}{s_{13} s_{23}}+\frac{2 s_{12}}{s_{13}}+\frac{2 s_{12}}{s_{23}}+\frac{s_{23}}{s_{13}}+\frac{s_{13}}{s_{23}} \right) + \mathcal{O}(\epsilon) \, . \label{eq:A30IFunint} \end{equation} For the integration of the fragmentation antenna function we need to specify which parton in the final state is identified. In case the final-state gluon is identified we find \begin{eqnarray} \mathcal{A}^{0, {\rm id.} g}_{3, \hat{q}}(x,z)&=& \left(Q^2\right)^{-\epsilon} \bigg[ -\frac{1}{2\epsilon}\delta(1-x) p^{(0)}_{gq}(z) + \frac{1}{2}-\frac{x}{2}+\frac{z}{4}+\frac{x z}{4}+\frac{1}{2} z \delta(1-x) \nonumber \\ &&+\left(-\frac{1}{4}-\frac{x}{4}+\frac{1}{2} \mathcal{D}_0(x)+\frac{1}{2} \delta(1-x) \left( \log (1-z)+\log(z)\right)\right) p^{(0)}_{gq}(z) \bigg] + \mathcal{O}(\epsilon) \, , \nonumber \\ \end{eqnarray} and for the case of an identified final-state quark we have \begin{eqnarray} \mathcal{A}^{0, {\rm id.} q}_{3, \hat{q}}(x,z) &=& -2 \mathbf{I}_{q\bar{q}}^{(1)}(\epsilon,-Q^2) \delta(1-z) \delta(1-x) \nonumber \\ &&+\left(Q^2\right)^{-\epsilon} \bigg[ -\frac{1}{2\epsilon} \left(\delta(1-z) p^{(0)}_{qq}(x)+\delta(1-x) p^{(0)}_{qq}(z) \right) +\frac{9}{16}\delta(1-z) \delta(1-x) \nonumber \\ &&+\delta(1-z) \left(\frac{1}{2}-\frac{x}{2}+\mathcal{D}_1(x)-\frac{1}{2}(1+x) \log (1-x)-\frac{1+x^2}{2(1-x)} \log(x)\right) \nonumber \\ &&+\delta(1-x) \left(\frac{1}{2}-\frac{z}{2} +\mathcal{D}_1(z)-\frac{1}{2}(1+z) \log(1-z)+ \frac{1+z^2}{2(1-z)} \log(z) \right) \nonumber \\ &&- \frac{3}{8} \left( \delta(1-z) p^{(0)}_{qq}(x) + \delta(1-x) p^{(0)}_{qq}(z) \right) + \frac{1}{4} p^{(0)}_{qq}(x)p^{(0)}_{qq}(z) \nonumber \\ &&+ \frac{3}{4}-\frac{x}{4}-\frac{z}{4}-\frac{x z}{4} \bigg] +\mathcal{O}(\epsilon) \, . 
\label{eq:A30IFqq} \end{eqnarray} In the subtraction of quark-photon collinear limits the antenna function $A^0_3(\hat{1}_q,3_{\gamma},2_q)$ is used. Its unintegrated form coincides with \eqref{eq:A30IFunint} and we have $\mathcal{A}^{0, {\rm id.} \gamma}_{3, \hat{q}} = \mathcal{A}^{0, {\rm id.} g}_{3, \hat{q}}$. The unintegrated $D$-type quark-initiated quark-gluon antenna function is \begin{eqnarray} D^0_3(\hat{1}_q,2_g,3_g) &=& \frac{1}{s_{123}^2} \bigg(\frac{2 s_{123}^2 s_{13}}{s_{12} s_{23}}+\frac{2 s_{12} s_{123}^2}{s_{13} s_{23}}+\frac{s_{123} s_{23}}{s_{12}}+\frac{2 s_{12} s_{13}}{s_{23}} \nonumber \\ &&+ s_{12}+\frac{s_{123} s_{23}}{s_{13}}+4 s_{123}+s_{13} \bigg) + \mathcal{O} (\epsilon)\, . \end{eqnarray} It is symmetric under the exchange of gluons 2 and 3. Therefore, there is only one corresponding integrated fragmentation antenna function, i.e.\ \begin{eqnarray} \mathcal{D}^{0, {\rm id.} g}_{3, \hat{q}}(x,z)&=& -2 \mathbf{I}_{qg}^{(1)}(\epsilon,-Q^2) \delta(1-x)\delta(1-z) \nonumber \\ &&+ \left(Q^2\right)^{-\epsilon} \bigg[ -\frac{1}{2\epsilon} \left(\delta(1-x) p^{(0)}_{gg}(z)+\delta(1-z) p^{(0)}_{qq}(x) \right) + \frac{11}{16}\delta(1-x)\delta(1-z) \nonumber \\ &&+\delta(1-z) \left(\frac{1}{2}-\frac{x}{2}+\mathcal{D}_1(x)-\frac{1}{2} (1+x) \log (1-x)- \frac{1+x^2}{2(1-x)} \log(x)\right) \nonumber \\ &&+\delta(1-x) \left(\mathcal{D}_1(z) +\left(-2 + \frac{1}{z} +z - z^2\right) \log (1-z) + \frac{(1-z+z^2)^2}{(1-z)z} \log(z) \right) \nonumber \\ &&-\frac{3}{8} \delta(1-x) p^{(0)}_{gg}(z)-\frac{11}{24} p^{(0)}_{qq}(x) \delta(1-z)+ \frac{1}{4} p^{(0)}_{gg}(z) p^{(0)}_{qq}(x) -1-\frac{1}{2 x}-x+\frac{z}{2} \nonumber \\ &&+\frac{z}{x}+\frac{x z}{2}-\frac{z^2}{2}-\frac{z^2}{x}-\frac{x z^2}{2} \bigg] + \mathcal{O}(\epsilon) \, . 
\end{eqnarray} The three-quark quark-gluon antenna functions have the form \begin{eqnarray} E^0_3(\hat{1}_q,2_{q'},3_{\bar{q}'}) &=& \frac{1}{s_{123}^2} \left( \frac{(s_{12}+s_{13})^2}{s_{23}}-\frac{2 s_{12}s_{13}}{s_{23}}+ (s_{12}+s_{13}) \right) + \mathcal{O}(\epsilon) \, , \\ E^0_3(\hat{1}_{q'},2_{q'},3_{q}) &=& -\frac{1}{s_{123}^2} \left(\frac{(s_{13}+s_{23})^2}{s_{12}}-\frac{2 s_{13} s_{23}}{s_{12}}+ (s_{13}+s_{23}) \right) \, + \mathcal{O}(\epsilon) \label{eq:E30qpqpq} \, . \end{eqnarray} The first antenna function is symmetric in the final-state quark pair. There is one corresponding integrated fragmentation antenna function, i.e.\ \begin{eqnarray} \mathcal{E}^{0, {\rm id.} q'}_{3, \hat{q}}(x,z) &=& \left(Q^2\right)^{-\epsilon} \bigg[ -\frac{1}{2\epsilon} \delta(1-x) p^{(0)}_{qg}(z) -\frac{1}{2 x}+\frac{p^{(0)}_{qg}(z)}{2 x} +\frac{1}{2} \mathcal{D}_0(x) p^{(0)}_{qg}(z) \nonumber \\ &&+\delta(1-x) \left(\frac{1}{2}-\frac{1}{2} p^{(0)}_{qg}(z)+\frac{1}{2} \left( \log (1-z) +\log (z)\right) p^{(0)}_{qg}(z)\right) \bigg] + \mathcal{O}(\epsilon) \, . \nonumber \\ \label{eq:E30IFqqp} \end{eqnarray} The only unresolved limit of the antenna function in \eqref{eq:E30qpqpq} is the flavour-changing initial-final collinear limit. However, identifying the final-state parton $q'$ prevents it from becoming collinear to the initial-state since any jet function will require a minimum transverse momentum of the identified particle. Therefore, the only integrated fragmentation antenna function corresponding to \eqref{eq:E30qpqpq} identifies the final-state quark $q$. We find \begin{eqnarray} \mathcal{E}^{0, {\rm id.} q}_{3, \hat{q}'}(x,z) &=& \left(Q^2\right)^{-\epsilon} \bigg[ -\frac{1}{2\epsilon} \delta(1-z) p^{(0)}_{gq}(x)- \frac{1}{2}+\frac{x}{2}+\frac{1}{2} x \delta(1-z) \nonumber \\ &&-\left(\frac{1}{2}-\frac{1}{2} \mathcal{D}_0(z) +\delta(1-z) \left(\frac{1}{2}-\frac{1}{2} \log (1-x)+\frac{\log (x)}{2}\right)\right) p^{(0)}_{gq}(x) \bigg] + \mathcal{O}(\epsilon) \, . 
\nonumber \\ \end{eqnarray} The remaining quark-initiated antenna function is \begin{equation} G^0_3(\hat{1}_{q'},2_{q'},3_{g}) = -\frac{1}{s_{123}^2} \left( \frac{(s_{13}+s_{23})^2}{s_{12}}-\frac{2 s_{13} s_{23}}{s_{12}} \right) + \mathcal{O}(\epsilon) \, . \end{equation} The $G^0_3$ antenna function at hand only contains the flavour-changing initial-final limit. Using the same reasoning as for the $E^0_3$ antenna function, we find only one integrated fragmentation antenna function: \begin{eqnarray} \mathcal{G}^{0, {\rm id.} g}_{3, \hat{q}'}(x,z) &=& \left(Q^2\right)^{-\epsilon} \bigg[- \frac{1}{2\epsilon} \delta(1-z) p^{(0)}_{gq}(x) - \frac{1}{2}+\frac{x}{4}-\frac{z}{2}+\frac{x z}{4}+\frac{1}{2} x \delta(1-z) \nonumber \\ &&-\left(\frac{1}{4}+\frac{z}{4}-\frac{1}{2} \mathcal{D}_0(z)+\delta(1-z) \left(\frac{1}{2}-\frac{1}{2} \log (1-x)+\frac{\log (x)}{2}\right)\right) p^{(0)}_{gq}(x) \bigg] \nonumber \\ &&+ \mathcal{O}(\epsilon) \, . \end{eqnarray} The gluon-initiated quark-anti-quark antenna function is given by \begin{equation} A^0_3(2_{\bar{q}},\hat{1}_g,3_q) = -\frac{1}{s_{123}} \left(\frac{2 s_{23}^2}{s_{12} s_{13}}+\frac{s_{13}}{s_{12}}+\frac{s_{12}}{s_{13}}+\frac{2 s_{23}}{s_{12}}+\frac{2s_{23}}{s_{13}} \right) + \mathcal{O}(\epsilon) \end{equation} and its integrated form with an identified quark reads \begin{eqnarray} \mathcal{A}^{0, {\rm id.} \, q}_{3, \hat{g}}(x,z) &=& \left(Q^2\right)^{-\epsilon} \bigg[- \frac{1}{2\epsilon}\delta(1-z) p^{(0)}_{qg}(x) - 1 + \frac{p^{(0)}_{qg}(x)}{2 z} + \frac{1}{2} \mathcal{D}_0(z) p^{(0)}_{qg}(x) \nonumber \\ &&- \frac{1}{2} \delta(1-z) \left( -1 +p^{(0)}_{qg}(x) ( \log(x) - \log(1-x) ) \right) \bigg] + \mathcal{O}(\epsilon)\, . \end{eqnarray} As explained in~\cite{Daleo:2006xa}, the gluon-initiated $D^0_3$ antenna function has to be decomposed into a flavour-preserving and flavour-changing piece. 
The two resulting antenna functions are \begin{eqnarray} D^0_3(\hat{1}_g,2_g,3_q) &=& \frac{1}{s_{123}^2} \left( \frac{s_{12}^2}{s_{23}}+\frac{2 s_{13}^3}{s_{12} s_{23}}+\frac{4 s_{13}^2}{s_{12}}+\frac{2 s_{23}^3}{s_{12} (s_{12}+s_{13})} +\frac{6 s_{13} s_{23}}{s_{12}} \right. \nonumber \\ &&\left.+\frac{3 s_{12} s_{13}}{s_{23}}+\frac{4 s_{23}^2}{s_{12}}+6 s_{12}+\frac{4 s_{13}^2}{s_{23}}+9 s_{13}+9 s_{23}\right) + \mathcal{O}(\epsilon) \ , \\ D^0_{3, g \to q}(\hat{1}_g, 2_{q}, 3_g) &=& -\frac{1}{s_{123}^2} \left( \frac{s_{13}^2}{s_{12}}+\frac{2 s_{23}^3}{s_{12} (s_{12}+s_{13})}+\frac{3 s_{13} s_{23}}{s_{12}}+\frac{4 s_{23}^2}{s_{12}} \right) + \mathcal{O}(\epsilon) \, . \end{eqnarray} For the flavour-preserving antenna function the quark or the gluon in the final state can be identified. In the former case we find \begin{eqnarray} \mathcal{D}^{0, {\rm id.} q}_{3, \hat{g}}(x,z) &=&-2 \mathbf{I}_{qg}^{(1)}(\epsilon,-Q^2)\delta(1-x) \delta(1-z) \nonumber \\ &&+ \left(Q^2\right)^{-\epsilon} \bigg[ -\frac{1}{2\epsilon} \left( \delta(1-z) p^{(0)}_{gg}(x) + \delta(1-x) p^{(0)}_{qq}(z) \right) + \frac{11}{16} \delta(1-x) \delta(1-z) \nonumber \\ &&+\delta(1-z) \left(\mathcal{D}_1(x) + \left(-2 + \frac{1}{x} +x -x^2\right) \log(1-x) - \frac{(1-x+x^2)^2}{(1-x)x} \log(x) \right) \nonumber \\ &&+\delta(1-x) \left(\frac{1}{2}-\frac{z}{2}+\mathcal{D}_1(z)-\frac{1}{2} (1+z) \log (1-z) + \frac{1+z^2}{2(1-z)} \log(z) \right) \nonumber \\ &&-\frac{3}{8} \delta(1-z) p^{(0)}_{gg}(x) -\frac{11}{24} p^{(0)}_{qq}(z) \delta(1-x)+\frac{1}{4} p^{(0)}_{gg}(x) p^{(0)}_{qq}(z) -\frac{5}{2}+\frac{1}{2 x}+\frac{x}{2} \nonumber \\ &&-\frac{x^2}{2} -z+\frac{z}{2 x}+\frac{x z}{2}-\frac{x^2 z}{2} \bigg] + \mathcal{O}(\epsilon) \label{eq:D30IFgq} \end{eqnarray} and in the latter case \begin{eqnarray} \mathcal{D}^{0, {\rm id.} g}_{3, \hat{g}}(x,z) &=& \left(Q^2\right)^{-\epsilon} \bigg[ -\frac{1}{2\epsilon}\delta(1-x) p^{(0)}_{gq}(z) -\frac{7}{2}+\frac{1}{x}+x-x^2+z-\frac{z}{2 x}-\frac{x 
z}{2}+\frac{x^2 z}{2} \nonumber \\ &&+\left(\frac{1-2x+x^2-x^3}{2x} +\frac{1}{2} \mathcal{D}_0(x)+\frac{1}{2} \delta(1-x) \left( \log (1-z)+\log(z)\right)\right) p^{(0)}_{gq}(z) \nonumber \\ &&+\frac{1}{2} z \delta(1-x) \bigg] + \mathcal{O}(\epsilon) \, . \end{eqnarray} The only integrated fragmentation antenna function of the flavour-changing $D^0_3$ antenna identifies the final-state gluon. It reads \begin{eqnarray} \mathcal{D}^{0, {\rm id.} g}_{3,g \to q, \hat{g}}(x,z)&=& \left(Q^2\right)^{-\epsilon} \bigg[ -\frac{1}{2\epsilon} \delta(1-z) p^{(0)}_{qg}(x)- \frac{3}{2}+\frac{1}{x}-\frac{z}{2 x}+\frac{1}{2} \mathcal{D}_0(z) p^{(0)}_{qg}(x) \nonumber \\ &&-\delta(1-z) \left(-\frac{1}{2}-\frac{1}{2} \log (1-x) p^{(0)}_{qg}(x)+\frac{1}{2} \log (x) p^{(0)}_{qg}(x)\right) \bigg] + \mathcal{O}(\epsilon) \, . \nonumber \\ \end{eqnarray} There are two gluon-initiated gluon-gluon antenna functions: \begin{eqnarray} F^0_3(\hat{1}_{g},2_g,3_g) &=& \frac{1}{s_{123}^2} \left( \frac{2 s_{123}^2s_{23}}{s_{12} s_{13}}+\frac{2 s_{123}^2 s_{13}}{s_{12} s_{23}}+\frac{2 s_{12} s_{123}^2}{s_{13} s_{23}} \right. \nonumber \\ &&\left.+\frac{2 s_{13} s_{23}}{s_{12}}+\frac{2 s_{12} s_{23}}{s_{13}}+\frac{2 s_{12} s_{13}}{s_{23}}+8 s_{123}\right) + \mathcal{O}(\epsilon) \, , \\ G^0_3(\hat{1}_g, 2_{q'} , 3_{\bar{q}'}) &=& \frac{1}{s_{123}^2} \left( \frac{(s_{12}+s_{13})^2}{s_{23}}-\frac{2 s_{12} s_{13}}{s_{23}} \right) + \mathcal{O}(\epsilon) \, . \end{eqnarray} Both antenna functions are symmetric under the exchange of parton 2 and 3. Therefore, for each antenna function there is only one integrated fragmentation antenna function. 
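The claimed symmetry can be verified numerically. The sketch below (a rough check with arbitrary positive values for the invariants, which enter here only as free algebraic variables; function names are ours) confirms that both expressions are invariant under $s_{12} \leftrightarrow s_{13}$, i.e.\ under the exchange of partons 2 and 3:

```python
# Check that the F30 and G30 antenna functions quoted above are invariant
# under the exchange of the final-state partons 2 and 3, i.e. under
# s12 <-> s13 (s23 is the 2-3 invariant and stays fixed).  The numerical
# values of the invariants are arbitrary.

def F30(s12, s13, s23):
    s123 = s12 + s13 + s23
    return (2*s123**2*s23/(s12*s13) + 2*s123**2*s13/(s12*s23)
            + 2*s12*s123**2/(s13*s23) + 2*s13*s23/s12
            + 2*s12*s23/s13 + 2*s12*s13/s23 + 8*s123) / s123**2

def G30(s12, s13, s23):
    s123 = s12 + s13 + s23
    return ((s12 + s13)**2/s23 - 2*s12*s13/s23) / s123**2

for (a, b, c) in [(1.0, 2.0, 3.0), (0.3, 0.7, 1.9), (5.0, 0.1, 0.4)]:
    assert abs(F30(a, b, c) - F30(b, a, c)) < 1e-12 * abs(F30(a, b, c))
    assert abs(G30(a, b, c) - G30(b, a, c)) < 1e-12
print("2 <-> 3 symmetry confirmed")
```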
We find \newpage \begin{eqnarray} \mathcal{F}^{0, {\rm id.} g}_{3, \hat{g}}(x,z)&=&-2 \mathbf{I}_{gg}^{(1)}(\epsilon,-Q^2)\delta(1-x) \delta(1-z) \nonumber \\ &&+ \left(Q^2\right)^{-\epsilon} \bigg[ -\frac{1}{2\epsilon} \left(\delta(1-z) p^{(0)}_{gg}(x)+ \delta(1-x) p^{(0)}_{gg}(z) \right) + \frac{121}{144}\delta(1-z)\delta(1-x) \nonumber \\ &&+\delta(1-z) \left(\mathcal{D}_1(x)+ \left(-2 + \frac{1}{x} +x -x^2 \right) \log(1-x) - \frac{(1-x+x^2)^2}{(1-x)x} \log(x)\right) \nonumber \\ &&+\delta(1-x) \left(\mathcal{D}_1(z) + \left(-2 + \frac{1}{z} +z - z^2 \right) \log(1-z) + \frac{(1-z+z^2)^2}{(1-z)z} \log(z) \right) \nonumber \\ &&-\frac{11}{24} \left( \delta(1-z) p^{(0)}_{gg}(x)+\delta(1-x) p^{(0)}_{gg}(z) \right) +\frac{1}{4} p^{(0)}_{gg}(x)p^{(0)}_{gg}(z) \nonumber \\ &&-(2-x+x^2)(2-z+z^2) \bigg] + \mathcal{O}(\epsilon) \end{eqnarray} and \begin{eqnarray} \mathcal{G}^{0, {\rm id.} q'}_{3, \hat{g}}(x,z) &=& \left(Q^2\right)^{-\epsilon} \bigg[ -\frac{1}{2\epsilon} \delta(1-x) p^{(0)}_{qg}(z)+ \frac{1}{2 x} p^{(0)}_{qg}(z) +\frac{1}{2} \mathcal{D}_0(x) p^{(0)}_{qg}(z) \nonumber \\ &&+\delta(1-x) \left(\frac{1}{2}-\frac{1}{2} p^{(0)}_{qg}(z)+\frac{1}{2} \left( \log (1-z) + \log (z) \right) p^{(0)}_{qg}(z)\right) \bigg] + \mathcal{O}(\epsilon) \, . \nonumber \\ \label{eq:G30IFgqp} \end{eqnarray} \subsection{Final-Final Configuration} The unintegrated $X^0_3$ antenna functions in the final-final configuration can be found in~\cite{GehrmannDeRidder:2005cm}. We recall their expressions here and give the results for their integrated form differential in the final-state momentum fraction. The tree-level three parton quark-anti-quark antenna function reads \begin{equation} A^0_3(1_{\bar{q}},3_g,2_q) =\frac{1}{s_{123}} \left(\frac{2 s_{12}^2}{s_{13} s_{23}}+\frac{2 s_{12}}{s_{13}}+\frac{2 s_{12}}{s_{23}}+\frac{s_{23}}{s_{13}}+\frac{s_{13}}{s_{23}} \right) + \mathcal{O}(\epsilon) \, . \label{eq:A30FFunint} \end{equation} It is symmetric under the exchange of the quark pair. 
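The unintegrated antenna functions encode the universal infrared limits of the matrix elements; for \eqref{eq:A30FFunint} in particular, the expression should reduce to the soft eikonal factor $2 s_{12}/(s_{13} s_{23})$ when gluon 3 becomes soft. A minimal numerical check of this limit (arbitrary kinematic values; the soft limit is driven by a scaling parameter $\lambda$):

```python
# Check that the antenna function of Eq. (eq:A30FFunint) reduces to the
# soft eikonal factor 2 s12/(s13 s23) when gluon 3 becomes soft, i.e. for
# s13, s23 -> 0 at fixed s12.  Kinematic values below are arbitrary.

def A30(s12, s13, s23):
    s123 = s12 + s13 + s23
    return (2*s12**2/(s13*s23) + 2*s12/s13 + 2*s12/s23
            + s23/s13 + s13/s23) / s123

def eikonal(s12, s13, s23):
    return 2.0 * s12 / (s13 * s23)

s12, x13, x23 = 1.0, 0.3, 0.7
for lam in (1e-3, 1e-5, 1e-7):
    ratio = A30(s12, lam*x13, lam*x23) / eikonal(s12, lam*x13, lam*x23)
    print(lam, ratio)          # ratio -> 1 in the soft limit
assert abs(A30(s12, 1e-7*x13, 1e-7*x23)
           / eikonal(s12, 1e-7*x13, 1e-7*x23) - 1.0) < 1e-5
```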
We find two integrated fragmentation antenna functions. Identifying the gluon, we have \begin{eqnarray} \mathcal{A}^{0, {\rm id.} g}_{3,\bar{q}}(z)&=& \left(q^2\right)^{-\epsilon} \bigg[ -\frac{1}{2 \epsilon } p^{(0)}_{gq}(z) +\frac{1}{4}+\frac{z}{8}+\left(-\frac{3}{8}+\frac{1}{2} \log (1-z)+\frac{\log (z)}{2}\right) p^{(0)}_{gq}(z) \bigg] \nonumber \\ &&+\mathcal{O}(\epsilon) \end{eqnarray} and in case the quark is identified, we find \begin{eqnarray} \mathcal{A}^{0, {\rm id.} q}_{3,\bar{q}}(z)&=& -2 \mathbf{I}_{q\bar{q}}^{(1)}(\epsilon ,q^2) \delta(1-z)+ \left(q^2 \right)^{-\epsilon} \bigg[ -\frac{1}{2\epsilon } p^{(0)}_{qq}(z) + \frac{3}{8}-\frac{z}{8}+\left(\frac{47}{16}+\frac{\pi ^2}{6}\right) \delta(1-z) \nonumber \\ &&+\mathcal{D}_1(z)-\frac{1}{2} (1+z) \log (1-z)+\frac{1+z^2}{2(1-z)} \log(z) -\frac{3}{8} p^{(0)}_{qq}(z) \bigg] + \mathcal{O}(\epsilon) \, . \label{eq:A30FFqbq} \end{eqnarray} In the subtraction of quark-photon collinear limits the antenna function $A^0_3(1_{\bar{q}},3_{\gamma},2_q)$ is used. Its unintegrated form coincides with \eqref{eq:A30FFunint} and we have $\mathcal{A}^{0, {\rm id.} \gamma}_{3,\bar{q}} = \mathcal{A}^{0, {\rm id.} g}_{3,\bar{q}}$. The tree-level quark-gluon antenna function can be expressed as \begin{equation} D^0_3(1_q, 2_g , 3_g) = d^0_3(1_q, 2_g , 3_g) + d^0_3(1_q, 3_g, 2_g) \label{eq:D30FF} \end{equation} with the sub-antenna \begin{equation} d^0_3(1_q, 2_g, 3_g) = \frac{1}{s_{123}^2} \left( \frac{2 s_{123}^2 s_{13}}{s_{12} s_{23}}+\frac{s_{123} s_{23}}{s_{12}}+\frac{s_{12} s_{13}}{s_{23}}+\frac{s_{12}}{2}+2 s_{123}+\frac{s_{13}}{2} \right) + \mathcal{O}(\epsilon) \, . \label{eq:d30FF} \end{equation} In the sub-antenna at hand gluon 3 acts as a hard radiator while the full antenna \eqref{eq:D30FF} also contains the soft limit of gluon 3. The reference particle used in the definition of the momentum fraction has to be a hard radiator. 
Therefore, if we want to use the quark-gluon antenna function with the gluon as the reference particle we have to use the sub-antenna in which the reference gluon is a hard radiator. Integrating \eqref{eq:d30FF} and remaining differential in the gluon momentum fraction, we find \begin{equation} \mathcal{D}^{0, {\rm id.} g}_{3,g}(z)=\left(q^2\right)^{-\epsilon} \bigg[ -\frac{1}{2 \epsilon } p^{(0)}_{gq}(z) + \frac{5}{8}+\frac{z}{8}+\left(-\frac{11}{24}+\frac{1}{2} \log (1-z)+\frac{\log (z)}{2}\right) p^{(0)}_{gq}(z) \bigg] + \mathcal{O}(\epsilon) \end{equation} and for the case where the quark momentum is identified we have \begin{eqnarray} \mathcal{D}^{0, {\rm id.} q}_{3,g}(z) &=&-2 \mathbf{I}_{qg}^{(1)}(\epsilon,q^2) \delta(1-z)+ \left(q^2\right)^{-\epsilon} \bigg[ -\frac{1}{2\epsilon} p^{(0)}_{qq}(z) + \frac{3}{4}-\frac{z}{8}+\left(\frac{167}{48}+\frac{\pi ^2}{6}\right) \delta(1-z) \nonumber \\ &&+\mathcal{D}_1(z)-\frac{1}{2} (1+z)\log (1-z) + \frac{1+z^2}{2(1-z)} \log(z) -\frac{11}{24} p^{(0)}_{qq}(z) \bigg] + \mathcal{O}(\epsilon) \, . \label{eq:D30FFgq} \end{eqnarray} When the quark acts as a reference particle we can integrate the full antenna function \eqref{eq:D30FF}. In this case the fragmentation antenna function reads \begin{eqnarray} \mathcal{D}^{0, {\rm id.} g}_{3,q}(z)&=&-2 \mathbf{I}_{qg}^{(1)}(\epsilon,q^2) \delta(1-z) +\left(q^2\right)^{-\epsilon} \bigg[ -\frac{1}{2 \epsilon} p^{(0)}_{gg}(z) + \frac{5}{3}-\frac{13 z}{12}+\frac{13 z^2}{12} \nonumber \\ &&+\left(\frac{49}{16}+\frac{\pi ^2}{6}\right) \delta(1-z) +\mathcal{D}_1(z) + \left(-2 +\frac{1}{z} + z - z^2\right) \log (1-z) \nonumber \\ &&+\frac{(1-z+z^2)^2}{(1-z)z} \log(z) -\frac{3 }{8} p^{(0)}_{gg}(z) \bigg] + \mathcal{O}(\epsilon) \, . \end{eqnarray} The last quark-gluon antenna function is \begin{equation} E^0_3(1_q,2_{q'},3_{\bar{q}'}) = \frac{1}{s_{123}^2} \left( \frac{(s_{12}+s_{13})^2}{s_{23}}-\frac{2 s_{12}s_{13}}{s_{23}}+ (s_{12}+s_{13}) \right) + \mathcal{O}(\epsilon) \, . 
\end{equation} It is symmetric under the exchange of particle 2 and 3. Phase space integration with reference particle $q$ and identified particle $q'$ yields \begin{equation} \mathcal{E}^{0, {\rm id.} q'}_{3,q}(z)= \left(q^2\right)^{-\epsilon} \bigg[ -\frac{1}{2 \epsilon } p^{(0)}_{qg}(z) + \frac{2}{3} + \left( -\frac{17}{12}+\frac{1}{2} ( \log (1-z) +\log (z)) \right) p^{(0)}_{qg}(z) \bigg] + \mathcal{O}(\epsilon) \, . \label{eq:E30FFqqp} \end{equation} In case the primary quark $q$ is identified, we have \begin{equation} \mathcal{E}^{0, {\rm id.} q}_{3,\bar{q}'}(z)= -4 \mathbf{I}_{qg,F}^{(1)}(\epsilon,q^2) \delta(1-z) +\left(q^2\right)^{-\epsilon} \bigg[ -\frac{1}{12}-\frac{11}{12} \delta(1-z)+\frac{1}{3} \mathcal{D}_0(z) \bigg] + \mathcal{O}(\epsilon) \, . \end{equation} The first gluon-gluon antenna function is \begin{equation} F^0_3(1_g, 2_g , 3_g) = f^0_3(1_g , 2_g , 3_g) + f^0_3(1_g , 3_g , 2_g) \label{eq:F30FF} \end{equation} with the sub-antenna \begin{equation} f^0_3(1_g, 2_g , 3_g) = \frac{1}{s_{123}^2} \left( \frac{2 s_{123}^2 s_{13}}{s_{12} s_{23}}+\frac{s_{13} s_{23}}{s_{12}}+\frac{s_{12} s_{13}}{s_{23}}+\frac{8 s_{123}}{3} \right) + \mathcal{O}(\epsilon) \, . \end{equation} In \eqref{eq:F30FF} we have fixed $1_g$ to be the hard radiator which is used as a reference particle in the definition of the momentum fraction. Consequently, it does not contain the sub-antenna $f^0_3(2_g, 1_g , 3_g)$. 
Integration of \eqref{eq:F30FF} over the phase space while remaining differential in the final-state momentum fraction yields \begin{eqnarray} \mathcal{F}^{0, {\rm id.} g}_{3,g}(z)&=&-2 \mathbf{I}_{gg}^{(1)}(\epsilon,q^2) \delta(1-z)+ \left(q^2\right)^{-\epsilon} \bigg[ -\frac{1}{2 \epsilon } p^{(0)}_{gg}(z) + \frac{4}{3}-\frac{11 z}{12}+\frac{11 z^2}{12} \nonumber \\ &&+\left(\frac{523}{144}+\frac{\pi ^2}{6}\right) \delta(1-z)+\mathcal{D}_1(z) + \left(-2 +\frac{1}{z} +z-z^2 \right) \log(1-z) \nonumber \\ &&+ \frac{(1-z+z^2)^2}{(1-z)z} \log(z) -\frac{11}{24} p^{(0)}_{gg}(z) \bigg] + \mathcal{O}(\epsilon) \, . \end{eqnarray} The second gluon-gluon antenna function is \begin{equation} G^0_3(1_g, 2_{q'} , 3_{\bar{q}'}) = \frac{1}{s_{123}^2} \left( \frac{(s_{12}+s_{13})^2}{s_{23}}-\frac{2 s_{12} s_{13}}{s_{23}} \right) + \mathcal{O}(\epsilon) \, . \end{equation} It is symmetric under the exchange of particle 2 and 3. Using the gluon as reference particle and remaining differential in the momentum fraction of $q'$, the phase space integration gives \begin{equation} \mathcal{G}^{0, {\rm id.} q'}_{3,g}(z) = \left(q^2\right)^{-\epsilon} \bigg[ -\frac{1}{2 \epsilon } p^{(0)}_{qg}(z) + \frac{1}{2} + \left(-\frac{17}{12} +\frac{1}{2} (\log (1-z) +\log (z) )\right) p^{(0)}_{qg}(z) \bigg] + \mathcal{O}(\epsilon) \, . \label{eq:G30FFgqp} \end{equation} In case of an identified gluon we have \begin{equation} \mathcal{G}^{0, {\rm id.} g}_{3,\bar{q}'}(z)=-2 \mathbf{I}_{gg,F}^{(1)}(\epsilon,q^2) \delta(1-z) + \left(q^2\right)^{-\epsilon} \bigg[ -\frac{1}{6}-\frac{z}{6}-\frac{11}{12} \delta(1-z)+\frac{1}{3} \mathcal{D}_0(z) \bigg]+\mathcal{O}(\epsilon) \, . \end{equation} \end{appendix} \bibliographystyle{JHEP}
\section{Introduction} Quantum metrology offers the promise to measure certain parameters with a higher precision than using classical resources only. More precisely, given a physical process $\Lambda(\varphi)$ depending on a parameter $\varphi$, one can estimate $\varphi$ with higher accuracy, if the process is applied to an entangled state of $N$ particles instead of $N$ separate particles in individual states. In the typical case, $\varphi$ is a phase acquired by a unitary evolution, which can, using entanglement, be determined with an accuracy of $(\Delta \varphi)^2 \propto 1/N^2$, the so-called Heisenberg limit (HL). Contrary to that, with separable states the standard quantum limit (SQL) $(\Delta \varphi)^2 \propto 1/N$ is an upper bound on the precision \cite{oldmetrology, Huelga1997, Sorensen2001, Toth2014, Giovannetti2006, GiovanettiScience, GiovanettiPhotonics, Pezze2009}. In any real application, however, errors are unavoidable and one has to ask whether quantum metrology offers an advantage even in the presence of noise and decoherence. Here, it was realised that noise can have a detrimental effect \cite{Escher2011, Demkowicz-Dobrzanski2012}. In fact, for generic noise models and estimation schemes, where the same unitary evolution is applied to all particles, it was shown that the Heisenberg scaling cannot be retained. This does not necessarily mean that quantum effects do not offer any advantage anymore, but it shows that one has to consider specific situations and noise models in detail, in order to find the best quantum mechanical estimation scheme. In fact, it has been shown that for very specific models the Heisenberg scaling can still be achieved \cite{acinmetro} and also ideas from quantum error correction can be used to fight against noise \cite{ecmetro1, ecmetro2, ecmetro3}. Finally, for specific noise models the optimal states for large numbers of particles have been determined \cite{Frowis2014}. 
In this paper, we investigate phase and frequency estimation under the effect of collective phase noise, which is a typical noise model for ion trap experiments \cite{Monz2011}. In the first part, we consider the standard linear estimation scheme and optimize the initial probe states under collective rotations. It turns out that even with this optimization the states do not provide a significant advantage over separable states, hence new concepts are needed. In the second part, we consider differential interferometry (DI) as such an alternative concept. In DI the time evolution is only applied to a subset of the particles, while the other particles are used to monitor the noise only. This means that the known negative results \cite{Escher2011, Demkowicz-Dobrzanski2012} do not apply. We use the scenario of DI as introduced in Ref.~\cite{Landini2014}, where it was already shown that DI can sometimes be useful for suppressing decoherence. For our noise model, we present a detailed study of which states are optimal, how many particles should be used for applying the time evolution, and how many should be used for monitoring the noise. It turns out that a Heisenberg scaling can be reached again. Finally, we briefly discuss possible implementations of DI using trapped ions. This paper is organized as follows: In Section II we describe the metrology scheme and the noise model that we are using. In Section III we determine the optimized states for standard interferometry using our noise model. Section IV deals with differential interferometry. We explain the scheme and discuss the optimal states. We also comment on possible experimental implementations. Finally, we conclude and discuss further open problems. In the Appendix we present detailed calculations and derivations.
\begin{figure*} \begin{center} \subfigure[ ]{\includegraphics[width=0.4\textwidth]{phase-estimation-schema2.pdf}} \subfigure[ ]{\includegraphics[width=0.45\textwidth]{phase-estimation-schema3.pdf}} \caption{Measurement schemes in quantum metrology. A map $\Lambda_\varphi$ acts on each particle individually. The linear map $\Lambda_\varphi$ depends on the parameter $\varphi$. This parameter will be estimated by a measurement. \textbf{(a)}: All particles are initially in a separable or entangled state. \textbf{(b)}: Differential Interferometry. The initial state is a bipartite state with $N_1$ particles in the first partition and $N-N_1$ in the second partition. The linear map $\Lambda_\varphi$ acts on the particles of the second partition only. }\label{fig:Metrology} \end{center} \end{figure*} \section{The set-up and the noise model} \label{sec:noise} In standard metrological schemes (see Fig.~\ref{fig:Metrology}~(a)), $N$ particles are in an initial state $\varrho_0$. A time evolution depending on the parameter $\varphi$ acts on each particle individually. The goal is to estimate this parameter $\varphi$ by measurements. In classical schemes, the particles are only classically correlated and therefore initially in a separable state. The variance for measuring $\varphi$ is bounded by the so-called Standard Quantum Limit (SQL) $(\Delta \varphi)^2 \propto 1/N$. In quantum metrology the particles can be entangled. With such states the Heisenberg Limit (HL) $(\Delta \varphi)^2 \propto 1/N^2$ can be reached theoretically \cite{Sorensen2001, GiovanettiScience, Giovannetti2006, Pezze2009}. As a consequence, there is an enhancement in precision by a factor of $1/N$ by using entangled states. However, in realistic experiments, noise affects the particles and reduces the entanglement and thereby the enhancement of using entangled states. These noise effects arise because the probe system cannot be perfectly separated from its environment.
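The gap between the SQL and the HL can be made concrete with a small numerical example. For pure probe states the QFI introduced below reduces to $F_Q = 4\,\mathrm{Var}(S_z^N)$, so the product state $\ket{+}^{\otimes N}$ yields $F_Q = N$ while the GHZ state yields $F_Q = N^2$. A minimal sketch (our conventions, $N=4$):

```python
import numpy as np

# Illustration of SQL vs HL via the QFI.  For a pure probe state the QFI
# with respect to the phase generator S_z^N reduces to 4 Var(S_z^N); the
# product state |+>^N then gives F_Q = N, the GHZ state gives F_Q = N^2.

def collective_sz(N):
    """S_z^N = sum_i sigma_z^(i)/2, diagonal in the computational basis."""
    diag = [0.5 * (N - 2 * bin(k).count("1")) for k in range(2**N)]
    return np.diag(diag)

def qfi_pure(psi, G):
    """F_Q = 4 (<G^2> - <G>^2) for a pure state psi."""
    m1 = (psi.conj() @ G @ psi).real
    m2 = (psi.conj() @ G @ G @ psi).real
    return 4.0 * (m2 - m1**2)

N = 4
G = collective_sz(N)
plus = np.ones(2**N) / np.sqrt(2**N)                  # |+>^N (uniform superposition)
ghz = np.zeros(2**N); ghz[0] = ghz[-1] = 1/np.sqrt(2) # GHZ state
print(qfi_pure(plus, G), qfi_pure(ghz, G))            # SQL: N = 4, HL: N^2 = 16
```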
A possible effect is that the energy splitting of the two-level system is influenced by noise from the environment. This causes the level splitting to fluctuate in time. An example of such a noise effect is given by magnetic field fluctuations in systems with magnetic-field-dependent energy splitting. In the simplest noise model, all qubits receive the same fluctuations; this is also called collective phase noise. Collective phase noise is, besides micromotion, the main source of noise in experiments with ions, as described in Ref. \cite{Monz2011}. In experiments with atoms, the trapping potential fluctuates in time. Those fluctuations also cause collective phase noise, which is, besides particle loss, the main source of noise in experiments with atoms. Without loss of generality, we assume time-dependent magnetic field fluctuations as the noise source in this paper. However, noise due to trapping potential fluctuations can be described with the same noise model. \begin{figure*} \subfigure[ ]{\hfill \includegraphics[width=0.4\textwidth]{QFI8_vs_theta_GHZ.pdf}}\hspace{0.5cm} \subfigure[ ]{\includegraphics[width=0.4\textwidth]{QFI8_vs_theta_dicke.pdf}} \caption{QFI for phase estimation with $N=8$ qubits for different rotated states over rotation angle $\alpha$. Different colors (Color online) represent different measurement times $T$. \textbf{(a)}: QFI for phase estimation with rotated GHZ states $\ket{\mathrm{GHZ}(\alpha)}$. \textbf{(b)}: QFI for phase estimation with rotated symmetric Dicke states $\ket{\mathrm{D}(\alpha)}$. The upper pictures visualize rotated symmetric Dicke states in the Bloch representation.
}\label{fig:Rotation} \end{figure*} In a realistic experiment, the Hamiltonian for $N$ particles with the atomic transition frequency $\omega_0$, the additional Zeeman splitting due to the magnetic offset field $B_0$, and the magnetic field fluctuations $\Delta B(t)$ is given by \begin{equation} H=\hbar (\underbrace{\omega_0+\gamma B_0}_{=\omega}) S_z^N + \hbar \gamma \Delta B(t) S_z^N, \end{equation} with the transition frequency $\omega$. Here $S_l^N=\sum_{i=1}^N \sigma_l^{(i)}/2$ is the collective spin operator acting on $N$ particles with $l \in \{x,y,z\}$ and the Pauli matrices $\sigma_l^{(i)}$ acting on the $i$-th ion. The free evolution of the initial state $\vr_0$ over the time interval $\tau \in \left[0,T\right]$ can be described by the unitary operator \begin{equation} U=\mathrm{exp}\left[-i\left(\omega T + \gamma \int_0^T \mathrm{d}\tau \Delta B(\tau)\right) S_z^N\right].\label{eq:timeevolution} \end{equation} Here, the magnetic field fluctuations cause phase fluctuations such that the overall phase at a fixed time $T$ is $\Phi = \omega T + \gamma \int_0^T \mathrm{d}\tau \Delta B(\tau)= \omega T + \delta \varphi$. We decompose this unitary into two commuting parts as $U=U_z(\omega T) U_z(\delta\varphi)$, where $U_z(\omega T)$ describes the signal and $U_z(\delta\varphi)$ the noise, with $U_l(\alpha)=\exp\left[-i \alpha S^N_l\right]$. The state evolution due to the noise can be described by \begin{align} \bar{\varrho}_T= \braket{U_z(\delta\varphi)\vr_0 U^\dagger_z(\delta\varphi)}_{\delta\varphi} \label{eq:average} \end{align} with $\braket{.}_{\delta\varphi}$ denoting the average over all phase fluctuations $\delta\varphi$.
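For Gaussian phase fluctuations of variance $\sigma^2$ (the case assumed below), the average in Eq.~\eqref{eq:average} damps the coherence of a single qubit by $e^{-\sigma^2/2}$, since $U_z(\delta\varphi)$ multiplies $\braket{0|\varrho|1}$ by $e^{-i\delta\varphi}$. A minimal Monte-Carlo sketch of this averaging ($\sigma$, seed and sample size are arbitrary choices):

```python
import numpy as np

# Monte-Carlo check of the noise average in Eq. (eq:average) for a single
# qubit: U_z(dphi) multiplies the coherence <0|rho|1> by e^{-i dphi}, so
# for Gaussian fluctuations dphi ~ N(0, sigma^2) the averaged coherence
# is damped by exp(-sigma^2/2).

rng = np.random.default_rng(0)
sigma = 0.5
rho0_01 = 0.5                               # coherence of the |+> state

dphi = sigma * rng.standard_normal(200_000)
mc = rho0_01 * np.mean(np.exp(-1j * dphi))  # Monte-Carlo averaged coherence
exact = rho0_01 * np.exp(-sigma**2 / 2.0)
print(abs(mc), exact)                       # should agree to a few parts in 10^3
```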
The final state $\vr$ at a fixed time $T$ is determined by \begin{align} \vr&=U_z(\omega T) \braket{U_z(\delta\varphi)\vr_0 U^\dagger_z(\delta\varphi)}_{\delta\varphi} U^\dagger_z(\omega T)\\ & = U_z(\omega T) \bar{\varrho}_T U^\dagger_z(\omega T).\label{eq:average2} \end{align} In the following, we make three well-justified assumptions, following Ref. \cite{Monz2011}: First, we assume Gaussian phase fluctuations with $\braket{\delta\varphi}_{\delta\varphi}=0$. This means that there is no systematic time-dependent bias due to phase fluctuations. Second, we assume the time correlation $\braket{\Delta B(t) \Delta B(0)}=\Delta B^2 \exp\left[-t/\tau_c \right] $ to decay exponentially with the correlation time $\tau_c$ and the fluctuation strength $\Delta B$. Third, the noise process can be regarded as stationary, $\braket{\Delta B(t+\tau)\Delta B(t)}=\braket{\Delta B(\tau)\Delta B(0)}$. The uncertainty achievable with the help of the time-dependent probe state $\varrho(T)$ is lower bounded by the quantum Fisher information (QFI) $F_Q$ via the Cram{\'e}r-Rao bound \cite{Huelga1997, Helstrom1976, Holevo1982,Braunstein1994,Braunstein1996} \begin{equation} (\Delta \varphi)^2 \ge \frac{1}{F_Q}.\label{eq:cramer_rao} \end{equation} The QFI $F_Q[\varrho_0, \Lambda_\varphi]$ is defined as \begin{equation} F_Q[\varrho_0, \Lambda_\varphi]=2\sum_{\alpha,\beta} \frac{|\braket{\alpha|\partial_\varphi \varrho|\beta}|^2}{\lambda_\alpha + \lambda_\beta}\label{eq:QFI} \end{equation} with the eigenvalues $\{\lambda_\alpha\}$ and the eigenvectors $\{\ket{\alpha}\}$ of the initial state $\vr_0$. The QFI depends only on the initial state and on the change of the state $\partial_\varphi \varrho$ due to the linear map $\vr=\Lambda_\varphi (\vr_0)$, and corresponds to an optimization over all possible measurements. For the time evolution given by Eq. 
\eqref{eq:timeevolution}, the QFI for the parameter $\varphi=\omega T$ is given by \begin{equation} F^\varphi_Q[\bar{\varrho}_T,S_z^N]=4\sum_{\alpha<\beta} \frac{(\lambda_\alpha - \lambda_\beta)^2}{\lambda_\alpha + \lambda_\beta}|\braket{\alpha|S_z^N|\beta}|^2 \end{equation} with the eigenvalues $\{\lambda_\alpha\}$ and the eigenvectors $\{\ket{\alpha}\}$ of the averaged state $\bar{\varrho}_T$ given in Eq. \eqref{eq:average}. For the estimation of the frequency $\omega$ we find $F^\omega_Q[\bar{\varrho}_T,T \,S_z^N]=T^2F^\varphi_Q[\bar{\varrho}_T,S_z^N]$. In the following, we investigate the performance of different probe states depending on time. For this estimate, we assume typical field fluctuations on the order of $\gamma \Delta B=2 \pi\cdot 50\,$Hz and correlation time $\tau_c=1\,$s (see e.g. Ref. \cite{Baumgart2014}). \section{Phase and frequency estimation with rotated GHZ and symmetric Dicke states} \label{sec:usual_metrology} In the noiseless case, Greenberger-Horne-Zeilinger (GHZ) states \cite{Greenberger1989} are known to be best for phase estimation in order to reach the HL. Under collective phase noise, they are optimal for frequency estimation if the measurement time can be optimized \cite{Frowis2014}, which is not always possible. They have been realized in several experiments with photons \cite{Bouwmeester1999,Bouwmeester2000} and trapped cold ions \cite{Sackett2000,Meyer2001,Monz2011}. It is known that GHZ states are highly sensitive to particle loss. Losing a particle transforms the state into a separable state, which is useless from a metrological perspective. Dicke states \cite{Dicke1954} are much more robust to particle loss, which makes them interesting for quantum metrology and quantum information processing with BECs \cite{DickeBEC}, photons \cite{DickePhotons} and trapped cold ions \cite{Schindler2013}. A simple way to enhance the robustness of GHZ and symmetric Dicke states is given by collective rotations. 
Therefore, for both phase and frequency estimation, we will optimize probe states over collective rotations and test their enhancement in comparison to product states in experiments with collective phase noise. \subsection{GHZ states} \begin{figure*} \subfigure[ ]{\includegraphics[width=0.49\textwidth]{opt8_1.pdf}}\hfill \subfigure[ ]{\includegraphics[width=0.49\textwidth]{opt8_2.pdf}} \caption{QFI for phase and frequency estimation with $N=8$ qubits. The solid lines are the QFI optimized over the rotation angle $\alpha$ and dashed lines are the QFI of the original states. \textbf{(a):} QFI for phase estimation over the time $T$ for different states. \textbf{(b):} The upper plot shows the QFI for frequency $\omega$ estimation. The lower plot shows the optimal rotation angle $\alpha_\mathrm{opt}$ over the time for the tested states. }\label{fig:Varianceforfreq} \end{figure*} The QFI for the GHZ state $\ket{\mathrm{GHZ}}=(\ket{0}^{\otimes N}+\ket{1}^{\otimes N})/\sqrt{2}$ under collective phase noise is given by \begin{equation} F_Q^\varphi\left[\bar{\varrho}_T,S_z^N\right]=N^2 \mathrm{e}^{-N^2 C(T)} \label{eq:QFIGHZ} \end{equation} with $C(T)=\left(\gamma \Delta B \tau_c\right)^2 \left[\exp(- T/\tau_c)+T/\tau_c -1\right]$ (see Appendix \ref{app:noisy_GHZ} for a detailed calculation). The same result can be obtained by solving the master equation for collective phase noise, as has been done in Ref. \cite{Frowis2014}, with $(\Delta B)^2=2/(\gamma \tau_c)^2$. This result shows that in the noiseless case, $T=0$, the HL $F_Q^\varphi = N^2$ can be reached. For $T>0$, the QFI decreases, because the state evolves into a mixed state. The larger $N$, the faster the QFI decreases. For frequency estimation, the QFI increases as $T^2$ for small $T$ and decreases exponentially in time for larger $T$. As a result, there exists an optimal measurement time. 
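As a numerical cross-check of Eq. \eqref{eq:QFIGHZ} (a sketch with our own function names, using the typical noise parameters quoted above), one can evaluate the spectral QFI formula directly on the dephased GHZ state, whose coherence survives only in the $\{\ket{0}^{\otimes N},\ket{1}^{\otimes N}\}$ block:

```python
import numpy as np

GAMMA_DB = 2 * np.pi * 50   # gamma * DeltaB [rad/s], typical value from the text
TAU_C = 1.0                 # correlation time [s]

def c_of_t(t):
    # C(T) = (gamma DeltaB tau_c)^2 [exp(-T/tau_c) + T/tau_c - 1]
    return (GAMMA_DB * TAU_C) ** 2 * (np.exp(-t / TAU_C) + t / TAU_C - 1.0)

def qfi_ghz_closed_form(n, t):
    # closed form: F = N^2 exp(-N^2 C(T))
    return n**2 * np.exp(-n**2 * c_of_t(t))

def qfi_ghz_spectral(n, t):
    """Spectral QFI evaluated on the {|0..0>, |1..1>} block, the only
    block of the dephased GHZ state that carries coherence."""
    d = np.exp(-n**2 * c_of_t(t) / 2.0)         # surviving coherence d(t)
    rho = 0.5 * np.array([[1.0, d], [d, 1.0]])  # dephased GHZ state in that block
    sz = np.diag([n / 2.0, -n / 2.0])           # S_z^N restricted to the block
    lam, v = np.linalg.eigh(rho)
    g = v.T @ sz @ v
    f = 0.0
    for a in range(2):
        for b in range(a + 1, 2):
            if lam[a] + lam[b] > 1e-15:
                f += 4 * (lam[a] - lam[b]) ** 2 / (lam[a] + lam[b]) * g[a, b] ** 2
    return f
```

Both evaluations agree to machine precision, and at $T=0$ they reproduce the Heisenberg limit $N^2$.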
A simple experimentally realizable optimization over the input state is given by collective rotations \begin{equation} U_y(\alpha)=\mathrm{exp}\left[-i \alpha S_y^N\right]. \end{equation} These rotations can be realised with a short laser pulse on all qubits. Due to the symmetry of the state, this rotation can be realised around any axis in the $x/y$-plane. Without loss of generality, we choose the $y$-axis, so that the initial state $\varrho_0$ in Eq. \eqref{eq:average} changes to \begin{equation} \varrho_0 \rightarrow U_y(\alpha)\varrho_0 U^\dagger_y(\alpha). \end{equation} We define the rotated GHZ state as $\ket{\mathrm{GHZ}(\alpha)}=U_y(\alpha)\ket{\mathrm{GHZ}}$. The QFI for phase estimation with $\ket{\mathrm{GHZ}(\alpha)}$ over the rotation angle $\alpha$ is plotted in Fig. \ref{fig:Rotation} (a). It shows the QFI for an $N=8$ GHZ state in comparison to an $N=8$ non-rotated product state $\ket{\Psi}$ (dashed lines) for different times $T$. For product states $\ket{\Psi}=\ket{+}^{\otimes N}$ with $\ket{+}=(\ket{0}+\ket{1})/\sqrt{2}$, we find the optimal rotation angle $\alpha_{\mathrm{opt}}=0$ for all $T$. The QFI is symmetric around $\alpha=\pi/2$ because of the symmetry of the state. For different times $T$, there exist different optimal rotation angles $\alpha_{\mathrm{opt}} \ge 0$, as shown in Fig. \ref{fig:Rotation}(a). The reason is that the state is rotated into a state which is less sensitive to the magnetic field but also less sensitive to collective phase noise. The QFI over time $T$ for the optimal rotation angle $\alpha_{\mathrm{opt}}$ is plotted in Fig. \ref{fig:Varianceforfreq} (a). Our numerical results show that the QFI for the optimally rotated GHZ state (red solid line) decreases more slowly than that of the non-rotated one (red dashed line) and approaches the QFI for product states (black dashed line) for larger times $T$. For frequency estimation, there exists a global maximum and an optimal measurement time for all tested states, as shown in Fig. 
\ref{fig:Varianceforfreq}(b). Similar to Ref. \cite{Dorner2012}, we find that product states (dashed black line) perform better than GHZ states (dashed red line) for larger $T$. However, the measurement time in real experiments is often constrained by external parameters. Therefore, in experiments limited to small measurement times, optimally rotated GHZ states perform better than product states. \subsection{Symmetric Dicke states} Symmetric Dicke states with $k$ excitations are defined as \begin{equation} \ket{\mathrm{D}^{k}_N}=\frac{1}{\ensuremath{\mathcal{N}}} \sum_j \ensuremath{\mathcal{P}}_j\{\ket{0}^{\otimes N-k}\otimes\ket{1}^{\otimes k} \}, \end{equation} with $\ensuremath{\mathcal{N}}$ being a normalization constant and $\sum_j \ensuremath{\mathcal{P}}_j\{.\}$ denoting the sum over all possible permutations. In experiments with BECs, symmetric Dicke states $\ket{\mathrm{D}^{N/2}_N}$ with $k=\frac{N}{2}$ excitations are often used for quantum metrology, because they are less sensitive to losses (which frequently occur in such experiments) and still show a good scaling $F_Q\propto N(N+2)/2$ in the noiseless case. In the following, we investigate their performance in the presence of collective phase noise. In general, symmetric Dicke states are insensitive to rotations around the $z$-axis. Therefore, they need to be rotated, $\ket{\mathrm{D}}\equiv U_{y}(\pi/2)\ket{\mathrm{D}^{N/2}_N}$, such that the scaling $F\propto N(N+2)/2$ can be achieved in the noiseless case. {There are other symmetric Dicke states which could be metrologically useful as long as $k \propto N$. However, the QFI in the noiseless case is maximal for $k=N/2$. Therefore, we focus on symmetric Dicke states with $k=N/2$ excitations.} Similar to GHZ states, the state evolves, due to collective phase noise, into a mixed state and the QFI decreases in time. Again, the performance can be enhanced by global rotations $\ket{\mathrm{D}(\alpha)}\equiv U_{y}(\pi/2+\alpha)\ket{\mathrm{D}^{N/2}_N}$. 
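The noiseless benchmark $F_Q\propto N(N+2)/2$ for the rotated symmetric Dicke state can be verified numerically from $F_Q = 4\,\mathrm{Var}(J_z)$ for pure states, working in the $(N+1)$-dimensional symmetric subspace. A sketch (function name ours, not from the paper):

```python
import numpy as np

def dicke_qfi_noiseless(n):
    """QFI = 4 Var(J_z) of U_y(pi/2)|D_n^{n/2}>, computed in the
    (n+1)-dimensional symmetric (spin j = n/2) subspace for even n.
    Expected value: n*(n+2)/2."""
    j = n / 2.0
    m = j - np.arange(n + 1)        # J_z eigenvalue of the state with k excitations
    jp = np.zeros((n + 1, n + 1))   # J_+ maps k excitations -> k-1
    for k in range(1, n + 1):
        jp[k - 1, k] = np.sqrt(j * (j + 1) - m[k] * (m[k] + 1))
    jy = (jp - jp.T) / 2j
    vals, vecs = np.linalg.eigh(jy)
    # exact rotation exp(-i (pi/2) J_y) via the eigendecomposition of J_y
    u = vecs @ np.diag(np.exp(-1j * (np.pi / 2) * vals)) @ vecs.conj().T
    psi = u[:, n // 2]              # rotated |D^{n/2}>
    jz = np.diag(m)
    mean = np.real(psi.conj() @ jz @ psi)
    mean_sq = np.real(psi.conj() @ jz @ jz @ psi)
    return 4 * (mean_sq - mean**2)
```

For instance, $N=8$ gives $8\cdot 10/2=40$, compared with $N^2=64$ for the GHZ state.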
The optimal rotation angles depending on time can be found in Fig. \ref{fig:Rotation} (b). The QFI for phase estimation with optimally rotated Dicke states $\ket{\mathrm{D}(\alpha_{\mathrm{opt}})}$ (solid yellow or light grey line) is plotted in Fig. \ref{fig:Varianceforfreq} (a). There is a small enhancement of the QFI for optimally rotated Dicke states $\ket{\mathrm{D}(\alpha_{\mathrm{opt}})}$ over non-rotated Dicke states $\ket{\mathrm{D}}$ (dashed yellow or light grey line) for larger $T$. We find a small time interval where optimally rotated Dicke states $\ket{\mathrm{D}(\alpha_{\mathrm{opt}})}$ perform best, that is, even better than optimally rotated GHZ states. For frequency estimation (see Fig. \ref{fig:Varianceforfreq} (b)), non-rotated Dicke states $\ket{\mathrm{D}}$ (dashed yellow or light grey line) perform better than non-rotated GHZ states $\ket{\mathrm{GHZ}(0)}$ (dashed red or dark grey line), and product states (black dashed line) perform best. Nevertheless, there is an enhancement from rotating Dicke states optimally, $\ket{\mathrm{D}(\alpha_{\mathrm{opt}})}$ (solid yellow or light grey line). However, even after optimizing GHZ states and symmetric Dicke states with $N/2$ excitations over the rotation angle, product states (black dashed lines) are still the best for frequency estimation if it is possible to tune the measurement time to the optimal one. In general, the frequency measurement has to be repeated several times and the variance is limited by \begin{equation} (\Delta\omega)^{-2} \le k F_Q^\omega =t_{o} T F_Q^\varphi \end{equation} for $k$ repetitions and the total measurement time $t_o=k T$. If $t_o$ is fixed, GHZ states are optimal for frequency estimation also in the presence of collective phase noise, provided that $T$ can be tuned to its optimum \cite{Frowis2014}. 
In this case, we found that the optimally rotated states reach the same maximum and the same optimal measurement time $T_\mathrm{opt}$ as the non-rotated states; furthermore, for times $T> T_\mathrm{opt}$ they perform better than the non-rotated states. Moreover, both symmetric Dicke states and GHZ states perform better than product states when $T$ can be tuned optimally. However, in experiments with fixed repetition rates $k/t_{o}$, the measurement times $T$ are fixed. For such experiments, our results in Fig. \ref{fig:Varianceforfreq} become important. From those results, the optimal state at a fixed measurement time can be read off, and we find that there is a time interval where $\ket{\mathrm{D}(\alpha_{\mathrm{opt}})}$ is optimal, a time interval where $ \ket{\mathrm{GHZ}(\alpha_{\mathrm{opt}})}$ is optimal, and that for large $T$ product states are optimal. {This behaviour also holds for large $N$, as shown for $N=50$ in Appendix \ref{app:large_N}.} In total, we have found that the GHZ state optimized over the rotation angle has the highest QFI for small times. If it is not possible to measure at small times, another state should be used. Furthermore, for frequency estimation we find that there is no enhancement in precision from rotating Dicke or GHZ states if it is possible to measure at the optimal time. However, for smaller measurement times $T$, there is an enhancement from using one of the optimally rotated states. For long measurement times $T$, the QFI for both phase and frequency estimation decreases to zero for all tested states. Therefore, it is important to investigate other metrological schemes. \section{Differential Interferometry} \label{sec:DI} In Refs. \cite{Demkowicz-Dobrzanski2012,Escher2011}, it has been shown for a linear interferometer that the enhancement from using entangled states in the presence of noise is only a constant factor and not Heisenberg-like. However, in Ref. 
\cite{Landini2014}, it has been shown that with Differential Interferometry (DI) it is possible to reach the HL even in the presence of phase noise, the main mechanism being noise cancellation \cite{Stockton2007}. DI is a non-linear interferometer, for which the results from Refs. \cite{Demkowicz-Dobrzanski2012,Escher2011} do not apply. DI has been used in many areas of physics, such as measurements of rotations \cite{Durfee2006}, gradients \cite{Snadden1998} and fundamental constants \cite{Fixler2007}. So far, DI has been investigated by considering the classical Fisher information for a set of bipartite GHZ states ($\ket{\mathrm{GHZ}}\otimes \ket{\mathrm{GHZ}}$). We will investigate DI for those states by considering the QFI and extend this analysis to the class of bipartite symmetric Dicke states. \begin{figure*} \subfigure[ ]{\includegraphics[width=0.49\textwidth]{QFI8_vs_t2}}\hfill \subfigure[ ]{\includegraphics[width=0.49\textwidth]{opt8_2_neu.pdf}} \caption{Phase and frequency estimation with equal splitting $N_1=N/2$, by using the ideal DI scheme (solid lines) and DI realised with spin-echo-like experiments (dashed lines), described in Sec. \ref{sec:DI_SE}, with $N=8$ qubits. \textbf{(a):} QFI for phase estimation over the time $T$ for the tested states. \textbf{(b):} QFI for frequency $\omega$ estimation over the time $T$. }\label{fig:phaseDI} \end{figure*} In DI, the system is split into two parts. Both parts receive the same noise, but only one part collects the phase $\varphi$ due to a collective rotation around the quantisation axis. This scheme can be interpreted as a measurement of the noise on one part and a measurement of signal plus noise on the other part, such that the noise can be subtracted. It can also be interpreted as a measurement of a phase difference. 
The Hamiltonian for this scheme is given by \begin{equation} H=\hbar \omega(\openone_{N_1} \otimes S_z^{N-N_1}) + \hbar \gamma \Delta B(t) S_z^N \label{eq:DI} \end{equation} with $\openone_{N_1}$ being the identity acting on $N_1$ particles. The last term of Eq. \eqref{eq:DI} describes the noise acting on all particles and the first term is the actual signal. In the noiseless case, the maximal QFI is given by \cite{Giovannetti2006} \begin{equation} F_Q=(\lambda_{\mathrm{max}}-\lambda_{\mathrm{min}})^2=(N-N_1)^2, \end{equation} with $\lambda_{\mathrm{max}}$ ($\lambda_{\mathrm{min}}$) being the maximal (minimal) eigenvalue of the generator $\openone_{N_1} \otimes S_z^{N-N_1}$. This maximal QFI can be reached with the state $ \ket{\Psi}=\left(\ket{v_{\mathrm{max}}}+\ket{v_{\mathrm{min}}}\right)/\sqrt{2}$, where $\ket{v_{\mathrm{max}}}$ and $\ket{v_{\mathrm{min}}}$ are eigenvectors of the generator $\openone_{N_1} \otimes S_z^{N-N_1}$ corresponding to the maximal and minimal eigenvalues, respectively. Optimizing the maximal QFI over the splitting $N_1$ leads to the standard metrological scheme $N_1=0$, discussed in Sec. \ref{sec:usual_metrology}. Here, GHZ states are optimal. However, this state suffers massively from collective phase noise, which leads to $F_Q=0$ for long measurement times, in the steady-state regime. {Due to the noise, the state evolves into a mixed state until it becomes a mixture of states from the decoherence free subspace (DFS). This mixed state does not change under collective phase noise and is called the steady state. } In the steady-state regime, the state with maximal QFI is given by (see Appendix \ref{app:DI_optimal_state}) \begin{equation} \ket{\Psi_{\mathrm{opt}}}=\frac{1}{\sqrt{2}}\left(\ket{\underbrace{0 \ldots 0}_{N/2}\underbrace{1 \ldots 1}_{N/2}}+\ket{\underbrace{1 \ldots 1}_{N/2}\underbrace{0 \ldots 0}_{N/2}}\right),\label{eq:DFS} \end{equation} with $N_1=N/2$ being optimal. 
This state is decoherence free with respect to collective phase noise, such that the QFI for this state is constant in time, $F_Q^{\varphi}= N^2/4$, and reaches the HL. However, for equal splitting $N_1=N/2$, it has also been shown that the state $\ket{\mathrm{GHZ}}\otimes \ket{\mathrm{GHZ}}$ performs well in the presence of correlated phase noise, such that the HL can be reached up to a constant factor. This state contains only $N/2$-particle entanglement, whereas the decoherence-free state from Eq. \eqref{eq:DFS} is a genuine multiparticle entangled state. In experiments with ions like in Ref. \cite{Monz2011}, the more multiparticle entanglement a state contains, the harder it is to prepare the state with high fidelity. Therefore, we will focus on initial states of the form \begin{equation} \ket{\Psi_N}=\ket{\tilde{\Psi}_{N_1}}\otimes \ket{\tilde{\Psi}_{N-N_1}},\label{eq:DI_initial_states} \end{equation} where $\ket{\Psi_N}$ denotes an $N$ particle state, as described in Fig. \ref{fig:Metrology} (b). We will compare the class of states $\ket{\tilde{\Psi}_{N/2}}=\ket{\mathrm{GHZ}}$ with equal splitting $N_1=N/2$, investigated in Ref. \cite{Landini2014}, with the class of states given by \begin{equation} \ket{\mathrm{D}_{N_1}^{k_1},\mathrm{D}_{N-N_1}^{k_2}}_x=U_y\left(\frac{\pi}{2}\right)\ket{\mathrm{D}_{N_1}^{k_1}} \otimes U_y\left(\frac{\pi}{2}\right)\ket{\mathrm{D}_{N-N_1}^{k_2}},\label{eq:Dicke_state} \end{equation} which are bipartite symmetric Dicke (BSD) states in the $x$ basis at both inputs. \subsection{Phase and frequency estimation} In the following, we will first analyse the scaling behaviour of the initial states mentioned here in DI, that is, in the decoherence-free case. Then, we will investigate the change of the QFI when adding noise. Finally, we will examine the scaling behaviour in the steady-state regime. 
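The noiseless maximum $(N-N_1)^2$ quoted above can be checked numerically: for the superposition $(\ket{v_{\mathrm{max}}}+\ket{v_{\mathrm{min}}})/\sqrt{2}$ of extremal generator eigenvectors, the QFI is $4\,\mathrm{Var}$ of the generator. A brute-force sketch (function names ours, not from the paper):

```python
import numpy as np
from itertools import product

def qfi_optimal_noiseless(n, n1):
    """QFI of (|v_max> + |v_min>)/sqrt(2) for the generator 1_{n1} x S_z^{n-n1},
    via F = 4 Var; only the last n-n1 qubits contribute +-1/2 each, so the
    identity factor can be ignored.  Expected value: (n - n1)^2."""
    diag = [sum(0.5 if b == 0 else -0.5 for b in bits)
            for bits in product([0, 1], repeat=n - n1)]
    lo, hi = min(diag), max(diag)
    # equal-weight superposition of the two extremal eigenvectors
    mean = (hi + lo) / 2.0
    var = ((hi - mean) ** 2 + (lo - mean) ** 2) / 2.0
    return 4 * var
```

For $N_1=0$ this reproduces the Heisenberg limit $N^2$ of the standard scheme, and for equal splitting $N_1=N/2$ it gives $N^2/4$.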
For phase estimation with these initial states and equal splitting $N_1=N/2$, we find that the QFI scales as $F_Q^\varphi =N/2$ for $\ket{\tilde{\Psi}_{N/2}}$ being product states. For GHZ states $\ket{\tilde{\Psi}_{N/2}}=\ket{\mathrm{GHZ}}$ we find $F_Q^\varphi =N^2/4$, and for the BSD state $\ket{\tilde{\Psi}_{N/2}}=U_{y}(\pi/2)\ket{\mathrm{D}^{N/4}_{N/2}}$ we find $F_Q^\varphi =N(N+4)/8$. In the presence of collective phase noise, as described in Sec. \ref{sec:noise}, the QFI decreases with the time $T$, as shown in Fig. \ref{fig:phaseDI} (a) for $N=8$. Nevertheless, the optimal rotation angle $\alpha$ for bipartite GHZ and BSD states is $\alpha_\mathrm{opt}=0$ for all $T$ in DI with equal splitting $N_1=N/2$. However, in comparison to the results without DI, for all tested states the QFI does not decrease to zero. It decreases to a constant value $F_Q^\varphi[\vr_\mathrm{f}]\xrightarrow{} \mathrm{const} > 0$, with $\vr_\mathrm{f}$ being the steady state of the system. For frequency estimation we find no maximum for any of the probe states, so there is no finite optimal measurement time. When the QFI for phase estimation becomes constant, that is, in the steady-state regime, the QFI for frequency estimation scales as $F_Q^\omega \propto T^2$: the larger the measurement time $T$, the better. The QFI for frequency estimation with bipartite GHZ and BSD states, both with $N_1=N/2$, is plotted in Fig. \ref{fig:phaseDI} (b), and we can see that there is an enhancement from using one of the tested entangled states. In the steady-state regime, for large $T$, the QFI for phase estimation becomes constant. 
For product states and equal splitting, this constant can be calculated analytically (see Appendix \ref{app:DI_scaling_ss_product}), yielding \begin{equation} F_Q^\varphi[\vr_\mathrm{f}]=N/4.\label{eq:QFI_for_product} \end{equation} For bipartite GHZ states ($\ket{\mathrm{GHZ}}\otimes \ket{\mathrm{GHZ}}$) and equal splitting, this constant can also be calculated analytically (see Appendix \ref{app:DI_scaling_ss_GHZ}), yielding \begin{equation} F_Q^\varphi[\vr_\mathrm{f}]=N^2/8. \end{equation} In both cases, the QFI of the initial state is greater than that of the steady state by a constant factor of two. For the BSD states, we find (see Appendix \ref{app:DI_scaling_ss_Dicke}) \begin{align} \begin{split} F_Q^\varphi[\vr_\mathrm{f}] &= 4 \sum_{k'=0}^{N} \left\lbrace\sum_{q=a}^{b}\left(d^{N_1}_{q,k_1}d^{N-N_1}_{k'-q,k_2}\right)^2\left(k'-q-\frac{N-N_1}{2}\right)^2 \right. \\ &\left. -\frac{\left[\sum_{q=a}^{b}\left(d^{N_1}_{q,k_1}d^{N-N_1}_{k'-q,k_2}\right)^2 \left(k'-q-\frac{N-N_1}{2}\right)\right]^2}{\sum_{q=a}^{b}\left(d^{N_1}_{q,k_1}d^{N-N_1}_{k'-q,k_2}\right)^2 } \right\rbrace , \end{split}\label{eq:QFI_for_Dicke1} \end{align} with $a=\max\{k'-(N-N_1),0\}$ and $b=\min\{k',N_1\}$. Here, $d^N_{k',k}\left(\frac{\pi}{2}\right)=\braket{\mathrm{D}_{N}^{k'}|U_y^{N}\left(\frac{\pi}{2}\right)|\mathrm{D}_{N}^{k}}:=d^N_{k',k}$ is the ``small'' Wigner $D$ matrix \cite{Wigner1932} for a rotation angle of $\pi/2$; its entries are essentially binomial coefficients, such that Eq. \eqref{eq:QFI_for_Dicke1} can be evaluated directly. For $k_1=k_2=0$, the state in Eq. \eqref{eq:Dicke_state} reduces to a product state with splitting $N_1$ and $N-N_1$. We can simplify Eq. \eqref{eq:QFI_for_Dicke1} for that case (see Appendix \ref{app:DI_optimization_product}) and find \begin{equation} F_Q^\varphi[\varrho_f]=\frac{N_1(N-N_1)}{N}, \end{equation} which is maximal for $N_1=\floor{N/2}$ with the maximum $F_Q^\varphi[\varrho_f]=N/4$, in agreement with Eq. \eqref{eq:QFI_for_product}. 
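Eq. \eqref{eq:QFI_for_Dicke1} can be evaluated numerically by constructing the $\pi/2$ Wigner rotation exactly in the symmetric subspace. The sketch below (function names are ours; the summation range for $q$ is written as the combinatorially allowed one, $\max\{k'-(N-N_1),0\}\le q\le\min\{k',N_1\}$) reproduces the closed forms $N_1(N-N_1)/N$ for product inputs and $N(N+4)/16$ for the optimal BSD input:

```python
import numpy as np

def wigner_d_half_pi(n):
    """Matrix d[kp, k] = <D_n^{kp}| exp(-i (pi/2) J_y) |D_n^k> in the
    symmetric spin-(n/2) subspace, by exact diagonalisation of J_y."""
    j = n / 2.0
    m = j - np.arange(n + 1)        # J_z eigenvalue of the state with k excitations
    jp = np.zeros((n + 1, n + 1))   # J_+ maps k excitations -> k-1
    for k in range(1, n + 1):
        jp[k - 1, k] = np.sqrt(j * (j + 1) - m[k] * (m[k] + 1))
    jy = (jp - jp.T) / 2j
    vals, vecs = np.linalg.eigh(jy)
    u = vecs @ np.diag(np.exp(-1j * (np.pi / 2) * vals)) @ vecs.conj().T
    return u.real                   # the rotation matrix is real in this basis

def qfi_steady_bsd(n, n1, k1, k2):
    """Steady-state QFI for the input |D_{n1}^{k1}, D_{n-n1}^{k2}>_x,
    evaluating the double sum of the main text term by term."""
    d1 = wigner_d_half_pi(n1)
    d2 = wigner_d_half_pi(n - n1)
    total = 0.0
    for kp in range(n + 1):
        a, b = max(kp - (n - n1), 0), min(kp, n1)
        w = np.array([(d1[q, k1] * d2[kp - q, k2]) ** 2 for q in range(a, b + 1)])
        if w.sum() < 1e-15:
            continue                # this total-excitation sector is unpopulated
        z = np.array([kp - q - (n - n1) / 2.0 for q in range(a, b + 1)])
        total += 4 * ((w * z**2).sum() - (w * z).sum() ** 2 / w.sum())
    return total
```

For $N=8$, $N_1=4$: product inputs ($k_1=k_2=0$) give $N_1(N-N_1)/N=2$, and the optimal BSD input ($k_1=k_2=2$) gives $N(N+4)/16=6$.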
For all other possible combinations of $k_1$, $k=k_1+k_2$, $N_1$ and $N-N_1$, the QFI is plotted in Fig. \ref{fig:max_FI_Dicke} for $N=50$. In Fig. \ref{fig:max_FI_Dicke} (c), the maximal QFI is plotted over the total number of excitations $k$, which is proportional to the total energy of the state. For even $k$ (yellow or lighter grey), there is only one maximum of the QFI, whereas for odd $k$ (red or darker grey), there is more than one possible combination of $k_1$ and $N_1$ with maximal QFI. Both the number of atoms in the first partition $N_1$ and the maximal QFI are symmetric around $k=N/2$. For the number of excitations in the first partition $k_1$ and odd $k$, there is no such symmetry at first sight. The reason for this asymmetry is the asymmetric splitting for $k<10$ and $k>40$. However, there is a symmetry when comparing the number of excitations in the first partition, $k_1$, for $k\le N/2$ with the number of non-excited qubits in the first partition, $N_1-k_1$, for $k\ge N/2$: $k_1=\floor{k/2}$ is optimal for $k\le N/2$ and $N_1-k_1=\floor{(N-k)/2}$ is optimal for $k\ge N/2$. The QFI is maximal for $k=\floor{N/2}$, $N_1=\floor{N/2}$ and $k_1=\floor{N/4}$. For $N=4 j$ with $j$ being an integer, this leads to the BSD state $\ket{\mathrm{D}_{N/2}^{N/4},\mathrm{D}_{N/2}^{N/4}}_x$. For this initial state, the QFI of the steady state is (see Appendix \ref{app:DI_scaling_ss_Dicke}) \begin{equation} F_Q^\varphi[\vr_\mathrm{f}]=\frac{N(N+4)}{16}. \end{equation} Here again, the QFI of the initial state is greater than that of the steady state by a constant factor of $2$. However, with this steady state, Heisenberg-like scaling can be reached. 
\begin{figure*} \begin{tabular}{ccc} \subfigure[ ]{\includegraphics[width=0.3\textwidth]{optimal_excitation_FvsN1vsK1_Nis50_and_kis20.pdf}} & \subfigure[ ]{\includegraphics[width=0.3\textwidth]{optimal_excitation_FvsN1vsK1_Nis50_and_kis25.pdf}} & \multirow{2}{*}{\subfigure[ ]{\includegraphics[width=0.35\textwidth]{max_QFI.pdf}} } \\ \subfigure[ ]{\includegraphics[width=0.3\textwidth]{optimal_excitation_FvsN1vsK1_Nis50_and_kis30.pdf}} & \subfigure[ ]{\includegraphics[width=0.3\textwidth]{optimal_excitation_FvsN1vsK1_Nis50_and_kis40.pdf}} \end{tabular} \caption{QFI (calculated from Eq. \eqref{eq:QFI_for_Dicke1}) of the steady state for an input state $\ket{\mathrm{D}_{N_1}^{k_1}\mathrm{D}_{N-N_1}^{k-k_1}}_x$, using the metrological scheme described in Fig. \ref{fig:Metrology} (b). Here, $N=50$ is the total number of qubits and $k$ is the total number of excitations. In Figs. (a), (b), (d) and (e), the QFI as a function of $k_1$ and $N_1$ is shown. In (a) $k=20$, in (b) $k=25$, in (d) $k=30$ and in (e) $k=40$. The white solid lines mark $N/2$ and $k/2$. The red (grey) solid lines mark the boundary of the region of allowed combinations of $N_1$ and $k_1$ (Color online). In Fig. (c), the maximal QFI is plotted over the total number of excitations $k$, red (dark grey) for odd $k$ and yellow (light grey) for even $k$. The corresponding values of $k_1$ and $N_1$ are shown as a function of $k$. For odd $k$, there is more than one possible combination of $k_1$ and $N_1$ with maximal QFI.}\label{fig:max_FI_Dicke} \end{figure*} We find that bipartite GHZ states with equal splitting are the best for all measurement times $T$. The splitting $N_1=N-N_1=N/2$ is optimal for bipartite GHZ states. If the splitting is unequal, $N_1-(N-N_1)=2N_1-N \neq 0$, the steady state is a mixed state in which all coherences vanish, for which $F_Q^\varphi[\vr_\mathrm{f}]=0$. 
We find that it is indeed possible to reach the HL under collective phase noise by using DI, and from the tested states, bipartite GHZ states are the best for phase and frequency estimation with this metrological scheme. We investigated the scaling behaviour of the steady states and found the optimal splitting for bipartite GHZ states to be $N_1=\floor{N/2}$. We also found the optimal probe state out of the set of BSD states, which is given by $N_1=\floor{N/2}$, $k=\floor{N/2}$ and $k_1=\floor{N/4}$. Now, we will discuss possible experimental realisations. \subsection{Experimental realisation} \label{sec:DI_SE} An obvious way to realise the operator $\openone_{N/2} \otimes S_z^{N/2}$ seems to be a spin-echo-like experiment on the first $N/2$ particles and a Ramsey-like experiment on the remaining particles. In a spin-echo-like experiment, a $\pi$-pulse flips the spins after half of the evolution time, at $T/2$. This flip of the spins induces a rephasing process \begin{align} \begin{split} &\mathrm{exp}\left[-i\left(\omega T + \gamma \int_0^T \mathrm{d}t \, \Delta B (t)\right) S_z^{N/2}\right] \xrightarrow{}\\ & \mathrm{exp}\left[-i\gamma \left(\int_0^{T/2}\mathrm{d}t\, \Delta B (t)-\int_{T/2}^T\mathrm{d}t\, \Delta B (t)\right) S_z^{N/2} \right], \end{split} \end{align} in which the deterministic phase $\omega T$ cancels. According to this rephasing process, the signal Hamiltonian changes as $H_\mathrm{signal}=\hbar \omega S_z^{N} \xrightarrow{} \hbar \omega (\openone_{N/2}\otimes S_z^{N/2})$. However, while the noise on the second part of the particles does not change, the noise on the first $N/2$ particles does: it flips its sign after half of the measurement time. The calculated QFI for phase and frequency estimation is shown in Figs. \ref{fig:phaseDI} (a) and (b), respectively. 
The red or dark grey dashed line shows the behaviour in time $T$ for a bipartite GHZ state, the yellow or light grey dashed line for optimal BSD states $\ket{\mathrm{D}_{N/2}^{N/4},\mathrm{D}_{N/2}^{N/4}}_x$ and the black dashed line for product states. The QFI starts at the same values as with the ideal DI from the previous section. However, it decreases to zero for larger $T$ for both phase and frequency estimation. This means that the advantage of DI gets lost when doing spin-echo-like experiments on one half of the particles. The reason is that the two parts receive different noise, whereas in the ideal DI both parts receive the same noise. For frequency estimation, we again find an optimal measurement time for all investigated states, as shown in Fig. \ref{fig:phaseDI} (b). We also find that the QFI for frequency estimation with all tested probe states decreases to zero after reaching its maximum. Furthermore, for all measurement times, there is no enhancement from the presented scheme in comparison to the usual metrological scheme discussed in Sec. \ref{sec:usual_metrology}. This means that the presented scheme is insufficient for realising DI. Another idea to realise the metrological scheme from Fig. \ref{fig:Metrology} (b), which will turn out to be insufficient as well, would be to repeat the experiment: once with noise and signal and once with noise only. With this method the desired signal Hamiltonian can be realised, but the averaged state changes as \begin{equation} \braket{.\otimes .}_{\delta\varphi} \xrightarrow{} \braket{.}_{\delta\varphi} \otimes \braket{.}_{\delta\varphi}. \end{equation} In this case, the steady state has no coherences left with respect to the bipartition of the Hamiltonian. This means that, for all probe states, the QFI for phase estimation will decrease to zero. Therefore, this method is also insufficient for realising DI. 
\section{Conclusions} We investigated the usual metrological scheme and differential interferometry with a set of prominent probe states in the presence of collective phase noise. For standard metrology schemes we determined the optimized states. Then we showed that with differential interferometry it is possible to reach a good scaling, up to the Heisenberg limit, even in the presence of collective phase noise. Here, from the tested set of bipartite probe states, bipartite GHZ states are optimal for both phase and frequency estimation. However, GHZ states are highly sensitive to particle losses. Therefore, in experiments where particle losses appear frequently, symmetric Dicke states are often used. We found that with bipartite symmetric Dicke states it is also possible to reach a good scaling, close to Heisenberg scaling. As we have seen, however, differential interferometry may be hard to realise in experiments. Therefore, it would be useful to design experimentally feasible schemes for implementing these ideas. In addition, an extension of the differential method to other metrology schemes, e.g., the measurement of oscillating fields \cite{Baumgart2014}, is highly desirable. \vspace{0.2cm} \noindent We thank I. Appelaniz, M. Johanning, J. Kolodinski, M. Mitchell, M. Oszmaniec, L. Pezz\`e, A. Smerzi, P. Treutlein, G. Vitagliano, and Ch. Wunderlich for discussions. This work has been supported by the Friedrich-Ebert-Stiftung, the ERC (Consolidator Grant 683107/Tempo and Starting Grant 258647/GEDENTQOPT), the EU (CHIST-ERA QUASAR, COST Action CA15220), the MINECO (Projects Nos. FIS2012-36673-C03-03 and FIS2015-67161-P), the Basque Government (Project No. IT4720-10), the OTKA (Contract No. K83858), the UPV/EHU program UFI 11/55, the FQXi Fund (Silicon Valley Community Foundation), and the DFG. 
\onecolumngrid \begin{appendix} \section{GHZ state under collective phase noise}\label{app:noisy_GHZ} As described in Sec. \ref{sec:noise}, the $N$-particle GHZ state evolves, due to the collective phase noise, at a certain time $t$ into the (over phase fluctuations) averaged state \begin{align} \bar{\varrho}(t)=\frac{1}{4} \ket{0^{\otimes N}}\bra{0^{\otimes N}}+\frac{1}{4} \ket{1^{\otimes N}}\bra{1^{\otimes N}}+ \frac{d(t)}{4} \ket{0^{\otimes N}}\bra{1^{\otimes N}}+ \frac{d(t)}{4}\ket{1^{\otimes N}}\bra{0^{\otimes N}} \end{align} with $d(t)=\mathrm{exp}\left[-\frac{1}{2}\left(N \gamma \Delta B \tau_c\right)^2 \left(\exp(- t/\tau_c)+t/\tau_c -1\right)\right]$. The mixed state has non-zero eigenvalues $\lambda_{\pm}=\frac{1 \pm d(t)}{2}$ with corresponding eigenvectors $\ket{\pm}=\frac{1}{\sqrt{2}} (\ket{0^{\otimes N}} \pm \ket{1^{\otimes N}})$. We denote all other eigenvalues by $\lambda_i=0$ and the corresponding eigenvectors by $\ket{v_i}$, such that we can rewrite the state as \begin{equation} \bar{\varrho}(t)= \frac{1 + d(t)}{2}\ketbra{+}+\frac{1 - d(t)}{2}\ketbra{-}. \end{equation} With this state, we can calculate the QFI for phase estimation using the metrological scheme from Fig. \ref{fig:Metrology}. To this end, we use the fact that $S_z^N\ket{+}=\frac{N}{2}\ket{-}$, such that $\braket{v_i|S_z^N|\pm}=0$, and arrive at \begin{align} \begin{split} F^\varphi_Q&=4\sum_{\alpha<\beta} \frac{(\lambda_\alpha - \lambda_\beta)^2}{\lambda_\alpha + \lambda_\beta}|\braket{\alpha|S_z^N|\beta}|^2\\ &=4\frac{(\lambda_+ - \lambda_-)^2}{\lambda_+ + \lambda_-}|\braket{+|S_z^N|-}|^2 =N^2 d(t)^2. \end{split} \end{align} For frequency estimation we find \begin{equation} F^\omega_Q=t^2N^2 d(t)^2. \end{equation} These results for the QFI are similar to the ones in Ref. \cite{Frowis2014}. 
\section{Optimal rotation angle for $N=50$ qubits}\label{app:large_N} {In this section we show that an optimization of the input states over the rotation angle $\alpha$ can lead to a higher precision also for larger $N$. Figure~\ref{fig:Varianceforfreq_large_N} shows the QFI for phase and frequency estimation with $N=50$ qubits for the unrotated (dashed lines) and optimally rotated (solid lines) probe states. The QFI for phase estimation decreases for all probe states faster than in Fig.~\ref{fig:Varianceforfreq}. Also, the optimal rotation angle $\alpha_\mathrm{opt}$ changes on a shorter time scale at the beginning. For frequency estimation we find that the maximal QFI obtained with product states does not substantially change when comparing the estimation with $N=8$ and $N=50$ qubits. However, the QFI obtained with the optimally rotated GHZ state approaches the QFI of a product state faster than in Fig.~\ref{fig:Varianceforfreq}.} \begin{figure*} \subfigure[ ]{\includegraphics[width=0.49\textwidth]{opt50_1.pdf}}\hfill \subfigure[ ]{\includegraphics[width=0.49\textwidth]{opt50_2.pdf}} \caption{QFI for phase and frequency estimation with $N=50$ qubits. The solid lines show the QFI optimized over the rotation angle $\alpha$ and the dashed lines show the QFI of the original states. \textbf{(a):} QFI for phase estimation over the time $T$ for different states. \textbf{(b):} The upper plot shows the QFI for frequency $\omega$ estimation. The lower plot shows the optimal rotation angle $\alpha_\mathrm{opt}$ over the time for the tested states.
}\label{fig:Varianceforfreq_large_N} \end{figure*} \section{Optimal states for DI in the steady state regime}\label{app:DI_optimal_state} In the noiseless case, the maximal QFI is given by \cite{Giovannetti2006} \begin{equation} F_Q=(\lambda_{\mathrm{max}}-\lambda_{\mathrm{min}})^2,\label{eq:max_qfi} \end{equation} with $\lambda_{\mathrm{max}}$ ($\lambda_{\mathrm{min}}$) being the maximal (minimal) eigenvalue of the generator, here $\openone_{N_1} \otimes S_z^{N-N_1}$. This maximal QFI can be reached with the state $ \ket{\Psi}=\left(\ket{v_{\mathrm{max}}}+\ket{v_{\mathrm{min}}}\right)/\sqrt{2}$, where $\ket{v_{\mathrm{max}}}$ and $\ket{v_{\mathrm{min}}}$ are the eigenvectors of the generator $\openone_{N_1} \otimes S_z^{N-N_1}$ corresponding to the maximal and minimal eigenvalue, respectively. However, in the presence of noise the initial state $\rho_0$ evolves, due to collective phase noise, into a mixed state until it becomes a mixture of states from the decoherence free subspace (DFS). This mixed state does not change under collective phase noise and is called the steady state. Now, we want to optimize the QFI in the steady state regime. Since the QFI is convex, that is, \begin{equation} F_Q\left[p \varrho_1+(1-p)\varrho_2\right] \le p F_Q \left[\varrho_1\right]+(1-p)F_Q \left[\varrho_2\right], \end{equation} the QFI is maximal for pure states. Therefore, we have to maximize Eq. \eqref{eq:max_qfi} over all pure states $\ket{\Psi}$ lying in the DFS.
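The statement that the balanced superposition of the two extremal eigenvectors of the generator attains the QFI $(\lambda_{\mathrm{max}}-\lambda_{\mathrm{min}})^2$, via $F_Q = 4\,(\Delta A)^2$ for pure states, is easy to verify numerically. The following Python sketch uses an arbitrary hypothetical spectrum purely for illustration:

```python
import numpy as np

# hypothetical generator spectrum (diagonal, so eigenvectors are basis vectors)
lam = np.array([-2.0, -0.5, 0.0, 1.0, 3.0])
A = np.diag(lam)

# balanced superposition of the two extremal eigenvectors
psi = np.zeros(len(lam))
psi[np.argmin(lam)] = psi[np.argmax(lam)] = 1 / np.sqrt(2)

# pure-state QFI: F = 4 Var_psi(A)
var = psi @ A @ A @ psi - (psi @ A @ psi)**2
F = 4 * var
print(F, (lam.max() - lam.min())**2)  # both approximately 25.0
```

The variance of such an equal-weight superposition is $((\lambda_{\mathrm{max}}-\lambda_{\mathrm{min}})/2)^2$, which is the largest variance any state can achieve for a fixed spectrum.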
In the DFS, $\ket{v_{\max}}$ and $\ket{v_{\min}}$ need to have the same total number of excitations $k$ \cite{Lidar1998} and are given by \begin{align} \ket{v_{\min}}= \begin{cases} \ket{1^{\otimes N_1}}\otimes\ket{1^{\otimes k-N_1} 0^{\otimes N-k}} &\mathrm{for}\,\,\, k>N_1,\\ \ket{1^{\otimes k}0^{\otimes N_1-k}}\otimes\ket{ 0^{\otimes N-N_1}} &\mathrm{for}\,\,\,k\le N_1\\ \end{cases} \end{align} and \begin{align} \ket{v_{\max}}= \begin{cases} \ket{0^{\otimes N-k}1^{\otimes k-(N-N_1)} }\otimes\ket{1^{\otimes N-N_1}} &\mathrm{for}\,\,\, k>N-N_1,\\ \ket{0^{\otimes N_1}}\otimes\ket{0^{\otimes N-N_1-k} 1^{\otimes k}} &\mathrm{for}\,\,\,k\le N-N_1.\\ \end{cases} \end{align} With these states, the QFI is given by \begin{align} F_Q= \begin{cases} k^2 &\mathrm{for}\,\,\, k\le \mathrm{min}\{N_1,N-N_1\},\\ N_1^2 &\mathrm{for}\,\,\,N_1<k\le N-N_1,\\ (N-N_1)^2 &\mathrm{for}\,\,\,N-N_1 <k\le N_1,\\ (N-k)^2 &\mathrm{for}\,\,\,k>\mathrm{max}\{N_1,N-N_1\},\\ \end{cases} \end{align} which is maximal, $F_Q=N^2/4 $, for $k=N_1=N-N_1=N/2$, and the optimal state from the DFS is given by \begin{equation} \ket{\Psi_{\mathrm{opt}}}=\frac{1}{\sqrt{2}}\left(\ket{\underbrace{0 \ldots 0}_{N/2}\underbrace{1 \ldots 1}_{N/2}}+\ket{\underbrace{1 \ldots 1}_{N/2}\underbrace{0 \ldots 0}_{N/2}}\right). \end{equation} \section{Scaling behaviour in the noiseless case by using DI}\label{app:DI_scaling_noiseless} { For phase estimation by using the metrological scheme from Fig.~\ref{fig:Metrology} (b), the QFI for a pure initial state $\ket{\Psi}$ can be calculated analytically in the noiseless case by using the fact that the QFI is additive under tensor products, \begin{equation} F_Q\left[\varrho^{(1)}\otimes \varrho^{(2)}, A^{(1)}\otimes \openone+\openone \otimes A^{(2)}\right]=F_Q\left[\varrho^{(1)}, A^{(1)}\right]+F_Q\left[ \varrho^{(2)}, A^{(2)}\right].
\end{equation} For a product state $\ket{\Psi}=\ket{+}^{\otimes N/2} \otimes \ket{+}^{\otimes N/2}$ with $\ket{+}=(\ket{0}+\ket{1})/\sqrt{2}$ we find \begin{equation} F_Q^\varphi=\frac{N}{2}. \end{equation} For a GHZ state $\ket{\Psi}=\ket{\mathrm{GHZ}} \otimes \ket{\mathrm{GHZ}}$ we find \begin{equation} F_Q^\varphi=\left(\frac{N}{2}\right)^2. \end{equation} For the rotated BSD state $\ket{\mathrm{D}_{N/2}^{N/4}}_y \otimes \ket{\mathrm{D}_{N/2}^{N/4}}_y= U_x(\pi/2)\ket{\mathrm{D}_{N/2}^{N/4}} \otimes \ket{\mathrm{D}_{N/2}^{N/4}}$ the QFI is given by \begin{equation} F_Q^\varphi=\frac{N(N+4)}{8}. \end{equation}} \section{Scaling behaviour after dephasing by using DI}\label{app:DI_scaling_ss} In Fig.~\ref{fig:phaseDI} we see that, when using DI, the QFI decreases with time to a constant greater than zero. In this section, we calculate this constant and investigate its scaling behaviour for the probe states. The initial probe states evolve, due to collective phase noise, into a mixed state. The steady state is a mixture in the decoherence free subspace and therefore still retains some coherences. \subsection{Product state}\label{app:DI_scaling_ss_product} The steady state for the product state $\ket{+}^{\otimes N}$ as an initial state is a mixture of symmetric Dicke states. The non-zero eigenvalues are given by $\lambda_{k'}=\frac{C_N^{k'}}{2^N}$, where $C_N^{k'}=\tbinom{N}{k'}$ are binomial coefficients. The corresponding eigenvectors are given by $\ket{\mathrm{D}_N^{k'}}$. We can rewrite the symmetric Dicke states as \begin{equation} \ket{\mathrm{D}_N^{k'}}=\frac{1}{\sqrt{C_N^{k'}}}\sum_{q=0}^{k'} \sqrt{C_{N/2}^q}\ket{\mathrm{D}_{N/2}^q} \otimes \sqrt{C_{N/2}^{{k'}-q}}\ket{\mathrm{D}_{N/2}^{{k'}-q}}.
\end{equation} With that formulation one finds that \begin{align} \begin{split} \braket{\mathrm{D}_N^s|\openone \otimes S_z|\mathrm{D}_N^t}&= \frac{1}{\sqrt{C_N^sC_N^t}} \sum_{q,q'} \sqrt{C_{N/2}^q C_{N/2}^{q'} C_{N/2}^{s-q} C_{N/2}^{t-q'}} (t-q'-N/4)\\ &\cdot \braket{\mathrm{D}_{N/2}^q|\mathrm{D}_{N/2}^{q'}} \braket{\mathrm{D}_{N/2}^{s-q}|\mathrm{D}_{N/2}^{t-q'}} \\ &= \frac{1}{\sqrt{C_N^sC_N^t}} \sum_{q} C_{N/2}^q\sqrt{ C_{N/2}^{s-q} C_{N/2}^{t-q}} (t-q-N/4)\braket{\mathrm{D}_{N/2}^{s-q}|\mathrm{D}_{N/2}^{t-q}}\\ &= \frac{\delta_{s,t}}{C_N^s} \sum_{q} C_{N/2}^q C_{N/2}^{s-q} (s-q-N/4).\label{eq:dicke-dicke} \end{split} \end{align} Based on this, we can rewrite the QFI as \begin{align} F_Q= 4 \sum_{k'=0}^N \lambda_{k'} \sum_{k} \braket{\mathrm{D}_N^{k'}|\openone_{N/2}\otimes S_z^{N/2}|k}\braket{k|\openone_{N/2}\otimes S_z^{N/2}|\mathrm{D}_N^{k'}}, \end{align} with $\ket{k}$ being eigenstates of $\varrho$ with $\lambda_{k}=0$ and $\braket{k|\mathrm{D}_N^l}=0$ for all $l$. We can replace $\sum_{k} \ketbra{k}=\openone-\sum_l \ketbra{\mathrm{D}_N^l}$, so that the QFI reduces to \begin{align} \begin{split} F_Q&= 4 \sum_{k'=0}^N \lambda_{k'} \braket{\mathrm{D}_N^{k'}|\openone_{N/2}\otimes S_z^{N/2}|\left(\openone-\sum_l \ketbra{\mathrm{D}_N^l} \right)|\openone_{N/2}\otimes S_z^{N/2}|\mathrm{D}_N^{k'}}\\ &= 4 \sum_{k'=0}^N \lambda_{k'} \left[\braket{\mathrm{D}_N^{k'}|\left(\openone_{N/2}\otimes S_z^{N/2}\right)^2|\mathrm{D}_N^{k'}}- \left(\sum_l \braket{\mathrm{D}_N^l|\openone_{N/2}\otimes S_z^{N/2}|\mathrm{D}_N^{k'}}\right)^2\right]. \end{split} \end{align} With Eq. \eqref{eq:dicke-dicke} and the symmetry of the state, we can express the expectation value as \begin{equation} \braket{\mathrm{D}_N^k|\openone_{N/2}\otimes S_z^{N/2}|\mathrm{D}_N^k}=\braket{\mathrm{D}_N^k|S_z^{N/2}\otimes \openone_{N/2}|\mathrm{D}_N^k}=\frac{1}{2}\braket{\mathrm{D}_N^k| S_z^{N}|\mathrm{D}_N^k}= \frac{k-N/2}{2}.
\end{equation} Replacing the second term leads to \begin{align} \begin{split} F_Q&= 4 \sum_{k'=0}^N \lambda_{k'} \left[ \braket{\mathrm{D}_N^{k'}|\left(\openone_{N/2}\otimes S_z^{N/2}\right)^2|\mathrm{D}_N^{k'}}- \left(\frac{k'-N/2}{2}\right)^2\right]\\ &= \frac{1}{2^{N-2}}\sum_{k'=0}^N \left[ \sum_{q=0}^{k'} C_{N/2}^q C_{N/2}^{k'-q} \left(k'-q-\frac{N}{4}\right)^2- C_N^{k'}\left(\frac{k'-N/2}{2}\right)^2\right]\\ &=\frac{1}{2^{N-2}}\sum_{k'=0}^N \left[ \sum_{q=0}^{k'} C_{N/2}^q C_{N/2}^{k'-q} \left(k'-q-\frac{N}{4}\right)^2- \sum_{q=0}^{k'} C_{N/2}^q C_{N/2}^{k'-q}\left(\frac{k'-N/2}{2}\right)^2\right]\\ &=\frac{1}{2^{N-2}} \sum_{k'=0}^N \sum_{q=0}^{k'} C_{N/2}^q C_{N/2}^{k'-q} \frac{\left(k'-2q\right)\left(3k'-N-2q\right)}{4} \\ &=\frac{1}{2^{N-2}} N 2^{N-4}=N/4. \end{split} \end{align} \subsection{GHZ state}\label{app:DI_scaling_ss_GHZ} The steady state for a GHZ state as an initial state is given by \begin{align} \begin{split} (\vr_{GHZ}\otimes\vr_{GHZ})_{\mathrm{steady state}}&= \frac{1}{4}(\ketbra{\underbrace{0 \ldots 0}_{N}}+\ketbra{\underbrace{1 \ldots 1}_{N}}\\ &+\ketbra{\underbrace{0 \ldots 0}_{N/2} \underbrace{1 \ldots 1}_{N/2}} +\ketbra{\underbrace{1 \ldots 1}_{N/2}\underbrace{0 \ldots 0}_{N/2} }\\ &+\KetBraO{\underbrace{1 \ldots 1}_{N/2}\underbrace{0 \ldots 0}_{N/2}}{\underbrace{0 \ldots 0}_{N/2}\underbrace{1 \ldots 1}_{N/2}}{}+\KetBraO{\underbrace{0 \ldots 0}_{N/2}\underbrace{1 \ldots 1}_{N/2}}{\underbrace{1 \ldots 1}_{N/2}\underbrace{0 \ldots 0}_{N/2}}{} ). 
\end{split} \end{align} There are four remarkable eigenvectors: \begin{align} \begin{split} &\ket{v_1}=\ket{\underbrace{0 \ldots 0}_{N}},\\ &\ket{v_2}=\ket{\underbrace{1 \ldots 1}_{N}},\\ &\ket{v_3}=1/\sqrt{2} (\ket{\underbrace{0 \ldots 0}_{N/2} \underbrace{1 \ldots 1}_{N/2}}+\ket{\underbrace{1 \ldots 1}_{N/2}\underbrace{0 \ldots 0}_{N/2} }),\\ &\ket{v_4}=1/\sqrt{2} (-\ket{\underbrace{0 \ldots 0}_{N/2} \underbrace{1 \ldots 1}_{N/2}}+\ket{\underbrace{1 \ldots 1}_{N/2}\underbrace{0 \ldots 0}_{N/2} }) \end{split} \end{align} with eigenvalues $\lambda_{1}=\lambda_{2}=1/4$, $\lambda_{3}=1/2$ and $\lambda_{4}=0$. All other eigenvalues vanish, $\lambda_{5\ldots 2^N}=0$, and we denote the eigenvectors corresponding to these eigenvalues by $\ket{v_{5\ldots 2^N}}$. It is easy to show that \begin{align} \begin{split} &\braket{v_1|\openone_{N/2}\otimes S_z^{N/2}|v_3}=\braket{v_1|\openone_{N/2}\otimes S_z^{N/2}|v_4}=0,\\ &\braket{v_2|\openone_{N/2}\otimes S_z^{N/2}|v_3}=\braket{v_2|\openone_{N/2}\otimes S_z^{N/2}|v_4}=0,\\ &\braket{v_{1\ldots 4 }|\openone_{N/2}\otimes S_z^{N/2}|v_{5\ldots 2^N}}=0 \end{split} \end{align} and all terms between eigenvectors with equal eigenvalues vanish as well. Together with $\openone_{N/2}\otimes S_z^{N/2}\ket{v_3}=\frac{N}{4} \ket{v_4}$, the sum in the QFI reduces to \begin{align} F_Q= 4\cdot \frac{(\lambda_3 -\lambda_4)^2}{\lambda_3 +\lambda_4} \left|\braket{v_3|\openone_{N/2}\otimes S_z^{N/2}|v_4}\right|^2= \frac{N^2}{8}. \end{align} \subsection{Bipartite symmetric Dicke state in the x-basis}\label{app:DI_scaling_ss_Dicke} We start with an arbitrary symmetric Dicke state in the $x$ basis for both inputs.
We can express this BSD state in the basis of symmetric Dicke states in the $z$ basis by \begin{align} \begin{split} \ket{\mathrm{D}_{N_1}^{k_1},\mathrm{D}_{N-N_1}^{k_2}}_x=\sum_{k_1',k_2'} \braket{\mathrm{D}_{N_1}^{k_1'}|U_y^{N_1}\left(\frac{\pi}{2}\right)|\mathrm{D}_{N_1}^{k_1}}\braket{\mathrm{D}_{N-N_1}^{k_2'}|U_y^{N-N_1}\left(\frac{\pi}{2}\right)|\mathrm{D}_{N-N_1}^{k_2}}\ket{\mathrm{D}_{N_1}^{k_1'},\mathrm{D}_{N-N_1}^{k_2'}}. \end{split} \end{align} For simplicity, we choose here a rotation around the $y$-axis. Here, $d^N_{k',k}\left(\frac{\pi}{2}\right)=\braket{\mathrm{D}_{N}^{k'}|U_y^{N}\left(\frac{\pi}{2}\right)|\mathrm{D}_{N}^{k}}:=d^N_{k',k}$ denotes the ``small'' Wigner $d$ matrix \cite{Wigner1932} for a rotation angle of $\pi/2$, \begin{align} d^N_{k',k}=\sqrt{\frac{C_N^{k}}{C_N^{k'} 2^N}}\sum_{s=\max\{0,k'-k\}}^{\min\{N-k,k'\}} (-1)^{k'-k+s}\, C_{N-k}^{s}\, C_{k}^{k'-s}. \end{align} Now we can rewrite the state as \begin{align} \ket{\mathrm{D}_{N_1}^{k_1},\mathrm{D}_{N-N_1}^{k_2}}_x=\sum_{k_1',k_2'}d^{N_1}_{k_1',k_1}d^{N-N_1}_{k_2',k_2}\ket{\mathrm{D}_{N_1}^{k_1'},\mathrm{D}_{N-N_1}^{k_2'}}. \end{align} For a fixed number of excitations $k'=k_1'+k_2'$ we have \begin{align} \begin{split} \ket{\mathrm{D}_{N_1}^{k_1},\mathrm{D}_{N-N_1}^{k_2}}_x&=\sum_{k'=0}^{N} \sum_{q=\max\{k'-N_1,0\}}^{\min\{k',N_1\}}d^{N_1}_{q,k_1}d^{N-N_1}_{k'-q,k_2}\ket{\mathrm{D}_{N_1}^{q},\mathrm{D}_{N-N_1}^{k'-q}}\\ &=\sum_{k'=0}^{N} \ket{l_{k'}}=\sum_{k'=0}^{N} \sqrt{p_{k'}}\ket{v_{k'}}, \end{split} \end{align} with the unnormalized states $\ket{l_{k'}}$ and the normalized states $\ket{v_{k'}}$, $\braket{v_{k'}|v_{k'}}=1$. We can calculate the probability $p_{k'}$ for being in the state $\ket{v_{k'}}$ by \begin{align} p_{k'}=\braket{l_{k'}|l_{k'}}=\sum_{q=\max\{k'-N_1,0\}}^{\min\{k',N_1\}}\left(d^{N_1}_{q,k_1}d^{N-N_1}_{k'-q,k_2}\right)^2.
\end{align} With those we find the normalized states \begin{align} \ket{v_{k'}}=\frac{1}{\sqrt{p_{k'}}}\sum_{q=\max\{k'-N_1,0\}}^{\min\{k',N_1\}}d^{N_1}_{q,k_1}d^{N-N_1}_{k'-q,k_2}\ket{\mathrm{D}_{N_1}^{q},\mathrm{D}_{N-N_1}^{k'-q}}. \end{align} With the probabilities $p_{k'}$ and the states $\ket{v_{k'}}$, we can write the rotated state in the $z$ basis as \begin{align} \varrho=\sum_{m,n} \sqrt{p_m p_n} \ket{v_m}\bra{v_n}. \end{align} After dephasing, only the elements with $m=n$ remain \cite{Lidar1998}, so that the steady state is given by $\varrho_{f}=\sum_m p_m \ketbra{v_m}$. The non-zero eigenvalues of this state are $\lambda_{k'}=p_{k'}$ with the corresponding eigenvectors $\ket{v_{k'}}$. Now we can show that \begin{align} \begin{split} \braket{v_s|\openone_{N_1}\otimes S^{N-N_1}_z|v_{k'}}&= \frac{1}{\sqrt{p_{k'} p_s}}\sum_{q=\max\{k'-N_1,0\}}^{\min\{k',N_1\}}\sum_{q'=\max\{N-N_1-s,0\}}^{\min\{s,N_1\}}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!d^{N_1}_{q,k_1}d^{N-N_1}_{k'-q,k_2}d^{N_1}_{q',k_1}d^{N-N_1}_{s-q',k_2}\underbrace{\braket{\mathrm{D}_{N_1}^{q'}|\mathrm{D}_{N_1}^{q}}}_{\delta_{q,q'}}\!\!\!\!\underbrace{\braket{\mathrm{D}_{N-N_1}^{s-q'}|S^{N-N_1}_z|\mathrm{D}_{N-N_1}^{k'-q}}}_{\left(k'-q-\frac{N-N_1}{2}\right)\braket{\mathrm{D}_{N-N_1}^{s-q'}|\mathrm{D}_{N-N_1}^{k'-q}}}\\ &\propto \delta_{s,k'}. \end{split} \end{align} Hence, the QFI reduces to \begin{align} F_Q&=4 \sum_{k'=0}^N \lambda_{k'} \sum_{k} \braket{v_{k'}|\openone_{N_1}\otimes S_z^{N-N_1}|k}\braket{k|\openone_{N_1}\otimes S_z^{N-N_1}|v_{k'}} \end{align} with $\ket{k} \neq \ket{v_{k}}$ being an eigenvector with a zero eigenvalue.
Now we repeat the same steps as for the product-state input to rewrite the QFI as \begin{align} F_Q&=4 \sum_{k'=0}^N \lambda_{k'} \left(\Delta_{v_{k'}} (\openone_{N_1} \otimes S_z^{N-N_1})\right)^2, \end{align} where $\left(\Delta_{v_{k'}} (\openone_{N_1} \otimes S^{N-N_1}_z)\right)^2$ denotes the variance and is given by \begin{align} \begin{split} \left(\Delta_{v_{k'}} (\openone_{N_1} \otimes S_z^{N-N_1})\right)^2&= \frac{1}{\lambda_{k'}} \sum_{q=\max\{k'-N_1,0\}}^{\min\{k',N_1\}}\left(d^{N_1}_{q,k_1}d^{N-N_1}_{k'-q,k_2}\right)^2\left(k'-q-\frac{N-N_1}{2}\right)^2 \\ &-\frac{1}{\lambda_{k'}^2} \left[\sum_{q=\max\{N-N_1-k',0\}}^{\min\{k',N_1\}}\left(d^{N_1}_{q,k_1}d^{N-N_1}_{k'-q,k_2}\right)^2 \left(k'-q-\frac{N-N_1}{2}\right)\right]^2.\end{split}\label{eq:general_Dicke_var} \end{align} Together, the QFI is given by \begin{align} \begin{split} F_Q^\varphi[\vr_\mathrm{f}] &= 4 \sum_{k'=0}^{N} \left\lbrace\sum_{q=\max\{N-N_1-k',0\}}^{\min\{k',N_1\}}\left(d^{N_1}_{q,k_1}d^{N-N_1}_{k'-q,k_2}\right)^2\left(k'-q-\frac{N-N_1}{2}\right)^2 \right. \\ &\left. -\frac{\left[\sum_{q=\max\{N-N_1-k',0\}}^{\min\{k',N_1\}}\left(d^{N_1}_{q,k_1}d^{N-N_1}_{k'-q,k_2}\right)^2 \left(k'-q-\frac{N-N_1}{2}\right)\right]^2}{\sum_{q=\max\{N-N_1-k',0\}}^{\min\{k',N_1\}}\left(d^{N_1}_{q,k_1}d^{N-N_1}_{k'-q,k_2}\right)^2 } \right\rbrace . \end{split}\label{eq:QFI_for_Dicke} \end{align} This is a general formula for the QFI after dephasing for an initial state of the form $\ket{\mathrm{D}_{N_1}^{k_1},\mathrm{D}_{N-N_1}^{k_2}}_x$. From Fig. \ref{fig:max_FI_Dicke}, we see that Eq. \eqref{eq:QFI_for_Dicke} is maximal for the probe state with $N_1=N-N_1=N/2$ and $k_1=k_2=N/4$, where $N=4j$, with $j$ being an integer. For this simple case and $N \le 1000$ we have verified that the formula in Eq. \eqref{eq:QFI_for_Dicke} is equivalent to \begin{equation} F_Q=\frac{N(N+4)}{16}. \end{equation} It is very likely that this holds in general, but it has not been proven yet.
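Equation \eqref{eq:QFI_for_Dicke} can also be evaluated numerically without the explicit Wigner-$d$ formula, by generating the rotated Dicke populations from the spin-$j$ representation. The Python sketch below (the function names are ours, not from the text) reproduces $F_Q=N(N+4)/16$ for the symmetric case and, with $k_1=k_2=0$, the product-state value $N_1(N-N_1)/N$ obtained in the next section:

```python
import numpy as np

def rotated_dicke_probs(n, k):
    """|<D_n^q| exp(-i (pi/2) J_y) |D_n^k>|^2 for q = 0..n (spin j = n/2)."""
    j = n / 2.0
    m = np.arange(n + 1) - j                 # magnetic quantum number m = q - j
    jp = np.zeros((n + 1, n + 1))            # J_+ in the |j, m> basis
    jp[np.arange(1, n + 1), np.arange(n)] = np.sqrt(j*(j+1) - m[:-1]*(m[:-1] + 1))
    jy = (jp - jp.T) / 2j                    # J_y = (J_+ - J_-)/(2i), Hermitian
    w, V = np.linalg.eigh(jy)
    U = V @ np.diag(np.exp(-1j * (np.pi / 2) * w)) @ V.conj().T
    return np.abs(U[:, k])**2

def qfi_dephased(N, N1, k1, k2):
    """QFI of the dephased state |D_{N1}^{k1}, D_{N-N1}^{k2}>_x."""
    p1, p2 = rotated_dicke_probs(N1, k1), rotated_dicke_probs(N - N1, k2)
    F = 0.0
    for kp in range(N + 1):                  # total excitation number k'
        qs = [q for q in range(N1 + 1) if 0 <= kp - q <= N - N1]
        w = np.array([p1[q] * p2[kp - q] for q in qs])
        g = np.array([(kp - q) - (N - N1) / 2 for q in qs])  # 1 x S_z eigenvalues
        lam = w.sum()
        if lam > 1e-12:
            F += 4 * (w @ (g**2) - (w @ g)**2 / lam)
    return F

print(qfi_dephased(8, 4, 2, 2))   # BSD state: approx. N(N+4)/16 = 6
print(qfi_dephased(8, 3, 0, 0))   # product state: approx. N1(N-N1)/N = 1.875
```

Only the squared coefficients $\left(d^{N_1}_{q,k_1}d^{N-N_1}_{k'-q,k_2}\right)^2$ enter the dephased QFI, so any consistent sign convention for the $\pi/2$ rotation gives the same result.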
\section{Optimization for product states}\label{app:DI_optimization_product} We want to investigate Eq. \eqref{eq:QFI_for_Dicke} for the case of $k_1=k_2=0$. This means that the input probe state is a product state. For this case, we optimize over the splitting $N_1$ and $N-N_1$. For $k_1=k_2=0$ we find that $\left(d^{N_1}_{q,0}\right)^2=C_{N_1}^{q} 2^{-N_1}$ such that \begin{align} \left(d^{N_1}_{q,0}d^{N-N_1}_{k'-q,0}\right)^2=C_{N_1}^{q}C_{N-N_1}^{k'-q} 2^{-N}. \end{align} Then the eigenvalues are given by \begin{align} \lambda_{k'}=2^{-N}\sum_{q=\max\{k'-N_1,0\}}^{\min\{k',N_1\}}C_{N_1}^{q}C_{N-N_1}^{k'-q}. \end{align} We can split the sum for $k'\le N_1$ and $k'\ge N_1$, \begin{align} \lambda_{k'}= \begin{cases} 2^{-N}\sum_{q=0}^{k'}C_{N_1}^{q}C_{N-N_1}^{k'-q} &\mathrm{for}\,\,\, k'\le N_1,\\ 2^{-N}\sum_{q=k'-N_1}^{N_1}C_{N_1}^{q}C_{N-N_1}^{k'-q} &\mathrm{for}\,\,\,k'\ge N_1.\\ \end{cases} \end{align} This expression can be simplified by using $\sum_{q=0}^{k'}C_{N_1}^{q}C_{N-N_1}^{k'-q}=C_N^{k'}$ and shifting the summation $q=j+(k'-N_1)$ for the case $k'\ge N_1$ such that $\sum_{q=k'-N_1}^{N_1}C_{N_1}^{q}C_{N-N_1}^{k'-q}=\sum_{j=0}^{N-k'}C_{N_1}^{(N-k')-j}C_{N-N_1}^{j}=C_{N}^{N-k'}=C_N^{k'}$. For both cases the eigenvalues are given by \begin{align} \lambda_{k'}=2^{-N}C_N^{k'}. \end{align} Next, we can simplify the second term in Eq. \eqref{eq:general_Dicke_var} by \begin{align} \begin{split} \braket{\openone \otimes S_z}^2&=\frac{1}{\lambda_{k'}^2} \left[\sum_{q=\max\{N-N_1-k',0\}}^{\min\{k',N_1\}}\left(d^{N_1}_{q,k_1}d^{N-N_1}_{k'-q,k_2}\right)^2 \left(k'-q-\frac{N-N_1}{2}\right)\right]^2\\ &=\frac{1}{\lambda_{k'}^2} \left[\sum_{q=\max\{N-N_1-k',0\}}^{\min\{k',N_1\}}2^{-N}C_{N_1}^{q}C_{N-N_1}^{k'-q} \left(k'-q-\frac{N-N_1}{2}\right)\right]^2. \end{split} \end{align} We again split the sum into the two cases $k'\le N_1$ and $k'\ge N_1$.
For $k'\le N_1$ we find \begin{align} \begin{split} \braket{S_z}^2 &=\frac{1}{\lambda_{k'}^2} \left[\sum_{q=0}^{k'}2^{-N}C_{N_1}^{q}C_{N-N_1}^{k'-q} \left(k'-q-\frac{N-N_1}{2}\right)\right]^2\\ &=\left[\frac{(-1)^{1+k'}(2k'-N)(N-N_1)(-1+k'-N)!}{2 C_N^{k'} (k')! (-N)!}\right]^2 =\left[\frac{(2k'-N)(N-N_1)}{2 N}\right]^2, \end{split} \end{align} with $(-N)!=\Gamma(-N+1)$, understood as a formal limit of Gamma functions. For the case $k'\ge N_1$, shifting the summation with $q=j+(k'-N_1)$ as for the eigenvalues and simplifying in the same way leads to the same result. Now we can simplify the expression for the QFI in Eq. \eqref{eq:QFI_for_Dicke}: \begin{align} \begin{split} F_Q &= 4 \sum_{k'=0}^{N} \left\lbrace \sum_{q=\max\{N-N_1-k',0\}}^{\min\{k',N_1\}}\left(d^{N_1}_{q,k_1}d^{N-N_1}_{k'-q,k_2}\right)^2\left(k'-q-\frac{N-N_1}{2}\right)^2 \right. \\ & \left. -\frac{1}{\lambda_{k'}} \left[\sum_{q=\max\{N-N_1-k',0\}}^{\min\{k',N_1\}}\left(d^{N_1}_{q,k_1}d^{N-N_1}_{k'-q,k_2}\right)^2 \left(k'-q-\frac{N-N_1}{2}\right)\right]^2\right\rbrace \\ &= 4 \sum_{k'=0}^{N}\sum_{q=\max\{N-N_1-k',0\}}^{\min\{k',N_1\}}2^{-N} C_{N_1}^{q}C_{N-N_1}^{k'-q}\left(k'-q-\frac{N-N_1}{2}\right)^2-2^{-N}C_N^{k'} \left[\frac{(2k'-N)(N-N_1)}{2 N}\right]^2\\ &=4 \sum_{k'=0}^{N}\sum_{q=\max\{N-N_1-k',0\}}^{\min\{k',N_1\}}2^{-N} C_{N_1}^{q}C_{N-N_1}^{k'-q}\left[\left(k'-q-\frac{N-N_1}{2}\right)^2-\left(\frac{(2k'-N)(N-N_1)}{2 N}\right)^2\right]\\ &=4 \sum_{k'=0}^{N}\sum_{q=\max\{N-N_1-k',0\}}^{\min\{k',N_1\}}2^{-N} C_{N_1}^{q}C_{N-N_1}^{k'-q}\left\lbrace\frac{(N q-k' N_1)\left[k' (N_1-2N)+N(N-N_1+q)\right]}{N^2}\right\rbrace. \end{split} \end{align} Again, we split the summation over $q$ into the two cases $k'\le N_1$ and $k'\ge N_1$.
For $k'\le N_1$ we find \begin{align} \begin{split} & \sum_{q=\max\{N-N_1-k',0\}}^{\min\{k',N_1\}}2^{-N} C_{N_1}^{q}C_{N-N_1}^{k'-q}\left\lbrace\frac{(N q-k' N_1)\left[k' (N_1-2N)+N(N-N_1+q)\right]}{N^2}\right\rbrace\\ &=2^{-N} C_{N}^{k'} \frac{k'(N-k')(N-N_1)N_1}{(N-1)N^2}=\lambda_{k'}\frac{k'(N-k')(N-N_1)N_1}{(N-1)N^2}. \end{split} \end{align} For the case $k'\ge N_1$, shifting the summation with $q=j+(k'-N_1)$ as for the eigenvalues and simplifying in the same way leads to the same result, so that the variance evaluates to \begin{equation} \left(\Delta_{v_{k'}} \openone \otimes S_z\right)^2=\frac{k'(N-k')(N-N_1)N_1}{(N-1)N^2}. \end{equation} Together, the QFI is given by \begin{align} \begin{split} F_Q &= 4 \sum_{k'=0}^{N}\lambda_{k'}\left(\Delta_{v_{k'}} \openone_{N_1} \otimes S^{N-N_1}_z\right)^2\\ &=4 \sum_{k'=0}^{N}2^{-N}C_N^{k'}\frac{k'(N-k')(N-N_1)N_1}{(N-1)N^2}= \frac{(N-N_1)N_1}{N}, \end{split} \end{align} which attains its maximum, $F_Q^{\max}=N/4$, for $N_1=N/2$. \end{appendix} \twocolumngrid
\section{Introduction} \label{sec:intro} Diffusion-reaction processes in industrial chemical reactors, living cells, and biological tissues have been studied over many decades \cite{Rice85,Barzykin01,Lauffenburger,Metzler,Lindenberg}. Diffusion toward spherical traps (or sinks) is an emblematic model of such processes that attracted considerable attention among theoreticians \cite{Jeffrey73,Kayser83,Kayser84,Felderhof85,Mattern87,Torquato86,Richards87,Rubinstein88,Torquato91,Torquato97,Kansal02}. In a basic setting, one considers the concentration of diffusing particles $c({\bm{x}},t)$ that obeys the diffusion equation in the complement $\Omega$ of the union of non-overlapping balls: \begin{equation} \label{eq:diff} \frac{\partial}{\partial t} c({\bm{x}},t) = D \nabla^2 c({\bm{x}},t) , \end{equation} where $D$ is the diffusion coefficient, and $\nabla^2$ is the Laplace operator. This equation is completed by an initial concentration profile, $c({\bm{x}},t=0) = c_0({\bm{x}})$, an appropriate boundary condition describing reactions on the boundary ${\partial\Omega}$, and the regularity condition $c({\bm{x}},t) \to 0$ as $|{\bm{x}}| \to \infty$. Various arrangements of traps may account for spatial heterogeneities and help to elucidate the role of disorder in reaction kinetics, in particular, in the reaction rate \cite{Berezhkovskii90,Berezhkovskii92,Berezhkovskii92b,Makhnovskii93,Oshanin98,Makhnovskii99,Makhnovskii02}. More generally, reactive traps and passive spherical obstacles can be used as elementary ``bricks'' to build up model geometrical structures of porous media or macromolecules such as enzymes or proteins \cite{Traytak96,Traytak06,Lavrentovich13,Traytak13,Piazza15,Galanti16a,Galanti16b,Grebenkov19}. As explicit analytical solutions to Eq. (\ref{eq:diff}) are in general not available, various mathematical tools and numerical techniques have been broadly used.
For instance, Torquato and co-workers applied the variational principle to derive upper and lower bounds on the steady-state reaction rate \cite{Richards87,Rubinstein88,Torquato91}. Among numerical techniques, Monte Carlo simulations and finite-element methods were most often employed thanks to their flexibility and applicability to arbitrary confining domains (see \cite{Lee89,Tsao01,Eun13,Eun20} and references therein). In contrast, the generalized method of separation of variables (GMSV), also known as the (multipole) re-expansion method, exploits the intrinsic local symmetries of perforated domains and relies on the re-expansion (addition) theorems. This method was applied in different disciplines ranging from electrostatics to hydrodynamics and scattering theory \cite{Ivanov70,Martin,Koc98,Gumerov02,Gumerov05}. In chemical physics, the GMSV for the Laplace equation was used to study steady-state diffusion and to compute the reaction rate in various configurations of traps \cite{Piazza15,Galanti16a,Galanti16b,Grebenkov19,Goodrich67,Traytak92,Tsao02,McDonald03,Traytak18}. In particular, a semi-analytical representation for the Green function of the Laplace equation was derived both in three-dimensional \cite{Grebenkov19} and two-dimensional spaces \cite{Chen09}, allowing one to access most steady-state characteristics of the diffusion-reaction process such as the reaction rate, the escape probability, the mean first-passage time, the residence time, and the harmonic measure density. However, these results are not applicable to transient time-dependent diffusion among traps, which is governed by the diffusion equation. As the Laplace transform reduces Eq. (\ref{eq:diff}) to the modified Helmholtz equation (see below), it would be natural to adapt the GMSV to this setting.
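For a single perfectly absorbing sphere, the Laplace-domain problem admits the classical closed-form solution $w(r;q)=(R/r)\,e^{-q(r-R)}$, a useful benchmark for any numerical scheme. A short symbolic check (assuming {\tt sympy} is available; this snippet is an illustration, not part of the original text) confirms that it satisfies the radial modified Helmholtz equation with unit Dirichlet value at $r=R$:

```python
import sympy as sp

r, q, R = sp.symbols('r q R', positive=True)
# classical single-sink solution in the Laplace domain (q^2 = s/D)
w = (R / r) * sp.exp(-q * (r - R))
# radial part of the Laplacian: (1/r^2) d/dr (r^2 dw/dr)
laplacian = sp.diff(r**2 * sp.diff(w, r), r) / r**2
print(sp.simplify(q**2 * w - laplacian))  # -> 0, i.e. (q^2 - nabla^2) w = 0
print(w.subs(r, R))                       # -> 1 (Dirichlet condition on the sphere)
```

In the limit $q\to 0$ this solution reduces to the familiar steady-state profile $R/r$ of the Smoluchowski problem.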
While the GMSV for the ordinary Helmholtz equation has been broadly employed in scattering theory \cite{Ivanov70,Martin,Koc98,Gumerov02,Gumerov05}, its applications to the modified Helmholtz equation seem to be much less studied \cite{Traytak08,Gordeliy09}. In this paper, we employ re-expansion formulas in spherical domains to develop a general framework for solving boundary value problems for the modified Helmholtz equation with Robin boundary conditions (specified below). From the numerical point of view, the proposed method can be seen as an extension of Ref. \cite{Grebenkov19} from the Laplace equation to the modified Helmholtz equation, as well as an extension of Ref. \cite{Gordeliy09} from exterior to interior domains. From the theoretical point of view, we derive a semi-analytical representation of the Green function for the modified Helmholtz equation, which determines most relevant characteristics of transient time-dependent diffusion. Moreover, we discuss how this method can be adapted to compute the eigenvalues and eigenfunctions of the Laplace operator and of the Dirichlet-to-Neumann operator in such perforated domains. To our knowledge, these spectral applications of the method are new. The paper is organized as follows. Section \ref{sec:framework} presents the GMSV and its applications to obtain the Green function (Sec. \ref{sec:Green}), the heat kernel (Sec. \ref{sec:heat}), the Laplacian spectrum (Sec. \ref{sec:Laplace}) and the spectrum of the Dirichlet-to-Neumann operator (Sec. \ref{sec:DN}). In Sec. \ref{sec:discussion}, we describe practical aspects of these results and their applications in chemical physics. In particular, we discuss first-passage properties (Sec. \ref{sec:first}), stationary diffusion of mortal particles (Sec. \ref{sec:mortal}), as well as advantages, limitations and further extensions of the method (Secs. \ref{sec:advantages}, \ref{sec:extensions}). Section \ref{sec:conclusion} concludes the paper.
Appendices regroup technical derivations and some examples. \section{General framework} \label{sec:framework} We consider diffusion outside the union of $N$ non-overlapping balls $\Omega_1,\ldots,\Omega_N$ of radii $R_i$, centered at ${\bm{x}}_i$: \begin{equation} \Omega = \Omega_0 \backslash \bigcup\limits_{i=1}^N \overline{\Omega}_i , \quad \Omega_i = \{ {\bm{x}}\in\mathbb{R}^3 ~:~ |{\bm{x}} - {\bm{x}}_i|<R_i\}, \end{equation} where $\Omega_0$ is a ball of radius $R_0$, centered at the origin ${\bm{x}}_0 = 0$, that encloses all the balls: $\overline{\Omega}_i \subset \Omega_0$ for all $i$ (Fig. \ref{fig:schema}). We allow $R_0$ to be infinite (i.e., $\Omega_0 = \mathbb{R}^3$), which describes an exterior problem, in which particles diffuse in an unbounded domain $\Omega$ and thus can escape at infinity. In turn, for any finite $R_0$, one deals with an interior problem of diffusion in a bounded domain $\Omega$. \begin{figure} \begin{center} \includegraphics[width=80mm]{figure1.pdf} \end{center} \caption{ {\bf (a)} Illustration of a bounded perforated domain $\Omega = \Omega_0 \backslash \bigcup\nolimits_{i=1}^3\overline{\Omega }_i$ with three balls $\Omega_i$ of radii $R_i$, centered at ${\bm{x}}_i$, all enclosed inside a larger ball $\Omega_0$ of radius $R_0$ centered at the origin. Local spherical coordinates, $(r_i,\theta_i,\phi_i)$, are associated with each ball. The exterior problem corresponds to the limit $R_0 = \infty$ when $\Omega_0 = \mathbb{R}^3$. {\bf (b)} Any point ${\bm{x}}$ can be represented either in local spherical coordinates $(r_j,\theta_j,\phi_j)$, associated with the center ${\bm{x}}_j$, or in local spherical coordinates $(r_i,\theta_i,\phi_i)$, associated with the center ${\bm{x}}_i$.
Accordingly, basis functions $\psi_{mn}^{\pm}({\bm{x}}-{\bm{x}}_j)$ can be re-expanded on basis functions $\psi_{kl}^{\pm}({\bm{x}}-{\bm{x}}_i)$, where ${\bm{x}}-{\bm{x}}_j = \bm{L}_{ij} + ({\bm{x}}-{\bm{x}}_i)$, with $\bm{L}_{ij} = {\bm{x}}_i - {\bm{x}}_j$ being the vector connecting ${\bm{x}}_j$ to ${\bm{x}}_i$.} \label{fig:schema} \end{figure} \subsection{General boundary value problem} \label{sec:general} We first consider a general boundary value problem for the modified Helmholtz equation \begin{subequations} \label{eq:Helm_problem} \begin{eqnarray} \label{eq:Helm} (q^2 - \nabla^2) w({\bm{x}};q) &=& 0 \quad ({\bm{x}}\in \Omega), \\ \label{eq:Robin} \left.\left(a_i w + b_i R_i \frac{\partial w}{\partial \bm{n}}\right)\right|_{{\partial\Omega}_i} &=& f_i \quad (i = 0,\ldots,N), \end{eqnarray} \end{subequations} where $q$ is a nonnegative parameter% \footnote{ While we focus on nonnegative $q$ throughout the main text, the method is implemented for any complex $q$, see Appendix \ref{sec:complex_q}.}, $\partial/\partial \bm{n}$ is the normal derivative on the boundary ${\partial\Omega} = \cup_{i=0}^N {\partial\Omega}_i$, oriented outwards the domain $\Omega$, $f_i$ are given continuous functions on ${\partial\Omega}_i$, and $a_i$ and $b_i$ are nonnegative constants such that $a_i + b_i > 0$ (i.e., $a_i$ and $b_i$ cannot be simultaneously $0$). The Robin boundary condition (\ref{eq:Robin}) is reduced to Dirichlet condition for $b_i = 0$ and to Neumann condition for $a_i = 0$. In particular, our description can accommodate perfectly reactive traps or sinks ($a_i > 0$, $b_i = 0$), partially reactive traps ($a_i > 0$, $b_i > 0$), and passive reflecting obstacles ($a_i = 0$, $b_i > 0$). For the exterior problem, Eq. (\ref{eq:Robin}) for $i = 0$ is replaced by the regularity condition $w({\bm{x}};q) \to 0$ as $|{\bm{x}}|\to\infty$. The basic idea of the GMSV consists in searching for the solution of Eq. 
(\ref{eq:Helm}) as a superposition of partial solutions $w_i$ in the exterior of each ball $\Omega_1,\ldots,\Omega_N$, and in the interior of $\Omega_0$: \begin{equation} \label{eq:g_gi} w({\bm{x}};q) = \sum\limits_{i=0}^N w_i({\bm{x}};q) \end{equation} (for the exterior problem, $w_0 \equiv 0$). As each domain $\Omega_i$ is spherical, the corresponding partial solution can be searched in the {\it local} spherical coordinates $(r_i,\theta_i,\phi_i)$ associated with $\Omega_i$, as an expansion over regular (for $i = 0$) and irregular (for $i > 0$) basis functions $\psi_{mn}^{\pm}$ with unknown coefficients $A_{mn}^i$, \begin{equation} \label{eq:gi} w_i({\bm{x}};q) = \sum\limits_{n=0}^\infty \sum\limits_{m=-n}^n A_{mn}^i \, \psi_{mn}^{\epsilon_i}(q r_i,\theta_i,\phi_i), \end{equation} where we use a shortcut notation $\epsilon_i = -$ for $i > 0$, and $\epsilon_0 = +$. For the modified Helmholtz equation, the basis functions are \begin{equation} \begin{split} \psi_{mn}^{+}(qr_i,\theta_i,\phi_i) &= i_n(qr_i) \, Y_{mn}(\theta_i,\phi_i) , \\ \psi_{mn}^{-}(qr_i,\theta_i,\phi_i) &= k_n(qr_i) \, Y_{mn}(\theta_i,\phi_i) , \\ \end{split} \end{equation} where \begin{equation} \begin{split} i_n(z) & = \sqrt{\pi/(2z)} \, I_{n+1/2}(z), \\ k_n(z) & = \sqrt{2/(\pi z)} \, K_{n+1/2}(z) \\ \end{split} \end{equation} are the modified spherical Bessel functions of the first and second kind, and $Y_{mn}(\theta,\phi)$ are the normalized spherical harmonics: \begin{equation} \label{eq:Y} Y_{mn}(\theta,\phi) = \sqrt{\frac{(2n+1) \, (n-m)!}{4\pi \, (n+m)!}} \, P_n^m(\cos\theta) \, e^{im\phi} , \end{equation} with $P_n^m(z)$ being the associated Legendre polynomials (we use the convention that $Y_{mn}(\theta,\phi) \equiv 0$ for $|m| > n$). 
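The radial parts of these basis functions can be checked numerically: both $i_n$ and $k_n$ must satisfy the modified spherical Bessel equation $z^2 f'' + 2 z f' - [z^2 + n(n+1)]\,f = 0$. The Python sketch below (using {\tt scipy.special}; the sample values of $n$, $z$ and the step $h$ are arbitrary) verifies this by central finite differences:

```python
import numpy as np
from scipy.special import iv, kv

def i_n(n, z):  # modified spherical Bessel function of the first kind
    return np.sqrt(np.pi / (2 * z)) * iv(n + 0.5, z)

def k_n(n, z):  # second kind, with the sqrt(2/(pi z)) prefactor used here
    return np.sqrt(2 / (np.pi * z)) * kv(n + 0.5, z)

# residual of z^2 f'' + 2 z f' - (z^2 + n(n+1)) f via central differences
n, z, h = 3, 1.7, 1e-4
for f in (i_n, k_n):
    d1 = (f(n, z + h) - f(n, z - h)) / (2 * h)
    d2 = (f(n, z + h) - 2 * f(n, z) + f(n, z - h)) / h**2
    res = z**2 * d2 + 2 * z * d1 - (z**2 + n * (n + 1)) * f(n, z)
    print(abs(res) < 1e-5)  # True for both definitions
```

Note that the $\sqrt{2/(\pi z)}$ prefactor of $k_n$ only rescales the solution and does not affect the differential equation; it merely fixes the normalization convention used throughout the paper.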
The unknown coefficients $A_{mn}^i$ are fixed by the boundary condition (\ref{eq:Robin}) applied on each ${\partial\Omega}_i$: \begin{equation} \label{eq:fi_def} f_i = \sum\limits_{j=0}^N \sum\limits_{m,n} A_{mn}^j \biggl(a_i + b_i R_i \frac{\partial}{\partial \bm{n}}\biggr) \psi_{mn}^{\epsilon_j}(qr_j,\theta_j,\phi_j) \biggr|_{{\partial\Omega}_i} , \end{equation} where $\sum\nolimits_{m,n}$ is a shorthand notation for the sum over $n = 0,1,2,\ldots$ and $m = -n,-n+1,\ldots,n$. As spherical harmonics form a complete basis of the space $L_2({\partial\Omega}_i)$, one can project this functional equation onto $Y_{kl}(\theta_i,\phi_i)$ to reduce it to an infinite system of linear algebraic equations for the coefficients $A_{mn}^j$: \begin{equation} \label{eq:coeff} F_{kl}^i = \sum\limits_{j=0}^N \sum\limits_{m,n} A_{mn}^j \, W_{mn,kl}^{j,i} \quad \begin{cases} i=0,1,\ldots,N, \\ l=0,1,\ldots,\, |k|\leq l, \end{cases} \end{equation} where \begin{align} \label{eq:W_def} & W_{mn,kl}^{j,i} \\ \nonumber & = \biggl( \biggl(a_i + b_i R_i \frac{\partial}{\partial \bm{n}}\biggr) \psi_{mn}^{\epsilon_j}(qr_j,\theta_j,\phi_j) \biggr|_{{\partial\Omega}_i} , Y_{kl} \biggr)_{L_2({\partial\Omega}_i)} \end{align} and \begin{equation} \label{eq:F_def} F_{kl}^i = (f_i, Y_{kl})_{L_2({\partial\Omega}_i)} , \end{equation} with the standard scalar product $(f,g)_{L_2({\partial\Omega}_i)} = \int\nolimits_{{\partial\Omega}_i} d\bm{s} \, f(\bm{s}) \, g^*(\bm{s})$, the asterisk denoting the complex conjugate. Even though $A_{mn}^j$, $F_{mn}^i$, and $W_{mn,kl}^{j,i}$ involve many indices, one can re-order them to consider $A_{mn}^j$ (resp., $F_{mn}^i$) as components of a (row) vector $\mathbf{A}$ (resp., $\mathbf{F}$), while $W_{mn,kl}^{j,i}$ as components of a matrix $\mathbf{W}$, so that Eq. (\ref{eq:coeff}) becomes a matrix equation: \begin{equation} \label{eq:F_AW} \mathbf{F} = \mathbf{A} \mathbf{W} .
\end{equation} In Appendix \ref{sec:AW}, we provide the explicit formulas for the matrix elements $W_{mn,kl}^{j,i}$, which depend only on $q$, on the positions and radii of the balls $\Omega_i$, and on the parameters $a_i$ and $b_i$. The derivation of these formulas relies on the re-expansion (addition) theorems for basis solutions $\psi_{mn}^{\pm}$ \cite{Hobson,Epton95}. Truncating the infinite-dimensional matrix $\mathbf{W}$ and inverting it numerically yield a truncated set of coefficients $A_{mn}^j$. In this way, Eqs. (\ref{eq:g_gi}, \ref{eq:gi}) provide a semi-analytical solution of the boundary value problem (\ref{eq:Helm}, \ref{eq:Robin}), in which the dependence on ${\bm{x}}$ is analytical (via explicit basis functions $\psi_{mn}^{\pm}$), but the coefficients $A_{mn}^i$ have to be obtained numerically from Eq. (\ref{eq:F_AW}). A practical implementation of this method is summarized in Appendix \ref{sec:implementation}, whereas its advantages and limitations are discussed in Sec. \ref{sec:advantages}. Figure \ref{fig:conc} illustrates three solutions $w({\bm{x}};q)$ of the modified Helmholtz equation with Dirichlet boundary conditions on a configuration with 7 balls enclosed by a larger sphere. As $q$ increases, the solution $w({\bm{x}};q)$ drops faster from its larger values on the outer sphere toward the perfectly absorbing traps. \begin{figure} \begin{center} \includegraphics[width=88mm]{figure2.pdf} \end{center} \caption{ {\bf (a)} Configuration of 7 perfect traps of radius $R_i = 0.1$ inside a larger sphere of radius $R_0 = 1$ on which the variable concentration profile is set: $f_0(\theta,\phi) = \frac12 (1 + \sin\theta \cos \phi)$ (illustrated by a colored contour at the equator). {\bf (b,c,d)} The solution $w({\bm{x}};q)$ evaluated on a horizontal cut at $z = 0$ (i.e., in the plane $xy$, view from the top), with $q = 0.2$ {\bf (b)}, $q = 1$ {\bf (c)} and $q = 5$ {\bf (d)}. 
The matrix $\mathbf{W}$ determining the coefficients $A_{mn}^i$ was truncated to the size $8(3+1)^2 \times 8(3+1)^2 = 128 \times 128$ with the truncation order $n_{\rm max} = 3$. } \label{fig:conc} \end{figure} \subsection{Green function} \label{sec:Green} The above general solution allows one to derive many useful quantities. Here, we aim at finding the Green function $G({\bm{x}},\bm{y};q)$ of the modified Helmholtz equation in $\Omega$ \cite{Duffy,Keilson} \begin{subequations} \begin{eqnarray} \label{eq:Helm2} (q^2 - \nabla^2) G({\bm{x}},\bm{y};q) &=& \delta({\bm{x}}-\bm{y}) \quad ({\bm{x}}\in \Omega), \\ \label{eq:Robin2} \left.\left(a_i G + b_i R_i \frac{\partial G}{\partial \bm{n}}\right)\right|_{{\partial\Omega}_i} &=& 0 \quad (i = 0,\ldots,N), \end{eqnarray} \end{subequations} where $\delta({\bm{x}}-\bm{y})$ is the Dirac distribution, and $\bm{y}$ is a fixed point in $\Omega$ (for the exterior problem, Eq. (\ref{eq:Robin2}) for $i=0$ is replaced by regularity condition $G({\bm{x}},\bm{y};q)\to 0$ as $|{\bm{x}}|\to\infty$). We search for the Green function in the form \begin{equation} \label{eq:Green0} G({\bm{x}},\bm{y};q) = G_{\rm f}({\bm{x}},\bm{y};q) - g({\bm{x}};\bm{y},q), \end{equation} where \begin{equation} \label{eq:G0_Helm} G_{\rm f}({\bm{x}},\bm{y};q) = \frac{\exp(-q|{\bm{x}} - \bm{y}|)}{4\pi |{\bm{x}}-\bm{y}|} \end{equation} is the fundamental solution of the modified Helmholtz equation, whereas the auxiliary function $g({\bm{x}};\bm{y},q)$ satisfies Eqs. (\ref{eq:Helm_problem}), with \begin{equation} f_i = \left.\left(a_i G_{\rm f} + b_i R_i \frac{\partial G_{\rm f}}{\partial \bm{n}}\right)\right|_{{\partial\Omega}_i} \,. \end{equation} In Appendix \ref{sec:AF}, we derive explicit formulas for the scalar product in Eq. (\ref{eq:F_def}) determining the components $F_{mn}^i$ of the vector $\mathbf{F}$. 
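In practice, both the general boundary value problem and the Green function construction end with solving the truncated linear system (\ref{eq:F_AW}). A minimal Python sketch of this step follows; the matrix $\mathbf{W}$ is replaced here by a random well-conditioned placeholder (the actual matrix elements follow Appendix \ref{sec:AW}), so only the row-vector convention and the truncation size are illustrated:

```python
import numpy as np

# Truncation: N+1 balls, orders n = 0..n_max  =>  (N+1)*(n_max+1)**2 unknowns
N, n_max = 7, 3
size = (N + 1) * (n_max + 1) ** 2          # 8*(3+1)^2 = 128, as in Fig. 2

rng = np.random.default_rng(0)
# Placeholder for the truncated matrix W (not the actual GMSV matrix)
W = np.eye(size) + 0.1 * rng.standard_normal((size, size))
F = rng.standard_normal(size)              # projections (f_i, Y_kl)

# Row-vector convention F = A W  <=>  W^T A^T = F^T
A = np.linalg.solve(W.T, F)

assert np.allclose(A @ W, F)               # residual check
```

For a well-conditioned truncated matrix, a direct dense solve is sufficient at these sizes; iterative solvers become attractive only for much larger truncation orders.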
Among various applications, the Green function allows one to solve the inhomogeneous modified Helmholtz equation: \begin{subequations} \label{eq:Helm_inhom} \begin{eqnarray} (q^2 - \nabla^2) w({\bm{x}};q) &=& F({\bm{x}}) \quad ({\bm{x}}\in \Omega), \\ \left.\left(a_i w + b_i R_i \frac{\partial w}{\partial \bm{n}}\right)\right|_{{\partial\Omega}_i} &=& 0 \quad (i = 0,\ldots,N) \end{eqnarray} \end{subequations} (with a given continuous function $F$) as \begin{equation} w({\bm{x}}) = \int\limits_{\Omega} d\bm{y} \, G({\bm{x}},\bm{y};q) \, F(\bm{y}). \end{equation} Equivalently, Eqs. (\ref{eq:Helm_inhom}) could be solved by reduction to homogeneous Eqs. (\ref{eq:Helm_problem}) with the help of the fundamental solution $G_{\rm f}({\bm{x}},\bm{y};q)$. \subsection{Heat kernel} \label{sec:heat} The solution of the modified Helmholtz equation opens the way to numerous applications in heat transfer and nonstationary diffusion. For instance, the Green function $G({\bm{x}},\bm{y};q)$ is related to the Laplace transform of the heat kernel $P({\bm{x}},t|\bm{y})$ that satisfies the diffusion equation \begin{subequations} \begin{eqnarray} \frac{\partial P({\bm{x}},t|\bm{y})}{\partial t} - D \nabla^2 P({\bm{x}},t|\bm{y}) &=& 0 , \\ P({\bm{x}},t=0|\bm{y}) &=& \delta({\bm{x}}-\bm{y}) , \\ \left.\left(a_i P + b_i R_i \frac{\partial P}{\partial \bm{n}}\right)\right|_{{\partial\Omega}_i} &=& 0 \end{eqnarray} \end{subequations} (for the exterior problem, the Robin boundary condition on ${\partial\Omega}_0$ is replaced by the regularity condition $P\to 0$ as $|{\bm{x}}|\to \infty$). The heat kernel describes the likelihood that a particle started from a point $\bm{y}$ at time $0$ has survived surface reactions on ${\partial\Omega}$ and is found in the vicinity of a point ${\bm{x}}$ at a later time $t$ \cite{Gardiner,Grebenkov19e}.
The Laplace transform of the diffusion equation yields the modified Helmholtz equation, so that \begin{equation} \label{eq:P_G} \int\limits_0^\infty dt \, e^{-pt} \, P({\bm{x}},t|\bm{y}) = \frac{1}{D} \, G({\bm{x}},\bm{y}; \sqrt{p/D}). \end{equation} \subsection{Laplacian eigenvalues and eigenfunctions} \label{sec:Laplace} Replacing $q$ by $iq$ transforms the modified Helmholtz equation (\ref{eq:Helm}) to the ordinary Helmholtz equation: \begin{equation} \label{eq:Helm_ordinary} (q^2 + \nabla^2) w({\bm{x}};q) = 0. \end{equation} As solutions of this equation by the GMSV were thoroughly studied in scattering theory \cite{Ivanov70,Martin,Koc98,Gumerov02,Gumerov05}, we do not discuss them here. However, we mention that the above method can also be adapted to compute the eigenvalues and eigenfunctions of the Laplace operator $-\nabla^2$ in a bounded domain $\Omega$ (i.e., with $R_0 < \infty$): \begin{subequations} \begin{eqnarray} \label{eq:eigen} \nabla^2 u({\bm{x}}) + \lambda u({\bm{x}}) &=& 0 \quad ({\bm{x}}\in \Omega), \\ \left. \left(a_i u + b_i R_i \frac{\partial u}{\partial \bm{n}}\right) \right|_{{\partial\Omega}_i} &=& 0 . \end{eqnarray} \end{subequations} As Eq. (\ref{eq:eigen}) is the ordinary Helmholtz equation, it is convenient to search for an eigenpair $(\lambda, u({\bm{x}}))$ in the form \begin{equation} \label{eq:eigen_u} u({\bm{x}}) = \sum\limits_{j=0}^{N} \sum\limits_{m,n} A_{mn}^j\, \psi_{mn}^{\epsilon_j}(qr_j, \theta_j, \phi_j) , \end{equation} with $q = i\sqrt{\lambda}$. This is equivalent to setting $f_i \equiv 0$ and thus $\mathbf{F} \equiv 0$ in Eq. (\ref{eq:F_AW}). The necessary and sufficient condition to satisfy the matrix equation $\mathbf{A} \mathbf{W} = 0$ is \begin{equation} \label{eq:eigen_det} \det(\mathbf{W}) = 0 . \end{equation} If $\{q_k\}$ is the set of the values of $q$ at which this condition is satisfied, one gets the eigenvalues: $\lambda_k = - q_k^2$. 
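As a sanity check of the determinant condition, consider the unit ball without inner traps ($N = 0$) and the Dirichlet condition on ${\partial\Omega}_0$. The matrix $\mathbf{W}$ is then diagonal over $(n,m)$, and $\det(\mathbf{W}) = 0$ reduces to $i_n(qR_0) = 0$. Since $i_0(ix) = \sin(x)/x$, the purely radial eigenvalues are $\lambda_k = (k\pi/R_0)^2$, which the following sketch recovers numerically (for $n \geq 1$, the same construction leads to zeros of spherical Bessel functions):

```python
import numpy as np
from scipy.optimize import brentq

R0 = 1.0

def det_condition(lam):
    """i_0(q R0) at q = i*sqrt(lam): i_0(i x) = sin(x)/x with x = sqrt(lam)*R0."""
    x = np.sqrt(lam) * R0
    return np.sin(x) / x

# bracket and refine the first three zeros of the n = 0 branch
eigs = [brentq(det_condition,
               ((k - 0.5) * np.pi / R0) ** 2,
               ((k + 0.5) * np.pi / R0) ** 2)
        for k in (1, 2, 3)]

# known Dirichlet eigenvalues of the ball (radial modes): (k*pi/R0)^2
assert np.allclose(eigs, [(k * np.pi / R0) ** 2 for k in (1, 2, 3)], rtol=1e-10)
```

The brackets exploit the sign change of $\sin(x)/x$ across each zero $x = k\pi$, so the root-finder cannot miss or duplicate an eigenvalue on this branch.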
From the general spectral theory, the Laplace operator in a bounded domain with Robin boundary conditions is known to have infinitely many nonnegative eigenvalues growing to infinity, so that all zeros $q_k$ should lie on the imaginary axis. In practice, the matrix $\mathbf{W}$ is first truncated and then some zeros $q_k$ of $\det(\mathbf{W})$ are computed numerically. These zeros yield the approximate eigenvalues. The computation of the associated eigenfunctions is standard. At each value $q_k$, the system of linear equations $\mathbf{A} \mathbf{W} = 0$ is under-determined and has infinitely many solutions. If the eigenvalue $\lambda_k = -q_k^2$ is simple, one can fix a solution by setting one of the unknown coefficients, e.g., $A_{00}^1$, to a constant $c$. This results in a smaller system of {\it inhomogeneous} linear equations for the remaining coefficients $A_{mn}^j$ that can be solved numerically. The corresponding eigenfunction is given by Eq. (\ref{eq:eigen_u}). The arbitrary constant $c$ is simply a choice of the normalization of that eigenfunction. Once the eigenfunction is constructed, it can be renormalized appropriately. For eigenvalues with multiplicity $m > 1$, an eigenfunction is defined up to $m$ free constants that can be chosen in a standard way. \subsection{Dirichlet-to-Neumann operator} \label{sec:DN} The GMSV can be applied to investigate the spectral properties of the Dirichlet-to-Neumann operator.
For a given function $f$ on the boundary ${\partial\Omega}$, the Dirichlet-to-Neumann operator $\mathcal{M}_p$ associates to it another function $g = (\partial w/\partial \bm{n})_{|{\partial\Omega}}$ on that boundary, where $w$ is the solution of the Dirichlet boundary value problem \begin{equation} \label{eq:Helm3} (p - D\nabla^2) w = 0 \quad ({\bm{x}}\in \Omega), \qquad w|_{{\partial\Omega}} = f \end{equation} (for an exterior problem, the regularity condition $w({\bm{x}})\to 0$ as $|{\bm{x}}|\to\infty$ is also imposed; see \cite{Arendt14,Daners14,Arendt15,Hassell17,Girouard17} for a rigorous mathematical definition). The Dirichlet-to-Neumann operator can be used as an alternative to the Laplace operator in describing diffusion-reaction processes. In particular, the eigenvalues and eigenfunctions of $\mathcal{M}_p$ determine most diffusion-reaction characteristics, even for inhomogeneous surface reactivity \cite{Grebenkov19b,Grebenkov19c}. As the boundary ${\partial\Omega}$ is the union of non-intersecting spheres ${\partial\Omega}_i$, a function $f$ on ${\partial\Omega}$ can be represented by its restrictions $f_i = f|_{{\partial\Omega}_i}$, and Eq. (\ref{eq:g_gi}) is the semi-analytical solution of Eq. (\ref{eq:Helm3}), by setting $q = \sqrt{p/D}$, $a_i = 1$ and $b_i = 0$. The action of the operator $\mathcal{M}_p$ can be determined by computing the normal derivative of the solution $w$. In Appendix \ref{sec:dwdn}, we represent the normal derivative as \begin{equation} \label{eq:dwdn} \left. \biggl(\frac{\partial w}{\partial \bm{n}}\biggr)\right|_{{\partial\Omega}_i} = \sum\limits_{m,n} \bigl(\mathbf{F} \tilde{\mathbf{W}}^{-1} \tilde{\mathbf{W}}'\bigr)_{mn}^i Y_{mn}(\theta_i,\phi_i) , \end{equation} where the matrices $\tilde{\mathbf{W}}$ and $\tilde{\mathbf{W}}'$ are defined by explicit formulas (\ref{eq:Wtilde0}, \ref{eq:Wtilde}), and we inverted Eq. (\ref{eq:F_AW}) to express the coefficients $\mathbf{A}$.
As a consequence, the Dirichlet-to-Neumann operator $\mathcal{M}_p$ is represented in the basis of spherical harmonics by the following matrix \begin{equation} \label{eq:M_DtN} \mathbf{M} = \tilde{\mathbf{W}}^{-1} \tilde{\mathbf{W}}' . \end{equation} In particular, the eigenvalues of this matrix coincide with the eigenvalues of $\mathcal{M}_p$, whereas its eigenvectors allow one to reconstruct the eigenfunctions of $\mathcal{M}_p$. In practice, one computes a truncated version of the matrix $\mathbf{M}$, whose eigenvalues approximate a number of eigenvalues of $\mathcal{M}_p$. For the exterior problem, one needs to reduce the matrices $\tilde{\mathbf{W}}$ and $\tilde{\mathbf{W}}'$ by removing the block row and block column corresponding to $\Omega_0$ (see Appendix \ref{sec:AW}). We recall that, in contrast to the Laplace operator, whose spectrum is continuous for the exterior problem, the spectrum of the Dirichlet-to-Neumann operator is discrete for interior and exterior perforated domains, because their boundary ${\partial\Omega}$ is bounded in both cases. The above method can also be adapted to study an extension of the Dirichlet-to-Neumann operator to the case when some spheres $\Omega_i$ are reflecting. Indeed, let $I$ denote the set of indices of spheres ${\partial\Omega}_i$ that are reactive, whereas the remaining spheres with indices $\{0,1,\ldots,N\} \backslash I$ are reflecting. Then one can define the Dirichlet-to-Neumann operator $\mathcal{M}_p^\Gamma$, acting on a function $f$ on $\Gamma = \cup_{i\in I} {\partial\Omega}_i$ as $\mathcal{M}_p^\Gamma ~:~ f\to g = (\partial w/\partial \bm{n})|_{\Gamma}$, where $w$ is the solution of the mixed boundary value problem: \begin{equation} (p - D \nabla^2) w = 0 \quad \textrm{in} ~\Omega, \qquad \left\{ \begin{array}{l} w|_{\Gamma} = f , \\ (\partial w/\partial \bm{n})|_{\Omega\backslash \Gamma} = 0. \end{array} \right.
\end{equation} The matrix representation of the operator $\mathcal{M}_p^\Gamma$ is still given by Eq. (\ref{eq:M_DtN}), in which the matrix $\tilde{\mathbf{W}}$ is replaced by another matrix evaluated with $a_i = 1$, $b_i = 0$ for $i\in I$ (Dirichlet condition) and $a_i = 0$, $b_i = 1$ for $i \in \{0,1,\ldots,N\} \backslash I$ (Neumann condition), see Appendix \ref{sec:AW}. \section{Discussion} \label{sec:discussion} The previous section presented a concise overview of several major applications of the GMSV for the modified Helmholtz equation. In this section, we discuss its practical aspects and illustrate the use of the GMSV on several examples in the context of chemical physics. \subsection{First-passage properties} \label{sec:first} As the Green function $G({\bm{x}},\bm{y};q)$ is related via Eq. (\ref{eq:P_G}) to the Laplace transform of the heat kernel, it determines most diffusion-reaction characteristics in the Laplace domain (see \cite{Grebenkov19b} for details). For instance, the Laplace transform of the probability flux density $j(\bm{s},t|\bm{y})$ reads \begin{align} \nonumber \tilde{j}(\bm{s},p|\bm{y}) & = \int\limits_0^\infty dt \, e^{-pt} \, j(\bm{s},t|\bm{y}) \\ \label{eq:jdef} & = \biggl(-\frac{\partial G({\bm{x}},\bm{y}; \sqrt{p/D})}{\partial \bm{n}}\biggr)\biggr|_{{\bm{x}} = \bm{s} \in {\partial\Omega}} . \end{align} We recall that $j(\bm{s},t|\bm{y})$ is the joint probability density of the reaction time and location on the partially reactive boundary ${\partial\Omega}$ for a particle started from a point $\bm{y} \in \Omega$. The normal derivative of the Green function was evaluated in Appendix \ref{sec:dwdn}, yielding: \begin{equation} \label{eq:tildej} \tilde{j}(\bm{s},p|\bm{y})\biggr|_{{\partial\Omega}_i} = \sum\limits_{m,n} \mathbf{J}_{mn}^i(\bm{y}) \, Y_{mn}(\theta_i,\phi_i) , \end{equation} where the components of the vector $\mathbf{J}$ are given by Eq. (\ref{eq:Jmatrix}), with $q = \sqrt{p/D}$. 
\subsubsection*{Probability distribution of the reaction time} The integral of the joint probability density $j(\bm{s},t|\bm{y})$ over time $t$ yields the spread harmonic measure density on the sphere ${\partial\Omega}_i$ \cite{Grebenkov06,Grebenkov15}. This is a natural extension of the harmonic measure density to partially reactive traps with Robin boundary condition, which characterizes the distribution of the reaction location. As the integral of $j(\bm{s},t|\bm{y})$ over $t$ is equal to $\tilde{j}(\bm{s},0|\bm{y})$ (i.e., with $p = q = 0$), the modified Helmholtz equation reduces to the Laplace equation. The explicit representation of the spread harmonic measure density and its properties were discussed in Ref. \cite{Grebenkov19}. In turn, the integral of $j(\bm{s},t|\bm{y})$ over the reaction location $\bm{s}$ yields the probability density of the reaction time: \begin{equation} H(t|\bm{y}) = \int\limits_{{\partial\Omega}} d\bm{s} \, j(\bm{s},t|\bm{y}) . \end{equation} In the Laplace domain, the expansion (\ref{eq:tildej}) allows one to easily compute this integral due to the orthogonality of spherical harmonics: \begin{equation} \label{eq:tildeH} \tilde{H}(p|\bm{y}) = \int\limits_{{\partial\Omega}} d\bm{s} \, \tilde{j}(\bm{s},p|\bm{y}) = \sqrt{4\pi} \sum\limits_{i=0}^N R_i^2 \, \mathbf{J}_{00}^i(\bm{y}) , \end{equation} where the factor $R_i^2$ accounts for the area of the $i$-th ball, and the matrix elements $\mathbf{J}_{00}^i(\bm{y})$ are given in Eq. (\ref{eq:J00i}). Note that each term in this sum is the probability flux onto the sphere ${\partial\Omega}_i$, while the dependence on $\bm{y}$ comes explicitly through the expression for $\mathbf{J}$. By definition, $\tilde{H}(p|\bm{y}) = \langle \exp(-p \mathcal{T})\rangle$ is the generating function of the moments of the reaction time $\mathcal{T}$: \begin{equation} \langle \mathcal{T}^k \rangle = (-1)^k \lim\limits_{p\to 0} \frac{\partial^k \tilde{H}(p|\bm{y})}{\partial p^k} \,.
\end{equation} One can thus determine the mean and higher-order moments of the reaction time $\mathcal T$. In turn, the inverse Laplace transform of Eq. (\ref{eq:tildeH}) gives $H(t|\bm{y})$ in the time domain. The integral of $H(t|\bm{y})$ from $0$ to $t$ yields the probability of reaction up to time $t$, whereas the integral from $t$ to infinity is the survival probability of the particle. We conclude that the present approach opens new opportunities for studying various first-passage phenomena for an arbitrary configuration of non-overlapping partially reactive spherical traps. In other words, this approach generalizes the classical results for diffusion outside a single trap, for which one has $\mathbf{U} = 0$, and the above expression simplifies to \begin{equation} \label{eq:Hp_sphere} \tilde{H}(p|\bm{y}) = \frac{R_1 \, e^{-\sqrt{p/D}(|\bm{y}|-R_1)}}{|\bm{y}| (a_1 + b_1 (1+R_1\sqrt{p/D}))} \,, \end{equation} where we used the Wronskian \begin{equation} \label{eq:Wronskian} i'_n(z) k_n(z) - k'_n(z) i_n(z) = \frac{1}{z^2} \, \end{equation} and the explicit relations $i_0(z) = \sinh(z)/z$ and $k_0(z) = e^{-z}/z$. The inverse Laplace transform of this formula yields the expression for $H(t|\bm{y})$ derived by Collins and Kimball \cite{Collins49}. Setting $a_1 = 1$ and $b_1 = 0$, one retrieves another classical expression for a perfectly reactive trap studied by von Smoluchowski \cite{Smoluchowski17}. We emphasize that for a single trap, the analysis can be pushed much further by including, e.g., interaction potentials (see \cite{Sano79,Son13,Lee20} and references therein). The more elaborate example of two concentric spheres is discussed in Appendix \ref{sec:concentric}.
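Both ingredients of Eq. (\ref{eq:Hp_sphere}) are easy to verify numerically: the Wronskian (\ref{eq:Wronskian}) and, for a perfectly reactive trap ($a_1 = 1$, $b_1 = 0$), the limit $\tilde{H}(0|\bm{y}) = R_1/|\bm{y}|$, i.e., the classical hitting probability. A short Python check (function names are ours):

```python
import numpy as np
from scipy.special import iv, kv

def i_n(n, z): return np.sqrt(np.pi / (2 * z)) * iv(n + 0.5, z)
def k_n(n, z): return np.sqrt(2 / (np.pi * z)) * kv(n + 0.5, z)

# Wronskian i'_n k_n - k'_n i_n = 1/z^2, checked by central differences
z, h = 1.3, 1e-6
for n in range(4):
    di = (i_n(n, z + h) - i_n(n, z - h)) / (2 * h)
    dk = (k_n(n, z + h) - k_n(n, z - h)) / (2 * h)
    assert abs(di * k_n(n, z) - dk * i_n(n, z) - 1 / z**2) < 1e-8

# Eq. (eq:Hp_sphere) for a perfectly reactive trap (a_1 = 1, b_1 = 0)
def H_tilde(p, y, R1=1.0, D=1.0):
    q = np.sqrt(p / D)
    return R1 * np.exp(-q * (y - R1)) / y

# p -> 0 recovers the classical hitting probability R1/|y|
assert abs(H_tilde(1e-12, 2.0) - 0.5) < 1e-5
```

The finite-difference check of the Wronskian is only a numerical sanity test; the identity itself holds exactly.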
\subsubsection*{Presence of reflecting obstacles?} \begin{figure} \begin{center} \includegraphics[width=88mm]{figure3.pdf} \end{center} \caption{ {\bf (a,b)} Two configurations of 35 reflecting spherical obstacles of radius $\rho$ inside a larger sphere of radius $R$, with the same centers but distinct radii: $\rho/R = 0.1$ {\bf (a)} and $\rho/R = 0.2$ {\bf (b)}. {\bf (c)} Laplace-transformed probability density $\tilde{H}(p|0)$ of the first-exit time from the center of the ball of radius $R$ to its boundary ${\partial\Omega}_0$ in the presence of 35 reflecting spherical obstacles. The function $\tilde{H}(p|0)$ was computed via Eq. (\ref{eq:tildeH}), with the truncation order $n_{\rm max} = 2$. For comparison, the gray dash-dotted line shows the classical expression $\tilde{H}(p|0) = 1/i_0(R\sqrt{p/D})$ for an empty ball without obstacles. The inset shows the Laplace-transformed survival probability $\tilde{S}(p|0) = (1 - \tilde{H}(p|0))/p$. } \label{fig:Hp_origin} \end{figure} How do reflecting obstacles modify the reaction time distribution? Figure \ref{fig:Hp_origin} presents the Laplace-transformed probability density $\tilde{H}(p|0)$ of the first-exit time from the center of the ball of radius $R_0 = R$ to its boundary ${\partial\Omega}_0$ in the presence of 35 reflecting spherical obstacles of equal radii $R_i = \rho$. In Ref. \cite{Grebenkov17d}, we conjectured that reflecting obstacles cannot speed up the exit from the center of the ball, i.e., $S(t|0) \geq S_0(t|0)$, where $S(t|0)$ and $S_0(t|0)$ are the survival probabilities with and without obstacles, respectively. As a consequence, their Laplace transforms satisfy the same inequality: $\tilde{S}(p|0) \geq \tilde{S}_0(p|0)$. This statement is not trivial: on one hand, reflecting obstacles hinder the motion of the diffusing particle and thus increase its first-exit time; on the other hand, the obstacles reduce the available space, which might speed up the exit.
According to this conjecture, the hindering effect always ``wins'' for diffusion from the center to the boundary of a ball, but it does not necessarily hold for other starting points, nor for other (non-spherical) domains. This conjecture is confirmed in our numerical example, as illustrated in the inset of Fig. \ref{fig:Hp_origin}. As expected, small obstacles ($\rho/R = 0.1$) barely alter $\tilde{H}(p|0)$ and $\tilde{S}(p|0)$, the curves being almost indistinguishable. Most surprisingly, even large obstacles ($\rho/R = 0.2$) that fill $35(\rho/R)^3 \approx 28\%$ of the volume, also have a very moderate effect, which is mainly visible in the inset at small $p$. Indeed, the obstacles hinder diffusion and slightly increase the mean first-exit time $\tilde{S}(p=0|0)$, from $R^2/(6D) \approx 0.17 (R^2/D)$ without obstacles, to $0.19 (R^2/D)$ in the presence of obstacles. Even though this observation holds for the particular geometric setting of spherical obstacles, one can question the role of hindering obstacles in more general configurations. A systematic study of this problem can be performed in the future using the present numerical and analytical approach. As discussed in Sec. \ref{sec:mortal}, $\tilde{H}(p|0)$ can alternatively be interpreted as the stationary concentration at $\bm{y} = 0$ of mortal particles whose concentration on the outer boundary is kept constant. \subsubsection*{Presence of absorbing sinks?} With the help of the GMSV, one can refine the above analysis by considering the following first-passage time problem: for a particle started from $\bm{y}$, what is the reaction time on a given trap $i$ in the presence of absorbing sinks that can irreversibly bind the diffusing particle? The role of such binding sites onto the protein search for targets on DNA chain was recently investigated within a simplified one-dimensional model \cite{Lange15}.
The GMSV allows one to push this analysis further toward more elaborate geometric configurations. The related survival probability $S(t|\bm{y})$ satisfies the backward diffusion equation: \begin{subequations} \begin{align} \label{eq:St_diff} \frac{\partial S(t|\bm{y})}{\partial t} - D \nabla^2 S(t|\bm{y}) &= 0 \quad (\bm{y}\in\Omega), \\ \left. \biggl(a_i S + b_i R_i \frac{\partial S}{\partial \bm{n}} \biggr) \right|_{{\partial\Omega}_i} & = 0 , \\ \label{eq:St_diff_c} S(t|\bm{y}) \bigl|_{{\partial\Omega}_j} & = 1 \quad (j\ne i), \end{align} \end{subequations} with the initial condition $S(t=0|\bm{y}) = 1$. We emphasize that this probability characterizes the reaction events on the trap $i$; if the particle instead binds to an absorbing sink (with $j\ne i$), it survives forever, see Eq. (\ref{eq:St_diff_c}). The probability density of the reaction time is still $H(t|\bm{y}) = - \partial S(t|\bm{y})/\partial t$ but it is not normalized to $1$ given that the reaction may never happen due to irreversible binding. The Laplace transform reduces the diffusion equation (\ref{eq:St_diff}) to the modified Helmholtz equation. Rewriting this equation for the Laplace-transformed probability density, $\tilde{H}(p|\bm{y}) = 1 - p \tilde{S}(p|\bm{y})$, one gets \begin{subequations} \label{eq:H_FPT_new} \begin{align} (p - D \nabla^2) \tilde{H}(p|\bm{y}) &= 0 \quad (\bm{y}\in\Omega), \\ \label{eq:H_Robin} \left. \biggl(a_j \tilde{H} + b_j R_j \frac{\partial \tilde{H}}{\partial \bm{n}} \biggr) \right|_{{\partial\Omega}_j} & = a_j \delta_{ij} \quad (j=0,\ldots,N), \end{align} \end{subequations} where $a_j = 1$ and $b_j = 0$ for all $j\ne i$. As this is a specific case of the general boundary value problem considered in Sec. \ref{sec:general}, its semi-analytical solution is accessible via the GMSV. If one is interested in finding the reaction time on a subset $I$ of traps, the condition $a_j = 1$ and $b_j = 0$ is imposed only for $j\notin I$, and the right-hand side of Eq.
(\ref{eq:H_Robin}) becomes $a_j 1_{j\in I}$, where $1_{j\in I}$ is the indicator function, equal to $1$ if $j\in I$ and $0$ otherwise. When $I = \{0,1,\ldots,N\}$, one retrieves the standard first-passage time problem, with $a_j$ on the right-hand side for all $j$. Note also that some traps from the subset $I$ can be reflecting and thus represent passive obstacles. Finally, as $\tilde{H}(0|\bm{y})$ is the integral of $H(t|\bm{y})$, it can be interpreted as the probability of reaction, also known as the splitting probability for perfectly reactive traps. \subsection{Stationary diffusion of mortal particles} \label{sec:mortal} In the case of perfectly absorbing traps ($a_i = 1$, $b_i = 0$), the boundary condition (\ref{eq:H_Robin}) simply reads $\tilde{H}|_{{\partial\Omega}_j} = \delta_{ij}$, and the above first-passage time problem is equivalent to stationary diffusion of ``mortal'' particles, which move from a source on ${\partial\Omega}_i$ to perfect sinks on the remaining spheres ${\partial\Omega}_j$ and spontaneously disappear with the bulk rate $p$. This is a very common situation in biological and chemical diffusion-reaction processes.
Among typical examples, one can mention: spermatozoa moving in an aggressive medium toward an egg cell; bacteria or viruses that can be neutralized by the immune system; cells or animals searching for food and starving to death; proteins or RNA molecules which can disassemble and be recycled within the cell; fluorescent proteins diffusing toward receptors and spontaneously losing their signal and thus disappearing from view in single-particle tracking experiments; excited nuclei losing their magnetization due to relaxation processes in nuclear magnetic resonance experiments; diffusing radioactive nuclei that may disintegrate on their way from the nuclear reactor core; more generally, molecules that can be irreversibly bound to a bulk constituent or be chemically transformed on their way to catalytic sites \cite{Yuste13,Meerson15,Grebenkov17d,Grebenkov07,Schuss19}. For instance, setting a constant concentration $c_0$ on the outer sphere ${\partial\Omega}_0$ and zero concentration on the inner spheres ${\partial\Omega}_j$ describes the diffusive flux of particles toward perfect sinks. Alternatively, one can impose a constant flux on the outer sphere to model particles constantly coming onto ${\partial\Omega}_0$ from the exterior space. Similarly, any set of inner balls can play the role of a source. In turn, setting the Neumann condition on some inner spheres switches them to inert obstacles, whereas the Robin condition describes an intermediate behavior. The diffusive flux onto the trap $\Omega_j$ is then obtained from Eq. (\ref{eq:dwdn}): \begin{align} \nonumber J_j & = \int\limits_{{\partial\Omega}_j} d\bm{s} \, \left. \biggl(-D c_0 \frac{\partial \tilde{H}(p|\bm{y})}{\partial \bm{n}}\biggr)\right|_{\bm{y} = \bm{s}\in {\partial\Omega}_j} \\ & = -\sqrt{4\pi} c_0 D R_j^2 \bigl(\mathbf{F} \mathbf{M}^\dagger \bigr)_{00}^j , \end{align} where the matrix $\mathbf{M}$ is defined by Eq. (\ref{eq:M_DtN}), and we used the orthogonality of spherical harmonics.
Here, the components of the vector $\mathbf{F}$ from Eq. (\ref{eq:F_def}) describe whether each ball is a source or a sink. For instance, if there is a single source located on the sphere ${\partial\Omega}_i$, then $f_j(\bm{s}) = \delta_{ij}$ and thus $F_{mn}^j = \delta_{ij} \delta_{n0} \delta_{m0} \sqrt{4\pi}$ so that \begin{equation} J_j = - 4\pi D c_0 R_j^2 \bigl(\mathbf{M} \bigr)_{00,00}^{ji} . \end{equation} Expectedly, the flux is positive on traps and negative on the source. When there is a subset of sources, this expression is summed over the indices $i$ of the sources. Note that all balls can be treated as sources, in which case particles disappear only due to the bulk rate $p$. As an example, let us consider two concentric spheres and assign the outer sphere $\Omega_0$ to be a source and the inner sphere $\Omega_1$ to be a sink. In this elementary setting, one gets an explicit solution \begin{align} w({\bm{x}};q) & = c_0 \frac{i_0(q|{\bm{x}}|) k_0(qR_1) - k_0(q|{\bm{x}}|) i_0(qR_1)}{i_0(qR_0) k_0(qR_1) - k_0(qR_0) i_0(qR_1)} \,,\\ J_1 & = \frac{4\pi c_0 D qR_0R_1}{\sinh(q(R_0-R_1))} \,, \end{align} with $q = \sqrt{p/D}$. In the limit $p\to 0$ and $R_0 \to \infty$, one retrieves the Smoluchowski formula for the steady-state reaction rate of a ball of radius $R_1$: $J_1 = 4\pi c_0 D R_1$. \subsubsection*{Reaction rate} On the other hand, the integral of $\tilde{j}(\bm{s},p|\bm{y})$ from Eq. (\ref{eq:tildej}) yields the probability flux onto the sphere ${\partial\Omega}_i$ from a point source at $\bm{y}$.
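The explicit concentric-sphere formulas above can be cross-checked numerically: the solution $w$ must match the imposed boundary values, and the flux obtained by differentiating $w$ at the sink surface, $J_1 = 4\pi R_1^2 D\, (\partial w/\partial r)|_{r=R_1}$, must reproduce the closed form. A minimal Python sketch (the parameter values are arbitrary):

```python
import numpy as np

def i0(z): return np.sinh(z) / z
def k0(z): return np.exp(-z) / z

c0, D, R0, R1, p = 1.0, 1.0, 2.0, 0.5, 3.0
q = np.sqrt(p / D)

def w(r):
    """Concentration between a source at r = R0 and a perfect sink at r = R1."""
    num = i0(q * r) * k0(q * R1) - k0(q * r) * i0(q * R1)
    den = i0(q * R0) * k0(q * R1) - k0(q * R0) * i0(q * R1)
    return c0 * num / den

# boundary values: source on the outer sphere, perfect sink on the inner one
assert abs(w(R0) - c0) < 1e-12 and abs(w(R1)) < 1e-12

# total flux onto the sink from the radial derivative at r = R1
h = 1e-6
J1_num = 4 * np.pi * R1**2 * D * (w(R1 + h) - w(R1 - h)) / (2 * h)
J1_exact = 4 * np.pi * c0 * D * q * R0 * R1 / np.sinh(q * (R0 - R1))
assert abs(J1_num / J1_exact - 1) < 1e-6
```

Letting $p \to 0$ and $R_0 \to \infty$ in \texttt{J1\_exact} recovers the Smoluchowski rate $4\pi c_0 D R_1$ quoted above.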
If there is a constant bulk uptake (with concentration $c_0$), the diffusive uptake onto the trap ${\partial\Omega}_i$ is given by \begin{equation} \label{eq:tildeJ_ip} \overline{J}_i(p) = c_0 \int\limits_\Omega d\bm{y} \int\limits_{{\partial\Omega}_i} d\bm{s} \, \tilde{j}(\bm{s},p|\bm{y})\bigr|_{{\partial\Omega}_i} = \sqrt{4\pi} c_0 R_i^2 \overline{\mathbf{J}}_{00}^i , \end{equation} where $\overline{\mathbf{J}}$ is the vector with components $\overline{J}_{mn}^i$ given by Eq. (\ref{eq:tildeJ}) after an explicit integration of the elements of the vector $\mathbf{J}$ over the starting point $\bm{y}$. This is the amount of molecules (e.g., in moles) that have not disappeared in the bulk and have reached the trap $\Omega_i$. This quantity can also be interpreted as the Laplace transform of the time-dependent reaction rate $J_i(t)$ for the $i$-th trap, if the molecules were initially distributed uniformly in the domain (with concentration $c_0$). The Laplace-transformed total reaction rate is then obtained by summing these diffusive fluxes: \begin{equation} \label{eq:Jtotal} \tilde{J}(p) = \sum\limits_{i=0}^N \overline{J}_i(p) . \end{equation} For the exterior problem, the term $i=0$ corresponding to the outer boundary is removed. In this case, $\tilde{J}(p) \propto 1/p$ as $p\to 0$, and the proportionality coefficient is the steady-state reaction rate in the long-time limit. For instance, for the exterior problem for a single sphere, one easily gets from Eq. (\ref{eq:tildeJ}) that $\overline{\mathbf{J}}_{00}^i = \sqrt{4\pi} k_1(qR_1)/(qk_0(qR_1))$, from which \begin{equation} \label{eq:Jtilde_Smol} \tilde{J}_{\rm sm}(p) \equiv \overline{J}_1(p) = 4\pi c_0 D R_1 \biggl(\frac{1}{p} + \frac{R_1}{\sqrt{pD}}\biggr). \end{equation} This is the Laplace transform of the classical Smoluchowski rate on the perfectly reactive sphere \cite{Smoluchowski17}: \begin{equation} \label{eq:J_Smol} J_{\rm sm}(t) = 4\pi c_0 D R_1 \bigl(1 + R_1/\sqrt{\pi D t}\bigr) .
\end{equation} We illustrate the effect of diffusion screening between traps on the reaction rate by considering several configurations of 6 identical perfect traps of radius $\rho = 1/6$ located along the axes at distance $L$ from the origin (Fig. \ref{fig:Jp_6balls}(a)). Figure \ref{fig:Jp_6balls}(b) shows the Laplace-transformed reaction rate $\tilde{J}(p)$, normalized by the above Smoluchowski rate $\tilde{J}_{\rm sm}(p)$ for a single spherical trap of radius $R_1 = 6\rho = 1$. In the limit $p\to 0$ (no bulk reaction), the curves tend to constants, indicating the common behavior $\tilde{J}(p) \propto 1/p$. As $L$ increases, the traps become more distant and compete less for diffusing particles, so that the reaction rate increases. Moreover, the particular choice $\rho = R_1/6$ ensures that the ratio $\tilde{J}(0)/\tilde{J}_{\rm sm}(0)$ approaches $1$ as $L\to\infty$: 6 very distant balls of radius $\rho$ trap the particles as efficiently as a single trap of radius $6\rho$. This behavior is characteristic of diffusion-limited reactions and of the Smoluchowski rate, which is proportional to $R_1$ in the limit $p\to 0$. In contrast, the opposite limit $p\to\infty$ corresponds to the short-time behavior of the reaction rate. As particles diffuse on average over a distance $\sqrt{Dt}$, the 6 balls first trap the particles in their close vicinity and thus do not compete. As a consequence, the total reaction rate does not depend on the distance $L$ (if $L$ exceeds $\sqrt{Dt}$), as clearly seen in Fig. \ref{fig:Jp_6balls}. Moreover, in this limit, the second term dominates in Eq. (\ref{eq:Jtilde_Smol}), and the reaction rate is proportional to the squared radius, which explains the 6 times smaller limit of $\tilde{J}(p)/\tilde{J}_{\rm sm}(p)$ as $p\to\infty$.
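Getting from $\tilde{J}(p)$ back to the time domain requires a numerical inversion of the Laplace transform. The following minimal sketch (a self-contained illustration based on the fixed-Talbot contour of Abate and Valk\'o, not the implementation behind the figures; all function names are ours) inverts the Smoluchowski pair of Eqs. (\ref{eq:Jtilde_Smol}) and (\ref{eq:J_Smol}) with $c_0 = D = R_1 = 1$:

```python
import cmath
import math

def talbot_invert(F, t, M=32):
    # fixed-Talbot inversion of a Laplace transform F(p) at time t:
    # contour s(theta) = r*theta*(cot(theta) + i) with r = 2M/(5t)
    r = 2.0 * M / (5.0 * t)
    total = 0.5 * math.exp(r * t) * F(complex(r, 0.0)).real
    for k in range(1, M):
        theta = k * math.pi / M
        cot = math.cos(theta) / math.sin(theta)
        s = r * theta * complex(cot, 1.0)
        sigma = theta + (theta * cot - 1.0) * cot
        total += (cmath.exp(t * s) * F(s) * complex(1.0, sigma)).real
    return (r / M) * total

# Smoluchowski pair with c0 = D = R1 = 1 (branch point of sqrt(p) at p = 0)
F_sm = lambda p: 4.0 * math.pi * (1.0 / p + 1.0 / cmath.sqrt(p))
J_sm = lambda t: 4.0 * math.pi * (1.0 + 1.0 / math.sqrt(math.pi * t))

for t in (0.1, 1.0, 10.0):
    assert abs(talbot_invert(F_sm, t) - J_sm(t)) / J_sm(t) < 1e-6
```

Note that each sample point $s$ on the Talbot contour plays the role of $p$, so for the multi-trap problem the linear system has to be rebuilt and solved at every contour point before inversion.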
Figure \ref{fig:Jp_6balls}(c) illustrates these results in the time domain by showing the total flux $J(t)$, obtained via a numerical inversion of the Laplace transform of $\tilde{J}(p)$ and then normalized by $J_{\rm sm}(t)$ from Eq. (\ref{eq:J_Smol}). At long times (corresponding to $p\to 0$), the total flux reaches its steady-state limit. At short times (corresponding to $p\to \infty$), all curves reach the same level $1/6$, which is the ratio between the total surface area of 6 balls of radius $\rho = 1/6$ and the surface area of a single ball of radius $R_1 = 6\rho$. Finally, we note that the reaction rates in Fig. \ref{fig:Jp_6balls} were obtained by truncating matrices at the order $n_{\rm max} = 2$. As we dealt with matrices of size $6(2+1)^2 \times 6(2+1)^2 = 54\times 54$, all curves were obtained in less than a second on a standard laptop. Remarkably, the use of the lowest truncation order $n_{\rm max} = 0$ yielded very accurate results (shown by symbols) when the traps are well separated (i.e., $L \gg \rho$). But even for close traps ($L = 0.25$), the error was not significant. In our experience, this is a common situation for exterior problems. For interior problems, the quality of the monopole approximation is usually lower. \begin{figure} \begin{center} \includegraphics[width=88mm]{figure4.pdf} \end{center} \caption{ {\bf (a)} Four configurations of 6 perfect sinks of radius $\rho = 1/6$ located on the axes at distance $L$ from the origin, with $L = 0.25, 0.5, 1, 2$. {\bf (b)} Laplace-transformed total flux $\tilde{J}(p)$ onto 6 sinks, normalized by $\tilde{J}_{\rm sm}(p)$ from Eq. (\ref{eq:Jtilde_Smol}) for the unit sphere ($R = 1$). Solid lines show $\tilde{J}(p)$ computed via Eq. (\ref{eq:Jtotal}) with the truncation order $n_{\rm max} = 2$; symbols show the results obtained with $n_{\rm max} = 0$ (monopole approximation).
{\bf (c)} The corresponding total fluxes $J(t)$, obtained via the numerical inversion of the Laplace transform by the Talbot algorithm and normalized by $J_{\rm sm}(t)$ from Eq. (\ref{eq:J_Smol}) for the unit sphere. } \label{fig:Jp_6balls} \end{figure} \subsection{Advantages and limitations} \label{sec:advantages} As discussed in Sec. \ref{sec:intro}, various numerical methods have been applied to boundary value problems for the modified Helmholtz equation. In contrast to these conventional methods, the GMSV relies on the local spherical symmetries of perforated domains made of non-overlapping balls. In other words, the solution $w({\bm{x}};q)$ is decomposed on the basis functions $\psi_{mn}^{\pm}$, which are written in local spherical coordinates and thus respect {\it locally} the symmetry of the corresponding trap. As a consequence, such decompositions can often be truncated after a few terms and still yield accurate results. An important advantage of the method is that the dependence on ${\bm{x}}$ is analytical and explicit: once the coefficients are found numerically, the solution and its spatial derivatives can be easily calculated (and refined) at any set of points. Moreover, integrals of the solution over spherical boundaries or balls can be found analytically with the help of re-expansions (see Appendix \ref{sec:integral}). The meshless character of the GMSV makes it an alternative to the method of fundamental solutions (see \cite{Lin16} and references therein). Another important advantage of this method is the possibility of solving {\it exterior} problems (when $\Omega_0 = \mathbb{R}^3$), which are particularly difficult from the numerical point of view. In fact, a practical implementation of standard discretization schemes such as finite difference or finite element methods would require introducing an artificial outer boundary to deal with a finite volume.
An outer boundary is also needed in Monte Carlo simulations due to the transient character of three-dimensional Brownian motion. In contrast, the present approach does not require any outer boundary because the solution is constructed on appropriate basis functions that vanish at infinity. Exterior problems are actually simpler than interior ones, as there is no need to impose a boundary condition on the outer boundary ${\partial\Omega}_0$. In this light, the present approach is a rather unique numerical tool for dealing with various exterior boundary value problems. Finally, the GMSV opens access to such fundamental entities as the Green function $G({\bm{x}},\bm{y};q)$, the Laplace operator $\nabla^2$, and the Dirichlet-to-Neumann operator $\mathcal{M}_p$. For instance, the eigenbasis of the Laplace operator yields spectral decompositions of solutions of diffusion and wave equations. In turn, the eigenbasis of the Dirichlet-to-Neumann operator allows one to deal with inhomogeneous reactivity on traps \cite{Grebenkov19b}. The spectral properties of both operators in perforated domains will be investigated in a separate paper. Like any numerical technique, the proposed method has its limitations. For the truncation order $n_{\rm max}$, there are $(n_{\rm max}+1)^2$ basis functions $\psi_{mn}^{\pm}$ for each ball, so that the total number of unknown coefficients $A_{mn}^i$ for a domain with $N$ traps is $N(n_{\rm max}+1)^2$ for the exterior problem and $(N+1)(n_{\rm max}+1)^2$ for the interior problem. Their numerical computation involves the construction and inversion of the matrix $\mathbf{W}$ of size $N(n_{\rm max}+1)^2 \times N(n_{\rm max}+1)^2$. To speed up the construction of the matrix elements, we adapted recurrence relations for addition theorems from Ref. \cite{Chew92}, see Appendix \ref{sec:recurrence}.
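The bookkeeping of unknowns described above is elementary but easy to get wrong; a trivial helper (our own illustration, not part of the GMSV itself) makes it explicit:

```python
def num_unknowns(N, n_max, exterior=True):
    # (n_max + 1)^2 basis functions psi_{mn}^{+/-} per ball; the interior
    # problem carries an extra set of coefficients for the outer boundary
    balls = N if exterior else N + 1
    return balls * (n_max + 1) ** 2

assert num_unknowns(6, 2) == 54                  # the 54 x 54 system behind Fig. 4
assert num_unknowns(6, 0) == 6                   # monopole approximation
assert num_unknowns(6, 2, exterior=False) == 63  # same traps, interior problem
```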
However, the direct inversion of $\mathbf{W}$ becomes very time-consuming when the number of traps $N$ and/or the truncation order $n_{\rm max}$ grow. As some re-expansion formulas have a limited validity range (see Appendix \ref{sec:AW}), their truncations should include more basis functions when the balls are close to each other. In other words, computations for dense packings of balls need larger $n_{\rm max}$. In such cases, one has to resort to iterative methods (see discussion in Ref. \cite{Grebenkov19}). Significant numerical improvements of this approach can be achieved by using fast multipole methods \cite{Gumerov02,Gumerov05,Coifman93,Darve90,Epton95,Greengard97,Cheng06,Hesford10}. Note also that the size of the matrices is reduced to $N(n_{\rm max}+1) \times N(n_{\rm max}+1)$ in the case of axisymmetric problems by using special forms of re-expansion theorems \cite{Traytak08}. Another drawback of the method is that the parameter $q$ enters all matrix elements, which requires recomputing these matrices for each value of $q$. This is inconvenient for a numerical computation of the inverse Laplace transform of a solution of the modified Helmholtz equation in order to get back to the time domain (see discussion in Appendix \ref{sec:complex_q} and in Ref. \cite{Gordeliy09}). Nevertheless, one can still analyze the short-time and long-time asymptotic behaviors by considering the large-$q$ and small-$q$ limits, respectively. \subsection{Extensions} \label{sec:extensions} The GMSV can be further extended. For instance, we assumed that $a_i$ and $b_i$ are nonnegative constants. This assumption can be relaxed by considering $a_i$ and $b_i$ as continuous nonnegative functions on each sphere ${\partial\Omega}_i$. The overall method is still applicable, even though its practical implementation is more elaborate.
In fact, the matrix elements $W_{mn,kl}^{j,i}$ and $F_{mn}^j$ will involve scalar products of the form $(Y_{mn}, a_i Y_{kl})_{L_2({\partial\Omega}_i)}$ and $(Y_{mn}, b_i Y_{kl})_{L_2({\partial\Omega}_i)}$ that need to be computed. Even though such computations are rather standard (see, e.g., \cite{Grebenkov19b}), we do not discuss this general setting in detail. One can also consider other canonical domains (e.g., cylinders) for which re-expansion theorems are available \cite{Erofeenko}. Another direction for extensions consists in considering more sophisticated kinetics on the boundary. The Robin boundary condition employed in the present work describes irreversible binding/reaction on an impermeable boundary (e.g., of a solid catalyst). In many biological and technological applications, the boundary is a semi-permeable membrane that separates liquid and/or gaseous phases (e.g., intracellular and extracellular compartments). To describe diffusion in both phases, one can introduce two Green functions (satisfying the modified Helmholtz equation in each phase) and couple them via two exchange boundary conditions. Expanding the Green function over basis functions in each phase, one can establish the system of linear algebraic equations for their coefficients, in a very similar way as done in Sec. \ref{sec:general}; see Ref. \cite{Grebenkov19} for a detailed implementation in the case of the Laplace equation. Yet another option is to allow for reversible binding to the balls. In the Laplace domain, reversible binding can be implemented by replacing the constant reactivity by an effective $p$-dependent reactivity \cite{Agmon90,Tachiya80,Agmon84,Kim99,Prustel13,Grebenkov19k}. In other words, the coefficients $a_i$ become $p$-dependent, but the whole method remains applicable without any change. Note that each trap can be characterized by its own dissociation rate.
This extension allows one to investigate the role of immobile buffering molecules in signalling processes, DNA search processes, and gene regulation, as well as many other chemical reactions (see \cite{Li09,Benichou09,Bressloff13,Lange15} and references therein). \section{Conclusion} \label{sec:conclusion} The GMSV has been broadly employed for solving boundary value problems for the Laplace and ordinary Helmholtz equations in different disciplines, ranging from electrostatics to hydrodynamics and scattering theory. Quite surprisingly, applications of this powerful method to the modified Helmholtz equation, which plays a crucial role in describing diffusion-reaction processes in chemical physics, are much less developed. In the present paper, we described a general analytical and numerical framework for solving such problems in perforated domains made of non-overlapping balls. In particular, we provided a semi-analytical solution $w({\bm{x}};q)$, in which the dependence on the point ${\bm{x}}$ enters {\it analytically} through explicitly known basis functions $\psi_{mn}^{\pm}$, while their coefficients are obtained {\it numerically} by truncating and solving the established system of linear algebraic equations. The high numerical efficiency of this approach relies on exploiting the local symmetries of the spherical traps and using the most natural basis functions. We applied this method to derive a semi-analytical representation of the Green function that determines various characteristics of non-stationary diffusion among partially reactive traps, such as the probability flux density, the reaction rate, the survival probability, and the associated probability density of the reaction time. We also showed how this method can be adapted to obtain the eigenvalues and eigenfunctions of the Laplace operator and of the Dirichlet-to-Neumann operator.
These operators play an important role in mathematical physics and have been applied in a variety of disciplines, including chemical physics. We described several applications of this technique, such as the first-passage properties and stationary diffusion of mortal particles. In particular, we checked the conjecture that reflecting obstacles cannot speed up the exit from the center of a ball. Interestingly, the presence of even large obstacles had a minor effect on the distribution of the first-exit time. We also discussed how the mutual distance between absorbing traps affects the reaction rate. This discussion brings complementary insights into the role of diffusion screening (or interaction) on the reaction rate, which was thoroughly investigated in the steady-state limit ($t \to \infty$) but remains less known in the time-dependent regime. More generally, the developed framework provides a solid theoretical ground and an efficient numerical tool for studying diffusion-controlled reactions in various media that can be modeled by spherical traps and obstacles. \begin{acknowledgments} The author thanks Prof. S. D. Traytak for fruitful discussions. \end{acknowledgments} \section*{Data Availability Statement} The data that support the findings of this study are available from the corresponding author upon reasonable request.
\section*{Introduction} To obtain information on a group~$G$, a standard approach consists in considering subgroups and studying how they behave in the group. In particular, one often considers the centralizer~$Z_G(H)$ of a subgroup~$H$ in~$G$, which is defined by $$Z_G(H) = \{g\in G\mid gh = hg \textrm{ for all } h\in H\}.$$ This general approach naturally extends to other contexts. This is the case in the study of noncommutative algebras, where subgroups are replaced by subalgebras. Clearly, for an algebra~$R$ and a subalgebra~$H$, the centralizer~$Z_R(H)$ is also a subalgebra. In this framework, the subalgebra~$Z_R(Z_R(H))$, called the \emph{double centralizer} of~$H$, has been considered \cite{Far,Sim}. For instance, a classical result \cite{Far} is the so-called \emph{Centralizer Theorem}, which claims that for a finite dimensional central simple algebra~$R$ over a field~$k$ and for a simple subalgebra~$H$, one has~$Z_R(Z_R(H)) = H$. Various generalizations have been obtained, leading to applications \cite{Tan,ChLe}. In view of this result in the algebra framework, and coming back to the group theory framework, one is naturally led to consider the double-centralizer subgroup~$Z_G(Z_G(H))$ of a subgroup~$H$ in a group~$G$ and to address the question of a similar Centralizer Theorem. Let us denote by~$\DZ_G(H)$ the double centralizer of~$H$. Obviously, when the group~$G$ has a center~$Z(G)$ that is not contained in the subgroup~$H$, the equality~$\DZ_G(H) = H$ cannot hold. However, one may wonder whether the subgroup~$\DZ_G(H)$ is generated by~$Z(G)$ and~$H$. More precisely, if~$Z(G)\cap H$ is trivial, one may wonder whether~$\DZ_G(H) = Z(G)\times H$. When the center of~$G$ is trivial, we recover the property of the Centralizer Theorem, namely~$\DZ_G(H) = H$. As far as we know, the first Centralizer Theorem in the group theory framework has been obtained in \cite{GKLT} by considering the braid group on~$n$ strands and its standard parabolic subgroups.
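For a finite group, the double centralizer can be computed by brute force directly from the definitions. The following sketch (an illustration only, with the symmetric group $S_3$ in the role of~$G$; all function names are ours) checks the equality $\DZ_G(H)=H$ for a group with trivial center:

```python
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p(q(i)) for permutations stored as tuples
    return tuple(p[q[i]] for i in range(len(q)))

def centralizer(G, H):
    return {g for g in G if all(compose(g, h) == compose(h, g) for h in H)}

G = set(permutations(range(3)))     # the symmetric group S_3
H = {(0, 1, 2), (1, 0, 2)}          # subgroup generated by the transposition (0 1)
Z_H = centralizer(G, H)             # Z_G(H)
DZ_H = centralizer(G, Z_H)          # double centralizer Z_G(Z_G(H))

assert centralizer(G, G) == {(0, 1, 2)}   # Z(S_3) is trivial
assert DZ_H == H                          # so the double centralizer of H is H itself
```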
Our objective here is to address the more general case of an Artin-Tits group~$G$ and a standard parabolic subgroup~$H$. \emph{Artin-Tits} groups are those groups which possess a presentation associated with a Coxeter matrix. For a finite set~$S$, a Coxeter matrix on~$S$ is a symmetric matrix~$(m_{s,t})_{s,t\in S}$ whose entries are either a positive integer or equal to~$\infty$, with~$m_{s,t} = 1$ if and only if~$s = t$. An Artin-Tits group associated with such a matrix is defined by the presentation \begin{equation}\label{presarttitsgrps} \left\langle S\mid\underbrace{sts\ldots}_{m_{s,t}\ terms} = \underbrace{tst\ldots}_{m_{s,t}\ terms}~;\ \forall s,t\in S, s\not= t\ ; m_{s,t}\neq\infty \right\rangle. \end{equation} For instance, if we consider~$S = \{s_1,\ldots, s_n\}$ with~$m_{s_i,s_j}= 3$ for~$|i-j| =1$ and~$m_{s_i,s_j} = 2$ otherwise, we obtain the classical presentation of the braid group~$B_{n+1}$ on~$n+1$ strings considered in \cite{GKLT}. A \emph{standard parabolic subgroup} is a subgroup generated by a subset~$X$ of~$S$. It turns out that such a subgroup is also an Artin-Tits group in a natural way (see Proposition \ref{ThVDL} below). Artin-Tits groups are badly understood and most articles on the subject focus on particular subfamilies of Artin-Tits groups, such as Artin-Tits groups of spherical type, of FC type, of large type, or of 2-dimensional type. Here again, we apply this strategy. We first consider the family of spherical type Artin-Tits groups, whose seminal examples are the braid groups. We refer to the next sections for definitions. We prove: \begin{thm} \label{theointro1} Assume~$A_S$ is a spherical type irreducible Artin-Tits group with~$S$ for standard generating set. Let~$X$ be strictly included in~$S$ and~$A_X$ be the standard parabolic subgroup of~$A_S$ generated by~$X$. Denote by~$\Delta$ the Garside element of~$A_S$.
\begin{enumerate} \item If~$\D$ lies in~$\DZ_{A_S}(A_X)$ but not in~$Z(A_S)$, then $$\DZ_{A_S}(A_X)=A_X\times QZ(A_S)$$ \item If not, $$\DZ_{A_S}(A_X)=A_X\times Z(A_S).$$ \end{enumerate} \end{thm} In the above result we do not consider the case $X = S$. Indeed, for any group~$G$ one has~$\DZ_{G}(G) = G$. In the present article, we also consider Artin-Tits groups that are not of spherical type. We conjecture that \begin{conj} \label{conjintro} Assume~$A_S$ is an irreducible Artin-Tits group. Let~$A_X$ be a standard parabolic subgroup of~$A_S$ generated by a subset~$X$ of $S$. Assume~$A_X$ is irreducible. Let $A_T$ be the smallest standard parabolic subgroup of $A_S$ that contains $Z_{A_S}(A_X)$. \begin{enumerate} \item Assume~$A_X$ is not of spherical type. Then~$\DZ_{A_S}(A_X)=Z_{A_S}(A_{T})$. \item Assume~$A_X$ is of spherical type. \begin{enumerate} \item If $A_T$ is of spherical type, then $$\DZ_{A_S}(A_X)=\DZ_{A_T}(A_X).$$ \item If~$A_T$ is not of spherical type, then $$\DZ_{A_S}(A_X)= A_X.$$ \end{enumerate} \end{enumerate} \end{conj} The centralizer of a standard parabolic subgroup is well-understood in general. In particular, when Conjectures~1, 2, and 3 of \cite{God4} hold, for any given $X$, one can read off the Coxeter graph $\Gamma_S$ whether or not the above group $A_T$ is of spherical type. This is the case for the Artin-Tits groups considered in Theorem~\ref{theointro2}. The conjecture is supported by the following result: \begin{thm} \label{theointro2} \begin{enumerate} \item Conjecture \ref{conjintro} holds for irreducible Artin-Tits groups of FC type. \item Conjecture \ref{conjintro} holds for Artin-Tits groups of 2-dimensional type. \item Conjecture \ref{conjintro} holds for Artin-Tits groups of large type. \end{enumerate} \end{thm} The reader may note that in Theorem \ref{theointro1} there is no restriction on~$A_X$, whereas in Conjecture~\ref{conjintro} we assume that~$A_X$ is irreducible.
Indeed, we can extend the above conjecture to the case where $X$ is not irreducible (see Conjecture~\ref{conjintrogener}) and prove that this general conjecture holds for the same Artin-Tits groups as those considered in Theorem~\ref{theointro2}. However, the statement is more technical. This is why we postpone it and restrict to the irreducible case in the introduction. The remainder of this article is organized as follows. In Section 2, we introduce the necessary definitions and preliminaries. Section 3 is devoted to Artin-Tits groups of spherical type. Finally, in Section 4, we turn to the non-spherical type cases. \section{Preliminaries} In this section we introduce the definitions and results on Artin-Tits groups that we shall need when proving our theorems. Throughout this section, we consider an Artin-Tits group~$A_S$ generated by a set~$S$ and defined by Presentation~(\ref{presarttitsgrps}) given in the introduction. \subsection{Parabolic subgroups} As explained, the subgroups that we consider in this article are the so-called \emph{standard parabolic subgroups}, that is, those subgroups that are generated by a subset of~$S$. One of the main reasons why these subgroups are considered is that they are themselves Artin-Tits groups: \begin{prp}\cite{Vdl}\label{ThVDL} Let~$X$ be a subset of~$S$. Consider the Artin-Tits group~$A_X$ associated with the Coxeter matrix~$(m_{st})_{s,t\in X}$. Then \begin{enumerate} \item the canonical morphism from~$A_X$ to~$A_S$ that sends~$x$ to~$x$ is injective. In particular,~$A_X$ is isomorphic to, and will be identified with, the subgroup of~$A_S$ generated by~$X$. \item if~$Y$ is another subset of~$S$, then we have~$A_X\cap A_Y=A_{X\cap Y}$. \end{enumerate} \end{prp} We have already defined the notion of the centralizer~$Z_{A_S}(A_X)$ of a subgroup~$A_X$. We recall that we denote the center~$Z_{A_S}(A_S)$ of~$A_S$ by~$Z(A_S)$.
More generally, for a subset~$X$ of~$S$, we denote by~$Z(A_X)$ the center of the parabolic subgroup~$A_X$. Along the way, we will also need the notions of the normalizer of a subgroup and of the quasi-centralizer of a parabolic subgroup. We recall their definitions here. \begin{df} Let~$X$ be a subset of~$S$ and $A_X$ be the associated standard parabolic subgroup. \begin{enumerate} \item The \emph{normalizer} of~$A_X$ in~$A_S$, denoted by~$N_{A_S}(A_X)$, is the subgroup of~$A_S$ defined by $$N_{A_S}(A_X) = \{g\in A_S\mid g^{-1}A_Xg = A_X\}$$ \item The \emph{quasi-centralizer} of~$A_X$ in~$A_S$, denoted by~$QZ_{A_S}(A_X)$, is the subgroup of~$A_S$ defined by $$QZ_{A_S}(A_X) = \{g\in A_S\mid g^{-1}Xg = X\}$$ \end{enumerate} \end{df} In the sequel, we will write~$QZ(A_S)$ for~$QZ_{A_S}(A_S)$. There is an obvious sequence of inclusions between these subgroups: $$Z_{A_S}(A_X)\subseteq QZ_{A_S}(A_X) \subseteq N_{A_S}(A_X).$$ But we can say more: \begin{thm}\cite{God,God_pjm,God4}\label{thm57} Let~$A_S$ be an Artin-Tits group, and~$X$ be a subset of~$S$. If~$A_S$ is of spherical type or of FC type or of 2-dimensional type, then $$N_{A_S}(A_X)=QZ_{A_S}(A_X)\cdot A_X.$$ \end{thm} This result is one of the key arguments in our proof of Theorems~\ref{theointro1} and~\ref{theointro2}. Actually, it is conjectured in~\cite{God3} that this property holds for any Artin-Tits group. \subsection{Families of Artin-Tits groups} Our objective now is to introduce the various families of Artin-Tits groups that we considered in the introduction. \subsubsection{Irreducible Artin-Tits groups} First, we say that an Artin-Tits group is \emph{irreducible} when it is not the direct product of two of its standard parabolic subgroups. Otherwise we say that it is \emph{reducible}. Associated with the Coxeter matrix~$(m_{s,t})_{s,t\in S}$ is the Coxeter graph, which is the simple labelled graph with~$S$ as vertex set defined as follows.
There is an edge between two distinct vertices~$s$ and~$t$ when~$m_{s,t}$ is not~$2$. The edge has label~$m_{s,t}$ when~$m_{s,t}$ is not~$3$. Therefore, the group~$A_S$ is irreducible if and only if the Coxeter graph~$\Gamma_S$ is connected. For instance, the braid group on~$n+1$ strings is irreducible, whereas the free abelian group on two generators is not. \subsubsection{Spherical type Artin-Tits groups} \label{secATSP}Among Artin-Tits groups, those of spherical type are the most studied and the best understood. From Presentation~(\ref{presarttitsgrps}), we obtain the presentation of the associated Coxeter group by adding the relations~$s^2 = 1$ for~$s$ in~$S$. The Artin-Tits group is said to be of spherical type when this associated Coxeter group is finite. For instance, braid groups are of spherical type, as their associated Coxeter groups are the symmetric groups. Actually, there is only a finite list of connected Coxeter graphs whose associated (irreducible) Artin-Tits groups are of spherical type (see \cite{Co},\cite{BrS}).
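As a concrete sanity check of the Coxeter quotient (illustrative code only, not part of the theory; the permutation conventions are ours), one can verify that the adjacent transpositions of the symmetric group satisfy the relations of Presentation~(\ref{presarttitsgrps}) together with $s^2=1$:

```python
def compose(p, q):
    # (p o q)(i) = p(q(i)) for permutations stored as tuples
    return tuple(p[q[i]] for i in range(len(q)))

def transposition(i, n):
    # adjacent transposition s_i swapping positions i and i+1 in S_n
    p = list(range(n))
    p[i], p[i + 1] = p[i + 1], p[i]
    return tuple(p)

n = 5
e = tuple(range(n))
s = [transposition(i, n) for i in range(n - 1)]
for i in range(n - 1):
    assert compose(s[i], s[i]) == e       # s^2 = 1 in the Coxeter quotient
    for j in range(n - 1):
        if abs(i - j) == 1:               # m_{s,t} = 3: the braid relation sts = tst
            assert compose(s[i], compose(s[j], s[i])) == compose(s[j], compose(s[i], s[j]))
        elif i != j:                      # m_{s,t} = 2: commutation
            assert compose(s[i], s[j]) == compose(s[j], s[i])
```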
\begin{center} \begin{figure}[!h] \begin{tikzpicture}[decoration={brace}][scale=2] \draw[very thick,fill=black] (0,1) circle (.1cm); \draw[very thick,fill=black] (1.3,1) circle (.1cm); \draw[very thick,fill=black] (2.6,1) circle (.1cm); \draw[very thick,fill=black] (3.9,1) circle (.1cm); \draw[very thick] (0,1) -- +(3.9,0); \draw (0.75,1.2) node {$4$}; \draw[very thick,fill=black] (5.5,1) circle (.1cm); \draw[very thick,fill=black] (6.8,1) circle (.1cm); \draw[very thick,fill=black] (8.1,1) circle (.1cm); \draw[very thick,fill=black] (9.4,1) circle (.1cm); \draw[very thick,fill=black] (10.7,1) circle (.1cm); \draw[very thick,fill=black] (8.1,2) circle (.1cm); \draw[very thick] (8.1,1) -- +(0,1); \draw[very thick] (5.5,1) -- +(5.2,0); \end{tikzpicture} \caption{Artin-Tits groups of spherical type~$B(4)$ and~$E(6)$} \end{figure} \end{center} \subsubsection{FC type Artin-Tits groups} These Artin-Tits groups are built on those of spherical type. An Artin-Tits group is of FC type when all its standard parabolic subgroups whose Coxeter graphs have no edge labelled with~$\infty$ are of spherical type. In particular, all spherical type Artin-Tits groups are of FC type. Alternatively, the family of FC type Artin-Tits groups can be defined as the smallest family of groups that contains spherical type Artin-Tits groups and that is closed under amalgamation over a standard parabolic subgroup. For instance, the Artin-Tits group associated with the following Coxeter graph is of FC type.
\begin{center} \begin{figure}[!h] \begin{tikzpicture}[decoration={brace}][scale=2] \draw[very thick,fill=black] (2,1) circle (.1cm); \draw[very thick,fill=black] (3.3,1) circle (.1cm); \draw[very thick,fill=black] (4.6,1) circle (.1cm); \draw[very thick,fill=black] (5.9,1) circle (.1cm); \draw[very thick,fill=black] (2,0) circle (.1cm); \draw[very thick,fill=black] (5.9,0) circle (.1cm); \draw[very thick] (2,0) -- +(0,1); \draw[very thick] (2,0) -- +(3.9,0); \draw[very thick] (5.9,0) -- +(0,1); \draw[very thick] (2,1) -- +(3.9,0); \draw (2.65,1.2) node {$4$}; \draw (5.25,1.2) node {$4$}; \draw (3.95,1.2) node {$\infty$}; \end{tikzpicture} \caption{An FC type Artin-Tits group\label{figure2}} \end{figure} \end{center} Indeed, the Artin-Tits group in Figure~\ref{figure2} is the amalgamation of two spherical type Artin-Tits groups of type~$B(5)$ (see \cite{Bou}) over a common standard parabolic subgroup, which is of type~$A(4)$, that is, a braid group~$B_5$. \subsubsection{$2$-dimensional type Artin-Tits groups} An Artin-Tits group is of 2-dimensional type when no standard parabolic subgroup generated by three or more generators is of spherical type. These groups have been considered, for instance, in \cite{Cha2,Che,God4}.
\begin{center} \begin{figure}[!h] \begin{tikzpicture}[decoration={brace}] \draw[very thick,fill=black] (3.5,1) circle (.1cm); \draw[very thick,fill=black] (3.5,0) circle (.1cm); \draw[very thick,fill=black] (4.8,0) circle (.1cm); \draw[very thick,fill=black] (4.8,1) circle (.1cm); \draw[very thick] (3.5,0) -- +(1.3,0); \draw[very thick] (3.5,1) -- +(1.3,0); \draw[very thick] (3.5,0) -- +(0,1); \draw[very thick] (4.8,0) -- +(0,1); \draw (4.15,1.2) node {$6$}; \draw (5,.5) node {$7$}; \draw (4.15,.3) node {$6$}; \end{tikzpicture} \caption{A 2-dimensional Artin-Tits group \label{AT2D}} \end{figure} \end{center} \subsubsection{Large type Artin-Tits groups} Contained in the family of~$2$-dimensional Artin-Tits groups is the family of Artin-Tits groups of large type. An Artin-Tits group is of large type when no~$m_{s,t}$ is equal to~$2$. Some~$2$-dimensional Artin-Tits groups are not of large type (see Figure~\ref{AT2D}). \begin{center} \begin{figure}[!h] \begin{tikzpicture}[decoration={brace}][scale=2] \draw[very thick,fill=black] (2,1) circle (.1cm); \draw[very thick,fill=black] (3.3,1) circle (.1cm); \draw[very thick,fill=black] (2.65,0) circle (.1cm); \draw[very thick] (2,1) -- +(1.3,0); \draw[very thick] (2,1) -- +(0.65,-1); \draw[very thick] (3.3,1) -- +(-0.65,-1); \draw[very thick,fill=black] (5,1) circle (.1cm); \draw[very thick,fill=black] (6.3,1) circle (.1cm); \draw[very thick] (5,1) -- +(1.3,0); \draw (5.65,1.2) node {$5$}; \end{tikzpicture} \caption{Artin-Tits groups of large type~$\tilde{A}(2)$ and~$I(5)$. \label{ATLT}} \end{figure} \end{center} \subsection{Artin-Tits monoids} As explained above, one of the main ingredients in our proof is Theorem \ref{thm57}. Another one is the positive monoid of an Artin-Tits group, which allows one to apply Garside theory. Here, we introduce only the results that we will need and refer to \cite{DDGKM} for more details on this theory.
We recall that we fix an Artin-Tits group~$A_S$ generated by a set~$S$ and defined by Presentation~(\ref{presarttitsgrps}). \begin{df} The Artin-Tits monoid~$A_S^{+}$ associated with~$A_S$ is the submonoid of~$A_S$ generated by~$S$. An element of~$A_S$ that belongs to~$A_S^+$ is called a positive element. Its inverse is called a negative element. \end{df} We gather in the following proposition several properties of Artin-Tits monoids that we will need in the sequel. \begin{prp}\label{reststdAT} \begin{enumerate} \item \cite{Par4} Considered as a monoid presentation, Presentation~(\ref{presarttitsgrps}) is a presentation of the monoid~$A_S^+$. \item When~$A_S$ is of spherical type, then \begin{enumerate} \item \cite{BrS,Cha1,DDGKM} the monoid~$A^+_S$ is a Garside monoid. In particular, every element~$g$ in~$A_S$ can be decomposed in a unique way as~$g=a^{-1}b$, with~$a,b$ positive, so that~$a$ and~$b$ have no nontrivial common left-divisors in~$A_S^+$. Furthermore, if~$c\in A_S^+$ is such that~$cg\in A_S^+$, then~$a$ right-divides~$c$ in~$A^+_S$. \item \cite{BrS,Del} There is a positive element~$\Delta$ that belongs to~$QZ(A_S)$ such that every element~$g$ in~$A_S$ can be decomposed as~$g=a\Delta^{-n}$ with~$a$ positive and~$n\geq 0$. Moreover,~$\Delta^2$ belongs to~$Z(A_S)$. \item \cite{BrS,Del} When, moreover,~$A_S$ is irreducible, then~$QZ(A_S)$ is an infinite cyclic group generated by~$\Delta$. The group~$Z(A_S)$ is infinite cyclic, generated by~$\Delta$ or by~$\Delta^2$. \end{enumerate} \end{enumerate} \end{prp} The decomposition~$g=a^{-1}b$ in Point~(ii)(a) is called the {\em Charney's (left) orthogonal splitting} of~$g$. The {\em Charney's right orthogonal splitting}~$g=ab^{-1}$ is defined in a similar way. In the sequel, we denote by~$\tau:S\to S$ the permutation of~$S$ defined by~$\D s=\tau(s)\D$ for all~$s$ in~$S$. Since~$\D^2$ is central,~$\tau$ is either the identity or an involution. In particular, we also have~$s\D=\D\tau(s)$ for all~$s$ in~$S$.
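The relation $\D s=\tau(s)\D$ can be illustrated in the Coxeter quotient, where $\D$ maps to the longest element $w_0$ and $\D^2$ to the identity. A quick check in $S_4$ (our own illustration; generators are indexed from $0$, so that $\tau$ sends $s_i$ to $s_{n-2-i}$):

```python
def compose(p, q):
    # (p o q)(i) = p(q(i)) for permutations stored as tuples
    return tuple(p[q[i]] for i in range(len(q)))

def transposition(i, n):
    # adjacent transposition s_i swapping positions i and i+1 in S_n
    p = list(range(n))
    p[i], p[i + 1] = p[i + 1], p[i]
    return tuple(p)

n = 4
w0 = tuple(reversed(range(n)))               # longest element, image of Delta
s = [transposition(i, n) for i in range(n - 1)]

assert compose(w0, w0) == tuple(range(n))    # image of the central element Delta^2
for i in range(n - 1):
    # Delta s_i = tau(s_i) Delta descends to w0 s_i = s_{n-2-i} w0
    assert compose(w0, s[i]) == compose(s[n - 2 - i], w0)
```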
Moreover, for~$a,b$ in~$A_S^+$, we write~$a\preceq b$ if~$a$ left-divides~$b$ in~$A_S^+$, that is, if there exists~$c$ in~$A_S^+$ so that~$b = ac$. Similarly, we write~$b\succeq a$ if~$a$ right-divides~$b$ in~$A_S^+$. \section{Spherical type Artin-Tits groups} In this section we focus on spherical type Artin-Tits groups and prove Theorem~\ref{theointro1}. \subsection{Artin-Tits groups of type~$E(6)$ and~$D(2k+1)$} In Theorem~\ref{theointro1} the description of~$\DZ_{A_S}(A_X)$ depends on a technical condition. Here we investigate this condition and characterize the irreducible Coxeter graphs for which it is satisfied. \begin{prp}\label{condtech} Assume~$A_S$ is an irreducible spherical type Artin-Tits group. Let~$X$ be a proper subset of~$S$. Then,~$\D$ does not belong to~$Z(A_S)$ but lies in~$\DZ_{A_S}(A_X)$ if and only if: \begin{enumerate} \item[(a)] either~$\Gamma_S$ is of type~$D(2k+1)$ and~$X\supseteq\{s_2,s_{2'},s_3\}$ (see Figure~\ref{figuretypeD}). \item[(b)] or~$\Gamma_S$ is of type~$E_6$ and~$X=\{s_2,\dots,s_6\}$ (see Figure~\ref{figuretypeE6}).
\end{enumerate} \begin{center} \begin{figure}[!h] \begin{tikzpicture}[scale=0.75] \draw[very thick,fill=black] (3.5,-3.5) circle (.1cm); \draw[very thick,fill=black] (8,-3.5) circle (.1cm); \draw[very thick,fill=black] (2.4,-2.8) circle (.1cm); \draw[very thick,fill=black] (2.4,-4.2) circle (.1cm); \draw[very thick] (3.5,-3.5) -- +(1,0); \draw[very thick] (2.4,-2.8) -- +(1.1,-.7); \draw[very thick] (2.4,-4.2) -- +(1.1,.7); \draw[dotted,very thick] (4.5,-3.5) -- +(2.5,0); \draw[very thick] (7,-3.5) -- +(1,0); \draw (2,-2.8) node {$s_2$}; \draw (2,-4.2) node {$s_{2'}$}; \draw (3.5,-3.9) node {$s_3$}; \draw (8,-3.9) node {$s_{2k+1}$}; \draw (4.15,-5.1) node {$X$}; \node at (3.8,-4.8) [rotate=315] {$\subseteq$}; \draw (2.65,-3.5) ellipse (1.5cm and 1.2cm); \end{tikzpicture} \caption{$\Gamma_S$ of type~$D(2k+1)$ and~$X\supseteq\{s_2,s_{2'},s_3\}$ \label{figuretypeD}} \end{figure} \begin{figure}[!h] \begin{tikzpicture}[scale=0.75] \draw[very thick,fill=black] (2,-6.7) circle (.1cm); \draw[very thick,fill=black] (3.5,-6.7) circle (.1cm); \draw[very thick,fill=black] (5,-6.7) circle (.1cm); \draw[very thick,fill=black] (5,-5.2) circle (.1cm); \draw[very thick,fill=black] (6.5,-6.7) circle (.1cm); \draw[very thick,fill=black] (8,-6.7) circle (.1cm); \draw[very thick] (2,-6.7) -- +(6,0); \draw[very thick] (5,-5.2) -- +(0,-1.5); \draw (2,-7.1) node {$s_2$}; \draw (3.5,-7.1) node {$s_3$}; \draw (5,-7.1) node {$s_4$}; \draw (6.5,-7.1) node {$s_5$}; \draw (8,-7.1) node {$s_6$}; \draw (5,-4.8) node {$s_1$}; \draw (1.5,-5.5) node {$X$}; \draw (5,-6.9) ellipse (3.5cm and .9cm); \draw[-stealth] (1.7,-5.7) -- +(.5,-.5); \end{tikzpicture} \caption{$\Gamma_S$ of type~$E_6$ and~$X=\{s_2,\dots,s_6\}$ \label{figuretypeE6}} \end{figure} \end{center} \end{prp} When proving Proposition~\ref{condtech}, we will need the following lemma. \begin{lm}\label{lm518} Assume~$A_S$ is an irreducible spherical type Artin-Tits group. Let~$X$ be a proper subset of~$S$.
Assume that the permutation~$\tau$ is not the identity on~$S$ and that~$\D$ lies in~$\DZ_{A_S}(A_X)$. Then: \begin{enumerate} \item~$\tau$ is the identity on~$S\setminus X$, that is~$\D$ lies in~$Z_{A_S}(A_{S\setminus X})$. \item~$\tau$ is not the identity on~$X$, that is~$\D$ does not lie in~$Z_{A_S}(A_X)$. \item~$\D$ stabilizes the indecomposable components of~$X$. \end{enumerate} \end{lm} \begin{proof} (i) Let~$s\in S\setminus X$. Set~$Y=X\cup\{s\}$. The elements~$\D_X^2,\D_Y^2$ lie in~$Z(A_X)$ and~$Z(A_Y)$, respectively. So, they both belong to~$Z_{A_S}(A_X)$ and, therefore, commute with~$\D$. Since~$\D\D_X=\D_{\tau(X)}\D$ and~$\D\D_Y=\D_{\tau(Y)}\D$, we deduce that~$\tau(X) = X$ and~$\tau(Y) = Y$. Using that~$Y=X\cup\{s\}$, we conclude that~$\D s=s\D$. Thus,~$\D$ lies in~$Z_{A_S}(A_{S\setminus X})$ and (i) holds. Since~$\tau$ is not the identity on~$S$, (i) implies (ii). Finally, let~$X_1$ be an indecomposable component of~$X$. We have~$\D\D_{X_1}=\D_{\tau(X_1)}\D$. Moreover,~$\D_{X_1}^2$ lies in~$Z(A_X)$ and, therefore, in~$Z_{A_S}(A_X)$. Hence~$\D\D_{X_1}^2=\D_{X_1}^2\D$ and~$X_1 = \tau(X_1)$, that is~$\D X_1=X_1\D$. \end{proof} \begin{proof}[Proof of Proposition \ref{condtech}] Assume the element~$\D$ does not belong to the center~$Z(A_S)$ but lies in~$\DZ_{A_S}(A_X)$. In particular, the permutation~$\tau$ is not the identity map on~$S$, so assertions (i), (ii) and (iii) in Lemma~\ref{lm518} hold. Using the classification of irreducible Artin-Tits groups~\cite{Bou} and well-known results on~$\D$ \cite{BrS,Del}, we deduce that the type of~$\Gamma_S$ is one of the following: \begin{enumerate} \item[$\bullet$]~$A(k)$ with~$k\geq 2$, \item[$\bullet$]~$D(2k+1)$ with~$k\geq 1$, \item[$\bullet$]~$E_6$, or \item[$\bullet$]~$I_2(2p+1)$ with~$p\geq 1$. \end{enumerate} By Lemma \ref{lm518}(i), the permutation~$\tau$ fixes each element of~$S\setminus X$.
This imposes that~$\Gamma_S$ cannot be of type~$I_2(2p+1)$, as~$X$ is proper in~$S$ and~$\Delta$ permutes the two elements of~$S$. If~$\Gamma_S$ is of type~$A(k)$ with~$k\geq 2$ (so~$A_S$ is the braid group~$B_{k+1}$), then~$\tau$ sends each~$s_i$ to~$s_{k+1-i}$; since~$\tau$ fixes the elements of the nonempty set~$S\setminus X$, the integer~$k$ is odd and the unique element of~$S$ fixed by~$\tau$ is~$s_{\frac{k+1}{2}}$. This imposes~$S\setminus X=\{s_{\frac{k+1}{2}}\}$, and~$\D$ does not stabilize the two indecomposable components {$\{s_1,\dots,s_{\frac{k-1}{2}}\}$ and~$\{s_{\frac{k+3}{2}},\dots,s_k\}$} of~$X$, a contradiction with Lemma \ref{lm518}(iii). So~$\Gamma_S$ is not of type~$A(k)$. If~$\Gamma_S$ is of type~$D(2k+1)$, then~$\tau$ switches~$s_2$ and~$s_{2'}$. Therefore,~$s_2$ and $s_{2'}$ have to lie in~$X$. Moreover, if~$s_3$ did not lie in~$X$, then~$s_2$ would commute with every element of~$X$ and would therefore belong to~$Z_{A_S}(A_X)$; this is impossible, since~$s_2$ does not commute with~$\Delta$, which lies in~$\DZ_{A_S}(A_X)$. This imposes that~$s_3$ belongs to~$X$. Hence,~$\{s_2,s_{2'},s_3\}$ is included in~$X$ and we have case (a) of the proposition. Similarly, if~$\Gamma_S$ is of type~$E_6$, the elements~$s_2,s_3,s_5,s_6$ are not fixed by~$\tau$, so they have to belong to~$X$. Applying Lemma \ref{lm518}(iii), we deduce that~$s_4$ has to lie in~$X$ too. Since~$X$ is not~$S$, it is equal to~$\{s_2,\dots,s_6\}$ and we have case (b) of the proposition. Conversely, in cases (a) and (b), one can verify that~$\D$ does not belong to~$Z(A_S)$ but lies in~$\DZ_{A_S}(A_X)$. \end{proof} \subsection{Ribbons}\label{sectribbon} The notion of ribbon, introduced in \cite{FRZ} for the case of braid groups and then generalized in \cite{Par1,Got}, will be crucial to us in order to calculate the double-centralizer of a parabolic subgroup. Here we recall its definition and gather some properties that we shall need. Throughout this section, we only consider spherical type Artin-Tits groups. We refer to the above references and to \cite{DDGKM} for more details.
Given an Artin-Tits presentation~(\ref{presarttitsgrps}), let us first introduce two notations: for a subset~$X$ of~$S$, we set $$X^\bot=\{s\in S\setminus X\mid\forall t\in X,m_{ts}=2\}$$ and $$\partial X=\{s\in S\setminus X\mid\exists t\in X,m_{ts}>2\}.$$ \begin{center} \begin{figure}[!h] \begin{tikzpicture}[scale=0.75] \draw[very thick,fill=black] (2,-13.1) circle (.1cm); \draw[very thick,fill=black] (3.5,-13.1) circle (.1cm); \draw[very thick,fill=black] (5,-13.1) circle (.1cm); \draw[very thick,fill=black] (5,-11.6) circle (.1cm); \draw[very thick,fill=black] (6.5,-13.1) circle (.1cm); \draw[very thick,fill=black] (8,-13.1) circle (.1cm); \draw[very thick,fill=black] (9.5,-13.1) circle (.1cm); \draw[very thick,fill=black] (11,-13.1) circle (.1cm); \draw[very thick] (2,-13.1) -- +(9,0); \draw[very thick] (5,-11.6) -- +(0,-1.5); \draw (2,-13.5) node {$s_2$}; \draw (3.5,-13.5) node {$s_3$}; \draw (5,-13.5) node {$s_4$}; \draw (6.5,-12.7) node {$s_5$}; \draw (8,-13.5) node {$s_6$}; \draw (9.5,-13.5) node {$s_7$}; \draw (11,-13.5) node {$s_8$}; \draw (5,-11.2) node {$s_1$}; \draw (11.8,-12.2) node {$X^\bot$}; \draw (9.5,-13.3) ellipse (2cm and .8cm); \draw[-stealth] (11.4,-12.4) -- +(-.45,-.25); \draw (1.2,-12.2) node {$X$}; \draw (3.5,-13.3) ellipse (2cm and .8cm); \draw[-stealth] (1.45,-12.4) -- +(.45,-.25); \draw (7.1,-11.2) node {$\partial X$}; \draw [rotate around={135:(5.75,-12.15)}] (5.75,-12.15) ellipse (1.7cm and .6cm); \draw[-stealth] (6.7,-11.4) -- +(-.45,-.25); \end{tikzpicture} \caption{Example:~$\partial X$ and~$X^\bot$}\label{exampledeltaetperp} \end{figure} \end{center} \begin{df}\label{defribb} \begin{enumerate} \item Let~$t$ belong to~$S$ and~$X$ be included in~$S$. Denote by~$X(t)$ the indecomposable component of~$X\cup\{t\}$ containing~$t$. If~$t$ lies in~$X$, we set~$d_{X,t}=\D_{X(t)}$; otherwise, we set $$d_{X,t} = \D_{X\cup\{t\}}\D_{X}^{-1},$$ that is~$d_{X,t} = \D_{X(t)}\D_{X(t)-\{t\}}^{-1}$.
In both cases, there exists~$Y\subseteq X\cup\{t\}$ and~$t'\in X(t)$ so that~$Y d_{X,t} = d_{X,t} X$ with~$Y\cup\{t'\} = X\cup\{t\}$ and~$Y(t')= X(t)$. The element~$d_{X,t}$ is called a {\em positive elementary~$Y$-ribbon-$X$}. \item For~$X,Y\subseteq S$, we say that~$g\in A_S^+$ is a positive~$Y$-ribbon-$X$ if~$Yg = gX$. \end{enumerate}\end{df} For instance, considering the example in Figure~\ref{exampledeltaetperp}, we have~$d_{X,s_5} = s_2s_3s_4s_5$,~$d_{X,t} = t$ for~$t$ in~$X^\perp$, and~$\Delta$ is a positive~$X$-ribbon-$X$. The connection between positive ribbons and elementary ones appears in the following result. \begin{prp} Assume~$A_S$ is a spherical type Artin-Tits group and~$g$ lies in~$A_S^+$. Then~$g$ is a positive~$Y$-ribbon-$X$ if and only if~$g=g_n\cdots g_1$ where each~$g_i$ is a positive elementary~$X_i$-ribbon-$X_{i-1}$, with~$X_0=X$ and~$X_n=Y$. \end{prp} \begin{prp}\label{lm58} Assume~$A_S$ is a spherical type Artin-Tits group. Let~$X$ be included in~$S$ and~$u$ belong to~$A_S^+$. Let~$\varepsilon\in\{1,2\}$ be such that~$\D_X^\varepsilon$ lies in~$Z(A_X)$. \begin{enumerate} \item Assume $u$ is a positive $Y$-ribbon-$X$ for some $Y\subseteq S$. \begin{enumerate} \item $\D_Yu=u\D_X$. \item Assume~$t$ belongs to~$S$. Then,~$u\succeq t~\Leftrightarrow~u\succeq d_{X,t}$. \end{enumerate} \item If~$u\D_X^\varepsilon\succeq u$, then there exists $Y\subseteq S$ such that $u\D_X^\varepsilon u^{-1}=\D_{Y}^\varepsilon$, $uA_X u^{-1}=A_{Y}$ and $\Gamma_X\sim\Gamma_{Y}$. Moreover, if $u$ is reduced-$X$, then $u$ is a positive $Y$-ribbon-$X$, that is $Yu = uX$. \end{enumerate} \end{prp} The above results are not all explicitly stated in \cite{Par1,Got} but are well known to specialists. The second part of (ii) is stated in \cite[Lemma 2.2]{Got} and the first part follows (see also \cite[Lemma 5.6]{Par1}). Point~(i) is proved in the proof of \cite[Lemma~2.2]{Got} (see \cite[Lemma 5.6]{Par1} for details). For point (i)(b), see also \cite[Example 3.14]{God6}.
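To illustrate Definition~\ref{defribb} in the smallest case (a direct computation of ours), take type~$A_2$ with~$S=\{s_1,s_2\}$, $X=\{s_1\}$ and~$t=s_2$, so that~$t$ lies in~$\partial X$ and~$X(t)=\{s_1,s_2\}$. Then $$d_{X,t} \;=\; \D_{\{s_1,s_2\}}\D_{\{s_1\}}^{-1} \;=\; (s_1s_2s_1)\,s_1^{-1} \;=\; s_1s_2,$$ and the braid relation gives $$s_2\,(s_1s_2) \;=\; s_1s_2s_1 \;=\; (s_1s_2)\,s_1,$$ that is,~$d_{X,t}$ is a positive elementary~$\{s_2\}$-ribbon-$\{s_1\}$.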
\\ The \emph{support} of a word on~$S$ is the set of letters that are involved in this word. It follows from the presentation of~$A^+_S$ that two representing words of the same element in~$A_S^+$ have the same support. So the {\em support} of an element of~$A_S^+$ is well-defined. In the sequel, by~$Supp(g)$ we denote the support of an element~$g$ in~$A_S^+$.\\ \begin{lm}\label{lm59} Assume~$A_S$ is a spherical type Artin-Tits group. Let~$X\subsetneq S$ be such that~$\Gamma_X$ is connected, and~$t\in\partial X$. Then $$Supp(d_{X,t})=X\cup\{t\}.$$ \end{lm} \begin{proof} By assumption~$t$ is not in~$X$, so~$d_{X,t} = \D_{X\cup\{t\}}\D_{X}^{-1}$ and $Supp(d_{X,t})$ is included in~$X\cup\{t\}$. Let us show the converse inclusion. By Proposition~\ref{lm58} (i), we have~$d_{X,t} = v_0s_0$ for some~$v_0$ in~$A_S^+$, where~$s_0=t$; in particular~$t$ belongs to the support of~$d_{X,t}$. Let~$s$ be in~$X$. Since~$X$ is connected and~$t$ belongs to~$\partial X$, there exists a finite sequence~$s_1,\dots,s_n$ of elements of~$X$ such that, setting~$s_0=t$, we have~$s_n = s$ and~$m_{s_i,s_{i+1}}\neq 2$ for all~$0\leq i<n$. We assume the sequence is chosen so that~$n$ is minimal. Assume~$d_{X,t} = v_i s_i\cdots s_0$ for some~$0 \leq i<n$ with~$v_i$ in~$A_S^+$. Since~$Yd_{X,t} = d_{X,t}X$ for some~$Y\subseteq X\cup\{t\}$, we can write~$v_is_i\cdots s_0s_{i+1} = s'_{i+1}v_is_i\cdots s_0$ for some~$s'_{i+1}$ in~$X\cup\{t\}$. By minimality of~$n$, we have~$m_{s_j,s_{i+1}}=2$ for any~$j<i$. So~$v_is_is_{i+1}s_{i-1}\cdots s_0 = s'_{i+1}v_is_i\cdots s_0$ and~$v_is_is_{i+1} = s'_{i+1}v_is_i$. This imposes that~$v_is_is_{i+1} = s'_{i+1}v_is_i = v'\underbrace{\cdots s_{i+1}s_is_{i+1}}_{m\ terms}$ with~$m = m_{s_i,s_{i+1}}$ and~$v'$ in~$A_S^+$ (see \cite{Del,BrS}). This imposes in turn that we can write~$v_i = v_{i+1}s_{i+1}$ and~$d_{X,t} = v_{i+1}s_{i+1}s_i\cdots s_0$. Then, we obtain step-by-step that~$d_{X,t}$ can be decomposed as~$v_n s_n\cdots s_0$. Hence~$s$ belongs to the support of~$d_{X,t}$ for any~$s$ in~$X$. So the converse inclusion holds.
\end{proof} \begin{lm}\label{lm512} Let~$u\in A_S^+$ and~$s\in S$. Denote by~$u_2^{-1}v_1$ the left orthogonal splitting of the element~$u^{-1}su$. There exist~$u_1$ in~$A^+_S$ and~$s_1$ in~$S$ so that~$u = u_1u_2$ and $v_1 = s_1u_2$. Moreover, $u_1$ is a positive~$s$-ribbon-$s_1$. \end{lm} \begin{proof} By \cite[Theorem~1]{GKT} there exist~$u_1$ in~$A^+_S$ and~$s_1$ in~$S$ so that~$u = u_1u_2$ and~$v_1 = s_1u_2$. Moreover, applying~\cite[Lemma~2.3]{GKT}, a straightforward induction on the length of $u$ proves that~$u_1$ is a positive~$s$-ribbon-$s_1$. \end{proof} In the sequel, we say that an element of~$A_S^+$ is a positive ribbon-$X$ when it is a positive~$Y$-ribbon-$X$ for some~$Y$. Similarly, we say that an element is a positive $Y$-ribbon when it is a positive~$Y$-ribbon-$X$ for some~$X$. \subsection{The proof of Theorem~\ref{theointro1}} In this section we prove Theorem~\ref{theointro1}. The proof needs two preliminary results, namely Lemma~\ref{lm517} and Proposition~\ref{prp53}; the latter is the main argument and is proved here, while the proof of the former is postponed to the next section. \begin{lm}\label{lm517} Under the assumptions of Proposition \ref{prp53}, we have $b\succeq s$ for all $s\in S\setminus X$. \end{lm} \begin{prp}\label{prp53} Let~$A_S$ be an irreducible Artin-Tits group of spherical type. Let~$X\subsetneq S$. Let~$b$ in~$A_S^+\setminus\{1\}$ be a positive ribbon-$(X\cup X^\bot)$ that is reduced-$X$. Suppose further that for all~$Y\subseteq S$ containing~$X$, and~$\varepsilon(Y)\in\{1,2\}$ minimal such that~$\D_Y^{\varepsilon(Y)}\in Z_{A_S}(A_X)$, we have~$b\D_Y^{\varepsilon(Y)}\succeq b$. Then there exists~$n\in\N^*$ so that $$b=\D^n\D_X^{-n}.$$ \end{prp} Note that~$\Delta_X$ right-divides~$\Delta$ in~$A_S^+$ and~$\Delta \Delta_X = \Delta_{\tau(X)}\Delta$ by Proposition~\ref{reststdAT}. So for any positive integer~$n$ the element~$\D^n\D_X^{-n}$ belongs to~$A_S^+$.
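For instance (a direct verification of ours in the smallest case), take type~$A_2$ with~$S=\{s_1,s_2\}$ and~$X=\{s_1\}$, so that~$\Delta = s_1s_2s_1$ and~$\Delta_X = s_1$. The first two of these positive elements are $$\D\D_X^{-1} \;=\; s_1s_2s_1\,s_1^{-1} \;=\; s_1s_2$$ and $$\D^2\D_X^{-2} \;=\; \D(\D s_1^{-1})s_1^{-1} \;=\; \D\, s_1s_2\,s_1^{-1} \;=\; s_2\D s_2 s_1^{-1} \;=\; s_2s_1\D s_1^{-1} \;=\; s_2s_1s_1s_2,$$ where we used the relations~$\D s_1 = s_2\D$ and~$\D s_2 = s_1\D$.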
\begin{proof}[Proof of Proposition \ref{prp53}] We have~$\D_X\succeq s$ for all~$s\in X$ and, by Lemma \ref{lm517}, we have~$b\succeq s$ for all~$s\in S\setminus X$. Since, by assumption,~$b\D_X\succeq b$, we get that~$b\D_X\succeq s$ for all~$s\in S$. Thus~$b\D_X\succeq\D$ and, therefore,~$b\succeq\D\D_X^{-1}$ in~$A_S^+$. Let~$k\in\N^*$ be maximal such that~$b\succeq\D^k\D_X^{-k}$. Write~$b=d\D^k\D_X^{-k}$ with~$d\in A_S^+$. We show that~$d=1$. This will prove the proposition. Since~$\D X=\tau(X)\D$, the element~$\D^k$ is a positive~$\tau^k(X)$-ribbon-$X$ and~$\D^kX=\tau^k(X) \D^k$. Therefore by Proposition~\ref{lm58}, we have~$\D^k\D_X=\D_{\tau^k(X)}\D^k$. For the remainder of the proof, for~$Z\subseteq S$, we set~$Z_k = \tau^k(Z)$. Moreover,~$\D_X$ is a positive~$X$-ribbon-$X$. Then~$\D^k\D_X^{-k}$ is a positive~$X_k$-ribbon-$X$. Also for the remainder of the proof, when~$s$ lies in~$X_k$, we denote by~$s_X$ the element of~$X$ so that~$s\D^k\D_X^{-k} = \D^k\D_X^{-k}s_X$. Assume there exists~$s$ in~$X_k$ so that~$d = u s$ with~$u$ in~$A_S^+$. Then we have~$b= us\D^k\D_X^{-k}$. We get~$b = u\D^k\D_X^{-k}s_X$. But this is not possible, since~$b$ is reduced-$X$. Hence,~$d$ is reduced-$X_k$. We now prove that~$d$ is a positive ribbon-$(X_k\cup X_k^\bot)$. Let~$s$ lie in~$X_k$. We have~$s\D^k\D_X^{-k}=\D^k\D_X^{-k}s_X$. By assumption,~$b$ is a positive ribbon-$X$, therefore there exists~$s'$ in~$S$ so that~$bs_X = s' b$. Hence we get~$ds\D^k\D_X^{-k} = d\D^k\D_X^{-k}s_X = s'd\D^k\D_X^{-k}$, and therefore~$ds = s'd$. As this is so for every element of~$X_k$, we deduce that~$d$ is a positive ribbon-$X_k$. Let~$s$ lie in~$X_k^\bot$. For every~$t$ in~$X$, the element~$\tau^k(t)$ lies in~$X_k$ and, therefore,~$m_{\tau^k(t),s} = 2$. But the involution~$\tau$ induces an automorphism of the Coxeter graph associated with the presentation of~$A_S$. It follows that for every~$t$ in~$X$, we have~$m_{t,\tau^k(s)}=m_{\tau^k(t),s} = 2$. Hence,~$\tau^k(s)$ belongs to~$X^\bot$.
Since~$b$ is a positive ribbon-$X^\bot$, we have~$b\tau^k(s) = s'b$ for some~$s'\in S$. Hence we get~$ds\D^k\D_X^{-k} = d\D^k\D_X^{-k}\tau^k(s) = s'd\D^k\D_X^{-k}$, and therefore~$ds = s'd$. As this is so for every element of~$X^\bot_k$, we deduce that~$d$ is a positive ribbon-$X_k^\bot$. Gathering the two results we get that~$d$ is a positive ribbon-$(X_k\cup X_k^\bot)$. Let~$Y\subseteq S$ contain~$X_k$ and let~$\eta(Y)$ be positive and minimal such that~$\D_Y^{\eta(Y)}$ belongs to~$Z_{A_S}(A_{X_k})$. The involution~$\tau^k$ exchanges~$X$ and~$X_k$ and exchanges~$Y$ and~$Y_k$. It follows, firstly, that the inclusion~$X_k\subseteq Y$ implies the inclusion~$X\subseteq Y_k$ and, secondly, that~$\tau^k$ sends~$A_{X_k}$ and~$\D_Y$ to~$A_X$ and~$\D_{Y_k}$, respectively. Thus,~$\D_{Y_k}^{\eta(Y)}$ belongs to~$Z_{A_S}(A_X)$ with~$\eta(Y) = \varepsilon(Y_k)$. Then, by assumption, we have~$b\D_{Y_k}^{\eta(Y)}= ub$ for some~$u$ in~$A_S^+$. Since~$b\D_{Y_k}^{\eta(Y)} = d\D^k\D_X^{-k} \D_{Y_k}^{\eta(Y)} = d\D^k \D_{Y_k}^{\eta(Y)} \D_X^{-k} = d\D_{Y}^{\eta(Y)}\D^k \D_X^{-k}$ and~$ub = u d\D^k\D_X^{-k}$, we obtain that~$d\D_{Y}^{\eta(Y)} = ud$. As a consequence, if~$d\neq 1$ then, replacing~$b$ and~$X$ by~$d$ and~$X_k$, respectively, we can repeat the beginning of the argument and deduce that~$d = d_1\D\D_{X_k}^{-1}$ for some~$d_1$ in~$A_S^+$. But this leads to a contradiction with the maximality of~$k$, since we get~$b = d\D^k\D_X^{-k} = d_1\D\D_{X_k}^{-1}\D^k\D_X^{-k}=d_1\D^{k+1}\D_X^{-(k+1)}$. Hence~$d = 1$ and~$b = \D^k\D_X^{-k}$. \end{proof} We turn now to the proof of Theorem~\ref{theointro1}. \begin{proof}[Proof of Theorem \ref{theointro1}] Let~$u$ lie in $\DZ_{A_S}(A_X)$. We have~$Z_{A_X}(A_X)\subseteq Z_{A_S}(A_X)$, hence~$\DZ_{A_S}(A_X)\subseteq Z_{A_S}(Z_{A_X}(A_X))$ and $u$ belongs to $Z_{A_S}(Z_{A_X}(A_X))$. Thanks to Theorem \ref{thm57}, we can write~$u=y\cdot z$, with~$yX=Xy$ and~$z\in A_X$.
Write (see Proposition~\ref{reststdAT})~$y=\D^{-2m}h$ with $h$ in $A_S^+$, and decompose $h$ as~$h=abc$, with~$a,c\in A_X^+$ and~$b$ being~$X$-reduced-$X$. Since~$yX=Xy$ and $\D^2$ is in $Z(A_S)$, we have~$hX=Xh$ and so~$h\D_X=\D_Xh$. Using that $h = abc$ with~$a,c$ in $A_X^+$ and that~$\D^2_X$ lies in $Z(A_X)$, we deduce that~$b\D_X^2=\D_X^2b$. The element~$b$ is reduced-$X$, so by Proposition~\ref{lm58} we have~$bX=Xb$. It follows that there exists~$z'\in A_X$ such that~$bcz=z'b$. Set~$x=az'$. Then, $x$ belongs to $A_X$ and~$u=\D^{-2m}\cdot x\cdot b$. Suppose~$b\neq 1$. By definition~$X^\bot\subseteq Z_{A_S}(A_X)$. Therefore, for all~$s\in X^\bot$ we have~$us=su$ and $s\D^{-2m}\cdot x\cdot b = \D^{-2m}\cdot x\cdot s b$. By cancellation, we obtain~$bs=sb$ for all~$s\in X^\bot$. So, $b$ is a positive ribbon-$(X\cup X^\perp)$. Now, let $Y$ be included in $S$ and contain $X$. Let $\varepsilon(Y)\in\{1,2\}$ be minimal such that~$\D_Y^{\varepsilon(Y)}$ lies in $Z_{A_S}(A_X)$. Then,~$u\D_Y^{\varepsilon(Y)}=\D_Y^{\varepsilon(Y)}u$ and, as before, we get~$b\D_Y^{\varepsilon(Y)}=\D_Y^{\varepsilon(Y)} b$. As a consequence we have~$b\D_Y^{\varepsilon(Y)}\succeq b$. By Proposition \ref{prp53}, we deduce that there exists~$n\in\N^*$ so that~$b=\D^n\D_X^{-n}$. Thus, we get~$u=\D^{-2m}x\D^n\D_X^{-n}$. Assume first that~$\D\in \DZ_{A_S}(A_X)$ and~$\D\notin Z(A_S)$. Then, by Lemma \ref{lm518}, we have~$\D X=X\D$ and $\tau^n(x)$ belongs to $A_X$. Therefore,~$u=\D^{-2m+n}\cdot \tau^n(x)\D_X^{-n}$ and $u$ belongs to $QZ(A_S)\cdot A_X$. So~$\DZ_{A_S}(A_X)$ is included in~$QZ(A_S)\cdot A_X$. Conversely, the assumption that~$\D$ lies in $\DZ_{A_S}(A_X)$ imposes the inclusion~$QZ(A_S)\cdot A_X\subseteq \DZ_{A_S}(A_X)$. Therefore, the latter inclusion is actually an equality. Moreover we have~$QZ(A_S)\cdot A_X=A_X\cdot QZ(A_S)$, since $\D$ belongs to $QZ_{A_S}(A_X)$ by the above argument. Assume now that~$\D\notin \DZ_{A_S}(A_X)$ or~$\D\in Z(A_S)$.
First, the inclusion~$Z(A_S)\cdot A_X\subseteq \DZ_{A_S}(A_X)$ holds in any case. If~$\D\in Z(A_S)$ or $n$ is even, then~$u$, that is $\D^{-2m+n}x\D_X^{-n}$, lies in $Z(A_S)\cdot A_X$ and so the other inclusion~$\DZ_{A_S}(A_X)\subseteq Z(A_S)\cdot A_X$ holds too. Assume finally~$\D\notin \DZ_{A_S}(A_X)$. Since~$u$ lies in $\DZ_{A_S}(A_X)$, for every $w$ in~$Z_{A_S}(A_X)$ we have $wu=uw$ and therefore~$\D^{-2m}xw\D^n\D_X^{-n}=\D^{-2m}x\D^nw\D_X^{-n}$. This, in turn, imposes~$\D^nw=w\D^n$ for every $w$ in~$Z_{A_S}(A_X)$. In other words,~$\D^n$ lies in $\DZ_{A_S}(A_X)$ too. Since $\D^2$ lies in $\DZ_{A_S}(A_X)$ but $\D$ does not, we deduce that $n$ has to be even, and conclude by the above argument that $Z(A_S)\cdot A_X = \DZ_{A_S}(A_X)$. Finally, we note that $A_X \cap QZ(A_S) = A_X \cap Z(A_S) = \{1\}$. Indeed, $X\neq S$ and $Supp(\D) = S$; therefore, $\D^m$ does not belong to $A_X$ except if $m = 0$. Hence, we have~$A_X\cdot QZ(A_S) = A_X\times QZ(A_S)$ and~$A_X\cdot Z(A_S) = A_X\times Z(A_S)$. \end{proof} \subsection{The proof of Lemma \ref{lm517}} Here we focus on the proof of Lemma \ref{lm517}, which was postponed in the previous section. It is technical and, to help the reader, we decompose it into three steps, namely Lemma~\ref{lm514}, Lemma~\ref{lm516} and the final argument. \begin{lm}\label{lm514} Under the assumptions of Proposition \ref{prp53}, if~$t\in\partial X$ then $$b\nsucceq t\ \Leftrightarrow\ bt = t'b \textrm{ for some }t'\in S.$$ \end{lm} \begin{proof} Assume~$b\nsucceq t$. Set~$Y=X\cup\{t\}$. Under the assumptions of Proposition~\ref{prp53}, we have~$b\D_Y^{\varepsilon(Y)}\succeq b$. By Proposition~\ref{lm58}, we deduce that~$b\D_Y^{\varepsilon(Y)} b^{-1}=\D_{Y'}^{\varepsilon(Y)}$ and~$bA_Y b^{-1}=A_{Y'}$ for some subset $Y'$ of $S$. On the other hand, $b$ is a positive $X'$-ribbon-$X$ for some subset $X'$ of $S$. It follows that~$X'$ is included in~$A_{Y'}$ and, therefore, in~$Y'$.
Now, the sets~$X'$ and~$Y'$ have the same cardinality as~$X$ and~$Y$, respectively. Then there exists~$t'$ in~$Y'$ so that~$Y'=X'\cup\{t'\}$. We are going to prove that~$bt = t'b$. By Lemma~\ref{lm512}, we can decompose $b$ as $b = b_1b_2$ with $b_2$ in $A_S^+$ and $b_1$ a positive $t'$-ribbon-$t''$ for some $t''$ in $S$, so that the left orthogonal splitting of~$b^{-1}t'b$ is~$b_2^{-1}t''b_2$. By the above argument,~$b^{-1}t'b$ lies in $A_{Y}$, so~$t''$ has to lie in $Y$ and $b_2$ has to lie in $A_Y^+$. But~$b$ is reduced-$Y$: indeed, we assumed that $b$ is reduced-$X$ and that $b\nsucceq t$. This imposes $b_2 = 1$, $b = b_1$ and~$bt'' = t'b$ for some $t''$ in $Y$. Finally, we already have $X'b = bX$. Since $t'$ does not belong to $X'$, it follows that $t''$ cannot lie in $X$. Thus $t'' = t$ and we are done. Conversely, assume~$bt\succeq b$. Then~$b$ is a positive ribbon-$\{t\}$. Since it is a positive ribbon-$X$, it is also a positive ribbon-$Y$. Denote by $Y(t)$ the indecomposable component of $Y$ that contains $t$. Since $t$ lies in $\partial X$, $Y(t)$ contains some element of $X$. By Proposition~\ref{lm58} (i)(b), if $t$ is a right-divisor of $b$ then so are all the elements of $Y(t)$. But~$b$ is reduced-$X$. Thus $t$ does not right-divide $b$. \end{proof} Note that we proved the above result without using the assumption that~$b$ is a positive ribbon-$X^\bot$; this hypothesis is thus not needed for Lemma \ref{lm514}. \begin{lm}\label{lm516} Under the assumptions of Proposition \ref{prp53}, we have $$Supp(b)=S.$$ \end{lm} \begin{proof} By assumption $b\neq 1$, so its support is not empty. Assume by contradiction that~$Supp(b)\neq S$. Let~$U$ be an indecomposable component of~$Supp(b)$. Fix~$u$ in $\partial U$ and set $V = Supp(b)\setminus U$. Since~$U$ is an indecomposable component of~$Supp(b)$, the element~$u$ does not lie in $Supp(b)$. Then,~$u$ does not right-divide~$b$. We claim that $bub^{-1}$ lies in $A^+_S$.
Indeed,~$b$ is a positive ribbon-$(X\cup X^\bot)$, so if~$u$ belongs to $X\cup X^\bot$ there is nothing to say; if~$u$ lies in $\partial X$, then~$bu\succeq b$ by Lemma \ref{lm514}. Now, the set~$U$ is an indecomposable component of~$Supp(b)$, so each element of $U$ commutes with each element of~$V$ and we can write $b = b_2b_1$ with~$b_1\in A_U^+$ and~$b_2\in A_{V}^+$. Write $b_1 = b'_1s$ with $s\in U$. Since~$U\cup\{u\}$ is indecomposable, there exist~$u_1,\dots,u_n\in U$ such that, setting $u_0 = u$, we have $u_n =s$ and~$m_{u_i,u_{i+1}}>2$ for all~$0\leq i<n$. Up to replacing $s$ by some $u_i$ with $i<n$, we can assume that $b$ has no right-divisor among~$u_1,\dots,u_{n-1}$. \begin{center} \begin{tikzpicture}[decoration={brace}] \draw[very thick,fill=black] (2,0) circle (.1cm); \draw[very thick,fill=black] (3.5,0) circle (.1cm); \draw[very thick,fill=black] (6,-.6) circle (.1cm); \draw[very thick] (2,0) -- +(-1,0); \draw[very thick] (2,0) -- +(1.5,0); \draw[dotted,very thick] (3.5,0) -- +(1,.4); \draw[dotted,very thick] (3.5,0) -- +(2.5,-.6); \draw[dotted,very thick] (1,0) -- +(-1,.4); \draw[dotted,very thick] (1,0) -- +(-1,-.4); \draw (2,.35) node {$u$}; \draw (3.7,.35) node {$u_1$}; \draw (5.5,.2) node {{\large$U$}}; \draw (6.45,-.5) node {$u_n$}; \draw (5.2,0) ellipse (2.1 cm and 1cm); \end{tikzpicture} \end{center} Set~$U'=\{u,u_1,\dots,u_{n-1}\}$. Let~$u_i$ lie in $U'$. If $u_i$ does not lie in $X\cup X^\bot$, then~$u_i$ belongs to $\partial X$ and, by Lemma \ref{lm514},~$bu_i = v_i b$ for some $v_i$ in $S$. On the other hand,~$b$ is a positive ribbon-$(X\cup X^\bot)$; therefore $b$ is also a positive ribbon-$U'$. The graph~$\Gamma_{U'}$ is connected by definition, and~$u_n$ lies in $\partial U'$ and right-divides $b$. Then by Proposition~\ref{lm58}, the positive elementary ribbon~$d_{U',u_n}$ right-divides $b$. Applying Lemma \ref{lm59}, we get that~$U'$ is contained in the support of $b$. Hence $u$ belongs to $Supp(b)$, a contradiction. So $Supp(b) = S$.
\end{proof} We are now ready to prove Lemma \ref{lm517}. \begin{proof}[Proof of Lemma \ref{lm517}] Let~$s\in S\setminus X$, and set~$Y=S\setminus\{s\}$. Write~$b=b_1b_2$ with~$b_2\in A_Y^+$ and~$b_1$ reduced-$Y$. By Lemma \ref{lm516}, we have~$Supp(b)=S$. Since $b_2$ lies in $A_Y^+$, it follows that $b_1\neq 1$. So some element of~$S$ right-divides~$b_1$; as $b_1$ is reduced-$Y$ and~$S\setminus Y=\{s\}$, the element $s$ has to right-divide $b_1$. We have~$b\D_Y^2b^{-1}=b_1\D_Y^2b_1^{-1}$. According to the assumptions of Proposition~\ref{prp53}, we have~$b\D_Y^2 = zb$ for some $z$ in $A^+_S$. Indeed, if~$\varepsilon(Y)=1$ then~$b\D_Y=z_1b$ for some~$z_1\in A_S^+$, and therefore~$b\D_Y^2=z_1b\D_Y=z_1^2b$. By Proposition~\ref{lm58}, we deduce that $b_1$ is a $Y'$-ribbon-$Y$ for some $Y'\subseteq S$ and $b = b_1b_2 = b'_2b_1$ with $b_2'\in A_{Y'}^+$. Since $s$ right-divides $b_1$, it has also to right-divide $b$. \end{proof} \subsection{When~$\Gamma_S$ is not connected} In Theorem~\ref{theointro1} we consider an irreducible Artin-Tits group~$A_S$ of spherical type. Here we extend the theorem to any spherical type Artin-Tits group~$A_S$. \begin{thm}\label{thm59} Let~$A_S$ be an Artin-Tits group of spherical type. Denote the indecomposable components of~$S$ by~$S_1,\dots,S_n$. Let~$A_X$ be a standard parabolic subgroup of~$A_S$ and set~$X_i=X\cap S_i$ for all~$i$. Set $$I=\{1\leq i\leq n\mid X_i\neq S_i, \D_{S_i}\in \DZ_{A_{S_i}}(A_{X_i})\textrm{ and }\D_{S_i}\notin Z(A_{S_i})\},$$ $$J = \{1\leq i\leq n\mid X_i\neq S_i\textrm{ and } i\not\in I\}.$$ Finally, set $S_I = \bigcup_{i\in I} S_i$ and $S_{J} = \bigcup_{i \in J} S_i$. Then we have $$\DZ_{A_S}(A_X) = A_X\times QZ(A_{S_I})\times Z(A_{S_J}).$$ \end{thm} \begin{proof} For any direct product of groups~$G=G_1\times\cdots\times G_n$ and any subgroup $H$ of $G$ that decomposes as $H = H_1\times\cdots\times H_n$ with $H_i = H\cap G_i$ for each $i$, we have $$Z_G(H)=Z_{G_1}(H_1)\times\cdots\times Z_{G_n}(H_n).$$ Here, $A_S = A_{S_1}\times\cdots\times A_{S_n}$ and $A_X \cap A_{S_i} = A_{X_i}$, so that $A_X = A_{X_1}\times\cdots\times A_{X_n}$.
Now by Theorem~\ref{theointro1}, if $i$ lies in $I$, then $\DZ_{A_{S_i}}(A_{X_i}) = A_{X_i}\times QZ(A_{S_i})$; if $i$ lies in $J$ then $\DZ_{A_{S_i}}(A_{X_i}) = A_{X_i}\times Z(A_{S_i})$. In addition, if $i$ is neither in $I$ nor in $J$, then $X_i = S_i$ and $\DZ_{A_{S_i}}(A_{X_i}) = A_{X_i}$. So, we deduce that $$\DZ_{A_S}(A_X) = Z_{A_S}(\prod_{i = 1}^nZ_{A_{S_i}}(A_{X_i})) = \prod_{i = 1}^n \DZ_{A_{S_i}}(A_{X_i}) =$$ $$\prod_{i\in I} (A_{X_i}\times QZ(A_{S_i})) \times\prod_{i\in J} (A_{X_i}\times Z(A_{S_i}))\times \prod_{i\not\in I\cup J} A_{X_i} = $$ $$ \prod_{i = 1}^n A_{X_i}\times \prod_{i\in I} QZ(A_{S_i})\times\prod_{i\in J} Z(A_{S_i}).$$ But $\prod_{i = 1}^n A_{X_i} = A_X$, $\prod_{i\in I} QZ(A_{S_i}) = QZ(A_{S_I})$ and $\prod_{i\in J} Z(A_{S_i}) = Z(A_{S_J})$. So the equality holds. \end{proof} \subsection{Application to the subgroup conjugacy problem} Given a group~$G$ and a subgroup~$H$ of~$G$, the subgroup conjugacy problem for~$H$ is solved by finding an algorithm that determines whether any two given elements of~$G$ are conjugate by an element of~$H$. In this section, we focus on Artin-Tits groups of type~$B$ or~$D$ and use Theorem~\ref{theointro1} and \cite[Theorem 1.1]{Par2} to reduce the subgroup conjugacy problem for their irreducible standard parabolic subgroups to an instance of the simultaneous conjugacy problem. We follow the strategy used in \cite{GKLT} to solve the subgroup conjugacy problem for irreducible standard parabolic subgroups of an Artin-Tits group of type $A$. The simultaneous conjugacy problem is solved for Artin-Tits groups of type~$A$ in \cite{Men3} (see also \cite{LeeLee}), but the result and its proof can be generalized verbatim to all Artin-Tits groups of spherical type, in particular to Artin-Tits groups of type $B$ or type~$D$.
Hence, we obtain a solution to the subgroup conjugacy problem for irreducible standard parabolic subgroups of Artin-Tits groups of type $B$ and $D$.\\ Let us recall \cite[Theorem 1.1]{Par2} and \cite[Theorem 2.13]{GKLT}. \begin{thm}[\cite{Par2}, Theorem 1.1]\label{thm60} Let~$A_S$ be an Artin-Tits group of spherical type such that~$\Gamma_S=A_k~(k\geq1)$,~$\Gamma_S=B_k~(k\geq2)$ or~$\Gamma_S=D_k~(k\geq4)$. Let~$X\subseteq S$ such that~$\Gamma_X$ is connected. Then~$Z_{A_S}(A_X)$ is generated by $$X^\perp\cup\{\D_Y\in Z_{A_S}(A_X) \mid X\subseteq Y\}\cup\{\D_Y\D_{Y'}\in Z_{A_S}(A_X) \mid X\subseteq Y,X\subseteq Y' \}.$$ \end{thm} Note that in the third set, we can restrict the pairs $(Y,Y')$ to those so that neither $\D_Y$ nor $\D_{Y'}$ belongs to $Z_{A_S}(A_X)$. In the sequel, we denote the obtained generating set by $\Upsilon(X)$. \begin{ex}\label{ex61} Consider $S = \{s_1,s_2,s_3\}$ with~$A_S$ of type~$B_3$ as below. Set $X = \{s_2\}$. \begin{center} \begin{tikzpicture} \draw[very thick,fill=black] (2,-1.5) circle (.1cm); \draw[very thick,fill=black] (3.5,-1.5) circle (.1cm); \draw[very thick,fill=black] (5,-1.5) circle (.1cm); \draw[very thick] (2,-1.5) -- +(3,0); \draw (2,-1.9) node {$s_1$}; \draw (3.5,-1.9) node {$s_2$}; \draw (5,-1.9) node {$s_3$}; \draw (2.75,-1.23) node {$4$}; \draw (4.5,-2.5) node {$X$}; \draw (3.5,-1.7) ellipse (.5cm and .8cm); \draw[-stealth] (4.3,-2.3) -- +(-.5,.5); \end{tikzpicture} \end{center} We have $X^\bot=\emptyset$ and $Z_{A_S}(A_X)$ is generated by $\Upsilon(X) = \{ s_2, \Delta_{\{s_1,s_2\}}, \Delta_S, \Delta^2_{\{s_2,s_3\}}\}$. \end{ex} \begin{thm}[\cite{GKLT}, Theorem 2.13]\label{thm61} Let~$G$ be a group and~$H$ be a subgroup such that~$\DZ_G(H)=Z(G)\cdot H$. Suppose further that~$Z_G(H)$ is generated by a set~$\{g_1,\dots,g_n\}$. Then for~$x,y\in G$, the following are equivalent: \begin{enumerate} \item there exists~$c\in H$ such that~$y=c^{-1}xc$.
\item there exists~$z\in G$ such that \begin{enumerate} \item~$y=z^{-1}xz$, and \item[($b_i$)]~$g_i=z^{-1}g_iz$ for all~$1\leq i\leq n$. \end{enumerate} \end{enumerate} \end{thm} \begin{cor}\label{cor51} Let~$A_S$ be an Artin-Tits group of type~$B_k~(k\geq2)$ or~$D_k~(k\geq4)$. Let~$X\subseteq S$ be such that~$\Gamma_X$ is connected. In case~$\Gamma_S$ is of type~$D_{2k+1}$, assume~$\{s_2,s_{2'},s_3\}$ is not included in $X$ with the notations of Figure \ref{figuretypeD}. For any pair~$(x,y)$ of elements of~$A_S$, the following are equivalent: \begin{enumerate} \item there exists~$c\in A_X$ such that~$y=c^{-1}xc$. \item there exists~$z\in A_S$ such that \begin{enumerate} \item $y=z^{-1}xz$, \item $g = z^{-1}g z$ for all~$g$ in $\Upsilon(X)$. \end{enumerate} \end{enumerate} \end{cor} \begin{proof} By Theorem~\ref{theointro1} and Proposition~\ref{condtech} we have~$\DZ_{A_S}(A_X)=Z(A_S)\times A_X$. So we are in a position to apply Theorem~\ref{thm61}. \end{proof} \section{The non spherical type cases} We turn now to the proof of Theorem~\ref{theointro2}, which is concerned with Artin-Tits groups that are not of spherical type. Our main argument is Proposition~\ref{thmlastsect}. Indeed, in~\cite{God4} the second author stated several conjectures that have been proved to hold for Artin-Tits groups of various types. Our proof is based on these conjectures. \begin{prp}\label{thmlastsect} \begin{enumerate} \item Let $A_S$ be an Artin-Tits group.
Assume $A_S$ has the property $(\circledast)$ stated in~\cite{God4}. Then for any $X$ included in $S$, one has: \begin{enumerate} \item If~$A_X$ is of spherical type, then for any positive integer $k$, $$Z_{A_S}(\Delta^{2k}_X)= N_{A_S}(A_X).$$ \item If~$A_X$ is of spherical type and there is no $Y$ of spherical type strictly containing $X$, then $$QZ_{A_S}(A_X) = QZ(A_X) \textrm{ and } N_{A_S}(A_X) = A_{X}.$$ \item If~$A_X$ is irreducible and not of spherical type, then $$QZ_{A_S}(A_X) = A_{X^\perp} \textrm{ and } N_{A_S}(A_X) = A_{X\cup X^\perp}.$$ \end{enumerate} \item\cite{God,God_pjm,God4} If $A_S$ is of spherical type, of FC type, of large type or of 2-dimensional type, then $A_S$ has Property~$(\circledast)$. \end{enumerate} \end{prp} \begin{proof} (i) Conjecture~$(\circledast)$ implies that $N_{A_S}(A_X) = A_X\cdot QZ_{A_S}(A_X)$ and that $QZ_{A_S}(A_X)$ is the subgroup of $A_S$ generated by the set of positive $X$-ribbons-$X$ (see~\cite{God4}). If~$A_X$ is irreducible and not of spherical type, then the set of elementary positive $X$-ribbons is equal to $X^\perp$. Moreover, all the elements of $X^\perp$ are $X$-ribbons-$X$. So $QZ_{A_S}(A_X) = A_{X^\perp}$ and Point~(c) holds. Assume $A_X$ is of spherical type. Fix a positive integer $k$. If $g$ lies in $Z_{A_S}(\Delta^{2k}_X)$, then in particular $g^{-1}\Delta^{2k}_Xg$ belongs to $A_X$. Property~$(\circledast)$ imposes that $g$ belongs to $A_X\cdot QZ_{A_S}(A_X)$, that is, to $N_{A_S}(A_X)$. Conversely, $A_X\cdot QZ_{A_S}(A_X)$ is included in $Z_{A_S}(\Delta^{2k}_X)$ because both $A_X$ and $QZ_{A_S}(A_X)$ centralize the element $\Delta_X^2$, which lies in the center of $A_X$. So Point~(a) holds. Finally, if there is no $Y$ of spherical type strictly containing $X$, then the elementary positive ribbons $d_{X,t}$ are the elements~$\Delta_{X(t)}$ with $t$ in $X$ (see Definition~\ref{defribb}). It follows that $QZ_{A_S}(A_X)$ is included in $A_X$ and is, therefore, equal to $QZ(A_X)$.
Since $N_{A_S}(A_X) = A_X\cdot QZ_{A_S}(A_X)$, we deduce that $N_{A_S}(A_X) = A_X$. Hence Point~(b) holds. \end{proof} In the sequel, we first extend Conjecture~\ref{conjintro} to the context of non-irreducible parabolic subgroups (see Conjecture~\ref{conjintrogener}). Then we prove that Conjecture~\ref{conjintrogener} holds for any Artin-Tits group which possesses the property $(\circledast)$ (see Theorem~\ref{propfin2}). Considering Proposition~\ref{thmlastsect}(ii), this will prove Theorem~\ref{theointro2}. \begin{conj}\label{conjintrogener} Let $A_S$ be an irreducible Artin-Tits group and $X$ be included in $S$. Let $X_s$ be the union of the irreducible components of $X$ that are of spherical type, and $X_{as}$ be the union of the other irreducible components of $X$. Then, $$\DZ_{A_S}(A_X) = Z_{A_S}(Z_{A_{X^\perp_{as}}}(A_{X_s})).$$ \begin{enumerate} \item Assume $X_s$ is empty. Then $$\DZ_{A_S}(A_X)= Z_{A_S}(A_{X^\perp}).$$ \item Assume $A_X$ is of spherical type. Let $A_T$ be the smallest standard parabolic subgroup of $A_S$ that contains $Z_{A_S}(A_{X})$. \begin{enumerate} \item If $T$ is of spherical type, then $$\DZ_{A_S}(A_X) = \DZ_{A_T}(A_{X}).$$ \item If $T$ is not of spherical type, then $$\DZ_{A_S}(A_X)=A_{X}.$$ \end{enumerate} \end{enumerate} \end{conj} \begin{prp} \label{propfin1} Let $A_S$ be an irreducible Artin-Tits group and $X$ be included in $S$. Assume $A_S$ has the property~$(\circledast)$ stated in~\cite{God4}. Then Conjecture~\ref{conjintrogener} implies Conjecture~\ref{conjintro}. \end{prp} \begin{proof} Consider the notations of Conjecture~\ref{conjintro}. Assume $X$ is irreducible. If~$X$ is not of spherical type, then $X = X_{as}$ and $X_s$ is empty. By Proposition~\ref{thmlastsect}, $Z_{A_S}(A_X) \subseteq QZ_{A_S}(A_X) = A_{X^\perp}\subseteq Z_{A_S}(A_X)$. Therefore $A_{X^\perp}= Z_{A_S}(A_X)$ and $T = X^\perp$. Thus, Conjecture~\ref{conjintrogener}(i) implies Conjecture~\ref{conjintro}(i).
In the case where~$X$ is of spherical type, there is nothing to prove. \end{proof} \begin{thm} \label{propfin2} Let $A_S$ be an irreducible Artin-Tits group. If $A_S$ has the property $(\circledast)$ stated in~\cite{God4}, then Conjecture~\ref{conjintrogener} holds. \end{thm} In order to prove Theorem~\ref{propfin2}, we need some preliminary results. In the sequel, we assume $A_S$ is an irreducible Artin-Tits group that has the property $(\circledast)$ stated in~\cite{God4}. We fix a standard parabolic subgroup $A_X$ with $X\subseteq S$. By $X_s$ we denote the union of the irreducible components of $X$ that are of spherical type. By $X_{as}$ we denote the union of the other irreducible components of $X$. By definition, $X_s$ is included in $X_{as}^\perp$. We set $$\Upsilon = \{Y\subseteq S\mid X_s\subseteq Y \textrm{ and } A_Y \textrm{ is of spherical type}\}.$$ Let $A_T$ be the smallest standard parabolic subgroup of $A_S$ that contains $Z_{A_{S}}(A_{X_s})$. \begin{lm}\label{lemfin1} $Z_{A_S}(A_X) = Z_{A_{X_{as}^\perp}}(A_{X_s})$. \end{lm} \begin{proof} We have $A_X = A_{X_s}\times A_{X_{as}}$. Therefore $Z_{A_S}(A_X) = Z_{A_S}(A_{X_s}) \cap Z_{A_S}(A_{X_{as}})$. Let $X_1,\dots, X_k$ be the irreducible components of $X_{as}$. Then $X_{as}^\perp = X^\perp_{1}\cap\cdots\cap X^\perp_{k}$. On the other hand, $A_{X_{as}} = A_{X_1}\times \cdots \times A_{X_k}$ and $Z_{A_S}(A_{X_{as}}) = Z_{A_S}(A_{X_1})\cap\cdots \cap Z_{A_S}(A_{X_k})$. By Proposition~\ref{thmlastsect}, $Z_{A_S}(A_{X_{i}}) = QZ_{A_S}(A_{X_{i}}) = A_{X^\perp_{i}}$ for each component $X_i$. Therefore $Z_{A_S}(A_{X_{as}}) = A_{X^\perp_{1}}\cap\cdots\cap A_{X^\perp_{k}} = A_{X^\perp_{1}\cap\cdots\cap X^\perp_{k}} =A_{X_{as}^\perp}$. But $A_{X_s}$ is included in $ A_{X^\perp_{as}}$. Thus, $Z_{A_S}(A_{X_s}) \cap Z_{A_S}(A_{X_{as}}) = Z_{A_{X_{as}^\perp}}(A_{X_s})$. \end{proof} \begin{lm} \label{lemfin2} The set $\Upsilon$ is not empty and all its elements are contained in $T$.
Moreover,~$T$ belongs to $\Upsilon$ if and only if $T$ is of spherical type. In this case, $T$ is the unique maximal element of $\Upsilon$. \end{lm} \begin{proof} $X_s$ belongs to $\Upsilon$, so the latter is not empty. Moreover, $X_s$ is included in $T$. Therefore the latter belongs to $\Upsilon$ if and only if it is of spherical type. Finally, if $Y$ belongs to $\Upsilon$, then $\Delta_Y^2$ belongs to $Z_{A_S}(A_Y)$, and therefore to $Z_{A_S}(A_{X_s})$. Thus, $Y$ is included in $T$. Hence, if $T$ belongs to $\Upsilon$, it is its unique maximal element. \end{proof} \begin{lm}\label{lemfin3} Assume $Y$ is maximal in $\Upsilon$ for the inclusion. Then, $$\DZ_{A_S}(A_{X_s}) \subseteq \DZ_{A_Y}(A_{X_s}).$$ \end{lm} \begin{proof} Assume $g$ belongs to $\DZ_{A_S}(A_{X_s})$. The element $\Delta_Y^2$ lies in $Z(A_Y)$. Since $X_s$ is included in~$Y$, it follows that $\Delta_Y^2$ lies in $Z_{A_S}(A_{X_s})$, and $g\Delta_Y^2g^{-1} = \Delta_Y^2$. By Proposition~\ref{thmlastsect}(i)(a), $g$ belongs to the subgroup $N_{A_S}(A_Y)$. But $Y$ is maximal in $\Upsilon$. By Proposition~\ref{thmlastsect}(i)(b), $N_{A_S}(A_Y) = A_Y$. Thus $\DZ_{A_S}(A_{X_s}) = Z_{A_S}(Z_{A_S}(A_{X_s}))\cap A_Y = Z_{A_Y}(Z_{A_S}(A_{X_s})) \subseteq Z_{A_Y}(Z_{A_Y}(A_{X_s})) = \DZ_{A_Y}(A_{X_s})$. \end{proof} We can now prove Theorem~\ref{propfin2}. \begin{proof}[Proof of Theorem~\ref{propfin2}.] By Lemma~\ref{lemfin1}, we have $Z_{A_S}(A_X) = Z_{A_{X_{as}^\perp}}(A_{X_s})$. It follows that~$\DZ_{A_S}(A_X) = Z_{A_S}(Z_{A_{X_{as}^\perp}}(A_{X_s}))$. When $X_s$ is empty, we have $X_{as} = X$ and $A_{X_s} = \{1\}$. So $Z_{A_{X_{as}^\perp}}(A_{X_s}) = Z_{A_{X^\perp}}(\{1\}) = A_{X^\perp}$. Therefore,~$\DZ_{A_S}(A_X) = Z_{A_S}(A_{X^\perp})$. Assume for the remainder of the proof that $X$ is of spherical type. Assume first that $T$ is of spherical type. By Lemma~\ref{lemfin2}, $T$ is maximal in $\Upsilon$ and, by Lemma~\ref{lemfin3}, $\DZ_{A_S}(A_X) \subseteq \DZ_{A_T}(A_{X})$.
On the other hand, $Z_{A_T}(A_{X}) = Z_{A_S}(A_{X})\cap A_T = Z_{A_S}(A_X)$. We deduce that $\DZ_{A_T}(A_{X}) = Z_{A_T}(Z_{A_S}(A_X)) \subseteq \DZ_{A_S}(A_X)$. Hence, $\DZ_{A_S}(A_X) = \DZ_{A_T}(A_{X})$. Assume finally that $T$ does not lie in~$\Upsilon$. Let $Y$ be maximal in $\Upsilon$. By Lemma~\ref{lemfin3}, we get $\DZ_{A_S}(A_X) \subseteq \DZ_{A_Y}(A_{X})$. If $Y = X$, then $A_{X} \subseteq \DZ_{A_S}(A_{X}) \subseteq \DZ_{A_{X}}(A_{X}) = A_{X}$ and we are done. So, assume $X\subsetneq Y$. The group $A_Y$ is of spherical type. Applying Theorem~\ref{theointro1}, we get that $\DZ_{A_Y}(A_{X}) \subseteq QZ(A_Y)\times A_{X}$. Since $A_{X}$ is included in $\DZ_{A_S}(A_{X})$, the group~$A_{X}$ is equal to $\DZ_{A_S}(A_{X})$ if and only if $\DZ_{A_S}(A_{X})\cap QZ(A_Y) = \{1\}$. Assume this is not the case. Then there exists $k > 0$ such that $\Delta^k_Y$ lies in $\DZ_{A_S}(A_{X})$. We can assume without restriction that $k$ is even. Since $Y$ lies in $\Upsilon$ and $T$ does not, they are distinct. It follows from the definition of $T$ that there exists $g$ in $Z_{A_S}(A_{X})$ which is not in $A_Y$. But $\Delta^k_Y$ lies in $\DZ_{A_S}(A_{X})$. So we have $\Delta^k_Y g (\Delta^k_Y)^{-1} = g$, or equivalently $g \Delta^k_Y g^{-1} = \Delta^k_Y$. The latter equality imposes that $g$ belongs to $N_{A_S}(A_Y)$ by Proposition~\ref{thmlastsect}(i)(a). But $N_{A_S}(A_Y) = A_Y$ by Proposition~\ref{thmlastsect}(i)(b), a contradiction. Hence, $\DZ_{A_S}(A_{X}) = A_{X}$. \end{proof} \begin{cor} \label{corsecfinale} Let $A_S$ be an irreducible Artin-Tits group of FC type, of large type, or of $2$-dimensional type. Then Conjecture~\ref{conjintrogener} holds. \end{cor} \begin{rmq} In an (irreducible) Artin-Tits group of large type, all standard parabolic subgroups are irreducible. So, Corollary~\ref{corsecfinale} provides a complete description of the double centralizer of any standard parabolic subgroup.
However, for the other non-spherical types, in the case where both $X_s$ and $X_{as}$ are non-empty, the answer is not completely satisfactory. Indeed, the double centralizer is not as simple as in the cases where either $X_s$ or $X_{as}$ is empty. For instance, in the following example, we have $Z_{A_S}(A_X) = Z(A_{X_s})$ and $\DZ_{A_S}(A_X) = N_{A_S}(A_{X_s}) = QZ_{A_S}(A_{X_s}) \cdot A_{X_s}$. \end{rmq} \begin{center} \begin{figure}[!h] \begin{tikzpicture}[scale=0.75] \draw[very thick,fill=black] (3.5,-13.1) circle (.1cm); \draw[very thick,fill=black] (5,-13.1) circle (.1cm); \draw[very thick,fill=black] (5,-11.6) circle (.1cm); \draw[very thick,fill=black] (6.5,-13.1) circle (.1cm); \draw[very thick,fill=black] (8,-13.1) circle (.1cm); \draw[very thick,fill=black] (9.5,-13.1) circle (.1cm); \draw[very thick,fill=black] (11,-13.1) circle (.1cm); \draw[very thick] (3.5,-13.1) -- +(7.5,0); \draw[very thick] (3.5,-13.1) -- +(1.5,1.5); \draw[very thick] (5,-11.6) -- +(0,-1.5); \draw (3.5,-13.5) node {$s_2$}; \draw (5,-13.5) node {$s_3$}; \draw (6.5,-12.7) node {$s_4$}; \draw (8,-13.5) node {$s_5$}; \draw (9.5,-13.5) node {$s_6$}; \draw (11,-13.5) node {$s_7$}; \draw (5,-11.2) node {$s_1$}; \draw (11.8,-12.2) node {$X_s$}; \draw (9.5,-13.3) ellipse (2cm and .8cm); \draw[-stealth] (11.4,-12.4) -- +(-.45,-.25); \draw (2.2,-10.7) node {$X_{as}$}; \draw (4.25,-12.7) ellipse (1.25cm and 2.5cm); \draw[-stealth] (2.7,-10.9) -- +(.45,-.25); \end{tikzpicture} \end{figure} \end{center}
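To make the preceding remark concrete, here is a sketch of the computation for the pictured diagram, assuming the circled sets are read as $X_{as}=\{s_1,s_2,s_3\}$ (spanning a cycle, so that $A_{X_{as}}$ is irreducible and not of spherical type) and $X_s=\{s_5,s_6,s_7\}$ (of type $A_3$). Since $s_4$ is the only generator outside $X$ whose vertex is adjacent to $X_{as}$, we get $X_{as}^\perp=\{s_5,s_6,s_7\}=X_s$, so Lemma~\ref{lemfin1} yields
$$Z_{A_S}(A_X)=Z_{A_{X_{as}^\perp}}(A_{X_s})=Z(A_{X_s})=\langle\Delta^2_{X_s}\rangle,$$
where the last equality uses the standard fact that the center of an irreducible Artin-Tits group of type $A_3$ is generated by $\Delta^2$. Consequently, by Proposition~\ref{thmlastsect}(i)(a),
$$\DZ_{A_S}(A_X)=Z_{A_S}(\Delta^2_{X_s})=N_{A_S}(A_{X_s})=QZ_{A_S}(A_{X_s})\cdot A_{X_s}.$$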
\section*{Introduction} For a given manifold $M$, the study of the submanifolds of $M$, from the point of view of infinite-dimensional geometry, leads very naturally to considering the set of compact, connected and oriented submanifolds of $M$ as a Fr\'echet manifold. This set is called the nonlinear Grassmannian and is denoted by $Gr_{k}(M)$ ($k$ being the dimension of the submanifolds considered)\,. There are essentially two ways to approach $Gr_{k}(M)\,:$ \begin{center} \begin{maliste} \item the first, which is amply developed in \cite{Kriegl-Michor}, consists in taking $\Sigma\in Gr_{k}(M)$ and considering the space of embeddings $\text{Emb}\,(\Sigma,M)\,.$ One can then identify $Gr_{k}(M)$ (or rather the union of certain connected components of $Gr_{k}(M)$) as the quotient of $\text{Emb}\,(\Sigma,M)$ by the natural action on $\text{Emb}\,(\Sigma,M)$ of the group $\text{Diff}^{+}(\Sigma)$ of orientation-preserving diffeomorphisms of $\Sigma\,.$ By the very construction, one thus obtains a smooth principal bundle structure $\text{Diff}^{+}(\Sigma)\hookrightarrow \text{Emb}\,(\Sigma,M)\rightarrow \text{Emb}\,(\Sigma,M)/\text{Diff}^{+}(\Sigma)$ (see Theorem 44.1, page 474 of \cite{Kriegl-Michor}).\\ \item The second approach, more intuitive, consists in modeling $Gr_{k}(M)$ directly on spaces of sections $\Gamma_{C^{\infty}}(\Sigma,N\Sigma)$ where $\Sigma\in Gr_{k}(M)$ and $N\Sigma$ denotes the normal bundle of $\Sigma$ in $M\,.$ This approach is outlined in \cite{Hamilton}\,. \end{maliste} \end{center} The first part of this article is devoted to describing very explicitly the construction sketched by Hamilton in the category of tame Fr\'echet manifolds (see \cite{Hamilton}).
The second part links the two points of view just mentioned. In particular, we prove there a theorem analogous to Theorem 44.1 of \cite{Kriegl-Michor}\,, that is, we show that the space of embeddings $\text{Emb}\,(\Sigma,M)$ is a principal bundle, with structure group $\text{Diff}^{+}(\Sigma)$ and whose base is a union of certain connected components of $Gr_{k}(M)\,.$ Finally, in the third part, we show that the connected components of $Gr_{k}(M)$ are homogeneous under the natural (and smooth) action of $\text{Diff}^{\,0}(M),$ the connected component of the identity in the group of diffeomorphisms of $M\,.$ This homogeneity is already mentioned, but assumed without proof, in \cite{Ismagilov}\,. On the other hand, if one adopts the point of view of \cite{Kriegl-Michor}, which defines the nonlinear Grassmannian as the quotient of $\text{Emb}\,(\Sigma,M)$ by the action of $\text{Diff}^{+}(\Sigma)\,,$ the homogeneity of the connected components of $\text{Emb}\,(\Sigma,M)$ under $\text{Diff}^{\,0}(M)$ (a homogeneity which is a direct consequence of a classical result of differential topology on the extension of isotopies to diffeotopies, see \cite{Hirsch}, Theorem 1.3, page 180) automatically implies the homogeneity of the corresponding connected components of the quotient $\text{Emb}\,(\Sigma,M)/\text{Diff}^{+}(\Sigma)\,.$ This is the approach used in \cite{Haller-Vizman}\,. Here again, and in contrast with the latter, we use the approach of \cite{Hamilton} and regard $Gr_{k}(M)$ as a collection of submanifolds whose differentiable structure is the one explained in Section \ref{premiere partie}.
In doing so, some additional work is needed to show the homogeneity of the components of $Gr_{k}(M)$\,.\\\\ Since the notion of differential calculus on a Fr\'echet space is not ``canonical'', we include a very short appendix dealing with the two most common notions of differential calculus on a Fr\'echet space (the one developed for instance in \cite{Hamilton}, and the one based on the notion of smooth curves, which is developed in \cite{Kriegl-Michor}). These two notions being identical (on a Fr\'echet space), we shall use either one indifferently in this text. \section{The manifold structure of $Gr_{k}(M)$}\label{premiere partie} To endow $Gr_{k}(M)$ with a manifold structure, we shall explicitly construct an atlas on $Gr_{k}(M)\,.$ To do so, take $\Sigma\in Gr_{k}(M)$ and introduce the following notation and objects: \begin{description} \item[$\bullet$] $\Theta_{\Sigma}$ an open set containing the zero section of the normal bundle $N\Sigma$ of $\Sigma$ in $M\,,$ fiberwise convex and such that the map $$ \tau_{\Sigma}\,:\,\Theta_{\Sigma}\rightarrow M\,,v\in N\Sigma_{x}\mapsto\text{exp}_{x}(v) $$ is a diffeomorphism from $\Theta_{\Sigma}$ onto its image;\\ \item[$\bullet$] $\mathcal{U}_{\Sigma}:=\{s\in \Gamma_{C^{\infty}}(\Sigma, N\Sigma)\,\vert\,s(\Sigma)\subseteq\Theta_{\Sigma}\}\,;$\\ \item[$\bullet$] $\varphi_{\Sigma}\,:\,\mathcal{U}_{\Sigma}\rightarrow Gr_{k}(M)\,,$ the map defined by $\varphi_{\Sigma}(s)= \tau_{\Sigma}\big(s(\Sigma)\big)\,,$ the latter submanifold being endowed with the orientation induced by the diffeomorphism $\Sigma\rightarrow \tau_{\Sigma}\big(s(\Sigma)\big), x\mapsto \tau_{\Sigma}\big(s(x)\big)\,.$ \end{description} Let us show, by means of the following two lemmas, that $\{(\varphi_{\Sigma}(\mathcal{U}_{\Sigma}),\varphi_{\Sigma}^{-1})\,\vert\,\Sigma\in Gr_{k}(M)\}$ is a differentiable atlas of $Gr_{k}(M)$.
\begin{lemme}\label{atlas} For $\Sigma_{1},\Sigma_{2}\in Gr_{k}(M)$, the set $\varphi_{\Sigma_{1}}^{-1}\big(\varphi_{\Sigma_{1}}(\mathcal{U}_{\Sigma_{1}})\cap\varphi_{\Sigma_{2}}(\mathcal{U}_{\Sigma_{2}})\big)$ is an open subset of $\Gamma_{C^{\infty}}(\Sigma_{1}, N\Sigma_{1})\,.$ \end{lemme} \textbf{Proof.} We argue by contradiction, assuming that $\varphi_{\Sigma_{1}}^{-1}\big(\varphi_{\Sigma_{1}}(\mathcal{U}_{\Sigma_{1}})\cap\varphi_{\Sigma_{2}}(\mathcal{U}_{\Sigma_{2}})\big)$ is not an open subset of the metric space $\Gamma_{C^{\infty}}(\Sigma_{1}, N\Sigma_{1})\,.$ We can then find a section $s\in \varphi_{\Sigma_{1}}^{-1}\big(\varphi_{\Sigma_{1}}(\mathcal{U}_{\Sigma_{1}})\cap\varphi_{\Sigma_{2}}(\mathcal{U}_{\Sigma_{2}})\big)$ and a sequence of sections $(s_{n})_{n\in \mathbb{N}}$ such that $s_{n}\rightarrow s$ in $\Gamma_{C^{\infty}}(\Sigma_{1}, N\Sigma_{1})$ and $s_{n}\not\in \varphi_{\Sigma_{1}}^{-1}\big(\varphi_{\Sigma_{1}}(\mathcal{U}_{\Sigma_{1}})\cap\varphi_{\Sigma_{2}}(\mathcal{U}_{\Sigma_{2}})\big)$ for all $n\in \mathbb{N}\,.$ \\Take also an open neighborhood $U$ of $\Sigma:=\varphi_{\Sigma_{1}}(s)$ contained in $\tau_{\Sigma_{1}}(\Theta_{\Sigma_{1}})\cap\tau_{\Sigma_{2}}(\Theta_{\Sigma_{2}})\,.$ The open set $U$ can be viewed simultaneously as a (nonlinear) fibration over $\Sigma_{1}$ and over $\Sigma_{2}$, endowed with the projections $\pi_{1}$ and $\pi_{2}\,:$ $$ \pi_{i}\,:\,U\rightarrow\Sigma_{i}\,. $$ Observe that $\pi_{i}\vert_{\Sigma}\::\,\Sigma\rightarrow\Sigma_{i}$ is a diffeomorphism and that for $x\in U$ we have: $$ \pi_{i}(x)=\pi_{\text{N}\Sigma_{i}}\Big(\tau_{\Sigma_{i}}^{-1}(x)\Big)\,, $$ where $\pi_{\text{N}\Sigma_{i}}\,:\,N\Sigma_{i}\rightarrow\Sigma_{i}$ is the canonical projection. We shall show that the map $$ \Sigma_{1}\rightarrow\Sigma_{2},\,m\mapsto(\pi_{2}\circ\tau_{\Sigma_{1}}\circ s_{n})(m) $$ is a diffeomorphism for $n$ large enough.
First note that, since $s_{n}\rightarrow s$ in $\Gamma_{C^{\infty}}(\Sigma_{1}, N\Sigma_{1})$, we have $\tau_{\Sigma_{1}}\big(s_{n}(\Sigma_{1})\big)\subseteq U$ for $n$ large enough, so the above map makes sense. We shall work locally; take $x\in \Sigma_{1}\,.$ Since $s_{n}\rightarrow s$ in $\Gamma_{C^{\infty}}(\Sigma_{1}, N\Sigma_{1})$, there exists $y\in \Sigma_{2}$ such that $(\pi_{2}\circ\tau_{\Sigma_{1}}\circ s_{n})(x)\rightarrow y\,.$ Then take trivializing charts: \begin{center} $ \begin{diagram} \node{\pi_{N\Sigma_{1}}^{-1}(W)} \arrow[2]{e,t}{\displaystyle\Psi_{W}} \arrow{se,b}{\displaystyle\pi_{N\Sigma_{1}}} \node[2]{W\times\mathbb{R}^{n-k}} \arrow{sw,b}{\displaystyle pr_{1}}\\ \node[2]{W} \end{diagram} $ \:\:\:\: $ \begin{diagram} \node{\pi_{N\Sigma_{2}}^{-1}(\Omega)} \arrow[2]{e,t}{\displaystyle\Psi_{\Omega}} \arrow{se,b}{\displaystyle\pi_{N\Sigma_{2}}} \node[2]{\Omega\times\mathbb{R}^{n-k}} \arrow{sw,b}{\displaystyle pr_{1}}\\ \node[2]{\Omega} \end{diagram} $ \end{center} with $x\in W\subseteq\Sigma_{1}\,,y\in \Omega\subseteq\Sigma_{2}$ and such that $(\pi_{2}\circ\tau_{\Sigma_{1}}\circ s_{n})(W)\subseteq \Omega$ from some rank on.\\ We then have: $$ (\pi_{2}\circ\tau_{\Sigma_{1}}\circ s_{n})(x)=\Big(\underbrace{\pi_{N\Sigma_{2}}\circ\Psi_{\Omega}^{-1}}_{(z,w)\mapsto z}\circ\underbrace{\Psi_{\Omega}\circ\tau_{\Sigma_{2}}^{-1}\circ\tau_{\Sigma_{1}}\circ\Psi^{-1}_{W}}_{(y,v)\mapsto(\tau^{1}(y,v),\tau^{2}(y,v))}\circ\underbrace{\Psi_{W}\circ s_{n}}_{x\mapsto(x,\tilde{s}_{n}(x))}\Big)(x)\,.
$$ This map has differential: \begin{eqnarray*} \left( Id\,\,0 \right) \left( \begin{array}{cc} \dfrac{\partial\,\tau^{1}}{\partial\,x}& \dfrac{\partial\,\tau^{1}}{\partial\,y} \\ \dfrac{\partial\,\tau^{2}}{\partial\,x} & \dfrac{\partial\,\tau^{2}}{\partial\,y} \end{array} \right) \left( \begin{array}{c} Id\\ (\tilde{s}_{n})_{*_{x}} \end{array} \right)&=& \left( Id\,\,0 \right) \left( \begin{array}{cc} \dfrac{\partial\,\tau^{1}}{\partial\,x}+ \dfrac{\partial\,\tau^{1}}{\partial\,y}(\tilde{s}_{n})_{*_{x}} \\ \dfrac{\partial\,\tau^{2}}{\partial\,x} + \dfrac{\partial\,\tau^{2}}{\partial\,y}(\tilde{s}_{n})_{*_{x}} \end{array} \right)\\ &=&\dfrac{\partial\,\tau^{1}}{\partial\,x}+ \dfrac{\partial\,\tau^{1}}{\partial\,y}(\tilde{s}_{n})_{*_{x}}\\ &\rightarrow&\dfrac{\partial\,\tau^{1}}{\partial\,x}+ \dfrac{\partial\,\tau^{1}}{\partial\,y}(\tilde{s})_{*_{x}}\,. \end{eqnarray*} The arrow above simply means that we have convergence, in a space of matrices, to the matrix $\frac{\partial\,\tau^{1}}{\partial\,x}+ \frac{\partial\,\tau^{1}}{\partial\,y}(\tilde{s})_{*_{x}}$, which is invertible since this matrix represents the differential of the diffeomorphism $\big(\pi_{2}\vert_{\Sigma}\big)\circ\big(\pi_{1}\vert_{\Sigma}\big)^{-1}\,:\,\Sigma_{1}\rightarrow\Sigma_{2}\,.$ We deduce that, from some rank on, the map $\Sigma_{1}\rightarrow\Sigma_{2},\,m\mapsto(\pi_{2}\circ\tau_{\Sigma_{1}}\circ s_{n})(m)$ is everywhere a local diffeomorphism. To show that it is a global diffeomorphism, it suffices to show that this map is injective.
If this were never the case, then for every $n\in \mathbb{N}$ one could find $x_{n},y_{n}\in \Sigma_{1},\, x_{n}\neq y_{n}$ such that: $$ (\pi_{2}\circ\tau_{\Sigma_{1}}\circ s_{n})(x_{n})=(\pi_{2}\circ\tau_{\Sigma_{1}}\circ s_{n})(y_{n}) $$ for all $n\in \mathbb{N}.$ By compactness, we may assume that $x_{n}\rightarrow x\in \Sigma_{1}$ and $y_{n}\rightarrow y\in \Sigma_{1}\,.$ Using the seminorms defining the topology of $\Gamma_{C^{\infty}}(\Sigma_{1}, N\Sigma_{1})$ (see for instance \cite{dieudonne}, page 236), one easily checks that: $$ s_{n}(x_{n})\rightarrow s(x)\,\,\,\,\,\text{and}\,\,\,\,\,s_{n}(y_{n})\rightarrow s(y) $$ and hence \begin{eqnarray*} (\pi_{2}\circ\tau_{\Sigma_{1}}\circ s_{n})(x_{n})&=&(\pi_{2}\circ\tau_{\Sigma_{1}}\circ s_{n})(y_{n})\\ \downarrow\:\:\:\:\:\:\:\:\:\:\:\:\:\:&&\:\:\:\:\:\:\:\:\:\:\:\:\:\:\downarrow\\ \big(\pi_{2}\circ\tau_{\Sigma_{1}}\big)\big(s(x)\big)&=&\big(\pi_{2}\circ\tau_{\Sigma_{1}}\big)\big(s(y)\big)\\ \Rightarrow\:\:\:\:\big(\pi_{2}\vert_{\Sigma}\circ\tau_{\Sigma_{1}}\big)\big(s(x)\big)&=&\big(\pi_{2}\vert_{\Sigma}\circ\tau_{\Sigma_{1}}\big)\big(s(y)\big)\\ \Rightarrow\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:s(x)&=&s(y)\\ \Rightarrow\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:x&=&y\,.
\end{eqnarray*} Here we have used the fact that $\pi_{2}\vert_{\Sigma}\::\,\Sigma\rightarrow\Sigma_{2}$ and $\tau_{\Sigma_{1}}$ are diffeomorphisms.\\\\ Moreover, we may assume (see Appendix, Proposition \ref{special curve}) that there exists a smooth curve $\sigma\,:\,\mathbb{R}\rightarrow\mathcal{U}_{\Sigma_{1}}\subseteq\Gamma_{C^{\infty}}(\Sigma_{1}, N\Sigma_{1})$ in $\Gamma_{C^{\infty}}(\Sigma_{1}, N\Sigma_{1})$ such that $\sigma_{\frac{1}{n}}=s_{n}$ and $\sigma_{0}=s\,.$\\ Then consider the following map: $$ \Lambda\,:\,\mathbb{R}\times\Sigma_{1}\rightarrow\mathbb{R}\times\Sigma_{2}\,,\,(t,x)\mapsto\big(t,\,(\pi_{2}\circ\tau_{\Sigma_{1}}\circ \sigma_{t})(x)\big)\,. $$ We have that $$ \Lambda_{*_{(0,x)}}= \left( \begin{array}{cc} \text{Id} & 0 \\ \ast & (\pi_{2}\circ\tau_{\Sigma_{1}}\circ s)_{*_{x}} \end{array} \right) $$ is an isomorphism since the map $\pi_{2}\circ\tau_{\Sigma_{1}}\circ s=\big(\pi_{2}\vert_{\Sigma}\big)\circ\ \big(\pi_{1}\vert_{\Sigma}\big)^{-1}$ is a diffeomorphism. We deduce that $\Lambda$ is a local diffeomorphism at $(0,x)\,.$ But then, in view of the following equivalence: $$ (\pi_{2}\circ\tau_{\Sigma_{1}}\circ s_{n})(x_{n})=(\pi_{2}\circ\tau_{\Sigma_{1}}\circ s_{n})(y_{n})\,\,\,\,\,\,\Leftrightarrow\,\,\,\,\,\,\Lambda\bigg(\dfrac{1}{n},x_{n}\bigg)=\Lambda\bigg(\dfrac{1}{n},y_{n}\bigg)\,, $$ it follows that for $n$ large enough $x_{n}=y_{n}$, as $\Lambda$ becomes injective in a neighborhood of $(0,x)=(0,y)$, whence a contradiction.
We thus deduce that for $n$ large enough, $\pi_{2}\circ\tau_{\Sigma_{1}}\circ s_{n}\::\:\Sigma_{1}\rightarrow\Sigma_{2}$ is a diffeomorphism.\\ This last result implies that $\varphi_{\Sigma_{1}}^{-1}\big(\varphi_{\Sigma_{1}}(\mathcal{U}_{\Sigma_{1}})\cap\varphi_{\Sigma_{2}}(\mathcal{U}_{\Sigma_{2}})\big)$ is an open subset of $\Gamma_{C^{\infty}}(\Sigma_{1}, N\Sigma_{1})$, since \begin{eqnarray*} &&\varphi_{\Sigma_{1}}(s_{n})=\varphi_{\Sigma_{2}}\big(\tau_{\Sigma_{2}}^{-1}\circ\tau_{\Sigma_{1}}\circ s_{n}\circ(\pi_{2}\circ\tau_{\Sigma_{1}}\circ s_{n})^{-1}\big)\in\varphi_{\Sigma_{2}}\big(\mathcal{U}_{\Sigma_{2}}\big)\\ &\Rightarrow&s_{n}\in \varphi_{\Sigma_{1}}^{-1}\big(\varphi_{\Sigma_{1}}(\mathcal{U}_{\Sigma_{1}})\cap\varphi_{\Sigma_{2}}(\mathcal{U}_{\Sigma_{2}})\big) \end{eqnarray*} which contradicts our hypothesis.$\hfill\square$\\ \begin{lemme} The map $$ \varphi_{\Sigma_{2}}^{-1}\circ\varphi_{\Sigma_{1}}\::\:\varphi_{\Sigma_{1}}^{-1}\big(\varphi_{\Sigma_{1}}(\mathcal{U}_{\Sigma_{1}})\cap\varphi_{\Sigma_{2}}(\mathcal{U}_{\Sigma_{2}})\big)\rightarrow\varphi_{\Sigma_{2}}^{-1}\big(\varphi_{\Sigma_{1}}(\mathcal{U}_{\Sigma_{1}})\cap\varphi_{\Sigma_{2}}(\mathcal{U}_{\Sigma_{2}})\big) $$ is smooth tame (see for instance \cite{Hamilton}). \end{lemme} \textbf{Proof.} Take $\Sigma_{1},\Sigma_{2},\Sigma:=\varphi_{\Sigma_{1}}(s)$ and $U$ as in Lemma \ref{atlas}\,.
We shall show that the above map is smooth tame on a neighborhood of $s$ in $\Gamma_{C^{\infty}}(\Sigma_{1}, N\Sigma_{1})\,.$ In fact, we have already seen that there exists a neighborhood $\mathcal{W}$ of $s$ in $\Gamma_{C^{\infty}}(\Sigma_{1}, N\Sigma_{1})$ such that the map $\sigma\in \mathcal{W} \mapsto \pi_{2}\circ\tau_{\Sigma_{1}}\circ \sigma\in C^{\infty}(\Sigma_{1},\Sigma_{2})$ takes values in $\text{Diff}(\Sigma_{1},\Sigma_{2})$ and thus defines a smooth tame map from $\mathcal{W}\subseteq \Gamma_{C^{\infty}}(\Sigma_{1}, N\Sigma_{1})$ to $\text{Diff}(\Sigma_{1},\Sigma_{2})\,.$ It follows that the map $\mathcal{W}\rightarrow \Gamma_{C^{\infty}}(\Sigma_{2}, N\Sigma_{2}),\,\sigma\mapsto \tau_{\Sigma_{2}}^{-1}\circ\tau_{\Sigma_{1}}\circ\sigma\circ(\pi_{2}\circ\tau_{\Sigma_{1}}\circ \sigma)^{-1}$ is well defined and smooth tame, since inversion and composition are smooth tame maps in the Fr\'echet context. We deduce that the map under consideration is indeed smooth tame.$\hfill\square$\\ $\text{}$\\\ Thus $\{(\varphi_{\Sigma}(\mathcal{U}_{\Sigma}),\varphi_{\Sigma}^{-1})\,\vert\,\Sigma\in Gr_{k}(M)\}$ is a differentiable atlas of $Gr_{k}(M)$ and canonically induces a topology $\mathcal{T}$ on $Gr_{k}(M)\,.$ Let us show that this topology is Hausdorff. \begin{lemme} The topology $\mathcal{T}$ of $Gr_{k}(M)$ is Hausdorff. \end{lemme} \textbf{Proof.} Take $\Sigma_{1},\Sigma_{2}\in Gr_{k}(M)$ such that $\Sigma_{1}\neq\Sigma_{2}\,.$ If these two oriented submanifolds coincide as submanifolds but carry different orientations, then $\varphi_{\Sigma_{1}}(\mathcal{U}_{\Sigma_{1}})\cap\varphi_{\Sigma_{2}}(\mathcal{U}_{\Sigma_{2}})=\emptyset\,.$ Indeed, suppose that $\Sigma$ belongs to this intersection.
Then $\Sigma$ would be endowed with the orientation induced by the diffeomorphism $\Sigma_{1}\rightarrow\Sigma,\,x\mapsto\tau_{\Sigma_{1}}(s(x))$ where $s$ is a certain section of the normal bundle of $\Sigma_{1}\,.$ Moreover, $\Sigma$ would also be endowed with the orientation induced by the diffeomorphism $\Sigma_{2}\rightarrow\Sigma,\,x\mapsto\tau_{\Sigma_{2}}(s'(x))$ for a certain section $s'$ of the normal bundle of $\Sigma_{2}$, with $\Sigma_{1}$ and $\Sigma_{2}$ being the same submanifold endowed with different orientations and $\tau_{\Sigma_{1}}=\tau_{\Sigma_{2}}$, whence the contradiction.\\\\ If $\Sigma_{1}\neq\Sigma_{2}$ as submanifolds, there exist $x_{1}\in \Sigma_{1}{\setminus}\Sigma_{2}$ and $\varepsilon>0$ such that $$ \exp_{x_{1}}(\text{B}(0,\varepsilon))\cap \tau_{\Sigma_{2}}(\Theta_{\Sigma_{2}})=\emptyset\,, $$ where $\text{B}(0,\varepsilon)$ denotes the ball of radius $\varepsilon$ in $\text{T}_{x_{1}}M$ and $\Theta_{\Sigma_{2}}\subseteq \{v\in N\Sigma_{2}\,\vert\,\Vert v\Vert<\varepsilon\}\,.$ Consequently, if one chooses $\Theta_{\Sigma_{1}}$ such that $\Theta_{\Sigma_{1}}\subseteq\{v\in N\Sigma_{1}\,\vert\,\Vert v\Vert<\varepsilon\}$, then one can check that $\varphi_{\Sigma_{1}}(\mathcal{U}_{\Sigma_{1}})\cap\varphi_{\Sigma_{2}}(\mathcal{U}_{\Sigma_{2}})=\emptyset\,.\hfill\square$\\ $\text{}$\\ In summary, \begin{proposition}[\textit{Hamilton},\,\cite{Hamilton}] $\text{ }$\:\:The set $Gr_{k}(M)$ is a smooth tame Fr\'echet manifold and, for $\Sigma\in Gr_{k}(M)$, there is a canonical isomorphism: $$ \text{T}_{\Sigma}Gr_{k}(M)\cong\Gamma_{C^{\infty}}(\Sigma, N\Sigma)\,. $$ \end{proposition} \begin{remarque} Just as the manifold structure of $C^{\infty}(N,M)$ does not depend on the metric used on $M$, the manifold structure of $Gr_{k}(M)$ does not depend on the metric $g$ either. \end{remarque} \begin{remarque} Let ${Gr}_{k}^{\vee}(M)$ denote the set of connected, compact, orientable $k$-dimensional submanifolds of $M$.
Then, in exactly the same way as for $Gr_{k}(M)$, one shows that this set is endowed with a tame manifold structure, and it is clear that $Gr_{k}(M)$ is a two-sheeted covering of ${Gr}_{k}^{\vee}(M)\,.$ \end{remarque} \section{The space of embeddings into $M$ as a principal bundle over the nonlinear Grassmannian} Take $\Sigma\in Gr_{k}(M)$ and denote by $\text{Emb}(\Sigma,M)$ the space of embeddings of $\Sigma$ into $M\,.$ Denote also by $$ p\,:\,\text{Emb}(\Sigma,M)\rightarrow Gr_{k}(M)\,, $$ the map defined for $f\in \text{Emb}(\Sigma,M)$ by $p(f):=f(\Sigma)$, the latter submanifold of $M$ being endowed with the orientation naturally induced by the diffeomorphism $f\,:\,\Sigma\rightarrow f(\Sigma)\,.$ Thanks to the following proposition and its corollaries, we shall show that $\text{Emb}(\Sigma,M)$ is a smooth tame Fr\'echet manifold and that the map $p\,:\,\text{Emb}(\Sigma,M)\rightarrow Gr_{k}(M)$ is smooth. For this we shall use the convenient calculus of Kriegl and Michor (see \cite{Kriegl-Michor}), since, between Fr\'echet manifolds, a map is smooth in the sense of Kriegl-Michor if and only if it is smooth in the sense of Hamilton (see our brief appendix)\,.
This will then lead us to show that $\text{Emb}(\Sigma,M)$ is the total space of a principal bundle whose base is the union of certain connected components of $Gr_{k}(M)\,.$ \begin{proposition}\label{la clef} Let $E\overset{\pi}{\rightarrow}M$ be a vector bundle of finite rank over $M$ and let $f\in C^{\infty}\big((-\varepsilon,\varepsilon)\times M, E\big)$ be a map such that $f_{0}\,:\,M\rightarrow E,\,x\mapsto f(0,x)$ is the zero section of $E\,.$ Then there exist $\eta>0$ and a smooth path $\varphi\,:\,(-\eta,\eta)\rightarrow \text{Diff}(M)$ in $\text{Diff}(M)$ such that: \begin{center} \begin{description} \item[$(i)$] $f_{t}\circ \varphi_{t}\in \Gamma_{C^{\infty}}(M,E)$ for all $t\in (-\eta,\eta)$ \:\:\:(here $f_{t}(x):=f(t,x)$)\,;\\ \item[$(ii)$] $\varphi_{0}=Id\,.$ \end{description} \end{center} \end{proposition} \textbf{Proof.} Set $\psi:\,(-\varepsilon,\,\varepsilon)\times M\rightarrow M,\,(t,x)\mapsto \pi\big( f(t,x)\big)$ and $\psi^{\vee}\,:\,(-\varepsilon,\,\varepsilon)\rightarrow C^{\infty}(M,M),\,t\mapsto\{M\ni x\mapsto \pi\big( f(t,x)\big)\in M\}\,.$ Since $\psi=\pi\circ f\,,$ the map $\psi$ is smooth, which means exactly that $\psi^{\vee}$ is a smooth curve in $C^{\infty}(M,M)\,$ (see Appendix, Proposition \ref{caracterisation courbe lisse}).
Now, since the Lie group $\text{Diff}\,(M)$ is open in $C^{\infty}(M,M)$ (see \cite{Kriegl-Michor}, Theorem 43.1), and since $\psi^{\vee}(0)=Id\,,$ we deduce that there exists $\eta>0$ such that the restriction of $\psi^{\vee}$ to $(-\eta,\,\eta)$ is a smooth curve in $\text{Diff}\,(M)\,.$\\\\ Consider then the smooth path $\varphi\,:\,(-\eta,\,\eta)\rightarrow \text{Diff}\,(M),\,t\mapsto\big(\psi^{\vee}(t)\big)^{-1}\,.$ For $x=\big(\psi^{\vee}(t)\big)(y)\in M$, we observe that: $$ \pi\Big((f_{t}\circ\varphi_{t})(x)\Big)=\pi\Big(f_{t}\Big((\psi_{t}^{\vee})^{-1}(\psi_{t}^{\vee}(y))\Big)\Big)=\pi\Big(f_{t}(y)\Big)=\psi^{\vee}_{t}(y)=x\,. $$ Hence $\pi\circ(f_{t}\circ\varphi_{t})=Id$, which means that $f_{t}\circ\varphi_{t}$ is a section of $E\,.\\\text{}\hfill\square$\\\\ Note that, as a corollary of this proposition, we recover the following classical result (see for instance \cite{Hirsch}, Theorem 1.4, page 37)\,: \begin{corollaire} The set $\text{Emb}(\Sigma,M)$ is open in $C^{\infty}(\Sigma,M)\,.$ In particular, $\text{Emb}(\Sigma,M)$ is naturally a tame smooth Fr\'echet manifold. \end{corollaire} \textbf{Proof.} Suppose that $\text{Emb}(\Sigma,M)$ is not open in $C^{\infty}(\Sigma,M)\,.$ We can then find a sequence $(f_{n})_{n\in\mathbb{N}}$ in $C^{\infty}(\Sigma,M){\setminus}\text{Emb}(\Sigma,M)$ such that $f_{n}\rightarrow f$ for some $f\in \text{Emb}(\Sigma,M)\,.$ Let $(\mathcal{U}_{f},\varphi_{f})$ be a chart of $C^{\infty}(\Sigma,M)$ centered at $f$ and such that $\varphi_{f}(\mathcal{U}_{f})$ is convex.
Recall that the chart $(\mathcal{U}_{f},\varphi_{f})$ can be constructed by taking for $\varphi_{f}(\mathcal{U}_{f})$ a sufficiently small neighborhood of $0$ in the space of sections $\Gamma_{C^{\infty}}(\Sigma,\,f^{*}TM)$, and for $\varphi_{f}^{-1}:\,\varphi_{f}(\mathcal{U}_{f})\rightarrow \mathcal{U}_{f}\subseteq C^{\infty}(\Sigma,M)\,,$ the map defined by $\varphi^{-1}_{f}(X)(x):=\text{exp}_{f(x)}\,(X_{x})$ for $X\in \Gamma_{C^{\infty}}(\Sigma,\,f^{*}TM)$ and $x\in \Sigma\,.$ Since $f_{n}\rightarrow f$, we may assume that $f_{n}\in \mathcal{U}_{f}$ for all $n\in\mathbb{N}$, and thus consider the sequence of sections $\big(\varphi_{f}(f_{n})\big)_{n\in \mathbb{N}}$ in $\varphi_{f}(\mathcal{U}_{f})\subseteq\Gamma_{C^{\infty}}(\Sigma,f^{*}TM)\,.$ Now, $\Gamma_{C^{\infty}}(\Sigma,f^{*}TM)$ being a Fr\'echet space, we may assume (see Appendix, Proposition \ref{special curve}) that there exists a smooth curve of sections $s\,:\,\mathbb{R}\rightarrow\Gamma_{C^{\infty}}(\Sigma,f^{*}TM)$ such that: $$ s_{0}=s(0)=\varphi_{f}(f)\,\,\,\,\text{and}\,\,\,\,s\bigg(\dfrac{1}{n}\bigg)=\varphi_{f}(f_{n}) $$ for all $n\in \mathbb{N}\,.$ If $s$ is constructed in the same way as in the ``special curve lemma'' of \cite{Kriegl-Michor}, one sees that $s$ takes its values in $\varphi_{f}(\mathcal{U}_{f})\,.$ Indeed, $s(\mathbb{R})$ is the polygon with vertices the $\varphi_{f}(f_{n})$, and $\varphi_{f}(\mathcal{U}_{f})$ was chosen convex. Set then $$ g\,:\,\mathbb{R}\times\Sigma\rightarrow M,\,\,(t,x)\mapsto\varphi_{f}^{-1}(s_{t})(x)\,. $$ By construction, for all $n\in \mathbb{N}\,:$ $$ g_{0}=f\,\,\,\,\text{and}\,\,\,\,g_{\frac{1}{n}}=f_{n}\,.
$$ Set $W:=f(\Sigma)=g_{0}(\Sigma)\,.$ For $t$ small enough, $g_{t}(\Sigma)\subseteq\tau_{W}(\Theta_{W})$, and we may therefore consider the map $W\ni x\mapsto(\tau_{W}^{-1}\circ g_{t}\circ g_{0}^{-1})(x)\in NW\,.$ This map satisfies the hypotheses of Proposition \ref{la clef}; hence there exists a curve $\varphi_{t}$ in $\text{Diff}(W)$ such that: $$ \sigma_{t}\,:\,x\in W\mapsto(\tau_{W}^{-1}\circ g_{t}\circ g_{0}^{-1}\circ\varphi_{t})(x)\in NW $$ is a section of the normal bundle of $W\,.$ But then, for $n$ large enough, $$ f_{n}=g_{\frac{1}{n}}=\tau_{W}\circ\sigma_{\frac{1}{n}}\circ\varphi_{\frac{1}{n}}^{-1}\circ g_{0} $$ is an embedding of $\Sigma$ into $M$, which is a contradiction. Thus $\text{Emb}(\Sigma,M)$ is indeed open in $C^{\infty}(\Sigma,M)\,.\hfill\square$ \begin{corollaire} The map $p\,:\,\text{Emb}(\Sigma,M)\rightarrow Gr_{k}(M)$ is smooth, and for a smooth path $f_{t}$ in $\text{Emb}(\Sigma,M)$ one has the formula: $$ \dfrac{d}{dt}\bigg\vert_{t_{0}}\,p(f_{t})=\bigg(\dfrac{\partial f}{\partial t}(t_{0})\bigg)^{\bot} $$ where $\big(\frac{\partial f}{\partial t}(t_{0})\big)^{\bot}$ is the section in $\Gamma_{C^{\infty}}(f_{t_{0}}(\Sigma),Nf_{t_{0}}(\Sigma))$ defined for $x\in \Sigma$ by $\Big(\dfrac{\partial f}{\partial t}(t_{0})\Big)^{\bot}_{f_{t_{0}}(x)}:=pr\big(\frac{\partial f}{\partial t}(t_{0},x)\big) \,,$ $pr$ being the orthogonal projection onto the normal bundle of $f_{t_{0}}(\Sigma)\,.$ \end{corollaire} \textbf{Proof.} Take a smooth path $f_{t}$ in $\text{Emb}(\Sigma,M)\,.$ We must show that $p(f_{t})$ is a smooth path in $Gr_{k}(M)$ in order to verify that $p$ is smooth in the sense of Kriegl-Michor.
Fix $t_{0}\in\mathbb{R}$ and set $W:=p(f_{t_{0}})\,.$ For $\varepsilon>0$ small enough, the map $(t,x)\in (t_{0}-\varepsilon,t_{0}+\varepsilon)\times W \mapsto (\tau_{W}^{-1}\circ f_{t}\circ f_{t_{0}}^{-1})(x)\in NW$ satisfies the hypotheses of Proposition \ref{la clef}\,. There thus exists a smooth path $\varphi_{t}$ of diffeomorphisms of $W$ such that $$ x\in W\mapsto(\tau_{W}^{-1}\circ f_{t}\circ f_{t_{0}}^{-1}\circ\varphi_{t})(x)\in NW $$ is a section in $\Gamma_{C^{\infty}}(W,NW)$ as soon as $t$ is sufficiently close to $t_{0}\,.$ But this is precisely what allows us to express $p(f_{t})$, in a neighborhood of $t_{0}$, in the chart $\big(\varphi_{W}(\mathcal{U}_{W}),\varphi_{W}^{-1}\big)$ : $$ \varphi_{W}^{-1}\big(p(f_{t})\big)=\tau_{W}^{-1}\circ f_{t}\circ f_{t_{0}}^{-1}\circ\varphi_{t}\in \Gamma_{C^{\infty}}(W,NW)\,. $$ This curve of sections being smooth, it follows that $p$ is smooth.\\\\ For the formula for the differential of $p$, set $W:=f_{t_{0}}(\Sigma)\,.$ Identifying $\text{T}_{W}Gr_{k}(M)$ with $\Gamma_{C^{\infty}}(W,NW)$, one has: $$ \dfrac{d}{dt}\bigg\vert_{t_{0}}\,p(f_{t})=\dfrac{d}{dt}\bigg\vert_{t_{0}}\,\varphi_{W}^{-1}\big(p(f_{t})\big)=\dfrac{d}{dt}\bigg\vert_{t_{0}}\,\big(\tau_{W}^{-1}\circ f_{t}\circ f_{t_{0}}^{-1}\circ\varphi_{t}\big) $$ and for $x\in W,$ \begin{eqnarray*} \dfrac{d}{dt}\bigg\vert_{t_{0}}\,\big(\tau_{W}^{-1}\circ f_{t}\circ f_{t_{0}}^{-1}\circ\varphi_{t}\big)(x)&=&(\tau_{W}^{-1})_{*_{x}} \dfrac{d}{dt}\bigg\vert_{t_{0}}\,\big( f_{t}\circ f_{t_{0}}^{-1}\circ\varphi_{t}\big)(x)\\ &=&(\tau_{W}^{-1})_{*_{x}}\bigg[\underbrace{\dfrac{\partial f}{\partial t}(t_{0},f_{t_{0}}^{-1}(x))}_{\in \text{T}_{x}M}+\underbrace{\dfrac{\partial \varphi}{\partial t}(t_{0},x)}_{\in \text{T}_{x}W}\bigg]\,.
\end{eqnarray*} By the construction of $\varphi_{t}$, we get: $$ \dfrac{\partial f}{\partial t}(t_{0},f_{t_{0}}^{-1}(x))+\dfrac{\partial \varphi}{\partial t}(t_{0},x)\in N_{x}W $$ which implies, since $\dfrac{\partial \varphi}{\partial t}(t_{0},x)\in \text{T}_{x}W$, that $$ \dfrac{\partial f}{\partial t}(t_{0},f_{t_{0}}^{-1}(x)) +\dfrac{\partial \varphi}{\partial t}(t_{0},x)=\bigg(\dfrac{\partial f}{\partial t}(t_{0})\bigg)_{x}^{\perp}\,. $$ Thus, \begin{eqnarray*} &&\dfrac{d}{dt}\bigg\vert_{t_{0}}\,\big(\tau_{W}^{-1}\circ f_{t}\circ f_{t_{0}}^{-1}\circ\varphi_{t}\big)(x)=(\tau_{W}^{-1})_{*_{x}}\bigg(\dfrac{\partial f}{\partial t}(t_{0})\bigg)_{x}^{\perp}\\ &=&\dfrac{d}{du}\bigg\vert_{0}\,\tau_{W}^{-1}\bigg[exp_{x}\bigg(u\bigg(\dfrac{\partial f}{\partial t}(t_{0})\bigg)_{x}^{\perp}\bigg)\bigg] =\dfrac{d}{du}\bigg\vert_{0}\,(\tau_{W}^{-1}\circ\tau_{W})\bigg(x,u\bigg(\dfrac{\partial f}{\partial t}(t_{0})\bigg)_{x}^{\perp}\bigg)\\ &=&\dfrac{d}{du}\bigg\vert_{0}\,\bigg(x,u\bigg(\dfrac{\partial f}{\partial t}(t_{0})\bigg)_{x}^{\perp}\bigg) =\bigg(\dfrac{\partial f}{\partial t}(t_{0})\bigg)_{x}^{\perp}\,. \end{eqnarray*} Consequently, $$ \dfrac{d}{dt}\bigg\vert_{t_{0}}\,p(f_{t})=\bigg(\dfrac{\partial f}{\partial t}(t_{0})\bigg)^{\perp}\,.
$$ $\text{}\hfill\square$\\\\ \begin{remarque} In view of the formula for the differential of $p$, it might seem that this map depends on ``more'' than the differentiable structure of $Gr_{k}(M)$, since the metric used appears in the formula for its differential. In fact, one must not forget that, for $W\in Gr_{k}(M)$, $\text{T}_{W}Gr_{k}(M)$ is not equal to the space $\Gamma_{C^{\infty}}(W,NW)$, but only isomorphic to it via a realization that requires the metric $g\,.$ \end{remarque} Now, in order to be able to consider certain principal bundles, let us introduce, for $\Sigma\in Gr_{k}(M)$, the following notation: \begin{description} \item[$(i)$] $p\,:\text{Emb}(\Sigma,M)\,\rightarrow Gr_{k}(M),\,\,f\mapsto p(f)$ the canonical projection;\\ \item[$(ii)$] $p_{\Sigma}\,:\text{Emb}(\Sigma,M)\,\rightarrow Gr(\Sigma,M):=p(\text{Emb}(\Sigma,M)),\,\,f\mapsto p(f)$\,;\\ \item[$(iii)$] $\lambda\,:\,\text{Emb}(\Sigma,M)\times \text{Diff}^{+}(\Sigma)\rightarrow \text{Emb}(\Sigma,M),\,\,(f,\varphi)\mapsto f\circ\varphi$ the natural right action of $\text{Diff}^{+}(\Sigma)$ on $\text{Emb}(\Sigma,M)\,.$\\ \end{description} We shall show, through a series of lemmas, that $\text{Emb}(\Sigma,M)$ is a $\text{Diff}^{+}(\Sigma)$-principal bundle over $Gr(\Sigma,M)\,.$ \begin{lemme}\label{composition} Let $U,V$ be two open subsets of $M$ with nonempty intersection and let $\Sigma_{0},\Sigma_{1},W$ be three submanifolds of $M$ such that: $$ \Sigma_{0}\subseteq U,\,\,\,\Sigma_{1}\subseteq V\,\,\,\,\,\,\,\text{and}\,\,\,\,\,\,\,W\subseteq U\cap V\,.
$$ Let also $\beta$ be a continuous path in $\text{Emb}(\Sigma_{0}, U)$ and $\tilde{\beta}$ a continuous path in $\text{Emb}(W, V)$ such that: $$ \beta(0)=j_{\Sigma_{0}},\,\,\beta(1)(\Sigma_{0})=W,\,\,\tilde{\beta}(0)=j_{W}\,\,\,\,\text{and}\,\,\,\,\tilde{\beta}(1)(W)=\Sigma_{1} $$ where $j_{\Sigma_{0}}\,:\,\Sigma_{0}\hookrightarrow M$ and $j_{W}\,:\,W\hookrightarrow M$ are the canonical inclusions. Then the map $\gamma\,:\,[0,2]\rightarrow \text{Emb}(\Sigma_{0},U\cup V)$ defined by: $$ \gamma(t)= \left\lbrace \begin{array}{c} \beta(t)\,\,\,\,\text{for}\,\,\,\,t\in [0,1]\,;\\ \tilde{\beta}(t-1)\circ\beta(1)\,\,\,\,\text{for}\,\,\,\,t\in[1,2], \end{array} \right. $$ is a continuous path in $\text{Emb}(\Sigma_{0}, U\cup V)\,.$ \end{lemme} \textbf{Proof.} Consider the map $$ \vartheta\,:\,\text{Emb}(W, V)\rightarrow\text{Emb}(\Sigma_{0},U\cup V),\,\,\,\rho\mapsto\rho\circ\beta(1)\,. $$ Using smooth curves in $\text{Emb}(W, V)$, it is immediate that $\vartheta$ is a smooth map; in particular, $\vartheta$ is continuous. But then: \begin{description} \item[$(i)$] $\gamma$ is clearly continuous on $[0,1]\,;$\\ \item[$(ii)$] $\gamma(t)=(\vartheta\circ\tilde{\beta})(t-1)$ for $t\in [1,2]$, and thus $\gamma$ is continuous on $[1,2]$.
\end{description} It follows that $\gamma\,:\,[0,2]\rightarrow \text{Emb}(\Sigma_{0},U\cup V)$ is indeed a continuous path in $ \text{Emb}(\Sigma_{0},U\cup V)\,.\hfill\square$ \begin{lemme}\label{lift} Let $\Sigma_{0},\Sigma_{1}$ be two elements of $Gr_{k}(M)$ and let $\alpha\,:\,[0,1]\rightarrow Gr_{k}(M)$ be a continuous path such that $\alpha(0)=\Sigma_{0}$ and $\alpha(1)=\Sigma_{1}\,.$ Then there exists a continuous path $\beta\,:\,[0,1]\rightarrow\text{Emb}(\Sigma_{0},M)$ in $\text{Emb}(\Sigma_{0},M)$ such that: $$ \beta(0)=j_{\Sigma_{0}}\,,\,\,\,\,(p\circ\beta)(0)=\alpha(0)=\Sigma_{0}\,\,\,\,\text{and}\,\,\,\,(p\circ\beta)(1)=\alpha(1)=\Sigma_{1} $$ where $p\,:\, \text{Emb}(\Sigma_{0},M)\rightarrow Gr_{k}(M)$ is the canonical projection and $j_{\Sigma_{0}}\,:\,\Sigma_{0}\hookrightarrow M$ the canonical inclusion. \end{lemme} \textbf{Proof.} Since $\alpha$ is continuous, the set $\alpha([0,1])$ is compact and can therefore be covered by finitely many charts. For simplicity, assume that $$ \alpha([0,1])\subseteq\varphi_{\Sigma_{0}}(\mathcal{U}_{\Sigma_{0}})\cup\varphi_{\Sigma_{1}}(\mathcal{U}_{\Sigma_{1}})\,, $$ the notation being that introduced previously. Take $W$ an element of $\varphi_{\Sigma_{0}}(\mathcal{U}_{\Sigma_{0}})\cap\varphi_{\Sigma_{1}}(\mathcal{U}_{\Sigma_{1}})\,.$ One has: $$ W=\varphi_{\Sigma_{0}}(s_{0})\,\,\,\,\text{and}\,\,\,\,W=\varphi_{\Sigma_{1}}(s_{1}) $$ for some $s_{0}\in \mathcal{U}_{\Sigma_{0}}$ and some $s_{1}\in \mathcal{U}_{\Sigma_{1}}\,.$ We may then consider the maps: $$ \beta\,:\,[0,1]\rightarrow\text{Emb}(\Sigma_{0},\tau_{\Sigma_{0}}(\Theta_{\Sigma_{0}})),\,\,t\mapsto\{x\in\Sigma_{0}\mapsto\text{exp}_{x}(ts_{0}(x))\} $$ and $$ \tilde{\beta}\,:\,[0,1]\rightarrow\text{Emb}(W,\tau_{\Sigma_{1}}(\Theta_{\Sigma_{1}})),\,\,t\mapsto\{x\in W\mapsto\text{exp}_{\tilde{p}(x)}((1-t)s_{1}(\tilde{p}(x)))\}\,.
$$ where $\tilde{p}\,:\,\tau_{\Sigma_{1}}(\Theta_{\Sigma_{1}})\rightarrow\Sigma_{1},\,\,\tau_{\Sigma_{1}}\big((x,v)\big)\mapsto x$ for $(x,v)\in N\Sigma_{1}$, is the canonical projection. These two maps, $\beta$ and $\tilde{\beta}\,,$ are clearly continuous, since they can be extended to smooth curves. By Lemma \ref{composition} (and after reparametrization), it follows that there exists a continuous curve $\gamma\,:\,[0,1]\rightarrow\text{Emb}(\Sigma_{0},\tau_{\Sigma_{0}}(\Theta_{\Sigma_{0}})\cup\tau_{\Sigma_{1}}(\Theta_{\Sigma_{1}}))\subseteq\text{Emb}(\Sigma_{0},M)$ such that $\gamma(0)=\beta(0)=j_{\Sigma_{0}}$ and $\gamma(1)=\tilde{\beta}(1)\circ \beta(1)\,.$ Hence: $$ (p\circ\gamma)(0)=p\big(\gamma(0)\big)=p(j_{\Sigma_{0}})=\Sigma_{0}\,. $$ Moreover, since $$ \gamma(1)(\Sigma_{0})=\big(\tilde{\beta}(1)\circ \beta(1)\big)(\Sigma_{0})=\tilde{\beta}(1)(W)=\Sigma_{1}\,, $$ in order to show that $p(\gamma(1))=\Sigma_{1}$ it suffices to check that $[\gamma(1)^{*}\mu_{1}]=[\mu_{0}]$, where $[\mu_{i}]$ is the orientation of $\Sigma_{i}$ ($i=0,1$). But this follows from the very definition of the charts $\varphi_{\Sigma_{i}}(\mathcal{U}_{\Sigma_{i}})\,.$ Indeed, according to this definition, the orientation of $W$ is given both by $[(\beta(1)^{-1})^{*}\mu_{0}]$ and by $[\tilde{\beta}(1)^{*}\mu_{1}]\,,$ and since $\gamma(1)=\tilde{\beta}(1)\circ \beta(1)\,,$ we deduce that $p(\gamma(1))=(\Sigma_{1},[\mu_{1}])=\Sigma_{1}\,.\hfill\square$\\\\ \begin{corollaire} The set $Gr(\Sigma,M)$ is a union of connected components of $Gr_{k}(M)\,.$ In particular, $Gr(\Sigma,M)$ is a tame manifold.
\end{corollaire} \textbf{Proof.} Let $f\in \text{Emb}(\Sigma,M)$ and let $\Sigma_{1}\in Gr_{k}(M)$ be an element belonging to the same connected component of $Gr_{k}(M)$ as $p_{\Sigma}(f)=:\Sigma_{0}\,.$ To prove the corollary, it suffices to show that $\Sigma_{1}\in Gr(\Sigma,M)\,.$\\\\ Take $\alpha\,:\,[0,1]\rightarrow Gr_{k}(M)$, a continuous path in $Gr_{k}(M)$ such that: $$ \alpha(0)=p_{\Sigma}(f)=\Sigma_{0}\:\:\:\:\text{and}\:\:\:\:\alpha(1)=\Sigma_{1}\,. $$ By Lemma \ref{lift}, there exists a path $\beta\,:\,[0,1]\rightarrow\text{Emb}(\Sigma_{0},M)$ such that: $$ \beta(0)=j_{\Sigma_{0}}\,,\,\,\,\,(p_{\Sigma_{0}}\circ\beta)(0)=\alpha(0)=\Sigma_{0}\,\,\,\,\text{and}\,\,\,\,(p_{\Sigma_{0}}\circ\beta)(1)=\alpha(1)=\Sigma_{1}\,. $$ If we regard $f$ as a map with values in $\Sigma_{0}=p_{\Sigma}(f)$, then we see that $$ p_{\Sigma}(\beta(1)\circ f)=\Sigma_{1}\:\:\:\:\text{with}\:\:\:\:\:\beta(1)\circ f\in \text{Emb}(\Sigma,M)\,. $$ Thus $\Sigma_{1}\in Gr(\Sigma,M)\,.\hfill\square$ \begin{lemme}\label{section locale} The map $p_{\Sigma}\,:\,\text{Emb}(\Sigma,M)\rightarrow Gr(\Sigma,M)$ admits local sections. \end{lemme} \textbf{Proof.} Let $W\in Gr(\Sigma,M)$ and let $f\in \text{Emb}(\Sigma,M)$ be such that $p_{\Sigma}(f)=W\,.$ One checks that the map $\sigma \,:\,\varphi_{W}(\mathcal{U}_{W})\rightarrow \text{Emb}(\Sigma,M)$ defined by $$ \sigma(\varphi_{W}(s))(x):=exp_{f(x)}\,s(f(x)) $$ is a local section of $p_{\Sigma}\,.\hfill\square$ \begin{lemme} The action $\lambda\,:\,\text{Emb}(\Sigma,M)\times \text{Diff}^{+}(\Sigma)\rightarrow \text{Emb}(\Sigma,M)$ is free. Moreover, for $W\in Gr(\Sigma,M)$ and $f\in \text{Emb}(\Sigma,M)$ such that $p_{\Sigma}(f)=W\,,$ one has $p_{\Sigma}^{-1}(W)=\mathcal{O}_{f}$, where $\mathcal{O}_{f}$ is the orbit of $f$ under the action $\lambda\,.$ \end{lemme} \textbf{Proof.} The freeness of $\lambda$ is obvious.
Let us show that $p_{\Sigma}^{-1}(W)=\mathcal{O}_{f}\,.$ Denote by $[\mu]$ the orientation of $\Sigma\,.$ In what follows, we write $f^{-1}$ for the unique smooth map from $W$ to $\Sigma$ satisfying $f^{-1}\circ f=id_{\Sigma}$ (and similarly for $g$)\,. One has: \begin{eqnarray*} g\in p_{\Sigma}^{-1}(W)&\Leftrightarrow& g(\Sigma)=f(\Sigma)\:\:\:\text{and}\:\:\:[(g^{-1})^{*}\mu]=[(f^{-1})^{*}\mu]\\ &\Leftrightarrow& g=f\circ \varphi\:\:\text{with}\:\:\varphi=f^{-1}\circ g \in \text{Diff}(\Sigma)\\ && \text{and}\:\:\:[(g^{-1})^{*}\mu]=[(f^{-1})^{*}\mu]\\ &\Leftrightarrow&g=f\circ\varphi\:\:\text{with}\:\:\varphi\in\text{Diff}^{+}(\Sigma)\\ &\Leftrightarrow& g=\lambda(f,\varphi)\:\:\text{with}\:\:\varphi\in\text{Diff}^{+}(\Sigma)\,. \end{eqnarray*} Thus $p_{\Sigma}^{-1}(p_{\Sigma}(f))=\mathcal{O}_{f}\,.\text{}\hfill\square$\\\\ \begin{lemme}\label{bien lisse} For $W\in Gr(\Sigma,M)$ and $f\in \text{Emb}(\Sigma,M)$ such that $p_{\Sigma}(f)=W$\,, the map $$ \Lambda\::\:p_{\Sigma}^{-1}(\varphi_{W}(\mathcal{U}_{W}))\longrightarrow\text{Diff}^{+}(\Sigma),\,\,\,g\mapsto\sigma\Big(p_{\Sigma}(g)\Big)^{-1}\circ g $$ is tame smooth (here $\sigma$ is the section constructed in Lemma \ref{section locale}). \end{lemme} \textbf{Proof.} Note that $\Lambda$ is well defined and is the unique map satisfying $\lambda\Big(\sigma\big(p_{\Sigma}(g)\big),\,\Lambda(g)\Big)=g$ for all $g\in p_{\Sigma}^{-1}(\varphi_{W}(\mathcal{U}_{W}))\,.$\\\\ Let us show that $\Lambda$ is tame smooth. For $g\in p_{\Sigma}^{-1}(\varphi_{W}(\mathcal{U}_{W}))$ and $x\in \Sigma$ one has: \begin{eqnarray*} \Lambda(g)(x)=\Big(\Big(\sigma\big(p_{\Sigma}(g)\big)\Big)^{-1}\circ g\Big)(x)&\Rightarrow&\sigma\big(p_{\Sigma}(g)\big)\big(\Lambda(g)(x)\big)=g(x)\,.
\end{eqnarray*} Denote by $s\in \Gamma_{C^{\infty}}(W, NW)$ the unique section of $\Gamma_{C^{\infty}}(W, NW)$ satisfying $p_{\Sigma}(g)=\varphi_{W}(s)\,.$ One then has: \begin{eqnarray*} \sigma\big(\varphi_{W}(s)\big)\big(\Lambda(g)(x)\big)=g(x)&\Rightarrow&\text{exp}_{(f\circ\Lambda(g))(x)}\,\,\Big(\big(s\circ f\circ \Lambda(g)\big)(x)\Big)=g(x)\\ &\Rightarrow&\big(s\circ f\circ \Lambda(g)\big)(x)=\tau_{W}^{-1}\big(g(x)\big)\\ &\Rightarrow&\big( f\circ \Lambda(g)\big)(x)=\big(\pi_{NW}\circ\tau_{W}^{-1}\circ g\big)(x)\\ &\Rightarrow& \Lambda(g)(x)=\big(f^{-1}\circ\pi_{NW}\circ\tau_{W}^{-1}\circ g\big)(x)\,.\\ \end{eqnarray*} Thus $\Lambda(g)=f^{-1}\circ\pi_{NW}\circ\tau_{W}^{-1}\circ g\,,$ and one may note that, the map $f$ being fixed, $f^{-1}$ is a smooth map independent of $g\in p_{\Sigma}^{-1}(\varphi_{W}(\mathcal{U}_{W}))\,.$ We deduce that $\Lambda$ is indeed a tame smooth map.\\$\text{}\hfill\square$\\\\ From this series of lemmas, we deduce: \begin{theoreme} The map $p_{\Sigma}\,:\,\text{Emb}(\Sigma,M)\rightarrow Gr(\Sigma,M)$ is a tame $\text{Diff}^{+}(\Sigma)$-principal bundle for the action $\lambda\,.$ \end{theoreme} \textbf{Proof.} Take $W\in \text{Gr}(\Sigma,\,M)$ and choose $f\in \text{Emb}(\Sigma,M)$ such that $p_{\Sigma}(f)=W\,.$ We may consider the following commutative diagram: \begin{center} $ \begin{diagram} \node{p_{\Sigma}^{-1}\big(\varphi_{W}(\mathcal{U}_{W})\big)} \arrow[2]{e,t}{\displaystyle\Psi} \arrow{se,b}{\displaystyle p_{\Sigma}} \node[2]{\varphi_{W}(\mathcal{U}_{W})\times\text{Diff}^{+}(\Sigma)} \arrow{sw,b}{\displaystyle pr_{1}}\\ \node[2]{\varphi_{W}(\mathcal{U}_{W})} \end{diagram} $ \end{center} where $\Psi(g):=(p_{\Sigma}(g),\,\Lambda(g))\,.$ \\ By Lemma \ref{bien lisse}, $\Psi$ is a tame smooth map.
This map is moreover $\text{Diff}^{+}$($\Sigma$)-equivariant, with tame smooth inverse $\Psi^{-1}\big(\varphi_{W}(s),\,\varphi\big)=\sigma\big(\varphi_{W}(s)\big)\circ\varphi\,.$ One thus constructs trivializations of $\text{Emb}(\Sigma,\,M)$ that make $\text{Emb}(\Sigma,\,M)$ a $\text{Diff}^{+}$($\Sigma$)-principal bundle over $\text{Gr}(\Sigma,\,M)\,.\hfill\square$ \section{Homogeneity of the connected components of $Gr_{k}(M)$ under the action of the diffeomorphisms of $M$} For $\Sigma\in Gr_{k}(M)$, we know that the connected component $\big(Gr_{k}(M)\big)_{\Sigma}$ of $ Gr_{k}(M)$ containing $\Sigma$ is connected and locally path-connected (the model space being a Fr\'echet space), and therefore $\big(Gr_{k}(M)\big)_{\Sigma}$ is also path-connected. Just as in finite dimension, one then has: \begin{proposition}\label{connexite} The connected component $\big(Gr_{k}(M)\big)_{\Sigma}$ is path-connected by smooth arcs. \end{proposition} To prove this result, we need a lemma that can be deduced from \cite{Hirsch} (see Exercise 3.b, Section 8.1, page 182 of \cite{Hirsch}).
\begin{lemme}\label{Hirsch} If $\beta\,:\,[0,1]\rightarrow\text{Emb}(\Sigma,M)$ is a continuous path, then there exists a smooth map $F\,:\, [0,1]\times\Sigma\rightarrow M$ such that: \begin{center} \begin{description} \item[$(i)$] the map $F_{t}\,:\,\Sigma\rightarrow M,\,\,x\mapsto F(t,x)$ is an embedding for all $t\in [0,1]$\,;\\ \item[$(ii)$] $F_{0}(\Sigma)=\beta(0)(\Sigma)$ and $F_{1}(\Sigma)=\beta(1)(\Sigma)\,.$ \end{description} \end{center} \end{lemme} \textbf{Proof of Proposition \ref{connexite}.} Take $\alpha\,:\,[0,1]\rightarrow\big(Gr_{k}(M)\big)_{\Sigma}$ a continuous path in $\big(Gr_{k}(M)\big)_{\Sigma}\,.$ Set $\Sigma_{0}:=\alpha(0)$ and $\Sigma_{1}:=\alpha(1)\,.$ By Lemma \ref{lift}, there exists a continuous path $\beta\,:[0,1]\rightarrow\text{Emb}(\Sigma_{0},M)$ in $\text{Emb}(\Sigma_{0},M)$ such that: $$ (p\circ\beta)(0)=\alpha(0)\,\,\,\,\text{and}\,\,\,\,(p\circ\beta)(1)=\alpha(1)\,. $$ But then, by Lemma \ref{Hirsch}, we can find $\varepsilon>0$ and a smooth map $F\,:\,]-\varepsilon,1+\varepsilon[\times\Sigma_{0}\rightarrow M$ such that: \begin{center} \begin{description} \item[$(i)$] the map $F_{t}\,:\,\Sigma_{0}\rightarrow M,\,x\mapsto F(t,x)$ is an embedding for all $t\in [0,1]\,;$\\ \item[$(ii)$] $F_{0}(\Sigma_{0})=\beta(0)(\Sigma_{0})$ and $F_{1}(\Sigma_{0})=\beta(1)(\Sigma_{0})\,.$ \end{description} \end{center} It follows that the map $t\in ]-\varepsilon,1+\varepsilon[\rightarrow\text{Emb}(\Sigma_{0},M),\,t\mapsto F_{t}$ is a smooth curve in $\text{Emb}(\Sigma_{0},M)$ for $\varepsilon$ small enough (since $\text{Emb}(\Sigma_{0},M)$ is open in $C^{\infty}(\Sigma_{0},M)$).\\ Consequently, $p\circ F_{t}$ is a smooth curve in $\big(Gr_{k}(M)\big)_{\Sigma}$ satisfying: $$ p\circ F_{0}=F_{0}(\Sigma_{0})=\beta(0)(\Sigma_{0})=\alpha(0)=\Sigma_{0} $$ $$ \text{and}\,\,\,\,p\circ F_{1}=F_{1}(\Sigma_{0})=\beta(1)(\Sigma_{0})=\alpha(1)=\Sigma_{1}
$$ which proves the proposition.$\hfill\square$\\\\\\ Now consider $\text{Diff}^{\,0}(M)$, the connected component of $\text{Diff}(M)$ containing the identity $Id_{M}$, together with its natural action on $\big(Gr_{k}(M)\big)_{\Sigma}$ : $$ \vartheta\,:\,\text{Diff}^{\,0}(M)\times\big(Gr_{k}(M)\big)_{\Sigma}\rightarrow\big(Gr_{k}(M)\big)_{\Sigma}\,,\,\,(\varphi,W)\mapsto \varphi(W)\,. $$ One then has the following homogeneity result: \begin{theoreme}\label{theoreme 2} The action of $\text{Diff}^{\:\,0}(M)$ on $\big(Gr_{k}(M)\big)_{\Sigma}$ is transitive. \end{theoreme} \textbf{Proof.} $\text{}$\:\:\:\:Let $\Sigma_{0}$ and $\Sigma_{1}$ be two elements of $\big(Gr_{k}(M)\big)_{\Sigma}$ and $\alpha\,:\,[0,1]\rightarrow \big(Gr_{k}(M)\big)_{\Sigma}$ a continuous curve joining $\Sigma_{0}$ and $\Sigma_{1}\,.$ Just as in the proof of Proposition \ref{connexite}, we can find a smooth map $F\,:\,[0,1]\times\Sigma_{0}\rightarrow M$ such that: $$ F_{0}(\Sigma_{0})=\Sigma_{0}\,\,\,\,\text{and}\,\,\,\, F_{1}(\Sigma_{0})=\Sigma_{1}\, $$ and such that $F_{t}$ is an embedding for all $t\in[0,1]\,.$ But then, by a classical result of differential topology (see Theorem 1.3, Chapter 8, page 180 of \cite{Hirsch}), we can find a smooth map $\widetilde{F}\,:\,[0,1]\times M\rightarrow M$ satisfying, for all $t\in [0,1]$: \begin{center} \begin{description} \item[$(i)$] $\widetilde{F}_{t}\in \text{Diff}(M)\,;$\\ \item[$(ii)$] $\widetilde{F}_{0}=Id$ and $F_{t}=\widetilde{F}_{t}\vert_{\Sigma_{0}}\,.$ \end{description} \end{center} By the characterization of smooth curves in $\text{Diff}(M)$, we deduce that $\widetilde{F}_{t}$ is a smooth curve in $\text{Diff}(M)$ joining $Id_{M}$ and $\widetilde{F}_{1}$, which implies in particular that $\widetilde{F}_{1}\in \text{Diff}^{\,0}(M)\,.$ Moreover,
$\vartheta(\widetilde{F}_{1},\Sigma_{0})=\widetilde{F}_{1}(\Sigma_{0})=F_{1}(\Sigma_{0})=\Sigma_{1}$, which proves the theorem.$\hfill\square$ \begin{remarque} Theorem \ref{theoreme 2} could also be proved using the Nash-Moser theorem, via Theorem 2.4.1 of \cite{Hamilton}. \end{remarque} \begin{remarque} From Theorem \ref{theoreme 2}\,, one can show that the connected component $\big(Gr_{k}(M)\big)_{\Sigma}$ of the Grassmannian is also homogeneous under the action of the group $\text{SDiff}(M,\mu)$ of diffeomorphisms of $M$ that preserve a given volume form $\mu$ (see \cite{Haller-Vizman})\,. \end{remarque} \section*{Appendix} In this appendix we give -- without proofs -- some technical results useful for infinite-dimensional geometry, and more particularly for the study of manifolds modeled on Fr\'echet spaces (for an introduction to Fr\'echet spaces, see \cite{Bierstedt-Bonet} or \cite{Jarchow}; for manifolds modeled on Fr\'echet spaces, \cite{Hamilton}, \cite{Kriegl-Michor}, etc.). \begin{definition} Let $F$ be a Fr\'echet space, $I$ an open subset of $\mathbb{R}$ and $c\,:\,I\rightarrow F$ a map. We say that $c$ is differentiable on $I$ if, for all $x\in I\,,$ the quotient $\big(c(x+h)-c(x)\big)/h$ converges as $h\rightarrow 0\,;$ we then denote by $c'$ its derivative. We say that a curve $c\,:\,I\rightarrow F$ is smooth if it admits derivatives of all orders. \end{definition} The following ``folklore'' proposition (see \cite{Molitor2}) relates two notions of differential calculus on Fr\'echet spaces. One uses the notion of smooth curves and is developed in \cite{Kriegl-Michor}; the other, more classical, uses the G\^ateaux differential and is developed, for example, in \cite{Hamilton}, \cite{Milnor}, etc.
\begin{proposition} If $U\subseteq E$ is an open subset of a Fr\'echet space $E$ and $f\,:\,U\rightarrow F$ a map from $U$ into another Fr\'echet space $F\,,$ then $f$ is smooth (in the sense of \cite{Hamilton}) if and only if $f\circ c$ is a smooth curve in $F$ for every smooth curve $c\,:\,I\rightarrow U\,.$ \end{proposition} To make this proposition useful, we need a good description (which can be found in \cite{Kriegl-Michor}) of the smooth curves in $\Gamma_{C^{\infty}}(M,\,E)$, where $M$ is a compact manifold and $E\rightarrow M$ a vector bundle of finite rank (for a description of the topology of $\Gamma_{C^{\infty}}(M,\,E)\,,$ see \cite{dieudonne}, Proposition 17.2.2, page 238). \begin{proposition}\label{caracterisation courbe lisse} If $s\,:\,I\rightarrow\Gamma_{C^{\infty}}(M,\,E)$ is a smooth curve in $\Gamma_{C^{\infty}}(M,\,E)\,,$ then the map $s^{\wedge}\,:\, I\times M\rightarrow E,\,(t,x)\mapsto s_{t}(x)$ is smooth.\\ Conversely, if $f\,:\,I\times M\rightarrow E$ is a smooth map such that $f(t,x)\in E_{x}$ for all $(t,x)\in I\times M\,,$ then the map $f^{\vee}\,:\,I\rightarrow \Gamma_{C^{\infty}}(M,\,E)$ defined by $f^{\vee}(t)(x):=f(t,x)$ is a smooth curve in $\Gamma_{C^{\infty}}(M,\,E)\,.$ \end{proposition} From this proposition one easily deduces a natural characterization of the smooth curves in submanifolds of $C^{\infty}(M,N)$, for which we refer the reader to \cite{Kriegl-Michor}, Lemma 42.5, page 442. \begin{definition} A sequence $(x_{n})_{n\in\mathbb{N}}$ in a Fr\'echet space $F$ ``converges fast'' to $x\in F$ if, for all $k\in \mathbb{N}\,,$ the sequence $n^{k}(x_{n}-x)$ is bounded (bounded in the sense of locally convex topological vector spaces, see \cite{Jarchow} or \cite{Kriegl-Michor}).
\end{definition} \begin{lemme} If $(x_{n})_{n\in\mathbb{N}}$ is a sequence in a Fr\'echet space $F$ converging to $x\in F\,,$ then one can find a subsequence of $(x_{n})_{n\in\mathbb{N}}$ that converges fast to $x$. \end{lemme} From this lemma, together with the ``special curve lemma'' of \cite{Kriegl-Michor}, page 16, we deduce \begin{proposition}\label{special curve} $\text{}$\,\,If $(x_{n})_{n\in\mathbb{N}}$ is a sequence in a Fr\'echet space $F$ converging to $x\in F\,,$ then (up to passing to a subsequence) one can find a smooth curve $c\,:\,\mathbb{R}\rightarrow F$ such that $c(\frac{1}{n})=x_{n}$ and $c(0)=x\,.$ \end{proposition}$\text{}$\\\\ $\text{}$\,\,\,\,\,\,\,\,\textbf{\begin{large}Acknowledgements.\end{large}} I would like to thank Tilmann Wurzbacher, who encouraged me to write this article and warmly accompanied me throughout its writing. \nocite{Ismagilov}
\section{Introduction} The movement of animals in search of food, refugia or other resources is nowadays the subject of active research trying to unveil the mechanisms that give rise to a wide family of related complex patterns. In particular, physicists find in these a fruitful field to explore reaction-diffusion mechanisms~\cite{kuperman,abramson2013}, to apply the formalism of stochastic differential equations~\cite{okubo02,mikhailov06,schat96} and to perform simulations based on random walks~\cite{viswanathan11,viswanathan96,giuggioli09,borger08}. One of the key aspects of this phenomenon is the feedback interaction between the individual and the environment \cite{turc98}. These interactions may involve intra- and interspecific competition that, together with previous experience \cite{nath08,mor10} and the search for resources, drive the displacement of the individuals. In particular, when animals move around in order to collect food from patches of renewable resources, their trajectories depend strongly on the spatial arrangement of such patches \cite{ohashi07}. This observation has motivated a large collection of studies focused on finding optimal search strategies under different assumptions of animal perception and memory \cite{bartumeus02,fronhofer13}. A related open question is that of the origin of home ranges, a concept introduced in \cite{burt} to characterize the spatial extent of the displacements of an animal during its daily activities. Many species perform bounded explorations around their refugia, even though the available space and resources extend far beyond. There are several hypotheses that try to explain this phenomenon, which could be only an emergent behavior associated with very simple causes \cite{abramson2014}. The review by B\"orger et al. \cite{borger08} is an exhaustive compilation of the state of the art. There, the authors point out that movement models do not always lead to the formation of stationary home ranges.
Still, home ranges arise, for example, in biased diffusion \cite{okubo02}, in self-attracting walks \cite{tan} and in models with memory \cite{schu}. Nevertheless, the quest to unveil and characterize the underlying weave of causes and effects behind the emergent patterns is not over. How do these emerge as the result of the interaction between the behavior of an organism and the spatial structure of the environment? In this context, the venerable symmetrical random walk has been the subject of many studies, with a large collection of applications and characterizations that include aspects beyond the simple walker capable of only uncorrelated short-range steps. Just to focus on what we want to present here, let us restrict the examples to random walks on discrete lattices where the walker can gather information to build up a history. One such case is the self-avoiding walk (SAW), where the walker builds up its trajectory by avoiding stepping onto an already visited site \cite{flor,dege}. A characteristic result is the walker running into a site with all its neighboring sites already visited and becoming blocked. The converse case occurs when the walker prefers sites visited earlier. Previous works have shown that introducing long-range correlations into a random walk may lead to nontrivial effects, translated into drastic changes in the asymptotic behavior. The usual diffusive dynamics can evolve into sub-diffusive, super-diffusive or persistent dynamics. Such random walks with long-range memory have been extensively studied in recent years \cite{hod,schu,trimp99,trimp01,kesh,para, silva,cres}. In \cite{sapo,orde1,orde2} a behavior that can be interpreted as memory has been explored. These works analyze a self-attracting walk where the walker jumps to a nearest neighbor with a probability that increases if the site has already been visited.
A generalization that includes an enhancement of this memory with the frequency of visits, but also a degradation with time, was proposed in \cite{tan}. In this work we propose a random walk with a specific memory that induces local correlations at long times. The rationale for this model is to mimic the movement of a foraging animal, e.g. a frugivore, going from one plant to another in order to feed. We show that the emergence of looped walks, which can be associated with home ranges, can be promoted by very rudimentary capacities of the individual together with a natural dynamics of the environment. \section{The model} For a forager the proximity of a plant is not enough to make it attractive for a future visit: the plant must also have a visible and interesting load of fruit. Moreover, when visiting a plant the animal usually takes only part of the available fruit and moves on. After this, the plant needs some time to recover its fruit load. Such a model was analyzed in \cite{abramson2014}. We attempt here a further simplification, coding the complex interaction of memory, consumption and relaxation into the probabilities defining the random walk from each site of the lattice. As a first simplification, consider that the animal eats all the available ripe fruit in the visited plant and leaves. Let us say that a walker moving in such a substrate has a memory, allowing it to remember the time of visit to every site and the step taken from there. When revisiting a fruitful plant the animal will consider it a success and repeat the step taken from there, ``remembering'' its previous visit. When returning to a plant before its recovery the walker takes a random step. This unlimited memory is not necessarily associated with an extraordinary skill of the forager. It could be stored in the environment as the state of each plant, whose proximity and fruit load can trigger in the forager the inclination to choose a specific direction.
Thus, the memory of having visited a site once need not be stored in the animal but can be recorded in the topology of the environment (as is the case in \cite{abramson2014}). Also, we can anticipate here that when a home range emerges the walker effectively uses a bounded amount of memory. Besides this, imagine two possible strategies for the \emph{update} of the memory, the details of which will be given below. A \emph{conservative} walker will keep in memory the time at which the visit to that site was successful and the step taken on that occasion. An \emph{exploring} walker, instead, will update the memory of the visit to the current time and the step to the randomly chosen one. Between these two strategies there might be intermediate ones, all of which will be explored below. Now, with the motivation just exposed, let us define a random walk that modifies the probabilities of steps from each site according to the time since the last visit and a parameter defining the strategy. The rules of the walk can be summarized as follows: \begin{itemize} \item When visiting a new site, take a random step in any of the four directions. Store in memory the time of visit $t_v$ and the step. \item When returning at time $t$ to a site previously visited at time $t_v$: \begin{itemize} \item With probability $p_r(t-t_v)$ repeat the step stored in memory. Update the visit time stored in memory. \item[Or:] \item With probability $1-p_r(t-t_v)$ take a random step and: \begin{itemize} \item With probability $\rho$, update in memory the time of visit and the step taken. \item[Or:] \item With probability $1-\rho$, keep the memory unmodified. \end{itemize} \end{itemize} \end{itemize} The probability of repeating the step taken in the previous visit is used to model the replenishment of the fruit mentioned above. It can be simply a Heaviside step function $p_r(t-t_v)=\theta(t-t_v-\tau)$, where $\tau$ is a parameter representing the recovery time of the plants.
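The rules above can be condensed into a short simulation sketch. This is a minimal illustration, not the authors' code; the data structures are ours, and we take $\theta(0)=1$ (i.e. a return exactly $\tau$ steps later already counts as successful):

```python
import random

def frugivore_walk(steps, tau, rho, seed=0):
    """Sketch of the walk rules with a Heaviside p_r.

    memory[site] = (t_v, step): time of the last recorded visit and the
    step taken from that site.  Returns the trajectory as a list of sites.
    """
    rng = random.Random(seed)
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # the four lattice directions
    pos, memory, path = (0, 0), {}, [(0, 0)]
    for t in range(steps):
        if pos in memory and t - memory[pos][0] >= tau:
            # successful revisit: repeat the remembered step,
            # updating only the stored visit time
            step = memory[pos][1]
            memory[pos] = (t, step)
        else:
            # new site, or an early return: take a random step
            step = rng.choice(moves)
            if pos not in memory or rng.random() < rho:
                # exploring update (probability rho on early returns)
                memory[pos] = (t, step)
        pos = (pos[0] + step[0], pos[1] + step[1])
        path.append(pos)
    return path
```

With $\rho=0$ the memory of an early-returned site is never overwritten (the conservative walker), while with $\rho=1$ it is always refreshed (the exploring walker).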
It is equivalent to the memory of the \textit{elephant walk}~\cite{schu}, but used in a different way. Contrary to the usual memory that makes the probability of revisiting a site fade out with time, here we are considering a probability of revisiting a site that increases with time. In such a case the walker will always repeat its step when returning after $\tau$ steps, and always take a random step when returning earlier. This strict condition can be relaxed by modeling $p_r$ with a smooth step function. In the results shown below only the Heaviside step distribution will be used since, as we will show later, no significant differences were found when using a smooth distribution. The walks are thus characterized by two parameters, $\tau$ and $\rho$. \begin{comment}There is an alternative interpretation of the same process. Consider for example that at each step, the animal chooses the closest plant to move to, subject to the condition that the fruit load is attractive enough. After the visit, the plant needs some recovery time to reach a considerable fruit load again. For example, from plant A, the animal moves to plant B. If after some time the animal reaches A again, it may opt for the closest plant again, i.e. B, only if B has recovered its fruit load, or equivalently if the time between consecutive visits is long enough. In any other case the animal may choose another plant. The described process can be mimicked by the proposed walk with infinite memory, where the memory is just an indirect way to favor a plant over the other and the probability function a means to measure the time needed for recovery. \end{comment} Our results show the emergence of closed circuits in nontrivial ways. To characterize their behavior we analyze both the duration of the transient elapsed until the walker enters a closed circuit and the length of such cycles.
The emergence of such circuits is reflected in the fact that during the initial stages the mean square displacement exhibits a diffusive behavior, whereas for longer times it reaches a plateau. Such a behavior has already been reported in previous works \cite{trimp99, trimp01} where, due to a feedback coupling between a particle and its environment, the particle interacts with modified surroundings, resulting in a bounded walk. \section{Results} The results presented below correspond to mean values taken over $10^3$--$10^4$ realizations, on a two-dimensional lattice large enough to prevent the walker from reaching the borders. The simulations were run for $10^5$ and $10^6$ time steps, showing no significant differences between them. One of the most revealing features of any sort of walk, be it random, self-avoiding, self-attracting, etc., is its mean square displacement (MSD). The behavior of the MSD in the present model shows rather interesting features. Figure \ref{figure:MSD_tau20} displays the MSD as a function of time for a range of values of $\rho$, from 0 to 1, and for $\tau=20$. Recalling that $\rho$ is the probability that the walker updates the information stored in its memory, regarding the time of visit to a site and the step taken from there, we associate $\rho=1$ with the \emph{exploring} behavior and $\rho=0$ with the \emph{conservative} one. We observe that for $\rho=0$ the behavior is clearly diffusive, while for $\rho=1$ the MSD reaches a plateau, indicating that the walker remains trapped in a bounded region. Contrary to the intuitive guess, this shows that it is the exploring behavior that allows the walker to find closed circuits more easily, while the conservative behavior leads to a diffusive walk. Intermediate values of $\rho$ generate intermediate behaviors. We have analyzed the model for values of $\tau$ ranging from 5 to 150, finding analogous results for all of them.
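The MSD curves discussed here are ensemble averages of the squared displacement from the origin. As a sketch of how such a curve is estimated (our illustration; a plain symmetric random walk is used as a stand-in trajectory generator, and any walker, including the one defined in the model, can be plugged in):

```python
import random

def msd(trajectories):
    """Mean square displacement from the starting site at each time,
    averaged over realizations."""
    T = len(trajectories[0])
    return [sum((x[t][0] - x[0][0]) ** 2 + (x[t][1] - x[0][1]) ** 2
                for x in trajectories) / len(trajectories)
            for t in range(T)]

def random_walk(steps, rng):
    # stand-in trajectory generator: simple symmetric walk on Z^2
    pos, path = (0, 0), [(0, 0)]
    for _ in range(steps):
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        pos = (pos[0] + dx, pos[1] + dy)
        path.append(pos)
    return path

rng = random.Random(0)
curves = [random_walk(200, rng) for _ in range(500)]
m = msd(curves)
# For a simple random walk <x^2> grows roughly linearly with t;
# a plateau in m at long times would signal a bounded (trapped) walk.
```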
\begin{figure}[t] \centering \includegraphics[width=\columnwidth, clip=true]{MSD_tau20r} \caption{Mean square displacement vs. time for probability $\rho=0$ (black), $\rho=1$ (orange) and intermediate values, corresponding to a recovery time of the plants $\tau=20$. Simulations performed in a square lattice of $5000\times 5000$ sites, $10^5$ time steps and $10^3$ realizations. (Color online.)} \label{figure:MSD_tau20} \end{figure} These results raise several questions about the dependence of the emergence of cycles on each parameter. Even though all two-dimensional walks (including the case $\rho=0$) eventually return to a site in a condition that allows the settling of a cycle, the time necessary to fulfill this condition can vary greatly. As a result, after a fixed number of steps only a fraction of the walkers are able to do so. In the following we proceed to characterize the statistical behavior of these walkers by measuring several relevant quantities. Figure~\ref{figure:cant_ciclos} shows a contour diagram representing the fraction of realizations that end in a cycle, as a function of the parameters $\rho$ and $\tau$. We observe that this fraction increases both for decreasing $\tau$ and for increasing $\rho$. Consistently, mapping this situation to the biological scenario, when plants take too long to recover (large $\tau$), or when the foragers do not explore enough (too small $\rho$), there is no formation of home ranges. Another informative aspect of the walks that needs characterization is the length of the cycles. The concept of a home range is always associated with the measurement of the amount of space utilized. Sometimes it is measured through the utilization distribution \cite{ford79}, which represents the probability of finding an animal in a defined area within its home range.
In this case, once the cycle is established, the animal will visit each site within the cycle only once per turn, so the utilization distribution will be uniform over the sites within the cycle. Still, we can obtain an estimate of the amount of space used by measuring the length of the cycle. A priori we know that $\tau$ is the greatest lower bound (infimum) for the average cycle length. This average is shown in Fig.~\ref{figure:prom_ciclos}. We can conclude that the mean length of the cycles is very close to this bound for all parameter sets, showing a very weak dependence on $\rho$ for the largest values of $\tau$, undoubtedly due to the undersampling arising from the finite simulation runs. Observe, nevertheless, the wedge-shaped region of very conservative walkers that never find a cycle, which grows with the recovery time $\tau$. \begin{figure}[t] \centering \includegraphics[width=\columnwidth, clip=true]{cant_ciclos} \caption{Contour plot of the fraction of realizations that end in a cycle, as a function of parameters $\rho$ and $\tau$. Simulations performed in a square lattice of $5000\times 5000$ sites, $10^5$ time steps and $10^4$ realizations. The gray region corresponds to realizations that do not end in cycles due to finite observation time.} \label{figure:cant_ciclos} \end{figure} \begin{figure}[htp] \centering \includegraphics[width=\columnwidth, clip=true]{prom_ciclos} \caption{Contour plot of the mean cycle length as a function of $\rho$ and $\tau$. Simulations performed in a square lattice of $5000\times 5000$ sites, $10^5$ time steps and $10^4$ realizations.} \label{figure:prom_ciclos} \end{figure} \begin{figure}[htp] \centering \includegraphics[width=\columnwidth, clip=true]{prom_transit} \caption{Contour plot of the mean transient length as a function of parameters $\rho$ and $\tau$. The color scale is logarithmic.
Simulations performed in a square lattice of $10^4\times10^4$ sites, $10^6$ time steps.} \label{figure:prom_transitorio} \end{figure} Let us now focus on the extreme cases of $\rho=0$ and $\rho=1$. When $\rho=0$ we found that the behavior is diffusive for all values of $\tau$, so that $\langle x^2\rangle =D(\tau)\,t$. As shown in Fig.~\ref{figure:d_vs_tau}, $D(\tau)$ depends on $\tau$, approaching 1 from below as $\tau$ increases. On the other hand, perfect explorers---those with $\rho=1$---always find a cycle. We have found that the average length of the transient depends quadratically on $\tau$. The transient regime is longer for larger values of $\tau$; i.e., for short recovery times the walker finds a cycle more easily (and faster). If $\tau$ is very large, it may happen that the walker returns several times to the same site earlier than $\tau$, choosing the next step at random each time and thus losing the possibility of repeating the last steps and entering a cycle. \begin{figure}[htp] \centering \includegraphics[width=\columnwidth, clip=true]{d_vs_tau} \caption{Diffusion coefficient of the $\rho=0$ case (slope of the average MSD curves for each value of $\tau$). Forty uniformly distributed values of $\tau$ were considered between 5 and 200. } \label{figure:d_vs_tau} \end{figure} Observe that the exploring walker is the one that continuously updates the stored information. An intuitive guess about the resulting dynamics, analyzed in terms of the intensity of the exploring activity of the individual, may lead us to think that such a walker would have greater difficulty in establishing a walking pattern and finding a closed circuit. Also, for those who maintain the stored information (the conservative walkers), finding an optimal closed circuit would seem a relatively simple task. However, our results show that this intuition is wrong.
Relevant insight into the mechanisms that give rise to the observed behavior of the forager walk can be obtained from well-known results on conventional random walks. A random walk in one and two dimensions is recurrent, i.e. the probability that the walker eventually returns to the starting site is 1. (In higher dimensions the random walk is transient, the former probability being less than 1 \cite{green}.) So, in principle, for any value of $\tau$ and $\rho=0$ the forager walk eventually ends up in a cycle. However, this asymptotic behavior of the system may not be the most relevant one in many contexts. In the biological scenario, for example, one would be interested in the possibility of finding cycles in relatively short times. Our results can be explained by considering the so-called \textit{Pólya problem} or first return time. The probability that a simple random walk in one dimension returns for the first time to a given site after $2n$ steps is \begin{equation} \binom{2n}{n}\frac{1}{(2n-1)2^{2n}}. \label{first} \end{equation} In two dimensions the probability that a simple random walk returns to a given site after $2n$ steps is the square of the previous probability \cite{green}, as a simple random walk in two dimensions can be projected onto two independent one-dimensional walks on the $x$ and $y$ axes. The probability given by Eq.~(\ref{first}) asymptotically decays as $n^{-3/2}$, indicating that returning to the initial site is increasingly improbable with the elapsed time. The forager walk can be interpreted in the following way. Until the moment the walker gets trapped in a cycle, it performs a random walk. Afterwards, the behavior is deterministic. That very moment corresponds to the first time a cycle is completed, so it is a return to the initial step of the cycle after $\tau_c\ge \tau$ time steps, where $\tau_c$ is the period of the cycle of an individual realization for a given choice of $\tau$.
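Equation~(\ref{first}) can be checked numerically. The sketch below (ours) verifies both the recurrence of the one-dimensional walk, i.e. that the first-return probabilities sum to 1, and the $n^{-3/2}$ decay:

```python
from math import comb

def first_return_1d(n):
    """Probability that a 1D simple random walk returns to its
    starting site for the first time after 2n steps (Eq. (first))."""
    return comb(2 * n, n) / ((2 * n - 1) * 2 ** (2 * n))

# Sanity checks:
# - first_return_1d(1) = 1/2 (the walk returns at step 2 with prob. 1/2);
# - summing over n approaches 1 slowly (the tail decays as n^{-3/2},
#   so the partial sum up to N misses a term of order N^{-1/2});
# - doubling n multiplies the probability by roughly 2^{-3/2}.
```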
Let us assume that the transient walk executed up to this first return can be used to estimate a probability analogous to Eq.~(\ref{first}). We can do this from the length of the transient and the fraction of realizations that successfully ended in a cycle. The transient can be thought of as consisting of successive realizations of walks of length $\tau_c$ that were \emph{not} successful in returning to the starting point. We have verified this algebraic dependence. The immediate question about the validity of the present results in higher dimensions can be answered by invoking the recurrence theorem presented by G. Pólya in 1921 \cite{poly21}, where he shows that a random walk is recurrent on 1- and 2-dimensional lattices, and transient on lattices with more than 2 dimensions. The emergence of home ranges as presented in this work is strongly dependent on the probability of eventual returns to already visited places. Thus, for dimensions higher than 2 the expected cycle lengths will be longer and their very existence less probable, as can be deduced from the calculated probabilities of return to the origin in these cases \cite{mont56}. Besides, the fact that increasing $\rho$ produces an increase in the probability of finding a cycle can be understood in the following way. The probability of returning to a given site decreases as the walker moves away. When $\rho$ is small the walker can move increasingly farther away from the stored site, making it rather difficult to return to it and enter a cycle. When $\rho$ is high the foraging walker constantly updates its memory, so that it is always relatively close to the most recently stored site. This increases the probability of returning to it and triggering a cycle. For completeness, we include a plot showing results based on the use of a smoother distribution. The smooth step depends on two parameters, $\tau$ and $\omega$. The limit $\omega\to\infty$ tends to a Heaviside step function at $t=\tau$.
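The exact functional form of the smooth step is given only in the inset of the corresponding figure; a logistic sigmoid is one natural choice with the stated limiting behavior, and we use it here purely as an illustrative assumption:

```python
from math import exp

def p_r_smooth(dt, tau, w):
    """A plausible smooth step for p_r (our assumption, not necessarily
    the authors' exact expression): a logistic sigmoid centered at
    dt = tau, whose width is controlled by w.  As w -> infinity it
    approaches the Heaviside step theta(dt - tau)."""
    return 1.0 / (1.0 + exp(-w * (dt - tau)))
```

For a large width parameter ($w=10$ with $\tau=20$, say) the function is already nearly 0 below the recovery time and nearly 1 above it, consistent with the observation that such curves are almost indistinguishable from the Heaviside case.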
Figure \ref{figure:nostep} displays the behavior of the walk for three values of $\omega$ (10, 2 and 0.5), exemplifying the typical behaviors for a fixed value of $\tau=20$. The MSDs are averages over 1000 realizations. The black curve corresponds to $\omega=10$, which is very similar to a step and gives an MSD almost identical to the one shown in Fig.~\ref{figure:MSD_tau20} for $\rho=1$ (orange curve). While smoother curves tend to plateaus at higher values, no qualitative differences are observed in the behavior. \begin{figure}[htp] \centering \includegraphics[width=\columnwidth, clip=true]{MSD_wr} \caption{Mean square displacement vs. time considering a smooth function for the probability of repeating the step taken in the previous visit. $\omega=10$ (black), $\omega=2$ (red), $\omega=0.5$ (blue) and $\tau=20$. (Color online.) Simulations performed in a square lattice of $5000\times 5000$ sites, $10^5$ time steps and $10^3$ realizations. The inset shows the functional expression and shape of the probability distribution.} \label{figure:nostep} \end{figure} \section{Conclusions} One of the important aspects related to animal movement is the effect that spatial heterogeneities have on the observed patterns. When the spatial heterogeneity is manifested through the distribution of resources, the link between resource dynamics and random walk models might be the key to answering many open questions about the emergence of home ranges. Another route to explore this problem is by accounting for learning abilities and spatial memory \cite{stamp99}. The formation of a home range has previously been investigated with models in which a single individual displays both an avoidance response to recently visited sites and an attractive response toward places that have been visited some time in the past \cite{fronhofer13,moor09}.
An animal searching for food would choose its movements based not only on its internal state and the instantaneous perception of the environment, but also on acquired knowledge and experience. Animals use their memory to infer the current state of areas not previously visited. This memory is built up by collecting information remembered from previous visits to neighboring locations \cite{faw14}. Although the emergence of home ranges is crucial to understanding the patterns arising from animal movement, there are few mechanistic models that reproduce this phenomenon. Traditional random walks, widely used to describe animal movement, show a diffusive behavior, far from displaying a bounded home range. However, the addition of memory capacity has been shown to produce bounded walks \cite{schu,trimp99,trimp01}. Home ranges also arise in biased diffusion \cite{okubo02} and in self-attracting walks \cite{tan}. The interesting aspect of the results presented here is that they not only reveal the nontrivial behavior of the so-called \textit{frugivore walk} but also contribute to a deeper understanding of the causes underlying the constitution of home ranges as an emergent phenomenon, among which we highlight the foraging strategy. By considering a minimal model we have shown that a walker with rudimentary learning abilities, together with the feedback from a dynamic substrate, gives rise to an optimal foraging activity in terms of the usage of the spatial resource. Indeed, neither a foraging strategy based just on diffusion (a random walk without memory), nor a walk strongly determined by memory (like our conservative walker), is optimal. A better strategy is one that combines the use of memory with an exploratory behavior, such as that of our \emph{exploring} walker. There is evidence that precisely this combined strategy may be the one favored by evolutionary mechanisms \cite{gaf81,eli07}.
Foraging activity must balance exploration and exploitation: on the one hand, exploring the environment is crucial to find and learn about the distributed resources; on the other hand, exploitation of known resources is energetically optimal. Indeed, this trade-off is a central thesis in current studies of foraging ecology, as is apparent in the thorough work by W. Bell \cite{bell1991}, in Lévy flight models \cite{viswanathan11} and others. The simple mechanism analyzed here contributes theoretical support to these ideas. We have shown that the balance between exploration and exploitation not only provides an optimal use of resources; it may also be responsible for the emergence of a home range. The balance between exploration and exploitation appears as the road to successful foraging. \begin{acknowledgments} This work was supported by grants from ANPCyT (PICT-2011-0790), U. N. de Cuyo (06/C410) and CONICET (PIP 112-20110100310). \end{acknowledgments}
\section{Introduction} Star clusters are often born in a hierarchical structure which consists of several subclusters (hereafter clumps). One of the biggest such systems is the Carina Nebula, which includes several star clusters and smaller clumps \citep{2011ApJS..194....9F,2014ApJ...787..107K}. Such ``star cluster complexes'' are considered to have formed via the gravitational collapse of giant molecular clouds with turbulence \citep[][and references therein]{2007ARA&A..45..565M}. The formation of stars and star clusters in turbulent molecular clouds has been studied using numerical simulations \citep{2008MNRAS.389.1556B, 2012MNRAS.419.3115B,2012ApJ...754...71K,2012ApJ...761..156F,2015MNRAS.449..726F,2015PASJ...67...59F,2016ApJ...817....4F}. In these studies, hierarchical structure formation has been confirmed. Star clusters are initially embedded in their parental molecular clouds \citep{2003ARA&A..41...57L}, but once massive stars form, the gas is expelled by feedback processes such as ionization, stellar winds, and supernova explosions. As a result, the embedded clusters are expected to expand. This expansion has been studied using numerical simulations \citep{2009A&A...498L..37P,2012MNRAS.420.1503P} and also observationally \citep{2018PASP..130g2001G,2018arXiv180702115K}. Numerical simulations suggest that not only star clusters but also star cluster complexes expand. \citet{2015MNRAS.449..726F} performed a series of simulations of star cluster complexes forming in turbulent molecular clouds. They suggested that star cluster complexes also expand, although some clumps merge and evolve into more massive clusters within a few Myr \citep[see also ][]{2015PASJ...67...59F,2016ApJ...817....4F}. However, the relative velocity among clumps in star cluster complexes was not studied in their work. Observational studies of the kinematics of star cluster complexes require accurate velocity measurements.
Thanks to Gaia Data Release 2 \citep{2018arXiv180409365G}, the proper motions of individual stars in young star clusters and associations are now available. These data allow us to study the kinematics of star cluster complexes. \citet{2018arXiv180702115K} measured inter-clump velocities for some star cluster complexes such as the Carina Nebula. They reported an expansion of star cluster complexes as well as of young star clusters. In this paper, we measure the velocity structure among clumps in star cluster complexes using the results of our numerical simulations \citep{2015PASJ...67...59F,2016ApJ...817....4F} and additional new simulations. We connect the current spatial and velocity distributions of clumps in star cluster complexes to their parental molecular clouds and estimate the properties of the parental molecular clouds. \section{Methods} \subsection{Numerical Simulations} We use the results of \citet{2016ApJ...817....4F} and also perform additional simulations for some models. Here, we briefly summarize the methods used in this study \citep[see also][]{2015PASJ...67...59F,2016ApJ...817....4F}. First, we perform hydrodynamic simulations of molecular clouds using a smoothed-particle hydrodynamics (SPH) code, Fi \citep{1989ApJS...70..419H,1997A&A...325..972G,2004A&A...422...55P, 2005PhDT........17P}, with the Astronomical Multipurpose Software Environment (AMUSE) \citep{2009NewA...14..369P,2013CoPhC.183..456P,2013A&A...557A..84P}\footnote{see \url{http://amusecode.org/}.}. We set up the initial conditions of the molecular clouds using AMUSE, following \citet{2003MNRAS.343..413B}. We adopt isothermal (30\,K) homogeneous spheres and impose a divergence-free random Gaussian velocity field $\delta \bm{v}$ with a power spectrum $|\delta v|^2\propto k^{-4}$ \citep{2001ApJ...546..980O,2003MNRAS.343..413B}.
The spectral index of $-4$ appears in the case of compressive turbulence (Burgers turbulence); recent observations of molecular clouds \citep{2004ApJ...615L..45H} and numerical simulations \citep{2010A&A...512A..81F, 2011ApJ...740..120R,2013MNRAS.436.1245F} also suggest values similar to $-4$. We adopt the mass and density of the molecular clouds as parameters. In Table \ref{tb:IC}, the initial conditions of the molecular clouds are summarized. The model names represent the initial mass and density; for example, m400k and d100 indicate a mass of $4\times10^5M_{\odot}$ and a density of $100 M_{\odot}$\,pc$^{-3} \sim 1700$\,cm$^{-3}$, assuming that the mean weight per particle is $2.33m_{\rm H}$. We adopt 10 and $100 M_{\odot}$\,pc$^{-3}$ (170 and 1700\,cm$^{-3}$) for the density and $4\times 10^4$, $10^5$, $4\times 10^5$, and $10^6M_{\odot}$ for the mass. While the density of our initial conditions is 170--1700\,cm$^{-3}$, the observed density of molecular clouds is 100--500\,cm$^{-3}$ \citep{2015ARA&A..53..583H}. For models m1M-d100, m400k-d100, and m400k-d10, we use the results obtained in \citet{2015MNRAS.449..726F}. We also perform simulations for the additional models m100k-d100, m40k-d100, and m100k-d10. We set the kinetic energy ($E_{\rm k}$) equal to the absolute value of the potential energy ($|E_{\rm p}|$). With this setting, the system is initially super-virial. For comparison, we test a model identical to m100k-d100 but with $E_{\rm k}/|E_{\rm p}|=0.5$ (model m100k-d100-vir). After 0.9 free-fall times of the initial condition, we stop the SPH simulations (at 0.75 and 2.4\,Myr for the d100 and d10 models, respectively) and convert a part of the gas particles to stellar particles using the following procedure.
We assume a local star formation efficiency (SFE) which depends on the local gas density $\rho$, given by \begin{eqnarray} \epsilon_{\rm loc} = \alpha_{\rm sfe} \sqrt{\frac{\rho}{100\, (M_{\odot}{\rm pc}^{-3})}}, \label{eq:eff} \end{eqnarray} where $\alpha_{\rm sfe}$ is a coefficient which controls the star formation efficiency and is a free parameter in our simulations. We adopt $\alpha_{\rm sfe}=0.02$. This SFE is motivated by the result that the star formation rate scales with the free-fall time \citep{2012ApJ...745...69K,2013MNRAS.436.3167F}. Using this equation, we calculate the local SFE for each gas particle and, following it, choose the gas particles to be converted to stellar particles. To each selected gas particle we randomly assign a stellar mass following a Salpeter mass function \citep{1955ApJ...121..161S} with lower and upper cut-off masses of 0.3 and 100$M_{\odot}$, and convert the gas particle to a stellar particle. Although mass is not conserved locally, the total mass is conserved globally because the mean stellar mass is equal to the gas-particle mass in the SPH simulations. We assume that the velocity of each stellar particle is the same as that of its parent gas particle. With this assumption, the stellar system inherits the velocity field of its parental molecular cloud. The resulting global SFE measured for the entire system was several per cent, but the local SFE for dense regions reaches $\sim 30$\,\%. In dense regions, the local SFE exceeds 0.5 and reaches 1.0 in the densest regions. We allow such a high local SFE following the results of previous studies, in which star formation was followed using sink particles and stars dominated in dense clumps \citep{2010MNRAS.404..721M,2012MNRAS.419..841K}. The SFEs of some models are summarized in Table 2 of \citet{2016ApJ...817....4F}. We remove all residual gas particles and start $N$-body simulations using the stellar distribution obtained from the SPH simulations.
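The density-dependent conversion step can be sketched as follows. This is a minimal illustration of Eq.~(\ref{eq:eff}); the per-particle selection by probability is our reading of the procedure, and the data layout is ours:

```python
import math
import random

def local_sfe(rho, alpha_sfe=0.02):
    """Eq. (eq:eff): local star formation efficiency for a gas
    particle of local density rho (in Msun pc^-3), capped at 1."""
    return min(1.0, alpha_sfe * math.sqrt(rho / 100.0))

def select_star_particles(densities, alpha_sfe=0.02, seed=0):
    """Convert each gas particle to a stellar particle with probability
    equal to its local SFE; returns the indices of converted particles."""
    rng = random.Random(seed)
    return [i for i, rho in enumerate(densities)
            if rng.random() < local_sfe(rho, alpha_sfe)]
```

With $\alpha_{\rm sfe}=0.02$, a particle at the reference density of $100\,M_{\odot}\,{\rm pc}^{-3}$ is converted with probability 0.02, while the efficiency grows with $\sqrt{\rho}$ and saturates at 1 in the densest regions.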
At this moment, the virial ratio (kinetic energy over the absolute value of the potential energy) of the entire system is more than 0.5 \citep[see Table 2 of ][]{2016ApJ...817....4F}. The entire system therefore expands with time. In clumps, however, stars are dominant, and as a consequence they are initially bound. Such clumps survive, and some of them evolve into more massive clusters via mergers \citep[see][for the details]{2015PASJ...67...59F}. The $N$-body simulations are performed using a sixth-order Hermite scheme \citep{2008NewA...13..498N} without gravitational softening. We perform up to 10 runs, changing the random seeds for the turbulence. Runs with different random seeds result in different shapes of the collapsing molecular clouds. The number of runs and the averaged total stellar mass of each model are summarized in Table \ref{tb:clumps}. \begin{table*} \begin{center} \caption{Initial Conditions of Molecular Clouds\label{tb:IC}} \begin{tabular}{lcccc}\hline \hline Model & $N_{\rm Run}$ & $M_{\rm MC} (10^3M_{\odot})$ & $R_{\rm MC}$ (pc) & $\sigma_{\rm MC}$ (km\,s$^{-1}$) \\ \hline m1M-d100 & 1 & 1000 & 13.3 & 19.6 \\ m400k-d100 & 3 & 400 & 10.0 & 14.4 \\ m100k-d100(-vir) & 5 & 100 & 6.2 & 9.12 \\ m40k-d100 & 10 & 40 & 4.6 & 6.70 \\ m400k-d10 & 3 & 400 & 21.0 & 9.92 \\ m100k-d10 & 6 & 100 & 13.3 & 6.23 \\ \hline \end{tabular} \medskip The first column indicates the name of the model. The second column gives the number of runs. Columns 3--5 give the mass, radius, and velocity dispersion ($\sigma_{\rm MC}^2=2E_{\rm k}/M_{\rm MC}$, where $E_{\rm k}$ is the kinetic energy) of the molecular cloud. For all models we set $E_{\rm k}/|E_{\rm p}|=1.0$, but for m100k-d100-vir $E_{\rm k}/|E_{\rm p}|=0.5$. \end{center} \end{table*} \subsection{Clump finding} At 0.5\,Myr and 2\,Myr from the beginning of the $N$-body simulations, we identify clumps in the snapshots and measure their masses and velocities. Hereafter, we set the beginning of the $N$-body simulation to be 0\,Myr.
We use the HOP method \citep{1998ApJ...498..137E} in AMUSE for the clump finding. HOP is a clump-finding algorithm based on peak densities; each particle is connected to its nearest densest particle, which makes it possible to separate multiple clumps that exist in a region denser than a threshold density. One of the basic parameters of HOP is the outer cut-off density (the minimum density threshold of clumps), $\delta_{\rm out}$. We set $\delta_{\rm out} = 4.5M_{\rm s}/(4\pi r_{\rm h}^{3})$, which is three times the half-mass density of the entire system. We first calculate the local density of each star using its $N_{\rm dens}$ nearest stars. We set $N_{\rm dens}=32$. Using the local densities, each star is connected, as a potential clump member, to the densest particle among its $N_{\rm hop}$ nearest stars. A dense region may include multiple clumps. In such a case, the clump members can be separated into multiple clumps using the saddle density threshold, $\delta _{\rm saddle}$. HOP finds multiple clumps using these connections and separates them with $\delta_{\rm saddle}$; we adopt $\delta _{\rm saddle}=8\delta_{\rm out}$. We also set the peak density, $\delta_{\rm peak}=10\delta_{\rm out}$; the peak density of each clump must be higher than $\delta_{\rm peak}$. We tested several combinations of these parameters and confirmed that the inter-clump velocity dispersion does not strongly depend on the choice of the parameters. We summarize the results with different parameter sets in Appendix A. If the mean density of a detected clump is less than $100\delta_{\rm out}$, we repeat the same procedure for that clump, because it may consist of several sub-clumps. We set the minimum number of stars for a clump to be 50, but for models m40k-d100 and m100k-d10, we reduce it to 32 because their total masses are smaller than those of the other models, and as a result, the number of clumps is also small.
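The three density thresholds are all tied to $\delta_{\rm out}$; a small helper (hypothetical function name) illustrating the values used here:

```python
import math

def hop_thresholds(M_s, r_h):
    """HOP density thresholds used for clump finding.
    delta_out = 4.5 M_s / (4 pi r_h^3), i.e. three times the
    half-mass density (M_s/2) / ((4/3) pi r_h^3); saddle and peak
    thresholds are fixed multiples of delta_out."""
    delta_out = 4.5 * M_s / (4.0 * math.pi * r_h**3)
    return {
        "outer": delta_out,
        "saddle": 8.0 * delta_out,
        "peak": 10.0 * delta_out,
    }
```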
The detected clumps have a mass--radius relation similar to that of observed clusters, as we confirmed in our previous papers \citep{2015PASJ...67...59F,2016ApJ...817....4F}. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{m400k_21pc_05Myr_map-crop.pdf}\\ \includegraphics[width=\columnwidth]{m400k_21pc_05Myr_pos_vel-crop.pdf} \end{center} \caption{Spatial and velocity distributions of stars at 0.5\,Myr for model m400k-d10, which has a mass and size distribution similar to Carina. Each color indicates a different detected clump. Gray dots indicate the other stars. Arrows in the top panel indicate the velocity vectors of the clumps. \label{fig:snapshots1}} \end{figure} \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{vel_disp_ngc2264-crop.pdf}\\ \includegraphics[width=\columnwidth]{pos_vel_ngc2264-crop.pdf} \end{center} \caption{Same as Fig.\ref{fig:snapshots1}, but for model m40k-d100 at 2\,Myr. This model has a mass and size distribution similar to NGC 2264. \label{fig:snapshots2}} \end{figure} \section{Results} We measure the center-of-mass velocity of each detected clump with respect to the center-of-mass velocity of all detected clumps. In the top panel of Fig. \ref{fig:snapshots1}, we present the spatial distribution of stars and detected clumps for one run of model m400k-d10 at 0.5\,Myr. The initial mass and density of this model are $4\times 10^5M_{\odot}$ and $10M_{\odot}$pc$^{-3}$, respectively. We also show the velocity vector of each clump in the figure. The averaged size (three-dimensional root-mean-square radius from the center-of-mass position) and the one-dimensional velocity dispersion among clumps for this model are $r_{\rm rms}=14\pm 1$\,pc and $\sigma_{\rm 1D}=2.9\pm0.3$\,km\,s$^{-1}$, respectively. These values are similar to those of the Carina Nebula, for which the two-dimensional root-mean-square radius and the one-dimensional velocity dispersion are 9.15\,pc and $2.35$\,km\,s$^{-1}$, respectively \citep{2018arXiv180702115K}.
Since the root-mean-square radius in our simulations is calculated in three dimensions, we scale the observed two-dimensional radius by $\sqrt{3/2}$, which gives 11.2\,pc. The entire system of this model is distributed within $\pm 20$\,pc (see the top panel of Fig. \ref{fig:snapshots1}). The Carina Nebula is also distributed on a similar scale \citep[see Fig. 13 in][]{2018arXiv180702115K}. The relation between the mass of the parental molecular cloud ($M_{\rm MC}$) and that of the most massive star cluster ($M_{\rm cl, max}$) was discussed in our previous study \citep{2015MNRAS.449..726F}, where we found that it follows: \begin{eqnarray} \frac{M_{\rm cl, max}}{1M_{\odot}}=0.2\left( \frac{M_{\rm MC}}{1M_{\odot}}\right)^{0.76}. \end{eqnarray} A similar relation is also found using radiation-hydrodynamic simulations with sink particles \citep{2018NatAs...2..725H}. Our results are roughly consistent with this relation. The mass of the most massive cluster (clump) is an important parameter for constraining the parental molecular cloud. The mass of the most massive clump of model m400k-d10 is $(3.3 \pm 1.9) \times 10^3M_{\odot}$. The most massive cluster in the Carina Nebula is Trumpler 14, which has a mass of $4.3^{+3.3}_{-1.5}\times 10^3M_{\odot}$ \citep{2010A&A...515A..26S}. The total stellar mass and the gas $+$ dust mass of the Carina Nebula are estimated to be $2.8\times10^4M_{\odot}$ \citep{2011A&A...530A..34P} and $2\times 10^5M_{\odot}$, respectively, which are similar to those of model m400k-d10, $(2.5\pm 0.8)\times 10^{4}M_{\odot}$ and $4\times 10^5M_{\odot}$, respectively (see Tables \ref{tb:clumps} and \ref{tb:obs}). In the bottom panel of Fig. \ref{fig:snapshots1}, we present the position vs. velocity plot of individual stars in the detected clumps for this model. The clumps are distributed within $|v_{x}|\aplt 10$\,km\,s$^{-1}$, which is consistent with the Carina Nebula \citep[see Fig.12 in][]{2018arXiv180702115K}. Here, we see that the entire system is expanding.
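Equation (2) can be evaluated directly; for a $4\times10^5\,M_{\odot}$ cloud it predicts a most massive cluster of $\approx 3.6\times10^3\,M_{\odot}$, close to the simulated $(3.3\pm1.9)\times10^3\,M_{\odot}$:

```python
def max_cluster_mass(M_MC):
    """Eq. (2): M_cl,max = 0.2 (M_MC / Msun)^0.76 Msun,
    the cloud-mass vs. most-massive-cluster relation."""
    return 0.2 * M_MC**0.76

# Parent cloud of model m400k-d10 (4e5 Msun) -> ~3.6e3 Msun
M_pred = max_cluster_mass(4e5)
```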
At 0.5 and 2\,Myr for each model, we calculate the average and standard deviation of the number ($N_{\rm cl}$), one-dimensional velocity dispersion ($\sigma_{\rm 1D}$), root-mean-square radius ($r_{\rm rms}$), and maximum mass ($M_{\rm cl, max}$) of the detected clumps among the same models with different random seeds, and these results are summarized in Table \ref{tb:clumps}. For comparison, we summarize these values for the Carina Nebula and NGC 2264 in Table \ref{tb:obs}. We note that the inter-clump velocity dispersion measured in our simulations does not strongly depend on the clump-finding algorithm, because it is comparable to the velocity dispersion of all individual stars in the same region (see Appendix A). While the velocity dispersion did not change much between 0.5 and 2\,Myr, the root-mean-square radius increased. This expansion is due to the gas expulsion: after we removed all gas particles, the virial ratio of the system is larger than $0.5$ \citep{2015MNRAS.449..726F,2015PASJ...67...59F,2016ApJ...817....4F}. The velocity dispersion among clumps depends on the initial conditions of the molecular clouds; a higher mass or density results in a larger velocity dispersion. We found no clear differences when we changed the initial virial ratio of the molecular clouds (see models m100k-d100 and m100k-d100-vir). We discuss this point in Section 4. In the top panel of Fig. \ref{fig:snapshots2}, we present the spatial distribution of clumps with their velocity vectors for one run of model m40k-d100, which has a size similar to the NGC 2264 region. In order to compare with the results of \citet{2018arXiv180702115K}, we also present the position vs. velocity plot for this model in the bottom panel of the figure. In this case, we see a clear velocity gradient, which indicates an expansion. Since the age of NGC 2264 is estimated to be $\sim 3$\,Myr \citep{2017A&A...599A..23V}, we compare the results of this model at 2\,Myr.
The one-dimensional velocity dispersion at 2\,Myr is $1.4 \pm 0.4$\,km\,s$^{-1}$, which is consistent with that of the NGC 2264 region (0.99\,km\,s$^{-1}$) \citep{2018arXiv180702115K}. The size ($r_{\rm rms}$) of this model is $3.2 \pm 1.5$\,pc at 2\,Myr, which is similar to that of NGC 2264 (2.53\,pc) \citep[see Table \ref{tb:obs} and Figure 13 of][]{2018arXiv180702115K}. Model m100k-d10 also has a velocity dispersion comparable with that of model m40k-d100, but its size is $6.9 \pm 4.9$\,pc at 2\,Myr, which is twice as large as that of model m40k-d100. We also compare the mass of the most massive cluster (S Mon) in NGC 2264 with that of the model. In order to obtain the mass of S Mon, we use the fraction of the number of samples summarized in Table 4 of \citet{2018arXiv180702115K}. According to the table, the number of samples for S Mon is 67, and the number of all samples in NGC 2264 is 516. If we assume that the fraction in the number of samples is the same as the mass fraction of S Mon, we estimate the mass of S Mon to be $\sim 150\,M_{\odot}$ from the total mass of NGC 2264 ($1100M_{\odot}$) \citep{2008A&A...487..557P}. On the other hand, the mass of the most massive clump for model m40k-d100 is $340\pm240 M_{\odot}$; the lower end of this range is comparable to the observed value. We, therefore, estimate that NGC 2264 formed in a dense molecular cloud ($100M_{\odot}$\,pc$^{-3}$, i.e., $\sim$1700\,cm$^{-3}$) with a mass of a few $10^4M_{\odot}$.
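The S Mon estimate is simple arithmetic under the stated assumption that the number fraction of sampled stars equals the mass fraction:

```python
def mass_from_sample_fraction(n_members, n_total, M_total):
    """Estimate a subcluster mass from its share of sampled stars,
    assuming the number fraction equals the mass fraction."""
    return M_total * n_members / n_total

# S Mon: 67 of 516 sampled stars, NGC 2264 total mass ~1100 Msun
M_smon = mass_from_sample_fraction(67, 516, 1100.0)
```

This gives $\approx 143\,M_{\odot}$, which the text rounds to $\sim 150\,M_{\odot}$.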
\begin{table*} \begin{center} \caption{Results of simulations \label{tb:clumps}} \begin{tabular}{lccccc}\hline \hline Model & $M_{\rm tot}(10^3M_{\odot})$ & $N_{\rm clump}$ & $\sigma_{\rm 1D}$(km\,s$^{-1}$) & $r_{\rm rms}$ (pc) & $M_{\rm cl,max}(10^3M_{\odot})$ \\ \hline & \multicolumn{5}{c}{0.5\,Myr} \\ m1M-d100 & $110$ & 151 & 5.1 & 8.3 & $7.0$\\ m400k-d100 & $31\pm 8 $ & $51 \pm 5$ & $4.0 \pm 0.9$ & $8.5 \pm 2.0$ & $3.7 \pm 3.2$ \\ m100k-d100 & $9.9 \pm 2.1$ & $8.8 \pm 4.1$ & $2.5 \pm 0.4$ & $2.4 \pm 1.1$ & $0.93 \pm 0.37$ \\ m100k-d100-vir & $12 \pm 3$ & $13 \pm 4$ & $2.6 \pm 0.3$ & $2.5 \pm 0.7$ & $1.1 \pm 0.9$ \\ m40k-d100 & $2.3\pm 0.5$ & $5.6 \pm 1.4$ & $1.6 \pm 0.3$ & $2.1 \pm 1.0$ & $0.47\pm 0.27$ \\ m400k-d10 & $25 \pm 8$ & $47 \pm 6$ & $2.9 \pm 0.3$ & $14 \pm 1$ & $3.3 \pm 1.9$ \\ m100k-d10 & $3.1 \pm 1.5$ & $7.2 \pm 3.4$ & $1.6\pm0.2$ & $7.1\pm 3.6$ & $0.51 \pm 0.39$ \\ \hline & \multicolumn{5}{c}{2\,Myr} \\ m1M-d100 & $110$ & 108 & 4.5 & 18 & $7.7$\\ m400k-d100 & $31\pm 8$ & $37\pm 9$ & $3.7 \pm 0.9$ & $17 \pm 5$ & $4.4 \pm 3.9$ \\ m100k-d100 & $9.9 \pm 2.1$ & $8.2 \pm 4.3$ & $2.0 \pm 0.5$ & $7.2 \pm 1.9$ & $1.5 \pm 1.1$ \\ m100k-d100-vir & $12 \pm 3$ & $13 \pm 5$ & $2.3 \pm 0.5$ & $7.7 \pm 1.8$ & $2.3 \pm 2.2$ \\ m40k-d100 & $2.3\pm 0.5$ & $4.4 \pm 0.4$ & $1.4 \pm 0.4$ & $3.2 \pm 1.5$ & $0.34 \pm0.24$ \\ m400k-d10 & $25 \pm 8$ & $44 \pm 12$ & $2.7 \pm 0.2$ & $16 \pm 0.4$ & $2.2 \pm 1.2$ \\ m100k-d10 & $3.1 \pm 1.5$ & $5.5 \pm 2.9$ & $1.5 \pm 0.9$ & $6.9 \pm 4.9 $ & $0.41 \pm 0.43$ \\ \hline \end{tabular} \medskip \end{center} \end{table*} \begin{table*} \begin{center} \caption{Observed star cluster complexes \label{tb:obs}} \begin{tabular}{lccccccc}\hline \hline Name & Age & $M_{\rm tot}(10^3M_{\odot})$ & $N_{\rm clump} $ & $\sigma_{\rm 1D}$(km\,s$^{-1}$) & $\sqrt{3/2}\,r_{\rm rms,2D}$ (pc) & $M_{\rm cl, max}(10^3M_{\odot})$ & Ref. 
\\ \hline Carina & 0.5 & 28 & 16 & 2.35 & 11.2 & 4.3 & (1), (2), (3)\\ NGC 2264 & 3 & 1.1 & 8 & 0.99 & 2.53 & 0.15 & (3), (4), (5) \\ \hline \end{tabular} \medskip (1) \citet{2010A&A...515A..26S}; (2) \citet{2011A&A...530A..34P}; (3) \citet{2018arXiv180702115K}; (4) \citet{2017A&A...599A..23V}; (5) \citet{2008A&A...487..557P} \end{center} \end{table*} In our method, we assumed instantaneous gas expulsion. In observed star cluster complexes, however, the gas mass is comparable to or larger than the stellar mass. In the Carina Nebula, for example, the estimated gas mass including dust is $2\times10^5M_{\odot}$ \citep{2011A&A...525A..92P}, which is an order of magnitude larger than the stellar mass, $2.8\times10^{4}M_{\odot}$ \citep{2011A&A...530A..34P}. We, therefore, may overestimate the inter-cluster velocity dispersion in our simulations. \section{Discussion} Which initial parameter determines the inter-cluster velocity dispersion? In our simulations, the velocity dispersion depends on the potential energy of the initial molecular cloud. In Fig. \ref{fig:E_vel_rel}, we present the relation between the potential energy of our initial molecular clouds and the inter-cluster velocity dispersion at 0.5\,Myr. Since the velocity dispersion does not change much by 2\,Myr, we fit a power-law function to this relation using a least-squares method and obtain $\sigma_{\rm 1D, 0.5Myr}=1.66(E_{\rm p}/E_{\rm p, min})^{0.21}$ (km\,s$^{-1}$), where $E_{\rm p, min}$ is the minimum $E_{\rm p}$ among our models, specifically that of model m40k-d100. In Fig. \ref{fig:cl_size}, we plot the relation between the initial size of the molecular cloud and the size of the resulting star cluster complexes at 0.5 and 2\,Myr. At 0.5\,Myr, the sizes of the complexes are correlated with those of the initial molecular clouds, but not at later times. This is because the expansion velocity of the complexes depends on the potential energy of the initial molecular cloud.
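The quoted power law can be obtained with an ordinary least-squares fit in log-log space; the sketch below uses synthetic data drawn from the quoted relation (the per-model values live in the tables), so it simply recovers the quoted coefficients:

```python
import numpy as np

def fit_powerlaw(x, y):
    """Least-squares fit of y = A * x**B in log-log space,
    as done for sigma_1D vs E_p/E_p,min."""
    B, logA = np.polyfit(np.log(x), np.log(y), 1)
    return np.exp(logA), B

# Synthetic check: data generated from sigma = 1.66 x^0.21
x = np.array([1.0, 3.0, 10.0, 30.0, 100.0])
y = 1.66 * x**0.21
A, B = fit_powerlaw(x, y)  # recovers A ~ 1.66, B ~ 0.21
```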
Even if the initial size of the molecular cloud is the same, the expansion velocity can differ between models with different densities (see models m1M-d100 and m100k-d10). In our study, we tested only initially spherical models. In the Orion A molecular cloud, however, stellar and proto-stellar clumps including the Orion Nebula Cluster are associated with a 50-pc scale filament \citep{2016AJ....151....5M,2018AJ....156...84K}. In such a region, the initial molecular cloud might have been cylindrical \citep{2008MNRAS.389.1556B}, or a cloud-cloud collision might have triggered the star cluster formation \citep{2018ApJ...859..166F}. For the large velocity dispersion of stars around the Integral Shaped Filament \citep{1987ApJ...312L..45B} associated with the Orion Nebula Cluster \citep{2016A&A...590A...2S}, a magnetic field may play an important role in ejecting stars from the filament via the ``slingshot" mechanism \citep{2017MNRAS.471.3590B,2018MNRAS.473.4890S}. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{E_vel_rel-crop.pdf} \end{center} \caption{Relation between the potential energy of the initial molecular clouds and the inter-clump velocity dispersion at 0.5\,Myr. The black line shows the result of a least-squares fit; $\sigma_{\rm 1D, 0.5Myr}=1.66(E_{\rm p}/E_{\rm p, min})^{0.21}$. Here, $E_{\rm p, min}$ is the potential energy of model m40k-d100. \label{fig:E_vel_rel}} \end{figure} \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{E_cl_size-crop.pdf} \end{center} \caption{Relation between the radius of the initial molecular clouds and the root-mean-square radius of the resulting star cluster complexes at 0.5\,Myr and 2\,Myr. \label{fig:cl_size}} \end{figure} \section{Summary} We performed a series of $N$-body simulations for the formation of star cluster complexes.
Following the method in \citet{2015MNRAS.449..726F}, we first performed SPH simulations of turbulent molecular clouds and then used the last snapshots to generate initial conditions for the $N$-body simulations, in which stars are distributed in clumpy and filamentary structures. The one-dimensional inter-clump velocity dispersion obtained from our simulations is $2.9\pm0.3$ and $1.4\pm0.4$\,km\,s$^{-1}$ for the Carina- and NGC 2264-like models, respectively, consistent with the values obtained from Gaia Data Release 2: 2.35 and 0.99\,km\,s$^{-1}$ for the Carina Nebula and NGC 2264, respectively. The simulated complexes expand with time. We also confirmed that the size and the mass of the most massive clump in these models are consistent with the observations. Our results suggest that the parental molecular cloud of NGC 2264 had a mass of $\sim 4\times 10^4M_{\odot}$ and that the Carina Nebula formed from a giant molecular cloud with a mass of $\sim 4\times10^5M_{\odot}$, but the cloud density for NGC 2264 is estimated to be higher than that of the Carina Nebula. The inter-cluster velocity dispersion in our simulations, however, tends to be larger than that of observed star cluster complexes. This may be because we assumed instantaneous gas expulsion, while observed star cluster complexes are still surrounded by molecular gas comparable to or more massive than the total stellar mass. \section*{Acknowledgments} The author thanks the referee, Richard Parker, for his useful comments. The author also thanks Jun Makino, Amelia Stutz, and Tjarda Boekholt for fruitful discussions. Numerical computations were carried out on the Cray XC30 and XC50 CPU clusters at the Center for Computational Astrophysics (CfCA) of the National Astronomical Observatory of Japan. The author was supported by The University of Tokyo Excellent Young Researcher Program. This work was supported by JSPS KAKENHI Grant Numbers 26800108 and 19H01933. \bibliographystyle{mnras}
\section{Introduction} In recent years, Deep Neural Networks (DNNs) have achieved remarkable performance in a wide range of fields and have been applied in industrial practice, including some security-critical fields, \textit{e.g.}, audio-visual processing \cite{liu2020re} and medical image processing \cite{chen2021uscl}. Accordingly, model security problems have gained much attention from the community. One of the well-known model security problems of DNNs is the adversarial attack \cite{tramer2018ensemble}, \cite{madry2018towards}, \cite{biggio2011support}, which has been extensively studied. The main idea is to add an imperceptible perturbation to a well-classified sample to mislead the model prediction during the testing phase. Recently, backdoor attacks \cite{gu2019badnets} have posed severe threats to model security due to their well-designed attack mechanisms, especially in safety-critical settings. Different from adversarial attacks, backdoor attacks modify the model through a carefully designed strategy during the training process. For example, the attacker may inject a small proportion of malicious training samples with trigger patterns before training. In this way, the DNN can be manipulated to have designated responses to inputs with specific triggers, while acting normally on benign samples \cite{gu2019badnets}. Backdoor attacks may happen at different stages of the model adoption pipeline \cite{li2020backdoor}. For example, when the victim trains a DNN model on a suspicious third-party dataset, the implanted poisoned backdoor data silently perform the attack. The defender under such a scenario thus has full access to the training process and the whole training data. Backdoor attacks happen even more frequently when the training process is uncontrollable by the victim, such as when adopting third-party models or training on a third-party platform.
Under such a setting, the defender is only given a pre-trained infected model and usually a small set of benign data as an auxiliary resource to help mitigate the backdoored model. \begin{figure}[t] \centering \includegraphics[width=0.8\linewidth]{Figure/clc.pdf} \caption{An illustration of the differences between the commonly studied Lipschitz condition and the proposed channel Lipschitz condition. We highlight the differences in the formula with red color. In a nutshell, the Lipschitz constant bounds the largest changing rate of the whole network function, while the channel Lipschitz constant bounds that of a specific output channel.} \label{fig:clc} \end{figure} To better understand the mechanism of backdoor attacks, in this work, we revisit a common characteristic of backdoor attacks, \textit{i.e.}, the ability of a small trigger to flip the outputs of benign inputs to the malicious target label. It is natural to relate such sensitivity to a high Lipschitz constant of the model. However, controlling the Lipschitz constant of the whole network function may be too strict a condition for achieving backdoor robustness. Since the backdoor vulnerability may come from the sensitivity of individual channels to the backdoor trigger, we instead evaluate the Lipschitz constant (strictly speaking, an upper bound of the Lipschitz constant) of each channel to identify and prune the sensitive channels. Specifically, we view the mapping from the input images to each channel output as an independent function, as shown in \cref{fig:clc}. The Lipschitz constant is considered for each channel, which measures the sensitivity of the channel to the input. Since channels that detect backdoor triggers should be sensitive to specific perturbations (\textit{i.e.}, the trigger) on the inputs, \textbf{we argue that backdoor-related channels should have a high Lipschitz constant compared to normal channels}.
To demonstrate this, we track the forward propagation process of the same inputs with and without the trigger to see how the trigger changes the activation of these channels. We provide empirical evidence of the strong correlation between the Channel Lipschitz Constant (CLC) and the trigger-activated changes. To be more specific, we show that channels with large activation changes after attaching the trigger usually have high Lipschitz constants. Intuitively, pruning those channels may mitigate the changes brought by the backdoor trigger. To this end, we propose a novel \emph{Channel Lipschitzness based Pruning} (CLP), which prunes the channels with high Lipschitz constants to recover the model. Since the CLC can be directly derived from the weight matrices of the model, this method requires no access to any training data. Unlike previous methods that are designed for different specific threat models, \textbf{CLP's data-free property ensures a high degree of flexibility for its practical application, as long as the model parameters are accessible}. Besides, CLP is very fast and robust to the choice of its only hyperparameter, \textit{i.e.}, the relative pruning threshold $u$. Finally, we show that CLP can effectively reduce the attack success rate (ASR) against different advanced attacks with only a negligible drop in the clean accuracy. To summarize, our contributions are twofold: 1. We reveal the connection between backdoor behaviors and channels with high Lipschitz constants. This conclusion generalizes across various backdoor attacks, shedding light on the development of backdoor defenses. 2. Inspired by the above observations, we propose CLP, a data-free backdoor removal method. Even without data, it achieves state-of-the-art (SOTA) performance among existing defense methods, which require a certain amount of benign training data.
\section{Related work} \subsection{Backdoor Attack} BadNets \cite{gu2019badnets} is one of the earliest works that perform backdoor attacks on DNNs. The authors injected several kinds of image patterns, referred to as \emph{triggers}, into some samples, and modified the labels of these samples to the desired malicious label. DNNs trained on the poisoned dataset were thus implanted with a backdoor. After that, more covert and advanced backdoor designs were proposed \cite{turner2019label,xue2020one,liu2020reflection}. To prevent defenders from reversely generating possible triggers through the model, dynamic backdoor attacks such as the Input-Aware Backdoor (IAB) \cite{nguyen2020input}, Warping-based Backdoor (WaNet) \cite{nguyen2021wanet} and Sample Specific Backdoor Attack (SSBA) \cite{li2021invisible} were proposed. They generate a unique trigger for every single input, which makes the defense even more difficult. These attacks can be classified as poisoning based backdoor attacks. Under some settings, it is also possible for the attackers to directly modify the architectures and parameters of a DNN, without poisoning any training data, which is known as a \emph{non-poisoning based backdoor attack}. For example, in the Trojan attack \cite{Trojan}, the triggers were optimized to increase the activation of some carefully chosen neurons, and then the network was retrained to finish the poisoning. In targeted bit trojan \cite{rakin2020tbt} and ProFlip \cite{chen2021proflip}, the authors chose vulnerable neurons and applied bit-flip techniques to perform backdoor attacks. Note that such bit-flip attacks only occur in the deployment phase of DNNs, and their ASR does not exceed that of traditional attacks. Therefore, this type of attack will not be discussed in this paper.
\subsection{Backdoor Defense} \subsubsection{Training Stage Defenses.} The \emph{training stage defenses} aim at suppressing the effect of implanted backdoor triggers or filtering out the poisoned data during the training stage, with full access to the training process. Exploiting the different distributions of poisoned and clean data in feature space \cite{huang2021backdoor}, several methods were proposed to filter out the poisoned data, including robust statistics \cite{SPECTRE,tran2018spectral}, input perturbation techniques \cite{gao2019strip,doan2020februus} and semi-supervised training \cite{huang2021backdoor}. In addition, stronger data augmentation techniques \cite{borgnia2021strong} were proposed to suppress the effect of backdoors, such as CutMix \cite{devries2017improved}, CutOut \cite{devries2017improved} and MaxUp \cite{gong2020maxup}. Moreover, differential privacy \cite{du2019robust} and randomized smoothing \cite{rosenfeld2020certified} provide certified methods to defend against backdoor attacks. \subsubsection{Model Post-processing Defenses.} The \emph{model post-processing defenses} mainly focus on eliminating the backdoor threat in a suspicious DNN model. The first effective defense against backdoor attacks, combining neuron pruning and fine-tuning, was proposed in \cite{liu2018fine}. Inspired by the mechanism of fixed-trigger backdoor attacks, Neural Cleanse (NC) \cite{wang2019neural} obtained reverse-engineered triggers and detoxified the model with this knowledge. Some other fine-tuning based defenses used knowledge distillation \cite{hinton2015distilling} in their pipelines, such as backdoor knowledge distillation \cite{yoshida2020disabling} and Neural Attention Distillation (NAD) \cite{li2020neural}. Mode Connectivity Repair \cite{zhao2019bridging} was also explored to eliminate the backdoor effect.
In addition to fine-tuning based defenses, $L_{\infty}$\footnote{Refers to the $L_{\infty}$ (infinity) norm.} Neuron Pruning was proposed in \cite{xu2020defending} to filter out the neurons with high $L_{\infty}$ norms of activations. Recently, Adversarial Neuron Pruning (ANP) \cite{wu2021adversarial} detected and pruned sensitive neurons with adversarial perturbations, and achieved considerable defense performance. However, it needs careful tuning of hyperparameters and requires access to a certain number of benign samples. Unlike those model post-processing methods, the proposed CLP achieves superior defense performance without any benign data, and is robust to the choice of its \textbf{only} hyperparameter. Moreover, we hope that our work can enlighten the study of the effectiveness of the Lipschitz constant in backdoor learning, and provide a new perspective on backdoor attack and defense. \section{Preliminaries} \subsection{Notations} In this paper, we consider a classification problem with $C$ classes. Suppose that $\mathcal{D}=\{(\vect{x}_i, y_i)\}_{i=1}^{N}\subseteq\mathcal{X}\times\mathcal{Y}$ is the original training set, which contains $N$ i.i.d. sample images $\vect{x}_i \in \mathbb{R}^{d_c \times d_h \times d_w}$ and the corresponding labels $y_i \in \{1,2,...,C\}$. Here, we denote by $d_c$, $d_h$ and $d_w$ the number of channels, the height and the width of the image, respectively. It is clear that $d_c=3$ for RGB images. We consider a neural network $F(x; \theta)$ with $L$ layers: \begin{equation}\nonumber F(x; \theta)=f^{(L)} \circ \phi \circ f^{(L-1)} \circ \cdots \circ \phi\circ f^{(1)}, \end{equation} where $f^{(l)}$ is the linear function (\emph{e.g.,} convolution) in the $l^{th}$ layer with $1\leq l\leq L$, and $\phi$ is a non-linear activation function applied element-wise. For simplicity, we denote $F(x; \theta)$ as $F(x)$ or $F$.
Let $\tens{W}^{(l)}\in \mathbb{R}^{d_{c'}^{(l)}\times d_c^{(l)} \times d_h^{(l)} \times d_w^{(l)}}$ be the weight tensor of the $l^{th}$ convolutional layer, where $d_{c'}^{(l)}, d_{c}^{(l)}, d_{h}^{(l)}$ and $d_{w}^{(l)}$ are the number of output and input channels, the height and the width of the convolutional kernel, respectively. To perform pruning, we apply a mask $\tens{M}^{(l)}\in \{0, 1\}^{d_{c'}^{(l)}\times d_c^{(l)} \times d_h^{(l)} \times d_w^{(l)}}$, initialized with $\tens{M}^{(l)}_k=\mathbf{1}_{d_{c}^{(l)} \times d_h^{(l)} \times d_w^{(l)}}$ in each layer, where $\mathbf{1}_{d_c^{(l)} \times d_h^{(l)} \times d_w^{(l)}}$ denotes an all-one tensor. Pruning the network refers to selecting a collection of indices $\mathcal{I}=\{(l, k)_i\}_{i=1}^{I}$ and setting $\tens{M}^{(l)}_k=\mathbf{0}_{d_c^{(l)} \times d_h^{(l)} \times d_w^{(l)}}$ if $(l,k)\in \mathcal{I}$. The pruned network $F_{-\mathcal{I}}$ has the same architecture as $F$, with all the weight tensors of the convolutional layers set to $\tens{W}^{(l)}\odot \tens{M}^{(l)}$, where $\odot$ denotes the Hadamard product \cite{horn2012matrix}. The backdoor poisoning attack involves changing the input images and the corresponding labels\footnote{The labels remain unchanged in clean label attacks \cite{turner2019label}.} on a subset of the original training set $\mathcal{D}_p\subseteq \mathcal{D}$. We denote the poisoning function applied to the input images by $\delta(x)$. The ratio $\rho = \frac{\vert \mathcal{D}_p \vert}{\vert \mathcal{D} \vert}$ is defined as the \emph{poisoning rate}.
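The masking formalism above can be sketched in a few lines of NumPy (a stand-in for the actual deep learning framework, which is not specified here); pruning channel $k$ of layer $l$ zeroes the corresponding slice of the weight tensor:

```python
import numpy as np

def prune_channels(weights, indices):
    """Zero out pruned output channels: W^(l) <- W^(l) * M^(l),
    where M^(l)_k = 0 for every (l, k) in the pruning set I."""
    masks = [np.ones_like(W) for W in weights]
    for l, k in indices:
        masks[l][k] = 0.0
    return [W * M for W, M in zip(weights, masks)]

# Two toy conv layers with shape (out_ch, in_ch, kh, kw)
rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 3, 3, 3)), rng.normal(size=(8, 4, 3, 3))]
pruned = prune_channels(weights, [(0, 2), (1, 5)])
```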
\subsection{$L$-Lipschitz Function} A function $g:\mathbb{R}^{n_1}\xrightarrow[]{}\mathbb{R}^{n_2}$ is \emph{$L$-Lipschitz continuous} \cite{armijo1966minimization} in $\mathcal{X}\subseteq\mathbb{R}^{n_1}$ under the $p$-norm, if there exists a constant $L\geq0$ such that \begin{equation} \label{eq:lips_g} \Vert g(\vect{x})-g(\vect{x}') \Vert_p \leq L\|\vect{x}-\vect{x}'\|_p, \quad \forall\ \vect{x}, \vect{x}' \in \mathcal{X}. \end{equation} The smallest $L$ guaranteeing equation \eqref{eq:lips_g} is called the \emph{Lipschitz constant} of $g$, denoted by $\Vert g \Vert_{\rm Lip}$. For simplicity, we choose $p=2$ in this paper. Viewing $\vect{x}'$ as a perturbation of $\vect{x}$, the Lipschitz constant $\Vert g \Vert_{\rm Lip}$ can be regarded as the maximum ratio between the resulting perturbation in the output space and the source perturbation in the input space. Thus, it is commonly used for measuring the sensitivity of a function to input perturbations. \subsection{Lipschitz Constant in Neural Networks} Since the Lipschitz constant of a composition is bounded by the product of the Lipschitz constants of its components, we can control the Lipschitz constant of the whole network as follows: \begin{align} \Vert F \Vert_{\rm Lip} &= \Vert f^{(L)} \circ \phi \circ f^{(L-1)} \circ \cdots \circ \phi\circ f^{(1)} \Vert_{\rm Lip} \nonumber\\ &\leq \Vert f^{(L)} \Vert_{\rm Lip} \cdot \Vert \phi\Vert_{\rm Lip} \cdot \Vert f^{(L-1)} \Vert_{\rm Lip} \cdots \Vert \phi\Vert_{\rm Lip}\cdot \Vert f^{(1)} \Vert_{\rm Lip}. \end{align} Most of the commonly used activation functions are $L$-Lipschitz (\emph{e.g.,} ReLU, LeakyReLU, Sigmoid, Tanh, ELU, SeLU). In particular, we have $L=1$ for the ReLU function, which is used in this paper. Note that $f^{(l)}(\vect{x}^{(l)})=\matr{W}^{(l)} \vect{x}^{(l)}+\vect{b}^{(l)}$.
It follows that \begin{align} \Vert F \Vert_{\rm Lip} \leq \prod_{l=1}^{L} \Vert f^{(l)} \Vert_{\rm Lip} = \prod_{l=1}^{L} \max_{\Vert \vect{z} \Vert_{2} \neq 0} \frac{\Vert \matr{W}^{(l)}\vect{z} \Vert_{2}}{\Vert \vect{z} \Vert_{2}} = \prod_{l=1}^{L} \sigma(\matr{W}^{(l)}), \end{align} where $\sigma(\matr{W}^{(l)}) = \max_{\Vert \vect{z} \Vert_{2} \neq 0} \frac{\Vert \matr{W}^{(l)}\vect{z} \Vert_{2}}{\Vert \vect{z} \Vert_{2}}$ is the spectral norm. \section{Methodology} \begin{figure}[t] \centering \includegraphics[width=0.8\linewidth]{Figure/tac.pdf} \caption{A simple diagram for calculating TAC in the $l^{th}$ layer. As illustrated, the word TAC stands for the activation differences of the feature maps before and after attaching the trigger to the images, and $k$ is the number of feature maps.} \label{fig:tac} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{Figure/clc_tac.pdf} \caption{A scatter plot to demonstrate the relationship between UCLC and TAC. As shown in the figure, we observe a strong correlation between the two indices.} \label{fig:clc_tac} \end{figure} \subsection{Channel Lipschitz Constant} We denote the function from the input to the $k^{th}$ channel of $l^{th}$ layer by: \begin{equation} F^{(l)}_k = f_{k}^{(l)} \circ \phi \circ f^{(l-1)} \cdots \circ \phi \circ f^{(1)}, \nonumber \end{equation} where $f^{(l)}_k$ is the $k^{th}$ output channel function of the $l^{th}$ layer. In particular, if \begin{equation} f^{(l)}(\vect{x}) = \matr{W}^{(l)}\vect{x} + \vect{b}^{(l)}, \quad \matr{W}^{(l)}\in \mathbb{R}^{d_{\rm out}^{(l)}\times d_{\rm in}^{(l)}}, \nonumber \end{equation} then $f^{(l)}_{k}(\vect{x}) = \vect{w}^{(l)}_k\vect{x} + b^{(l)}_{k}$, where $\vect{w}^{(l)}_k\in \mathbb{R}^{1 \times d_{in}^{(l)}}$ is the $k^{th}$ row of $\matr{W}^{(l)}$. 
It follows that the \emph{channel Lipschitz constant} (CLC) satisfies \begin{equation} \Vert F^{(l)}_k \Vert_{\rm Lip} \leq \|\vect{w}^{(l)}_k\|_{2} \prod_{i=1}^{l-1} \sigma(\matr{W}^{(i)}). \nonumber \end{equation} We refer to the right side of the above inequality as the \emph{Upper bound of the Channel Lipschitz Constant} (UCLC). In particular, convolution is a special linear transformation, with the weight matrix $\matr{W}^{(l)}$ being a doubly block-Toeplitz (DBT) form of the weight tensor $\tens{W}^{(l)}$. To calculate the exact spectral norm of a convolution operation, one should first convert the weight tensor into a DBT weight matrix. However, calculating the DBT matrix is time-consuming and memory-expensive, especially when the number of channels is large. A much simpler alternative is to reshape the weight tensor into a matrix form and use the spectral norm of the reshaped matrix as an approximation to the original spectral norm. This approximation has been adopted in previous research \cite{yoshida2017spectral}. In our work, for simplicity, we calculate the spectral norm using the reshaped kernel matrix $W^{(l)}_k\in\mathbb{R}^{d^{(l)}_c\times (d^{(l)}_hd^{(l)}_w)}$, which also shows acceptable results in our experiments. \subsection{Trigger-activated Change} In order to study the extent to which these channels are related to the backdoor behaviors, we define the \emph{Trigger-activated Change} (TAC). Specifically, we first train a ResNet-18 with BadNets on CIFAR-10, and track the forward propagation of the same inputs with and without the trigger. TAC is defined as the average difference of the $k^{th}$ channel of the $l^{th}$ layer over test samples $\vect{x}$ in a dataset $\mathcal{D}$: \begin{equation} TAC^{(l)}_k(\mathcal{D}) = \frac{1}{\vert \mathcal{D} \vert} \sum_{\vect{x}\in\mathcal{D}} \Vert F^{(l)}_k(\vect{x}) - F^{(l)}_k(\delta(\vect{x})) \Vert_2, \nonumber \end{equation} where $\delta(\cdot)$ is the poisoning function. 
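The TAC definition can be sketched in a few lines of numpy. The channel map, additive trigger, and dataset below are hypothetical toy stand-ins for $F^{(l)}_k$, $\delta(\cdot)$ and $\mathcal{D}$, chosen only to make the sketch runnable:

```python
import numpy as np

def tac(channel_fn, poison_fn, dataset):
    # TAC = (1/|D|) * sum_x || F_k(x) - F_k(delta(x)) ||_2
    diffs = [np.linalg.norm(channel_fn(x) - channel_fn(poison_fn(x)))
             for x in dataset]
    return float(np.mean(diffs))

rng = np.random.default_rng(1)
W = rng.standard_normal((4, 8))
channel_fn = lambda x: np.maximum(W @ x, 0.0)   # toy "channel" activation map
trigger = np.zeros(8); trigger[0] = 5.0          # toy additive trigger pattern
poison_fn = lambda x: x + trigger
data = [rng.standard_normal(8) for _ in range(16)]
score = tac(channel_fn, poison_fn, data)
```

A channel completely insensitive to the trigger yields a TAC of zero, while trigger-sensitive channels score higher.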
A more detailed illustration of this quantity is shown in \cref{fig:tac}. TAC measures the change of the activation before and after the trigger is attached to the input image. Its magnitude reflects the effect the trigger has on a given channel. A higher TAC indicates higher sensitivity of the channel to the trigger. Note that TAC is proposed to study the backdoor behavior, but it cannot be used for defense. This is because the calculation of TAC requires access to the trigger pattern, which defenders generally do not have. \subsection{Correlation between CLC and TAC} A scatter plot of TAC and UCLC for each layer under BadNets (All to One) \cite{gu2019badnets} is shown in \cref{fig:clc_tac}, from which we can observe a high correlation between them. In particular, there are some outlying channels with extremely high TAC in some layers, indicating that they are sensitive to the trigger. Hence, it is reasonable to consider them as potential backdoor channels. As expected, most of these channels have abnormally high UCLC. Recall that TAC is inaccessible to a defender, but UCLC can be directly calculated from the weight matrices of the given model. Hence, we use UCLC as an alternative index to detect potential backdoor channels. We will show in \cref{sec: exp} that pruning these high-UCLC channels significantly reduces the backdoor ASR. \subsection{Special Case in CNN with Batch Normalization} \emph{Batch Normalization} (BN) \cite{ioffe2015batch} is adopted in modern neural networks to stabilize the training process and make the optimization landscape much smoother. It normalizes the batch inputs of each channel and adjusts the mean and variance through trainable parameters. BN is usually placed after the convolution and before the non-linear function. Note that BN is also a linear transformation, and can be merged with the convolution into a single matrix-matrix multiplication. 
In this paper, we view a Conv-BN block as one linear layer, and the UCLC is calculated based on the composed weight matrix. \subsection{Channel Lipschitzness based Pruning} \label{sec:CLP} Based on the above observations, it is natural to think of removing the high-UCLC channels to recover the model. On this basis, we propose the Channel Lipschitzness based Pruning (CLP), which calculates the UCLC for the channels in each layer, and prunes the channels with UCLC larger than a pre-defined threshold within the layer. Note that in the same layer, all channels share the same cumulative product term, which is the Lipschitz upper bound of the previous layers. Hence, a much simpler way to compare CLC within a particular layer is to directly compare $\sigma(\matr{W}^{(l)}_k)$. The overall procedure is shown in \cref{alg:clp}. Determining an appropriate threshold is crucial to the performance of this method. In this work, we simply set the threshold for the $l^{th}$ layer as $\mu^{(l)} + u*s^{(l)}$, where $\mu^{(l)}=\frac{1}{K}\sum_{k=1}^K \sigma(\matr{W}^{(l)}_k)$ and $s^{(l)}=\sqrt{\frac{1}{K}\sum_{k=1}^K (\sigma(\matr{W}^{(l)}_k) - \mu^{(l)})^2}$ are the mean and the standard deviation of the $l^{th}$ layer index set $\{\sigma(\matr{W}^{(l)}_k): k=1, 2, \dots, K\}$, and $u$ is the only hyperparameter of the CLP. This threshold corresponds to the common outlier cutoff for a standard Gaussian distribution. We find this simple setting works well in our experiments. 
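The per-layer thresholding step can be sketched as follows. The kernel shapes, the planted outlier channel, and the reshaping convention are illustrative assumptions for this minimal numpy sketch:

```python
import numpy as np

def clp_prune_indices(conv_kernels, u):
    # conv_kernels: one tensor of shape (K, c, h, w) per layer.
    # A channel (l, k) is flagged when the spectral norm of its reshaped
    # kernel exceeds mu^(l) + u * s^(l) within its own layer.
    pruned = set()
    for l, kernel in enumerate(conv_kernels, start=1):
        sigmas = np.array([np.linalg.norm(ch.reshape(ch.shape[0], -1), ord=2)
                           for ch in kernel])      # one sigma per channel k
        mu, s = sigmas.mean(), sigmas.std()
        pruned |= {(l, k) for k, sig in enumerate(sigmas) if sig > mu + u * s}
    return pruned

rng = np.random.default_rng(2)
kernel = rng.standard_normal((32, 3, 3, 3))
kernel[5] *= 20.0                      # plant one abnormally large channel
to_prune = clp_prune_indices([kernel], u=3.0)
```

Because all channels in a layer share the cumulative product term, only the per-channel $\sigma(\matr{W}^{(l)}_k)$ values need to be compared, as in the sketch.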
\begin{algorithm}[t] \scriptsize \caption{Channel Lipschitzness based Pruning} \LinesNumbered \KwIn{$L$-layer neural network function $F^{(L)}$ with a set of convolution weight tensors $\{\tens{W}^{(l)}: l=1,2,\dots,L\}$, threshold hyperparameter $u$} \KwOut{A pruned network} $\mathcal{I} := \emptyset$ \\ \For{l = 1 to L}{ \For{k = 1 to K}{ $\matr{W}^{(l)}_k := \operatorname{ReshapeToMatrix}(\tens{W}^{(l)}_k)$\\ $\sigma^{(l)}_k := \sigma(\matr{W}^{(l)}_k)$ } $\mu^{(l)} :=\frac{1}{K}\sum_{k=1}^K \sigma^{(l)}_k$ \\ $s^{(l)} :=\sqrt{\frac{1}{K}\sum_{k=1}^K (\sigma^{(l)}_k - \mu^{(l)})^2}$ \\ $\mathcal{I}^{(l)} := \{(l, k): \sigma^{(l)}_k > \mu^{(l)} + u*s^{(l)}\}$ \\ $\mathcal{I} := \mathcal{I} \cup \mathcal{I}^{(l)}$ } return $F^{(L)}_{-\mathcal{I}}$ \label{alg:clp} \end{algorithm} \section{Experiments} \label{sec: exp} \subsection{Experimental Settings} \subsubsection{Attack Settings.} We test the proposed CLP against a variety of well-known attack methods, \textit{i.e.}, BadNets \cite{gu2019badnets}, Clean Label Attack \cite{turner2019label}, Trojan Attack \cite{Trojan}, Blended Backdoor Attack \cite{chen2017targeted}, WaNet \cite{nguyen2021wanet}, IAB \cite{nguyen2020input} and Sample Specific Backdoor Attack (SSBA) \cite{li2021invisible}. The attacks are performed on CIFAR-10 \cite{krizhevsky2009learning} and Tiny ImageNet \cite{le2015tiny} using ResNet-18 \cite{he2016deep}. For BadNets, we test both the All-to-All (BadA2A) attack and the All-to-One (BadA2O) attack, meaning that the attack target labels $y_t$ are set to all labels by $y_t=(y+1) \% C$ (\% denotes the modulo operation) or to one particular label $y_t=C_t$. Due to the image size requirement of SSBA, its corresponding experiments are only conducted on Tiny ImageNet. We use $\sim 95\%$ of the training data to train the backdoored model. The remaining $5\%$ is split into $4\%$ of validation data and $1\%$ of benign training data used to perform the other defenses. The trigger size is $3 \times 3$ for CIFAR-10 and $5 \times 5$ for Tiny ImageNet. 
The poison label is set to the $0^{th}$ class, and the poisoning rate is set to $10\%$ by default. We use SGD \cite{ruder2016overview} as the base optimizer, and train the backdoored model with learning rate 0.1, momentum 0.9 and batch size 128 for 150 epochs on CIFAR-10, and with batch size 64 for 100 epochs on Tiny ImageNet. We use a cosine scheduler to adjust the learning rate. All the experiments are conducted with the PyTorch \cite{torch} framework. \subsubsection{Defense Settings.} We compare our approach with the commonly used model repairing methods, \textit{i.e.}, FT, FP \cite{liu2018fine}, NAD \cite{li2020neural} and the SOTA neuron pruning strategy ANP \cite{wu2021adversarial}. All defense methods are allowed to access $1\%$ of the benign training data. Note that \textbf{no data} are used in CLP. The fine-tuning based methods use a default training setting with batch size 128 and learning rate 0.005 for 20 epochs. We adjust the hyperparameters, including the pruning ratio in fine-pruning \cite{liu2018fine}, the attentional weight in NAD \cite{li2020neural}, and the pruning threshold in ANP \cite{wu2021adversarial}, to obtain their best performance as instructed by their original papers. The CLP hyperparameter $u$ is set to 3 on CIFAR-10 and 5 on Tiny ImageNet by default. A further study on the effect of this hyperparameter is conducted in \cref{sec:abl}. \subsubsection{Evaluation Metric.} The evaluation of the model includes the Accuracy on Clean data (ACC), which measures the performance on benign data, and the Attack Success Rate (ASR), which measures the performance on the backdoored data. Note that the ASR is the ratio of poisoned samples that are \textbf{misclassified} as the target label, and it is calculated using the backdoored samples whose ground-truth labels do not belong to the target attack class. In a nutshell, a successful defense should achieve a low ASR without much degradation of the ACC. 
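The ASR computation described above can be written compactly; the predictions and labels below are made-up toy values, not experimental results:

```python
import numpy as np

def attack_success_rate(preds, labels, target):
    # ASR: among poisoned samples whose ground truth is NOT the target
    # class, the fraction that the model misclassifies as the target.
    preds, labels = np.asarray(preds), np.asarray(labels)
    non_target = labels != target
    return float(np.mean(preds[non_target] == target))

preds  = [0, 0, 1, 0, 2]   # model outputs on poisoned inputs
labels = [3, 0, 1, 2, 2]   # ground-truth labels; attack target is class 0
asr = attack_success_rate(preds, labels, target=0)  # 2 of 4 non-target -> 0.5
```

Excluding samples whose true label already equals the target avoids counting correct classifications as attack successes.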
\subsection{Experimental Results} \begin{table*}[htb] \centering \tiny \caption{Performance evaluation of the proposed CLP without data and 4 other defense methods with 500 benign data against seven mainstream attacks on CIFAR-10 with ResNet-18. Higher ACC and Lower ASR are preferable, and the best results are boldfaced. $\downarrow$ denotes the drop rate on average.} \begin{tabular}{c|c|cc|cc|cc|cc|cc|cc} \hline Trigger & \multirow{2}{*}{Attacks} & \multicolumn{2}{c|}{Backdoored} & \multicolumn{2}{c|}{FT} & \multicolumn{2}{c|}{FP\cite{liu2018fine}} & \multicolumn{2}{c|}{NAD\cite{li2020neural}} & \multicolumn{2}{c|}{ANP\cite{wu2021adversarial}} & \multicolumn{2}{c}{CLP(ours)} \cr Type & & ACC & ASR & ACC & ASR & ACC & ASR & ACC & ASR & ACC & ASR & ACC & ASR \cr \hline \hline \multirow{5}{*}{Static} & BadA2O \cite{gu2019badnets} & 93.86 & 100.00 & 92.22 & 2.16 & 92.18 & 2.97 & 91.67 & 5.40 & 91.67 & 5.40 & \bf{93.46} & \bf{1.38} \cr & BadA2A & 94.60 & 93.89 & 92.03 & 60.76 & 91.75 & 66.82 & 92.86 & 1.33 & 90.29 & 86.22 & \bf{93.69} & \bf{1.06} \cr & Trojan \cite{Trojan} & 94.06 & 100.00 & 92.58 & 99.99 & 90.78 & 86.43 & 92.13 & 5.76 & 93.44 & 8.11 & \bf{93.54} & \bf{2.06} \cr & CLA \cite{turner2019label} & 93.14 & 100.00 & 91.86 & 0.39 & 91.02 & 93.21 & \bf{92.46} & \bf{0.44} & 91.13 & 11.76 & 91.89 & 2.84 \cr & Blended \cite{chen2017targeted} & 94.17 & 99.62 & 93.90 & 70.27 & 90.92 & 3.24 & 92.72 & \bf{1.61} & 93.66 & 5.03 & \bf{94.07} & 1.90 \cr \hline \multirow{2}{*}{Dynamic} & IAB \cite{nguyen2020input} & 93.87 & 97.91 & 91.78 & \bf{9.52} & 87.04 & 21.33 & 93.52 & 10.61 & \bf{93.52} & 10.61 & 92.78 & 9.88 \cr & WaNet \cite{nguyen2021wanet} & 94.50 & 99.67 & 92.93 & 9.37 & 92.07 & 1.03 & 94.12 & 0.51 & \bf{94.12} & \bf{0.51} & 94.06 & 0.56 \cr \hline \hline & Average & 94.03 & 98.72 & 92.47 & 36.07 & 90.82 & 39.29 & 92.78 & 4.30 & 92.54 & 18.23 & \bf{93.36} & \bf{2.81} \cr & Drop & $\downarrow$ 0.00 & $\downarrow$ 0.00 & $\downarrow$ 1.56 & $\downarrow$ 62.66 & 
$\downarrow$ 3.21 & $\downarrow$ 59.43 & $\downarrow$ 1.25 & $\downarrow$ 94.42 & $\downarrow$ 1.49 & $\downarrow$ 80.49 & $\downarrow$ \bf{0.67} & $\downarrow$ \bf{95.91} \cr \hline \end{tabular} \label{tab:cifar10} \end{table*} \begin{table*}[htb] \centering \tiny \caption{Performance evaluation of the proposed CLP without data and 4 other defense methods with 1,000 benign data against seven mainstream attacks on Tiny ImageNet with ResNet-18. Higher ACC and Lower ASR are preferable, and the best results are boldfaced. $\downarrow$ denotes the drop rate on average.} \begin{tabular}{c|c|cc|cc|cc|cc|cc|cc} \hline Trigger & \multirow{2}{*}{Attacks} & \multicolumn{2}{c|}{Backdoored} & \multicolumn{2}{c|}{FT} & \multicolumn{2}{c|}{FP\cite{liu2018fine}} & \multicolumn{2}{c|}{NAD\cite{li2020neural}} & \multicolumn{2}{c|}{ANP\cite{wu2021adversarial}} & \multicolumn{2}{c}{CLP(ours)} \cr Type & & ACC & ASR & ACC & ASR & ACC & ASR & ACC & ASR & ACC & ASR & ACC & ASR \cr \hline \hline \multirow{5}{*}{Static} & BadA2O \cite{gu2019badnets} & 62.99 & 99.89 & 56.97 & 99.26 & 57.43 & 57.42 & 61.63 & 0.85 & \bf{63.05} & 3.93 & 62.94 & \bf{0.61} \cr & Trojan \cite{Trojan} & 64.09 & 99.99 & 62.85 & 3.45 & 60.43 & 99.59 & 61.12 & 95.05 & 62.68 & 10.43 & \bf{63.86} & \bf{0.77} \cr & CLA \cite{turner2019label} & 64.94 & 84.74 & 53.59 & 27.32 & 61.18 & 82.72 & 59.75 & 30.65 & 60.98 & 15.69 & \bf{64.71} & \bf{0.41} \cr & Blended \cite{chen2017targeted} & 63.30 & 99.70 & 59.98 & 1.02 & 59.84 & 62.17 & 61.94 & 11.55 & 62.49 & \bf{0.61} & \bf{63.12} & 0.74 \cr \hline \multirow{3}{*}{Dynamic} & IAB \cite{nguyen2020input} & \bf{61.40} & 98.28 & 58.35 & 89.89 & 57.03 & \bf{0.21} & 58.17 & 68.43 & 61.39 & 4.67 & 59.09 & 8.70 \cr & WaNet \cite{nguyen2021wanet} & 60.76 & 99.92 & 57.96 & 97.45 & 53.86 & 23.70 & 56.42 & 87.79 & 54.82 & 86.98 & \bf{59.52} & \bf{1.57} \cr & SSBA \cite{li2021invisible} & 66.51 & 99.78 & 62.66 & 73.11 & 62.89 & 4.68 & 60.13 & 24.68 & 60.98 & 1.01 & \bf{63.49} & 
\bf{0.42} \cr \hline \hline & Average & 63.43 & 97.47 & 58.91 & 55.93 & 58.95 & 47.21 & 59.88 & 62.13 & 60.91 & 17.62 & \bf{62.39} & \bf{1.89} \cr & Drop & $\downarrow$ 0.00 & $\downarrow$ 0.00 & $\downarrow$ 4.52 & $\downarrow$ 41.54 & $\downarrow$ 4.48 & $\downarrow$ 50.26 & $\downarrow$ 6.76 & $\downarrow$ 45.57 & $\downarrow$ 2.52 & $\downarrow$ 79.85 & $\downarrow$ \bf{1.04} & $\downarrow$ \bf{95.58} \cr \hline \end{tabular} \label{tab:tinyimagenet} \end{table*} In this section, we verify the effectiveness of CLP and compare its performance with 4 other existing model repairing methods, as shown in Table \ref{tab:cifar10} and Table \ref{tab:tinyimagenet}. Table \ref{tab:cifar10} shows the experimental results on CIFAR-10, where the proposed CLP remarkably achieves almost the highest robustness against several advanced backdoor attacks. To be specific, the proposed CLP successfully cuts the average ASR down to $2.81\%$ with only a slight drop in the ACC ($0.67\%$ on average). Note that CLP reaches such a remarkable result with no data requirement, while the SOTA defenses ANP and NAD give a similar performance on the ASR with a larger trade-off on the ACC despite their access to benign data. The standard fine-tuning provides promising defense results against several attacks, especially BadNets (A2O), but fails to generalize to more complex attacks such as Trojan, BadNets (A2A) and the blended attack. NAD repairs the backdoored model based on knowledge distillation with supporting information from the attention map of a fine-tuned model. Though it achieves relatively good defense performance, it requires careful tuning of the hyperparameters. As for the other pruning based method, fine-pruning adds an extra neuron pruning step according to the neuron activations on benign images before fine-tuning the model, and achieves the best defense performance against CLA. However, it also fails to maintain high robustness against some more covert attacks. 
ANP utilizes an adversarial strategy to find and prune the neurons that are sensitive to perturbations, which are considered to be highly backdoor related. While both ANP and CLP leverage the concept of sensitivity on channels, ANP measures the sensitivity of the model output to perturbations on the channels as the pruning index, which requires additional data and careful hyperparameter tuning for different attacks. Unlike ANP, our CLP prunes channels based on the sensitivity of the channels to the inputs, which comes closer to the essence of backdoor attacks and can be directly derived from the properties of the model weights without any data. We find that both strategies work well on CIFAR-10 against various attacks, but on average, CLP performs better. \cref{tab:tinyimagenet} shows further experimental results on Tiny ImageNet. All the compared defense methods suffer from severe degradation in both the ACC and the ASR when confronted with a larger-scale dataset. In contrast, our CLP still maintains its robustness against these different attacks, including the SOTA sample specific attacks IAB and SSBA, which further suggests the strong correlation between channel Lipschitzness and the backdoor behaviors. \subsection{Ablation Studies} \label{sec:abl} \subsubsection{Performance with Different Choices of Hyperparameter $u$.} As mentioned in \cref{sec:CLP}, the CLP hyperparameter $u$ controls the trade-off between test accuracy on clean data and robustness against backdoor attacks. \cref{fig:attacks} shows the ACC and the ASR of the backdoored model after applying CLP with different values of the hyperparameter $u$. From the fact that the ASR drops rapidly to near $0\%$ while the ACC drops much later as $u$ decreases, we argue that the backdoor related channels usually possess higher UCLC than normal channels, and they are precisely the ones pruned by CLP. 
Generally speaking, we can regard the interval between the two points where the ASR drops to a very low level and where the ACC starts to drop as an index of the robustness with respect to the hyperparameter. For example, it is much easier to choose the hyperparameter $u$ to defend against the Blended attack \cite{chen2017targeted}, because any choice of $u\in[3,5]$ barely affects the performance. CLA has the smallest gap and requires a more carefully chosen hyperparameter. A possible reason is that the UCLC in the CLA attacked model is not as high as in the other attacked models. Nevertheless, setting $u=3$ still yields an acceptable performance on CLA. Note that when $u$ decreases to near 0, nearly all the channels in the model are pruned, so the predictions of the model can be meaningless. This is why the ASR curves of some attacks rapidly increase to $100\%$ as $u$ decreases to 0. \begin{figure}[htb] \centering \includegraphics[width=\linewidth]{Figure/attacks.pdf} \caption{Performance of the CLP with different hyperparameter $u$ against different attacks on CIFAR-10 with ResNet-18.} \label{fig:attacks} \end{figure} \subsubsection{Performance on Different Architectures.} To verify the generalization performance of CLP across different architectures, we perform the BadNets attack on CIFAR-10 with the commonly used CNN architectures ResNet \cite{he2016deep}, VGG \cite{simonyan2014very} and SENet \cite{hu2018squeeze} of different depths. We then plot the ACC and the ASR curves \textit{w.r.t.} the hyperparameter $u$ in Figure \ref{fig:architectures}. Overall, CLP achieves very good results on all the tested CNN architectures. Nevertheless, the optimal $u$ differs across architectures. For example, the optimal choice of $u$ for VGG-16 is about 9, but such a choice of $u$ will not work on other architectures. In addition, we find that both the VGG and the SENet architectures show better robustness to the choice of the hyperparameter than the ResNet architectures. 
In general, choosing $u$ between 3 and 4 generalizes well across different architectures. \begin{figure}[htb] \centering \includegraphics[width=0.9\linewidth]{Figure/architectures.pdf} \caption{Performance of the CLP with different hyperparameter $u$ on various CNN architectures against BadNets on CIFAR-10.} \label{fig:architectures} \end{figure} \begin{table*}[htb] \centering \tiny \caption{Performance of the CLP against typical backdoor attacks with different poisoning rates on CIFAR-10 with ResNet-18.} \begin{tabular}{c|c|cc|cc|cc|cc|cc} \hline Poisoning & \multirow{2}{*}{Model} & \multicolumn{2}{c|}{BadNets (A2O)} & \multicolumn{2}{c|}{BadNets (A2A)} & \multicolumn{2}{c|}{Trojan \cite{Trojan}} & \multicolumn{2}{c|}{CLA\cite{turner2019label}} & \multicolumn{2}{c}{Blended\cite{chen2017targeted}}\cr rate & & ACC & ASR & ACC & ASR & ACC & ASR & ACC & ASR & ACC & ASR \cr \hline \hline \multirow{2}{*}{1\% } & Backdoored & 93.86 & 100.00 & 94.60 & 93.47 & 94.06 & 100.00 & 93.14 & 100.00 & 94.17 & 100.00 \cr & CLP Pruned & 93.46 & 1.38 & 93.69 & 1.06 & 93.54 & 0.92 & 91.89 & 2.84 & 94.07 & 1.90 \cr \hline \multirow{2}{*}{5\% } & Backdoored & 94.29 & 100.00 & 94.21 & 92.57 & 94.48 & 100.00 & 94.46 & 99.86 & 94.33 & 100.00 \cr & CLP Pruned & 92.33 & 6.42 & 93.93 & 0.76 & 93.84 & 5.56 & 90.71 & 5.96 & 93.37 & 2.34 \cr \hline \multirow{2}{*}{10\% } & Backdoored & 95.03 & 100.00 & 94.75 & 90.77 & 94.79 & 100.00 & 94.99 & 98.83 & 94.60 & 83.63 \cr & CLP Pruned & 94.21 & 2.76 & 94.03 & 0.74 & 93.17 & 3.32 & 93.67 & 10.04 & 93.23 & 0.83 \cr \hline \end{tabular} \label{tab:poison_rate} \end{table*} \subsubsection{Performance under Different Poisoning Rates.} The choice of the poisoning rate also affects the defense performance. To study the robustness of the proposed CLP, we test different poisoning rates for different backdoor attacks with the hyperparameter unchanged ($u=3$ on CIFAR-10 by default). 
As shown in Table \ref{tab:poison_rate}, CLP effectively reduces the ASR and maintains a high ACC under different settings. We note that decreasing the poisoning rate of CLA leads to an increasing ASR after applying CLP. CLA with the poisoning rate set to $1\%$ gives the worst defense results, and the ASR is about 10\%. However, such an ASR does not pose a considerable threat to our models. Overall, we find that the poisoning rate does not affect the performance of CLP much. \subsubsection{Running Time Comparison.} We record the running time of the above-mentioned defense methods on 500 CIFAR-10 images with ResNet-18, and show the results in \cref{tab:runtime}. The proposed CLP is the only method that does not require data propagation, so a CPU is enough to apply CLP, and we only record its CPU time (i7-5930K CPU @ 3.50GHz). The other methods are evaluated on an RTX 2080Ti GPU. The proposed CLP is almost five times faster than the fastest of the other methods and only requires 2.87 seconds. \begin{table}[htb] \scriptsize \centering \caption{The overall running time of different defense methods on 500 CIFAR-10 images with ResNet-18. All the methods except CLP run on GPU. * denotes that the result of CLP is in CPU time. } \begin{tabular}{c|c|c|c|c|c} \hline Defense Methods & FT & FP & NAD & ANP & CLP* \\ \hline \hline Running Time (sec.) & $\quad$12.35s$\quad$ & $\quad$14.59s$\quad$ & $\quad$25.68s$\quad$ & $\quad$22.08s$\quad$ & $\quad$\textbf{2.87s}$\quad$ \\ \hline \end{tabular} \label{tab:runtime} \end{table} \section{Conclusions} In this paper, we reveal the connection between the Lipschitzness and the backdoor behaviors of the channels in an infected DNN. On this basis, we calculate an upper bound of the channel Lipschitz constant (UCLC) for each channel, and prune the channels with abnormally high UCLC to recover the model, which we refer to as Channel Lipschitzness based Pruning (CLP). 
Due to the fact that the UCLC can be directly computed from the model weights, our CLP does not require any data and runs very fast. To the best of our knowledge, CLP is the first effective data-free backdoor defense method. Extensive experiments show that the proposed CLP can efficiently and effectively remove the injected backdoor while maintaining a high ACC against various SOTA attacks. Finally, ablation studies show that CLP is robust to its \textbf{only} hyperparameter $u$, and generalizes well to different CNN architectures. Our work further demonstrates the effectiveness of channel pruning in defending against backdoor attacks. More importantly, it sheds light on the relationship between the sensitive nature of backdoors and the channel Lipschitz constant, which may help to explain the backdoor behaviors in neural networks. \section{Acknowledgement} This work was supported in part by the National Natural Science Foundation of China (No. 62101351), the GuangDong Basic and Applied Basic Research Foundation (No. 2020A1515110376), and the Shenzhen Outstanding Scientific and Technological Innovation Talents PhD Startup Project (No. RCBS20210609104447108). \bibliographystyle{unsrt}
\section{Introduction} \setcounter{equation}{0} \setcounter{footnote}{0} Let $M$ be a smooth, connected, compact Riemannian manifold without boundary. We use $T^*M$ to denote its cotangent bundle and $H$ a continuous function, called the Hamiltonian, on $T^*M\times\mathbb{R}$. The problem of existence and uniqueness of viscosity solutions of the Hamilton-Jacobi equation \begin{equation}\label{HJs}\tag{HJs} H(x,d_x u,u)=c,\quad x\in M, \end{equation} has attracted much attention in the past forty years. For a fixed constant $c\in\mathbb{R}$, the earliest results were obtained by M. Crandall and P. L. Lions in \cite{CL}-\cite{Crandall_Evans_Lions1984} when $H$ is strictly increasing in $u$, for instance $H(x,p,u)=u+h(x,p)$. The corresponding analytic tools, including the comparison principle, had a great influence on the later development of the theory of viscosity solutions. For $H=H(x,p)$ independent of $u$, the situation is a bit more complicated. A breakthrough was made in \cite{LPV_Hom}, where Lions and his coauthors changed the strategy and successfully proved the solvability of the ergodic problem, i.e., the existence of a pair $(u,c)\in C(M)\times\mathbb{R}$ solving the equation \eqref{HJs}. On the other hand, examples show that uniqueness of solutions fails in this case. \vspace{1em} The picture for $u$-independent Hamiltonians became clearer after A. Fathi's work in the late 90s. In fact, Fathi built a connection, i.e., weak KAM theory \cite{Fathi_book}, between the theory of viscosity solutions and Aubry-Mather theory in Hamiltonian dynamics. It turns out that, under suitable assumptions (H1)-(H2) listed below, the constant $c$ found in \cite{LPV_Hom} is uniquely determined by $H$. 
The ingredients of Fathi's theory consist of regarding the solution of \eqref{HJs} as the large time limit of a nonlinear solution semigroup $\{T^-_t\}_{t\geqslant0}$ generated by the evolutionary equation \begin{equation}\label{HJe}\tag{HJe} \begin{cases} \partial_t u+H(x,\partial_x u,u)=c,\quad (x,t)\in M\times[0,+\infty),\\ u(0,x)=\varphi(x)\in C(M). \end{cases} \end{equation} It is interesting to note that the application of the nonlinear semigroup method to the existence problem for evolutionary Hamilton-Jacobi equations already appeared in \cite[VI.3, page 39-41]{CL}. Nevertheless, due to the lack of an explicit formula for the semigroup as well as of further information on the dynamics of the associated system, the convergence of the semigroup was not treated until the birth of weak KAM theory. According to the work of H. Ishii \cite{Ishii_chapter}, most of the weak KAM theory can be incorporated into the theory of viscosity solutions by using delicate analytic tools. \vspace{1em} More recently, the nonlinear semigroup method was extended to genuinely $u$-dependent Hamiltonians in the sequence of works \cite{WWY1,WWY2,WWY3} by using a new variational principle. It is of particular interest that, based on the works mentioned before, the structure of the set of solutions of \eqref{HJs} can be sketched if $H$ is uniformly Lipschitz in $u$. This includes the previously untouched case where $H$ is strictly decreasing in $u$. Shortly after \cite{WWY2} appeared, \cite{JMT} generalized the results to ergodic problems from the PDE aspect. In this paper, we show, through a simple model, that the results obtained in \cite{WWY1}-\cite{WWY3} allow us to treat the solvability of \eqref{HJs} for any fixed $c\in\mathbb{R}$ when no $u$-monotonicity of the Hamiltonian is assumed, and secondly, to present a \textbf{bifurcation phenomenon for the family of equations \eqref{HJs} parametrized by $c$}. 
\vspace{1em} Once and for all, we use $|\cdot|_x$ to denote the dual norm induced by the Riemannian metric on $T^\ast_xM$ and normalize this metric so that diam$(M)=1$. We consider the Hamiltonian $H:T^*M\times\mathbb{R}\to \mathbb{R}$ written in the form \begin{equation}\label{model} H(x,p,u)=h(x,p)+\lambda(x)u \end{equation} where $h(x,p),\lambda(x)$ are $C^3$ functions satisfying: \begin{enumerate} \item[(H1)] (Convexity) the Hessian $\frac{\partial^2 h}{\partial p^2}$ is positive definite for all $(x,p)\in T^*M$; \item[(H2)] (Superlinearity) for every $K\geqslant 0$, there is $C^{\ast}(K)>0$ such that $h(x,p)\geqslant K|p|_x-C^{\ast}(K)$; \item[(H3)] (Fluctuation) there exist $x_1,x_2\in M$ such that $\lambda(x_1)=1$ and $\lambda(x_2)=-1$. \end{enumerate} The arguments for establishing our main theorem depend on the variational principle developed in \cite{WWY1}-\cite{WWY3}. This makes our standing assumptions (H1)-(H3) relatively stronger than the standard assumptions in PDE (convexity and coercivity in $p$). With these settings, our results can be summarized as follows. \begin{theorem}\label{main} For the equation \eqref{HJs} with Hamiltonian \eqref{model} satisfying (H1)-(H3), \begin{enumerate} \item there is $c(H)\in\mathbb{R}$, uniquely determined by $H$, such that \eqref{HJs} admits a solution if and only if $c\geqslant c(H)$. Furthermore, if $c>c(H)$, \eqref{HJs} admits at least two solutions. \item for any $c \geqslant c(H)$, there is a constant $B(H,c)>0$ such that any solution $v:M\rightarrow\mathbb{R}$ to \eqref{HJs} satisfies \[ \|v\|_{W^{1,\infty}(M) }\leqslant B, \] where $ \|v\|_{W^{1,\infty}(M)} := \esssup_{M}(|v|+|D v| ) $. \end{enumerate} \end{theorem} \vspace{1em} Here and in what follows, solutions to \eqref{HJs} and \eqref{HJe} should always be understood in the viscosity sense. The rest of this paper is organized as follows. 
In Section 2, we briefly recall some necessary tools from \cite{WWY1}-\cite{WWY3} and give a relatively self-contained proof of the main result. Section 3 is devoted to a detailed analysis of the structure of the solutions of an example, thus illustrating the meaning of our result. \section{Proof of the main result} We divide the proof of the main result into two steps. As the first step, we define the constant $c(H)$ and prove its finiteness. When $c<c(H)$, the non-existence of solutions of \eqref{HJs} is a direct consequence of that. Secondly, we use tools from the former works \cite{WWY1}-\cite{WWY3} to show the existence and multiplicity of solutions. We use $\|u\|_\infty$ to denote the $C^0$-norm of $u$ as a continuous function on $M$. \subsection{Critical value and subsolutions to \eqref{HJs}} In a similar way as in \cite{CIP}, we define the critical value of $H$ by \begin{equation}\label{dcv} c(H):=\inf_{u\in C^\infty(M)}\sup_{x\in M}H(x,d_x u(x),u(x)). \end{equation} Here, we want to remark that for a general Hamiltonian satisfying (H1)-(H2), the number $c(H)$ is not always finite, as the simple example $H(x,p,u)=u+h(x,p)$ shows (in this case, taking the constant functions $u\equiv -n$ and letting $n\to+\infty$ shows $c(H)=-\infty$). Nevertheless, for the Hamiltonian \eqref{model}, \begin{lemma}\label{finite} $-\infty <c(H)<+\infty$. \end{lemma} \begin{proof} We choose $u\equiv0$ on $M$, by \eqref{dcv}, to obtain \[ c(H)\leqslant\sup_{x\in M}H(x,0,0)=\sup_{x\in M}h(x,0)<+\infty. \] Note that by taking $K=0$ in the assumption (H2), there is $e_0:=C^*(0)>0$ such that \begin{equation}\label{e_0} \min_{(x,p)\in T^*M}h(x,p)\geqslant -e_0. \end{equation} Now the assumption (H3), together with the connectedness of $M$ and the continuity of $\lambda$, implies by the intermediate value theorem that there exists $x_0\in M$ such that $\lambda (x_0)=0$. Thus \begin{align*} c(H)=&\inf_{u\in C^\infty(M)}\sup_{x\in M}\,\,[h(x,d_x u(x))+\lambda(x)u(x)]\\ \geqslant &\,\inf_{u\in C^\infty(M)}\,\,[h(x_0,d_x u(x_0))+\lambda(x_0)u(x_0)]\\ =&\,\inf_{u\in C^\infty(M)}h(x_0,d_x u(x_0))\geqslant -e_0. 
\end{align*} \end{proof} An immediate corollary of Lemma \ref{finite} is \begin{theorem} For $c<c(H)$, there is no continuous subsolution to the equation \eqref{HJs}. \end{theorem} For its proof, we need a standard approximation lemma. We omit the proof of the lemma and refer to \cite[Theorem 8.1]{FM_noncompact} for details. \begin{lemma}\cite[Lemma 2.2]{DFIZ} Assume $G\in C(T^{\ast}M)$ such that $G(x,\cdot)$ is convex in $T^{\ast}_x M$ for every $x\in M$, and let $u$ be a Lipschitz subsolution of $G(x,d_x u)=0$. Then, for all $\varepsilon>0$, there exists $u_\varepsilon\in C^{\infty}(M)$ such that $\|u-u_\varepsilon\|_\infty<\varepsilon$ and $G(x, d_x u_\varepsilon)\leqslant\varepsilon$ for all $x\in M$. \end{lemma} \textit{Proof of Theorem 2.2}: From now on, we set \begin{equation}\label{v-norm} \lambda_0:=\|\lambda\|_\infty\geqslant1. \end{equation} Assume that for some $c<c(H)$ the equation \eqref{HJs} admits a continuous subsolution $u:M\rightarrow\mathbb{R}$. Then for any $p\in D^+ u(x)$, \begin{align*} h(x,p)\leqslant c-\lambda(x)u(x)\leqslant c+\lambda_0\|u\|_\infty. \end{align*} Combining (H2) and the above inequality, we conclude that $u$ is Lipschitz. (A rigorous treatment can be found in \cite[Proposition 1.14]{Ishii_chapter}.) Applying Lemma 2.3 to \[ G(x,p):=h(x,p)+\lambda(x)u(x)-c, \] for $\varepsilon=\frac{1}{2(1+\lambda_0)}(c(H)-c)>0$ there is $u_\varepsilon\in C^\infty(M)$ such that $\|u-u_\varepsilon\|_\infty<\varepsilon$ and \[ h(x,d_x u_\varepsilon(x))+\lambda(x)u(x)\leqslant c+\varepsilon. \] Thus we obtain \begin{align*} &H(x,d_x u_\varepsilon(x),u_\varepsilon(x))=h(x,d_x u_\varepsilon(x))+\lambda(x)u_\varepsilon(x)\\ \leqslant &\,h(x,d_x u_\varepsilon(x))+\lambda(x)u(x)+\lambda_0\|u-u_\varepsilon\|_\infty\\ \leqslant &\,c+(1+\lambda_0)\varepsilon<c(H), \end{align*} which contradicts \eqref{dcv}. \qed \vspace{1em} The fluctuation condition (H3) gives the existence of subsolutions to \eqref{HJs} when $c$ lies above the critical value. 
First, we need a priori estimates for subsolutions to \eqref{HJs}. \begin{lemma}\label{uni-Lip} The $C^1$ subsolutions of \eqref{HJs} with $c=c(H)+1$ are equi-bounded and equi-Lipschitzian. \end{lemma} \begin{proof} Let $v\in C^1(M)$ be a subsolution to \eqref{HJs} with $c=c(H)+1$. Due to (H3) and \eqref{e_0}, we have \begin{align*} &-e_0 + v(x_1)\leqslant h(x_1,d_x v(x_1))+ v(x_1)\leqslant c(H)+1,\\ &-e_0 - v(x_2)\leqslant h(x_2,d_x v(x_2))- v(x_2)\leqslant c(H)+1, \end{align*} from which we deduce \begin{equation}\label{bound} v(x_1)\leqslant c(H)+1+e_0,\quad v(x_2)\geqslant-(c(H)+1+e_0). \end{equation} Setting $L:=\max_{x\in M}|d_x v(x)|_{x}$, by the mean value theorem and the fact that diam$(M)=1$, \[ |v(x)-v(x_1)|\leqslant L, \quad |v(x_2)-v(x)|\leqslant L,\quad \text{for any}\,\,x\in M. \] Combining with \eqref{bound}, this implies that \begin{equation}\label{eq:1} \|v\|_\infty\leqslant |c(H)+1+e_0|+L. \end{equation} Thus for $\bar x\in \arg\max_{x\in M} \{|d_xv(x)|_{x}\}$, \begin{align*} c(H)+1 \geqslant &\, h(\bar x,d_x v(\bar x))+ \lambda(\bar x)v(\bar x) \\ \geqslant &\, h(\bar x,d_x v(\bar x))-\lambda_0\|v\|_\infty \\ \geqslant &\, (1+\lambda_0)L-C^{\ast}(1+\lambda_0)-\lambda_0\|v\|_\infty \\ \geqslant &\,L-C^{\ast}(1+\lambda_0)-\lambda_0|c(H)+1+e_0| \end{align*} where the third inequality follows from (H2) with $K=1+\lambda_0$ and the last one from \eqref{eq:1}. This implies \begin{equation}\label{eq:2} L\leqslant C^{\ast}(1+\lambda_0)+\lambda_0|c(H)+1+e_0|+c(H)+1, \end{equation} and the right hand side is independent of $v$. Combining \eqref{eq:1} and \eqref{eq:2} completes the proof. \end{proof} \begin{theorem}\label{exist-sub} There exists a subsolution $v_0\in \mbox{\rm{Lip}}(M)$ to the equation \eqref{HJs} when $c = c(H)$.
\end{theorem} \begin{proof} By the definition \eqref{dcv}, for each integer $n\geqslant1$, there exists $v_n\in C^\infty(M)$ such that \begin{equation}\label{aux-eq1} \sup_{x\in M}\big[h(x,d_x v_n(x))+ \lambda(x)v_n(x)\big]\leqslant c(H)+\frac{1}{n}. \end{equation} Thus each $v_n\in C^1(M)$ is a subsolution of \eqref{HJs} with $c=c(H)+1$. By Lemma \ref{uni-Lip} and the Arzel\`a--Ascoli theorem, the sequence contains a subsequence $\{v_{n_k} \}_{k\in \mathbb{N}}$ uniformly converging on $M$ to some $v_0\in$ Lip$(M)$. Since each $v_{n}$ is a subsolution of \[ H(x,d_x u,u)=c(H)+\frac{1}{n}, \] the stability of subsolutions, see \cite[Theorem 8.1.1]{Fathi_book} or \cite[Theorem 5.2.5]{CS}, implies that $v_0$ is a subsolution of \[ H(x,d_x u,u)=c(H). \] \end{proof} \subsection{Solution semigroups and their fixed points} We shall use $TM$ to denote the tangent bundle of $M$. As usual, a point of $T M$ will be denoted by $(x,\dot{x})$, where $x\in M$ and $\dot{x}\in T_x M$. We recall that, for a Hamiltonian $H:T^{\ast}M\times\mathbb{R}\rightarrow\mathbb{R}$ satisfying (H1)-(H2), the corresponding Lagrangian $L:TM\times\mathbb{R}\rightarrow\mathbb{R}$ is defined as \[ L(x,\dot{x},u)=\sup_{p \in T_x^*M}\{p \cdot \dot{x}-H(x,p,u)\}, \] i.e., $L$ is the convex dual of $H$ with respect to $p$. Since the equations \eqref{HJs} under consideration are parametrized by $c$, we shall adopt the notations \[ H^c(x,p,u):=H(x,p,u)-c,\quad L^c(x,\dot{x},u):=L(x,\dot{x},u)+c. \] The following action functions are helpful in the definition and estimates of the semigroups; they contain important information about the variational principle defined by equation \eqref{HJe}.
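As a concrete sanity check of this convex duality (an illustrative numerical sketch only, with the one-dimensional choices $h(x,p)=p^2$ and $\lambda=\sin$ borrowed from the example of Section \ref{section3}; it plays no role in the proofs):

```python
import numpy as np

# Numerical Legendre transform: L(x, v, u) = sup_p [ p*v - H(x, p, u) ]
# for the model H(x, p, u) = p^2 + lam(x)*u; the exact dual is v^2/4 - lam(x)*u.
lam = np.sin
p_grid = np.linspace(-50.0, 50.0, 400001)   # sup approximated on a fine grid

def L_numeric(x, v, u):
    return np.max(p_grid * v - (p_grid ** 2 + lam(x) * u))

def L_exact(x, v, u):
    return v ** 2 / 4.0 - lam(x) * u

# the two agree on sample points (the sup is attained at p = v/2, inside the grid)
for (x, v, u) in [(0.3, 1.0, 2.0), (2.0, -3.0, -1.0), (4.5, 0.0, 5.0)]:
    assert abs(L_numeric(x, v, u) - L_exact(x, v, u)) < 1e-4
print("Legendre duality verified on sample points")
```

Note how the term $-\lambda(x)u$ passes through the supremum in $p$ unchanged; this is the mechanism behind the splitting $L^c(x,\dot x,u)=l^c(x,\dot x)-\lambda(x)u$ used later for the Hamiltonian \eqref{model}.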
\begin{proposition}\cite[Theorem 2.1, 2.2]{WWY3}\label{Implicit variational} For any given $x_0\in M$ and $u_0,c\in \mathbb{R}$, there exist continuous functions $h^c_{x_0,u_0}(x,t), h_c^{x_0,u_0}(x,t)$ defined on $M\times (0,+\infty)$ by \begin{equation}\label{eq:Implicit variational} \begin{split} h^c_{x_0,u_0}(x,t)=&\inf_{\substack{\gamma(t)=x\\ \gamma(0)=x_0 } }\Big\{u_0+\int_0^t L^c(\gamma(\tau), \dot \gamma(\tau),h^c_{x_0,u_0}(\gamma(\tau) ,\tau ) )\ d\tau\Big\},\\ h_c^{x_0,u_0}(x,t)=&\sup_{\substack{\gamma(t)=x_0\\ \gamma(0)=x } }\Big\{u_0-\int_0^t L^c(\gamma(\tau), \dot \gamma(\tau),h_c^{x_0,u_0}(\gamma(\tau) ,t-\tau ) )\ d\tau\Big\}, \end{split} \end{equation} where the infimum and supremum are taken among Lipschitz continuous curves $\gamma:[0,t]\rightarrow M$ and are achieved. We call $h^c_{x_0,u_0}(x,t)$ the backward action function and $h_c^{x_0,u_0}(x,t)$ the forward action function. \end{proposition} \begin{remark} Let $\gamma\in Lip([0,t],M)$ achieve the infimum (resp. supremum) in \eqref{eq:Implicit variational} and \[ x(s):=\gamma(s), \quad u(s):=h^c_{x_0,u_0}(x(s),s)\,(\text{resp.}\,\,h_c^{x_0,u_0}(x(s),t-s)), \quad p(s):=\frac{\partial L}{\partial \dot x}(x(s),\dot x(s),u(s)). \] Then $(x(s),p(s),u(s))$ satisfies the system \begin{equation} \label{eq:ode} \left\{ \begin{aligned} \dot x&=\frac{\partial H }{\partial p}(x,p,u)\\ \dot p &=-\frac{\partial H }{\partial x}(x,p,u)-\frac{\partial H }{\partial u}(x,p,u) \cdot p \quad (x,p,u)\in T^*M \times \mathbb{R}, \\ \dot u&=\frac{\partial H }{\partial p}(x,p,u) \cdot p-H^c(x,p,u) \end{aligned} \right. \end{equation} with $x(0)=x_0,x(t)=x\,($resp. $x(0)=x,x(t)=x_0)$ and $\lim_{s\to 0^+ }u(s)=u_0\,($resp. $\lim_{s\to t^-}u(s)=u_0)$. 
\end{remark} We collect the properties of the above action functions that are used in this paper into the following \begin{proposition}{\cite{WWY2}}\label{Minimality} For each $c\in\mathbb{R}$, the action function $h^c_{x_0,u_0}(x,t)\,\,($resp.\,\,$h^{x_0,u_0}_c(x,t)) $ satisfies \begin{enumerate} \item[(1)] \textbf{(Minimality)} Given $x_0,x\in M $ and $u_0\in \mathbb{R} $ and $t>0$, let $S^{x,t}_{x_0,u_0}$ be the set of the solutions $(x(s),p(s),u(s))$ of \eqref{eq:ode} on $[0,t]$ with $x(0)=x_0,x(t)=x,u(0)=u_0\,\,($resp. $x(0)=x, x(t)=x_0, u(t)=u_0)$. Then \begin{equation}\label{eq:Minimality} \begin{split} h^c_{x_0,u_0}(x,t)=&\inf \{u(t):(x(s),p(s),u(s))\in S^{x,t}_{x_0,u_0}\},\\ (\text{resp.}\,\,h_c^{x_0,u_0}(x,t)=&\sup \{u(0):(x(s),p(s),u(s))\in S^{x,t}_{x_0,u_0}\}.) \end{split} \end{equation} for any $ (x,t)\in M\times(0,+\infty)$. As a result, $h^c_{x_0,u_0}(x,t)=u\Leftrightarrow h_c^{x,u}(x_0,t)=u_0$. \item[(2)]\textbf{(Monotonicity)} Given $x_0\in M, u_1< u_2\in\mathbb{R}$, for any $t>0$ and all $x\in M$, \[ h^c_{x_0,u_1}(x,t)< h^c_{x_0,u_2}(x,t),\quad h_c^{x_0,u_1}(x,t)< h_c^{x_0,u_2}(x,t) \] \item[(3)]\textbf{(Markov property)} Given $x_0 \in M,u_0 \in \mathbb{R} $, we have \begin{equation}\label{markov} \begin{split} &h^c_{x_0,u_0}(x, t+s)=\inf_{y\in M}h^c_{y,h^c_{x_0,u_0}(y,t)}(x,s),\\ &h_c^{x_0,u_0}(x, t+s)=\sup_{y\in M}h_c^{y,h_c^{x_0,u_0}(y,t)}(x,s). \end{split} \end{equation} for any $s,t>0$ and all $x\in M$. \item[(4)]\textbf{(Lipschitz continuity)} The function $(x_0,u_0,x,t)\mapsto h^c_{x_0,u_0}(x,t)\,\,($resp. $h_c^{x_0,u_0}(x,t))$ is locally Lipschitz continuous on $M\times \mathbb{R}\times M\times (0,+\infty ) $. \end{enumerate} \end{proposition} Based on the backward\,/\,forward action function defined above, we introduce, for each $c\in\mathbb{R}$, two families of nonlinear operators $\{T^{c,\pm}_t\}_{t\geqslant 0}$. 
For each $\varphi\in C(M)$ and $(x,t)\in M\times(0,+\infty)$, \begin{equation}\label{eq:Tt-+ rep} \begin{split} T^{c,-}_t\varphi(x):=\inf_{y\in M}h^c_{y,\varphi(y)}(x,t),\\ T^{c,+}_t\varphi(x):=\sup_{y\in M}h_c^{y,\varphi(y)}(x,t). \end{split} \end{equation} One easily sees that for every $t\geqslant0, T^{c,\pm}_t$ maps $C(M)$ to itself and satisfies for any $t,s\geqslant0$, \[ T^{c,\pm}_{t+s}=T^{c,\pm}_{t}\circ T^{c,\pm}_{s}, \] so that the families of operators $\{T^{c,\pm}_t\}_{t\geq0}$ form two semigroups. These semigroups are related to the evolutionary equation \eqref{HJe} by the following fact. \begin{proposition}\label{solution} For $\varphi\in C(M)$, let $U^{c,\pm}:M\times[0,\infty)\rightarrow\mathbb{R}$ be the functions defined by \[ U^{c,\pm}(x,t):=T^{c,\pm}_t\varphi(x). \] Then $U^{c,-}$ is the unique solution to \eqref{HJe} and $-U^{c,+}$ is the unique solution to \begin{equation}\label{reverse ham} \begin{cases} \partial_t u+\breve{H}(x,\partial_x u,u)=c,\quad (x,t)\in M\times[0,+\infty),\\ u(0,x)=-\varphi(x), \end{cases} \end{equation} where $\breve{H}(x,p,u)=H(x,-p,-u)$. \end{proposition} Due to the above proposition, we call $\{T^{c,-}_t\}_{t\geqslant 0}$ the \textit{backward solution semigroup} and $\{T^{c,+}_t\}_{t\geqslant 0}$ the \textit{forward solution semigroup} to \eqref{HJe}. It turns out that the notion of subsolution (resp. strict subsolution) is equivalent to the $t$-monotonicity (resp. strict $t$-monotonicity) of the solution semigroups. Here, a strict subsolution is a subsolution for which the inequality in the definition is strict at every $x\in M$. \begin{proposition}\label{mono} Assume $c\geqslant c(H)$. Then $v\in C(M)$ is a subsolution to \eqref{HJs} if and only if \[ v(x)\leqslant\,\,T^{c,-}_t v(x)\quad\text{or}\quad v(x)\geqslant\,\,T^{c,+}_t v(x),\quad\text{for any}\,\,x\in M\,\,\text{and}\,\,t\geqslant0. \] Moreover, if $v\in C(M)$ is a strict subsolution, the corresponding \textbf{strict} inequalities hold.
\end{proposition} Thus if $T^{c,-}_t v$ \big(resp. $T^{c,+}_t v$\big) is bounded from above (resp. below) uniformly in $t$, then the uniform limits \[ u^c_-:=\lim_{t\rightarrow\infty}T^{c,-}_t v,\quad u^c_+:=\lim_{t\rightarrow\infty}T^{c,+}_t v \] exist and must be fixed points of the corresponding semigroups. This leads to \begin{definition}\label{weak kam} We use \begin{itemize} \item $\mathcal{S}^c_-$ to denote the set of all fixed points of $\{T^{c,-}_t\}_{t\geqslant 0}$, which is also the set of solutions to \eqref{HJs}. \item $\mathcal{S}^c_+$ to denote the set of all fixed points of $\{T^{c,+}_t\}_{t\geqslant 0}$. It follows that $u^c_+ \in \mathcal{S}^c_+ $ if and only if $-u^c_+$ is a solution to \begin{equation}\label{reverse eq} \breve{H}(x,d_x u,u)=c,\quad x\in M. \end{equation} \end{itemize} \end{definition} \subsection{Viscosity solutions via solution semigroups} Since the Hamiltonian has the form \eqref{model}, for $c\geqslant c(H)$, \[ L^c(x,\dot{x},u)=l^c(x,\dot{x})-\lambda(x)u, \] where \[ l^c(x,\dot{x})=\sup_{p\in T^{\ast}_x M}\{p\cdot\dot{x}-h(x,p)\}+c \] is a Tonelli Lagrangian. By Theorem \ref{exist-sub}, there is a subsolution $v_0\in$ Lip$(M)$ to \eqref{HJs} for $c\geqslant c(H)$. The discussion in Subsection 2.2 shows that $\lim_{t\rightarrow\infty}T^{c,-}_t v_0$ (if it exists) must be a solution to \eqref{HJs}. By Proposition \ref{mono}, the existence of the limit is equivalent to the following \begin{lemma}\label{back-bd} For $c\geqslant c(H)$ and any subsolution $v$ to \eqref{HJs}, there is $B_0(H,c)>0$ such that \[ T_{t}^{c,-}v(x)\leqslant B_0\quad\text{and}\quad T_{t}^{c,+}v(x)\geqslant -B_0,\quad \text{for any}\quad x\in M,\,\,t>0. \] \end{lemma} \begin{proof} We shall focus on the first inequality since the argument for the second one is completely similar.
Since $v$ is a subsolution to \eqref{HJs}, for any $p\in D^* v(x_1)$, where $D^{\ast}v(x)$ denotes the set of reachable gradients of $v$ at $x$, we have \[ H(x_1,p,v(x_1))=h(x_1,p)+ v(x_1)\leqslant c, \] which implies \[ v(x_1) \leqslant c-\min_{(x,p)\in T^*M} h(x,p)\leqslant c+e_0. \] By \eqref{eq:Tt-+ rep} and (2) of Proposition \ref{Minimality}, for any $t>0$ \[ T_t^{c,-}v(x)=\inf_{y\in M}h^c_{y,v(y)}(x,t) \leqslant h^c_{x_1,c+e_0}(x,t). \] On the other hand, by \eqref{eq:Implicit variational}, choosing the constant curve $\gamma_0(\tau)\equiv x_1$ for $\tau\in [0,t]$ gives \begin{align*} h^c_{x_1,c+e_0}(x_1,t)=&\,(c+e_0)+\inf_{\substack{\gamma(t)=x_1\\ \gamma(0)=x_1} }\int_0^t \bigg[l^c(\gamma(\tau),\dot \gamma(\tau))-\lambda(\gamma(\tau))h^c_{x_1,c+e_0}(\gamma(\tau),\tau)\bigg]\ d\tau\\ \leqslant &\, (c+e_0)+\int_0^t \bigg[l^c(\gamma_0(\tau),\dot \gamma_0(\tau))-\lambda(\gamma_0(\tau))h^c_{x_1,c+e_0}(\gamma_0(\tau),\tau)\bigg]\ d\tau\\ = &\, (c+e_0)+\int_0^t \bigg[l^c(x_1,0) -\lambda(x_1)\cdot h^c_{x_1,c+e_0}(x_1,\tau)\bigg] \ d \tau \\ = &\, (c+e_0)+t\cdot l^c(x_1,0)-\int_0^t h^c_{x_1,c+e_0}(x_1,\tau) \ d \tau. \end{align*} We define for $t\geqslant0$, \[ G(t):=\int_0^t h^c_{x_1,c+e_0}(x_1,\tau) \ d\tau,\quad G(0)=0. \] Then by the above discussion, \[ \frac{d G(t)}{dt}+G(t) \leqslant (c+e_0)+t\cdot l^c(x_1,0), \] which implies that \begin{align*} G(t) \leqslant &\, e^{-t} \int_0^t e^s\Big(c+e_0+s\cdot l^c(x_1,0)\Big)\ ds\\ =&\, \Big(c+e_0-l^c(x_1,0)\Big)\cdot\Big(1-e^{-t}\Big) + t\cdot l^c(x_1,0)\leqslant D+t\cdot l^c(x_1,0), \end{align*} where $D(H):=|e_0-l(x_1,0)|\geqslant 0$. Hence \[ \int_0^t h^c_{x_1,c+e_0}(x_1,\tau) \ d \tau \leqslant D+t\cdot l^c(x_1,0).
\] Therefore, there exists a sequence of positive numbers $\{t_n\}_{n\in\mathbb{N}}$ with $t_n\to +\infty $ such that \[ h^c_{x_1,c+e_0}(x_1,t_n)\leqslant l^c(x_1,0)+1, \] thus by (2) and (3) of Proposition \ref{Minimality}, for any $x\in M$, \begin{align*} &\,h^c_{x_1,c+e_0}(x,t_n+1)=\inf_{y\in M}h^c_{y,h^c_{x_1,c+e_0}(y,t_n)}(x,1)\\ \leqslant &\, h^c_{x_1,h^c_{x_1,c+e_0}(x_1,t_n)}(x,1) \leqslant h^c_{x_1,l^c(x_1,0)+1}(x,1)\leqslant B_0, \end{align*} where $B_0:=\max_{x\in M}h^c_{x_1,l^c(x_1,0)+1}(x,1)$ only depends on $H$ and $c$. For any $n\in\mathbb{N}$, \[ T_{t_n+1}^{c,-}v(x)\leqslant h^c_{x_1,c+e_0}(x,t_n+1)\leqslant B_0. \] By Proposition \ref{mono}, $T_{t}^{c,-}v(x)$ is increasing in $t$; thus $T_{t}^{c,-}v(x)$ is uniformly bounded from above by $B_0$. \end{proof} Combining the above lemma and Proposition \ref{mono}, we obtain \begin{theorem}\label{exist-sol} For $c\geqslant c(H)$, both of the limits \[ u_-^c=\lim_{t\to +\infty} T_t^{c,-}v_0(x), \quad u_+^c=\lim_{t\to +\infty} T_{t}^{c,+}v_0(x) \] exist. In particular,\,\,$\mathcal{S}^c_-$ and $\mathcal{S}^c_+$ are both non-empty when $c\geqslant c(H)$. \end{theorem} In view of Lemma \ref{uni-Lip}, it is not surprising that $\mathcal{S}^c_-$ is bounded as a subset of Lipschitz functions on $M$. Before the proof of this conclusion, we state the following \begin{lemma}\label{3-3} For any $u^c_+\in\mathcal{S}^c_+$, $u^c_-\in\mathcal{S}^c_-$, $t\geqslant 0$ and $x\in M$, \[ T_t^{c,-}u^c_+(x)\geqslant u^c_+(x),\quad T_t^{c,+}u^c_-(x)\leqslant u^c_-(x). \] \end{lemma} \begin{proof} We shall focus on the first inequality; the second one is due to Proposition \ref{mono} and the fact that $u^c_-$ is a subsolution to \eqref{HJs}. It is clear that $T_0^{c,-} u^c_+= u^c_+$. For $t>0$, we have \[ T^{c,-}_tu^c_+(x)=\inf_{y\in M}h^c_{y,u^c_+(y)}(x,t),\quad \forall x\in M.
\] Thus, in order to prove $T_t^{c,-} u^c_+\geqslant u^c_+$ everywhere, it is sufficient to show that for each $y\in M$, \begin{equation}\label{eq:7} h^c_{y, u^c_+(y)}(x,t)\geqslant u^c_+(x)\quad \text{for all}\quad (x,t)\in M\times (0,+\infty). \end{equation} Given any $(x,t)\in M\times (0,+\infty)$, for $y\in M$, set $v(y):=h^c_{y, u^c_+(y)}(x,t)$. Then Proposition \ref{Minimality} (1) gives \[ u^c_+(y)=h_c^{x,v(y)}(y,t). \] Since $u^c_+$ is a fixed point of $T_t^{c,+}$, it follows that \[ h_c^{x,v(y)}(y,t)=u^c_+(y)=T_t^{c,+} u^c_+(y)=\sup_{z\in M}h_c^{z, u^c_+(z)}(y,t)\geqslant h_c^{x,u^c_+(x)}(y,t). \] Proposition \ref{Minimality} (2) implies \[ v(y)\geqslant u^c_+(x)\quad\text{ for all }\quad y\in M, \] which is equivalent to \eqref{eq:7}. \end{proof} Now we show the second conclusion of our main result, namely \begin{theorem}\label{Lip-bound} Assume $c \geqslant c(H)$. There is $B(H,c)>0$ such that any $u\in \mathcal{S}^c_-$ satisfies \[ \|u\|_{W^{1,\infty}}\leqslant B. \] \end{theorem} \begin{proof} By the superlinearity of $h$, it is enough to show that for any $u\in\mathcal{S}^c_-, \|u\|_{\infty}$ is bounded by some constant only depending on $H$ and $c$. By Lemma \ref{back-bd}, we obtain that \begin{equation}\label{eq:thm21-1} u(x)\leqslant B_0(H,c), \quad \text{for all}\,\, x\in M,\,\, u\in \mathcal{S}^c_-. \end{equation} By Lemma \ref{3-3}, any $v\in\mathcal{S}^c_+$ is bounded above by some $u\in \mathcal{S}^c_-$, thus \begin{equation}\label{eq:thm21-2} v(x)\leqslant B_0(H,c), \quad \text{for all}\,\, x\in M,\,\, v\in \mathcal{S}^c_+. \end{equation} For the other side, we notice that, by Definition \ref{weak kam}, for any $u\in \mathcal{S}^c_-$, $-u$ is a fixed point of $\{\breve T_t^{c,+}\}_{t\geqslant 0}$, where $\breve T_t^{c,+}$ is the \textit{forward solution semigroup} to \eqref{HJe} in which $H$ is replaced by $\breve{H}$.
Since $\breve{H}$ satisfies our standing assumptions (H1)-(H3), we invoke the inequality \eqref{eq:thm21-2} to obtain \[ -u(x)\leqslant B_0(\breve{H},c),\quad \text{for any}\quad x\in M. \] Combining the above with \eqref{eq:thm21-1}, we conclude that \[ -B_0(\breve{H},c)\leqslant u(x)\leqslant B_0(H,c), \quad \forall x\in M, u\in \mathcal{S}^c_-. \] \end{proof} The following two propositions are crucial in our construction of multiple solutions. \begin{proposition}\label{principal} For any $c\geqslant c(H)$ and $u^c_+\in\mathcal{S}^c_+$ (resp. $u^c_-\in\mathcal{S}^c_-$), the limit \[ \lim_{t\rightarrow\infty}T^{c,-}_t u^c_+\quad\text{(resp.}\,\,\lim_{t\rightarrow\infty}T^{c,+}_t u^c_-) \] exists and belongs to $\mathcal{S}^c_-$ (resp. $\mathcal{S}^c_+$). \end{proposition} \begin{proof} We shall show that the first limit exists; the other part is completely similar. By Proposition \ref{mono} and Lemma \ref{3-3}, $u^c_+$ is a subsolution to \eqref{HJs}. Then Lemma \ref{back-bd} implies that, for each $t\geqslant 0$, \[ u^c_+(x)\leqslant T_t^{c,-}u^c_+(x) \leqslant B_0(H,c). \] Thus $T_t^{c,-}u^c_+(x) $ is increasing and uniformly bounded in $t$, so that $\lim_{t\rightarrow\infty}T^{c,-}_t u^c_+$ exists and has to be a fixed point of $\{T^{c,-}_t\}_{t\geq0}$. \end{proof} \vspace{1em} The following proposition first appeared, in a more complete form, in the work \cite{WWY3}, which dealt with Hamiltonians that are strictly increasing in $u$. However, its proof does not depend on the $u$-monotonicity. For the reader's convenience, we present a simple proof here. \begin{proposition}\cite[Theorem 1.1, 1.2]{WWY3}\label{mane} Assume $(u^{c}_-,u^{c}_+)\in\mathcal{S}^c_-\times\mathcal{S}^c_+$ satisfies \begin{equation}\label{pair} u^{c}_-=\lim_{t\rightarrow\infty}T^{c,-}_t u^{c}_+, \end{equation} then they must coincide at some point of $M$, i.e., \[ \mathcal{I}(u_-^{c},u_+^{c}):=\{x\in M:\,\,u_-^{c}(x)=u_+^{c}(x)\}\neq\emptyset.
\] \end{proposition} \begin{proof} We argue by contradiction and assume $ \mathcal{I}(u_-^{c},u_+^{c})=\emptyset$. Then, applying Lemma \ref{3-3}, \[ u_-^{c}(x)>u_+^{c}(x)\quad\text{for any}\,\,x\in M, \] and $\delta:=\min_{x\in M }\{u_-^{c}(x)-u_+^{c}(x)\}>0$. By the definition \eqref{pair}, there is $t_0>0$ such that for $t\geqslant t_0$, \begin{equation}\label{strict} T_t^{c,-} u^{c}_+(x) \geqslant u^{c}_-(x)-\frac{\delta}{2} > u^{c}_+(x). \end{equation} Notice that for any $x,y\in M$ and $t>0$, the definition \eqref{eq:Tt-+ rep} gives \[ T_t^{c,-} u^{c}_+(y)\leqslant h^c_{x,u^{c}_+(x)}(y,t). \] Thus Proposition \ref{Minimality} implies that \[ T_t^{c,+} \circ T_t^{c,-} u^{c}_+(x)= \sup_{y\in M} h_c^{y, T_t^{c,-} u^{c}_+(y)} (x,t) \leqslant \sup_{y\in M} h_c^{y, h^c_{x,u^{c}_+(x)}(y,t)}(x,t)=u^{c}_+ (x). \] Combining \eqref{strict} and the strict monotonicity of the solution semigroup, we obtain for $t\geqslant t_0$, \[ u^{c}_+(x) \geqslant T_t^{c,+} \circ T_t^{c,-} u^{c}_+(x)> T_t^{c,+} u^{c}_+(x), \] which contradicts the fact that $u^{c}_+ $ is a fixed point of $\{T_t^{c,+}\}_{t\geqslant 0}$. \end{proof} As a refined version of the existence result, the multiplicity of solutions to \eqref{HJs} is obtained in the noncritical case. This phenomenon shares similarities with bifurcation phenomena arising in nonlinear dynamics, but has a global nature. \begin{theorem}\label{multi} $\mathcal{S}^c_-$ contains at least two elements when $c>c(H)$. \end{theorem} \begin{proof} By the definition \eqref{dcv} of $c(H)$, there is a strict subsolution $v\in C^\infty(M)$ to \eqref{HJs}, i.e., \[ H(x,d_x v(x),v(x))<c. \] Then by Proposition \ref{mono}, for any $t>0$, \begin{equation}\label{eq:3} \begin{split} T_t^{c,-}v(x)>v(x),\\ T_t^{c,+}v(x)<v(x). \end{split} \end{equation} We define \begin{equation}\label{vis-def} u_-^c:=\lim_{t\to +\infty}T_{t}^{c,-} v(x)\in \mathcal{S}^c_-, \quad u_+^c:=\lim_{t\to +\infty}T_{t}^{c,+} v(x)\in \mathcal{S}^c_+.
\end{equation} and \begin{equation}\label{ano-def} \bar{u}_-^c:=\lim_{t\to +\infty}T_{t}^{c,-}u_+^c(x)\in \mathcal{S}^c_-. \end{equation} By \eqref{eq:3} and \eqref{vis-def}, for any $x\in M$, \begin{equation}\label{eq:4} u_+^c(x)<v(x)<u_-^c(x). \end{equation} Due to the definition \eqref{eq:Tt-+ rep} and (2) of Proposition \ref{Minimality}, the above inequality gives for any $t>0$ and $x\in M$, \[ T_t^{c,-}u_+^c(x)<T_t^{c,-}u_-^c(x)=u_-^c(x), \] where the equality holds since elements in $\mathcal{S}^c_-$ are fixed points of $\{T_t^{c,-}\}_{t\geqslant 0}$. By Proposition \ref{principal}, we send $t$ to infinity and use \eqref{ano-def} to find for any $x\in M$, \[ \bar{u}_-^c(x)\leqslant u_-^c(x). \] Notice that if $\bar{u}_-^c\equiv u_-^c$ on $M$, then by Proposition \ref{mane}, the set \[ \mathcal{I}(u_-^c,u_+^c)=\{x\in M:\,\,u_-^c(x)=u_+^c(x)\}\neq\emptyset. \] This contradicts \eqref{eq:4}. Thus $\bar{u}_-^c$ and $u_-^c$ are two distinct solutions to \eqref{HJs}. \end{proof} \section{An illustrative example and concluding remarks}\label{section3} In this section, we illustrate our main theorem with a simple example, viewed from another perspective. Let $\mathbb{S}^1$ be the usual circle and $\mathbf{0}\in C(\mathbb{S}^1)$ the function vanishing everywhere on $\mathbb{S}^1$. We consider the following \begin{example} \begin{equation}\label{ham} H(x,p,u)=|p|^2+\sin(x)u,\quad\quad (x,p,u)\in T^\ast\mathbb{S}^1\times\mathbb{R}. \end{equation} \end{example} As a smooth function on $\mathbb{S}^1$, $\sin(x)$ attains its maximum $1$ and minimum $-1$. It follows that $H$ satisfies (H1)-(H3), and thus falls into the class of Hamiltonians we consider in this paper. The Hamilton-Jacobi equation associated with \eqref{ham} is \begin{equation}\label{i-e} |u^\prime(x)|^2+\sin(x)u(x)=c,\quad x\in\mathbb{S}^1. \end{equation} From now on, we identify $\mathbb{S}^1$ with $[0,2\pi]$ and all functions are assumed to be $2\pi$-periodic.
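Before determining the critical value, the definition \eqref{dcv} can be probed numerically (an illustrative sketch only; the trial functions below are arbitrary choices): since $H(0,u'(0),u(0))=|u'(0)|^2\geqslant0$ for every smooth $2\pi$-periodic $u$, each supremum computed below is nonnegative, consistent with the value $c(H)=0$ obtained below.

```python
import numpy as np

# Evaluate sup_x [ u'(x)^2 + sin(x) u(x) ] for a few smooth 2*pi-periodic
# trial functions u, as in the definition of c(H); every value is >= 0.
x = np.linspace(0.0, 2.0 * np.pi, 100001)

trials = [
    lambda x: np.zeros_like(x),          # u = 0 gives the supremum 0 exactly
    lambda x: np.cos(x),
    lambda x: 0.5 * np.sin(2.0 * x) - 1.0,
    lambda x: np.exp(np.cos(x)),
]
sups = []
for f in trials:
    u = f(x)
    du = np.gradient(u, x)               # finite-difference derivative
    sups.append(np.max(du ** 2 + np.sin(x) * u))
print(sups)                              # all nonnegative
```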
Notice that, due to the facts \begin{enumerate}[(1)] \item if $c=0$, then $\mathbf{0}$ is a solution to \eqref{i-e}, \item if $c<0$, there is no smooth subsolution $\underline{\mathtt{u}}\in C^\infty(\mathbb{S}^1)$ to \eqref{i-e}, since at $x=0$ or $x=\pi$, $\underline{\mathtt{u}}$ would have to satisfy the impossible inequality \[ |\underline{\mathtt{u}}^{\prime}(x)|^2\leqslant c<0, \] \end{enumerate} and the definition \eqref{dcv}, we have \begin{proposition}\label{cri} For the Hamiltonian $H$ given by \eqref{ham}, \begin{equation}\label{e-cv} c(H)=0. \end{equation} \end{proposition} In this section, for $c\geq0$, we drop the superscript and use $u_{\pm}$ and $T_t^{\pm}$ to denote $u^c_{\pm}$ and $T_t^{c,\pm}$ respectively. According to the value of the right-hand constant $c$, we divide the discussion into two parts. \subsection{The critical case $c=0$} For $c=0$, Theorem \ref{exist-sol} implies that the set $\mathcal{S}^0_-$ of solutions to \eqref{i-e} is non-empty. Since the Hamiltonian $H$ is smooth and strictly convex in $p$, by \cite[Theorem 5.3.7]{CS}, any solution $u_-\in\mathcal{S}^0_-$ is locally semiconcave. In particular, \begin{equation}\label{d+} D^+ u_-(x)\neq\emptyset,\quad\text{for any}\,\,x\in[0,2\pi]. \end{equation} \begin{proposition}\label{nonnegative} Any $u_-\in\mathcal{S}^0_-$ satisfies \begin{equation}\label{back} u_-(x)\equiv0\quad \text{for}\quad x\in[0,\pi],\quad u_-(x)\geqslant0\quad \text{for}\quad x\in[\pi,2\pi]. \end{equation} \end{proposition} \begin{proof} By \eqref{d+}, the condition of being a subsolution implies \begin{equation}\label{cond-sub} \sin(x)u_-(x)\leqslant 0,\quad \text{for any}\,\,x\in[0,2\pi]. \end{equation} From the continuity of $u_-$ and the above inequality, we have \begin{equation}\label{eq:5} \begin{split} u_-(x)\leqslant0\quad &\text{for}\quad x\in[0,\pi],\quad u_-(x)\geqslant0\quad \text{for}\quad x\in[\pi,2\pi],\\ &\text{so that}\quad u_-(0)=u_-(\pi)=0.
\end{split} \end{equation} By \eqref{eq:5}, $u_-$ attains its minimum at some $x^{\ast}\in[0,\pi]$; we may assume $x^{\ast}\in(0,\pi)$, since otherwise $u_-(x^\ast)=0$ by \eqref{eq:5} and \eqref{back} follows at once. Since a locally semiconcave function is differentiable at a local minimum, $u_-$ is differentiable at $x^\ast$ and $u_-^{\prime}(x^\ast)=0$. Since $u_-$ is a supersolution to \eqref{i-e}, one may conclude that $0\in D^-u_-(x^*)$ and \[ \sin(x^\ast)u_-(x^\ast)=|0|^2+\sin(x^\ast)u_-(x^\ast)\geqslant0, \] thus for any $x\in[0,2\pi],\,\,u_-(x)\geqslant u_-(x^\ast)\geqslant0$. Combining this with \eqref{eq:5}, any $u_-\in\mathcal{S}^0_-$ satisfies \eqref{back}. \end{proof} \begin{remark} By repeating the discussion for $H(x,-p,-u)$, it is readily seen that a similar conclusion holds for $u_+\in\mathcal{S}^0_+$, i.e., they all satisfy \begin{equation}\label{for} u_+(x)\leqslant 0\quad \text{for}\quad x\in[0,\pi],\quad u_+(x)\equiv0\quad \text{for}\quad x\in[\pi,2\pi]. \end{equation} \end{remark} \medskip Notice that there is a classical solution $\phi\in C^1([\pi,2\pi])$ of \begin{equation}\label{eq:appendix:ex1} \phi^\prime (x)= \sqrt{(-\sin x)\phi} \text{ \ in \ } [\pi, 2\pi], \quad \phi(\pi)=0, \quad\phi(x)>0 \text{ \ if \ } x>\pi. \end{equation} Indeed, setting $$ \phi(x)=\Big( \frac{1}{2} \int_\pi^x \sqrt{-\sin t} d t \Big)^2, $$ we see that $\phi$ is the desired solution of \eqref{eq:appendix:ex1}. We define $u_{\pi,2\pi}:[0,2\pi]\rightarrow\mathbb{R}$ by $$ u_{\pi,2\pi}(x)= \begin{cases} 0, \quad & x\in [0,\pi]\\ \Big( \frac{1}{2} \int_\pi^x \sqrt{-\sin t} d t \Big)^2, \quad & x\in [\pi,\frac{3}{2}\pi]\\ \Big( \frac{1}{2} \int_x^{2\pi} \sqrt{-\sin t} d t \Big)^2, \quad &x\in [\frac{3}{2}\pi ,2\pi ], \end{cases} $$ then $u_{\pi,2\pi}\in\mathcal{S}^0_-$.
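The membership $u_{\pi,2\pi}\in\mathcal{S}^0_-$ can also be checked numerically on $[\pi,\tfrac{3}{2}\pi]$ (an illustrative sketch only; the grid size and tolerance are arbitrary choices):

```python
import numpy as np

# Check numerically that u(x) = ( (1/2) * int_pi^x sqrt(-sin t) dt )^2
# satisfies u'(x)^2 + sin(x) u(x) = 0 on [pi, 3*pi/2] (critical case c = 0).
x = np.linspace(np.pi, 1.5 * np.pi, 20001)
s = np.sqrt(np.clip(-np.sin(x), 0.0, None))   # clip guards round-off at x = pi
h = np.diff(x)
# cumulative trapezoidal rule for F(x) = (1/2) * int_pi^x sqrt(-sin t) dt
F = 0.5 * np.concatenate(([0.0], np.cumsum(0.5 * (s[1:] + s[:-1]) * h)))
u = F ** 2
du = np.gradient(u, x)                        # finite-difference derivative
residual = du ** 2 + np.sin(x) * u
print(np.max(np.abs(residual)))               # small: the equation holds
```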
More generally, for $\pi\leqslant a<b \leqslant 2\pi$, we define the function $$ u_{a,b}(x)= \begin{cases} 0, \quad & x\in [0,a]\\ \min\Big\{ \Big( \frac{1}{2} \int_a^x \sqrt{-\sin t} d t \Big)^2,\Big( \frac{1}{2} \int_x^b \sqrt{-\sin t} d t \Big)^2 \Big\}, \quad & x\in [a,b]\\ 0, \quad &x\in [b,2\pi ] \end{cases} $$ and choose a sequence of mutually disjoint intervals $(a_i,b_i)\subset(\pi,2\pi)$, with $i\in \mathbb{N}$ or $i\leqslant K$; then the function $$ \sum_i u_{a_i,b_i}(x) $$ is a solution to \eqref{i-e}, and thus belongs to $\mathcal{S}^0_-$. The following theorem shows that all elements of $\mathcal{S}^0_-$ belong to this family. Assume $u_-:[0,2\pi]\rightarrow\mathbb{R}$ is a solution of \eqref{i-e}. By Proposition \ref{nonnegative}, the set \[ \{x\in [0 ,2\pi]:u_-(x)>0\} \] is an open subset of $(\pi,2\pi)$, and therefore can be written as an at most countable union of mutually disjoint open intervals $(a_i,b_i),i\in I=\mathbb{N}$ or $\{1,2,...,K\}$. We have \begin{theorem}\label{cri-sol} Assume $u_-\in\mathcal{S}^0_-$ and \begin{equation}\label{eq:18} \{x\in [0 ,2\pi]:u_-(x)> 0 \}=\bigcup_{i\in I}(a_i,b_i)\subset(\pi,2\pi), \end{equation} then $$ u_-(x)=\sum_{i} u_{a_i,b_i}(x),\quad x\in [0,2\pi]. $$ \end{theorem} \begin{proof} Let $\mathcal{N}_{u_-}=\{x\in [0,2\pi] : u_-(x)=0\}$ denote the null set of $u_-$. Since $\mathcal{N}_{u_-}$ is the set of minima of $u_-$, $u_-$ is differentiable at any point $x$ in $\mathcal{N}_{u_-}$ with $u_-^{'}(x)=0$. We use $\mathcal{D}u_-$ to denote the set of differentiability points of $u_-$. For any $x\in[0,2\pi]$ such that $u'_-(x)$ exists and equals $0$, the equation \[ 0=(u'_-(x))^2+\sin(x)u_-(x)=\sin(x)u_-(x) \] and \eqref{back} imply $x\in\mathcal{N}_{u_-}$. Thus we have \[ \mathcal{N}_{u_-}=\{x\in\mathcal{D}u_- : u'_-(x)=0\}. \] It is obvious from the definition \eqref{eq:18} that for any $i\in I$, \begin{equation}\label{eq:19} (a_i,b_i)\cap\mathcal{N}_{u_-}=\emptyset.
\end{equation} \medskip \textbf{Claim} : For any fixed $i\in I$, there exists $x_i \in (a_i,b_i)$ such that $$ \begin{cases} u'_-(x)>0 , \quad & x\in (a_i, x_i) \cap \mathcal{D}u_-,\\ u'_-(x)<0 , \quad & x\in ( x_i,b_i ) \cap \mathcal{D}u_-. \end{cases} $$ \textit{ Proof of the claim:} We argue by contradiction. Assume that there are $\underline{x}<\bar{x}\in(a_i,b_i)\cap\mathcal{D}u_-$ with $$ u'_-(\underline{x})<0\quad\text{and}\quad u'_-(\bar{x})>0, $$ then $u_{-}|_{[\underline{x},\bar{x}]}$ attains a local minimum at some $x_\ast\in(\underline{x},\bar{x})$. Thus $u_-$ is differentiable at $x_\ast$ with $u_-^{'}(x_\ast)=0$ and $x_\ast\in\mathcal{N}_{u_-}$. This contradicts \eqref{eq:19} and completes the proof. \medskip \medskip To prove the theorem, it is enough to prove that for each fixed $i\in I$, \[ u_-|_{[a_i,b_i]}=u_{a_i,b_i}. \] Using our claim, we get \[ \begin{cases} u_-^\prime (x)= \sqrt{(-\sin x) u_-(x)}, \quad & x\in (a_i, x_i) \cap \mathcal{D}u_- \\ u_-^\prime (x)= -\sqrt{(-\sin x) u_-(x)}, \quad & x\in ( x_i,b_i) \cap \mathcal{D}u_-, \end{cases} \] or equivalently \[ \begin{cases} (\sqrt{u_-})^\prime (x)=\frac{1}{2}\sqrt{-\sin x}, \quad & x\in (a_i, x_i)\cap\mathcal{D}u_- \\ (\sqrt{u_-})^\prime (x)=-\frac{1}{2}\sqrt{-\sin x}, \quad & x\in ( x_i,b_i)\cap\mathcal{D}u_-. \end{cases} \] This implies that $\sqrt{u_-}|_{(a_i,x_i)}, \sqrt{u_-}|_{(x_i,b_i)}$ are differentiable almost everywhere with a continuous derivative, thus are absolutely continuous.
Applying the fundamental theorem of calculus and the condition $u_-(a_i)=u_-(b_i)=0$, we obtain \[ u_-(x)=\begin{cases} \Big( \frac{1}{2} \int_{a_i}^x \sqrt{-\sin t} \ d t \Big)^2, \quad & x\in [a_i, x_i ]\\ \Big( \frac{1}{2} \int_x^{b_i} \sqrt{-\sin t} \ d t \Big)^2, \quad &x\in [ x_i ,b_i ]. \end{cases} \] The above formula and the continuity of $u_-$ at $x_i$ imply that $\int_{a_i}^{x_i}\sqrt{-\sin t}\ dt=\int_{x_i}^{b_i}\sqrt{-\sin t}\ dt$ and \[ u_-(x)=\min\Big\{ \Big( \frac{1}{2} \int_{a_i}^x \sqrt{-\sin t} d t \Big)^2,\Big( \frac{1}{2} \int_x^{b_i}\sqrt{-\sin t} d t \Big)^2 \Big\}=u_{a_i,b_i}(x),\quad \text{for} \quad x\in [a_i,b_i]. \] \end{proof} Theorem \ref{cri-sol} shows that for $c=0$, there are infinitely many solutions to \eqref{i-e}. We conclude the critical case with the following \begin{remark}\label{unique-prin} We first observe that for any $u_+\in\mathcal{S}^0_+$ and $u_-\in\mathcal{S}^0_-$, \[ \lim_{t\rightarrow\infty}T^-_t u_+=\mathbf{0},\quad \lim_{t\rightarrow\infty}T^+_t u_-=\mathbf{0}. \] This is due to the following reason: by Proposition \ref{principal}, for $x\in[0,2\pi]$, \[ \lim_{t\rightarrow\infty}T^-_t u_+(x) \leqslant \lim_{t\rightarrow\infty}T^-_t \mathbf{0} (x)=0 \] and $\lim_{t\rightarrow\infty}T^-_t u_+$ is a solution to \eqref{i-e}, where the inequality uses \eqref{for}. Applying \eqref{back} to $\lim_{t\rightarrow\infty}T^-_t u_+$ shows that $\lim_{t\rightarrow\infty}T^-_t u_+=\mathbf{0}$. The proof of the second equation is completely similar. \vspace{1em} Now if we look for a pair of functions $(u_-,u_+)\in\mathcal{S}^0_-\times\mathcal{S}^0_+$ satisfying \[ \lim_{t\rightarrow\infty}T^+_t u_-=u_+,\quad \lim_{t\rightarrow\infty}T^-_t u_+=u_-, \] then the above observation shows that $(\mathbf{0},\mathbf{0})$ is the unique such pair. In this sense, the result obtained from our main theorem is optimal. The above discussions can be carried over to more general Hamiltonians $H(x,p,u)=|p|^2_x+v(x)u$, where $v:M\rightarrow\mathbb{R}$ is a Morse function and $0$ is a regular value of $v$.
\end{remark} \subsection{The noncritical case $c>0$} Assume $c>0$. We give a construction of solutions to \eqref{i-e} from the viewpoint of dynamical systems and show that $\mathcal{S}^c_-$ contains exactly two elements. For convenience, solutions to \eqref{i-e} are assumed to be $2\pi$-periodic functions on $\mathbb{R}$. For a solution $u_-\in\mathcal{S}^c_-$, we define its graph and 1-graph by \[ \Lambda^0(u_-)=\{(x,u_-(x))\,:\,x\in\mathbb{R}\}\quad\text{and}\quad\Lambda^1(u_-)=\{(x,p,u)\,:\,u=u_-(x),p\in D^{\ast}u_-(x),x\in\mathbb{R}\}. \] To begin with, we notice that a solution $u_-\in \mathcal{S}^c_-$ is a fixed point of the backward solution semigroup, i.e., \[ T^{-}_t u_-=u_-,\quad\text{for any}\quad t>0. \] By Proposition \ref{Implicit variational} and Theorem \ref{Lip-bound}, the above equality implies that for any $x\in\mathbb{R}$, there is an orbit \[ (x(t),p(t),u(t)),\quad t\in(-\infty,0] \] of the characteristic system \eqref{eq:ode} with the contact Hamiltonian \begin{equation}\label{ham-c} H^c(x,p,u)=|p|^2+\sin(x)u-c \end{equation} such that $x(0)=x,\,p(0)\in D^\ast u_-(x),\,u(t)=u_-(x(t))$, where $D^{\ast}u_-(x)$ denotes the set of reachable gradients of the solution $u_-$ at $x$. Moreover, for any $t<0$, $x(t)\in\mathcal{D}u_-$ with $u'_-(x(t))=p(t)$. \begin{remark} For such an orbit and any $s\leqslant t\leqslant0$, it follows that \[ u_-(x(t))-u_-(x(s))=\int^{t}_{s} L^c(x(\tau), \dot x(\tau),u_-(x(\tau)))\ d\tau. \] Due to this identity, $(x(t),p(t),u(t)),\,t\in(-\infty,0]$ is said to be calibrated by $u_-$ or, briefly, called a calibrated orbit, since $u_-$ is fixed. \end{remark} Since $u_-$ is a solution to \eqref{HJs}, any calibrated orbit $(x(t),p(t),u(t)), t\in(-\infty,0]$ \begin{enumerate}[(1)] \item lies on the regular energy shell $M^c:=\{(x,p,u)\in T^\ast\mathbb{R}\times\mathbb{R}\,:\, H(x,p,u)=c\}$, which is preserved by the contact Hamiltonian flow $\Phi^t_{H^c}$; \item has an $\alpha$-limit set $\alpha(x(0),p(0),u(0))$ that is a nonempty, connected, $\Phi^t_{H^c}$-invariant subset.
Besides, elementary knowledge from topological dynamics shows that this set is included in the non-wandering set of $\Phi^t_{H^c}|_{M^c}$. \end{enumerate} When restricted to $M^c$, the system \eqref{eq:ode}, with the Hamiltonian defined by \eqref{ham-c}, has the form \begin{equation}\label{eq:i-e} \left\{ \begin{aligned} \dot x&=2p,\\ \dot p &=-\cos(x)u-\sin(x)p,\\ \dot u&=2p^2. \end{aligned} \right. \end{equation} We shall use certain symmetry properties of the above system. \begin{lemma}\label{sym} Assume \eqref{eq:i-e} admits an integral curve $(x(t),p(t),u(t)), t\in I$, where $I$ is an open interval. Then \[ (x(t)+2\pi,p(t),u(t)),\quad (\pi-x(t),-p(t),u(t)),\quad t\in I \] are also integral curves of \eqref{eq:i-e}. \end{lemma} \begin{proof} $(x(t)+2\pi,p(t),u(t))$ is an integral curve of \eqref{eq:i-e} since the system \eqref{eq:i-e} is $2\pi$-periodic in $x$. For $t\in I$, we verify by \eqref{eq:i-e} and direct computation that \[ \left\{ \begin{aligned} &\frac{d}{dt}(\pi-x(t))=-\dot{x}(t)=-2p(t)=2(-p(t)),\\ &\frac{d}{dt}(-p(t))=-\dot{p}(t)=\cos(x(t))u(t)+\sin(x(t))p(t)\\ &\hspace{1.76cm}=-\cos(\pi-x(t))u(t)-\sin(\pi-x(t))(-p(t)),\\ &\frac{d}{dt}u(t)=2p(t)^2=2(-p(t))^2, \end{aligned} \right. \] which shows that $(\pi-x(t),-p(t),u(t)),t\in I$ is also an integral curve. \end{proof} Another simple but crucial observation from the system \eqref{eq:i-e} is that \begin{itemize} \item along any integral curve, $\dot{u}=2p^2\geqslant0$, so $u(t)$ is nondecreasing. \end{itemize} This leads to \begin{lemma}\label{snw} The non-wandering set of $\Phi^t_{H^c}|_{M^c}$ consists of the hyperbolic fixed points \[ (\frac{\pi}{2}+2k\pi,0,c),\quad (-\frac{\pi}{2}+2k\pi,0,-c),\quad k\in\mathbb{Z}, \] whose unstable manifolds are one-dimensional. Moreover, every calibrated orbit lies on the unstable manifold of one of these fixed points. \end{lemma} \begin{proof} Assume an orbit $(x(t),p(t),u(t))$ belongs to the non-wandering set.
Then there is $u_0\in\mathbb{R}$ such that $u(t)\equiv u_0$ and $\dot{u}(t)=2|p(t)|^2\equiv0$. As a consequence, for some $x_0\in \mathbb{R}$, \begin{equation}\label{eq:6} \begin{split} \dot{x}(t)&=2 p(t)\equiv0,\quad x(t)=x_0,\\ \dot{p}(t)&=-\cos(x_0)u_0\equiv0. \end{split} \end{equation} From these, we deduce that $(x_0,0,u_0)$ is a fixed point. According to the second equation of \eqref{eq:6}, we obtain that \[ x_0=\pm\frac{\pi}{2}+2k\pi\quad\text{or}\quad u_0=0. \] Since $(x_0,0,u_0)\in M^c$ with $c>0$, we have $u_0\neq0$ and the only fixed points are $(\pm\frac{\pi}{2}+2k\pi,0,\pm c)$. \vspace{1em} Dropping the last equation of \eqref{eq:i-e}, we linearize the remaining two on the two-dimensional energy shell $M^c$. Setting $X=x-x_0, P=p$, we obtain that \[ \begin{split} &\left( \begin{array}{c} \dot{X} \\ \dot{P} \\ \end{array} \right) =\left( \begin{array}{cc} 0 & 2 \\ c & -1 \\ \end{array} \right) \left( \begin{array}{c} X \\ P \\ \end{array} \right),\quad \text{at}\,\,(x_0,0,u_0)=(\frac{\pi}{2}+2k\pi,0,c);\\ &\left( \begin{array}{c} \dot{X} \\ \dot{P} \\ \end{array} \right) =\left( \begin{array}{cc} 0 & 2 \\ c & 1 \\ \end{array} \right) \left( \begin{array}{c} X \\ P \\ \end{array} \right),\quad \text{at}\,\,(x_0,0,u_0)=(-\frac{\pi}{2}+2k\pi,0,-c). \end{split} \] Since $c>0$, it follows that all fixed points are hyperbolic and have one-dimensional stable and unstable manifolds on $M^c$. Since the $\alpha$-limit set of every calibrated orbit is connected and included in the non-wandering set of $\Phi^t_{H^c}|_{M^c}$, it has to be one of these fixed points. This completes the proof. \end{proof} Now we project the dynamics on $M^c$ onto the $(x,u)$-plane.
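As an aside, the hyperbolicity asserted in Lemma \ref{snw} is easy to check numerically: for every $c>0$, both linearization matrices above have one negative and one positive eigenvalue. A minimal sketch (ours, not part of the proof; the sample values of $c$ are chosen for illustration):

```python
import math

# The two linearization matrices on M^c and their characteristic data:
#   [[0, 2], [c, -1]] at ( pi/2 + 2k*pi, 0,  c):  trace -1, det -2c
#   [[0, 2], [c,  1]] at (-pi/2 + 2k*pi, 0, -c):  trace  1, det -2c
def eigenvalues(trace, det):
    """Real eigenvalues of a 2x2 matrix from its trace and determinant."""
    disc = math.sqrt(trace * trace - 4 * det)  # positive since det = -2c < 0
    return (trace - disc) / 2, (trace + disc) / 2

for c in (0.8, 1.0, 2.0):          # sample right-hand constants c > 0
    for tr in (-1.0, 1.0):
        lo, hi = eigenvalues(tr, -2.0 * c)
        # one contracting and one expanding direction: a hyperbolic saddle
        assert lo < 0 < hi, (c, tr, lo, hi)
print("both linearizations are hyperbolic saddles for every tested c > 0")
```

Since the determinant $-2c$ is negative for all $c>0$, the saddle structure holds for every value of $c$, not only the tested ones.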
By the relation $du=p\,dx$ along orbits, the projected unstable manifolds of the fixed points are 1-graphs of functions near the fixed points, but they may admit cusp-type singularities and turn around in the $x$-direction, as depicted below (contact geometers call these projections \textbf{wave fronts} and are familiar with their structures; see \cite[Section 1]{ETM}). \begin{figure}[h] \begin{center} \includegraphics[width=9cm]{bifur3.png} \end{center} \end{figure} Assume $(x(t),p(t),u(t)),t\in(-\infty,0]$ serves as a calibrated orbit of $u_-$; then it lies on the graph of $u_-$ and cannot turn around, i.e., $p(t)$ does not change sign on $(-\infty,0]$. Thus we only consider the ``horizontal'' part of the unstable manifolds before they turn around and denote it by $\mathrm{W}^u$. According to the equation \eqref{eq:i-e} and Lemma \ref{sym}, we have \begin{proposition}\label{unstable-prop} There are $C^1$ functions \[ \phi_0:\bigg[-\frac{\pi}{2}-\sigma_0,-\frac{\pi}{2}+\sigma_0\bigg]\rightarrow\mathbb{R},\quad \phi_1:\bigg[\frac{\pi}{2}-\sigma_1,\frac{\pi}{2}+\sigma_1\bigg]\rightarrow\mathbb{R} \] and $\sigma_0,\sigma_1\in(0,+\infty]$ such that for any $k\in\mathbb{Z}$, \begin{equation}\label{unstable} \begin{split} \mathrm{W}^{u}\bigg(-\frac{\pi}{2}+2k\pi,0,-c\bigg)&=\bigg\{(x,\phi_0(x-2k\pi)):x\in[-\frac{\pi}{2}+2k\pi-\sigma_0,-\frac{\pi}{2}+2k\pi+\sigma_0]\bigg\},\\ \mathrm{W}^{u}\bigg(\frac{\pi}{2}+2k\pi,0,c\bigg)&=\bigg\{(x,\phi_1(x-2k\pi)):x\in[\frac{\pi}{2}+2k\pi-\sigma_1,\frac{\pi}{2}+2k\pi+\sigma_1]\bigg\} \end{split} \end{equation} and \begin{enumerate}[(1)] \item $\phi_0|_{[-\frac{\pi}{2},-\frac{\pi}{2}+\sigma_0]}$ and $\phi_1|_{[\frac{\pi}{2},\frac{\pi}{2}+\sigma_1]}$ are strictly increasing and $\phi_0(x)=\phi_0(-\pi-x), \phi_1(x)=\phi_1(\pi-x)$. As a result, $\phi_0$ attains its minimum $-c$ only at $x=-\frac{\pi}{2}$ and $\phi_1$ attains its minimum $c$ only at $x=\frac{\pi}{2}$.
\item for any $u_-\in\mathcal{S}^c_-$ and $(x,p,u)\in\Lambda^1(u_-)$, if \[ \alpha(x,p,u)=(-\frac{\pi}{2}+2k\pi,0,-c)\quad\bigg(\text{resp. }(\frac{\pi}{2}+2k\pi,0,c)\bigg), \] then $\min\{\sigma_0,2\pi\}\geq|-\frac{\pi}{2}+2k\pi-x|\,\,($ resp. $\min\{\sigma_1,2\pi\}\geq|\frac{\pi}{2}+2k\pi-x|\,\,)$ and \begin{align*} u_-|_{[x,-\frac{\pi}{2}+2k\pi]}\quad\text{or}\quad u_-|_{[-\frac{\pi}{2}+2k\pi,x]}=\phi_0(\cdot-2k\pi),\\ (\text{resp. }u_-|_{[x,\frac{\pi}{2}+2k\pi]}\quad\text{or}\quad u_-|_{[\frac{\pi}{2}+2k\pi,x]}=\phi_1(\cdot-2k\pi)). \end{align*} \end{enumerate} \end{proposition} \begin{proof} From the form of the equation \eqref{eq:i-e}, we deduce that for any integral curve near the fixed points, $|\frac{du}{dx}(x(t))|=|p(t)|\ll1$. Thus the existence of the $C^1$ functions $\phi_i$ and constants $\sigma_i$ is a direct consequence of the Hartman--Grobman Theorem \cite[Theorem 4.1]{Palis-de Melo} and Lemma \ref{sym}. \vspace{1em} (1)\,\,According to the definition of $\phi_i,\sigma_i$, we assume that for $x(0)=-\frac{\pi}{2}+\sigma_0$, the integral curve $(x(t),u(t)):=(x(t),\phi_0(x(t))),\,\,t\in(-\infty,0]$ describes half of the projected horizontal unstable manifold of $(-\frac{\pi}{2},0,-c)$. From the proof of Lemma \ref{snw}, it is clear that $p(t)\geq0$ and that $p$ vanishes on a non-empty open interval if and only if $(x(t),p(t),u(t))$ is a fixed point, which would contradict the assumption. Thus for any $-\infty<t_1<t_2\leq0$, \[ x(t_2)-x(t_1)=2\int^{t_2}_{t_1}p(\tau)d\tau>0,\quad \phi_0(x(t_2))-\phi_0(x(t_1))=u(t_2)-u(t_1)=2\int^{t_2}_{t_1}p^2(\tau)d\tau>0. \] This shows that $\phi_0$ is strictly increasing on $[-\frac{\pi}{2},-\frac{\pi}{2}+\sigma_0]$. The symmetry property $\phi_0(x)=\phi_0(-\pi-x)$ follows from Lemma \ref{sym}. The corresponding properties for $\phi_1$ are proved by similar arguments. \vspace{1em} (2)\,\,Assume that for $(x_0,p_0,u_-(x_0))\in\Lambda^1(u_-)$ and some $k\in\mathbb{Z}$, $\alpha(x_0,p_0,u_-(x_0))=(-\frac{\pi}{2}+2k\pi,0,-c)$.
The corresponding calibrated orbit $(x(t),p(t),u(t)), t\in(-\infty,0]$ satisfies $x(0)=x_0$ and, for $t\leq0$, \[ u_-(x(t))=u(t)=\phi_0(x(t)-2k\pi). \] We assume $x_0>-\frac{\pi}{2}+2k\pi$; then $x(t)$ is strictly increasing in $t$, $\lim_{t\rightarrow-\infty}x(t)=-\frac{\pi}{2}+2k\pi$ and \begin{equation}\label{eq:8} u_-(x)=\phi_0(x-2k\pi),\quad x\in[-\frac{\pi}{2}+2k\pi,x_0]. \end{equation} By the definition of $\sigma_0$, it is clear that $0<x_0-(-\frac{\pi}{2}+2k\pi)\leq\sigma_0$. Since $\phi_0$ is strictly increasing, if $x_0-(-\frac{\pi}{2}+2k\pi)\geq2\pi$, then $x_0\geq-\frac{\pi}{2}+(2k+2)\pi>-\frac{\pi}{2}+2k\pi$ and by \eqref{eq:8}, \[ u_-(-\frac{\pi}{2}+(2k+2)\pi)=\phi_0(-\frac{\pi}{2}+2\pi)>\phi_0(-\frac{\pi}{2})=u_-(-\frac{\pi}{2}+2k\pi). \] This is a contradiction since $u_-$ is $2\pi$-periodic. The conclusions for $\phi_1$ are proved by similar arguments. \end{proof} \vspace{1em} Denoting by $\mathbf{c}$ the constant function taking value $c>0$ everywhere, we notice that $\mathbf{c}$ is a subsolution to \eqref{i-e}. By Proposition \ref{mono} and Lemma \ref{back-bd}, we define \begin{equation}\label{positive} u_1:=\lim_{t\to +\infty} T_t^-\mathbf{c}\in\mathcal{S}^c_-\quad\text{and}\quad u_1(x)\geqslant c>0,\quad\text{for all }x\in\mathbb{R}, \end{equation} thus \eqref{i-e} admits at least one positive solution. Furthermore, we have \begin{lemma}\label{lem:existpositive} $\sigma_1\geq\pi$ and, if we adopt the convention that for $x$ not belonging to the domain of $\phi_i(\cdot-2k\pi), i=0,1$, the term $\phi_{i}(x-2k\pi)$ is omitted when taking the minimum, then \begin{equation}\label{sol-1} u_1(x)=\min_{k\in\mathbb{Z}}\{\phi_{1}(x-2k\pi)\} \end{equation} is the \textbf{unique} positive solution to \eqref{i-e}. \end{lemma} \begin{proof} Assume $u_-\in\mathcal{S}^c_-$ is positive everywhere. Then the $\alpha$-limit set of any point on $\Lambda^1(u_-)$ is $\left(\frac{\pi}{2}+2k\pi,0,c \right)$ for some $k\in\mathbb{Z}$.
It follows from Proposition \ref{unstable-prop} and the periodicity of $u_-$ that \[ \alpha\left(\Lambda^1(u_-)\right)=\bigcup_{k\in\mathbb{Z}}\left(\frac{\pi}{2}+2k\pi,0,c\right)\quad\text{and}\quad \Lambda^0(u_-)\subset\bigcup_{k\in\mathbb{Z}}\mathrm{W}^u \left(\frac{\pi}{2}+2k\pi,0,c\right). \] This shows that $\bigcup_{k\in\mathbb{Z}}\mathrm{W}^u \left(\frac{\pi}{2}+2k\pi,0,c\right)$ is connected; therefore $\sigma_1\geqslant\pi$. \vspace{1em} Now it suffices to verify \eqref{sol-1} on $[\frac{\pi}{2},\frac{5\pi}{2}]$ with $u_1$ replaced by an arbitrary positive solution $u_-$, since both sides are $2\pi$-periodic functions. By Proposition \ref{unstable-prop} (2), for $x\in[\frac{\pi}{2},\frac{5\pi}{2}]$, there are $p\in D^{\ast}u_-(x)$ and $k\in\{0,1\}$ such that \[ \alpha(x,p,u_-(x))=(\frac{\pi}{2}+2k\pi,0,c), \] and there is $x_{\ast}\in(\frac{\pi}{2},\frac{5\pi}{2})$ such that \[ u_-(x)= \begin{cases} \phi_1(x),\quad &x\in[\frac{\pi}{2},x_{\ast}],\\ \phi_1(x-2\pi),\quad &x\in[x_{\ast},\frac{5\pi}{2}]. \end{cases} \] Thus we obtain \[ \phi_1(x_\ast)=u_-(x_\ast)=\phi_1(x_\ast-2\pi)=\phi_1(3\pi-x_\ast), \] where the last equality uses Proposition \ref{unstable-prop} (1). Since $x_\ast,3\pi-x_\ast\in[\frac{\pi}{2},\frac{\pi}{2}+\sigma_1]$, Proposition \ref{unstable-prop} (1) again (precisely, the monotonicity of $\phi_1$) implies $x_\ast=\frac{3\pi}{2}$, and this completes the proof. \end{proof} \vspace{1em} By Theorem \ref{multi}, there are at least two different solutions of equation \eqref{i-e}. Due to Lemma \ref{lem:existpositive}, there exists at least one solution which is not everywhere positive. Lemma \ref{lem:existnotpositive} proves the uniqueness of such solutions. Before that, we need \begin{lemma}\label{lem:np-structure} If $u_-\in\mathcal{S}^c_{-}$ is not everywhere positive, then for any $k\in\mathbb{Z}$, $u_-$ coincides with $\phi_0(\cdot-2k\pi)$ near $x=-\frac{\pi}{2}+2k\pi$.
\end{lemma} \begin{proof} Assume there is $(x_0,p_0,u_0)\in\Lambda^1(u_-)$ such that $x_0\in[-\frac{\pi}{2}, \frac{3\pi}{2}]$ and $u_0=u_-(x_0)\leq0$. By the equation \eqref{eq:i-e}, the $u$-component of $\alpha(x_0,p_0,u_0)$ is not larger than $u_-(x_0)$. Combining this with Proposition \ref{unstable-prop} (2), $\alpha(x_0,p_0,u_0)=(-\frac{\pi}{2},0,-c)$ or $(\frac{3\pi}{2},0,-c)$. By the continuity and periodicity of $u_-$, $u_-(-\frac{\pi}{2}+2k\pi)=-c$ and there is $0<\delta\leq\min\{\sigma_0,\frac{\pi}{4}\}$ such that $u_-(x)<0$ for $|x+\frac{\pi}{2}-2k\pi|<\delta$. Notice that for different $k$, the intervals $I_k:=(-\frac{\pi}{2}+2k\pi-\delta,-\frac{\pi}{2}+2k\pi+\delta)$ are mutually disjoint and the fixed point with $x$-component falling into $I_k$ is unique, namely $(-\frac{\pi}{2}+2k\pi,0,-c)$. By the fact that the $u$-component of a calibrated orbit $(x(t),p(t),u(t))$, other than a fixed point, is strictly increasing, we conclude that if $x(0)\in I_k$, then for $t\leq0$, \[ x(t)\in I_k\quad\text{and}\quad\alpha(x(0),p(0),u(0))=(-\frac{\pi}{2}+2k\pi,0,-c). \] By Proposition \ref{unstable-prop} (2), this implies that $u_-(x)=\phi_0(x-2k\pi)$ for $x\in I_k$. \end{proof} \begin{lemma}\label{lem:existnotpositive} If $u_-\in\mathcal{S}^c_{-}$ is not everywhere positive, then with the same convention as in Lemma \ref{lem:existpositive}, \begin{equation}\label{np:rep} u_-(x)=u_0(x):=\min_{k\in \mathbb{Z}}\{\phi_0(x-2k \pi), \phi_1(x-2k\pi)\}. \end{equation} Notice that since $\sigma_1\geq\pi$, the above minimum is less than $\max_{x\in[-\frac{\pi}{2},\frac{3\pi}{2}]}\phi_1(x)$ and is always attained. \end{lemma} \begin{proof} Since both sides are $2\pi$-periodic functions, it is sufficient to verify \eqref{np:rep} on $[-\frac{\pi}{2},\frac{3\pi}{2}]$.
Proposition \ref{unstable-prop} (2) and Lemma \ref{lem:np-structure} show that for any $x\in[-\frac{\pi}{2},\frac{3\pi}{2}]$ and $p\in D^\ast u_-(x)$, the $x$-component of $\alpha(x,p,u_-(x))$ must belong to $[-\frac{\pi}{2},\frac{3\pi}{2}]$; combining this with the fact that $\phi_1-\phi_0$ is strictly decreasing on $[-\frac{\pi}{2},-\frac{\pi}{2}+\sigma_0]$ and $\phi_1-\phi_0(\cdot-2\pi)$ is strictly increasing on $[\frac{3\pi}{2}-\sigma_0,\frac{3\pi}{2}]$, there are $-\frac{\pi}{2}<a\leqslant b<\frac{3\pi}{2}$ such that \begin{equation}\label{sol-2} u_-(x)= \begin{cases} \phi_0(x),\quad &x\in[-\frac{\pi}{2},a],\\ \phi_1(x),\quad &x\in(a,b),\\ \phi_0(x-2\pi),\quad &x\in[b,\frac{3\pi}{2}]. \end{cases} \end{equation} The above formula implies $a\leq-\frac{\pi}{2}+\sigma_0$, $b\geq\frac{3\pi}{2}-\sigma_0$ and, if $a<b$, then \begin{equation}\label{eq:9} \phi_0(a)=\phi_1(a),\quad \phi_0(b-2\pi)=\phi_1(b). \end{equation} Now there are two cases: \begin{figure}[h] \begin{center} \includegraphics[width=18cm]{phi.png} \caption{\hspace{0.4cm} $a=b$ \hspace{7cm} $a<b$ \hspace{2cm}} \label{fig:cases} \end{center} \end{figure} \vspace{1em} (i)\,\,$a=b$. In this case, \[ u_-(x)= \begin{cases} \phi_0(x),\quad &x\in[-\frac{\pi}{2},a]\\ \phi_0(x-2\pi),\quad &x\in[a,\hspace{0.18cm}\frac{3\pi}{2}] \end{cases} \] and $\frac{3\pi}{2}-\sigma_0\leqslant a\leq-\frac{\pi}{2}+\sigma_0$. So we obtain from the continuity of $u_-$ that \[ \phi_0(a)=u_-(a)=\phi_0(a-2\pi)=\phi_0(\pi-a), \] where the last equality uses Proposition \ref{unstable-prop} (1). Since $a,\pi-a\in[-\frac{\pi}{2},-\frac{\pi}{2}+\sigma_0]$, Proposition \ref{unstable-prop} (1) again implies $a=\pi-a=\frac{\pi}{2}$, $\sigma_0\geq\pi$ and $u_-(x)=\min_{k\in\mathbb{Z}}\{\phi_{0}(x-2k\pi)\}$. Observe that for $p\in D^*u_-( \frac{\pi}{2})$, \[ \min_{k\in\mathbb{Z}}\{\phi_{0}(x-2k\pi)\}=u_-(x)\leqslant u_-\bigg(\frac{\pi}{2}\bigg)\leqslant p^2+\sin\bigg(\frac{\pi}{2}\bigg)u_-\bigg(\frac{\pi}{2}\bigg)=c\leqslant\min_{k\in\mathbb{Z}}\phi_1(x-2k\pi).
\] Then $u_-(x)=\min_{k\in\mathbb{Z}}\{\phi_{0}(x-2k\pi)\}=\min_{k\in\mathbb{Z}}\{\phi_{0}(x-2k\pi),\phi_1(x-2k\pi)\}$. \medskip (ii)\,\,$a<b$. In this case, $a$ is the \textbf{unique} zero of $\phi_1-\phi_0$ on $[-\frac{\pi}{2},-\frac{\pi}{2}+\sigma_0]$ and $b$ is the \textbf{unique} zero of $\phi_1-\phi_0(\cdot-2\pi)$ on $[\frac{3\pi}{2}-\sigma_0,\frac{3\pi}{2}]$. Using Proposition \ref{unstable-prop} (1) and \eqref{eq:9}, \[ 0=\phi_1(a)-\phi_0(a)=\phi_1(\pi-a)-\phi_0(-\pi-a)=\phi_1(\pi-a)-\phi_0((\pi-a)-2\pi). \] Thus $\pi-a$ is a zero of $\phi_1-\phi_0(\cdot-2\pi)$ on $[\frac{3\pi}{2}-\sigma_0,\frac{3\pi}{2}]$ and hence equals $b$. So we obtain \begin{align} u_-(x)= \begin{cases} \phi_0(x),\quad &x\in[-\frac{\pi}{2},a],\\ \phi_1(x),\quad &x\in(a,\pi-a),\\ \phi_0(x-2\pi),\quad &x\in[\pi-a,\frac{3\pi}{2}]. \end{cases} \end{align} By Proposition \ref{unstable-prop} (1), $\phi_0-\phi_1$ is strictly increasing on $[-\frac{\pi}{2},-\frac{\pi}{2}+\sigma_0]$, so for $x\in[-\frac{\pi}{2},\frac{\pi}{2}]$, \[ u_-(x)=\min_{k\in\mathbb{Z}}\{\phi_{0}(x-2k\pi),\phi_1(x-2k\pi)\}. \] The symmetry of $\phi_i$ shows that the above identity holds for $x\in[\frac{\pi}{2},\frac{3\pi}{2}]$ as well. \end{proof} Finally, we combine Lemma \ref{lem:existpositive} and Lemma \ref{lem:existnotpositive} to obtain \begin{theorem}\label{thm:existtwosolution} The equation \eqref{i-e} admits exactly two solutions, i.e., \[ \mathcal{S}^c_-=\{u_0,u_1 \}, \] where $u_0=\min_{k\in \mathbb{Z}}\{\phi_0(x-2k\pi),\phi_1(x-2k\pi)\}$ and $u_1=\min_{k\in \mathbb{Z}}\{\phi_1(x-2k\pi)\} $. \end{theorem} \begin{remark} With the aid of numerical methods, we depict the two solutions $u_0, u_1$ to \eqref{i-e} in Figure \ref{fig1} below for different right-hand constants $c$. Notice that when $c=1$, the smooth solution $u_0(x)=\sin(x)$ corresponds to heteroclinic orbits between fixed points.
In all cases, the numerical results fit well with our theoretical analysis, especially Lemma \ref{lem:existpositive} and Lemma \ref{lem:existnotpositive}. \end{remark} \begin{figure}[h] \begin{center} \includegraphics[width=17cm]{bifur1.png} \caption{ $ \quad c=0.8 \hspace{4cm} c=1 \hspace{4.5cm} c=2 \hspace{1.2cm}$ } \label{fig1} \end{center} \end{figure} \section*{Acknowledgments} The authors are grateful to the anonymous referees for their careful reading, critical comments and useful suggestions on the original version of this paper, which have helped to improve the presentation substantially. In particular, they kindly pointed out that there are infinitely many solutions to \eqref{i-e} when $c=0$. The authors would like to thank Prof. Hitoshi Ishii warmly for inspiring discussions on the example and numerical results in Section \ref{section3}, which led them to establish Theorem \ref{cri-sol} and Theorem \ref{thm:existtwosolution}, and for his word-by-word correction of the manuscript, from which they benefited a lot. The authors are partly supported by the National Natural Science Foundation of China (Grant No. 12171096). L. Jin is also partly supported by the National Natural Science Foundation of China (Grant Nos. 11901293, 11971232). \medskip
\section{Introduction} Consider a maximally entangled state of two particles $A$ and $B$, for instance, the Bell state \begin{equation} \ket{\Phi^+}_{AB}=\frac{1}{\sqrt{2}} (\ket{0}_A\ket{0}_B + \ket{1}_A\ket{1}_B). \label{bell} \end{equation} Imagine now that the state describes a situation where the particles are in space-like separated regions, and let us call Alice and Bob the remote partners with access to particles $A$ and $B$ respectively. We assume that Alice and Bob can perform arbitrary local operations (LO) on their subsystems, including unitary actions and, possibly, generalized measurements involving extra ancilla particles. If necessary, Alice and Bob can exchange classical communication (CC), but global quantum operations involving subsystems $A$ and $B$ are forbidden. The resulting set of allowed operations is generally referred to as LOCC. When Alice and Bob share entanglement, local actions on a given region have a {\em non-local} effect, in the sense that the state of the remote particle does not necessarily remain insensitive to the details of the transformation performed miles away. Consider the case of a projective measurement on Alice's side. For example, she performs a Stern-Gerlach measurement with the apparatus oriented along an arbitrary unit vector $\hat{n}$. Let us call $\ket{+}_{\hat{n}}$ the eigenvector corresponding to spin up along the $\hat{n}$ direction. Whenever she records a spin up result, the joint state of the system is projected out into the product state $\ket{+}_{\hat{n},A} \ket{+}_{\hat{n},B}$; if she records a spin down result, the joint state of the system is projected out into the product state $\ket{-}_{\hat{n},A} \ket{-}_{\hat{n},B}$. Both outcomes happen with equal probability. \begin{figure}[htb] \epsfxsize=6.7cm \begin{center} \epsffile{Figure1.eps} \end{center} \caption{\label{effprob} Two remote partners share bipartite entanglement, represented by the state $\ket{\chi}_{AB}$.
Only one of the partners, Alice, has access to a device that can implement an arbitrary single qubit operation $U$. Is there an LOCC protocol that can yield a final state where Bob ends up holding the state $U\ket{\psi}$ for any $\ket{\psi}$? } \end{figure} Now there are two distinctly different ways to proceed. If Bob were to conduct an independent measurement on his particle using a Stern-Gerlach apparatus with a random orientation, then he will also register a random outcome, obtaining an unbiased spin up and spin down distribution along his chosen direction. There will be no correlations between his measurement outcomes and those of Alice. If, however, Bob receives a message from Alice informing him of her choice of alignment, then the situation changes in a subtle way. His measurement will still give random outcomes, but these outcomes will be perfectly correlated to those of Alice's measurement outcomes. This shows that classical communication may be used to correlate actions between partners, with the effect that experimental results will change significantly. In a more complicated setting Alice and Bob would perform measurements along various non-parallel but correlated axes. This type of correlated measurements then leads to the celebrated EPR paradox, later formulated by Bell in a testable form, which for years fuelled the discussions about quantum non-locality \cite{peres}. The recent development of the theory of quantum information processing has given an unexpected twist to this discussion. Shared entanglement supplemented by LOCC operations makes possible entirely new forms of distributed computation and communication \cite{review}, as epitomized by the process of quantum state teleportation \cite{tele}. In this article we will focus on a related problem: the remote implementation of quantum operations, and in particular, the teleportation of unitary transformations \cite{teleU,angles}.
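The perfect correlations just described can be checked in a few lines. The sketch below (ours, for illustration; it assumes a measurement direction $\hat n$ in the $x$--$z$ plane, so that all amplitudes are real) projects Alice's half of $\ket{\Phi^+}$ onto $\ket{+}_{\hat n}$ and confirms that this outcome occurs with probability $1/2$ and leaves Bob's particle in $\ket{+}_{\hat n}$ as well:

```python
import math

# |Phi+> = (|00> + |11>)/sqrt(2) as a 4-vector, basis order |00>, |01>, |10>, |11>.
PHI_PLUS = [1 / math.sqrt(2), 0.0, 0.0, 1 / math.sqrt(2)]

def spin_up(theta):
    """Spin-up eigenstate along n = (sin t, 0, cos t), a direction in the x-z plane."""
    return [math.cos(theta / 2), math.sin(theta / 2)]

def measure_alice(theta):
    """Project Alice's qubit onto |+n>; return (outcome probability, Bob's state)."""
    a = spin_up(theta)
    # Bob's unnormalized amplitudes: b_k = sum_i a_i * psi_{ik}
    bob = [a[0] * PHI_PLUS[0] + a[1] * PHI_PLUS[2],
           a[0] * PHI_PLUS[1] + a[1] * PHI_PLUS[3]]
    prob = bob[0] ** 2 + bob[1] ** 2
    norm = math.sqrt(prob)
    return prob, [b / norm for b in bob]

theta = 1.234                                  # arbitrary illustration angle
prob, bob = measure_alice(theta)
up = spin_up(theta)
assert abs(prob - 0.5) < 1e-12                 # spin up occurs with probability 1/2
assert abs(abs(bob[0] * up[0] + bob[1] * up[1]) - 1.0) < 1e-12  # Bob holds |+n> too
print("projection onto |+n>_A leaves Bob in |+n>_B with probability 1/2")
```

Without the classical message carrying $\theta$, Bob cannot exploit this correlation, which is why the outcomes remain locally random.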
The simplest situation has been illustrated pictorially in Figure 1. One of the remote partners, Alice, has access to a classical device that can implement an arbitrary single-particle operation $U$. There is no need to restrict the dimensionality of the space of allowed operations except for assuming that it be finite, but possibly very large (see Section II for specific details). Using LOCC operations, and provided that Alice and Bob share entanglement, represented by the state $\ket{\chi}_{\alpha AB}$ in the figure, our aim is to remotely implement the arbitrary transformation $U$ on Bob's side, i.e., to design an LOCC protocol that will end up with Bob holding the state $U \ket{\psi}_{\beta}$ for any single particle state $\ket{\psi}_{\beta}$. When we allow for a completely arbitrary $U$, we will show that the most economical procedure to achieve this consists of teleporting the state of Bob's particle to Alice, who applies the transformation $U$ to the teleported state and then teleports the state back to Bob. No other LOCC protocol supplemented by shared entanglement will consume less distributed entanglement and have a lower communication cost. However, when the requirement of perfect remote control for arbitrary operations is relaxed, we encounter a different situation and state teleportation is no longer the most cost-efficient way to proceed when implementing restricted sets of operations. Non-trivial protocols become possible and we present examples for these in the special case of unitary transformations on two-level systems (qubits). We have organized the presentation as follows. In Section II we present a general proof of the necessary and sufficient resources required for the remote control of a single qudit operation. These results generalize previous constraints derived for qubits \cite{teleU}. Section III analyzes the limitations arising when one attempts to teleport restricted sets of operations.
We will show that arbitrary qubit rotations around a fixed direction can be implemented remotely without the need of teleporting states between Alice and Bob. Novel results concerning the remote control of identical copies are discussed in Section III.a. Recent experimental reports demonstrating the protocol are summarized in section VI. The final section summarizes our main results and conclusions. \section{A general result} The most general scenario for the teleportation of an arbitrary unitary operation was discussed in \cite{teleU} and is related to the design of universal quantum gate arrays proposed by Nielsen and Chuang \cite{nielsen}. When the device implementing the unitary transformation $U$ is modelled as a truly quantum system, whose state corresponding to the unitary operation {\em U} will be denoted by $\ket{U}_{C}$, we can represent the remote control operator by a completely positive, linear, trace preserving map on the set of density operators for the combined system. Any such map has a unitary representation $\cal G$ involving global ancillary systems in state $\ket{\chi}_{AB}$, so that \begin{equation} {\cal G} \big[\ket{\chi}_{AB}{\otimes}\ket{U}_{C}{\otimes}\ket{\psi}_{\beta}\big]= \ket{\Phi(U,{\chi})}_{ABC}{\otimes}\left(U\ket{\psi}_{\beta}\right)\ . \label{tony} \end{equation} Note that this unitary representation may be non-local even if the map itself is local. As a consequence, any argument involving Eq. (\ref{tony}) will provide only lower bounds on the resource requirements. In any specific case one will then need to find a local protocol that matches these lower bounds. This can indeed be done in any of the instances that we are considering in the following. \begin{figure}[htb] \epsfxsize=8.7cm \begin{center} \epsffile{Figure2.eps} \end{center} \caption{\label{circuito} Quantum circuit model using nonlocal unitary gates $G_1$ and $G_2$ for the teleportation of a unitary operation $U$ on a single particle (See text for details).
Alice and Bob have access to the two upper and the two lower wires respectively. The lowest wire represents the selected single particle state onto which we intend to remotely apply the transformation $U$.} \end{figure} The results by Nielsen and Chuang on programmable gate arrays \cite{nielsen} imply that the control states $\ket{U}_C$ corresponding to different unitary transformations are orthogonal, so that no finite-dimensional control system can be used to teleport an arbitrary unitary operation. For the remainder of this paper, when we speak of an arbitrary unitary operation, we will mean one which belongs to some arbitrarily large, but finite, set. We will also assume that this set contains the identity $\sigma^0={\openone}$ and the 3 Pauli operators ${\sigma}^{i}$. The orthogonality of the control states opens the possibility that different operations can, at least in principle, be distinguished and identified by Alice if she chooses to perform measurements on the apparatus. Here we are not interested in this approach and focus instead on assisting the task with shared entanglement and LOCC operations \cite{bcn}. Within this very general formulation, it is possible to derive lower bounds on the amount of non-local resources that are needed to implement ${\cal G}$ using only local operations and classical communication.
To this end, we should remember the following basic principles \cite{poli,teleU}\\ \noindent{\bf (a)} {\em The amount of classical information able to be communicated by an operation in a given direction across some partition between subsystems cannot exceed the amount of information that must be sent in this direction across the same partition to complete the operation} (Impossibility of super-luminal communication).\\ \noindent {\bf (b)} {\em The amount of bipartite entanglement that an operation can establish across some partition between subsystems cannot exceed the amount of prior entanglement across the partition that must be consumed in order to complete the operation} (Impossibility of increasing entanglement under LOCC).\\ Principle (a) allows us to establish that at least two classical bits must be sent from Alice to Bob to complete the teleportation of an arbitrary {\em U}. Moreover, by teleporting an arbitrary {\em U} according to the general prescription in Eq. (\ref{tony}), Alice and Bob can establish 2 ebits of shared entanglement, and it follows from principle (b) that at least 2 ebits of entanglement need to be consumed to implement ${\cal G}$ locally, i.e. to teleport an arbitrary unitary operation \cite{teleU,angles}. These bounds can be attained by a procedure in which Bob teleports the state of his particle to Alice who, after applying the unitary transformation, teleports it back to him (bidirectional state teleportation). This scheme saturates the lower bounds for the amount of shared ebits and classical bits transmitted from Alice to Bob, additionally uses two bits of classical communication from Bob to Alice, and allows the faithful implementation of $U$ independently of the dimension of the control system. To be more efficient overall, any other scheme would need fewer resources than bidirectional state teleportation.
This establishes an upper bound on the overall amount of resources required for the efficient remote implementation of an arbitrary $U$: 4 classical bits and 2 ebits. Note that it would also be conceivable to adopt a different strategy -- teleporting the state of the control system from Alice to Bob, who would then implement the control directly onto qubit $\beta$. We call this the ``control-state teleportation'' scheme. Control-state teleportation is a unidirectional communication scheme from Alice to Bob, so the absolute lower bound for the communication exchange from Bob to Alice is zero. Obviously, the overall resources will depend on the dimensionality of the control system $C$, and in general a large amount of entanglement and classical communication from Alice to Bob will be required if we want to teleport the control system. Let us focus on the experimental scenario where the black box implementing an arbitrary transformation $U$ is a macroscopic object, involving a (very) large number of degrees of freedom. The option of teleporting the control apparatus is then unfeasible, given that it would consume an unrealistic amount of entanglement and classical communication resources. However, the question remains whether there exists a more economical protocol than bidirectional state teleportation. In the following we will generalize the results presented in \cite{teleU} and show that bidirectional state teleportation is an unconditionally optimal way to remotely implement an arbitrary $U$ on a given $d$-level system (qudit). Discarding the possibility of control-state teleportation allows us to replace the transformation given by Eq.
(\ref{tony}) with \begin{equation} G_2 \, U \, G_1 (\ket{\chi}_{\alpha AB} \otimes\ket{\psi}_{\beta}) = \ket{\Phi(U,\chi)}_{\alpha AB} \otimes U \ket{\psi}_{\beta} \label{doble}, \end{equation} where certain fixed operations $G_1$ and $G_2$ are performed, respectively, prior to and following the action of the arbitrary $U$ on a qudit $\alpha$ on Alice's side, as illustrated in Figure 2. We assume that Alice and Bob share initially some entanglement, represented by the state $\ket{\chi}_{\alpha AB}$. As before, the purpose of the transformation is to perform the operation $U$ on Bob's qudit $\beta$. Note that we chose to use a nonlocal unitary representation of the transformation involved, so that $G_1$ and $G_2$ are unitary operators acting on possibly all subsystems. For instance, the transformation $G_i$ can represent a state teleportation operation. In the following we prove, for systems of arbitrary spatial dimensions, that this is necessarily the case and that the only way that Eq. (\ref{doble}) can be implemented (locally) is by teleporting the state $\ket{\psi}_\beta$ from Bob to Alice, and then teleporting back the transformed state $U\ket{\psi}_\beta$ from Alice to Bob. By linearity, the transformed state of systems $\alpha AB$ has to be independent of the particular input state $\ket{\psi}_{\beta}$ and the specific operation {\em U} \cite{teleU}. With this we can already show that the operation $G_1$ cannot be trivial. We do this by first assuming, to the contrary, that $G_1=\openone$, and considering two input states, $\ket{\psi}_{\beta}$ and $\ket{\psi'}_{\beta}$ such that $_{\beta}{\langle}{\psi}^{'}|{\psi}{\rangle}_{\beta}=0$, and two unitary transformations $U$ and $U'$ which bring these two states to the same state $\ket{\gamma}_{\beta}$. Using Eq.
(\ref{doble}), this implies that \begin{eqnarray} G_2 \left( U\ket{\chi}_{\alpha AB} \, \ket{\psi}_{\beta} \right) &=& \ket{\Phi(\chi)}_{\alpha AB}{\otimes}\ket{\gamma}_{\beta} \nonumber \\ G_2 \left( U' \ket{\chi}_{\alpha AB} \, \ket{\psi'}_{\beta} \right) &=& \ket{\Phi(\chi)}_{\alpha AB}{\otimes} \ket{\gamma}_{\beta}\ . \label{chu} \end{eqnarray} No unitary action $G_2$ can satisfy Eq. (\ref{chu}), as this would require the mapping of orthogonal states onto the same state. Therefore, for the $U$-teleportation to succeed, $G_1$ has to be non-trivial. Finally, we will show that each of the operations $G_i$ implements a state transfer. Let us rewrite Eq. (\ref{doble}) as \begin{equation} U \, G_1 (\ket{\chi}_{\alpha AB} \otimes \ket{\psi}_{\beta}) = G^\dagger_2(\ket{\Phi(\chi)}_{\alpha AB} \otimes U \ket{\psi}_{\beta}). \label{doble2} \end{equation} Let us denote by $\{ \ket{k} \}_{k=0}^{d-1}$ the canonical basis in the state space of a qudit. Consider the Hermitian operator $\Pi$ defined as \begin{equation} \Pi=\sum_{k=0}^{d-1} (k+1) \ket{k} \bra{k}. \end{equation} By construction $\Pi$ is diagonal in the canonical basis and has a non-degenerate spectrum. Since the operations $G_1$ and $G_2$ are universal, we may choose $U$ and $\ket{\psi}_\beta$ freely. For each $\ket{\psi}_\beta$ let the operator $U_\psi$ be such that $U_\psi\ket{\psi}=\ket{0}$, where $\Pi \ket{0}=\ket{0}$. If $U= \Pi U_{\psi}$, then \begin{eqnarray*} \left(\Pi U_\psi\right) G_1\! \left(\ket{\chi}_{\alpha AB} \!\otimes\ket{\psi}_\beta\right) \!\!&=&\! G^\dagger_2\!\left(\ket{\Phi(\chi)}_{\alpha AB}\!\otimes \Pi U_\psi \ket{\psi}_\beta \right)\\ \!\!&=&\! G^\dagger_2\!\left(\ket{\Phi(\chi)}_{\alpha AB}\otimes\ket{0}_\beta\right)\ .
\end{eqnarray*} Choosing $U=U_\psi$ in Eq. (\ref{doble2}) shows that the RHS is simply $(U_\psi) G_1\left(\ket{\chi}_{\alpha AB}\otimes\ket{\psi}_\beta\right)$, and so, given that the spectrum of the operator $\Pi$ is non-degenerate, $(U_\psi) G_1\left(\ket{\chi}_{\alpha AB}\otimes\ket{\psi}_\beta\right)$ is the eigenstate $\ket{0}_{\alpha} \otimes\ket{\phi}_{AB \beta}$ of $\left(\Pi \right)_{\alpha} \otimes1_{AB \beta}$. Equivalently, \begin{eqnarray} G_1\left(\ket{\chi}_{\alpha AB}\otimes\ket{\psi}_\beta\right) &=& \left(U^\dagger_\psi\ket{0}_\alpha\right)\otimes\ket{\phi}_{AB \beta} \nonumber\\ &=& \ket{\psi}_\alpha\otimes\ket{\phi}_{AB \beta} \label{tele}\ . \end{eqnarray} In other words, the operation $G_1$ necessarily transfers Bob's state $\ket{\psi}$ to Alice's qubit $\alpha$. Substituting Eq. (\ref{tele}) into Eq. (\ref{doble}) then shows that $G_2$ necessarily transfers $U\ket{\psi}$ back to Bob's qubit ${\beta}$. This implies that the state of Bob's qubit must be brought to Alice for it to be acted on by the local operator $U$. This result is tantamount to a no-go theorem: {\it a local unitary operation $U$ cannot act remotely.} From this and the fact that quantum state teleportation is an optimal procedure for local state transfer, we conclude that the optimal LOCC procedure supplemented by shared entanglement for implementing remotely an arbitrary unitary action $U$ on a qudit is bidirectional state teleportation. \section{Restricted operations on qubits: Teleporting arbitrary rotations around a fixed axis} In the specific case of qubits, the results presented in the previous section show that if we want the transformation $U$ to be an arbitrary element of the group $SU(2)$, no LOCC protocol can exist consuming less overall resources than teleporting Bob's state to Alice followed by Alice teleporting the state transformed by $U$ back to Bob. This amounts to two ebits of entanglement and two classical bits in each direction.
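The central algebraic step in the proof of the preceding section, namely that a state left invariant by $\Pi_{\alpha}\otimes 1$ must have $\ket{0}$ as its $\alpha$ factor, can be checked numerically. The following Python sketch is our own illustration; the dimensions $d=3$ and $m=4$ are arbitrary choices, not taken from the text.

```python
# Our own numerical sketch: Pi = sum_k (k+1)|k><k| has a non-degenerate spectrum,
# so the eigenvalue-1 eigenspace of Pi (x) 1 on a bipartite system is exactly
# |0> (x) (anything). Dimensions d=3 and m=4 are arbitrary test choices.
import numpy as np

d, m = 3, 4
Pi = np.diag(np.arange(1, d + 1)).astype(float)   # spectrum {1, 2, ..., d}
big = np.kron(Pi, np.eye(m))                      # Pi acting on the first factor only

# Eigenvalue-1 eigenspace of Pi (x) 1:
vals, vecs = np.linalg.eigh(big)
space = vecs[:, np.isclose(vals, 1.0)]            # orthonormal basis, shape (d*m, m)

# Every vector in that eigenspace must have the form |0> (x) |phi>: all
# components with first-factor index k != 0 vanish.
residual = np.linalg.norm(space.reshape(d, m, -1)[1:])
```

The multiplicity of the eigenvalue 1 equals $m$, and the residual weight outside the $\ket{0}\otimes\mathbb{C}^m$ subspace is zero to machine precision, as the argument requires.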
The ultimate reason for this result can be found in the linearity of quantum mechanics: the impossibility of implementing remotely an arbitrary $U$ without resorting to state transfer belongs to the family of no-go results imposed by the linear structure of quantum mechanics, such as the no-cloning theorem \cite{cloning}.\\ Just as the no-cloning theorem forbids the replication of general states, should we expect a similar result if the requirement of being able to implement {\em any} $U$ is relaxed? Can we find families of operators that can be implemented consuming less overall resources than a two-way teleportation protocol? We want the procedure to work with perfect efficiency. Imperfect storage of quantum operations has been recently discussed by Vidal et al. \cite{guifre}. The probabilistic implementation of universal quantum processors was discussed by Nielsen and Chuang \cite{nielsen} and more recently by Hillery et al. in the general case of qudits \cite{buzek}. We will show that there are indeed two restricted classes of operations, and only two (up to a local change of basis), that can be implemented remotely and deterministically using less overall resources than bidirectional quantum state teleportation. These are arbitrary rotations around a fixed direction $\vec{n}$ and rotations by a fixed angle around an arbitrary direction lying in a plane orthogonal to $\vec{n}$ \cite{angles}. It is easy to show that if a given protocol were able to implement remotely a certain operation $U$, the same protocol would also allow the implementation of a remote control-$U$. This argument will allow us to establish a lower bound on the amount of classical communication that needs to be conveyed from Bob to Alice. Consider the case of a non-local controlled-NOT gate between Alice and Bob \cite{poli,sandu}.
When Alice prepares the state $\ket{+}_c=(\ket{0}+\ket{1})/\sqrt{2}$, the action of a controlled-NOT gate with Bob's qubit being in either state $\ket{+}_B$ or in state $\ket{-}_B$ is given by \begin{eqnarray} \ket{+}_c \ket{+}_B &\longmapsto& \ket{+}_c \ket{+}_B, \\ \nonumber \ket{+}_c \ket{-}_B &\longmapsto& \ket{-}_c \ket{-}_B . \end{eqnarray} Therefore, this operation allows Bob to transmit one bit of information to Alice and, as a consequence, the teleportation of $U$ requires at least one bit of communication from Bob to Alice. Summarizing, the physical principles of non-increase of entanglement under LOCC and the impossibility of super-luminal communication allow us to establish lower bounds on the resources required for teleporting an unknown quantum operation on a qubit. At least two ebits of entanglement have to be consumed and, in addition, this quantum channel has to be supplemented by a {\em two-way} classical communication channel which, in principle, could be non-symmetric. While consistency with causality requires two classical bits being transmitted from Alice to Bob, the lower bound for the amount of classical information transmitted from Bob to Alice has been found to be one bit.\\ An explicit protocol saturating these bounds was presented in \cite{angles}. This protocol allows the remote implementation of arbitrary rotations around a given axis $\hat{n}$ as well as fixed rotations around an arbitrary direction within a plane orthogonal to $\hat{n}$.
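The signalling argument above can be verified directly. The following Python sketch is our own check rather than part of the original analysis; it applies a standard CNOT (first qubit as control) to the two input combinations considered in the text.

```python
# Our own sketch of the signalling argument: with Alice's control qubit prepared
# in |+>, a nonlocal CNOT maps Bob's choice of |+> or |-> onto Alice's qubit,
# transmitting one bit from Bob to Alice.
import numpy as np

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

# Standard CNOT, first qubit (Alice's control) as control, second as target
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

out_pp = CNOT @ np.kron(plus, plus)    # Bob chose |+>
out_pm = CNOT @ np.kron(plus, minus)   # Bob chose |->

ok_pp = np.allclose(out_pp, np.kron(plus, plus))     # Alice's qubit stays |+>
ok_pm = np.allclose(out_pm, np.kron(minus, minus))   # Alice's qubit flips to |->
```

The second case is the familiar phase-kickback effect: the control qubit, not the target, changes state, which is what allows Bob to signal to Alice.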
Choosing $\hat{n}$ to be along the $z$-direction, these are operations of the form \begin{equation} U_{com} = \left( \begin{array}{cc} a & 0\\ 0 & a^* \end{array} \right) = e^{i\frac{\phi}{2}\sigma_z}, \label{uu1} \end{equation} with $a=e^{i\phi/2}$, that is, the set of operations that commute with $\sigma_z$, or transformations of the form \begin{equation} U_{anticom} = \left( \begin{array}{cc} 0 & b\\ -b^* & 0 \end{array} \right)=\sigma_x e^{i(\frac{\phi + \pi}{2})\sigma_z} {\rm ,} \label{uu2} \end{equation} which anticommute with $\sigma_z$, i.e., are linear combinations of the Pauli operators $\sigma_x$ and $\sigma_y$. Any operation within this family can be teleported deterministically using a protocol which employs less resources than bidirectional state teleportation. Remarkably, these are the only sets that have this property, as we rigorously showed in \cite{angles}. \begin{figure}[th] \vspace*{0.5cm} \centerline{\includegraphics[width=8cm]{Figure3.eps} } \vspace{.5cm} \caption{\label{gates} Quantum circuit for the remote rotation of a single qubit. The whole process is divided into three steps whose experimental implementation using photonic qubits will be described in section IV. The operation $G_1$ makes the coefficients of Bob's state visible in the channel via a global action on Bob's side and a subsequent measurement in the computational basis. One classical bit has to be conveyed from Bob to Alice. For instance, if Alice and Bob initially share the Bell state $\ket{\phi}_{AB}^+$ and $\ket{\psi}=\alpha \ket{0}_B+\beta \ket{1}_B$, after completing $G_1$ they share the state $\alpha \ket{00}_{AB}+\beta \ket{11}_{AB}$. Next, Alice implements the rotation $U$ on her side. Operation $G_2$ involves a measurement in the rotated basis followed by the transmission of a classical bit to Bob, who completes the protocol by applying a final $\sigma_z$ operation conditional on Alice's measurement result.
} \end{figure} The entanglement and communication costs can be further reduced if it is a priori known whether the operation $U$ to be teleported belongs to either the set characterized by Eq. (\ref{uu1}) or to the set characterized by Eq. (\ref{uu2}). Figure 3 depicts the quantum circuit representation of the protocol that allows the deterministic remote implementation of arbitrary rotations around the $z$-axis. This is a communication-symmetric protocol where one classical bit is conveyed in each direction and with an overall entanglement cost of 1 ebit. As always, wiggly lines represent shared entanglement across the space-like thick solid line. The dotted arrow lines represent the exchange of classical communication following the measurement of the observable enclosed in the half-ovoid symbol. Squared boxes denote unitary actions performed after that exchange.\\ Related results have been recently derived by other authors \cite{benni1}. Reznik et al. studied the deterministic remote implementation of a class of operations whose complete specification is split between the remote partners, for instance, single-qubit transformations of the form $U=e^{i \alpha_A \sigma_{\hat{n}_B}}$, where the rotation angle $\alpha_A$ is controlled by Alice while Bob selects the direction $\hat{n}_{B}$. In this case, the most economical protocol for the remote implementation of $U$ consumes 1 ebit and requires the symmetrical exchange of two classical bits. Reznik and Groisman have also analyzed the probabilistic remote implementation of controlled operations using partially entangled states as a resource \cite{benni2}. The potential use of this type of protocol for secret sharing schemes was discussed by Gea-Banacloche and C.-P. Yang \cite{gea}. \subsection{Efficient application of multiple instances of an unknown unitary operation} In this subsection we present a novel result concerning the generalization of quantum remote control.
So far we have considered the question of a single remote application of an unknown unitary operation on an individual quantum state. One might consider whether the remote application of $U$ on two identical copies of a quantum state can be carried out with fewer resources than two full ebits and two classical bits of communication in each direction (i.e., twice the resources of a single application). In the following we will show that indeed, as long as the two copies of the quantum state $|\psi\rangle$ are both held by Bob, a resource reduction can be achieved. In other words, assuming Alice holds a machine that implements $U=e^{i\theta\sigma_z}$ with an unknown $\theta$, and given that Bob holds the state $|\psi\rangle^{\otimes 2}$, we would like to provide an entangled state shared between Alice and Bob which, when supplemented by local operations and classical communication, allows Bob to hold, at the end of the protocol, the state $(U|\psi\rangle)^{\otimes 2}$. In the following we will provide a protocol that requires an entangled state with $\log 3$ ebits of entanglement. This is clearly less than 2 ebits and therefore represents a resource reduction over the trivial protocol.
The entangled resource is the well-known tele-cloning state \cite{Murao JPV 99} \begin{equation} |\psi_{tele}\rangle = \frac{1}{\sqrt{3}}\left(|00\rangle_A|00\rangle_B + \frac{|01\rangle_A+|10\rangle_A}{\sqrt{2}}\frac{|01\rangle_B+|10\rangle_B}{\sqrt{2}} + |11\rangle_A|11\rangle_B\right). \end{equation} It will be convenient to use the abbreviations \begin{eqnarray*} |\tilde{0}\rangle &=& |00\rangle\\ |\tilde{1}\rangle &=& \frac{|01\rangle+|10\rangle}{\sqrt{2}}\\ |\tilde{2}\rangle &=& |11\rangle \end{eqnarray*} Then the total state including $|\psi\rangle^{\otimes 2}$ with $|\psi\rangle = \alpha|0\rangle+\beta|1\rangle$ is given by \begin{equation} |\psi_{tele}\rangle|\psi\rangle^{\otimes 2} = \frac{1}{\sqrt{3}}(|\tilde{0}\rangle_A|\tilde{0}\rangle_B + |\tilde{1}\rangle_A|\tilde{1}\rangle_B + |\tilde{2}\rangle_A |\tilde{2}\rangle_B)(\alpha^2|\tilde{0}\rangle+\sqrt{2}\alpha\beta|\tilde{1}\rangle +\beta^2|\tilde{2}\rangle). \end{equation} Let us further define the three unitary transformations \begin{eqnarray} 1 &=& |\tilde{0}\rangle\langle \tilde{0}|+ |\tilde{1}\rangle\langle \tilde{1}| + |\tilde{2}\rangle\langle \tilde{2}|\\ T &=& |\tilde{0}\rangle\langle \tilde{1}|+ |\tilde{1}\rangle\langle \tilde{2}| + |\tilde{2}\rangle\langle \tilde{0}|\\ T^2 &=& |\tilde{0}\rangle\langle \tilde{2}|+ |\tilde{1}\rangle\langle \tilde{0}| + |\tilde{2}\rangle\langle \tilde{1}| \end{eqnarray} and the corresponding controlled operation \begin{equation} U_T = \sum_{k=0}^{2} |\tilde{k}\rangle\langle \tilde{k}|\otimes T^k. \end{equation} We furthermore define the unitary operator \begin{equation} {\cal F} = \frac{1}{\sqrt{3}}\left(\begin{array}{ccc} 1 & 1 & 1\\ 1 & e^{2i\pi/3} & e^{4i\pi/3}\\ 1 & e^{4i\pi/3} & e^{8i\pi/3} \end{array}\right). \end{equation} It is now straightforward to verify that the following protocol will implement $U$ on the two copies of $|\psi\rangle$.
(i) Apply $U_T$ between the first (second) copy of $|\psi\rangle$ as control and the first (second) qubit of Bob's part of the telecloning state as target. (ii) Measure Bob's qubits that are part of the telecloning state. If he finds $|\tilde{k}\rangle$, then the operation $T^k$ is applied to Alice's qubits. (iii) Alice now applies $U$ to both of her qubits. (iv) Alice applies ${\cal F}$ and subsequently performs a measurement in the $\{|\tilde{0}\rangle,|\tilde{1}\rangle,|\tilde{2}\rangle\}$ basis. (v) If Alice finds $|\tilde{k}\rangle$, then Bob needs to apply the operation ${\cal F}^k$ on his qubits.\\ The outcome of this procedure is the state $(U|\psi\rangle)^{\otimes 2}$ on Bob's side. The procedure requires the transmission of $\log 3$ classical bits in both directions and furthermore requires the telecloning state as a resource, which contains $\log 3$ ebits of entanglement. Note, however, that in this protocol it is strictly necessary that Bob holds both copies of the state $|\psi\rangle$ and possesses the ability to carry out joint operations on both of them. These joint operations would require further entangled resources if the two particles on Bob's side are distant. It is therefore natural to consider the question of whether one can remotely apply the unitary operation $U$ that Alice possesses onto two identical copies of the state $|\psi\rangle$ that are held by Bob and Charles. If one admits a $50\%$ success rate, then this is indeed possible employing a straightforward generalization of the protocol for the remote application of $U=e^{i\theta\sigma_z}$ presented earlier in this section. It is an interesting open question whether there is a protocol that achieves $100\%$ success probability {\em and} requires less than a shared ebit between Alice and Bob and another shared ebit between Alice and Charles.
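The entanglement cost quoted above can be confirmed numerically. The sketch below is our own check: it builds $|\psi_{tele}\rangle$, traces out Bob's pair, and recovers an entropy of entanglement of $\log_2 3 \approx 1.58$ ebits across the Alice--Bob cut.

```python
# Our own check that the telecloning resource carries log2(3) ebits across the
# A|B cut, i.e. less than the 2 ebits of two Bell pairs.
import numpy as np

def ket(i, j):                       # two-qubit basis vector |ij>
    v = np.zeros(4)
    v[2 * i + j] = 1.0
    return v

t0, t2 = ket(0, 0), ket(1, 1)
t1 = (ket(0, 1) + ket(1, 0)) / np.sqrt(2)

# |psi_tele> = (|~0>_A|~0>_B + |~1>_A|~1>_B + |~2>_A|~2>_B) / sqrt(3)
psi = sum(np.kron(t, t) for t in (t0, t1, t2)) / np.sqrt(3)

# Partial trace over Bob's pair: rho_A = M M^T with psi = sum_ab M[a,b] |a>|b>
M = psi.reshape(4, 4)
rho_A = M @ M.T

evals = np.linalg.eigvalsh(rho_A)
evals = evals[evals > 1e-12]                      # three eigenvalues of 1/3
entropy = -np.sum(evals * np.log2(evals))         # entanglement entropy in ebits
```

The reduced state is maximally mixed on a three-dimensional subspace, so the entropy is exactly $\log_2 3$, matching the resource count in the text.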
\section{Recent experimental results} Recently, the quantum remote control protocol for arbitrary single qubit rotations, shown in Figure 3, was implemented experimentally in a linear optics setup \cite{Xiang LG 05}. For that, polarization entangled states generated from spontaneous parametric down conversion (SPDC) were locally manipulated to generate a three qubit state involving polarization and path degrees of freedom. In the following we will describe the basic elements of this experiment as well as the results that have been obtained with this set-up. We begin with a brief description of the experimental choice of the physical qubit, the gates implementing the local operations and the entangled states employed. \begin{itemize} \item {\em Qubits:} For photons, both horizontal and vertical polarization states $\left\{ \left\vert H\right\rangle ,\left\vert V\right\rangle \right\} $ as well as up and down paths $\left\{ \left\vert u\right\rangle ,\left\vert d\right\rangle \right\} $ can represent the logic states $\{\left\vert 0\right\rangle ,\left\vert 1\right\rangle \}$ of qubits. We will refer to the resulting encoding as polarization or path qubits, respectively. \item {\em Quantum gates:} For a polarization qubit, arbitrary unitary rotations can be performed by using half-wave plates (HWP) and quarter-wave plates (QWP) \cite{JKMW}.
The controlled-NOT gate between a polarization qubit (control) and a path qubit (target) of the same photon can be implemented by a polarization beam splitter (PBS), \begin{eqnarray} \left\vert H\right\rangle \left\vert u\right\rangle (\left\vert H\right\rangle \left\vert d\right\rangle )&\rightarrow& \left\vert H\right\rangle \left\vert u^{\prime }\right\rangle (\left\vert H\right\rangle \left\vert d^{\prime }\right\rangle ),\\ \nonumber \left\vert V\right\rangle \left\vert u\right\rangle (\left\vert V\right\rangle \left\vert d\right\rangle )&\rightarrow& \left\vert V\right\rangle \left\vert d^{\prime }\right\rangle (\left\vert V\right\rangle \left\vert u^{\prime }\right\rangle ),\end{eqnarray} with a suitable definition of the incoming and outgoing modes. \item {\em Entangled states:} Bi-photon entangled states involving either polarization or path qubits can be generated via an SPDC process \cite{K}. Here the initial three qubit state of the protocol represented in Fig. 3 involves the polarization degree of freedom of Alice's qubit and the polarization and path degrees of freedom of Bob's particle, as explained in detail below. \end{itemize} \begin{figure}[htb] \epsfxsize=8.7cm \begin{center} \epsffile{fig4.eps} \end{center} \caption{\label{setup} Schematic representation of the experimental setup to perform a remote rotation on a single photon \cite{Xiang LG 05}. \textit{P1}, \textit{P2} denote polarization beam splitters; HWP are half wave plates; QWP are quarter wave plates; PA is a polarization analyzer; $D_{1},D_{2}$ represent single photon detectors.} \end{figure} The concrete experimental setup that was implemented in \cite{Xiang LG 05} is shown schematically in Fig. \ref{setup}. A mode-locked Ti:Sapphire pulsed laser (with a pulse width below 200 fs, a repetition rate of about 82 MHz and a central wavelength of 780.0 nm) is frequency-doubled to produce the pumping source for an SPDC process.
A BBO crystal of 1 mm thickness, cut for type-II phase matching, is used as a down converter. Non-collinear degenerate SPDC generates two photons, \textit{A} and \textit{B}, in the polarization-entangled state $$\left\vert \Psi ^{+}\right\rangle _{AB}=\frac{1}{\sqrt{2}}(\left\vert H\right\rangle _{A}\left\vert V\right\rangle _{B}+ \left\vert V\right\rangle _{A}\left\vert H\right\rangle _{B})$$ \cite{K}. Bob employs the PBS (denoted \textit{P1}) to split photon \textit{B} into the two paths $\left\{ \left\vert u\right\rangle ,\left\vert d\right\rangle \right\} $, and a HWP \textit{H1} at $45^{\circ }$, acting as a $\sigma _{x}$ gate, is used to flip the polarization in path \textit{u}. Hence the initial polarization entanglement between the two distributed photons is converted into polarization-path entanglement, \begin{equation} \left\vert \Psi ^{+}\right\rangle _{123}=\frac{1}{\sqrt{2}}(\left\vert H\right\rangle _{1}\left\vert u\right\rangle _{2}+\left\vert V\right\rangle _{1}\left\vert d\right\rangle _{2})\left\vert H\right\rangle _{3}, \label{ES} \end{equation} where we have relabelled Alice's photon with the index $1$, and the indices $2$ and $3$ refer to the path and polarization degrees of freedom of photon B. The polarization state of qubit $3$ can be prepared in an arbitrary state $\left\vert \psi \right\rangle _{3}=\alpha \left\vert H\right\rangle _{3}+\beta \left\vert V\right\rangle _{3}$ with identical sets of waveplates, $\{H_{u},Q_{u}\}$ and $\{H_{d},Q_{d}\}$, in each path \cite{JKMW}.
The global state can therefore be initialized to be of the general form \begin{equation} \left\vert \Phi ^{+}\right\rangle _{12}\left\vert \psi \right\rangle _{3}=\frac{1}{\sqrt{2}}(\left\vert H\right\rangle _{1}\left\vert u\right\rangle _{2}+\left\vert V\right\rangle _{1}\left\vert d\right\rangle _{2})(\alpha \left\vert H\right\rangle _{3}+\beta \left\vert V\right\rangle _{3}).\end{equation} The three step protocol for the remote rotation of Bob's polarization qubit is performed as follows:\\ \textit{i}) \textit{Encoding (Operation $G_1$)}: The paths \textit{u} and \textit{d} of photon \textit{B} provide the input for a second PBS (denoted by \textit{P2}) to perform a \textit{CNOT} operation, where the polarization acts as the control qubit and the path represents the target qubit. In the experiment the optical path lengths of \textit{u} and \textit{d} have been made equal to avoid the accumulation of a relative phase factor between the two terms in Eq. (\ref{ES}). The $\sigma _{z}$ measurement on qubit \textit{2} is implemented by reading out the path information of photon \textit{B}. If \textit{B} is in path \textit{u}', $\left\vert \psi \right\rangle _{3}$ is encoded into $\left\vert \psi \right\rangle _{13}=\alpha \left\vert H\right\rangle _{1}\left\vert H\right\rangle _{3}+\beta \left\vert V\right\rangle _{1}\left\vert V\right\rangle _{3}$. If \textit{B} is in path \textit{d}', the two photons will be in $\left\vert \psi ^{\prime }\right\rangle _{13}=\alpha \left\vert V\right\rangle _{1}\left\vert H\right\rangle _{3}+\beta \left\vert H\right\rangle _{1}\left\vert V\right\rangle _{3}$, which can be transformed into $\left\vert \psi \right\rangle _{13}$ by another HWP at $45^{\circ }$ acting on photon \textit{A}. Here we omit the latter case without loss of generality. \textit{ii}) \textit{Remote operation}: The operation $U_{com}$ can be performed by a pair of QWP at $45^{\circ }$ with a HWP at $\frac{\varphi }{2}-45^{\circ }$ between them.
Such a device has been used to verify the geometric phase of classical light and photons \cite{HR,BDM}. For a single qubit operation, any additional global phase is trivial, so $U_{com}$ can be replaced by $e^{i\varphi /2}U_{com}$, which can be realised by a single zero-order waveplate at $0^{\circ }$ tilted at a suitable angle (see \cite{KWWAE} for a similar application). Here we chose $\varphi =60^{\circ }$ and $120^{\circ }$, realized by a tilted QWP \textit{Q1}.\\ \textit{iii}) \textit{Decoding and Verification (Operation $G_2$)}: Alice performs her measurement in the rotated basis $\{\left\vert +\right\rangle _{1}=\frac{1}{\sqrt{2}}\left( \left\vert H\right\rangle _{1}+\left\vert V\right\rangle _{1}\right) ,\left\vert -\right\rangle _{1}=\frac{1}{\sqrt{2}}\left( \left\vert H\right\rangle _{1}-\left\vert V\right\rangle _{1}\right) \}$ using a polarizer. Photon \textit{A} is detected by a single photon detector (SPCM-AQR-14 by EG\&G). Photon \textit{B} collapses into $\left\vert \psi ^{\prime }\right\rangle _{3}=U_{com}\left\vert \psi \right\rangle _{3}$ for result $\left\vert +\right\rangle _{1}$, and into $\left\vert \psi ^{\prime \prime }\right\rangle _{3}=U_{com}\sigma _{z}\left\vert \psi \right\rangle _{3}$ for result $\left\vert -\right\rangle _{1}$. The latter can be converted into $\left\vert \psi ^{\prime }\right\rangle _{3}$ by a HWP at $0^{\circ }$, i.e. a $\sigma _{z}$ rotation. The polarization state of photon \textit{B} is reconstructed by quantum state tomography using a polarization analyzer and a detector \cite{JKMW}. The measurements on \textit{A} and \textit{B} are collected via coincidence counts with a window time of 5 ns.\\ {\em Results:} \vspace*{0.5cm} \begin{figure}[htb] \epsfxsize=10.7cm \begin{center} \epsffile{fig5.eps} \end{center} \caption{\label{results} Quantum process tomography for the remote $\sigma_{z}$ rotation of $60^{\circ }$ and $120^{\circ }$.
We have represented the theoretical values of the $\chi$ matrices corresponding to (a) the ideal rotation $\chi_i$ and (b) a dephased rotation $\chi_d$, as well as (c) the measured rotation $\chi _{e}$ performed with the set up depicted in Fig. 4. The real parts of the matrix elements of $\chi $ are represented in the charts on the left, while the imaginary parts are shown on the right.} \end{figure} The effect of a general quantum operation on a single qubit can be represented by a trace-preserving completely positive (CP) map, i.e. for an arbitrary input state $\rho$, the output state is of the form $\rho^{\prime}= \varepsilon (\rho )=\sum_{mn}\chi _{mn}E_{m}\rho E_{n}^{\dagger }$, ($\{E_{m}\}=\{I,\sigma _{x},\sigma _{y},\sigma _{z}\}$), where $\chi $ is a positive Hermitian matrix. In the actual experiment, two main sources of phase decoherence were identified (for a more detailed discussion see \cite{Xiang LG 05}). One is caused by the birefringence of the BBO crystal, which induces a partial time-separation between the wave-packets of the two polarizations. The other is the mismatch of spatial modes at the PBS \textit{P2}. The CP map representing the dephasing operation can be written as $\varepsilon _{d}=\{\sqrt{\frac{1+p\eta }{2}}U_{com},\sqrt{\frac{1-p\eta }{2}}U_{com}\sigma _{z}\}$, where $p$ is the visibility of the entangled state obtained from SPDC and $\eta$ is the visibility of the interferometer formed by the PBSs $P1$ and $P2$. These parameters can be measured independently. Within this formalism, the final state after the action of the net quantum operation, including dephasing, representing the remote rotation protocol is given by \begin{equation} \rho _{d}=\varepsilon _{d}(\ket{\psi} \bra{\psi})=\left( \begin{array}{cc} \alpha \alpha ^{\ast } & p\eta \alpha \beta ^{\ast }e^{-i\varphi } \\ p\eta \alpha ^{\ast }\beta e^{i\varphi } & \beta \beta ^{\ast } \end{array} \right) .
\end{equation} Here, to completely characterize the remote operation $\varepsilon _{e}$ in our experiment, the four states $\{\left\vert H\right\rangle ,\left\vert V\right\rangle ,\left\vert D\right\rangle ,\left\vert R\right\rangle = \frac{1}{\sqrt{2}}\left( \left\vert H\right\rangle -i\left\vert V\right\rangle \right) \}$ are used as inputs for the initial state of Bob's polarization qubit. With the four output density matrices, the process matrix $\chi $ can then be reconstructed using techniques for quantum process tomography \cite{ABJ}. For the $\sigma_{z}$-rotation $U_{com}=\cos (\phi/2)+i \sin (\phi/2)\,\sigma_{z}$, $\chi_{i}$ has four non-zero elements, $\chi_{11}=(1+\cos\phi)/2$, $\chi_{44}=(1-\cos\phi)/2$, $\chi^{\ast}_{14}=\chi_{41}=\frac{i}{2} \sin\phi$. The dephasing only changes the values of the four non-zero elements, $\chi_{11}=(1+p \,\eta \,\cos\phi)/2$, $\chi_{44}=(1-p\,\eta\, \cos\phi)/2$, $\chi^{\ast}_{14}=\chi_{41}=\frac{i}{2}\, p\,\eta \,\sin\phi$, while leaving the other 12 zero elements unaffected. The matrices are shown in Fig. \ref{results} for the case of an ideal rotation $\chi_{i}$, a dephased rotation $\chi _{d}$, and the effective operation $\chi _{e}$ obtained in our experiment, for rotations by $60^{\circ}$ and $120^{\circ}$ about the $z$ axis; the six histograms on the left show the real parts of the matrices and those on the right the imaginary parts. The parameters entering $\varepsilon _{d}$ were measured to be $p\approx 0.85$ and $\eta \approx 0.92$. Additional non-zero elements are found in the effective $\chi_{e}$; these are introduced by the imperfections of the polarization beam splitter.
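Both the dephased output state and the quoted $\chi$ elements can be reproduced with a short computation. The following Python sketch is our own check; the amplitudes $\alpha=0.6$, $\beta=0.8$ and the angle $\phi=\pi/3$ are arbitrary test values, not experimental data.

```python
# Our own check of the dephasing model and the chi-matrix elements quoted above.
import numpy as np

p, eta, phi = 0.85, 0.92, np.pi / 3          # measured visibilities; 60-degree rotation
alpha, beta = 0.6, 0.8                       # arbitrary real amplitudes (normalized)
sz = np.diag([1.0 + 0j, -1.0])
U = np.diag([np.exp(1j * phi / 2), np.exp(-1j * phi / 2)])   # U_com

# Kraus representation of the dephasing map eps_d
K = [np.sqrt((1 + p * eta) / 2) * U,
     np.sqrt((1 - p * eta) / 2) * U @ sz]
assert np.allclose(sum(k.conj().T @ k for k in K), np.eye(2))  # trace preserving

rho = np.outer([alpha, beta], [alpha, beta])                   # |psi><psi|
rho_d = sum(k @ rho @ k.conj().T for k in K)
off_damping = abs(rho_d[0, 1]) / (alpha * beta)                # off-diagonal damping

# chi matrix of the ideal rotation: U = cos(phi/2) I + i sin(phi/2) sz, so
# chi = a a^dagger with a = (cos(phi/2), 0, 0, i sin(phi/2)) in the {I,sx,sy,sz} basis
a = np.array([np.cos(phi / 2), 0.0, 0.0, 1j * np.sin(phi / 2)])
chi = np.outer(a, a.conj())
```

The off-diagonal element of $\rho_d$ is damped by exactly $p\eta$, and the four non-zero $\chi$ elements come out as $(1\pm\cos\phi)/2$ and $\pm\frac{i}{2}\sin\phi$, in agreement with the expressions above.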
The comparison of the experimental operation $\varepsilon_{e}$ with the ideal rotation $\varepsilon _{i}$ is determined by evaluating the average fidelity over pure input states uniformly distributed over the Bloch sphere, $$ \overline{F}[\varepsilon_{e},U_{com} ]=\int d\psi \, F[\varepsilon _{e}(\ket{\psi}\bra{\psi}), U_{com}\ket{\psi} \bra{\psi} U_{com}^{\dagger}],$$ where $$F[\rho ,\rho ^{\prime}]=({\rm Tr}[\sqrt{\sqrt{\rho ^{\prime }}\rho \sqrt{\rho ^{\prime }}}])^2$$ is the output state fidelity \cite{BOSBJ,N}. The measured $\chi$ yields $\overline{F}_{60}=0.96$ and $\overline{F}_{120}=0.86$. Although only rotations commuting with $\sigma_{z}$ ($z$-rotations) were implemented in our experiment, the same protocol can be applied to operations anti-commuting with $\sigma_{z}$ ($x$-rotations) by adding another HWP at $45^{\circ}$ at the output port, which acts as a $\sigma_{x}$ operation. The above scheme can also be generalised to implement controlled operations which commute or anti-commute with $\sigma_{z}$. For example, the controlled-phase gate commutes with $I\otimes \sigma _{z}$, $\sigma _{z}\otimes I$ and $\sigma _{z}\otimes \sigma _{z}$. An experiment along these lines, where a non-local CNOT was implemented using linear optics, was reported in \cite{Huang}. Summarizing, remote rotations by $60^{\circ }$ and $120^{\circ }$ about the $z$ axis have been implemented on a photonic qubit using shared entanglement and local operations, without performing the rotation directly on the target photon. The whole process was characterized using quantum process tomography and the results agree with the theoretical predictions. The scheme can be generalized to implement remotely any operation belonging to the two classes that allow for protocols different from bidirectional state teleportation \cite{angles}. \section{Conclusions} The linearity of quantum mechanics imposes severe constraints on the type of protocols that can be implemented by quantum mechanical means.
When combining these constraints with those of locality, one arrives at interesting no-go theorems concerning quantum protocols between spatially separated parties. In this work we have investigated such questions both theoretically and experimentally. In particular, we have considered the possibility of the remote implementation of unitary transformations. If Alice holds a tool that allows for the implementation of a unitary transformation, we consider the question of whether entanglement and classical communication are sufficient to allow this unitary transformation to be applied to an unknown state of a particle held by Bob. We have considered several scenarios and, employing linearity, the non-increase of entanglement under local operations and classical communication (LOCC) as well as the no-signalling condition, several no-go theorems were derived and discussed. We have also identified situations with restricted sets of operations in which the remote application of unitaries can be achieved in a non-trivial way, that is, avoiding state teleportation. Despite their theoretical nature, the results of this work have led to an effort towards the implementation of such protocols, and we have described the successful implementation of one of the theoretical protocols that has been developed in this paper. It is hoped that these results further illuminate how fundamental properties of quantum mechanics impose constraints via linearity, locality and no-signalling, but also how generalized measurements and entanglement supplemented by classical communication facilitate new opportunities. The use of multipartite entanglement, as discussed in section III.a, opens up a new front, and further applications for quantum communication are foreseeable. \acknowledgements The authors would like to thank Claire Bedrock for her patience and her willingness to `enjoy the challenge' of dealing with a very late submission.
This work was supported by the IRC on Quantum Information of the EPSRC (GR/S82176/0), The EU via the Integrated Project QAP, the Thematic Network QUPRODIS (IST-2002-38877), the Leverhulme Trust (F/07058/U), the Chinese National Fundamental Research Program, the NSF of China and the Innovation Funds from the Chinese Academy of Sciences.
\section{Motivation} The \textit{universal approximation theorem} \citep{cybenko1989} and its subsequent extensions \citep{hornik1991, lu2017} state that feedforward neural networks with exponentially large width and width-bounded deep neural networks can approximate any continuous function arbitrarily well. This universal approximation capacity of neural networks along with available computing power explain the widespread use of deep learning nowadays. Bayesian inference for neural networks is typically performed via stochastic Bayesian optimization, stochastic variational inference \citep{polson2017} or ensemble methods \citep{ashukha2020, wilson2020}. MCMC methods have been explored in the context of neural networks, but have not become part of the Bayesian deep learning toolbox. The slower evolution of MCMC methods for neural networks is partly attributed to the lack of scalability of existing MCMC algorithms for big data and for high-dimensional parameter spaces. Furthermore, additional factors hinder the adaptation of existing MCMC methods in deep learning, including the hierarchical structure of neural networks and the associated covariance between parameters, lack of identifiability arising from weight symmetries, lack of a priori knowledge about the parameter space, and ultimately lack of convergence. The purpose of this paper is twofold. Initially, a literature review is conducted to identify inferential challenges in MCMC developments for neural networks. Subsequently, Bayesian marginalization based on MCMC samples of neural network parameters is used for attaining accurate posterior predictive distributions of the respective neural network output, despite the lack of convergence of the MCMC samples to the parameter posterior. An outline of the paper layout follows. Section \ref{challenges} reviews the inferential challenges arising from the application of MCMC to neural networks. 
Section \ref{inferential_overview} provides an overview of the employed inferential framework, including the multilayer perceptron (MLP) model and its likelihood for binary and multiclass classification, the MCMC algorithms for sampling from MLP parameters, the multivariate MCMC diagnostics for assessing convergence and sampling effectiveness, and the Bayesian marginalization for attaining posterior predictive distributions of MLP outputs. Section \ref{examples} showcases Bayesian parameter estimation via MCMC and Bayesian predictions via marginalization by fitting different MLPs to four datasets. Section \ref{scope} posits predictive inference for neural networks, among other approaches by combining Bayesian marginalization with approximate MCMC sampling or with ensemble training. \section{Parameter inference challenges} \label{challenges} A literature review of inferential challenges in the application of MCMC methods to neural networks is conducted in this section thematically, with each subsection being focused on a different challenge. \subsection{Computational cost} \label{computational_cost} Existing MCMC algorithms do not scale with increasing number of parameters or of data points. For this reason, approximate inference methods, including variational inference (VI), are preferred in high-dimensional parameter spaces or in big data problems from a time complexity standpoint \citep{mackay1995, blei2017, blier2018}. On the other hand, MCMC methods are better than VI in terms of approximating the log-likelihood \citep{dupuy2017}. Literature on MCMC methods for neural networks is limited due to associated computational complexity implications. Sequential Monte Carlo and reversible jump MCMC have been applied to two types of neural network architectures, namely MLPs and radial basis function networks (RBFs), see for instance \citet{andrieu1999, freitas1999, andrieu2000, freitas2001}. For a review of Bayesian approaches to neural networks, see \citet{titterington2004}.
Many research developments have been made to scale MCMC algorithms to big data. The main focus has been on designing Metropolis-Hastings or Gibbs sampling variants that evaluate a costly log-likelihood on a subset (\textit{minibatch}) of the data rather than on the entire data set \citep{welling2011, chen2014, ma2017, mandt2017, desa2018, nemeth2018, robert2018, seita2018, quiroz2019}. Among minibatch MCMC algorithms for big data applications, there exists a subset of studies applying such algorithms to neural networks \citep{chen2014, gu2015, gong2019}. Minibatch MCMC approaches to neural networks pave the way towards \textit{data-parallel deep learning}. On the other hand, to the best of the authors' knowledge, there is no published research on MCMC methods that evaluate the log-likelihood on a subset of neural network parameters rather than on the whole set of parameters, and therefore no reported research on \textit{model-parallel deep learning} via MCMC. Minibatch MCMC has been studied analytically by \citet{johndrow2020}. Their theoretical findings point out that some minibatching schemes can yield inexact approximations and that minibatch MCMC cannot greatly expedite the rate of convergence. \subsection{Model structure} A neural network with $\rho$ layers can be viewed as a hierarchical model with $\rho$ levels, each network layer representing a level \citep{williams2000}. Due to its nested layers and its non-linear activations, a neural network is a \textit{non-linear hierarchical model}. MCMC methods for non-linear hierarchical models have been developed, see for example \citet{bennett1996, gilks1996b, daniels1998, sargent2000}. However, existing MCMC methods for non-linear hierarchical models have not harnessed neural networks due to time complexity and convergence implications.
Although not designed to mirror the hierarchical structure of a neural network, recent hierarchical VI \citep{ranganath2016, esmaeili2019, huang2019, titsias2019} provides more general variational approximations of the parameter posterior of the neural network than mean-field VI. Introducing a hierarchical structure in the variational distribution induces correlation among parameters, in contrast to the mean-field variational distribution that assumes independent parameters. So, one of the Bayesian inference strategies for neural networks is to approximate the covariance structure among network parameters. In fact, there are published comparisons between MCMC and VI in terms of speed and accuracy of convergence to the posterior covariance, both for linear or mixture models \citep{giordano2015, mandt2017, ong2018} and for neural networks \citep{zhang2018a}. \subsection{Weight symmetries} The output of a feedforward neural network given some fixed input remains unchanged under a set of transformations determined by the choice of activations and by the network architecture more generally. For instance, certain weight permutations and sign flips in MLPs with hyperbolic tangent activations leave the output unchanged \citep{chen1993}. If a parameter transformation leaves the output of a neural network unchanged given some fixed input, then the likelihood is invariant under the transformation. In other words, transformations, such as weight permutations and sign flips, render neural networks \textit{non-identifiable} \citep{pourzanjani2017}. It is known that the set of linear invertible parameter transformations that leaves the output unchanged is a subgroup $T$ of the group of invertible linear mappings from the parameter space $\mathbb{R}^n$ to itself \citep{nielsen1990}. $T$ is a transformation group acting on the parameter space $\mathbb{R}^n$.
It can be shown that for each permutable feedforward neural network, there exists a cone $H\subset\mathbb{R}^{n}$ dependent only on the network architecture such that for any parameter $\theta\in\mathbb{R}^n$ there exist $\eta\in H$ and $\tau\in T$ such that $\tau\eta=\theta$. This relation means that every network parameter is equivalent to a parameter in the proper subset $H$ of $\mathbb{R}^n$ \citep{nielsen1990}. Neural networks with convolutions, max-pooling and batch-normalization contain more types of weight symmetries than MLPs \citep{badrinarayanan2015}. In practice, the parameter space of a neural network is set to be the whole of $\mathbb{R}^n$ rather than a cone $H$ of $\mathbb{R}^n$. Since a neural network likelihood with support in the non-reduced parameter space of $\mathbb{R}^n$ is invariant under weight permutations, sign-flips or other transformations, the posterior landscape includes multiple equally likely modes. This implies low acceptance rate, entrapment in local modes and convergence challenges for MCMC. Additionally, computational time is wasted during MCMC, since posterior modes represent equivalent solutions \citep{nalisnick2018}. Such challenges manifest themselves in the MLP examples of section \ref{examples}. For neural networks with a higher number $n$ of parameters in $\mathbb{R}^n$, the topology of the likelihood is characterized by local optima embedded in high-dimensional flat plateaus \citep{brea2019}. Thus, larger neural networks lead to a multimodal target density with symmetric modes for MCMC. Seeking parameter symmetries in neural networks can lead to a variety of NP-hard problems \citep{ensign2017}. Moreover, symmetries in neural networks pose identifiability and associated inferential challenges in Bayesian inference, but they also provide opportunities to develop inferential methods with reduced computational cost \citep{hu2019} or with improved predictive performance \citep{moore2016}.
Empirical evidence from stochastic optimization simulations suggests that removing weight symmetries has a negative effect on prediction accuracy in smaller and shallower convolutional neural networks (CNNs), but has no effect on prediction accuracy in larger and deeper CNNs \citep{maddison2015}. Imposing constraints on neural network weights is one way of removing symmetries, leading to better mixing for MCMC \citep{sen2020}. More generally, exploitation of weight symmetries provides scope for scalable Bayesian inference in deep learning by reducing the measure or dimension of parameter space. Bayesian inference in subspaces of parameter space for deep learning has been proposed before \citep{izmailov2020}. Lack of identifiability is not unique to neural networks. For instance, the likelihood of mixture models is invariant under relabelling of the mixture components, a condition known as the \textit{label switching} problem \citep{stephens2000}. The high-dimensional parameter space of neural networks is another source of non-identifiability. A necessary condition for identifiability is that the number of data points must be larger than the number of parameters. This is one reason why big datasets are required for training neural networks. \subsection{Prior specification} Parameter priors have been used for generating Bayesian smoothing or regularization effects. For instance, \citet{freitas1999} develops sequential Monte Carlo methods with smoothing priors for MLPs and \citet{williams1995} introduces Bayesian regularization and pruning for neural networks via a Laplace prior. When parameter prior specification for a neural network is not driven by smoothing or regularization, the question becomes how to choose the prior. The choice of parameter prior for a neural network is crucial in that it affects the parameter posterior \citep{lee2004}, and consequently the posterior predictive distribution \citep{lee2005}. Neural networks are commonly applied to big data.
For large amounts of data, practitioners may not have intuition about the relationship between input and output variables. Furthermore, it is an open research question how to interpret neural network weights and biases. As a priori knowledge about big datasets and about neural network parameters is typically not available, \textit{prior elicitation} from experts is not applicable to neural networks. It seems logical to choose a prior that reflects a priori ignorance about the parameters. A constant-valued prior is a possible candidate, with the caveat of being improper for unbounded parameter spaces, such as $\mathbb{R}^n$. However, for neural networks, an \textit{improper prior} can result in an improper parameter posterior \citep{lee2005}. Typically, a \textit{truncated flat prior} for neural networks is sufficient for ensuring a valid parameter posterior \citep{lee2005}. At the same time, the choice of truncation bounds depends on weight symmetry and consequently on the allocation of equivalent points in the parameter space. \citet{lee2003} proposes a \textit{restricted flat prior} for feedforward neural networks by bounding some of the parameters and by imposing constraints that guarantee layer-wise linear independence between activations, while \citet{lee2000} shows that this prior is asymptotically consistent for the posterior. Moreover, \citet{lee2003} demonstrates that such a restricted flat prior enables more effective MCMC sampling in comparison to alternative prior choices. \textit{Objective prior specification} is an area of statistics that has not infiltrated Bayesian inference for neural networks. Alternative ideas for constructing objective priors with minimal effect on posterior inference exist in the statistics literature. 
For example, \textit{Jeffreys priors} are invariant to differentiable one-to-one transformations of the parameters \citep{jeffreys1962}, \textit{maximum entropy priors} maximize the Shannon entropy and therefore provide the least possible information \citep{jaynes1968}, \textit{reference priors} maximize the expected Kullback-Leibler divergence from the associated posteriors and in that sense are the least informative priors \citep{bernardo1979}, and \textit{penalised complexity priors} penalise the complexity induced by deviating from a simpler base model \citep{simpson2017}. To the best of the authors' knowledge, there are only two published lines of research on objective priors for neural networks; a theoretical derivation of Jeffreys and reference priors for feedforward neural networks by \citet{lee2007}, and an approximation of reference priors via Monte Carlo sampling of a differentiable non-centered parameterization of MLPs and CNNs by \citet{nalisnick2018}. More broadly, research on prior specification for BNNs has been published recently \citep{pearce2019, vladimirova2019}. For a more thorough review of prior specification for BNNs, see \citet{lee2005}. \subsection{Convergence} MCMC convergence depends on the target density, namely on its multi-modality and level of smoothness. An MLP with fewer than a hundred parameters fitted to a non-linearly separable dataset makes convergence in fixed MCMC sampling time challenging (see subsection \ref{num_summaries}). Attaining MCMC convergence is not the only challenge. Assessing whether a finite sample from an MCMC algorithm represents an underlying target density can not be done with certainty \citep{cowles1996}. MCMC diagnostics can fail to detect the type of convergence failure they were designed to identify. Combinations of diagnostics are thus used in practice to evaluate MCMC convergence with reduced risk of false diagnosis. MCMC diagnostics were initially designed for asymptotically exact MCMC. 
Research activity on approximate MCMC has emerged recently. Minibatch MCMC methods (see subsection \ref{computational_cost}) are one class of approximate MCMC methods. Alternative approximate MCMC techniques without minibatching have been developed \citep{rudolf2018, chen2019} along with new approaches to quantify convergence \citep{chwialkowski2016}. \textit{Quantization} and \textit{discrepancy} are two notions pertinent to approximate MCMC methods. The quantization of a target density $p$ by an empirical measure $\hat{p}$ provides an approximation to the target $p$ \citep{graf2007}, while the notion of discrepancy quantifies how well the empirical measure $\hat{p}$ approximates the target $p$ \citep{chen2019}. The \textit{kernel Stein discrepancy} (KSD) and the \textit{maximum mean discrepancy} (MMD) constitute two instances of discrepancy; for more details, see \citet{chen2019} and \citet{gretton2012}, respectively. \citet{rudolf2018} provide an alternative way of assessing the quality of approximation of a target density $p$ by an empirical measure $\hat{p}$ in the context of approximate MCMC using the notion of \textit{Wasserstein distance} between $p$ and $\hat{p}$. \section{Inferential framework overview} \label{inferential_overview} An overview of the inferential framework used in this paper follows, including the MLP model and its likelihood for classification, MCMC samplers for parameter estimation, MCMC diagnostics for assessing convergence and sampling effectiveness, and Bayesian marginalization for prediction. \subsection{The MLP model} \label{mlp_overview} MLPs have been chosen as a more tractable class of neural networks. CNNs are the most widely used deep learning models. 
However, even small CNNs, such as AlexNet \citep{krizhevsky2012}, SqueezeNet \citep{iandola2016}, Xception \citep{chollet2017}, MobileNet \citep{howard2017}, ShuffleNet \citep{zhang2018b}, EffNet \citep{freeman2018} or DCTI \citep{truong2018}, have a number of parameters at least two orders of magnitude higher, thus amplifying issues of computational complexity, model structure, weight symmetry, prior specification, posterior shape, MCMC convergence and sampling effectiveness. \subsubsection{Model definition.} An MLP is a feedforward neural network consisting of an input layer, one or more hidden layers and an output layer \citep{rosenblatt1958,minsky1988,hastie2016}. Let $\rho \ge 2$ be a natural number. Consider an index $j\in\{0,1,\dots,\rho\}$ indicating the layer, where $j=0$ refers to the input layer, $j=1,2,\dots,\rho-1$ to one of the $\rho-1$ hidden layers and $j=\rho$ to the output layer. Let $\kappa_{j}$ be the number of neurons in layer $j$ and use $\kappa_{0:\rho} = (\kappa_{0},\kappa_{1},\dots,\kappa_{\rho})$ as a shorthand for the sequence of neuron counts per layer. Under such notation, $\mbox{MLP}(\kappa_{0:\rho})$ refers to an MLP with $\rho-1$ hidden layers and $\kappa_j$ neurons at layer $j$. An $\mbox{MLP}(\kappa_{0:\rho})$ with $\rho-1\ge 1$ hidden layers and $\kappa_j$ neurons at layer $j$ is defined recursively as \begin{align} \label{mlp_g} g_{j}(x_{i},\theta_{1:j}) &= W_{j}h_{j-1}(x_{i},\theta_{1:j-1})+b_{j},\\ \label{mlp_h} h_{j}(x_{i}, \theta_{1:j}) &= \phi_{j}(g_{j}(x_{i},\theta_{1:j})), \end{align} for $j=1,2,\dots,\rho$. A data point $x_{i}\in\mathbb{R}^{\kappa_0}$ corresponds to the input layer $h_{0}(x_{i})=x_{i}$, yielding $g_{1}(x_{i}, \theta_{1})=W_{1}x_{i}+b_{1}$ in the first hidden layer. $W_{j}$ and $b_{j}$ are the respective weights and biases at layer $j=1,2,\dots,\rho,$ which constitute the parameters $\theta_{j} = (W_{j}, b_{j})$ at layer $j$.
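As an illustrative sketch (not code from the paper), the recursion \eqref{mlp_g}--\eqref{mlp_h} can be evaluated in a few lines of Python; the layer sizes, random parameter values and activation choices below are assumptions made purely for the example:

```python
import numpy as np

def mlp_forward(x, params, activations):
    """Evaluate the MLP output via the recursion
    g_j = W_j h_{j-1} + b_j,  h_j = phi_j(g_j),  with h_0 = x."""
    h = x
    for (W, b), phi in zip(params, activations):
        h = phi(W @ h + b)  # activation applied elementwise to g_j
    return h

# Hypothetical MLP(2, 2, 1): tanh hidden layer, sigmoid output layer.
rng = np.random.default_rng(0)
params = [(rng.standard_normal((2, 2)), rng.standard_normal(2)),
          (rng.standard_normal((1, 2)), rng.standard_normal(1))]
activations = [np.tanh, lambda g: 1.0 / (1.0 + np.exp(-g))]
out = mlp_forward(np.array([0.5, -1.0]), params, activations)
```

For this hypothetical $\mbox{MLP}(2,2,1)$, the parameter count is $n=\sum_{j=1}^{\rho}\kappa_{j}(\kappa_{j-1}+1)=2\cdot 3+1\cdot 3=9$, matching the dimension of the concatenated parameter vector $\theta$.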
The shorthand $\theta_{1:j} = (\theta_{1},\theta_{2},\dots,\theta_{j})$ denotes all weights and biases up to layer $j$. Functions $\phi_{j}$, known as \textit{activations}, are applied elementwise to their input $g_{j}$. The default recommended activation in neural networks is the rectified linear unit (ReLU), see for instance \citet{jarrett2009,nair2009,goodfellow2016}. Other activations are the ELU, leaky ReLU, tanh and sigmoid \citep{nwankpa2018}. If an activation is not present at layer $j$, then the identity function $\phi_{j}(g_{j})=g_{j}$ is used as $\phi_{j}$ in \eqref{mlp_h}. The weight matrix $W_{j}$ in \eqref{mlp_g} has $\kappa_{j}$ rows and $\kappa_{j-1}$ columns, while the vector $b_{j}$ of biases has length $\kappa_{j}$. Concatenating all $\theta_{j}$ across hidden and output layers gives a parameter vector $\theta=\theta_{1:\rho}\in\mathbb{R}^n$ of length $n=\sum_{j=1}^{\rho}\kappa_{j}(\kappa_{j-1}+1)$. To define $\theta$ uniquely, the convention of traversing weight matrix elements row-wise is adopted. Note that each of $g_{j}$ in \eqref{mlp_g} and $h_{j}$ in \eqref{mlp_h} has length $\kappa_{j}$. The notation $W_{j,k,l}$ is introduced to point to the $(k,l)$-th element of weight matrix $W_{j}$ at layer $j$. Analogously, $b_{j,k}$ points to the $k$-th coordinate of bias vector $b_{j}$ at layer $j$. \subsubsection{Likelihood for binary classification.} Consider $s$ samples $(x_{i}, y_{i}),~i=1,2,\dots,s,$ consisting of some input $x_{i}\in\mathbb{R}^{\kappa_0}$ and of a binary output $y_{i}\in\{0,1\}$. An $\mbox{MLP}(\kappa_0,\kappa_1,\dots,\kappa_{\rho}=1)$ with a single neuron in its output layer can be used for setting the likelihood function $L(y_{1:s}|x_{1:s},\theta)$ of labels $y_{1:s}=(y_{1},y_{2},\dots,y_{s})$ given the input $x_{1:s}=(x_{1},x_{2},\dots,x_{s})$ and MLP parameters $\theta$. Firstly, the \textit{sigmoid activation function} $\phi_{\rho}(g_{\rho})=1 / (1+\exp{(-g_{\rho})})$ is applied at the output layer of the MLP.
So, the \textit{event probabilities} $\mbox{Pr}(y_{i}=1 | x_{i},\theta)$ are set to \begin{equation} \label{bc_mlp_probs} \begin{split} \mbox{Pr}(y_{i}=1 | x_{i},\theta) & = h_{\rho}(x_{i},\theta) =\phi_{\rho}(g_{\rho}(x_{i},\theta))\\ & =\frac{1}{1+\exp{(-g_{\rho}(x_{i},\theta))}}. \end{split} \end{equation} Assuming that the labels are outcomes of $s$ independent draws from Bernoulli probability mass functions with event probabilities given by \eqref{bc_mlp_probs}, the likelihood becomes \begin{equation} \label{bc_mlp_lik} L(y_{1:s} | x_{1:s},\theta) = \prod_{i=1}^{s}\prod_{k=1}^{2} (z_{\rho,k}(x_{i},\theta))^{\mathbbm{1}_{\{y_{i}=k-1\}}}. \end{equation} $z_{\rho,k}(x_{i},\theta),~k=1,2,$ denotes the $k$-th coordinate of the vector $z_{\rho}(x_{i},\theta)= (1-h_{\rho}(x_{i},\theta),h_{\rho}(x_{i},\theta))$ of event probabilities for sample $i=1,2,\dots,s$. Furthermore, $\mathbbm{1}$ denotes the indicator function, that is $\mathbbm{1}_{\{y_{i}=k-1\}}=1$ if $y_{i}=k-1$, and $\mathbbm{1}_{\{y_{i}=k-1\}}=0$ otherwise. The log-likelihood follows as \begin{equation} \label{bc_mlp_loglik} \ell(y_{1:s}|x_{1:s},\theta) = \sum_{\substack{i=1 \\ k=1}}^{\substack{s \\2 }} \mathbbm{1}_{\{y_{i}=k-1\}} \log{(z_{\rho,k}(x_{i},\theta))}. \end{equation} The negative value of log-likelihood \eqref{bc_mlp_loglik} is known as the \textit{binary cross entropy} (BCE). To infer the parameters $\theta$ of $\mbox{MLP}(\kappa_0,\kappa_1,\dots,\kappa_{\rho}=1)$, the binary cross entropy or a different loss function is minimized using stochastic optimization methods, such as stochastic gradient descent (SGD). \subsubsection{Likelihood for multiclass classification.} Let $y_i\in\{1,2,\dots,\kappa_{\rho}\}$ be an output variable, which can take $\kappa_{\rho}\ge 2$ values. Moreover, consider an $\mbox{MLP}(\kappa_{0:\rho})$ with $\kappa_{\rho}$ neurons in its output layer.
Initially, a \textit{softmax activation function} $\phi_{\rho}(g_{\rho})=\exp{(g_{\rho})} / \sum_{k=1}^{\kappa_{\rho}}\exp{(g_{\rho,k})}$ is applied at the output layer of the MLP, where $g_{\rho,k}$ denotes the $k$-th coordinate of the $\kappa_{\rho}$-length vector $g_{\rho}$. Thus, the event probabilities $\mbox{Pr}(y_{i}=k|x_{i},\theta)$ are \begin{equation} \label{mc_mlp_probs} \begin{split} \mbox{Pr}(y_{i}=k|x_{i},\theta) & = h_{\rho,k}(x_{i},\theta)\\ & = \phi_{\rho}(g_{\rho,k}(x_{i},\theta))\\ & = \frac{\exp{(g_{\rho,k}(x_{i},\theta))}} {\sum_{r=1}^{\kappa_{\rho}}\exp{(g_{\rho,r}(x_{i}, \theta))}}. \end{split} \end{equation} $h_{\rho,k}(x_{i},\theta)$ denotes the $k$-th coordinate of the MLP output $h_{\rho}(x_{i},\theta)$. It is assumed that the labels are outcomes of $s$ independent draws from categorical probability mass functions with event probabilities given by \eqref{mc_mlp_probs}, so the likelihood is \begin{equation} \label{mc_mlp_lik} L(y_{1:s}|x_{1:s},\theta)= \prod_{i=1}^{s}\prod_{k=1}^{\kappa_{\rho}} (h_{\rho,k}(x_{i},\theta))^{\mathbbm{1}_{\{y_{i}=k\}}}. \end{equation} The log-likelihood follows as \begin{equation} \label{mc_mlp_loglik} \ell(y_{1:s}|x_{1:s},\theta) = \sum_{\substack{i=1 \\ k=1}}^{\substack{s \\\kappa_{\rho}}} \mathbbm{1}_{\{y_{i}=k\}} \log{(h_{\rho,k}(x_{i},\theta))}. \end{equation} The negative value of log-likelihood \eqref{mc_mlp_loglik} is known as \textit{cross entropy}, and it is used as loss function for stochastic optimization in multiclass classification MLPs. An $\mbox{MLP}(\kappa_0,\kappa_1,\dots,\kappa_{\rho}=2)$ with two neurons at the output layer, event probabilities given by softmax activation \eqref{mc_mlp_probs} and log-likelihood \eqref{mc_mlp_loglik} can be used for binary classification.
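As a hedged sketch (the function names and data layout are assumptions of this example, not the paper's code), the multiclass log-likelihood \eqref{mc_mlp_loglik} can be computed from the output-layer pre-activations $g_{\rho}$; with $\kappa_{\rho}=2$ the same code recovers the Bernoulli parameterization discussed above:

```python
import numpy as np

def softmax(g):
    e = np.exp(g - np.max(g))  # subtract max for numerical stability
    return e / e.sum()

def multiclass_log_lik(y, logits):
    """Multiclass log-likelihood: sum_i log h_{rho, y_i}(x_i, theta), where
    h_rho = softmax(g_rho) and labels y_i lie in {1, ..., kappa_rho}.
    Its negative value is the cross entropy."""
    return sum(np.log(softmax(g)[k - 1]) for k, g in zip(y, logits))

# One sample with kappa_rho = 3 and equal logits: log-likelihood log(1/3).
ll = multiclass_log_lik([1], [np.zeros(3)])  # ≈ -1.0986
```

The indicator sums in \eqref{mc_mlp_loglik} reduce to picking out the coordinate of the true label, which is what the indexing `[k - 1]` does here.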
Such a formulation is an alternative to an $\mbox{MLP}(\kappa_0,\kappa_1,\dots,\kappa_{\rho}=1)$ with one neuron at the output layer, event probabilities given by sigmoid activation \eqref{bc_mlp_probs} and log-likelihood \eqref{bc_mlp_loglik}. The difference between the two MLP models is the parameterization of event probabilities, since a categorical distribution with $\kappa_{\rho}=2$ levels otherwise coincides with a Bernoulli distribution. \subsection{MCMC sampling for parameter estimation} \label{mcmc_algorithms} Interest is in sampling from the parameter posterior $p(\theta | x_{1:s}, y_{1:s}) \propto L(y_{1:s} | x_{1:s}, \theta) \pi (\theta) $ of a neural network given the neural network likelihood $L(y_{1:s} | x_{1:s}, \theta)$ and parameter prior $\pi(\theta)$. For MLPs, the likelihood $L(y_{1:s} | x_{1:s}, \theta)$ for binary and multiclass classification is provided by \eqref{bc_mlp_lik} and \eqref{mc_mlp_lik}, respectively. The parameter posterior $p(\theta | x_{1:s}, y_{1:s})$ is alternatively denoted by $p(\theta | D_{1:s})$ for brevity. $D_{1:s}=(x_{1:s}, y_{1:s})$ is a dataset of size $s$ consisting of input $x_{1:s}$ and output $y_{1:s}$. This subsection provides an introduction to the MCMC algorithms and MCMC diagnostics used in the examples of section \ref{examples}. Three MCMC algorithms are outlined, namely Metropolis-Hastings, Hamiltonian Monte Carlo, and power posterior sampling. Two MCMC diagnostics are described, the multivariate potential scale reduction factor (PSRF) and the multivariate effective sample size (ESS). \subsubsection{Metropolis-Hastings algorithm.} One of the most general algorithms for sampling from a posterior $p(\theta | D_{1:s})$ is the Metropolis-Hastings (MH) algorithm \citep{metropolis1953, hastings1970}. 
Given the current state $\theta$, the MH algorithm initially samples a state $\theta^{*}$ from a \textit{proposal density} $g_{\theta}$ and subsequently accepts the proposed state $\theta^{*}$ with probability \begin{equation*} \left\{ \begin{array}{ll} \min \left\{ \frac{p(\theta^{*}|D_{1:s}) g_{\theta^{*}} (\theta)} {p(\theta|D_{1:s})g_{\theta} (\theta^{*})} , 1 \right\} & \mbox{if } p(\theta|D_{1:s})g_{\theta} (\theta^{*})>0, \\ 1 & \mbox{otherwise}. \end{array} \right. \end{equation*} Typically, a normal proposal density $g_{\theta}=\mathcal{N}(\theta, \Lambda)$ with a constant covariance matrix $\Lambda$ is used. For such a normal $g_{\theta}$, the acceptance probability simplifies to $\min \left\{p(\theta^{*}|D_{1:s})/p(\theta|D_{1:s}), 1\right\}$, yielding the so called \textit{random walk Metropolis} algorithm. \subsubsection{Hamiltonian Monte Carlo.} Hamiltonian Monte Carlo (HMC) draws samples from an augmented parameter space via Gibbs steps, by computing a trajectory in the parameter space according to Hamiltonian dynamics. For a more detailed review of HMC, see \citet{neal2011}. \subsubsection{Power posterior sampling.} Power posterior (PP) sampling by \citet{friel2008} is a population Monte Carlo algorithm. It involves $m+1$ chains drawn from tempered versions $p^{t_i}(\theta |D_{1:s})$ of a target posterior $p(\theta |D_{1:s})$ for a \textit{temperature schedule} $t_i\in[0,1],~i\in\{0,1,\dots,m\}$, where $t_m=1$. At each iteration, the state of each chain is updated using an MCMC sampler associated with that chain and subsequently states between pairs of chains are swapped according to an MH algorithm. For the $i$-th chain, a sample $j$ is drawn from a probability mass function $p_i$ with probability $p_i(j)$, in order to determine the pair $(i, j)$ for a possible swap. 
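A minimal sketch of the random walk Metropolis sampler described above, assuming a spherical normal proposal $\mathcal{N}(\theta,\mbox{step}^2 I)$ and a user-supplied log-posterior; the names and default values here are placeholders, not the samplers used in the paper:

```python
import numpy as np

def random_walk_metropolis(log_post, theta0, n_iter, step=1.0, seed=0):
    """Random walk Metropolis: propose theta* ~ N(theta, step^2 I) and accept
    with probability min{p(theta*|D)/p(theta|D), 1} (symmetric proposal)."""
    rng = np.random.default_rng(seed)
    theta = np.atleast_1d(np.asarray(theta0, dtype=float))
    lp = log_post(theta)
    chain = np.empty((n_iter, theta.size))
    for t in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # accept/reject on log scale
            theta, lp = prop, lp_prop
        chain[t] = theta  # on rejection, the current state is repeated
    return chain

# Example target: standard normal log-density (up to an additive constant).
chain = random_walk_metropolis(lambda th: -0.5 * th @ th, np.zeros(1), 10000)
```

Working with log-densities avoids numerical underflow of the posterior ratio, which matters once the target is a neural network posterior over many data points.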
\textit{Power posteriors} $p^{t_i}(\theta|D_{1:s}),~t_i < t_m,$ are smooth approximations of the target density $p^{t_m}(\theta |D_{1:s})=p(\theta |D_{1:s})$, facilitating exploration of the parameter space via state transitions between chains of $p^{t_i}(\theta|D_{1:s})$ and of $p(\theta |D_{1:s})$. In this paper, a categorical probability mass function $p_i$ is used in PP sampling for determining candidate pairs of chains for state swaps (see \ref{appendix_categorical}). \subsubsection{Multivariate PSRF.} \label{mcmc_diagnostics} PSRF, commonly denoted by $\hat{R}$, is an MCMC diagnostic of convergence conceived by \citet{gelman1992} and extended to its multivariate version by \citet{brooks1998}. This paper uses the multivariate PSRF by \citet{brooks1998}, which provides a single-number summary of convergence across the $n$ dimensions of a parameter, requiring a Monte Carlo covariance matrix estimator for the parameter. To acquire the multivariate PSRF, the \textit{multivariate initial monotone sequence estimator} (MINSE) of Monte Carlo covariance is employed \citep{dai2017}. In a Bayesian setting, the MINSE estimates the covariance matrix of a parameter posterior $p(\theta | D_{1:s})$. To compute PSRF, several independent Markov chains are simulated. \citet{gelman2004} recommend terminating MCMC sampling as soon as $\hat{R}<1.1$. More recently, \citet{vats2018a} make an argument based on ESS that a cut-off of $1.1$ for $\hat{R}$ is too high to estimate a Monte Carlo mean with reasonable uncertainty. \citet{vehtari2019} recommend simulating at least $m=4$ chains to compute $\hat{R}$ and using a threshold of $\hat{R} < 1.01$. \subsubsection{Multivariate ESS.} The ESS of an estimate obtained from a Markov chain realization is interpreted as the number of independent samples that provide an estimate with variance equal to the variance of the estimate obtained from the Markov chain realization. 
For a more extensive treatment entailing univariate approaches to ESS, see \citet{vats2018b,gong2016,kass1998}. $\hat{R}$ and its variants can fail to diagnose poor mixing of a Markov chain, whereas low values of ESS are an indicator of poor mixing. It is thus recommended to check both $\hat{R}$ and ESS \citep{vehtari2019}. For a theoretical treatment of the relation between $\hat{R}$ and ESS, see \citet{vats2018a}. Univariate ESS pertains to a single coordinate of an $n$-dimensional parameter. \citet{vats2019} introduce a multivariate version of ESS, which provides a single-number summary of sampling effectiveness across the $n$ dimensions of a parameter. Similarly to multivariate PSRF \citep{brooks1998}, multivariate ESS \citep{vats2019} requires a Monte Carlo covariance matrix estimator for the parameter. Given a single Markov chain realization of length $v$ for an $n$-dimensional parameter, \citet{vats2019} define multivariate ESS as \begin{equation*} \hat{S}=v \left( \frac{\det{(E)}}{\det{(C)}} \right)^{1/n}. \end{equation*} $\det{(E)}$ is the determinant of the empirical covariance matrix $E$ and $\det{(C)}$ is the determinant of a Monte Carlo covariance matrix estimate $C$ for the chain. In this paper, the multivariate ESS by \citet{vats2019} is used, setting $C$ to be the MINSE for the chain. \subsection{Bayesian marginalization for prediction} \label{subsec_bm} This subsection briefly reviews the notion of posterior predictive distribution based on Bayesian marginalization, posterior predictive distribution approximation via Monte Carlo integration, and associated binary and multiclass classification. \subsubsection{Posterior predictive distribution.} Consider a set $D_{1:s}=(x_{1:s}, y_{1:s})$ of $s$ training data points and a single test data point $(x, y)$ consisting of some test input $x$ and test output $y$. 
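The multivariate ESS formula of \citet{vats2019} given above can be sketched as follows; for simplicity this example substitutes a nonoverlapping batch-means covariance estimator for the MINSE used in the paper, which is an assumption of the sketch:

```python
import numpy as np

def multivariate_ess(chain, n_batches=25):
    """Multivariate ESS: S_hat = v * (det(E) / det(C))^(1/n), where E is the
    empirical covariance of the v-by-n chain and C is a Monte Carlo covariance
    estimate, here obtained by nonoverlapping batch means (not the MINSE)."""
    v, n = chain.shape
    E = np.cov(chain, rowvar=False)
    b = v // n_batches                              # batch length
    means = chain[:b * n_batches].reshape(n_batches, b, n).mean(axis=1)
    C = b * np.cov(means, rowvar=False)             # batch-means covariance
    return v * (np.linalg.det(E) / np.linalg.det(C)) ** (1.0 / n)
```

For an independent chain the two covariance estimates roughly coincide, so the multivariate ESS is close to the chain length $v$; autocorrelation inflates $\det{(C)}$ and shrinks the ESS.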
Integrating out the parameters $\theta$ of a model fitted to $D_{1:s}$ yields the posterior predictive distribution \begin{equation} \label{pred_posterior} \underbrace{p(y|x,D_{1:s})}_\text{ \parbox{2cm}{\centering Predictive\\[-4pt]distribution}} = \int \underbrace{p(y | x, \theta)}_\text{Likelihood} \underbrace{p(\theta | D_{1:s})}_\text{ \parbox{2cm}{\centering Parameter\\[-4pt]posterior}} d\theta . \end{equation} \ref{appendix_predictive} provides a derivation of \eqref{pred_posterior}. \subsubsection{Monte Carlo approximation.} \eqref{pred_posterior} can be written as \begin{equation} \label{pred_posterior_expectation} p(y|x,D_{1:s})= \mbox{E}_{\theta | D_{1:s}} [p(y | x, \theta)] . \end{equation} \eqref{pred_posterior_expectation} states the posterior predictive distribution $p(y|x,D_{1:s})$ as an expectation of the likelihood $p(y | x, \theta)$ evaluated at the test output $y$ with respect to the parameter posterior $p(\theta | D_{1:s})$ learnt from the training set $D_{1:s}$. The expectation in \eqref{pred_posterior_expectation} can be approximated via Monte Carlo integration. More specifically, a Monte Carlo approximation of the posterior predictive distribution is given by \begin{equation} \label{pred_posterior_approx} p(y|x,D_{1:s}) \simeq \frac{1}{v} \sum_{k=1}^{v} p(y | x, \omega_{k}) . \end{equation} The average in \eqref{pred_posterior_approx} involves evaluations of the likelihood across $v$ iterations $\omega_k,~k=1,2,\dots,v,$ of a Markov chain realization $\omega_{1:v}$ obtained from the parameter posterior $p ({\theta | D_{1:s}})$. \subsubsection{Classification rule.} In the case of binary classification, the prediction $\hat{y}$ for the test label $y\in\{0,1\}$ is \begin{equation} \label{bin_class_pred} \hat{y} = \left\{ \begin{array}{ll} 1 & \mbox{if } p(y=1|x,D_{1:s}) \geq 0.5, \\ 0 & \mbox{otherwise}. \end{array} \right.
\end{equation} For multiclass classification, the prediction label $\hat{y}$ for the test label $y\in\{1,2,\dots,\kappa_{\rho}\}$ is \begin{equation} \label{multi_class_pred} \hat{y} = \argmax_{y} { \{p(y|x,D_{1:s})\} }. \end{equation} The classification rules \eqref{bin_class_pred} and \eqref{multi_class_pred} for binary and multiclass classification maximize the posterior predictive distribution. This way, predictions are made based on the Bayesian principle. The uncertainty of predictions is quantified, since the posterior predictive probability $p(y|x,D_{1:s})$ of each predicted label $\hat{y}$ is available. \section{Examples} \label{examples} Four examples of Bayesian inference for MLPs based on MCMC are presented. A different dataset is used for each example. The four datasets entail simulated noisy data from the exclusive-or (XOR) function, and observations collected from Pima Indians, penguins and hawks. Section \ref{mlp_datasets} introduces the four datasets. Each of the four datasets is split into a training and a test set for parameter inference and for predictions, respectively. MLPs with one neuron in the output layer are fitted to the noisy XOR and Pima datasets to perform binary classification, while MLPs with three neurons in the output layer are fitted to the penguin and hawk datasets to perform multiclass classification with three classes. Table \ref{data_models_table} shows the training and test sample sizes of the four datasets, and the fitted MLP models with their associated number $n$ of parameters. 
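The Monte Carlo approximation of the posterior predictive distribution and the argmax classification rule above can be condensed as follows; the Bernoulli likelihood is a toy stand-in, and all names are illustrative.

```python
import math

def posterior_predictive(likelihood, y, x, draws):
    """Average the likelihood over posterior draws omega_1, ..., omega_v."""
    return sum(likelihood(y, x, omega) for omega in draws) / len(draws)

def predict(likelihood, x, draws, labels):
    """Argmax classification rule over the candidate labels."""
    probs = {y: posterior_predictive(likelihood, y, x, draws) for y in labels}
    return max(probs, key=probs.get), probs

def bernoulli_likelihood(y, x, theta):
    """Toy likelihood: p(y = 1 | x, theta) is a logistic function of theta * x."""
    p1 = 1.0 / (1.0 + math.exp(-theta * x))
    return p1 if y == 1 else 1.0 - p1
```

With draws $\{0.5, 1.0, 1.5\}$ and input $x=2$, the averaged probability of $y=1$ exceeds $0.5$, so the rule predicts label $1$ and the averaged probability itself quantifies the predictive uncertainty.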
\begin{table}[t] \centering \caption{Training and test sample sizes of the four datasets of section \ref{examples}, architectures of fitted MLP models and associated number $n$ of MLP parameters.} \label{data_models_table} \begin{tabular}{|l|r|r|l|r|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{Dataset}} & \multicolumn{2}{c|}{Sample size} & \multicolumn{1}{c|}{\multirow{2}{*}{Model}} & \multicolumn{1}{c|}{\multirow{2}{*}{$n$}}\\ \cline{2-3} & \multicolumn{1}{c|}{Training} & \multicolumn{1}{c|}{Test} & & \\ \hline Noisy XOR & $500$ & $120$ & $\mbox{MLP}(2,2,1)$ & $9$ \\ \hline Pima & $262$ & $130$ & $\mbox{MLP}(8,2,2,1)$ & $27$ \\ \hline Penguins & $223$ & $110$ & $\mbox{MLP}(6,2,2,3)$ & $29$ \\ \hline Hawks & $596$ & $295$ & $\mbox{MLP}(6,2,2,3)$ & $29$ \\ \hline \end{tabular} \end{table} In the examples, samples are drawn via MCMC from the unnormalized log-posterior \begin{equation*} \log{(p(\theta | x_{1:s}, y_{1:s}))}= \ell (y_{1:s} | x_{1:s}, \theta) +\log{(\pi(\theta))} \end{equation*} of MLP parameters. The log-likelihood $\ell (y_{1:s} | x_{1:s}, \theta)$ for binary or multiclass classification corresponds to \eqref{bc_mlp_loglik} or \eqref{mc_mlp_loglik}. $\log{(\pi(\theta))}$ is the log-prior of MLP parameters. \subsection{Datasets} \label{mlp_datasets} An introduction to the four datasets used in this paper follows. The simulated noisy XOR dataset does not contain missing values, while the real datasets for Pima, penguins and hawks come with missing values. Data points containing missing values in the chosen variables have been dropped from the three real datasets. All \textit{features} (input variables) in the three real datasets have been standardized. The four datasets, in their final form used for inference and prediction, are available at \url{https://github.com/papamarkou/bnn_mcmc_examples}. 
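The decomposition of the unnormalized log-posterior into log-likelihood and log-prior can be sketched minimally as follows, where \texttt{log\_likelihood} stands for any callable implementing $\ell$, such as \eqref{bc_mlp_loglik} or \eqref{mc_mlp_loglik}; the names are illustrative.

```python
import math

def log_prior(theta, variance=10.0):
    """Log-density of the N(0, variance * I) prior at theta."""
    n = len(theta)
    return -0.5 * n * math.log(2.0 * math.pi * variance) \
        - 0.5 * sum(t * t for t in theta) / variance

def log_posterior(theta, log_likelihood, data):
    """Unnormalized log-posterior: log-likelihood plus log-prior."""
    return log_likelihood(theta, data) + log_prior(theta)
```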
\subsubsection{XOR dataset.} The so-called XOR function $f:\{0,1\}\times \{0,1\}\rightarrow \{0,1\}$ returns $1$ if exactly one of its binary input values is equal to $1$, otherwise it returns $0$. The $s=4$ data points defining XOR are $(x_{1}, y_{1})=((0,0), 0)$, $(x_{2}, y_{2})=((0,1), 1)$, $(x_{3}, y_{3})=((1,0), 1)$ and $(x_{4}, y_{4})=((1,1), 0)$. A perceptron without a hidden layer cannot learn the XOR function \citep{minsky1988}. On the other hand, an $\mbox{MLP}(2, 2, 1)$ with a single hidden layer of two neurons can learn the XOR function \citep{goodfellow2016}. An $\mbox{MLP}(2, 2, 1)$ has a parameter vector $\theta$ of length $n=9$, as $W_{1},b_{1},W_{2}$ and $b_{2}$ have respective dimensions $2\cdot 2, 2\cdot 1, 2\cdot 1$ and $1\cdot 1$. Since the number $s=4$ of data points defined by the exact XOR function is less than the number $n=9$ of parameters in the fitted $\mbox{MLP}(2, 2, 1)$, the parameters cannot be fully identified. To circumvent the lack of identifiability arising from the limited number of data points, a larger dataset is simulated by introducing a noisy version of XOR. Firstly, consider the auxiliary function $\psi : [-c, 1+c]\times [-c, 1+c]\rightarrow \{0,1\}\times \{0,1\}$ given by \begin{align*} \psi(u-c,u-c) &= (0, 0),\\ \psi(u-c,u+c) &= (0, 1),\\ \psi(u+c,u-c) &= (1, 0),\\ \psi(u+c,u+c) &= (1, 1). \end{align*} $\psi$ is presented in parametrized form, in terms of a constant $c\in (0.5, 1)$ and a uniformly distributed random variable $u\sim\mathcal{U}(0,1)$. The noisy XOR function is then defined as the function composition $f\circ\psi$. A training and a test set of noisy XOR points, generated using $f\circ\psi$ and $c=0.55$, are shown in figure \ref{noisy_xor_scatterplots}. $125$ and $30$ noisy XOR points per exact XOR point $(x_i,y_i),~i=1,2,3,4,$ are contained in the training and test set, respectively.
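Reading the construction of $\psi$ in reverse yields a generator of noisy XOR points: choose an exact XOR input $(a, b)$, draw a shared $u\sim\mathcal{U}(0,1)$, offset each coordinate by $\pm c$ according to the corresponding bit, and attach the exact label $f(a, b)$. The sketch below follows this reading; the function name is illustrative.

```python
import random

def noisy_xor_point(c=0.55, rng=random):
    """Generate one noisy XOR data point ((x1, x2), y).

    Inverts the auxiliary map psi: an exact XOR input (a, b) is chosen,
    each coordinate is set to u + c or u - c according to its bit,
    and the exact XOR label y = a XOR b is attached.
    """
    u = rng.random()  # u ~ U(0, 1), shared by both coordinates
    a, b = rng.choice([(0, 0), (0, 1), (1, 0), (1, 1)])
    x1 = u + c if a == 1 else u - c
    x2 = u + c if b == 1 else u - c
    return (x1, x2), a ^ b
```

Drawing $125$ points per exact XOR input gives a training set of size $500$, as used in the experiments.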
So, the training and test sample sizes are $500$ and $120$, as reported in table \ref{data_models_table} and as visualized in figure \ref{noisy_xor_scatterplots}. In figure \ref{noisy_xor_scatterplots}, the training and test sets of noisy XOR points consist of two input variables $(u\pm 0.55, u\pm 0.55)\in [-0.55, 1.55]\times [-0.55, 1.55]$ and of one output variable $f\circ\psi (u\pm 0.55, u\pm 0.55)\in\{0,1\}$. The four colours classify noisy XOR input $(u\pm 0.55, u\pm 0.55)$ with respect to the corresponding exact XOR input $\psi(u\pm 0.55, u\pm 0.55)\in\{(0,0),(0,1),(1,0),(1,1)\}$; the two different shapes classify noisy XOR output, with circle and triangle corresponding to $0$ and $1$. \subsubsection{Pima dataset.} The Pima dataset contains observations taken from female patients of Pima Indian heritage. The binary output variable indicates whether or not a patient has diabetes. Eight features are used as diagnostics of diabetes, namely the number of pregnancies, plasma glucose concentration, diastolic blood pressure, triceps skinfold thickness, insulin level, body mass index, diabetes pedigree function and age. For more information about the Pima dataset, see \citet{smith1988}. The original data, prior to removal of missing values and feature standardization, are available as the \texttt{PimaIndiansDiabetes2} data frame of the \texttt{mlbench} \texttt{R} package. \subsubsection{Penguin dataset.} The penguin dataset consists of body measurements for three penguin species observed on three islands in the Palmer Archipelago, Antarctica. Ad\'{e}lie, Chinstrap and Gentoo penguins are the three observed species. Four body measurements per penguin are taken, specifically body mass, flipper length, bill length and bill depth. The four body measurements, sex and location (island) make up a total of six features utilized for deducing the species to which a penguin belongs. Thus, the penguin species is used as output variable. 
\citet{horst2020} provide more details about the penguin dataset. In their original form, prior to data filtering, the data are available at \url{https://github.com/allisonhorst/palmerpenguins}. \subsubsection{Hawk dataset.} The hawk dataset is composed of observations for three hawk species collected from Lake MacBride near Iowa City, Iowa. Cooper's, red-tailed and sharp-shinned hawks are the three observed species. Age, wing length, body weight, culmen length, hallux length and tail length are the six hawk features employed in this paper for deducing the species to which a hawk belongs. So, the hawk species is used as output variable. \citet{cannon2019} mention that Emeritus Professor Bob Black at Cornell College shared the hawk dataset publicly. The original data, prior to data filtering, are available as the \texttt{Hawks} data frame of the \texttt{Stat2Data} \texttt{R} package. \subsection{Experimental configuration} \label{subsec_experim_conf} To fully specify the MLP models of table \ref{data_models_table}, their activations are listed. A sigmoid activation function is applied at each hidden layer of each MLP. Additionally, a sigmoid activation function is applied at the output layer of $\mbox{MLP}(2, 2, 1)$ and of $\mbox{MLP}(8, 2, 2, 1)$, conforming to log-likelihood \eqref{bc_mlp_loglik} for binary classification. A softmax activation function is applied at the output layer of $\mbox{MLP}(6, 2, 2, 3)$, in accordance with log-likelihood \eqref{mc_mlp_loglik} for multiclass classification. The same $\mbox{MLP}(6, 2, 2, 3)$ model is fitted to the penguin and hawk datasets. A normal prior $\pi(\theta)=\mathcal{N}(0, 10 I)$ is adopted for the parameters $\theta\in\mathbb{R}^n$ of each MLP model shown in table \ref{data_models_table}. An isotropic covariance matrix $10I$ assigns relatively high prior variance, equal to $10$, to each coordinate of $\theta$, thus setting empirically a seemingly non-informative prior. 
MH and HMC are run for each of the four examples of table \ref{data_models_table}. PP sampling incurs higher computational cost than MH and HMC; for this reason, PP sampling is run only for noisy XOR. Ten power posteriors are employed for PP sampling, and MH is used for within-chain moves. On the basis of pilot runs, the PP temperature schedule is set to $t_i=1,~i=0,1,\dots,9$; this implies that each power posterior is set to be the parameter posterior and consequently between-chain moves are made among ten chains realized from the parameter posterior. Empirical hyperparameter tuning for MH, HMC and PP is carried out. The chosen MH proposal variance, HMC number of leapfrog steps and HMC leapfrog step size for each example can be found in \url{https://github.com/papamarkou/bnn_mcmc_examples}. $m=10$ Markov chains are realized for each combination of training dataset shown in table \ref{data_models_table} and of MCMC sampler. $110,000$ iterations are run per chain realization, $10,000$ of which are discarded as burn-in. Thereby, $v=100,000$ post-burnin iterations are retained per chain realization. MINSE computation, required by multivariate PSRF and multivariate ESS, is carried out using $v=100,000$ post-burnin iterations per realized chain. The multivariate PSRF for each dataset-sampler setting is computed across the $m=10$ realized chains for the setting. On the other hand, the multivariate ESS is computed for each realized chain, and the mean across $m=10$ ESSs is reported for each dataset-sampler setting. Monte Carlo approximations of posterior predictive distributions are computed according to \eqref{pred_posterior_approx} for each data point of each test set. To reduce the computational cost, the last $v=10,000$ iterations of each realized chain are used in \eqref{pred_posterior_approx}. Predictions for binary and multiclass classification are made using \eqref{bin_class_pred} and \eqref{multi_class_pred}, respectively. 
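Among the samplers above, MH admits the shortest self-contained illustration. The following random-walk MH sketch with isotropic normal proposals is a simplification for exposition, not the tuned implementation used in the experiments; the names are illustrative.

```python
import math
import random

def random_walk_mh(log_target, theta0, n_iter, proposal_sd, rng=random):
    """Random-walk Metropolis-Hastings targeting an unnormalized log-density."""
    theta = list(theta0)
    log_p = log_target(theta)
    chain = []
    for _ in range(n_iter):
        proposal = [t + rng.gauss(0.0, proposal_sd) for t in theta]
        log_p_prop = log_target(proposal)
        # Accept with probability min(1, p(proposal) / p(theta)).
        if math.log(rng.random()) < log_p_prop - log_p:
            theta, log_p = proposal, log_p_prop
        chain.append(list(theta))
    return chain
```

Discarding an initial stretch of the chain as burnin, as done in the experiments, mitigates the dependence on the starting point $\theta_0$.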
Given a single chain realization from an MCMC sampler, predictions are made for every point in a test set; the predictive accuracy is then computed as the number of correct predictions over the total number of points in the test set. Subsequently, the mean of predictive accuracies across the $m=10$ chains realized from the sampler is reported for the test set. \subsection{Numerical summaries} \label{num_summaries} Table \ref{num_summaries_table} shows numerical summaries for each set of $m=10$ Markov chains realized by an MCMC sampler for a dataset-MLP combination of table \ref{data_models_table}. Multivariate PSRF and multivariate ESS diagnose the capacity of MCMC sampling to perform parameter inference. Predictive accuracy via Bayesian marginalization \eqref{pred_posterior_approx}, based on classification rules \eqref{bin_class_pred} and \eqref{multi_class_pred} for binary and multiclass classification, demonstrates the predictive performance of MCMC sampling. The last column of table \ref{num_summaries_table} displays the predictive accuracy via \eqref{pred_posterior_approx} with samples $\omega_k,~k=1,2,\dots,v,$ drawn from the prior $\pi(\theta)=\mathcal{N}(0,10I)$, thus providing an approximation of the expected posterior predictive probability \begin{equation} \label{pred_posterior_wrt_prior} \mbox{E}_{\theta}[p(y|x,\theta)]= \int p(y | x, \theta) \pi(\theta) d\theta \end{equation} with respect to prior $\pi(\theta)$. \begin{table}[t] \centering \caption{Multivariate PSRF, multivariate ESS and predictive accuracy for each set of ten Markov chains realized by an MCMC sampler for a dataset-MLP combination. Predictive accuracies based on samples from the prior are reported as model-agnostic baselines. 
} \label{num_summaries_table} \begin{tabular}{|l|r|r|r|r|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{Sampler}} & \multicolumn{1}{c|}{\multirow{2}{*}{PSRF}} & \multicolumn{1}{c|}{\multirow{2}{*}{ESS}} & \multicolumn{2}{c|}{Accuracy} \\ \cline{4-5} & & & \multicolumn{1}{c|}{MCMC} & \multicolumn{1}{c|}{Prior} \\ \hline \multicolumn{5}{|c|}{Noisy XOR, $\mbox{MLP}(2, 2, 1)$} \\ \hline MH & 1.2057 & 540 & 75.92 & \multirow{3}{*}{48.33} \\ HMC & 13.8689 & 25448 & 74.75 & \\ PP & 2.2885 & 4083 & 87.58 & \\ \hline \multicolumn{5}{|c|}{Pima, $\mbox{MLP}(8, 2, 2, 1)$} \\ \hline MH & 1.0007 & 93 & 79.31 & \multirow{2}{*}{51.69} \\ HMC & 1.0001 & 718 & 80.38 & \\ \hline \multicolumn{5}{|c|}{Penguins, $\mbox{MLP}(6, 2, 2, 3)$} \\ \hline MH & 1.0229 & 217 & 100.00 & \multirow{2}{*}{36.45} \\ HMC & 1.6082 & 3127 & 100.00 & \\ \hline \multicolumn{5}{|c|}{Hawks, $\mbox{MLP}(6, 2, 2, 3)$} \\ \hline MH & 1.0319 & 168 & 97.97 & \multirow{2}{*}{28.85} \\ HMC & 1.4421 & 1838 & 98.03 & \\ \hline \end{tabular} \end{table} PSRF is above $1.01$ \citep{vehtari2019}, indicating lack of convergence, in three out of four datasets. ESS is low considering the post-burnin length of $v=100,000$ of each chain realization, indicating slow mixing. MCMC sampling for Pima data is the only case of attaining PSRF less than $1.01$, yet the ESS values for Pima are the lowest among the four datasets. Overall, simultaneous low PSRF and high ESS are not reached in any of the examples. The predictive accuracy is high in multiclass classification, despite the lack of convergence and slow mixing. Bayesian marginalization based on HMC samples yields $100\%$ and $98.03\%$ predictive accuracy on the penguin and hawk test datasets, despite the PSRF values of $1.6082$ and $1.4421$ on the penguin and hawk training datasets. PP sampling for the binary classification problem of noisy XOR leads to higher predictive accuracy ($87.58\%$) than MH ($75.92\%$) or HMC ($74.75\%$). 
The $87.58\%$ predictive accuracy is attained by PP sampling despite the associated PSRF value of $2.2885$. Bayesian marginalization based on MCMC sampling outperforms prior beliefs or random guesses in terms of predictive inference, despite MCMC diagnostic failures. For instance, Bayesian marginalization via non-converged HMC chain realizations yields $74.75\%$, $100\%$ and $98.03\%$ predictive accuracy on the noisy XOR, penguin and hawk datasets. Approximating the posterior predictive distribution with samples from the parameter prior yields $48.33\%$, $36.45\%$ and $28.85\%$ predictive accuracy on the same datasets. It is noted that $48.33\%$ is close to a $50/50$ random guess for binary classification, while $36.45\%$ and $28.85\%$ are close to a $1/3$ random guess for multiclass classification with three classes. \subsection{Visual summaries for parameters} Visual summaries for MLP parameters are presented in this subsection. In particular, Markov chain traceplots and a comparison between MCMC sampling and ensemble training are displayed. \subsubsection{Non-converged chain realizations.} Figure \ref{traceplots} shows chain traceplots of four parameters of MLP models introduced in table \ref{data_models_table}. These traceplots visually demonstrate entrapment in local modes, mode switching and more generally lack of convergence. \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{traceplots.png} \caption{Markov chain traceplots of four parameter coordinates of MLP models introduced in table \ref{data_models_table}. The vertical dotted lines indicate the end of burnin.} \label{traceplots} \end{figure} All $110,000$ iterations per realized chain, which include burnin, are shown in the traceplots of figure \ref{traceplots}. The vertical dotted lines delineate the first $10,000$ burnin iterations. Two realized MH chains for parameter $\theta_{8}$ of the $\mbox{MLP}(2, 2, 1)$ model fitted to the noisy XOR training data are plotted. 
The traces in orange and in blue gravitate during burnin towards modes in the vicinity of $8$ and $-8$, respectively, and then get entrapped for the entire simulation time in these modes. Parameter $\theta_{8}$ corresponds to a weight connecting a neuron in the hidden layer with the neuron of the output layer of $\mbox{MLP}(2, 2, 1)$. The two realized chains for $\theta_{8}$ explore two regions symmetric about zero associated with symmetries of weight $\theta_{8}$. Two realized MH chains for parameter $\theta_{18}$ of the $\mbox{MLP}(6, 2, 2, 3)$ model fitted to the penguin training data are plotted, one shown in orange and one in blue. Each of these two traces initially explores a mode, transits to a seemingly symmetric mode about halfway through the simulation time (post-burnin) and explores the symmetric mode in the second half of the simulation. One HMC chain traceplot for parameter $\theta_{23}$ and one HMC chain traceplot for parameter $\theta_{26}$ of the $\mbox{MLP}(6, 2, 2, 3)$ model fitted to the penguin and hawk training data, respectively, are shown. The traces of these two parameters exhibit similar behaviour, each of them switching between two symmetric regions about zero. Switching between symmetric modes, as seen in the displayed traceplots, manifests weight symmetries. These traceplots exemplify how computational time is wasted during MCMC to explore equivariant parameter posterior modes of a neural network \citep{nalisnick2018}. Consequently, the realized chains do not converge. \subsubsection{MCMC sampling vs ensemble training.} An illustrative comparison between MCMC sampling and ensemble training for neural networks follows. To this end, the same noisy XOR training data and the same $\mbox{MLP}(2, 2, 1)$ model, previously used for MCMC sampling, are used for ensemble training.
\begin{figure}[t] \begin{subfigure}{.491\textwidth} \centering \includegraphics[width=1\linewidth]{noisy_xor_scatterplots.png} \caption{Noisy XOR training set (left) and test set (right).} \label{noisy_xor_scatterplots} \end{subfigure}\\ \begin{subfigure}{.491\textwidth} \centering \includegraphics[width=1\linewidth]{noisy_xor_parallel_coords_plot.png} \caption{$100$ SGD solutions from training $\mbox{MLP}(2, 2, 1)$.} \label{noisy_xor_parallel_coords_plot} \end{subfigure}\\ \begin{subfigure}{.491\textwidth} \centering \includegraphics[width=1\linewidth]{noisy_xor_marginal_par_posteriors.png} \caption{Histograms of parameter $\theta_{3}$ of $\mbox{MLP}(2, 2, 1)$.} \label{noisy_xor_marginal_par_posteriors} \end{subfigure} \caption{Comparison between MH sampling and ensemble training of an $\mbox{MLP}(2, 2, 1)$ model fitted to noisy XOR data. SGD is used for ensemble training. Each accepted SGD solution has predictive accuracy above $85\%$ on the noisy XOR test set.} \label{vis_summaries_mcmc_vs_optim} \end{figure} To recap, the noisy XOR dataset is introduced in subsection \ref{mlp_datasets} and is displayed in figure \ref{noisy_xor_scatterplots}; a sigmoid activation function is applied to the hidden and output layer of $\mbox{MLP}(2, 2, 1)$, and the BCE loss function is employed, which is the negative value of log-likelihood \eqref{bc_mlp_loglik}. Ensemble learning is conducted by training the $\mbox{MLP}(2, 2, 1)$ model on the noisy XOR training set multiple times. At each training session, SGD is used for minimizing the BCE loss. SGD is initialized by drawing a sample from $\pi(\theta)=\mathcal{N}(0, 10I)$, which is the same density used as prior for MCMC sampling. $2,000$ epochs are run per training session, with a batch size of $50$ and a learning rate of $0.002$. The SGD solution from the training session is accepted if its predictive accuracy on the noisy XOR test set is above $85\%$, otherwise it is rejected. 
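The accept/reject structure of the training sessions just described can be sketched on a toy one-parameter logistic model; the MLP, the noisy XOR data and the hyperparameters are replaced by hypothetical stand-ins, so only the overall procedure mirrors the text.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sgd_session(xs, ys, epochs=200, lr=0.1, rng=random):
    """One training session: SGD on the BCE loss of a one-parameter logistic model."""
    w = rng.gauss(0.0, math.sqrt(10.0))  # initialization drawn from the N(0, 10) prior
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            w -= lr * (sigmoid(w * x) - y) * x  # BCE gradient for this data point
    return w

def accuracy(w, xs, ys):
    return sum((sigmoid(w * x) >= 0.5) == bool(y) for x, y in zip(xs, ys)) / len(xs)

def ensemble(xs, ys, n_solutions=5, threshold=0.85, max_tries=100, rng=random):
    """Collect SGD solutions whose test accuracy exceeds the threshold."""
    solutions = []
    for _ in range(max_tries):
        if len(solutions) == n_solutions:
            break
        w = sgd_session(xs, ys, rng=rng)
        if accuracy(w, xs, ys) > threshold:
            solutions.append(w)
    return solutions
```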
Ensemble learning is terminated as soon as $1,000$ SGD solutions with the required level of accuracy are obtained. Figure \ref{noisy_xor_parallel_coords_plot} shows a parallel coordinates plot of $100$ SGD solutions. Each line connects the nine coordinates of a solution. Overlaying lines of different SGD solutions visualizes parameter symmetries. Figure \ref{noisy_xor_marginal_par_posteriors} displays histograms associated with parameter $\theta_{3}$ of $\mbox{MLP}(2, 2, 1)$. The green histogram represents all $1,000$ SGD solutions for $\theta_{3}$ obtained from ensemble training based on noisy XOR. These $1,000$ solutions cluster in two regions approximately symmetric about zero. The orange histogram belongs to one of ten realized MH chains for $\theta_{3}$ based on noisy XOR. This realized chain is entrapped in a local mode in the vicinity of $5$, where the orange histogram concentrates its mass. The overlaid green and orange histograms show that MH sampling explores a region of the marginal posterior of $\theta_{3}$ also explored by ensemble training. The blue histogram in figure \ref{noisy_xor_marginal_par_posteriors} comes from a chain realization for $\theta_{3}$ using MH sampling to apply $\mbox{MLP}(2, 2, 1)$ to the four exact XOR data points. The pink line in figure \ref{noisy_xor_marginal_par_posteriors} shows the marginal prior $\pi(\theta_{3})=\mathcal{N}(0,\sigma^2=10)$. Four data points are not sufficient for learning, given that $\mbox{MLP}(2, 2, 1)$ has nine parameters. For this reason, the blue histogram coincides with the pink line; the marginal posterior $p(\theta_{3})$ obtained from exact XOR via MH sampling matches the marginal prior $\pi (\theta_{3})$. \subsection{Visual summaries for predictions} Visual summaries for MLP predictions and for MLP posterior predictive probabilities are presented in this section. MLP posterior predictive probabilities are visually shown to quantify predictive uncertainty in classification.
\subsubsection{Predictive accuracy.} Figure \ref{pred_boxplots} shows boxplots of predictive accuracies, hereinafter referred to as accuracies, for the examples introduced in table \ref{data_models_table}. Each boxplot summarizes $m=10$ accuracies associated with the ten chains realized per sampler for a test set. Accuracy computation is based on Bayesian marginalization, as outlined in subsections \ref{subsec_bm} and \ref{subsec_experim_conf}. Horizontal red lines represent accuracy medians. Figure \ref{pred_boxplots} and table \ref{data_models_table} provide complementary summaries, as they present respective quartiles and means of accuracies across chains per sampler. \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{pred_boxplots.png} \caption{Boxplots of predictive accuracies for the examples introduced in table \ref{data_models_table}. Each boxplot summarizes $m=10$ predictive accuracies associated with the ten chains realized by an MCMC sampler for a test set.} \label{pred_boxplots} \end{figure} Boxplot medians show high accuracy on the penguin and hawk test sets. Moreover, narrow boxplots indicate accuracies with small variation on the penguin and hawk test sets. Thereby, Bayesian marginalization based on non-converged chain realizations attains high accuracy with small variability on the two multiclass classification examples. Figure \ref{pred_boxplots} also displays boxplots of accuracies based on expected posterior predictive distribution approximation \eqref{pred_posterior_wrt_prior} with respect to the prior. For all four test sets and regardless of Markov chain convergence, Bayesian marginalization outperforms agnostic prior-based baseline \eqref{pred_posterior_wrt_prior}. The PP boxplot has more elevated median and is narrower than its MH and HMC counterparts for the noisy XOR test set. This implies that PP sampling attains higher accuracy with smaller variation than MH and HMC sampling on the noisy XOR test set. 
\subsubsection{Uncertainty quantification on a grid.} Figure \ref{pred_heatmaps} visualizes heatmaps of the ground truth and of posterior predictive distribution approximations for noisy XOR. More specifically, the posterior predictive probability $p(y=1 | (x_1, x_2), D_{1:500})$ is approximated at the centre $(x_1, x_2)$ of each square cell of a $22\times 22$ grid in $[-0.5, 1.5]\times [-0.5, 1.5]$. $D_{1:500}$ refers to the noisy XOR training dataset of size $s=500$ introduced in subsection \ref{mlp_datasets}. \eqref{pred_posterior_approx} is used for approximating $p(y=1 | (x_1, x_2), D_{1:500})$. Markov chain realizations previously acquired via MCMC sampling of $\mbox{MLP}(2, 2, 1)$ parameters (subsection \ref{num_summaries}), using the noisy XOR training dataset $D_{1:500}$, are passed to \eqref{pred_posterior_approx}. \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{pred_heatmaps.png} \caption{Heatmaps of ground truth and of posterior predictive probabilities $p(y=1 | (x_1, x_2), D_{1:500})=c$ on a grid of noisy XOR features $(x_1, x_2)$. The heatmap colour palette represents values of $c$. The ground truth heatmap visualizes true labels, while the other three heatmaps use approximate Bayesian marginalization based on HMC and PP chain realizations.} \label{pred_heatmaps} \end{figure} The approximation $p(y=1 | (x_1, x_2), D_{1:500})=c$ at the centre $(x_1, x_2)$ of a square cell determines the colour of the cell in figure \ref{pred_heatmaps}. If $c$ is closer to $1$, $0$, or $0.5$, the cell is plotted with a shade of red, blue or white, respectively. So, darker shades of red indicate that $y=1$ with higher certainty, darker shades of blue indicate that $y=0$ with higher certainty, and shades of white indicate high uncertainty about the binary label of noisy XOR. Two posterior predictive distribution approximations based on two HMC chain realizations learn different regions of the exact posterior predictive distribution.
Each of the two HMC chain realizations uncovers about half of the ground truth of grid labels, while remaining highly uncertain for the other half of grid labels. Moreover, both HMC chain realizations exhibit higher uncertainty closer to the decision boundaries of the ground truth. These decision boundaries are the vertical straight line $x_1=0.5$ and the horizontal straight line $x_2=0.5$. A posterior predictive distribution approximation based on a PP chain realization is displayed. PP sampling uncovers larger regions of the ground truth of grid labels than HMC sampling in the considered grid of noisy XOR features $(x_1, x_2)$. Although HMC and PP samples do not converge to the parameter posterior of $\mbox{MLP}(2, 2, 1)$, approximate Bayesian marginalization using these samples predicts a subset of noisy XOR labels. \subsubsection{Uncertainty quantification on a test set.} Figures \ref{vis_summaries_uq_noisy_xor} and \ref{vis_summaries_uq_hawks} show approximations of posterior predictive probabilities for a binary classification (noisy XOR) and a multiclass classification (hawks) example. Two posterior predictive probabilities are interpreted contextually in each example to quantify predictive uncertainty. \begin{figure}[t] \begin{subfigure}{.491\textwidth} \centering \includegraphics[width=0.5\linewidth]{noisy_xor_test_scatterplot.png} \caption{Scatterplot of noisy XOR features $(x_1, x_2)$.} \label{noisy_xor_scatter_uq} \end{subfigure}\\ \begin{subfigure}{.491\textwidth} \centering \includegraphics[width=1\linewidth]{noisy_xor_pred_posterior_plot_on_test.png} \caption{Posterior predictive probabilities for noisy XOR.} \label{noisy_xor_pred_posterior_uq_test} \end{subfigure} \caption{Quantification of uncertainty in predictions for the noisy XOR test set.
Approximate Bayesian marginalization via MH sampling is used for computing posterior predictive probabilities.} \label{vis_summaries_uq_noisy_xor} \end{figure} Figure \ref{noisy_xor_scatter_uq} visualizes the noisy XOR test set of subsection \ref{mlp_datasets}. This is the same test set shown in figure \ref{noisy_xor_scatterplots}, but with test points coloured according to their labels. Figure \ref{noisy_xor_pred_posterior_uq_test} shows the posterior predictive probability $p(y=c | (x_1, x_2), D_{1:500})$ of true label $c\in\{0,1\}$ for each noisy XOR test point $((x_1, x_2), y=c)$ given noisy XOR training set $D_{1:500}$ of subsection \ref{mlp_datasets}. The posterior probabilities $p(y=c | (x_1, x_2), D_{1:500})$ of predicting true class $c$ are ordered within class $c$. Moreover, each $p(y=c | (x_1, x_2), D_{1:500})$ is coloured as red or pale green depending on whether the resulting prediction is correct or not. One of the ten MH chain realizations for $\mbox{MLP}(2, 2, 1)$ parameter inference from noisy XOR data is used for approximating $p(y=c | (x_1, x_2), D_{1:500})$ via \eqref{pred_posterior_approx} and for making predictions via \eqref{bin_class_pred}. Two points in the noisy XOR test set are marked in figure \ref{vis_summaries_uq_noisy_xor} using a square and a rhombus. These two points have the same true label $c=1$. Given posterior predictive probabilities $0.5269$ and $0.9750$ for the rhombus and square-shaped test points, the label $c=1$ is correctly predicted for both points. However, the rhombus-shaped point is closer to the decision boundary $x_2=0.5$ than the square-shaped point, so classifying the former entails higher uncertainty. As $0.5269 < 0.9750$, Bayesian marginalization quantifies the increased predictive uncertainty associated with the rhombus-shaped point despite using a non-converged MH chain realization. 
Figure \ref{hawks_scatter_uq} shows a scatterplot of weight against tail length for the hawk test set of subsection \ref{mlp_datasets}. Blue, red and green test points belong to Cooper's, red-tailed and sharp-shinned hawk classes. Figure \ref{hawks_pred_posterior_uq_test} shows the posterior predictive probabilities $p(y=c|x, D_{1:596})$ for a subset of $100$ hawk test points, where $c\in\{ \mbox{Cooper's}, \mbox{red-tailed}, \mbox{sharp-shinned}\}$ denotes the true label of test point $(x, y=c)$ and $D_{1:596}$ denotes the hawk training set of subsection \ref{mlp_datasets}. These posterior predictive probabilities are shown ordered within each class, and are coloured red or pale green depending on whether they yield correct or wrong predictions. One of the ten MH chain realizations for $\mbox{MLP}(6, 2, 2, 3)$ parameter inference is used for approximating $p(y=c | x, D_{1:596})$ via \eqref{pred_posterior_approx} and for making predictions via \eqref{multi_class_pred}. \begin{figure}[t] \begin{subfigure}{.491\textwidth} \centering \includegraphics[width=1\linewidth]{hawks_test_scatterplot.png} \caption{Scatterplot of hawks' weight against tail length.} \label{hawks_scatter_uq} \end{subfigure}\\ \begin{subfigure}{.491\textwidth} \centering \includegraphics[width=1\linewidth]{hawks_pred_posterior_plot_on_test.png} \caption{Posterior predictive probabilities for hawks.} \label{hawks_pred_posterior_uq_test} \end{subfigure} \caption{Quantification of uncertainty in predictions for the hawk test set. Bayesian marginalization via MH sampling is used for approximating posterior predictive probabilities.} \label{vis_summaries_uq_hawks} \end{figure} Two points in the hawk test set are marked in figure \ref{vis_summaries_uq_hawks} using a square and a rhombus. Each of these two points represents weight and tail length measurements from a red-tailed hawk. The red-tailed hawk class is correctly predicted for both points. 
The square-shaped observation belongs to the main cluster of red-tailed hawks in figure \ref{hawks_scatter_uq} and it is predicted with high posterior predictive probability ($0.9961$). On the other hand, the rhombus-shaped observation, which falls in the cluster of Cooper's hawks, is correctly predicted with a lower posterior predictive probability ($0.5271$). Bayesian marginalization provides approximate posterior predictive probabilities that signify the level of uncertainty in predictions despite using a non-converged MH chain realization. \subsection{Source code} The source code for this paper is split into three \texttt{Python} packages, namely \texttt{eeyore}, \texttt{kanga} and \texttt{bnn\_mcmc\_examples}. \texttt{eeyore} implements MCMC algorithms for Bayesian neural networks. \texttt{kanga} implements MCMC diagnostics. \texttt{bnn\_mcmc\_examples} includes the examples of this paper. \texttt{eeyore} is available via \texttt{pip}, via \texttt{conda} and at \url{https://github.com/papamarkou/eeyore}. \texttt{eeyore} implements the MLP model, as defined by \eqref{mlp_g}-\eqref{mlp_h}, using \texttt{PyTorch}. An \texttt{MLP} class is set to be a subclass of \texttt{torch.nn.Module}, with log-likelihood \eqref{bc_mlp_loglik} for binary classification equal to the negative value of \texttt{torch.nn.BCELoss} and with log-likelihood \eqref{mc_mlp_loglik} for multiclass classification equal to the negative value of \texttt{torch.nn.CrossEntropyLoss}. Each MCMC algorithm takes an instance of \texttt{torch.nn.Module} as input, with the logarithm of the target density being a \texttt{log\_target} method of the instance. Log-target density gradients for HMC are computed via the automatic differentiation functionality of the \texttt{torch.autograd} package of \texttt{PyTorch}.
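The stated correspondence between binary cross-entropy and the negative binary-classification log-likelihood can be illustrated without \texttt{PyTorch}. In the sketch below, the labels and output probabilities are made up, and the loss is written with a sum reduction (rather than the mean reduction that \texttt{torch.nn.BCELoss} uses by default):

```python
import math

def bernoulli_loglik(y, p):
    # Binary-classification log-likelihood:
    # sum over points of y*log(p) + (1 - y)*log(1 - p).
    return sum(yi * math.log(pi) + (1 - yi) * math.log(1 - pi)
               for yi, pi in zip(y, p))

def bce_sum(y, p):
    # Binary cross-entropy with a sum reduction; term by term it is
    # the negated log-likelihood contribution of each point.
    return sum(-(yi * math.log(pi) + (1 - yi) * math.log(1 - pi))
               for yi, pi in zip(y, p))

y = [1, 0, 1, 1]          # hypothetical binary labels
p = [0.9, 0.2, 0.7, 0.6]  # hypothetical MLP output probabilities
loss = bce_sum(y, p)
loglik = bernoulli_loglik(y, p)
```

Minimizing the cross-entropy loss is therefore equivalent to maximizing the log-likelihood, which is what links loss-based SGD training to the log-target density used in MCMC.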
The \texttt{MLP} class of \texttt{eeyore} provides a \texttt{predictive\_posterior} method, which implements the posterior predictive distribution approximation \eqref{pred_posterior_approx} given a realized Markov chain. \texttt{kanga} is available via \texttt{pip}, via \texttt{conda} and at \url{https://github.com/papamarkou/kanga}. \texttt{kanga} is a collection of MCMC diagnostics implemented using \texttt{numpy}. MINSE, multivariate PSRF and multivariate ESS are available in \texttt{kanga}. \texttt{bnn\_mcmc\_examples} organizes the examples of this paper in a package. \texttt{bnn\_mcmc\_examples} relies on \texttt{eeyore} for MCMC simulations and posterior predictive distribution approximations, and on \texttt{kanga} for MCMC diagnostics. For more details, see \url{https://github.com/papamarkou/bnn_mcmc_examples}. Optimization via SGD for the example involving $\mbox{MLP}(2, 2, 1)$ and noisy XOR data (figure \ref{vis_summaries_mcmc_vs_optim}) is run using \texttt{PyTorch}. The loss function for optimization is computed via \texttt{torch.nn.BCELoss}. This loss function corresponds to the negative log-likelihood function \eqref{bc_mlp_loglik} involved in MCMC, thus linking the SGD and MH simulations shown in figure \ref{noisy_xor_marginal_par_posteriors}. SGD is coded manually instead of calling an optimization algorithm of the \texttt{torch.optim} package of \texttt{PyTorch}. Gradients for optimization are computed by calling the \texttt{backward} method. The SGD code related to the example of figure \ref{vis_summaries_mcmc_vs_optim} is available at \url{https://github.com/papamarkou/bnn_mcmc_examples}. \subsection{Hardware} \label{hardware} Pilot MCMC runs indicated an increase in speed by using CPUs instead of GPUs; accordingly, computations were performed on CPUs for this paper. The GPU slowdown is explained by the overhead of copying \texttt{PyTorch} tensors between GPUs and CPUs for small neural networks, such as the ones used in section \ref{examples}.
The computations for section \ref{examples} were run on Google Cloud Platform (GCP). Eleven virtual machine (VM) instances with virtual CPUs were created on GCP to spread the workload. Setting aside heterogeneities in hardware configuration between GCP VM instances and in order to provide an indication of computational cost, MCMC simulation runtimes are provided for the example of applying an $\mbox{MLP}(6, 2, 2, 3)$ to the hawk training dataset. The mean runtimes across the ten realized chains per MH and HMC are $0:42:54$ and $1:10:48$, respectively (runtimes are formatted as `hours : minutes : seconds'). \section{Predictive inference scope} \label{scope} Bayesian marginalization can attain high predictive accuracy and can quantify predictive uncertainty using non-converged MCMC samples of neural network parameters. Thus, MCMC sampling elicits some information about the parameter posterior of a neural network and conveys such information to the posterior predictive distribution. It is possible that MCMC sampling learns about the statistical dependence among neural network parameters. Along these lines, groups of weights or biases can be formed, with strong within-group and weak between-group dependence, to investigate scalable block Gibbs sampling methods for neural networks. Another possibility of MCMC developments for neural networks entails shifting attention from the parameter space to the output space, since the latter is related to predictive inference directly. Approximate MCMC methods that measure the discrepancy or Wasserstein distance between neural network predictions and output data \citep{rudolf2018} can be investigated. Bayesian marginalization provides scope to develop predictive inference for neural networks. For instance, Bayesian marginalization can be examined in the context of approximate MCMC sampling from a neural network parameter posterior, regardless of convergence to the parameter posterior and in analogy to the workings of this paper. 
Moreover, the idea of \citet{wilson2020} to interpret ensemble training of neural networks from a viewpoint of Bayesian marginalization can be studied using the notion of quantization of probability distributions. \section*{Appendix A: power posteriors} \sname{Appendix A} \label{appendix_categorical} This appendix provides the probability mass function $p_i(j)$ for proposing a chain $j$ for a possible swap of states between chains $i$ and $j$ in PP sampling. Assuming $m+1$ power posteriors, a neighbouring chain $j$ of $i$ is chosen randomly from the categorical probability mass function $p_i = \mathcal{C}(\alpha_i(0), \alpha_i(1),\dots, \alpha_i(i-1),\alpha_i(i+1),\dots,\alpha_i(m))$ with event probabilities \begin{equation*} \alpha_i(j) = \frac{\exp{(-\beta |j-i|)}}{\gamma_i}, \end{equation*} where $i\in\{0,1,\dots,m\}$, $j\in\{0,1,\dots,m\}\setminus\{i\}$, $\beta$ is a hyperparameter and $\gamma_i$ is a normalizing constant. The hyperparameter $\beta$ is typically set to $\beta=0.5$, a value which makes a jump to $j=i\pm 1$ roughly three times more likely than a jump to $j=i\pm 3$ \citep{friel2008}. The normalizing constant $\gamma_i$ is given by \begin{equation*} \gamma_i = \frac{\exp{(-\beta)}(2-\exp{(-\beta i)}-\exp{(-\beta (m-i))})}{1-\exp{(-\beta)}}. \end{equation*} Starting from the fact that the event probabilities $\alpha_i(j)$ add up to one, $\gamma_i$ is derived as follows: \begin{equation*} \begin{split} 1 &= \sum_{\substack{j=0\\ j\ne i}}^{m}\alpha_i (j)\Rightarrow\\ \gamma_i &= \sum_{j=0}^{i-1}\exp{(-\beta(i-j))}+\sum_{j=i+1}^{m}\exp{(-\beta (j-i))}\\ &= \sum_{j=1}^{i}\exp{(-\beta j)}+\sum_{j=1}^{m-i}\exp{(-\beta j)}\\ &= \exp{(-\beta)}\left(\frac{1-\exp{(-\beta i)}}{1-\exp{(-\beta)}}\right)\\ & +\exp{(-\beta)}\left(\frac{1-\exp{(-\beta (m-i))}}{1-\exp{(-\beta)}}\right)\\ &= \frac{\exp{(-\beta)}(2-\exp{(-\beta i)}-\exp{(-\beta (m-i))})}{1-\exp{(-\beta)}}. 
\end{split} \end{equation*} \section*{Appendix B: Predictive distribution} \sname{Appendix B} \label{appendix_predictive} This appendix derives the posterior predictive distribution \eqref{pred_posterior}. Applying the law of total probability and the definition of conditional probability yields \begin{equation*} \begin{split} p(y | x, D_{1:s}) & = \int p(y, \theta | x, D_{1:s}) d\theta\\ & = \int p(y | x, D_{1:s}, \theta) p(\theta | x, D_{1:s}) d\theta . \end{split} \end{equation*} $p(y | x, D_{1:s}, \theta)$ is equal to the likelihood $p(y | x, \theta)$: \begin{equation*} \begin{split} p(y | x, D_{1:s}, \theta) & = \frac{p(y, D_{1:s} | x, \theta)}{p(D_{1:s} | x, \theta)}\\ & = \frac{p(y | x, \theta) p(D_{1:s} | x, \theta)} {p(D_{1:s} | x, \theta)}\\ & = p(y | x, \theta) . \end{split} \end{equation*} Furthermore, $p(\theta | x, D_{1:s})$ is equal to the parameter posterior $p(\theta | D_{1:s})$: \begin{equation*} \begin{split} p(\theta | x, D_{1:s}) & = \frac{p(\theta, x, D_{1:s})}{p(x, D_{1:s})}\\ & = \frac{p(\theta, x | D_{1:s}) p(D_{1:s})} {p(x) p(D_{1:s})}\\ & = \frac{p(\theta | D_{1:s}) p(x | D_{1:s})} {p(x)}\\ & = \frac{p(\theta | D_{1:s}) p(x, D_{1:s})} {p(x)p(D_{1:s})}\\ & = \frac{p(\theta | D_{1:s}) p(x) p(D_{1:s})} {p(x)p(D_{1:s})}\\ & = p(\theta | D_{1:s}). \end{split} \end{equation*} \section*{Acknowledgements} Research sponsored by the Laboratory Directed Research and Development Program of Oak Ridge National Laboratory, managed by UT-Battelle, LLC, for the US Department of Energy under contract DE-AC05-00OR22725. The first author would like to thank Google for the provision of free credit on Google Cloud Platform. \input{main.bbl} \end{document}
\section{#1}\setcounter{equation}{0}} \def\Ai{\mathrm {Ai}} \def\Bi{\mathrm {Bi}} \renewcommand\O[1]{\mathcal{O}\left({#1}\right)} \renewcommand\i{\mathrm{i}} \renewcommand\Re{\hspace{0.2mm}\mathrm{Re}\hspace{0.2mm}} \renewcommand\Im{\hspace{0.2mm}\mathrm{Im}\hspace{0.2mm}} \begin{document} \title[Global asymptotics of Stieltjes-Wigert polynomials] {Global asymptotics of Stieltjes-Wigert polynomials} \author{Y. T. Li} \email{[email protected]} \address{Institute of Computational and Theoretical Studies, and Department of Mathematics, Hong Kong Baptist University, Kowloon, Hong Kong} \thanks{The work of Y. T. Li is supported by the HKBU Strategic Development Fund and a fund from HKBU (no.: 38-40-106)} \author{R. Wong} \email{[email protected]} \address{Liu Bie Ju Centre for Mathematical Sciences, City University of Hong Kong, Kowloon, Hong Kong} \maketitle \begin{abstract} Asymptotic formulas are derived for the Stieltjes-Wigert polynomials $S_n(z;q)$ in the complex plane as the degree $n$ grows to infinity. One formula holds in any disc centered at the origin, and the other holds outside any smaller disc centered at the origin; the two regions together cover the whole plane. In each region, the $q$-Airy function $A_q(z)$ is used as the approximant. For real $x>1/4$, a limiting relation is also established between the $q$-Airy function $A_q(x)$ and the ordinary Airy function $\Ai(x)$ as $q\to1$. \end{abstract} \section{Introduction} We first fix some notation. Let $k>0$ be a fixed number and \begin{equation}\label{eq:1} q=\exp\{-(2k^2)^{-1}\}. \end{equation} Note that $0<q<1$. The $q$-shifted factorial is given by $$ (a;q)_0=1,\qquad (a;q)_n=\prod_{j=0}^{n-1}(1-aq^j),\qquad n=1,2,\cdots.
$$ With this notation, the Stieltjes-Wigert polynomials \begin{equation}\label{eq:1.4} S_n(x;q)=\sum_{j=0}^n\frac{q^{j^2}}{(q;q)_j(q;q)_{n-j}}(-x)^j,\qquad n=0,1,2,\cdots, \end{equation} are orthogonal with respect to the weight function \begin{equation}\label{eq:1.5} w(x)=k\pi^{-\frac12}\exp\{-k^2\log^2x\} \end{equation} for $0<x<\infty$; see \cite[(18.27.18)]{NIST} and \cite[(3.27.1)]{KS}. It should be mentioned that the Stieltjes-Wigert polynomials belong to the indeterminate moment class, and the weight function in (\ref{eq:1.5}) is not unique; see~\cite{Christiansen}. One important property of the Stieltjes-Wigert polynomials is the symmetry relation \begin{equation}\label{eq:5} S_n(z;q)=(-zq^n)^n S_n\left(\frac{1}{zq^{2n}};q\right), \end{equation} which can be easily verified by changing the index $j$ to $n-j$ in the explicit expression given in (\ref{eq:1.4}). In some of the literature, the variable $x$ in (\ref{eq:1.4}) is replaced by $q^{\frac12}x$; see, for example, Szeg\"o~\cite{Szego}, Chihara~\cite{Chihara}, and Wang and Wong~\cite{WangWong}. The notation for the Stieltjes-Wigert polynomials used in these works is \begin{equation}\label{eq:SWp} p_n(x)=(-1)^nq^{n/2+1/4}\sqrt{(q;q)_n}S_n(q^\frac12 x;q). \end{equation} The Stieltjes-Wigert polynomials appear in random walks and in the random matrix formulation of Chern-Simons theory on Seifert manifolds; see \cite{BaikSuidan,DT}. The asymptotics of the Stieltjes-Wigert polynomials, as the degree tends to infinity, has been studied by several authors. In 1923, Wigert~\cite{Wigert} proved that the polynomials have the limiting behavior \begin{equation} \lim_{n\to\infty}(-1)^nq^{-n/2}p_n(x)=\frac{q^{1/4}}{\sqrt{(q;q)_\infty}}\sum_{k=0}^\infty (-1)^k\frac{q^{k^2+k/2}}{(q;q)_k}x^k, \end{equation} which can be put in terms of the $q$-Airy function \begin{equation}\label{eq:8} A_q(z)=\sum_{k=0}^\infty\frac{q^{k^2}}{(q;q)_k}(-z)^k.
\end{equation} This function appeared in the third identity on p.57 of Ramanujan's ``Lost Notebook''~\cite{Ramanujan}. (For this reason, it is also known as the Ramanujan function.) In terms of the $q$-Airy function, Wigert's result can be stated as \begin{equation}\label{eq:9} \lim_{n\to\infty}S_n(x;q)=\frac{1}{(q;q)_\infty}A_q(x). \end{equation} It is known that all zeros of $S_n(x;q)$ lie in the interval $(0,4q^{-2n})$; see~\cite{WangWong}. Hence, we introduce a new scale \begin{equation} z:=q^{-nt}u \end{equation} with $u\in\mathbb C\setminus\{0\}$ and $t\in\mathbb R$. The values of $t=0$ and $t=2$ can be regarded as the turning points of $S_n(q^{-nt}u;q)$. Taking into account the symmetry relation in (\ref{eq:5}), one may restrict oneself to the case $t\geq 1$; see~\cite[(1.4)]{WangXSWong2}. (However, in the present paper, we will not make this restriction.) The case $t=2$ has been studied by Ismail~\cite{Ismail2}, and he proved \begin{equation} \lim_{n\to\infty}q^{n^2(t-1)}(-u)^{-n}S_n(uq^{-nt};q)=\frac{1}{(q;q)_\infty}A_q\left(\frac{q^{n(t-2)}}{u}\right),\qquad t=2, \end{equation} uniformly on compact subsets of $\mathbb C\setminus\{0\}$; see \cite[Theorem 2.5]{Ismail2}. This result can in fact be derived directly from Wigert's result in (\ref{eq:9}) via the symmetry relation mentioned in (\ref{eq:5}). In~\cite{IsmailZhang}, Ismail and Zhang extended the validity of this result to $t\geq2$. For $1\leq t<2$, Ismail and Zhang~\cite{IsmailZhang} gave asymptotic formulas for these polynomials in terms of the theta-type function \begin{equation} \Theta_q(z)=\sum_{k=-\infty}^\infty q^{k^2}z^k, \end{equation} but in a very complicated manner. The result in~\cite{IsmailZhang} was then simplified by Wang and Wong~\cite{WangXSWong1}.
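The explicit sum (\ref{eq:1.4}) and the symmetry relation (\ref{eq:5}) are straightforward to check numerically. The following \texttt{Python} sketch (the degree, $q$ and $z$ are arbitrary choices) evaluates both sides of the symmetry relation $S_n(z;q)=(-zq^n)^n S_n(1/(zq^{2n});q)$ in double-precision arithmetic:

```python
def qpoch(a, q, n):
    # q-shifted factorial (a; q)_n
    out = 1.0
    for j in range(n):
        out *= 1.0 - a * q**j
    return out

def S(n, z, q):
    # Stieltjes-Wigert polynomial from the explicit sum (1.4)
    return sum(q**(j * j) / (qpoch(q, q, j) * qpoch(q, q, n - j)) * (-z)**j
               for j in range(n + 1))

# Arbitrary small-degree test of the symmetry relation
n, q, z = 8, 0.5, 1.3
lhs = S(n, z, q)
rhs = (-z * q**n)**n * S(n, 1.0 / (z * q**(2 * n)), q)
rel = abs(lhs - rhs) / abs(lhs)
```

Up to floating-point rounding, the two sides agree, exactly as the change of index $j\to n-j$ predicts.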
For instance, when $1\leq t<2$, Wang and Wong proved that \begin{equation} S_n(uq^{-nt};q)=\frac{(-u)^{n-m}q^{n^2(1-t)-m[n(2-t)-m]}}{(q;q)_n(q;q)_\infty} \left\{\Theta_q\left(\frac{q^{2m-n(2-t)}}{-u}\right)+\O{q^{n(l-\delta)}}\right\}, \end{equation} where $l=\frac12(2-t)$, $m=\lfloor nl\rfloor$ and $\delta>0$ is any small number; see~\cite[Corollary 2]{WangXSWong1}. Note that none of these results is valid in a neighborhood of $t=2$, one of the turning points. To resolve this issue, a uniform asymptotic formula was given by Wang and Wong in a second paper~\cite{WangXSWong2}. For $z:=uq^{-nt}$ with $t>2(1-\delta)$, $\delta$ being any small positive constant, they showed that \begin{equation}\label{eq:14} S_n(z;q)=\frac{(-z)^nq^{n^2}}{(q;q)_n}\big[A_{q,n}(q^{-2n}/z)+r_n(z)\big], \end{equation} where $r_n(z)$ is the remainder and $A_{q,n}(z)$ is the $q$-Airy polynomial obtained by truncating the infinite series in (\ref{eq:8}) at $k=n$, \textit{i.e.}, $$ A_{q,n}(z)=\sum_{k=0}^n \frac{q^{k^2}}{(q;q)_k}(-z)^k. $$ In this paper, we shall show that the $q$-Airy polynomial $A_{q,n}(z)$ in equation (\ref{eq:14}) can be replaced by the $q$-Airy function $A_q(z)$. Moreover, we shall show that the resulting formula is global. More precisely, we have the following result. \begin{theorem}\label{thm:1} Let $z:=uq^{-nt}$ with $-\infty<t<2$, $u\in\mathbb C$ and $|u|\leq R$, where $R>0$ is any fixed positive number. We have \begin{equation}\label{eq:3.13} S_{n}(z;q)=\frac{1}{(q;q)_n}\left[A_q(z)+r_n(z)\right], \end{equation} where the remainder satisfies \begin{equation}\label{eq:3.14} |r_n(z)|\leq \left[\frac{q^{n(1-\sigma)}}{1-q}+\frac{2}{1-q}\left(\frac{1}{2}\right)^{\lfloor n\sigma\rfloor}\right]A_q(-|z|) \end{equation} with $\sigma=\max\{\frac12,\frac12+\frac t4\}$. Let $z:=uq^{-nt}$ with $0<t<\infty$, $u\in\mathbb C$ and $|u|\geq 1/R$, where $R>0$ is any fixed positive number.
We have \begin{equation}\label{eq:3.15} S_{n}(z;q)=\frac{(-z)^nq^{n^2}}{(q;q)_n}\left[A_q(q^{-2n}/z)+r_n(z)\right], \end{equation} where the remainder satisfies \begin{equation}\label{eq:3.16} |r_n(z)|\leq \left[\frac{q^{n(\delta-1)}}{1-q}+\frac{2}{1-q}\left(\frac{1}{2}\right)^{\lfloor n(2-\delta)\rfloor}\right]A_q\big(-q^{-2n}/|z|\big) \end{equation} with $\delta=\min\{\frac32,1+\frac t4\}$. \end{theorem} Note that the quantities inside the square brackets in (\ref{eq:3.14}) and (\ref{eq:3.16}) are exponentially small. Furthermore, the sizes of the $q$-Airy functions in these two equations for large values of their arguments are about the same as the leading terms in their corresponding approximation formulas in (\ref{eq:3.13}) and (\ref{eq:3.15}); cf. (\ref{eq:lim2}) below. The investigations mentioned above all started with the explicit expression of $S_n(x;q)$ given in (\ref{eq:1.4}). In~\cite{WangWong}, Wang and Wong used a different method, namely, the Riemann-Hilbert approach, to get a uniform asymptotic expansion of the Stieltjes-Wigert polynomials $p_n(x)$ in terms of Airy functions. However, the main result in \cite{WangWong} needs a correction: the parameter $k$ in (\ref{eq:1}) and (\ref{eq:1.5}) should depend on $n$ and tend to infinity as $n$ tends to infinity. In other words, the result in~\cite{WangWong} holds with a varying weight and the number $q$ in (\ref{eq:1}) is required to approach 1. More precisely, if $k\sim n^\sigma$ as $n\to\infty$ and $0<\sigma<\frac12$, then all formulas in \cite{WangWong} remain valid; if $\sigma\geq\frac12$, then some equations need to be amended, but the main result still holds; see also Baik and Suidan~\cite{BaikSuidan}. Note that the results in Theorem~\ref{thm:1} hold even when $q$ tends to 1.
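Theorem~\ref{thm:1} is also easy to test in floating-point arithmetic for moderate degrees. The following \texttt{Python} sketch (with the arbitrary choices $n=40$, $q=0.5$, $u=1.3$ and $t=0$) compares $(q;q)_nS_n(z;q)$ with $A_q(z)$ and checks the actual remainder against the bound (\ref{eq:3.14}):

```python
import math

def qpoch(a, q, n):
    # q-shifted factorial (a; q)_n
    out = 1.0
    for j in range(n):
        out *= 1.0 - a * q**j
    return out

def S(n, z, q):
    # Stieltjes-Wigert polynomial from the explicit sum (1.4)
    return sum(q**(j * j) / (qpoch(q, q, j) * qpoch(q, q, n - j)) * (-z)**j
               for j in range(n + 1))

def A(z, q, terms=60):
    # q-Airy function (eq. for A_q), truncated; the tail is negligible here
    return sum(q**(k * k) / qpoch(q, q, k) * (-z)**k for k in range(terms))

n, q, z = 40, 0.5, 1.3           # the t = 0 case of Theorem 1
lhs = qpoch(q, q, n) * S(n, z, q)
approx = A(z, q)
sigma = 0.5                      # max(1/2, 1/2 + t/4) with t = 0
bound = (q**(n * (1 - sigma)) / (1 - q)
         + 2 / (1 - q) * 0.5**math.floor(n * sigma)) * A(-abs(z), q)
```

In such runs the actual remainder sits far below the bound, consistent with the remark that the quantity in square brackets in (\ref{eq:3.14}) is exponentially small.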
Comparing the results in Theorem~\ref{thm:1} and in \cite{WangWong}, we will establish the following limiting relation between the $q$-Airy function and the ordinary Airy function: \begin{theorem}\label{thm:2} Let $\xi(x)$ denote the function defined by \begin{equation}\label{eq:12} \frac23[\xi(x)]^{3/2}=\frac{1}{\log(1/q)}\int_0^{\log(4x)}\arctan\sqrt{e^s-1}ds. \end{equation} Then, for fixed $x>1/4$, the $q$-Airy function has the following asymptotic approximation \begin{equation}\label{eq:13} A_q(\sqrt q x)\sim 2\sqrt{\pi }\exp\left\{\frac{3\log^2x-\pi^2}{12\log(1/q)} \right\}\left(\frac{\xi(x)}{4x-1}\right)^\frac14 \Ai(-\xi(x)) \end{equation} as $q\to1$. \end{theorem} This result in fact holds for any $x\in\mathbb C\setminus\{0\}$, if we replace $\xi(x)$ in (\ref{eq:13}) by $\widetilde{\xi}(x)$ defined by \begin{equation*} \frac23\big[-\widetilde{\xi}(x)\big]^{3/2}=\frac{1}{\log q}\left[\int_0^{\log(4x)}\log\left(1+\sqrt{1-e^s}\right)ds-\frac{\log^2(4x)}{4}\right]. \end{equation*} In view of this result, $A_q$ is indeed a $q$-analogue of the Airy function. \section{Global asymptotics of Stieltjes-Wigert polynomials} \textbf{Proof of Theorem 1.} If $u=0$, then $z=uq^{-nt}=0$ and $$ (q;q)_nS_n(0;q)=A_q(0)=1 $$ by the series representations in (\ref{eq:1.4}) and (\ref{eq:8}). Hence, the result in (\ref{eq:3.13})-(\ref{eq:3.14}) follows immediately, and we need only consider the case when $u\neq0$. Noting that $$ z=uq^{-nt}=\frac{u}{|u|}q^{-n\big(t-\frac{\log|u|}{n\log q}\big)}, $$ we may assume $|u|=1$ without loss of generality.
For notational convenience, we put \begin{equation}\label{eq:3.17} \begin{aligned} r_n(z)=&(q;q)_n S_n(z;q)-A_q(z)\\ =&\sum_{j=0}^n\left[\frac{(q;q)_n}{(q;q)_{n-j}}-1\right]\frac{q^{j^2}}{(q;q)_j}(-z)^j -\sum_{j=n+1}^\infty \frac{q^{j^2}}{(q;q)_j}(-z)^j\\ =:&I_1+I_2+I_3, \end{aligned} \end{equation} where $$ I_1=\sum_{j=0}^{\lfloor n\sigma\rfloor} \left[\frac{(q;q)_n}{(q;q)_{n-j}}-1\right]\frac{q^{j^2}}{(q;q)_j}(-z)^j, $$ $$ I_2=\sum_{j=\lfloor n\sigma\rfloor+1}^{n} \left[\frac{(q;q)_n}{(q;q)_{n-j}}-1\right]\frac{q^{j^2}}{(q;q)_j}(-z)^j, $$ $$ I_3= \sum_{j=n+1}^\infty \frac{q^{j^2}}{(q;q)_j}(-z)^j, $$ and $0<\sigma<1$ is a constant to be specified later. In view of the inequality $1-ab<(1-a)+(1-b)$ for any $a,b\in(0,1)$, we have \begin{equation} 1-\frac{(q;q)_n}{(q;q)_{n-j}} =1-(q^{n-j+1};q)_j <\sum_{i=1}^jq^{n-j+i} <\frac{q^{n-j}}{1-q}. \label{267488156} \end{equation} Thus, we have \begin{equation}\label{eq:3.18} |I_1|\leq\sum_{j=0}^{\lfloor n\sigma\rfloor} \frac{q^{n-j}}{1-q}\frac{q^{j^2}}{(q;q)_j}|z|^j <\frac{q^{n(1-\sigma)}}{1-q}A_q(-|z|). \end{equation} Let \begin{equation} \sigma=\max\left\{\frac12,\frac{t}{4}+\frac12\right\},\qquad l=\max\left\{\frac n4,\lfloor\frac{nt}{2}\rfloor+1\right\}. \end{equation} It is readily seen that $\frac t2<\sigma<1$ and that $\frac {nt}2<l<n$ for sufficiently large $n$. For $j\geq \lfloor n\sigma\rfloor$ and $n$ sufficiently large, there exists $\gamma>0$ such that $$ (j^2-ntj)-(l^2-ntl)=(j-l)(j+l-nt)>(j-l)^2\geq \gamma^2 j^2. $$ Then, it follows that $$ \begin{aligned} |I_2+I_3|\leq& \frac{1}{1-q}\sum_{j=\lfloor n\sigma\rfloor+1}^{\infty} \frac{q^{j^2}}{(q;q)_j}|z|^j\\ =& \frac{q^{l^2-ntl}}{1-q}\sum_{j=\lfloor n\sigma\rfloor+1}^{\infty} \frac{q^{(j^2-ntj)-(l^2-ntl)}}{(q;q)_j}\\ <& \frac{q^{l^2-ntl}}{(1-q)(q;q)_\infty}\sum_{j=\lfloor n\sigma\rfloor+1}^{\infty} q^{\gamma^2 j^2}. 
\end{aligned} $$ Since $q^{\gamma^2j}\leq\frac12$ for $j\geq \lfloor n\sigma\rfloor$ and sufficiently large $n$, we have \begin{equation}\label{eq:32145} \begin{aligned} |I_2+I_3| \leq& \frac{q^{l^2-ntl}}{(1-q)(q;q)_\infty}\sum_{j=\lfloor n\sigma\rfloor+1}^{\infty} \left(\frac12\right)^j\\ \leq& \frac{q^{l^2-ntl}}{(1-q)(q;q)_\infty} \left(\frac{1}{2}\right)^{\lfloor n\sigma\rfloor}. \end{aligned} \end{equation} Similar to the inequality in (\ref{267488156}), we have $$ 1-\frac{(q;q)_\infty}{(q;q)_l}=1-(q^{l+1};q)_\infty<\frac{q^l}{1-q}\leq \frac{q^\frac{n}{4}}{1-q}<\frac12 $$ for sufficiently large $n$, which leads to \begin{equation}\label{eq:2458} \frac{(q;q)_l}{(q;q)_\infty}<\frac{1}{1-\frac{1}{2}}=2. \end{equation} Moreover, we have \begin{equation}\label{eq:125482} \frac{q^{l^2-ntl}}{(q;q)_l}\leq \sum_{j=0}^\infty\frac{q^{j^2-ntj}}{(q;q)_j} \leq \sum_{j=0}^\infty\frac{q^{j^2}}{(q;q)_j}|q^{-nt}u|^{j} =A_q(-|z|). \end{equation} A combination of the inequalities in (\ref{eq:32145}), (\ref{eq:2458}) and (\ref{eq:125482}) gives $$ |I_2+I_3|\leq \frac{2}{1-q} \left(\frac12\right)^{\lfloor n\sigma\rfloor}A_q(-|z|), $$ which, together with (\ref{eq:3.17}) and (\ref{eq:3.18}), yields (\ref{eq:3.14}). The result in (\ref{eq:3.15})-(\ref{eq:3.16}) can be proved in a similar manner. One can also obtain this directly from (\ref{eq:3.13}), (\ref{eq:3.14}) and the symmetry relation of $S_n(z;q)$ mentioned in (\ref{eq:5}). \section{Limiting behavior of $A_q(z)$ as $q\to1$} It is known that $A_q(z)$ has infinitely many positive zeros and satisfies the three-term recurrence relation \cite{Ismail} \begin{equation} A_q(z)-A_q(qz)+qzA_q(q^2z)=0. \end{equation} Moreover, Zhang \cite{Zhang} has shown that \begin{equation} \lim_{q\to 1^-}A_q\big((1-q)z\big)=e^{-z} \end{equation} for any fixed $z\in\mathbb C$.
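Zhang's limit can be observed numerically. In the \texttt{Python} sketch below, the choice $z=1.5$ and the sequence of $q$-values are arbitrary, and the series defining $A_q$ is truncated once its terms are negligible:

```python
import math

def qpoch(a, q, n):
    # q-shifted factorial (a; q)_n
    out = 1.0
    for j in range(n):
        out *= 1.0 - a * q**j
    return out

def A(z, q, terms=200):
    # q-Airy function, truncated; the factorial-like decay of the terms
    # makes the tail negligible for these q and z
    return sum(q**(k * k) / qpoch(q, q, k) * (-z)**k for k in range(terms))

z = 1.5
# Approximation error |A_q((1-q)z) - exp(-z)| for q approaching 1
errs = [abs(A((1 - q) * z, q) - math.exp(-z)) for q in (0.9, 0.99, 0.999)]
```

The error shrinks as $q\to1^-$, in line with the stated limit.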
In \cite[Proposition 1]{WangXSWong2}, Wang and Wong proved that \begin{equation}\label{eq:lim2} A_q(z)=\frac{(-z)^mq^{m^2}}{(q;q)_\infty}\big[\Theta_q(-q^{2m}z)+\O{q^{m(1-\delta)}}\big] \end{equation} as $z\to\infty$, where $m:=\lfloor\frac{\ln|z|}{-2\ln q}\rfloor$ and $\delta>0$ is any small number; see also Zhang \cite[Theorem 2.1]{Zhang}. In this section, we shall establish the limiting relation of $A_q(z)$, as $q\to 1$, stated in Theorem~\ref{thm:2}. Let us first review some of the results given in Wang and Wong~\cite{WangWong}. Let \begin{equation}\label{eq:32} k:=k(n)=n^{\frac14}. \end{equation} Then \begin{equation}\label{eq:33} q=\exp\{-(2k^2)^{-1}\}=\exp\{-(2\sqrt n)^{-1}\} = 1-\frac{1}{2\sqrt{n}}+\O{\frac1n} \end{equation} as $n\to\infty$. In Sec. 1 we have commented that the main result in Wang and Wong~\cite{WangWong} holds if the parameter $k$ in (\ref{eq:1}) depends on $n$ and satisfies a growth condition such as the one given in (\ref{eq:32}). The MRS numbers $\alpha_n$ and $\beta_n$ have been calculated in~\cite{WangWong}, and are explicitly given in equations (3.19) and (3.20). We note that \begin{equation}\label{eq:34} \alpha_n\sim \frac14,\qquad \beta_n\sim 4q^{-(2n+1)},\qquad \text{as } n\to\infty; \end{equation} cf. \cite[eq. (2.4)]{WangWong}. Furthermore, the change of variable $t \to y$ defined by \begin{equation}\label{eq:t} y=\sqrt{\alpha_n\beta_n}\exp\left[\frac t2 \log(\beta_n/\alpha_n)\right] \end{equation} takes the interval $-1\leq t\leq 1$ onto the interval $\alpha_n\leq y\leq \beta_n$, where $y=1/(xq^{2n+1})$; see~\cite[(2.14)]{WangWong}. Let $\pi_n(x)$ denote the monic Stieltjes-Wigert polynomial $$ \pi_n(x)=\frac{p_n(x)}{\gamma_n}, $$ where $p_n(x)$ is the polynomial given in (\ref{eq:SWp}) and $$ \gamma_n=q^{n^2+n+1/4}/\sqrt{(q;q)_n}; $$ that is \begin{equation} \pi_n(x)=(-1)^nq^{-n^2-n/2}(q;q)_nS_n(q^{1/2}x;q).
\end{equation} The major result in \cite{WangWong} is the asymptotic formula \begin{equation}\label{eq:asymp} \pi_n\big(y\big)=\frac{\sqrt{\pi}e^{l_n/2}}{\sqrt{w(y)}}\left\{N^\frac16 \Ai\left(N^\frac23\eta_n(t)\right)A(y,n)+\O{N^{-\frac16}}\right\}, \end{equation} where $N=n+\frac12$, \begin{equation}\label{eq:A} A(y,n)=\frac{[\eta_n(t)]^{1/4}(\beta_n-\alpha_n)^{1/2}}{[(y-\alpha_n)(y-\beta_n)]^{1/4}}, \end{equation} $$ \frac23[-\eta_n(t)]^{3/2}=\frac{a}{N\log(1/q)}\int_t^1\arctan \frac{\sqrt{(e^{a\tau}-e^{-a})(e^a-e^{a\tau})}}{e^{a\tau}+1}d\tau $$ and $$ a:=\frac12\log\frac{\beta_n}{\alpha_n}; $$ see \cite[(2.11), (2.12) and (6.7)]{WangWong}. Recall that $y$ in (\ref{eq:asymp}) and (\ref{eq:A}) is a function of $t$, given in (\ref{eq:t}). Using the formula \cite[(6.8)]{WangWong} $$ \frac23[-\eta_n(t)]^{3/2}=\frac{1}{N\log(1/q)}\int_0^{a(1-t)}\arctan\sqrt{e^s-1}ds+\O{q^{\frac12\delta N}}, $$ where $-1+\delta<t<1$, we have \begin{equation} N^{\frac23}\eta_n\sim-\xi(x) \label{2359824} \end{equation} as $n\to\infty$, where $\xi(x)$ is the function defined in (\ref{eq:12}). The last equation gives \begin{equation} \Ai(N^{2/3}\eta_n)\sim\Ai(-\xi(x)),\qquad n\to\infty. \label{2985245} \end{equation} Next, we show that \begin{equation} N^\frac16 A\left(y,n\right)\sim 2\sqrt{x}\left(\frac{\xi(x)}{4x-1}\right)^\frac14,\qquad n\to\infty. \label{298888} \end{equation} To this end, we note from (\ref{eq:A}) that \begin{equation} \begin{aligned} N^\frac16 A\left(\frac{1}{xq^{2n+1}},n\right) =& \left\{\frac{N^{2/3}\eta_n(t)(\beta_n-\alpha_n)^2}{[1/xq^{2n+1}-\alpha_n][1/xq^{2n+1}-\beta_n]}\right\}^\frac14\\ =& \left\{\frac{N^{2/3}\eta_n(t)(\beta_n-\alpha_n)^2x^2q^{2(2n+1)}}{(1-\alpha_nxq^{2n+1})(1-\beta_nxq^{2n+1})}\right\}^\frac14. \end{aligned} \end{equation} By using equations (3.17) and (3.18) in~\cite{WangWong}, we have \begin{equation}\label{eq:14253} \alpha_n\beta_n=q^{-(2n+1)}.
\end{equation} Since $\beta_n$ is large, the last two equations, together with (\ref{2359824}), give \begin{equation} N^\frac16 A\left(\frac{1}{xq^{2n+1}},n\right)\sim \left(\frac{-\xi(x)x^2/\alpha_n^2}{1-x/\alpha_n}\right)^\frac14,\qquad n\to\infty, \end{equation} thus proving (\ref{298888}). Here, we have also made use of the fact that $\alpha_n\sim\frac14$ as $n\to\infty$. Finally, we evaluate the asymptotics of \begin{equation} \frac{\sqrt\pi e^{\frac12 l_n}}{y^n\sqrt{w(y)}}=\sqrt{\pi}\exp\left\{\frac12 l_n+\frac12 k^2 \log^2 y-\frac12\log\frac k{\sqrt{\pi}}-n\log y \right\}. \end{equation} Recall $y=1/xq^{2n+1}$. Using the formula \cite[(3.29)]{WangWong} $$ l_n=\frac{N(N-1)}{k^2}-\frac{k^2\pi^2}{3}+\log\frac{k}{\sqrt\pi}+\O{Nq^N}, $$ we obtain \begin{equation} \frac{\sqrt\pi e^{\frac12 l_n}}{y^n\sqrt{w(y)}}\sim \sqrt{\pi}\exp\left\{ \frac{N(N-1)}{2k^2}-\frac{k^2\pi^2}{6}+\frac12 k^2 \log^2 (xq^{2n+1})+n\log(xq^{2n+1}) \right\}. \label{abcdfe} \end{equation} Since $\log(1/q)=1/2k^2$ and $N=n+\frac12$, one can show that the right-hand side of (\ref{abcdfe}) is equal to \begin{equation} \sqrt{\pi}\exp\left\{-\frac12\log x+\frac{3\log^2x-\pi^2}{12\log(1/q)} \right\}=\sqrt\frac{\pi}{x}\exp\left\{\frac{3\log^2x-\pi^2}{12\log(1/q)} \right\}. \end{equation} Hence, we obtain \begin{equation} \frac{\sqrt\pi e^{\frac12 l_n}}{y^n\sqrt{w(y)}}\sim \sqrt\frac{\pi}{x}\exp\left\{\frac{3\log^2x-\pi^2}{12\log(1/q)} \right\},\qquad n\to\infty. \label{3122334} \end{equation} A combination of (\ref{eq:asymp}), (\ref{298888}) and (\ref{3122334}) yields \begin{equation} (xq^{2n+1})^n\pi_n(1/xq^{2n+1})\sim \sqrt{\frac\pi x}\exp\left\{\frac{3\log^2x-\pi^2}{12\log(1/q)} \right\}2\sqrt{x}\left(\frac{\xi(x)}{4x-1}\right)^\frac14 \Ai(-\xi(x)). \label{3288546} \end{equation} Therefore, we have $$ (xq^{2n+1})^n\pi_n(1/xq^{2n+1})=(q;q)_n(-q^{1/2}x q^{n})^n S_n(q^{-2n}/q^{1/2}x;q)=(q;q)_nS_n(q^\frac12 x;q). $$ Here, we have made use of the symmetry relation of $S_n(z;q)$ given in (\ref{eq:5}). 
Note that the result in Theorem~\ref{thm:1} in fact holds with $q$ satisfying (\ref{eq:33}). Thus, Theorem~\ref{thm:1} gives \begin{equation} (xq^{2n+1})^n\pi_n(1/xq^{2n+1})=A_q(\sqrt q x)+r_n(\sqrt q x). \label{334567} \end{equation} Coupling (\ref{3288546}) and (\ref{334567}), we obtain the desired formula $$ A_q(\sqrt q x)\sim 2\sqrt{\pi }\exp\left\{\frac{3\log^2x-\pi^2}{12\log(1/q)} \right\}\left(\frac{\xi(x)}{4x-1}\right)^\frac14 \Ai(-\xi(x)) $$ as $q\to 1^-$. \section{Numerical Verification} In the following tables, we have used a Maple-aided program to verify the asymptotic formulas in Theorem~\ref{thm:1} and the limiting relation in Theorem~\ref{thm:2}. The values are represented in scientific notation; for example, $$ -2.325\text{e-}3=-2.325\times10^{-3}=-0.002325. $$ In Table 1, `True' stands for the true value of $(q;q)_nS_n(uq^{-nt};q)$ obtained by summing the series in (\ref{eq:1.4}); `Approx.' stands for the approximate value obtained from (\ref{eq:3.13}); `Error' is the relative error of the approximate value. In Table 1, the degree of the polynomial is $n=50$ and $q=0.5$; $u$ takes the values $1$, $-1$ and $1+i$, and $t$ takes the values $0, 0.5, 0.8, 1.0, 1.2$ and $1.6$. Here, we would like to mention that the approximate values are very close to the true values. For instance, in the case when $u=t=1$, the true value is -5.83981318477869$\cdots\times10^{187}$, whereas the approximate value is -5.83981318477868$\cdots\times10^{187}$. If we keep only a few digits, as in Table 1, the two values appear to be nearly the same. In Table 2, `True' stands for the true value of $A_q(\sqrt q x)$ obtained by summing the series in (\ref{eq:8}); `Approx.' stands for the approximate value obtained from the quantity on the right-hand side of (\ref{eq:13}); `Error' is the relative error of the approximate value. We examine the cases when $x=0.5,1.0,4.0,10,20$ and $q=0.9,0.92,0.94,0.96,0.98,0.99$.
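The $t=0$ column of Table 1 can also be reproduced with a short \texttt{Python} script in place of Maple; a sketch:

```python
def qpoch(a, q, n):
    # q-shifted factorial (a; q)_n
    out = 1.0
    for j in range(n):
        out *= 1.0 - a * q**j
    return out

def S(n, z, q):
    # Stieltjes-Wigert polynomial from the explicit sum (1.4)
    return sum(q**(j * j) / (qpoch(q, q, j) * qpoch(q, q, n - j)) * (-z)**j
               for j in range(n + 1))

def A(z, q, terms=60):
    # q-Airy function, truncated; the tail is negligible for these values
    return sum(q**(k * k) / qpoch(q, q, k) * (-z)**k for k in range(terms))

n, q = 50, 0.5
true_1 = qpoch(q, q, n) * S(n, 1.0, q)    # u = 1, t = 0 entry of Table 1
true_m1 = qpoch(q, q, n) * S(n, -1.0, q)  # u = -1, t = 0 entry of Table 1
rel_1 = abs(true_1 - A(1.0, q)) / abs(true_1)
```

Up to double-precision rounding, this recovers the tabulated values $0.16076$ and $2.17267$, together with the smallness of the tabulated relative errors.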
\begin{table}[h] \caption{Numerical Verification of Theorem 1} \begin{tabular}{|c|c|cccccc|} \hline \backslashbox{$u$\kern-1em}{$t$} & &0 &0.5 &0.8 &1.0 &1.2 &1.6 \\ \hline\multirow{3}*{1} &True &0.16076 &-9.3534e42 &1.0831e120 &-5.8398e187 &3.5453e270 &1.8649e481 \\ \cline{2-8} &Approx. &0.16076 &-9.3534e42 &1.0831e120 &-5.8398e187 &3.5453e270 &1.8649e481\\ \cline{2-8} &Error &2.99e-15 &2.98e-8 &1.78e-15 &1.18e-15 &6.05e-13 &6.36e-7\\ \hline\multirow{3}*{-1} &True &2.17267 &8.0063e47 &1.9036e121 &1.0264e189 &6.2313e271 &3.2740e482 \\ \cline{2-8} &Approx. &2.17267 &8.0063e47 &1.9306e121 &1.0264e189 &6.2312e271 &3.2778e482 \\ \cline{2-8} &Error &6.31e-16 &6.12e-12 &1.11e-9 &3.54e-8 &1.13e-6 &1.16e-3 \\ \hline \end{tabular} \begin{tabular}{|c|c|cccc|} \hline \backslashbox{$u$\kern-1em}{$t$} & &0 &0.5 & 1.0 &1.6\\ \hline\multirow{3}*{1+i} &True &0.0117-0.6786i &(1.92+8.38i)e48 &(-8.18-1.87i)e191 &(4.107-2.571i)e487 \\ \cline{2-6} &Approx. &0.0117-0.6786i &(1.92+8.38i)e48 &(-8.18-1.87i)e191 &(4.106-2.578i)e487 \\ \cline{2-6} &Error &8.83e-17 &3.16e-12 &1.39e-8 &4.55e-4 \\ \hline \end{tabular} \end{table} \begin{table}[h] \caption{Numerical Verification of Theorem 2} \begin{tabular}{|c|c|cccccc|} \hline \backslashbox{$x$\kern-1em}{$q$} & & 0.9 & 0.92 & 0.94 &0.96 &0.98 &0.99\\ \hline\multirow{3}*{0.5} &True &-2.325e-3 &-3.826e-4 &1.120e-5 &-2.966e-8 &5.080e-16 &4.9298e-32 \\ \cline{2-8} &Approx. &-2.320e-3 &-3.819e-4 &1.118e-5 &-2.964e-8 &5.078e-16 &4.9303e-32\\ \cline{2-8} &Error & 0.0022 &0.0018 & 0.0012 &0.00080 &0.00038 &0.0000995\\ \hline\multirow{3}*{1.0} &True & -5.171e-4 &2.978e-5 &-2.556e-6 &1.326e-9 &2.178e-18 &4.1417e-36 \\ \cline{2-8} &Approx. & -5.159e-4 &2.973e-5 &-2.553e-6 &1.325e-9 &2.177e-18 &4.1408e-36\\ \cline{2-8} &Error & 0.0022 &0.0018 & 0.0013 &0.00087 &0.00043 &0.00021\\ \hline\multirow{3}*{4.0} &True &0.034973 &0.01680 &4.4202e-4 &-1.0084e-4 &4.4912e-8 &-5.6869e-16 \\ \cline{2-8} &Approx. 
&0.034891 &0.01677 &4.1898e-4 &-1.0073e-4 &4.4893e-8 &-5.6853e-16 \\ \cline{2-8} &Error & 0.00235 &0.0018 & 0.0028 &0.00107 &0.00043 & 0.00028\\ \hline\multirow{3}*{10} &True &38.6522 &-247.876 &2715.83 &43744.8 &3.3978e10 &2.1941e21 \\ \cline{2-8} &Approx. &38.5316 &-247.372 &2712.29 &43745.2 &3.3961e10 &2.1944e21\\ \cline{2-8} &Error & 0.0031 &0.0020 & 0.0013 &0.00022 &0.00051 &0.00014\\ \hline\multirow{3}*{20} &True &-2.0951e5 &-3.5927e6 &-5.9716e9 &7.3472e14 &-6.1129e29 &1.5900e61 \\ \cline{2-8} &Approx. &-2.0884e5 &-3.5801e6 &-5.9645e9 &7.3418e14 &-6.1124e29 &1.5897e61\\ \cline{2-8} &Error & 0.00320 &0.00349 & 0.00119 &0.00073 &0.00007 &0.00023\\ \hline \end{tabular} \end{table}
\section{Introduction} \label{sec:intro} In many areas, like finance, economics or physics, a common way of assessing the performance of a system is to consider the ratio of what the system delivers to what it consumes. In communication theory, transmit power and transmission rate are respectively two common measures of the cost and benefit of a transmission. Therefore, the ratio of the transmission rate (in bit/s) to the transmit power (in J/s) appears to be a natural energy-efficiency measure of a communication system. An important question is then: what is the maximum amount of information (in bits) that can be conveyed per Joule consumed? As reported in \cite{verdu-it-1990}, one of the first papers addressing this issue is \cite{pierce-tcom-1978} where the author determines the capacity per unit cost for various versions of the photon counting channel. As shown in \cite{verdu-it-1990}, the normalized\footnote{In \cite{verdu-it-1990} the capacity per unit cost is in bit/s per Joule and not in bit/J, which amounts to normalizing by a quantity in Hz.} capacity per unit cost for the well-known additive white Gaussian channel model $Y = X + Z$ is maximized for Gaussian inputs and is given by $\lim_{P \rightarrow 0} \frac{\log_2 \left(1+ \frac{P}{\sigma^2} \right)}{P} = \frac{1}{\sigma^2 \ln 2}$, where $\mathbb{E}|X|^2 = P$ and $Z \sim \mathbb{C} \mathcal{N}(0, \sigma^2)$. Here, the main message of communication theory to engineers is that energy-efficiency is maximized by operating at low transmit power and therefore at low transmission rates. However, this answer holds for static and single input single output (SISO) channels and it is legitimate to ask: what is the answer for multiple-input multiple-output (MIMO) channels? In fact, as shown in this paper, the case of slow fading MIMO channels is especially worth considering.
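As a quick numerical check of this low-power limit (a minimal sketch; the noise variance and the probe powers are arbitrary choices):

```python
import math


def capacity_per_unit_cost(P, sigma2=1.0):
    """Normalized capacity log2(1 + P/sigma2) / P of the AWGN channel."""
    # log1p avoids the loss of precision of log(1 + x) for tiny x
    return math.log1p(P / sigma2) / (P * math.log(2))


# the ratio increases as P -> 0 and approaches 1 / (sigma^2 ln 2)
limit = 1.0 / math.log(2)  # sigma2 = 1
for P in (1.0, 1e-2, 1e-4, 1e-6):
    print(P, capacity_per_unit_cost(P))
```

The printed values increase monotonically toward $1/(\sigma^2\ln 2)\approx 1.4427$ bit/J as $P$ decreases, illustrating the "transmit at low power" message.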
Roughly speaking, the main reason for this is that, in contrast to static and fast fading channels, in slow fading channels there are outage events which imply the existence of an optimum tradeoff between the number of successfully transmitted bits or blocks (called goodput in \cite{katz-it-2005} and \cite{goodman-pcom-2000}) and power consumption. Intuitively, this can be explained by saying that increasing transmit power too much may result in a marginal increase in terms of quality or effective transmission rate. First, let us consider SISO slow fading or quasi-static channels. The most relevant works related to the problem under investigation essentially fall into two classes corresponding to two different approaches. The first approach, which is the one adopted by Verd\'{u} in \cite{verdu-it-1990} and has already been mentioned, is an information-theoretic approach aiming at evaluating the capacity per unit cost or the minimum energy per bit (see e.g., \cite{elgamal-it-2006}, \cite{cai-tcom-2005}, \cite{yao-tw-2005}, \cite{jain-allerton-2009}). In \cite{verdu-it-1990}, two different cases were investigated depending on whether or not the input alphabet contains a zero-cost (free) symbol. In this paper, only the case where the input alphabet does not contain a zero-cost symbol will be discussed (i.e., silence at the transmitter does not convey information). The second approach, introduced in \cite{shah-pimrc-1998}, is more pragmatic than the previous one. In \cite{shah-pimrc-1998} and subsequent works \cite{goodman-pcom-2000}, \cite{saraydar-tcom-2002}, the authors define the energy-efficiency of a SISO communication as $u(p) = \frac{R f(\eta)}{p}$ where $R$ is the effective transmission data rate in bits, $\eta$ the signal-to-interference-plus-noise ratio (SINR) and $f$ is a benefit function (e.g., the success probability of the transmission) which depends on the chosen coding and modulation schemes.
To the authors' knowledge, in all works using this approach (\cite{shah-pimrc-1998}, \cite{goodman-pcom-2000}, \cite{saraydar-tcom-2002}, \cite{meshkati-tc-2005}, \cite{buzzi-jsac-2008}, \cite{lasaulce-twc-2009}, etc.), the same (pragmatic) choice is made for $f$: $f(x) = (1- e^{-\alpha x})^N$, where $\alpha$ is a constant and $N$ the block length in symbols. Interestingly, the two mentioned approaches can be linked by making an appropriate choice for $f$. Indeed, if $f$ is chosen to be the complement of the outage probability, one obtains a counterpart of the capacity per unit cost for slow fading channels and gives an information-theoretic interpretation to the initial definition of \cite{shah-pimrc-1998}. To our knowledge, the resulting performance metric has not been considered so far in the literature. This specific metric, which we call goodput-to-power ratio (GPR), will be considered in this paper. Moreover, we consider MIMO channels where the transmitter and receiver are informed of the channel distribution information (CDI) and channel state information (CSI) respectively. To conclude the discussion on the relevant literature, we note that some authors addressed the problem of energy-efficiency in MIMO communications but they did not consider the proposed energy-efficiency measure based on the outage probability. In this respect, the most relevant works seem to be \cite{cui-jsac-2004}, \cite{verdu-it-2002} and \cite{buzzi-eusipco-2008}. In \cite{cui-jsac-2004}, the authors adopt a pragmatic approach consisting in choosing a certain coding-modulation scheme in order to reach a given target data rate while minimizing the consumed energy. In \cite{verdu-it-2002}, the authors study the tradeoff between the minimum energy per bit and the spectral efficiency for several MIMO channel models in the wide-band regime, assuming a zero-cost symbol in the input alphabet and uniform power allocation over all the antennas.
In \cite{buzzi-eusipco-2008}, the authors consider a similar pragmatic approach to the one in \cite{goodman-pcom-2000}, \cite{saraydar-tcom-2002} and study a multi-user MIMO channel where the transmitters are constrained to use beamforming power allocation strategies. This paper is structured as follows. In Sec. \ref{sec:sys}, assumptions on the signal model are provided. In Sec. \ref{sec:det_ff}, the proposed energy-efficiency measure is defined for static and fast-fading MIMO channels. As the case of slow fading channels is non-trivial, it is discussed separately: in Sec. \ref{sec:finite-MIMO}, the problem of energy-efficient precoding is discussed for general MIMO slow fading channels and solved for the multiple input single output (MISO) case, whereas in Sec. \ref{sec:asymptotic-MIMO} asymptotic regimes (in terms of the number of antennas and SNR) are assumed. In Sec. \ref{sec:simus}, simulations illustrating the derived results and stated conjectures are provided. Sec. \ref{sec:conclusion} provides concluding remarks and open issues. \section{General System Model} \label{sec:sys} We consider a point-to-point communication with multiple antenna terminals. The signal at the receiver is modeled by: \begin{equation} \label{eq:system-model-mimo} \underline{y}(\tau)=\mathbf{H}(\tau) \underline{x}(\tau) + \underline{z}(\tau), \end{equation} where $\mathbf{H}$ is the $n_r \times n_t$ channel transfer matrix and $n_t$ (resp. $n_r$) the number of transmit (resp. receive) antennas. The entries of $\mathbf{H}$ are i.i.d. zero-mean unit-variance complex Gaussian random variables. The vector $\underline{x}$ is the $n_t$-dimensional column vector of transmitted symbols and $\underline{z}$ is an $n_r$-dimensional complex white Gaussian noise vector distributed as $\mathbb{C}\mathcal{N}(\underline{0}, \sigma^2 \mathbf{I})$. In this paper, the problem of allocating the transmit power between the available transmit antennas is considered.
We will denote by $\mathbf{Q} = \mathbb{E}[\underline{x}\underline{x}^H]$ the input covariance matrix (called the precoding matrix), which reflects the chosen power allocation (PA) policy. The corresponding total power constraint is \begin{equation} \label{eq:power-constraint} \mathrm{Tr}(\mathbf{Q}) \leq \overline{P}. \end{equation} From now on, the time index $\tau$ is dropped for the sake of clarity. In fact, depending on the rate at which $\mathbf{H}$ varies with $\tau$, three dominant classes of channel models can be distinguished: \begin{enumerate} \item the class of static channels; \item the class of fast fading channels; \item the class of slow fading channels. \end{enumerate} The matrix $\mathbf{H}$ is assumed to be perfectly known at the receiver (coherent communication assumption) whereas only the statistics of $\mathbf{H}$ are available at the transmitter. The first two classes of channels are considered in Sec. \ref{sec:det_ff} and the last one is treated in detail in Sec. \ref{sec:finite-MIMO} and \ref{sec:asymptotic-MIMO}. \section{Energy-efficient communications over static and fast fading MIMO channels} \label{sec:det_ff} \subsection{Case of static channels} \label{sec:sub-static-channels} Here the frequency at which the channel matrix varies is strictly zero; that is, $\mathbf{H}$ is a constant matrix. In this particular context, both the transmitter and receiver are assumed to know this matrix. We are exactly in the same framework as \cite{telatar-ett-1999}. Thus, for a given precoding scheme $\mathbf{Q}$, the transmitter can send reliably to the receiver $\log_2\left|\mathbf{I}_{n_r}+ \rho \mathbf{H}\mathbf{Q}\mathbf{H}^H\right|$ bits per channel use (bpcu) with $\rho = \frac{1}{\sigma^2}$. Then, let us define the energy-efficiency of this communication by: \begin{equation} \label{eq:def-ee-static-ch} G_{\mathrm{static}}(\mathbf{Q})=\frac{\log_2\left|\mathbf{I}_{n_r}+ \rho \mathbf{H}\mathbf{Q}\mathbf{H}^H\right|}{\mathrm{Tr}(\mathbf{Q})}.
\end{equation} The energy-efficiency $G_{\mathrm{static}}(\mathbf{Q})$ corresponds to an achievable rate per unit cost for the MIMO channel as defined in \cite{verdu-it-1990}. Assuming that the cost of the transmitted symbol $\underline{x}$, denoted by $b[\underline{x}]$, is the consumed energy $b[\underline{x}] = \|\underline{x}\|^2 =\mathrm{Tr}(\underline{x}\underline{x}^H)$, the capacity per unit cost defined in \cite{verdu-it-1990} is: $\widetilde{C}_{\mathrm{static}} \triangleq \displaystyle{\sup_{\underline{x}, \mathbb{E}[b[\underline{x}]] \leq \overline{P}} } \frac{I(\underline{x};\underline{y})}{\mathbb{E}[b[\underline{x}]]}$. The supremum is taken over the p.d.f. of $\underline{x}$ such that the average transmit power is limited: $\mathbb{E}[b[\underline{x}]] \leq \overline{P}$. It is easy to check that: \begin{equation} \label{eq:it_det_mimo} \begin{array}{lcl} \widetilde{C}_{\mathrm{static}} & = & \displaystyle{\sup_{\mathbf{Q}, \mathrm{Tr}(\mathbf{Q}) \leq \overline{P}}} \frac{1}{\mathrm{Tr}(\mathbf{Q})} \ \ \displaystyle{\sup_{\underline{x}, \mathbb{E}[\underline{x}\underline{x}^H] = \mathbf{Q}} }I(\underline{x};\underline{y}) \\ & = & \displaystyle{\sup_{\mathbf{Q}, \mathrm{Tr}(\mathbf{Q}) \leq \overline{P}} } G_{\mathrm{static}}(\mathbf{Q}). \end{array} \end{equation} The second equality follows from \cite{telatar-ett-1999} where Telatar proved that the mutual information for the MIMO static channel is maximized using Gaussian random codes. In other words, finding the optimal precoding matrix which maximizes the energy-efficiency function corresponds to finding the capacity per unit cost of the MIMO channel where the cost of a symbol is the power consumed to transmit it. The question is then whether the strategy ``transmit at low power'' (and therefore at a low transmission rate) to maximize energy-efficiency, which is optimal for SISO channels, also applies to MIMO channels.
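This question can be probed numerically before turning to the formal answer. The sketch below (dimensions, seed and noise level are arbitrary illustrative choices) evaluates $G_{\mathrm{static}}$ from (\ref{eq:def-ee-static-ch}) under uniform power allocation $\mathbf{Q}=\frac{p}{n_t}\mathbf{I}_{n_t}$ for one sampled channel realization:

```python
import numpy as np

rng = np.random.default_rng(0)
nt, nr, sigma2 = 2, 2, 1.0
# one fixed realization of H (static channel), i.i.d. CN(0, 1) entries
H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)


def G_static(p):
    """Energy-efficiency under uniform power allocation Q = (p / nt) I."""
    Q = (p / nt) * np.eye(nt)
    M = np.eye(nr) + (H @ Q @ H.conj().T) / sigma2
    return np.log2(np.linalg.det(M).real) / p


# value approached as the transmit power vanishes
low_power_limit = np.trace(H @ H.conj().T).real / (nt * sigma2 * np.log(2))
print(G_static(1.0), G_static(1e-6), low_power_limit)
```

For any fixed realization of $\mathbf{H}$, the computed efficiency increases as $p$ decreases and approaches $\frac{1}{\ln 2}\frac{\mathrm{Tr}(\mathbf{H}\mathbf{H}^H)}{n_t\sigma^2}$.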
The answer is given by the following proposition, which is proved in Appendix \ref{appendix:A}. \begin{proposition}[Static MIMO channels] \label{proposition:static-channels} \emph{The energy-efficiency of a MIMO communication over a static channel, measured by $G_{\mathrm{static}}$, is maximized when $\mathbf{Q} = \mathbf{0}$ and this maximum is} \begin{equation} G_{\mathrm{static}}^* = \frac{1}{\ln 2} \frac{\mathrm{Tr}(\mathbf{H}\mathbf{H}^H)}{n_t \sigma^2}. \end{equation} \end{proposition} Therefore, we see that, for static MIMO channels, the energy-efficiency defined in Eq. (\ref{eq:def-ee-static-ch}) is maximized by transmitting at a very low power. This kind of scenario occurs, for example, when deploying sensors in the ocean to measure a temperature field (which varies very slowly). In some applications, however, the rate obtained with such a scheme may not be sufficient. In this case, considering the benefit to cost ratio can turn out to be irrelevant, meaning that other performance metrics have to be considered (e.g., minimize the transmit power under a rate constraint). \subsection{Case of fast fading channels} \label{sec:sub-fast-fading-ch} In this section, the frequency with which the channel matrix varies is the reciprocal of the symbol duration ($\underline{x}(\tau)$ being a symbol). This means that it can be different for each channel use. Therefore, the channel varies over a transmitted codeword (or packet) and, more precisely, each codeword sees as many channel realizations as the number of symbols per codeword. Because of the corresponding self-averaging effect, the following transmission rate (also called EMI for ergodic mutual information) can be achieved on each transmitted codeword by using the precoding strategy $\mathbf{Q}$: \begin{equation} R_{\mathrm{fast}}(\mathbf{Q}) = \mathbb{E}_{\mathbf{H}}\left[ \log_2\left|\mathbf{I}_{n_r}+ \rho \mathbf{H}\mathbf{Q}\mathbf{H}^H\right| \right].
\end{equation} Interestingly, $R_{\mathrm{fast}}(\mathbf{Q})$ can be maximized w.r.t. $\mathbf{Q}$ by knowing only the statistics of $\mathbf{H}$, that is, $\mathbb{E} \left[ \mathbf{H} \mathbf{H}^H \right]$, under the standard assumption that the entries of $\mathbf{H}$ are complex Gaussian random variables. In practice, this means that only the knowledge of the path loss, power-delay profile, antenna correlation profile, etc. is required at the transmitter to maximize the transmission rate. At the receiver, however, the instantaneous knowledge of $\mathbf{H}$ is required. In this framework, let us define energy-efficiency by: \begin{equation} \label{eq:def-ee-fast-fading-ch} G_{\mathrm{fast}}(\mathbf{Q})=\frac{\mathbb{E}_{\mathbf{H}} \left[\log_2\left|\mathbf{I}_{n_r}+\rho \mathbf{H}\mathbf{Q}\mathbf{H}^H\right|\right]}{\mathrm{Tr}(\mathbf{Q})}. \end{equation} By defining $\underline{g}_i$ as the $i$-th column of the matrix $\sqrt{\rho} \mathbf{H}\mathbf{U}$, $i \in \{1, \hdots, n_t \}$, $\mathbf{U}$ and $\{p_i\}_{i=1}^{n_t}$ an eigenvector matrix and the corresponding eigenvalues of $\mathbf{Q}$ respectively, and also by rewriting $G_{\mathrm{fast}}(\mathbf{Q})$ as \begin{equation} \displaystyle{G_{\mathrm{fast}}(\mathbf{Q}) =\mathbb{E}_{\mathbf{H}}\left[ \frac{\log_2\left|\mathbf{I}_{n_r}+\displaystyle{\sum_{i=1}^{n_t} p_i \underline{g}_i \underline{g}_i^H} \right|}{\displaystyle{\sum_{i=1}^{n_t}p_i}}\right]}, \end{equation} it is possible to apply the proof of Prop. \ref{proposition:static-channels} for each realization of the channel matrix. This leads to the following result.
\begin{proposition}[Fast fading MIMO channels] \label{proposition:fast-fading-channels} \emph{The energy-efficiency of a MIMO communication over a fast fading channel, measured by $G_{\mathrm{fast}}$, is maximized when $\mathbf{Q} = \mathbf{0}$ and this maximum is} \begin{equation} G_{\mathrm{fast}}^* = \frac{1}{\ln 2} \frac{\mathrm{Tr}(\mathbb{E} \left[ \mathbf{H}\mathbf{H}^H \right])}{n_t \sigma^2}. \end{equation} \end{proposition} We see that, for fast fading MIMO channels, maximizing energy-efficiency also amounts to transmitting at low power. Interestingly, in slow fading MIMO channels, where outage events are unavoidable, we have found that the answer can be different. This is precisely what is shown in the remainder of this paper. \section{Slow fading MIMO channels: from the general case to special cases} \label{sec:finite-MIMO} \subsection{General MIMO channels} In this section and the remainder of this paper, the frequency with which the channel matrix varies is the reciprocal of the block/codeword/frame/packet/time-slot duration; that is, the channel remains constant over a codeword and varies from block to block. As a consequence, when the channel matrix remains constant over a certain block duration much smaller than the channel coherence time, the averaging effect we have mentioned for fast fading MIMO channels does not occur here. Therefore, one has to communicate at rates smaller than the ergodic capacity (maximum of the EMI). The maximum EMI is therefore a rate upper bound for slow fading MIMO channels and only a fraction of it can be achieved (see \cite{zheng-allerton-2001} for more information about the famous diversity-multiplexing tradeoff). In fact, since the mutual information is a random variable, varying from block to block, it is not possible (in general) to guarantee that it stays above a certain threshold.
A suitable performance metric for studying slow-fading channels \cite{ozarow-vt-1994} is the probability of an outage for a given transmission rate target $R$. This metric quantifies the probability that the rate target $R$ is not reached even with a good channel coding scheme and is defined as follows: \begin{equation} \label{eq:def-outage-proba} \mathrm{P}_{\mathrm{out}}(\mathbf{Q},R)=\mathrm{Pr}\left[\log_2 \left|\mathbf{I}_{n_r}+ \rho\mathbf{H}\mathbf{Q}\mathbf{H}^H\right|<R\right]. \end{equation} In terms of information assumptions, here again, it can be checked that only the second-order statistics of $\mathbf{H}$ are required to optimize the precoding matrix $\mathbf{Q}$ (and therefore the power allocation policy over its eigenvalues). In this framework, we propose to define the energy-efficiency as follows: \begin{equation} \label{eq:gen_payoff} \Gamma(\mathbf{Q},R)= \frac{R [ 1- \mathrm{P}_{\mathrm{out}}(\mathbf{Q},R)]}{\mathrm{Tr}(\mathbf{Q})}. \end{equation} In other words, the energy-efficiency or goodput-to-power ratio is defined as the ratio between the expected throughput (see \cite{katz-it-2005},\cite{shamai-ciss-2000} for details) and the average consumed transmit power. The expected throughput can be seen as the average system throughput over many transmissions. In contrast with static and fast fading channels, energy-efficiency is not necessarily maximized at low transmit powers. This is what the following proposition indicates. \begin{proposition}[Slow fading MIMO channels] \label{proposition:slow_gen} \emph{The goodput-to-power ratio $\Gamma(\mathbf{Q},R)$ is maximized, in general, for $\mathbf{Q} \neq \mathbf{0}$.} \end{proposition} The proof of this result is given in Appendix \ref{appendix:B}. Now, a natural issue to be considered is the determination of the matrix (or matrices) maximizing the goodput-to-power ratio (GPR) in slow fading MIMO channels. It turns out that the corresponding optimization problem is not trivial.
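Before turning to this optimization problem, the interior maximum asserted by Proposition \ref{proposition:slow_gen} can be observed by direct Monte Carlo simulation of (\ref{eq:def-outage-proba}) and (\ref{eq:gen_payoff}). A minimal sketch for a $2\times 1$ channel with uniform power allocation (all parameter values are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
nt, nr, R, sigma2 = 2, 1, 1.0, 1.0


def gpr(p, trials=200_000):
    """Monte Carlo estimate of the goodput-to-power ratio for Q = (p / nt) I."""
    H = (rng.standard_normal((trials, nr, nt))
         + 1j * rng.standard_normal((trials, nr, nt))) / np.sqrt(2)
    # for nr = 1 the mutual information reduces to log2(1 + (p/nt) ||h||^2 / sigma^2)
    mi = np.log2(1.0 + (p / nt) * np.sum(np.abs(H) ** 2, axis=(1, 2)) / sigma2)
    p_out = np.mean(mi < R)
    return R * (1.0 - p_out) / p


# small at both extremes, peaks at an intermediate power
print(gpr(0.2), gpr(1.24), gpr(20.0))
```

At very low power the outage probability tends to one faster than $1/p$ diverges, and at very high power the goodput saturates while the denominator keeps growing, so the GPR peaks at an interior power level.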
Indeed, even the outage probability minimization problem w.r.t. $\mathbf{Q}$ (which is a priori simpler) is still an open problem \cite{telatar-ett-1999}, \cite{katz-wc-2007}, \cite{jorswieck-ett-2007}. This is why we only provide here a conjecture on the solution maximizing the GPR. \begin{conjecture}[Optimal precoding matrices] \label{conjecture:GPR_MIMO} \emph{There exists a power threshold $\overline{P}_0$ such that:} \begin{itemize} \item \emph{if $\overline{P} \leq \overline{P}_0$ then $\mathbf{Q}^* \in \displaystyle{ \arg \min_{\mathbf{Q}}} P_{\mathrm{out}}(\mathbf{Q},R)$ $ \ \Rightarrow \ $ $\mathbf{Q}^* \in \displaystyle{ \arg \max_{\mathbf{Q}} } \Gamma(\mathbf{Q},R)$;} \item \emph{if $\overline{P} > \overline{P}_0$ then $\Gamma(\mathbf{Q},R)$ has a unique maximum at $\mathbf{Q}^* = \frac{p^*}{n_t} \mathbf{I}_{n_t}$ where $p^* \leq \overline{P}$.} \end{itemize} \end{conjecture} This conjecture has been validated for all the special cases solved in this paper. One of the main messages of this conjecture is that, if the available transmit power is less than a threshold, maximizing the GPR is equivalent to minimizing the outage probability. If it is above the threshold, uniform power allocation is optimal and using all the available power is generally suboptimal in terms of energy-efficiency. Concerning the optimization problem associated with (\ref{eq:gen_payoff}), several comments are in order. First, there is no loss of optimality in restricting the search for optimal precoding matrices to diagonal matrices: for any eigenvalue decomposition $\mathbf{Q} = \mathbf{U} \mathbf{D} \mathbf{U}^H$ with $\mathbf{U}$ unitary and $\mathbf{D} = \mathrm{\textbf{Diag}}(\underline{p})$ with $\underline{p}= (p_1, \hdots, p_{n_t})$, both the outage and trace are invariant w.r.t.
the choice of $\mathbf{U}$ and the energy-efficiency can be written as: \begin{equation} \label{eq:diag_gen_payoff} \Gamma(\mathbf{D},R)= \frac{R [ 1- \mathrm{P}_{\mathrm{out}}(\mathbf{D},R)]}{\displaystyle{\sum_{i=1}^{n_t}} p_i}. \end{equation} Second, the GPR is generally not concave w.r.t. $\mathbf{D}$. In Sec. \ref{sec:sub_MISO}, which is dedicated to MISO systems, a counter-example where it is not quasi-concave (and thus not concave) is provided. \emph{Uniform Power Allocation policy.} An interesting special case is the one of uniform power allocation (UPA): $\mathbf{D} = \frac{p}{n_t} \mathbf{I}_{n_t}$ where $p \in [0, \overline{P}]$ and $\Gamma_{\mathrm{UPA}}(p,R)\triangleq \Gamma\left(\frac{p}{n_t}\mathbf{I}_{n_t},R\right)$. One of the reasons for studying this case is the famous conjecture of Telatar given in \cite{telatar-ett-1999}. This conjecture states that, depending on the channel parameters and target rate (i.e., $\sigma^2$, $R$), the power allocation (PA) policy minimizing the outage probability is to spread all the available power uniformly over a subset of $\ell^* \in \{1, \hdots, n_t\}$ antennas. If this can be proved, then it is straightforward to show that the covariance matrix $\mathbf{D}^*$ that maximizes the proposed energy-efficiency function is $\frac{p^*}{\ell^*} \mathrm{\textbf{Diag}}(\underline{e}_{\ell^*})$, where $\underline{e}_{\ell^*} \in \mathcal{S}_{\ell^*}$\footnote{We denote by $\mathcal{S}_{\ell} = \left\{\underline{v} \in \{0,1\}^{n_t} | \sum_{i=1}^{n_t} v_i = \ell \right\}$ the set of $n_t$ dimensional vectors containing $\ell$ ones and $n_t - \ell$ zeros, for all $\ell \in \{1,\hdots,n_t\}$.}. Thus, $\mathbf{D}^*$ has the same structure as the covariance matrix minimizing the outage probability except that using all the available power is not necessarily optimal, $p^* \in [0, \overline{P}]$. In conclusion, solving Conjecture \ref{conjecture:GPR_MIMO} reduces to solving Telatar's conjecture and also the UPA case.
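The unitary invariance used above (outage and trace unchanged when $\mathbf{U}\mathbf{D}\mathbf{U}^H$ replaces $\mathbf{D}$) follows from the fact that $\mathbf{H}\mathbf{U}$ has the same distribution as $\mathbf{H}$ for i.i.d. Gaussian entries; it can be checked by simulation (a sketch with arbitrary parameters):

```python
import numpy as np

rng = np.random.default_rng(1)
nt, nr, R, sigma2 = 2, 2, 1.0, 1.0


def outage(Q, trials=100_000):
    """Monte Carlo estimate of P_out(Q, R)."""
    H = (rng.standard_normal((trials, nr, nt))
         + 1j * rng.standard_normal((trials, nr, nt))) / np.sqrt(2)
    Hh = np.swapaxes(H, 1, 2).conj()
    M = np.eye(nr) + (H @ Q @ Hh) / sigma2
    mi = np.log2(np.linalg.det(M).real)
    return np.mean(mi < R)


D = np.diag([1.5, 0.5])
# random unitary matrix via QR of a complex Gaussian matrix
U, _ = np.linalg.qr(rng.standard_normal((nt, nt))
                    + 1j * rng.standard_normal((nt, nt)))
print(outage(D), outage(U @ D @ U.conj().T))
```

Up to Monte Carlo noise, the two estimated outage probabilities coincide, so the search can indeed be restricted to diagonal $\mathbf{D}$.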
The main difficulty in studying the outage probability and/or the energy-efficiency function is the fact that the probability density function of the mutual information is generally intractable. In the literature, the outage probability is often studied by assuming a UPA policy over all the antennas and also using the Gaussian approximation of the p.d.f. of the mutual information. This approximation is valid in the asymptotic regime of a large number of antennas. However, simulations show that it is also quite accurate for reasonably small MIMO systems \cite{wang-it-2004}, \cite{moustakas-it-2003}. Under the UPA policy assumption, the GPR $\Gamma_{\mathrm{UPA}}(p,R)$ is conjectured to be quasi-concave w.r.t. $p$. Quasi-concavity is not only useful to study the maximum of the GPR but is also an attractive property in some scenarios such as distributed multiuser channels. For example, by considering MIMO multiple access channels with single-user decoding at the receiver, the corresponding distributed power allocation game where the transmitters' utility functions are their GPR is guaranteed to have a pure Nash equilibrium by the Debreu-Fan-Glicksberg theorem \cite{fundenberg-book-1991}. Before stating the conjecture describing the behavior of the energy-efficiency function when the UPA policy is assumed, we study the limits when $p \rightarrow 0$ and $p \rightarrow + \infty.$ First, let us prove that $\displaystyle{\lim_{p \rightarrow 0} \Gamma_{\mathrm{UPA}}(p,R) = 0}$. Observe that $\displaystyle{\lim_{p \rightarrow 0} P_{\mathrm{out}}\left(\frac{p}{n_t}\mathbf{I}_{n_t},R\right) = 1}$ and thus the limit is not trivial to prove. The result can be proven by considering the equivalent $1+\frac{\rho p}{n_t} \mathrm{Tr}(\mathbf{H}\mathbf{H}^H)$ of the determinant $\left|\mathbf{I}_{n_r}+\frac{\rho p}{n_t} \mathbf{H}\mathbf{H}^H \right|$ when $p \rightarrow 0$. As the entries of the matrix $\mathbf{H}$ are i.i.d.
complex Gaussian random variables, the quantity $\mathrm{Tr}(\mathbf{H}\mathbf{H}^H)= \displaystyle{\sum_{i=1}^{n_t} \sum_{j=1}^{n_r}} |h_{ij}|^2$ is a Chi-square distributed random variable with $2 n_r n_t$ degrees of freedom. Thus $\Gamma_{\mathrm{UPA}}(p,R)$ can be approximated by: $\widehat{\Gamma}_{\mathrm{UPA}}(p,R)=R \exp \left(-\frac{d}{p}\right) \displaystyle{ \sum_{k=0}^{n_r n_t -1} } \frac{d^k}{k!} \frac{1}{p^{k+1}}$ with $d = n_t (2^R-1)\sigma^2$. It is easy to see that this approximation tends to zero when $p \rightarrow 0$. Second, note that $\displaystyle{\lim_{p \rightarrow + \infty} \Gamma_{\mathrm{UPA}}(p, R) = 0}$. This is easier to check since $\displaystyle{\lim_{p \rightarrow +\infty} P_{\mathrm{out}}\left(\frac{p}{n_t}\mathbf{I},R\right) = 0}$. \begin{conjecture}[UPA and quasi-concavity of the GPR] \label{conjecture:MIMO_UPA} \emph{Assume that $\mathbf{D}=\frac{p}{n_t}\mathbf{I}_{n_t}$. Then $\Gamma_{\mathrm{UPA}}(p,R)$ is quasi-concave w.r.t. $p \in \left[0, \overline{P}\right]$.} \end{conjecture} Table \ref{table_results} distinguishes between what has been proven in this paper and the conjectures which remain to be proven. \begin{table} \begin{center} \begin{tabular}{|c||c|c|c|} \hline & Is $\mathbf{D}^*$ known? & Is $\Gamma^{\mathrm{UPA}}(p)$ quasi-concave? & Is $p^*$ known? \\ \hline \hline General MIMO & Conjecture & Conjecture & Conjecture \\ \hline MISO & Yes & Yes & Yes \\ \hline $1 \times 1$ & Yes & Yes & Yes \\ \hline Large MIMO & Conjecture & Yes & Yes \\ \hline Low SNR & Yes & Yes & Yes \\ \hline High SNR & Yes & Yes & Conjecture \\ \hline \end{tabular} \caption{Summary of proved results and open problems} \label{table_results} \end{center} \end{table} \subsection{MISO channels} \label{sec:sub_MISO} In this section, the receiver is assumed to use a single antenna, that is, $n_r =1$, while the transmitter can have an arbitrary number of antennas, $n_t \geq 1$. The channel transfer matrix becomes a row vector $\underline{h} = (h_1,...,h_{n_t})$.
Without loss of optimality, the precoding matrix is assumed to be diagonal and is denoted by $\mathbf{D} = \mathrm{\textbf{Diag}}(\underline{p})$ with $\underline{p}^T= (p_1,...,p_{n_t})$. Throughout this section, the rate target $R$ and noise level $\sigma^2$ are fixed and the auxiliary quantity $c$ is defined by: $c = \sigma^2 (2^R -1)$. By exploiting the existing results on the outage probability minimization problem for MISO channels \cite{jorswieck-ett-2007}, the following proposition can be proved (Appendix \ref{appendix:C}). \begin{proposition}[Optimum precoding matrices for MISO channels] \label{proposition:MISO} \emph{For all $\ell \in \{1,...,n_t-1\}$, let $c_{\ell}$ be the unique solution of the equation (in $x$) $\mathrm{Pr}\left[\frac{1}{\ell+1} \displaystyle{\sum_{i=1}^{\ell+1}} |X_i|^2 \leq x \right] - \mathrm{Pr}\left[\frac{1}{\ell} \displaystyle{\sum_{i=1}^{\ell}} |X_i|^2 \leq x \right] = 0$ where $X_i$ are i.i.d. zero-mean Gaussian random variables with unit variance. By convention $c_0 = + \infty$, $c_{n_t} = 0$. Let $\nu_{n_t}$ be the unique solution of the equation (in $y$) $\frac{y^{n_t}}{(n_t-1)!} - \displaystyle{\sum_{i=0}^{n_t-1}} \frac{y^i}{i!} =0$. Then the optimum precoding matrices have the following form:} \begin{equation} \mathbf{D}^* = \left\{ \begin{array}{cl} \frac{\overline{P}}{\ell} \mathrm{\textbf{Diag}}(\underline{e}_{\ell}) & \ \mathrm{if} \ \overline{P} \in \left[\frac{c}{c_{\ell-1}}, \frac{c}{c_{\ell}} \right) \\ \min\left\{\frac{\sigma^2 (2^R-1) }{\nu_{n_t}}, \frac{\overline{P}}{n_t} \right\} \mathbf{I} & \ \mathrm{if} \ \overline{P} \geq \frac{c}{c_{n_t-1}} \end{array} \right. \end{equation} \emph{where $c = \sigma^2 (2^R-1) $ and $\underline{e}_{\ell} \in \mathcal{S}_{\ell}$.} \end{proposition} Similarly to the optimal precoding scheme for the outage probability minimization, the solution maximizing the GPR consists in allocating the available transmit power uniformly over a subset of $\ell \leq n_t$ antennas.
As i.i.d. entries are assumed for $\mathbf{H}$, the choice of these antennas does not matter. What matters is the number of antennas selected (denoted by $\ell$), which depends on the available transmit power $\overline{P}$: the higher the transmit power, the higher the number of used antennas. The difference between the outage probability minimization and GPR maximization problems appears when the transmit power is greater than the threshold $\frac{c}{c_{n_t-1}}$. In this regime, saturating the power constraint is suboptimal for the GPR optimization. The corresponding sub-optimality becomes more and more severe as the noise level decreases; simulations (Sec. \ref{sec:simus}) will help us to quantify this gap. Unless otherwise specified, we will assume from now on that \textbf{UPA} is used at the transmitter. This assumption is, in particular, useful to study the regime where the available transmit power is sufficiently high (in line with Proposition \ref{proposition:slow_gen}). Under this assumption, our goal is to prove that the GPR is quasi-concave w.r.t. $p \in [0, \overline{P}]$ with $\mathbf{D} = \frac{p}{n_t} \mathbf{I}_{n_t}$ and determine the (unique) solution $p^*$ which maximizes the GPR. Note that the quasi-concavity property w.r.t. $\underline{p}$ is not always available for MISO systems (and thus is not always available for general MIMO channels). In Appendix \ref{appendix:D}, we provide a counter-example showing that, in the case where $n_r=1$ and $n_t=2$ (two input single output channel, TISO), the energy-efficiency $\Gamma^{\mathrm{TISO}}\left(\textbf{Diag}(\underline{p}), R\right)$ is not quasi-concave w.r.t. $\underline{p}=(p_1,p_2)$. \begin{proposition}[UPA and quasi-concavity (MISO channels)] \label{proposition:MISO_UPA} Assume UPA: $\mathbf{Q}=\frac{p}{n_t}\mathbf{I}_{n_t}$. Then $\Gamma(p,R)$ is quasi-concave w.r.t.
$p \in \left[0, \overline{P}\right]$ and has a unique maximum point at $p^* = \min \left\{\frac{(2^R-1)n_t \sigma^2}{\nu_{n_t}}, \overline{P} \right\}$ where $\nu_{n_t}$ is the solution (w.r.t. $y$) of: \begin{equation} \label{eq:y} \frac{y^{n_t}}{(n_t-1)!} - \displaystyle{\sum_{i=0}^{n_t-1}} \frac{y^i}{i!} = 0. \end{equation} \end{proposition} \begin{proof} Since the entries of $\underline{h}$ are complex Gaussian random variables, the sum $\displaystyle{\sum_{k=1}^{n_t}} |h_k|^2$ is a Chi-square random variable with $2 n_t$ degrees of freedom, which implies that: \begin{equation} \begin{array}{lcl} \Gamma^{\mathrm{MISO}}(p,R) & = & \displaystyle{\frac{R \left\{1 - \mathrm{Pr}[\log_2\left(1+\frac{\rho p}{n_t} \underline{h}^H\underline{h}\right) < R] \right\} }{p}}\\ & = & \displaystyle{\frac{R \left\{1 - \mathrm{Pr} \left[ \displaystyle{\sum_{i=1}^{n_t}} |h_i|^2 < \frac{d}{p}\right]\right\}}{p }} \\ & = & \displaystyle{R \times \mathrm{e}^{-\frac{d}{p}} \sum_{i=0}^{n_t-1} \frac{d^i}{p^{i+1}} \frac{1}{i!}}, \end{array} \end{equation} with $d =c n_t = (2^R-1)n_t \sigma^2$. The second order derivative of the goodput $R \left[\mathrm{e}^{-\frac{d }{p}} \displaystyle{\sum_{i=0}^{n_t-1} } \left(\frac{d}{p}\right)^i \frac{1}{i!}\right]$ w.r.t. $p$ is \newline $R \left[\frac{d^{n_t}}{p^{n_t+3}} \frac{1}{n_t!} \mathrm{e}^{-d/p} (d- (n_t+1)p)\right]$. Clearly, the goodput is a sigmoidal function and has a unique inflection point at $p_0 = \frac{d}{n_t+1}$. Therefore, the function $\Gamma^{\mathrm{MISO}}(p,R)$ is quasi-concave \cite{rodriguez-globecom-2003} and has a unique maximum at $p^* = \min \left\{\frac{d}{\nu_{n_t}}, \overline{P} \right\}$ where $\nu_{n_t}$ is the root of the first order derivative of $\Gamma^{\mathrm{MISO}}(p,R)$, that is, the solution of (\ref{eq:y}). \end{proof} The \textbf{SIMO} case ($n_t=1$, $n_r \geq 2$) follows directly since $|\mathbf{I}+ \rho p \underline{h}\underline{h}^H| = 1+ \rho p \underline{h}^H\underline{h}$.
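As a numerical illustration of the proposition (not part of the original proof), $\nu_{n_t}$ can be obtained by bisection and plugged into the expression of $p^*$; the sketch below uses standard-library Python only, and the helper names are ours. For $n_t=1$ the defining equation reduces to $y-1=0$, so $\nu_1=1$ and the MISO solution degenerates to the SISO one.

```python
import math

def nu(n):
    """Unique positive root of y^n/(n-1)! - sum_{i=0}^{n-1} y^i/i! = 0 (bisection)."""
    f = lambda y: y ** n / math.factorial(n - 1) - sum(y ** i / math.factorial(i) for i in range(n))
    lo, hi = 1e-9, 10.0 * n    # f(lo) < 0 and f(hi) > 0 for the ranges used here
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) >= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def p_star_miso(R, sigma2, n_t, P_bar):
    """Optimal UPA power of the proposition: min{(2^R - 1) n_t sigma^2 / nu_{n_t}, P_bar}."""
    return min((2.0 ** R - 1.0) * n_t * sigma2 / nu(n_t), P_bar)
```

One can also check that $\nu_2$ solves $y^2 - y - 1 = 0$, i.e., $\nu_2$ is the golden ratio.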
To conclude this section, we consider the simplest MISO channel, namely the SISO case ($n_t=1$, $n_r=1$). It readily follows that: \begin{equation} \label{eq:siso-GPR} \Gamma^{\mathrm{SISO}}(p,R) = \frac{R \, e^{-\frac{c}{p}}}{p}. \end{equation} To the authors' knowledge, in all the works using the energy-efficiency definition of \cite{goodman-pcom-2000} for SISO channels, the only choice of energy-efficiency function is based on an empirical approximation of the block error rate, namely $\frac{(1- e^{-x})^M}{x}$, where $M$ is the block length and $x$ the operating SINR. Interestingly, the function given by (\ref{eq:siso-GPR}) exhibits another possible choice. It can be checked that the function $e^{-\frac{c}{p}}$ is sigmoidal and therefore $\Gamma^{\mathrm{SISO}}$ is quasi-concave w.r.t. $p$ \cite{rodriguez-globecom-2003}. The first order derivative of $\Gamma^{\mathrm{SISO}}$ is \begin{equation} \frac{\partial \Gamma^{\mathrm{SISO}}}{\partial p} = R \frac{(c-p) e^{-\frac{c}{p}}}{p^3}. \end{equation} The GPR is therefore maximized at a unique point, $p^* = c = \sigma^2 (2^R-1)$. To bridge this solution with the one derived in \cite{goodman-pcom-2000} for the power control problem over multiple access channels, the optimal power level can be rewritten as: \begin{equation} \label{eq:pstar_SISO} p^* = \min \left\{\frac{\sigma^2}{\mathbb{E}|h|^2} (2^R-1), \overline{P} \right\} \end{equation} where $\mathbb{E}|h|^2=1$ in our case. In \cite{goodman-pcom-2000}, instantaneous CSI knowledge at the transmitters is assumed, while here only the statistics are assumed to be known at the transmitter. Therefore, the power control interpretation of (\ref{eq:pstar_SISO}) in a wireless scenario is that the power is adapted to the path loss (slow power control) and not to fast fading (fast power control).
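The maximizer $p^* = c$ can be double-checked numerically by a simple grid search on the SISO GPR; the sketch below assumes the example values $R=1$ bpcu and $\sigma^2=1$.

```python
import math

R, sigma2 = 1.0, 1.0                  # assumed example values
c = sigma2 * (2.0 ** R - 1.0)         # = 1.0

def gamma_siso(p):
    """GPR of the slow-fading SISO channel: R * exp(-c/p) / p."""
    return R * math.exp(-c / p) / p

# Grid search over (0, 5] W; the analytical maximizer is p* = c.
grid = [0.001 * k for k in range(1, 5001)]
p_best = max(grid, key=gamma_siso)
```

The grid maximizer agrees with the closed-form $p^* = \sigma^2(2^R-1)$ up to the grid resolution.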
\section{Slow fading MIMO channels in asymptotic regimes } \label{sec:asymptotic-MIMO} In this section, we first consider the GPR for finite-size MIMO systems operating in the low/high SNR regime. Then, we consider the UPA policy and prove that Conjecture \ref{conjecture:MIMO_UPA}, claiming that $\Gamma_{\mathrm{UPA}}(p,R)$ is quasi-concave w.r.t. $p$ (which has been proven for MISO, SIMO, and SISO channels), is also valid in the asymptotic regimes where at least one dimension of the system ($n_t$, $n_r$) is large while the SNR remains finite. Here again, the theory of large random matrices is successfully applied since it allows one to prove some results which are not yet available in the finite case (see e.g., \cite{tse-it-2003}, \cite{dumont-arxiv-2007} for other successful examples). \subsection{Extreme SNR regimes} Here, all the channel parameters ($n_t$, $n_r$, and $\overline{P}$ in particular) are fixed. The low (resp. high) SNR regime is defined by $\sigma^2 \rightarrow + \infty$ (resp. $\sigma^2 \rightarrow 0$). In both cases, we will consider the GPR and the optimal power allocation problem. \subsubsection{Low SNR regime} Let us consider the general power allocation problem where $\mathbf{D}=\mathrm{\textbf{Diag}}(\underline{p})$ with $\underline{p}=(p_1,\hdots,p_{n_t})$. In \cite{jorswieck-ett-2007}, the authors extended the results obtained in the low and high SNR regimes for the MISO channel to the MIMO case. In the low SNR regime, the authors of \cite{jorswieck-ett-2007} proved that the outage probability $P_{\mathrm{out}}(\mathrm{\textbf{Diag}}(\underline{p}),R)$ is a Schur-concave (see \cite{marshall-book-1979} for details) function w.r.t. $\underline{p}$. This directly implies that the beamforming power allocation policy minimizes the outage probability.
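These extreme-SNR behaviors can be made concrete for a two-antenna MISO channel, where both candidate policies admit closed-form success probabilities: $e^{-c/P}$ for beamforming ($|h_1|^2$ being exponential) and $e^{-2c/P}(1+2c/P)$ for UPA ($|h_1|^2+|h_2|^2$ being Erlang of order $2$), with $c = \sigma^2(2^R-1)$. A minimal sketch in our own notation (the function names and example values of $c$ are ours):

```python
import math

def succ_bf(c, P):
    """Beamforming success probability for n_t = 2 MISO: Pr[|h_1|^2 >= c/P]."""
    return math.exp(-c / P)

def succ_upa(c, P):
    """UPA success probability for n_t = 2 MISO: Pr[|h_1|^2 + |h_2|^2 >= 2c/P]."""
    x = 2.0 * c / P
    return math.exp(-x) * (1.0 + x)

P = 1.0
low_snr_c, high_snr_c = 50.0, 0.01   # c = sigma^2 (2^R - 1): large at low SNR, small at high SNR
```

Large $c$ (low SNR) favors beamforming while small $c$ (high SNR) favors UPA, in line with the Schur-concavity/Schur-convexity results.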
These results can be used (see Appendix \ref{appendix:E}) to prove the following proposition: \begin{proposition}[Low SNR regime] \label{proposition:low-snr} \emph{When $\sigma^2 \rightarrow + \infty$, the energy-efficiency function} $\Gamma(\mathrm{\textbf{Diag}}(\underline{p}), R)$ \emph{is Schur-concave w.r.t. $\underline{p}$ and maximized by a beamforming power allocation policy} $\mathbf{D}^* = \overline{P} \mathrm{\textbf{Diag}} (\underline{e}_1)$. \end{proposition} \subsubsection{High SNR regime} Now, let us consider the high SNR regime. It turns out that the UPA policy maximizes the energy-efficiency function. In this case also, the proof of the following proposition is based on the results in \cite{jorswieck-ett-2007} (see Appendix \ref{appendix:E}). \begin{proposition}[High SNR regime] \label{proposition:high-snr}\emph{ When $\sigma^2 \rightarrow 0$, the energy-efficiency function} $\Gamma(\mathrm{\textbf{Diag}}(\underline{p}), R)$ \emph{is Schur-convex w.r.t. $\underline{p}$ and maximized by a uniform power allocation policy $\mathbf{D}^*=\frac{p^*}{n_t}\mathbf{I}_{n_t}$ with $p^* \in (0, \overline{P}]$. Furthermore, when $p \rightarrow 0$ such that $\frac{p}{\sigma^2} \rightarrow \xi$, we have $\Gamma\left( \frac{p}{n_t}\mathbf{I}_{n_t}, R \right) \rightarrow + \infty$, which implies that $p^* \rightarrow 0$.} \end{proposition} In other words, in the high SNR regime, the optimal structure of the covariance matrix is obtained by uniformly spreading the power over all the antennas, $\mathbf{D}^*=\frac{p^*}{n_t}\mathbf{I}_{n_t}$, which is the same structure that minimizes the outage probability in this case. Nevertheless, in contrast to the outage probability optimization problem, in order to be energy-efficient it is not optimal to use all the available power $\overline{P}$; rather, the transmit power should be made arbitrarily small. \subsection{Large MIMO channels} \label{subsec:asym_MIMO} The results we have obtained can be summarized in the following proposition.
\begin{proposition}[Quasi-concavity for large MIMO systems] \label{proposition:asymptotics} \emph{If the system operates in one of the following asymptotic regimes:} \begin{description} \item{(a)} $n_t < + \infty$ and $n_r \rightarrow +\infty$; \item{(b)} $n_t \rightarrow +\infty$ and $n_r < + \infty$; \item{(c)} $n_t \rightarrow +\infty$, $n_r \rightarrow +\infty$ with $\displaystyle{\lim_{n_i \rightarrow +\infty, i \in \{t,r\}} \frac{n_r}{n_t} = \beta < + \infty}$, \end{description} \emph{then $\Gamma_{\mathrm{UPA}}(p, R)$ is quasi-concave w.r.t. $p \in [0, \overline{P}]$.} \end{proposition} \begin{proof} Here we prove each of the three statements above, commenting on each of them along the way. \emph{Regime (a): $n_t < + \infty$ and $n_r \rightarrow +\infty$.} The idea of the proof is to consider a large system equivalent of the function $\Gamma_{\mathrm{UPA}}(p,R)$. This equivalent is denoted by $\widehat{\Gamma}^{\mathrm{a}}_{\mathrm{UPA}}(p,R)$ and is based on the Gaussian approximation of the mutual information $\log_2\left|\mathbf{I}+\frac{\rho p}{n_t} \mathbf{H}\mathbf{H}^H\right|$ (see e.g., \cite{hochwald-it-2004}). The goal is to prove that the numerator of $\widehat{\Gamma}^{\mathrm{a}}_{\mathrm{UPA}}(p,R)$ is a sigmoidal function w.r.t. $p$, which implies that $\widehat{\Gamma}^{\mathrm{a}}_{\mathrm{UPA}}(p,R)$ is a quasi-concave function \cite{rodriguez-globecom-2003}. In the considered asymptotic regime, we know from \cite{hochwald-it-2004} that: \begin{equation} \log_2\left|\mathbf{I}+\frac{\rho p}{n_t} \mathbf{H}\mathbf{H}^H\right| \rightarrow \mathcal{N}\left(n_t \log_2\left(1+\frac{n_r}{n_t}\rho p \right), \frac{n_t}{n_r} \log_2(e)\right).
\end{equation} A large system equivalent of the numerator of $\Gamma_{\mathrm{UPA}}(p,R)$, which is denoted by $\widehat{N}_a(p,R)$, follows: \begin{equation} \label{eq:f-large-mimo-a} \widehat{N}_a(p,R) =R Q\left(\frac{R-n_t \log_2\left(1+\frac{n_r}{n_t}\rho p \right)}{\sqrt{\frac{n_t}{n_r}\log_2(e)}}\right) \end{equation} where $Q(x)= \frac{1}{\sqrt{2 \pi }}\int_{x}^{+\infty} \exp \left(-\frac{t^2}{2}\right) \mathrm{d}t$. Denote the argument of $Q$ in (\ref{eq:f-large-mimo-a}) by $\alpha_a$. The second order derivative of $\widehat{N}_a(p,R)$ w.r.t. $p$ is \begin{equation} \label{eq:cp_asym_secd} \frac{\partial^2 \widehat{N}_a(p,R)}{\partial p^2} = \frac{R}{\sqrt{2 \pi}}\left[\alpha_a(p)(\alpha_a'(p))^2-\alpha_a''(p)\right]\exp\left(-\frac{\alpha_a(p)^2}{2}\right). \end{equation} Therefore $\widehat{N}_a(p,R)$ has a unique inflection point \begin{equation} \tilde{p}_a=\frac{n_t}{n_r\rho}\left\{2^{\left[\frac{1}{n_t}\left(R-\frac{1}{n_t}\left(\frac{n_t \log_2(e)}{n_r}\right)^{3/2}\right)\right]} -1 \right\}. \end{equation} The numerator of the large system equivalent of $\Gamma_{\mathrm{UPA}}(p,R)$ thus has a unique inflection point and is sigmoidal, which concludes the proof for this regime. In fact, in the considered asymptotic regime we have a stronger result since $\displaystyle{\lim_{n_r \rightarrow + \infty}}\tilde{p}_a = 0$, which implies that $\widehat{N}_a(p,R)$ is concave and therefore $\widehat{\Gamma}^{\mathrm{a}}_{\mathrm{UPA}}(p,R)$ is maximized at $p_a^* =0$, as in the case of static MIMO channels. This reflects the well-known channel hardening effect \cite{hochwald-it-2004}. However, in contrast to the static case, the energy-efficiency becomes infinite here since $\widehat{\Gamma}^{\mathrm{a}}_{\mathrm{UPA}}(p,R) \sim \frac{R}{p}$ as $p \rightarrow 0$. \emph{Regime (b): $n_t \rightarrow + \infty$ and $n_r < + \infty$.} To prove the corresponding result the same reasoning as in (a) is applied.
From \cite{hochwald-it-2004} we know that: \begin{equation} \log_2\left|\mathbf{I}+\frac{\rho p}{n_t} \mathbf{H}\mathbf{H}^H\right| \rightarrow \mathcal{N}\left(n_r \log_2(1+\rho p) , \left(\sqrt{\frac{n_r}{n_t}} \log_2(e)\frac{\rho p}{1+\rho p}\right)^2\right). \end{equation} A large system equivalent of the numerator of $\Gamma_{\mathrm{UPA}}(p,R)$ is $\widehat{N}_b(p,R) = R Q\left(\alpha_b(p) \right)$ with \begin{equation} \alpha_b(p)= \sqrt{\frac{n_t}{n_r}}\log_2(e)\frac{1+\rho p}{\rho p}[R-n_r\log_2(1+\rho p)]. \end{equation} The numerator function $\widehat{N}_b(p,R)$ can be checked to have a unique inflection point given by: \begin{equation} \tilde{p}_b = \sigma^2 \left(2^{\frac{R}{n_r}} -1\right) \end{equation} and is sigmoidal, which concludes the proof. We see that the inflection point does not vanish this time (as $n_t$ grows) and therefore the function $\widehat{N}_b(p,R)$ is quasi-concave but not concave in general. From \cite{rodriguez-globecom-2003}, we know that the optimal solution $p^*_b$ represents the point where the tangent that passes through the origin intersects the S-shaped function $R Q\left(\alpha_b(p) \right)$. As $n_t$ grows large, the function $Q\left(\alpha_b(p) \right)$ becomes a Heaviside step function since $\forall p < \tilde{p}_b$, $\lim_{n_t\rightarrow + \infty} Q\left(\alpha_b(p) \right) = 0$ and $\forall p > \tilde{p}_b$, $\lim_{n_t\rightarrow + \infty} Q\left(\alpha_b(p) \right) = 1$. This means that the optimal power $p^*_b$ that maximizes the energy-efficiency approaches $\tilde{p}_b$ as $n_t$ grows large, $p^*_b \rightarrow \sigma^2 \left(2^{\frac{R}{n_r}} -1\right)$. The optimal energy-efficiency tends to $\frac{\widehat{N}_b(p^*_b,R)}{p^*_b} \rightarrow \frac{R}{2 \sigma^2 \left(2^{\frac{R}{n_r}} -1\right)}$ when $n_t \rightarrow + \infty$.
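The step-function behavior of $Q(\alpha_b(p))$ around $\tilde{p}_b$ can be observed numerically; the sketch below (our helper names, $Q$ implemented via the complementary error function) assumes $\rho = 1/\sigma^2$ and the parameters used later in the simulation section ($R=1$ bpcu, $n_r=2$, $\rho=10$ dB).

```python
import math

def Q(x):
    """Gaussian tail function, Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def alpha_b(p, R, n_t, n_r, rho):
    """Argument of the Q-function in regime (b)."""
    return (math.sqrt(n_t / n_r) * math.log2(math.e)
            * (1.0 + rho * p) / (rho * p) * (R - n_r * math.log2(1.0 + rho * p)))

R, n_r, rho = 1.0, 2, 10.0
sigma2 = 1.0 / rho                                # assumed SNR normalization
p_tilde_b = sigma2 * (2.0 ** (R / n_r) - 1.0)     # inflection point, ~0.0414 W
```

For a large $n_t$ (say $10^4$), $Q(\alpha_b(p))$ is close to $0$ below $\tilde{p}_b$ and close to $1$ above it.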
\emph{Regime (c): $n_t \rightarrow +\infty$, $n_r \rightarrow +\infty$.} Here again we apply the same reasoning, but exploit the results derived in \cite{debbah-it-2005}. From \cite{debbah-it-2005}, we have that: \begin{equation} \log_2\left|\mathbf{I}+\frac{\rho p}{n_t} \mathbf{H}\mathbf{H}^H\right| \rightarrow \mathcal{N}\left(n_t \mu_I, \sigma_I^2\right) \end{equation} where $ \mu_I=\beta \log_2(1+\rho p(1-\gamma)) - \gamma + \log_2(1+\rho p (\beta-\gamma)) $, $\sigma_I^2=-\log_2\left(1-\frac{\gamma^2}{\beta}\right)$, \newline $\gamma= \frac{1}{2}\left(1+\beta+\frac{1}{\rho p}- \sqrt{(1+\beta+\frac{1}{\rho p})^2-4\beta}\right)$. It can be checked that $(\alpha_c'(p))^2 \alpha_c(p)-\alpha_c''(p)=0$ has a unique solution where $\alpha_c(p)= \frac{R-n_t \mu_I(p)}{\sigma_I(p)}$. We obtain $\alpha_c'(p)= \frac{n_t \mu_I \sigma_I'- n_t \mu_I'\sigma_I-R \sigma_I'}{\sigma_I^2}$ and \newline $\alpha_c''(p)=\frac{(n_t \mu_I \sigma_I''-n_t \mu_I'' \sigma_I- R \sigma_I'')\sigma_I^2-2\sigma_I\sigma_I'(n_t \mu_I \sigma_I'- n_t \mu_I'\sigma_I-R \sigma_I')}{\sigma_I^4}$. We observe that, in the equation $(\alpha_c'(p))^2 \alpha_c(p)-\alpha_c''(p)=0$, there are terms in $n_t^3$, $n_t^2$, $n_t$ and terms that are constant w.r.t. $n_t$. When $n_t$ becomes sufficiently large, only the dominant terms need to be kept, which implies that the solution is given by $\mu_I(p)=0$. It can be shown that $\mu_{I}(0)=0$ and that $\mu_I$ is an increasing function w.r.t. $p$, which implies that the unique solution is $\tilde{p}_c=0$. Similarly to regime (a), we obtain the trivial solution $p_c^*=0$. \end{proof} \section{Numerical results} \label{sec:simus} In this section, we present several simulations that illustrate our analytical results and verify the two conjectures stated. Since closed-form expressions of the outage probability are not available in general, Monte Carlo simulations will be used.
The exception is the MISO channel for which the optimal energy-efficiency can be computed numerically (as we have seen in Sec. \ref{sec:sub_MISO}) without the need for Monte Carlo simulations. \emph{UPA, the quasi-concavity property and the large MIMO channels.} Let us consider the case of UPA. In Fig. \ref{fig1}, we plot the GPR $\Gamma_{\mathrm{UPA}}\left(p, R\right)$ as a function of the transmit power $p \in [0,\overline{P}]$ W for a MIMO channel where $n_r=n_t=n$ with $ n \in \{1,2,4,8\}$ and $\rho=10$ dB, $R = 1$ bpcu, $\overline{P}=1$ W. First, note that the energy-efficiency for UPA is a quasi-concave function w.r.t. $p$, illustrating Conjecture \ref{conjecture:MIMO_UPA}. Second, we observe that the optimal power $p^*$ maximizing the energy-efficiency function is decreasing and approaching zero as the number of antennas increases, and also that $\Gamma_{\mathrm{UPA}}\left(p^*,R\right)$ is increasing with $n$. In Fig. \ref{fig2}, the dependence of the optimal energy-efficiency on the number of antennas $n$ is depicted explicitly for the same scenario. These observations are in accordance with the asymptotic analysis in subsection \ref{subsec:asym_MIMO} for Regime (c). Similar simulation results were obtained for the case where $n_t$ is fixed and $n_r$ is increasing, thus illustrating the asymptotic analysis in subsection \ref{subsec:asym_MIMO} for Regime (a). In Fig. \ref{fig3}, we plot the energy-efficiency $\Gamma_{\mathrm{UPA}} \left(p, R \right)$ as a function of the transmit power $p \in [0,\overline{P}]$ W for a MIMO channel such that $n_r=2$, $n_t \in \{1,2,4,8\}$ and $\rho=10$ dB, $R = 1$ bpcu, $\overline{P}=1$ W. The difference w.r.t. the previous case is that the optimal power $p^*$ does not go to zero when $n_t$ increases.
This figure illustrates the results obtained for Regime (b) in section \ref{subsec:asym_MIMO} where the optimal power allocation $p^*_b \rightarrow \frac{2^{\frac{R}{n_r}}-1}{\rho}=0.0414$~W and the optimal energy-efficiency $\Gamma^*_{\mathrm{UPA}} \rightarrow \frac{R\rho}{2(2^{\frac{R}{n_r}}-1)}= 12.07$ bit/Joule when $n_t \rightarrow +\infty$. \emph{UPA and the finite MISO channel} In Fig. \ref{fig4}, we illustrate Proposition \ref{proposition:MISO} for $n_t=4$. We plot the cases where the transmitter uses UPA over a subset of $\ell \in \{1,2,3,4\}$ antennas for $\rho = 10$ dB, $R=3$ bpcu. We observe that: i) if $\overline{P}\leq \frac{c}{c_1}$ then the beamforming PA is optimal with $\mathbf{D}^* = \overline{P} \ \mathrm{\textbf{Diag}}(\underline{e}_1)$; ii) if $\overline{P} \in \left[\frac{c}{c_1}, \frac{c}{c_2}\right)$ then UPA over two antennas is optimal with $\mathbf{D}^* = \overline{P}/2 \ \mathrm{\textbf{Diag}}(\underline{e}_2)$; iii) if $\overline{P} \in \left[\frac{c}{c_2}, \frac{c}{c_3}\right)$ then UPA over three antennas is optimal with $\mathbf{D}^* = \overline{P}/3 \ \mathrm{\textbf{Diag}}(\underline{e}_3)$; iv) if $\overline{P} \geq \frac{c}{c_3}$ then UPA over all the antennas is optimal with $\mathbf{D}^* =\frac{1}{4} \min\left\{\frac{4c}{\nu_4}, \overline{P} \right\} \ \mathbf{I}_4$. The saturated regime illustrates the fact that it is not always optimal to use all the available power after a certain threshold. \emph{UPA and the finite MIMO channel} Fig. \ref{fig5} represents the success probability, $1 - P_{\mathrm{out}}(\mathbf{D},R)$, as a function of the power constraint $\overline{P}$ for $n_t=n_r=2$, $R=1$ bpcu, $\rho=3$ dB. Since the optimal PA that maximizes the success probability is unknown (unlike in the MISO case), we use Monte Carlo simulations and exhaustive search to compare the optimal PA with the UPA and the beamforming PA.
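The regime (b) limits quoted for Fig. \ref{fig3} can be reproduced directly (with the rate factor $R$ written explicitly; for $R=1$ bpcu it coincides with the quoted number):

```python
rho_dB = 10.0
rho = 10.0 ** (rho_dB / 10.0)    # linear SNR = 10
R, n_r = 1.0, 2

p_star = (2.0 ** (R / n_r) - 1.0) / rho                   # regime (b) limit power
gamma_star = R * rho / (2.0 * (2.0 ** (R / n_r) - 1.0))   # regime (b) limit efficiency
```

This gives $p^*_b \approx 0.0414$ W and $\Gamma^*_{\mathrm{UPA}} \approx 12.07$ bit/Joule, matching the values reported above.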
We observe that the result is in accordance with Telatar's conjecture. There exists a threshold $\delta= 0.16$ W such that if $\overline{P}\leq \delta$, the beamforming PA is optimal and otherwise the UPA is optimal. Of course, using all the available power is always optimal when maximizing the success probability. The objective is to check whether Conjecture \ref{conjecture:GPR_MIMO} is verified in this particular case. To this end, Fig. \ref{fig6} represents the energy-efficiency function for the same scenario. We observe that for the exact same threshold $\delta=0.16$ W, if $\overline{P}\leq \delta$ the beamforming PA using all the available power is optimal, and if $\overline{P} > \delta$ the UPA is optimal. Here, similarly to the MISO case, we observe a saturated regime, which means that after a certain point it is not optimal w.r.t. energy-efficiency to use up all the available transmit power. In conclusion, our conjecture has been verified in this simulation. Note that for the beamforming PA case we have explicit relations for both the outage probability and the energy-efficiency (it is easy to check that the MIMO channel with beamforming PA reduces to the SIMO case) and thus Monte Carlo simulations have not been used. \section{Conclusion} \label{sec:conclusion} In this paper, we propose a definition of an energy-efficiency metric which extends the work in \cite{verdu-it-1990} to static MIMO channels. Furthermore, our definition bridges the gap between the notion of capacity per unit cost \cite{verdu-it-1990} and the empirical approach of \cite{goodman-pcom-2000} in the case of slow fading channels. In static and fast fading channels, the energy-efficiency is maximized at low transmit power and the corresponding rates are also small. On the other hand, the case of slow fading channels is not trivial and exhibits several open problems.
It is conjectured that solving the (still open) problem of outage minimization is sufficient to solve the problem of determining energy-efficient precoding schemes. This conjecture is supported by several special cases, such as the MISO case and several asymptotic regimes. Many open problems are introduced by the proposed performance metric; here we just mention some of them: \begin{itemize} \item First of all, the conjecture of the optimal precoding schemes for general MIMO channels needs to be proven. \item The quasi-concavity of the goodput-to-power ratio when uniform power allocation is assumed remains to be proven in the finite setting. \item A more general channel model should be considered. We have considered i.i.d. channel matrices, but considering non-zero-mean matrices with arbitrary correlation profiles appears to be a challenging problem for the goodput-to-power ratio. \item The connection between the proposed metric and the diversity-multiplexing tradeoff at high SNR has not been explored. \item Only single-user channels have been considered. Clearly, multi-user MIMO channels such as multiple access or interference channels should be considered. \item The case of distributed multi-user channels becomes more and more important for applications (unlicensed bands, decentralized cellular networks, etc.). Only one result is mentioned in this paper: the existence of a pure Nash equilibrium in distributed MIMO multiple access channels assuming a uniform power allocation transmit policy. \end{itemize} \begin{figure} \begin{center} \includegraphics[angle=0,width=14cm]{efficiency_n_1.eps} \caption{\scriptsize{Energy-efficiency (GPR) vs. transmit power $p \in [0,1]$ W for MIMO channels where $n_r=n_t=n \in \{1,2,4,8\}$, UPA $\mathbf{D}=\frac{p}{n_t}\mathbf{I}_{n_t}$, $\rho=10$ dB, $R = 1$ bpcu. Observe that the energy-efficiency is a quasi-concave function w.r.t. $p$.
The optimal point $p^*$ is decreasing and $\Gamma_{\mathrm{UPA}}\left(p^*,R\right)$ is increasing with $n$.}} \label{fig1} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[angle=0,width=14cm]{efficiency_n_2.eps} \caption{\scriptsize{Energy-efficiency vs. the number of antennas $n$ for MIMO $n_r=n_t=n \in \{1,2,4,8\}$, UPA, $\mathbf{D}=\frac{p}{n_t}\mathbf{I}_{n_t}$, $\rho=10$ dB, $R = 1$ bpcu and $\overline{P}=1$ W. Observe that $\Gamma_{\mathrm{UPA}}\left(p^*,R\right)$ is increasing with $n$.}} \label{fig2} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[angle=0,width=14cm]{efficiency_nt_1.eps} \caption{\scriptsize{Energy-efficiency vs. transmit power $p \in [0,1]$ W for MIMO $n_r=2$, $n_t\in \{1,2,4,8\}$, UPA $\mathbf{D}=\frac{p}{n_t}\mathbf{I}_{n_t}$, $\rho=10$ dB, $R = 1$ bpcu. Observe that the energy-efficiency is a quasi-concave function w.r.t. $p$. The optimal point $p^*$ is not decreasing with $n_t$ but almost constant.}} \label{fig3} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[angle=0,width=14cm]{efficiency_miso_4x1.eps} \caption{\scriptsize{Optimal energy-efficiency vs. constraint power for MISO $n_t=4$, $n_r=1$, UPA over a subset of $\ell \in \{1,2,3,4\}$ antennas, $\rho = 10$ dB, $R=3$ bpcu. We illustrate the results of Proposition \ref{proposition:MISO}. If $\overline{P}\leq \frac{c}{c_1}$ is low enough, the beamforming PA with full power is optimal. If $\overline{P} \geq \frac{c}{c_3}$ is high enough, the UPA is optimal but not necessarily with full power $\left(p^*=\min \{\frac{4c}{\nu_4}, \overline{P}\} \right)$, which explains the saturated regime. }} \label{fig4} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[angle=0,width=12cm]{psucc_mimo_2x2.eps} \caption{\scriptsize{Success probability vs. power constraint $\overline{P}$, comparison between beamforming PA, UPA and General PA for MIMO $n_t=n_r=2$, $R=1$ bpcu, $\rho=3$ dB.
We observe that Telatar's conjecture is validated. There is a threshold, $\delta=0.16$ W, below which ($\overline{P}\leq \delta$) the beamforming PA is optimal and above which the UPA is optimal.}} \label{fig5} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[angle=0,width=14cm]{efficiency_mimo_2x2.eps} \caption{\scriptsize{Optimal energy-efficiency vs. power constraint $\overline{P}$, comparison between beamforming PA, UPA and General PA for MIMO $n_t=n_r=2$, $R=1$ bpcu, $\rho=3$ dB. We observe that our Conjecture \ref{conjecture:GPR_MIMO} is validated. For the exact same $\delta=0.16$ W, we have that for $\overline{P}\leq \delta$ the beamforming PA structure is optimal and above it, the UPA structure is optimal. }} \label{fig6} \end{center} \end{figure} \newpage \appendices \section{Proof of Proposition \ref{proposition:static-channels}} \label{appendix:A} As $\mathbf{Q}$ is a positive semi-definite Hermitian matrix, it can always be spectrally decomposed as $\mathbf{Q}=\mathbf{U}\mathbf{D}\mathbf{U}^H$ where $\mathbf{D}= \mathrm{\textbf{Diag}}(p_1,\hdots,p_{n_t})$ is a diagonal matrix representing a given PA policy and $\mathbf{U}$ is a unitary matrix. Our goal is to prove that, for every $\mathbf{U}$, $G_{\mathrm{static}}$ is maximized when $\mathbf{D} =\mathrm{\textbf{Diag}}(0, 0, ..., 0)$. To this end we rewrite $G_{\mathrm{static}}$ as \begin{equation} \displaystyle{G_{\mathrm{static}}(\mathbf{U} \ \mathrm{\textbf{Diag}}(p_1,\hdots,p_{n_t}) \ \mathbf{U}^H) = \frac{\log_2\left|\mathbf{I}_{n_r}+\displaystyle{\sum_{i=1}^{n_t} p_i \underline{g}_i \underline{g}_i^H }\right|}{\displaystyle{\sum_{i=1}^{n_t}p_i}}}, \end{equation} where $\underline{g}_i$ represents the $i^{th}$ column of the $n_r \times n_t$ matrix $\mathbf{G}=\sqrt{\rho}\mathbf{H}\mathbf{U}$ and proceed by induction on $n_t \geq 1$.
First, we introduce an auxiliary quantity (whose role will be made clear further on) \begin{equation} \label{eq:tr_logdet} \begin{array}{lcl} E^{(n_t)}(p_1,\hdots, p_{n_t})& \triangleq & \displaystyle{ \mathrm{Tr}\left(\mathbf{I}_{n_r}+\sum_{i=1}^{n_t}p_i \underline{g}_i\underline{g}_i^H \right)^{-1}\left(\sum_{i=1}^{n_t}p_i\underline{g}_i\underline{g}_i^H\right)}\\ & &\displaystyle{- \log_2 \left|\mathbf{I}_{n_r}+ \sum_{i=1}^{n_t}p_i \underline{g}_i\underline{g}_i^H\right| }. \end{array} \end{equation} and prove by induction that it is non-positive, that is, $\forall (p_1,\hdots, p_{n_t}) \in \mathbb{R}_+^{n_t}$, $E^{(n_t)}(p_1,\hdots, p_{n_t}) \leq 0$. For $n_t=1$, we have $E^{(1)}(p_1) = \mathrm{Tr}\left[(\mathbf{I}_{n_r}+p_1\underline{g}_1\underline{g}_1^H)^{-1} p_1\underline{g}_1\underline{g}_1^H\right] - \log_2\left|\mathbf{I}_{n_r}+ p_1\underline{g}_1 \underline{g}_1^H\right|$. The first order derivative of $E^{(1)}(p_1)$ w.r.t. $p_1$ is: \begin{equation} \frac{\partial E^{(1)}}{\partial p_1}= -p_1 [\underline{g}_1^H(\mathbf{I}_{n_r}+p_1\underline{g}_1\underline{g}_1^H)^{-1}\underline{g}_1 ]^2 \leq 0 \end{equation} and thus $E^{(1)}(p_1) \leq E^{(1)}(0)=0$. Now, we assume that $E^{(n_t-1)}(\underline{p}) \leq 0$ and want to prove that $E^{(n_t)}(\underline{p}, p_{n_t}) \leq 0$, where $\underline{p}=(p_1,\hdots, p_{n_t-1})$. It turns out that: \begin{equation} \frac{\partial E^{(n_t)}}{\partial p_{n_t}}= -\sum_{j=1}^{n_t} p_j \left|\underline{g}_j^H\left(\mathbf{I}_{n_r}+\sum_{i=1}^{n_t}p_i\underline{g}_i\underline{g}_i^H \right)^{-1}\underline{g}_{n_t} \right|^2 \leq 0, \end{equation} and therefore $E^{(n_t)}(p_1,\hdots,p_{n_t-1},p_{n_t}) \leq E^{(n_t)}(p_1,\hdots,p_{n_t-1},0)=E^{(n_t-1)}(p_1,\hdots,p_{n_t-1} ) \leq 0$. As a second step of the proof, we want to prove by induction on $n_t \geq 1$ that \begin{equation} \arg \max_{\underline{p}, p_{n_t}} G_{\mathrm{static}}^{(n_t)}(\underline{p}, p_{n_t})= \underline{0}.
\end{equation} For $n_t=1$ we have $G_{\mathrm{static}}^{(1)}(p_1)=\frac{\log_2|\mathbf{I}_{n_r}+ p_1\underline{g}_1 \underline{g}_1^H|}{p_1}=\frac{\log_2(1+p_1 \underline{g}_1^H\underline{g}_1)}{p_1}$ which reaches its maximum at $p_1=0$. Now, we assume that $\arg \displaystyle{\max_{\underline{p}}} \ G_{\mathrm{static}}^{(n_t-1)}(\underline{p})=\underline{0}$ and want to prove that $\displaystyle{\arg \max_{(\underline{p},p_{n_t})} G_{\mathrm{static}}^{(n_t)}(\underline{p},p_{n_t})=\underline{0}}$. \\Let $\displaystyle{k =\arg \min_{i\in \{1, \hdots, n_t\}} \mathrm{Tr}\left[\left(\mathbf{I}_{n_r}+\displaystyle{\sum_{j=1}^{n_t}p_j \underline{g}_j\underline{g}_j^H} \right)^{-1}\underline{g}_i\underline{g}_i^H \right]}$. By calculating the first order derivative of $G_{\mathrm{static}}^{(n_t)}$ w.r.t. $p_k$ one obtains that: \begin{equation} \frac{\partial G_{\mathrm{static}}^{(n_t)}}{\partial p_k} = \displaystyle{\frac{\mathcal{N}}{\left(\displaystyle{\sum_{i=1}^{n_t}p_i}\right)^2}}, \end{equation} with \begin{equation} \begin{array}{lcl} \mathcal{N}&=&\left(\displaystyle{\sum_{i=1}^{n_t} p_i}\right)\mathrm{Tr}\left[\left(\mathbf{I}_{n_r}+\displaystyle{ \sum_{j=1}^{n_t}p_j \underline{g}_j\underline{g}_j^H}\right)^{-1}\underline{g}_k\underline{g}_k^H\right] \\ & &-\log_2\left|\mathbf{I}_{n_r}+ \displaystyle{\sum_{i=1}^{n_t}p_i \underline{g}_i\underline{g}_i^H}\right| \end{array} \end{equation} and thus $ \frac{\partial G_{\mathrm{static}}^{(n_t)}}{\partial p_k} \leq \displaystyle{\frac{E^{(n_t)}(p_1,\hdots,p_{n_t})}{\left(\sum_{i=1}^{n_t}p_i \right)^2} }\leq 0$ and $p_k^*=0$ for all $p_1, \hdots, p_{k-1},p_{k+1}, \hdots, p_{n_t}$. We obtain that\\ $G_{\mathrm{static}}^{(n_{t})}(p_1, \hdots, p_{k-1},0, p_{k+1}, \hdots, p_{n_t})$ $=G_{\mathrm{static}}^{(n_t-1)}(p_1, \hdots, p_{k-1},p_{k+1}, \hdots, p_{n_t})$, which is maximized when $(p_1, \hdots, p_{k-1},p_{k+1}, \hdots, p_{n_t}) = \underline{0}$ by assumption.
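The key inequality $E^{(n_t)}(\cdot) \leq 0$ used in this induction can also be sanity-checked numerically; the sketch below does so for a real-valued $2\times 2$ instance ($n_t=n_r=2$, fixed vectors $\underline{g}_1, \underline{g}_2$ of our choosing; the Hermitian case is analogous).

```python
import math

# Minimal 2x2 real matrix helpers (matrices as lists of rows).
def mat_add(A, B): return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]
def mat_scale(c, A): return [[c * A[i][j] for j in range(2)] for i in range(2)]
def outer(g): return [[g[i] * g[j] for j in range(2)] for i in range(2)]
def mat_mul(A, B): return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
def det(A): return A[0][0] * A[1][1] - A[0][1] * A[1][0]
def inv(A):
    d = det(A)
    return [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]
def trace(A): return A[0][0] + A[1][1]

I2 = [[1.0, 0.0], [0.0, 1.0]]

def E2(p1, p2, g1, g2):
    """E^{(2)}(p1, p2) = Tr[(I + M)^{-1} M] - log2|I + M|, M = p1 g1 g1^T + p2 g2 g2^T."""
    M = mat_add(mat_scale(p1, outer(g1)), mat_scale(p2, outer(g2)))
    ipm = mat_add(I2, M)
    return trace(mat_mul(inv(ipm), M)) - math.log2(det(ipm))
```

Evaluating $E^{(2)}$ on any non-negative powers returns a non-positive value, as the induction asserts (equivalently, each eigenvalue $\lambda \geq 0$ of $M$ contributes $\frac{\lambda}{1+\lambda} - \log_2(1+\lambda) \leq 0$).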
We therefore have that $\mathbf{Q}^*=\mathbf{U}\mathbf{0}\mathbf{U}^H=\mathbf{0}$ is the solution that maximizes the function $G_{\mathrm{static}}(\mathbf{Q})$. At last, to find the maximum reached by $G_{\mathrm{static}}$ one just needs to consider the equivalent of $\log_2\left|\mathbf{I}_{n_r}+ \rho \mathbf{H}\mathbf{Q}\mathbf{H}^H\right|$ around $\mathbf{Q}=\mathbf{0}$ \begin{equation} \log_2\left|\mathbf{I}_{n_r}+ \rho \mathbf{H}\mathbf{Q}\mathbf{H}^H\right| \sim \frac{\rho}{n_t} \mathrm{Tr}(\mathbf{H}\mathbf{H}^H) \end{equation} and take $\mathbf{Q} = \frac{q}{n_t} \mathbf{I}_{n_t}$ with $q \rightarrow 0$. \section{Proof of Proposition \ref{proposition:slow_gen}} \label{appendix:B} The proof has two parts. First, we start by proving that if the optimal solution is different from the uniform spatial power allocation, $\mathbf{P}^* \neq \frac{p}{n_t}\mathbf{I}_{n_t}$ with $p \in \left[0,\overline{P}\right]$, then the solution is not trivial, $\mathbf{P}^* \neq \mathbf{0}$. We proceed by reductio ad absurdum. We assume that the optimal solution is trivial, $\mathbf{P}^* = \mathbf{0}$. This means that when fixing $(p_2,\hdots,p_{n_t}) = (0,\hdots, 0)$ the optimal $p_1 \in [0, \overline{P}]$ that maximizes the energy-efficiency function is $p_1^*=0$. The energy-efficiency function becomes: \begin{equation} \label{eq:simo} \Gamma(\mathrm{\textbf{Diag}}(p_1,0, \hdots,0),R) = R \frac{1- \mathrm{Pr}\left[\log_2 (1+ \rho p_1 \|\underline{h}_1\|^2)<R\right]}{p_1} \end{equation} where $\underline{h}_1$ represents the first column of the channel matrix $\mathbf{H}$. Knowing that the elements in $\underline{h}_1$ are i.i.d. $h_{1j} \sim \mathcal{C}\mathcal{N}(0,1)$ for all $j \in\{1,\hdots, n_r\}$, we have that $|h_{1j}|^2 \sim \mathrm{expon}(1)$. The random variable $\|\underline{h}_1\|^2= \displaystyle{\sum_{j=1}^{n_r}}|h_{1j}|^2$ is the sum of $n_r$ i.i.d.
exponential random variables of parameter $\lambda=1$ and thus follows a chi-square distribution with $2n_r$ degrees of freedom (equivalently, an Erlang distribution of order $n_r$), whose c.d.f. is known and given by $\varsigma(x) = 1- \exp(-x)\displaystyle{\sum_{k=0}^{n_r-1}} \frac{x^k}{k!}$. We can explicitly calculate the outage probability and obtain the energy-efficiency function: \begin{equation} \Gamma(\mathrm{\textbf{Diag}}(p_1,0, \hdots,0),R) = R \exp\left(-\frac{c}{p_1}\right) \sum_{k=0}^{n_r-1} \frac{c^k}{k!}\frac{1}{p_1^{k+1}} \end{equation} where $c =\frac{2^R-1}{\rho} > 0$. It is easy to check that $\displaystyle{\lim_{p_1 \rightarrow 0} \Gamma(p_1,R)=0}$ and $\displaystyle{\lim_{p_1 \rightarrow \infty} \Gamma(p_1,R)=0}$. By evaluating the first derivative w.r.t. $p_1$, it is easy to check that the maximum is achieved at $p_1^*=\frac{c}{\nu_{n_r}} > 0$, where $\nu_{n_r}$ is the unique positive solution of the following equation (in $y$): \begin{equation}\frac{1}{(n_r -1)!} y^{ n_r} - \sum_{k=0}^{n_r -1} \frac{y^k}{k!}=0. \end{equation} Considering the power constraint, the optimal transmission power is $p_1^*=\min\{\frac{2^R-1}{\nu_{n_r} \rho}, \overline{P}\}$, which contradicts the hypothesis. Thus, if the optimal solution is different from the uniform spatial power allocation, then it is not trivial: $\mathbf{P}^* \neq \mathbf{0}$. \section{Proof of Proposition \ref{proposition:MISO}} \label{appendix:C} Let $\underline{p}^T = (p_1,\hdots,p_{n_t})$ be the vector of powers allocated to the different antennas $i\in\{1,\hdots,n_t\}$ and thus $\mathbf{D}= \mathrm{\textbf{Diag}}(\underline{p})$. Define the two sets: $\mathcal{C}(x) = \left\{\underline{p} \geq 0,\displaystyle{ \sum_{i=1}^{n_t} } p_i \leq x \right\}$ and $\Delta(x) = \left\{\underline{p} \geq 0, \displaystyle{ \sum_{i=1}^{n_t} } p_i = x\right\}$. 
Using these notations, the key observation to be made is the following: \begin{equation} \begin{array}{ccl} \displaystyle{\sup_{\underline{p} \in \mathcal{C}(\overline{P})} \Gamma^{\mathrm{MISO}}(\mathbf{D},R)} & \stackrel{(a)}{=} & R \displaystyle{ \sup_{\underline{p} \in \mathcal{C}(\overline{P})} \frac{1 - P_{\mathrm{out}}^{\mathrm{MISO}}(\mathbf{D}, R)}{\displaystyle{\sum_{i=1}^{n_t}} p_i}} \\ & \stackrel{(b)}{=} & R \displaystyle{\sup_{x \in [0, \overline{P}]} \sup_{\underline{p} \in \Delta(x)} \frac{1 - P_{\mathrm{out}}^{\mathrm{MISO}}(\mathbf{D}, R)}{x}} \\ & \stackrel{(c)}{=} & R \displaystyle{\sup_{x \in [0, \overline{P}]} \frac{g\left( \frac{c}{x}\right) }{x}} \end{array} \end{equation} where $P_{\mathrm{out}}^{\mathrm{MISO}} = \mathrm{Pr}\left[\log\left( 1 + \rho \displaystyle{\sum_{i=1}^{n_t} }p_i |h_i|^2\right) \leq R \right] $: (a) translates the definition of the GPR; (b) follows from the property $\sup \{ A \cup B\}= \sup \{ \sup \{A\}, \sup \{ B \}\}$ for two sets $A$ and $B$, applied to our context; in (c), $g$ is the piecewise continuous function defined by $g(z) = g_{\ell}(z)$ if $z \in \left[\frac{c}{c_{\ell-1}}, \frac{c}{c_{\ell}}\right)$, where $g_{\ell}(z) = 1 - \mathrm{Pr}\left[\frac{1}{\ell} \displaystyle{\sum_{i=1}^{\ell}}|h_i|^2 \leq z \right]$ for $z \in \left[\frac{c}{c_{\ell-1}}, \frac{c}{c_{\ell}} \right)$ and $\ell \in \{1, \hdots, n_t \}$. The function $g(z)$ corresponds to the solution of the minimization problem of the outage probability \cite{jorswieck-ett-2007}. Now, we study the function $g_{\ell}$. By calculating the first-order derivative of $\frac{1}{x}g_{\ell}\left(\frac{c}{x}\right)$ w.r.t. 
$x$ we obtain: \begin{equation} \frac{\mathrm{d}}{\mathrm{d} x} \left\{\frac{1}{x}g_{\ell}\left(\frac{c}{x}\right)\right\} = \frac{\mathrm{e}^{-\frac{\ell c}{x}}}{x^2} \left[\frac{1}{(\ell-1)!}\left(\frac{\ell c}{x}\right)^{\ell} - \displaystyle{\sum_{j=0}^{\ell-1} \frac{1}{j!} \left(\frac{\ell c}{x}\right)^{j}} \right]. \end{equation} Thus the function $\frac{1}{x} g_{\ell} \left(\frac{c}{x}\right)$ is increasing on $(0, x_{\ell})$ and decreasing on $(x_{\ell}, \infty)$. The maximum is reached at $x_{\ell} = \frac{\ell c}{y_{\ell}}$, where $y_{\ell}$ is the unique positive solution of the equation $\phi_{\ell}(y)=0$ with \begin{equation} \phi_{\ell}(y) = \frac{1}{(\ell-1)!} y^{\ell} - \sum_{i=0}^{\ell-1}\frac{1}{i!} y^i. \end{equation} We have that $\phi_{\ell}(0)= -1 < 0$ and \begin{equation} \begin{array}{lcl} \phi_{\ell}(\ell) & = & \frac{1}{(\ell-1)!} \ell^{\ell} -\displaystyle{ \sum_{i=0}^{\ell-1}}\frac{1}{i!} \ell^i \\ & = & \displaystyle{\sum_{i=0}^{\ell-1} }\frac{\ell-i-1}{i!} \ell^i \\ & \geq & 0. \end{array} \end{equation} This implies that $y_{\ell} \leq \ell$ and thus $x_{\ell} \geq c $. Since $c_{n_t-1} \geq 1$, we also have $x_{\ell} \geq \frac{c}{c_{n_t-1}}$ for all $\ell \in \{1,\hdots,n_t-1\}$. Therefore, all the functions $\frac{1}{x}g_{\ell}\left(\frac{c}{x}\right)$ are increasing on the interval $\left(0, \frac{c}{c_{n_t-1}}\right)$. Moreover, on the interval $ \left(\frac{c}{c_{n_t-1}},\infty\right)$, they are increasing on $ \left(\frac{c}{c_{n_t-1}}, x_{\ell} \right]$ and decreasing on $\left[x_{\ell}, \infty\right)$. Proposition \ref{proposition:MISO} follows directly. \section{Counter-example, TISO} \label{appendix:D} Consider the particular case where $n_t=2$ and $n_r=1$. 
From Proposition \ref{proposition:MISO}, it follows that for a power constraint $\overline{P} < \frac{c}{c_1}$ the beamforming power allocation policy maximizes the energy-efficiency and $\Gamma^{\mathrm{TISO}}(\mathrm{\textbf{Diag}}(\overline{P},0),R) = \Gamma^{\mathrm{TISO}}(\mathrm{\textbf{Diag}}(0,\overline{P}),R) > \Gamma^{\mathrm{TISO}}\left(\mathrm{\textbf{Diag}}\left(\frac{\overline{P}}{2},\frac{\overline{P}}{2}\right),R \right)$. The function $\Gamma^{\mathrm{TISO}}(\mathrm{\textbf{Diag}}(p_1,p_2),R)$ with $(p_1,p_2) \in \mathcal{P}_2 \triangleq \{(p_1,p_2)\in \mathbb{R}_+^2 \ | \ p_1+p_2 \leq \overline{P} \}$ denotes the energy-efficiency function. We want to prove that $\Gamma^{\mathrm{TISO}}(\mathrm{\textbf{Diag}}(p_1,p_2),R)$ is not quasi-concave w.r.t. $(p_1,p_2) \in \mathcal{P}_2$. This amounts to finding a level $\gamma \geq 0$ such that the corresponding upper-level set $\mathcal{U}_{\gamma} = \left\{(p_1,p_2)\in \mathcal{P}_2 \ | \ \Gamma^{\mathrm{TISO}}(\mathrm{\textbf{Diag}}(p_1,p_2),R) \geq \gamma\right\}$ is not a convex set (see \cite{boyd-book-2004} for a detailed analysis of quasi-concave functions). Consider an arbitrary $0< q < \min\left\{ \overline{P}, \frac{c}{c_1} \right\}$, for which $\Gamma^{\mathrm{TISO}}(\mathrm{\textbf{Diag}}(q,0),R) = \Gamma^{\mathrm{TISO}}(\mathrm{\textbf{Diag}}(0,q),R) > \Gamma^{\mathrm{TISO}}\left(\mathrm{\textbf{Diag}}\left(\frac{q}{2},\frac{q}{2}\right),R \right)$. It turns out that the upper-level set $\mathcal{U}_{\gamma_q}$ with $\gamma_q \triangleq \Gamma^{\mathrm{TISO}}(\mathrm{\textbf{Diag}}(q,0),R)$ is not a convex set. This follows directly from the fact that $(q,0), (0,q) \in \mathcal{U}_{\gamma_q}$ but $\left(\frac{q}{2},\frac{q}{2}\right) \notin \mathcal{U}_{\gamma_q}$, since $\Gamma^{\mathrm{TISO}}\left(\mathrm{\textbf{Diag}}\left(\frac{q}{2},\frac{q}{2}\right),R \right) < \gamma_q$. 
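This non-convexity can also be checked numerically. The sketch below evaluates $\Gamma^{\mathrm{TISO}}$ in closed form for the two power profiles involved, using the fact (as in Appendix B) that $|h_i|^2 \sim \mathrm{expon}(1)$; the constants $R$, $\rho$ and $q$ are illustrative choices, assumed to satisfy $q < \min\{\overline{P}, c/c_1\}$:

```python
import math

# Closed-form energy efficiency for the TISO case (n_t = 2, n_r = 1),
# with |h_i|^2 ~ expon(1); only the two power profiles used below are handled.
def gamma_tiso(p1, p2, R=1.0, rho=1.0):
    c = (2.0**R - 1.0) / rho
    if p2 == 0.0:
        success = math.exp(-c / p1)            # Pr[p1*|h_1|^2 > c]
    elif p1 == p2:
        s = c / p1                             # |h_1|^2 + |h_2|^2 > c/p1
        success = math.exp(-s) * (1.0 + s)     # Erlang(2) tail probability
    else:
        raise NotImplementedError("only beamforming and uniform profiles")
    return R * success / (p1 + p2)

q = 0.5                                        # illustrative power budget
g_bf = gamma_tiso(q, 0.0)                      # beamforming profile (q, 0)
g_upa = gamma_tiso(q / 2.0, q / 2.0)           # uniform profile (q/2, q/2)

# (q,0) and (0,q) reach the level gamma_q = g_bf, but their midpoint
# (q/2, q/2) stays below it, so the upper-level set cannot be convex.
print(g_bf > g_upa)  # True
```

With these values, the midpoint $(q/2, q/2)$ of $(q,0)$ and $(0,q)$ indeed falls below the level $\gamma_q$, which is exactly the failure of convexity used in the proof.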
\section{Extreme SNR cases, GPR} \label{appendix:E} In \cite{jorswieck-ett-2007}, the authors proved that in the low SNR regime the outage probability $P_{\mathrm{out}}(\underline{p}, R)$ is Schur-concave w.r.t. $\underline{p}$. This means that for any vectors $\underline{p}$, $\underline{q}$, if $\underline{p} \succ \underline{q}$, then $P_{\mathrm{out}}(\underline{p}, R) \leq P_{\mathrm{out}}(\underline{q}, R)$. The operator $\succ$ denotes the majorization operator, which we briefly describe here (see \cite{marshall-book-1979} for details). For any two vectors $\underline{p}, \underline{q} \in \mathbb{R}_+^{n_t}$, $\underline{p}$ majorizes $\underline{q}$ (denoted by $\underline{p} \succ \underline{q}$) if $\displaystyle{\sum_{k=1}^{m}} p_k \geq \displaystyle{\sum_{k=1}^{m}} q_k$ for all $m \in \{1,\hdots, n_t-1\}$ and $\displaystyle{\sum_{k=1}^{n_t}} p_k = \displaystyle{\sum_{k=1}^{n_t}} q_k$. This operator induces only a partial ordering. Schur-convexity and the $\prec$ operator can be defined in an analogous way. Also, an important observation to be made is that the beamforming vector majorizes any other vector, whereas the uniform vector is majorized by any other vector (provided the sums of the elements of the vectors are equal). In other words, $x \underline{e}_1 \succ \underline{p} \succ \frac{x}{n_t} \underline{\mathbf{1}} $ for any vector $\underline{p}$ such that $\displaystyle{\sum_{i=1}^{n_t} }p_i = x$, where $\underline{\mathbf{1}} =(1,1,\hdots,1)$ and $\underline{e}_1 \in \mathcal{S}_1$. It is straightforward to see that if $P_{\mathrm{out}}(\textbf{Diag}(\underline{p}), R)$ is Schur-concave w.r.t. $\underline{p}$, then $1 - P_{\mathrm{out}}(\textbf{Diag}(\underline{p}), R)$ is Schur-convex w.r.t. $\underline{p}$. 
Since the majorization operator requires the sums of the elements of the ordered vectors to be identical, $\Gamma(\textbf{Diag}(\underline{p}), R)=\frac{1 - P_{\mathrm{out}}(\textbf{Diag}(\underline{p}), R)}{\displaystyle{\sum_{i=1}^{n_t}p_i}}$ will also be Schur-convex w.r.t. $\underline{p}$ and is thus maximized by a beamforming vector. Using the same notations as in Appendix \ref{appendix:C}, we obtain: \begin{equation} \begin{array}{ccl} \displaystyle{\sup_{\underline{p} \in \mathcal{C}(\overline{P})} \Gamma(\textbf{Diag}(\underline{p}),R)} & = & \displaystyle{\sup_{x \in [0, \overline{P}]} \frac{1}{x} \sup_{\underline{p} \in \Delta(x)} [1 - P_{\mathrm{out}}(\textbf{Diag}(\underline{p}), R)]} \\ & \stackrel{(a)}{=} & \displaystyle{\sup_{x \in [0, \overline{P}]}\frac{1}{x} \left[1 - \mathrm{Pr}[\log(1+ x \rho \underline{h}_1^H \underline{h}_1) \leq R ]\right], } \\ & = & \displaystyle{\sup_{x \in [0, \overline{P}]}\frac{1}{x} \left\{1 - \mathrm{Pr}\left[\frac{1}{n_r} \sum_{j=1}^{n_r} |h_{1j}|^2 \leq \frac{c}{n_r x} \right] \right\}, } \\ & \stackrel{(b)}{=} & \displaystyle{\sup_{x \in [0, \overline{P}]}\frac{g_{n_r}\left(\frac{c}{n_r x}\right)}{x} }, \end{array} \end{equation} where (a) follows by considering the beamforming power allocation policy on the first transmit antenna (without loss of generality), i.e., setting $\underline{p} = x\underline{e}_1$ with $\underline{e}_1 = (1, 0, \hdots, 0)$, and with $\underline{h}_1$ denoting the first column of the channel matrix; in (b) we make use of the definition in Appendix \ref{appendix:C} of the function $\frac{1}{x} g_{n_r}\left(\frac{c}{n_r x}\right)$, which has a unique maximum over $[0,\overline{P}]$ at $\min \left\{ \frac{c}{y_{n_r}} , \overline{P} \right\}$, with $y_{n_r}$ the unique positive solution of $\phi_{n_r}(y) = 0$. Since $\sigma^2 \rightarrow +\infty$, we have $c \rightarrow + \infty$ and thus the optimal power allocation is $\underline{p}^* = \overline{P} \underline{e}_1$. 
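The thresholds $y_{\ell}$ (and likewise $\nu_{n_r}$ in Appendix B, which solves the same polynomial equation) have no closed form, but since $\phi_{\ell}(0) = -1 < 0$ and $\phi_{\ell}(\ell) \geq 0$, a plain bisection on $[0, \ell]$ recovers them numerically. A minimal sketch, with $\ell = 3$ as an illustrative choice:

```python
import math

def phi(y, l):
    """phi_l(y) = y^l/(l-1)! - sum_{i=0}^{l-1} y^i/i! (cf. Appendix C)."""
    return y**l / math.factorial(l - 1) - sum(y**i / math.factorial(i)
                                              for i in range(l))

def root_phi(l, tol=1e-12):
    # phi_l(0) = -1 < 0 and phi_l(l) >= 0, so the root lies in (0, l]
    # and bisection converges.
    lo, hi = 0.0, float(l)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if phi(mid, l) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

l = 3                  # illustrative number of antennas
y_l = root_phi(l)
print(y_l)             # satisfies y_l <= l, consistent with the proof
```

The same routine applied with $l = n_r$ yields the constant $\nu_{n_r}$ appearing in the optimal transmission power of Appendix B.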
Similarly, for the high SNR case we have: \begin{equation} \begin{array}{ccl} \displaystyle{\sup_{\underline{p} \in \mathcal{C}(\overline{P})} \Gamma(\textbf{Diag}(\underline{p}),R)} & = & \displaystyle{\sup_{x \in [0, \overline{P}]} \frac{1}{x} \sup_{\underline{p} \in \Delta(x)} [1 - P_{\mathrm{out}}(\textbf{Diag}(\underline{p}), R)]} \\ & = & \displaystyle{\sup_{x \in [0, \overline{P}]}\frac{1}{x} \left[1 - P_{\mathrm{out}}\left(\textbf{Diag}\left(\frac{x}{n_t} (1,\hdots,1)\right), R\right)\right] }. \end{array} \end{equation} We have used the results in \cite{jorswieck-ett-2007}, where the UPA was proven to minimize the outage probability. Let us now consider the limit of the energy-efficiency function when $x \rightarrow 0$, $\sigma^2 \rightarrow 0$ such that $\frac{x}{\sigma^2} \rightarrow \xi$, with $\xi$ a positive finite constant. We obtain that $1- P_{\mathrm{out}}\left(\frac{x}{n_t}\mathbf{I}_{n_t},R\right) \rightarrow \mathrm{Pr}\left[\log_2\left|\mathbf{I}_{n_r} + \frac{\xi}{n_t} \mathbf{H}\mathbf{H}^H \right| \geq R\right] >0$, which implies directly that $\Gamma\left(\frac{x}{n_t}\mathbf{I}_{n_t},R\right) \rightarrow + \infty$.
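For the single-antenna special case $n_t = n_r = 1$, the divergence can be made fully explicit: along a path where the transmit power vanishes while its ratio to $\sigma^2$ tends to $\xi$, the effective SNR stays fixed, so the success probability tends to the constant $\exp(-(2^R-1)/\xi) > 0$ (using $|h|^2 \sim \mathrm{expon}(1)$ as in Appendix B), and dividing by the vanishing power makes the efficiency blow up. A sketch under these assumptions ($\xi$ and $R$ values are illustrative):

```python
import math

# Along the path sigma^2 = p/xi, the effective SNR p/sigma^2 stays fixed at
# xi, so for |h|^2 ~ expon(1):
#   1 - P_out = Pr[log2(1 + xi*|h|^2) >= R] = exp(-(2^R - 1)/xi) > 0.
def gamma_siso_limit_path(p, xi=2.0, R=1.0):
    success = math.exp(-(2.0**R - 1.0) / xi)   # constant along the path
    return R * success / p                     # efficiency = R*(1-P_out)/p

vals = [gamma_siso_limit_path(10.0**(-k)) for k in range(1, 5)]
print(vals)  # strictly increasing: the efficiency diverges as p -> 0
```

The numerator is bounded away from zero while the denominator vanishes, which is precisely the mechanism behind $\Gamma \rightarrow +\infty$ in the general MIMO statement above.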
\section{Introduction} \IEEEPARstart{A}{utomated driving} has the potential to radically change our mobility habits as well as the way goods are transported. To enable driving automation, several processing steps have to be executed. \autoref{fig:intro} illustrates this thought: In the first step, the current traffic scene has to be sensed and a proper representation of the environment needs to be generated. Using this information, the given traffic situation needs to be interpreted and the behavior of others has to be anticipated. Subsequently, a plan, i.\,e. a trajectory, is derived based on this knowledge. Finally, this plan is executed in the last step of this process. How long the trajectory stays viable before it has to be re-planned is strongly influenced by the capability of the prediction component. As opposed to other research works dealing with techniques to interconnect vehicles through so-called car-to-car communication, we aim to solve this anticipation task locally. On the one hand, it is not foreseeable when an adequate market penetration of vehicles with such techniques will be reached. On the other hand, a local prediction component always remains necessary, as there are several traffic participants without communication abilities, such as bicyclists. In addition, local predictions might become necessary to bypass transmission times in certain cases, as emphasized by \cite{weidl2018}. Moreover, it is reasonable to approach the topic from the perspective of highway driving, as this use case is easier to realize than others due to its clear constraints (e.\,g. structured setting, absence of pedestrians). However, for the prediction task this implies the challenge of creating precise long-term predictions (2 to 5\,s) rather than short forecasts (up to 2\,s), as in highway scenarios higher velocities can be expected than in urban or rural areas. 
\begin{figure}[!t] \centering\includegraphics[width=0.8\columnwidth]{intro.pdf} \caption{Long-term driving behavior predictions in the context of trajectory planning for automated driving (equal symbols denote simultaneity).} \label{fig:intro} \end{figure} \IEEEpubidadjcol \subsection{Problem Statement}\label{subsec:problem_statement} We tackle the challenge of anticipating the behavior of other traffic participants in highway scenarios. In particular, we aim to generate information that can be processed by trajectory planning algorithms to implement an anticipatory driving style. In this context, our objective is to model future vehicle positions within a time $t$ in longitudinal $x_t$ and lateral $y_t$ direction as spatial distributions $x_t \sim p_x$, $y_t \sim p_y$ rather than estimating single shot predictions $\hat{x}_t$ and $\hat{y}_t$, respectively. Note that these distributions are more useful for down-streamed criticality assessments, as they enable us to represent several alternative hypotheses at a time with their particular frequencies. Despite the focus on highway driving, the presented methods should be general enough to be appropriate in other environments as well. \subsection{Problem Resolution Strategy}\label{subsec:res_strategy} This article presents a systematic workflow for the design and evaluation of a lightweight maneuver-based model \cite{lefevre2014}, which uses standard sensor inputs to perform long-term driving behavior predictions. Methodologically, we build on \cite{schlechtriemen2015will} and use a two-step Mixture of Experts (\textit{MOE}) approach. This includes a maneuver classification and a down-streamed behavior prediction. The maneuver probabilities $\{P_m\}_{\forall m \in M}$ determined by the classifier are used in the Mixture of Experts approach as gating nodes. 
Specifically, the probabilities control the weighting $w_m$ of the respective expert distributions $p_{y, m}$ while calculating the overall distribution of future vehicle positions $p_y$. \autoref{eq:MOE} summarizes this procedure for the lateral direction (equivalent for $x$): \begin{equation} \begin{aligned} y_{t} & \sim p_y(\Theta_y, I, t) \\ & = \sum_{m \in M}{p_{y, m}(\theta_{y, m}| I, t) \cdot w_m(I)}\\ \label{eq:MOE} \end{aligned} \end{equation} The set of maneuvers $M$ is defined as follows: \begin{equation} M = \{LCL, FLW, LCR\} \label{eq:maneuvers} \end{equation} Different weighting approaches based on the maneuver probabilities are presented in \autoref{sec:traj_eval}. The expert distributions $p_{y, m}$ are modeled as Gaussian Mixture Models (\textit{GMM}s) in the combined input and output space with $K$ components according to \autoref{eq:gmm}, and are used in a Gaussian Mixture Regression manner. Hence, they are conditioned on the input features $I$ and the prediction time $t$ (cf. \autoref{eq:MOE}). \begin{equation} p_{y, m}(\theta_{y, m}) = \sum_{i=1}^{K}{\phi_{y, m, i} \cdot \mathcal{N}(\mu_{y, m, i}, \Sigma_{y, m, i})} \label{eq:gmm} \end{equation} The parameters of the \textit{GMM}s are subsumed in $\Theta_y$: \begin{equation} \Theta_y = \{\theta_{y, m}\}_{\forall m \in M} = \{\phi_{y, m}, \mu_{y, m}, \Sigma_{y, m}\}_{\forall m \in M} \end{equation} In addition, we introduce an alternative methodology to the Mixture of Experts approach, integrating the outputs of the gating nodes into a single model. This simplifies \autoref{eq:MOE} as follows: \begin{equation} y_{t} \sim p_y(\theta_{y, IGMM}| I, t, P_{LCL}(I), P_{LCR}(I)) \label{eq:IGMM} \end{equation} For implementing the models, we use out-of-the-box modules from the widely used frameworks Apache Spark MLlib \cite{meng2016mllib} (classifiers) and Scikit-learn \cite{scikit-learn} (\textit{GMM}s). 
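The weighting scheme in \autoref{eq:MOE} can be sketched in a few lines. Below, hand-rolled one-dimensional Gaussian mixtures stand in for the trained experts $p_{y,m}$ at a fixed prediction time $t$, and fixed gating weights stand in for the classifier output $P_m$; all numeric values are illustrative, not trained parameters:

```python
import math

def gauss_pdf(y, mu, sigma):
    return math.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def gmm_pdf(y, components):
    # components: list of (phi_i, mu_i, sigma_i) with the phi_i summing to one
    return sum(phi * gauss_pdf(y, mu, s) for phi, mu, s in components)

# Illustrative (untrained) experts for the lateral position at a fixed t:
experts = {
    "LCL": [(1.0, 3.5, 0.6)],                   # lane change left
    "FLW": [(0.7, 0.0, 0.3), (0.3, 0.0, 0.8)],  # lane following
    "LCR": [(1.0, -3.5, 0.6)],                  # lane change right
}
gating = {"LCL": 0.2, "FLW": 0.7, "LCR": 0.1}   # stand-in for P_m

def p_y(y):
    # Mixture of Experts: p_y(y) = sum_m w_m * p_{y,m}(y)
    return sum(gating[m] * gmm_pdf(y, experts[m]) for m in experts)

# The weighted mixture is itself a valid density: it integrates to ~1.
grid = [i * 0.01 - 10.0 for i in range(2001)]
mass = sum(p_y(y) * 0.01 for y in grid)
print(round(mass, 3))
```

Since the gating weights sum to one, the weighted combination remains a proper density, which is what makes the \textit{MOE} output directly usable for down-streamed criticality assessments.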
Altogether, we contribute a systematic workflow for designing and evaluating the prediction models as well as methodical extensions to known approaches. Moreover, we assess the performance of the developed modules for the two tasks of predicting (1) driving maneuvers and (2) probability distributions of future positions both separately and in combination. To evaluate the modules, we utilize a large data set comprising real-world measurements. As will be shown, our prediction models outperform established state-of-the-art approaches. \newline The remainder of this article is organized as follows: \autoref{sec:rel_work} discusses related work on object motion prediction, emphasizing the value added by our approach. \autoref{sec:pre} introduces the data set and describes the preprocessing steps applied to it. \autoref{sec:classification} outlines the training of the considered maneuver classifiers, whereas \autoref{sec:class_eval} deals with the experimental evaluation and the performance of the classifiers. Based on these findings, \autoref{sec:traj_pred} develops different approaches for estimating probability distributions of future vehicle positions, which are then assessed in \autoref{sec:traj_eval}. Finally, \autoref{sec:conclusion} summarizes the article and gives an outlook on future work. \section{Related Work}\label{sec:rel_work} Regarding the understanding and prediction of the behavior of other traffic participants in highway scenarios, various aspects have been investigated in the literature. Accordingly, this section is sub-divided into three parts: \autoref{subsec:rel_work_classifiers} presents approaches inferring the kind of maneuver that will be executed by a vehicle. Note that applications like collision checkers or trajectory planning algorithms cannot directly process this kind of information. Instead, probabilities of future vehicle positions or trajectories need to be predicted. 
Related research on this topic is presented in \autoref{subsec:rel_work_traj_pred}. Bringing together the aspects of maneuver classification and position prediction, \autoref{subsec:rel_work_hybrid} gives an overview of hybrid prediction approaches. Finally, \autoref{subsec:rel_work_summary} closes the section with a brief literature discussion, leading to the contributions of this article in \autoref{subsec:contributions}. \subsection{Classification Approaches}\label{subsec:rel_work_classifiers} Classification approaches for maneuver recognition are described in \cite{weidl2018, wissing2017lane, schlechtriemen2014lane, bahram2016}. In \cite{weidl2018}, a system is introduced, which is capable of detecting lane changes with high accuracies ($>$99\,\%), approximately 1\,s before their occurrence. For this purpose, dynamic Bayesian networks are used. Another approach, which is capable of detecting lane changes approximately 1.5\,s before their occurrence, is presented in \cite{wissing2017lane}. To achieve this, the lane change probability is decomposed into a situation- and a movement-based component, resulting in an $F_1$-score better than 98\,\%. The approach presented in \cite{schlechtriemen2014lane}, in turn, shows that it is possible to detect lane changes up to time horizons of 2\,s when using feature selection for scene understanding, with an Area Under the Curve (\textit{AUC}) better than 0.96. Moreover, \cite{bahram2016} combines interaction-aware heuristic models with an interaction-unaware learned model. The interaction-aware component relies on a multi-agent simulation based on game theory, in which each agent simultaneously tries to minimize different cost functions. These cost functions are designed using expert knowledge and consider traffic rules. In a second step, the output of the interaction model is used to condition an interaction-unaware classifier based on Bayesian networks. 
The approach is able to detect lane changes on average 1.8\,s in advance, with an \textit{AUC} better than 0.93. \subsection{Trajectory and Position Prediction Approaches}\label{subsec:rel_work_traj_pred} Approaches dealing with the prediction of trajectories and positions are presented in \cite{lenz2017, altche, wiest2012probabilistic, wiest2013incorporating, schlechtriemen2014probabilistic}: \cite{lenz2017} uses a fully-connected Deep Neural Network to learn the parameters of a two-dimensional \textit{GMM}. For each situation, an adapted Gaussian Mixture distribution models the probability density in the output dimensions $a_x$ and $v_y$ (cf. \autoref{tab:featureOverview}). This distribution is then sampled to estimate trajectories. The authors evaluate their approach with the widely used NGSIM data set \cite{colyar2007us} and show that a root weighted square error (comparable to \textit{RMSE}) of approximately 0.5\,m in lateral direction at a prediction horizon of 5\,s can be achieved. Another approach, also evaluated with the NGSIM data set, is presented in \cite{altche}. The authors propose the use of a Long Short-Term Memory network for predicting trajectories. In particular, the approach is able to compute single shot predictions with an \textit{RMSE} of approximately 0.42\,m at a prediction horizon of 5\,s. \cite{wiest2012probabilistic} deals with the prediction of spatial probability density functions, especially at road intersections. More precisely, a conditional probability density function, which models the relationship between past and future motions, is inferred from training data. Finally, standard \textit{GMM}s and variational approaches are compared. In \cite{wiest2013incorporating}, this approach is extended by a hierarchical Mixture of Experts that allows incorporating categorical information. The latter includes, for example, the topology of a road intersection. 
In \cite{schlechtriemen2014probabilistic}, a Gaussian Mixture Regression approach for predicting future longitudinal positions as well as a procedure for estimating the prediction confidence are introduced. \begin{figure*}[!ht] \centering \includegraphics[scale=0.5]{processing_steps_detailed.pdf} \caption{Preprocessing steps used in the proposed workflow (the respective sections are referenced in the boxes).} \label{fig:preprocessing} \vspace{-4mm} \end{figure*} \subsection{Hybrid Approaches}\label{subsec:rel_work_hybrid} Approaches that combine strategies for both maneuver detection and trajectory or position prediction, similar to the approach presented in this article, are described in \cite{yoon2016, woo2017, wissing2017probabilistic, wissing2018trajectory, deo2018would, deo2018convolutional}. In the following, we denote such approaches as hybrid. \cite{yoon2016} presents a two-stage approach: In the first step, a Multilayer Perceptron (\textit{MLP}) is used to estimate the future lane of a vehicle. In a second step, a concrete trajectory realization is estimated with an additional \textit{MLP}. As a result, the lane estimation module is able to detect lane changes 2\,s in advance with an \textit{AUC} better than 0.90. The evaluation of the trajectory prediction module shows a median lateral error of approximately 0.23\,m at a prediction horizon of 5\,s. \cite{woo2017} proposes another hybrid approach that uses the prediction of future trajectories to forecast lane change maneuvers. Moreover, the intention of drivers is modeled using a Support Vector Machine. Subsequently, the resulting action is checked for collisions. This enables the approach to model interrupted lane changes. During the evaluation, an $F_1$-score of 98.1\,\% with a detection time up to 1.74\,s is achieved. In turn, \cite{wissing2017probabilistic} does not follow such a hybrid approach, but contains an intermediate step before predicting trajectories. 
Instead of learning maneuver probabilities, the authors present a regression technique for estimating the time span to the next lane change relying on Random Forests. In \cite{wissing2018trajectory}, this approach is extended and combined with findings from \cite{wissing2017lane}. The estimated times until the next lane change to the left and to the right are used as input for a cubic polynomial which is intended to predict future trajectories. Finally, the approach is evaluated with the mentioned NGSIM data set, showing a median lateral error of approximately 0.5\,m at a prediction horizon of 3\,s for lane changing scenarios, assuming a perfect maneuver classification. \cite{deo2018would} proposes the use of a maneuver recognition based on a Hidden Markov Model, distinguishing between ten maneuver classes. Based on this model, a position prediction module, which combines several maneuver-specific variational \textit{GMM}s (according to \cite{wiest2012probabilistic}), and an Interacting Multiple Model, which weights different physical models against each other, are implemented. As the approach uses ten maneuver classes and as the errors are only measured in terms of Euclidean distance, the results are difficult to compare with those of other approaches. Additionally, the approach is evaluated on a rather small data set. Finally, in \cite{deo2018convolutional} these findings are pursued by the use of a Long Short-Term Memory network. The authors demonstrate certain improvements compared to their previous work, while using the NGSIM data set for evaluation purposes. \cite{schlechtriemen2015will} presents an approach predicting future lateral vehicle positions based on Gaussian Mixture Regression and a Mixture of Experts with a Random Forest as gating network. The approach is evaluated based on a small data set, leading to noisy results, especially in the case of lane changes. 
The evaluation shows that the approach is able to perform maneuver classifications with an \textit{AUC} better than 0.84 and lateral position predictions with a median error of less than 0.2\,m at a prediction horizon of 5\,s. \subsection{Discussion}\label{subsec:rel_work_summary} The findings of our literature survey can be summarized as follows: Many works provide meaningful algorithmic contributions. However, in numerous cases we miss structure regarding the problem resolution strategy. Often, it does not become clear how the approaches compare to any baseline (e.\,g. \cite{deo2018would}). Moreover, parameters (e.\,g. \cite{woo2017}) and feature sets (e.\,g. \cite{altche}) are selected manually, and are thus difficult to retrace. In addition, most approaches focus on short or medium prediction horizons (e.\,g. \cite{weidl2018}), or lack a good prediction performance for larger time-horizons (e.\,g. \cite{wissing2018trajectory}). When analyzing the approaches that aim to resolve the long-term prediction problem, it becomes clear that the latter is challenging, as the prediction models become significantly more complex, as, e.\,g., pointed out by \cite{bahram2016, schlechtriemen2014lane} and \cite{klingelschmitt}. Moreover, many approaches (e.\,g. \cite{altche}) aim to predict single trajectories or single shot predictions rather than probabilistic distributions of future vehicle positions. Therefore, the objective to be optimized is mostly the root-mean-square error (\textit{RMSE}). As opposed to these works, we consider the objective of the learning problem as generating an estimator that models a probability distribution of positions reflecting the frequencies of all observed positions, e.\,g., for different drivers in the same situation. Thus, we aim to maximize the likelihood of truly occupied positions given the model. The reasoning behind this design choice is that such distributions contain significantly more information than single shot predictions. 
Thus, they are more useful for applications that need to consider risks, like, for example, maneuver planning approaches as presented in \cite{wiest2012probabilistic, schlechtriemen2016wiggling, sadigh2016planning}. \subsection{Contributions}\label{subsec:contributions} The contribution of this article is threefold: \begin{enumerate} \item We apply a heuristic-free machine learning workflow to generate a model capable of predicting maneuvers and precise distributions of future vehicle positions for time horizons up to 5\,s (reasonable in terms of comparability). This is achieved with a machine learning workflow that omits any human-tuned (hyper-)parameters when constructing the classifiers. Note that this includes all aspects involving feature engineering, labeling, feature selection, and hyperparameter optimization for different classification algorithms. Regarding feature engineering and selection, this means that we construct beforehand a data set with a large superset of all features that are potentially relevant for the problem solution. Afterwards, we select a comparatively small feature set that still ensures maximum predictive power through an automated feature selection process. 
\item We evaluate the modules for maneuver classification and position prediction, where both parts are not only evaluated separately, as in other works (e.\,g. \cite{wissing2018trajectory}), but as a combined prediction system as well. This concerns the lateral as well as the longitudinal behavior. In this context, we show that directly feeding the results of the classifier into the regression problem produces results comparable to a Mixture of Experts approach. Additionally, we show that relying on the Markov assumption and not modeling the interactions between the traffic participants explicitly allows producing superior results compared to existing approaches. As opposed to these works, we integrate the different aspects of behavior prediction, which comprise the prediction of driving maneuvers and positions both in lateral and longitudinal direction. In addition, we introduce new methodologies and conduct a large-scale evaluation. \item We demonstrate that the presented methods not only have the potential to outperform state-of-the-art approaches when feeding them with a sufficient amount of data. Additionally, we show that our approach is able to provide a meaningful estimate of the prediction uncertainty to the consumer of the information, which is beneficial for collision risk calculation and trajectory planning (e.\,g. \cite{schlechtriemen2016wiggling}). \end{enumerate} \section{Data Preparation \& Experimental Setup}\label{sec:pre} \autoref{subsec:dataset} introduces the considered data set and the experimental setup. \autoref{subsec:features} then gives a detailed overview of the features used to train our models. Afterwards, \autoref{subsec:labeling} introduces the labeling process. Finally, \autoref{subsec:split} deals with the data set split for training, validating and testing the constructed models as well as further preprocessing steps. \autoref{fig:preprocessing} summarizes the overall preprocessing workflow. \subsection{Data Collection}\label{subsec:dataset} For modeling and evaluating our modules, we use measurement data from a fleet of testing vehicles \cite{tattersall2012} equipped with common series sensors. The sensor setup includes a front-facing camera detecting lane markings as well as two radars observing the traffic situation in the back. In addition, the vehicles have a front-facing automotive radar to sense the distances and velocities of surrounding vehicles. The data has been collected with different vehicles and drivers at varying times of the day during all seasons. The data collection campaign spanned over more than a year and was mainly restricted to the area around Stuttgart in Germany. 
Owing to this wide variance, we expect our models to achieve good generalization characteristics. Unlike other contributions (e.\,g. \cite{schlechtriemen2015will}), we are not using the actual object-vehicles as prediction target $o$ in this work, but rather the ego- (or measurement-) vehicle itself. However, as our work focuses on the prediction of surrounding vehicles, we solely use features that are observable from an external point of view, as postulated in other works (e.\,g. \cite{weidl2018} or \cite{woo2017}). Note that this constraint excludes features like driver status or steering wheel angle. Thus, the models remain applicable to actual object-vehicles, assuming a good sensing of their surrounding. Working with the ego-vehicle data offers several advantages concerning the modeling of situations: First, each situation can be described in a similar way, as situations in which relevant neighboring vehicles of the target-vehicle are hidden from the measurement-vehicle cannot occur. In addition, all measurements span longer time periods, as the target-vehicle can never disappear from the field of view. This way of data handling is widespread in the literature (e.\,g. \cite{wissing2017lane}). In addition, one can expect that future sensor setups will minimize measurement uncertainty for perceived objects and will get closer to the data quality that is nowadays available for the ego-vehicle. \begin{figure}[!t] \centering\includegraphics[height=4.5cm]{umgebungsmodel_rot.pdf} \caption{Environment model used for our investigations.} \label{fig:env_model} \end{figure} Basically, our investigations rely on an environment model similar to the one presented in \cite{schlechtriemen2014lane}, modeling the surrounding with a fixed grid of eight relation partners. In contrast to \cite{schlechtriemen2014lane}, however, we use the ego-vehicle as prediction target.
For this purpose, we slightly adapt the environment model: As the sensors facing the rear traffic in the testing vehicles are less capable than the ones facing the front, our environment model (cf. \autoref{fig:env_model}) distinguishes between relation partners behind (index $rb$) and in front of (index $rf$) the prediction target $o$. Thus, the relation vectors of the rear objects $R_{rb}$ are shortened compared to the ones of the front objects $R_{rf}$. The relation vectors describe the relation between the respective object and the prediction target. Object-vehicles in the same lane as $o$ and driving behind $o$ are left out, as the current sensor setup is not able to sense them. Consequently, a traffic situation can be described by the feature vector $F_{sit}$, which contains the relations of $o$ and its seven relation partners, its own status $F_o$, and the infrastructure description $F_{infra}$ (cf. \autoref{eq:fsit}): \begin{equation} \begin{aligned} F_{sit} = & [R_{rf}(r=fl), R_{rf}(r=f), R_{rf}(r=fr),\\ & R_{rf}(r=l), R_{rf}(r=r), \\ & R_{rb}(r=rl), R_{rb}(r=rr), \\ & F_o, F_{infra}]^{T} \end{aligned} \label{eq:fsit} \end{equation} A detailed listing of the particular elements of the relation vectors $R_{rf}$ and $R_{rb}$ as well as of $F_o$ and $F_{infra}$ can be found in \autoref{tab:featureOverview}. \subsection{Feature Engineering}\label{subsec:features} To test and develop our system and to fill the described environment model, we use fused data originating from three different sources: \begin{enumerate} \item The basis for our investigations is the measurement data produced by the testing fleet (cf. \autoref{subsec:dataset}). \item As we identified additional features of interest as inputs beforehand, we fuse the data with information from a navigation map (e.\,g. bridges, tunnels, and distances to highway approaches). \item Besides, we calculate some higher order features from the measurements, e.\,g.
a conversion to a curvilinear coordinate-system along the road \cite{thorvaldsson2015}. \end{enumerate} \subsection{Labeling}\label{subsec:labeling} Like previous works \cite{schlechtriemen2015will}, we divide all samples into the three maneuver classes $LCL$ (lane change left), $FLW$ (lane following), and $LCR$ (lane change right) and apply a labeling process that works as follows: First, for each measurement, the times up to the next lane change to the left neighboring lane ($TTLCL$) and to the right one ($TTLCR$), respectively, are calculated. This is accomplished by extrapolating the measured distances to the lane markings forward in time. As the moment of the lane change, we define the point in time when the vehicle center has just crossed the lane marking. Subsequently, we determine the maneuver labels of each sample based on a defined prediction horizon $T_h$ according to \autoref{eq:labeling}: \begin{equation} L = \begin{cases} LCL,& \text{if } (TTLCL \leq T_h)\ \land\ \\ & \; \; \; \;(TTLCL < TTLCR) \\ LCR,& \text{if } (TTLCR \leq T_h)\ \land\ \\ & \; \; \; \; (TTLCR < TTLCL) \\ FLW,& \text{otherwise}\\ \end{cases} \label{eq:labeling} \end{equation} We decided to use a horizon of 5\,s, as the duration of lane change maneuvers usually ranges from 3\,s to 5\,s (see \cite{woo2017}). Consequently, it is reasonable to label samples as potential lane change samples only up to an upper boundary of 5\,s. Additionally, this value is widely used in the literature as the longest prediction time (e.\,g. \cite{bahram2016, yoon2016} or \cite{woo2017}) and, therefore, allows for comparability. However, note that this style of labeling might result in decreased performance values, as detections occurring slightly more than 5\,s ahead of a lane change count as false positives in the evaluation. \subsection{Data Set Split}\label{subsec:split} As shown in \autoref{fig:preprocessing}, we split our data into several parts after executing the mentioned preprocessing steps.
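The labeling rule in \autoref{eq:labeling} introduced above can be sketched compactly in Python (a minimal sketch; the function and variable names are illustrative and not part of our implementation):

```python
def maneuver_label(ttlcl, ttlcr, t_h=5.0):
    """Assign a maneuver label from the times-to-lane-change.

    ttlcl / ttlcr: time in seconds until the next lane change to the
    left / right neighboring lane; use float('inf') if no lane change
    follows.  t_h: prediction horizon in seconds (5 s in our setup).
    """
    if ttlcl <= t_h and ttlcl < ttlcr:
        return "LCL"  # lane change left within the horizon
    if ttlcr <= t_h and ttlcr < ttlcl:
        return "LCR"  # lane change right within the horizon
    return "FLW"      # otherwise: lane following
```

Samples more than $T_h$ ahead of any lane change thus fall into the $FLW$ class by construction.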
The first split divides our data into one part for the maneuver classification $D^{Ma}$ and another one for the position prediction $D^{Po}$. This allows us to produce models based on independent data sets. An overview of the splits as well as the respective data set sizes and identifiers is given in \autoref{tab:data}. \begin{table*}[!h] \renewcommand{\arraystretch}{1.25} \caption{Data Set Identifiers and Sizes} \label{tab:data} \centering \begin{tabular}{|p{0.65cm}|p{0.65cm}|p{0.65cm}|p{0.65cm}|p{0.65cm}|p{0.65cm}||p{1.3cm}|p{1.3cm}|p{1.3cm}|p{1.3cm}|} \hline \multicolumn{6}{|c||}{Maneuver Data:} & \multicolumn{4}{c|}{Position Data:} \\ \multicolumn{6}{|c||}{$D^{Ma}$} & \multicolumn{4}{c|}{$D^{Po}$} \\ \hline \multicolumn{5}{|c|}{Training \& Validation:} & \centering{Test:} & \multicolumn{3}{c|}{Training:} & \multicolumn{1}{c|}{Test:} \\ \multicolumn{5}{|c|}{$D^{Ma}_{TV}$} & \centering{$D^{Ma}_{Te}$} & \multicolumn{3}{c|}{$D^{Po}_{T}$} & \multicolumn{1}{c|}{$D^{Po}_{Te}$} \\ \multicolumn{5}{|c|}{} & & \multicolumn{3}{c|}{130\,623 Trajectories} & \multicolumn{1}{c|}{20\,000}\\ \multicolumn{5}{|c|}{} & & \multicolumn{3}{c|}{(7\,s; variable sampling time)} & \multicolumn{1}{c|}{Trajectories} \\ \cline{1-9} \centering{$D^{Ma}_{1}$} & \centering{$D^{Ma}_{2}$} & \centering{$D^{Ma}_{3}$} & \centering{$D^{Ma}_{4}$} & \centering{$D^{Ma}_{5}$} & \centering{$D^{Ma}_{6}$} & \centering{$D^{Po}_{T,LCL}$} & \centering{$D^{Po}_{T,FLW}$} & $D^{Po}_{T,LCR}$ & (5\,s; 10\,Hz) \\ \multicolumn{6}{|c||}{Samples per Maneuver Class:} & \multicolumn{3}{c|}{Selected\footnotemark[3] Trajectories:} & \\ \centering{90\,759} & \centering{87\,499} & \centering{89\,048} & \centering{90\,458} & \centering{92\,669} & \centering{87\,308} & \centering{3\,685} & \centering{6\,037} & \centering{5\,071}& \\ \hline \end{tabular} \end{table*} The first part $D^{Ma}$ is then used as follows: To prepare the training, parametrization and evaluation of the developed classifiers as well as to stay 
methodologically consistent, we split data set $D^{Ma}$ once more into six folds\footnote{As shown in the following sections, the amount of folds is a trade-off between computability and correctness}. Of these, we use five folds $D^{Ma}_{TV}$ in \autoref{sec:classification} for the design and parametrization. The remaining fold $D_6^{Ma}=D^{Ma}_{Te}$ is only used for the performance examinations presented in \autoref{sec:class_eval}. The split is performed based on entire situations as described in \cite{schlechtriemen2015will}. This means that the measurements of each situation solely occur in one of the folds. Note that this prevents unrealistically good results, which might otherwise occur due to similar samples from the same time series appearing in both the training and evaluation data. To achieve an even proportion of the three maneuver classes, we balance the number of samples within each fold by a random undersampling strategy. As the prediction problem is extremely unbalanced, as outlined in \cite{altche}, classifiers would otherwise focus on the most frequent maneuver class $FLW$. In our case, approximately 94\,\% of the data points belong to that class. In addition, we only take situations into account that were collected continuously up to the prediction horizon of 5\,s. This ensures that the folds are also balanced over time, which constitutes a prerequisite for performing fair evaluations. This is necessary, as the prediction task is obviously much more demanding when predicting a lane change 4\,s instead of 1\,s in advance. Due to this strategy, the numbers of samples in the six folds differ slightly, but we consider this unproblematic. Overall, $D^{Ma}$ contains approximately 8 hours of highway driving, of which $\frac{2}{3}$ were collected directly during lane changes.
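The situation-based fold split with subsequent random undersampling can be sketched as follows (a minimal Python sketch under the assumption that each sample carries a situation identifier and a maneuver label; the field and function names are illustrative):

```python
import random
from collections import defaultdict


def split_and_balance(samples, n_folds=6, seed=0):
    """Split samples into folds by entire situations, then balance classes.

    samples: list of dicts with keys 'situation_id' and 'label'
    (illustrative field names).  All samples of one situation land in
    the same fold; each fold is then undersampled so that every
    maneuver class present contributes the same number of samples.
    """
    rng = random.Random(seed)
    by_situation = defaultdict(list)
    for s in samples:
        by_situation[s["situation_id"]].append(s)
    situations = list(by_situation)
    rng.shuffle(situations)
    folds = [[] for _ in range(n_folds)]
    for i, sit in enumerate(situations):   # whole situations per fold
        folds[i % n_folds].extend(by_situation[sit])
    balanced = []
    for fold in folds:
        by_class = defaultdict(list)
        for s in fold:
            by_class[s["label"]].append(s)
        n_min = min(len(v) for v in by_class.values())
        sub = []
        for cls in by_class:
            # random undersampling down to the rarest class
            sub.extend(rng.sample(by_class[cls], n_min))
        balanced.append(sub)
    return balanced
```

This keeps all measurements of one time series in a single fold while equalizing the class proportions within each fold.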
The second data set $D^{Po}$, which serves for the training and evaluation of the position prediction, is processed as follows: Initially, we add the lane change probabilities as estimated by the different classifiers to each sample. Furthermore, we only consider measurements that were collected while the vehicle was driven manually. Note that this restriction is essential, as all vehicles of our testing fleet are equipped with an Adaptive Cruise Control (\textit{ACC}) system. Thus, driving in a semi-automated mode is over-represented in our data set compared to reality.\footnote{We do not explicitly filter out \textit{ACC} driving in the data set for maneuver classification, as we can assume that \textit{ACC} is always deactivated during lane changes.} We further split data set $D^{Po}$ into the subsets $D^{Po}_{T}$ for training and $D^{Po}_{Te}$ for evaluating the position predictions (cf. \autoref{sec:traj_pred} and \autoref{sec:traj_eval}). Afterwards, we expand each data point in $D_T^{Po}$ with the desired prediction outputs, i.\,e., the true positions in $x$ and $y$ direction for all times $t \in T_T=$\{-1.0\,s, -0.9\,s, ..., 6.0\,s\}. Note that the samples with negative times and the ones with times $>$5\,s are needed to train the distributions correctly. Strictly limiting the times to a certain range would generate areas in the data space that are difficult to represent with \textit{GMM}s due to discontinuities similar to the ones in the probability dimension (cf. \autoref{subsec:Integrated}). To overcome these problems, we integrated a mechanism performing a subsampling between \mbox{-1\,s} and \mbox{0\,s} as well as between \mbox{5\,s} and \mbox{6\,s} according to a Gaussian distribution (percentiles: $P_{50}=0.0\,s$; $P_{-3\sigma}=-1.0\,s$; equivalent between 5\,s and 6\,s). A further mechanism performs a time interpolation, ensuring that the training data points are distributed continuously along the time dimension.
Accordingly, we also have access to prediction times in between our sampling times during the training process. Moreover, the data points in the position test data set $D^{Po}_{Te}$ are expanded with $x$ and $y$ positions as well as corresponding times $t \in T_{Te}=$\{0.0\,s, 0.1\,s, ..., 5.0\,s\}. \pagebreak Finally, we 'coil' the two data sets $D^{Po}_T$ \& $D^{Po}_{Te}$ such that each of the newly constructed data points contains the features at the start point of the prediction, one corresponding prediction time, and the actual $x$ and $y$ positions at that point in time (in \autoref{fig:preprocessing} this step is called 'Explode Data'). Hence, our data sets are multiplied by a factor of $|T_T|=71$ and $|T_{Te}|=51$, respectively, and are structured as described in \autoref{subsec:traj_measures}. Note that $D^{Po}_{T}$ is re-split along the maneuver labels and undersampled in \autoref{subsec:MOE} in order to train maneuver-specific position prediction experts. \addtocounter{footnote}{1} \footnotetext[\thefootnote]{for details see \autoref{subsec:MOE}} \section{Maneuver Classifier Training}\label{sec:classification} This section gives an overview of the different techniques used for feature selection (cf. \autoref{subsec:feature_selection}), the examined classification algorithms (cf. \autoref{subsec:algorithms}), and the techniques used to tune the respective hyperparameters (cf. \autoref{subsec:hyperparameter}) for the maneuver classification. The corresponding activities are illustrated by \autoref{fig:classifier}. \begin{figure}[!ht] \centering\includegraphics[scale=0.41]{classification_steps.pdf} \caption{Process of training and evaluating maneuver classifiers.} \label{fig:classifier} \end{figure} \subsection{Feature Selection}\label{subsec:feature_selection} This section deals with the task of selecting a meaningful subset of features from the available superset. Such a selection makes sense for two reasons: First, it can improve the prediction performance of the maneuver classifiers.
Second, it can help to reduce the calculation effort, enabling predictions on devices with limited computational power as well. Our main goal here is to improve the overall prediction performance. Note that this slightly contrasts with an overall ranking of the available features, as some of them are highly redundant. Consequently, the most predictive variables shall be selected, while redundant ones are excluded. In the literature, one can find numerous works dealing with feature selection in machine learning applications. In our implementation, we rely on the findings from \cite{guyon2003}. As we claim to solve the underlying classification problem through a systematic machine learning workflow, we start with simple techniques and move towards more sophisticated and computationally expensive ones. Additionally, to demonstrate the performance of the used techniques, we test the classification with the entire superset as a baseline. The superset that contains all features is denoted as $A$ in the following. \pagebreak The first investigated technique is a simple correlation-based feature selection, which evaluates the correlation of all features and then applies a threshold (set to 0.15) to remove from the superset all features showing a very low correlation with the maneuver class. More precisely, we compute Spearman's correlation (see \cite[p. 133 ff]{fahrmeir2016statistik}) between each feature and the time up to the next lane change ($TTLC$). We selected this quantity instead of the maneuver label, as it enables a smooth fade-out. The resulting feature set is denoted as $B$ in the following. \autoref{tab:selection_techniques} summarizes the examined variants and their abbreviations. Finally, the elements of the resulting feature sets can be found in \autoref{tab:featureOverview}.
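The correlation threshold of variant $B$ can be illustrated as follows (a minimal, self-contained Python sketch of Spearman's rank correlation with average ranks and the 0.15 cutoff; all names are illustrative):

```python
def _ranks(x):
    """Average ranks, 1-based; ties receive their mean rank."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    ranks = [0.0] * len(x)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and x[order[j + 1]] == x[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks


def spearman(x, y):
    """Spearman's rho = Pearson correlation of the ranks."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    if sx == 0.0 or sy == 0.0:
        return 0.0  # a constant feature carries no rank information
    return cov / (sx * sy)


def correlation_filter(features, ttlc, threshold=0.15):
    """Keep features whose |Spearman correlation| with TTLC >= threshold.

    features: dict mapping feature name -> list of values;
    ttlc: list of times to the next lane change (same length).
    """
    return [name for name, vals in features.items()
            if abs(spearman(vals, ttlc)) >= threshold]
```

Features that vary monotonically with $TTLC$ survive the filter, while uninformative ones are dropped.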
\begin{table}[!h] \caption{Summary of Examined Feature Selection Techniques} \label{tab:selection_techniques} \centering \begin{tabular}{|c|c|} \hline Variant & Description\\ \hline $A$ & Superset as Baseline\\ $B$ & Correlation Threshold\\ $C$ & \textit{CFS} \\ $D$ & Wrapper Technique \\ \hline \end{tabular} \end{table} The second technique uses the Correlation-based Feature Selection (\textit{CFS}; cf. \cite{hall2000correlation}) and is referred to as $C$ in the following. For this technique, the correlation of entire feature sets instead of single features is calculated. More precisely, for all feature sets $S$, the 'merit' $M_S$, as a measure of the predictive performance, is computed according to \autoref{eq:merit}: \begin{equation} {M}_{S} = \frac{n\,\overline{\rho_{cf}}}{\sqrt{n+n(n-1)\overline{\rho_{ff}}}} \label{eq:merit} \end{equation} $n$ describes the number of features, and $\overline{\rho_{cf}}$ corresponds to the mean correlation of all features with the class label or, in our case, $TTLC$. Variable $\overline{\rho_{ff}}$, in turn, describes the mean feature-feature inter-correlation of all features within $S$. As can be seen from \autoref{eq:merit}, strongly inter-correlated features in a feature set $S$ decrease $M_S$, whereas a stronger correlation with the class label $\overline{\rho_{cf}}$ increases the value of $M_S$. All these computations rely on the assumption that no strong feature inter-correlations are present in the data set, but that instead every relevant feature itself is at least weakly correlated with the class label (see also \cite{hall2000correlation}). To meet the conditions of our data set and to be consistent with variant $B$, we use Spearman's correlation coefficient. As the computation of $M_S$ is not feasible for all possible feature combinations, we use a backward selection strategy that, according to Guyon \cite{guyon2003}, typically provides superior results compared to forward selection.
When applying it in our research, we mitigate possible shortcomings of the \textit{CFS} by applying cross-validation with the five data folds for training and validation ($D^{Ma}_{TV}$), as described in \autoref{subsec:split}. The feature selection techniques described so far are limited in two aspects: Firstly, a proper incorporation of the properties of the used classification algorithm is missing. Secondly, features that are only meaningful in combination with others are not considered in feature sets $B$ and $C$. Therefore, when generating feature set $D$, we apply a wrapper feature selection technique as described in \cite{kohavi1997}. As the training of Random Forests already includes an implicit feature selection, we solely focus on wrapper techniques including the other classifiers presented in \autoref{subsec:algorithms}. The main idea of wrapper techniques is to incorporate the classifier itself as a black box into the feature selection process. Within this process, the prediction performance on a validation data set is used to determine the best feature set for the respective classifier. We build our investigations on a hyperparameter set that was optimized as described in \autoref{subsec:hyperparameter}, with the feature set of variant $C$ being used for optimization. According to the process for deriving $C$, we perform the search for the most descriptive feature set with backward elimination. As a classifier needs to be trained and evaluated for each of the approximately 5\,000 possible subsets, the wrapper technique becomes computationally expensive. To accelerate the computation, we do not perform the validation using cross-validation. Instead, we use one of the data folds constructed in \autoref{subsec:split} for training ($D^{Ma}_1$) and one for validation ($D^{Ma}_2$).
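As a concrete illustration of the \textit{CFS} criterion, the merit from \autoref{eq:merit} and a greedy backward elimination on top of it can be sketched as follows (a minimal Python sketch; the correlation values are assumed to be precomputed, and all names are illustrative):

```python
def cfs_merit(rho_cf, rho_ff):
    """Merit M_S of a feature set S.

    rho_cf: |correlation| of each feature in S with TTLC;
    rho_ff: |pairwise feature-feature correlations| within S.
    """
    n = len(rho_cf)
    mean_cf = sum(rho_cf) / n
    mean_ff = sum(rho_ff) / len(rho_ff) if rho_ff else 0.0
    return (n * mean_cf) / (n + n * (n - 1) * mean_ff) ** 0.5


def backward_select(features, corr_with_class, corr_between):
    """Greedy backward elimination on the CFS merit.

    features: list of names; corr_with_class: name -> |rho_cf|;
    corr_between: frozenset({a, b}) -> |rho_ff| (illustrative layout).
    """
    def merit(S):
        cf = [corr_with_class[f] for f in S]
        ff = [corr_between[frozenset((a, b))]
              for i, a in enumerate(S) for b in S[i + 1:]]
        return cfs_merit(cf, ff)

    current = list(features)
    best = merit(current)
    improved = True
    while improved and len(current) > 1:
        improved = False
        for f in current:
            cand = [g for g in current if g != f]
            m = merit(cand)
            if m > best:  # dropping f improves the merit
                best, current, improved = m, cand, True
                break
    return current, best
```

Weakly predictive or redundant features are removed as long as doing so increases the merit.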
\subsection{Examined Classification Algorithms}\label{subsec:algorithms} For the task of maneuver classification, we consider three different algorithms for evaluation purposes, which have been successfully applied in reference works: \begin{enumerate} \item The first algorithm is based on a Gaussian Na\"{i}ve Bayes (\textit{GNB}) approach using \textit{GMM}s instead of a single Gaussian kernel per class and was presented in \cite{schlechtriemen2014lane}. \item The second algorithm is based on a Random Forest (\textit{RF}) and was presented in \cite{schlechtriemen2015will}. \item The third algorithm is based on a Multilayer Perceptron (\textit{MLP}) approach and was presented similarly in \cite{yoon2016}. As opposed to \textit{GNB} and \textit{RF}, this approach uses scaled features, as suggested by \cite[p. 398 ff]{hastie2001}. In contrast to \cite{yoon2016}, we use a modified labeling and a partly automated strategy to identify an optimal model structure, where we restrict the model to one hidden layer in order to keep the parameter optimization solvable in finite time. \end{enumerate} \subsection{Hyperparameter Optimization}\label{subsec:hyperparameter} To achieve the best possible performance and to enable a fair comparison of the examined classifiers, we optimize their respective hyperparameters. For the \textit{GNB}, this means finding the optimal number of Gaussian kernels $K$ used for each feature and class. A Variational Bayesian Gaussian Mixture Model (\textit{VBGMM}; see \cite{corduneanu2001variational}) is used in this context. This technique was already successfully applied in \cite{wiest2012probabilistic}. The principle behind \textit{VBGMM}s is to fit a distribution over the possible Gaussian mixture distributions using a Dirichlet process. Hence, this technique ensures that the optimal value for $K$ is determined automatically.
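The automatic determination of $K$ can be sketched with a Dirichlet-process mixture, e.\,g. via scikit-learn's \texttt{BayesianGaussianMixture} (a minimal sketch, not our actual implementation; the 0.05 weight cutoff and all names are illustrative choices):

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture


def effective_components(x, max_k=8, seed=0):
    """Fit a Dirichlet-process GMM to 1-D data and count the components
    that actually receive non-negligible weight, i.e. an estimate of K.
    The 0.05 weight cutoff is an illustrative choice."""
    model = BayesianGaussianMixture(
        n_components=max_k,
        weight_concentration_prior_type="dirichlet_process",
        random_state=seed,
        max_iter=500,
    ).fit(np.asarray(x).reshape(-1, 1))
    # unused components are shrunk towards zero weight by the prior
    return int(np.sum(model.weights_ > 0.05))
```

The Dirichlet-process prior suppresses superfluous components, so the number of effectively used kernels emerges from the data instead of being fixed in advance.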
Regarding the \textit{RF} and \textit{MLP} approaches, the parameter optimization is executed for each feature set using a grid search. This means that we vary the parameters and calculate a performance value for each parameter set. For the latter, we calculate the average balanced accuracy (see \autoref{subsec:class_measures}) in a leave-one-out cross-validation manner. Thereby, we use the data of the five folds for training and validation ($D^{Ma}_{TV}$). The parameters to be optimized are summarized in \autoref{tab:params}. \begin{table}[!h] \caption{Optimized Hyperparameters per Classifier} \label{tab:params} \centering \tymin 40pt \begin{tabulary}{0.9\columnwidth}{|C|C|J|} \hline Classifier & Parameter & Description \\ \hline \multirow{8}{*}{\textit{MLP}}& \multirow{3}{*}{$\alpha$} & \textit{Step size}: Controls how fast the weights of the network are adapted towards the direction of the gradient \\ \cline{2-3} & & \\[-0.27cm] & \multirow{3}{*}{$n_{hidd}$} & \textit{Hidden neurons}: Describes the structure of the network as we are only working with one hidden layer \\ \cline{2-3} & & \\[-0.27cm] & \multirow{2}{*}{$n_{iter}$} & \textit{Iterations}: Maximum number of training cycles \\ \hline & & \\[-0.27cm] \multirow{6}{*}{\textit{RF}}& \multirow{2}{*}{$n_{tree}$} & \textit{Trees}: Number of parallel trees in the forest \\ \cline{2-3} & & \\[-0.27cm] & \multirow{2}{*}{$n_{splt}$} & \textit{Splits}: Maximum number of splits in each tree \\ \cline{2-3} & & \\[-0.27cm] & \multirow{2}{*}{$n_{smpl}$} & \textit{Samples}: Minimum number of samples necessary for a split \\ \hline \end{tabulary} \end{table} So far, we have constructed different feature sets (cf. \autoref{subsec:feature_selection}) and optimized the hyperparameters for the different classification algorithms (cf. \autoref{subsec:algorithms} \& \autoref{subsec:hyperparameter}).
Subsequently, we execute a second training step with a larger amount of data for all algorithms, using the optimized feature sets and hyperparameters. The enlargement of the data set is achieved by using all five folds that were previously used in the cross-validation ($D^{Ma}_{TV}$). Note that through this step we derive the final models for the classifier evaluation (cf. \autoref{sec:class_eval}). \section{Maneuver Classifier Evaluation}\label{sec:class_eval} This section presents the experimental results obtained with the trained classification models (cf. \autoref{sec:classification}). \autoref{subsec:class_measures} introduces the performance measures used, whereas \autoref{subsec:class_results} presents and discusses the results measured with the constructed test data set (cf. \autoref{subsec:dataset}). \subsection{Performance Measures}\label{subsec:class_measures} To assess the performance of the developed classifiers, several metrics are needed, as we are simultaneously focusing on different objectives. In particular, we are interested in predicting lane changes not only with high accuracy, but also as early as possible before their execution. To reflect this, we use the balanced accuracy ($BACC$), which enables us to perform an even weighting of the classification performance over the three maneuver classes. Basically, we use the definition presented in \cite{brodersen2010balanced}, but in a generalized form for multiclass problems (cf. \autoref{eq:baccmulti}): \begin{equation} BACC = \frac{1}{|M|} \cdot \sum_{m \in M} \frac{TP_m}{P_m} \label{eq:baccmulti} \end{equation} $M$ is defined according to \autoref{eq:maneuvers}. Moreover, $TP_m$ corresponds to the number of true positives for class $m$ and $P_m$ to the number of samples truly belonging to class $m$ (positives). Thereby, the classifiers assign each sample to the class with the highest probability value.
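The multiclass balanced accuracy from \autoref{eq:baccmulti} can be sketched as follows (a minimal Python sketch; names are illustrative):

```python
def balanced_accuracy(y_true, y_pred, classes=("LCL", "FLW", "LCR")):
    """Average of the per-class recalls TP_m / P_m, weighting all
    maneuver classes evenly regardless of their frequency."""
    recalls = []
    for m in classes:
        positives = [i for i, y in enumerate(y_true) if y == m]
        if not positives:
            continue  # class absent from the evaluation data
        tp = sum(1 for i in positives if y_pred[i] == m)
        recalls.append(tp / len(positives))
    return sum(recalls) / len(recalls)
```

Unlike the plain accuracy, this measure is not dominated by the heavily over-represented $FLW$ class.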
Additionally, we use the Receiver Operating Characteristic (\textit{ROC}) and the Area Under the \textit{ROC} Curve (\textit{AUC}), which both are widely used metrics in this domain (e.\,g. \cite[p. 180 ff]{murphy2012machine}). As opposed to the $BACC$, the \textit{ROC} curve is originally intended to assess binary classifiers. Accordingly, we transform our three-class problem into three binary classification problems. In contrast to the $BACC$, the \textit{ROC} curves constructed this way enable us to examine the classification performance at different working points (WP). For example, this property allows us to assess the performance for the maneuver classes $LCL$ and $LCR$ with more conservative classifier parametrizations and, thus, fewer false positives. Additionally, the \textit{AUC} helps to analyze the performance at all possible working points at once. Besides, metrics that enable us to analyze the technically possible prediction time horizon are needed. As the point in time being referenced in this context is essential and most sources (e.\,g. \cite{weidl2018}, \cite{yoon2016} and \cite{woo2017}) are not very exact in this respect, we introduce the two metrics $\tau_f$ and $\tau_c$ (cf. \autoref{tab:tau}). \begin{table}[!h] \caption{Definition of the Detection Time Metrics} \label{tab:tau} \centering \tymin 40pt \begin{tabulary}{0.9\columnwidth}{|C|J|} \hline Metric & Definition \\ \hline \multirow{3}{*}{$\tau_{f}$} & Time between the moment the vehicle center crosses the centerline and the first detection of the correct maneuver class (as presented in \cite{wissing2017lane}) \\ \hline & \\[-0.27cm] \multirow{5}{*}{$\tau_{c}$} & Time between the moment the vehicle center crosses the centerline and the moment at which the classifier becomes certain about its decision for a specific maneuver class and does not change it till the end of the situation.
Note that this is a far stricter definition than that of $\tau_f$\\ \hline \end{tabulary} \end{table} As opposed to the $BACC$ evaluation, for which an unambiguous class assignment becomes necessary, the class assignment is at this point conducted in a way that matches the binary evaluation in the \textit{ROC} curve: For the classes $LCL$ and $LCR$, respectively, we select a binary decision threshold that keeps the false positive rate below 1\,\%. The resulting working points are presented later on in \autoref{fig:ROC} along with the \textit{ROC} curves. The detection times calculated this way reflect an evaluation with a limited false positive rate and, hence, at a similar working point for the different classifiers. Note that this ensures a fair evaluation. We decide on a very low false positive rate, as the system should not produce too many erroneous lane change detections. Remember that, in practice, lane changes occur very rarely compared to lane following. \subsection{Results \& Discussion}\label{subsec:class_results} \autoref{tab:classifierOverview} shows the results ($BACC$, $AUC$, $\tau$) for the different classifiers and feature sets, measured based on the maneuver test data set $D^{Ma}_{Te}$. Presumably due to the large number of samples, a favorable classifier selection and parametrization seem to have a significantly higher impact on the classification performance than a clever feature selection. Note that this can be concluded, as the classifiers working with feature sets $B$ and $C$ perform only slightly worse regarding $BACC$ and $AUC$ than the other classifiers. However, applying a feature selection still remains reasonable, as it ensures shorter computation times. In addition, the results indicate that the feature selection contributes to an increase in the prediction times in most cases. Note that this does not apply to the \textit{RF}, as this classifier performs an implicit feature selection.
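The selection of a working point with a bounded false positive rate can be sketched as follows (a minimal Python sketch, assuming that higher scores indicate the positive class and that classification uses a strict 'score $>$ threshold' decision; names are illustrative):

```python
def threshold_for_fpr(scores, labels, positive, max_fpr=0.01):
    """Pick the lowest decision threshold whose false positive rate on
    (scores, labels) stays at or below max_fpr.

    A sample is classified as `positive` iff its score exceeds the
    returned threshold.
    """
    negatives = sorted((s for s, l in zip(scores, labels) if l != positive),
                       reverse=True)
    if not negatives:
        return min(scores)
    # number of negatives we may still accept as false positives
    allowed = int(max_fpr * len(negatives))
    # threshold just above the (allowed+1)-th highest negative score
    return negatives[allowed] + 1e-12
```

Applied per class ($LCL$ and $LCR$), this yields comparable working points for the different classifiers when measuring $\tau_f$ and $\tau_c$.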
\begin{table}[!ht] \caption{Summary of Examined Classifiers with Preferred Hyperparameters} \label{tab:classifierOverview} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline Classi- & Feature & \multicolumn{4}{c|}{Performance on Test Data}\\ fier & Set & & \multicolumn{3}{c|}{per Class (\textit{AUC}; $\overline{\tau_{f}}$; $\overline{\tau_{c}}$)} \\ & & BACC & LCL & FLW & LCR \\ \hline \multirow{12}{*}{\textit{GNB}} & & & 0.924 & 0.815 & 0.905 \\ & $A$ & 0.704 & 2.86$\pm$1.46\,s & - & 2.92$\pm$1.42\,s \\ & & & 1.86$\pm$1.40\,s & - & 2.16$\pm$1.25\,s \\ \cline{2-6} & & & 0.910 & 0.801 & 0.895 \\ & $B$ & 0.692 & 2.82$\pm$1.38\,s & - & 2.82$\pm$1.32\,s \\ & & & 1.91$\pm$1.24\,s & - & 2.06$\pm$1.09\,s \\ \cline{2-6} & & & 0.874 & 0.770 & 0.884 \\ & $C$ & 0.651 & 2.57$\pm$1.31\,s & - & 2.73$\pm$1.31\,s\\ & & & 1.85$\pm$1.21\,s & - & 1.97$\pm$1.10\,s\\ \cline{2-6} & & & \textbf{0.943} & \textbf{0.864} & \textbf{0.929} \\ & $\boldsymbol{D}_{GNB}$ & \textbf{0.772} & \textbf{3.26}$\pm$1.28\,s & - & \textbf{3.11}$\pm$1.14\,s\\ & & & \textbf{2.41}$\pm$1.29\,s & - & \textbf{2.61}$\pm$1.03\,s\\ \hline \multirow{12}{*}{\textit{MLP}}& & & 0.973 & 0.909 & \textbf{0.961} \\ & $A$ & 0.822 & 3.67$\pm$1.26\,s& - & 3.35$\pm$1.19\,s\\ & & & 2.58$\pm$1.51\,s & - & 2.81$\pm$0.99\,s \\ \cline{2-6} & & & 0.974 & 0.912 & 0.958 \\ & $B$ & \textbf{0.831} & 3.73$\pm$1.07\,s& - & \textbf{3.60}$\pm$1.15\,s\\ & & & \textbf{2.72}$\pm$1.40\,s & - & \textbf{2.86}$\pm$1.10\,s \\ \cline{2-6} & & & 0.966 & 0.892 & 0.953 \\ & $C$ & 0.798 & 3.46$\pm$1.07\,s& - & 3.47$\pm$1.11\,s\\ & & & 2.69$\pm$1.10\,s & - & 2.81$\pm$0.89\,s \\ \cline{2-6} & & & \textbf{0.976} & \textbf{0.915} & 0.960 \\ & $\boldsymbol{D}_{MLP}$ & \textbf{0.831} & \textbf{3.79}$\pm$1.16\,s& - & 3.33$\pm$1.18\,s\\ & & & \textbf{2.72}$\pm$1.45\,s & - & 2.68$\pm$0.98\,s \\ \cline{2-6} \hline \multirow{9}{*}{\textit{RF}} & \cellcolor{Petrol} & \cellcolor{Petrol} & \cellcolor{Petrol} \textbf{0.978} & \cellcolor{Petrol} \textbf{0.925} & 
\cellcolor{Petrol} \textbf{0.968}\\ & \cellcolor{Petrol} $\boldsymbol{A}$ & \cellcolor{Petrol} \textbf{0.838} & \cellcolor{Petrol} \textbf{3.81}$\pm$1.14\,s & - & 3.60$\pm$1.19\,s\\ & \cellcolor{Petrol} & \cellcolor{Petrol} & \cellcolor{Petrol} \textbf{3.11}$\pm$1.35\,s & - & \cellcolor{Petrol} \textbf{3.06}$\pm$1.17\,s \\ \cline{2-6} & & & 0.976 & 0.918 & 0.959 \\ & $B$ & 0.834 & 3.73$\pm$1.13\,s& - & \cellcolor{Petrol} \textbf{3.61}$\pm$1.17\,s\\ & & & \cellcolor{Petrol} \textbf{3.11}$\pm$1.29\,s & - & \cellcolor{Petrol} \textbf{3.06}$\pm$1.03\,s \\ \cline{2-6} & & & 0.964 & 0.893 & 0.953 \\ & $C$ & 0.799 & 3.45$\pm$1.07\,s& - & 3.49$\pm$1.10\,s\\ & & & 2.73$\pm$1.14\,s & - & 2.93$\pm$0.92\,s \\ \hline \end{tabular} \end{table} \begin{figure*}[!ht] \begin{subfigure}[]{} \includegraphics[width=0.3\textwidth]{roc_gnb_slim.pdf} \end{subfigure} \begin{subfigure}[]{} \includegraphics[width=0.3\textwidth]{roc_mlp_slim.pdf} \end{subfigure} \begin{subfigure}[]{} \includegraphics[width=0.3\textwidth]{roc_rf_slim.pdf} \caption{\textit{ROC} curves for the developed maneuver classifiers with their respective best parameter sets and hyperparameters.} \label{fig:ROC} \end{subfigure} \end{figure*} \autoref{fig:ROC} additionally shows the \textit{ROC} curves for the respective best combination of classifier and feature set regarding $BACC$ and $AUC$ for each of the three classifiers. As another result of our investigations, the classification performance for the lane following maneuver ($FLW$), which is neglected by most researchers in the literature, is notably worse than for the lane changing maneuvers for all considered algorithms. This can be explained by the fact that nearly every sample that cannot be confidently assigned to one of the lane change maneuvers is classified as lane following, whereas confusions between a lane change to the right and one to the left are very rare. Thus, a significantly larger number of false positives arises for maneuver class $FLW$.
In addition, we could reproduce the findings of \cite{bahram2016}, which showed that lane changes to the left are easier to predict than those to the right. One may explain this phenomenon with the observation that lane changes to the right are often motivated by the intention to leave the highway. The latter is considerably harder to predict than lane changes to the left, which are often performed to overtake slower leading vehicles. Besides, it can be observed that the classification problem remains solvable even with a significantly reduced number of features, as shown by the \textit{MLP} classifier with feature set $D_{MLP}$, which only includes 24 features. This illustrates that a reduced number of features can even improve performance due to the lower dimension of the input space. This can be explained by the fact that numerous features, which we expected to provide insights into specific lane changing situations, seem to have nearly no effect on the general behavior in highway situations. Examples of such features are summarized in \autoref{tab:special_features}. \begin{table}[!ht] \caption{Contextual Features Solely Impacting Special Situations} \label{tab:special_features} \centering \begin{tabular}{|c|c|} \hline Features & Providing Insights on\\ \hline fog lamps, wiper, ... & weather conditions\\ \hline tunnel, bridge, ... & structural characteristics\\ \hline lane marking color, ... & road works\\ \hline country, distance to next & \multirow{2}{*}{geographic specialties} \\ highway exit/approach, ... & \\ \hline \end{tabular} \end{table} An explanation of this behavior is that situations affected by these features occur even more rarely than lane changes. However, as automated driving is extremely demanding precisely in these situations, additional investigations are needed in these cases (cf. \autoref{sec:conclusion}).
It is noteworthy that the detection times $\tau_f$ and $\tau_c$ are limited to a maximum of 5\,s due to our evaluation methodology. Therefore, the average values $\overline{\tau_f}$ and $\overline{\tau_c}$ presented in \autoref{tab:classifierOverview} will even be exceeded in practice. To substantiate this assumption, \autoref{fig:hist_tau} shows a histogram of the detection times for the \textit{RF}. The distribution shows numerous situations that are detected 5 or more seconds in advance. \begin{figure}[!ht] \centering \begin{subfigure}[]{ \includegraphics[width=0.235\textwidth]{hist_tau_f_lcl_rf_all.pdf} \end{subfigure} \hfill \begin{subfigure}[]{ \includegraphics[width=0.235\textwidth]{hist_tau_c_lcl_rf_all.pdf} \end{subfigure} \caption{Histogram of detection times $\tau_f$ (a) and $\tau_c$ (b) for \textit{RF} for maneuver class $LCL$ with feature set $A$.}\label{fig:hist_tau} \end{figure} \pagebreak Altogether, our investigations show that a systematic machine learning workflow, combined with a large amount of data, is able to outperform current state-of-the-art approaches significantly. This becomes obvious when looking at the \textit{AUC} in comparison to other approaches. \autoref{tab:reference_AUC} shows that our approach outperforms the others, although we work with a significantly larger prediction horizon, which, as mentioned above, makes the classification problem more demanding. Finally, note that the mentioned state-of-the-art approaches were designed and evaluated on considerably smaller data sets.
\begin{table}[!ht] \caption{\textit{AUC} Values in Comparison to Reference Works} \label{tab:reference_AUC} \centering \begin{tabular}{|c|c|c|c|c|} \hline Approach & \multicolumn{3}{c|}{\textit{AUC}} & Prediction Horizon\\ & LCL & FLW & LCR & \\ \hline \cite{schlechtriemen2015will} & 0.863 & 0.661 & 0.836 & 5.0\,s \\ \hline \cite{schlechtriemen2014lane} & 0.970 & - & 0.991 & 2.0\,s \\ \hline \cite{bahram2016} & 0.947 & - & 0.942 & 2.5\,s \\ \hline \cite{wissing2018trajectory} & 0.934 & - & 0.993 & 2.0\,s \\ \hline \textit{MLP} & 0.976 & 0.915 & 0.960 & 5.0\,s\\ \hline \textit{RF} & 0.978 & 0.925 & 0.968 & 5.0\,s\\ \hline \end{tabular} \end{table} Our investigations show that the \textit{GNB} classifier performs significantly worse than the two other approaches (i.\,e. \textit{MLP} and \textit{RF}). Thus, we only use these two classifiers in our further studies. Additionally, we are restricting ourselves to those feature sets and hyperparameter sets showing the best performance (cf. \autoref{tab:selected_clf_params}). \begin{table}[!ht] \caption{Selected Feature Sets and Hyperparameters per Classifier} \label{tab:selected_clf_params} \centering \begin{tabular}{|c|c|c|} \hline Classifier & Parameter & Value \\ \hline & & \\[-0.27cm] \multirow{4}{*}{\textit{MLP}} & Feature Set & $D_{MLP}$ \\ & $\alpha$ & 0.02 \\ & $n_{hidd}$ & 27 \\ & $n_{iter}$ & 800 \\ \hline & & \\[-0.27cm] \multirow{4}{*}{\textit{RF}} & Feature Set & $A$ \\ & $n_{tree}$ & 128 \\ & $n_{splt}$ & 16 \\ & $n_{smpl}$ & 100 \\ \hline \end{tabular} \end{table} \section{Position Predictor Training}\label{sec:traj_pred} \begin{figure}[!ht] \centering\includegraphics[scale=0.41]{trajectory_prediction_steps.pdf} \caption{Steps to train and evaluate the position predictors.} \label{fig:traj_pred} \end{figure} This section deals with the training of the models for position prediction. In particular, we show how to determine the \textit{GMM} parameters $\Theta$. 
\autoref{subsec:MOE} relies on the Mixture of Experts (\textit{MOE}) approach, which was introduced in \cite{schlechtriemen2015will} for lateral predictions and which uses Gaussian Mixture Regression (cf. \autoref{eq:MOE}). An alternative approach is presented in \autoref{subsec:Integrated}. As opposed to the \textit{MOE} approach, it solves the problem in one processing step (cf. \autoref{eq:IGMM}). The entire procedure, including the evaluation process (cf. \autoref{sec:traj_eval}), is depicted in \autoref{fig:traj_pred}. \subsection{Mixture of Experts Approach}\label{subsec:MOE} \begin{figure}[!ht] \centering\includegraphics[scale=0.41]{MOE.pdf} \caption{Illustration of the Mixture of Experts (\textit{MOE}) approach.} \label{fig:MOE} \end{figure} To train the experts for the three maneuver classes, we divide the data set (cf. \autoref{subsec:split}) along the maneuver labels (cf. \autoref{fig:traj_pred}). Subsequently, we perform a random undersampling of the data points for the $FLW$ maneuver class to obtain approximately the same number of samples as for the other two classes. The basic idea behind this step is that the regression problem for the $FLW$ class is less complex than for the two other classes and should thus be solvable with the same amount of data. Among other benefits, this data reduction helps to speed up training. As a consequence, the number of $FLW$ samples is decreased by approximately 95\,\% and the data sets $D^{Po}_{T,LCL}$, $D^{Po}_{T,FLW}$, and $D^{Po}_{T,LCR}$ are constructed (cf. \autoref{tab:data}). Afterwards, we train an expert \textit{GMM} with each of these data sets. These experts are later used in the \textit{MOE} approach (cf. \autoref{fig:MOE}).
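The random undersampling step for the $FLW$ class can be sketched as follows. This is an illustrative helper, not the authors' code; the function name and the exact target count (the mean sample count of the two lane-change classes) are our assumptions.

```python
import numpy as np

def undersample_flw(X, labels, rng=None):
    """Randomly undersample the over-represented FLW class so that its
    sample count roughly matches that of the two lane-change classes.
    Illustrative helper; the exact target count is an assumption."""
    rng = np.random.default_rng(rng)
    idx_lcl = np.flatnonzero(labels == "LCL")
    idx_lcr = np.flatnonzero(labels == "LCR")
    idx_flw = np.flatnonzero(labels == "FLW")
    # target: the mean sample count of the two lane-change classes
    n_keep = (idx_lcl.size + idx_lcr.size) // 2
    keep_flw = rng.choice(idx_flw, size=min(n_keep, idx_flw.size), replace=False)
    keep = np.sort(np.concatenate([idx_lcl, idx_lcr, keep_flw]))
    return X[keep], labels[keep]
```

Sampling without replacement preserves the diversity of the retained $FLW$ samples while discarding roughly 95\,\% of them, matching the reduction reported above.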
We choose a maximum number of $K=50$ mixture components as well as full covariance matrices\footnote{Preliminary investigations showed that \textit{GMM}s with diagonal covariance matrices are faster to fit, but are by far less accurate.}, and fit the \textit{GMM} in a variational manner again. Besides, we use the following input-feature set $F^I_y$ and the true position $y$ at a defined prediction time $t$ to train the experts in lateral direction (cf. \autoref{eq:Featureset_y}): \begin{equation} F^I_y = \{v_y,\ d_y^{cl}\} \label{eq:Featureset_y} \end{equation} Regarding the prediction in longitudinal direction, we need to distinguish whether or not a preceding vehicle is present. If no vehicle is in sensor range, both the relative speed and distance for that vehicle are set to default values. As involving these default values in the training of the models would lead to poor fits, the input feature sets $F_{x, Obj}^I$ and $F_{x, \overline{Obj}}^I$ are defined as follows (cf. \autoref{eq:Featureset_x_Obj} \& \autoref{eq:Featureset_x_noObj}): \begin{equation} F_{x, Obj}^I = \{v_x,\ a_x,\ d_v^{rel, f},\ v_v^{rel, f}\} \label{eq:Featureset_x_Obj} \end{equation} \begin{equation} F_{x, \overline{Obj}}^I = \{v_x,\ a_x\} \label{eq:Featureset_x_noObj} \end{equation} As shown in \cite{schlechtriemen2014probabilistic}, the prediction performance for the longitudinal direction can be significantly increased by learning the deviation from the constant velocity prediction $\hat{x}_{CV}$ instead of the true target position $x$. Consequently, we use the output dimensions $F^O_x$ (cf. \autoref{eq:Featureset_output_x_Obj}): \begin{equation} F^O_x = \{x-\hat{x}_{CV},\ t\} \label{eq:Featureset_output_x_Obj} \end{equation} \subsection{Integrated Approach}\label{subsec:Integrated} As an alternative to the \textit{MOE} approach, this section presents an integrated approach, which uses the unsplit data set $D^{Po}_T$ (cf.
\autoref{tab:data}) and expands the feature sets ($F_{x, Obj}^I, F_{x, \overline{Obj}}^I, F_{y}^I$) with the maneuver probabilities $P_{LCL}$ and $P_{LCR}$ (cf. \autoref{fig:Integrated}). $P_{FLW}$ is left out here, as this information would be redundant to the one provided by $P_{LCL}$ and $P_{LCR}$, and we want to keep the models' dimension as low as possible. Consequently, the task of considering the maneuver probabilities is directly integrated into the model. The resulting one-block solution is easier both to implement and to use. In this context, we discovered that \textit{GMM}s are not well suited to fit probabilities bounded to values between 0 and 1. This is especially the case if most of the probabilities cluster near the extreme values (cf. \autoref{fig:density_plcl} (a)). Hence, we expand our data set with a duplicate of each data point containing probability values, which are mirrored at 0 for original probabilities below 0.5 and at 1 for all other original probabilities. This way, we are able to generate the density shown in \autoref{fig:density_plcl} (b), which we identified as easier to fit with \textit{GMM}s. Note that before our adjustment, the density contained an abrupt jump, especially at $P_{LCL}=0$. As such discontinuities are only representable by numerous Gaussian components, which are symmetrical and smooth by definition, many components needed in other areas of the data space would be wasted for this purpose.
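The mirroring adjustment described above can be sketched as follows; `mirror_probabilities` is an illustrative helper name, and the mirroring rules (at 0 for $p<0.5$, i.e. $p \mapsto -p$, and at 1 otherwise, i.e. $p \mapsto 2-p$) follow the description in the text.

```python
import numpy as np

def mirror_probabilities(p):
    """Append a mirrored duplicate of each probability sample: values below
    0.5 are mirrored at 0 (p -> -p), all others at 1 (p -> 2 - p). This
    removes the abrupt density jump at the boundaries before GMM fitting."""
    p = np.asarray(p, dtype=float)
    mirrored = np.where(p < 0.5, -p, 2.0 - p)
    return np.concatenate([p, mirrored])
```

After this augmentation, the density is locally symmetric around 0 and 1, so a Gaussian component centered at a boundary no longer has to approximate a one-sided jump.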
\begin{figure}[!t] \centering\includegraphics[scale=0.41]{Integrated.pdf} \caption{Illustration of the integrated approach.} \label{fig:Integrated} \end{figure} \begin{figure}[!ht] \centering \begin{subfigure}[]{ \includegraphics[width=0.23\textwidth]{density_plcl_before.png} \end{subfigure} \hfill \begin{subfigure}[]{ \includegraphics[width=0.23\textwidth]{density_plcl_after.png} \end{subfigure} \caption{Density of $P_{LCL}$ before (a) and after (b) adjustment.}\label{fig:density_plcl} \end{figure} The actual training of the integrated \textit{GMM} is performed similarly to the training of the experts, in a variational fashion with $K=50$ components and full covariance matrices, but with the entire training data set. Thus, no undersampling procedures are applied, and the unbalanced nature of the maneuver classes and their actual frequencies are preserved. \section{Position Estimation Evaluation}\label{sec:traj_eval} In order to evaluate the position predictions, one first has to decide which of the considered classifiers fits best as gating network in the Mixture of Experts (\textit{MOE}) approach and as probability input in the integrated approach, respectively. Hence, we calculate the average log-likelihoods $\overline{\mathcal{L}}$ on the entire position test data set $D^{Po}_{Te}$ (cf. \autoref{subsec:split}). Note that this data set is not balanced according to the maneuver labels, as also suggested in \cite{deo2018convolutional}. In particular, the unbalanced nature of the data allows us to draw general conclusions about the performance, independent of the respective driving maneuver. In this context, the use of the average log-likelihood as quality criterion for comparing different approaches is beneficial, as it rates the quality of the predicted probability density instead of assessing only the ability to predict one single position with maximized accuracy. Moreover, the log-likelihood is exactly the value that is maximized when fitting a \textit{GMM}.
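As a concrete reading of this criterion, the per-sample average log-likelihood under a mixture with full covariance matrices can be evaluated with a numerically stable log-sum-exp over the weighted component densities. The following is a minimal numpy sketch, not the authors' implementation:

```python
import numpy as np

def gmm_avg_loglik(X, weights, means, covs):
    """Average per-sample log-likelihood of data X under a Gaussian mixture
    with full covariance matrices, computed with a numerically stable
    log-sum-exp over the weighted component densities."""
    X = np.atleast_2d(np.asarray(X, dtype=float))
    n, d = X.shape
    comp_ll = []
    for w, mu, cov in zip(weights, means, covs):
        diff = X - np.asarray(mu)
        cov = np.atleast_2d(cov)
        cov_inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        # squared Mahalanobis distance of every sample to this component
        maha = np.einsum("ni,ij,nj->n", diff, cov_inv, diff)
        comp_ll.append(np.log(w) - 0.5 * (d * np.log(2 * np.pi) + logdet + maha))
    comp_ll = np.stack(comp_ll)                       # shape (K, n)
    m = comp_ll.max(axis=0)
    ll = m + np.log(np.exp(comp_ll - m).sum(axis=0))  # log-sum-exp
    return ll.mean()
```

Because the criterion is an average over samples, it is directly comparable between approaches evaluated on the same test set, which is exactly how it is used in the table below.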
However, as $\overline{\mathcal{L}}$ cannot be interpreted as a physical quantity, it is solely useful for comparison purposes. As we are also interested in assessing the performance in terms of the spatial error, and to achieve comparability, we additionally investigate this quantity for the best-performing approach in the following subsections. \autoref{tab:likelihoods} shows the per-sample log-likelihood of different approaches for the longitudinal ($\overline{\mathcal{L}_x}$) as well as the lateral ($\overline{\mathcal{L}_y}$) direction. In this context, we use the already introduced classifiers \textit{RF} and \textit{MLP} in combination with four different strategies as weighting function $w_m(I)$ to combine the experts' position estimates, as introduced in \autoref{eq:MOE}: \begin{enumerate} \item Raw probabilities (Raw): This strategy directly uses the raw probabilities $P_m^{clf}(I)$ issued by the classifiers as gating probabilities. This means that we concatenate the three \textit{GMM}s and multiply the mixture weights with the probabilities issued by the respective classifier: $w_{m}^{Raw}(I) = P_m^{clf}(I)$. \item Winner Takes it All (WTA): This strategy uses the outputs of the \textit{GMM} for the maneuver class with the largest probability according to the respective classifier (cf. \autoref{eq:w_wta}). \end{enumerate} \begin{equation} w_{m}^{WTA}(I) = \begin{cases} 1,& \text{if }P_m^{clf}(I)=\max\limits_{\{q \in M\}} P_{q}^{clf}(I)\\ 0,& \text{else} \end{cases} \label{eq:w_wta} \end{equation} \begin{enumerate} \setcounter{enumi}{2} \item Prior Weighted Raw probabilities (PW-Raw): This strategy accounts for the fact that the classifiers were trained on a balanced data set. Thus, it multiplies the raw probabilities with the prior probabilities of each maneuver class: $w_{m}^{PWRaw}(I) = norm(P_m^{clf}(I) \cdot \pi_m)$.
\item Integrated \textit{GMM} (I-GMM): This strategy directly uses the integrated approach presented in \autoref{subsec:Integrated} to predict the probability distributions and follows \autoref{eq:IGMM}. \end{enumerate} To demonstrate the benefits of our approach, which combines maneuver classification and position prediction, we additionally analyze its performance compared to reference strategies. First, we use the labels as a perfect classifier according to \autoref{eq:labels}: \begin{equation} w_{m}^{Labels} = \begin{cases} 1,& \text{if } m=L\\ 0,& \text{else} \end{cases} \label{eq:labels} \end{equation} Moreover, we use the pure prior probabilities \mbox{($\pi_{LCL}=\pi_{LCR}=0.03; \pi_{FLW}=0.94$)} as most naive classifier ($w_m^{Priors}=\pi_m$) and a strategy without a classifier, referred to as NOCLF in the following. \begin{table}[!ht] \caption{Per Sample Log-Likelihoods with Different Classifiers and \textit{MOE} strategies} \label{tab:likelihoods} \centering \begin{tabular}{|c|c|c|c|} \hline &&&\\[-0.8em] Classifier & Strategy & $\overline{\mathcal{L}_x}$ & $\overline{\mathcal{L}_y}$ \\ & & (normalized [\%]) & (normalized [\%]) \\ \hline Labels & - & -14.066 (100) & -7.547 (100) \\ \hline Priors & - & -13.273 (106.0) & -7.769 (97.1) \\ \hline NOCLF & - & \cellcolor{Petrol} \textbf{-13.171 (106.8)} & -7.762 (97.2) \\ \hline \multirow{ 4}{*}{\textit{MLP}} & Raw & -13.667 (102.9) & -7.900 (95.5) \\ & WTA & -16.279 (86.4) & -8.793 (85.8) \\ & PW-Raw & \textbf{-13.329 (105.5)} & \cellcolor{Petrol} \textbf{-7.608 (99.2)} \\ & I-GMM & -13.354 (105.3) & -7.691 (98.1)\\ \hline \multirow{ 4}{*}{\textit{RF}} & Raw & -13.568 (103.7) & -7.781 (95.9) \\ & WTA & -15.685 (89.7) & -8.369 (90.2) \\ & PW-Raw & -13.280 (105.9) & -7.626 (99.0) \\ & I-GMM & \textbf{-13.207 (106.5)} & \textbf{-7.611 (99.2)} \\ \hline \end{tabular} \end{table} For the longitudinal direction, \autoref{tab:likelihoods} shows that the reference solution without any previous maneuver classification 
(\textit{NOCLF}) is able to produce slightly better results than the other combinations. Although it may seem trivial that lane changes do not have to be taken into account when predicting the longitudinal behavior, this is noteworthy, as our expectation beforehand was that lane changes to the left mostly go along with an acceleration, whereas braking actions are extremely rare. \begin{figure*}[!ht] \centering \includegraphics[width=0.85\textwidth]{all_xy.pdf} \caption{Visualization of the error distribution (left) in longitudinal and lateral direction and the median lateral error as function of the prediction time (right).} \label{fig:err_boxplots} \vspace{-4mm} \end{figure*} \pagebreak By contrast, the benefits of the Mixture of Experts (\textit{MOE}) approach come into effect for the lateral direction. As shown in \autoref{tab:likelihoods}, the combination of prior weighting and \textit{MLP} probabilities performs best. Furthermore, all combinations involving the integrated approach perform only slightly worse or even better (\textit{RF}) than the combinations using prior weighted probabilities. As a benefit, these models are easier to use and are more robust against poor or uncalibrated maneuver probabilities, without needing an additional calibration step. This can be explained by the fact that these models perform an implicit probability calibration during the training of the \textit{GMM}. Moreover, we learned that the WTA strategy has no practical relevance, as it does not necessarily produce continuous position predictions over consecutive time steps, as the other strategies do by definition. Besides, in case of a misclassification, the WTA strategy solely asks one specific expert model, which might not be applicable in that area of the data space, which clearly decreases the overall performance.
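The classifier-based weighting strategies compared above can be sketched as follows. Function names are ours; `p_clf` denotes the vector of classifier probabilities $P_m^{clf}(I)$ over the maneuver classes and `priors` the class priors $\pi_m$.

```python
import numpy as np

def w_raw(p_clf):
    """Raw: use the classifier probabilities directly as gating weights."""
    return np.asarray(p_clf, dtype=float)

def w_wta(p_clf):
    """Winner Takes it All: weight 1 for the most probable maneuver class,
    0 for all others."""
    w = np.zeros(len(p_clf))
    w[np.argmax(p_clf)] = 1.0
    return w

def w_pw_raw(p_clf, priors):
    """Prior Weighted Raw: rescale by the class priors and renormalize,
    undoing the effect of training the classifier on balanced data."""
    w = np.asarray(p_clf, dtype=float) * np.asarray(priors, dtype=float)
    return w / w.sum()
```

The hard 0/1 weights of `w_wta` are what make its predictions discontinuous over consecutive time steps, while the two soft strategies blend the experts smoothly.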
In the following, we investigate the spatial errors of the best combinations (lateral: \textit{MLP} classifier with PW-Raw strategy; longitudinal: \textit{NOCLF}), as previously introduced. For this purpose, we present the applied performance measures in \autoref{subsec:traj_measures} and then show the obtained results in \autoref{subsec:traj_results}. \subsection{Performance Measures}\label{subsec:traj_measures} To measure the spatial performance of our predictions, we rely on the unbalanced position evaluation data set $D^{Po}_{Te}$. The latter contains the needed inputs for the maneuver classifiers and position predictors ($I$) as well as the true trajectories $TR$ according to \autoref{eq:eval_data}. \begin{equation} D^{Po}_{Te} = \begin{bmatrix} I & TR \end{bmatrix} \label{eq:eval_data} \end{equation} $TR$ contains $N=20\,000$ 5\,s-trajectories sampled with 10\,Hz (hence 1\,000\,000 samples) according to \autoref{eq:trajectory_data}: \begin{equation} TR = \begin{bmatrix} tr^0 & tr^1 & \hdots & tr^{N} \end{bmatrix} \label{eq:trajectory_data} \end{equation} Each trajectory $tr^i$ consists of 51 corresponding $x$ and $y$ positions, according to \autoref{eq:trajectory}: \begin{equation} tr^i = \begin{bmatrix} x^i_{0.0} & y^i_{0.0} \\ x^i_{0.1} & y^i_{0.1} \\ \vdots & \vdots \\ x^i_{5.0} & y^i_{5.0} \\ \end{bmatrix} \label{eq:trajectory} \end{equation} The predicted trajectories $\hat{TR}$ are then calculated with the described classifiers and position predictors in the same format as $TR$. However, as the Gaussian Mixture Regression originally produces probability densities instead of point estimates, these have to be calculated first. This is accomplished by calculating the center of gravity of the density as described in \cite{schlechtriemen2015will}. 
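Since the center of gravity of a Gaussian mixture is simply the mixture-weighted mean of the component means, the step from predicted density to point estimate can be sketched as (illustrative helper, not the cited implementation):

```python
import numpy as np

def gmm_point_estimate(weights, means):
    """Center of gravity (overall mean) of a Gaussian mixture: the
    component means weighted by the mixture weights."""
    weights = np.asarray(weights, dtype=float)
    means = np.asarray(means, dtype=float)
    return np.sum(weights[:, None] * means, axis=0)
```
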
Accordingly, the prediction error $e^i_t$ at a specific prediction time $t$ for one of the $i$ trajectories is calculated separately for the two dimensions $x$ and $y$ as follows (\autoref{eq:traj_err}): \begin{equation} e^i_t = \begin{bmatrix} e^i_{x, t} & e^i_{y, t} \\ \end{bmatrix} = \begin{bmatrix} |x^i_{t} - \hat{x}^i_{t}| & |y^i_{t} - \hat{y}^i_{t}| \\ \end{bmatrix} \label{eq:traj_err} \end{equation} Variables $\hat{x}$ and $\hat{y}$ describe the estimated positions, whereas $x$ and $y$ correspond to the actual ones. The individual errors $e^i_t$ of all trajectories $i$ are concatenated to $E_t$ (cf. \autoref{eq:traj_err_overall}): \begin{equation} E_t = \begin{bmatrix} E_{x, t} & E_{y, t} \\ \end{bmatrix} = \begin{bmatrix} e^i_{x, t} & e^i_{y, t} \\ \end{bmatrix}_{\forall i} \label{eq:traj_err_overall} \end{equation} At this point, we want to re-emphasize that, although this way of evaluating the performance produces easy-to-interpret results, it disregards the fact that our original outputs (i.\,e. spatial probability densities) contain much more information than a single point estimate. \subsection{Results \& Discussion}\label{subsec:traj_results} \begin{figure*}[!ht] \centering \includegraphics[width=0.63\textwidth]{prediction.pdf} \caption{Predicted probability distribution of future vehicle positions for an illustrative situation.} \label{fig:predicted_distribution} \end{figure*} \autoref{fig:err_boxplots} shows, on the left side, the performance of the selected combinations of classifiers and mixing strategies (highlighted in \autoref{tab:likelihoods}) at a prediction horizon of 5\,s for the longitudinal ($E_{x, 5}$) and the lateral ($E_{y, 5}$) direction. In comparison, a constant velocity (\textit{CV}) prediction and a Mixture of Experts (\textit{MOE}) with labels\footnote{Using the \textit{MOE} with the labels as input corresponds to the assumption of a perfect classifier.} are shown.
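The error quantities $e^i_t$ and $E_t$ defined above, together with the median error at a given prediction time, can be sketched as follows. The array layout (trajectories of shape $(51, 2)$ sampled every 0.1\,s) follows the data description; the helper names are ours.

```python
import numpy as np

def prediction_errors(tr_true, tr_pred):
    """Absolute position errors per time step for one trajectory; columns
    correspond to the longitudinal (x) and lateral (y) dimension."""
    return np.abs(np.asarray(tr_true, float) - np.asarray(tr_pred, float))

def median_error_at(E, t, dt=0.1):
    """Median error over all trajectories at prediction time t (seconds),
    for trajectories sampled every dt seconds. E has shape (N, 51, 2)."""
    k = int(round(t / dt))
    return np.median(E[:, k, :], axis=0)  # -> [median_x, median_y]
```
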
The right-hand side of \autoref{fig:err_boxplots} shows the development of the median lateral error $\tilde{E}_{y,t}$ as a function of the prediction time $t$. As the plots indicate, our position prediction system is able to produce results comparable to those obtained with a perfect maneuver classification, in both lateral and longitudinal direction. Additionally, the plots show that we clearly outperform simple models such as \textit{CV} and reach a very small median lateral prediction error of less than 0.21\,m at a prediction horizon of 5\,s. As shown in \autoref{tab:reference_Lat_Pred}, this is remarkable compared to other approaches. Note that we did not include studies in this compilation that report the root-mean-square error (\textit{RMSE}), which we quantify with a value of 0.64\,m. On the one hand, we follow \cite{willmott2005advantages}, which points out that \textit{RMSE} measures do not allow for a comparison over different data sets, as the values depend on the size of the data set. On the other hand, the challenge tackled by us (cf. \autoref{subsec:problem_statement}) is to predict the probability distribution of future vehicle positions rather than single-shot estimates. Consequently, we did not optimize the predictions to minimize the $RMSE$. Therefore, it is not surprising that other works, which explicitly minimize this value but ignore distribution estimates, perform better with respect to the $RMSE$.
\begin{table}[!ht] \caption{Comparing Lateral Prediction Performance with Related Works} \label{tab:reference_Lat_Pred} \centering \begin{tabular}{|c|c|c|} \hline &&\\[-0.8em] Approach & $\tilde{E}_{y,t}$ [m] & Prediction Horizon $t$ [s] \\ \hline \cite{schlechtriemen2015will} & $\approx$ 0.18 & 5.0 \\ \hline \cite{yoon2016} & 0.23 & 5.0 \\ \hline \cite{wissing2018trajectory} & 0.50 & 3.0 \\ \hline \textit{MLP} (PW-Raw) & 0.204 & 5.0\\ \hline \end{tabular} \end{table} As shown in \cite{schlechtriemen2015will}, these results are dominated by the most frequent maneuver class ($FLW$). Hence, \autoref{tab:errors_per_maneuver} complementarily shows the errors for 20\,000 maneuvers of each type. \begin{table}[!ht] \caption{Prediction Errors per Class and Direction} \label{tab:errors_per_maneuver} \centering \begin{tabular}{|c|c|c|c|} \hline &&&\\[-0.8em] $d$ & $\tilde{E}_{d,5}^{LCL}$ [m] & $\tilde{E}_{d,5}^{FLW}$ [m]& $\tilde{E}_{d, 5}^{LCR}$ [m]\\ \hline $x$ & 3.22 & 1.67 & 2.20 \\ \hline $y$ & 1.25 & 0.19 & 1.80 \\ \hline \end{tabular} \end{table} As can be seen, the errors for the lane change maneuvers are considerably larger than those for lane following. On the one hand, this can be explained by the more complex regression task. On the other hand, the predictions are subject to higher uncertainties in case of a lane change, as shown by the predicted distributions (cf. \autoref{fig:predicted_distribution}). As opposed to that, the uncertainty is ignored in the single point estimates. Note that the increased uncertainties are caused by the lack of knowledge of the exact point in time at which the maneuver will be completed. This holds true even if the classifier has informed the position prediction about an upcoming lane change. Complementary to these quantitative evaluations, we performed qualitative testing and visualized single situations along with our predictions.
To illustrate this, we attached a short video and present a single frame in \autoref{fig:predicted_distribution}. More precisely, \autoref{fig:predicted_distribution} shows the predictions during an upcoming lane change, along with the described uncertainties. In addition, we show the confidence of our predictions ($Conf_x$, $Conf_y$), which provides an important hint concerning the reliability of the predictions to the consumer of the information. This value is calculated similarly to \cite{schlechtriemen2014probabilistic} through additional \textit{GMM}s fitted in the input dimensions. To demonstrate its general usability, we visualized the confidence value divided by the standard deviation against the lateral prediction errors at $T_h$=5\,s in \autoref{fig:confidence}. As can be seen, and as expected the prediction errors decrease with increasing confidence values. \begin{figure}[!ht] \centering\includegraphics[width=0.95\columnwidth]{all_confidence1_y.png} \caption{Prediction confidence against lateral prediction errors.} \label{fig:confidence} \end{figure} \begin{table*}[!t] \renewcommand{\arraystretch}{1} \caption{Description of the Evaluated Features $f$ of an Observed Vehicle $o$ and Usage of the Features in the Constructed Feature Sets ($A$-$D$)} \label{tab:featureOverview} \centering \resizebox{2 \columnwidth}{!}{ \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline $\boldsymbol{R}$ & $\boldsymbol{f}$ & \textbf{Description} & \textbf{Unit (Continuous)} & \multicolumn{4}{c|}{\textbf{Element of}} \\ & & & \textbf{Range of Values (Nominal)} & \textbf{$B$} & \textbf{$C$} & \textbf{$D_{MLP}$} & $D_{GNB}$\\ & & & & $\vert \cdot \vert = 40$ & $\vert \cdot \vert= 29$ & $\vert \cdot \vert = 24$ & $\vert \cdot \vert = 48$\\ \hline \hline & \multicolumn{7}{c|}{general information describing the related vehicle $r$ (cf. 
\autoref{eq:fsit} \& \autoref{fig:env_model})} \\ \cline{2-8} $R_{rf}$&$actv^{r}$ & activity status & \{0: inactive, 1: active\}&\{f, l\}\footnotemark&\{f, fl, fr, l\}&\{fr, r\}&\{fl, fr, l, r\}\\ \cline{2-8} &$mov^{r}$ & movement status & \{0: standing, 1: moving\}&\{f, l\}&\{f, fl, fr, l\}&\{r\}&\{fl, fr, r\}\\ \cline{2-8} &\multirow{ 2}{*}{$class^{r}$} & \multirow{ 2}{*}{object class} & \{0: bicycle, 1: motorbike, &\multirow{ 2}{*}{\{f, l\}}&\multirow{ 2}{*}{\{f, fl, l\}}&&\multirow{ 2}{*}{\{fl, fr, r\}}\\ &&& 2: car, ..., 14: no class\}&&&&\\ \cline{2-8} &\multirow{ 2}{*}{$cutinlvl^{r}$} & \multirow{ 2}{*}{cut-in level} & \{0: $P\leq0.5$, 1: $P>0.5$,&\multirow{ 2}{*}{\{l\}}&&&\multirow{ 2}{*}{\{r\}}\\ &&& 2: $P > 0.66$, 3: $P > 0.9$\}&&&&\\ \cline{2-8} & \multicolumn{7}{c|}{relation between $o$ and related vehicle $r$ in $o$'s coordinate-system} \\ \cline{2-8} &$d^{rel, r}_{x}$ & longitudinal distance & \multirow{ 2}{*}{$m$} &\{f, l\}&\{f, l\}&\{f, r\}&\{fr, r\}\\ \cline{2-3}\cline{5-8} &$d^{rel, r}_{y}$ & lateral distance &&\{f, l\}&\{f, fr\}&\{r\}&\{fr, r\}\\ \cline{2-8} &$v^{rel, r}_{x}$ & relative longitudinal speed & \multirow{ 2}{*}{$\nicefrac{m}{s}$}&\{f, r\}&&\{r\}&\{f, fr, r\}\\ \cline{2-3}\cline{5-8} &$v^{rel, r}_{y}$ & relative lateral speed &&\{f, fl, l, r\}&\{f\}&\{f, fr, r\}&\{f, fr\}\\ \cline{2-8} &$a^{rel, r}_{x}$ & relative longitudinal acceleration &$\nicefrac{m}{s^2}$&&&\{fr\}&\{f, fl, fr, r\}\\ \cline{2-8} & \multicolumn{6}{c|}{relation between $o$ and related vehicle $r$ in curvilinear coordinates} \\ \cline{2-8} &$d^{rel, r}_{v}$ & longitudinal distance& \multirow{ 2}{*}{$m$}&\{f, l\}&\{l\}&\{fr\}&\{fl, fr, r\}\\ \cline{2-3}\cline{5-8} &$d^{rel, r}_{u}$ & lateral distance&&\{f, l\}&\{fr\}&\{r\}&\{fr, r\}\\ \cline{2-8} &$v^{rel, r}_{v} $ &relative longitudinal speed & \multirow{ 2}{*}{$\nicefrac{m}{s}$}&\{f, r\}&&\{f\}&\{f, fl, fr, r\}\\ \cline{2-3}\cline{5-8} &$v^{rel, r}_{u} $ &relative lateral speed & &\{f, fl, l, r\}&&\{l\}&\{fr, r\}\\ 
\hline \hline $R_{rb}$&$mov^{r}$ & movement status & \{0: standing, 1: moving\}&\{rr\}&\{rr\}&\{rr\}&\{rr\}\\ \cline{2-8} &$d^{rel, r}_{y}$ & lateral distance & $m$&\{rl\}&&\{rl\}&\{rl, rr\}\\ \hline \hline $F_{o}$ &$ fog^{f}$ & status of the front fog lamp & \multirow{ 4}{*}{\{0: off, 1: on\}}&&&&\\ \cline{2-3}\cline{5-8} &$ fog^{r}$ & status of the rear fog lamp & && &&\\ \cline{2-3}\cline{5-8} &$ fog^{rl}$ & status of the rear left fog lamp && && &\\ \cline{2-3}\cline{5-8} &$ fog^{rr}$ & status of the rear right fog lamp && && &\\ \cline{2-8} &$ wpr$ & wiper level & \{0, ..., 15\} && &&\\ \cline{2-8} & $d^{ml}_{y}$ & distance between the center of $o$ and the left marking & \multirow{ 4}{*}{$m$}&\cmark&\cmark&\cmark&\cmark\\ \cline{2-3}\cline{5-8} &$ {d^{mr}_{y}}$ & distance between the center of $o$ and the right marking &&\cmark&\cmark&&\\ \cline{2-3}\cline{5-8} &\multirow{ 2}{*}{$ d^{cl}_{y}$} & distance between the center of $o$ and & &\multirow{ 2}{*}{\cmark} &\multirow{ 2}{*}{\cmark}&&\\ & & the centerline of the assigned lane && &&&\\ \cline{2-8} &$v_x$ & longitudinal speed of the observed vehicle&\multirow{ 2}{*}{$\nicefrac{m}{s}$}&& &&\\ \cline{2-3}\cline{5-8} &$v_y$ & lateral speed of the observed vehicle &&\cmark&\cmark&&\cmark\\ \cline{2-8} &$a_x$ & longitudinal acceleration of the observed vehicle&\multirow{ 2}{*}{$\nicefrac{m}{s^2}$}&\cmark& &\cmark&\\ \cline{2-3}\cline{5-8} &$a_y$ & lateral acceleration of the observed vehicle & & \cmark & \cmark & \cmark&\cmark\\ \cline{2-8} &\multirow{ 2}{*}{$\psi$} & angle of the observed vehicle & \multirow{ 2}{*}{$^\circ$} &\multirow{ 2}{*}{\cmark}&\multirow{ 2}{*}{\cmark}& \multirow{ 2}{*}{\cmark}& \multirow{ 2}{*}{\cmark} \\ & & relative to the direction of the lane & &&&&\\ \hline \hline $F_{infra}$ &$t^{ml}$ & type of the left marking & \{0: no marking, 1: continuous, &\cmark&\cmark&\cmark&\cmark\\ \cline{2-3}\cline{5-8} &$t^{mr}$ & type of the right marking & 2: broken\}&\cmark&\cmark&\cmark&\cmark\\ 
\cline{2-8} &$c^{ml}$ & color of the left marking & \{0: no marking, 1: white, && &&\cmark\\ \cline{2-3}\cline{5-8} &$c^{mr}$ & color of the right marking & 2: yellow\}&& &&\cmark\\ \cline{2-8} &$nlanes_{cam}$& number of parallel lanes observed via the camera& \{0: 0, ..., 3: 3+\}&&&&\cmark\\ \cline{2-8} &$nlanes_{map}$& number of lanes stored in the map& \{0, ..., 5\}&& &&\\ \cline{2-8} &$cntr$ & country & \{0: GER, 1: US, ...\}&&& &\\ \cline{2-8} &$tnl$ & indicator if the situation takes place in a tunnel & \multirow{ 2}{*}{\{0: False, 1: True\}}&&&&\cmark\\ \cline{2-3}\cline{5-8} &$brd$ &indicator if the situation takes place on a bridge & && &&\\ \cline{2-8} &$v^{lim}$ & speed limit of the current highway section & \{1: $>130{\frac{km}{h}}$, ..., 8: $<11{\frac{km}{h}}$\}&&& &\\ \cline{2-8} &\multirow{ 2}{*}{$t^{a}$} & type of next approach & \{0: unknown, 1: on ramp, &\multirow{ 2}{*}{} & &&\\ & & to the highway & 2: highway merge\}& && &\\ \cline{2-8} &\multirow{ 2}{*}{$t^{e}$} & type of next exit & \{0: unknown, 1: ramp, &\multirow{ 2}{*}{}& &&\\ & & of the highway & 2: highway divider\}&& &&\\ \cline{2-8} &$w^{ml}$ & width of the left marking & \multirow{ 5}{*}{$m$}&\cmark& \cmark& &\\ \cline{2-3}\cline{5-8} &$w^{mr}$ & width of the right marking &&\cmark& \cmark&&\\ \cline{2-3}\cline{5-8} &$w^{lane}$ & width of the lane & && &\cmark&\\ \cline{2-3}\cline{5-8} &$d_{x}^a$ & distance to the next approach to the highway & && &&\\ \cline{2-3}\cline{5-8} &$d_{x}^e$ & distance to the next exit of the highway & && &&\\ \cline{2-8} & $c_{0}$ & curvature of the road & $\nicefrac{1}{m}$ && &&\\ \cline{2-8} & $c_{1}$ & derivative of the curvature & $\nicefrac{1}{m^2}$ && &&\\ \hline \multicolumn{8}{l}{\footnotemark[\thefootnote] This means, for example, that feature set $B$ (introduced in \autoref{subsec:feature_selection}) contains 40 elements in total, including the activity status of its surrounding vehicles in front (f)}\\ \multicolumn{8}{l}{and to its left (l).
In contrast to that, the activity states of its other front relation partners (r, fl, fr) are not included in $B$.}\\ \end{tabular} } \vspace{-6mm} \end{table*} \section{Summary and Outlook}\label{sec:conclusion} This work introduces a machine learning workflow that enables the calculation of long-term behavior predictions for surrounding vehicles in highway scenarios. For the first time, a combined compilation of prediction techniques for driving maneuvers and positions as well as lateral and longitudinal behavior is presented. The developed modules are evaluated in detail based on a large amount of real-world data, challenging established state-of-the-art approaches. To further improve the quality of the presented behavior predictions, especially in complex situations, we are working on various enhancements and conducting additional studies. Currently, we are migrating the prediction strategies to an experimental vehicle to enable detailed investigations regarding run time as well as resource usage. Meanwhile, we are beginning to apply our models to predict the movements of surrounding vehicles as opposed to ego-vehicle movements. Furthermore, we plan to apply our predictor to a publicly available data set such as highD \cite{highDdataset} or NGSIM to improve comparability. In addition, we want to investigate up to which maximum prediction horizon (beyond 5\,s) the maneuver detection produces useful insights. Moreover, we see high potential in identifying demanding scenarios and explicitly integrating contextual knowledge (e.\,g. weather, traffic, time of day or local specialties) into our models. First experiments in this direction have shown that contextual properties can have a considerable impact on driving behavior. \section*{Acknowledgment} The authors would like to thank \textit{Mercedes-Benz AG Research and Development} for providing real-world measurement data, which enabled us to perform our experiments.
Furthermore, we would like to thank the \textit{Institute of Databases and Information Systems} at \textit{Ulm University} as well as Prof.~Dr.~Klaus-Dieter~Kuhnert from the \textit{Institute of Realtime Learning Systems} at the \textit{University of Siegen} for supporting our studies. \bibliographystyle{ieeetr}
\section{Introduction} The problem of recovering a hidden structure (the signal) in a graph on the basis of the observation of its edges and the weights on edges and vertices appears in many, diverse contexts. Community detection~\cite{decelle2011asymptotic}, group testing~\cite{mezard2008group}, certain types of error correcting codes~\cite{richardson2008modern}, particle tracking \cite{Chertkov2010} are some examples of statistical inference problems formulated on graphs in which some underlying pattern has to be identified. The feasibility of the hidden structure recovery depends, of course, on the amount of ``noise'' in the problem. It turns out that, in the limit of large system sizes, sharp recovery thresholds with respect to the signal-to-noise ratio can appear. These sharp thresholds separate no recovery phases (in which no information on the signal is accessible), partial recovery phases (in which an output correlated with the signal can be obtained) and full recovery phases (in which the signal is recovered with a vanishing error). These algorithmic transitions, analogous to phase transitions in physics, can be of different orders (depending on the number of derivatives of the order parameter that exist). The order of the transition seems to have important algorithmic implications. First order transitions, in particular, have been related to the presence of computationally hard phases \cite{decelle2011asymptotic}. In the Stochastic Block Model (SBM) on sparse graphs, for example, a phase transition between a no-recovery phase and a partial recovery phase is found~\cite{decelle2011asymptotic}. In \cite{Abbe2015,Abbe2016} it has been shown that a partial to full recovery transition can also appear in the same model if denser topologies are considered. A partial to full recovery first order transition appears in low-density-parity-check error correcting codes~\cite{richardson2008modern}, where the target is to correctly recover a code-word.
Somewhat uncommonly, a partial to full recovery infinite order phase transition has been recently found in the planted matching problem, in which a hidden matching has to be detected in a weighted random graph \cite{Moharrami2019,Semerjian2020,Ding2021}. This is a new type of phase transition in inference problems, which has been related to the presence of instabilities of the belief propagation fixed points. It is thus of interest to investigate whether it appears in problems other than the planted matching. In this paper we study a generalization of the planted matching problem -- the so-called planted $k$-factor problem. A $k$-factor of a graph ${\mathrsfso G}$ is a spanning subgraph with fixed degree $k$, i.e., a spanning $k$-regular graph. In this problem, the (weighted) $k$-regular graph is hidden (planted) by adding new weighted edges to it. The weights of the planted and non-planted edges are random quantities with different distributions. The goal is to recover the planted $k$-factor knowing $k$ and the two weight distributions. In analogy with percolation being continuous while the appearance of a $k$-core is a discontinuous (first order) phase transition \cite{stauffer2018introduction}, we may have anticipated that the transition in the $k$-factor problem would be of a different type than in the planted matching. We will show instead that the planted $k$-factor problem manifests a partial--full recovery continuous phase transition akin to the one in the planted matching, and we will give a criterion for the threshold between the two phases. The planted $k$-factor problem is related to many interesting inference problems. For example, it shares some similarity with the problem of structure detection in small-world graphs \cite{Cai2017}. In the latter case, a $k$-regular ring lattice is hidden in a (unweighted) random graph by a rewiring procedure. Whether full recovery is possible depends on the parameters of the rewiring process.
The planted $1$-factor problem corresponds to the aforementioned planted matching problem, introduced in~\cite{Chertkov2010} as a model for particle tracking. In this model, a weighted perfect matching, hidden in a graph, has to be recovered. In Ref.~\cite{Chertkov2010} the problem was studied numerically for a particular case of the distribution of weights. The results suggested the existence of a phase transition between a full recovery phase and a partial recovery phase. More recently, Moharrami and coworkers \cite{Moharrami2019} rigorously analysed another special setting of the problem, assuming that ${\mathrsfso G}$ is the complete graph and the weights are exponentially distributed, and proved the existence of a partial/full recovery phase transition. The results of Ref.~\cite{Moharrami2019} have been generalized to sparse graphs and general weight distributions in Ref.~\cite{Semerjian2020}, via heuristic arguments based on a correspondence between the problem and branching random walk processes. A proof of the results in \cite{Semerjian2020} has been recently given in \cite{Ding2021}. The planted $2$-factor problem is a relaxation of the planted travelling salesman problem \cite{Bagaria2018}. In this problem, a unique, hidden Hamiltonian cycle visiting (exactly once) all vertices of a graph has to be recovered. In the planted $2$-factor problem, instead, the planted subgraph is more generally given by a set of cycles. Solving the $2$-factor problem can, however, be informative about a hidden Hamiltonian cycle. In Ref.~\cite{Bagaria2018} the Hamiltonian cycle recovery problem has been studied on the complete graph. Therein, a sufficient condition for the full recovery of the planted Hamiltonian cycle has been derived. Moreover, it is proved that, within the full recovery phase, the solution obtained by searching for a $2$-factor coincides (with high probability and in the large size limit) with the hidden Hamiltonian cycle.
In this paper, we generalize the available results for the planted $1$-factor problem and the planted $2$-factor problem to the general planted $k$-factor problem with arbitrary distributions for the edge weights and including sparse graphs in the analysis. Unlike Ref.~\cite{Cai2017}, we will assume no prior knowledge of the structure of the planted $k$-factor (except for the degree of its vertices). Our approach, based on the standard cavity method and the corresponding belief propagation equations~\cite{mezard2009information}, allows us to obtain, at the level of rigor of theoretical physics, a criterion for the full recovery of the planted subgraph in the large size limit of the problem. The threshold criterion is derived by studying the recursive distributional equations corresponding to the cavity equations. It turns out that a ``drift'' in their solutions can appear under iteration. If this is the case, the full-recovery solution is the only stable one, and full recovery is possible. The study of this drift is tackled in analogy with the analysis in Ref.~\cite{Semerjian2020} for the $k=1$ case, and with the phenomenon of front propagation for reaction-diffusion equations \cite{Kingman1975,Biggins1977,Brunet1997,Majumdar2000,Ebert2000}. We give an explicit criterion for the threshold between a partial recovery phase and a full recovery phase of the planted $k$-factor. Our results recover, as special cases, the ones obtained in Refs.~\cite{Moharrami2019,Semerjian2020,Ding2021} for the planted $1$-factor. In the limit of dense graphs, they provide a sharper characterization of the phase transition for $k=2$ than the analysis in \cite{Bagaria2018}, as discussed in more detail in Section \ref{sec:fully_connected}. The rest of the paper is organized as follows. In Section~\ref{sec:definitions} we define the problem under study and introduce the two statistical estimators we adopt, namely the block Maximum A Posteriori (MAP) and the symbol MAP.
In Section \ref{sec:BP} we present the belief propagation equations for the solution of the problem and their corresponding probabilistic description. In Section~\ref{sec:Exp} we numerically study a specific case, observing that a transition between a full recovery and a partial recovery phase can appear as a function of the parameters of the problem. In Section~\ref{sec:criterion} we give a heuristic derivation of the criterion for the location of the transition of the block MAP estimator for arbitrary weight distributions, Eq.~\eqref{condizione2}, which is the main result of the paper. In Section~\ref{sec:app} we compare our theoretical predictions with the numerical results obtained for different variants of the problem, including the case considered in Section~\ref{sec:Exp} and the Hamiltonian cycle recovery problem considered in Ref.~\cite{Bagaria2018}. Finally, conclusions and perspectives for future work are given in Section \ref{sec:conclusions}. \section{Definitions}\label{sec:definitions} \subsection{Planting a $k$-factor} Let us assume that an integer $k\in\mathds N$ and two probability densities $p$ and $\hat p$ on the real line are given. We will focus on an ensemble of weighted simple graphs denoted ${\mathrsfso G}_0=({\mathcal V}_0, {\mathcal E}_0,\underline{w})$, containing by construction a planted $k$-factor to be recovered. Here ${\mathcal V}_0 = \{1,\dots,N\}$ is the set of $N\in\mathds N$ vertices such that $kN$ is even, and ${\mathcal E}_0$ is the set of edges (unordered pairs of distinct vertices of ${\mathcal V}_0$). A weight $w_e\in\mathds R$ is associated to each edge $e$ of the graph, so that $\underline w=\{w_e \colon e \in {\mathcal E}_0\}$ is the set of such weights. We introduce a probability measure over the set of weighted graphs by means of the following generation steps of ${\mathrsfso G}_0$ (see Fig.~\ref{figA}). \begin{enumerate} \item One first constructs a $k$-regular graph having vertex set ${\mathcal V}_0$.
The graph is chosen uniformly among all possible $k$-regular graphs with $N$ vertices \cite{Bollobas2001book}. This can be achieved using fast algorithms available in the literature \cite{Gao2017}. The obtained graph has $\frac{1}{2}kN$ edges and edge-set ${\mathrsfso F}^*_k$. A random weight $w_e$, generated independently of all the others with density $\hat p$, is associated to each edge $e\in{\mathrsfso F}^*_k$. \item For each pair of vertices that are not neighbours in ${\mathrsfso F}^*_k$, a link is added between them with probability $cN^{-1}$. Let ${\mathcal E}_0$ be the final set of edges of the obtained graph. A weight $w_e$, generated independently of all the others with distribution $p$, is assigned to each $e \in {\mathcal E}_0 \setminus {\mathrsfso F}^*_k$. \end{enumerate} We shall call planted (resp.~non-planted) edges those in ${\mathrsfso F}^*_k$ (resp.~in ${\mathcal E}_0 \setminus {\mathrsfso F}^*_k$). The parameters of this ensemble of weighted random graphs are thus the integers $N$ and $k$, the parameter $c$ controlling the density of non-planted edges, and the two distributions $\hat{p}$ and $p$ for the generation of the weights of the planted and non-planted edges respectively. Given ${\mathrsfso F}^*_k$, the probability to generate a graph ${\mathrsfso G}_0$ is therefore \begin{equation} \label{eq_proba_direct} \mathbb P({\mathrsfso G}_0 | {\mathrsfso F}^*_k)= \mathbb{I}({\mathrsfso F}^*_k \subseteq {\mathcal E}_0)\prod_{e \in {\mathrsfso F}^*_k} \hat p(w_e) \prod_{\mathclap{e \in {\mathcal E}_0 \setminus {\mathrsfso F}^*_k}} p(w_e) \left(\frac{c}{N} \right)^{|{\mathcal E}_0|-k\frac{N}{2}} \left(1-\frac{c}{N} \right)^{\binom{N}{2} - |{\mathcal E}_0|} , \end{equation} where here and in the following $\mathbb{I}(A)$ denotes the indicator function of the event $A$. The edges in ${\mathcal E}_0\setminus{\mathrsfso F}^*_k$ form essentially an Erd\H os-R\'enyi random graph of average degree $c$.
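The two generation steps above can be sketched in a few lines of code. The pairing-model routine used below to draw the $k$-regular planted graph, as well as the weight samplers, are our own illustrative choices (the construction only requires a uniform $k$-regular graph, for which fast algorithms exist \cite{Gao2017}); all function names are ours:

```python
import itertools
import random

def random_k_regular(n, k, rng):
    """Pairing (configuration) model with rejection until the graph is
    simple; adequate for small n in a demonstration."""
    assert (n * k) % 2 == 0
    while True:
        stubs = [v for v in range(n) for _ in range(k)]
        rng.shuffle(stubs)
        edges = {tuple(sorted(stubs[i:i + 2])) for i in range(0, n * k, 2)}
        # accept only if there are no repeated edges and no self-loops
        if len(edges) == n * k // 2 and all(u != v for u, v in edges):
            return edges

def generate_instance(n, k, c, planted_w, nonplanted_w, seed=0):
    """Step 1: planted k-factor F* with weights drawn from p_hat.
    Step 2: each non-adjacent pair gets a noise edge with prob. c/n,
    weighted with p. Returns (F*, weights over all edges of G_0)."""
    rng = random.Random(seed)
    f_star = random_k_regular(n, k, rng)
    weights = {e: planted_w(rng) for e in f_star}
    for e in itertools.combinations(range(n), 2):
        if e not in f_star and rng.random() < c / n:
            weights[e] = nonplanted_w(rng)
    return f_star, weights
```

For instance, `generate_instance(12, 2, 3.0, lambda r: r.expovariate(1.0), lambda r: r.uniform(0.0, 1.0))` produces a planted $2$-factor with exponential weights, diluted by uniformly weighted Erd\H os-R\'enyi noise edges.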
The edge-set ${\mathrsfso F}^*_k$, on the other hand, is a $k$-factor of ${\mathrsfso G}_0$ by construction, i.e., a spanning subgraph of ${\mathrsfso G}_0$ in which all the vertices have the same degree $k$. The resulting graph has average coordination $c+k$. Note that, for $k=1$, ${\mathrsfso F}^*_1$ is a matching on ${\mathrsfso G}_0$, and the introduced ensemble of graphs coincides with the one studied in Ref.~\cite{Semerjian2020} for the planted matching problem. \subsection{The inference problem} \label{sec_def_inference} Given a graph ${\mathrsfso G}_0$ in the ensemble described above, we ask whether it is possible to infer the $k$-factor ${\mathrsfso F}^*_k$ hidden in it. We assume that the generation rules are known, along with $k$, $c$, $p$ and $\hat p$. All the exploitable information is contained in the posterior probability $\mathbb P({\mathrsfso F}_k|{\mathrsfso G}_0)$ that a certain $k$-factor ${\mathrsfso F}_k$ in ${\mathrsfso G}_0$ is the planted $k$-factor ${\mathrsfso F}_k^*$. From Bayes' theorem we obtain the following expression for the posterior: \begin{equation} \mathbb P({\mathrsfso F}_k| {\mathrsfso G}_0 ) \propto \mathbb{I}({\mathrsfso F}_k\text{ is a $k$-factor})\prod_{e \in {\mathrsfso F}_k} \hat p(w_e) \prod_{\mathclap{e \in {\mathcal E}_0 \setminus {\mathrsfso F}_k}} p(w_e) \mathbb{I}({\mathrsfso F}_k \subseteq {\mathcal E}_0 ) , \end{equation} where the symbol $\propto$ hides a normalization constant independent of ${\mathrsfso F}_k$.
To parametrize the probability measure above, it is convenient to introduce, for each edge $e$, the binary variable $m_e\in\{0,1\}$, so that $\underline{m}=\{m_e=\mathbb I(e\in {\mathrsfso F}_k) \colon e \in {\mathcal E}_0\} \in \{0,1\}^{|{\mathcal E}_0|}$, and rewrite the posterior as \begin{equation}\label{post} \mathbb P( \underline{m}| {\mathrsfso G}_0 ) \propto \prod_{e \in {\mathcal E}_0 } \left( \frac{\hat p(w_e)}{p(w_e)} \right)^{m_e} \prod_{i=1}^N \mathbb{I}\left(\sum_{e \in \partial i} m_e = k \right) \ , \end{equation} where $\partial i$ denotes the set of edges incident to the vertex~$i$. We want to compute an estimator $\hat {\mathrsfso F}_k$ that is ``close'' to the hidden $k$-factor ${\mathrsfso F}_k^*$. The estimator is associated to the set of binary variables $\hat{\underline{m}}$ that encodes the set of edges in $\hat{\mathrsfso F}_k$. With a slight abuse of notation, in the following we identify a set of edges ${\mathrsfso F}_k$ with its corresponding ${\underline{m}}$. We will denote by $\underline m^*$ the set of variables associated to ${\mathrsfso F}_k^*$ and by $\hat{\underline m}$ the set of variables associated to an estimator $\hat{\mathrsfso F}_k$. In this paper, we will quantify the distance between an estimator $\hat{\mathrsfso F}_k$ and the true planted $k$-factor ${\mathrsfso F}_k^*$ in terms of the cardinality of the symmetric difference ${\mathrsfso F}_k^* \triangle \hat{{\mathrsfso F}}_k$ between ${\mathrsfso F}_k^*$ and $\hat{\mathrsfso F}_k$, \begin{equation} \varrho({\mathrsfso F}_k^*,\hat{{\mathrsfso F}}_k) \coloneqq \frac{|{\mathrsfso F}_k^* \triangle \hat{{\mathrsfso F}}_k|}{2|{\mathrsfso F}_k^*|}= \frac{1}{kN} \sum_{e \in {\mathcal E}_0 }\mathbb{I}(\hat{m}_e\neq m^*_e), \label{eq_def_rho} \end{equation} or equivalently the Hamming distance between the binary string $\underline{m}^*$ encoding ${\mathrsfso F}_k^*$ and the binary string $\hat{\underline{m}}$ encoding $\hat{{\mathrsfso F}}_k$.
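On edge sets, the error measure of Eq.~\eqref{eq_def_rho} reduces to a one-line computation (a sketch, assuming edges are stored as sorted vertex pairs; note that $2|{\mathrsfso F}_k^*|=kN$, so this is exactly the normalised Hamming distance between $\underline m^*$ and $\hat{\underline m}$):

```python
def recovery_error(f_star, f_hat):
    """rho(F*, F_hat) = |F* symmetric-difference F_hat| / (2 |F*|),
    i.e. the Hamming distance between m* and m-hat normalised by k N."""
    return len(f_star ^ f_hat) / (2 * len(f_star))
```

The error vanishes for perfect recovery and equals $1$ when the estimator shares no edge with the planted $k$-factor and has the same cardinality.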
We will consider two `Maximum A Posteriori' (MAP) estimators, each one minimizing a ``measure of distance'' from the planted $k$-factor. \begin{itemize} \item A first possibility is to choose as estimator the $k$-factor that minimizes the probability $\mathbb{P}( {\mathrsfso F}_k^* \neq \hat{{\mathrsfso F}}_k)$ over all realizations of the problem, \begin{equation}\label{eq_bMAP} \hat{{\mathrsfso F}}^{\rm (b)}_k = \underset{\underline{m}}{\rm argmax}\ \mathbb P( \underline{m}| {\mathrsfso G}_0 ). \end{equation} This estimator is usually called `block MAP' \cite{richardson2008modern}. \item A different estimator, called `symbol MAP' and denoted in the following $\hat{{\mathrsfso F}}_k^{\rm (s)}$, is obtained by requiring that the distance to be minimized is precisely the error in Eq.~\eqref{eq_def_rho}. In this case, for each edge $e \in {\mathcal E}_0$, we choose \begin{equation} \label{eq_sMAP} \hat{m}_e= \underset{m_e}{\rm argmax}\ \mathbb P_e(m_e | {\mathrsfso G}_0 ), \end{equation} with $\mathbb P_e$ the marginal of the posterior probability \eqref{post} for the edge $e$. Observe, however, that this estimator is not necessarily a $k$-factor. \end{itemize} In the following, we will discuss both estimators defined above, in the thermodynamic limit $N \to \infty$, as a function of the parameters of the model. As in the planted matching problem \cite{Chertkov2010,Moharrami2019,Semerjian2020}, the possibility of identifying the planted edges will depend on the similarity between the distribution $p$ and the distribution $\hat p$, and on the parameter $c$, which itself acts as a noise level, controlling the number of confusing non-planted edges introduced in the graph.
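On toy instances, the block MAP of Eq.~\eqref{eq_bMAP} can be made concrete by exhaustive enumeration: keep every subset of edges forming a $k$-factor and maximise the product of likelihood ratios $\hat p(w_e)/p(w_e)$, as read off from Eq.~\eqref{post}. This brute-force sketch (our own, exponential in $|{\mathcal E}_0|$ and viable only for toy sizes) takes the likelihood ratio as a user-supplied function `lr`:

```python
import itertools
from math import prod

def block_map(n, k, weights, lr):
    """Brute-force block MAP: weights maps edge -> w over E_0;
    lr(w) stands for the likelihood ratio phat(w)/p(w)."""
    best, best_score = None, float("-inf")
    edges = list(weights)
    for r in range(len(edges) + 1):
        for sub in itertools.combinations(edges, r):
            deg = [0] * n
            for u, v in sub:
                deg[u] += 1
                deg[v] += 1
            if all(d == k for d in deg):       # sub is a k-factor
                score = prod(lr(weights[e]) for e in sub)
                if score > best_score:
                    best, best_score = set(sub), score
    return best
```

For $k=1$ on four vertices with two competing perfect matchings, the estimator returns the matching whose weights are more typical of $\hat p$.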
\section{Cavity equations}\label{sec:BP} \begin{figure*} \subfloat[\label{figA} ${\mathrsfso G}_0$]{\includegraphics[width=0.24\textwidth]{grafo0}}\hfill \subfloat[\label{figB}]{\includegraphics[width=0.24\textwidth]{grafo1}} \hfill \subfloat[\label{figC} ${\mathrsfso G}$]{\includegraphics[width=0.24\textwidth]{grafo2}} \caption{Pictorial representation of the preliminary pruning in a planted $2$-factor problem. (a) A $2$-regular graph on a vertex set of $N$ vertices is generated (thick double lines; in the picture, $N=18$), with edge weights distributed as $\hat p$. Random edges are added with probability $\sfrac{c}{N}$ (thin lines), with edge weights distributed as $p$. The obtained graph is ${\mathrsfso G}_0$. A capacity $k$ is assigned to each vertex. (b) Edges in $\supp(\hat p)\setminus\Gamma$ (double lines) can be identified as planted and removed, decreasing the capacity of the endpoints by $1$. Edges in $\supp(p)\setminus\Gamma$ (thick single lines) can be identified as non-planted and removed. Vertices with zero capacity can be removed. (c) We call ${\mathrsfso G}$ the obtained pruned graph, and we call ${\mathrsfso F}_k^{(0)}$ the set of edges identified by means of this first pruning process (green). } \end{figure*} \subsection{Pruning the graph}\label{sec:pruning} To efficiently solve the problem, and possibly reduce the size of the input, it is convenient to proceed with a preliminary ``pruning'' of the graph. Before proceeding further, let us assign a `capacity variable' $\kappa_i=k$ to each vertex $i$. The capacity of each node $i$ will take into account the number of unidentified planted edges incident at $i$. Let \begin{subequations} \begin{align} \supp(\hat p)&\coloneqq\{w\in\mathds{R}\colon \hat p(w)>0\},\\ \supp(p)&\coloneqq\{w\in\mathds{R}\colon p(w)>0\}, \end{align} be the supports of $\hat p$ and $p$, respectively, and let \begin{equation} \Gamma\coloneqq\supp(p)\cap\supp(\hat p) \label{def:gamma} \end{equation} be the intersection of these supports.
We suppose that $\Gamma\neq\emptyset$ (the inference problem is otherwise trivial). If an edge $e$ bears a weight $w_e\in\supp(p)\setminus\Gamma$, then it can be immediately identified as `non-planted', and removed from ${\mathrsfso G}_0$. This event will happen with probability $1-\mu$, where \begin{equation} \mu\coloneqq\int_{\Gamma}p(w){\rm d} w. \end{equation} On the other hand, the set of edges \begin{equation} {{\mathrsfso F}}_k^{(0)}\coloneqq \left\{e\in{\mathcal E}_0\colon w_e \in \supp(\hat p) \setminus \Gamma \right\} \end{equation} surely belongs to the planted configuration, ${{\mathrsfso F}}_k^{(0)}\subseteq{\mathrsfso F}_k^*$. These edges can be correctly classified as `planted' with no algorithmic effort, except for the inspection of their weights. A planted edge $e$ can therefore be identified, solely on the basis of the value of its weight, with probability $1-\hat\mu$, where \begin{equation} \hat\mu\coloneqq\int_{\Gamma}\hat p(w){\rm d} w. \end{equation} \end{subequations} We can remove from the graph the identified planted edges, see Fig.~\ref{figB}. We must, however, take care to reduce at the same time by $1$ the capacity of the endpoints of each removed planted edge. At the end of this process, the capacity of a generic vertex $i$ is $0\leq \kappa_i\leq k$, see Fig.~\ref{figC}. For large $N$, the capacity of the vertices after this pruning has binomial distribution ${\rm Bin}(k,\hat\mu)$. In particular, $(1-\hat\mu)^k N$ vertices have zero capacity, meaning that all their incident planted edges have been identified. These vertices can also be removed, along with all their remaining (non-planted) incident edges.
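A sketch of this pruning pass may help fix ideas. The predicate-based representation of the two supports and all function names below are our own simplification, not the paper's notation:

```python
def prune(edges, weights, capacities, in_supp_p, in_supp_phat):
    """One pruning pass: classify each edge by the support its weight
    falls in, update vertex capacities, and drop edges hanging off
    zero-capacity vertices. Returns (F_k^(0), surviving edges, capacities)."""
    identified = set()   # F_k^(0): edges surely planted
    surviving = set()
    for e in edges:
        w = weights[e]
        p_ok, phat_ok = in_supp_p(w), in_supp_phat(w)
        if p_ok and phat_ok:          # w in Gamma: ambiguous, keep
            surviving.add(e)
        elif phat_ok:                 # w in supp(phat) \ Gamma: surely planted
            identified.add(e)
            for v in e:
                capacities[v] -= 1
        # else: w in supp(p) \ Gamma, surely non-planted -> drop
    # edges incident to a vertex of zero capacity are surely non-planted
    surviving = {e for e in surviving
                 if capacities[e[0]] > 0 and capacities[e[1]] > 0}
    return identified, surviving, capacities
```

Applied to a toy triangle with $k=1$, an edge with weight only in $\supp(\hat p)$ is declared planted, zeroing the capacity of its endpoints and thereby discarding the ambiguous edge attached to them.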
In the resulting pruned graph, a vertex has $\mathsf K$ incident planted edges and $\mathsf Z$ non-planted edges, where $\mathsf K$ and $\mathsf Z$ are two random variables having distribution \begin{subequations} \begin{align} \Prob[\mathsf K=\kappa]&=\binom{k}{\kappa}\frac{\hat\mu^\kappa(1-\hat\mu)^{k-\kappa}}{\hat\mu_k},\quad \kappa=1,\dots,k,\label{distK}\\ \Prob[\mathsf Z=z]&=\gamma^{z}\frac{\e^{-\gamma}}{z!},\quad z\in\mathds N_0,\quad \gamma\coloneqq c\mu\hat\mu_k, \end{align} \end{subequations} where $\hat\mu_k\coloneqq 1-(1-\hat\mu)^k$, so that the pruned graph has average degree $\mathbb E[\mathsf K+\mathsf Z]=\gamma+k\hat\mu\hat\mu_k^{-1}$. The distributions of the weights of the surviving edges are obtained from the original ones by conditioning the weights to lie in $\Gamma$, i.e., \begin{subequations}\label{PPhat} \begin{align} P(w)&\coloneqq \frac{p(w)}{\mu}\mathbb I(w\in\Gamma), \\ \hat P(w) &\coloneqq \frac{\hat p( w)}{\hat\mu}\mathbb I( w\in\Gamma), \end{align} \end{subequations} for the non-planted and planted edges, respectively. We will denote by ${\mathrsfso G} =({\mathcal V}, {\mathcal E}, \underline{w})$ the pruned graph, with ${\mathcal V}\subseteq{\mathcal V}_0$ and ${\mathcal E} \subseteq {\mathcal E}_0$ the new vertex and edge-sets. For large $N$, ${\mathcal V}$ has cardinality $\hat N\coloneqq \hat\mu_kN$, each vertex $i\in{\mathcal V}$ having capacity $1\leq \kappa_i\leq k$ distributed as $\mathsf K$. The graph will have a total of $\frac{1}{2}k\hat\mu N$ surviving planted edges and $\frac{1}{2}\gamma\hat\mu_k N$ surviving non-planted edges.
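These degree statistics admit a direct numeric transcription, which can be useful as a sanity check when simulating the pruned ensemble (the function names are ours):

```python
from math import comb

def dist_K(k, mu_hat):
    """P[K = kappa] for kappa = 1..k: a Bin(k, mu_hat) distribution
    conditioned on K >= 1, i.e. divided by mu_hat_k = 1-(1-mu_hat)^k."""
    mu_hat_k = 1.0 - (1.0 - mu_hat) ** k
    return {kap: comb(k, kap) * mu_hat ** kap * (1.0 - mu_hat) ** (k - kap) / mu_hat_k
            for kap in range(1, k + 1)}

def mean_degree(k, c, mu, mu_hat):
    """E[K + Z] = gamma + k mu_hat / mu_hat_k, with gamma = c mu mu_hat_k."""
    mu_hat_k = 1.0 - (1.0 - mu_hat) ** k
    return c * mu * mu_hat_k + k * mu_hat / mu_hat_k
```

In particular, $\mathbb E[\mathsf K]=k\hat\mu\hat\mu_k^{-1}$ follows from the fact that the discarded event $\mathsf K=0$ contributes nothing to the binomial mean $k\hat\mu$.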
\subsection{The Belief Propagation equations} To write down a belief propagation algorithm for the planted $k$-factor problem, we start from the probability distribution over the configurations $\underline{m}=\{m_e : e \in {\mathcal E}\} \in \{0,1\}^{|{\mathcal E}|}$ on the edges of the pruned graph ${\mathrsfso G}$, \begin{equation}\label{lbeta} \nu(\underline{m})\propto \exp\left(-\beta\sum_{e \in{\mathcal E}} m_e \omega_e\right)\prod_{i \in{\mathcal V}}\mathbb I\left(\sum_{e\in \partial i}m_e=\kappa_i\right), \end{equation} where $\beta>0$ and $\omega_e = \omega(w_e)$, with \begin{equation}\label{omega} \omega(w)\coloneqq -\ln\frac{\hat P(w)}{P(w)}. \end{equation} The quantities $\omega_e$ play the role of effective weights on the edges of the graph. The introduction of $\beta$ is convenient because the measure in Eq.~\eqref{lbeta} coincides, for $\beta=1$, with the posterior defined in Eq.~\eqref{post}. On the other hand, for $\beta \to \infty$ the measure concentrates on the configurations maximizing the posterior, i.e., on the block MAP. Eq.~\eqref{lbeta} can also be associated to a graphical model. In particular, we can associate a variable vertex $m_e$ (\tikz{\node[shape=circle,draw,inner sep=2pt] (e) {};}) to each edge $e$ of ${\mathrsfso G}$. We also introduce two types of interaction vertices. A first type of interaction vertex (\tikz{\node[fill=black,shape=rectangle,draw,inner sep=2pt] (b4) {};}) is associated to each $i \in {\mathcal V}$, and linked to all variable vertices $m_e$ such that $e\in\partial i$. This vertex imposes that exactly $\kappa_i$ of the variables $m_e$, $e\in\partial i$, are equal to $1$. A second interaction vertex expresses the contribution $\e^{-\beta m_e \omega_e}$ (\tikz{\node[fill=gray!60, shape=rectangle, draw,inner sep=2pt] (B4) {};}) for each $e$, and it is linked to the variable vertex $m_e$.
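The effective weights of Eq.~\eqref{omega} are simply log-likelihood ratios, so that at $\beta\to\infty$ the block MAP amounts to a minimum-weight $k$-factor problem with weights $\omega_e$. A minimal sketch for one illustrative pair of densities (planted weights exponential, non-planted weights uniform, $\Gamma=(0,1)$; these particular densities are our own choice, purely for illustration):

```python
from math import exp, log

def omega_exp_vs_uniform(w, lam=2.0):
    """omega(w) = -ln( Phat(w) / P(w) ) for Phat = Exp(lam) conditioned
    to (0,1) and P = Uniform(0,1). omega is increasing in w, so small
    weights are evidence in favour of a planted edge."""
    assert 0.0 < w < 1.0
    p_hat = lam * exp(-lam * w) / (1.0 - exp(-lam))  # conditioned density
    p = 1.0                                          # uniform density on (0,1)
    return -log(p_hat / p)
```

In this example $\omega(w)=\lambda w-\ln\lambda+\ln(1-\e^{-\lambda})$, linear in $w$, which makes the $\beta\to\infty$ optimization a standard minimum-weight $k$-factor search.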
{Pictorially, \newdimen\nodeSize \nodeSize=4mm \newdimen\nodeDist \nodeDist=6mm \tikzset{position/.style args={#1:#2 from #3}{at=(#3.#1), anchor=#1+180, shift=(#1:#2)}} \[\underbrace{\begin{gathered} \begin{tikzpicture} \node[fill=black,shape=circle,draw,inner sep=1pt] (0) at (0,0) {}; \node[fill=black,shape=circle,position=120:{0.35*\nodeDist} from 0,draw,inner sep=1pt] (a2) {}; \node[draw=none,position=120:{0.35*\nodeDist} from a2,inner sep=1pt] (a2a) {}; \node[draw=none,position=180:{0.35*\nodeDist} from a2,inner sep=1pt] (a2b) {}; \node[draw=none,position=60:{0.35*\nodeDist} from a2,inner sep=1pt] (a2c) {}; \node[fill=black,shape=circle,position=240:{0.35*\nodeDist} from 0,draw,inner sep=1pt] (a3) {}; \node[draw=none,position=-120:{0.35*\nodeDist} from a3,inner sep=1pt] (a3a) {}; \node[draw=none,position=-180:{0.35*\nodeDist} from a3,inner sep=1pt] (a3b) {}; \draw[thin,gray] (a2a) -- (a2) -- (a2b) -- (a2) -- (a2c); \draw[thin,gray] (a3a) -- (a3) -- (a3b); \draw[thin] (a3) -- (0) -- (a2); \node[fill=black,shape=circle, position=0:10mm from 0,draw,inner sep=1pt] (i) {}; \node[fill=black,shape=circle,position=60:{0.35*\nodeDist} from i,draw,inner sep=1pt] (b4) {}; \node[fill=black,shape=circle,position=-60:{0.35*\nodeDist} from i,draw,inner sep=1pt] (b5) {}; \draw[thin] (b4) -- (i) -- (b5); \node[draw=none,position=0:{0.4*\nodeDist} from b4,inner sep=1pt] (b4a) {}; \node[draw=none,position=60:{0.4*\nodeDist} from b4,inner sep=1pt] (b4b) {}; \node[draw=none,position=10:{0.4*\nodeDist} from b5,inner sep=1pt] (b5a) {}; \node[draw=none,position=-40:{0.4*\nodeDist} from b5,inner sep=1pt] (b5b) {}; \node[draw=none,position=-90:{0.4*\nodeDist} from b5,inner sep=1pt] (b5c) {}; \node[draw=none,position=-140:{0.4*\nodeDist} from b5,inner sep=1pt] (b5d) {}; \draw[thin,gray] (b4a) -- (b4) -- (b4b); \draw[thin,gray] (b5a) -- (b5) -- (b5b) -- (b5) -- (b5c) -- (b5) -- (b5d); \draw[thin] (0) -- (i); \end{tikzpicture} \end{gathered}}_{{\mathrsfso G}}\Longrightarrow 
\begin{gathered} \begin{tikzpicture} \node[fill=black,shape=rectangle,draw,inner sep=2pt] (0) at (0,0) {}; \node[shape=circle,draw,inner sep=2pt, position=0:5mm from 0] (e) {}; \node[ fill=gray!60, shape=rectangle, position=90:{0.2\nodeDist} from e, draw,inner sep=2pt] (A) {}; \node[fill=black,shape=rectangle,position=120:{0.7*\nodeDist} from 0,draw,inner sep=2pt] (a2) {}; \node[shape=circle, position=120:{0.2\nodeDist} from 0,draw,inner sep=2pt] (2) {}; \node[ fill=gray!60, shape=rectangle, position=30:{0.2\nodeDist} from 2, draw,inner sep=2pt] (A2) {}; \node[fill=black,shape=rectangle,position=240:{0.7*\nodeDist} from 0,draw,inner sep=2pt] (a3) {}; \node[shape=circle, position=240:{0.2\nodeDist} from 0,draw,inner sep=2pt] (3) {}; \node[ fill=gray!60, shape=rectangle, position=-30:{0.2\nodeDist} from 3, draw,inner sep=2pt] (A3) {}; \draw[thin] (e) -- (0) -- (2) -- (a2) -- (2) -- (A2); \draw[thin] (0) -- (3) -- (a3) -- (3) -- (A3); \node[fill=black, position=0:5mm from e,draw,inner sep=2pt] (i) {}; \node[fill=black,shape=rectangle,position=60:{0.7*\nodeDist} from i,draw,inner sep=2pt] (b4) {}; \node[shape=circle, position=60:{0.2\nodeDist} from i,draw,inner sep=2pt] (4) {}; \node[ fill=gray!60, shape=rectangle, position=150:{0.2\nodeDist} from 4, draw,inner sep=2pt] (B4) {}; \node[fill=black,shape=rectangle,position=-60:{0.7*\nodeDist} from i,draw,inner sep=2pt] (b5) {}; \node[shape=circle, position=-60:{0.2*\nodeDist} from i,draw,inner sep=2pt] (5) {};\node[ fill=gray!60, shape=rectangle, position=-150:{0.25\nodeDist} from 5, draw,inner sep=2pt] (B5) {}; \draw[thin] (i) -- (4) -- (b4) -- (4) -- (B4); \draw[thin] (i) -- (e) -- (A); \draw[thin] (i) -- (5) -- (b5) -- (5) -- (B5); \node[draw=none,position=120:{0.35*\nodeDist} from a2,inner sep=1pt] (a2a) {}; \node[draw=none,position=180:{0.35*\nodeDist} from a2,inner sep=1pt] (a2b) {}; \node[draw=none,position=60:{0.35*\nodeDist} from a2,inner sep=1pt] (a2c) {}; \node[draw=none,position=-120:{0.35*\nodeDist} from 
a3,inner sep=1pt] (a3a) {}; \node[draw=none,position=-180:{0.35*\nodeDist} from a3,inner sep=1pt] (a3b) {}; \draw[thin,gray] (a2a) -- (a2) -- (a2b) -- (a2) -- (a2c); \draw[thin,gray] (a3a) -- (a3) -- (a3b); \node[draw=none,position=0:{0.4*\nodeDist} from b4,inner sep=1pt] (b4a) {}; \node[draw=none,position=60:{0.4*\nodeDist} from b4,inner sep=1pt] (b4b) {}; \node[draw=none,position=10:{0.4*\nodeDist} from b5,inner sep=1pt] (b5a) {}; \node[draw=none,position=-40:{0.4*\nodeDist} from b5,inner sep=1pt] (b5b) {}; \node[draw=none,position=-90:{0.4*\nodeDist} from b5,inner sep=1pt] (b5c) {}; \node[draw=none,position=-140:{0.4*\nodeDist} from b5,inner sep=1pt] (b5d) {}; \draw[thin,gray] (b4a) -- (b4) -- (b4b); \draw[thin,gray] (b5a) -- (b5) -- (b5b) -- (b5) -- (b5c) -- (b5) -- (b5d); \end{tikzpicture} \end{gathered}\]} The Belief Propagation (BP) algorithm \cite{mezard2009information} provides a recipe for the computation of the marginals of $\nu$ on such a factor graph. The idea is to approximate the marginal $\nu_e(m)$ for the edge $e=(i,j)$ as \[\nu_e(m)\coloneqq \sum_{\{m_{\tilde e}\}_{\tilde e\neq e}}\nu(\underline{m})\simeq \nu_{i\to e}(m)\nu_{j\to e}(m)e^{-\beta m\omega_e},\] where $\nu_{i\to e}(m)$ (respectively $\nu_{j\to e}(m)$) mimics a marginal probability in a graphical model in which $j$ (respectively $i$) is absent. Such a factorization is exact on trees. The goal of the algorithm is the computation of these messages, and the approach is conjectured to be exact in the large size limit for sparse random graphs. The messages obey the following equations (one for each directed edge of the graph), \begin{equation}\label{messaggio} \nu_{i\to e}(m)\propto \sum_{\mathclap{\{m_{\tilde e}\}_{\tilde e\in\partial i\setminus e}}}\quad\mathbb I\left(m+\sum_{\mathclap{\tilde e\in\partial i\setminus e}}m_{\tilde e}=\kappa_i\right)\ \prod_{\mathclap{\substack{\tilde e=(r,i)\\\tilde e\in\partial i\setminus e}}}\nu_{r\to \tilde e}(m_{\tilde e})\e^{-\beta m_{\tilde e}\omega_{\tilde e}} \ .
\end{equation} We adopt the convention $\sum_{a\in A}f(a)=0$ and $\prod_{a\in A}f(a)=1$ if $A=\emptyset$ for any function $f$. {Pictorially, Eq.~\eqref{messaggio} can be rendered as \begin{center} \newdimen\nodeSize \nodeSize=4mm \newdimen\nodeDist \nodeDist=6mm \tikzset{position/.style args={#1:#2 from #3}{at=(#3.#1), anchor=#1+180, shift=(#1:#2)}} \begin{tikzpicture}[square/.style={regular polygon,regular polygon sides=4}] \node[fill=black,square,draw,inner sep=0.2pt] (0) at (0,0) {\small \color{white} $i$}; \node[shape=circle,draw=none, position=0:20mm from 0] (ef) {}; \node[shape=circle, position=160:{0.5\nodeDist} from 0,draw,inner sep=0.6pt] (2) {\tiny $ui$}; \node[fill=black,square,position=160:{4\nodeDist} from 0,draw,inner sep=0.5pt] (a2) {\small \color{white} $u$}; \node[fill=gray!60, shape=rectangle, position=60:{1.5\nodeDist} from 2, draw,inner sep=2pt] (A2) {}; \node[fill=black,square,position=200:{4\nodeDist} from 0,draw,inner sep=0.5pt] (a3) {\small \color{white} $v$}; \node[shape=circle, position=200:{0.5\nodeDist} from 0,draw,inner sep=0.6pt] (3) {\tiny $vi$}; \node[ fill=gray!60, shape=rectangle, position=-60:{1.5\nodeDist} from 3, draw,inner sep=2pt] (A3) {}; \path[thin,->,-stealth] (0) edge node [fill=white] {$\nu_{i\to (ij)}$} (ef); \draw[thin] (0) -- (2); \path[thin,->,-stealth] (A2) edge node [fill=white] {$\small \e^{-\beta\omega_{ui}}$} (2); \path[thin,->,-stealth] (A3) edge node [fill=white] {$\small \e^{-\beta\omega_{vi}}$} (3); \draw[thin] (0) -- (3); \node[draw=none,position=120:{0.35*\nodeDist} from a2,inner sep=1pt] (a2a) {}; \node[draw=none,position=180:{0.35*\nodeDist} from a2,inner sep=1pt] (a2b) {}; \node[draw=none,position=240:{0.35*\nodeDist} from a2,inner sep=1pt] (a2c) {}; \node[draw=none,position=-120:{0.35*\nodeDist} from a3,inner sep=1pt] (a3a) {}; \node[draw=none,position=-180:{0.35*\nodeDist} from a3,inner sep=1pt] (a3b) {}; \draw[thin,gray] (a2a) -- (a2) -- (a2b) -- (a2) -- (a2c); \draw[thin,gray] (a3a) -- (a3) -- (a3b); 
\path[thin,->,-stealth] (a2) edge node [fill=white,sloped] {\small $\nu_{u\to (ui)}$} (2); \path[thin,->,-stealth] (a3) edge node [fill=white,sloped] {\small $\nu_{v\to (vi)}$} (3); \end{tikzpicture} \end{center} where the arrows indicate the directions of ``propagation'' of the messages.} Taking advantage of the binary nature of the variables $m_e$, it is convenient to parametrize the marginals in terms of ``cavity fields'' $h_{i\to e}$, \begin{equation} \nu_{i\to e}(m)=\frac{\e^{\beta m h_{i\to e}}}{1+\e^{\beta h_{i\to e}}}, \end{equation} so that an equation for the cavity fields $h_{i\to e}$ can be written as \begin{multline}\label{cavbeta} h_{i\to e}=-\frac{1}{\beta}\ln\frac{\nu_{i\to e}(0)}{\nu_{i\to e}(1)} =\frac{1}{\beta}\ln\sum_{\mathclap{\{m_{\hat e}\}_{\partial i\setminus e}}}\ \ \mathbb I\left(\sum_{\hat e\in \partial i\setminus e}m_{\hat e}=\kappa_i-1\right)\prod_{{\substack{\hat e=(r,i)\\\hat e\in\partial i\setminus e}}}\e^{\beta m_{\hat e}(h_{r\to \hat e}-\omega_{\hat e})}\\ -\frac{1}{\beta}\ln\sum_{\mathclap{\{m_{\hat e}\}_{\partial i\setminus e}}}\ \ \mathbb I\left(\sum_{\hat e\in \partial i\setminus e}m_{\hat e}=\kappa_i\right)\prod_{{\substack{\hat e=(r,i)\\\hat e\in\partial i\setminus e}}}\e^{\beta m_{\hat e}(h_{r\to \hat e}-\omega_{\hat e})}. \end{multline} Once the equations have been solved for all the fields on the graph edges, the marginal probability of the variable $m$ on the edge $e=(i,j)$ is given by \begin{equation}\label{marginal} \nu_{e}(m)=\frac{\e^{\beta m(h_{i\to e}+h_{j\to e}-\omega_{e})}}{1+\e^{\beta (h_{i\to e}+h_{j\to e}-\omega_{e})}}, \end{equation} i.e., $\nu_{e}(1)$ evaluated with $\beta=1$ is the probability that $e\in{\mathrsfso F}_k^*$. 
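For vertices of small degree, Eqs.~\eqref{cavbeta} and \eqref{marginal} can be evaluated by brute-force enumeration of the binary occupation numbers. The following Python sketch is our own illustration (function names are hypothetical, not part of any reference implementation); it assumes $1\le\kappa_i\le|\partial i\setminus e|$, so that both constrained sums are nonzero:

```python
import itertools
import math

def cavity_field(beta, kappa, incoming):
    """One update of the cavity field h_{i->e}: `incoming` lists the pairs
    (h_{r->e'}, w_{e'}) over the edges e' incident to i other than e.
    Assumes 1 <= kappa <= len(incoming), so both sums below are nonzero."""
    def constrained_sum(t):
        # sum over binary occupations with sum(m) = t of prod_j exp(beta*m_j*(h_j - w_j))
        return sum(
            math.exp(beta * sum(m * (h - w) for m, (h, w) in zip(ms, incoming)))
            for ms in itertools.product((0, 1), repeat=len(incoming))
            if sum(ms) == t
        )
    return (math.log(constrained_sum(kappa - 1)) - math.log(constrained_sum(kappa))) / beta

def edge_marginal(beta, h_ie, h_je, w_e):
    """Probability that the edge variable m_e equals 1, given the two fields."""
    x = beta * (h_ie + h_je - w_e)
    return 1.0 / (1.0 + math.exp(-x))
```

For large $\beta$, the returned field approaches the $\kappa_i$-th smallest of the values $\omega_{\hat e}-h_{r\to\hat e}$, consistently with the zero-temperature limit discussed below.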
The BP approximation to the symbol MAP estimator in \eqref{eq_sMAP} is obtained by computing the messages, and then the marginals, with $\beta=1$, and by selecting the set \begin{equation} \begin{split} \hat{{\mathrsfso F}}_{k}^{\rm (s)}({\mathrsfso G})\coloneqq& \left\{e\in{\mathcal E}\colon \nu_e(1)>\frac{1}{2}\right\}\\ =&\left\{e=(i,j)\in{\mathcal E}\colon h_{i\to e}+h_{j\to e} > \omega_{e}\right\}. \end{split} \label{eq_inclusion_rule} \end{equation} Observe that, proceeding in this way, the selected edge set $\hat{{\mathrsfso F}}_{k}^{\rm (s)}\cup {\mathrsfso F}_{k}^{\rm (0)} $ is not, in general, an actual $k$-factor. The block MAP estimator is obtained by taking $\beta\to +\infty$ in the equations for the marginals: in this limit the measure in Eq.~\eqref{lbeta} concentrates on the configuration $\{m_e\}_e$ that maximizes the likelihood. For $\beta\to+\infty$, Eqs.~\eqref{cavbeta} simplify, and we obtain \begin{equation}\label{cav1} h_{i\to e}={\min}^{(\kappa_i)}\left[\left\{\omega_{\hat e}-h_{r\to \hat e}\right\}_{{\substack{\hat e=(r,i)\\\hat e\in \partial i\setminus e}}}\right], \end{equation} where $\min^{(r)}[A]$ outputs the $r$th smallest element of the set $A$. The block MAP estimator $\hat {\mathrsfso F}_k^{\rm (b)}$ is found using the same criterion as in Eq.~\eqref{eq_inclusion_rule} upon convergence of the algorithm, so that the final estimator for ${\mathrsfso F}_k^*$ is $\hat {\mathrsfso F}_k^{\rm (b)}\cup {\mathrsfso F}_k^{(0)}$. \subsection{Recursive Distributional Equations} In this Section we will study the average error on the considered ensemble by analysing, via the cavity method~\cite{Mezard2001}, the statistical properties of the solutions of the BP equations. We introduce the following random variables.
\begin{itemize} \item $\hat {\mathsf H}$ is a random variable that has the law of the cavity field $h_{i \to e}$ given that $e$ is a randomly chosen planted edge; \item ${\mathsf H}$ is a random variable that has the law of the cavity field $h_{i \to e}$ given that $e$ is a randomly chosen non-planted edge; \item $\hat {\Omega}$ is a random variable that has the law of the effective weight $\omega_e$ given that $e$ is a randomly chosen planted edge; \item ${\Omega}$ is a random variable that has the law of the effective weight $\omega_e$ given that $e$ is a randomly chosen non-planted edge. \end{itemize} Under the replica-symmetric hypothesis (i.e., assuming that typical realizations of $\nu$ have no long-range correlations), Eq.~\eqref{cavbeta} translates into recursive distributional equations (RDEs) involving the introduced random variables ${\mathsf H}$ and $\hat {\mathsf H}$. To write down this set of RDEs, first observe that an endpoint $i$ of a planted edge $e$ is incident to $\mathsf Z$ non-planted edges, plus $\mathsf K'-1$ other planted edges. The random variable $\mathsf K'$, however, is not simply distributed as $\mathsf K$, but instead as \cite{mezard2009information} \begin{equation} \Prob[\mathsf K'=\kappa]=\frac{\kappa\Prob[\mathsf K=\kappa]}{\mathbb E[\mathsf K]}=\binom{k}{\kappa}\frac{\kappa\hat\mu^\kappa(1-\hat\mu)^{k-\kappa}}{k\hat\mu}. \end{equation} This is because the planted subgraph, having $\hat\mu_kN\gg 1$ vertices, contains $\frac{1}{2}\hat\mu_kN\kappa\Prob[\mathsf K=\kappa]$ edges adjacent to a vertex of capacity $\kappa$, so that the probability of picking a planted edge that is adjacent to a vertex with capacity $\kappa$ is proportional to $\kappa\Prob[\mathsf K=\kappa]$.
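The size-biased reweighting just derived is easy to verify numerically. The short Python sketch below (our own illustration) checks the standard fact, used in the following, that size-biasing a Poisson variable amounts to shifting it by one:

```python
import math

def size_biased_pmf(pmf, support):
    """Size-biased reweighting: P[X' = x] = x P[X = x] / E[X]."""
    mean = sum(x * pmf(x) for x in support)
    return lambda x: x * pmf(x) / mean

gamma = 2.3
poisson = lambda z: math.exp(-gamma) * gamma**z / math.factorial(z)
biased = size_biased_pmf(poisson, range(60))

# For a Poisson variable Z, the shifted size-biased variable Z' - 1 is again Poisson(gamma):
assert all(abs(biased(z + 1) - poisson(z)) < 1e-12 for z in range(10))
```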
For the sake of brevity, here and in the following, given a random variable $\mathsf X$, we will denote by $\mathsf X'$ a random variable distributed as \begin{equation}\label{eqedgeper} \Prob[\mathsf X'=x]=\frac{x\Prob[\mathsf X=x]}{\mathbb E[\mathsf X]}. \end{equation} Similarly, if $e$ is a non-planted edge and $i$ is one of its endpoints, there will be $\mathsf K$ planted edges incident to $i$, and $\mathsf Z'-1$ other non-planted edges. Within the replica-symmetric assumption of independence of the incoming cavity fields, one thus obtains from Eq.~\eqref{cavbeta}: \begin{subequations}\label{rdebeta} \begin{align} \hat{\mathsf H}\stackrel{\rm d}{=}&-\frac{1}{\beta}\ln\frac{\sum_{\{m\},\{\tilde m\}}\mathbb I\left(\sum_{i=1}^{\mathsf K'-1}m_i+\sum_{j=1}^{\mathsf Z}\tilde m_j=\mathsf K'\right)\prod_{i=1}^{\mathsf K'-1}\e^{\beta m_i(\hat{\mathsf H}_i-\hat\Omega_i)}\prod_{j=1}^{\mathsf Z}\e^{\beta \tilde m_j({\mathsf H}_j-\Omega_j)}}{\sum_{\{m\},\{\tilde m\}}\mathbb I\left(\sum_{i=1}^{\mathsf K'-1}m_i+\sum_{j=1}^{\mathsf Z}\tilde m_j=\mathsf K'-1\right)\prod_{i=1}^{\mathsf K'-1}\e^{\beta m_i(\hat{\mathsf H}_i-\hat\Omega_i)}\prod_{j=1}^{\mathsf Z}\e^{\beta \tilde m_j({\mathsf H}_j-\Omega_j)}}, \label{cavbetaH1}\\ {\mathsf H}\stackrel{\rm d}{=}&-\frac{1}{\beta}\ln\frac{\sum_{\{m\},\{\tilde m\}}\mathbb I\left(\sum_{i=1}^{\mathsf K}m_i+\sum_{j=1}^{\mathsf Z'-1}\tilde m_j=\mathsf K\right)\prod_{i=1}^{\mathsf K}\e^{\beta m_i(\hat{\mathsf H}_i-\hat\Omega_i)}\prod_{j=1}^{\mathsf Z'-1}\e^{\beta \tilde m_j({\mathsf H}_j-\Omega_j)}}{\sum_{\{m\},\{\tilde m\}}\mathbb I\left(\sum_{i=1}^{\mathsf K}m_i+\sum_{j=1}^{\mathsf Z'-1}\tilde m_j=\mathsf K-1\right)\prod_{i=1}^{\mathsf K}\e^{\beta m_i(\hat{\mathsf H}_i-\hat\Omega_i)}\prod_{j=1}^{\mathsf Z'-1}\e^{\beta \tilde m_j({\mathsf H}_j-\Omega_j)}}.\label{cavbetaH2} \end{align} \end{subequations} The equalities have to be considered in distribution.
In the equations above all random variables are independent, $\mathsf Z$ is Poisson distributed with mean $\gamma$, the $\Omega_i$'s have the same law as $\Omega$, ${\mathsf H}_i$ are independent copies of $\mathsf H$, and, similarly, $\hat {\mathsf H}_i$ of $\hat {\mathsf H}$. Finally, the variable $\mathsf K$ is distributed as in Eq.~\eqref{distK}. Observe that, since $\mathsf Z$ is Poisson distributed, $\mathsf Z'-1\stackrel{\rm d}{=}\mathsf Z$. Note also that, in the $\beta\to+\infty$ limit, Eqs.~\eqref{rdebeta} simplify, giving \begin{subequations}\label{cavmap} \begin{align} \hat{\mathsf H}\stackrel{\rm d}{=}&{\min}^{(\mathsf K')}\left\{\{\hat\Omega_i-\hat{\mathsf H}_i\}_{i=1}^{\mathsf K'-1}\cup\{\Omega_j-\mathsf H_j\}_{j=1}^{\mathsf Z}\right\},\\ {\mathsf H}\stackrel{\rm d}{=}&{\min}^{(\mathsf K)}\left\{\{\hat\Omega_i-\hat{\mathsf H}_i\}_{i=1}^{\mathsf K}\cup\{\Omega_j-\mathsf H_j\}_{j=1}^{\mathsf Z'-1}\right\}. \end{align} \end{subequations} Recalling the inclusion rule \eqref{eq_inclusion_rule}, the average reconstruction error is given as \begin{equation} \mathbb{E}[\varrho] = \frac{\hat\mu}{2} \Prob[\hat {\mathsf H}_1 + \hat {\mathsf H}_2 \le \hat \Omega ] + \frac{\gamma \hat\mu_k}{2k} \Prob[\mathsf H_1 + \mathsf H_2 > \Omega ]. \label{eq_averho_1} \end{equation} \subsection{Hard fields} \label{sec:pruning2} At this point, we aim at evaluating $\mathbb E[\varrho]$ by solving, possibly numerically, the RDEs given in Eqs.~\eqref{rdebeta}. It is, however, convenient to first isolate the contribution of ``hard fields''. It is indeed not difficult to see that the events $\hat {\mathsf H} = + \infty$ and ${\mathsf H} = - \infty$ have finite probability. This follows from the fact that $\Prob[\mathsf Z=0]>0$ in \eqref{cavbetaH1}, which leads to $\hat {\mathsf H} = + \infty$, and this event can lead to ${\mathsf H} = - \infty$ in \eqref{cavbetaH2}.
Let therefore $\hat q\coloneqq\Prob[\hat{\mathsf H}<+\infty]$ and $q\coloneqq\Prob[{\mathsf H}>-\infty]$ be the probabilities that the fields are finite. We introduce two new random variables $\hat H$ and $H$ that have the law of $\hat {\mathsf H}$ and ${\mathsf H}$ conditional on being finite: \begin{subequations} \begin{align} \mathsf H & \stackrel{\mathrm d}{=}\begin{cases} -\infty&\text{with prob. }1-q \ ,\\ H&\text{with prob. } q \ , \end{cases} \\ \hat{\mathsf H} & \stackrel{\mathrm d}{=}\begin{cases} +\infty&\text{with prob. }1-\hat q \ ,\\ \hat H&\text{with prob. } \hat q \ . \end{cases} \end{align} \end{subequations} To obtain the equations obeyed by $q$, $\hat q$, $H$ and $\hat H$, it is convenient to observe that \begin{subequations} \begin{align} \hat{\mathsf H}\stackrel{\rm d}{=}&{\min}^{(\mathsf K')}\left\{\{\hat\Omega_i-\hat{\mathsf H}_i\}_{i=1}^{\mathsf K'-1}\cup\{\Omega_j-\mathsf H_j\}_{j=1}^{\mathsf Z}\right\}+O(\sfrac{1}{\beta}),\\ {\mathsf H}\stackrel{\rm d}{=}&{\min}^{(\mathsf K)}\left\{\{\hat\Omega_i-\hat{\mathsf H}_i\}_{i=1}^{\mathsf K}\cup\{\Omega_j-\mathsf H_j\}_{j=1}^{\mathsf Z}\right\}+O(\sfrac{1}{\beta}), \end{align} \end{subequations} the correction terms being finite. From these equations we easily get that \begin{subequations}\label{eqQQ} \begin{align} 1-\hat q=&\e^{-\gamma q},\\ 1-q=&\sum_{\kappa=1}^k\Prob[\mathsf K=\kappa](1-\hat q)^\kappa=1-\frac{1-(1-\hat\mu\hat q)^k}{1-(1-\hat\mu)^k}, \end{align} \end{subequations} which is a set of equations for $q$ and $\hat q$.
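Eqs.~\eqref{eqQQ} can be solved by straightforward fixed-point iteration, as in the following Python sketch (our own illustration; the function name is hypothetical):

```python
import math

def solve_hard_field_fractions(gamma, mu_hat, k, tol=1e-12, max_iter=100_000):
    """Fixed-point iteration for q = P[H > -inf] and q_hat = P[H_hat < +inf]:
    1 - q_hat = exp(-gamma q),  q = (1 - (1 - mu_hat q_hat)^k) / (1 - (1 - mu_hat)^k)."""
    q = q_hat = 1.0  # start from the "all fields soft" point
    for _ in range(max_iter):
        q_hat_new = 1.0 - math.exp(-gamma * q)
        q_new = (1.0 - (1.0 - mu_hat * q_hat_new)**k) / (1.0 - (1.0 - mu_hat)**k)
        if abs(q_new - q) < tol and abs(q_hat_new - q_hat) < tol:
            break
        q, q_hat = q_new, q_hat_new
    return q, q_hat
```

Depending on the parameters, the iteration flows either to $q=\hat q=0$ (all fields hard) or to a nontrivial fixed point; this dichotomy is behind the percolation-like transition discussed below.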
Introducing the variables $Z$ distributed as \begin{subequations} \begin{equation} \Prob[Z=z]=\frac{1-\hat q}{\hat q}\frac{(\gamma q)^z}{z!},\quad z\in\mathds N \end{equation} and the variable $K$ distributed as \begin{equation} \Prob[K=\kappa]=\binom{k}{\kappa}\frac{(1-\hat\mu \hat q)^{k-\kappa}(\hat\mu\hat q)^{\kappa}}{1-(1-\hat\mu \hat q)^k},\quad \kappa=1,\dots,k, \end{equation} \end{subequations} we can reduce the problem to equations involving the `soft fields' $H$ and $\hat H$ only. Using the notation introduced in Eq.~\eqref{eqedgeper} we can write \begin{subequations}\label{rdebetasoft} \begin{align} \hat{ H}\stackrel{\rm d}{=}&-\frac{1}{\beta}\ln\frac{\sum_{\{m\},\{\tilde m\}}\mathbb I\left(\sum_{i=1}^{ K'-1}m_i+\sum_{j=1}^{ Z}\tilde m_j= K'\right)\prod_{i=1}^{ K'-1}\e^{\beta m_i(\hat{ H}_i-\hat\Omega_i)}\prod_{j=1}^{ Z}\e^{\beta \tilde m_j({ H}_j-\Omega_j)}}{\sum_{\{m\},\{\tilde m\}}\mathbb I\left(\sum_{i=1}^{ K'-1}m_i+\sum_{j=1}^{ Z}\tilde m_j= K'-1\right)\prod_{i=1}^{ K'-1}\e^{\beta m_i(\hat{ H}_i-\hat\Omega_i)}\prod_{j=1}^{ Z}\e^{\beta \tilde m_j({ H}_j-\Omega_j)}}, \label{cavbetaH1s}\\ { H}\stackrel{\rm d}{=}&-\frac{1}{\beta}\ln\frac{\sum_{\{m\},\{\tilde m\}}\mathbb I\left(\sum_{i=1}^{ K}m_i+\sum_{j=1}^{ Z'-1}\tilde m_j= K\right)\prod_{i=1}^{ K}\e^{\beta m_i(\hat{ H}_i-\hat\Omega_i)}\prod_{j=1}^{ Z'-1}\e^{\beta \tilde m_j({ H}_j-\Omega_j)}}{\sum_{\{m\},\{\tilde m\}}\mathbb I\left(\sum_{i=1}^{ K}m_i+\sum_{j=1}^{ Z'-1}\tilde m_j= K-1\right)\prod_{i=1}^{ K}\e^{\beta m_i(\hat{ H}_i-\hat\Omega_i)}\prod_{j=1}^{ Z'-1}\e^{\beta \tilde m_j({ H}_j-\Omega_j)}}.\label{cavbetaH2s} \end{align} \end{subequations} In the $\beta\to+\infty$ limit, \begin{subequations}\label{cavmapsoft} \begin{align} \hat{H}\stackrel{\rm d}{=}&{\min}^{(K')}\left\{\{\hat\Omega_i-\hat{H}_i\}_{i=1}^{K'-1}\cup\{\Omega_j-H_j\}_{j=1}^{Z}\right\}\label{cavmapsoft1},\\ {H}\stackrel{\rm d}{=}&{\min}^{(K)}\left\{\{\hat\Omega_i-\hat{H}_i\}_{i=1}^{K}\cup\{\Omega_j-H_j\}_{j=1}^{Z'-1}\right\}.\label{cavmapsoft2} \end{align} \end{subequations} It is important to note that $\hat H=+\infty$ and $H=-\infty$ is an admissible solution of the RDEs for any value of $\beta$. This solution corresponds to a full recovery phase, in which the fraction of planted edges that are not correctly recovered vanishes as the system size grows. The average reconstruction error \eqref{eq_averho_1} can be rewritten as \begin{equation} \mathbb{E}[\varrho] = \frac{\hat\mu \hat q^2}{2} \Prob[\hat H_1 + \hat H_2 \le \hat \Omega ] + \frac{\hat\mu_k q^2\gamma }{2k} \Prob[H_1 + H_2 > \Omega ]. \label{eq_averho_2} \end{equation} This procedure of ``hard fields'' elimination on the RDEs also admits an interpretation on a single graph instance. Infinite fields on the planted edges may appear in the BP equations \eqref{cavbeta}: if a vertex $i$ has coordination equal to its capacity $\kappa_i$, then $h_{i \to e}= +\infty$ for all its incident edges $e$. This is not surprising: in this case, indeed, all edges incident to $i$ surely belong to the planted $k$-factor. The vertex $i$ and all the $\kappa_i$ edges incident to it can be removed and the capacities of its neighbours updated. This removal procedure can be iterated until either ${\mathrsfso F}_k^*$ has been entirely recovered, or a non-trivial core survives in which every vertex $i$ has $|\partial i|>\kappa_i$. The BP algorithm in Eq.~\eqref{cavbeta} can then be run on the obtained core. The pruning described above generalizes the pruning procedure adopted for the study of optimal matching on Erd\H os-R\'enyi graphs \cite{Karp1981} and planted matching problems \cite{Semerjian2020}. It identifies planted edges purely on the basis of the topology of the graph ${\mathrsfso G}$ (and is therefore unrelated to the weights on the edges).
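The pruning procedure described above can be sketched in a few lines of Python (our own illustration; function and variable names are hypothetical). Vertices whose degree equals their capacity are repeatedly removed, and their incident edges are declared planted:

```python
def prune(adj, kappa):
    """Iterative pruning: whenever deg(i) == kappa[i], all edges at i are
    declared planted, i is removed and its neighbours' capacities decrease.
    `adj` maps vertex -> set of neighbours; returns (planted_edges, core)."""
    adj = {i: set(nb) for i, nb in adj.items()}   # work on copies
    kappa = dict(kappa)
    planted = set()
    queue = [i for i in adj if len(adj[i]) == kappa[i]]
    while queue:
        i = queue.pop()
        if i not in adj or len(adj[i]) != kappa[i]:
            continue  # stale queue entry
        for j in list(adj[i]):
            planted.add(frozenset((i, j)))
            adj[j].discard(i)
            kappa[j] -= 1
            if len(adj[j]) == kappa[j]:
                queue.append(j)
        del adj[i]
    core = {i: nb for i, nb in adj.items() if nb}
    return planted, core
```

If the returned core is empty, the planted $k$-factor has been recovered by topology alone; otherwise, BP must be run on the surviving core.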
A `percolation transition' occurs in the parameters of the problem, separating a phase in which the graph is completely pruned (and the $k$-factor completely recovered) from a phase in which an (extensive) core survives. The threshold is obtained by studying the equation \begin{equation}\label{eq:qeq} q+\frac{\left(1-\hat\mu+\hat\mu\e^{-\gamma q}\right)^k-1}{\hat\mu_k}=0 \end{equation} which has $q=0$ as attractor for \begin{equation}\label{boundfr} ck\mu\hat\mu\leq 1. \end{equation} In the considered problem, this condition implies full recovery of the planted configuration by simple topological considerations, i.e., iterative pruning. \section{A numerical experiment: the exponential distribution case} \label{sec:Exp} We shall now numerically investigate the planted $k$-factor problem on the ensemble of graphs introduced above, using the tools described in the previous Section. We will take a uniform distribution for the non-planted weights, \begin{equation}\label{uniform} p(w)=\frac{1}{c}\mathbb I(0 \le w \le c), \end{equation} and an exponential distribution \begin{equation}\label{expdis} \hat p(w)=\lambda \e^{-\lambda w}\mathbb{I}(w \ge 0) \ , \end{equation} for the planted weights. It follows that in this case $\Gamma=[0,c]$, $\mu=1$, $\hat\mu=1-\e^{-c\lambda}$ and $\gamma=c(1-\e^{-k c\lambda})$. In the $c\to+\infty$ limit, $\hat\mu=\mu=1$ and $q=\hat q=1$ $\forall k\in\mathds N$.
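For this model, the parameters and the pruning bound \eqref{boundfr} can be collected in a few lines (a Python sketch of our own, with hypothetical names):

```python
import math

def exp_model_parameters(c, lam, k):
    """Uniform non-planted weights on [0, c] (so mu = 1) and exponential
    planted weights of rate lam; returns (mu, mu_hat, gamma)."""
    mu = 1.0
    mu_hat = 1.0 - math.exp(-c * lam)
    gamma = c * (1.0 - math.exp(-k * c * lam))
    return mu, mu_hat, gamma

def pruning_recovers(c, lam, k):
    """Sufficient condition c*k*mu*mu_hat <= 1 for full recovery by pruning."""
    mu, mu_hat, _ = exp_model_parameters(c, lam, k)
    return c * k * mu * mu_hat <= 1.0
```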
\begin{figure*} \subfloat[\label{fig:exprho}]{\includegraphics[height=0.45\columnwidth]{rho_expL.pdf}}\hfill \subfloat[\label{fig:exprho1}]{\includegraphics[height=0.45\columnwidth]{rho_exp_b1L.pdf}}\\\ \subfloat[\label{fig:drift}]{\includegraphics[height=0.45\columnwidth]{mediavarianza.pdf}}\hfill \subfloat[\label{fig:lambdap}]{\includegraphics[height=0.45\columnwidth]{lambdac.pdf}} \caption{(a -- b) Reconstruction error using the block MAP (a) and the symbol MAP (b) estimators for the planted $2$-factor problem with exponential planted weights and different values of the average degree $c$ of the non-planted edges. The lines have been obtained by numerically solving the RDEs \eqref{cavmapsoft}, corresponding to $\beta=+\infty$, and \eqref{rdebetasoft} with $\beta=1$, respectively, using a PD algorithm with $\mathcal N=10^6$ fields. The dots are obtained by solving $10^3$ instances of the problem using the BP algorithm in Eqs.~\eqref{cav1} and \eqref{cavbeta} with $\beta=1$ on graphs of $N=10^3$ vertices. The $c\to+\infty$ curves are estimated using PD with $c=20$ and are compared with BP results obtained solving the problem on complete graphs with $N=10^2$ vertices. In the block MAP case, we mark with an arrow the partial-to-full-recovery threshold in the $c\to+\infty$ limit, predicted to be at $\lambda = 4k$. \textit{In the insets}, plots of $\sqrt{\lambda^\star_+-\lambda}\ln\mathbb E[\varrho]$ given by the PD algorithm, with $\lambda_+^\star$ estimated using the condition in Eq.~\eqref{condizione}. Both in the block MAP case and in the symbol MAP case, the limiting value for $\lambda\to\lambda^\star_+$ is finite, suggesting that the transition is of infinite order. (c) Mean and variance of the variable $\hat H$ estimated using a PD algorithm for $k=c=2$ and $\beta\to+\infty$. In this case, we numerically find $\lambda^\star_+=7.9(1)$, whereas the prediction given by Eq.~\eqref{condizione} is $\lambda^\star_+=7.9946\dots$.
The results are obtained taking all the $\mathcal N$ fields equal to zero as the initial condition. An ``iteration'' of the algorithm corresponds to an update of all the fields of the population by means of the RDEs. For $\lambda=5$, inside the partial recovery phase, the algorithm rapidly converges to asymptotic values that do not depend on the size $\mathcal N$ of the population. For $\lambda=10$, i.e., in the full recovery phase, the algorithm converges to values of $\mathbb E[\hat H]$ that are $\mathcal N$-dependent and diverge with $\mathcal N$, whereas the variance of the distribution goes to an $\mathcal N$-independent value (although noisy). The described phenomenology was observed for all the investigated values of $c$ and $k$, and for both the block MAP and the symbol MAP. (d) Transition point $\lambda^\star_+$ for the block MAP in the exponential model for $c\to+\infty$. The continuous line corresponds to the prediction $\lambda^\star_+=4k$ given in Section~\ref{sec:exp2}, whereas the dots are numerical estimates of the transition point, obtained using PD with $c=50$ and a population of $\mathcal N=10^8$ fields.} \end{figure*} The large-$N$ results for $\mathbb E[\varrho]$ have been obtained by a numerical resolution of the RDEs for the soft fields for $\beta=1$ and $\beta\to+\infty$ (see Eqs.~\eqref{cavmapsoft}) via a population dynamics (PD) algorithm~\cite{Mezard2001}. The approach consists in introducing an iterative version of the RDEs discussed above that defines a new set of random variables $(\hat H^{(n)},H^{(n)})_n$, $n=0,1,\dots$.
These new random variables satisfy, for $\beta\to+\infty$, \begin{subequations}\label{cavmapsoftn} \begin{align} \hat{H}^{(n+1)}\stackrel{\rm d}{=}&{\min}^{(K')}\left\{\{\hat\Omega_i-\hat{H}^{(n)}_i\}_{i=1}^{K'-1}\cup\{\Omega_j-H_j^{(n)}\}_{j=1}^{Z}\right\},\\ {H}^{(n)}\stackrel{\rm d}{=}&{\min}^{(K)}\left\{\{\hat\Omega_i-\hat{H}^{(n)}_i\}_{i=1}^{K}\cup\{\Omega_j-H_j^{(n-1)}\}_{j=1}^{Z'-1}\right\}, \end{align} \end{subequations} with initial condition $\hat H^{(0)}=H^{(-1)}=0$. A similar set of iterative RDEs can be written for $\beta=1$ starting from Eqs.~\eqref{rdebetasoft}. The underlying assumption is that, if a non-trivial solution of the original RDEs exists, such a solution will be reached, in distribution, by Eqs.~\eqref{cavmapsoftn} for $n\to+\infty$. From the algorithmic point of view, the law of the random variable $H^{(n)}$ is represented by the empirical distribution of a sample $\{h_1,\dots,h_{\mathcal N}\}$ of its representatives, with $\mathcal N \gg 1$, so that \begin{equation} \Prob[H^{(n)}\leq h]\approx \frac{1}{\mathcal N} \sum_{i=1}^{\mathcal N} \mathbb{I}(h_i^{(n)} \le h). \label{eq_population} \end{equation} Similarly, an empirical distribution is adopted to approximate the law of $\hat H^{(n)}$. The RDEs \eqref{cavmapsoftn} are used to update the population representing $(\hat H^{(n)},H^{(n)})$ to a new population representing the variables $(\hat H^{(n+1)},H^{(n+1)})$, each update corresponding to an ``iteration'' of the algorithm. The algorithm stops when some convergence criterion (usually the convergence of the moments of the populations) is satisfied (see, e.g., Ref.~\cite{mezard2009information} for additional details). The PD predictions have been compared with actual BP results, obtained solving the inference problem on a large number of instances (see also Section~\ref{sec:app} for further details about the BP implementation).
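A minimal population-dynamics sweep for the zero-temperature recursions can be sketched as follows (our own illustration: the model-dependent samplers for $K$, $K'$, $Z$, $Z'$, $\Omega$ and $\hat\Omega$ are left as user-supplied callables, and the first update requires $Z\ge 1$, as is the case for the soft-field law of $Z$):

```python
import heapq
import random

def kth_smallest(values, k):
    """min^(k): the k-th smallest element of `values` (1-based)."""
    return heapq.nsmallest(k, values)[k - 1]

def pd_sweep(pop_h, pop_hhat, s):
    """One sweep of zero-temperature population dynamics. `s` is a dict of
    samplers: s['Kp'], s['K'], s['Z'] (values >= 1), s['Zp'], s['Om'], s['Omhat']."""
    n = len(pop_h)
    new_hhat = []
    for _ in range(n):
        kp, z = s['Kp'](), s['Z']()
        vals = [s['Omhat']() - random.choice(pop_hhat) for _ in range(kp - 1)]
        vals += [s['Om']() - random.choice(pop_h) for _ in range(z)]
        new_hhat.append(kth_smallest(vals, kp))
    new_h = []
    for _ in range(n):
        k, zp = s['K'](), s['Zp']()
        vals = [s['Omhat']() - random.choice(new_hhat) for _ in range(k)]
        vals += [s['Om']() - random.choice(pop_h) for _ in range(zp - 1)]
        new_h.append(kth_smallest(vals, k))
    return new_h, new_hhat
```

Monitoring the moments of the two populations along the sweeps is the natural convergence diagnostic.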
In the block MAP case ($\beta\to+\infty$) it is observed that, for a given pair $k$ and $c$, there exists an interval $\Lambda^{(\rm b)}\coloneqq (\lambda^\star_{-},\lambda^\star_{+})$ of values of $\lambda$, such that for $\lambda\in\Lambda^{(\rm b)}$ the PD algorithm converges to a finite solution. The corresponding value of $\mathbb E[\varrho]$ is found to be non-zero and, within numerical precision, $\mathbb E[\varrho]\to 0$ smoothly on the boundary of the interval. In particular, we numerically observe that \begin{equation}\lim_{\lambda\to\lambda^\star_+}\sqrt{\lambda^\star_+-\lambda}\ln\mathbb E[\varrho]=-\alpha,\qquad \alpha>0.\end{equation} This implies that $\mathbb E[\varrho]$ vanishes as $\e^{-\alpha/\sqrt{\lambda^\star_+-\lambda}}$ for $\lambda\to\lambda^\star_+$, faster than any power of $\lambda^\star_+-\lambda$, i.e., the transition is of infinite order. Remarkably, the very same behavior has been observed in the $k=1$ and $c\to+\infty$ case \cite{Semerjian2020}, where it is shown that $\lambda_+^\star=4$ and \begin{equation}\ln\mathbb E[\varrho]=-\frac{2\pi}{\sqrt{4-\lambda}}-\frac{3}{2}\ln(4-\lambda)+O(1)\ .\end{equation} Finally, for $c\to +\infty$ it is observed that $\lambda^\star_{-}\to 0$, whereas $\lambda^\star_{+}$ approaches a finite limit. The obtained prediction for $\mathbb E[\varrho]$ is fully compatible with the BP results, in which $\mathbb E[\varrho]\to 0$ for $\lambda\to\lambda^\star_\pm$ within the finite-size effects due to the size of the considered graphs. On the other hand, for $\lambda\not\in\Lambda^{(\rm b)}$ the PD algorithm does not converge. To be more precise, the population is subject to a `drift' towards the full recovery solution, $H\to -\infty$ and $\hat H\to+\infty$. This can be seen, for example, in Fig.~\ref{fig:drift}, where some numerical results for $c=k=2$ are given in the $\beta\to+\infty$ case for $\hat H$. It is seen that the numerically estimated mean $\mathbb E[\hat H]$ is population-size-dependent, and in particular diverges with the population size, whereas the variance is not.
Moreover, larger populations correspond to larger values of $\mathbb E[\hat H]$. The numerical results therefore suggest that a non-trivial, attractive fixed point exists for $\lambda\in\Lambda^{(\rm b)}$ only; otherwise, the only attractor is the trivial fixed point $\hat H=-H=+\infty$ corresponding to the full recovery phase. In Ref.~\cite{Semerjian2020} it is argued that, for $k=1$, an infinite-order phase transition takes place between a full recovery phase and a partial recovery phase, and in particular full recovery is obtained for $\lambda\in\mathds R^+\setminus\Lambda^{(\rm b)}$. The conjectures in Ref.~\cite{Semerjian2020} about the location of the transition and its nature have been recently rigorously proved in Ref.~\cite{Ding2021}. Our results strongly suggest that the same phenomenology extends to the $k>1$ case. As in the $k=1$ case, the accurate numerical determination of the endpoints of $\Lambda^{(\rm b)}$ is heavily affected by finite-population-size effects using PD (and finite-size effects using BP). Indeed, the transition manifests itself as a front propagation in the cumulative distribution function, which drifts towards large values of the fields. Such front propagations are generically driven by the behavior in the exponentially small tail far away from the front~\cite{Brunet1997}. The finite population size induces a cutoff on the smallest representable value of the cumulative distribution function, which translates, assuming an exponential decay of the cumulative, into logarithmic finite-population-size effects on the velocity of the front and the location of the transition. The very same phenomenology is observed for $\beta=1$, i.e., for the symbol MAP, where a partial recovery phase $\lambda\in\Lambda^{(\rm s)}=(\lambda^\star_-,\lambda^\star_+)$ is surrounded by a full recovery phase.
For a given pair $k$ and $c$ of parameters, the symbol MAP transition points are found to be very close to the block MAP transition points obtained for the same values of $k$ and $c$. Also in this case, it is found that $(\lambda^\star_+-\lambda)^{\sfrac{1}{2}}\ln\mathbb E[\varrho]\to -\hat\alpha$ for $\lambda\to \lambda^\star_+$ for some $\hat\alpha>0$, suggesting that the transition between the partial recovery phase and the full recovery phase is of infinite order also in this case. In Fig.~\ref{fig:exprho1} we give the PD results for $\mathbb E[\varrho]$ in this case, alongside the results of the BP simulations. \section{A criterion for the block MAP transition}\label{sec:criterion} In this Section we give a heuristic criterion for the transition between the partial recovery phase, $\mathbb E[\varrho]>0$, and the full recovery phase, $\mathbb E[\varrho]=0$, in the case of the block MAP. Our reasoning will follow and generalize the one given in Ref.~\cite{Semerjian2020} for the $k=1$ case. Our approach is inspired by the physics literature on front propagation in reaction-diffusion systems and equations of the FKPP type \cite{Brunet1997,Majumdar2000,Ebert2000}. Before applying it, however, an additional simplification of Eqs.~\eqref{cavmapsoft} must be performed. We will proceed in generality, assuming that $\hat p$ depends on a parameter, which we call $\lambda$. Moreover, we will assume that a special value $\lambda^\star$ exists such that for $\lambda<\lambda^\star$ we are in a partial-recovery phase, whereas for $\lambda>\lambda^\star$ we are in a full-recovery phase. We will also assume that the transition is \textit{continuous}, i.e., that $\mathbb E[\varrho]\to 0$ smoothly as $\lambda\to {\lambda^\star}^-$. Observe that $\mathbb E[\varrho]\to 0$ means that $\Pr[H_1+H_2<\Omega]\to 1$ and $\Pr[\hat H_1+\hat H_2>\hat\Omega]\to 1$, see Eq.~\eqref{eq_averho_2}.
The first property implies that, approaching the transition, $H_1<\Omega-H_2$ almost surely, i.e., in Eq.~\eqref{cavmapsoft2} the minimum picks almost surely one of the `planted contributions'. Similarly, the second property implies that in the same limit the minimum in Eq.~\eqref{cavmapsoft1} is almost surely picked in the set of `non-planted contributions'. These observations lead us to introduce a new set of random variables $(\hat U^{(n)},U^{(n)})_n$, satisfying the iterative RDEs, \begin{align}\label{eqK} \hat{U}^{(n+1)}\stackrel{\rm d}{=}&{\min_{1\leq i\leq Z}}\{\Omega_i-{U}^{(n)}_i\},\\ {U}^{(n)}\stackrel{\rm d}{=}&{\max_{1\leq i\leq K}}\{\hat\Omega_i-\hat U^{(n)}_i\}, \end{align} with initial condition $\hat U^{(0)}=U^{(0)}=0$, corresponding to the expected ``effective'' behavior of Eqs.~\eqref{cavmapsoftn} near the transition. The new set of auxiliary variables is informative about the behavior of the random variables $(\hat H^{(n)},H^{(n)})_n$. Indeed, we can prove that \begin{equation}\label{ordering} \hat U^{(n)}\preceq \hat H^{(n)},\qquad H^{(n-1)}\preceq U^{(n)},\qquad \forall n. \end{equation} Given two random variables $X$ and $Y$, we say that $X\preceq Y$ if $\mathbb P[X>z]\leq \mathbb P[Y>z]$ for all $z$ \cite{Lindvall2012}. The proof proceeds by induction. Eqs.~\eqref{ordering} are satisfied for $n=0$. Assuming that they are satisfied for a given $n$, it is easily proved that they are satisfied for $n+1$, since \begin{subequations} \begin{align} H^{(n)}&\preceq\max_{1\leq i\leq K}\{\hat\Omega_i-\hat{H}_i^{(n-1)}\}\preceq \max_{1\leq i\leq K}\{\hat\Omega_i-\hat{U}_i^{(n)}\}={U}^{(n+1)},\\ \hat {U}^{(n+1)}&=\min_{1\leq j\leq Z}\{\Omega_j-{U}^{(n)}_j\}\preceq\min_{1\leq j\leq Z}\{\Omega_j-{H}^{(n)}_j\}\preceq \hat{H}^{(n+1)}. \end{align} \end{subequations} The result stated above implies that, if $\Pr[\hat U^{(n)}>z]\to 1$ for $n\to+\infty$, then $\Pr[\hat H^{(n)}>z]\to 1$ as well in the same limit.
We will now obtain a sufficient condition for $\Pr[\hat U^{(n)}>z]\to 1$, which will therefore give us a (sufficient) criterion for being in the full recovery phase. We define \begin{align} F(x;n)&\coloneqq \Prob[\hat U^{(n)}<x],\\ \Phi(x;n)&\coloneqq\Prob[U^{(n)}<x]. \end{align} Denoting by $\mathbb E_X[\bullet]$ the expectation with respect to the random variable $X$, we have that \begin{align} F(x;n+1)&=1-\mathbb E_Z\left[\left(\mathbb E_\Omega\left[\Phi(\Omega-x;n)\right]\right)^Z\right],\\ \Phi(x;n)&=\mathbb E_K\left[\left(1-\mathbb E_{\hat\Omega}\left[ F(\hat\Omega-x;n) \right]\right)^K\right], \end{align} and therefore \begin{equation}\label{eqnonlinearizzata} F(x;n+1)= 1-\mathbb E_Z\left[\left(\mathbb E_{\Omega}\left[\left(1-\mathbb E_{\hat\Omega}\left[F(\hat\Omega-\Omega+x;n)\right]\right)^K\right]\right)^Z\right]. \end{equation} Suppose now that the cumulative $F(x;n)$ is subject to a `drift', i.e., that there exists a velocity $v$ such that $F(x+vn;n)\to F(x)$ as $n\to\infty$. This is in line with the numerical results, which suggest that $\mathbb E[\hat H]\to +\infty$ and $\mathbb E[H]\to -\infty$ approaching the transition, whereas the higher-order cumulants remain finite. Fig.~\ref{fig:drift}, in particular, shows that the means of the distributions are subject to a constant drift velocity that (at the leading order in $n$) does not depend on $n$. Moreover, this assumption is compatible with what has been observed in \cite{Semerjian2020}, and rigorously proved in \cite{Kingman1975,Biggins1977}, in the study of Eq.~\eqref{eqK} for $K\equiv 1$. Then, for $n\to+\infty$, \begin{equation} F(x-v)= 1-\mathbb E_Z\left[\left(\mathbb E_{\Omega}\left[\left(1-\mathbb E_{\hat\Omega}\left[F(\hat\Omega-\Omega+x)\right]\right)^K\right]\right)^Z\right].
\end{equation} For $x\to-\infty$, $F(x)\to 0$ by definition, and in this limit at first order in $F$ \begin{equation}\label{linearizzata} F(x-v)\simeq \mathbb E[Z]\mathbb E[K]\mathbb E\left[ F(\hat\Omega-\Omega+x)\right] \end{equation} (we have dropped the subscripts implying an average over all variables in the argument). This linear (integral) equation has a solution of the form $F_v(z)=\e^{\theta z}$, with $\theta >0$ as required by the increasing character of distribution functions. Indeed, plugging this solution into the linearized equation we have \begin{equation} v(\theta)\simeq -\frac{\ln\left(\mathbb E[Z]\mathbb E[K]\mathbb E\left[\exp(\theta\hat\Omega-\theta\Omega)\right]\right)}{\theta}. \end{equation} The choice of the appropriate $\theta$ to estimate the drift velocity is, at this point, not obvious. It can be shown \cite{Kingman1975,Biggins1977,Brunet1997,Majumdar2000,Ebert2000} that the relevant value $\theta^*$ is the one that corresponds to the \textit{maximum} velocity, i.e., $\theta^*=\arg\sup_{\theta>0} v(\theta)$, and therefore \begin{subequations} \begin{align}\label{velocitas} v=&-\inf_{\theta>0}\frac{\ln\left(\mathbb E[Z]\mathbb E[K]\mathbb E\left[\exp(\theta\hat\Omega-\theta\Omega)\right]\right)}{\theta}\\ =&-\inf_{\theta>0}\frac{\ln\left[\mathcal I(\theta)\mathcal I(1-\theta)\right]}{\theta} \end{align} \end{subequations} where \begin{equation} \mathcal I(\theta)\coloneqq\sqrt{ck}\int_\Gamma \hat p^{\theta}(w)p^{1-\theta}(w){\rm d} w. \end{equation} If $v=v(\theta^*)>0$, then the distribution drifts towards $+\infty$ and we are in a full recovery phase ($\hat H\to +\infty$). We postulate that the marginal condition $v(\theta^*)=0$, i.e., $\ln\left[\mathcal I(\theta^*)\mathcal I(1-\theta^*)\right]=0$, corresponds to the transition point. Since $\ln(\mathcal I(\theta)\mathcal I(1-\theta))$ is a convex function symmetric around $\theta=\sfrac{1}{2}$, one has $v = 0$ when $\mathcal I(\sfrac{1}{2}) = 1$.
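The variational formula \eqref{velocitas} and the marginal condition $\mathcal I(\sfrac{1}{2})=1$ can be checked numerically. In the sketch below the weight densities are an illustrative assumption (planted weights exponential with rate $\lambda$, non-planted weights uniform on $[0,c]$): we solve $\mathcal I(\sfrac{1}{2})=1$ for $\lambda$ by bisection and then verify that $\sup_\theta v(\theta)$ vanishes at $\theta^*=\sfrac{1}{2}$.

```python
import numpy as np

# Illustrative assumption for this sketch: planted weights Exp(lam) and
# non-planted weights uniform on [0, c]; other density pairs would do as well.
c, k = 3.0, 1.0

w = np.linspace(1e-9, c, 200_001)
dw = w[1] - w[0]

def I(theta, lam):
    """I(theta) = sqrt(ck) * int_0^c p_hat^theta * p^(1-theta) dw (trapezoid)."""
    f = (lam * np.exp(-lam * w)) ** theta * (1.0 / c) ** (1.0 - theta)
    return np.sqrt(c * k) * (f.sum() - 0.5 * (f[0] + f[-1])) * dw

# Solve the marginal condition I(1/2) = 1 for lam
# (I(1/2, lam) is decreasing on the bracket below).
lo, hi = 1.0, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if I(0.5, mid) > 1.0:
        lo = mid
    else:
        hi = mid
lam = 0.5 * (lo + hi)

# Check that sup_theta v(theta) = 0 is attained at theta* = 1/2 at this lam.
thetas = np.linspace(0.05, 0.95, 181)
v = np.array([-np.log(I(t, lam) * I(1.0 - t, lam)) / t for t in thetas])
theta_star = thetas[v.argmax()]
print(round(lam, 3), round(float(theta_star), 3), float(v.max()))
```

At the $\lambda$ so obtained, the computed velocity maximum is zero within numerical accuracy, consistent with the marginal condition marking the transition.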
This condition can be written as \begin{equation}\label{condizione2} \frac{{\rm D}_{\sfrac{1}{2}}(\hat p\|p)}{\ln(ck)}=1, \end{equation} where \begin{equation} {\rm D}_{\alpha}(p\|q)\coloneqq \frac{1}{\alpha-1}\ln\int p^\alpha(x)q^{1-\alpha}(x){\rm d} x, \end{equation} is the R\'enyi divergence of order $\alpha$. In an equivalent form, Eq.~\eqref{condizione2} can be written as \begin{equation}\label{condizione} \int_\Gamma\sqrt{\hat p(w)p(w)}{\rm d} w=\frac{1}{\sqrt{c k}}. \end{equation} The condition above generalizes the one obtained for the planted matching problem \cite{Semerjian2020}, which is recovered for $k=1$. As a final comment, observe that, since by construction $\mathbb E[\varrho(\hat {\mathrsfso F}_k^{(\rm s)},{\mathrsfso F}^*_k)]\leq \mathbb E[\varrho(\hat {\mathrsfso F}_k^{(\rm b)},{\mathrsfso F}^*_k)]$, full recovery by means of the block MAP implies full recovery by means of the symbol MAP, and the partial recovery interval obtained using the symbol MAP is contained in the partial recovery interval of the block MAP. \begin{figure}{\begin{center} \includegraphics[width=0.5\columnwidth]{PhaseD.pdf}\end{center}} \caption{Phase diagram of the planted $k$-factor with exponential planted weights on sparse graphs. Our argument in Section~\ref{sec:criterion} predicts a partial recovery (PR) phase (in blue) and a full recovery phase, corresponding to the remaining portion of the plane, depending on the values of $\lambda$ and $c$. Within the full recovery phase, the red area corresponds to the set of parameters for which full recovery is possible by means of pruning, i.e., $\hat q=q=0$. The red dots have been obtained from the numerical resolution of the RDEs \eqref{cav1} for the $2$-factor by a population dynamics algorithm with $\mathcal N=10^7$. The black dots correspond instead to the $k=1$ case and are taken from Ref.~\cite{Semerjian2020}.
\label{fig:phase}} \end{figure} \section{Examples of results}\label{sec:app} In this Section, we consider some special formulations of the planted $k$-factor problem, and we compare our numerical results with the theoretical predictions obtained from the general criterion given in Section~\ref{sec:criterion}. The numerical results are obtained by studying the RDEs \eqref{rdebeta} for the problem by means of a population dynamics algorithm for $\beta=1$ (symbol MAP) and $\beta\to+\infty$ (block MAP). We also implemented a BP algorithm for the solution of the problem on actual graphs by means of the algorithm in Eq.~\eqref{cavbeta}. In particular, a random weighted graph with $N$ vertices is generated according to the ensemble introduced in Section~\ref{sec:definitions}. This graph is first subjected to the pruning procedures described in Section~\ref{sec:pruning} and Section~\ref{sec:pruning2}. Given an edge $e=(i,j)$ of the resulting graph, we associate two fields $h_{i\to e}$ and $h_{j\to e}$ to it, initialized to random values. The fields are updated using Eq.~\eqref{cavbeta} if $\beta$ is finite or using Eqs.~\eqref{cav1} if $\beta=+\infty$ for a large number of iterations. A candidate solution $\hat{\mathrsfso F}_k$ is then selected using the criterion in Eq.~\eqref{eq_inclusion_rule}. In all cases, we stopped the algorithm after $5N$ updates, or earlier if the set $\hat{\mathrsfso F}_k$ did not change for at least $50$ iterations. The error $\varrho$ is then obtained using Eq.~\eqref{eq_def_rho}, and the average error $\mathbb E[\varrho]$ is estimated by considering a large number of independent instances of the problem. The BP algorithm for the estimation of the block MAP given in Eq.~\eqref{cav1} coincides with the BP algorithm for the minimum weight $k$-factor introduced by Bayati and coworkers \cite{Bayati2011}.
They proved therein that the algorithm converges in polynomial time to the correct minimizer as long as there are no fractional solutions, i.e., solutions with non-integer values of $m_e$. A worst-case analysis of the convergence properties of the BP algorithm with $\beta=1$, on the other hand, is still missing. Observe that, for $c\gg k$, the BP algorithm has a longer running time at finite $\beta$ than in the $\beta\to+\infty$ case. In this case, indeed, given a node with valence $\kappa$, each step in Eq.~\eqref{cavbeta} requires the sum of $O(c^\kappa)$ contributions. Eq.~\eqref{cav1}, instead, only requires the $\kappa$-th incoming field, an operation that takes $O(c)$ steps. On complete graphs, having $c=N-k-1$, this means that the algorithm running time is increased by a factor $N^{k-1}$ in the finite $\beta$ case with respect to the $\beta=+\infty$ case. \subsection{The fully-connected case} \label{sec:fully_connected} Dense models are recovered in our setting by considering the $c\to+\infty$ limit, to be taken after the $N\to+\infty$ limit. Assuming that $\mu\hat\mu_k\neq 0$ (the problem is otherwise trivial), this implies $\gamma\to+\infty$ and therefore $\hat q=q=1$ because of Eqs.~\eqref{eqQQ}, meaning that in the thermodynamic limit there are (almost surely) no hard fields. Eq.~\eqref{condizione2} also implies that, if $\lim_{c\to+\infty}{\rm D}_{\sfrac{1}{2}}(\hat p\|p)<+\infty$, no transition can take place in the fully connected limit. To get nontrivial results in this limit, it is therefore necessary to scale the weights with $c$, so that at the transition \begin{equation}\label{condizione4} {\rm D}_{\sfrac{1}{2}}(\hat p\|p)=\ln c+o(\ln c)\quad\text{for}\quad c\gg 1. \end{equation} Suppose, for example, that $p$ is $c$-independent, and $\hat p(w)\equiv c^{-a}f(wc^{-a};b)$ with $a>0$ and $b$ parameters, and some function $f$ such that $f_0(b)\coloneqq\lim_{x\to 0}f(x;b)\in(0,+\infty)$.
Then the condition \eqref{condizione4} implies that the threshold for $c\to+\infty$ is at \begin{equation} a=1. \end{equation} If instead $\hat p(w)\equiv c^{-1}f(wc^{-1};b)$, then the asymptotic formula \eqref{condizione4} is not sufficient anymore and Eq.~\eqref{condizione} must be considered. The threshold condition becomes \begin{equation} \sqrt{f_0(b)}\int\sqrt{p(w)}{\rm d} w=\frac{1}{\sqrt{k}}. \end{equation} The observations above are compatible with the rigorous results obtained by Bagaria and coworkers for the planted $2$-factor problem on complete graphs of $N$ vertices for $N\to+\infty$ \cite{Bagaria2018}. In their paper, they prove that, for $k=2$, at the threshold the following limit holds \begin{equation}\label{eq:Bagaria} \liminf_{N\to +\infty}\frac{{\rm D}_{\sfrac{1}{2}}(\hat p\|p)}{\ln N}=1 \end{equation} under some assumptions on the distributions $p$ and $\hat p$ (fulfilled, e.g., by Gaussian or exponential distributions, see Ref.~\cite{Bagaria2018} for details). This condition corresponds to Eq.~\eqref{condizione4}, observing that on a complete graph $c=N-k-1$. As we will show below, however, Eq.~\eqref{eq:Bagaria} can be insufficient to recover the transition point. \subsection{The exponential model}\label{sec:exp2} Let us now briefly revisit the exponential case discussed in Section~\ref{sec:Exp}. If we apply Eq.~\eqref{condizione} to derive the block MAP threshold, we obtain \begin{equation} 2-2\e^{-\frac{c\lambda}{2}}=\sqrt{\frac{\lambda}{k}}. \end{equation} Introducing $s=ck$ and $t=\lambda k^{-1}$, one parameter can be absorbed, obtaining \begin{equation} 2-2\e^{-\frac{st}{2}}=\sqrt{t}. \end{equation} This equation is always solved by $t=\lambda=0$. For $s=ck\gtrsim 1.2277\dots$, two additional solutions for $t$, and therefore for $\lambda$, appear; we call them $\lambda^\star_-$ and $\lambda^\star_+$, and they delimit the partial recovery phase, see Fig.~\ref{fig:phase}.
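The appearance of the two extra roots at $s=ck\simeq 1.2277$ can be reproduced with a few lines of numerics: for fixed $s$, the function $g(t)=2-2\e^{-st/2}-\sqrt{t}$ is evaluated on a grid, and the critical $s$ is located by bisection as the point where $\max_t g$ first touches zero (a tangency condition).

```python
import math

def g(t, s):
    """g(t) = 2 - 2 exp(-s t / 2) - sqrt(t); its positive roots are the extra solutions."""
    return 2.0 - 2.0 * math.exp(-s * t / 2.0) - math.sqrt(t)

def max_g(s, n=8_000):
    # any solution with t > 0 needs sqrt(t) < 2, so it suffices to search t in (0, 4)
    return max(g(4.0 * i / n, s) for i in range(1, n))

# For s below a critical value s_c the maximum of g on (0, 4) is negative and
# t = 0 is the only root; the two extra roots appear when the maximum touches 0.
# Since g is increasing in s, max_g is monotone and bisection applies.
lo, hi = 0.5, 3.0
for _ in range(50):
    mid = 0.5 * (lo + hi)
    if max_g(mid) > 0.0:
        hi = mid
    else:
        lo = mid
s_c = 0.5 * (lo + hi)
print(round(s_c, 4))  # close to the value 1.2277 quoted in the text
```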
In the $c\to +\infty$ limit, $\lambda^\star_-\simeq\sfrac{1}{c}\to 0$ and only one transition point is found, \begin{equation} \lambda^\star_+=4k. \end{equation} This result is confirmed by the numerics, see Fig.~\ref{fig:lambdap}. For $k=1$ we recover the known result $\lambda^\star_+=4$, rigorously proved in \cite{Moharrami2019}. The criterion predicts that $4k-\lambda^\star_+$ approaches zero as $\e^{-ck}$ for $ck\to+\infty$. Observe that in this case, considering $c=N-k-1$, ${\rm D}_{\sfrac{1}{2}}(\hat p\|p)=\ln N+O(1)$ for all values of $k$ and $\lambda>0$. In other words, Eq.~\eqref{eq:Bagaria} of \cite{Bagaria2018}, although verified at the transition, is not enough to recover the threshold. \subsection{Hidden Hamiltonian cycle recovery}\label{sec:HiddenHam} In this section we are interested in solving a special type of planted $2$-factor problem, namely the hidden Hamiltonian cycle recovery (HC) problem. This is a planted $2$-factor problem in which the hidden $2$-factor is connected, i.e., it is a Hamiltonian cycle of the graph. The very same BP algorithms discussed for the planted $2$-factor can be applied to recover the hidden Hamiltonian cycle. This problem was studied in Ref.~\cite{Bagaria2018}, with the planted weights assumed to be normal variables, $\hat p=\mathzapf N(\lambda,1)$, whereas the non-planted weights have distribution $p=\mathzapf N(0,1)$. In this case, therefore, ${\rm D}_{\sfrac{1}{2}}(\hat p\|p)=\frac{1}{4}\lambda^2$. Applying Eq.~\eqref{eq:Bagaria}, a nontrivial transition in the block MAP estimator is expected at $\lambda^2=4\ln N+o(\ln N)$. Parametrizing $\hat p$ with $\lambda^2=\hat\lambda^2\ln N$, the transition is then at $\hat\lambda=2$. This is rigorously proved and numerically verified in Ref.~\cite{Bagaria2018}, see Fig.~\ref{fig:phaseham}.
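The Gaussian Rényi divergence quoted above is easy to check numerically; the following sanity check (not from Ref.~\cite{Bagaria2018}, just a direct quadrature) confirms ${\rm D}_{\sfrac{1}{2}}(\mathzapf N(\lambda,1)\|\mathzapf N(0,1))=\lambda^2/4$.

```python
import math

def renyi_half_gauss(lam, n=60_000, half_width=30.0):
    """D_{1/2}(N(lam,1) || N(0,1)) via the Bhattacharyya coefficient
    BC = int sqrt(p_hat(x) p(x)) dx, using D_{1/2} = -2 ln BC
    (the alpha = 1/2 case of the Renyi divergence definition)."""
    dx = 2.0 * half_width / n
    bc = 0.0
    for i in range(n + 1):
        x = -half_width + i * dx
        p_hat = math.exp(-0.5 * (x - lam) ** 2) / math.sqrt(2.0 * math.pi)
        p = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
        weight = 0.5 if i in (0, n) else 1.0  # trapezoidal rule endpoints
        bc += weight * math.sqrt(p_hat * p) * dx
    return -2.0 * math.log(bc)

for lam in (1.0, 2.0, 3.0):
    print(lam, round(renyi_half_gauss(lam), 6), lam * lam / 4.0)
```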
In Fig.~\ref{fig:phaseham} we also plot, for the block MAP case, the probability that the estimator $\hat{\mathrsfso F}_2$ provided by the BP algorithm is actually connected, i.e., that it is a single Hamiltonian cycle. Our numerics suggest that, for $\hat\lambda>2$, this probability goes to $1$ as $N\to+\infty$. \begin{figure} { \begin{center} \includegraphics[width=0.7\columnwidth]{rho_hamil.pdf} \end{center}} \caption{Recovery using the block MAP and the symbol MAP in the HC problem with $\hat p=\mathzapf N(\hat\lambda\sqrt{\ln N},1)$ and $p=\mathzapf N(0,1)$. The dots are obtained by running BP on $10^3$ instances of complete graphs with $N=100$ with a hidden Hamiltonian cycle to be recovered. The black lines correspond to the PD prediction for the planted $2$-factor with $c=100$, with the same planted and non-planted weight distributions. The vertical line indicates the transition point between partial and full recovery predicted by the theory for the block MAP. In gray, we plot the probability that the block MAP estimator $\hat{\mathrsfso F}^{(\rm b)}_{k=2}$ is a Hamiltonian cycle (instead of a union of two or more cycles) for different sizes of the problem.\label{fig:phaseham}} \end{figure} As discussed in Section~\ref{sec:definitions}, however, the block MAP is not the estimator that minimizes the error in Eq.~\eqref{eq_def_rho}. If we aim at minimizing the error $\varrho$, then the optimal estimator is the symbol MAP. We recall that this estimator does not provide a $k$-factor in general. Running our BP algorithm on graphs with hidden Hamiltonian cycles in the ensemble considered in \cite{Bagaria2018}, we obtained the results in Fig.~\ref{fig:phaseham}. The average error obtained using the symbol MAP is found to be smaller than the one obtained using the block MAP, as expected. On the other hand, finding the symbol MAP is computationally more expensive, as discussed above.
For comparison, in Fig.~\ref{fig:phaseham} we also plot the PD results for the $2$-factor, finding good agreement between the infinite-size prediction of the $2$-factor problem and the BP results of the HC problem also in the partial recovery phase. \medskip \section{Perspectives} \label{sec:conclusions} The transition appearing in the planted $k$-factor problem is of the same type as found in the planted matching problem \cite{Moharrami2019,Semerjian2020,Ding2021} and separates a partial recovery phase from a full recovery phase. Using heuristic arguments based on the literature on front propagation for reaction-diffusion equations, we have been able to obtain a simple and explicit criterion for the transition. We numerically tested the transition criterion, and we checked its consistency with the known results on the recovery thresholds of the planted $2$-factor problem. A rigorous proof of this transition criterion remains, however, an open problem. The heuristic argument is based on the fact (numerically observed) that the phase transition is continuous. It is not excluded a priori that first-order transitions are possible for some nontrivial choice of the weight distributions or degree distributions of the graph, allowing the presence of multiple BP fixed points \cite{Bordenave2013}. Finally, the threshold criterion obtained in the paper concerns the block MAP and only provides a bound for full recovery by means of the symbol MAP. In the numerically investigated cases, the recovery thresholds of the symbol MAP are observed to be very close to the ones of the block MAP. A formula for the exact location of the symbol MAP transition (and possibly its relation with the block MAP transition) is, however, still missing.
\subsection*{Acknowledgments} The authors acknowledge collaboration with Guilhem Semerjian on the work \cite{Semerjian2020}, which was the source of the key theoretical ideas underlying the analysis in the present paper, and thank him for many insightful discussions on the problem. This project has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Sk\l{}odowska-Curie grant agreement CoSP No 823748, and from the French Agence Nationale de la Recherche under grant ANR-17-CE23-0023-01 PAIL. \section*{References}
\section{Introduction} DNA, or deoxyribonucleic acid, is a macromolecule with a diameter of about 22-26 {\AA} \cite{JMB81}. In aqueous medium, especially in the presence of cations, the acid groups dissociate to generate a macromolecular polyanion with a persistence length of at least $\sim$ 500 {\AA} \cite{PNAS-94-6185-1997}. The counterions, provided by dissolved inorganic or organic salts (buffers), screen the negative charge and stabilize the DNA as a dispersion in aqueous medium. As expected for dilute solutions, and as confirmed by simulation studies, the DNA molecules are dispersed as gyration spheroids with a mean radius increasing with decreasing counterion concentration \cite{JPCB-109-10458-2005,JMB-267-299-1997,Biopolymers-31-1471-1991}. However, it is well known that (a) in biologically meaningful conditions, such as in the cellular environment, DNA molecules exist in a confined space of micrometer diameter, crowded with counterions and smaller molecules \cite{Biochimie-90-1040-2008}, and (b) in response to screened Coulomb forces and/or short-range forces (the most well-known being the `depletion force' \cite{JPolySc-33-183-1958}) DNA molecules may organize into liquid crystalline or crystalline phases in the extra-cellular bulk \cite{Livolant}. The response of DNA to this crowding is also influenced by the confining geometry \cite{NanoLett-11-5047-2011,JCP-128-225109-2008}. In a previous work we have shown that a DNA-PEG-NaCl system, which forms a monophasic aqueous system in bulk, phase-segregates spontaneously when confined within a droplet of micrometer length scale, with the DNA molecules forming a layer at the boundary and leaving the interior of the droplet severely depleted. It was also found that this outer layer of DNA molecules is birefringent, showing molecular self-organization into an ordered structure.
Most importantly, this segregation and ordering were clearly correlated with confinement, becoming slower with increasing droplet diameter, indicating a critical size above which the process would be essentially non-existent \cite{CPL-539-157-2012}. However, the evolution of the ordering of DNA molecules in the segregated layer is essentially a nanometer-scale phenomenon \cite{JCP-128-225109-2008}, and the above study, which concentrated on the droplet as a whole and thus probed micrometer length scales, was unable to resolve this level of self-organization. This is important for several reasons. First, both simple and complex liquids, when confined to nanometer length scales, especially at an interface, can spontaneously form one-dimensionally and two-dimensionally \cite{PRL-82-2326-1999,PRE-63-021205-2001,PRL-96-096107-2006,Macro-33-3478-2000} ordered structures, and similar structures in DNA under confinement may provide spatially organized sites for molecular and supra-molecular bonds. Secondly, the height-height correlation at the surface or interface of such structures may be either `solid-like' or `liquid-like', and this has a bearing on their mechanical properties. Also, one-dimensionally nano-confined films act as mimics of biologically relevant immobile structures that provide steric constraints as well as a confined environment. Unfortunately, the nano-confined liquids studied so far are mostly neutral \cite{PRL-83-564-1999}. In particular, to our knowledge, no such study has been carried out on polyelectrolytes or charged macromolecules, although such studies are specifically important for understanding biologically interesting layered structures. In the present study, we have tried to address the above issues by studying the in-plane and out-of-plane structure and morphology of nano-confined films of DNA in the pristine state and in the presence of counterions provided by buffer molecules.
We have used Atomic Force Microscopy for studying the surface morphology, X-ray reflectivity to extract the electron density profile (EDP) along the film depth, and X-ray diffuse scattering to find the height-height correlation on the film surface, which decides whether the films are `solid-like' or `liquid-like'. The spin-coated films in our studies may serve as planar models to understand the nanoscale self-organization in the cellular space. The questions we are trying to answer are: (a) Do nano-confined DNA molecules form layers spontaneously? (b) What role do the counterions play in this layering? (c) Are the films `solid-like' or `liquid-like' in their height-height correlations? \section{Experimental Details} Polymerized calf thymus DNA (Sisco Research Laboratory, India) in triple distilled water formed the pristine stock solution. The absorbance ratio A$_{260}$/A$_{280}$ of the solution at 260 nm and 280 nm was in the range 1.8 $<$ A$_{260}$/A$_{280}$ $<$ 1.9, indicating that no further deproteinization of the solution was necessary. The concentration of the stock solution in terms of nucleotide, assuming $\varepsilon_{260}$ = 6600 M$^{-1}$cm$^{-1}$, was found to be 1.8 mM. The stock solution was diluted to the desired concentration of 800 $\mu$M in triple distilled water and was used to prepare `pristine' films. A 10 mM solution of sodium cacodylate (Merck, Germany) in triple distilled water was adjusted to the desired pH of 6.7 with hydrochloric acid and was used as the buffer. This solution was used to prepare the `buffered' films with the 800 $\mu$M solution of DNA. The buffer concentration was maintained well below the critical monovalent counterion concentration of $\sim$ 500 mM, a condition which is required for complete neutralization of DNA \cite{PNAS-94-6185-1997}.
Also, the use of a high salt concentration (500 mM) leaves excess salt crystals on the film surface and consequently prevents us from probing the layered structures \cite{AIP-1447-189-2012}. Films were prepared by spin-coating the solution at 4000 rpm on amorphous fused quartz substrates at ambient conditions using a spin-coater (Headway Research Inc., USA). Before spin-coating, the fused quartz (Alfa Aesar, USA) substrates were cleaned and hydrophilized by boiling in 5:1:1 H$_2$O:H$_2$O$_2$:NH$_4$OH solution for 10 minutes, followed by sonication in acetone and ethanol respectively. The substrates were then rinsed with Millipore water (resistivity $\sim$18.2 M$\Omega$ cm) and subsequently the water was removed by spinning the substrate at high speed (4000 rpm). Henceforth, for the sake of convenience, films obtained from pristine and buffer-added solutions will be called `pristine' and `buffered', respectively. For extracting out-of-plane information we have recorded specular X-ray reflectivity (XRR) profiles of these thin films. This is a well-established technique for investigating layered systems. In XRR we measure $\vec{q}$, which is the difference in momentum between the incident and scattered beams. In the specular condition, momentum transfer occurs only in the out-of-plane (Z) direction ($\vec{q} \equiv (q_x=0, q_y=0, q_z=(4\pi/\lambda)sin\theta_i\neq 0$), $\theta_i$ = angle of incidence), i.e., perpendicular to the film plane, which is also the confining direction. Since $\vec{q}$ depends both on the incidence angle and on the wavelength of the X-ray beam, XRR data can be recorded in angle-dispersive or energy-dispersive mode, respectively \cite{JPhysD-39-R461-2006}.
We have followed the angle-dispersive mode and recorded the intensity of the reflected X-ray beams, varying the angle of incidence with an angular step size of 5 millidegrees, at the Indian Beamline (BL-18B) at Photon Factory, High Energy Accelerator Research Organization (KEK), Japan, using X-rays of wavelength ($\lambda$) 1.08421 {\AA}. This reflected intensity is the resultant of the interference of X-rays reflected from the different interfaces of the film and thus contains information about those interfaces. On the other hand, in transverse diffuse scattering, we measure the off-specular intensity of the scattered beam to obtain in-plane information \cite{RepProgPhys-63-1725-2000}. Here also we measure the intensity by varying the angle of incidence, i.e., by rocking the sample, but keeping the detector fixed. In this geometry $q_x$, $q_y$, and $q_z$ are all nonzero. Since the X-ray beam was incident along the Y direction and the source slit dimensions were 0.1 mm and 2 mm in the vertical and horizontal directions, respectively, i.e., quite large in the out-of-scattering-plane (X) direction, we can safely assume that this scattering geometry effectively integrates out the intensity along the $q_x$ direction, leaving the intensity as a function of $q_y$ only. Since off-specular scattering is dominated by in-plane scattering from the sample surface, this beam carries information about the surface height distribution. We recorded transverse diffuse scattering data with a step size of 2 millidegrees for three different angular positions of the detector, and thus in-plane information at three different depths of the sample was collected. The sample was kept in a nitrogen atmosphere to avoid radiation damage. Atomic Force Microscope (AFM) images were recorded in tapping mode using a Nanonics MultiView1000 with Au-coated cantilevered glass AFM probes of tip diameter $\sim$ 20 nm. The scan size was chosen to be 5$\mu$m $\times$ 5$\mu$m to probe the long-range characteristics. The images were analyzed using the WSxM software \cite{WSxM}.
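As a small numerical illustration of the specular geometry described above (the numbers are for orientation only): with the wavelength used here, a layering period $d$ produces a Bragg-like feature at $q_z=2\pi/d$, which for a $\sim$26 {\AA} period falls at an incidence angle of about 1.2 degrees.

```python
import math

lam = 1.08421  # X-ray wavelength in angstroms, as quoted above

def qz(theta_deg):
    """Specular momentum transfer q_z = (4 pi / lambda) sin(theta_i), in 1/angstrom."""
    return 4.0 * math.pi / lam * math.sin(math.radians(theta_deg))

# A layering period d shows up as a Bragg-like feature at q_z = 2 pi / d;
# d = 26 angstrom is used here as an illustrative value.
d = 26.0
q_peak = 2.0 * math.pi / d
# incidence angle at which that q_z is reached
theta_peak = math.degrees(math.asin(q_peak * lam / (4.0 * math.pi)))
print(round(q_peak, 4), round(theta_peak, 3))  # q in 1/angstrom, angle in degrees
```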
\section{Results and Discussions} \subsection{Layering and non-layering} X-ray reflectivity profiles are the outcome of the scattering of an electromagnetic wave by matter in the specular direction. They are analyzed by different formalisms that vary in the extent of approximation used to describe the perturbation of the electromagnetic wave by matter \cite{Daillant,PhysRep-257-223-1995}. We have used the Distorted Wave Born Approximation (DWBA) \cite{jkbasu}, which only requires an ansatz of the average electron density of the film and uses the exact (Maxwell's) wavefunctions of the beam scattered from the film to find out the electron densities of the different `layers' of the film. The thickness of the layers is decided by the spatial resolution along Z, which in turn is given by the maximum value of the momentum transfer $q_z$ up to which Kiessig (interference) fringes are observed. This technique is efficient for smaller variations and relatively unknown compositions, and has been used extensively for detecting layering in nano-confined liquids \cite{EPL96,PRB,Macro2}. The EDP shown in Figure 1b was obtained by further convolving the density profile with the air-film and film-substrate interfacial widths within the resolution limits, where these widths were also obtained from the DWBA. Let us first discuss the pristine film. The DWBA fit (Figure 1a) gives the thickness of this film as $\sim$ 78{\AA}, corresponding to three layers of DNA lying on their sides parallel to the substrate surface, on top of each other. The widths of the air-film and film-substrate interfaces, $\sigma_{af}$ and $\sigma_{fs}$, are 5.0{\AA} and 6.9{\AA}, respectively (Table 1). Since the $\sigma_{af}$ of bare quartz is $\sim$ 7{\AA}, this indicates the excellent flatness and smoothness of the film, underscoring its stability. The value of $\sigma_{af}$ is consistent with the value of the r.m.s. roughness ($\sigma_{rms}$) of the film surface, as obtained from AFM (Figure 1c and Table 1).
The electron density profile (EDP) extracted from this fit (Figure 1b) shows the formation of three distinct density oscillations characteristic of layering, with a periodicity of $\sim$ 26{\AA}, nearly the diameter of the DNA molecule, rather than the polymer radius of gyration \cite{PRB}. The order parameter for layering, $\delta = \rho_{max} - \rho_{min}$, where $\rho_{max}(\rho_{min})$ is the average maximum (minimum) electron density in the film EDP, comes out to be 0.032 e{\AA}$^{-3}$, and $\sigma_{int}$, i.e., the interfacial width between the layers, is negligible. This confirms the model of this film as a stack of three layers of pristine DNA molecules lying on their sides and aligning themselves parallel to the hydrophilic substrate, similar to short-chain polymers and simple liquids \cite{Langmuir-17-4021-2001}. Such confinement-induced layering of DNA molecules is also in accordance with previous reports of the enhancement of the asymmetric shape of DNA molecules confined in nanoslits \cite{Macro-45-2920-2012}. Let us now look at the buffered film. The DWBA fit in Figure 1a yields the EDP in Figure 1b. The values of $\sigma_{af}$ and $\sigma_{fs}$ are 9.0{\AA} and 8{\AA}, respectively. Though the film quality is still very good, this increase in interfacial widths, especially $\sigma_{af}$, is partially explained by the AFM topographical image in Figure 1d. This shows the formation of nanometer-sized islands that take the r.m.s. surface roughness to this higher value. The probable origin of these islands will be discussed in the next section. Nevertheless, the overall r.m.s. roughness of the film surface is $\sim$ 19 {\AA}, while the roughness between the islands is $\sim$ 9 {\AA}, as shown in Table 1.
The most interesting aspects of the EDP of the buffered film are its thickness, $\sim$ 52{\AA}, which corresponds to the thickness of a stack of two DNA molecules aligned parallel to the substrate, lying on their sides, and the absence of any density oscillation due to layering. This suggests (a) a reduction of the adhesion and/or cohesion of DNA, and (b) some form of intermolecular `diffusion' or `overlap' that reduces the order parameter $\delta$ to a near-zero value. \subsection{`Liquids' of rods with adjustable lengths} We have used the surface correlation function to characterize the in-plane morphology of the films. The correlation between the heights at two positions, i.e., between the positions of two scattering centers, determines the extent of interference of the beam undergoing in-plane scattering \cite{Tolan}. Following this principle we have extracted the height-difference correlation from the transverse diffuse scattering data. Under the DWBA the diffuse scattering cross section is given by \cite{PRB-38-2297-1988}, \begin{equation}\label{Eqn:DSC} \frac{d\sigma}{d\Omega} \sim \mid T(\vec{k_1})\mid^2 \mid T(\vec{k_2})\mid^2 S(\vec{q_t}) \end{equation} where $T(\vec{k_1})$ and $T(\vec{k_2})$ denote the transmission coefficients for the incident and outgoing wavevectors $\vec{k_1}$ and $\vec{k_2}$, respectively, and the structure factor $S(\vec{q_t})$ for the transmitted wavevector $\vec{q_t}$ is given by, \begin{equation}\label{Eqn:S} S(\vec{q_t})= \frac{exp\{-[(q_z^t)^2+(q_z^{t*})^2]\sigma^2/2\}}{\mid q_z^t \mid^2}\int\int_{S_0}dX dY (e^{\mid q_z^t\mid^2(\sigma^2-0.5g(X,Y))}-1)e^{-i(q_xX+q_yY)} \end{equation} Eq. \ref{Eqn:S} contains the height-difference correlation function $g(X,Y)$, which in polar coordinates, for an isotropic Gaussian rough surface, is given by, \begin{equation}\label{Eqn:Corr} g(r) = <[h(r_0+r)-h(r_0)]^2> \end{equation} where $h(r)$ denotes the height at any point $r$ and $<>$ denotes the ensemble average over all possible surface configurations.
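Eq. \ref{Eqn:Corr} can be illustrated directly on synthetic data: for a computer-generated rough profile (the parameters below are arbitrary, purely for illustration) the height-difference correlation is estimated by averaging squared height differences over the profile, and it grows with $r$ before saturating at $2\sigma^2$ once $r$ exceeds the correlation length of the profile.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1D rough profile: Gaussian noise smoothed by a Gaussian kernel.
# All parameter values are arbitrary and purely illustrative.
n, sigma_h, corr_pts = 100_000, 5.0, 50
noise = rng.normal(size=n)
kernel = np.exp(-0.5 * (np.arange(-200, 201) / corr_pts) ** 2)
h = np.convolve(noise, kernel, mode="same")
h *= sigma_h / h.std()          # fix the rms roughness to sigma_h

def g_est(shift):
    """g(r) = <[h(r0 + r) - h(r0)]^2>, estimated by averaging over r0."""
    d = h[shift:] - h[:-shift]
    return float((d * d).mean())

for shift in (1, 10, 100, 1000, 5000):
    print(shift, round(g_est(shift), 2))
# g grows with r and saturates near 2 * sigma_h**2 for r much larger than
# the correlation length of the profile
```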
Our transverse diffuse scattering data of the films, taken at different detector angles, i.e., probing different depths of the film, fit well with the `self-affine liquid' (SALiq) correlation function \cite{PRL-82-4675-1999}, \begin{equation}\label{Eqn:SALiq} g(r)=\{2\sigma^2+B[\gamma_E + ln(\frac{\kappa r}{2})]\}\{1-exp[-(r/\xi)^{2\alpha}]\} \end{equation} where $\sigma$= rms roughness, $B=k_BT/\pi\gamma$, ($\gamma$=interfacial tension, $T$=absolute temperature) $\gamma_E$=Euler constant, $\kappa$=lower cut-off wave-vector corresponding to mass/density fluctuation ($\kappa^2=g\triangle\rho/\gamma$, $g$=acceleration due to gravity, $\triangle\rho$=density fluctuation), $\xi$=correlation length, $\alpha$=Hurst exponent. These fits (Figure 2) show that both of these films have `liquid-like' correlations. Table 2 shows that the roughness $\sigma$ obtained from diffuse scattering at the three incident angles matches the $\sigma_{af}$ obtained from X-ray reflectivity and the $\sigma_{rms}$ obtained from AFM (Table 1), respectively. With increasing angle (2$\theta$) of the detector position, the correlation length $\xi$ gradually decreases, consistent with the fact that at higher angles the illuminated area decreases. The decrease in the value of the Hurst exponent ($\alpha$), indicating a more jagged surface, is consistent with the enhancement of roughness upon buffering. The enhanced value of the surface tension ($\gamma$) with increasing counterion concentration can also be attributed to the enhanced roughness of the buffered film \cite{ARMR-38-71-2008}. At the same time we observe that the lower cut-off wavevector ($\kappa$) decreases significantly from the pristine to the buffered film, associated with a decrease of $\triangle\rho$. The decrease in the value of $\triangle\rho$ in the buffer-added film, and its constant value at various detector angles, i.e.
at different depths of the film, implies a uniform distribution of molecules along the film depth, consistent with the absence of a layered structure, as obtained from the XRR data. Thus we observe that (a) pristine and buffered DNA can be spin-coated to form stable films that show `liquid-like' height-height correlations at larger length scales, (b) the pristine film consists of layers of DNA with the long axes of the molecules lying parallel to the substrate surface while the surface roughness is very small, and (c) a small buffer (counterion) concentration both destroys the layering and enhances the roughness. Based on this information we visualize this system as a `frozen' state of vibrating rigid rods. The question regarding the adhesion of the pristine film to the hydrophilic substrate has been answered by recent simulation results \cite{Macro-44-1707-2011} on polyelectrolyte film attachment to such substrates. In the same vein, we propose that the hydronium (H$_3$O$^+$) counterions to the phosphatic negative charges located on the DNA molecules provide attachment to the hydroxyl-terminated hydrophilic substrate through short-range interactions such as hydrogen bonds, over and above the long-range but weak, screened Coulomb attraction, and also that these forces align the DNA persistence-length `rods' parallel to the substrate surface. Within the film this lateral alignment is more favourable as, besides H-bonding, the self-avoidance is also enhanced by the reduced screening of the Coulomb interaction \cite{PRL-99-058302-2007}. Figures 3a and 3b show a schematic of the film structure. \section{Conclusions} We have observed that, within a nano-confined thin film under quasi-equilibrium conditions, DNA molecules rearrange themselves to form a confined charged liquid. In the pristine film they behave like long semi-flexible rods and form layers much like simple liquid molecules.
The addition of counterions neutralizes their charge and makes the rods shorter, which increases the orientational entropy and destroys the layered structure. The addition of counterions thus effectively reduces the `rod' length and makes the rods `soft' on a larger length scale. Our results show the importance of confinement for the molecular arrangement of DNA; hence the role of the confining space demands attention in other cell-mimicking systems. They also show that a minute tuning of the counterion concentration can change the arrangement of the DNA molecules, creating an opportunity for controlling the performance of nanofluidic devices \cite{AnalChem-80-2326-2008,PRL-94-196101-2005} in which DNA molecules are confined within nanoslits. \begin{acknowledgments} We would like to acknowledge the Heiwa-Nakajima Foundation, Japan, for providing financial support, and the Department of Science and Technology, Government of India, for sponsoring the Indian beamline project at the Photon Factory, KEK, Japan. Authors N. B. and S. C. thank the Council of Scientific and Industrial Research (CSIR), Government of India, and the Director, SINP, for their research fellowships. \end{acknowledgments}
\section{Introduction} We consider the linear fourth order Schr\"odinger equation in three spatial dimensions \begin{align*} i \psi_t = H \psi, \,\,\, \psi(0,x)= f(x), \,\,\, H:=\Delta^2+ V, \, \, \, x\in \mathbb R^3. \end{align*} Variants of this equation were introduced by Karpman \cite{K} and Karpman and Shagalov \cite{KS} to account for small fourth-order dispersion in the propagation of laser beams in a bulk medium with Kerr nonlinearity, and may be used to model other ``high dispersion'' models. Linear dispersive estimates have recently been studied \cite{fsy,GT4,FWY}; we continue this study to understand the structure and effect of zero energy resonances on the dynamics of the solution operator in the three dimensional case. Fourth order Schr\"odinger equations have been studied in various contexts. For example, the stability and instability of solitary waves in a non-linear fourth order equation were considered in \cite{LS}. Well-posedness and scattering problems for various nonlinear fourth order equations have been studied by many authors, see for example \cite{MXZ1, MXZ2,P, P1,CLB,CLB1}. We note that the time decay estimates we consider in this paper may be used in the study of special solutions to non-linear equations. In the free case, see \cite{BKS}, the solution operator $e^{-it \Delta^2 }$ in $d$ dimensions preserves the $L^2$ norm and satisfies the following dispersive estimate \begin{align*} \|e^{-it \Delta^2 } f \|_{L^{\infty}(\R^d)} \les |t|^{-\frac{d} 4} \|f\|_{L^1(\R^d)}. \end{align*} In this paper we study the dispersive estimates in three spatial dimensions when there are obstructions at zero, i.e. the distributional solutions to $H\psi =0$ with $\psi \in L^\infty(\mathbb R^3)$. We provide a full classification of the zero energy obstructions as a finite dimensional space of eigenfunctions along with a ten-dimensional space of two distinct types of zero-energy resonances, see Section~\ref{sec:classification}.
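For intuition (a numerical sketch of ours, not part of the paper's argument), the free decay rate can be observed in dimension one: $e^{-it\partial_x^4}f(0)=\frac{1}{2\pi}\int_{\R} e^{-itk^4}\hat f(k)\, dk$, and the degenerate stationary point of the phase at $k=0$ forces decay of order $t^{-1/4}$. The Gaussian profile $\hat f(k)=e^{-k^2}$ and the quadrature parameters below are illustrative choices.

```python
import numpy as np

def u_at_origin(t, dk=1e-5, kmax=6.0):
    # u(0,t) = (1/2pi) * int exp(-i t k^4) fhat(k) dk, with the
    # illustrative Gaussian profile fhat(k) = exp(-k^2), evaluated
    # by a simple Riemann sum (the integrand decays rapidly in k).
    k = np.arange(-kmax, kmax, dk)
    integrand = np.exp(-k**2 - 1j * t * k**4)
    return np.sum(integrand) * dk / (2 * np.pi)

# Multiplying t by 16 should roughly halve |u(0,t)|: 16^{-1/4} = 1/2.
ratio = abs(u_at_origin(800.0)) / abs(u_at_origin(50.0))
print(ratio)  # close to 0.5
```

Stationary phase gives $|u(0,t)|\approx c\, t^{-1/4}$ for large $t$, so the printed ratio should be near $16^{-1/4}=0.5$.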
As in the four dimensional case, \cite{GT4}, the zero energy obstructions in three dimensions have a more complicated structure than that of the Schr\"odinger operators $-\Delta+V$, \cite{JN,ES}. Let $P_{ac}(H)$ be the projection onto the absolutely continuous spectrum of $H$ and $V(x)$ be a real-valued, polynomially decaying potential. We prove dispersive bounds of the form $$ \|e^{-it H }P_{ac}(H) f \|_{L^{\infty}} \les |t|^{-\gamma} \|f\|_{L^1}, $$ or a variant with spatial weights, for each type of zero energy obstruction, where $\gamma$ depends on the type of resonance. Such estimates can be used to study the asymptotic stability of solitons for non-linear equations. We introduce some notation to state our main results. We let $\la \cdot \ra=(1+|\cdot|^2)^{\f12}$, and let $a-$ denote $a-\epsilon$ for a small, but fixed value of $\epsilon>0$. We define the polynomially weighted $L^p$ spaces, \begin{align*} L^{p,\sigma}(\mathbb R^3):=\{ f\, : \, \la \cdot \ra^{\sigma} f\in L^p(\mathbb R^3) \} . \end{align*} We provide a precise definition and characterization of resonances in Section~\ref{sec:classification} and Definition~\ref{def:restype} below. We characterize the resonances in terms of distributional solutions to $H\psi=0$. Heuristically, if $|\psi(x)| \sim 1$ as $|x|\to \infty$, we have a resonance of the first kind. If $|\psi(x)| \sim |x|^{-1}$ as $|x|\to \infty$ we have a resonance of the second kind, and if $|\psi(x)|\sim |x|^{-\f32-}$ we have a resonance of the third kind. The classification of the resonances for the fourth order Schr\"odinger equation requires a more detailed, subtle analysis than for the Schr\"odinger equation, since the lower order terms in the expansion of the Birman-Schwinger operator interact with each other; see the expansions of $M(\lambda)$ in Lemma~\ref{lem:M_exp}.
This causes complications in the classification of threshold obstructions which do not arise in the case of Schr\"odinger's equation or in the four dimensional case, see \eqref{F def}, \eqref{T2 def}, and Section~\ref{sec:classification}. Our main results are summarized in the theorem below. \begin{theorem}\label{thm:main} Let $V$ be a real-valued potential satisfying $|V(x)|\les \la x\ra^{-\beta}$, and assume that there are no embedded eigenvalues in $[0,\infty)$, except possibly at zero. Then, \begin{enumerate}[i)] \item If zero is regular and $\beta>5$, then $$ \| e^{-itH}P_{ac}(H)\|_{L^1\to L^\infty} \les |t|^{-\f34}. $$ \item If there is a resonance of the first kind at zero and $\beta>7$, then $$ \| e^{-itH}P_{ac}(H)\|_{L^1\to L^\infty} \les |t|^{-\f34}. $$ \item If there is a resonance of the second kind at zero and $\beta>11$, then $$ \| e^{-itH}P_{ac}(H)\|_{L^1\to L^\infty} \les \left\{ \begin{array}{ll} |t|^{-\f34} & |t|\leq 1 \\ |t|^{-\f14} & |t|>1 \end{array}\right. $$ Moreover, there is a time-dependent, finite-rank operator $F_t$ satisfying $\|F_t\|_{L^1\to L^\infty}\les \la t\ra^{-\f14}$ so that $$ \| e^{-itH}P_{ac}(H)-F_t\|_{L^1\to L^\infty} \les |t|^{-\f34}. $$ \item If there is a resonance of the third kind at zero and $\beta>15$, then $$ \| e^{-itH}P_{ac}(H)\|_{L^1\to L^\infty} \les \left\{ \begin{array}{ll} |t|^{-\f34} & |t|\leq 1 \\ |t|^{-\f14} & |t|>1 \end{array}\right. $$ Moreover, there is a time-dependent, finite-rank operator $G_t$ satisfying $\|G_t\|_{L^1\to L^\infty}\les \la t\ra^{-\f14}$ so that $$ \| e^{-itH}P_{ac}(H)-G_t\|_{L^1\to L^\infty} \les |t|^{-\f12}. $$ Furthermore, one can improve this time decay at the cost of spatial weights, $$ \| e^{-itH}P_{ac}(H)-G_t\|_{L^{1,\f52}\to L^{\infty,-\f52}} \les |t|^{-\f34}. $$ \end{enumerate} \end{theorem} As in the two-dimensional Schr\"odinger equation and the four-dimensional fourth order equation, we have a `mild' type of resonance which does not affect the natural $|t|^{-\frac{d}4}$ decay rate.
As in \cite{fsy,GT4,FWY}, we assume the absence of positive eigenvalues. Under this assumption, a limiting absorption principle for $H$ was established, see \cite[Theorem~2.23]{fsy}, which we use to control the large energy portion of the evolution; this necessitates the larger bound as $t\to 0$. The large energy is unaffected by the zero energy obstructions, and our main contribution is to control the small energy portion of the evolution in all possible cases, which we show is bounded for all time and decays for large $|t|$. In general, the $|t|^{-\frac{d}2}$ decay rate for the Schr\"odinger evolution is affected by zero energy obstructions. In particular, the time decay for large $|t|$ is slower if there are obstructions at zero, see for example \cite{JSS, Yaj, Sc2, goldE, eg2, EGG, GG1,GG2}. It is natural to expect zero energy resonances to affect the time decay of the fourth order operator as well. This has been studied only in dimensions $d>3$: by Feng, Wu and Yao, \cite{FWY}, when $d>4$, as an operator between weighted $L^2$ spaces, and by the second and third authors when $d=4$, \cite{GT4}. These works built on the work of Feng, Soffer and Yao in \cite{fsy}, which considered the case when zero is regular. This work in turn had its roots in Jensen and Kato's work \cite{JenKat}, and \cite{JSS} for $-\Delta+V$. The free linear fourth order Schr\"odinger equation was studied by Ben-Artzi, Koch, and Saut \cite{BKS}. They present sharp estimates on the derivatives of the kernel of the free operator (including $ \Delta^2 \pm \Delta $). This followed work of Ben-Artzi and Nemirovsky, which considered rather general operators of the form $f(-\Delta)+V$ on weighted $L^2$ spaces. Further generalized Schr\"odinger operators of the form $(-\Delta)^{m} + V$ were studied in \cite{DDY}, \cite{soffernew}. See also the work of Agmon \cite{agmon} and Murata \cite{Mur,Mur1,Mur2}.
In particular, Murata's results for operators of the form $P(D)+V$ do not hold for the fourth order operator due to the degeneracy of $P(D)= \Delta^2$ at zero. There are few works considering the perturbed linear fourth order Schr\"odinger equation beyond the recent works referenced above. There have been studies of special solutions for nonlinear equations, see for example \cite{Lev,P,P1,MXZ1,MXZ2,Dinh}. See \cite{Lev1,LS} for a study of decay estimates for the fourth order wave equation. Our results follow from careful expansions of the resolvent operators $(H-z)^{-1}$. We develop these expansions as perturbations of the free resolvent, for which, by using the second resolvent identity (see also \cite{fsy}), we have the following representation: \begin{align} \label{RH_0 rep} R (H_0; z):=( \Delta^2 - z)^{-1} = \frac{1}{2z^{\f12}} \Big( R _0(z^{\f12}) - R_0 (-z^{\f12}) \Big), \quad z\in \mathbb C\setminus[0,\infty). \end{align} Here $H_0=(-\Delta)^2$ and $R_0$ is the Schr\"odinger resolvent $R _0(z^{\f12}):=(-\Delta-z^{\f12})^{-1} $. Since $H_0$ is essentially self-adjoint and $\sigma_{ac}(\Delta^2)= [0,\infty)$, by Weyl's criterion $\sigma_{ess}(H) = [0,\infty)$ for a sufficiently decaying potential. For $\lambda \in \R^{+}$, we define the limiting resolvent operators by \begin{align} &\label{RH_0 def}R^{\pm}(H_0; \lambda ) := R^{\pm}(H_0; \lambda \pm i0 )= \lim_{\epsilon \to 0^+}(\Delta^2 - ( \lambda \pm i\epsilon))^{-1}, \\ & \label{Rv_0 def} R_V^{\pm}(\lambda ) := R_V^{\pm}(\lambda \pm i0 )= \lim_{\epsilon \to 0^+}(H - ( \lambda \pm i\epsilon))^{-1}.
\end{align} Note that using the representation \eqref{RH_0 rep} for $R (H_0;z)$ in definition \eqref{RH_0 def} with $z=w^4$ for $w$ in the first quadrant of the complex plane, and taking limits as $w\to \lambda$ and $w\to i\lambda$ in the first quadrant, we obtain \be\label{eq:4th resolvdef} R^{\pm}(H_0; \lambda^4)= \frac{1}{2 \lambda^2} \Big( R^{\pm}_0(\lambda^2) - R_0(-\lambda^2) \Big),\,\,\,\lambda>0. \ee Note that $R_0(-\lambda^2 ): L^2 \rightarrow L^2$ is bounded since $-\Delta$ has nonnegative spectrum. Further, by Agmon's limiting absorption principle, \cite{agmon}, $R^{\pm}_0(\lambda^2)$ is well-defined between weighted $L^2$ spaces. Therefore, $R^{\pm}(H_0; \lambda^4)$ is also well-defined between these weighted spaces. This property is extended to $R_V^{\pm}(\lambda )$ in \cite{fsy}. As usual, we use functional calculus and Stone's formula to write \begin{align} \label{stone} \ e^{-itH} P_{ac}(H) f(x) = \frac1{2\pi i} \int_0^{\infty} e^{-it\lambda} [R_V^+(\lambda)-R_V^{-}(\lambda)] f(x) d\lambda. \end{align} Here the difference of the perturbed resolvents provides the spectral measure. Our analysis in the three-dimensional case differs from the four-dimensional case and previous works on the Schr\"odinger operator in several ways. First, the behavior of the free resolvents in \eqref{eq:4th resolvdef} provides technical challenges in which various lower order terms in the expansions interact. These interactions complicate the inversion process, as the operators whose kernels we study and need to invert are now differences of distinct operators in the resolvent expansions, see \eqref{F def} and \eqref{T2 def} below. Such difficulties are new to this case; in the analysis of the Schr\"odinger resolvents, see \cite{JN}, one can iterate the expansion procedure by examining the kernel of a single operator at each step. The techniques developed here may also be of use in dimensions $d=1,2$ or for other high dispersion equations.
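On the Fourier side, the identity \eqref{eq:4th resolvdef} is just the partial fraction decomposition of the symbol, $\frac{1}{\xi^4-\lambda^4}=\frac{1}{2\lambda^2}\big(\frac{1}{\xi^2-\lambda^2}-\frac{1}{\xi^2+\lambda^2}\big)$. A quick symbolic verification (an illustrative check of ours, not part of the paper):

```python
import sympy as sp

xi, lam = sp.symbols('xi lambda', positive=True)
# Symbol-level version of the resolvent splitting: the fourth order
# resolvent written as a difference of two second order resolvents.
lhs = 1 / (xi**4 - lam**4)
rhs = (1 / (2 * lam**2)) * (1 / (xi**2 - lam**2) - 1 / (xi**2 + lam**2))
print(sp.simplify(lhs - rhs))  # 0
```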
Furthermore, the difference between the `+' and `-' resolvents in Stone's formula, \eqref{stone}, which is crucial for the Schr\"odinger operators and in the four-dimensional case, does not improve the analysis except in the most singular term in the case of a resonance of the third kind. Further, the classification of resonances differs from the Schr\"odinger case in several key aspects, as shown in Section~\ref{sec:classification} below. The paper is organized as follows. In Section~\ref{sec:notation} we provide definitions of the various notations we use to develop the operator expansions. In Section~\ref{sec:free} we develop expansions for the free resolvent and establish the natural dispersive bound for the free operator. In Section~\ref{sec:low energy} we develop expansions for the perturbed resolvent in a neighborhood of the threshold for each type of resonance that may occur. In Section~\ref{sec:low disp} we utilize these expansions to prove the low energy version of Theorem~\ref{thm:main}. In Section~\ref{sec:large} we prove the high energy version of Theorem~\ref{thm:main}. Finally, in Section~\ref{sec:classification} we provide a classification of the spectral subspaces associated to the different types of zero-energy obstructions. \section{Notation}\label{sec:notation} For the convenience of the reader, we have gathered here the notation and terminology we use throughout the paper. For an operator $\mE(\lambda) $, we write $\mE(\lambda)=O_1(\lambda^{-\alpha})$ if its kernel $\mE(\lambda)(x,y)$ has the property \be\label{O1lambda} \sup_{x,y\in\R^3, \lambda>0}\big[\lambda^{\alpha}|\mE(\lambda)(x,y)|+\lambda^{\alpha+1} |\partial_\lambda\mE(\lambda)(x,y)|\big]<\infty. \ee Similarly, we use the notation $\mE(\lambda)=O_1(\lambda^{-\alpha}g(x,y))$ if $\mE(\lambda)(x,y)$ satisfies \be\label{O1lambdag} |\mE(\lambda)(x,y)|+\lambda |\partial_\lambda\mE(\lambda)(x,y)|\les \lambda^{-\alpha}g(x,y).
\ee Recall the definition of the Hilbert-Schmidt norm of an operator $K$ with kernel $K(x,y)$, $$ \| K\|_{HS}:=\bigg(\iint_{\R^{6}} |K(x,y)|^2\, dx\, dy \bigg)^{\f12}. $$ We also recall the following terminology from \cite{Sc2,eg2}: \begin{defin} We say an operator $T:L^2(\R^3) \to L^2(\R^3)$ with kernel $T(\cdot,\cdot)$ is absolutely bounded if the operator with kernel $|T(\cdot,\cdot)|$ is bounded from $ L^2(\R^3)$ to $ L^2(\R^3)$. \end{defin} We note that Hilbert-Schmidt and finite-rank operators are absolutely bounded. We will use the letter $\Gamma$ to denote a generic absolutely bounded operator. In addition, $\Gamma_\theta$ denotes a $\lambda$-dependent absolutely bounded operator satisfying \be\label{eq:Gamma def} \big \||\Gamma_\theta|\big \|_{L^2\to L^2}+ \lambda \big \||\partial_\lambda \Gamma_\theta|\big \|_{L^2\to L^2} \les \lambda^{\theta},\quad \lambda>0. \ee The operator may vary from one occurrence to the next, and may depend on the $\pm $ signs. The use of this notation allows us to significantly streamline the resolvent expansions developed in Section~\ref{sec:low energy} as well as the proofs of the dispersive bounds in Section~\ref{sec:low disp}. We use the smooth, even low energy cut-off $\chi$ defined by $\chi(\lambda)=1$ if $|\lambda|<\lambda_0\ll 1$ and $\chi(\lambda)=0$ when $|\lambda|>2\lambda_0$ for some sufficiently small constant $0<\lambda_0\ll1$. In analyzing the high energy we utilize the complementary cut-off $\widetilde \chi(\lambda)=1-\chi(\lambda)$. \section{The Free Evolution}\label{sec:free} In this section we obtain expansions for the free fourth order Schr\"odinger resolvent operators $R^{\pm}(H_0; \lambda^4)$, using the identity \eqref{RH_0 rep} and the Bessel function representation of the Schr\"odinger free resolvents $R^{\pm}_0(\lambda^2)$.
We use these expansions to establish dispersive estimates for the free fourth order Schr\"odinger evolution, and throughout the remainder of the paper to study the spectral measure for the perturbed operator. Recall the expression for the free Schr\"odinger resolvents in dimension three (see \cite{GS} for example), $$ R^{\pm}_0(\lambda^2) (x,y)= \frac{e^{\pm i \lambda|x-y|}}{4 \pi |x-y|} . $$ Therefore, by \eqref{eq:4th resolvdef}, \be\label{eq:R0lambda} R^\pm (H_0, \lambda^4) (x,y)= \frac{1}{2 \lambda^2} \Bigg( \frac{e^{\pm i \lambda|x-y|}}{4 \pi |x-y|} - \frac{e^{-\lambda|x-y|}}{4 \pi |x-y|} \Bigg). \ee When $\lambda |x-y|<1$, we have the following representation for $R(H_0, \lambda^4)$: \begin{align} \label{eq:R0low} R^\pm(H_0, \lambda^4) (x,y) = \frac{a^{\pm}}{\lambda} + G_0 + a_1^{\pm} \lambda G_1 + a_3^{\pm} \lambda^3 G_3+ \lambda^4 G_4 + O(\lambda^5 |x-y|^6) . \end{align} Here \begin{align}\label{adef} &a^{\pm}:= \frac{1\pm i}{8 \pi}, \quad a_1^{\pm}= \frac{1\mp i }{ 8 \pi \cdot (3!)}, \quad a_3^{\pm} =\frac {1\pm i }{8 \pi \cdot (5!)}, \quad G_0 (x,y) = - \frac{|x-y|}{8 \pi}, \\\label{Gdef} & G_1 (x,y) = |x-y|^2, \quad G_3 (x,y) = |x-y|^4, \quad G_4 (x,y) = -\frac{ |x-y|^5}{4\pi \cdot (6!)} . \end{align} When $\lambda |x-y|\gtrsim 1$, the expansion remains valid. Notice that $G_0=(\Delta^2)^{-1}$. The following lemma will be used repeatedly to obtain low energy dispersive estimates. \begin{lemma}\label{lem:t-34bound} Fix $0<\alpha<4$. Assume that $\mE(\lambda)=O_1(\lambda^{-\alpha})$ for $0<\lambda\les 1$. Then we have the bound \be\label{eq:t-34bound} \bigg| \int_{0}^\infty e^{it\lambda^4} \chi(\lambda) \lambda^3 \mE(\lambda)\, d\lambda \bigg| \les \la t\ra^{-1+\frac\alpha4}. \ee \end{lemma} \begin{proof} By the support condition and since $\alpha<4$, the integral is bounded.
Now, for $|t|>1$ we rewrite the integral in \eqref{eq:t-34bound} as $$ \int_{0}^{t^{-\f14}} e^{it\lambda^4} \lambda^3 \chi(\lambda)\mathcal E(\lambda)\, d\lambda+\int_{t^{-\f14}}^\infty e^{it\lambda^4} \lambda^3 \chi(\lambda)\mathcal E(\lambda)\, d\lambda:=I+II. $$ We see that $$ |I|\leq \int_0^{t^{-\f14}} \lambda^{3-\alpha}\, d\lambda \les t^{-1+\frac\alpha4 }. $$ For the second term, we use $\partial_\lambda e^{it\lambda^4}/(4it)=e^{it\lambda^4} \lambda^3$ to integrate by parts once. $$ |II|\les \frac{e^{it\lambda^4} \mathcal E(\lambda)}{4it}\bigg|_{t^{-\f14}} + \frac{1}{t} \int_{t^{-\f14}}^\infty|\mathcal E'(\lambda)|\, d\lambda\les t^{-1+\frac\alpha4}+ \frac{1}{t}\int_{t^{-\f14}}^\infty \lambda^{-\alpha-1}\, d\lambda\les t^{-1+\frac\alpha4}. $$ \end{proof} \begin{lemma}\label{lem:free bound} We have the bound $$ \sup_{x,y\in \mathbb R^3} \bigg| \int_{0}^\infty e^{it\lambda^4} \chi(\lambda) \lambda^3 R^\pm(H_0,\lambda^4)(x,y)\, d\lambda \bigg| \les \la t\ra^{-\f34}. $$ \end{lemma} \begin{proof} Note that the cancellation between $R^+$ and $R^-$ is not needed for, nor does it improve, this bound. Using \eqref{eq:R0lambda} we have $$ |R^\pm (H_0, \lambda^4) (x,y)|= \Bigg| \frac{e^{\pm i \lambda|x-y|}-e^{-\lambda|x-y|}}{8 \pi \lambda^2|x-y|} \Bigg|\les \frac1{\lambda} $$ uniformly in $x,y$ for $\lambda|x-y|>1$. For $\lambda|x-y|<1$, we have $$ |R^\pm (H_0, \lambda^4) (x,y)|= \Bigg| \frac{e^{\pm i \lambda|x-y|}-1+1-e^{-\lambda|x-y|}}{8 \pi \lambda^2|x-y|} \Bigg| \les \frac1{\lambda} $$ by the mean value theorem. Similarly, $$|\partial_\lambda R^\pm (H_0, \lambda^4) (x,y)|\les \frac1{\lambda^2} $$ uniformly in $x,y$. Therefore \be\label{eq:freeO1} R^\pm (H_0, \lambda^4)=O_1(\lambda^{-1}), \ee and the claim follows from Lemma~\ref{lem:t-34bound} with $\alpha=1$.
\end{proof} \begin{rmk} \label{rmk:large} The $t^{-\f34}$ bound is valid if we insert the high energy cutoff $\widetilde{\chi}(\lambda)=1-\chi(\lambda)$ in place of the low energy cutoff $\chi(\lambda)$ in Lemma~\ref{lem:t-34bound}. However, the integral is not absolutely convergent, and is large for small $|t|$. That is, $$ \bigg| \int_{0}^\infty e^{it\lambda^4} \widetilde{\chi}(\lambda) \lambda^3 \mE(\lambda) \, d\lambda \bigg| \les | t |^{-1+\frac\alpha4}. $$ Consequently, we obtain the following estimate for the free equation: $$ \| e^{i t \Delta^2}\|_{ L^{1} \rightarrow L^{\infty}} \les |t|^{-\f34}. $$ \end{rmk} \section{Resolvent expansions near zero} \label{sec:low energy} In this section we provide careful asymptotic expansions of the perturbed resolvent in a neighborhood of the threshold. To understand \eqref{stone} for small energies, i.e. $ \lambda \ll 1$, we use the symmetric resolvent identity. We define $U(x)=$sign$(V(x))$, $v(x)=|V(x)|^{\f12}$, and write \begin{align} \label{resid} R^{\pm}_V(\lambda^4)= R^{\pm}(H_0, \lambda^4) - R^{\pm}(H_0, \lambda^4)v (M^{\pm} (\lambda))^{-1} vR^{\pm}(H_0, \lambda^4) , \end{align} where $M^{\pm}(\lambda) = U + v R^{\pm}(H_0, \lambda^4) v $. As a result, we need to obtain expansions for $(M^{\pm} (\lambda))^{-1}$. The behavior of these operators as $\lambda \to 0$ depends on the type of resonance at zero energy, see Definition~\ref{def:restype} below. We determine these expansions case by case and establish their contribution to the spectral measure in Stone's formula, \eqref{stone}. Let $T:= U + v G_0 v$ and recall \eqref{eq:Gamma def}; we have the following expansions. \begin{lemma}\label{lem:M_exp} For $0<\lambda<1$ define $M^{\pm}(\lambda) = U + v R^{\pm}(H_0, \lambda^4) v $. Let $P=v\langle \cdot, v\rangle \|V\|_1^{-1}$ denote the orthogonal projection onto the span of $v$.
We have \begin{align} \label{Mexp} M^{\pm}(\lambda)&= A^{\pm}(\lambda) + M_0^\pm (\lambda),\\ \label{Apm} A^{\pm}(\lambda)&= \frac{\|V\|_1 a^\pm}{\lambda} P+T, \end{align} where $T:=U+vG_0v$ and $M_0^\pm (\lambda)=\Gamma_\ell$, for any $0\leq \ell \leq 1$, provided that $v(x)\les \la x \ra^{-\frac{5}{2}-\ell-}$. Moreover, for each $N=1,2,\ldots$ and $\ell\in [0,1]$, \be\label{M_0} M_0^\pm(\lambda)=\sum_{k=1}^N \lambda^k M_k^\pm +\Gamma_{N+\ell} \ee provided that $v(x)\les \la x \ra^{-\frac{5}{2}-N-\ell-}$. Here the operators $M_k^\pm$ and the error term are Hilbert-Schmidt, and hence absolutely bounded, operators. In particular, \begin{align} M^{\pm}_1 = a_1^{\pm} vG_1v, \,\,\,\,\,\, M_2^\pm=0, \, \,\,\,\,\, M^{\pm}_3 = a_3^{\pm} vG_3v, \,\,\,\,\,\, M^{\pm}_4 = vG_4 v, \label{M134} \end{align} where the $a^\pm_j$'s and $G_j$'s are defined in \eqref{adef} and \eqref{Gdef}. \end{lemma} \begin{proof} We give a proof only for the cases $N=1,2$; the other cases are similar. Using the expansion \eqref{eq:R0low} for $\lambda|x-y|<1$ and \eqref{eq:R0lambda} for $\lambda|x-y|>1$, we have $$ R^\pm (H_0, \lambda^4) (x,y)= \frac{a^{\pm}}{\lambda} + G_0 + a_1^{\pm} \lambda G_1+O_1(\lambda^3 |x-y|^4),\,\,\,\,\lambda|x-y|<1, $$ \begin{multline*} R^\pm (H_0, \lambda^4) (x,y)=\frac{a^{\pm}}{\lambda} + G_0 + a_1^{\pm} \lambda G_1+ \Big[\frac{e^{\pm i \lambda|x-y|} - e^{-\lambda|x-y|} }{8\pi \lambda^2|x-y|} - \frac{a^{\pm}}{\lambda} - G_0 - a_1^{\pm} \lambda G_1\Big]\\ = \frac{a^{\pm}}{\lambda} + G_0 + a_1^{\pm} \lambda G_1+ O_1(\lambda|x-y|^2),\,\,\,\,\lambda|x-y|>1. \end{multline*} Using these in the definition of $M^{\pm}(\lambda)$ and $M^{\pm}_0(\lambda)$, we have $$ \Big|\big(M^{\pm}_0(\lambda)- a_1^{\pm} \lambda vG_1v\big)(x,y)\Big|\les v(x)v(y)|x-y|^{\ell+2}\lambda^{\ell+1},\,\,\,0\leq \ell\leq 2, $$ $$ \Big|\partial_\lambda\big(M^{\pm}_0(\lambda)- a_1^{\pm} \lambda vG_1v\big)(x,y)\Big|\les v(x)v(y)|x-y|^{\ell+2}\lambda^{\ell },\,\,\,0\leq \ell\leq 2.
$$ This yields the claim for $N=1$ since the error term is a Hilbert-Schmidt operator if $v(x)\les \la x\ra^{-\frac52-1-\ell-}$. The case $N=2$ also follows since $M_2=0$ and $\ell\in[0,2]$. \end{proof} The definition below classifies the types of resonances that may occur at the threshold energy. In Section~\ref{sec:classification}, we establish this classification in detail. Since the free resolvent is unbounded as $\lambda\to 0$, this definition is somewhat analogous to the definition of resonances from \cite{JN} and \cite{Sc2} for the two dimensional Schr\"odinger operators. However, there are important differences, such as the appearance of the operators $T_1, T_2$ below. Specifically, the lower order terms in the expansions interact in such a way that $T_1$ and $T_2$ are now the differences of two separate operators. This phenomenon does not occur for the Schr\"odinger operators. \begin{defin} \label{def:restype} \begin{enumerate}[i)] \item Let $Q:=\mathbbm{1}-P$. We say that zero is a regular point of the spectrum of $\Delta^2 +V$ provided $QTQ$ is invertible on $QL^2$. In that case we define $D_0:=(QTQ)^{-1}$ as an absolutely bounded operator on $QL^2$, see Lemma~\ref{lem:abs} below. \item Assume that zero is not a regular point of the spectrum. Let $S_1$ be the Riesz projection onto the kernel of $QTQ$. Then $QTQ+S_1$ is invertible on $QL^2$. Accordingly, we define $D_0 = (QT Q + S_1)^{-1}$, as an operator on $QL^2$. This does not conflict with the previous definition since $S_1=0$ when zero is regular. We say there is a resonance of the first kind at zero if the operator \begin{align} \label{F def} T_1:=S_1TPTS_1- \frac{\|V\|_1}{3(8\pi)^2} S_1vG_1vS_1 \end{align} is invertible on $S_1L^2$. \item We say there is a resonance of the second kind if $T_1$ is not invertible on $S_1L^2$, but \begin{align} \label{T2 def} T_2:= S_2vG_3vS_2+\frac{10}{3 } S_2vWvS_2 \end{align} is invertible.
Here $S_2$ is the Riesz projection onto the kernel of $T_1$, and $W(x,y)=|x|^2|y|^2$. Moreover, we define $D_1:= (T_1 +S_2)^{-1} $ as an operator on $S_1L^2$. \item Finally, if $T_2$ is not invertible, we say there is a resonance of the third kind at zero. In this case the operator $T_3:= S_3 v G_4v S_3$ is always invertible on $S_3L^2$, where $S_3$ is the Riesz projection onto the kernel of $T_2$, see Lemma~\ref{invertible}. We define $D_2:= (T_2 + S_3)^{-1}$ as an operator on $S_3L^2$. \end{enumerate} \end{defin} As for the four dimensional operators, see the remarks after Definition~2.5 in \cite{EGG} and after Definition~3.2 in \cite{GT4}, $T$ is a compact perturbation of $U$. Hence, the Fredholm alternative guarantees that $S_1$ is a finite-rank projection. With these definitions, first notice that $ S_3 \leq S_2 \leq S_1 \leq Q $; hence all the $S_j$ are finite-rank projections orthogonal to the span of $v$. Second, since $T$ is a self-adjoint operator and $S_1$ is the Riesz projection onto its kernel, we have $S_1 D_0= D_0 S_1 = S_1$. Similarly, $S_2 D_1= D_1 S_2 = S_2$ and $S_3 D_2= D_2 S_3 = S_3$. \begin{lemma}\label{lem:abs} Let $|V(x)| \les \la x \ra^{-\beta}$ for some $\beta>5$. Then $QD_0Q$ is absolutely bounded. \end{lemma} \begin{proof} We prove the statement when $S_1 \neq 0$. We first assume that $QUQ$ is invertible on $QL^2$. Using the resolvent identities, we have \begin{align*} QD_0Q = (QUQ)^{-1} - QD_0Q (S_1 + vG_0v) (QUQ)^{-1} = (QUQ)^{-1} - S_1(QUQ)^{-1} -QD_0Q vG_0v (QUQ)^{-1}. \end{align*} Note that $(QUQ)^{-1}$ is absolutely bounded. Moreover, since $S_1$ is finite rank, any summand containing $S_1$ is finite rank, and hence absolutely bounded. For $QD_0Q vG_0v (QUQ)^{-1}$, we note that $vG_0v$ is a Hilbert-Schmidt operator for any $v(x) \les \la x \ra^{-5/2-}$ and $QD_0Q$ is bounded. Therefore, $QD_0QvG_0v$ is Hilbert-Schmidt. Since the composition of absolutely bounded operators is absolutely bounded, $QD_0Q$ is absolutely bounded.
If $QUQ$ is not invertible, one can define $\pi_0$ as the Riesz projection onto the kernel of $QUQ$ and see that $QUQ + \pi_0$ is invertible on $QL^2$. Therefore, one can consider $ Q[ U + \pi_0 + S_1 + vG_0v -\pi_0]Q$ in the above argument to obtain the statement. \end{proof} Our aim in the rest of this section is to prove Theorem~\ref{thm:Minvexp} below, obtaining suitable expansions for $[M^\pm(\lambda) ]^{-1}$ valid as $\lambda \to 0$, under the assumption that zero is regular as well as in the cases when there are threshold obstructions. Recall the notation \eqref{eq:Gamma def} and that the operators $\Gamma_\theta$ vary from line to line. \begin{theorem}\label{thm:Minvexp} If zero is a regular point of the spectrum and if $|v(x)|\les \la x\ra^{-\frac52-}$, then $$[M^\pm(\lambda) ]^{-1} =Q\Gamma_0Q+\Gamma_1.$$ If there is a resonance of the first kind at zero and if $|v(x)|\les \la x\ra^{-\frac72-}$, then $$[M^\pm(\lambda) ]^{-1} =Q\Gamma_{-1}Q+Q\Gamma_0+\Gamma_0Q+\Gamma_1.$$ If there is a resonance of the second kind at zero and if $|v(x)|\les \la x\ra^{-\frac{11}2-}$, then \begin{multline*}[M^\pm(\lambda) ]^{-1} = S_2\Gamma_{-3} S_2+S_2\Gamma_{-2} Q+ Q\Gamma_{-2}S_2 + S_2\Gamma_{-1}+\Gamma_{-1}S_2 \\+Q \Gamma_{-1}Q +Q \Gamma_0+ \Gamma_0Q +\Gamma_1. \end{multline*} If there is a resonance of the third kind at zero and if $|v(x)|\les \la x\ra^{-\frac{15}2-}$, then \begin{multline*} [M^\pm(\lambda) ]^{-1} =\frac{1}{\lambda^4} S_3D_3S_3\\ + S_2\Gamma_{-3} S_2+S_2\Gamma_{-2} Q+ Q\Gamma_{-2}S_2 + S_2\Gamma_{-1}+\Gamma_{-1}S_2 +Q \Gamma_{-1}Q +Q \Gamma_0+ \Gamma_0Q +\Gamma_1. \end{multline*} \end{theorem} Roughly speaking, modulo a finite rank term, the contributions to \eqref{stone} of all of the operators in these expansions are of the same size with respect to the spectral parameter $\lambda$.
We show in Lemma~\ref{lem:QvR} that, in the contribution to \eqref{resid}, having the operator $Q$ on one side allows us to gain a power of $\lambda$, while having $S_2$ allows us to gain two powers of $\lambda$, modulo the contribution of $G_0$. Recall from \eqref{Mexp} that $M^\pm(\lambda) = A^\pm(\lambda) + M_0^\pm(\lambda)$. If zero is regular then we have the following expansion for $(A^\pm(\lambda))^{-1}$. \begin{lemma}\label{regular} Let $ 0 < \lambda \ll 1$ and suppose that zero is a regular point of the spectrum of $H$. Then we have \begin{align} \label{M inverse regular} (A^\pm(\lambda))^{-1}= QD_0Q + g^{\pm}(\lambda) S, \end{align} where $g^{\pm}(\lambda)=(\frac{ a^\pm \|V\|_1}{\lambda} +c)^{-1} $ for some $c\in \R$, and \begin{align} \label{def:S} S =\left[\begin{array}{cc} P& -PTQD_0Q\\ -QD_0QTP &QD_0QTPTQD_0Q \end{array} \right], \end{align} is a self-adjoint, finite rank operator. Moreover, the same formula holds for $(A^\pm(\lambda)+S_1)^{-1}$ with $D_0=(Q(T+S_1)Q)^{-1}$ if zero is not regular. \end{lemma} \begin{proof} We prove the statement when $S_1\neq 0$. The proof is identical in the regular case. Recalling \eqref{Apm}, we write $A^\pm(\lambda)+S_1$ in the block format (using $PS_1=S_1P=0$): \begin{align} A^\pm(\lambda)+S_1 = \left[\begin{array}{cc} \frac{ a^{\pm} \|V\|_1}{\lambda} P + PTP & PTQ \\ QTP &Q(T+S_1)Q \end{array} \right] :=\left[\begin{array}{cc} a_{11} & a_{12} \\ a_{21} & a_{22} \end{array} \right] \end{align} Since $Q(T+S_1)Q$ is invertible, by the Feshbach formula (see, e.g., Lemma~2.8 in \cite{eg2}) the invertibility of $A^\pm(\lambda)+S_1$ hinges upon the existence of $d=(a_{11}-a_{12}a_{22}^{-1}a_{21})^{-1}$. Denoting $D_0=(Q(T+S_1)Q)^{-1}:QL^2\to QL^2$, we have \begin{align*} d= \big(\tfrac{ a^{\pm} \|V\|_1}{\lambda} P+PTP-PTQD_0QTP\big)^{-1} =\big(\frac{ a^\pm \|V\|_1}{\lambda} +c\big)^{-1}P=:g^{\pm}(\lambda)P \end{align*} with $c = Tr(P T P - P T QD_0QT P) \in \R $. Therefore, $d$ exists if $\lambda$ is sufficiently small.
Thus, by the Feshbach formula, \begin{align}\label{feshbach} (A^\pm(\lambda)+S_1)^{-1}&=\left[\begin{array}{cc} d & -da_{12}a_{22}^{-1}\\ -a_{22}^{-1}a_{21}d & a_{22}^{-1}a_{21}da_{12}a_{22}^{-1}+a_{22}^{-1} \end{array}\right]\\ &=QD_0Q+ g^{\pm}(\lambda) S. \label{Ainverse} \end{align} \end{proof} Assume that $v(x)\les \la x\ra^{-\frac52-}$. Using \eqref{Mexp}, \eqref{M_0}, the resolvent identity and Lemma~\ref{regular} when zero is regular, we may write (for some $\epsilon>0$) \begin{multline*} [M^{\pm}]^{-1} = [A^\pm + M_0^\pm ]^{-1}= [A^\pm + \Gamma_{\epsilon} ]^{-1}\\ = [A^\pm ]^{-1}- [A^\pm ]^{-1}\Gamma_{\epsilon}[A^\pm ]^{-1}+ [A^\pm ]^{-1}\Gamma_{\epsilon}[M^{\pm}]^{-1}\Gamma_{\epsilon}[A^\pm ]^{-1} =Q\Gamma_0Q+\Gamma_1, \end{multline*} proving Theorem~\ref{thm:Minvexp} in the regular case. Assuming that $v(x)\les \la x\ra^{-\frac72-}$, by Lemma~\ref{lem:M_exp}, we have $M_0^\pm=\Gamma_1$. Also using \eqref{Mexp} and Lemma~\ref{regular} we obtain the following expansion in the case zero is not regular: \begin{multline}\label{M+S1ex} (M^{\pm} (\lambda)+S_1)^{-1} = (A^\pm(\lambda)+S_1 + M_0^{\pm}(\lambda))^{-1} \\= (A^\pm(\lambda)+S_1)^{-1} \sum_{k=0}^N (-1)^k [ M_0^{\pm}(\lambda) (A^\pm(\lambda)+S_1)^{-1}]^k+\Gamma_{N+1},\,\, N=0,1,\ldots, \\ =QD_0Q+ \Gamma_1. \end{multline} The following lemma from \cite{JN} is the main tool to obtain the expansions of $ M^\pm(\lambda)^{-1}$ when zero is not regular. \begin{lemma}\label{JNlemma} Let $M$ be a closed operator on a Hilbert space $\mathcal{H}$ and $S$ a projection. Suppose $M+S$ has a bounded inverse. Then $M$ has a bounded inverse if and only if $$ B:=S-S(M+S)^{-1}S $$ has a bounded inverse in $S\mathcal{H}$, and in this case \begin{align} \label{Minversegeneral} M^{-1}=(M+S)^{-1}+(M+S)^{-1}SB^{-1}S(M+S)^{-1}. \end{align} \end{lemma} We use this lemma with $M=M^{\pm}(\lambda)$ and $S=S_1$. 
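Explicitly, with this choice of $M$ and $S$, \eqref{Minversegeneral} reads
$$
[M^{\pm}(\lambda)]^{-1}=(M^{\pm}(\lambda)+S_1)^{-1}+(M^{\pm}(\lambda)+S_1)^{-1}S_1\big[S_1-S_1(M^{\pm}(\lambda)+S_1)^{-1}S_1\big]^{-1}S_1(M^{\pm}(\lambda)+S_1)^{-1},
$$
so the expansion \eqref{M+S1ex} reduces the problem to inverting the operator in brackets on $S_1L^2$.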
Much of our technical work in the rest of this section is devoted to finding appropriate expansions for the inverse of $B^\pm(\lambda)=S_1-S_1(M^{\pm}(\lambda)+S_1)^{-1}S_1$ on $S_1L^2$ under various spectral assumptions. For simplicity we work with $+$ signs and drop the superscript. We first list the orthogonality relations of various operators and projections we need. \begin{align} \label{ort:1} &S_iD_j=D_jS_i=S_i, i>j,\\ \label{ort:2}&S_3\leq S_2\leq S_1\leq Q =P^\perp,\\ \label{ort:3}&S_1S= -S_1TP+S_1TPTQD_0Q,\,\,\,SS_1=-PTS_1+QD_0QTPTS_1,\\ \label{ort:4}&S_1SS_1=S_1TPTS_1,\\ \label{ort:5}&SS_2=S_2S=0,\\ \label{ort:6}&QM_1S_2=S_2M_1Q=S_3M_1=M_1S_3=0,\\ \label{ort:7}&S_2M_3S_3=S_3M_3S_2=0. \end{align} These can be checked using \eqref{def:S}, \eqref{M134}, and $Qv=S_2TP=S_2vG_1vQ=S_2x_jv=S_3x_ix_jv=0$, $i,j=1,2,3$ (see Lemmas~\ref{lem:S_2} and \ref{lem:S_3} below). Using \eqref{M_0} with $N=1$, $\ell=0+$ and \eqref{Ainverse} in \eqref{M+S1ex}, and then using \eqref{ort:4}, we obtain \be \label{BlambdaS1} B(\lambda)=S_1-S_1(M (\lambda)+S_1)^{-1}S_1 =- g(\lambda)S_1TPTS_1 + \lambda S_1M_1S_1 +\Gamma_{1+}, \ee provided that $|v(x)|\les \la x\ra^{-\frac72-}$. Using \eqref{M134}, we have \begin{multline}\label{T1calc} g(\lambda)S_1TPTS_1 -\lambda S_1M_1S_1 = g(\lambda)[S_1TPTS_1 -a_1\frac{\lambda}{g(\lambda)} S_1vG_1vS_1]\\ =g(\lambda) T_1-ca_1\lambda g(\lambda) S_1vG_1vS_1=g(\lambda) T_1+\Gamma_2, \end{multline} where $$ T_1=S_1TPTS_1- \frac{\|V\|_1}{3(8\pi)^2} S_1vG_1vS_1. 
$$ The second equality follows from $$ g(\lambda)[S_1TPTS_1 -a_1\frac{\lambda}{g(\lambda)} S_1vG_1vS_1] =g(\lambda)[S_1TPTS_1 -a_1(a \|V\|_1 +c\lambda) S_1vG_1vS_1], $$ and from recalling the definitions of $g(\lambda)$, \eqref{adef} and \eqref{M134} to see that $$ a^\pm a_1^\pm =\frac{(\pm i+1)(\mp i+1)}{(8\pi)^2 (3!)}=\frac{2}{(8\pi)^2 (3!)}=\frac{1}{3(8\pi)^2}. $$ In the case when there is a resonance of the first kind at zero, namely when $T_1$ is invertible, using \eqref{T1calc} in \eqref{BlambdaS1}, we obtain $$ B(\lambda)^{-1} =(-g(\lambda)T_1+\Gamma_{1+})^{-1}= \Gamma_{-1}, $$ provided that $v(x)\les \la x\ra^{-\frac{7}2-}$. Using this and \eqref{M+S1ex} in \eqref{Minversegeneral}, we obtain \begin{multline*} [M (\lambda)]^{-1}=QD_0Q+\Gamma_1+(QD_0Q+\Gamma_1)S_1 \Gamma_{-1}S_1(QD_0Q+\Gamma_1)\\ = Q\Gamma_{-1}Q+Q\Gamma_0+\Gamma_0Q+\Gamma_1, \end{multline*} proving Theorem~\ref{thm:Minvexp} in the case when there is a resonance of the first kind. In the case when there is a resonance of the second or third kind, namely when $S_2\neq 0$, we need more detailed expansions for $B(\lambda)$, and hence for $(M (\lambda)+S_1)^{-1}$. Using \eqref{M_0} and \eqref{M134} in \eqref{M+S1ex}, we obtain \begin{multline}\label{M+S1ex2} (M (\lambda)+S_1)^{-1} = QD_0Q+g^\pm(\lambda)S -\lambda QD_0QM_1QD_0Q \\ -\lambda g^\pm(\lambda)\big[QD_0QM_1S+SM_1QD_0Q\big]+ \lambda^2 QD_0Q (M_1QD_0Q)^2 \\ -\lambda (g^\pm(\lambda))^2 SM_1S +\lambda^2g^\pm(\lambda) \big[S (M_1QD_0Q)^2+QD_0Q M_1SM_1QD_0Q +(QD_0Q M_1)^2S\big]\\ -\lambda^3 QD_0Q\big[M_3+M_1(QD_0QM_1)^2 \big]QD_0Q +\Gamma_{3+}, \end{multline} provided that $v(x)\les \la x\ra^{-\frac{11}2-}$.
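For the bookkeeping in these expansions, it may help to record that, directly from the formula for $g^{\pm}(\lambda)$ in Lemma~\ref{regular},
$$
g^{\pm}(\lambda)=\Big(\frac{a^{\pm}\|V\|_1}{\lambda}+c\Big)^{-1}=\frac{\lambda}{a^{\pm}\|V\|_1}\big(1+O(\lambda)\big),\qquad 0<\lambda\ll1,
$$
so that each factor of $g^{\pm}(\lambda)$ contributes one power of $\lambda$; e.g., the term $ca_1\lambda g(\lambda) S_1vG_1vS_1$ in \eqref{T1calc} is indeed $\Gamma_2$.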
Using \eqref{ort:1}-\eqref{ort:4} and \eqref{T1calc} in \eqref{M+S1ex2}, we obtain \begin{multline}\label{S1M+S1S1} S_1(M (\lambda)+S_1)^{-1}S_1 = S_1+ g(\lambda) T_1 \\ -ca_1\lambda g(\lambda) S_1vG_1vS_1 -\lambda g (\lambda)S_1\big[M_1S+SM_1\big]S_1+ \lambda^2 S_1 M_1QD_0QM_1S_1 \\ -\lambda (g (\lambda))^2 S_1SM_1SS_1 +\lambda^2g (\lambda) \big[S_1S M_1QD_0QM_1S+S_1 M_1SM_1S_1 +S_1M_1QD_0Q M_1SS_1\big]\\ -\lambda^3 S_1\big[M_3+M_1(QD_0QM_1)^2 \big]S_1 +\Gamma_{3+}. \end{multline} Therefore \begin{multline*} B(\lambda)= S_1- S_1(M (\lambda)+S_1)^{-1}S_1 = -g(\lambda)T_1\\+ a_1\lambda g(\lambda) S_1(cM_1 + S M_1 +M_1S )S_1 - \lambda^2 S_1M_1QD_0QM_1S_1\\ +\lambda (g (\lambda))^2 S_1SM_1SS_1 +\lambda^2g (\lambda) \big[S_1S M_1QD_0QM_1S+S_1 M_1SM_1S_1 +S_1M_1QD_0Q M_1SS_1\big]\\ +\lambda^3 S_1\big[M_3+M_1(QD_0QM_1)^2 \big]S_1 +\Gamma_{3+}. \end{multline*} Let $U_1=S_1-S_2$, $U_2=S_2-S_3$, and $U=U_1+U_2$. In block form, we have \begin{align}\label{eq:BlambdaS3} B(\lambda) = \left[\begin{array}{cc} S_3B(\lambda) S_3 & S_3 B(\lambda) U \\ UB(\lambda) S_3 & U B(\lambda) U \end{array} \right], \end{align} \begin{align}\label{eq:Blambda} UB(\lambda)U = \left[\begin{array}{cc} U_2 B(\lambda) U_2 & U_2 B(\lambda) U_1 \\ U_1B(\lambda) U_2 & U_1B(\lambda) U_1 \end{array} \right]. \end{align} We first invert $UB(\lambda) U$ for small $\lambda$. We have $$ U_1B(\lambda) U_1=-g(\lambda) U_1T_1U_1+U_1\Gamma_2U_1. $$ Using \eqref{ort:5} and \eqref{ort:6}, we obtain $$ U_1B(\lambda) U_2 = a_1\lambda g(\lambda) U_1S vG_1v U_2 +U_1\Gamma_3U_2=-a_1\lambda g(\lambda) U_1TP vG_1v U_2 +U_1\Gamma_3U_2. $$ Similarly, $$ U_2B(\lambda)U_1 =-a_1\lambda g(\lambda) U_2 vG_1v PT U_1 +U_2\Gamma_3U_1, $$ and \begin{multline*} U_2B(\lambda) U_2 = \lambda^3 U_2M_3U_2- \lambda^2g(\lambda)U_2M_1SM_1U_2+\Gamma_4 \\ =a_3\lambda^3 U_2vG_3vU_2-a_1^2\lambda^2g(\lambda)U_2vG_1vSvG_1vU_2+U_2\Gamma_{3+}U_2.
\end{multline*} Note that, by \eqref{ort:6}, $$ U_2vG_1vSvG_1vU_2= U_2vG_1vPvG_1vU_2= \|V\|_{L^1} U_2vWvU_2, $$ where $W(x,y)=|x|^2|y|^2$. In the second equality we used $G_1(x,y)=|x|^2-2x\cdot y +|y|^2$ and $S_2 x_j v =S_2v=0$. Also noting that $$ \frac{a_3\lambda}{ g(\lambda)}=a_3a\|V\|_1+ca_3\lambda=\frac{2i\|V\|_1}{5!(8\pi)^2}+ca_3\lambda,\,\,\,\,\,\, a_1^2=\frac{-2i}{(3!)^2(8\pi)^2}, $$ we obtain \begin{multline*} U_2B(\lambda) U_2 = \frac{2i }{5!(8\pi)^2} \|V\|_{L^1}\lambda^2g(\lambda) [ U_2vG_3vU_2+\frac{10}{3 } U_2vWvU_2]+\Gamma_4\\ = \frac{2i }{5!(8\pi)^2} \|V\|_{L^1}\lambda^2g(\lambda) U_2T_2U_2+U_2\Gamma_{3+}U_2. \end{multline*} If $U_1=0$, i.e., $S_1=S_2$, then we can invert $UBU$ as $$ (UB(\lambda)U)^{-1}= \frac{5!(8\pi)^2} {2i \|V\|_{L^1}\lambda^2g(\lambda) } (U_2T_2U_2)^{-1}+U_2\Gamma_{-3+}U_2 = U_2 \Gamma_{-3}U_2. $$ If $U_1 \neq 0$, we invert $UB(\lambda)U$ using Feshbach's formula. Note that we can rewrite \eqref{eq:Blambda} using the calculations above: $$ UB(\lambda)U = -g(\lambda)\left[\begin{array}{cc} -\frac{2i }{5!(8\pi)^2} \|V\|_{L^1}\lambda^2 U_2T_2U_2+U_2\Gamma_{2+}U_2 & a_1\lambda U_2 vG_1v PT U_1 +U_2\Gamma_2 U_1 \\ a_1\lambda U_1TP vG_1v U_2 + U_1\Gamma_2U_2 & U_1T_1U_1+ U_1\Gamma_1 U_1 \end{array} \right]. $$ Note that $a_{22}$ is invertible. Therefore, $U B(\lambda)U$ is invertible provided that the following inverse exists: \begin{multline*} d=\Bigg( -\frac{2i \|V\|_{L^1}}{5!(8\pi)^2} \lambda^2 U_2T_2U_2 - a_1^2\lambda^2 S_2 vG_1v PT U_1 (U_1T_1U_1)^{-1} U_1TP vG_1v U_2 + U_2 \Gamma_{2+} U_2 \Bigg)^{-1} \\ =- \frac{5!(8\pi)^2}{2i \|V\|_{L^1}\lambda^2}\Bigg( U_2 T_2 U_2- \frac{10}{3\|V\|_{L^1}} U_2 vG_1v PT U_1 (U_1T_1U_1)^{-1} U_1TP vG_1v U_2 \Bigg)^{-1}+U_2\Gamma_{-2+}U_2.
\end{multline*} Note that, since $S_2v=S_2x_jv=0$ we can rewrite the operator in parenthesis as \begin{multline*} U_2T_2U_2 - \frac{10\la (U_1T_1U_1)^{-1} U_1Tv,U_1Tv \ra}{3\|V\|_{L^1}} U_2 v W v U_2 \\ = U_2vG_3vU_2+\frac{10}{3 } \Big(1-\frac{\la (U_1T_1U_1)^{-1} U_1Tv,U_1Tv \ra}{\|V\|_{L^1}}\Big) U_2vWvU_2. \end{multline*} Note that by Lemma~\ref{lem:S_3} below the kernel of $T_2$ agrees with the kernel of $S_2vG_3vS_2$. Therefore $U_2vG_3vU_2$ is invertible and positive definite. Since $U_2vWvU_2$ is positive semi-definite, the inverse exists if we can prove that $\la (U_1T_1U_1)^{-1} U_1Tv,U_1Tv \ra\leq \|V\|_{L^1}$. Note that $$ U_1TPTU_1 u = \frac1{\|V\|_{L^1}} U_1(Tv) \la u,U_1(Tv)\ra. $$ Also note that $U_1T_1U_1-U_1TPTU_1$ is positive semi-definite. Therefore the required bound follows from the following lemma with $\mathcal H=U_1L^2$, $z= U_1(Tv)$, $\alpha=\frac1{\|V\|_{L^1}}$, and $\mathcal S =U_1T_1U_1-U_1TPTU_1$. \begin{lemma}\label{lem:posdef} Let $\mathcal H$ be a Hilbert space. Fix $z\in \mathcal H$ and $\alpha>0$ and let $\mathcal T(u)=\alpha z\la u,z\ra$, $u\in\mathcal H$. Let $\mathcal S$ be a positive semi-definite operator on $\mathcal H$ so that $\mathcal T+\mathcal S$ is invertible. Then, $$ 0\leq \big\la (\mathcal T+\mathcal S)^{-1}z, z\big\ra \leq \frac1\alpha. $$ \end{lemma} \begin{proof} Let $w=(\mathcal T+\mathcal S)^{-1}z$. We have $$ z=\mathcal Tw+\mathcal Sw=\alpha z\la w,z\ra +\mathcal Sw, \text{ and hence } \mathcal Sw=z-\alpha z\la w,z\ra. $$ Then since $\mathcal S$ is positive semi-definite, $$ 0\leq \la \mathcal Sw,w\ra = \big\la z-\alpha z\la w,z\ra,w\big\ra =\la z,w\ra-\alpha |\la z,w\ra|^2. $$ Therefore, $\la (\mathcal T+\mathcal S)^{-1}z,z\ra =\la w,z\ra\in \R$ and $$ 0\leq \la w, z\ra \leq \frac1\alpha. \quad \qedhere $$ \end{proof} We conclude that $$ d=\lambda^{-2}U_2DU_2+U_2\Gamma_{-2+}U_2=U_2\Gamma_{-2}U_2. 
$$ Using this in the Feshbach formula \eqref{feshbach}, we obtain \begin{multline} \label{eq:UBUinverse} (UB(\lambda)U)^{-1} =-\frac1{g(\lambda)}\left[\begin{array}{cc} U_2\Gamma_{-2} U_2 +U_2\Gamma_{-1}U_2 & U_2\Gamma_{-1 }U_1\\ U_1 \Gamma_{-1} U_2 & U_1 \Gamma_{0} U_1 \end{array}\right] \\ = U_2\Gamma_{-3} U_2+U_2\Gamma_{-2}U_1+U_1\Gamma_{-2}U_2+U_1\Gamma_{-1}U_1. \end{multline} We now focus on the case $S_3=0$, $U_2=S_2\neq 0$. We have $B(\lambda)^{-1}=(UB(\lambda)U)^{-1}$. Using \eqref{M+S1ex2} and orthogonality relations \eqref{ort:1}-\eqref{ort:6}, we have $$ S_2(M (\lambda)+S_1)^{-1} =(M (\lambda)+S_1)^{-1}S_2 = S_2 +\Gamma_2. $$ Also recall that $$ (M (\lambda)+S_1)^{-1} = QD_0Q+\Gamma_1. $$ Using these in \eqref{Minversegeneral}, we have \begin{multline}\label{MinverseS2nonzero} M(\lambda)^{-1}= QD_0Q+\Gamma_1\\ + (M (\lambda)+S_1)^{-1}\big[U_2\Gamma_{-3} U_2+U_2\Gamma_{-2}U_1+U_1\Gamma_{-2}U_2+U_1\Gamma_{-1}U_1\big](M (\lambda)+S_1)^{-1}\\= S_2\Gamma_{-3} S_2+S_2\Gamma_{-2} Q+ Q\Gamma_{-2}S_2 + S_2\Gamma_{-1}+\Gamma_{-1}S_2 +Q \Gamma_{-1}Q +Q \Gamma_0+ \Gamma_0Q +\Gamma_1. \end{multline} This expansion is valid also in the case $U_1=0$, proving Theorem~\ref{thm:Minvexp} in the case of resonance of the second kind. We consider the final case, when $S_3\neq 0$. Using $$ (A^\pm(\lambda)+S_1)^{-1} S_3 =S_3 (A^\pm(\lambda)+S_1)^{-1}= S_3, $$ $$ S_3 M_0^{\pm}=M_0^{\pm}S_3= \Gamma_3, $$ $$ S_3 M_0^{\pm}S_3= \lambda^4 S_3 vG_4v S_3+\Gamma_5 =\lambda^4 T_3+\Gamma_5, $$ we have $$ S_3B^\pm S_3= - S_3 \sum_{k=1}^4 (-1)^k [ M_0^{\pm}(\lambda) (A^\pm(\lambda)+S_1)^{-1}]^k S_3 +\Gamma_5\\ =\lambda^4 T_3 +\Gamma_5, $$ provided that $v(x)\les \la x \ra^{-\frac{15}{2}-}$. If $U\neq 0$, we invert $B(\lambda)$ using Feshbach's formula for the block form \eqref{eq:BlambdaS3}. Note that $$ d=\big(S_3BS_3-S_3BU(UBU)^{-1}UBS_3\big)^{-1}. $$ The leading term is $\lambda^4 T_3 +\Gamma_5$. 
We write the second term as \begin{multline*} S_3BU_2(UBU)^{-1}U_2BS_3+S_3BU_1(UBU)^{-1}U_2BS_3\\+S_3BU_2(UBU)^{-1}U_1BS_3+S_3BU_1(UBU)^{-1}U_1BS_3=\Gamma_5. \end{multline*} To obtain the estimate, we used $S_3BU_2=\Gamma_4$, $S_3BU_1=\Gamma_3$, and \eqref{eq:UBUinverse}. Therefore, for small $\lambda>0,$ $$ d=\lambda^{-4}D_3+S_3\Gamma_{-3}S_3=S_3\Gamma_{-4}S_3. $$ Using this in Feshbach's formula for the block form \eqref{eq:BlambdaS3}, we obtain \begin{multline*} B(\lambda)^{-1}=\lambda^{-4}D_3+S_3\Gamma_{-3}S_3 + S_3\Gamma_{-4}S_3BU (UBU)^{-1}+(UBU)^{-1}UBS_3\Gamma_{-4}S_3 \\+(UBU)^{-1}UBS_3\Gamma_{-4}S_3BU (UBU)^{-1}+(UBU)^{-1}. \end{multline*} Using \eqref{eq:UBUinverse}, decomposing $U=U_1+U_2$ as above, and using $S_3BU_2=\Gamma_4$, $S_3BU_1=\Gamma_3$, we have $$ B(\lambda)^{-1}= \lambda^{-4}D_3 + S_2\Gamma_{-3} S_2+ S_2 \Gamma_{- 2} S_1+ S_1 \Gamma_{- 2} S_2 + S_1 \Gamma_{- 1} S_1. $$ Finally, using $$ S_2(M (\lambda)+S_1)^{-1} =(M (\lambda)+S_1)^{-1}S_2 = S_2 +\Gamma_2, $$ $$ S_3(M (\lambda)+S_1)^{-1} =(M (\lambda)+S_1)^{-1}S_3 = S_3 +\Gamma_3, $$ $$ (M (\lambda)+S_1)^{-1} = QD_0Q+\Gamma_1, $$ we obtain Theorem~\ref{thm:Minvexp} in the case of a resonance of the third kind. \section{Low energy dispersive estimates}\label{sec:low disp} In this section we analyze the perturbed evolution $e^{-itH}$ in the $L^1 \rar L^{\infty}$ setting for small energy, when the spectral variable $\lambda$ is in a small neighborhood of the threshold energy $\lambda=0$. As in the free case, we represent the solution via Stone's formula, \eqref{stone}. As usual, we analyze \eqref{stone} separately for large energy, when $\lambda \gtrsim 1$, and for small energy, when $ \lambda \ll 1$; see for example \cite{Sc2, eg2}. The effect of the presence of zero energy resonances is only felt in the small energy regime.
Different resonances change the asymptotic behavior of the perturbed resolvents, and hence that of the spectral measure, as $\lambda\to 0$, which we study in this section. The large energy argument appears in Section~\ref{sec:large} to complete the proof of Theorem~\ref{thm:main}. We start with the following lemma, which will be used repeatedly. \begin{lemma} \label{lem:QvR} Assume that $v(x)\les\la x\ra^{-\frac52-}$. Then $$ \sup_y \big\| [Qv R^{\pm}(H_0, \lambda^4)](\cdot,y)\big\|_{L^2} \les 1,\,\,\text{ and }\,\, \sup_y \big\| \partial_\lambda [Qv R^{\pm}(H_0, \lambda^4)](\cdot,y)\big\|_{L^2} \les \frac1\lambda. $$ Assuming that $v(x)\les\la x\ra^{-\frac72-},$ we have $$ \sup_y \big\| \big[S_2v (R^{\pm}(H_0, \lambda^4)-G_0) \big](\cdot,y)\big\|_{L^2} \les \lambda,\,\, \,\, \sup_y \big\| \partial_\lambda \big[S_2v (R^{\pm}(H_0, \lambda^4)-G_0) \big](\cdot,y) \big\|_{L^2} \les 1, $$ and $$ \sup_y \big\| \big[S_2v (R^{+}(H_0, \lambda^4) -R^{-}(H_0, \lambda^4) ) \big](\cdot,y)\big\|_{L^2} \les \lambda, $$ $$ \sup_y \big\| \partial_\lambda \big[S_2v (R^{+}(H_0, \lambda^4) -R^{-}(H_0, \lambda^4) )\big](\cdot,y) \big\|_{L^2} \les 1. $$ \end{lemma} \begin{proof} We prove the assertion for the $+$ sign. Recall the expansion \eqref{eq:R0low}. Using the fact that $Qv=0$, we have \begin{multline*} [Q vR^{+}(H_0, \lambda^4)] (y_2, y) = \frac{1}{8 \pi \lambda} \int_{\R^3} Q(y_2, y_1) v(y_1) [F(\lambda |y-y_1|) -F(\lambda|y|)]dy_1\\ =\frac{1}{8 \pi } \int_{\R^3} Q(y_2, y_1) v(y_1) \int_{|y|}^{|y-y_1|}F^\prime(\lambda s) ds dy_1, \end{multline*} where $$ F (p) = \frac{ e^{ip} - e^{-p}}{ p}. $$ Noting that $|F^\prime(p)|\les 1$, and using the absolute boundedness of $Q$, we obtain $$ \big\| [Q vR^{+}(H_0, \lambda^4)] (\cdot, y) \big\|_{L^2}\les \Big\| \int_{\R^3} |Q(y_2, y_1)| |v(y_1)|\la y_1\ra dy_1 \Big \|_{L^2_{y_2}}\les \| v(y_1) \la y_1\ra \|_{L^2}\les 1, $$ uniformly in $y$. Now consider $S_2v (R^{+}(H_0, \lambda^4)-G_0)$.
We have \begin{align*} [S_2 v(R^{+}(H_0, \lambda^4)-G_0)] (y_2, y)= \frac{1}{8 \pi \lambda} \int_{\R^3} S_2(y_2, y_1) v(y_1) F(\lambda |y-y_1|) dy_1 \end{align*} where $$ F (p) = \frac{ e^{ ip} - e^{-p}}{ p} + p.$$ Noting that $S_2v =0$ we can rewrite the integral above as \begin{multline*} \frac{1}{8 \pi \lambda} \int_{\R^3} S_2 (y_2,y_1) v(y_1) [F(\lambda |y-y_1|) - F(\lambda |y| ) ]dy_1\\ =\frac{1}{8 \pi } \int_{\R^3} S_2(y_2,y_1) v(y_1) \int_{|y|}^{| y-y_1|} F^{\prime} (\lambda s ) ds dy_1. \end{multline*} Furthermore, one has $ S_2y_jv =0$, and hence $$ \int_{\R^3} S_2 (y_2,y_1) v(y_1) \frac{y_1 \cdot y}{|y|} F^{\prime} (\lambda |y| ) dy_1= \int_{\R^3} S_2 (y_2,y_1) v(y_1) y_1 dy_1 \cdot \frac{ y }{|y|} F^{\prime} (\lambda |y| ) =0. $$ This gives \begin{multline} \int_{\R^3} S_2 (y_2,y_1) v(y_1) \int_{|y|}^{| y-y_1|} F^{\prime} (\lambda s ) ds dy_1 \\ = \int_{\R^3} S_2 (y_2,y_1) v(y_1) \Big[\int_{|y|}^{| y-y_1|} F^{\prime} (\lambda s ) ds + \frac{ y_1 \cdot y }{|y|} F^{\prime} (\lambda |y| ) \Big] dy_1 \\ = \int_{\R^3} S_2 (y_2,y_1) v(y_1) \Bigg[ \int_{|y|- \frac{ y_1 \cdot y}{|y|}}^{| y-y_1|} F^{\prime} (\lambda s ) ds - \int_{|y|- \frac{ y_1 \cdot y}{|y|}}^{|y|} F^{\prime} (\lambda s ) ds+ \int_{|y| - \frac{ y_1 \cdot y}{|y|}}^{ |y| } F^{\prime} (\lambda |y| ) ds \Bigg] dy_1 \\ \label{seperates} = \int_{\R^3} S_2 (y_2,y_1) v(y_1) \Bigg[\int_{|y|- \frac{ y_1 \cdot y}{|y|}}^{| y-y_1|} F^{\prime} (\lambda s ) ds + \lambda \int_{|y| - \frac{y_1 \cdot y}{|y|}}^{|y| } \int_{s}^{|y|} F^{\prime \prime} ( \lambda k) dk ds \Bigg] dy_1. \end{multline} To control the integrals in \eqref{seperates} notice that $| F^{(k)} (p) | \les p^{2-k} $ for $k= 1,2$. 
Therefore, for $|y| - \Big| \frac{y_1 \cdot y}{|y|} \Big| \geq 0$, we obtain \begin{align} \label{Ffirstests} \Big| \int_{|y| -\frac{y_1 \cdot y}{|y|}}^{|y-y_1|} F^{\prime} (\lambda s ) ds \Big| \les \lambda \Big|\int_{|y| -\frac{y_1 \cdot y}{|y|}}^{|y-y_1|} s ds\Big| \les \lambda \Big| |y-y_1|^2 - (|y| - \frac{y_1 \cdot y}{|y|})^2 \Big| \les \lambda \la y_1 \ra^{2} . \end{align} Note that if $|y| - \Big| \frac{y_1 \cdot y}{|y|} \Big| < 0$, one has $ |y| ,|y-y_1| < |y_1|$ and therefore the above inequality is trivial. For the second term in \eqref{seperates}, we have \begin{align} \label{Fsecondests} \int_{|y|-\frac{y_1 \cdot y}{|y|}}^{|y| } \int_{s}^{|y|} F^{\prime \prime} ( \lambda k) dk ds = \int_{|y|-\frac{y_1 \cdot y}{|y|}}^{|y| } [ k- |y| + \frac{y_1 \cdot y}{|y|} ] F^{\prime \prime} ( \lambda k) dk. \end{align} Noting that $| k- |y| + \frac{y_1 \cdot y}{|y|} | \les \la y_1 \ra$ and $| F^{\prime \prime}(\lambda k) | \les 1$, this term can be controlled by $ \la y_1 \ra^2$. Finally, by \eqref{Ffirstests} and \eqref{Fsecondests}, we obtain \begin{multline*} \big\| [S_2v (R^{+}(H_0, \lambda^4)-G_0)] (\cdot, y) \big\|_{L^2}\\ \les \lambda \Big\| \int_{\R^3} |S_2(y_2, y_1)| |v(y_1)|\la y_1\ra^2 dy_1 \Big \|_{L^2_{y_2}} \les \lambda \| v(y_1) \la y_1\ra^2 \|_{L^2}\les \lambda, \end{multline*} uniformly in $y$. To establish the bound on the first derivative, note that $$ \partial_{\lambda} F(\lambda r) = \frac{1}{\lambda} \Big[\frac{ [i(\lambda r)-1] e^{i(\lambda r)} + e^{-(\lambda r)} [(\lambda r)+1]}{(\lambda r)} + (\lambda r) \Big] =:\frac{1}{\lambda} \tilde{F}(\lambda r). $$ Since one has $| \tilde{F}^{(k)} (p) | \les p^{2-k}$, one can apply the same method to $\tilde{F}$ to finish the proof. The last assertion follows from noting that the bounds used on $S_2v (R^{\pm}(H_0, \lambda^4)-G_0)$ also apply to $S_2v (R^{+}(H_0, \lambda^4)-R^{-}(H_0, \lambda^4))$, see \eqref{eq:R0low} and the subsequent discussion.
\end{proof} We first consider the case when zero is regular ($S_1=0$) or when there is a resonance of the first kind ($S_1\neq 0$, $S_2=0$). \begin{theorem}\label{thm:firstkind} Assume that $v(x)\les \la x\ra^{-\frac52-}$ and $S_1=0$, or that $v(x)\les \la x\ra^{-\frac{7}2-}$ and $S_1\neq 0, S_2=0$. Then \begin{align}\label{eq:first low int} \sup_{x,y\in \mathbb R^3}\Bigg|\int_0^\infty e^{it\lambda^4} \lambda^3 \chi(\lambda) R_V^\pm (\lambda^4)(x,y) \, d\lambda \Bigg|\les \la t\ra^{-\frac34}. \end{align} \end{theorem} \begin{proof} Recall \eqref{resid}: $$R^{\pm}_V(\lambda^4)= R^{\pm}(H_0, \lambda^4) - R^{\pm}(H_0, \lambda^4)v (M^{\pm} (\lambda))^{-1} vR^{\pm}(H_0, \lambda^4). $$ We already obtained the required bound for the free term in Lemma~\ref{lem:free bound}. For the correction term, dropping the $\pm$ signs, the claim will follow from Lemma~\ref{lem:t-34bound} with \be\label{eq:mEregular} \mE(\lambda)(x,y)=\big[ R (H_0, \lambda^4) v (M (\lambda))^{-1} v R (H_0, \lambda^4)\big](x,y). \ee By Theorem~\ref{thm:Minvexp}, in the regular case we have $ M (\lambda)^{-1}= QD_0Q+\Gamma_1.$ In the case of a resonance of the first kind, we have $$ M (\lambda)^{-1}= Q\Gamma_{-1}Q+ Q\Gamma_0+\Gamma_0Q+\Gamma_1. $$ First consider the contribution of $\Gamma_1$ to \eqref{eq:mEregular}: $$ \big[ R (H_0, \lambda^4) v \Gamma_1 v R (H_0, \lambda^4)\big](x,y). $$ Note that, by \eqref{eq:freeO1}, we have \be\label{eq:freeO1L2} \| v R (H_0, \lambda^4)(\cdot, y)\|_{L^2}\les \frac1\lambda,\,\,\, \| \partial_\lambda v R (H_0, \lambda^4)(\cdot, y)\|_{L^2}\les \frac1{\lambda^2} \ee uniformly in $y$. Therefore, we estimate the contribution of the error term $\Gamma_1$ to $\mE(\lambda)(x,y)$ by $$ \lambda \| v R (H_0, \lambda^4)(\cdot, x)\|_{L^2}\| v R (H_0, \lambda^4)(\cdot, y)\|_{L^2}\les \frac1\lambda, $$ and its $\lambda$ derivative by $\frac1{\lambda^2}$. Hence, the claim follows from Lemma~\ref{lem:t-34bound} with $\alpha=1$.
Now, consider the contribution of $ Q\Gamma_{-1}Q$ to \eqref{eq:mEregular}: $$ \big[ R (H_0, \lambda^4) v Q\Gamma_{-1} Q v R (H_0, \lambda^4)\big](x,y). $$ Note that, by Lemma~\ref{lem:QvR}, we bound this term by $$ \|Q v R (H_0, \lambda^4)(\cdot, y)\|_{L^2} \|Q v R (H_0, \lambda^4)(\cdot, x)\|_{L^2} \| |\Gamma_{-1}| \|_{L^2\to L^2} \les \frac1\lambda $$ uniformly in $x,y$. Similarly, its $\lambda$-derivative is bounded by $\frac1{\lambda^2}$. Therefore, the claim follows from Lemma~\ref{lem:t-34bound}. The contributions of $Q\Gamma_0$ and $\Gamma_0Q$ can be bounded similarly by using Lemma~\ref{lem:QvR} on one side and \eqref{eq:freeO1L2} on the other side. \end{proof} \begin{theorem}\label{thm:second/thirdkind} Assume that $v(x)\les \la x\ra^{-\frac{11}2-}$. If $S_2 \neq 0$ and $S_3=0$, then \begin{align} \label{eq:secondkind} \sup_{x,y\in \mathbb R^3}\Bigg|\int_0^\infty e^{it\lambda^4} \lambda^3 \chi(\lambda) R_V^\pm (\lambda^4)(x,y) \, d\lambda -F^{\pm}(x,y)\Bigg|\les \la t\ra^{-\frac34}. \end{align} Here $F^\pm$ are time dependent finite rank operators satisfying $\|F^{\pm}\|_{L^1\to L^\infty}\les \la t\ra^{-\frac14}$. Moreover, if $v(x)\les \la x\ra^{-\frac{15}2-}$ and $S_3\neq 0$, then \begin{align}\label{eq:thirdkind} \sup_{x,y\in \mathbb R^3}\Bigg|\int_0^\infty e^{it\lambda^4} \lambda^3 \chi(\lambda) [R_V^{+} (\lambda^4) - R_V^{-} (\lambda^4)](x,y) \, d\lambda -G(x,y)\Bigg|\les \la t\ra^{-\frac12}, \end{align} where $G$ is a time dependent finite rank operator satisfying $\|G \|_{L^1\to L^\infty}\les \la t\ra^{-\frac14}$. \end{theorem} \begin{proof} We first prove \eqref{eq:secondkind}. By Theorem~\ref{thm:Minvexp}, in the case of a resonance of the second kind, we have $$[M^\pm(\lambda) ]^{-1} = S_2\Gamma_{-3} S_2+S_2\Gamma_{-2} Q+ Q\Gamma_{-2}S_2 + S_2\Gamma_{-1}+\Gamma_{-1}S_2 +Q \Gamma_{-1}Q +Q \Gamma_0+ \Gamma_0Q +\Gamma_1. $$ We only consider the contribution of $S_2\Gamma_{-3} S_2$ to \eqref{resid}; the others can be handled similarly.
Let $$ \mathcal E(\lambda,x,y)=\big[R ^{\pm}(H_0, \lambda^4) v S_2\Gamma_{-3} S_2 v R^{\pm} (H_0, \lambda^4)\big](x,y). $$ Note that, by Lemma~\ref{lem:QvR}, we have \begin{multline*} \mathcal E =G_0v S_2\Gamma_{-3} S_2 vG_0+G_0v S_2\Gamma_{-3} S_2 v(R^{\pm} (H_0, \lambda^4)-G_0)\\+(R^{\pm} (H_0, \lambda^4)-G_0)v S_2\Gamma_{-3} S_2 vG_0+O_1(\lambda^{-1}). \end{multline*} By Lemma~\ref{lem:t-34bound}, the contribution of the last term is $\les \la t\ra^{-\frac34 }$. Moreover, noting that $S_2v=0$, we have \begin{align*} \big\|[S_2v G_0] (\cdot,y)\big\|_{L^2} =\Big\| \int_{\R^3} S_2(\cdot,y_1) v(y_1) [|y-y_1| - |y|] dy_1\Big\|_{L^2}\les 1, \end{align*} since $|[|y-y_1| - |y|] | \les \la y_1 \ra$. Therefore, the first term is $O_1(\lambda^{-3})$, and by Lemma~\ref{lem:t-34bound} its contribution is $\les \la t\ra^{-\frac14}$. Also note that its contribution is finite rank since $S_2$ is. Similarly, the contributions of the second and third terms are $\les \la t\ra^{-\frac12}$ and finite rank. One can explicitly construct the operators $F^\pm(x,y)$ from the contribution of these operators to the Stone formula, \eqref{stone}. Next we prove \eqref{eq:thirdkind}. Note that all the terms in $[M(\lambda)]^{-1}$ in Theorem~\ref{thm:Minvexp} except $\lambda^{-4}D_3$ are similar to the terms of $[M(\lambda)]^{-1}$ that we considered in the case of a resonance of the second kind. Therefore, we only control the terms interacting with $D_3$; that is, we need to control the contribution of the following term to Stone's formula: \begin{multline*} [R^{+}(H_0, \lambda^4) - R^{-}(H_0, \lambda^4)] v \frac{D_3}{\lambda^4} v R^{+}(H_0, \lambda^4)\\ = [R^{+}(H_0, \lambda^4) - R^{-}(H_0, \lambda^4)] v \frac{D_3}{\lambda^4}v G_0 \\+ [R^{+}(H_0, \lambda^4) - R^{-}(H_0, \lambda^4)] v \frac{D_3}{\lambda^4} v [R^{+}(H_0, \lambda^4) -G_0].
\end{multline*} Using Lemma~\ref{lem:QvR}, the first term is $O_1(\lambda^{-3})$, and hence its contribution to Stone's formula is $\les \la t\ra^{-\frac14}$ by Lemma~\ref{lem:t-34bound}, and is finite rank. Similarly, the second term is $O_1(\lambda^{-2})$ and its contribution is $ \les \la t\ra^{-\frac12}$. $G(x,y)$ is obtained explicitly by inserting these operators in \eqref{stone}. \end{proof} We note that the time decay of the non-finite rank portion of the evolution when $S_3\neq0$ can be improved at the cost of spatial weights. \begin{corollary}\label{cor:wtd decay} If $v(x)\les \la x\ra^{-\frac{15}2-}$ and $S_3\neq 0$, then \begin{align}\label{eq:thirdkind2} \Bigg|\int_0^\infty e^{it\lambda^4} \lambda^3 \chi(\lambda) [R_V^{+} (\lambda^4) - R_V^{-} (\lambda^4)](x,y) \, d\lambda -G(x,y)\Bigg|\les \la t\ra^{-\frac34}\la x\ra^{\f52}\la y \ra^{\f52}, \end{align} where $G$ is a time dependent finite rank operator satisfying $\|G \|_{L^1\to L^\infty}\les \la t\ra^{-\frac14}$. \end{corollary} \begin{proof} We need only supply a new bound for the contribution of the following term: \begin{align} [R^{+}(H_0, \lambda^4) - R^{-}(H_0, \lambda^4)] v \frac{D_3}{\lambda^4} v [R^{+}(H_0, \lambda^4) -G_0]. \end{align} We note that $S_3v P_2=0$ for any quadratic polynomial $P_2(x)$ in the $x_j$ variables. Hence, $S_3vG_1=0$ as we may write $G_1(x,y)=|x|^2-2x\cdot y+|y|^2$. By truncating the expansion in \eqref{eq:R0low} earlier, we see that $$ [R^+(H_0,\lambda^4)-R^-(H_0,\lambda^4)]=\frac{a^+-a^-}{\lambda}+(a_1^+-a_1^-)\lambda G_1 +O((\lambda|x-y|)^\ell |x-y|) \qquad 1<\ell\leq 3. $$ Using the orthogonality relations above and selecting $\ell=\f32$, one can see that $$ [R^+(H_0,\lambda^4)-R^-(H_0,\lambda^4)](x, \cdot) vS_3= O_1(\lambda^{\f32}\la x \ra^{\f52}). $$ A very similar computation shows that $$ S_3v [R^{+}(H_0, \lambda^4) -G_0](\cdot, y)= O_1(\lambda^{\f32}\la y\ra^{\f52}).
$$ Combining these, we see that $$ [R^{+}(H_0, \lambda^4) - R^{-}(H_0, \lambda^4)] v \frac{D_3}{\lambda^4} v [R^{+}(H_0, \lambda^4) -G_0]= O_1(\lambda^{-1} \la x\ra^{\f52} \la y \ra^{\f52}). $$ Applying Lemma~\ref{lem:t-34bound} proves the claim. \end{proof} \section{The Perturbed Evolution For Large Energy } \label{sec:large} For completeness, we include a proof of the dispersive bound for the large energy portion of the evolution. Here, we need to assume that there are no eigenvalues embedded in $[0,\infty)$ for the perturbed fourth order operator $H=(-\Delta)^2+V$. It is known that embedded eigenvalues may exist even for compactly supported smooth potentials. To complete the proof of Theorem~\ref{thm:main}, we show \begin{prop} \label{prop:large} Let $|V(x) | \les \la x \ra^{-3-}$, and assume that there are no embedded eigenvalues in the continuous spectrum of $H$. Then \begin{align} \label{eq:large} \sup_{x,y\in \mathbb R^3}\Bigg|\int_0^\infty e^{it\lambda^4} \lambda^3 \widetilde{\chi}(\lambda) R_V^{\pm} (\lambda^4) (x,y) \, d\lambda \Bigg|\les | t |^{-\frac34}. \end{align} \end{prop} To prove Proposition~\ref{prop:large}, we use the resolvent identities and write \begin{align}\label{large symmetric} R_V(\lambda^4)= R^\pm(H_0, \lambda^4) - R^\pm(H_0, \lambda^4) VR^\pm (H_0, \lambda^4) + R^\pm(H_0, \lambda^4) VR_V(\lambda^4) V R^\pm (H_0, \lambda^4). \end{align} Recall that, by the second part of Remark~\ref{rmk:large}, the first summand in \eqref{large symmetric} satisfies the bound in \eqref{eq:large}. Therefore, it suffices to establish that the bound in Proposition~\ref{prop:large} is valid for the last two summands in \eqref{large symmetric}. Recall that, by \eqref{eq:freeO1}, we have \begin{align} \label{est1} R^\pm(H_0, \lambda^4) (x,y) =O_1(\lambda^{-1}).
\end{align} This, along with the fact that $\lambda\gtrsim 1$, shows that $$ R^\pm (H_0, \lambda^4)VR^\pm(H_0, \lambda^4)=O_1(\lambda^{-1}), $$ as the following bounds hold uniformly in $x,y$: \begin{align*} &|R^\pm (H_0, \lambda^4)VR^\pm(H_0, \lambda^4)(x,y)| \les \lambda^{-1} \int_{\R^3} |V (x_1)| dx_1\les \lambda^{-1}, \\ &| \partial_\lambda\{ R^\pm (H_0, \lambda^4)VR^\pm(H_0, \lambda^4)\}(x,y)| \les \lambda^{-2} \int_{\R^3} |V (x_1)| dx_1 \les \lambda^{-2}. \end{align*} Hence, by the first part of Remark~\ref{rmk:large}, $R^\pm VR^\pm$ contributes $|t|^{-\f34}$ to Stone's formula. We next consider the last term in \eqref{large symmetric}. To control this term, we utilize the following. \begin{theorem} \label{th:LAP}\cite[Theorem~2.23]{fsy} Let $|V(x)|\les \la x \ra ^{-k-1}$. Then, for any $\sigma>k+1/2$, $\partial_z^k R_V(z) \in \mathcal{B}(L^{2,\sigma}(\R^d), L^{2,-\sigma}(\R^d))$ is continuous for $z \notin \{0\}\cup \Sigma$. Further, \begin{align*} \|\partial_z^k R_V(z) \|_{L^{2,\sigma}(\R^d) \rar L^{2,-\sigma}(\R^d)} \les z^{-(3+3k)/4}. \end{align*} \end{theorem} The following suffices to finish the proof of Proposition~\ref{prop:large}.
\begin{lemma} Let $|V(x) | \les \la x \ra^{-3-}$. Then \begin{align*} \sup_{x,y\in \mathbb R^3} \Big| \int_{0}^{\infty} e^{-it\lambda^4} \widetilde\chi(\lambda) \lambda^{3} [R^\pm(H_0, \lambda^4) V R_V^\pm(\lambda^4) VR^\pm(H_0, \lambda^4)](x,y) d \lambda \Big| \les |t|^{-\f34}. \end{align*} \end{lemma} \begin{proof} Recalling the proof of Lemma~\ref{lem:t-34bound}, it suffices to establish \begin{align*} &\|R^\pm(H_0, \lambda^4) VR_V(\lambda^4) V R^\pm(H_0, \lambda^4) \|_{L^1 \rightarrow L^{\infty}} \les \lambda^{-1}, \\ & \| \partial_{\lambda} \{ R^\pm(H_0, \lambda^4) VR_V(\lambda^4) V R^\pm(H_0, \lambda^4) \} \|_{L^1 \rightarrow L^{\infty}} \les \lambda^{-2}. \end{align*} First note that, by \eqref{est1} and the embedding $L^\infty \subset L^{2,-\f32-}$, we have \begin{align} \| [R^\pm (H_0, \lambda^4)] \|_{ L^1 \rightarrow L^{2,-\sigma} }= O_1(\lambda^{-1}), \,\,\, \sigma>3/2, \end{align} along with the dual estimate as an operator from $L^{2,\sigma}$ to $L^\infty$. Hence, by Theorem~\ref{th:LAP}, we have the following estimate: \begin{align*} & \| [R^\pm (H_0, \lambda^4) VR_V(\lambda^4) V R^\pm (H_0, \lambda^4)] \|_{L^1 \rightarrow L^{\infty}} \\ &\les \|R^\pm (H_0, \lambda^4) \|_{ L^{2,\sigma} \rightarrow L^{\infty}} \| V R_V(\lambda^4) V \|_{ L^{2,- \sigma} \rightarrow L^{2,\sigma} } \| R^\pm (H_0, \lambda^4) \|_{ L^ {1} \rightarrow L^{2,-\sigma} } \les \lambda^{-1} \end{align*} for any $|V(x) | \les \la x \ra^{-2-}$. In fact, one can show that this term is smaller, though this bound is valid since $\lambda \gtrsim 1$. Similarly, by \eqref{est1} and Theorem~\ref{th:LAP} with $z=\lambda^4$, one obtains \begin{align*} \| \partial_{\lambda} \{R^\pm(H_0, \lambda^4) VR_V(\lambda^4) V R^\pm(H_0, \lambda^4) \}\|_{L^1 \rightarrow L^{\infty}} \les \lambda^{-2} \end{align*} for any $|V(x) | \les \la x \ra^{-3-}$. Here, we note that the extra decay on $V$ is needed when the derivative falls on the perturbed resolvent $R_V$ so that $V$ maps $L^{2,-\f32-}\to L^{2,\f32+}$.
\end{proof} \section{Classification of threshold spectral subspaces}\label{sec:classification} In this section we establish the relationship between the spectral subspaces $S_i L^2(\R^3)$ for $i=1,2,3$ and distributional solutions to $H\psi =0$. \begin{lemma}\label{lem:esa1} Assume $|v(x)| \les \la x \ra ^{-\frac{5}{2}-}$. If $\phi \in S_1 L^2(\R^3) \setminus \{0\} $, then $\phi= Uv \psi $ where $\psi \in L^{\infty}$, $H\psi=0$ in the distributional sense, and \be \label{eq:psi def} \psi= c_0 - G_0v \phi,\,\, \text{where} \,\,\ c_0= \frac{1}{ \| V\|_{L^1}} \la v,T \phi \ra . \ee \end{lemma} \begin{proof} Assume $\phi \in S_1 L^2(\R^3)$. Then one has $QTQ\phi=Q(U+vG_0v)\phi=0$. Note that \begin{align*} 0=Q(U+vG_0v)\phi & = (I-P)(U+vG_0 v)\phi \\ & = U\phi + vG_0v\phi - PT\phi \\ &\implies \phi = Uv ( -G_0v\phi + c_0)=Uv\psi\,\, \text{where} \,\, c_0= \frac{1}{ \| V\|_{L^1}} \la v,T \phi \ra. \end{align*} To show $ [\Delta^2+ V] \psi = [\Delta^2+ V]( -G_0v\phi + c_0)=0 $, notice that by differentiation under the integral sign $$\Delta^2 G_0v\phi=-\Delta\int \frac{1}{4\pi |x-y|} v(y)\phi(y)dy.$$ Since $ \frac{1}{4\pi |x-y|}$ is the Green's function for $-\Delta$, we have $ \Delta^2G_0 v\phi=v\phi$ in the sense of distributions. Hence, $$ [\Delta^2+ V] ( -G_0v\phi + c_0) = -v\phi + vUv\psi =0. $$ Next, we show that $G_0v \phi \in L^{\infty}$. Noting that $S_1\leq Q$, we have $P \phi =0$ and hence \begin{align}\label{psibounded} \Big|\int_{\R^3} |x-y| v(y) \phi(y) dy \Big| = \Big|\int_{\R^3} [ |x-y| -|x| ] v(y) \phi(y) dy \Big|\les \int_{\R^3} \la y \ra |v(y) \phi(y)| dy < \infty. \end{align} \end{proof} The following lemma gives further information on the function $\psi$ in Lemma~\ref{lem:esa1}. \begin{lemma} \label{lem:psiexp2} Let $|v(x)| \les \la x \ra^{-\frac{11}4- }$, and let $\phi= Uv \psi \in S_1 L^2(\R^3) \setminus \{0\}$ be as in Lemma~\ref{lem:esa1}.
Then, \be \label{eq:psi def2} \psi= c_0 +\sum_{i=1}^3c_i\frac{x_i}{\la x\ra}+ \sum_{1\leq i\leq j\leq 3} c_{ij} \frac{x_ix_j}{\la x\ra^3} + \widetilde\psi, \ee where $c_0= \frac{1}{ \| V\|_{L^1}} \la v,T \phi \ra$ and $\widetilde\psi\in L^2\cap L^\infty$. Moreover, $\psi \in L^p$ for $3<p \leq \infty$ if and only if $PT\phi=0$ and $\int y v(y)\phi(y)dy =0$. Furthermore, $\psi \in L^p$ for $2\leq p \leq \infty$ if and only if $PT\phi=0$, $\int y v(y)\phi(y)dy =0$, and $\int y_i y_j v(y)\phi(y)dy =0$, $1\leq i\leq j\leq 3$. \end{lemma} \begin{proof} Note that all the terms in the expansion and the function $\psi$ are in $L^\infty$; therefore, it suffices to prove the claim for $|x|>1$. Using Lemma~\ref{lem:esa1} and the fact that $P\phi=0$, we write \begin{multline*} \psi(x)-c_0 = -\frac{1}{8\pi}\int_{\R^3} |x-y| [v\phi](y) dy \\ =-\frac{1}{8\pi}\int_{\R^3} \Big( |x-y| - |x| + \frac{ x \cdot y} { |x| } - \frac{ |y|^2} { 2|x|} + \frac {(x \cdot y )^2}{2|x|^3} \Big) [v\phi](y) dy\\ +\frac{1}{8\pi}\int_{\R^3} \Big(\frac{ x \cdot y} { |x| } - \frac{ |y|^2} { 2|x|} + \frac {(x \cdot y )^2}{2|x|^3} \Big) [v\phi](y) dy=:\psi_1+\psi_2. \end{multline*} We first claim that $\psi_1\chi_{B(0,1)^c}\in L^2\cap L^\infty$. To prove this claim we first consider the case $ |y| < |x|/2 $.
In this case, by a Taylor expansion we have \begin{multline} |x-y|= |x| \Big( 1 - \frac{ x \cdot y} {|x|^2} + \frac{ |y|^2} { 2|x|^2} - \frac{1}{8} \Big(-\frac{ 2x \cdot y} { |x|^2} + \frac{ |y|^2} { |x|^2} \Big)^2 \Big) + O ( |y|^{3}/|x|^{2}) \\ = |x| - \frac{ x \cdot y} {|x|} + \frac{ |y|^2} { 2|x|} - \frac {(x \cdot y )^2}{2|x|^3} + O \Big( \frac{|y|^{{\frac 52}+}}{|x|^{{\frac 32}+}} \Big). \end{multline} Using this and the fact that $|v\phi|=|V\psi|\les v^2$ we have \begin{multline*} \Big|\int_{|y| < |x|/2} \Big( |x-y| - |x| + \frac{ x \cdot y} { |x| } - \frac{ |y|^2} { 2|x|} + \frac {(x \cdot y )^2}{2|x|^3} \Big) [v\phi](y) dy \Big| \\ \les \int_{|y| < |x|/2} \frac{|y|^{{\frac 5 2}+}}{ |x|^{{\frac 3 2}+}} \la y\ra^{-\frac{11}2-} dy \les |x|^{-\f32-} \int_{\R^3} \la y\ra^{-3-}\, dy \les |x|^{-{\frac 32}-}, \end{multline*} which belongs to $L^2\cap L^\infty$ on $B(0,1)^c$. In the case $ |y| > |x|/2$, we have \begin{multline*} \Big|\int_{|y| > |x|/2} \Big( |x-y| - |x| + \frac{ x \cdot y} { |x| } - \frac{ |y|^2} { 2|x|} + \frac {(x \cdot y )^2}{2|x|^3} \Big) [v\phi](y) dy\Big|\\ \les \int_{|y| > |x|/2} \Big( |y|+\frac{|y|^2}{|x|}\Big) |y|^{-\frac{11}2-} dy \les |x|^{-\frac32-}, \end{multline*} which yields the claim. Now note that for $|x|>1$ \begin{multline}\label{psi2exp} 8\pi \psi_2= \sum_{i=1}^3 \frac{ x_i} { |x| } \int_{\R^3} y_i[v\phi](y) dy - \frac{ 1} { 2|x|} \int_{\R^3} |y|^2 [v\phi](y) dy +\sum_{i,j=1}^3 \frac{x_ix_j}{2|x|^3}\int_{\R^3} y_i y_j [v\phi](y) dy.
\end{multline} This yields the expansion for $\psi$ since for $|x|>1$, $\frac{x_i}{|x|}-\frac{x_i}{\la x\ra}=O(|x|^{-2})$ and $\frac{x_ix_j}{|x|^3}-\frac{x_ix_j}{\la x\ra^3}=O(|x|^{-2} ).$ Noting that the second and third terms in \eqref{psi2exp} are in $L^p_{B(0,1)^c}$ for $3<p\leq \infty$, we see that $\psi\in L^p$, $3<p\leq \infty$, if and only if $$c_0+\frac1{8\pi}\sum_{i=1}^3 \frac{ x_i} { |x| } \int_{\R^3} y_i[v\phi](y) dy \in L^{3+}_{B(0,1)^c},$$ which is equivalent to $c_0=0$ and $\int_{\R^3} y_i[v\phi](y) dy=0,$ $i=1,2,3$. To obtain the final claim, we determine when $\psi\in L^2_{B(0,1)^c}$ by rewriting the last two terms in \eqref{psi2exp} as follows: $$ \frac{1}{2|x|^3}\Big( \sum_{i=1}^3\big(x_i^2-|x|^2\big) \int_{\R^3} y_i^2 [v\phi](y) dy +2\sum_{1\leq i<j\leq 3} x_ix_j \int_{\R^3} y_i y_j [v\phi](y) dy\Big). $$ Note that the term in the parentheses is a degree 2 polynomial in $x$, and hence cannot be in $L^2$ unless all coefficients are zero, which implies the final claim. \end{proof} The following lemma is the converse of Lemma~\ref{lem:esa1}. \begin{lemma} Let $|v(x)| \les \la x \ra ^{-\frac{11}4-}$. Assume that a nonzero function $ \psi\in L^\infty$ solves $H\psi=0$ in the sense of distributions. Then $\phi= Uv\psi \in S_1L^2$, and we have $\psi = c_0 - G_0 v \phi$, $c_0=\frac{1}{ \| V\|_{L^1}} \la v,T \phi \ra$. In particular, the expansion given in Lemma~\ref{lem:psiexp2} is valid. \end{lemma} \begin{proof} Let $ \psi\in L^\infty$ be a solution of $H\psi=0$, or equivalently $- \Delta^2 \psi = V \psi$. We first show that $\phi= Uv\psi\in QL^2$; that is, $$ \int_{\R^3} v(x) \phi(x) dx =0. $$ Note that $v\phi=V\psi \in L^1$. Let $\eta(x)$ be a smooth cutoff function with $\eta(x)=1$ for all $|x| \leq 1$. For $\delta>0$, let $\eta_\delta(x)=\eta(\delta x)$.
We have $$ |\la v \phi , \eta_\delta \ra|= |\la V\psi, \eta_\delta \ra| =| \la \Delta^2 \psi , \eta_\delta \ra| =| \la \psi , \Delta^2 \eta_\delta \ra| \leq \|\psi\|_{L^\infty} \| \Delta^2 \eta_\delta\|_{L^1} \les \delta. $$ Therefore, taking $\delta\to 0$ and using the dominated convergence theorem, we conclude that $\la v ,\phi \ra=0$. Moreover, letting $ \tilde{\psi} = \psi+ G_0v\phi$, by assumption and \eqref{psibounded} the function $ \tilde{\psi}$ is bounded and $\Delta^2 \tilde{\psi}=0$. By Liouville's theorem for biharmonic functions on $\R^n$, $ \tilde{\psi} =c $ for some constant $c$. This implies that $\psi= c- G_0v\phi$. Since $$ 0=H \psi=[ \Delta^2 +V] \psi = Vc-Uv(U+v G_0v)\phi \Rightarrow v^2c= vT\phi, $$ we have $c=c_0= \frac{1}{ \| V\|_{L^1}} \la v,T \phi \ra$. Lastly, notice that \begin{multline*} Q(U+vG_0v) Q \phi = Q(U+vG_0v) \phi = Q (U \phi + vG_0v \phi )\\ = Q (U \phi -v \psi +c_0v) =Q(c_0v) = 0, \end{multline*} hence $\phi \in S_1L^2$ as claimed. \end{proof} Let $T_1= S_1 TPTS_1 - \frac{\|V\|_1}{3 (8\pi)^2} S_1 v G_1 v S_1 $, and let $S_2$ be the Riesz projection onto the kernel of $T_1$. Moreover, let $S^\prime_2$ be the Riesz projection onto the kernel of $S_1 TPTS_1$ and $S^{\prime \prime} _2$ be the Riesz projection onto the kernel of $ S_1 v G_1 v S_1$. \begin{lemma} \label{lem:S_2} Let $|v(x)| \les \la x \ra ^{-{\frac {11}4}-}$. Then, $S_2 L^2 = S^\prime_2L^2 \cap S^{\prime \prime} _2 L^2$. Moreover, $\int yv(y)S_2\phi(y)dy=0$ and $PTS_2=QvG_1vS_2=S_2vG_1vQ=0$. Finally, $\phi=Uv\psi\in S_1 L^2$ belongs to $S_2L^2$ if and only if $\psi\in L^p$, $p>3$. \end{lemma} \begin{proof} It suffices to prove that $S_2 L^2 \subset S^\prime_2L^2 \cap S^{\prime \prime} _2 L^2$ since the reverse inclusion holds trivially. Let $\phi \in S_1 L^2$. We have \begin{align} \label{c_0} \la S_1T PT S_1\phi , \phi \ra = \la PT \phi, PT \phi \ra = \| PT \phi \|_2^2.
\end{align} On the other hand, since $S_1v=0$ and $x,y$ and $v$ are real, we have \begin{align} \la S_1v G_1 v S_1\phi , \phi \ra =& \int_{\R^6} \phi (x) v (x) |x-y|^2 \overline{v(y) \phi(y)} dy dx \nn \\ = & \int_{\R^6} \phi (x) v (x) [ |x|^2 - 2 x \cdot y + |y|^2 ] \overline{ v(y)\phi(y)} dy dx \nn\\ = & -2\int_{\R^6} \phi (x) v (x) x \cdot \overline{ y v(y)\phi(y)} dy dx = -2 \Big| \int_{\R^3} y v(y) \phi(y) dy \Big|^2. \label{S1vG1vS1} \end{align} Hence, if $\phi \in S_2 L^2$ then we have $$ 0 = \la T_1 \phi , \phi \ra = \la T PT \phi , \phi \ra - \frac{\|V\|_1}{3 (8\pi)^2} \la v G_1 v \phi , \phi \ra = \| PT \phi \|_2^2+\frac{2\|V\|_1}{3 (8\pi)^2} \Big| \int_{\R^3} y v(y) \phi(y) dy \Big|^2. $$ Therefore, $$ \| PT \phi \|_2 = \Big| \int_{\R^3} y v(y) \phi(y) dy \Big|=0, $$ which yields the claim. This also implies that $\int yv(y)S_2\phi(y)dy=0$, $PTS_2=0$, and $$ QvG_1vS_2\phi=-2Q v(x)x\cdot \int y v(y)S_2\phi(y)dy=0. $$ Finally, by Lemma~\ref{lem:psiexp2}, $\psi\in L^p$, $3<p\leq \infty$ if and only if $PT \phi=\int yv(y) \phi(y)dy=0$, which is equivalent to $\phi \in S_2 L^2$ by the argument above. \end{proof} Define $S_3$ as the projection onto the kernel of $T_2=S_2vG_3 vS_2+\frac{20}{3 } S_2vWvS_2$, where $W(x,y)=|x|^2|y|^2$. Note that the kernel of $G_3$ is \begin{align}\label{exp4-1} |x-y|^4= |x|^4 + |y|^4 - 4x \cdot y |y|^2 - 4 y \cdot x |x|^2 + 2 |x|^2 |y|^2 + 4 ( x \cdot y )^2. \end{align} Since $S_2x_jv=S_2v=0$, all but the final two terms contribute zero to $S_2 vG_3v S_2$. Therefore the kernel of $T_2$ (as an operator on $S_2L^2$) is \be\label{T2ker} T_2(x,y)=v(x)\big[\frac{26}{3 }|x|^2 |y|^2 + 4 ( x \cdot y )^2\big]v(y). \ee \begin{lemma}\label{lem:S_3} Let $|v(x)| \les \la x \ra ^{- 4-}$. Fix $\phi = Uv \psi \in S_2L^2$. Then $\phi\in S_3L^2 $ if and only if $\psi \in L^p $ for all $2 \leq p \leq \infty$. Moreover, the kernel of $T_2$ agrees with the kernel of $S_2vG_3vS_2$.
\end{lemma} \begin{proof} Using \eqref{T2ker}, for $\phi\in S_2L^2$ we have $$ \la T_2\phi,\phi\ra = \frac{26}{3 }\bigg|\int_{\R^3} |y|^2 v(y) \phi(y)\, dy \bigg|^2 + 4 \sum_{i,j=1}^3 \bigg| \int y_i y_j v(y) \phi(y)\, dy \bigg| ^2. $$ In particular, $T_2$ is positive semi-definite. Therefore, $\phi\in S_3L^2$ if and only if $\la T_2 \phi,\phi\ra=0$, which by the calculation above is equivalent to $\int y_i y_j v(y) \phi(y)\, dy=0$ for all $i, j$. The claim now follows from Lemma~\ref{lem:psiexp2}. The claim for $S_2 vG_3v S_2$ also follows from this since by the calculation before the lemma its kernel is $v(x)\big[2|x|^2 |y|^2 + 4 ( x \cdot y )^2\big]v(y)$. \end{proof} \begin{lemma} \label{invertible}Let $|v(x)| \les \la x \ra ^{- 4-}$. Then the kernel of the operator $S_3 v G_4 v S_3$ on $S_3L^2$ is trivial. \end{lemma} \begin{proof} Take $\phi$ in the kernel of $S_3 v G_4 v S_3$. Using \eqref{RH_0 rep}, we have (for $0<\lambda<1$) $$ R (H_0;- \lambda^4)=\frac1{2i\lambda^2}\big[R_0(i\lambda^2)-R_0(-i\lambda^2)\big]= \frac{e^{i\sqrt i \lambda|x-y|} -e^{i\sqrt{-i} \lambda|x-y|} }{8\pi i \lambda^2|x-y|}. $$ By an expansion similar to \eqref{eq:R0low} and the proof of Lemma~\ref{lem:M_exp}, we have for $0<\lambda<1$ and for all $|x-y|$, $$ R (H_0; -\lambda^4)= \frac{a_0}{\lambda} + G_0 + a_1 \lambda G_1 + a_3 \lambda^3 G_3 + a_4 \lambda^4 G_4 + O (|\lambda|^{4+} |x-y|^{5+}),$$ where $a_0,a_1,a_3,a_4\in \C$ are constants. Notice that since $\phi \in S_3L^2$ one has $\la v,\phi \ra = \la G_1 v \phi, v\phi \ra=\la G_3 v \phi, v\phi \ra =0$. Also note that since $v\phi=V\psi$, we have $$ \iint |x-y|^{5+} v(x)v(y)|\phi(x)\phi(y)|dx dy\les \iint |x-y|^{5+} \la x\ra^{-8-} \la y\ra^{-8-} dx dy<\infty.
$$ Therefore \begin{align} \label{G5toG0} 0&= \la S_3 v G_4 v \phi, \phi \ra = \la G_4 v \phi, v\phi \ra \\ &= \lim_{ \lambda \to 0 } \Big\la \frac{R (H_0; - \lambda^4)- a_0 \lambda^{-1} - G_0 - a_1 \lambda G_1- a_3 \lambda^3 G_3}{\lambda^4} v\phi, v\phi\Big\ra \nn \\ & = \lim_{ \lambda \to 0 } \Big\la \frac{R (H_0; -\lambda^4)- G_0}{\lambda^4} v\phi, v\phi \Big\ra \nn. \end{align} Further, recalling that $G_0=[ \Delta^2]^{-1}$ and considering the Fourier domain, one has \begin{multline} \label{G5toG0lim} 0= \lim_{ \lambda \to 0 } \Big\la \frac{R(H_0;- \lambda^4)- G_0}{\lambda^4}v\phi, v\phi \Big\ra \\ = \lim_{\lambda \to 0} \frac{1}{\lambda^4} \Big\la \Big( \frac{1}{ 8 \pi^2 \xi^4 + \lambda^4} - \frac{1}{ 8 \pi^2 \xi^4} \Big) \widehat{v\phi}(\xi), \widehat{v\phi}(\xi) \Big\ra \\ = \lim_{ \lambda \to 0} \int_{\R^3} \frac{-1}{(8 \pi^2 \xi^4 + \lambda^4) 8 \pi^2 \xi^4} |\widehat{v\phi}(\xi)|^2 d \xi = \frac{-1}{ 64 \pi^4} \int_{\R^3} \frac{|\widehat{v\phi}(\xi)|^2 }{ \xi^8} d \xi, \end{multline} where we used the monotone convergence theorem in the last step. Note that this gives $v\phi=0$ since $v\phi \in L^1$. Also noting that the support of $\phi=Uv\psi$ is a subset of the support of $v$, we have $\phi=0$. This establishes the invertibility of $S_3 v G_4 v S_3$ on $S_3L^2$. \end{proof} \begin{rmk} \label{G5toG0inuse} Notice that \eqref{G5toG0} and \eqref{G5toG0lim} imply that for any $\phi \in S_3L^2$ one has \begin{align*} \la S_3 v G_4 v \phi, \phi \ra = \frac{-1}{ 64 \pi^4} \int_{\R^3} \frac{ |\widehat{v\phi}(\xi)|^2 }{ \xi^8}\, d\xi = -\la G_0v \phi, G_0v \phi \ra \end{align*} provided $|v(x)| \les \la x \ra ^{- 4-}$. \end{rmk} \begin{lemma} The operator $ P_0:=G_0 v S_3 [ S_3 v G_4 v S_3]^{-1} S_3 v G_0$ is the orthogonal projection in $L^2$ onto the zero energy eigenspace of $H = \Delta^2 + V$.
\end{lemma} \begin{proof} Let $ \{ \phi_k \}_{k=1}^N $ be an orthonormal basis of $S_3L^2$, so that $S_3 f = \sum_{j=1}^N \phi_j \la f, \phi_j \ra $. Moreover, the functions $ \psi_k = - G_0 v \phi_k =-G_0V\psi_k$ are linearly independent and belong to $L^2$. We will show that $P_0 \psi_k = G_0 v S_3 [ S_3 v G_4 v S_3]^{-1} S_3 v G_0 \psi_k = \psi_k$ for all $1\leq k \leq N$. This implies that $P_0$ is the identity on the range of $P_0$. Since $P_0$ is self-adjoint, this finishes the proof. Let $\{A_{ij}\}_{i,j=1}^N$ be the matrix representation of $S_3 v G_4 v S_3$ with respect to the orthonormal basis $ \{ \phi_k \}_{k=1}^N $; then by Remark~\ref{G5toG0inuse} $$ A_{ij} = \la S_3 v G_4 v \phi_j, \phi_i \ra = -\la G_0 v \phi_j, G_0 v \phi_i \ra = -\la \psi_j, \psi_i \ra. $$ Also note that, by the representation of $S_3$, we have \be\label{s3vg0psik} S_3 v G_0 \psi_k = \sum^N_{j=1} \phi_j \la v G_0 \psi_k, \phi_j \ra = -\sum^N_{j=1} \phi_j \la \psi_k, \psi_j \ra = - \sum^N_{j=1} \phi_j A_{j k}. \ee By \eqref{s3vg0psik} we have \begin{multline*} P_0 \psi_k = -\sum^N_{j=1} G_0 v S_3 [ S_3 v G_4 v S_3]^{-1} \phi_j A_{jk} \\ = -\sum^N_{i,j=1} G_0 v S_3 (A^{-1})_{ij} \phi_i A_{jk} = \sum^N_{i,j=1} \psi_i (A^{-1})_{ij} A_{jk}= \sum^N_{i =1} \psi_i \delta_{ik}=\psi_k. \end{multline*} \end{proof} \begin{rmk}\label{rem:ort} One consequence of the preceding results is that any zero-energy resonance function is of the form $$ \psi(x)= c_0+ c_1 \frac{x_1}{\la x \ra }+ c_2 \frac{x_2}{\la x \ra } +c_3 \frac{x_3}{\la x \ra } + \sum_{1\leq i \leq j\leq 3} c_{ij} \frac{x_i x_j}{\la x \ra^{3}} + O_{L^2}(1) $$ for some constants $c_0,c_1, c_2, c_3$, and $c_{ij}$, $1\leq i\leq j \leq 3$. Hence, the resonance space is at most 10 dimensional, along with a finite-dimensional eigenspace. Moreover, $S_1-S_2 $ is at most four dimensional and $S_2-S_3$ is at most six dimensional; the rest is the eigenspace. \end{rmk}
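The dimension counts in the remark can be tallied from Lemmas~\ref{lem:S_2} and \ref{lem:S_3}: each projection difference is bounded by the number of moment conditions it imposes,

```latex
\begin{align*}
\dim\big((S_1-S_2)L^2\big) &\leq 1+3=4 && \big(PT\phi=0 \text{ and } \textstyle\int y\, v\phi\, dy=0\big),\\
\dim\big((S_2-S_3)L^2\big) &\leq 6 && \big(\textstyle\int y_iy_j\, v\phi\, dy=0,\ 1\leq i\leq j\leq 3\big),
\end{align*}
```

so that the resonance space, spanned modulo $L^2$ by the terms $c_0$, $c_i x_i/\la x\ra$, and $c_{ij} x_i x_j/\la x\ra^3$, has dimension at most $1+3+6=10$.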
\section{Introduction} There are a variety of popular and useful techniques for automatically summarizing the thematic content of a set of documents, including document clustering \citep{nigam00text} and latent semantic analysis \citep{LandDum97}. A somewhat more recent and general framework that has been developed in this context is latent Dirichlet allocation \citep{BleiNg03}, also referred to as statistical topic modeling \citep{GriffStey04}. The basic concept underlying statistical topic modeling is that each document is composed of a probability distribution over topics, where each topic is represented as a multinomial probability distribution over words. The document-topic and topic-word distributions are learned automatically from the data in an unsupervised manner with no human labeling or prior knowledge required. The underlying statistical framework of topic modeling enables a variety of interesting extensions to be developed in a systematic manner, such as author-topic models \citep{steyversat}, correlated topics \citep{BleiLafferty05}, and hierarchical topic models \citep{NCRPBlei07,Mimnohpam07}. \begin{table}[htdp] \begin{center} \begin{tabular*}{0.4\textwidth}{@{\extracolsep{\fill}} c||cc} Word & Topic A & Topic B\\ \hline database & 0.50 & 0.01 \\ query & 0.30 & 0.01 \\ algorithm & 0.18 & 0.08 \\ semantic & 0.01 & 0.40 \\ knowledge & 0.01 & 0.50 \\ \end{tabular*} \end{center} \caption{Toy example illustrating 2 topics, each with 5 words.} \label{tab:topics} \end{table} As an illustrative example, Table \ref{tab:topics} shows two example topics defined over a toy vocabulary with 5 words. Individual documents could then be modeled as coming entirely from topic A or from topic B, or more generally as a mixture (50-50, 70-30, 10-90, etc.) from the two topics.
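A toy calculation (ours, not the paper's code) makes the mixing concrete: under the topic-model assumption, the probability of a word in a document is the mixture-weighted average of its per-topic probabilities. A minimal Python sketch using the numbers from Table~\ref{tab:topics}:

```python
# Combining the two topics of Table 1 into per-word probabilities for a
# document: p(w|d) = p(w|A) p(A|d) + p(w|B) p(B|d).
topic_a = {"database": 0.50, "query": 0.30, "algorithm": 0.18,
           "semantic": 0.01, "knowledge": 0.01}
topic_b = {"database": 0.01, "query": 0.01, "algorithm": 0.08,
           "semantic": 0.40, "knowledge": 0.50}

def word_prob(word, mix_a):
    """Probability of `word` in a document with topic-A weight `mix_a`."""
    return topic_a[word] * mix_a + topic_b[word] * (1.0 - mix_a)

# A 50-50 document weights both topics equally:
print(round(word_prob("database", 0.5), 3))  # 0.255
```

Because each topic distribution sums to one, the mixture is again a valid distribution over the vocabulary for any choice of mixing weight.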
The topics learned by a topic model can be thought of as themes that are discovered from a corpus of documents, where the topic-word distributions ``focus'' on the high probability words that are relevant to a theme. An entirely different approach is to manually define semantic concepts using human knowledge and judgement. In the construction of ontologies and thesauri it is typically the case that for each concept a relatively small set of important words associated with the concept are defined based on prior knowledge. Concept names and sets of relations among concepts (for ontologies) are also often provided. \begin{table}[htdp] \begin{center} \begin{tabular*}{0.4\textwidth}{@{\extracolsep{\fill}} c||cc} \textsc{Family} Concept\hspace{1.5mm} & \multicolumn{2}{c}{\textsc{Family} Topic}\\ \hline beget & family &(0.208) \\ birthright & child &(0.171) \\ brood & parent &(0.073) \\ brother & young &(0.040) \\ children & boy &(0.028) \\ distantly & mother &(0.027) \\ dynastic & father &(0.021) \\ elder & school &(0.020) \\ \end{tabular*} \end{center} \caption{CALD \textsc{Family} concept and learned \textsc{Family} topic} \label{tab:family} \end{table} Concepts (as defined by humans) and topics (as learned from data) represent similar information but in different ways. As an example, the left column in Table~\ref{tab:family} lists some of the 204 words that have been manually defined as part of the concept \textsc{Family} in the Cambridge Advanced Learners Dictionary (more details on this set of concepts are provided later in the paper). The right column shows the high probability words for a learned topic, also about families. This topic was learned automatically from a text corpus using a statistical topic model. The numbers in parentheses are the probabilities that a word will be generated conditioned on the learned topic---these probabilities sum to 1 over the entire vocabulary of words, specifying a multinomial distribution.
The concept \textsc{Family} in effect puts probability mass 1 on the set of 204 words within the concept, and probability 0 on all other words. The topic multinomial on the other hand could be viewed as a ``soft" version of this idea, with non-zero probabilities for all words in the vocabulary---but significantly skewed, with most of the probability mass focused on a relatively small set of words. Human-defined concepts are likely to be more interpretable than topics and can be broader in coverage, e.g., by including words such as {\it beget} and {\it brood} in the concept \textsc{Family} in Table 1. Such relatively rare words will occur rarely (if at all) in a particular corpus and are thus far less likely to be learned by the topic model as being associated with the more common family words. Topics on the other hand have the advantage of being tuned to the themes in the particular corpus they are trained on. In addition, the probabilistic model that underlies the topic model allows one to automatically tag each word in a document with the topic most likely to have generated it. In contrast, there are no general techniques that we are aware of that can automatically tag words in a document with relevant concepts from an ontology or thesaurus. In this paper we propose a general framework for combining data-driven topics and semantic concepts, with the goal of taking advantage of the best features of both approaches. Section \ref{sec:datasets} describes the two large ontologies and the text corpus that we use as the basis for our experiments. We begin Section \ref{sec:ctm} by reviewing the basic principles of topic models and then introduce the concept-topic model which combines concepts and topics into a single probabilistic model. In Section \ref{sec:hctm} we then extend the framework to the hierarchical concept-topic model to take advantage of known hierarchical structure among concepts. 
In Section \ref{sec:illustrations} we discuss a number of examples that illustrate how the hierarchical concept-topic model works, showing for example how an ontology can be matched to a corpus and how documents can be tagged at the word-level with concepts from an ontology. Section \ref{sec:experiments} describes a series of experiments that evaluate the predictive performance of a number of different models, showing for example that prior knowledge of concept words and concept relations can lead to better topic-based language models. Sections \ref{sec:future} and \ref{sec:conclusions} conclude the paper with a brief discussion of future directions and final comments. In terms of related work, our approach builds on the general topic modeling framework of \cite{BleiNg03} and \cite{GriffStey04} and the hierarchical Pachinko models of \cite{Mimnohpam07}. Almost all earlier work on topic modeling is purely data-driven in that no human knowledge is used in learning the topic models. One exception is the work by \cite{IfrimTW-ICML2005}, who apply the aspect model \citep{Hofmann01} to model background knowledge in the form of concepts to improve text classification. Another exception is the work of \cite{BoydBlei07}, who develop a topic modeling framework that combines human-derived linguistic knowledge with unsupervised topic models for the purpose of word-sense disambiguation. Our framework is somewhat more general than both of these approaches in that we not only improve the quality of predictions on text data by using prior human concepts and concept hierarchies, but are also able to make inferences in the reverse direction about concept words and hierarchies given data.
There is also a significant amount of prior work on using data to help with ontology construction and evaluation, e.g., learning ontologies from text data (e.g., \cite{Maedche01}) or methodologies for evaluating how well ontologies are matched to specific text corpora ~\citep{BrewWilks04,AlanBrew06}. Our work is broader in scope in that we propose general-purpose probabilistic models that combine concepts and topics within a single framework, allowing us to use the data to make inferences about how documents and concepts are related (for example). It should be noted that in the work described in this paper we do not explicitly investigate techniques for modifying an ontology in a data-driven manner (e.g., adding/deleting words from concepts or relationships among concepts)---however, the framework we propose could certainly be used as a basis for exploring such ideas. \section{Text Data and Concept Sets } \label{sec:datasets} The experiments in this paper are based on one large text corpus and two different concept sets. For the text corpus, we used the Touchstone Applied Science Associates (TASA) dataset \citep{LandDum97}. This corpus consists of $D=37,651$ documents with passages excerpted from educational texts used in curricula from the first year of school to the first year of college. The documents are divided into 9 different educational genres. In this paper, we focus on the documents classified as \textsc{Science} and \textsc{Social Studies}, consisting of $D=5,356$ and $D=10,501$ documents and 1.7 Million and 3.4 Million word tokens respectively. For human-based concepts the first source we used was a thesaurus from the Cambridge Advanced Learner's Dictionary (CALD; http://www.cambridge.org/elt/dictionaries/cald.htm). CALD consists of $C=2,183$ hierarchically organized semantic categories. In contrast to other taxonomies such as WordNet \citep{fellbaum98}, CALD groups words primarily according to semantic topics with the topics hierarchically organized. 
The hierarchy starts with the concept \textsc{Everything} which splits into 17 concepts at the second level (e.g. \textsc{Science}, \textsc{Society}, \textsc{General/Abstract}, \textsc{Communication}, etc). The hierarchy has up to 7 levels. The concepts vary in the number of the words with a median of 54 words and a maximum of 3074. Each word can be a member of multiple concepts, especially if the word has multiple senses. The second source of concepts in our experiments was the Open Directory Project (ODP), a human-edited hierarchical directory of the web (available at http://www.dmoz.org). The ODP database contains descriptions and URLs on a large number of hierarchically organized topics. We extracted all the topics in the \textsc{Science} subtree, which consists of $C=10,817$ concept nodes after preprocessing. The top concept in this hierarchy starts with \textsc{Science} and divides into topics such as \textsc{Astronomy}, \textsc{Math}, \textsc{Physics}, etc. Each of these topics divides again into more specific topics with a maximum number of 11 levels. Each node in the hierarchy is associated with a set of URLs related to the topic plus a set of human-edited descriptions of the site content. To create a bag of words representation for each node, we collected all the words in the textual descriptions and also crawled the URLs associated with the node (a total of 78K sites). This led to a vector of word counts for each node. For both the concept sets, we propagate the words upwards in the concept tree so that an internal concept node is associated with its own words and all the words associated with its children. We created a single $W = 21,072$ word vocabulary based on the 3-way intersection between the vocabularies of TASA, CALD, and ODP. This vocabulary covers approximately 70\% of all of the word tokens in the TASA corpus and is the vocabulary that is used in all of the experiments reported in this paper. 
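The vocabulary construction just described amounts to a three-way set intersection followed by a token-coverage computation. A minimal sketch with made-up miniature word lists (the real TASA, CALD, and ODP vocabularies are of course far larger):

```python
from collections import Counter

def build_vocab(corpus_tokens, cald_words, odp_words):
    """Intersect the three vocabularies and report token coverage on the corpus."""
    vocab = set(corpus_tokens) & set(cald_words) & set(odp_words)
    counts = Counter(corpus_tokens)
    covered = sum(n for w, n in counts.items() if w in vocab)
    return vocab, covered / len(corpus_tokens)

# Hypothetical miniature corpus and concept word lists:
tasa = ["family", "child", "school", "beget", "school"]
cald = ["family", "child", "beget", "brood"]
odp  = ["family", "school", "child"]
vocab, cov = build_vocab(tasa, cald, odp)
print(sorted(vocab), round(cov, 2))  # ['child', 'family'] 0.4
```

On the real data this same computation yields the $W = 21{,}072$-word vocabulary and the roughly 70\% token coverage reported above.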
We also generated the same set of experimental results using the union of words in TASA, CALD, and ODP, and found the same general behavior as with the intersection vocabulary. We report the intersection results and omit the union results as they are essentially identical. A useful feature of using the intersection is that it allows us to evaluate two different sets of concepts (CALD and ODP) on a common data set (TASA) and vocabulary. \section{Combining Concepts and Topics } \label{sec:ctm} In this section, we describe the concept-topic model, detail its generative process, and give an illustrative example. We begin with a brief review of the topic model. \subsection{Topic Model} \label{subsec:tm} The topic model (or latent Dirichlet allocation model) is a statistical learning technique for extracting a set of topics that describe a collection of documents \citep{BleiNg03}. A topic $z$ is represented as a multinomial distribution over the $V$ unique words in a corpus, $p(w|z) = [p(w_1|z), ..., p(w_V|z)]$ such that $\sum_v p(w_v|z) = 1$. Therefore, a topic can be viewed as a $V$-sided die, and generating $n$ words from a topic is akin to throwing the topic-die $n$ times. There are a total of $T$ topics, and a document $d$ is represented as a multinomial distribution over those $T$ topics, $p(z|d)$, $1 \le z \le T$ and $\sum_z p(z|d) = 1$. Generating a word from a document $d$ involves first selecting a topic $z$ from the document-topic distribution $p(z|d)$ and then selecting a word from the topic distribution $p(w|z)$. This process is repeated for each word in the document. The conditional probability of a word in a document is given by \begin{equation} p(w|d) = \sum_{z} p(w|z) p(z|d). \label{eqn:mixture} \end{equation} Given the words in a corpus, the inference problem involves estimating the word-topic distributions $p(w|z)$ and the topic-document distributions $p(z|d)$ for the corpus.
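The generative view above (first draw a topic from $p(z|d)$, then a word from $p(w|z)$) can be sketched directly. The two-topic model below uses illustrative placeholder distributions, not learned values:

```python
import random

def generate_document(n_words, p_z_given_d, p_w_given_z, seed=0):
    """Draw a topic from p(z|d), then a word from p(w|z), n_words times."""
    rng = random.Random(seed)
    topics = list(p_z_given_d)
    words = []
    for _ in range(n_words):
        z = rng.choices(topics, weights=[p_z_given_d[t] for t in topics])[0]
        vocab = list(p_w_given_z[z])
        words.append(rng.choices(vocab,
                                 weights=[p_w_given_z[z][w] for w in vocab])[0])
    return words

# Hypothetical two-topic model:
p_z = {"A": 0.7, "B": 0.3}
p_w = {"A": {"database": 0.6, "query": 0.4},
       "B": {"semantic": 0.5, "knowledge": 0.5}}
doc = generate_document(10, p_z, p_w)
```

Inference runs this process in reverse: given only the generated words, it recovers plausible topic assignments and hence estimates of $p(w|z)$ and $p(z|d)$.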
For the standard topic model, collapsed Gibbs sampling has been successfully applied to do inference on large text collections in an unsupervised fashion \citep{GriffStey04}. Under this technique, words are initially assigned randomly to topics and the algorithm then iterates through each word in the corpus and samples a topic assignment given the topic assignments of all other words in the corpus. This process is repeated until a steady state is reached (e.g. the likelihood of the model on the corpus is not increasing with subsequent iterations) and the topic assignments to words are then used to estimate the word-topic $p(w|z)$ and topic-document $p(z|d)$ distributions. The topic model uses Dirichlet priors on the multinomial distributions $p(w|z)$ and $p(z|d)$. In this paper, we use a fixed symmetric prior on $p(w|z)$ word-topic distributions and optimize the asymmetric Dirichlet prior parameters on $p(z|d)$ topic-document distributions using fixed point update equations (as given in \cite{Minka00}). See Appendix A for more details on inference. \subsection{Concept-Topic Model} \label{subsec:ctm} \begin{figure*} \centering \includegraphics[keepaspectratio,width=5.7in]{tagging_tasa2.eps} \caption{Illustrative example of tagging a document excerpt using the concept model (CM) with concepts from CALD. \label{fig:caldtag}} \end{figure*} The concept-topic model is a simple extension to the topic model where we add $C$ concepts to the $T$ topics of the topic model, resulting in an effective set of $T+C$ ``topics'' for each document. Recall that a concept is represented as a set of words. The human-defined concepts only give us a membership function over words---either a word is a member of the concept or it is not. One straightforward way to incorporate concepts into the topic modeling framework is to convert them to ``topics'' by representing them as probability distributions over their associated word sets.
In other words, a concept $c$ can be represented by a multinomial distribution $p(w|c)$ such that $\sum_w p(w|c) = 1$ where $w \in c$ (therefore, $p(w|c) = 0$ for $w \notin c$). A document is now represented as a distribution over topics and concepts, $p(z|d)$ where $1 \le z \le T+C$. The conditional probability of a word $w$ given a document $d$ is, \begin{equation} p(w|d) = {\sum_{t=1}}^{T} p(w|t) p(t|d) + {\sum_{c=1}}^{C} p(w|c) p( T + c |d) \label{eqn:ctmmixture} \end{equation} The generative process for a document collection with $D$ documents under the concept-topic model is as follows: \begin{enumerate} \item For each topic $t \in \{1,...,T\}$, select a word distribution $\phi_t$ $\sim$ Dir($\beta_{\phi}$) \item For each concept $c \in \{1,...,C\}$, select a word distribution $\psi_c$ $\sim$ Dir($\beta_{\psi}$) \footnote{Note that $\psi_c$ is a constrained word distribution defined over only the words that are members of the human-defined concept $c$} \item For each document $d \in \{1,...,D\}$ \begin{enumerate} \item Select a distribution over topics and concepts $\theta_d$ $\sim$ Dir($\alpha$) \item For each word $w$ of document $d$ \begin{enumerate} \item Select a component $z$ $\sim$ Mult($\theta_d$) \item If $z \le T$ generate a word from topic $z$, $w$ $\sim$ Mult($\phi_{z}$); otherwise generate a word from concept $c = z-T$, $w$ $\sim$ Mult($\psi_{c}$) \end{enumerate} \end{enumerate} \end{enumerate} where $\phi_t$ represents the $p(w|t)$ word-topic distribution for topic $t$, $\psi_c$ represents the $p(w|c)$ word-concept distribution for concept $c$ and $\theta_d$ represents the $p(z|d)$ distribution over topics and concepts for document $d$. $\beta_{\phi}$, $\beta_{\psi}$ and $\alpha$ are the parameters of the Dirichlet priors for $\phi$, $\psi$ and $\theta$ respectively. Every element in the above process is unknown except for the words in the corpus and the membership of words in the human-defined concepts. 
Thus, the inference problem involves estimating the distributions $\phi$, $\psi$ and $\theta$ given the words in the corpus. The standard collapsed Gibbs sampling scheme previously used to do inference for the topic model can be modified to do inference for the concept-topic model. We also optimize the Dirichlet parameters using the fixed point updates from \cite{Minka00} after each Gibbs sampling sweep through the corpus. The topic model can be viewed as a special case of the concept-topic model when there are no concepts present, i.e. when $C = 0$. The other extreme of this model where $T = 0$, which we refer to as the concept model, is used for illustrative purposes. In our experiments, we refer to the topic model, concept model and the concept-topic model as TM, CM and CTM respectively. We note that the concept-topic model is not the only way to incorporate semantic concepts. For example, we could use the concept-word associations to build informative priors for the topic model and then allow the inference algorithm to learn word probabilities for all words (for each concept), given the prior and the data. We chose the current approach to exploit the sparsity in the concept-word associations (topics are distributions over all the words in the vocabulary but concepts are restricted to just their associated words). This allows us to easily do inference with tens of thousands of concepts on large document collections. A motivation for this approach is that there might be topics present in a corpus (that can be learned) that are not represented in the concept set. Similarly, there may be concepts that are either missing from the text corpus or are rare enough that they are not found in the data-driven topics of the topic model. This marriage of concepts and topics provides a simple way to augment concepts with topics and has the flexibility to mix and match topics and concepts to describe a document. 
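A minimal sketch of the collapsed Gibbs update used for inference (shown for the standard topic model; the concept-topic version enlarges the component set to $T+C$ and restricts concept components to their member words). The count-matrix names are ours and the hyperparameter values are illustrative:

```python
import numpy as np

def init_counts(docs, T, V, rng):
    """Randomly assign a topic to every token and build the count matrices."""
    ndt = np.zeros((len(docs), T))            # topic counts per document
    nwt = np.zeros((V, T))                    # topic counts per word type
    nt = np.zeros(T)                          # total tokens per topic
    z = []
    for d, doc in enumerate(docs):
        zd = [int(rng.integers(T)) for _ in doc]
        for w, t in zip(doc, zd):
            ndt[d, t] += 1; nwt[w, t] += 1; nt[t] += 1
        z.append(zd)
    return z, ndt, nwt, nt

def gibbs_sweep(docs, z, ndt, nwt, nt, alpha, beta, rng):
    """One sweep: resample each token's topic given all other assignments."""
    V, T = nwt.shape
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            t = z[d][i]
            ndt[d, t] -= 1; nwt[w, t] -= 1; nt[t] -= 1   # remove this token
            # p(t | rest) proportional to (n_wt + beta)/(n_t + V*beta) * (n_dt + alpha_t)
            p = (nwt[w] + beta) / (nt + V * beta) * (ndt[d] + alpha)
            t = int(rng.choice(T, p=p / p.sum()))
            z[d][i] = t
            ndt[d, t] += 1; nwt[w, t] += 1; nt[t] += 1   # add it back
```

After burn-in, $p(w|z)$ and $p(z|d)$ are estimated from the smoothed counts, e.g. $\hat{p}(w|t) \propto n_{wt} + \beta$.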
Figure \ref{fig:caldtag} illustrates concept assignments to individual words in a TASA document with CALD concepts, using the concept model (CM). The four most likely concepts are listed for this document. For each concept, the estimated probability distribution over words is shown next to the concept. In the document, words assigned to the four most likely concepts are tagged with letters a-d (and color coded if viewing in color). The words assigned to any other concept are tagged with ``o" and words outside the vocabulary are not tagged. In the concept model, the distributions over concepts within a document are highly skewed such that most probability goes to only a small number of concepts. In the example document, the four most likely concepts cover about 50\% of all words in the document. The figure illustrates that the model correctly disambiguates words that have several conceptual interpretations. For example, the word \textit{charged} has many different meanings and appears in 20 CALD concepts. In the example document, this word is assigned to the PHYSICS concept which is a reasonable interpretation in this document context. Similarly, the ambiguous words \textit{current} and \textit{flow} are correctly assigned to the ELECTRICITY concept. \section{Hierarchical Concept-Topic Model } \label{sec:hctm} Concepts are often arranged in a tree-structured hierarchy. While the concept-topic model provides a simple way to combine concepts and topics, it does not take into account the hierarchical structure of the concepts. In this section, we describe an extension, the hierarchical concept-topic model, that extends the concept-topic model to incorporate the hierarchical structure of the concept set. Similar to the concept-topic model described in the previous section, there are $T$ topics and $C$ concepts in the hierarchical concept-topic model. 
For each document $d$, we introduce a ``switch" distribution $p(x|d)$ which determines if a word should be generated via the topic route or the concept route. Every word token in the corpus is associated with a binary switch variable $x$. If $x$ = 0, the previously described standard topic mechanism of Section \ref{subsec:tm} is used to generate the word. That is, we first select a topic $t$ from a document-specific mixture of topics $p(t|d)$ and generate a word from the word distribution associated with topic $t$. If $x$ = 1, we generate the word from one of the $C$ concepts in the concept tree. To do that, we associate with each concept node $c$ in the concept tree a document-specific multinomial distribution with dimensionality equal to $N_c$ + 1, where $N_c$ is the number of children of the concept node $c$. This distribution allows us to traverse the concept tree and exit at any of the $C$ nodes in the tree --- given that we are at a concept node $c$, there are $N_c$ child concepts to choose from and an additional option to choose an ``exit" child to exit the concept tree at concept node $c$. We start our walk through the concept tree at the root node and select one of its children. We repeat this process until we reach an exit node, and the word is generated from the parent of the exit node. Note that for a concept tree with $C$ nodes, there are exactly $C$ distinct ways to select a path and exit the tree --- one for each concept.
In the hierarchical concept-topic model, a document is represented as a weighted combination of mixtures of $T$ topics and $C$ paths through the concept tree and the conditional probability of a word $w$ given a document $d$ is given by, \begin{eqnarray} p(w|d) & = & P(x=0|d) \sum_{t} p(w|t) p(t|d) \nonumber \\ & & +\, P(x=1|d) \sum_{c} p(w|c) p(c|d) \label{eqn:hctmmixture} \end{eqnarray} where $p(c|d) = p(\mathrm{exit}|c)\, p(c|\mathrm{parent}(c)) \cdots p(\,\cdot\,|\mathrm{root})$. \\\\ The generative process for a document collection with $D$ documents under the hierarchical concept-topic model is as follows: \begin{enumerate} \item For each topic $t \in \{1,...,T\}$, select a word distribution $\phi_t$ $\sim$ Dir($\beta_{\phi}$) \item For each concept $c \in \{1,...,C\}$, select a word distribution $\psi_c$ $\sim$ Dir($\beta_{\psi}$) \footnote{Note that $\psi_c$ is a constrained word distribution defined over only the words that are members of the human-defined concept $c$} \item For each document $d \in \{1,...,D\}$ \begin{enumerate} \item Select a switch distribution $\xi_d$ $\sim$ Beta($\gamma$) \item Select a distribution over topics $\theta_d$ $\sim$ Dir($\alpha$) \item For each concept $c \in \{1,...,C\}$ \begin{enumerate} \item Select a distribution over children of $c$, $\zeta_{cd}$ $\sim$ Dir($\tau_c$) \end{enumerate} \item For each word $w$ of document $d$ \begin{enumerate} \item Select a binary switch variable $x$ $\sim$ Bernoulli($\xi_d$) \item If $x$ = 0 \begin{enumerate} \item Select a topic $z$ $\sim$ Mult($\theta_d$) \item Generate a word from topic $z$, $w$ $\sim$ Mult($\phi_{z}$) \end{enumerate} \item Otherwise, create a path starting at the root concept node, $\lambda_1$ = 1 \begin{enumerate} \item Repeat \\ {\small Select a child of node $\lambda_j$, $\lambda_{j+1}$ $\sim$ Mult($\zeta_{{\lambda_j}{d}}$)} \\ Until $\lambda_{j+1}$ is an exit node \item Generate a word from concept $c$ = $\lambda_{j}$, $w$ $\sim$ Mult($\psi_{c}$); set $z$ to $T + c$
\end{enumerate} \end{enumerate} \end{enumerate} \end{enumerate} where $\phi_t$, $\psi_c$, $\beta_{\phi}$ and $\beta_{\psi}$ are analogous to the corresponding symbols in the concept-topic model described in the previous section. $\xi_{d}$ represents the $p(x|d)$ switch distribution for document $d$, $\theta_d$ represents the $p(t|d)$ distribution over topics for document $d$, $\zeta_{cd}$ represents the multinomial distribution over children of concept node $c$ for document $d$ and $\gamma$, $\alpha$, $\tau_c$ are the parameters of the priors on $\xi_d$, $\theta_d$, $\zeta_{cd}$ respectively. As before, all elements above are unknown except words and the word-concept memberships in the generative process. Details of the inference technique based on collapsed Gibbs sampling \citep{GriffStey04} and fixed point update equations to optimize the Dirichlet parameters \citep{Minka00} are provided in Appendix A. The generative process above is quite flexible and can handle any directed-acyclic concept graph. The model cannot, however, handle cycles in the concept structure as the walk of the concept graph starting at the root node is not guaranteed to terminate at an exit node. The word generation mechanism via the concept route in the hierarchical concept-topic model is related to the Hierarchical Pachinko Allocation model 2 (HPAM 2) as described in \cite{Mimnohpam07}. In the HPAM 2 model, topics are arranged in a 3-level hierarchy with root, super-topics and sub-topics at levels 1,2 and 3 respectively and words are generated by traversing the topic hierarchy and exiting at a specific level and node. In our model, we use a similar mechanism but only for word generation via the concept route. There is additional machinery in our model to incorporate $T$ data-driven topics (in addition to the hierarchy of concepts) and a switching mechanism to choose the word generation process via the concept route or the topic route. 
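The walk through the concept tree described above can be sketched as follows (a hypothetical illustration; `children` and `zeta` stand in for the tree structure and the document-specific child distributions $\zeta_{cd}$):

```python
import numpy as np

def walk_concept_tree(children, zeta, rng, root=0):
    """Walk the concept tree from the root until an 'exit' child is drawn.

    children[c] : list of child concept ids of node c
    zeta[c]     : distribution over len(children[c]) + 1 outcomes; the last
                  outcome is the 'exit' child, which ends the walk at c.
    Returns the concept at which the walk exited; the word is then drawn
    from that concept's constrained word distribution.
    """
    node = root
    while True:
        k = int(rng.choice(len(zeta[node]), p=zeta[node]))
        if k == len(children[node]):     # chose the exit child
            return node
        node = children[node][k]         # descend into the chosen child
```

Since every node has exactly one exit child, each of the $C$ concepts corresponds to exactly one path through the tree, matching the text.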
In our experiments, we refer to the hierarchical concept-topic model as HCTM and the version of the model without topics, which we use for illustrative purposes, as HCM. Note that the models we described earlier in Section \ref{sec:ctm} (CM, CTM, etc.) ignore any hierarchical information. There are several advantages of modeling the concept hierarchy. First, we learn the correlations between the children of a concept via its Dirichlet parameters ($\tau_c$ in the generative process). This enables the model to a priori prefer certain paths in the concept hierarchy given a new document. For example, when trained on scientific documents the model can automatically adjust the Dirichlet parameters to give more weight to the child node ``science" of root than, say, to node ``society". We experimentally investigate this aspect of the model by comparing HCM with CM (more details later). Second, by selecting a path along the concept hierarchy, the learning algorithm of the hierarchical model also reinforces the probability of the other concept nodes that lie along the path. This is desirable since we expect the concepts to be arranged in the hierarchy by their ``semantic proximity". We measured the average minimum path length of 5 high-probability concept nodes for 1000 randomly selected science documents from the TASA corpus for both HCM and CM using the CALD concept set. HCM has an average value of 3.92 and CM has an average value of 4.09; the difference across the 1000 documents is significant under a t-test at the 0.05 level. This result indicates that the hierarchical model prefers semantically similar concepts to describe documents. We show some illustrative examples in the next section to demonstrate the usefulness of the hierarchical model.
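As an illustration of the path-length computation underlying this comparison (our own sketch; the exact metric used in the paper may differ in detail), the tree distance between two concept nodes can be computed from parent pointers via their lowest common ancestor:

```python
def tree_distance(u, v, parent):
    """Number of edges between nodes u and v in a rooted tree,
    given a parent-pointer map (parent[root] is None)."""
    depth_from_u = {}
    n, d = u, 0
    while n is not None:                 # record u and all of its ancestors
        depth_from_u[n] = d
        n, d = parent[n], d + 1
    d = 0
    while v not in depth_from_u:         # climb from v to the lowest common ancestor
        v, d = parent[v], d + 1
    return d + depth_from_u[v]
```

Averaging such distances over the top concepts of a document gives a measure of how semantically clustered the preferred concepts are.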
\section{Illustrative Examples } \label{sec:illustrations} \begin{figure*} \centering \includegraphics[keepaspectratio,width=5.7in]{alpha_hpam_science_words2.eps} \caption{Illustrative example of marginal concept distributions from the hierarchical concept model learned on science documents using CALD concepts. \label{fig:caldmargconcepts}} \end{figure*} \begin{figure*} \centering \includegraphics[keepaspectratio,width=5.7in]{tagging_example_cald_odp1.eps} \caption{Example of a single TASA document from the science genre (a). The concept distribution inferred by the hierarchical concept model using the CALD concepts (b) and the ODP concepts (c). \label{fig:taggingexample1}} \end{figure*} In this section, we provide two illustrative examples from the hierarchical concept model trained on the science genre of the TASA document set. Figure \ref{fig:caldmargconcepts} shows the 20 highest probability concepts (along with the ancestors of those nodes) for a random subset of 200 documents. The concepts are from the CALD concept set. For each concept, the name of the concept is shown in all caps and the number represents the marginal probability for the concept. The marginal probability is computed based on the product of probabilities along the path of reaching the node as well as the probability of exiting at the node and producing the word, marginalized (averaged) across 200 documents. Many of the most likely concepts as inferred by the model relate to specific science concepts (e.g. \textsc{Geography}, \textsc{Astronomy}, \textsc{Chemistry}, etc.). These concepts all fall under the general \textsc{Science} concept which is also one of the most likely concepts for this document collection. Therefore, the model is able to summarize the semantic themes in a set of documents at multiple levels of granularity. The figure also shows the 5 most likely words associated with each concept. 
In the original CALD concept set, each concept consists of a set of words and no knowledge is provided about the prominence, frequency or representativeness of words within the concept. In the hierarchical concept model, for each concept a distribution over words is inferred that is tuned to the specific collection of documents. For example, for the concept \textsc{Astronomy} (second from left, bottom row), the word ``planet" receives much higher probability than the word ``saturn" or ``equinox" (not shown), all of which are members of the concept. This highlights the ability of the model to adapt to variations in word usage across document collections. Figure \ref{fig:taggingexample1} shows the result of inferring the hierarchical concept mixture for an individual document using both the CALD and the ODP concept sets (Figures \ref{fig:taggingexample1}(b) and \ref{fig:taggingexample1}(c) respectively). For the hierarchy visualization, we selected the 8 concepts with the highest probability and included all ancestors of these concepts when visualizing the tree. This illustration shows that the model is able to give interpretable results for an individual document at multiple levels of granularity. For example, the CALD subtree (Figure \ref{fig:taggingexample1}(b)) highlights the specific semantic themes of \textsc{Forestry}, \textsc{Light}, and \textsc{Plant Anatomy} along with the more general themes of \textsc{Science} and \textsc{Life and Death}. For the ODP concept set (Figure \ref{fig:taggingexample1}(c)), the likely concepts focus specifically on \textsc{Canopy Research}, \textsc{Coniferophyta} and more general themes such as \textsc{Ecology} and \textsc{Flora and Fauna}. This shows that different concept sets can each produce interpretable and useful document summaries focusing on different aspects of the document. 
\section{Experiments} \label{sec:experiments} We assess the predictive performance of the topic model, concept-topic model and the hierarchical concept-topic model by comparing their perplexity on unseen words in test documents using concepts from CALD and ODP. Perplexity is a quantitative measure to compare language models \citep{Brown92} and is widely used to compare the predictive performance of topic models (e.g. \cite{BleiNg03,GriffStey04,Chemudugunta07,NCRPBlei07}). While perplexity does not necessarily directly measure aspects of a model such as interpretability or coverage, it is nonetheless a useful general predictive metric for assessing the quality of a language model. In simulated experiments (not described in this paper) where we swap word pairs randomly across concepts to gradually introduce noise, we found a positive correlation of the quality of concepts with perplexity. In the experiments below, we randomly split documents from science and social studies genres into disjoint train and test sets with 90\% of the documents included in the train set and the remaining 10\% in the test set. This resulted in training and test sets with $D_{train}$ = 4,820 and $D_{test}$ = 536 documents for the science genre and $D_{train}$ = 9,450 and $D_{test}$ = 1,051 documents for the social studies genre respectively. \subsection{Perplexity} Perplexity is equivalent to the inverse of the geometric mean of per-word likelihood of the heldout data. It can be interpreted as being proportional to the distance (cross entropy to be precise) between the word distribution learned by a model and the word distribution in an unobserved test document. Lower perplexity scores indicate that the model-predicted distribution of heldout data is closer to the true distribution. More details about the perplexity computation are provided in Appendix B.
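A minimal sketch of the perplexity computation (the exponentiated negative average log-likelihood per held-out token; the variable names are illustrative):

```python
import numpy as np

def perplexity(test_docs, p_w_given_d):
    """Perplexity = exp(-(1/N) * sum over held-out tokens of log p(w|d))."""
    log_lik, n_tokens = 0.0, 0
    for d, doc in enumerate(test_docs):
        for w in doc:
            log_lik += np.log(p_w_given_d[d][w])
            n_tokens += 1
    return float(np.exp(-log_lik / n_tokens))
```

A model that assigned uniform probability over a vocabulary of size $V$ would score a perplexity of exactly $V$, which is why lower values indicate a better fit.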
For each test document, we use a random 50\% of words of the document to estimate document-specific distributions and measure perplexity on the remaining 50\% of words using the estimated distributions. \subsection{Perplexity Comparison across Models} \begin{figure} \centering \subfigure{\includegraphics[keepaspectratio,width=3.5in]{figs/combpplextrain5test5} \label{subfig:combpplextrain5test5}}\hspace{-0.2cm} \subfigure{\includegraphics[keepaspectratio,width=3.5in]{figs/combpplextrain5test6} \label{subfig:combpplextrain5test6}} \caption{\small\label{fig:combpplextrain5}\small{Comparing perplexity for TM, CTM and HCTM using training documents from science and testing on science (top) and social studies (bottom) as a function of number of topics}} \end{figure} \begin{figure} \centering \subfigure{\includegraphics[keepaspectratio,width=3.5in]{figs/combpplextrain6test6} \label{subfig:combpplextrain6test6}}\hspace{-0.2cm} \subfigure{\includegraphics[keepaspectratio,width=3.5in]{figs/combpplextrain6test5} \label{subfig:combpplextrain6test5}} \caption{\small\label{fig:combpplextrain6}\small{Comparing perplexity for TM, CTM and HCTM using training documents from social studies and testing on social studies (top) and science (bottom) as a function of number of topics}} \end{figure} We compare the perplexity of the topic model (TM), concept-topic model (CTM) and the hierarchical concept-topic model (HCTM) trained on document sets from the science and social studies genres of the TASA collection and using concepts from CALD and ODP concept sets. For the models using concepts, we indicate the concept set used by appending the name of the concept set to the model name, e.g. HCTM-CALD to indicate that HCTM was trained using concepts from the CALD concept set.
Figure \ref{fig:combpplextrain5} shows the perplexity of TM, CTM and HCTM using training documents from the science genre in TASA and testing on documents from the science (top) and social studies (bottom) genres in TASA respectively as a function of number of data-driven topics $T$. The point $T$ = 0 indicates that there are no topics used in the model, e.g. for HCTM this point refers to HCM. The results clearly indicate that incorporating concepts greatly improves the perplexity of the models. This difference is even more significant when the model is trained on one genre of documents and tested on documents from a different genre (e.g. see bottom plot of Figure \ref{fig:combpplextrain5}), indicating that the models using concepts are robust and can handle noise. TM, on the other hand, is completely data-driven and does not use any human knowledge, so it is not as robust. One important point to note is that this improved performance by the concept models is not due to the high number of effective topics ($T+C$). In fact, even with $T$ = 2,000 topics TM does not improve its perplexity and even shows signs of deterioration in quality in some cases. In contrast, CTM-ODP and HCTM-ODP, using over 10,000 effective topics, are able to achieve significantly lower perplexity than TM. The corresponding plots for models using training documents from social studies genre in TASA and testing on documents from the social studies (top) and science (bottom) genres in TASA respectively are shown in Figure \ref{fig:combpplextrain6} with similar qualitative results as in Figure \ref{fig:combpplextrain5}. CALD and ODP concept sets mainly contain science-related concepts and do not contain many social studies related concepts. This is reflected in the results where the perplexity values between TM and CTM/HCTM trained on documents from the social studies genre are relatively closer (e.g. as shown in the top plot of Figure \ref{fig:combpplextrain6}. 
This is, of course, not true for the bottom plot as in this case TM again suffers due to the disparity in themes in train and test documents). Figures \ref{fig:combpplextrain5} and \ref{fig:combpplextrain6} also allow us to compare the advantages of modeling the hierarchy of the concept sets. In both these figures when $T = 0$, the performance of HCTM is always better than the performance of CTM for all cases and for both concept sets. This effect can be attributed to modeling the correlations of the child concept nodes. Note that the one-to-one comparison of concept models with and without the hierarchy to assess the utility of modeling the hierarchy is not straightforward when $T > 0$ because of the differences in the ways the models mix with data-driven topics (e.g. CTM could choose to generate 30\% of words using topics whereas HCTM may choose a different fraction). \begin{figure} \centering \subfigure{\includegraphics[keepaspectratio,width=3.5in]{figs/percentpplexTrain5Test5} \label{subfig:percentpplextrain5test5}}\hspace{-0.2cm} \subfigure{\includegraphics[keepaspectratio,width=3.5in]{figs/percentpplexTrain5Test6} \label{subfig:percentpplextrain5test6}} \caption{\small\label{fig:percentpplextrain5}\small{Comparing perplexity for TM, CTM and HCTM using training documents from science and testing on science (top) and social studies (bottom) as a function of percentage of training documents}} \end{figure} \begin{figure} \centering \subfigure{\includegraphics[keepaspectratio,width=3.5in]{figs/percentpplexTrain6Test6} \label{subfig:percentpplextrain6test6}}\hspace{-0.2cm} \subfigure{\includegraphics[keepaspectratio,width=3.5in]{figs/percentpplexTrain6Test5} \label{subfig:percentpplextrain6test5}} \caption{\small\label{fig:percentpplextrain6}\small{Comparing perplexity for TM, CTM and HCTM using training documents from social studies and testing on social studies (top) and science (bottom) as a function of percentage of training documents}} \end{figure} We next look
at the effect of varying the amount of training data for all models. Figure \ref{fig:percentpplextrain5} shows the perplexity of the models as a function of varying amount of training data using documents from the science genre in TASA for training and testing on documents from the science (top) and social studies (bottom) genres respectively. Figure \ref{fig:percentpplextrain6} shows the corresponding plots for models using training documents from the social studies genre in TASA and testing on documents from the social studies (top) and science (bottom) genres in TASA respectively. In both these figures when there is insufficient training data, the models using concepts significantly outperform the topic model. Among the concept models, HCTM consistently outperforms CTM. Both the concept models take advantage of the restricted word associations used for modeling the concepts that are manually selected on the basis of the semantic similarity of the words. That is, CTM and HCTM make use of prior human knowledge in the form of concepts and the hierarchical structure of concepts (in the case of HCTM) whereas TM relies solely on the training data to learn topics. Prior knowledge is very important when there is insufficient training data (e.g. in the extreme case where there is no training data available, topics of TM will just be uniform distributions and will not perform well for prediction tasks. Concepts, on the other hand, can still use their restricted word associations to make reasonable predictions). This effect is more pronounced when we train on one genre of documents and test on a different genre (bottom plots in both Figures \ref{fig:percentpplextrain5} and \ref{fig:percentpplextrain6}), i.e. prior knowledge becomes even more important for this case. The gap between the concept models and the topic model narrows as we increase the amount of training data. Even at the 100\% training data point, CTM and HCTM have lower perplexity values than TM.
\section{Future Directions } \label{sec:future} There are several potentially useful directions in which the hierarchical concept-topic model can be extended. One interesting extension to try is to substitute the Dirichlet prior on the concepts with a Dirichlet Process prior. Under this variation, each concept will now have a potentially infinite number of children, a finite number of which are observed at any given instance (e.g. see \cite{TehJorBea2006}). When we do a random walk through the concept hierarchy to generate a word, we now have an additional option to create a child topic and generate a word from that topic. There would be no need for the switching mechanism as data-driven topics are now part of the concept hierarchy. This model would allow us to add new topics to an existing concept set hierarchy and could potentially be useful in building a recommender system for updating concept ontologies. An alternative direction to pursue would be to introduce additional machinery in the generative model to handle different aspects of transitions through the concept hierarchy. In HCTM, we currently learn one set of path correlations for the entire corpus (captured by the Dirichlet parameters $\tau$ in HCTM). It would be interesting to introduce another latent variable to model multiple path correlations. Under this extension, documents from different genres can learn different path correlations (similar to \cite{BoydBlei07}). For example, scientific documents could prefer to utilize paths involving scientific concepts and humanities documents could prefer to utilize a different set of path correlations when they are modeled together. This model would also be able to make use of class labels of documents if available. Other potential future directions involve modeling multiple corpora and multiple concept sets and so forth.
\section{Conclusions} \label{sec:conclusions} We have proposed a probabilistic framework for combining data-driven topics and semantically-rich human-defined concepts. We first introduced the concept-topic model, which is a straightforward extension of the topic model, to utilize semantic concepts in the topic modeling framework. This model represents documents as a mixture of topics and concepts thereby allowing us to describe documents using the semantically rich concepts. We further extended this model with the hierarchical concept-topic model where we incorporate the concept-set hierarchy into the generative model by modeling the parent-child relationship in the concept hierarchy. Experimental results, using two document collections and two concept sets with approximately 2,000 and 10,000 concepts, indicate that using the semantic concepts significantly improves the quality of the resulting language models. This improvement is more pronounced when the training documents and test documents belong to different genres. Modeling concepts and their associated hierarchies appears to be particularly useful when there is limited training data --- the hierarchical concept-topic model has the best predictive performance overall in this regime. We view the current set of models as a starting point for exploring more expressive generative models that can potentially have wide-ranging applications, particularly in areas of document modeling and tagging, ontology modeling and refining, information retrieval, and so forth. \section*{Acknowledgements} The work of Chaitanya Chemudugunta, Padhraic Smyth, and Mark Steyvers was supported in part by the National Science Foundation under Award Number IIS-0082489. The work of Padhraic Smyth was also supported by a Research Award from Google. \bibliographystyle{plainnat}
\section{Introduction} Searching for exotic hadrons beyond the conventional $q\bar{q}$-meson and $qqq$-baryon pictures is a highly meaningful topic in hadronic physics because such states may contain more abundant low-energy strong-interaction information than ordinary hadrons. A large number of new hadron states have been observed since the BELLE collaboration's discovery of the charmonium-associated state $X(3872)$ in 2003~\cite{review}. Many of the new hadron states cannot be accommodated in the conventional $Q\bar{Q}$-meson framework, such as the charged $Z_c^+$ states, which must have a smallest quark component $c\bar{c}u\bar{d}$ because they carry one unit of charge. Tetraquark states $Q\bar{Q}q\bar{q}$ have therefore attracted much attention from theoretical physicists seeking to describe the internal structure of the new hadron states~\cite{review}. Most of the new hadron states can be accommodated in the tetraquark picture simply by matching their experimental data with model values for $Q\bar{Q}q\bar{q}$ states~\cite{charmed,referee}. Even so, none of the new hadron states observed so far lies below its threshold, and each can therefore decay into $Q\bar{Q}$ and $q\bar{q}$ via strong interactions~\cite{charmed,referee}. Theoretical exploration of whether the doubly heavy tetraquark states $[QQ][\bar{q}\bar{q}]$ or $[\bar{Q}\bar{Q}][qq]$ can exist as states stable against breakup into two $Q\bar{q}$ mesons was pioneered in the early 1980s~\cite{potential}. Since then, much attention has been paid to these states in various phenomenological approaches, such as the MIT bag model~\cite{bag}, constituent quark model (CQM)~\cite{constituentmodel}, chiral perturbation theory~\cite{chiralperturbation}, string model~\cite{string}, lattice QCD~\cite{lqcdbb}, and QCD sum rule approach~\cite{qcdsum}. A large number of studies have indicated that the state $[bb][\bar{u}\bar{d}]$ with $01^+$ is stable against strong interactions within various theoretical frameworks.
However, the existence of this state could not be established because of a lack of experimental information about the strength of the interaction between two heavy quarks. The discovery of the doubly charmed baryon $\Xi_{cc}$ by the LHCb Collaboration at CERN has provided the crucial experimental input which allows this issue to be finally resolved~\cite{xicc}. Subsequently, theoretical interest in the doubly heavy tetraquark states $[QQ][\bar{q}\bar{q}]$ has been rekindled in the search for possible stable tetraquark states~\cite{karliner,Eichten,Francis,bicudo, cqmpark,unstable,heavy,pseudoscalar,decay,francis,richard}. Undoubtedly, the stability of the doubly heavy tetraquark states is model dependent, see Table \ref{comparison}. More investigations from different theoretical points of view are therefore necessary to arrive at a comprehensive understanding of the properties of these states, which will benefit future experimental searches for stable doubly heavy tetraquark states. Recently, we have developed an alternative color flux-tube model (ACFTM) based on the lattice QCD picture and the traditional quark models~\cite{lqcd,tcqm}. The most salient feature of this model is a multibody confinement potential instead of the two-body potential proportional to the color charges used in the traditional quark models~\cite{ACFTM}. We have systematically investigated the hidden-charm states observed in recent years within the framework of the ACFTM and found that many of them, especially the charged tetraquarks $Z^+_c$, can be accommodated as compact tetraquark states. The multibody color flux-tube dynamical mechanism therefore seems well suited to describing multiquark states from a phenomenological point of view~\cite{charmed}. We are accordingly motivated to study the properties of the doubly heavy tetraquark states under the diquark-antidiquark hypothesis within the framework of the ACFTM.
We concentrate more on the mass spectrum of the doubly heavy tetraquark states than on their decay properties, and we investigate the dynamical mechanism binding the quarks together and determining their binding energy. This work attempts to broaden the theoretical perspective on the properties of the doubly heavy tetraquark states and may provide some valuable clues for the experimental establishment of the tetraquark states in the future. This paper is organized as follows. After the introduction, the ACFTM is introduced in Sec. II. The construction of the wavefunctions of the doubly heavy tetraquark states is presented in Sec. III. The numerical results and discussion of the stable doubly heavy tetraquark states are given in Sec. IV. The last section is devoted to a brief summary. \section{The alternative color flux-tube model} The underlying theory of the strong interaction is quantum chromodynamics (QCD), which has several important properties: asymptotic freedom, color confinement, and approximate chiral symmetry together with its spontaneous breaking. At the hadronic scale, QCD is highly non-perturbative due to the complicated infrared behavior of the non-Abelian $SU(3)$ gauge group. At present it is still impossible to derive the hadron spectrum analytically from the QCD Lagrangian. The QCD-inspired CQM is therefore a useful tool for obtaining physical insight into these complicated strongly interacting systems. Although the connection between the models and QCD is not clearly established and there is no sound systematic way to obtain corrections, the models provide simple physical pictures which connect the phenomenological regularities observed in the hadron data with the underlying structure. Color confinement is a long-distance behavior whose understanding continues to be a challenge in theoretical physics.
In the traditional CQM, a two-body interaction $V^{C}_{ij}$ proportional to the color charges $\mathbf{\lambda}^c_i\cdot\mathbf{\lambda}^c_j$ and $r_{ij}^n$, namely $V^{C}_{ij}=a_c\mathbf{\lambda}^c_i\cdot\mathbf{\lambda}^c_jr_{ij}^n$, where $n=1$ or 2 and $r_{ij}$ is the distance between two quarks, was introduced to phenomenologically describe the quark confinement interaction~\cite{tcqm}. The traditional models describe the properties of ordinary hadrons ($q^3$ and $q\bar{q}$) well because their flux-tube structures are unique and trivial. However, these models are known to be flawed phenomenologically because they lead to power-law van der Waals forces between color-singlet hadrons~\cite{vandewaals}. These problems are related to the fact that the models do not respect local color gauge invariance~\cite{gauge}. In addition, they also lead to anticonfinement for symmetrical color structures in multiquark systems~\cite{anticonfinement}. Lattice QCD (LQCD) calculations on mesons, baryons, tetraquark, and pentaquark states have revealed that flux-tube structures exist in hadrons~\cite{lqcd}. For a given spatial configuration of a multiquark state, the confinement is a multibody interaction and can be simulated by a static potential which is proportional to the minimum of the total length of the color flux-tubes. A naive flux-tube model based on this picture, used in the present work, has been constructed. It takes multibody confinement into account in the harmonic-interaction approximation, i.e., the length of the color flux-tube is replaced by the square of the length to simplify the numerical calculation. As a result, the Regge trajectories are missing in the ACFTM. However, this replacement is still a good approximation for low-lying states such as those considered in this paper.
We have calculated the $b\bar{b}$ spectrum using both quadratic and linear potentials, and the results show that the differences between the two models are small for the low-lying states~\cite{SNChen}. There are two theoretical arguments to support this approximation. One is that the spatial separations of the quarks (the lengths of the color flux-tubes) in hadrons are not large, so the difference between the linear and quadratic forms is small and can be absorbed in the adjustable parameter, the stiffness. Calculations of nucleon-nucleon interactions support this argument~\cite{linear}. The second is that we are using a nonrelativistic description of the dynamics and, as was shown long ago~\cite{goldman}, an interaction energy that varies linearly with the separation between fermions in a relativistic, first-order differential dynamics has a wide region in which a harmonic approximation is valid for the second-order (Feynman-Gell-Mann) reduction of the equations of motion. For an ordinary meson, the quark and antiquark are connected by a three-dimensional color flux-tube. Its confinement potential in the ACFTM can be written as \begin{eqnarray} V^{C}_{min}(2) = kr^2, \end{eqnarray} where $r$ is the separation of the quark and antiquark and $k$ is the stiffness of a three-dimensional color flux-tube. According to the double-Y-shaped color flux-tube structure of the tetraquark state $[Q_1Q_2][\bar{q}_3\bar{q}_4]$ in the diquark-antidiquark configuration, a four-body quadratic confinement potential, instead of the linear one used in lattice QCD, can be written as \begin{eqnarray} V^{C}(4)&=&k\left[ (\mathbf{r}_1-\mathbf{y}_{12})^2 +(\mathbf{r}_2-\mathbf{y}_{12})^2+(\mathbf{r}_{3}-\mathbf{y}_{34})^2\right. \nonumber \\ &+& \left.(\mathbf{r}_4-\mathbf{y}_{34})^2+\kappa_d(\mathbf{y}_{12}-\mathbf{y}_{34})^2\right], \end{eqnarray} where $\mathbf{r}_1$, $\mathbf{r}_2$, $\mathbf{r}_3$ and $\mathbf{r}_4$ are the positions of the four particles.
Two Y-shaped junctions $\mathbf{y}_{12}$ and $\mathbf{y}_{34}$ are variational parameters, which can be determined by taking the minimum of the confinement potential. $\kappa_d k$ is the stiffness of a $d$-dimensional color flux-tube, and the relative stiffness parameter $\kappa_{d}$ is equal to $\frac{C_{d}}{C_3}$~\cite{kappa}, where $C_{d}$ is the eigenvalue of the Casimir operator associated with the $SU(3)$ color representation $d$ at either end of the color flux-tube, e.g. $C_3=\frac{4}{3}$, $C_6=\frac{10}{3}$, and $C_8=3$. The minimum of the confinement potential $V^C_{min}$ can be obtained by taking the variation of $V^C$ with respect to $\mathbf{y}_{12}$ and $\mathbf{y}_{34}$, and it can be expressed as \begin{eqnarray} V^C_{min}(4)&=& k\left(\mathbf{R}_1^2+\mathbf{R}_2^2+ \frac{\kappa_{d}}{1+\kappa_{d}}\mathbf{R}_3^2\right). \end{eqnarray} The canonical coordinates $\mathbf{R}_i$ have the following forms, \begin{eqnarray} \mathbf{R}_{1} & = & \frac{1}{\sqrt{2}}(\mathbf{r}_1-\mathbf{r}_2),~ \mathbf{R}_{2} = \frac{1}{\sqrt{2}}(\mathbf{r}_3-\mathbf{r}_4), \nonumber \\ \mathbf{R}_{3} & = &\frac{1}{ \sqrt{4}}(\mathbf{r}_1+\mathbf{r}_2 -\mathbf{r}_3-\mathbf{r}_4), \\ \mathbf{R}_{4} & = &\frac{1}{ \sqrt{4}}(\mathbf{r}_1+\mathbf{r}_2 +\mathbf{r}_3+\mathbf{r}_4). \nonumber \end{eqnarray} The use of $V^C_{min}(4)$ can be understood as the assumption that the gluon field readjusts instantaneously to its minimal configuration. The origin of the constituent quark mass is traced back to the spontaneous breaking of $SU(3)_L\otimes SU(3)_R$ chiral symmetry, and consequently constituent quarks should interact through the exchange of Goldstone bosons \cite{chnqm}. Chiral symmetry breaking suggests dividing quarks into two different sectors: light quarks ($u$, $d$ and $s$), for which the chiral symmetry is spontaneously broken, and heavy quarks ($c$ and $b$), for which the symmetry is explicitly broken.
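The closed-form minimum of the four-body confinement potential above can be cross-checked numerically: the stationarity conditions of $V^{C}(4)$ give a linear system for the junctions $\mathbf{y}_{12}$ and $\mathbf{y}_{34}$, and the resulting value coincides with $k(\mathbf{R}_1^2+\mathbf{R}_2^2+\frac{\kappa_{d}}{1+\kappa_{d}}\mathbf{R}_3^2)$. The short script below is only an illustrative sketch, with arbitrary quark positions and an arbitrary $\kappa_d$ not tied to the model parameters:

```python
import numpy as np

def v_conf4(r, y12, y34, k=1.0, kappa=1.0):
    """Four-body quadratic confinement potential V^C(4) at given junctions."""
    r1, r2, r3, r4 = r
    return k * ((r1 - y12) @ (r1 - y12) + (r2 - y12) @ (r2 - y12)
                + (r3 - y34) @ (r3 - y34) + (r4 - y34) @ (r4 - y34)
                + kappa * (y12 - y34) @ (y12 - y34))

def junctions(r, kappa=1.0):
    """Stationary junctions: (2+kappa) y12 - kappa y34 = r1+r2 and 3<->4."""
    r1, r2, r3, r4 = r
    a, b = r1 + r2, r3 + r4
    s = (a + b) / 2.0                     # y12 + y34
    d = (a - b) / (2.0 * (1.0 + kappa))   # y12 - y34
    return (s + d) / 2.0, (s - d) / 2.0

def v_conf4_min(r, k=1.0, kappa=1.0):
    """Closed-form minimum k(R1^2 + R2^2 + kappa/(1+kappa) R3^2)."""
    r1, r2, r3, r4 = r
    R1 = (r1 - r2) / np.sqrt(2.0)
    R2 = (r3 - r4) / np.sqrt(2.0)
    R3 = (r1 + r2 - r3 - r4) / 2.0
    return k * (R1 @ R1 + R2 @ R2 + kappa / (1.0 + kappa) * R3 @ R3)

rng = np.random.default_rng(1)
r = rng.normal(size=(4, 3))               # four random positions in 3D
y12, y34 = junctions(r, kappa=2.5)
direct = v_conf4(r, y12, y34, kappa=2.5)  # potential at the stationary point
closed = v_conf4_min(r, kappa=2.5)        # closed-form minimum
```

Perturbing the junctions away from the stationary point raises $V^{C}(4)$, confirming that the stationary point is indeed the minimum.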
The $SU(3)_L\otimes SU(3)_R$ chiral quark model, in which constituent quarks interact through pseudoscalar Goldstone boson exchange (GBE), has been very successfully applied to describe the baryon spectra~\cite{su3} as well as nucleon-nucleon and nucleon-hyperon interactions~\cite{NNNY}. The central part of the quark-quark interaction originating from chiral symmetry breaking can be summarized as follows~\cite{chiralmeson}, \begin{eqnarray} V_{ij}^{B} & = & V^{\pi}_{ij} \sum_{k=1}^3 \mathbf{F}_i^k \mathbf{F}_j^k+V^{K}_{ij} \sum_{k=4}^7\mathbf{F}_i^k\mathbf{F}_j^k \nonumber\\ &+&V^{\eta}_{ij} (\mathbf{F}^8_i \mathbf{F}^8_j\cos \theta_P -\sin \theta_P),\nonumber\\ V^{\chi}_{ij} & = & \frac{g^2_{ch}}{4\pi}\frac{m^3_{\chi}}{12m_im_j} \frac{\Lambda^{2}_{\chi}}{\Lambda^{2}_{\chi}-m_{\chi}^2} \mathbf{\sigma}_{i}\cdot \mathbf{\sigma}_{j} \\ & \times &\left( Y(m_\chi r_{ij})- \frac{\Lambda^{3}_{\chi}}{m_{\chi}^3}Y(\Lambda_{\chi} r_{ij}) \right),\nonumber \\ V^{\sigma}_{ij} & = &-\frac{g^2_{ch}}{4\pi} \frac{\Lambda^{2}_{\sigma}m_{\sigma}}{\Lambda^{2}_{\sigma}-m_{\sigma}^2} \left( Y(m_\sigma r_{ij})- \frac{\Lambda_{\sigma}}{m_{\sigma}}Y(\Lambda_{\sigma}r_{ij}) \right). \nonumber \end{eqnarray} where $\chi$ stands for $\pi$, $K$ and $\eta$, $Y(x)=e^{-x}/x$, and $\mathbf{F}_{i,j}$ and $\mathbf{\sigma}_{i,j}$ are the flavor $SU(3)$ Gell-Mann matrices and the spin $SU(2)$ Pauli matrices, respectively. Besides the chiral symmetry breaking, one expects the dynamics to be governed by QCD perturbative effects, which are well described by the well-known one-gluon-exchange (OGE) potential. The central part of the OGE reads~\cite{chiralmeson}, \begin{eqnarray} V_{ij}^{G} & = & {\frac{\alpha_{s}}{4}}\mathbf{\lambda}^c_{i} \cdot\mathbf{\lambda}_{j}^c\left({\frac{1}{r_{ij}}}- {\frac{2\pi\delta(\mathbf{r}_{ij})\mathbf{\sigma}_{i}\cdot \mathbf{\sigma}_{j}}{3m_im_j}}\right).
\nonumber \end{eqnarray} where $\mathbf{\lambda}_{i,j}$ are the color $SU(3)$ Gell-Mann matrices, and $\alpha_s$ is the running strong coupling constant, which takes the following form~\cite{chiralmeson}, \begin{equation} \alpha_s(\mu_{ij})=\frac{\alpha_0}{\ln\left((\mu_{ij}^{2}+\mu_0^2)/\Lambda_0^2\right)}, \end{equation} where $\mu_{ij}$ is the reduced mass of the two interacting particles. The function $\delta(\mathbf{r}_{ij})$ should be regularized~\cite{weistein}, \begin{equation} \delta(\mathbf{r}_{ij})=\frac{1}{4\pi r_{ij}r_0^2(\mu_{ij})}e^{-r_{ij}/r_0(\mu_{ij})}, \end{equation} where $r_0(\mu_{ij})=\hat{r}_0/\mu_{ij}$. $\Lambda_0$, $\alpha_0$, $\mu_0$ and $\hat{r}_0$ are adjustable model parameters. The non-central parts of the GBE and OGE interactions, the tensor and spin-orbit forces, are omitted in the present calculation because their contributions are small or zero for the lowest-energy states in which we are interested here. To sum up, the Hamiltonian $H_n$ ($n=2$ or 4) relevant to the present work can be expressed as follows: \begin{eqnarray} H_n & = & \sum_{i=1}^n \left(m_i+\frac{\mathbf{p}_i^2}{2m_i} \right)-T_{C}+\sum_{i>j}^n V_{ij}+V^{C}_{min}(n), \nonumber\\ V_{ij} & = &V_{ij}^G+V_{ij}^B+V_{ij}^{\sigma}. \end{eqnarray} $\mathbf{p}_i$ and $m_i$ are the momentum and mass of the $i$-th quark (antiquark), respectively. $T_{C}$ is the center-of-mass kinetic energy of the state and must be subtracted. The starting point of the model study of multiquark states is to accommodate ordinary hadrons in the model in order to determine the model parameters. The mass parameters $m_{\pi}$, $m_K$ and $m_{\eta}$ in $V^B_{ij}$ take their experimental values. The cutoff parameters $\Lambda$'s and the mixing angle $\theta_{P}$ in $V_{ij}^B$ take the values of the work~\cite{chiralmeson}. The mass parameter $m_{\sigma}$ in the interaction $V_{ij}^{\sigma}$ can be determined through the PCAC relation $m^2_{\sigma}\approx m^2_{\pi}+4m^2_{u,d}$~\cite{masssigma}.
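As a numerical illustration of the parameter structure, the sketch below evaluates the running coupling $\alpha_s(\mu_{ij})$, the regularized $\delta(\mathbf{r}_{ij})$, and the PCAC estimate of $m_{\sigma}$, using the central parameter values from Tables \ref{fixed} and \ref{adjustable}; the conversion $\hbar c\approx 197.3$~MeV$\cdot$fm is assumed here for the fm$^{-1}$ entries:

```python
import math

HBARC = 197.327                                  # MeV*fm (assumed conversion)

# Central parameter values from the tables of the text
ALPHA_0, MU_0, LAMBDA_0 = 4.554, 0.0004, 9.173   # mu_0, Lambda_0 in MeV
R0_HAT = 35.06                                   # MeV*fm
M_UD = 280.0                                     # MeV
M_PI = 0.7 * HBARC                               # m_pi = 0.7 fm^-1 in MeV
M_SIGMA_TAB = 2.92 * HBARC                       # m_sigma = 2.92 fm^-1 in MeV

def alpha_s(mu):
    """Running coupling alpha_s(mu_ij) = alpha_0 / ln((mu^2+mu_0^2)/Lambda_0^2)."""
    return ALPHA_0 / math.log((mu * mu + MU_0 * MU_0) / (LAMBDA_0 * LAMBDA_0))

def delta_reg(r, mu):
    """Regularized delta(r_ij) = exp(-r/r0) / (4 pi r r0^2), r0 = r0_hat/mu_ij."""
    r0 = R0_HAT / mu
    return math.exp(-r / r0) / (4.0 * math.pi * r * r0 * r0)

# PCAC estimate m_sigma^2 ~ m_pi^2 + 4 m_{u,d}^2, close to the tabulated value
m_sigma_pcac = math.sqrt(M_PI * M_PI + 4.0 * M_UD * M_UD)
```

With these values the PCAC estimate lands within a few MeV of the tabulated $m_{\sigma}=2.92$~fm$^{-1}$, and the regularized contact term decays monotonically with $r_{ij}$ as expected.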
The chiral coupling constant $g_{ch}$ can be obtained from the $\pi NN$ coupling constant through \begin{equation} \frac{g_{ch}^2}{4\pi}=\left(\frac{3}{5}\right)^2\frac{g_{\pi NN}^2}{4\pi} \frac{m_{u,d}^2}{m_N^2}. \end{equation} The values of the fixed model parameters are given in Table \ref{fixed}. The adjustable parameters and their errors in Table \ref{adjustable} are determined by fitting the masses of the ground-state mesons in Table \ref{mesons} using the Minuit program. Once the meson masses are obtained, one can calculate the threshold of the doubly heavy tetraquark states $[QQ][\bar{q}\bar{q}]$ simply by adding the masses of two $Q\bar{q}$ mesons, in order to identify the stability of the tetraquark states against the strong interaction. \begin{table}[ht] \caption{Fixed model parameters.} \label{fixed} \begin{tabular}{cccccccccccccc} \toprule[0.8pt] Para. & Value & Unit &~~~& Para. & Value & Unit &~~~& Para. & Value & Unit \\ $m_{ud}$ & 280 & MeV & & $m_{\sigma}$ & 2.92 & fm$^{-1}$ & & $\Lambda_{\eta}$ & 5.2 & fm$^{-1}$ \\ $m_{\pi}$ & 0.7 & fm$^{-1}$ & & $\Lambda_{\pi}$ & 4.2 & fm$^{-1}$ & & $\theta_P$ & $-\frac{\pi}{12}$ & ... \\ $m_{K}$ & 2.51 & fm$^{-1}$ & & $\Lambda_{\sigma}$ & 4.2 & fm$^{-1}$ & & $\frac{g^2_{ch}}{4\pi}$ & 0.43 & ... \\ $m_{\eta}$ & 2.77 & fm$^{-1}$ & & $\Lambda_{K}$ & 5.2 & fm$^{-1}$ \\ \toprule[0.8pt] \end{tabular} \caption{Adjustable model parameters.}\label{adjustable} \begin{tabular}{ccccccccccc} \toprule[0.8pt] Para. & $x_i\pm\Delta x_i$ & Unit & Para. & $x_i\pm\Delta x_i$ & Unit \\ $m_{s}$ & $511.78\pm0.228$ & MeV & $\alpha_0$ & $4.554\pm0.018$ & ...
\\ $m_{c}$ & $1601.7\pm0.441$ & MeV & $k$ & $217.50\pm0.230$ & MeV$\cdot$fm$^{-2}$ \\ $m_{b}$ & $4936.2\pm0.451$ & MeV & $\mu_0$ & $0.0004\pm0.540$ & MeV \\ $\Lambda_0$ & $9.173\pm0.175$ & MeV & $r_0$ & $35.06\pm0.156$ & MeV$\cdot$fm \\ \toprule[0.8pt] \end{tabular} \end{table} \begin{table}[ht] \caption{The ground state meson spectra in the three models, unit in MeV.}\label{mesons} \begin{tabular}{ccccccc} \toprule[0.8pt] Mesons & ~$IJ^P$~ & ~~~ACFTM~~~ &~Ref.~\cite{rel-meson}~ & ~Ref.~\cite{chiralmeson}~& ~~PDG~~ \\ \toprule[0.8pt] $\pi$ & $10^-$ & $142\pm26$ & 150 & 139 & 139 \\ $K$ & $\frac{1}{2}0^-$ & $492\pm20$ & 470 & 496 & 496 \\ $\rho$ & $11^-$ & $826\pm4$ & 770 & 772 & 775 \\ $\omega$ & $01^-$ & $780\pm4$ & 780 & 690 & 783 \\ $K^*$ & $\frac{1}{2}1^-$ & $974\pm4$ & 900 & 910 & 892 \\ $\phi$ & $01^-$ & $1112\pm4$ & 1020 & 1020 & 1020 \\ $D^{\pm}$ & $\frac{1}{2}0^-$ & $1867\pm8$ & 1880 & 1883 & 1869 \\ $D^*$ & $\frac{1}{2}1^-$ & $2002\pm4$ & 2040 & 2010 & 2007 \\ $D_s^{\pm}$ & $00^-$ & $1972\pm9$ & 1980 & 1981 & 1968 \\ $D_s^*$ & $01^-$ & $2140\pm4$ & 2130 & 2112 & 2112 \\ $\eta_c$ & $00^-$ & $2912\pm5$ & 2970 & 2990 & 2980 \\ $J/\Psi$ & $01^-$ & $3102\pm4$ & 3100 & 3097 & 3097 \\ $B^0$ & $\frac{1}{2}0^-$ & $5259\pm5$ & 5310 & 5281 & 5280 \\ $B^*$ & $\frac{1}{2}1^-$ & $5301\pm4$ & 5370 & 5321 & 5325 \\ $B_s^0$ & $00^-$ & $5377\pm5$ & 5390 & 5355 & 5366 \\ $B_s^*$ & $01^-$ & $5430\pm4$ & 5450 & 5400 & 5416 \\ $B_c$ & $00^-$ & $6261\pm7$ & 6270 & 6277 & 6277 \\ $B_c^*$ & $01^-$ & $6357\pm4$ & 6340 & ... & ... \\ $\eta_b$ & $00^-$ & $9441\pm8$ & 9400 & 9454 & 9391 \\ $\Upsilon(1S)$ & $01^-$ & $9546\pm5$ & 9460 & 9505 & 9460 \\ \toprule[0.8pt] \end{tabular} \end{table} The meson spectrum has also been studied in other quark models~\cite{chiralmeson,rel-meson}.
The spectrum from the light pseudoscalar and vector mesons up to bottomonium has also been investigated in a nonrelativistic quark model (\textbf{17} free parameters) with a one-gluon-exchange potential, a screened confinement, and one-boson exchange~\cite{chiralmeson}. The mesons from the $\pi$ to the $\Upsilon$ can be described in a relativized quark model (\textbf{14} free parameters) with a universal one-gluon exchange plus a linear confining potential motivated by QCD~\cite{rel-meson}. For comparison, the results of these two models are also listed in Table \ref{mesons}. Objectively speaking, the other two models describe the meson spectra a little better than our model, the main reason being that the ACFTM has just $\textbf{8}$ adjustable parameters, far fewer than the other two models. On the whole, the ACFTM gives an acceptable description of the meson spectrum from the model point of view. In general, it is hard to reproduce a large number of states exactly in a quark-model calculation with a limited number of parameters: the more parameters a model has, the more accurate it can be made. However, one should not introduce too many free parameters to improve the accuracy of the meson spectrum at the expense of reducing the predictive power of the model. In addition, the nonrelativistic and relativistic quark models appear to be equivalent in the sense that both give a reasonable meson spectrum. A great deal of early research on meson spectra was devoted to comparing the equivalence of various types of quark models~\cite{three-models}. Phenomenological model studies of multiquark states and hadron-hadron interactions rely on the hope that the good equivalence found between relativistic and nonrelativistic meson spectra persists for multiquark systems. In fact, nonrelativistic quark models have been successfully applied to baryon-baryon interactions and to the new hadrons observed in experiments up to now~\cite{NNNY,qdcsm, newhadrons}.
Although it is generally recognized that models with relativistic dynamics are more rigorous from the theoretical point of view, all relativistic quark models have to face the endemic technical difficulty of separating the centre-of-mass motion. In contrast, nonrelativistic quark models can handle the centre-of-mass motion and can also be extended to multibody dynamics more easily than relativistic ones. \section{wavefunctions of the doubly heavy states} The properties of the doubly heavy tetraquark states can be obtained from a complete wavefunction which includes all possible flavor-spin-color-spatial channels that contribute to a given well-defined parity, isospin, and total angular momentum. Within the framework of the diquark-antidiquark configuration, the trial wavefunction of the doubly heavy tetraquark state $[QQ][\bar{q}\bar{q}]$ can be constructed as a sum of the following direct products of color $\chi_c$, isospin $\eta_i$, spin $\chi_s$ and spatial $\phi$ terms, \begin{eqnarray} \Phi^{[QQ][\bar{q}\bar{q}]}_{IM_IJM_J} &=& \sum_{\alpha}\xi_{\alpha}\left[\left[\left[\phi_{l_am_a}^G(\mathbf{r})\chi_{s_a}\right]^{[QQ]}_{J_aM_{J_a}} \left[\phi_{l_bm_b}^G(\mathbf{R})\right.\right.\right.\nonumber\\ & \times & \left.\left.\left.\chi_{s_b}\right]^{[\bar{q}\bar{q}]}_{J_bM_{J_b}}\right ]_{J_{ab}M_{J_{ab}}}^{[QQ][\bar{q}\bar{q}]} \phi^G_{l_cm_c}(\mathbf{X})\right]^{[QQ][\bar{q}\bar{q}]}_{JM_J}\\ & \times & \left[\eta_{i_a}^{[QQ]}\eta_{i_b}^{[\bar{q}\bar{q}]}\right]_{IM_I}^{[QQ][\bar{q}\bar{q}]} \left[\chi_{c_a}^{[QQ]}\chi_{c_b}^{[\bar{q}\bar{q}]}\right]_{CW_C}^{[QQ][\bar{q}\bar{q}]}, \nonumber \end{eqnarray} The subscripts $a$ and $b$ represent the diquark $[QQ]$ and antidiquark $[\bar{q}\bar{q}]$, respectively. The summation index $\alpha$ stands for all possible flavor-spin-color-spatial intermediate quantum numbers.
The relative spatial coordinates $\mathbf{r}$, $\mathbf{R}$ and $\mathbf{X}$ and the center of mass $\mathbf{R}_c$ of the tetraquark state $[QQ][\bar{q}\bar{q}]$ are defined as \begin{eqnarray} \mathbf{r}&=&\mathbf{r}_1-\mathbf{r}_2,~~~\mathbf{R}=\mathbf{r}_3-\mathbf{r}_4, \nonumber\\ \mathbf{X}&=&\frac{m_1\mathbf{r}_1+m_2\mathbf{r}_2}{m_1+m_2}-\frac{m_3\mathbf{r}_3+m_4\mathbf{r}_4}{m_3+m_4},\\ \mathbf{R}_c&=&\frac{m_1\mathbf{r}_1+m_2\mathbf{r}_2+m_3\mathbf{r}_3+m_4\mathbf{r}_4}{m_1+m_2+m_3+m_4}.\nonumber \end{eqnarray} The corresponding angular excitations of the three relative motions are, respectively, $l_a$, $l_b$ and $l_c$. The parity of the doubly heavy tetraquark states $[QQ][\bar{q}\bar{q}]$ can therefore be expressed in terms of the relative orbital angular momenta associated with the Jacobi coordinates as $P=(-1)^{l_a+l_b+l_c}$. It is worth mentioning that this set of coordinates is only one possible choice among many; however, it is the most suitable for describing the correlation of the two quarks (antiquarks) in the diquark (antidiquark) and for constructing the symmetry of identical particles. In order to obtain a reliable solution of the few-body problem, a high-precision numerical method is indispensable. The Gaussian Expansion Method (GEM)~\cite{GEM}, which has been proven to be rather powerful in solving few-body problems, is therefore used to study the four-quark systems in the present work. According to the GEM, any relative-motion wave function can be written as \begin{eqnarray} \phi^G_{lm}(\mathbf{z})=\sum_{n=1}^{n_{max}}c_{n}N_{nl}z^{l}e^{-\nu_{n}z^2}Y_{lm}(\hat{\mathbf{z}}). \end{eqnarray} More details of the relative-motion wave functions can be found in the paper~\cite{GEM}. The color representation of the diquark may be antisymmetrical $[QQ]_{\bar{\mathbf{3}}_c}$ or symmetrical $[QQ]_{\mathbf{6}_c}$, whereas that of the antidiquark may be antisymmetrical $[\bar{q}\bar{q}]_{\mathbf{3}_c}$ or symmetrical $[\bar{q}\bar{q}]_{\bar{\mathbf{6}}_c}$.
According to the color coupling rules, there are two ways of coupling the diquark and the antidiquark into an overall color singlet: $\left[[QQ]_{\bar{\mathbf{3}}_c}[\bar{q}\bar{q}]_{\mathbf{3}_c}\right]_{\mathbf{1}}$ (good diquark) and $\left[[QQ]_{\mathbf{6}_c}[\bar{q}\bar{q}]_{\bar{\mathbf{6}}_c}\right]_{\mathbf{1}}$ (bad diquark). In general, the interaction in the good diquark is attractive, whereas the interaction in the bad diquark is repulsive. In any case, a physical state should be a mixture of the two because of the coupling between the two color configurations. The spin of the diquark $[QQ]$ is coupled to $s_a$ and that of the antidiquark $[\bar{q}\bar{q}]$ to $s_b$. The total spin wave function of the doubly heavy tetraquark state $[QQ][\bar{q}\bar{q}]$ can be written as $S=s_a\oplus s_b$. We then have the following basis vectors as a function of the total spin $S$, \begin{eqnarray} S=\left\{ \begin{array}{ll} \mbox{$0=1\oplus1$ and $0\oplus0$}\\ \mbox{$1=1\oplus1$, $1\oplus0$, and $0\oplus1$}\\ \mbox{$2=1\oplus1$}\\ \end{array} \right. , \end{eqnarray} With respect to the flavor wavefunction, we only consider $SU_f(2)$ symmetry in the present work. The quarks $s$, $c$ and $b$ have isospin zero, so they do not contribute to the total isospin. The flavor wave functions of the antidiquark consisting of $\bar{u}$ and $\bar{d}$ quarks are similar to those of spin. When all degrees of freedom of the identical particles in the diquark (antidiquark) are taken into account, the Pauli principle must be satisfied by imposing restrictions on the quantum numbers of the diquark (antidiquark). For example, for the color-antisymmetrical tetraquark state $[cc]_{\bar{\mathbf{3}}_c}[\bar{u}\bar{d}]_{\mathbf{3}_c}$, the quantum numbers must satisfy the relations $(-1)^{l_a+i_a+s_a}=-1$ and $(-1)^{l_b+i_b+s_b}=1$.
But for the color-symmetrical tetraquark state $[cc]_{\mathbf{6}_c}[\bar{u}\bar{d}]_{\bar{\mathbf{6}}_c}$, the quantum numbers must satisfy the relations $(-1)^{l_a+i_a+s_a}=1$ and $(-1)^{l_b+i_b+s_b}=-1$. In contrast, the situation for non-identical particles is much simpler because there are no such restrictions. \section{numerical results and analysis} The converged numerical results for the doubly heavy tetraquark states $[QQ][\bar{q}\bar{q}]$ within the framework of the ACFTM can be obtained by solving the four-body Schr\"{o}dinger equation with the Rayleigh-Ritz variational principle, \begin{eqnarray} (H_4-E_4)\Phi^{[QQ][\bar{q}\bar{q}]}_{IM_IJM_J}=0. \end{eqnarray} A tetraquark state should be stable against the strong interaction if its energy lies below all possible two-meson thresholds. We denote the lowest threshold of the doubly heavy tetraquark $[QQ][\bar{q}\bar{q}]$ by $T^{min}_{M_1M_2}$, where $M_1$ and $M_2$ stand for two $Q\bar{q}$ mesons. The binding energy of the doubly heavy tetraquark states can therefore be defined as \begin{eqnarray} E_b=E_4-T^{min}_{M_1M_2} \end{eqnarray} to identify whether or not the tetraquark state is stable against the strong interaction. Taking the theoretical difference between the energy of the tetraquark state and that of the two mesons greatly reduces the influence of parameter-induced inaccuracies in the meson spectrum on the binding energies. If $E_b\geq0$, the tetraquark state can fall apart into two mesons via the strong interaction. If $E_b<0$, the strong decay into two mesons is forbidden, and the decay must therefore proceed through the weak or electromagnetic interaction.
\begin{table}[ht] \caption{The energies of the doubly heavy tetraquark states $[QQ][\bar{q}\bar{q}]$, unit in MeV.} \label{spectrum} \begin{tabular}{ccccccccccc} \toprule[0.8pt] ~~Flavor~~ & ~$IJ^{P}$~~ & $n^{2S+1}L_J$ & ~~~~Masses~~~~ & ~~$T^{min}_{M_1M_2}$~~ & ~~$E_b$~~ \\ \toprule[0.8pt] & $01^{+}$ & $0^3S_1$ & $3719\pm12$ & ${DD^*}$ & $-150$ \\ & $01^{-}$ & $0^1P_1$ & $3931\pm12$ & ${DD}$ & $197$ \\ $[cc][\bar{u}\bar{d}]$ & $10^{+}$ & $0^1S_0$ & $3962\pm8$ & ${DD}$ & 228 \\ & $11^{+}$ & $0^3S_1$ & $4017\pm7$ & ${DD^*}$ & 148 \\ & $12^{+}$ & $0^5S_2$ & $4013\pm7$ & ${D^*D^*}$ & 9 \\ \toprule[0.8pt] & $00^{+}$ & $0^1S_0$ & $6990\pm12$ & ${DB}$ & $-136$ \\ & $01^{+}$ & $0^3S_1$ & $6997\pm12$ & ${DB^*}$ & $-171$ \\ & $02^{+}$ & $0^5S_2$ & $7321\pm7$ & ${D^*B^*}$ & 18 \\ $[bc][\bar{u}\bar{d}]$ & $01^{-}$ & $0^1P_1$ & $7154\pm9$ & ${DB}$ & 28 \\ & $10^{+}$ & $0^1S_0$ & $7270\pm8$ & ${DB}$ & 144 \\ & $11^{+}$ & $0^3S_1$ & $7283\pm8$ & ${DB^*}$ & 115 \\ & $12^{+}$ & $0^5S_2$ & $7299\pm7$ & ${D^*B^*}$ & $-4$ \\ \toprule[0.8pt] & $01^{+}$ & $0^3S_1$ & $10282\pm12$ & ${BB^*}$ & $-278$ \\ & $01^{-}$ & $0^1P_1$ & $10404\pm11$ & ${BB}$ & $-114$ \\ $[bb][\bar{u}\bar{d}]$ & $10^{+}$ & $0^1S_0$ & $10558\pm7$ & ${BB}$ & 40 \\ & $11^{+}$ & $0^3S_1$ & $10586\pm7$ & ${BB^*}$ & 26 \\ & $12^{+}$ & $0^5S_2$ & $10572\pm7$ & ${B^*B^*}$ & $-30$ \\ \toprule[0.8pt] & $\frac{1}{2}0^{+}$ & $0^1S_0$ & $4121\pm8$ & ${DD_s}$ & 282 \\ $[cc][\bar{u}\bar{s}]$ & $\frac{1}{2}1^{+}$ & $0^3S_1$ & $4068\pm9$ & ${D^*D_s}$ & 94 \\ & $\frac{1}{2}2^{+}$ & $0^5S_2$ & $4177\pm7$ & ${D^*D_s^*}$ & 35 \\ \toprule[0.8pt] & $\frac{1}{2}0^{+}$ & $0^1S_0$ & $7339\pm9$ & ${D_sB}$ & 108 \\ $[bc][\bar{u}\bar{s}]$ & $\frac{1}{2}1^{+}$ & $0^3S_1$ & $7356\pm9$ & ${D_sB^*}$ & $83$ \\ & $\frac{1}{2}2^{+}$ & $0^5S_2$ & $7455\pm7$ & ${D^*B_s^*}$ & 23 \\ \toprule[0.8pt] & $\frac{1}{2}0^{+}$ & $0^1S_0$ & $10716\pm7$ & ${BB_s}$ & 80 \\ $[bb][\bar{u}\bar{s}]$ & $\frac{1}{2}1^{+}$ & $0^3S_1$ & $10629\pm9$ & ${B^*B_s}$ &
$-49$ \\ & $\frac{1}{2}2^{+}$ & $0^5S_2$ & $10734\pm7$ & ${B^*B_s^*}$ & 3 \\ \toprule[0.8pt] & $00^{+}$ & $0^1S_0$ & $4279\pm8$ & ${D_sD_s}$ & 335 \\ $[cc][\bar{s}\bar{s}]$ & $01^{+}$ & $0^3S_1$ & $4312\pm7$ & ${D_sD_s^*}$ & 193 \\ & $02^{+}$ & $0^5S_2$ & $4328\pm7$ & ${D^*_sD_s^*}$ & 48 \\ \toprule[0.8pt] & $00^{+}$ & $0^1S_0$ & $7582\pm7$ & ${D_sB_s}$ & 232 \\ $[bc][\bar{s}\bar{s}]$ & $01^{+}$ & $0^3S_1$ & $7590\pm7$ & ${D_sB_s^*}$ & 188 \\ & $02^{+}$ & $0^5S_2$ & $7611\pm7$ & ${D^*_sB_s^*}$ & 41 \\ \toprule[0.8pt] & $00^{+}$ & $0^1S_0$ & $10866\pm7$ & ${B_sB_s}$ & 112 \\ $[bb][\bar{s}\bar{s}]$ & $01^{+}$ & $0^3S_1$ & $10875\pm7$ & ${B_sB_s^*}$ & 68 \\ & $02^{+}$ & $0^5S_2$ & $10882\pm7$ & ${B^*_sB_s^*}$ & 22 \\ \toprule[0.8pt] \end{tabular} \end{table} In the following, we discuss the properties of the doubly heavy tetraquark states $[QQ][\bar{q}\bar{q}]$ in order to search for all possible states that are stable against the strong interaction in the ACFTM. To obtain the lowest states with positive parity, we assume that the three relative motions in the doubly heavy states are all in a relative S wave. For the lowest states with negative parity, we assume that the angular excitation of the relative motion occurs in $l_a$ rather than in $l_b$ or $l_c$, namely $l_a=1$, $l_b=l_c=0$. The reason is that the angular excitation in the diquark $[QQ]$ brings as small a kinetic energy as possible into the excited states, which contributes to the stability of the doubly heavy tetraquark states $[QQ][\bar{q}\bar{q}]$. The ACFTM predictions for the lowest energies of the doubly heavy tetraquark states $[QQ][\bar{q}\bar{q}]$ with a given set of $IJ^P$ are presented in Table \ref{spectrum}.
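As a concrete instance of the binding-energy criterion, the central values of the ACFTM meson masses in Table \ref{mesons} fix the lowest threshold of the $[bb][\bar{u}\bar{d}]$ state with $01^+$, and the tetraquark energy of Table \ref{spectrum} then reproduces its binding energy:

```python
# Central values in MeV: ACFTM meson masses and the four-body energy E_4 of
# [bb][u-bar d-bar] with IJ^P = 01^+, taken from the tables of the text.
m_B, m_Bstar = 5259, 5301
e4 = 10282

t_min = m_B + m_Bstar    # lowest threshold T^min(B B*) = 10560 MeV
e_b = e4 - t_min         # binding energy E_b = E_4 - T^min = -278 MeV
# e_b < 0: the state cannot fall apart into B B* via the strong interaction
```

The same arithmetic applied to the other rows of Table \ref{spectrum} identifies all the bound states discussed below.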
A first glance at Table \ref{spectrum} shows that there exist seven bound states with positive parity, namely the states $[cc][\bar{u}\bar{d}]$, $[bc][\bar{u}\bar{d}]$ and $[bb][\bar{u}\bar{d}]$ with $01^+$, the states $[bc][\bar{u}\bar{d}]$ and $[bb][\bar{u}\bar{d}]$ with $12^+$, the strange state $[bb][\bar{u}\bar{s}]$ with $\frac{1}{2}1^+$, and the state $[bc][\bar{u}\bar{d}]$ with $00^+$, as well as one negative-parity state, $[bb][\bar{u}\bar{d}]$ with $01^-$. The other doubly heavy tetraquark states lie above the corresponding lowest thresholds within the framework of the ACFTM and should therefore decay very rapidly through the fall-apart mechanism of the color flux-tubes. \begin{table*} \caption{The stable doubly heavy tetraquark states $[QQ][\bar{q}\bar{q}]$ in various methods, two results in Ref.~\cite{pepin} for two different sets of parameters $C_1$ and $C_2$, unit in MeV.}\label{comparison} \begin{tabular}{cccccccccccccccccccc} \toprule[0.8pt] States&&&~~Ours~~&&&&&&&Others&&\\ Flavor&~$IJ^P$~&$T^{min}_{M_1M_2}$&ACFTM&\cite{karliner} &~~~\cite{Eichten}~~~&~\cite{Francis}~&~~~~\cite{bicudo}~~~~&\cite{cqmpark} &~~~\cite{ebert}~~~&\cite{ccqqvij}&\cite{pepin}&\cite{sakai}&\cite{valcarce}&~~~\cite{park}~~~&\cite{semay} \\ \toprule[0.8pt] $[cc][\bar{u}\bar{d}]$&$01^+$&$DD^*$ & $-150$ & 7 & ... & ... & ... & ... & 64 &$-129$ & $-185,-332$ & ... & $-76$ & $100$ & 19 \\ $[bc][\bar{u}\bar{d}]$&$00^+$&$DB$ & $-136$ & $-11$ & ... & ... & ... & ... & 95 & ... & ... & $-[20,60]$ & ... & ... & 11 \\ $[bc][\bar{u}\bar{d}]$&$01^+$&$DB^*$ & $-171$ & ... & ... & ... & ... & ... & 56 & ... & ... & $-[20,60]$ & ... & ... & 1 \\ $[bc][\bar{u}\bar{d}]$&$12^+$&$D^*B^*$ & $-4$ & ... & ... & ... & ... & ... & 90 & ... & ... & $-[20,60]$ & ... & ... & ... \\ $[bb][\bar{u}\bar{d}]$&$01^+$&$BB^*$ & $-278$ & $-215$ & $-121$& $-189$ & $-59^{+30}_{-38}$ & $-121$ & $-102$ &$-341$ & $-226,-497$ & ... & $-214$& $-100$ & $-131$ \\ $[bb][\bar{u}\bar{d}]$&$12^+$&$B^*B^*$ & $-30$ & ... & ... & ... & ... & ... & 23 &65 & ... & ...
& 1 & ... & 30 \\ $[bb][\bar{u}\bar{s}]$&$\frac{1}{2}1^+$&$B^*B_s$& $-49$ & ... & $-48$ & $-98 $ &... & $-7$ & 13 &... & ... & ... & ... & ... & $-40$ \\ $[bb][\bar{u}\bar{d}]$&$01^-$&$BB$ & $-114$ & ... & ... & ... & ... & ... & ... &... & ... & ... & 11 & ... & ... \\ \toprule[0.8pt] \end{tabular} \end{table*} \begin{table} \caption{The energies of all stable states with the color configurations $\left[[QQ]_{\bar{\mathbf{3}}_c}[\bar{q}\bar{q}]_{\mathbf{3}_c}\right]_{\mathbf{1}}$ and $\left[[QQ]_{\mathbf{6}_c}[\bar{q}\bar{q}]_{\bar{\mathbf{6}}_c}\right]_{\mathbf{1}}$ in the ACFTM, unit in MeV.}\label{color configurations} \begin{tabular}{cccccccccc} \toprule[0.8pt] ~~~Flavor~~~& ~$IJ^P$~ & $~~~~~\bar{\mathbf{3}}_c\otimes\mathbf{3}_c$~~~~~&~~~$\mathbf{6}_c\otimes\bar{\mathbf{6}}_c$~~~~&~~Coupling~~~ \\ $[cc][\bar{u}\bar{d}]$ & $01^+$ & $3731\pm12$ & $4007\pm8$ & $3719\pm12$ \\ $[bc][\bar{u}\bar{d}]$ & $00^+$ & $6996\pm12$ & $7262\pm8$ & $6990\pm12$ \\ $[bc][\bar{u}\bar{d}]$ & $01^+$ & $7003\pm12$ & $7304\pm7$ & $6997\pm12$ \\ $[bc][\bar{u}\bar{d}]$ & $12^+$ & $7299\pm7$ & ... & $7299\pm7$ \\ $[bb][\bar{u}\bar{d}]$ & $01^+$ & $10283\pm12$ & $10583\pm8$ & $10282\pm12$ \\ $[bb][\bar{u}\bar{d}]$ & $12^+$ & $10572\pm7$ & ... 
& $10572\pm7$ \\ $[bb][\bar{q}'\bar{s}]$ & $\frac{1}{2}1^+$ & $10629\pm9$ & $10721\pm8$ & $10629\pm9$ \\ $[bb][\bar{u}\bar{d}]$ & $01^-$ & $10404\pm12$ & $10847\pm7$ & $10404\pm12$ \\ \toprule[0.8pt] \end{tabular} \caption{The average distance $\langle\mathbf{r}_{ij}^2\rangle^{\frac{1}{2}}$ between the $i$-th and $j$-th particle in the stable states, unit in fm.}\label{rms} \begin{tabular}{cccccccccc} \toprule[0.8pt] ~~Flavor~~&$IJ^P$&$E_b$ &$\langle\mathbf{r}_{12}^2\rangle^{\frac{1}{2}}$$\langle\mathbf{r}_{34}^2\rangle^{\frac{1}{2}}$$\langle\mathbf{r}_{24}^2\rangle^{\frac{1}{2}}$& $\langle\mathbf{r}_{13}^2\rangle^{\frac{1}{2}}$$\langle\mathbf{r}_{14}^2\rangle^{\frac{1}{2}}$$\langle\mathbf{r}_{23}^2\rangle^{\frac{1}{2}}$~\\ $[cc][\bar{u}\bar{d}]$&$01^+$ & $-150$ & 0.65~~ 0.78~~ 0.91 & 0.91~~ 0.91~~ 0.91 \\ $[bc][\bar{u}\bar{d}]$&$00^+$ & $-136$ & 0.53~~ 0.78~~ 0.91 & 0.83~~ 0.83~~ 0.91 \\ $[bc][\bar{u}\bar{d}]$&$01^+$ & $-171$ & 0.55~~ 0.78~~ 0.92 & 0.84~~ 0.84~~ 0.92 \\ $[bc][\bar{u}\bar{d}]$&$12^+$ & $-4$ & 0.56~~ 1.13~~ 1.06 & 0.98~~ 0.98~~ 1.06 \\ $[bb][\bar{u}\bar{d}]$&$01^+$ & $-278$ & 0.42~~ 0.77~~ 0.84 & 0.84~~ 0.84~~ 0.84 \\ $[bb][\bar{u}\bar{d}]$&$12^+$ & $-30$ & 0.42~~ 1.13~~ 0.98 & 0.98~~ 0.98~~ 0.98 \\ $[bb][\bar{u}\bar{s}]$&$\frac{1}{2}1^+$ & $-49$ & 0.42~~ 0.89~~ 0.90 & 0.76~~ 0.76~~ 0.90 \\ $[bb][\bar{u}\bar{d}]$&$01^-$ & $-114$ & 0.65~~ 0.77~~ 0.89 & 0.89~~ 0.89~~ 0.89 \\ \toprule[0.8pt] \end{tabular} \end{table} \begin{table*} \caption{The contributions of the various parts of the Hamiltonian in the ACFTM to the masses and the binding energies $E_B$ of the stable states $[QQ][\bar{q}\bar{q}]$, where $V^{B}$, $V^{\sigma}$, $V^C$, $V^{cm}$, $E_k$, and $V^{clb}$ represent one Goldstone boson exchange, $\sigma$-meson exchange, confinement, color-magnetic interaction, kinetic energy and Coulomb items, respectively, unit in MeV.}\label{contributions} \begin{tabular}{ccccccccc||cccccccccccc} \toprule[0.8pt] 
~Flavor~&~$IJ^P$~&Masses&~~~$V^{\sigma}$~~~&~~$V^{B}$~~&~~~$V^C$~~~&~~$V^{cm}$~~&~~~$E_k$~~~&~~$V^{clb}$~~&&~~$E_b$~~&$~~\Delta V^{\sigma}~~$&~$\Delta V^{B}$~&$~~\Delta V^C$~~&$~\Delta V^{cm}$~&~$\Delta E_k$~&$\Delta V^{clb}$ \\ $[cc][\bar{u}\bar{d}]$ & $01^+$ &$3719\pm12$ &$-35$&$-223$&173&$-236$&953&$-676$ && $-150$&$-35$&$-223$&$-73$ &$-98$ & 174 &105\\ $[bc][\bar{u}\bar{d}]$ & $00^+$ &$6990\pm12$ &$-37$&$-245$&153&$-261$&993&$-711$ && $-136$&$-37$&$-245$&$-61$ &$-65$ & 147 &125\\ $[bc][\bar{u}\bar{d}]$ & $01^+$ &$6997\pm12$ &$-37$&$-246$&155&$-253$&983&$-703$ && $-171$&$-37$&$-246$&$-75$ &$-103$& 196 &93 \\ $[bc][\bar{u}\bar{d}]$ & $12^+$ &$7299\pm7$ &$-15$&$-10$ &242& 30 &506&$-552$ && $-4$ &$-15$&$-10$ &$-34$ & 0 &$-68$&123\\ $[bb][\bar{u}\bar{d}]$ & $01^+$ &$10282\pm12$&$-36$&$-228$&140&$-236$&928&$-719$ && $-278$&$-36$&$-228$&$-104$&$-208$& 288 &11 \\ $[bb][\bar{u}\bar{d}]$ & $12^+$ &$10572\pm7$ &$-15$&$-10$ &222& 27 &490&$-574$ && $-30$ &$-15$&$-10$ &$-38$ & 9 &$-92$&116\\ $[bb][\bar{u}\bar{s}]$ & $\frac{1}{2}1^+$ &$10629\pm11$&$-27$&$-10$ &151&$-98$ &605&$-654$ && $-49$ &$-27$&$-10$ &$-49$ &$-54$ &$-41$&132\\ $[bb][\bar{u}\bar{d}]$ & $01^-$ &$10404\pm9$ &$-37$&$-244$&170&$-253$&962&$-626$ && $-114$&$-37$&$-244$&$-59$ &$-179$& 262 &144\\ \toprule[0.8pt] \end{tabular} \end{table*} \begin{table} \caption{The energies of the stable states in the two models with fourbody and twobody confinement potential and the difference between two central values, unit in MeV.}\label{diff} \begin{tabular}{cccccccccc} \toprule[0.8pt] ~~~Flavor~~~ & $IJ^P$ & ~~Two-body~~ & ~~~Four-body~~~& ~Difference~ \\ $[cc][\bar{u}\bar{d}]$ & $01^+$ & $3817\pm11$ & $3719\pm12$ & $98$ \\ $[bc][\bar{u}\bar{d}]$ & $00^+$ & $7096\pm11$ & $6990\pm12$ & $106$ \\ $[bc][\bar{u}\bar{d}]$ & $01^+$ & $7109\pm11$ & $6997\pm12$ & $102$ \\ $[bc][\bar{u}\bar{d}]$ & $12^+$ & $7440\pm7$ & $7299\pm7$ & $141$ \\ $[bb][\bar{u}\bar{d}]$ & $01^+$ & $10355\pm11$ & $10282\pm12$ & $73$ \\ $[bb][\bar{u}\bar{d}]$ & 
$12^+$ & $10698\pm7$ & $10572\pm7$ & $126$ \\ $[bb][\bar{q}'\bar{s}]$ & $\frac{1}{2}1^+$ & $10729\pm8$ & $10629\pm9$ & $100$ \\ $[bb][\bar{u}\bar{d}]$ & $01^-$ & $10508\pm11$ & $10404\pm12$ & $104$ \\ \toprule[0.8pt] \end{tabular} \end{table} The binding energies of the doubly heavy tetraquark states obtained with various theoretical methods are presented in Table \ref{comparison}, in which ``...'' indicates that the corresponding state was not studied by the authors. The state $[bb][\bar{u}\bar{d}]$ with $01^+$ exhibits strong binding, above 100 MeV, in the vast majority of works and is therefore the most promising doubly heavy tetraquark state stable against dissociation into two heavy-light mesons via the strong interaction. Its strange partner, $[bb][\bar{u}\bar{s}]$ with $\frac{1}{2}1^+$, also has a binding energy ranging from a few to tens of MeV in all investigations except Ebert's work~\cite{ebert}, where it lies slightly, about 13 MeV, above the $B^*B_s$ threshold. The state $[bb][\bar{u}\bar{s}]$ with $\frac{1}{2}1^+$ therefore stands a good chance of existing as a bound state. It is strongly suggested that these two most promising states, stable against strong decay, be explored in experiments in the near future. Apart from the two states $[bb][\bar{u}\bar{d}]$ with $01^+$ and $[bb][\bar{u}\bar{s}]$ with $\frac{1}{2}1^+$, the existence of the other states in Table \ref{comparison} as states stable against strong decay is clearly model dependent. The state $[cc][\bar{u}\bar{d}]$ with $01^+$ lies more than 100 MeV below the $DD^*$ threshold only in the ACFTM and the chiral quark models~\cite{ccqqvij,pepin}. Other results place this state above the $DD^*$ threshold.
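The sign convention used throughout — a negative binding energy $E_b = M_{\text{tetraquark}} - M_{\text{threshold}}$ signals stability against strong decay — can be sketched numerically. The threshold meson masses below are approximate PDG values rather than the model's own thresholds, so the resulting $E_b$ differ from the ACFTM values by a few tens of MeV; the state labels are ad hoc illustrative keys.

```python
# Illustrative check of E_b = M(tetraquark) - M(lowest two-meson threshold).
# Tetraquark masses: ACFTM central values quoted in the text (MeV).
# Threshold masses: approximate PDG values (an assumption; the ACFTM's own
# model thresholds differ by a few tens of MeV).
THRESHOLDS = {                       # lowest open-flavor channel, MeV
    "cc-ud-01+": 1869.7 + 2010.3,    # D D*
    "bb-ud-01+": 5279.3 + 5324.7,    # B B*
    "bb-us-1/2 1+": 5324.7 + 5366.9, # B* B_s
}
MASSES = {                           # ACFTM central values, MeV
    "cc-ud-01+": 3719.0,
    "bb-ud-01+": 10282.0,
    "bb-us-1/2 1+": 10629.0,
}

def binding_energy(state):
    """Negative values mean the state lies below threshold (stable)."""
    return MASSES[state] - THRESHOLDS[state]
```

All three states come out below threshold, in qualitative agreement with the ACFTM values ($-150$, $-278$ and $-49$ MeV respectively), even though the exact numbers depend on which threshold masses are used.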
In the case of the states $[bc][\bar{u}\bar{d}]$ with $00^+$, $01^+$ and $12^+$, Sakai et al. described them as $D^{(*)}B^{(*)}$ molecular states with binding energies of about 20--60 MeV~\cite{sakai}. QCD sum rule research indicated that the extracted masses of both the scalar and the axial vector $[bc][\bar{q}\bar{q}]$ tetraquark states are also below the open-flavor thresholds $DB$ and $DB^*$~\cite{qcdsumbcqq}. A lattice QCD study showed the existence of a strong-interaction-stable tetraquark $[bc][\bar{u}\bar{d}]$ with $01^+$ lying 15 to 61 MeV below the $DB^*$ threshold~\cite{francis}. In the ACFTM, the states $[bc][\bar{u}\bar{d}]$ with $00^+$ and $01^+$ can be depicted as deeply bound states with binding energies of $136$ MeV and $171$ MeV, respectively. The state $[bc][\bar{u}\bar{d}]$ with $12^+$ is only weakly bound, by about 4 MeV. In this way, our conclusion on the three states $[bc][\bar{u}\bar{d}]$ is qualitatively consistent with that of Sakai. Karliner also predicted that the state $[bc][\bar{u}\bar{d}]$ with $00^+$ lies about 11 MeV below the $DB$ threshold~\cite{karliner}. The state $[bb][\bar{u}\bar{d}]$ with $12^+$, the partner of $[bc][\bar{u}\bar{d}]$ with $12^+$, can exist as a stable state with a binding energy of about 30 MeV, which is not supported by any existing results on the doubly heavy tetraquark states so far. With respect to the state $[bb][\bar{u}\bar{d}]$ with $1^-$, the ACFTM predicts that it lies about 114 MeV below the $BB$ threshold. The energy of this state is just 1 MeV above the threshold in Ref.~\cite{valcarce}. Very recently, Pflaumer et al. predicted the doubly heavy tetraquark state $[bb][\bar{u}\bar{d}]$ with $1^-$ to be a resonance 17 MeV above the $BB$ threshold using lattice QCD potentials~\cite{pflaumer}. In general, the states $[QQ][\bar{u}\bar{d}]$ with $I=0$ form bound states more easily than those with $I=1$ in the ACFTM.
All possible stable doubly heavy tetraquark states should, in general, be admixtures of the two color configurations $\left[[QQ]_{\bar{\mathbf{3}}_c}[\bar{q}\bar{q}]_{\mathbf{3}_c}\right]_{\mathbf{1}}$ and $\left[[QQ]_{\mathbf{6}_c}[\bar{q}\bar{q}]_{\bar{\mathbf{6}}_c}\right]_{\mathbf{1}}$ under the diquark-antidiquark picture taken as a working hypothesis. Theoretically, the magnitude of their mixing through the color-magnetic interaction is governed by terms of order $\frac{1}{m_im_j}$. Special attention is therefore paid to the role quantitatively played by the two color configurations in the ACFTM. The energies of all possible stable doubly heavy tetraquark states in the two color configurations, together with their coupled results, are given in Table \ref{color configurations}. It can be seen that the configuration $\left[[QQ]_{\bar{\mathbf{3}}_c}[\bar{q}\bar{q}]_{\mathbf{3}_c}\right]_{\mathbf{1}}$ dominates the energy of the doubly heavy tetraquark states. The mixing pushes the energy of the states down a little compared with that of the configuration $\left[[QQ]_{\bar{\mathbf{3}}_c}[\bar{q}\bar{q}]_{\mathbf{3}_c}\right]_{\mathbf{1}}$ alone. The heavier the heavy quark pair $[QQ]$, the weaker the effect. For the $[cc]$ and $[bc]$ sectors, it amounts to just over ten MeV and several MeV, respectively. In the $[bb]$ sector, the color configuration $\left[[bb]_{\mathbf{6}_c}[\bar{q}\bar{q}]_{\bar{\mathbf{6}}_c}\right]_{\mathbf{1}}$ has almost no effect on the masses of the states and can be ignored in the ACFTM. Therefore, the color configuration $\left[[QQ]_{\bar{\mathbf{3}}_c}[\bar{q}\bar{q}]_{\mathbf{3}_c}\right]_{\mathbf{1}}$ absolutely dominates the behavior of the doubly heavy tetraquark states in investigations of their properties~\cite{ccqqvij,park}.
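The coupled values in Table \ref{color configurations} can be reproduced by a two-level sketch: diagonalizing a symmetric $2\times2$ matrix whose diagonal holds the $\bar{\mathbf{3}}_c\otimes\mathbf{3}_c$ and $\mathbf{6}_c\otimes\bar{\mathbf{6}}_c$ energies. The off-diagonal coupling $V$ is not quoted in the text; here it is inferred from the coupled eigenvalue for $[cc][\bar{u}\bar{d}]$ with $01^+$, purely as an illustration of why channel mixing always pushes the lower state down.

```python
import math

# Two-level sketch of 3bar(x)3 -- 6(x)6bar color mixing for [cc][ud] 01+,
# central values from Table "color configurations" (MeV).
E3, E6, E_coupled = 3731.0, 4007.0, 3719.0

# Infer the off-diagonal coupling V from the quoted coupled eigenvalue:
# the lower root lam of [[E3, V], [V, E6]] satisfies (E3-lam)(E6-lam) = V^2.
V = math.sqrt((E3 - E_coupled) * (E6 - E_coupled))

def lower_eigenvalue(a, d, v):
    """Lower eigenvalue of the symmetric 2x2 matrix [[a, v], [v, d]]."""
    return 0.5 * (a + d) - math.sqrt(0.25 * (a - d) ** 2 + v ** 2)

lam = lower_eigenvalue(E3, E6, V)
# Mixing can only lower the ground state: lam <= min(E3, E6) for any V.
```

With the table values this gives $V \approx 59$ MeV and recovers the coupled energy of $3719$ MeV exactly; for the much heavier $[bb]$ pair the inferred $V$ is far smaller, consistent with the $\frac{1}{m_im_j}$ suppression noted above.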
However, the color configuration $\left[[qq]_{\mathbf{6}_c}[\bar{q}\bar{q}]_{\bar{\mathbf{6}}_c}\right]_{\mathbf{1}}$ must be taken into account in studies of the light tetraquark states~\cite{ccqqvij}. Regarding the stable doubly heavy tetraquark states $\left[[bb]_{\bar{\mathbf{3}}_c}[\bar{u}\bar{d}]_{\mathbf{3}_c}\right]_{\mathbf{1}}$ and $\left[[cc]_{\bar{\mathbf{3}}_c}[\bar{u}\bar{d}]_{\mathbf{3}_c}\right]_{\mathbf{1}}$ with $01^+$, the two heavy diquarks $[bb]_{\bar{\mathbf{3}}}$ and $[cc]_{\bar{\mathbf{3}}}$ must have spin one because their flavor and orbital parts are symmetric; the color-spin-orbit-isospin combination is $(c_a,s_a,l_a,i_a)=(\bar{\mathbf{3}}_c,1,0,0)$. The antidiquark $[\bar{u}\bar{d}]_{\mathbf{3}}$ couples to spin and isospin zero; the color-spin-orbit-isospin combination is $(c_b,s_b,l_b,i_b)=(\mathbf{3}_c,0,0,0)$. For the heavy diquarks $[bb]_{\bar{\mathbf{3}}}$ and $[cc]_{\bar{\mathbf{3}}}$, the color-magnetic interaction is therefore weakly repulsive. However, their large masses allow the two heavy quarks to approach each other closely because the kinetic energy is inversely proportional to the quark mass. Meanwhile, the Coulomb interaction is attractive in the diquark $[QQ]_{\bar{\mathbf{3}}}$. The heavier the heavy quark, the stronger the Coulomb interaction and the shorter the distance, see Table \ref{rms}. In the heavy-quark limit, the diquark $[QQ]_{\bar{\mathbf{3}}}$ gradually shrinks into a pointlike particle, which is qualitatively consistent with the conclusion in Quigg's work~\cite{heavy}. In addition to the attractive Coulomb interaction, there exist strong attractive interactions in the antidiquark $[\bar{u}\bar{d}]_{\mathbf{3}_c}$ with spin and isospin zero, generated through one-Goldstone-boson exchange (mainly $\pi$) and the color-magnetic interaction. This conclusion also holds for the state $\left[[bc]_{\bar{\mathbf{3}}_c}[\bar{u}\bar{d}]_{\mathbf{3}_c}\right]_{\mathbf{1}}$ with $01^+$.
In this way, the interaction in the doubly heavy states $[QQ][\bar{u}\bar{d}]$ with $01^+$ gradually becomes stronger with the increase of the mass ratio $\frac{m_Q}{m_{\bar{q}}}$, which was pointed out by many investigations of the nature of the doubly heavy tetraquark states and is confirmed again by the present work~\cite{potential,semay,valcarce}. The state $\left[[bc]_{\bar{\mathbf{3}}_c}[\bar{u}\bar{d}]_{\mathbf{3}_c}\right]_{\mathbf{1}}$ with $00^+$ is allowed because there is no symmetry restriction on the diquark $[bc]$. In contrast to the diquarks $[bb]$ and $[cc]$, the color-magnetic interaction in the diquark $[bc]$ is weakly attractive owing to its spin zero. Therefore, the energy of the state $\left[[bc]_{\bar{\mathbf{3}}_c}[\bar{u}\bar{d}]_{\mathbf{3}_c}\right]_{\mathbf{1}}$ with $00^+$ is 7 MeV lower than that of the state $\left[[bc]_{\bar{\mathbf{3}}_c}[\bar{u}\bar{d}]_{\mathbf{3}_c}\right]_{\mathbf{1}}$ with $01^+$. The state $\left[[bb]_{\bar{\mathbf{3}}_c}[\bar{u}\bar{d}]_{\mathbf{3}_c}\right]_{\mathbf{1}}$ with $01^-$ involves one angular excitation, which is allowed to occur between the two $b$-quarks because of their large masses. Meanwhile, the diquark $[bb]$ has spin zero, so the color-magnetic interaction is weakly attractive. Therefore, the state with $01^-$ has a lower mass than the other states with negative parity. In a word, there exist strong attractive interactions, amounting to more than 200 MeV, coming from the Coulomb interaction, the color-magnetic interaction and one-Goldstone-boson exchange (mainly $\pi$) in the stable doubly heavy tetraquark states $\left[[QQ]_{\bar{\mathbf{3}}_c}[\bar{u}\bar{d}]_{\mathbf{3}_c}\right]_{\mathbf{1}}$ with $I=0$, see Table \ref{contributions}. Lattice QCD simulations of the doubly heavy tetraquark states $[QQ][\bar{u}\bar{d}]$ also indicated that the phase shifts in the isospin-singlet channels suggest attractive interactions growing as $m_{\pi}$ decreases~\cite{lqcdheavy}.
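The sign pattern invoked above — a weakly repulsive color-magnetic interaction for a spin-1 $[QQ]_{\bar{\mathbf{3}}}$ diquark, a weakly attractive one for a spin-0 $[bc]_{\bar{\mathbf{3}}}$ — follows from the standard color-spin expectation values. The snippet below uses a schematic $V_{cm}\propto -\,C\,\langle\lambda_i\cdot\lambda_j\rangle\langle\sigma_i\cdot\sigma_j\rangle/(m_im_j)$, $C>0$, which is an assumed textbook form carrying the same color-spin structure as the ACFTM operator, not the full model interaction.

```python
# Sign check of the color-magnetic interaction inside a diquark, using the
# schematic form V_cm ~ -C <lambda_i.lambda_j><sigma_i.sigma_j>/(m_i m_j),
# C > 0 (an assumption about the overall convention, not the ACFTM operator).
def sigma_dot(S):
    """<sigma_i . sigma_j> for a two-quark state with total spin S."""
    return 2 * S * (S + 1) - 3      # -3 for S=0, +1 for S=1

# <lambda_i . lambda_j> in the antitriplet and sextet color channels.
LAMBDA_DOT = {"3bar": -8.0 / 3.0, "6": 4.0 / 3.0}

def cmi_sign(color, S):
    """+1 means repulsive, -1 attractive (positive constant C dropped)."""
    v = -LAMBDA_DOT[color] * sigma_dot(S)
    return (v > 0) - (v < 0)
```

With these values a $[bb]$ or $[cc]$ antitriplet diquark, forced to spin 1, gets a repulsive sign, while the spin-0 $[bc]$ antitriplet diquark gets an attractive one, matching the discussion above.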
The state $\left[[bb]_{\bar{\mathbf{3}}_c}[\bar{u}\bar{s}]_{\mathbf{3}_c}\right]_{\mathbf{1}}$ with $\frac{1}{2}1^+$ is analogous to the state $\left[[bb]_{\bar{\mathbf{3}}_c}[\bar{u}\bar{d}]_{\mathbf{3}_c}\right]_{\mathbf{1}}$ with $01^+$, except that the one-Goldstone-boson exchange now involves $K$ and $\eta$. The attraction is weaker than in the $01^+$ state because there is no $\pi$-meson exchange interaction in the state with $\frac{1}{2}1^+$. In order to quantitatively understand the dynamical mechanism forming the stable doubly heavy tetraquark states, we calculate the contributions from the different pieces of the ACFTM Hamiltonian to the binding energies of the stable states, which are presented in Table \ref{contributions}. One can find that most of the binding energy comes from the meson-exchange interactions, whose contributions equal their values in the tetraquark states. Once the meson exchanges are switched off, most of the stable states vanish; the exceptions are the states $[bb][\bar{u}\bar{d}]$ with $01^+$ and $12^+$ and the state $[bb][\bar{u}\bar{s}]$ with $\frac{1}{2}1^+$, which become weakly bound states with binding energies of several to a dozen MeV. The reason is that the meson exchange between the two light quarks in the states $[QQ][\bar{q}\bar{q}]$ does not occur in the threshold consisting of two $Q\bar{q}$ mesons. Therefore, the doubly heavy tetraquark states $[QQ][\bar{q}\bar{q}]$ provide an ideal laboratory for studying the different quark interactions because chiral symmetry is explicitly broken in the heavy sector but spontaneously broken in the light one. The color-magnetic interaction also plays an important role in the formation of the stable doubly heavy tetraquark states with isospin zero, contributing to the binding energies from 54 MeV to 208 MeV, see Table \ref{contributions}.
The Coulomb interaction, independent of spin and isospin, universally produces extremely strong attraction, ranging from 550 MeV to 719 MeV, in the stable doubly heavy tetraquark states, which can be understood from the small separations $\langle\mathbf{r}_{ij}^2\rangle^{\frac{1}{2}}$ between any two particles, especially the distance $\langle\mathbf{r}_{12}^2\rangle^{\frac{1}{2}}$ between the two heavy quarks, see Table \ref{rms}. However, the Coulomb interaction has no direct contribution to the binding energies of the heavy tetraquark states $[QQ][\bar{q}\bar{q}]$, see Table \ref{contributions}. It can be found that the contributions to the binding energies from the color-magnetic and Coulomb interactions grow with the increase of the mass ratio $\frac{m_{Q}}{m_{q}}$ within a group of states $[QQ][\bar{q}\bar{q}]$ sharing the same $[\bar{q}\bar{q}]$ and $IJ^P$, such as the group $[cc][\bar{u}\bar{d}]$, $[bc][\bar{u}\bar{d}]$ and $[bb][\bar{u}\bar{d}]$ with $01^+$. The states $[bc][\bar{u}\bar{d}]$ and $[bb][\bar{u}\bar{d}]$ with $12^+$ deserve emphasis as states stable against strong decay because their meson-exchange attraction is weak and their color-magnetic interaction is repulsive, a binding mechanism different from that of the states with $I=0$. The kinetic energies make large contributions to the binding energies of the states with $12^+$, see Table \ref{contributions}. The reason is that the repulsive color-magnetic interaction and the motion of the quarks prevent any two quarks from approaching each other. Meanwhile, the magnitude of the $\pi$-meson exchange in the states $[QQ][\bar{u}\bar{d}]$ with $12^+$ ($\langle\mathbf{\sigma}_i\cdot\mathbf{\sigma}_j\rangle\langle\mathbf{F}_i\cdot\mathbf{F}_j\rangle=1$) weakens to $\frac{1}{9}$ of that in the states $[QQ][\bar{u}\bar{d}]$ with $01^+$ ($\langle\mathbf{\sigma}_i\cdot\mathbf{\sigma}_j\rangle\langle\mathbf{F}_i\cdot\mathbf{F}_j\rangle=9$).
In this way, any two quarks sit far from each other, see Table \ref{rms}, so that the kinetic energies are greatly reduced, by about 400 MeV, compared with those of the states with $01^+$. For the same reasons, the energies of the other states $[QQ][\bar{q}\bar{q}]$ with $2^+$ in Table \ref{spectrum} are just a little higher than their corresponding thresholds. With respect to the state $[bb][\bar{u}\bar{s}]$ with $\frac{1}{2}1^+$, the one-Goldstone-boson-exchange interaction cannot provide large attraction owing to the lack of $\pi$-meson exchange; this system also reduces its kinetic energy to strengthen its stability. When the quark model with a two-body quadratic confinement potential obtained by Casimir scaling, together with the other ACFTM interactions, is applied directly to the heavy tetraquark states $[QQ][\bar{q}\bar{q}]$, see Table \ref{diff}, the energies are in general about 100 MeV higher than those given by the ACFTM, because Casimir scaling leads to anticonfinement for some color structures in a multiquark system~\cite{anticonfinement}. Meanwhile, the model with a two-body confinement potential is also known to be flawed phenomenologically because it leads to power-law van der Waals forces between color-singlet hadrons, which disappear automatically once the flip-flop potential found in LQCD simulations of tetraquark states is taken into account~\cite{flip-flop}. Compared with the two-body confinement potential, the four-body confinement potential based on the lattice picture pushes down the energies of the tetraquark states by about 100 MeV and can provide binding energies ranging from over 30 MeV to about 100 MeV; it is therefore a universal dynamical mechanism forming stable doubly heavy tetraquark states in the ACFTM. In other words, states with binding energies below 100 MeV are no longer stable in the model with a two-body confinement potential.
\section{summary} We systematically study the doubly heavy tetraquark states $[QQ][\bar{q}\bar{q}]$ in the diquark-antidiquark picture in order to search for all possible states stable against strong decay in the ACFTM, which contains a multibody confinement potential, $\sigma$-meson exchange, one-gluon-exchange and one-Goldstone-boson-exchange interactions. The ACFTM predicts that the tetraquark states $[cc][\bar{u}\bar{d}]$ with $01^+$; $[bc][\bar{u}\bar{d}]$ with $00^+$, $01^+$, and $12^+$; $[bb][\bar{u}\bar{d}]$ with $01^-$, $01^+$ and $12^+$; and $[bb][\bar{q}'\bar{s}]$ with $\frac{1}{2}1^+$ are stable against strong decay. The tetraquark states $[bb][\bar{u}\bar{d}]$ with $01^+$ and $[bb][\bar{q}'\bar{s}]$ with $\frac{1}{2}1^+$ are the most promising stable doubly heavy tetraquark states and should be explored in experiments in the near future. The strong decays of these stable doubly heavy tetraquark states are kinematically forbidden if they really exist; they can decay only weakly or electromagnetically and must therefore have small decay widths. The color configuration $\left[[QQ]_{\bar{\mathbf{3}}_c}[\bar{q}\bar{q}]_{\mathbf{3}_c}\right]_{\mathbf{1}}$ dominates the properties of the doubly heavy tetraquark states, in which the diquark $[QQ]$ can be regarded as a basic building block because of its small size. The Coulomb interaction is strongly attractive and greatly reduces the energies of the doubly heavy tetraquark states $[QQ][\bar{q}\bar{q}]$; however, it makes no direct contribution to the binding energies of the bound states. The multibody confinement potential based on the color flux-tube picture employs a collective degree of freedom whose dynamics plays an important role in the formation of the bound states, pushing the energies of the doubly heavy tetraquark states down by about 100 MeV compared with the two-body potential.
The doubly heavy tetraquark states $[QQ][\bar{u}\bar{d}]$ with $I=0$ are strongly bound, with binding energies of the order of 100 MeV coming mainly from the color-magnetic interaction and one-Goldstone-boson exchange. The doubly heavy tetraquark states $[QQ][\bar{u}\bar{d}]$ with $12^+$ are weakly bound because of weak meson exchange and a repulsive color-magnetic interaction; they are formed mainly by reducing their kinetic energies. The strange state $[bb][\bar{u}\bar{s}]$ with $\frac{1}{2}1^+$ is similar to the state $[bb][\bar{u}\bar{d}]$ with $01^+$ but is a relatively weakly bound state, because the one-Goldstone-boson-exchange interaction cannot provide large attraction owing to the lack of $\pi$-meson exchange. This state also reduces its kinetic energy to strengthen its stability. Until now, none of the stable doubly heavy states $[QQ][\bar{q}\bar{q}]$ has been observed in experiments, and therefore more comprehensive investigations of their properties are still needed. The experimental detection and analysis of the doubly heavy tetraquark states will undoubtedly provide an invaluable opportunity to rigorously check the validity of the different theoretical models and will therefore allow one to make more reliable theoretical predictions for the exotic hadron spectra. \acknowledgments {This research is partly supported by the National Science Foundation of China under Contracts Nos. 11875226, 11775118, 11535005 and Fundamental Research Funds for the Central Universities under Contracts No. SWU118111.}
\chapter*{\textbf{Appendix B}} \addcontentsline{toc}{chapter}{\textbf{Appendix B}} \begin{appendices} \renewcommand{\thechapter}{\arabic{chapter}} \renewcommand{\thesection}{B.\arabic{section}} \renewcommand{\theequation}{B.\arabic{equation}} \setcounter{equation}{0} In this appendix, we collect the basic notations and formulas that are often used in this thesis. \section{Notations} \label{AppendixB.1} A dot represents the time derivative, while a prime represents the spatial derivative, \ie $\dot{F}= d F/d t$ and $F'= d F/d x$. A double dot represents the second-order time derivative, while a double prime represents the second-order spatial derivative, \ie $\ddot{F}= d^2 F/d t^2$ and $F''= d^2 F/d x^2$. The partial derivative of $F$ with respect to $x^{\mu}$ is written as $\partial_{\mu} F \equiv \partial F/\partial x^{\mu}$. In an inertial system, we set \begin{equation} x^1 := x, \quad x^2 := y, \quad x^3 := z, \quad x^0 := ct \:, \end{equation} where $x, y, z$ are right-handed Cartesian coordinates, $t$ is time, and $c$ is the speed of light in a vacuum. Generally, \begin{itemize} \item Latin indices run from 1 to 3 (\eg, $i, j = 1, 2, 3$), and \item Greek indices run from 0 to 3 (\eg, $\mu, \nu = 0, 1, 2, 3$). \end{itemize} In particular, we use the Kronecker symbols \begin{equation} \delta_{ij}= \delta^{ij} = \delta^i_j := \begin{cases} 1 &\mbox{if } i = j \:, \\ 0 & \mbox{if } i\neq j \:, \end{cases} \end{equation} and the Minkowski symbols \begin{equation} \eta_{\mu \nu}= \eta^{\mu \nu} := \begin{cases} 1 &\mbox{if } \mu = \nu =0 \:, \\ -1 &\mbox{if } \mu = \nu =1,2,3 \:, \\ 0 & \mbox{if } \mu\neq \nu \:. \end{cases} \end{equation} In the context where the cosmic expansion is taken into account, we work in a spatially flat Friedmann-Robertson-Walker (FRW) universe with a metric \begin{equation} ds^2 = g_{\mu \nu} dx^{\mu} dx^{\nu}= - dt^2 + R^2(t) [dx^2 + dy^2 +dz^2] \:, \end{equation} where $R(t)$ is the scale factor of the universe.
We denote the cosmic time as $t$ and the conformal time as $\tau$, where $d\tau= dt/R(t)$. \section{Einstein's summation convention} \label{AppendixB.2} In the Minkowski space-time, we always sum over equal upper and lower Greek (resp. Latin) indices from $0$ to $3$ (resp. from 1 to 3). For example, for the position vector, we have \begin{equation} \mathbold{X} = x^j \mathbold{e}_j := \sum^3_{j=1} x^j \mathbold{e}_j \:, \end{equation} where $\mathbold{e}_1, \mathbold{e}_2, \mathbold{e}_3$ are orthonormal basis vectors of a right-handed orthonormal system. Moreover, \begin{equation} \eta_{\mu \nu} x^{\nu} := \sum^3_{\nu=0} \eta_{\mu \nu} x^{\nu} \:. \end{equation} Greek indices are lowered and raised with the help of the Minkowski symbols. That is, \begin{equation} x_{\mu} := \eta_{\mu \nu} x^{\nu} \:, \quad x_j = - x^j \:, \quad j=1,2,3 \:. \end{equation} For the indices $\alpha, \beta, \gamma, \delta = 0, 1, 2, 3$, we introduce the antisymmetric symbol $\epsilon^{\alpha \beta \gamma \delta}$, which is normalized by \begin{equation} \epsilon^{0123} := 1 \:, \end{equation} and which changes sign if two indices are transposed. In particular, $\epsilon^{\alpha \beta \gamma \delta}=0$ if two indices coincide. For example, $\epsilon^{0213}=-1$ and $\epsilon^{0113}=0$. Lowering of indices yields $\epsilon_{\alpha \beta \gamma \delta} := - \epsilon^{\alpha \beta \gamma \delta}$. For example, $\epsilon_{0123}=-1$. \end{appendices} \chapter*{\textbf{Appendix A}} \addcontentsline{toc}{chapter}{\textbf{Appendix A}} \begin{appendices} \renewcommand{\thechapter}{\arabic{chapter}} \renewcommand{\thesection}{A.\arabic{section}} \renewcommand{\theequation}{A.\arabic{equation}} \setcounter{equation}{0} For convenience, we summarize in this appendix some useful units and conversion relations that are often used in this thesis.
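For readers who prefer an executable restatement of these conventions, a minimal sketch of the antisymmetric symbol and of index lowering with $\eta_{\mu\nu}=\mathrm{diag}(1,-1,-1,-1)$ is:

```python
# Minkowski symbols eta_{mu nu} = diag(1, -1, -1, -1), as defined above.
ETA = [[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, -1]]

def epsilon(*idx):
    """Totally antisymmetric symbol with epsilon(0,1,2,3) = +1."""
    if len(set(idx)) < 4:
        return 0                       # zero when two indices coincide
    sign, idx = 1, list(idx)
    for _ in range(4):                 # bubble sort; count swap parity
        for j in range(3):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    return sign

def lower(x):
    """x_mu = eta_{mu nu} x^nu for a four-vector x = (x^0, x^1, x^2, x^3)."""
    return [sum(ETA[m][n] * x[n] for n in range(4)) for m in range(4)]
```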
\section{Fundamental constants} \label{AppendixA.1} The explicit numerical values of some of the universal constants in nature are given below$:$ \begin{itemize} \item Electron rest mass $m_e= 9.109 \times 10^{-31} \ \text{kg} \:,$ \item Electron electric charge $e= 1.6022 \times 10^{-19} \ \text{C} \:,$ \item Speed of light in free space $c=2.9979 \times 10^8 \ \text{m} \ \text{s}^{-1} \:,$ \item Planck's constant $h= 6.626 \times 10^{-34} \ \text{kg} \ \text{m}^2 \ \text{s}^{-1} \:,$ \item Reduced Planck's constant $\hbar=h/2 \pi=1.0546 \times10^{-34} \ \text{kg} \ \text{m}^2 \ \text{s}^{-1}=6.58 \times 10^{-25} \ \text{GeV} \ \text{s} \:,$ \item Boltzmann's constant $k_B = 8.62 \times 10^{-14} \ \text{GeV} \ \text{K}^{-1} \:,$ \item Permittivity of free space $\epsilon_0= 8.854 \times 10^{-12} \ \text{F} \ \text{m}^{-1} \:,$ \item Fine structure constant $\alpha = e^2 /(4\pi \epsilon_0 \hbar c)= (137.04)^{-1} \:,$ \item Gravitational constant $G=6.673 \times 10^{-11} \ \text{N} \ \text{m}^2 \ \text{kg}^{-2} \:.$ \end{itemize} \section{Units and conventions} \label{AppendixA.2} If not stated otherwise, throughout this thesis we use the natural units system in which $c=\hbar=k_B=1$. In this system there is only one fundamental dimension, the energy, in units of electron-volts ($\text{eV}$). Note that for the unit of mass the relation $E=mc^2$ is used implicitly; therefore one has the important conversion $1 \ \text{eV} = 1.602 \times 10^{-19} \ \text{kg} \ \text{m}^2 \ \text{s}^{-2}$. This implies the following conversion rules from the international system of units (SI) to the natural system of units$:$ \begin{align} 1 \ \text{s} & = 1.52 \times 10^{24} \ \text{GeV}^{-1} \:,\\ 1 \ \text{m} & = 5.10 \times 10^{15} \ \text{GeV}^{-1} \:,\\ 1 \ \text{kg} & = 5.62\times 10^{26} \ \text{GeV} \:,\\ 1 \ \text{K} & = 8.62 \times 10^{-14} \ \text{GeV} \:. \end{align} Spatial distances in astrophysics and cosmology are often measured in parsec, abbreviated with pc.
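The SI-to-natural-units conversion rules above follow from the values of $\hbar$, $\hbar c$, and $k_B$; a short numerical cross-check (using $\hbar c \approx 1.973\times10^{-16}\ \text{GeV m}$, a value not listed above) is:

```python
# Numerical cross-check of the SI -> natural-units conversion rules.
HBAR      = 6.582e-25        # GeV s
HBAR_C    = 1.973e-16        # GeV m  (assumed value, ~197.3 MeV fm)
C         = 2.9979e8         # m / s
J_PER_GEV = 1.6022e-10       # joules per GeV
K_B       = 8.617e-14        # GeV / K

second_in_inv_gev = 1.0 / HBAR          # ~1.52e24 GeV^-1
meter_in_inv_gev  = 1.0 / HBAR_C        # ~5.07e15 GeV^-1
kg_in_gev         = C**2 / J_PER_GEV    # ~5.61e26 GeV  (via E = m c^2)
kelvin_in_gev     = K_B                 # ~8.62e-14 GeV
```

The results reproduce the four conversion rules quoted above to within about a percent, the rounding level of the listed constants.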
In SI units, it is \begin{equation} 1 \ \text{pc} = 3.1 \times 10^{16} \ \text{m} \:. \end{equation} In natural units, we can express a parsec via an inverse energy, and the following conversion rule holds \begin{equation} 1 \ \text{Mpc} = 1.55 \times 10^{38} \ \text{GeV}^{-1} \:. \end{equation} A handy measure for masses of astrophysical objects is the mass of our sun $\mathrm{M}_{\odot}$. The solar mass is about \begin{equation} \mathrm{M}_{\odot} = 1.99 \times 10^{30} \ \text{kg} \:, \end{equation} or equivalently \begin{equation} \mathrm{M}_{\odot} = 1.11 \times 10^{57} \ \text{GeV} \:. \end{equation} \end{appendices} \chapter{\textbf{Summary and Conclusion}} \label{ch7} The standard model of particle physics, together with the standard model of cosmology, provides the best understanding of the origin of matter and the most acceptable explanation of the behavior of the universe. However, the shortcomings of the two standard models in solving some problems within their frameworks are promoting the search for new physics beyond them. This is why several studies are currently underway at the interface between cosmology, particle physics, and field theory. In this context, dark matter remains elusive, and its search poses one of the most motivating scenarios for going beyond the standard models. The existence of stable cold dark matter is established by many astrophysical and cosmological observations, such as the cosmic microwave background, the large scale structure, the galactic rotation curves, and beyond. In particular, the Planck 2018 data predict that the visible universe contains non-baryonic dark matter estimated to be more than five times more abundant than ordinary baryonic matter. However, the content and properties of dark matter remain one of the most pressing challenges in cosmology and particle physics.
Although there are many suggested candidates for the dark matter content, there is currently no evidence for any of them. In this thesis, we focused on understanding the nature of dark matter by studying the phenomenology of axions and axion-like particles, highly viable dark matter candidates. The typical axions are identified in the Peccei-Quinn mechanism as pseudo-Nambu-Goldstone bosons that appear after the spontaneous breaking of the PQ symmetry, which is introduced to explain the absence of the CP violation that is theoretically allowed by the Lagrangian of quantum chromodynamics. Furthermore, many extensions of the standard model of particle physics, including string theory models, generalize this concept and predict more such axion-like particles, which may arise as pseudo-scalar particles from the breaking of various global symmetries. The phenomenology of ALPs is the same as that of QCD axions, characterized by their coupling to two photons. The main difference between them is that the QCD axion coupling parameter is determined by the axion mass$;$ however, this is not necessarily the case for ALPs. Although the masses of generic ALPs are theoretically expected to be very tiny, they are nowadays regarded as leading candidates to compose a major part, if not all, of the dark matter content of the universe. This is motivated by the theoretical predictions for their properties, which are determined by their low mass and very weak interactions with the standard model particles. It is also worth mentioning that the main mechanisms believed to generate the dark matter ALPs in the early universe are the misalignment mechanism and the decay of strings and domain walls. In the first part of this thesis, we started by reviewing the current status of the searches for dark matter.
In particular, we briefly explained the first hints that dark matter exists, elaborated on the strong evidence physicists and astronomers have accumulated over the past years, discussed possible dark matter candidates, and described the various detection methods used to probe dark matter's mysterious properties. Then, the theoretical background of the QCD axion, including the strong CP problem, the Peccei-Quinn solution, and the phenomenological models of the axion, was described. This was followed by a brief discussion of the main properties of invisible axions. After that, the possible role that axions and ALPs can play in explaining the mystery of dark matter was illustrated. Further, to examine whether they can correctly explain the present abundance of dark matter, we investigated their production mechanisms in the early universe. Later, we discussed the recent astrophysical, cosmological, and laboratory bounds on the axion coupling to ordinary matter. In the second part of this thesis, we considered a homogeneous cosmic ALP background (CAB), analogous to the cosmic microwave background and motivated by many string theory models of the early universe. The coupling between the CAB ALPs traveling in cosmic magnetic fields and photons allows ALPs to oscillate into photons and vice versa. Using the M87 jet environment, we tested the CAB model that was put forward to explain the soft X-ray excess in the Coma cluster through the conversion of CAB ALPs into photons. We then demonstrated the potential of the active galactic nuclei jet environment to probe low-mass ALP models and to potentially exclude the model proposed to explain the Coma cluster soft X-ray excess.
We found that the overall X-ray emission from the M87 AGN requires an ALP-photon coupling $g_{a\gamma}$ in the range of $\sim 7.50 \times 10^{-15} \textup{--} 6.56 \times 10^{-14} \ \text{GeV}^{-1}$ for ALP masses $m_a \sim 1.1 \times 10^{-13} \ \text{eV}$, as long as the M87 jet is misaligned by less than about 20 degrees from the line of sight. These values are up to an order of magnitude smaller than the current best-fit value $g_{a\gamma} \sim 2 \times 10^{-13} \ \text{GeV}^{-1}$ obtained in the soft X-ray excess CAB model for the Coma cluster. The results presented in this part cast doubt on the current limits on the largest allowed value of $g_{a\gamma}$ and suggest a new constraint, $g_{a\gamma} \lesssim 6.56 \times 10^{-14} \ \text{GeV}^{-1}$, when a CAB is assumed. This might bring into question whether the CAB explanation of the Coma X-ray excess is even viable. In the third part of this thesis, we turned our attention to a scenario in which ALPs may form a Bose-Einstein condensate and, through their gravitational attraction and self-interactions, thermalize to spatially localized clumps. The coupling between ALPs and photons allows the spontaneous decay of ALPs into pairs of photons. For ALP condensates with very high occupation numbers, the stimulated decay of ALPs into photons is possible, and thus the photon occupation number can receive Bose enhancement and grow exponentially. We studied the evolution of the ALP field due to stimulated decays in the presence of an electromagnetic background, which exhibits an exponential increase in the photon occupation number, taking into account the role of the cosmic plasma in modifying the photon growth profile.
In particular, we focused on quantifying the effect of the cosmic plasma on the stimulated decay of ALPs, as this may have consequences for the detectability of the radio emission produced by this process with forthcoming radio telescopes such as the Square Kilometer Array, with the intention of detecting CDM ALPs. The results presented in this part argue that neither the current cosmic plasma nor the plasma in galactic halos can prevent the stimulated decay of ALPs in the $10^{-11} \text{--} 10^{-4} \ \text{eV}$ mass range. Interestingly, the radio signal produced via the stimulated decay of ALPs in the $10^{-6} \text{--} 10^{-4} \ \text{eV}$ mass range is expected to be within the reach of the next generation of the SKA radio telescopes. It is worth noting that this stimulated decay is only allowed in the redshift window $z \lesssim 10^{4}$ and is not efficient at higher redshift due to the effects of the plasma. The detection of a signature related to this technique might provide essential priors and predictions that are of paramount importance for understanding the properties of dark matter, and it offers an exciting scenario to explain several unexpected astrophysical observations. Finally, in future work, it is worth extending the study of this PhD project to examine whether ultra-light ALPs can solve the core-cusp problem and whether the CAB can explain the EDGES 21 cm anomaly. Since there are still many exciting theoretical aspects of the nature of dark matter to explore from different perspectives, studying these topics could help solve the mystery of dark matter. This, in addition, may have a future role in the discovery of new physics beyond the standard model of particle physics, and it could possibly change our fundamental understanding of physics.
We can sum up that, with the work presented here, we point out that research on axions and ALPs will be one of the leading frontiers in the near future, since the discovery of these particles could solve some of the common unresolved problems between particle physics and cosmology and take us a step forward towards understanding nature. For completeness, some useful notations and conversion relations are outlined in the appendix. \chapter{\textbf{Potential of SKA to Detect CDM ALPs with Radio Astronomy}} \label{ch6} Recently, it has been pointed out that ALPs may form a Bose-Einstein condensate and, through their gravitational attraction and self-interactions, thermalize to spatially localized clumps. The coupling between ALPs and photons allows the spontaneous decay of ALPs into pairs of photons. For ALP condensates with very high occupation numbers, the stimulated decay of ALPs into photons is also possible, and thus the photon occupation number can receive Bose enhancement and grow exponentially. In this chapter, we study the evolution of the ALP field due to stimulated decays in the presence of an electromagnetic background, which exhibits an exponential increase in the photon occupation number, taking into account the role of the cosmic plasma in modifying the photon growth profile. In particular, we focus on quantifying the effect of the cosmic plasma on the stimulated decay of ALPs, as this may have consequences for the detectability of the radio emission produced by this process with forthcoming radio telescopes such as the Square Kilometer Array, with the intention of detecting CDM ALPs. \section{Introduction} \label{sec.6.1} As we discussed before, the coupling between ALPs and photons arises from the ALP-two-photon interaction vertex \cite{sikivie1983experimental}.
This ALP-photon coupling allows for the Primakoff conversion between ALPs and photons in the presence of an external electric or magnetic field, as well as for the radiative decay of ALPs into pairs of photons. These two processes provide the theoretical basis for the majority of the recent mechanisms to search for ALPs in both laboratory experiments and astrophysical environments. For the QCD axion, interactions with electrons and hadrons are also considered in the literature. However, the main focus here lies only on the effect of the ALP-photon coupling, for two reasons. The first is that the coupling to photons is the characteristic most commonly shared between the QCD axion and generic ALPs. The second is that the photon is the only bosonic standard model particle known for certain to be lighter than generic dark matter ALPs \cite{alonso2020wondrous}. However, enhancing the effects of other couplings, such as the self-couplings, is possible through modifications to the standard ALP models. Although the effects of the ALP self-interactions remain very weak, they can become important in the thermal equilibrium of an ALP condensate. An essential consequence in the context of ALP decay is the fact that ALPs are identical bosons; their very low mass indicates that their density and occupation numbers can be very high. Therefore it has been suggested that ALPs may form a Bose-Einstein condensate with only short-range order \cite{sikivie2009bose, davidson2013bose, davidson2015axions}. This ALP BEC can then thermalize through gravitational attraction and self-interactions to spatially localized clumps \cite{chang1998studies, sikivie2009bose, erken2012cosmic}. By clump, we mean a gravitationally bound structure composed of axions or ALPs, which may be present in the dark matter halos around many galaxies; for more detail about the properties of these BEC clumps, see references \cite{schiappacasse2018analysis, hertzberg2018scalar}.
Of particular importance here is that the system in such a high-occupancy regime is well described by a classical field \cite{guth2015dark}. Indeed, the spontaneous decay rate of ALPs is very small as a result of both their very low mass and their very weak coupling. However, the rapid expansion of the early universe would lead to an extremely homogeneous and coherent initial state of the ALP field. Hence, all ALPs are in the same state with a very high occupation number. Therefore, the stimulated decay of ALPs $a \rightarrow \gamma \gamma$ at a very high rate is very likely in the presence of an electromagnetic background with a suitable frequency. During this process, the electromagnetic wave will be greatly enhanced, and Bose enhancement effects seem plausible. In a scenario of an empty and non-expanding universe, the resulting stimulated emission would induce extremely rapid decay of ALPs into photons, invalidating most of the interesting parameter space \cite{alonso2020wondrous}. Nevertheless, the rapid expansion of the early universe and the plasma effects can disrupt this extremely fast process. These effects are crucial as they modify the evolution of the ALP field resulting from stimulated decays in the presence of an electromagnetic background. Indeed, this is an exciting approach to look for dark matter ALPs using current and near-future experiments along with astrophysical observations. In the past years, there have been many attempts to detect signals from dark matter ALPs using radio telescopes, most of which were based on the Primakoff conversion of ALPs into photons \cite{caputo2018looking, caputo2019detecting}. Recently it was shown that stimulated ALP decays in astrophysical environments may also generate radio signals comparable with those of the Primakoff conversion \cite{caputo2018looking, sigl2017astrophysical}.
In the work presented in this chapter, we study the evolution of the ALP field due to stimulated decays in the presence of an electromagnetic background, which exhibits an exponential increase in the photon occupation number, taking into account the role of the cosmic plasma in modifying the photon growth profile. In particular, we focus on how the plasma modifies the detectability of the stimulated emission from ALP clumps. Based on this scenario, we explore the potential of near-future radio telescopes such as the Square Kilometer Array \cite{dewdney2009square} telescopes to detect an observational signature of ALP decay into photons in suitable astrophysical environments. The outline of this chapter is as follows. In section \ref{sec.6.2}, we review the theoretical basics of ALPs and their spontaneous and stimulated decay. In section \ref{sec.6.3}, we use the classical equation of motion to describe the evolution of the electromagnetic field in an ALP background. In section \ref{sec.6.4}, the corrections to the equation of motion due to plasma effects are considered. In section \ref{sec.6.5}, an approximate estimation of the plasma density distribution in the universe is performed. In section \ref{sec.6.6}, we discuss the effect of the plasma density on preventing the stimulated decay of ALPs, based on the numerical solutions for the ALP-photon decay. Then the possibility of detecting a radio signature from the stimulated decay of ALPs using the SKA radio telescopes is discussed in section \ref{sec.6.7}. Finally, our conclusion is provided in section \ref{sec.6.8}.
\section{{Interactions between ALPs and photons}} \label{sec.6.2} As we discussed before, most of the phenomenological implications of ALPs are due to their feeble interactions with the standard model particles that take place through the Lagrangian \cite{sikivie1983experimental, raffelt1988mixing, anselm1988experimental} \begin{equation} \label{eq.6.2.1} \mathrm{\ell}_{a} = \frac{1}{2} \partial_{\mu} a \partial^{\mu} a - V(a) - \frac{1}{4} \mathrm{F}_{\mu \nu} \mathrm{F}^{\mu \nu} - \frac{1}{4} g_{a\gamma} a \mathrm{F}_{\mu \nu} \tilde{\mathrm{F}}^{\mu \nu} \:. \end{equation} The specific form of the potential $V(a)$ is model dependent. For the QCD axion, it results from non-perturbative QCD effects associated with instantons, which are non-trivial to compute with accuracy. For a simpler and more general case, the ALP potential can be written in terms of the ALP mass $m_a$ and the energy scale of PQ symmetry breaking $f_a$ in the simple form \begin{equation} \label{eq.6.2.2} V(a) = m_a^2 f_a^2 \left[ 1- \cos \left( \frac{a}{f_a} \right) \right] \:. \end{equation} Since we shall only be interested in the non-relativistic regime for ALPs, we focus on very small field configurations $a \ll f_a$. For a high ALP density, the ALP self-interactions cannot be neglected. In this case, the potential can be expanded as a Taylor series with the dominant terms \begin{equation} \label{eq.6.2.3} V(a) \approx \frac{1}{2} m_a^2 a^2 - \frac{1}{4 !} \frac{m_a^2}{f_a^2} a^4+ \dots \:. \end{equation} Note that the specific values of the ALP mass $m_a$ and the energy scale $f_a$ are model dependent. The coupling strength of the ALP to two photons is given by the relation \begin{equation} \label{eq.6.2.4} g_{a \gamma} = \frac{\alpha}{2 \pi f_a} \ C\:, \end{equation} where $C$ is a dimensionless model-dependent coupling parameter, usually thought to be of order unity.
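As a short consistency check of the expansion used here, Taylor-expanding the cosine in equation \eqref{eq.6.2.2} for $a \ll f_a$ gives

```latex
V(a) = m_a^2 f_a^2 \left[ 1 - \cos\left(\frac{a}{f_a}\right) \right]
     = m_a^2 f_a^2 \left[ \frac{1}{2!}\left(\frac{a}{f_a}\right)^{2}
       - \frac{1}{4!}\left(\frac{a}{f_a}\right)^{4} + \dots \right]
     = \frac{1}{2} m_a^2 a^2 - \frac{1}{4!}\,\frac{m_a^2}{f_a^2}\, a^4 + \dots
```

so the quartic self-interaction term carries a factor $m_a^2/f_a^2$.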
\subsection{Spontaneous decay rate of ALPs} \label{sec.6.2.1} The ALP-two-photon interaction vertex allows for the Primakoff conversion between ALPs and photons in the presence of an external electric or magnetic field, as well as for the radiative decay of ALPs into pairs of photons. The majority of the ALP searches in both laboratory experiments and astrophysical environments are based on these two processes. The spontaneous decay rate of an ALP with mass $m_a$ in vacuum into a pair of photons, each with energy $\omega=m_a / 2$, can be obtained from the usual perturbation theory calculations in terms of its mass and the ALP-photon coupling $g_{a \gamma}$ \cite{kelley2017radio}. The lifetime of ALPs is given by the inverse of their decay rate as \begin{equation} \label{eq.6.2.5} \tau_a \equiv \mathrm{\Gamma}_{\text{pert}}^{-1} = \frac{64 \pi}{m_a^3 g_{a\gamma}^2} \:. \end{equation} Strictly speaking, this is the specific form for the QCD axion rather than for generic ALPs; we can still use it as a reasonable estimate. For typical QCD axions with mass $m_a \sim 10^{-6} \ \text{eV}$ and coupling to photons $g_{a\gamma} \sim 10^{-12} \ \text{GeV}^{-1}$, one can insert these values into equation \eqref{eq.6.2.5} and evaluate the perturbative decay time, which is found to be about $\sim 2 \times 10^{44} \ \text{s}$. This lifetime for ALPs is much larger than the present age of the universe $\sim 4.3 \times 10^{17} \ \text{s}$. Therefore ALPs seem to be super stable on cosmic scales, and perhaps this is the main reason for neglecting the ALP decay in the literature. According to this scenario, the spontaneous decay of ALPs cannot be responsible for producing any observable signal that would be detectable by the current or near-future radio telescopes.
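As a rough numerical sketch (my own illustration, not from the thesis), the lifetime in equation \eqref{eq.6.2.5} can be evaluated in natural units and converted to seconds. Since $\tau_a \propto m_a^{-3} g_{a\gamma}^{-2}$, the result is very sensitive to the adopted parameter values, so the figure below should only be read as confirming that the lifetime vastly exceeds the age of the universe.

```python
import math

HBAR_EV_S = 6.582e-16  # hbar in eV*s; converts a time in 1/eV to seconds

def alp_lifetime_s(m_a_eV, g_GeV_inv):
    """Spontaneous ALP lifetime tau = 64*pi / (m_a^3 g^2), returned in seconds."""
    g_eV_inv = g_GeV_inv * 1e-9                 # 1 GeV^-1 = 1e-9 eV^-1
    tau_natural = 64.0 * math.pi / (m_a_eV**3 * g_eV_inv**2)  # in eV^-1
    return tau_natural * HBAR_EV_S

AGE_OF_UNIVERSE_S = 4.3e17
tau = alp_lifetime_s(1e-6, 1e-12)
print(f"tau = {tau:.2e} s  ({tau / AGE_OF_UNIVERSE_S:.1e} x age of universe)")
```

The steep $m_a^{-3}$ scaling means a heavier axion decays much faster, but for any parameters in the quoted range the lifetime remains astronomically long.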
\subsection{Stimulated decay rate of ALPs} \label{sec.6.2.2} Up to this point, we ignored the fact that ALPs are bosons; their very low mass indicates that their density and occupation numbers can be very high. Therefore it has been suggested that ALPs may form a Bose-Einstein condensate with only short-range order. Note that the authors of \cite{guth2015dark} argued that an ALP BEC with long-range correlations is unjustified, as the condensation is driven by attractive interactions. This ALP BEC can then thermalize through gravitational attraction and self-interactions to spatially localized clumps. These clumps are gravitationally bound structures composed of axions or ALPs in the form of either solitons or Bose stars, and they may be present in the dark matter halos around many galaxies \cite{guth2015dark}. Indeed, if ALPs are present during inflation, their field is expected to be completely homogenized within the horizon. Therefore, all ALPs are expected to be in the same state, which means their occupation number is enormous, \begin{equation} \label{eq.6.2.6} f_a \sim \frac{\rho_a}{m_a H^3} \sim 10^{40} \left( \frac{\text{eV}}{m_a} \right)^2 \left( \frac{A}{10^{11} \text{GeV}} \right)^2 \left( \frac{m_a}{H} \right)^3 \:, \end{equation} where $m_a$ represents the mass of the ALP field, and $A$ denotes the typical field amplitude. Such an enormous occupation number makes the effects of Bose enhancement seem possible, as we now describe. However, this alone is not sufficient to lead to an enhanced decay rate; Bose enhancement also requires enormous occupation numbers in the final state. In this case, we are studying the population of photons, and the ALP decay into two photons is a two-body decay that produces identical photons with the same energy. Therefore, these photons are expected to quickly accumulate with an enormous occupation number, causing Bose enhancement to ensue.
In this case, the system with such a high occupancy is well described by a classical field. The stimulated decay of ALPs $a \rightarrow \gamma \gamma$ at a very high rate is possible in the presence of an electromagnetic wave with a suitable frequency. During this process, the electromagnetic wave will be greatly enhanced. Given the very low mass of the ALPs and their incredibly weak coupling, the stimulated emission in an empty and non-expanding universe would induce ALPs to decay very rapidly into photons, nullifying most of the interesting parameter space. The decay time obtained in \cite{alonso2020wondrous} from the equation of motion for ALP condensates is just about $\sim 10^{-7} \ \text{s}$, which is dramatically small compared to the perturbative decay time $\sim 2 \times 10^{44} \ \text{s}$. Indeed this scenario is quite problematic, as it cannot leave ALPs stable enough to account for the dark matter content of the universe. In a recent paper \cite{alonso2020wondrous}, the authors have looked for an explanation of the significant divergence between the ALP decay times obtained from the classical and the standard perturbative calculations. At first glance, taking Bose enhancement into account, the calculations would lead to a dramatically small ALP decay time. Indeed this would be the scenario if the stimulated decay occurred in an empty and non-expanding universe. However, it is argued in \cite{abbott1983cosmological, preskill1983cosmology, dine1983not} that there are two effects that can reduce the rate of the stimulated decay of ALPs into photons. The rapid expansion of the early universe redshifts the target photon population, and therefore it can suppress the enhanced decay rate of ALPs. In addition to the expansion of the universe, the plasma effects are also crucial, as they modify the propagation of photons and prevent the early decay of ALPs.
Inside the plasma, photons have an effective mass that kinematically forbids the decay of lighter particles, including ALPs, or at least a portion of them. Consequently, the redshifting of the decay products due to the expansion of the universe, as well as the effective plasma mass of the photon, can prevent an extremely fast decay of ALPs into photons. Indeed, this is an exciting approach to look for dark matter ALPs using current and near-future experiments along with astrophysical observations. Therefore, in the following section, we use the classical equation of motion to account for the Bose enhancement of ALPs in the cosmic plasma. Let us now consider the enhancement of the ALP decay rate by a stimulation effect in the presence of a photon background. The number density of ALPs $n_a$ obeys the Boltzmann equation, which can be written in the following form \begin{equation} \label{eq.6.2.7} \dot{n}_a \simeq - n_a \mathrm{\Gamma}_{\text{pert}} (1+2 f_{\gamma}) \:, \end{equation} where $f_{\gamma}$ indicates the photon occupation number. Therefore, we have an effective ALP decay rate \begin{equation} \label{eq.6.2.8} \mathrm{\Gamma}_{\text{eff}} = \mathrm{\Gamma}_{\text{pert}} (1+2 f_{\gamma}) \:. \end{equation} The term between the brackets is the stimulation factor due to the background photon radiation. Clearly, only in the limit of very low photon occupation, $f_{\gamma} \ll 1$, is the effective ALP decay rate indicated by the previous formula identical to the spontaneous decay rate $\mathrm{\Gamma}_{\text{pert}}$ given by equation \eqref{eq.6.2.5}. Otherwise, if the final state photon modes have been significantly populated by previous decays, then $\mathrm{\Gamma}_{\text{eff}}$ can become much larger by stimulated emission.
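A minimal numerical sketch of equations \eqref{eq.6.2.7} and \eqref{eq.6.2.8}, in arbitrary toy units chosen purely for illustration (the mapping of $f_\gamma$ to a photon number density via a reference density $n_\star$ is an assumption of this sketch, not a value from the text): once $f_\gamma$ exceeds unity, the photon yield switches from slow linear growth to exponential growth, until back-reaction depletes the ALPs. The combination $n_a + n_\gamma/2$ is conserved, since each decay removes one ALP and adds two photons.

```python
# Toy integration of the Boltzmann equations for stimulated ALP decay.
# All quantities are in arbitrary units; GAMMA and N_STAR are illustrative
# choices, not physical values.
GAMMA = 1e-6    # spontaneous decay rate Gamma_pert (toy units)
N_STAR = 1e-4   # photon density at which f_gamma = 1 (assumed normalization)

def rhs(n_a, n_g):
    f_gamma = n_g / N_STAR                      # photon occupation number (toy)
    gamma_eff = GAMMA * (1.0 + 2.0 * f_gamma)   # Eq. (6.2.8)
    return -n_a * gamma_eff, 2.0 * n_a * gamma_eff

def evolve(n_a=1.0, n_g=0.0, dt=0.05, t_end=1000.0):
    t = 0.0
    while t < t_end:
        # classical fourth-order Runge-Kutta step
        k1 = rhs(n_a, n_g)
        k2 = rhs(n_a + 0.5*dt*k1[0], n_g + 0.5*dt*k1[1])
        k3 = rhs(n_a + 0.5*dt*k2[0], n_g + 0.5*dt*k2[1])
        k4 = rhs(n_a + dt*k3[0], n_g + dt*k3[1])
        n_a += dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        n_g += dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        t += dt
    return n_a, n_g

n_a, n_g = evolve()
print(f"n_a = {n_a:.3e}, n_gamma = {n_g:.3e}, n_a + n_gamma/2 = {n_a + n_g/2:.6f}")
```

By the end of the run the ALPs are almost fully depleted, while $n_a + n_\gamma/2$ stays at its initial value to machine precision.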
Further, the occupation number of the photon modes with momentum around $k = m_a/2$ is given by \begin{equation} \label{eq.6.2.9} f_\gamma (k=m_a/2) = \frac{4 \pi^2 a_0}{m_a^2 g_{a \gamma}} \frac{n_\gamma}{n_a} \:. \end{equation} Therefore, the required condition for the occupation number of photons $f_\gamma$ to be greater than unity, and thus lead to stimulated emission, becomes \begin{equation} \label{eq.6.2.10} n_\gamma > \frac{g_{a \gamma} m_a^2}{4 \pi^2 a_0} n_a \:. \end{equation} By using the spontaneous decay rate formula \eqref{eq.6.2.5}, the effective ALP decay rate reads \begin{equation} \label{eq.6.2.11} \mathrm{\Gamma}_{\text{eff}} = \frac{g^2_{a \gamma} m_a^3}{64 \pi} \left( 1+ \frac{8 \pi^2 a_0}{m_a^2 g_{a \gamma}} \frac{n_\gamma}{n_a} \right) \:. \end{equation} Then, the Boltzmann equation for the photon number density can be written as \begin{equation} \label{eq.6.2.12} \dot{n}_\gamma = 2 \mathrm{\Gamma}_{\text{eff}} \ n_a = \frac{g^2_{a \gamma} m_a^3}{32 \pi} \left( 1+ \frac{8 \pi^2 a_0}{m_a^2 g_{a \gamma}} \frac{n_\gamma}{n_a} \right) n_a \:. \end{equation} If the condition in equation \eqref{eq.6.2.10} is fulfilled, the previous Boltzmann equation has an exponentially growing solution of the general form \begin{equation} \label{eq.6.2.13} n_\gamma = \exp [\tilde{\mu} t] n_\gamma (0) \:, \end{equation} with growth rate \begin{equation} \label{eq.6.2.14} \tilde{\mu} = \frac{\pi g_{a \gamma} m_a a_0}{4} \:. \end{equation} We notice here that the stimulated decay leads to an exponential growth with a rate proportional to $g_{a \gamma}$, instead of a rate proportional to $g^2_{a \gamma}$ as in the spontaneous decay. \section{Evolution of electromagnetic field in ALPs background} \label{sec.6.3} To investigate the stimulated decay of the ALP condensates, we study the growth of the electromagnetic field using a simple model in which the ALP background field is uniform and the density is time-independent.
The stable ALP condensate solutions are non-relativistic, with huge occupancy numbers \cite{schiappacasse2018analysis, hertzberg2018scalar}. In the non-relativistic regime it is useful to express the real ALP field $a(x,t)$ in terms of a complex Schr\"{o}dinger field $\phi(x,t)$ according to \begin{equation} \label{eq.6.3.1} a(x,t) = \frac{1}{\sqrt{2 m_a}} \left[ e^{-im_a t} \phi(x,t) + e^{im_a t} \phi^\ast(x,t) \right] \:. \end{equation} In general, such a homogeneous ALP background field is unstable due to gravitational and self-interactions, which lead this configuration to collapse and form ALP condensate clumps. For simplicity, we ignore this more realistic situation and restrict ourselves to treating the ALPs as homogeneous and oscillating periodically in the classical field limit. This approximately represents a harmonic oscillation for small field amplitudes \begin{equation} \label{eq.6.3.2} a(t) = a_0 \cos(\omega_0 t) \:, \end{equation} where the amplitude of oscillation is $a_0$. In the non-relativistic limit for ALPs, the frequency is well approximated by $\omega_0 \approx m_a$. We now assume that the amplitude $a_0$ is independent of time and position. Then the dynamics of the ALP field is given by the standard non-relativistic Hamiltonian \begin{equation} \label{eq.6.3.3} H = \int d^3x \left[ \frac{1}{2} \left( \dfrac{\partial a}{\partial t} \right)^2 + \frac{1}{2} (\nabla a)^2 + \frac{1}{2} m_a^2 a^2 \right] \:. \end{equation} From the Hamiltonian, one can derive the ALP energy density that contributes to the total energy density of the universe as \begin{equation} \label{eq.6.3.4} \rho_a = \frac{1}{2} m_a^2 a_0^2 \:. \end{equation} Thus, the equation of motion for the ALP field in the Friedmann-Lema{\^\i}tre-Robertson-Walker (FLRW) cosmological background takes the familiar form \begin{equation} \label{eq.6.3.5} \ddot{a} + 3 H \dot{a} - \frac{\nabla^2}{R^2(t)} a+ V'(a) = 0 \:. \end{equation} Here, $V' \equiv dV/da$.
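As an illustrative estimate (the input values are my assumptions, not from the text: a halo dark matter density of $\rho_a \approx 0.3\ \text{GeV}\,\text{cm}^{-3}$, a commonly quoted local value, and $m_a = 10^{-5}\ \text{eV}$), equation \eqref{eq.6.3.4} can be inverted to obtain the oscillation amplitude $a_0 = \sqrt{2\rho_a}/m_a$ in natural units:

```python
import math

HBARC_EV_CM = 1.9732697e-5  # hbar*c in eV*cm; converts cm^-3 to eV^3

def amplitude_a0_eV(rho_GeV_cm3, m_a_eV):
    """Invert rho_a = (1/2) m_a^2 a_0^2 for a homogeneous ALP field."""
    rho_eV4 = rho_GeV_cm3 * 1e9 * HBARC_EV_CM**3  # GeV/cm^3 -> eV^4
    return math.sqrt(2.0 * rho_eV4) / m_a_eV

a0 = amplitude_a0_eV(0.3, 1e-5)  # assumed halo density and ALP mass
print(f"a_0 = {a0:.1f} eV")
```

Since $a_0 \propto m_a^{-1}$ at fixed density, lighter ALPs correspond to larger field amplitudes, which matters for the ratio discussed in section \ref{sec.6.4}.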
Since the equation of motion \eqref{eq.6.3.5} can be written in dimensionless form, the dynamics of the ALP does not depend on the values of $m_a$ and $f_a$. For the homogeneous case of the ALP field $a$, the equation of motion reduces to \begin{equation} \label{eq.6.3.6} \ddot{a} + 3 H \dot{a} + m_a^2 a = 0 \:. \end{equation} As we discussed above, the classical field equations are enough to describe the regime of ALP condensates with a very high occupancy number. Therefore we consider a quantized four-vector potential $\hat{\mathrm{A}}^{\mu}=(\hat{\mathrm{A}}_0, \hat{\accbm{\mathrm{A}}})$ in a classical background given by the ALP field $a$. We vary the Lagrangian \eqref{eq.6.2.1} with respect to $\hat{\mathrm{A}}^{\mu}$ to obtain the Heisenberg equation of motion. Assuming a non-relativistic and homogeneous ALP field as given in equation \eqref{eq.6.3.1}, the space derivatives of the ALP field $a$ can be neglected compared with $\partial a / \partial t$, at least at the scale of the photon momentum. For the electromagnetic field, we work in the Lorenz gauge, $\partial_\mu \hat{\accbm{\mathrm{A}}}^\mu = 0$, and use the remaining gauge freedom to set $\hat{\accbm{\mathrm{A}}}_0 = 0$. Then the resulting equations of motion for the ALP field $a$ propagating in the electromagnetic field $\hat{\mathrm{A}}^{\mu}$ are \begin{align} \ddot{a} + m_a^2 a - g_{a \gamma} \dot{\hat{\accbm{\mathrm{A}}}} \cdot ( \nabla \times \hat{\accbm{\mathrm{A}}} ) &= 0 \:, \label{eq.6.3.7} \\ \label{eq.6.3.8} \ddot{\hat{\accbm{\mathrm{A}}}} - \nabla^2 \hat{\accbm{\mathrm{A}}} + g_{a \gamma} \dot{a} \, (\nabla \times \hat{\accbm{\mathrm{A}}} )&= 0 \:. \end{align} Now we consider an incoming monochromatic electromagnetic wave with frequency $\omega > 0$. We assume that the electromagnetic field is weak. Then we can take $a_0$ to be unchanged, and the ALP field can be regarded as a background field.
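As a numerical sketch of equation \eqref{eq.6.3.6} (toy units; the matter-era Hubble rate $H = 2/(3t)$ and the parameter values are assumptions of this example), one can verify the well-known behaviour that for $m_a \gg H$ the oscillation amplitude redshifts as $R^{-3/2} \propto t^{-1}$, so the ALP energy density dilutes like matter, $\rho_a \propto R^{-3}$:

```python
import math

M_A = 100.0  # ALP mass in toy units, chosen so that m_a >> H throughout

def hubble(t):
    return 2.0 / (3.0 * t)  # matter-dominated era: H = 2/(3t), R ~ t^(2/3)

def rhs(t, a, adot):
    return adot, -3.0 * hubble(t) * adot - M_A**2 * a

def peak_amplitudes(t0=1.0, t1=50.0, dt=5e-4, a=1.0, adot=0.0):
    """RK4 integration; record the peak |a| inside two late-time windows."""
    t, peaks = t0, {(15.0, 20.0): 0.0, (40.0, 45.0): 0.0}
    while t < t1:
        k1 = rhs(t, a, adot)
        k2 = rhs(t + dt/2, a + dt/2*k1[0], adot + dt/2*k1[1])
        k3 = rhs(t + dt/2, a + dt/2*k2[0], adot + dt/2*k2[1])
        k4 = rhs(t + dt, a + dt*k3[0], adot + dt*k3[1])
        a    += dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        adot += dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        t += dt
        for lo, hi in peaks:
            if lo <= t < hi:
                peaks[(lo, hi)] = max(peaks[(lo, hi)], abs(a))
    return peaks

peaks = peak_amplitudes()
amp1, amp2 = peaks[(15.0, 20.0)], peaks[(40.0, 45.0)]
print(f"amplitude ratio = {amp1/amp2:.3f}, t^-1 envelope predicts {40.0/15.0:.3f}")
```

Since the envelope decreases, the peak in each window occurs just after the window opens, so the measured ratio tracks the $t^{-1}$ prediction closely.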
Now, because we consider the ALP field to be time-dependent, it is useful to make a Fourier transform with respect to the spatial coordinates only \begin{equation} \label{eq.6.3.10} \hat{\accbm{\mathrm{A}}}(t,\hat{z}) = \int \dfrac{d^3k}{(2\pi)^3} e^{-i\hat{k} \cdot \hat{z}} \hat{\accbm{\mathrm{A}}}_{\bm{k}}(t) \:. \end{equation} The curl term $(\nabla \times \hat{\accbm{\mathrm{A}}})$ in equations \eqref{eq.6.3.7} and \eqref{eq.6.3.8} can then be diagonalized by expanding the vector potential as \begin{equation} \label{eq.6.3.9} \hat{\accbm{\mathrm{A}}}_{\bm{k}} (t)= \sum_{\lambda = \pm} \left[ \hat{a}_{\bm{k}, \lambda} \bm{\epsilon}_{\bm{k}, \lambda} s_{\bm{k}, \lambda} (t)+\hat{a}^{\dagger}_{\bm{k}, \lambda} \bm{\epsilon}^{\ast}_{\bm{k}, \lambda} s^{\ast}_{\bm{k}, \lambda} (t) \right] \:, \end{equation} where $\bm{\epsilon}_{\bm{k}, \lambda=\pm}$ are the photon circular-polarization vectors and $\hat{a}_{\bm{k}, \lambda}$ and $\hat{a}^{\dagger}_{\bm{k}, \lambda}$ are annihilation and creation operators, respectively. Here $s_{\bm{k}, \lambda}$ correspond to the mode functions that have to be solved for. Since $i \bm{k} \times \bm{\epsilon}_{\bm{k}, \pm} = \pm k \bm{\epsilon}_{\bm{k}, \pm}$, the polarizations of the mode functions $s_{\bm{k}, \lambda}$ decouple, and the equations of motion \eqref{eq.6.3.7} and \eqref{eq.6.3.8} can be rewritten as follows \begin{align} \ddot{a}+ \left[ k^2 +m_a^2 \right] a - g_{a \gamma} k \dot{s}_{\bm{k},+} s_{\bm{k},+} &= 0 \:, \label{eq.6.3.11} \\ \label{eq.6.3.12} \ddot{s}_{\bm{k},\pm} +\left[ k^2 \mp g_{a \gamma} k \omega_0 a_0 \sin(\omega_0 t)\right] s_{\bm{k},\pm}&= 0 \:. \end{align} The last equation takes the form of the well-known Mathieu equation, one of the equations that describe parametric oscillators\footnote{A parametric system is one whose motion depends on a time-dependent parameter.}.
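A numerical sketch of equation \eqref{eq.6.3.12} in dimensionless toy units (the parameter values are illustrative assumptions, not physical inputs) shows the characteristic parametric-resonance behaviour: a mode in the first instability band, $k \approx \omega_0/2$, grows exponentially, while a detuned mode stays bounded.

```python
import math

OMEGA0 = 1.0  # ALP oscillation frequency (toy units)
EPS = 0.1     # stands in for g_agamma * a0, the dimensionless drive (assumed)

def max_amplitude(k, t_end=400.0, dt=0.005):
    """Integrate s'' + [k^2 - EPS*k*OMEGA0*sin(OMEGA0*t)] s = 0 with RK4."""
    def acc(t, s):
        return -(k*k - EPS*k*OMEGA0*math.sin(OMEGA0*t)) * s
    s, sdot, t, s_max = 1.0, 0.0, 0.0, 1.0
    while t < t_end:
        k1 = (sdot, acc(t, s))
        k2 = (sdot + dt/2*k1[1], acc(t + dt/2, s + dt/2*k1[0]))
        k3 = (sdot + dt/2*k2[1], acc(t + dt/2, s + dt/2*k2[0]))
        k4 = (sdot + dt*k3[1],  acc(t + dt,   s + dt*k3[0]))
        s    += dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        sdot += dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        t += dt
        s_max = max(s_max, abs(s))
    return s_max

resonant = max_amplitude(k=0.5 * OMEGA0)   # first instability band: k = omega0/2
detuned  = max_amplitude(k=0.35 * OMEGA0)  # off resonance
print(f"resonant mode grew to {resonant:.3e}, detuned mode stayed at {detuned:.3e}")
```

The expected Floquet growth rate in the first band is roughly $\mathrm{EPS}\,\omega_0/4$ per unit time, so over the integration window the resonant mode grows by several orders of magnitude while the detuned mode merely oscillates.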
The Mathieu equation can be solved analytically in particular cases or exactly using numerical methods. In general, it has a periodic solution, and its parameter space is characterized by a band structure of stable regions corresponding to oscillatory solutions and unstable (resonant) regions corresponding to exponentially growing solutions. The term inside the square brackets represents the frequency $\omega_k^2(t)=\omega_k^2(t+T)$, where $T=2\pi / \omega_0$ is the period of oscillations of the condensate. The coupling between the mode functions $s_{\bm{k},\pm}$ and the ALP field depends on $k$. In regimes of an ALP background with low density and weak coupling, $(k/\omega_0) \gg (g_{a \gamma} a_0 /2)$, the periodicity of $\omega_k(t)$ leads to a spectrum of narrow resonant bands equally spaced at $k^2 \approx (n/2)^2 \omega_0^2$ for positive integers $n$. The resonant solutions correspond to the stimulated decay process $a \rightarrow \gamma \gamma$ and lead to mode functions $s_{\bm{k},\pm}$ that grow exponentially with time. In the limit of a small amplitude of the photon field, it is useful to expand the solution for the mode functions $s_{\bm{k},\pm}$ as a harmonic expansion \begin{equation} \label{eq.6.3.13} s_{\bm{k},\omega,\pm} = \sum_{\omega= - \infty}^{\infty} f_{\omega,\pm} (t) e^{-i \omega t} \:, \end{equation} where $f_{\omega,\pm} (t)$ are slowly varying functions, and the sum runs only over integer multiples of $\omega_0/4$. In vacuum, we take the dispersion relation $\omega = k$ for electromagnetic waves. In the first instability band, only the lowest frequencies $\omega =\pm \omega_0 /2$ dominate. Inserting the expansion \eqref{eq.6.3.13} into the Mathieu equation \eqref{eq.6.3.12} and dropping all fast varying terms, we obtain \begin{equation} \dot{f}_{\omega, \pm} + \frac{1}{2 i \omega} (\omega^2 - k^2) f_{\omega, \pm} \mp \frac{g_{a \gamma} a_0 \omega_0 k}{4 \omega} \left( f_{\omega - \omega_0, \pm} - f_{\omega + \omega_0, \pm}\right) =0 \:.
\end{equation} For the lowest frequency modes, corresponding to $\omega =\pm \omega_0 /2$, we get \begin{align} \dot{f}_{\frac{\omega_0}{2}, \pm} + \frac{1}{i \omega_0} (\frac{\omega_0^2}{4} - k^2) f_{\frac{\omega_0}{2}, \pm} \mp \frac{g_{a \gamma} a_0 k}{2} f_{\frac{-\omega_0}{2}, \pm} &= 0 \:, \label{eq.6.3.15} \\ \label{eq.6.3.16} \dot{f}_{\frac{- \omega_0}{2}, \pm} - \frac{1}{i \omega_0} (\frac{ \omega_0^2}{4} - k^2) f_{\frac{- \omega_0}{2}, \pm} \mp \frac{g_{a \gamma} a_0 k}{2} f_{\frac{\omega_0}{2}, \pm} &= 0 \:. \end{align} The last two equations represent the classical equations of motion for the electromagnetic field in vacuum. So far, we have ignored in our analysis the fact that the universe is permeated by an ionized plasma that has a severe impact on the propagation of photons. Therefore, before we go further in discussing the possible solutions to the classical equations of motion of the photon field, we want to consider the more realistic case in which the universe is full of plasma. In this scenario, the density of the cosmic plasma might have a significant role in modifying the photon growth profile, and its effect has to be determined to investigate whether it is worth taking into account. For this reason, we will analyze in the next section the corrections that can be made to the equations of motion of photons due to the effect of the plasma. \section{Corrections due to plasma effects} \label{sec.6.4} Thus far, we discussed the equations of motion for the electromagnetic field in vacuum. In this case, the photons have no mass. To be more realistic, we have to take into account the effect of the plasma that fills the universe. Photons propagating in a plasma acquire an effective mass equal to the plasma frequency according to \cite{carlson1994photon} \begin{equation} \label{eq.6.4.1} \omega_p^2 = \frac{4 \pi \alpha n_e}{m_e} \:, \end{equation} where $m_e$ and $n_e$ are the mass and number density of the free electrons, respectively.
The interaction between ALPs and photons can be strongly enhanced when the effective photon mass matches the ALP mass. This also explains why the decay process $a \rightarrow \gamma \gamma$ is kinematically forbidden in the very early universe: there, the number density of the free electrons is so high that the plasma frequency, and consequently the effective mass of the photons, is very large. However, the number density of the free electrons decreases in the late universe, and the process becomes easily kinematically allowed. Dark matter usually resides in galactic halos, where the typical electron number density $n_e$ is about $0.03 \ \text{cm}^{-3}$ \cite{hertzberg2018dark}. Using equation \eqref{eq.6.4.1} with this value of $n_e$, one finds that the photon has an effective mass of $\bm{\mathcal{O}}(10^{-12}) \ \text{eV}$. Therefore, the plasma mass correction is small enough to treat its effect as a small modification. The dispersion relation for a photon propagating in a plasma is modified to $\omega^2 = k^2 + \omega^2_p$. Consequently, the equation of motion for the electromagnetic modes can be rewritten in the form \begin{equation} \label{eq.6.4.2} \ddot{s}_{\bm{k},\pm} +\left[ k^2 + \omega_p^2 \mp g_{a \gamma} k \omega_0 a_0 \sin(\omega_0 t)\right] s_{\bm{k},\pm} = 0 \:.
\end{equation} Dropping all higher-order terms and fast-varying terms for the lowest-frequency modes $\omega = \pm \omega_0/2$, the modified equations of motion for the electromagnetic field in the presence of plasma become \begin{align} \dot{f}_{\frac{\omega_0}{2}, \pm} + \frac{1}{i \omega_0} (\frac{\omega_0^2}{4} - k^2 - \omega_p^2) f_{\frac{\omega_0}{2}, \pm} \mp \frac{g_{a \gamma} a_0 k}{2} f_{- \frac{\omega_0}{2}, \pm} &= 0 \:, \label{eq.6.4.3} \\ \label{eq.6.4.4} \dot{f}_{\frac{- \omega_0}{2}, \pm} - \frac{1}{i \omega_0} (\frac{ \omega_0^2}{4} - k^2 - \omega_p^2) f_{\frac{- \omega_0}{2}, \pm} \mp \frac{g_{a \gamma} a_0 k}{2} f_{\frac{\omega_0}{2}, \pm} &= 0 \:. \end{align} Note that, for simplicity, we assume the plasma has a uniform density with a constant frequency $\omega_p$. In this case, the resonance will not be stopped because, in an astrophysical background, the incoming wave has a continuous spectrum. In reality, however, the density of plasma is not uniform, and the plasma frequency is a function of time, $\omega_p(t)$. In that case, the above equations are a good enough approximation for the equations of motion within a single region of nearly constant plasma frequency, and each region has its own resonant frequency. For the QCD axion with mass $m_a \sim 10^{-5} \ \text{eV}$ and photon coupling $g_{a \gamma} \sim 6 \times 10^{-11} \ \text{GeV}^{-1}$, the authors of \cite{hertzberg2018dark} found that the ratio $\omega_p^2/(g_{a \gamma} a_0 k)$ is of $\bm{\mathcal{O}}(10^{-4})$. Accordingly, they neglected the plasma effects and argued that the massless-photon approximation is reasonable. However, for ALPs with lower masses $m_a \sim 10^{-13} \ \text{eV}$ and coupling $g_{a \gamma} \sim 10^{-12} \ \text{GeV}^{-1}$, the ratio $\omega_p^2/(g_{a \gamma} a_0 k)$ would be of $\bm{\mathcal{O}}(10^{14})$.
Besides, the density of the cosmic plasma is a function of cosmic time, and we might have to take its evolution into account as well. For these reasons, we cannot ignore the plasma effects in such cases for ALPs. Solving equations \eqref{eq.6.4.3} and \eqref{eq.6.4.4} leads to an instability associated with exponentially growing solutions $f \sim \exp(\mu t)$. The parameter $\mu$ is called the Floquet exponent and gives the growth rate of the exponential function. Note that the resonance phenomenon occurs only for real Floquet exponents. One explicitly finds \begin{equation} \label{eq.6.4.5} \mu_k = \sqrt{\frac{g_{a \gamma}^2 a_0^2 (k^2+ \omega_p^2)}{4} - \frac{1}{\omega_0^2} \left( k^2 + \omega_p^2 - \frac{\omega_0^2}{4} \right)^2} \:. \end{equation} During the resonance, the electromagnetic wave grows exponentially. However, this growth cannot persist forever, because the density of the axion field decreases as axions decay into photons, eventually violating the resonance condition. The maximum growth rate, obtained for $k = \sqrt{\omega_0^2 - 4 \omega_p^2} /2$, is \begin{equation} \label{eq.6.4.6} \mu_{\text{max}} = \frac{g_{a \gamma} a_0 \omega_0}{4} \:. \end{equation} The edges of the instability band are given by the values of $k$ at which $\mu = 0$, as the resonance occurs only for real Floquet exponents. The left and right edges read \begin{equation} \label{eq.6.4.7} k_{L/R} = \sqrt{\frac{g_{a \gamma}^2 a_0^2 \omega_0^2}{16} + \frac{\omega_0^2}{4}} \pm \frac{g_{a \gamma} a_0 \omega_0}{4} \:, \end{equation} and it is straightforward to find that the bandwidth is \begin{equation} \label{eq.6.4.8} \mathrm{\Delta} k = \frac{g_{a \gamma} \omega_0 a_0}{2} \:. \end{equation} The Mathieu equation \eqref{eq.6.4.2} can be solved numerically to obtain the evolution of the electromagnetic field due to the decay of the ALP field and the instability bands of resonant enhancement.
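As a minimal numerical sketch (not part of the original analysis; units with $\omega_0 = 1$ and an illustrative modulation strength $g_{a \gamma} a_0 = 0.1$), one can integrate the Mathieu equation \eqref{eq.6.4.2} with a simple fixed-step RK4 scheme and check that the measured growth rate at the center of the first band approaches $\mu_{\text{max}} = g_{a \gamma} a_0 \omega_0/4$ of equation \eqref{eq.6.4.6}:

```python
import math

def integrate_mathieu(k, g_a0, omega0=1.0, omega_p=0.0,
                      t_end=400.0, dt=0.005):
    """RK4 integration of s'' + [k^2 + w_p^2 - g_a0*k*omega0*sin(omega0*t)] s = 0.
    Returns a list of (t, s, sdot) samples."""
    def accel(t, s):
        w2 = k * k + omega_p**2 - g_a0 * k * omega0 * math.sin(omega0 * t)
        return -w2 * s

    t, s, v = 0.0, 1e-6, 0.0
    out = []
    for _ in range(int(t_end / dt)):
        out.append((t, s, v))
        k1s, k1v = v, accel(t, s)
        k2s, k2v = v + 0.5*dt*k1v, accel(t + 0.5*dt, s + 0.5*dt*k1s)
        k3s, k3v = v + 0.5*dt*k2v, accel(t + 0.5*dt, s + 0.5*dt*k2s)
        k4s, k4v = v + dt*k3v, accel(t + dt, s + dt*k3s)
        s += dt * (k1s + 2*k2s + 2*k3s + k4s) / 6.0
        v += dt * (k1v + 2*k2v + 2*k3v + k4v) / 6.0
        t += dt
    return out

def growth_rate(samples, k, t1=200.0, t2=390.0):
    """Estimate mu from the envelope amplitude A = sqrt(s^2 + (sdot/k)^2)."""
    def amp_near(t_target):
        return max(math.hypot(s, v / k)
                   for (t, s, v) in samples if abs(t - t_target) < 5.0)
    return math.log(amp_near(t2) / amp_near(t1)) / (t2 - t1)

g_a0 = 0.1                                       # assumed g_agamma * a_0
samples = integrate_mathieu(k=0.5, g_a0=g_a0)    # k = omega0/2, band center
mu_est = growth_rate(samples, k=0.5)
print(mu_est, g_a0 / 4.0)                        # measured vs mu_max
```

With these illustrative parameters the estimated exponent should come out close to $\mu_{\text{max}} = 0.025$; only one polarization branch (the minus sign in front of the oscillating term) is integrated here, since the sign choice amounts to a phase shift of the drive.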
Throughout our calculations, in order to obey the cosmological stability condition claimed in \cite{alonso2020wondrous}, we normalize the factor $g_{a \gamma} a_0$ to be less than unity. The left panel of figure \ref{fig.6.1} shows the electromagnetic field enhancement released from the ALPs decay as a function of cosmic time for wavenumber $k= \omega_0/2$, while the right panel shows the instability bands by plotting the numerical solution $\vert s_{\bm{k}} \vert^2$ of the Mathieu equation at a fixed late time as a function of $k$. This solution of the Mathieu equation can be interpreted as a parametric resonance, occurring at $k = \omega_0/2$ in the absence of the plasma effect and at $k = \sqrt{\omega_0^2 - 4 \omega_p^2} /2$ when the effect of plasma is taken into account. This growth rate can be compared with the one found in equation \eqref{eq.6.2.14}, where we recall that $n_k \sim \vert s_{\bm{k}} \vert^2$. One finds that the two rates agree up to a numerical factor. This discrepancy can be explained by the overestimation of the stimulated decay discussed in subsection \ref{sec.6.2.2}, which leads to a larger photon growth rate. Thus, it is a better approximation to interpret the stimulated decay as a narrow parametric resonance of the Mathieu equation. \begin{figure}[t!] \centering \includegraphics[width=0.49\textwidth]{s2vst.pdf} \includegraphics[width=0.49\textwidth]{s2vsk.pdf} \caption{Left panel$:$ Numerical solution of the Mathieu equation \eqref{eq.6.4.2} as a function of cosmic time for $k = \omega_0/2$. Right panel$:$ The first instability band of the Mathieu equation as a function of the wavenumber $k$ at a fixed late time.} \label{fig.6.1} \end{figure} \section{Plasma density evolution} \label{sec.6.5} Plasma density is one of the important parameters that may play a crucial role in characterizing the ALPs stimulated decay.
Therefore, before discussing the effect of plasma density on preventing the stimulated decay of ALPs, we need at least an approximate estimate of the plasma density distribution in the universe. A useful reference density is the typical density of plasma in the galactic halos, where dark matter usually resides. The typical value of the electron number density $n_e$ in the galactic halos is about $0.03 \ \text{cm}^{-3}$ \cite{hertzberg2018dark}. Generally, the electron number density in the universe can be described by the formula \cite{ryden2017introduction} \begin{equation} \label{eq.6.5.1} n_e =\eta n_{\gamma} X_e \:, \end{equation} where $\eta = (6.1 \pm 0.06) \times 10^{-10}$ is the baryon-to-photon ratio, $n_{\gamma} = 0.244 \times (kT/\hbar c)^{3}$ is the number density of photons, and $X_e$ is the ionized fraction of atoms \cite{puchwein2019consistent}. The temperature of the CMB at redshift $z$ is given by $T=T_0 (1+z)$, where $T_0= 2.725 \ \text{K}$ is the CMB temperature today. Hence, using equation \eqref{eq.6.5.1} we can estimate the plasma density evolution as a function of redshift, as shown in figure \ref{fig.6.2}. It is clear from the graph that the electron number density reaches a minimum value of about $n_e \sim 2 \times 10^{-7} \ \text{cm}^{-3}$, which is expected to occur just before the reionization era at redshift $z \sim 15$. The present-day electron number density is expected to be about $n_e \sim 2.37 \times 10^{-7} \ \text{cm}^{-3}$. \begin{figure}[ht!] \centering \includegraphics[width=0.75\textwidth]{ne-z.pdf} \caption{Electron number density as a function of redshift as in equation \eqref{eq.6.5.1}.} \label{fig.6.2} \end{figure} \section{Plasma density prevents the stimulated decay of ALPs} \label{sec.6.6} In this section, we present the projected effect of the plasma density on preventing the stimulated decay of ALPs.
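The present-day density estimate of equation \eqref{eq.6.5.1} can be reproduced with a short sketch (assuming full ionization, $X_e \approx 1$; small differences from the quoted $2.37 \times 10^{-7} \ \text{cm}^{-3}$ come from the adopted ionized fraction):

```python
import math

K_BOLTZ_EV = 8.617e-5      # Boltzmann constant [eV/K]
HBARC_EV_CM = 1.9733e-5    # hbar*c [eV*cm]
ETA = 6.1e-10              # baryon-to-photon ratio
T0_K = 2.725               # CMB temperature today [K]

def n_e(z, X_e=1.0):
    """Electron number density [cm^-3] from n_e = eta * n_gamma * X_e,
    with n_gamma = 0.244*(kT/hbar c)^3 and T = T0*(1+z) (eq. 6.5.1)."""
    kT_eV = K_BOLTZ_EV * T0_K * (1.0 + z)
    n_gamma = 0.244 * (kT_eV / HBARC_EV_CM) ** 3   # photon density [cm^-3]
    return ETA * n_gamma * X_e

print(n_e(0))   # ~2.5e-7 cm^-3, same order as the quoted value
```

For a fixed ionized fraction the density simply redshifts as $(1+z)^3$, which is the scaling behind figure \ref{fig.6.2}.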
The effect of stimulated emission manifests itself through the stimulated emission factor $2 f_{\gamma}$ in equation \eqref{eq.6.2.8}. Regardless of the astrophysical environment, we consider that the stimulated decay produces an enhancement of the ALPs decay rate by factors arising only from the CMB, as in reference \cite{caputo2019detecting} \begin{equation} \label{eq.6.6.1} f_{\gamma, \text{CMB}} (m_a)= \frac{1}{e^{(E_{\gamma}/k_B T)} -1} \:, \end{equation} where $E_\gamma = m_a/2$, $k_B$ is the Boltzmann constant, and $T$ is the temperature of the CMB at redshift $z$. Note that the factor $f_{\gamma}$ is in principle a sum over all the sources that contribute to the photon bath with the same energy as that produced in the ALP decay. However, we only consider here the contribution from the CMB to extrapolate $f_{\gamma}$ for ALPs with masses down to $m_a \sim 10^{-13} \ \text{eV}$. Figure \ref{fig.6.3} shows the stimulated emission factor $2 f_{\gamma}$ arising from the CMB at different redshifts $z=0, 1100, 2000,$ and $3000$. In table \ref{tab.6.1} we present the numerical values of the stimulated emission factor $2 f_{\gamma}$ at high redshift $z \sim 1100$ for a set of ALP masses, the perturbative decay time $\tau_{a}$, and the stimulated correction to the decay time $\tilde{\tau}_{a}$. Then we discuss the required plasma density that is able to significantly prevent this enhancement. Figure \ref{fig.6.4} illustrates the effect of plasma on the stimulated decay of ALPs, assuming four different categories of their masses, obtained from the numerical solutions of the Mathieu equation \eqref{eq.6.4.2}. \begin{figure}[ht!]
\centering \includegraphics[width=0.75\textwidth]{fg.pdf} \caption{The stimulated emission factor arising from the CMB at different redshifts as in equation \eqref{eq.6.6.1}.} \label{fig.6.3} \end{figure} \begin{table}[h] \centering \scalebox{0.8}{ \begin{tabular}{|c|c|c|c|} \hline ALP mass $m_a \ [\text{eV}]$ & Enhancement factor $2 f_{\gamma}$ & Perturbative decay time $\tau_{a} \ [\text{s}]$ & Stimulated correction $\tilde{\tau}_{a} \ [\text{s}]$ \\ \hline $\sim 1.00 \times 10^{-4}$ & $\sim 1.03 \times 10^{4}$ & $\sim 2.01 \times 10^{38}$ & $\sim 1.95 \times 10^{34}$ \\ $\sim 1.00 \times 10^{-6}$ & $\sim 1.03 \times 10^{6}$ & $\sim 2.01 \times 10^{44}$ & $\sim 1.95 \times 10^{38}$ \\ $\sim 2.31 \times 10^{-11}$ & $\sim 4.46 \times 10^{10}$ & $\sim 1.61 \times 10^{58}$ & $\sim 3.59 \times 10^{47}$ \\ $\sim 7.10 \times 10^{-14}$ & $\sim 1.46 \times 10^{13}$ & $\sim 5.62 \times 10^{65}$ & $\sim 3.85 \times 10^{52}$\\ \hline \end{tabular}} \caption{A set of ALP masses and the corresponding stimulated emission factor arising from the CMB, perturbative decay time, and stimulated correction to the decay time.} \label{tab.6.1} \end{table} The first case (a) assumes ALPs with mass $m_a \sim 10^{-4} \ \text{eV}$. For the plasma in the galactic halos, the present-day value of the electron number density is about $0.03 \ \text{cm}^{-3}$, which by equation \eqref{eq.6.4.1} corresponds to a plasma frequency of about $6.4 \times 10^{-12} \ \text{eV}$. Plasma with such a frequency has no effect on the stimulated decay of ALPs in this case. A significant effect in this scenario requires a plasma frequency of at least the order of $4.26 \times 10^{-5} \ \text{eV}$, corresponding to an electron number density $n_e \sim 1.32 \times 10^{12} \ \text{cm}^{-3}$, to effectively reduce the amplitude of the resonance. In our results, we consider reducing the enhancement factor for the stimulated decay of ALPs to about $1\%$ of its maximum value.
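The enhancement factors listed in table \ref{tab.6.1} follow directly from equation \eqref{eq.6.6.1}; a minimal sketch (mass values taken from the table, with $T = T_0(1+z)$ at $z = 1100$):

```python
import math

K_BOLTZ_EV = 8.617e-5   # Boltzmann constant [eV/K]
T0_K = 2.725            # CMB temperature today [K]

def enhancement_factor(m_a_eV, z):
    """Stimulated emission factor 2*f_gamma from the CMB (eq. 6.6.1),
    for photon energy E_gamma = m_a/2 at redshift z."""
    E_gamma = 0.5 * m_a_eV
    kT = K_BOLTZ_EV * T0_K * (1.0 + z)
    return 2.0 / (math.exp(E_gamma / kT) - 1.0)

for m_a in (1.00e-4, 1.00e-6, 2.31e-11, 7.10e-14):
    print(m_a, enhancement_factor(m_a, z=1100))
# reproduces ~1.03e4, ~1.03e6, ~4.46e10, ~1.46e13 of table 6.1
```

Since $E_\gamma \ll k_B T$ for all these masses, the factor is well approximated by the Rayleigh-Jeans limit $2 f_\gamma \approx 2 k_B T / E_\gamma$.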
Note that reducing the amplitude in this way seems sufficient, as the plasma density required to suppress the amplitude more completely is not much different from the values presented here. It is clear from figure \ref{fig.6.2} that such a very high plasma density only exists at very high redshifts $z \gtrsim 1.77 \times 10^6$. The second case (b) corresponds to ALPs with mass $m_a \sim 10^{-6} \ \text{eV}$. The typical plasma in the galactic halos has no significant effect on the ALPs stimulated decay in this case either. However, plasma with a frequency of about $\sim 3.65 \times 10^{-7} \ \text{eV}$, which corresponds to a plasma density of about $n_e \sim 9.66 \times 10^{7} \ \text{cm}^{-3}$ at redshifts $z \gtrsim 7.4 \times 10^4$, is able to significantly prevent the ALPs stimulated decay. Because such densities existed only in the very early universe, we can rule out the possibility of any significant present-day plasma effect on the stimulated decay of ALPs with these masses. One may worry that such a plasma density could be found at low redshifts in astrophysical environments with a very high density. While this is highly unlikely, it remains important to look carefully at the astrophysical environment when considering the stimulated decay of ALPs with this range of masses in the late universe. In the third case (c) we consider ALPs with mass about $m_a \sim 2.31 \times 10^{-11} \ \text{eV}$. This scenario is quite interesting, since we look at the effect of plasma with number densities comparable to the current typical value $n_e \sim 0.03 \ \text{cm}^{-3}$ in the galactic halos, where dark matter is expected to be abundant. From figure \ref{fig.6.2} we also note that the cosmic plasma density at redshift $z \sim 5.54 \times 10^2$ is equivalent to the current plasma density in the galactic halos at redshift $z \sim 0$.
Such plasma is only able to prevent the stimulated decay of ALPs with masses $m_a \lesssim 2.31 \times 10^{-11} \ \text{eV}$. The fourth case (d) is for low-mass ALPs with $m_a \lesssim 7.10 \times 10^{-14} \ \text{eV}$. In this scenario, the current cosmic plasma number density of about $n_e \sim 2.37 \times 10^{-7} \ \text{cm}^{-3}$, which corresponds to a plasma frequency of about $\sim 1.81 \times 10^{-14} \ \text{eV}$, is able to prevent the stimulated decay of ALPs with such low masses. Since plasma with this density is prevalent in the universe at present, at redshift $z \sim 0$, we claim that at the current time the plasma should always be able to prevent, or at least significantly reduce, the stimulated decay of ALPs with masses $m_a \lesssim 7.10 \times 10^{-14} \ \text{eV}$. From figure \ref{fig.6.2}, one can see that the cosmic plasma density around the end of the recombination epoch, at redshifts $z \lesssim 16.25$, is also comparable to the current plasma density. This makes the stimulated decay of ALPs with this range of masses allowed at this epoch as well. These results are summarized in table \ref{tab.6.2}, which lists the plasma frequencies and corresponding plasma densities required to protect the stimulated decay of ALPs with different ranges of masses. Hence, plasma with densities beyond these values for each case does not automatically protect the ALPs from stimulated emission cascades. This sets bounds on the mass ranges of the ALPs that can be targeted in searches for observational signals based on their stimulated decay, as we will discuss in the following section. In table \ref{tab.6.2} we also summarize the redshifts at which the plasma density reached the required threshold values to suppress the stimulated decay of ALPs in each mass category. In addition, we provide the photon frequencies expected from the decay of ALPs with the given masses. \begin{figure}[t!]
\centering \includegraphics[width=0.49\textwidth]{ma=1e-4.pdf} \includegraphics[width=0.49\textwidth]{ma=1e-6.pdf} \includegraphics[width=0.49\textwidth]{ma=1e-11.pdf} \includegraphics[width=0.49\textwidth]{ma=1e-14.pdf} \caption{Effect of plasma on suppressing the stimulated ALPs decay for different ALP masses, obtained from the numerical solutions of the Mathieu equation \eqref{eq.6.4.2}.} \label{fig.6.4} \end{figure} \begin{table}[h] \centering \scalebox{0.75}{ \begin{tabular}{|c|c|c|c|c|} \hline ALP mass $m_a \ [\text{eV}]$ & Plasma frequency $\omega_p \ [\text{eV}]$ & Plasma density $n_e \ [\text{cm}^{-3}]$ &Redshift [z]& Photon frequency $ [\text{Hz}]$ \\ \hline $\sim 1.00 \times 10^{-4}$ & $\lesssim 4.26 \times 10^{-5}$ & $\lesssim 1.32 \times 10^{12}$& $\lesssim 1.77 \times 10^{6}$& $\sim 1.21 \times 10^{10}$ \\ $\sim 1.00 \times 10^{-6}$ & $\lesssim 3.65 \times 10^{-7}$ & $\lesssim 9.66 \times 10^{7}$ &$\lesssim 7.40 \times 10^{4}$& $\sim 1.21 \times 10^{8}$ \\ $\sim 2.31 \times 10^{-11}$ & $\lesssim 6.40 \times 10^{-12}$& $\lesssim 2.97 \times 10^{-2}$&$\lesssim 5.54 \times 10^{2}$ & $\sim 2.80 \times 10^{3}$ \\ $\sim 7.10 \times 10^{-14}$ & $\lesssim 1.81 \times 10^{-14}$& $\lesssim 2.37 \times 10^{-7}$ &$\lesssim 16.25$& $\sim 8.52$\\ \hline \end{tabular}} \caption{List of plasma frequencies and their corresponding plasma densities that are required to protect the stimulated decay of ALPs with different ranges of masses.} \label{tab.6.2} \end{table} \section{Detecting radio signature from the stimulated decay of ALPs} \label{sec.6.7} In recent years, there has been increasing interest in using radio telescopes to search for a radio signal produced by cold dark matter. The SKA is considered to be the most sensitive radio telescope ever \cite{colafrancesco2015probing}. This makes it the most promising radio telescope to unveil any expected signature of dark matter.
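As a consistency sketch (standard constants assumed; not part of the original analysis), the plasma-density and photon-frequency columns of table \ref{tab.6.2} follow from inverting equation \eqref{eq.6.4.1} and from $\nu = (m_a/2)/h$:

```python
import math

ALPHA = 1.0 / 137.036        # fine-structure constant
M_E_EV = 5.11e5              # electron mass [eV]
HBARC_EV_CM = 1.9733e-5      # hbar*c [eV*cm]
H_PLANCK_EV_S = 4.1357e-15   # Planck constant [eV*s]

def density_from_plasma_freq(omega_p_eV):
    """Invert eq. 6.4.1: n_e [cm^-3] = omega_p^2 * m_e / (4*pi*alpha)."""
    n_e_eV3 = omega_p_eV**2 * M_E_EV / (4.0 * math.pi * ALPHA)
    return n_e_eV3 / HBARC_EV_CM**3

def photon_frequency_Hz(m_a_eV):
    """Frequency of the decay photons, each carrying E_gamma = m_a/2."""
    return 0.5 * m_a_eV / H_PLANCK_EV_S

print(density_from_plasma_freq(1.81e-14))   # ~2.4e-7 cm^-3 (table 6.2)
print(photon_frequency_Hz(2.31e-11))        # ~2.8e3 Hz (table 6.2)
```

The same two relations reproduce the remaining rows of the table to within a few percent.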
Therefore, we investigate in this section whether it is worthwhile to take the stimulated decay of ALPs into account when searching for radio signals with the SKA telescopes$;$ other related work includes references \cite{tkachev2015fast, braaten2017emission, caputo2018looking, caputo2019detecting}. In this work, in particular, we clarify the role that the cosmic plasma plays in modifying the detectability of radio emission resulting from the stimulated decay of ALPs. Based on the scenario of ALPs stimulated decay discussed in the previous sections, we explore here the potential of the near-future radio telescopes, in particular the SKA, in an attempt to detect an observational signature of CDM ALPs decaying into photons in astrophysical fields. As shown in the previous section, considering only the enhancement of the stimulated decay produced by the CMB, regardless of the astrophysical environment, our results imply the following. From figure \ref{fig.6.3}, one can deduce that the decay rate of ALPs with large enough masses, $m_a \gtrsim 10^{-4} \ \text{eV}$, does not receive a remarkable enhancement. Therefore, in this case, the stimulated decay is not expected to play an essential role, and we cannot count on the spontaneous decay of ALPs to produce a significant observational signal because of the low rate of this process. Consequently, this can be considered an upper limit for the ALP mass that can be accountable for any significant radio signature produced from the stimulated decay of ALPs. For ALP masses $m_a \lesssim 7.10 \times 10^{-14} \ \text{eV}$, the current cosmic plasma, which is prevalent in the universe, can prevent or at least significantly reduce the stimulated decay of ALPs. In contrast, the typical present-day plasma in the galactic halos, where dark matter is expected to be abundant, can prevent the stimulated decay of ALPs with masses $m_a \lesssim 2.31 \times 10^{-11} \ \text{eV}$.
This puts a lower bound on the ALP mass at which the stimulated decay of ALPs is allowed by the plasma effects. Hence, with today's plasma density, the stimulated decay of ALPs in the $10^{-11} \text{--} 10^{-4} \ \text{eV}$ mass range is allowed by the plasma effect. In principle, this makes it possible to consider radio emission due to the stimulated decay of ALPs in this mass range. Within the range allowed by the plasma effects, the search for ALPs in the $10^{-6} \text{--} 10^{-4} \ \text{eV}$ mass range seems to be the most exciting scenario. In this case, we would expect some form of low-frequency radio background due to the stimulated decay of ALPs, peaked in the frequency range of about $120 \ \text{MHz} \ \text{--} \ 12 \ \text{GHz}$. Fortunately, given the wide frequency range of the upcoming SKA telescopes, from $50 \ \text{MHz}$ to $20 \ \text{GHz}$, they would be able to detect photons produced from the decay of ALPs in this range of masses. For comparison, the results presented here are consistent with those of \cite{alonso2020wondrous}, where the current plasma frequency in the range of $\sim 10^{-14}\text{--}10^{-13} \ \text{eV}$ sets a lower bound on the ALP mass that can be subject to stimulated decay. Our results also agree with those of \cite{caputo2019detecting}, which suggested the search for a radio signal based on the stimulated decay of axion dark matter in the $10^{-7} \text{--} 10^{-3} \ \text{eV}$ mass range in near-future radio observations by SKA or using forthcoming axion search experiments, such as ALPS-II and IAXO. Indeed, this process was not possible in the early universe because the effective plasma mass of the photons was significantly higher than this range of ALP masses and consequently was able to prevent their stimulated decays, as claimed in \cite{alonso2020wondrous}.
Finally, the possible signature produced by the stimulated decay of ALPs is expected to be a large flux of photons with a frequency close to half the ALP mass, as shown in table \ref{tab.6.2}. This signal is expected to appear as a narrow spectral line, broadened by the ALPs velocity dispersion, as discussed in \cite{caputo2019detecting}. Since dark matter is expected to exist in high amounts in the halos around many galaxies, these structures offer an interesting astrophysical environment for testing the CDM ALPs decay scenario. Under this hypothesis, the decay of an ALP produces two photons, each with energy $m_a/2$. Hence, the energy flux density, \ie the power per unit area per unit frequency, is given by \begin{equation} \label{eq.6.7.1} S_a = \frac{m_a \mathrm{\Gamma}_\text{eff}}{4 \pi d_L^2} \:, \end{equation} which accounts for the luminosity distance $d_L$ from the observer to the halo at redshift $z$, and $\mathrm{\Gamma}_\text{eff}$ is the effective ALP decay rate as given in equation \eqref{eq.6.2.7} at redshift $z$. Then, the total energy flux density of the radio signal from a smooth ALPs background at an energy $E=m_a/2$ can be expressed as \begin{equation} \label{eq.6.7.2} S_{\text{decay}} = \frac{m_a \mathrm{\Gamma}_\text{eff}}{4 \pi d_L^2} \, n_\text{ac} dV_c \:, \end{equation} where $n_\text{ac}$ is the comoving number density of the ALPs in the dark matter halos and $dV_c$ is the comoving volume element per unit redshift. In figure \ref{fig.6.5} we use the last expression to estimate the total energy flux density $S_{\text{decay}}$ of the radio signal arising from the stimulated decay of ALPs as a function of redshift for the same set of ALP masses examined in this study.
These fluxes are significantly smaller than the energy flux density of the CMB, which can be well estimated at any redshift $z$ using the expression $S_{\text{CMB}}= 0.065 (1+z)^3 \ \text{eV} \, \text{cm}^{-2} \, \text{s}^{-1}$ \cite{dermer2009high, ambaum2010thermal}. However, the standard paradigm of hierarchical structure formation predicts that small structures of CDM form first and then merge into larger ones. This leads to a clumpy distribution of CDM inside the galactic halos. Therefore, radio signals arising from the stimulated decay of ALPs in the galactic halos are expected to be enhanced due to the substructures of the galactic halos. This enhancement is known as the boost factor and should amount to a few orders of magnitude \cite{strigari2007precise}. Including this boost factor, the radio signals arising from the stimulated decay of ALPs in the galactic halos could be comparable to the CMB signal. Indeed, this contribution should be taken into account in modifying the background radiation field, offering an exciting scenario for the search for a possible signature of CDM as well as a viable explanation of the EDGES 21 cm anomaly, as claimed in \cite{feng2018enhanced, mirocha2019does}. \begin{figure}[ht!] \centering \includegraphics[width=0.75\textwidth]{cmb0.pdf} \caption{Estimating the radio flux arising from the stimulated decay of ALPs as in equation \eqref{eq.6.7.2}.} \label{fig.6.5} \end{figure} An interesting discussion of the sensitivity of the SKA radio telescopes to such signals from some astrophysical targets can be found in \cite{caputo2019detecting, hertzberg2020merger}. It has been shown in these works that with near-future radio observations by SKA, it will be possible to increase the sensitivity to the ALP-photon coupling by a few orders of magnitude.
In our results, we have shown that neither the current cosmic plasma nor the plasma in the galactic halos can prevent the stimulated decay of ALPs in the given range of masses. However, the near-future SKA radio telescopes will only be able to reach sensitivities of order $10^{-17} \ \text{eV} \ \text{cm}^{-2} \ \text{s}^{-1}$ \cite{dewdney2013ska1}. This is a few orders of magnitude higher than the most promising radio signal predicted in figure \ref{fig.6.5} from the stimulated decay of ALPs, which makes detecting any observable signal more challenging. Leaving this aside, these results support the idea that the stimulated decay process can increase the sensitivity of the SKA radio telescopes to the radio emission produced by the decay of ALPs, and taking it into account is a big step towards achieving this goal in the future. \section{Conclusion} \label{sec.6.8} In this chapter, we studied the possibility of detecting an observable signature produced by the decay of CDM ALPs into photons at radio frequencies. For ALPs with masses and photon couplings allowed by astrophysical and laboratory constraints, the ALPs are very stable on cosmological scales, and their spontaneous decay cannot be responsible for producing any detectable radio signal. Since ALPs are identical bosons, they may form a Bose-Einstein condensate with very high occupation numbers. The presence of an ambient radiation field leads to a stimulated enhancement of the decay rate. Depending on the astrophysical environment and the ALP mass, the stimulated decay of an ALPs BEC in an expanding universe and under plasma effects can be counted on to produce observable radio signals enhanced by several orders of magnitude. We have shown that the stimulated decay of ALPs arising from the presence of the ambient background of CMB photons results in a large enhancement of the decay rate.
However, the decay rate of ALPs with large enough masses does not receive a remarkable enhancement from the CMB. This puts an upper limit on the ALP mass that can be accountable for any significant radio signature produced from the stimulated decay of ALPs. Further, the cosmological stability of ALPs dark matter is ensured by a combination of the expansion and the plasma effects, with the latter being the more restrictive of the two. In our results, we found the plasma densities that are required to suppress the stimulated decay of ALPs in different ranges of masses. This puts a lower bound on the ALP mass at which the stimulated decay of ALPs is allowed by the plasma effects. The results also showed at which redshift the plasma density reached the required threshold values to suppress the stimulated decay of ALPs in each mass category. This decides whether the ALP abundance in the modern epoch would be sufficient for a detectable signal and/or for constituting all of the dark matter. For ALPs in the $10^{-11} \text{--} 10^{-4} \ \text{eV}$ mass range in astrophysical environments with large radio emission, this emission might potentially increase by a few orders of magnitude, and neither the current cosmic plasma nor the plasma in the galactic halos can prevent the stimulated decay of the ALPs in this range of masses. Interestingly, the stimulated decay of ALPs in the $10^{-6} \text{--} 10^{-4} \ \text{eV}$ mass range can, in principle, lead to a signal detectable by near-future radio telescopes such as the SKA. It should be noted that this stimulated decay is only allowed in the redshift window $z \lesssim 10^{4}$, and thus the ALPs cannot decay efficiently at higher redshift due to the effects of the plasma.
Finding such a signal consistent with the predictions for the stimulated decay of CDM ALPs in such a very low mass range would make this technique an essential method for understanding the properties of dark matter, such as its spatial distribution and clustering in cosmological structures. In addition, this might offer an exciting scenario to explain several unexpected astrophysical observations, \eg the EDGES 21 cm anomaly. Indeed, this will also depend on other parameters, such as the astrophysical environment and radio telescope sensitivity, which are worth being the subject of intense future work. \chapter{\textbf{Axion Dark Matter and Cosmology}} \label{ch4} We discussed in the previous chapter that the axion is a PNGB, which appears after the spontaneous breaking of the PQ symmetry, proposed to solve the strong CP violation problem. In this chapter, we will study the possible role that axions and axion-like particles can play in explaining the mystery of dark matter. Then, to discuss whether they correctly explain the present abundance of dark matter, we will investigate their production mechanisms in the early universe. Afterward, we will discuss the recent astrophysical, cosmological, and laboratory bounds on the axion coupling with ordinary matter. \section{Axion as a dark matter candidate} As mentioned in chapter \ref{ch2}, only a small fraction of the total matter content of the universe is made of baryonic matter, while the vast majority is constituted by dark matter \cite{aghanim2018planck}. It is usually assumed to consist of one or several new species of weakly interacting particles \cite{carlson1992self}. It is characterized as electrically neutral and interacting with ordinary matter only through gravity \cite{bertone2005particle}. In principle, interactions under the weak and strong forces can be allowed, depending on the model.
Dark matter cannot be constituted by any large fraction of baryonic matter, as the baryon density in the universe is tightly constrained by, among other things, CMB measurements. If the QCD axions exist, they would be considered a very promising and well-motivated candidate for dark matter, as they have the main characteristics required of dark matter. They are nearly collisionless, electrically neutral, very stable, non-baryonic, and weakly interacting with the standard model particles \cite{marsh2016axion}. Depending on the production process, thermal or non-thermal, the axions could constitute either hot dark matter or cold dark matter. We will explain this point as we progress through this chapter. The non-baryonic forms of dark matter are usually subdivided into three classes$;$ hot, warm, and cold dark matter \cite{kauffmann1993formation, bond1984dark, primack1984dark}. This pertains to objects or particles that have large, intermediate, and small free-streaming lengths, respectively. The free-streaming length is a characteristic length scale for a dark matter particle, defined as the distance it can travel due to random velocities in the early universe, adjusted for the expansion of the universe. Since the particles can propagate freely over this distance scale, any smaller fine-scale structure is smoothed out. \begin{itemize} \item {\bf Hot dark matter (HDM).} HDM requires nearly massless particles that move at relativistic velocities. This is achieved if the dark matter particle's mass satisfies $m \lesssim T_d$, which means that they are still relativistic by the time of their decoupling. In addition, HDM is defined as particles with $m \lesssim 1 \ \text{eV}$, roughly the temperature at which the energy density of the universe moves from being radiation-dominated to matter-dominated. HDM particles have long free-streaming lengths, $\gtrsim 1 \ \text{Mpc}$. A couple of examples of such candidates are neutrinos and HDM axions.
\item {\bf Warm dark matter (WDM).} WDM consists of particles with masses satisfying $m \lesssim T_d$ as for HDM, but in this case $m \gtrsim 1 \ \text{eV}$. Therefore, these particles were relativistic at their decoupling temperature, but nonrelativistic by the time of matter-radiation equality. The free-streaming length is of dwarf-galaxy scale, or $\sim 1$ Mpc. There are no clear candidates for WDM, but non-thermally produced gravitinos, sterile neutrinos, and light neutralinos could be viable ones. \item {\bf Cold dark matter (CDM).} CDM refers to objects that are sufficiently massive and move at sub-relativistic velocities. This is achieved if the dark matter particle's mass is much greater than the temperature at which it decouples from the cosmological plasma, $m \gtrsim T_d$. Hence, these particles become nonrelativistic early after the big bang, at the point of decoupling, and have a short free-streaming length, $\lesssim 1$ Mpc. The major CDM candidates are WIMPs, gravitinos, MACHOs, and CDM axions. \end{itemize} \subsection{Axion-like particles} The visible and invisible axion models are not the only extensions of the standard model introducing new symmetries. In fact, plenty of theories have been proposed to extend the standard model with new symmetries and new particles, for example, supersymmetric theories and string theory models \cite{ringwald2012exploring}. If any of these new symmetries is global and gets spontaneously broken, a Goldstone boson or PNGB is obtained. If this boson is a light scalar or pseudo-scalar, it can share qualitative properties with the QCD axion. This means that, in addition to invisible QCD axions, there can be many other axion-like particles (ALPs). These ALPs are usually predicted by string theory-driven models \cite{arvanitaki2010string, cicoli2012type, anselm1982second}.
In particular, ALPs naturally arise in string theory from the compactification of extra dimensions \cite{svrcek2006axions, witten1984some}. In general, it is expected that ALPs couple to ordinary matter in the same way as the QCD axions do. Thus, the coupling constants for the ALP interactions with photons and fermions would be similar to the ones in equations \eqref{eq.3.60} and \eqref{eq.3.63}, respectively. To adapt those equations to ALPs, one would remove the terms containing $z$, as they come from the mixing between axions and mesons, and replace $f_a$ with the ALP decay constant. Another difference is that, while axions had originally been proposed with the aim of solving the strong CP problem, ALPs can be considered pure predictions of theories beyond the standard model. They are not needed for any specific purpose but can be considered potential candidates for the particles of dark matter. \subsection{HDM axions} Axions and, more generally, ALPs may be copiously produced in the early universe, including via thermal processes. Relic axions and ALPs produced this way would constitute an HDM component. Hadronic axions that do not couple directly to charged leptons would be produced by Primakoff reactions with the quarks in the primordial quark-gluon plasma (QGP). After the temperature of the universe drops below $\mathrm{\Lambda}_{\text{QCD}}$ and confinement occurs, the dominant thermalization process is $\pi + \pi \leftrightarrow a + \pi$ \cite{chang1993hadronic}. \subsection{CDM axions} Since HDM, no matter what its constituents are, seemingly cannot make up more than a small part of the total dark matter density, CDM axions \cite{dine1983not, preskill1983cosmology, abbott1983cosmological, stecker1982evolution} are usually more seriously considered. They are produced non-thermally in the early universe by the misalignment mechanism and, under certain circumstances, also via the decay of topological defects such as axion strings and domain walls.
We will discuss the contribution of each process to the axion abundance in section \ref{sec.4.3}. \section{Thermal production of axions} If the temperature of the primordial plasma is sufficiently high, axions are created and annihilated during interactions among the standard model particles in the thermal bath of the QCD plasma. The axions produced in such processes are known as thermal axions and are described by the standard freeze-out scenario \cite{kolb1990early, turner1986thermal, masso2002axion}. The number density $n_a^{\text{th}}(t)$ of thermal axions obeys the Boltzmann equation \begin{equation} \label{eq.4.3} \dfrac{d n_a^{\text{th}}}{dt} + 3 H n_a^{\text{th}} = \mathrm{\Gamma} (n_a^{\text{eq}}-n_a^{\text{th}}) \:, \end{equation} where $H$ is the Hubble parameter, and $\mathrm{\Gamma}$ is the interaction rate at which axions are created and annihilated in the plasma \begin{equation} \mathrm{\Gamma} = \sum_i n_i \langle \sigma_i v \rangle \:. \end{equation} Here, the sum is over all processes involving axions, $a + i\leftrightarrow 1 + 2$, where $1$ and $2$ refer to other particle species, $n_i$ is the number density of particle species $i$, and $\langle \sigma_i v\rangle$ indicates averaging over the momentum distribution of the particles involved, with $\sigma_i$ the corresponding cross-section and $v$ the relative velocity between the particle $i$ and the axion. The term $n_a^{\text{eq}}$ represents the number density of axions at thermal equilibrium, which is obtained by using the Bose-Einstein distribution \begin{equation} n_a^{\text{eq}} = \frac{\zeta(3)}{\pi^2} T^3 \:, \end{equation} where $\zeta(3) \simeq 1.202$ is the Riemann zeta function evaluated at $3$, and $T$ is the temperature of the plasma.
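As a quick numeric sanity check of the equilibrium density above, the following sketch evaluates $n_a^{\text{eq}} = \zeta(3)T^3/\pi^2$ for a single bosonic degree of freedom, converting from natural units; the example temperature (the present CMB temperature) is purely illustrative:

```python
import math

# Sanity check of n_eq = zeta(3) T^3 / pi^2 for a single bosonic degree
# of freedom, converting from natural units with hbar*c ~ 1.97e-5 eV*cm.
# The example temperature (today's CMB temperature) is illustrative only.

ZETA3 = 1.2020569031595943   # Riemann zeta(3)
HBARC_EV_CM = 1.9732698e-5   # hbar*c in eV*cm

def n_eq_cm3(T_eV):
    """Equilibrium number density [cm^-3] of one relativistic boson dof."""
    return (ZETA3 / math.pi**2) * (T_eV / HBARC_EV_CM)**3

# At T = 2.35e-4 eV (i.e. T_CMB = 2.725 K) this gives ~206 cm^-3,
# half the familiar ~411 cm^-3 photon density (photons carry two dofs).
```

The $T^3$ scaling is the essential point: the equilibrium abundance of any relativistic species tracks the cube of the plasma temperature.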
Using the conservation of the number density at equilibrium \begin{equation} \dfrac{dn_a^{\text{eq}}}{dt} + 3 H n_a^{\text{eq}} =0 \:, \end{equation} we can rewrite equation \eqref{eq.4.3} as follows, \begin{equation} \dfrac{d}{dt} \left[ R^3 (n_a^{\text{eq}}-n_a^{\text{th}}) \right] = - \mathrm{\Gamma} R^3 (n_a^{\text{eq}}-n_a^{\text{th}}) \:, \end{equation} where $R(t)$ is the scale factor of the universe. The solution of this equation implies that the axions remain in thermal equilibrium as long as their interaction rate is faster than the Hubble expansion, $\mathrm{\Gamma}(T) > H(T)$. A thermal population of axions was therefore produced if this condition was satisfied at some point in the early universe, and it persisted until the axions decoupled from the plasma at the decoupling temperature $T_{\text{th}}$, defined by $\mathrm{\Gamma}(T_{\text{th}})= H(T_{\text{th}})$. The thermal population thus established survives today, provided it did not subsequently get diluted away by inflation or some other source of huge entropy release \cite{kuster2007axions}. In the thermal bath, axions couple differently to fermions depending on the axion model, whereas their coupling to gluons is model-independent. The thermal average of the interaction rate $\mathrm{\Gamma}$ is calculated including the following three elementary processes for thermalizing axions in the early universe \begin{center} \begin{inparadesc} \item[(a)] $a+q \leftrightarrow g + q \:, \quad$ \item[(b)] $a+g \leftrightarrow q + \bar{q} \:, \quad$ and $\quad$ \item[(c)] $a+g \leftrightarrow g+g\:.$ \end{inparadesc} \end{center} \begin{figure}[ht!]
\begin{center} \begin{tikzpicture} \pgfmathsetmacro{\CosValueee}{cos(30)} \pgfmathsetmacro{\SinValueee}{sin(30)} \draw[fermionnn, black, line width=.5mm] (0.5,0)-- (2,0); \draw[fermionnn, black, line width=.5mm] (2,0)-- (3.5,0); \draw[gluon, black, line width=.5mm, rotate=180] (-2,-1.0)--(-0.5,-1.0); \draw[fermionnn, black, line width=.5mm] (2,1.0)-- (3.5,1.0); \draw[gluon, black, line width=.5mm, rotate=360] (2,0)--(2,1.0); \node at (2,0) {\Large $\bullet$}; \node at (2,1) {\Large $\bullet$}; \draw[gluon, black, line width=.5mm, rotate=180] (-7,-0.5)--(-5.5,-0.5); \draw[fermionin, black, line width=.5mm] (5.5-1.5*\CosValueee,0.5+1.5*\SinValueee)--(5.5,0.5); \draw[fermion, black, line width=.5mm] (5.5,0.5)-- (5.5-1.5*\CosValueee,0.5-1.5*\SinValueee); \draw[scalarr, black, line width=.5mm] (7,0.5)-- (7+1.5*\CosValueee,0.5+1.5*\SinValueee); \draw[gluon, black, line width=.5mm, rotate=360] (7+1.5*\CosValueee,0.5-1.5*\SinValueee)--(7,0.5); \node at (5.5,0.5) {\Large $\bullet$}; \node at (7,0.5) {\Large $\bullet$}; \draw[gluon, black, line width=.5mm, rotate=180] (-10.5,0)--(-9,0); \draw[gluon, black, line width=.5mm, rotate=180] (-12,0)--(-10.5,0); \draw[gluon, black, line width=.5mm, rotate=180] (-10.5,-1.0)--(-9,-1.0); \draw[scalarr, black, line width=.5mm] (10.5,1.0)-- (12,1.0); \draw[gluon, black, line width=.5mm, rotate=360] (10.5,0)--(10.5,1); \node at (10.5,0) {\Large $\bullet$}; \node at (10.5,1) {\Large $\bullet$}; \node at (0,1.75) {\bf{(a)}}; \node at (3.85,1.75) {\bf{(b)}}; \node at (8.65,1.75) {\bf{(c)}}; \end{tikzpicture} \caption[Feynman diagrams of the processes which produce thermal axions in the early universe.]{Feynman diagrams of the processes which produce thermal axions in the early universe.} \label{Fig.4.1} \end{center} \end{figure} The Feynman diagrams of the processes which produce thermal axions in the early universe are shown in figure \ref{Fig.4.1}. 
These processes have a cross-section of the order of $\sigma_{\text{agg}}=\alpha_s^3/f_a^2$, and accordingly, the interaction rate becomes\newpage \noindent \begin{equation} \mathrm{\Gamma}(T) \sim \frac{\alpha_s^3}{f_a^2} T^3 \sim \frac{10^5}{f_a^2} \:, \end{equation} where $\alpha_s = g_s^2/4\pi$. Assuming that axions thermalize at a temperature $T_{\text{th}}$, their number density today is obtained by assuming the conservation of the comoving number density, \begin{equation} n_a^{\text{th}} (T_0) = n_a^{\text{th}} (T_{\text{th}}) \left( \frac{a (T_{\text{th}})}{a (T_0)} \right)^3 = 7.5 \ \text{cm}^{-3} \frac{107.75}{g_{\ast} (T_{\text{th}})} \:, \end{equation} where $g_{\ast}(T)$ denotes the number of relativistic degrees of freedom at temperature $T$. The current abundance from thermal production can then be estimated to be \begin{equation} \mathrm{\Omega}_a = 10^{-8} \left( \frac{100}{g_{\ast}} \right) \frac{10^{12} \ \text{GeV}}{f_a} \:. \end{equation} For $f_a \sim 10^{12} \ \text{GeV}$, axion production from thermal processes is not efficient. If $f_a$ is smaller, the coupling of the axion increases and the axion decouples later, leading to a larger relic abundance. It is possible to get the correct relic abundance for $f_a \sim 10^{6} \ \text{GeV}$, but this value is normally excluded by astrophysical bounds. However, the astrophysical bounds do not apply if the axion has an anomalously small coupling to photons. In this scenario, the axion forms a hot dark matter candidate, but bounds from Planck already exclude the axion as the dominant hot dark matter. Therefore, the requirement that hot dark matter not be the dominant dark matter component places an upper bound on the axion mass. A recent study involving CMB anisotropy measurements, halo power spectrum data, and Hubble constant measurements provides an approximate upper limit on the axion mass, $m_a \lesssim 0.7 \ \text{eV}$ \cite{hannestad2010neutrino}.
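These estimates can be sketched numerically. The following is a rough sanity check, assuming $\mathrm{\Gamma} \sim \alpha_s^3 T^3/f_a^2$ as above and a radiation-dominated Hubble rate $H = 1.66\sqrt{g_\ast}\,T^2/M_{\text{Pl}}$; the numerical values of $\alpha_s$ and $g_\ast$ are illustrative inputs, not fits:

```python
# Rough numerical sketch of the thermal-axion estimates above.
# Assumptions: Gamma ~ alpha_s^3 T^3 / f_a^2, and a radiation-dominated
# Hubble rate H = 1.66 sqrt(g_*) T^2 / M_Pl; the values of alpha_s and
# g_* below are illustrative.

M_PL = 1.22e19      # Planck mass [GeV]
ALPHA_S = 0.03      # strong coupling at high temperature (assumed)
G_STAR = 106.75     # Standard Model relativistic degrees of freedom

def decoupling_temperature(f_a):
    """T_th [GeV] from Gamma(T_th) = H(T_th); note it scales as f_a^2."""
    return 1.66 * G_STAR**0.5 * f_a**2 / (ALPHA_S**3 * M_PL)

def omega_thermal(f_a, g_star=100.0):
    """The abundance estimate quoted above: ~1e-8 at f_a = 1e12 GeV."""
    return 1e-8 * (100.0 / g_star) * (1e12 / f_a)

# Lowering f_a strengthens the coupling, lowers the decoupling
# temperature (later decoupling), and raises omega_thermal; at
# f_a = 1e12 GeV the thermal contribution is negligible.
```

The $f_a^2$ scaling of the decoupling temperature makes the qualitative statement in the text explicit: a smaller decay constant means later decoupling and a larger relic abundance.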
\section{Non-thermal production of axions} \label{sec.4.3} The most efficient mechanisms for axion production are non-thermal, and there are three types of such processes: the misalignment mechanism \cite{preskill1983cosmology, dine1983not, abbott1983sikivie}, axion string decay, and axion domain wall decay \cite{davis1986cosmic, lyth1992estimates}. The abundance of the non-thermally produced axions depends on two important scales. The first is the temperature at which the axion mass, arising from non-perturbative QCD effects, becomes significant. The second scale is the temperature $T_{\text{PQ}}$ at which the PQ phase transition occurs, when the $U(1)_{\text{PQ}}$ symmetry spontaneously breaks. As we will see below, whether this temperature $T_{\text{PQ}}$ is greater or less than the inflationary reheating temperature $T_R$ determines the contribution of each of the three production mechanisms to the cold axion population. At high temperatures, the QCD effects are not significant, and the axion mass is negligible. Gradually the axion acquires its mass due to the non-perturbative QCD effects. At a critical time $t_1$, when $m_a t_1 \sim 1$, the axion mass becomes important; the corresponding temperature of the universe at $t_1$ is $T_1 \simeq 1 \ \text{GeV}$. The $U(1)_{\text{PQ}}$ symmetry is unbroken at early times and temperatures greater than $T_{\text{PQ}}$. At $T_{\text{PQ}}$, it breaks spontaneously, and the axion field, proportional to the phase of the complex scalar field acquiring a vacuum expectation value, may take any value. The phase varies continuously, changing by order one from one horizon to the next. At that time, axion strings may appear as topological defects. We now have to distinguish between two cases. The first one occurs if the reheating temperature after inflation is below the PQ transition temperature, $T_{\text{PQ}} > T_R$.
In this case, the axion field is homogenized over vast distances, and the string density is diluted by inflation to a point where it is extremely unlikely that our visible universe contains any axion strings. When the axion mass turns on at $t_1$, the axion field starts to oscillate. The amplitude of this oscillation is determined by how far from zero the axion field is when the axion mass turns on. The axion field oscillations do not dissipate into other forms of energy and hence contribute to the cosmological energy density today. Such a contribution is called the misalignment mechanism. Since the axion string density is diluted by inflation, the misalignment mechanism is the only process that contributes significantly to the density of cold axions. In this scenario, the non-thermal production of axions is then estimated by investigating the evolution of the background field. The equation of motion for a homogeneous axion field in an FRW universe is described by \begin{equation} \ddot{a} + 3H\dot{a}+m_a^2 a=0 \:. \end{equation} When the axion mass is constant, at some time after the QCD phase transition, the approximate solution to this equation is \begin{equation} a \sim a_{\text{init}} \frac{1}{R^{3/2}} \cos(m_a t) \:, \end{equation} where $a_{\text{init}}$ is the initial value of the axion field, set by a misalignment angle chosen arbitrarily between $-\pi$ and $\pi$ at the moment of the PQ phase transition. The energy density, given by \begin{equation} \rho =\frac{1}{2} (\dot{a}^2 + m_a^2 a^2) \:, \end{equation} then scales as $1/R^3$, like non-relativistic matter. This leads us to expect the CDM axion energy density to be \cite{sikivie2008axion} \begin{equation} \label{eq.4.12} \mathrm{\Omega}_a \sim 0.15 \left( \frac{f_a}{10^{12} \ \text{GeV}} \right)^{7/6} \left(\frac{ 0.7}{h} \right)^2 \theta_i^2 \:, \end{equation} where $\theta_i$ is the initial random value of the vacuum angle (the misalignment angle), and $h$ is the reduced Hubble parameter.
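A minimal numerical sketch of equation \eqref{eq.4.12} follows, together with the standard QCD axion mass relation $m_a \simeq 5.7\,\mu\text{eV}\,(10^{12}\,\text{GeV}/f_a)$; the target CDM density and the mass coefficient are inputs assumed here, not derived in the text:

```python
# Numerical sketch of the misalignment estimate in eq. (4.12),
# Omega_a ~ 0.15 (f_a / 1e12 GeV)^(7/6) (0.7/h)^2 theta_i^2, plus the
# standard QCD mass relation m_a ~ 5.7 microeV (1e12 GeV / f_a).
# The target Omega_cdm and the mass coefficient are assumed inputs.

def omega_misalignment(f_a, theta_i, h=0.7):
    """Eq. (4.12): misalignment contribution to the axion density."""
    return 0.15 * (f_a / 1e12)**(7.0 / 6.0) * (0.7 / h)**2 * theta_i**2

def f_a_for_omega(omega_cdm=0.25, theta_i=1.0, h=0.7):
    """Invert eq. (4.12) for the f_a giving Omega_a = omega_cdm."""
    return 1e12 * (omega_cdm / (0.15 * (0.7 / h)**2 * theta_i**2))**(6.0 / 7.0)

def axion_mass_microeV(f_a):
    """Standard QCD axion mass relation (coefficient assumed)."""
    return 5.7 * (1e12 / f_a)

# With theta_i ~ 1, saturating Omega_cdm ~ 0.25 needs f_a ~ 1.5e12 GeV,
# i.e. an axion mass of a few microeV.
```

This makes visible why $f_a \sim 10^{12} \ \text{GeV}$ is the benchmark decay constant quoted throughout the chapter: an order-one misalignment angle then yields roughly the observed cold dark matter density.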
The other case occurs if $T_{\text{PQ}} < T_R$, meaning the reheating temperature after inflation is above the PQ transition temperature. In such a case, the axion field is not homogenized, and strings radiate cold, massless axions until non-perturbative QCD effects become significant at the temperature $T_1$. When the universe cools to $T_1$, the axion strings become the boundaries of $N$ domain walls. The network of domain walls bounded by axion strings is unstable, and therefore it rapidly radiates cold axions and decays. Here, in addition to the contribution of the misalignment (vacuum realignment) mechanism to the density of cold axions, there are significant contributions from both string decay and wall decay. The three contributions give a total CDM axion energy density \cite{sikivie2008axion} \begin{equation} \label{eq.4.13} \mathrm{\Omega}_a \sim 0.7 \left( \frac{f_a}{10^{12} \ \text{GeV}} \right)^{7/6} \left(\frac{ 0.7}{h} \right)^2 \:. \end{equation} Comparing this to the measured CDM density provides an approximate lower limit on the axion mass, $m_a \sim 10 \ \mu \text{eV}$. However, equations \eqref{eq.4.12} and \eqref{eq.4.13} are subject to many sources of uncertainty, aside from the uncertainty about the contribution from string decay \cite{sikivie2008axion}. Consequently, it is not yet clear with sufficient precision how many of these axions are produced. \section{Bounds on axion coupling} \label{sec.4.4} Axions and ALPs can be searched for in astrophysics, cosmology, and laboratory experiments, based on the theoretical predictions discussed in the previous chapter. The interaction between axions and ordinary matter can be exploited, and bounds on the axion couplings can be obtained from both observations and simulations.
In this section, we present bounds on axion properties coming from observations and experiments; see references \cite{raffelt2008astrophysical, asztalos2006searches, abbott1983cosmological, marsh2016axion} for a thorough review. Note that recent research based on both observations and experiments is still aiming to improve the current limits on the axion and ALP couplings. In this context, we show in chapter \ref{ch5} some techniques in which different astrophysical environments are used to probe new limits on the coupling of ALPs with photons. \subsection{Astrophysical axion bounds} The existence of axions and ALPs would slightly affect stellar evolution and other well-established physical processes. The axion properties can then be bounded by the discrepancies that some axion parameters would entail in deeply studied astrophysical processes. The main argument resides in the fact that axions can be produced in hot and dense environments such as stars and other galactic objects. The axion production, which depends on $f_a$, provides an additional energy-loss channel for the source. Consequently, stars have been used to derive constraints on the axion and ALP coupling parameters; in the following, we summarize some of these astrophysical bounds. Photons in the stellar interior would convert into axions or ALPs by the Primakoff process. This would be a very efficient energy-loss mechanism for the star. The most obvious star that can be exploited to constrain axion parameters is our sun. The energy loss by solar axion emission requires enhanced nuclear burning and increases the solar ${}^8\text{B}$ neutrino flux \cite{raffelt2008astrophysical}. The observation of the ${}^8\text{B}$ neutrino flux gives a bound $g_{a \gamma \gamma} \lesssim 7 \times 10^{-10} \ \text{GeV}^{-1}$.
The solar neutrino flux constraint can also be applied to the axion-electron coupling, giving $g_{a ee} < 2.8 \times 10^{-11}$, where the dimensionless coupling $g_{a ee} \equiv C_e m_e /f_a$, and $m_e$ is the electron mass. A more restrictive bound comes from the observations of globular clusters. The helium-burning lifetimes of horizontal branch (HB) stars give a bound on the axion-photon coupling, $g_{a \gamma \gamma} \lesssim 0.6 \times 10^{-10} \ \text{GeV}^{-1}$. Moreover, the delay of helium ignition in red-giant branch (RGB) stars due to axion cooling gives $g_{a ee} < 2.5 \times 10^{-13}$. In addition, the energy-loss rate of the supernova 1987A leads to another very strong constraint on the axion-nucleon coupling. For a small value of the coupling, the mean free path of axions becomes larger than the size of the supernova core (the so-called ``free streaming'' regime). In this regime, the energy-loss rate is proportional to the axion-nucleon coupling squared, and one can obtain the limit $f_a \gtrsim 4 \times 10^{8} \ \text{GeV}$. On the other hand, for a large value of the coupling, axions are ``trapped'' inside the supernova core. In this regime, by requiring that the axion emission should not have a significant effect on the neutrino burst, one can obtain another bound, $f_a \lesssim \bm{\mathcal{O}}(1) \times 10^6 \ \text{GeV}$. However, in this ``trapped'' regime, it was argued that strongly-coupled axions with $f_a \lesssim \bm{\mathcal{O}}(1) \times 10^5 \ \text{GeV}$ would have produced an unacceptably large signal at the Kamiokande detector, and hence they were ruled out \cite{raffelt2008astrophysical}. Furthermore, the axion-electron coupling is constrained by observations of white dwarfs. The cooling time of white dwarfs due to axion emission gives a bound $g_{a ee} < 4 \times 10^{-13}$.
Recently, it has been reported that the fit to the luminosity function of white dwarfs is improved by including axion cooling, which implies $g_{a ee} \simeq (0.6\text{--}1.7) \times 10^{-13}$. Also, the observed pulsation period of a ZZ Ceti star can be explained by means of the additional cooling due to axion emission if $g_{a ee} \simeq (0.8\text{--}2.8) \times 10^{-13}$. These observations might imply the existence of a $\text{meV}$-mass axion, but they require further discussion. Additionally, an axion with mass $m_a \sim \bm{\mathcal{O}}(1) \ \text{eV}$ in galaxy clusters produces a line emission due to its decay into two photons, whose wavelength is $\lambda_a \simeq 24800 \ \mathrm{\AA}/[m_a/ 1 \ \text{eV}]$. This line emission would give an observable signature in telescopes. Such a line has not been observed in any telescope searches, which excludes the mass range $3 \ \text{eV} \lesssim m_a \lesssim 8 \ \text{eV}$. \subsection{Cosmological axion bounds} According to the astrophysical axion bounds, the spontaneous breaking of the PQ symmetry is expected to happen at a very high energy scale. As a consequence, axions start playing quite a relevant role in cosmology. Axion cosmological effects depend on the value of the PQ symmetry breaking scale $f_a$, which is the quantity related to the potential dark matter nature of axions. In the previous section, we discussed the conditions for axions to be cold dark matter candidates. This happens if axions are non-thermally produced via the misalignment mechanism and the decay of topological strings and domain walls. The resulting cosmological energy density is expressed as a function of $f_a$. In order to obtain the energy densities observed in the universe, if a cold dark matter axion population exists, the energy scale has to be $f_a \sim 10^{12} \ \text{GeV}$.
The other possibility is to have hot dark matter axions, thermally produced in the early universe via axion-pion conversion ($\pi + \pi \leftrightarrow \pi + a$). Massive thermal axions would have a similar impact on cosmological observables as massive neutrinos. The cosmological bound on the breaking energy scale in this case is $f_a \lesssim 10^{12} \ \text{GeV}$, while the most stringent limit on the thermal axion mass has been placed using the Planck 2015 data and is $m_a < 0.529 \ \text{eV}$ \cite{di2016cosmological}. The astrophysical and cosmological bounds on axion couplings narrow the parameter space where axions could still exist. Moreover, these bounds drive the experimental searches aiming at axion detection. \subsection{Laboratory axion bounds} We have mentioned that bounds on the axion mass came from laboratory experiments at the very early stage of axion history. When Weinberg and Wilczek proposed a $100 \ \text{keV} \ \text{--} \ 1 \ \text{MeV}$ mass axion as the Peccei-Quinn boson, such a candidate was soon ruled out, as an axion with a mass of this order would have had couplings large enough to be observed in several laboratory experiments. This marked the start of the invisible axion era. Since then, there have been other axion searches in laboratory-based experiments. They are mainly magnetometry searches for spin-dependent forces mediated by axion exchange. The results were generally weaker than the bounds set by both astrophysical and cosmological studies. Another approach to axion-like particle detection is the microwave light-shining-through-a-wall experiment, which exploits the Primakoff effect twice \cite{betz2012status}. A photon source is fired against a wall, and some of the photons are converted to axion-like particles by the Primakoff effect. The wall blocks the unconverted photons but not the ALPs, which reach a receiving cavity where the reciprocal conversion takes place via the Primakoff effect. The photons thus obtained can be detected.
Additionally, a very interesting recent publication searches for ultra-low-mass axion dark matter, based on the proposed interaction between an oscillating axion dark matter field and gluons or fermions. Assuming that such an interaction would induce oscillating electric dipole moments of nucleons and atoms, the authors set the first laboratory bound on the axion-gluon coupling and also improved on previous laboratory constraints on the axion-nucleon interaction. In the next chapters, we will provide more information about the possible role of axions and axion-like particles in cosmology and astrophysics. The interest of the scientific community in this kind of search is growing fast, and their discovery would have a strong impact on particle physics and beyond. \chapter{\textbf{Phenomenology of ALPs Coupling with Photons in the Jets of AGNs}} \label{ch5} Interestingly, there are many string theory models of the early universe that motivate the existence of a homogeneous cosmic ALP background (CAB), analogous to the cosmic microwave background, arising via the decay of string theory moduli in the very early universe. In this chapter, we study the phenomenology of the coupling between the CAB ALPs and photons in the presence of an external electromagnetic field that allows the conversion between ALPs and photons. Based on this scenario, we examine the detectability of signals produced by ALP-photon coupling in the highly magnetized environment of the relativistic jets produced by active galactic nuclei. Then, we test the cosmic ALP background model that has been put forward to explain the soft X-ray excess in the Coma cluster via CAB ALP conversion into photons, using the M87 jet environment. Moreover, we demonstrate the potential of the active galactic nuclei jet environment to probe low-mass ALP models and to potentially constrain the model proposed to explain the Coma cluster soft X-ray excess.
\section{Introduction} \label{sec.5.1} If ALPs really exist in nature, they are expected to couple with photons in the presence of an external electric or magnetic field through the Primakoff effect with a two-photon vertex \cite{sikivie1983experimental}. This coupling gives rise to the mixing of ALPs with photons \cite{raffelt1988mixing}, which leads to the conversion between ALPs and photons. This mechanism serves as the basis for ALP searches; in particular, it has been put forward to explain a number of astrophysical phenomena, or to constrain ALP properties using observations. For example, over the last few years, it has been realized that this phenomenon would allow searches for ALPs in observations of distant active galactic nuclei in radio galaxies \cite{bassan2010axion, horns2012probing}. Photons emitted by these sources can mix with ALPs during their propagation in the presence of an external magnetic field, which might reduce the photon absorption caused by extragalactic background light \cite{harris2014photon}. This scenario might lead to a suitable explanation for the unexpected behavior of the spectra of several AGNs \cite{mena2013hints}. Furthermore, because only photons with polarization in the direction of the magnetic field can couple to ALPs, this coupling can change the polarization state of photons. This effect can also be useful to search for ALPs in the environment of AGNs by looking for changes in the linear degree of polarization from the values predicted by the synchrotron self-Compton model of gamma-ray emission.
Recent observations of blazars by the Fermi Gamma-Ray Space Telescope \cite{abdo2011fermi} in the $0.1\text{--}300 \ \text{GeV}$ energy range show a break in their spectra in the $1\text{--}10 \ \text{GeV}$ range. In their papers \cite{mena2013hints, mena2011signatures}, Mena, Razzaque, and Villaescusa-Navarro have modeled this spectral feature for the flat-spectrum radio quasar 3C454.3 during its November 2010 outburst, assuming that a significant fraction of the gamma rays convert to ALPs in the magnetic fields in and around the large-scale jet of this blazar. Furthermore, many galaxy clusters show a soft X-ray excess in their spectra below $1 \textup{--} 2 \ \text{keV}$ on top of the extrapolated high-energy power law \cite{bonamente2002soft, durret2008soft}. This soft excess was first discovered in 1996 from the Coma and Virgo clusters, before being subsequently observed in many other clusters \cite{lieu1996discovery, lieu1996diffuse, bowyer1996extreme}. However, the nature of this component is still unclear. There are two astrophysical explanations for this soft X-ray excess phenomenon; for a review, see \cite{durret2008soft, angus2014soft}. The first is emission from a warm, $T \approx 0.1 \ \text{keV}$, gas. The second is based on inverse-Compton scattering of $\gamma \sim 300 \textup{--} 600$ non-thermal electrons on the cosmic microwave background. However, both of these explanations face difficulties in clarifying the origin of the soft X-ray excess \cite{angus2014soft}. Recent work \cite{conlon2013excess} proposed that this soft excess is produced by the conversion of a primordial cosmic ALP background with $0.1 \textup{--} 1 \ \text{keV}$ energies into photons in the magnetic field of galaxy clusters.
The existence of such a background of highly relativistic ALPs is theoretically well-motivated in models of the early universe arising from compactifications of string theory to four dimensions. Also, the existence of this CAB can be indirectly probed through its contribution to dark radiation, but this is beyond the scope of this work, and we confine ourselves here to highlighting how this can change the ALP bounds. The main aim of the work presented in this chapter is to follow the approach of \cite{mena2013hints} to test the CAB model that was put forward in \cite{angus2014soft} to explain the soft X-ray excess in the Coma cluster, based on the conversion between CAB ALPs and photons in the presence of an external magnetic field, using the astrophysical environment of the M87 jet. We aim as well to demonstrate the potential of the AGN jet environment to probe low-mass ALP models and to potentially constrain the model proposed to explain the soft X-ray excess in the Coma cluster. The structure of this chapter is as follows. In section \ref{sec.5.2}, we review the theoretical model that describes the ALP-photon mixing phenomenon. In section \ref{sec.5.3'}, we describe some aspects of the astrophysical environment of active galactic nuclei and their jets, where these interactions may take place. In section \ref{sec.5.3}, we study the effects of ALP conversion into photons to reproduce the spectral curvature in the radio quasar 3C454.3 and to constrain fundamental parameters of the ALP-photon conversion mechanism. In section \ref{sec.5.4}, we briefly discuss the motivation for the existence of the CAB. Then in section \ref{sec.5.5}, we check whether the soft X-ray excess can be explained in the environment of the M87 AGN jet due to CAB ALP conversion into photons in the jet magnetic field.
In section \ref{sec.5.6}, the results of a numerical simulation of the ALP-photon coupling model are discussed and compared with the soft excess luminosities seen in observations. Finally, our conclusion is provided in section \ref{sec.5.7}. \section{ALP-photon coupling model} \label{sec.5.2} We first outline the theory of the conversion between ALPs and photons in an external magnetic field within the environment of the jets of AGNs, following \cite{mena2013hints, mena2011signatures}. In the presence of a background magnetic field, the coupling of an ALP with a photon is described by the effective Lagrangian \cite{sikivie1983experimental, raffelt1988mixing, anselm1988experimental} \begin{equation} \label{eq.5.1} \mathcal{L}_{a\gamma} = - \frac{1}{4} g_{a\gamma} \mathrm{F}_{\mu \nu} \tilde{\mathrm{F}}^{\mu \nu} a = g_{a\gamma} \ \mathbf{E} \cdot \mathbf{B} \ a \:, \end{equation} where $g_{a\gamma}$ is the ALP-photon coupling parameter with dimension of inverse energy, $\mathrm{F}_{\mu \nu}$ and $\tilde{\mathrm{F}}^{\mu \nu}$ represent the electromagnetic field tensor and its dual, respectively, $a$ denotes the ALP field, and $\mathbf{E}$ and $\mathbf{B}$ are the electric and magnetic fields, respectively. Then, we consider a monochromatic and linearly polarized ALP-photon beam of energy $\omega$ propagating along the $z$-direction in the presence of an external and homogeneous magnetic field. The equation of motion for the ALP-photon system can be described by the coupled Klein-Gordon and Maxwell equations arising from the Lagrangian in equation \eqref{eq.5.1}.
For very relativistic ALPs, when $\omega \gg m_a$, the short-wavelength approximation can be applied successfully, and accordingly the beam propagation can be described by the following Schr\"{o}dinger-like form \cite{raffelt1988mixing, bassan2010axion}\newpage \noindent \begin{equation} \label{eq.5.2} \left( i \dfrac{d}{dz} + \omega + \bm{\mathcal{M}} \right) \left( \begin{matrix} A_{\perp}(z) \\ A_{\parallel}(z) \\ a(z) \end{matrix} \right) =0 \:, \end{equation} where $A_{\perp}$ and $A_{\parallel}$ are the photon linear polarization amplitudes along the $x$ and $y$ axes, respectively, and $a(z)$ denotes the ALP amplitude. Here, $\bm{\mathcal{M}}$ represents the mixing matrix of the ALP field with the photon polarization components. Since only photons polarized parallel to the magnetic field couple to the ALP, for simplicity we restrict our attention to the component of the magnetic field $\mathbf{B}_T$ transverse to the beam direction (\ie in the $x$-$y$ plane). If we choose the $y$-axis along $\mathbf{B}_T$, then $B_x$ vanishes and the mixing matrix can be written as \begin{equation} \label{eq.5.3} \bm{\mathcal{M}} = \left( \begin{matrix} \mathrm{\Delta}_{\perp} & 0 & 0 \\ 0 & \mathrm{\Delta}_{\parallel} & \mathrm{\Delta}_{a\gamma} \\ 0 & \mathrm{\Delta}_{a\gamma} & \mathrm{\Delta}_{a} \end{matrix} \right) \:.
\end{equation} As expressed in references \cite{bassan2010axion, mena2013hints, mena2011signatures}, the elements of $\bm{\mathcal{M}}$ and their reference values are given as follows$:$ \begin{align} \label{eq.5.4} \mathrm{\Delta}_{\perp} &\equiv 2 \ \mathrm{\Delta}_{\text{QED}} + \mathrm{\Delta}_{\text{pl}} \:, \nonumber \\[10pt] \mathrm{\Delta}_{\parallel} &\equiv \frac{7}{2} \ \mathrm{\Delta}_{\text{QED}} + \mathrm{\Delta}_{\text{pl}} \:, \nonumber \\ \mathrm{\Delta}_{a\gamma} &\equiv \frac{1}{2} \ g_{a\gamma} B_T \simeq 1.50 \times 10^{-11} \ \left( \frac{g_{a\gamma}}{10^{-10} \ \text{GeV}^{-1}} \right) \left( \frac{B_T}{10^{6} \ \text{G}} \right) \ \text{cm}^{-1} \:, \nonumber \\ \mathrm{\Delta}_a &\equiv -\frac{m_a^2}{2\omega} \simeq -2.53 \times10^{-13} \ \left( \frac{\omega}{\text{keV}} \right)^{-1} \left( \frac{m_a}{10^{-7} \ \text{eV}} \right)^{2} \ \text{cm}^{-1} \:. \end{align} The two terms $\mathrm{\Delta}_{\text{QED}}$ and $\mathrm{\Delta}_{\text{pl}}$ account for the QED vacuum polarization and the plasma effects, respectively, and are given by \begin{align} \label{eq.5.5} \mathrm{\Delta}_{\text{QED}} &\equiv \frac{\alpha \omega}{45 \pi} \left( \frac{B_T}{B_{cr}} \right)^{2} \simeq 1.34 \times10^{-12} \ \left( \frac{\omega}{\text{keV}} \right) \left( \frac{B_T}{10^{6} \ \text{G}} \right)^{2} \ \text{cm}^{-1} \:, \nonumber \\ \mathrm{\Delta}_{\text{pl}} &\equiv - \frac{\omega^2_{pl}}{2\omega} \simeq -3.49 \times 10^{-12} \ \left( \frac{\omega}{\text{keV}} \right)^{-1} \left( \frac{n_e}{10^{8} \ \text{cm}^{-3}} \right) \ \text{cm}^{-1} \:. \end{align} Here, $\alpha$ is the fine-structure constant and $B_{\text{cr}} \simeq 4.414 \times 10^{13} \ \text{G}$ is the critical magnetic field. In a plasma, the photons acquire an effective mass given in terms of the plasma frequency $\omega^2_{\text{pl}}=4\pi \alpha n_e / m_e$, where $n_e$ is the plasma electron density.
In the general case, the transverse magnetic field $\mathbf{B}_T$ makes an angle $\xi$ (with $0 \leq \xi \leq 2\pi$) with the $y$-axis of a fixed coordinate system. After a rotation of the mixing matrix (equation \eqref{eq.5.3}) in the $x$-$y$ plane, the evolution equation of the ALP-photon system can be written as \begin{equation} \label{eq.5.6} \resizebox{.9\hsize}{!}{$ i \dfrac{d}{dz} \left( \begin{matrix} A_{\perp}(z) \\ A_{\parallel}(z) \\ a(z) \end{matrix} \right) = - \left( \begin{matrix} \mathrm{\Delta}_{\perp} \cos^2 \xi + \mathrm{\Delta}_{\parallel} \sin^2 \xi & \cos \xi \sin \xi (\mathrm{\Delta}_{\parallel}-\mathrm{\Delta}_{\perp}) & \mathrm{\Delta}_{a\gamma} \sin \xi \\ \cos \xi \sin \xi (\mathrm{\Delta}_{\parallel}-\mathrm{\Delta}_{\perp}) & \mathrm{\Delta}_{\perp} \sin^2 \xi + \mathrm{\Delta}_{\parallel} \cos^2 \xi & \mathrm{\Delta}_{a\gamma} \cos \xi \\ \mathrm{\Delta}_{a\gamma} \sin \xi & \mathrm{\Delta}_{a\gamma} \cos \xi & \mathrm{\Delta}_{a} \end{matrix} \right) \left( \begin{matrix} A_{\perp}(z) \\ A_{\parallel}(z) \\ a(z) \end{matrix} \right) \:.$} \end{equation} As the model equations show, the ALP-photon conversion probability is very sensitive to the transverse magnetic field $\mathbf{B}_T$ and plasma electron density $n_e$ profiles. However, their configurations in AGN jets are not yet well constrained. In this work, we adopt the following $\mathbf{B}_T$ and $n_e$ profiles \begin{equation} \label{eq.5.7} B_T(r,R) =J_s(r) \cdot B_{\ast} \left( \frac{R}{R_{\ast}} \right)^{-1} \text{G} \:, \quad \text{and} \quad n_e(r,R) = J_s(r) \cdot n_{e,\ast} \left( \frac{R}{R_{\ast}} \right)^{-s} \ \text{cm}^{-3} \:, \end{equation} where $r$ is the distance from the jet axis, $R$ is the distance along the jet axis from the central supermassive black hole (SMBH), believed to lie at the center of the AGN, and $R_{\ast}$ represents a normalization radius.
The function $J_s(r)$ is the exponentially scaled modified Bessel function of the first kind of order zero \cite{abramowitz1965handbook, arfken1985mathematical}, used to scale the magnetic field and electron density profiles with the distance from the jet axis; the scale length is chosen here to be three times the Schwarzschild radius of the central supermassive black hole. The normalization parameters $B_{\ast}$ and $n_{e,\ast}$ can be found by fitting observational data for a given environment with the suggested magnetic field and electron density profiles. The parameter $s$ is model-dependent and takes the values $s=1, 2, \text{and } 3$. Note that in this work, we study two scenarios related to distinct environments. The first is an attempt to explain the Fermi spectrum of the radio quasar 3C454.3 through ALP conversion to photons, while the other uses independent measurements to constrain the contribution of ALP-photon conversions to the M87 emission. Using the set of parameters discussed above, the evolution equations \eqref{eq.5.6} can be numerically solved to find the two components of the photon linear polarization$;$ $A_{\perp}$ and $A_{\parallel}$. If the initial state consists of ALPs only, the initial condition is $(A_{\perp}, A_{\parallel}, a)^t = (0,0,1)$ at the distance $R = R_{\text{min}}$ where the ALPs enter the jet. The probability for an ALP to convert into a photon after traveling a certain distance in the magnetic field inside the AGN jet can then be defined as \cite{raffelt1988mixing, mirizzi2008photon} \begin{equation} \label{eq.5.9} P_{a\rightarrow \gamma} (E) = \vert A_{\parallel}(E) \vert^2 + \vert A_{\perp}(E) \vert^2 \:. \end{equation} Notice that we follow the authors of reference \cite{mena2013hints} in assuming the jet field to be coherent along the studied scales, so that the ALP-photon conversion probability is independent of the coherence length of the magnetic field.
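To make the numerical procedure concrete, the following Python sketch integrates the beam equation for a pure-ALP initial state and evaluates equation \eqref{eq.5.9}. It is a toy illustration only: it assumes constant $B_T$ and $n_e$ along the path and $\xi = 0$, so the mixing matrix reduces to equation \eqref{eq.5.3}, and the parameter values in the example call are illustrative rather than fitted.

```python
import numpy as np
from scipy.integrate import solve_ivp

def conversion_probability(omega_keV, B_T_G, n_e_cm3, L_cm,
                           g_GeV=1e-10, m_a_eV=1e-13):
    """P(a -> gamma) after a path length L_cm through a uniform slab,
    using the Delta scalings of eqs. (5.4)-(5.5) (all in cm^-1)."""
    D_ag  = 1.50e-11 * (g_GeV / 1e-10) * (B_T_G / 1e6)
    D_a   = -2.53e-13 / omega_keV * (m_a_eV / 1e-7) ** 2
    D_QED = 1.34e-12 * omega_keV * (B_T_G / 1e6) ** 2
    D_pl  = -3.49e-12 / omega_keV * (n_e_cm3 / 1e8)
    M = np.array([[2.0 * D_QED + D_pl, 0.0,                0.0 ],
                  [0.0,                3.5 * D_QED + D_pl, D_ag],
                  [0.0,                D_ag,               D_a ]])
    # Eq. (5.2): i d/dz psi = -(omega + M) psi; the omega term is a global
    # phase and drops out of probabilities, leaving psi' = i M psi.
    rhs = lambda z, psi: 1j * (M @ psi)
    psi0 = np.array([0.0, 0.0, 1.0], dtype=complex)   # ALPs only at entry
    sol = solve_ivp(rhs, (0.0, L_cm), psi0, rtol=1e-10, atol=1e-12)
    A_perp, A_par, a = sol.y[:, -1]
    # M is real symmetric, so the evolution is unitary: check norm.
    assert abs(abs(A_perp)**2 + abs(A_par)**2 + abs(a)**2 - 1.0) < 1e-6
    return abs(A_par) ** 2 + abs(A_perp) ** 2          # eq. (5.9)

# Illustrative call: 1 keV beam, 1 G field, n_e = 100 cm^-3, L = 1e17 cm.
p = conversion_probability(1.0, 1.0, 1e2, 1e17)
```

For a uniform slab the result can be checked against the standard two-level formula $P = \frac{4\mathrm{\Delta}_{a\gamma}^2}{\mathrm{\Delta}_{\text{osc}}^2}\sin^2\!\left(\frac{\mathrm{\Delta}_{\text{osc}} L}{2}\right)$ with $\mathrm{\Delta}_{\text{osc}}^2 = (\mathrm{\Delta}_{\parallel}-\mathrm{\Delta}_a)^2 + 4\mathrm{\Delta}_{a\gamma}^2$; in the full analysis the profiles of equation \eqref{eq.5.7} vary along the path, which is why the integration is performed numerically.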
In addition, in \cite{marsh2017new}, the authors find coherence lengths in M87 in excess of the longest scales studied here. We will discuss later how the ALP-photon conversion probability is affected by the jet geometry and the direction of the beam propagation inside the jet. Moreover, the observations can be compared with the ALP-photon mixing model results by plotting the energy spectrum ($\nu F_{\nu} \equiv E^2 dN/dE$) as a function of the energy of the photons, with $\omega \equiv E(1+z)$ where $z$ is the AGN redshift. The final photon spectrum is obtained by multiplying the photon production probability $P_{a\rightarrow \gamma} (E)$ by the spectrum $S(E)$ of the incident ALP flux, e.g. a Band spectrum \cite{band1993batse} \begin{equation} \label{eq.5.10} E^2 \dfrac{dN}{dE} = P_{a\rightarrow \gamma} (E) \cdot S(E) \:. \end{equation} Hence, the ALP-photon mixing model for the AGN jet includes six free parameters$:$ the normalization of the magnetic field $B_{\ast}$, the normalization of the electron density $n_{e,\ast}$, the ALP mass $m_a$, the ALP-photon coupling parameter $g_{a\gamma}$, and two additional geometric parameters $\theta$ and $\phi ;$ we will discuss their role later in section \ref{sec.5.4}. \section{The astrophysical environment of AGNs and their jets} \label{sec.5.3'} Before we go on with our discussion of the effects of ALP-photon conversion, we describe in this section some aspects of the astrophysical environment of active galactic nuclei and their jets where these interactions may take place. AGNs are the compact cores of a special category of active galaxies and are among the most massive and luminous compact objects in the universe; see \cite{begelman1984theory, peterson1997introduction} for reviews.
They are also distinguished by emitting intense amounts of radiation across the entire electromagnetic spectrum due to accretion onto the central supermassive black holes, which are considered the most likely engine of the activity in these sources. According to the accepted evolutionary models, galaxies gather their masses through a sequence of accretion and merger phases. As a result of this process, million-to-billion solar mass SMBHs form at the centers of massive galaxies. An accretion disk may form around the SMBH due to the non-zero angular momentum of the infalling matter. As substantial energy is released in the accretion disk through its efficient mass-energy conversion, the compact center of the galaxy becomes an AGN. The existence of the SMBH at the center of an AGN explains not only the large energy output, through the release of gravitational energy by accretion, but also the small size of the emitting regions and, connected to it, the short variability time scales of AGNs. General relativistic modeling of magnetized jets from accreting black holes shows that, provided the central spin is high enough, a pair of relativistic jets is launched from the immediate vicinity of the black hole, composed mostly of electrons and positrons (in light jets) or electrons and protons (in heavy jets). These jets could be powered either by the rotational energy of the central SMBH or by the magnetized accretion disk wind accelerated by magneto-centrifugal forces. The presence of jets affects the spectrum of AGNs through the emission of synchrotron radiation and inverse Compton scattering of low-energy photons, leading to a prominent non-thermal spectrum, sometimes extending from radio frequencies all the way up to $\gamma$-ray energies. Particles are accelerated on helical paths along the magnetic field lines, emitting synchrotron radiation.
In the Fourier transform of the continuum spectrum of the synchrotron radiation, the most powerful frequency is the critical frequency. The electron energies and magnetic field strengths typical of AGNs place the critical frequency in the radio regime, so the collimated jets are observed as radio-loud AGNs. Apart from the synchrotron radiation at radio to optical (and in some cases X-ray) wavelengths, AGNs and their jets emit energetic photons in the X-ray and $\gamma$-ray bands due to inverse Compton scattering. Observation of synchrotron radiation from the jets of AGNs implies that the material in the jet is a magnetized plasma. The content of the plasma may be electrons and protons, electrons and positrons, or a mixture of the two. In general, the emission spectra of the emitting particles (mostly electrons) can be characterized by a power-law energy distribution. Since the jet broadens with distance $R$ from the core, the magnetic field $B$ and the number density of the emitting particles $n$ decrease as $B(R) \propto R^{-b}$ and $n(R) \propto R^{-a}$, where the exponents $a$ and $b$ are positive numbers \cite{chatterjee2010multi}. It is well established that galaxy clusters feature strong magnetic fields, with typical field strengths ranging from a few $0.1 \ \mu\text{G}$ up to several $10 \ \mu\text{G}$ for the most massive ones \cite{dubois2009influence}. Such fields extend over megaparsec distances and have kiloparsec coherence scales. However, the values of the jets' main physical parameters, such as the matter density and composition inside the jets, are still very poorly understood or constrained, and require more investigation in order to gain insight into the main physical properties of AGNs. In the current literature, the typical electron density within the environment of AGN jets is estimated to range from tens to a few thousand $\text{cm}^{-3}$ \cite{kakkad2018spatially}.
\section{Signatures of ALP-photon coupling in the jets of AGNs} \label{sec.5.3} The Fermi Large Area Telescope reported observations of the radio quasar 3C454.3, at a redshift of $z=0.895$, in the period from 2010 September 1 to December 13, consisting of four epochs \cite{abdo2011fermi}$:$ (i) a pre-flare period, (ii) a 13-day-long plateau period, (iii) a 5-day flare, and (iv) a post-flare period. The ALP-photon mixing model was then used in \cite{mena2013hints} to fit these observational data by plotting the $\gamma$-ray energy spectra ($\nu F_{\nu} \equiv E^2 dN/dE$) as a function of the photon energy $E$ to obtain constraints on the ALP parameters. To validate the results in \cite{mena2013hints}, we replicated the model analyses considering the photons to be initially unpolarized, applying the following initial condition$:$ $(A_{\parallel}(E), A_{\perp}(E), a(E))=(\frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}}, 0)$ at $z \equiv R = 10^{18} \ \text{cm}$. The angle $\xi$ was fixed to $\pi/4$ throughout the calculations. The evolution equations \eqref{eq.5.2} were solved numerically to produce the spectral features of the blazar 3C454.3 in its four epochs for three different cases of the electron density profile. \begin{figure}[t!]
\centering \includegraphics[width=18.2pc]{a.pdf} \includegraphics[width=18.2pc]{b.pdf} \includegraphics[width=18.2pc]{c.pdf} \caption{The numerical simulation of the spectral energy fitting observations of blazar 3C454.3 for four different epochs, produced using the ALP-photon mixing model with electron density profile$:$ (a) $n_e \propto R^{-1}$, (b) $n_e \propto R^{-2}$ and (c) $n_e \propto R^{-3}$.} \label{fig.5.1} \end{figure} Figure \ref{fig.5.1} shows the numerical simulation of the spectral energy fitting observations of blazar 3C454.3 for four different epochs (flare, post-flare, plateau, and quiet) using the ALP-photon mixing model with the three different electron density profiles, corresponding to the model-dependent parameter $s=1$ (a), $2$ (b), and $3$ (c). The two spectral parameters $C$ and $\mathrm{\Gamma}$ were varied from epoch to epoch, as they are affected by the $\gamma$-ray emission region, while the other four parameters, $\phi$, $\eta$, $m_a$, and $g_{a\gamma}$, were kept fixed. Comparing our results with those published in \cite{mena2013hints} allows us to deduce that the spectral energy distributions obtained using the numerical solutions to the evolution equations \eqref{eq.5.2} agree well with the published results. The best fit of the ALP-photon mixing model to the observational data for the blazar jet is achieved when the transition between photons and ALPs takes place over radii $R \sim 10^{18}-10^{21} \ \text{cm}$ for $\phi \sim 10^{-2} $, $\eta \sim 10^9$, $m_a \sim 10^{-7} \ \text{eV}$, and $g_{a\gamma}\sim 10^{-10} \ \text{GeV}^{-1}$. So far, we have examined the ALP-photon model developed by Mena and Razzaque to fit the spectral features of the flat-spectrum radio quasar 3C454.3 during its November 2010 outburst.
This allowed the aforementioned authors to derive constraints on the ALP parameters by assuming that a significant fraction of the gamma rays convert to ALPs in the presence of an external magnetic field in the large-scale jet of this blazar. We reproduced their results, which are in very good agreement with the observational data for the 3C454.3 blazar; this makes us confident that our simulation is robust. As a step forward, in the following sections we will discuss how to use the environment of the M87 AGN jet to test whether the CAB ALP conversion to photons, which is proposed to explain the Coma cluster X-ray excess, survives comparison with X-ray data in M87. Before moving on to the discussion of the cosmic axion background in the next section, two notes are worth making. First, the example of the 3C454.3 blazar was discussed to verify the parameters used for the ALP-photon mixing model in reference \cite{mena2013hints} and to ensure that the simulation was reproduced correctly. Second, we then used the parameters relevant to the M87 environment and to the soft X-ray excess CAB model for the Coma cluster, because our next aim is to use the M87 environment to test the CAB Coma model. We consider the M87 AGN because it is the best-characterized AGN in the literature, with a wealth of information and data available about it \cite{macchetto1997supermassive, gebhardt2009black, event2019first}. \section{Cosmic axion background} \label{sec.5.4} In this section, we follow reference \cite{conlon2013excess}, whose authors motivate the existence of a homogeneous cosmic axion background with $0.1 \textup{--} 1$ keV energies arising via the decay of string theory moduli in the very early universe. The origin of this idea lies in the four-dimensional effective theories arising from compactifications of string theory.
A generic feature of these effective theories is the presence of massive scalar particles called moduli, coming from the compactification of massless fields. Theoretical arguments indicate that such moduli should be responsible for the reheating of the standard model degrees of freedom. Regardless of the details of the inflation model, moduli are usually displaced from their final metastable minimum during inflation and begin to oscillate at the end of inflation. Because their interactions are very weak, typically suppressed by the Planck mass, the moduli are long-lived. The oscillating moduli fields subsequently come to dominate the energy density of the universe, which enters a modulus-dominated stage that lasts until the moduli decay into visible and hidden sector matter and radiation, thus leading to reheating. The visible sector decays of the modulus rapidly thermalize and initiate the hot big bang. The gravitational origin of the moduli implies that they can also decay to massless hidden sector particles with extremely weak interactions, such as ALPs. Two-body decays of a modulus field $\mathrm{\Phi}$ into ALPs are induced by the Lagrangian coupling $\frac{\mathrm{\Phi}}{M_P} \partial_\mu a \partial^\mu a$, resulting in ALPs with an initial energy $E_a=m_{\mathrm{\Phi}}/2$. Since these ALPs arising from moduli decays at the time of reheating are weakly interacting, they do not thermalize, and the vast majority of them propagate freely to the present day. This implies that they linger today, forming a homogeneous and isotropic CAB with a non-thermal spectrum determined by the expansion of the universe during the time of moduli decay. Furthermore, because they are relativistic, they contribute to the dark radiation energy density of the universe, but this is beyond the scope of this work. For moduli masses $m_{\mathrm{\Phi}} \approx 10^{6}$ GeV, the present energy of these axions is $E_a \sim 0.1 \textup{--} 1$ keV.
The suggestion is that the natural energy for such a background lies between $0.1$ and $1$ keV. Furthermore, in \cite{conlon2013cosmophenomenology} it was shown that such a CAB would have a quasi-thermal energy spectrum with a peak dictated by the mass of the ALP. It is also worth noting that the ALP number density in the CAB, and consequently the ALP flux with energies between $E$ and $E+dE$, is directly related to the shape of the CAB energy spectrum. This shape depends in general on the mean ALP energy $\langle E_a \rangle$, with some model-dependent constants that may be measured through the CAB contribution to the effective number of neutrino species $\mathrm{\Delta} N_{\text{eff}}$, which is directly related to the CAB energy density $\rho_a$ \cite{day2016cosmic}. This CAB is also invoked in \cite{angus2014soft} to explain the soft X-ray excess on the periphery of the Coma cluster with an ALP mass of $1.1\times 10^{-13}$ eV and a coupling to the photon of $g_{a\gamma} = 2 \times 10^{-13} \ \text{GeV}^{-1}$. In this work, we assume, for convenience, that the CAB has a thermal spectrum with an average energy of $\langle E_a \rangle = 0.15 \ \text{keV}$. We then normalize the distribution to the typical example quoted in \cite{conlon2013cosmophenomenology}. We use the thermal distribution as an approximation, as the exact shape of the distribution will not substantially affect the conclusions we draw. We can then determine the fraction of CAB ALPs converted into photons within the environment of the M87 AGN jet and use this to determine the resulting photon flux. Under these assumptions, the predicted photon flux depends on the value of the CAB mean energy $\langle E_a \rangle$. This flux can be compared to X-ray measurements to see whether such environments can constrain low-mass ALPs and put limits on the ALP explanation of the X-ray excess.
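The thermal approximation above can be made concrete with a short numerical sketch. Here we model the CAB number spectrum by a massless Bose-Einstein form $dN/dE \propto E^2/(e^{E/T}-1)$ (our simplifying assumption; the true CAB spectrum is only quasi-thermal) and solve for the effective temperature $T$ that reproduces $\langle E_a \rangle = 0.15$ keV. The overall normalization is left free, since in the text it is fixed to the example of \cite{conlon2013cosmophenomenology}.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

MEAN_E_KEV = 0.15  # assumed CAB mean energy, as adopted in the text

def mean_energy(T):
    """Mean energy of a massless thermal (Bose-Einstein) number spectrum
    dN/dE ~ E^2 / (exp(E/T) - 1), in the same units as T."""
    num, _ = quad(lambda E: E**3 / np.expm1(E / T), 0.0, 50.0 * T)
    den, _ = quad(lambda E: E**2 / np.expm1(E / T), 0.0, 50.0 * T)
    return num / den

# Effective temperature reproducing <E_a> = 0.15 keV.
T_cab = brentq(lambda T: mean_energy(T) - MEAN_E_KEV, 0.01, 0.15)

def S(E, T=T_cab):
    """Unnormalized thermal CAB number spectrum (arbitrary normalization)."""
    return E**2 / np.expm1(E / T)
```

For a massless thermal boson spectrum the analytic relation is $\langle E \rangle \simeq 2.701\,T$, so the root-finder should return $T \simeq 0.056$ keV; the resulting $S(E)$ can then be multiplied by $P_{a\rightarrow\gamma}(E)$ as in equation \eqref{eq.5.10}.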
\section{The soft X-ray excess CAB in the environment of the M87 AGN jet} \label{sec.5.5} It is established that most of the baryonic mass of galaxy clusters is composed of a hot ionized intracluster medium (ICM) with temperatures of about $T \approx 10^8 \ \text{K}$ (corresponding to $\omega \approx 7 \ \text{keV}$) and number densities in the range $n \sim 10^{-1} \textup{--} 10^{-3} \ \text{cm}^{-3}$. The ICM generates diffuse X-ray emission through thermal bremsstrahlung. A thermal bremsstrahlung spectrum is well approximated at low energies by a constant emissivity per unit energy. However, observations of many galaxy clusters show a soft X-ray excess in their spectra at low energies, around $1 \textup{--} 2 \ \text{keV}$, above that expected from the hot ICM, and the origin of this component is still unclear. In this work, we adopt the scenario in which this soft X-ray excess is produced by the conversion of a primordial cosmic ALP background into photons in the cluster magnetic field. The central M87 AGN of the Virgo cluster is the best-characterized AGN in the literature due to its proximity \cite{macchetto1997supermassive, gebhardt2009black, event2019first}. In this respect, we numerically solve the ALP-photon mixing model described in section \ref{sec.5.2} and use it to study the photon production probability from CAB ALPs in the environment of the M87 AGN jet. We then test whether the model can reproduce the X-ray emission of the M87 AGN from ALP-photon conversion with a very low ALP mass and a very small ALP-photon coupling. The M87 AGN is a radio galaxy at a luminosity distance of $16.7 \pm 0.2$ Mpc \cite{mei2007acs} and a redshift of $z = 0.00436$. Based on its radio images and the modeling of its interaction with the surrounding environment, it is suggested that the M87 jet is misaligned with respect to the line of sight \cite{biretta1999hubble, biretta1995detection}.
Therefore, we consider the situation in which there is a misalignment between the ALP-photon beam propagation direction and the AGN jet direction. Accordingly, we have to take into account the geometry of the AGN jet and the direction of the ALP-photon beam propagation. In this case, two more parameters play an important role in the study of the ALP-photon conversion probability$:$ the misalignment angle $\theta$ between the jet direction and the line of sight, and the AGN jet opening angle $\phi$, which define the jet geometry. When $\theta=0$, the ALP-photon beam propagates parallel to the $R$-direction, and we expect the ALP-photon conversion probability to be unaffected by the jet geometry represented by the jet opening angle $\phi$. However, when the ALP-photon beam crosses the jet diagonally, making a misalignment with the $R$-direction, the ALP-photon conversion probability may be affected by the misalignment angle $\theta$ as well as the jet opening angle $\phi$. The opening angle $\phi$ of the M87 AGN jet near the base is less than about 5 degrees \cite{biretta1995detection}, and the misalignment angle $\theta$ is less than about 19 degrees \cite{walker2008vlba, hada2017structure}. In this study, we make our choices such that the magnetic field and electron density profiles used for the M87 AGN are consistent with the values obtained in \cite{park2019faraday}. In this perspective, we use an electron density profile $n_e \propto R^{-1}$, with the model-dependent parameter $s=1$ in equation \eqref{eq.5.7}. For our case of $\mathbf{B}_T$ and $n_e$ varying with $R$ as in equation \eqref{eq.5.7}, the transition of ALPs to photons takes place over distances $R \sim 10^{16} \textup{--} 10^{17} \ \text{cm}$, with normalization radius $R_{\ast} = 6 \times 10^{20} \ \text{cm}$. The environmental parameters are taken as $B_{\ast}= 1.4 \times 10^{-3}$ G and $n_{e,\ast}=0.3$ cm$^{-3}$ at the distance $R_{\ast}$.
Note that, as $n_e$ values are only available at larger distances from the base \cite{park2019faraday}, we assume that we can extrapolate them down to small radii. In addition, we set the ALP mass to $1.1\times 10^{-13}$ eV and start with an ALP-photon coupling around $g_{a\gamma} \sim 2 \times 10^{-13}$ GeV$^{-1}$, in agreement with the models derived in \cite{angus2014soft} to explain the soft X-ray excess on the periphery of the Coma cluster. \section{Probing low-mass ALP models within the jets of AGNs} \label{sec.5.6} In this section, we discuss the results of the numerical simulation of the conversion probabilities and present the predictions of the scenario in which CAB ALP conversion to photons produces the M87 AGN soft X-ray excess. To obtain our results, we apply the ALP-photon mixing model to study the probability for CAB ALPs to convert to photons in the magnetic field of the M87 AGN jet, with an initial state of ALPs only at $R_{\text{min}}= 10^{16} \ \text{cm}$. Figure \ref{fig.5.1} shows the ALP-photon conversion probability $P_{a \rightarrow \gamma} (E)$ as a function of energy for different values of the misalignment angle $\theta$ and the jet opening angle $\phi$. The different curves in the left panel correspond to $\theta = 5^\circ, 10^\circ, 15^\circ,$ and $20^\circ$ at fixed $\phi=4^\circ$, while the different curves in the right panel correspond to $\phi = 4^\circ, 8^\circ,$ and $12^\circ$ at fixed $\theta=20^\circ$. It is clear from the two graphs that the maximum conversion probability occurs when the misalignment angle $\theta$ is closest to the opening angle $\phi$. This can be explained by the relation between the ALP-photon beam direction and the jet geometry.
For the beam to cross the jet diagonally from one side to the other, making an arbitrary angle with the magnetic field between zero and $\pi$, the condition for the beam to take the longest path is that the misalignment angle $\theta$ be very close (but not equal) to the opening angle $\phi$. We remark that this longest path is still less than the maximum distance $R_{\text{max}} = 10^{17} \ \text{cm}$ traversed when there is no misalignment. The misalignment at a given opening angle controls the point at which the ALP-photon beam leaves the jet and, therefore, the total distance that the beam travels inside the jet. Since we selected an electron density profile $n_e \propto R^{-1}$, this fixes the relationship between the misalignment and the electron density along the path, which in turn sets the critical energy at which stronger mixing starts; as shown by the plots, less misaligned cases have higher critical energies and thus less overlap with the CAB spectrum. \begin{figure}[th!] \centering \includegraphics[width=.495\textwidth]{theta.pdf} \hfill \includegraphics[width=.495\textwidth]{phi.pdf} \caption{Plot of the ALP-photon conversion probability $P_{a \rightarrow \gamma} (E)$. Left panel$:$ The different curves correspond to $\theta = 5^\circ, 10^\circ, 15^\circ,$ and $20^\circ$ at fixed $\phi=4^\circ$. Right panel$:$ The different curves correspond to $\phi = 4^\circ, 8^\circ,$ and $12^\circ$ at fixed $\theta=20^\circ$.} \label{fig.5.1} \end{figure} At this stage, we are ready to present the output of the model and to place new constraints on the acceptable values of the ALP-photon coupling parameter $g_{a\gamma}$. Figure \ref{fig.5.2} shows our results for the energy spectrum distributions obtained from the numerical simulation of ALP conversion to photons in the magnetic field of the M87 AGN jet.
For these plots, we kept the plasma electron density profile $n_e \propto R^{-1}$, taking the parameter $s=1$ in equation \eqref{eq.5.7}, and used an opening angle $\phi=4^\circ$ for the M87 AGN jet. The different plots represent the energy spectrum distributions at different values of the misalignment angle that the ALP-photon beam makes with the jet direction, $\theta = 0^\circ, 5^\circ, 10^\circ, 15^\circ, 20^\circ,$ and $25^\circ$. For each case, we find the maximum value of the ALP-photon coupling $g_{a\gamma}$ such that we do not exceed the observed flux $\sim 3.76 \times 10^{-12} \ \text{erg} \ \text{cm}^{-2} \ \text{s}^{-1}$ from the M87 AGN between $0.3$ and $8$ $\text{keV}$ \cite{m87chandra}. Table \ref{tab.5.1} shows six different cases of $\theta = 0^\circ, 5^\circ, 10^\circ, 15^\circ, 20^\circ, 25^\circ$ at fixed $\phi=4^\circ$ and three different cases of $\phi = 4^\circ, 8^\circ, 12^\circ$ at fixed $\theta=20^\circ$, with the corresponding constraints the model puts on the ALP-photon coupling to produce a flux compatible with observations. The results summarized in the table show that the model constrains the ALP-photon coupling to about $\sim 7.50 \times 10^{-15} \textup{--} 6.56 \times 10^{-14} \ \text{GeV}^{-1}$ for ALP masses $m_a \lesssim 10^{-13} \ \text{eV}$, if there is a misalignment between the AGN jet direction and the line of sight of less than 20 degrees. It is important to note that the bounds on $g_{a\gamma}$ we derive in this work are stronger than those found in \cite{marsh2017new}, which also simulates the effects of ALP-photon conversion in the environment of M87. The advantage in terms of constraints arises because we specifically study a CAB, whereas the authors in \cite{marsh2017new} consider the loss of photons to ALP conversion in general, rather than the addition of photons from a cosmic ALP flux.
The very large magnitude of the background ALP flux is what allows us to achieve such strong constraints. Additionally, we note that the authors of \cite{marsh2017new} derive more universal limits, as they do not require a CAB to exist for their limits to be valid. \begin{figure}[th!] \centering \includegraphics[width=.495\textwidth]{theta=0.pdf} \hfill \includegraphics[width=.495\textwidth]{theta=5.pdf} \vfill \includegraphics[width=.495\textwidth]{theta=10.pdf} \hfill \includegraphics[width=.495\textwidth]{theta=15.pdf} \vfill \includegraphics[width=.495\textwidth]{theta=20.pdf} \hfill \includegraphics[width=.495\textwidth]{theta=25.pdf} \caption{The numerical simulation of the energy spectrum distributions from ALP conversion to photons in the magnetic field of the M87 AGN jet at fixed opening angle $\phi =4^\circ$ and different values of the misalignment angle $\theta$. Top left panel, $\theta=0^\circ$. Top right panel, $\theta=5^\circ$. Middle left panel, $\theta=10^\circ$. Middle right panel, $\theta=15^\circ$. Bottom left panel, $\theta=20^\circ$. Bottom right panel, $\theta=25^\circ$.} \label{fig.5.2} \end{figure} \begin{table}[ht!]
\centering \begin{tabular}{|c|c|c|c|} \hline $\:$ $\theta$ (${}^\circ$), $\phi= 4^\circ$ $\:$& $\qquad$ $g_{a\gamma}$ $(\text{GeV}^{-1})$ $\qquad$ &$\:$ $\phi$ (${}^\circ$), $\theta = 20^\circ$ $\:$ & $\qquad$ $g_{a\gamma}$ $(\text{GeV}^{-1})$ $\quad$ \\ \hline $0$ & $\lesssim 3.91 \times 10^{-13}$ &$4$ & $\lesssim 6.56 \times 10^{-14}$\\ $5$ & $\lesssim 9.17 \times 10^{-15}$&$8$ & $\lesssim 2.32 \times 10^{-14}$\\ $10$ & $\lesssim 7.50 \times 10^{-15}$&$12$ & $\lesssim 7.99 \times 10^{-15}$\\ $15$ & $\lesssim 2.08 \times 10^{-14}$&&\\ $20$ & $\lesssim 6.56 \times 10^{-14}$&&\\ $25$ & $\lesssim 1.98 \times 10^{-13}$&&\\ \hline \end{tabular} \caption{The upper limits on the ALP-photon coupling $g_{a\gamma}$ corresponding to different values of the misalignment angle $\theta$ and the jet opening angle $\phi$ for the M87 AGN, required to produce a flux compatible with observations.} \label{tab.5.1} \end{table} \section{Conclusion} \label{sec.5.7} \begin{figure}[t!] \centering \includegraphics[scale=1.035]{a-g-limits.pdf} \caption[The allowed regions of the ALP mass-coupling plane. The bands for the ALP-photon coupling $g_{a\gamma}$ where the conversion of CAB ALPs with $m_a \lesssim 10^{-13} \ \text{eV}$ to photons can explain the soft X-ray excess of the Coma cluster are shown with the current limits and the suggested new limits.]{The allowed regions of the ALP mass-coupling plane. The bands for the ALP-photon coupling $g_{a\gamma}$ where the conversion of CAB ALPs with $m_a \lesssim 10^{-13} \ \text{eV}$ to photons can explain the soft X-ray excess of the Coma cluster are shown with the current limits and the suggested new limits. This figure is extended from \cite{dias2014quest}.} \label{Fig.5.4} \end{figure} In this chapter, we studied the conversion between ALPs and photons in the presence of external magnetic fields using the ALP-photon mixing model developed in \cite{mena2013hints, mena2011signatures}. 
We then examined the detectability of signals produced by ALP-photon coupling in the highly magnetized environment of the relativistic jets produced by active galactic nuclei. Further, we tested the CAB model proposed in \cite{angus2014soft} to explain the soft X-ray excess on the Coma cluster periphery. We did this by calculating the X-ray emission due to CAB propagation in the jet of the M87 AGN. It is evident from these results that the overall X-ray emission of the M87 AGN, between $0.3$ and $8$ keV, can be reproduced via photon production from CAB ALPs with a coupling $g_{a\gamma}$ in the range $\sim 7.50 \times 10^{-15} \textup{--} 6.56 \times 10^{-14} \ \text{GeV}^{-1}$ for ALP masses $m_a \sim 1.1 \times 10^{-13} \ \text{eV}$, provided the misalignment between the AGN jet direction and the line of sight is less than 20 degrees; indeed, the M87 jet has been found to be misaligned by less than 19 degrees \cite{walker2008vlba, hada2017structure}. These values are up to an order of magnitude smaller than the current best-fit value of the ALP-photon coupling, $g_{a\gamma} \sim 2 \times 10^{-13} \ \text{GeV}^{-1}$, obtained in the soft X-ray excess CAB model for the Coma cluster \cite{angus2014soft}. This casts doubt on the current limits on the largest allowed value of the ALP-photon coupling, as a universal background would need to be consistent with observations of any given environment. Thus, our results exclude the current best-fit value of the ALP-photon coupling for the Coma soft X-ray excess CAB model from \cite{angus2014soft} and instead place a new constraint, $g_{a\gamma} \lesssim 6.56 \times 10^{-14} \ \text{GeV}^{-1}$, when a CAB is considered. Figure \ref{Fig.5.4} summarizes the range of the allowed ALP-photon couplings together with other constraints on the ALP parameter space. 
The green regions are the projected sensitivities of the light-shining-through-wall experiment ALPS-II, of the helioscope IAXO, of the haloscopes ADMX and ADMX-HF, and of the PIXIE or PRISM cosmic microwave background observatories. Axions and ALPs with parameters in the regions surrounded by the red lines may constitute all or a part of cold dark matter, explain the cosmic $\gamma$-ray transparency, and explain the soft X-ray excess from Coma. Our results using the environment of the M87 AGN jet show that we can exclude the area between the red line labeled ``Soft X-Ray Excess from Coma (Current limits)'' and the red line labeled ``Soft X-Ray Excess from Coma (Suggested limits)'' from the allowed parameter space. This is a strong motivation to improve the sensitivity of current and future ALP searches, in both observations and laboratory experiments, down to this range of the ALP-photon coupling. \chapter{\textbf{Introduction and Motivation}} \label{ch1} The question of the fundamental origin of matter, and of how nature works, has interested philosophers and scientists since the dawn of history. Almost all civilizations and cultures have attempted an answer, until science was able to offer an account grounded in evidence. The magnificent progress made in this regard, particularly in the last few decades, is undoubtedly one of humanity's most important achievements in understanding the nature of our reality. 
In theoretical physics, our present best understanding of the behavior of the universe is based upon the extraordinary successes of the standard model of particle physics (SMPP) \cite{glashow1961partial, weinberg1967model, salam1968weak}, which describes the physics of very small objects in terms of quantum mechanics (QM) \cite{planck1978gesetz}, together with the standard model of cosmology (SMC) \cite{gamow1946expanding, alpher1948evolution}, which describes the physics of very large objects in terms of the theory of general relativity (GR) \cite{einstein1915allgemeinen, einstein1916grundlage}. According to this picture, the structure of the universe is explained in terms of a set of elementary particles interacting with each other through four fundamental forces. Gravity, electromagnetism, and the weak and strong interactions are considered the four fundamental forces in nature, all of which are described on the basis of symmetry principles \cite{maldacena2015symmetry}. The SMC, also called the $\mathrm{\Lambda} \text{CDM}$ model, is essentially based on Einstein's general theory of relativity for the gravitational force, which improved upon Newton's theory of gravity \cite{newton2013philosophiae, bone1996sir}. It is a purely classical theory, as it does not incorporate any idea of quantum mechanics into its formulation. Today, general relativity is widely accepted as our best description of the physics of the gravitational field, and it has many successes in describing the structure of the universe on the macroscopic scale \cite{rovelli2004quantum, oriti2009approaches}. The SMPP, on the other hand, is broadly accepted as the fundamental description of particle physics. 
It successfully handles the interactions of the elementary particles due to the other three fundamental forces, the electromagnetic, weak, and strong forces, within the framework of quantum mechanics at the microscopic scale \cite{rovelli2004quantum, oriti2009approaches}. Despite the many successes and the strong empirical support of the two standard models (SM), at first sight they appear to be incompatible, since each of them is formulated on principles that are explicitly contradicted by the other model. This contradiction leaves many foundational issues that are still poorly understood, and numerous basic questions remain active areas of current research. Although there is ample evidence that many steps have been taken toward understanding the behavior of matter and the structure of the universe, there are also many reasons to believe that something very basic is missing, and that our fundamental understanding of the nature of matter and the current picture of the world is still incomplete. In the last few years, the connection between cosmology and particle physics has been developing very rapidly. The potential now exists to revolutionize our knowledge through the discovery of new physics, which will require new theories to be developed, or existing theories to be amended, to account for it. \section{Standard model of cosmology} The observed expansion of the universe is a natural result of any homogeneous and isotropic cosmological model based on general relativity \cite{slipher1915spectrographic, lundmark1924determination, hubble1931velocity}. In 1915, Einstein developed the general theory of relativity to improve upon Newton's theory of gravity in describing the gravitational interactions between matter. 
A comprehensive introduction to general relativity can be found in various textbooks, such as \cite{weinberg1972gravitation, hawking1973large, einstein2003meaning, carroll2004spacetime, wald2007general}. General relativity is a purely classical theory and does not incorporate any idea of quantum mechanics into its formulation. The critical aspect of general relativity is the dynamical nature of spacetime, and it essentially rests upon the following two fundamental postulates$:$ \begin{itemize} \item The principle of relativity, which states that all systems of reference are equivalent with respect to the formulation of the fundamental laws of physics. \item The principle of equivalence, which states that in the vicinity of any point, a gravitational field is equivalent to an accelerated frame of reference in gravity-free space. \end{itemize} These principles lead to the fundamental insight of general relativity$:$ gravity cannot be regarded as a conventional force, but rather as a manifestation of spacetime geometry. In other words, general relativity asserts that the gravitational force is a result of the curvature of spacetime. This changes the Newtonian picture, in which mass is the source of gravity. In general relativity, the mass turns out to be a part of a more general quantity called the energy-momentum tensor ($T_{\mu \nu}$) \cite{schutz2009first}, which includes both energy and momentum densities and encodes how matter is distributed in spacetime. It is therefore natural that the energy-momentum tensor appears in the field equations of the gravitational field, as we will see later. Thus, the energy and matter content alters the geometry of the spacetime, and the geometry of spacetime affects the motion of matter. Essentially, as John Wheeler once said, ``matter tells spacetime how to curve, and spacetime tells matter how to move'' \cite{wheeler2000geons}. 
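For the cosmological applications below, the energy-momentum tensor is usually taken to be that of a perfect fluid; in the conventions used in this chapter (with $\rho$ a mass density and metric signature $(-,+,+,+)$) it reads
\begin{equation}
T_{\mu \nu} = \left( \rho + \frac{P}{c^2} \right) u_\mu u_\nu + P \, g_{\mu \nu} \:,
\end{equation}
where $u^\mu$ is the fluid 4-velocity and $P$ its pressure; in the fluid rest frame this reduces to $T^{\mu}{}_{\nu} = \mathrm{diag}(-\rho c^2, P, P, P)$.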
Mathematically, general relativity is defined by two central equations. The first is a set of ten equations that give the relationship between spacetime and matter, known as the Einstein field equations \cite{einstein1915field} \begin{equation} \label{eq.1.1} \mathit{G}_{\mu \nu}= \mathfrak{R}_{\mu \nu} - \frac{1}{2} \mathfrak{R} \mathit{g}_{\mu \nu}= \frac{8 \pi \mathit{G}}{c^4} \mathit{T}_{\mu \nu} \:, \end{equation} where $\mathit{G}_{\mu \nu}$ is the Einstein tensor, $\mathfrak{R} _{\mu \nu}$ is the Ricci tensor \cite{parker1994mathtensor} that encodes information about the curvature of spacetime given by the metric $\mathit{g}_{\mu \nu}$, $\mathfrak{R}$ is the Ricci scalar, $\mathit{T}_{\mu \nu}$ is the energy-momentum tensor, $\mathit{G}$ is the universal gravitational constant, and $c$ is the speed of light. For an extensive review of tensors see \cite{sotiriou2010f}. The second is the equation of the geodesic path, which governs how the trajectories of objects evolve in curved spacetime and gives the equation of motion for freely falling particles in a specified coordinate system. In practice, this equation represents four second-order differential equations that determine $x^{\alpha}(\tau)$, given an initial position and 4-velocity, where $\tau$ is the proper time measured along the path of the particle$:$ \begin{equation} \dfrac{d^2 x^{\alpha}}{d \tau^2} + \mathrm{\Gamma}^{\alpha}_{\mu \nu} \left[ \dfrac{dx^{\mu}}{d \tau} \dfrac{dx^{\nu}}{d \tau} \right] = 0 \:, \end{equation} where $\mathrm{\Gamma}^{\alpha}_{\mu \nu}$ is known as a Christoffel symbol. One of the most spectacular successes of general relativity is its role in the birth of the standard model of big bang cosmology, or simply the standard model of cosmology (SMC). For thorough reviews of this topic, see for example \cite{guth1982fluctuations, linde1982new, albrecht1982cosmology, bardeen1983spontaneous}. 
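For reference, the Christoffel symbols entering the geodesic equation are not independent quantities but are built from the metric,
\begin{equation}
\mathrm{\Gamma}^{\alpha}_{\mu \nu} = \frac{1}{2} g^{\alpha \beta} \left( \partial_{\mu} g_{\beta \nu} + \partial_{\nu} g_{\beta \mu} - \partial_{\beta} g_{\mu \nu} \right) \:,
\end{equation}
so that, once the metric $g_{\mu \nu}$ is known from the Einstein field equations, the trajectories of freely falling particles are completely determined.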
The formulation of the SMC was based on general relativity, and it has been very successful in explaining the observable properties of the cosmos \cite{oriti2009approaches}. In principle, general relativity provided a comprehensive and coherent description of space, time, gravity, and matter at large scales \cite{davis1985evolution}. It was also capable of describing the cosmology of any given distribution of matter using the Einstein field equations. Friedmann simplified the Einstein field equations by assuming the universe is spatially homogeneous and isotropic on large scales, an assumption that is quite consistent with recent observations \cite{friedmann1922125}. Together, homogeneity and isotropy lead to the cosmological principle, which states that on sufficiently large scales the universe is homogeneous and isotropic; essentially, this means that all spatial positions in the universe are equivalent. The cosmological principle then restricts the metric to the Friedmann-Robertson-Walker (FRW) form \cite{friedman1922krummung, friedmann1924moglichkeit, robertson1935kinematics, robertson1936kinematicsII, robertson1936kinematicsIII, walker1937milne} \begin{equation} ds^2 = - dt^2 + R(t)^2 \left[ \frac{dr^2}{1-kr^2} + r^2 (d\theta^2 + \sin^2 \theta d\phi^2) \right] \:, \end{equation} in which $(r, \theta, \phi)$ are the comoving coordinates, and the dimensionless parameter $R(t) \equiv R_c(t) / R_c$ is called the scale factor of the universe, which characterizes the size of the universe and hence its evolution. Here $R_c(t)$ is the radius of curvature of the 3-dimensional space at time $t$ and, by convention, $R_c$ is the radius of curvature at the present time $t_0$. If this metric is rewritten in terms of the conformal time $\tau$ instead of the proper cosmic time $t$ as measured by a comoving observer, it reduces to \begin{equation} ds^2 = R(t)^2 \left[ - d \tau^2 + \frac{dr^2}{1-kr^2} + r^2 (d\theta^2 + \sin^2 \theta d\phi^2) \right] \:. 
\end{equation} Here the dimensionless parameter $k \equiv \kappa / R_c$ determines the curvature of the space, where the number $\kappa$ is called the curvature constant, which takes only the discrete values $+1, 0, -1$, distinguishing between the following spatial geometries$:$ \begin{itemize} \item \text{\boldmath $\kappa= +1$}, corresponds to a finite universe with spatially closed geometry, positively curved like a sphere. \item \text{\boldmath $\kappa = 0$}, corresponds to an infinite universe with spatially flat geometry, uncurved like a plane. \item \text{\boldmath $\kappa = -1$}, corresponds to an infinite universe with spatially open geometry, negatively curved like a hyperboloid. \end{itemize} By inserting the FRW metric into the Einstein field equations, we obtain a closed system of Friedmann equations, which describe the evolution of the scale factor$:$ \begin{align} \left( \frac{\dot{R}}{R} \right)^2 &= \frac{8 \pi G}{3} \rho - \frac{k c^2}{R^2} \:, \\ \frac{\ddot{R}}{R} &= - \frac{4 \pi G}{3} \left( \rho + \frac{3 P}{c^2} \right) \:, \end{align} where $\rho$ is the energy density, and $P$ is the pressure of the universe. Most solutions to the Friedmann equations predict an expanding or a contracting universe, depending on initial conditions such as the total amount of matter in the universe. The expanding-universe solutions lead to Hubble's law \cite{hubble1929relation} \begin{equation} v = H d \:, \end{equation} where $v$ is the recession velocity, $d$ is the proper distance, that is, the distance to an object as measured on a surface of constant time, and $H$ is the rate of expansion of the universe, which is known as the Hubble function \begin{equation} H \equiv \frac{\dot{R}}{R} \:. \end{equation} 
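The age of the universe quoted in this section can be checked at the order-of-magnitude level: the Hubble time $1/H_0$ sets the characteristic age scale of an expanding universe. A minimal numerical sketch, assuming $H_0 = 70 \ \text{km} \ \text{s}^{-1} \ \text{Mpc}^{-1}$ (the value discussed in this section):

```python
# Hubble time 1/H0 as a rough estimate of the age of the universe,
# assuming H0 = 70 km/s/Mpc.
MPC_IN_KM = 3.0857e19   # one megaparsec in kilometers
YEAR_IN_S = 3.156e7     # one year in seconds

H0 = 70.0                          # km / s / Mpc
H0_per_s = H0 / MPC_IN_KM          # convert H0 to units of 1/s
hubble_time_gyr = 1.0 / H0_per_s / YEAR_IN_S / 1e9

print(f"1/H0 = {hubble_time_gyr:.1f} Gyr")  # roughly 14 Gyr
```

The exact age depends on the expansion history through the density parameters, but for the concordance values it lies close to $1/H_0$, consistent with the $14.0 \pm 1.4$ billion years quoted here.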
This equation is important since it relates the empirical parameter $H$ discovered by Hubble to the expansion parameter of the Friedmann equations. The present-day expansion rate of the universe is measured by the current value of the Hubble function, known as the Hubble constant $H_0$. Hubble initially overestimated its numerical value, obtaining $500 \ \text{km} \ \text{s}^{-1} \ \text{Mpc}^{-1}$. The best current estimate, combining the results of different research groups, gives a value around $70 \pm 7 \ \text{km} \ \text{s}^{-1} \ \text{Mpc}^{-1}$ \cite{ryden2017introduction}. Although the initial value of the Hubble constant was much larger than the modern one, in either case it is large enough to make the expanding Friedmann solutions of the Einstein field equations inconsistent with the belief, held at the time, that the universe is static and positively curved. The current value of the Hubble constant puts the age of the observable universe at about $14.0 \pm 1.4$ billion years. The discovery of cosmic expansion thus implies that the age of the universe is not infinite. Note that a static universe with a positive energy density $\rho$ would require, by the Friedmann equations, the surprising result that the pressure of matter $P$ is negative. Einstein accounted for this by introducing the so-called cosmological constant $\mathrm{\Lambda}$ \cite{peebles2003cosmological} into the original field equations, allowing a static and positively curved solution \cite{riess1998observational, perlmutter1999measurements}. The modified field equations with the cosmological constant take the form \begin{equation} G_{\mu \nu} + \mathrm{\Lambda} g_{\mu \nu}= \mathfrak{R}_{\mu \nu} - \frac{1}{2} \mathfrak{R} g_{\mu \nu} + \mathrm{\Lambda} g_{\mu \nu}= \frac{8 \pi G}{c^4} T_{\mu \nu} \:. 
\end{equation} With the additional term of the cosmological constant in the Einstein field equations, the Friedmann equations become \begin{align} \left( \frac{\dot{R}}{R} \right)^2 &= \frac{8 \pi G}{3} \rho - \frac{k c^2}{R^2(t)} + \frac{\mathrm{\Lambda} c^2}{3} \:, \\ \frac{\ddot{R}}{R} &= - \frac{4 \pi G}{3} \left( \rho + \frac{3 P}{c^2} \right) + \frac{\mathrm{\Lambda} c^2}{3} \:. \end{align} In addition, conservation of the energy-momentum tensor yields a third equation, which turns out to be dependent on the two Friedmann equations$:$ \begin{equation} \dot{\rho} + 3 \frac{\dot{R}}{R} \left(\rho + \frac{P}{c^2} \right) =0 \:. \end{equation} This is the fluid (continuity) equation; to close the system, one needs an equation of state relating the pressure $P$ to the density $\rho$. In the most common cases, the cosmological fluid is taken to be a mixture of non-interacting ideal fluids, each obeying \begin{equation} P_i = \omega_i \rho_i c^2 \:, \end{equation} where $P_i$ is the partial pressure and $\rho_i$ is the partial energy density of each component of the cosmic fluid. The factor $\omega_i$ is a constant whose value characterizes the kind of fluid: for example, $\omega = 0$ gives a pressureless fluid, $\omega = 1/3$ corresponds to radiation, and negative values represent the negative pressure that could be caused by some other form of perfect fluid. Indeed, the additional term with the cosmological constant $\mathrm{\Lambda}$ in the Einstein field equations can be interpreted as the source of such a form of matter, which is known as dark energy. 
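Inserting the equation of state $P_i = \omega_i \rho_i c^2$ into the fluid equation and integrating gives the scaling of each component with the scale factor,
\begin{equation}
\rho_i \propto R^{-3(1+\omega_i)} \:,
\end{equation}
so pressureless matter ($\omega = 0$) dilutes as $R^{-3}$, radiation ($\omega = 1/3$) as $R^{-4}$, while a cosmological constant ($\omega = -1$) corresponds to a constant energy density.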
As a final note on the Friedmann equations, dividing the first equation by $H^2$ gives the sum of the energy densities of the various matter components of the universe \begin{align} 1 &= \mathrm{\Omega}_i + \mathrm{\Omega}_{\mathrm{\Lambda}} + \mathrm{\Omega}_k \:, \\ \mathrm{\Omega}_i &= \frac{8 \pi G}{3 H^2} \rho_i \:, \quad \mathrm{\Omega}_k = \frac{- k c^2}{R^2 H^2} \:, \quad \mathrm{\Omega}_\mathrm{\Lambda} = \frac{\mathrm{\Lambda} c^2}{3 H^2} \:, \end{align} where $\mathrm{\Omega}$ is the energy density divided by the critical density $\rho_c = \frac{3 H^2}{8 \pi G}$, the density at which the universe is flat, so that $\mathrm{\Omega}=1$. General relativity and the associated SMC provide the most successful description of space, time, and gravity. This is supported by a number of experimental confirmations that agree with the theoretical predictions. The announcement in 2016 of the first direct detection of a gravitational-wave signal generated by the merger of black holes provided extraordinary evidence in support of general relativity and the SMC \cite{abbott2016supplement}. In addition to the existence of gravitational waves, general relativity has been confirmed by other tests, including the observations of the gravitational deflection of light, the anomalous advance of the perihelion of Mercury, and the gravitational redshift \cite{treschman2019gravitational}. \section{Standard model of particle physics} \label{Sec.1.2} The SMPP is a very successful theory that is considered our best current description of the known elementary particles of nature and their interactions. A wealth of canonical literature is available, see for example \cite{peskin2018introduction, sterman1993introduction, ellis2003qcd, burgess2006standard}. The model provides a quantitative description of three of the four fundamental interactions in nature, \ie the electromagnetic, weak, and strong interactions. 
The electromagnetic and weak interactions are unified in the electroweak theory, described by the model of Glashow, Salam, and Weinberg (GSW), while quantum chromodynamics (QCD) is the theory that describes the strong interaction. The fourth fundamental interaction is gravity, which is best described by general relativity, as we already discussed in the previous section. The incorporation of the gravitational force into the SMPP framework is still an unresolved challenge \cite{salam1980gauge}. The SMPP classifies the elementary particles into two main categories, \ie fermions and bosons. In this context, the ordinary matter\footnote{The terms ordinary matter and baryonic matter will be used to define the kind of matter described by the standard model of particle physics.} in the universe is basically made up of fermions that are held together by the fundamental forces through the exchange of bosons, which mediate the forces between these fermions. The fermions are classified as particles with half-integer spins that obey the Pauli exclusion principle. They can be further subdivided into two basic classes of elementary particles, \ie quarks and leptons, depending on which kind of interaction they are subject to. Both classes consist of six particles, grouped into three doublets, called families or generations. The three quark doublets are$:$ up ($u$) and down ($d$), charm ($c$) and strange ($s$), top ($t$) and bottom ($b$). The three lepton doublets consist of the electron ($e$), muon ($\mu$), and tau ($\tau$), each with an associated neutrino. There are also three additional doublets for each class, composed of the lepton and quark antiparticles, which have the same mass as their partners but all quantum numbers opposite. On the other hand, bosons are classified as particles with integer spins that do not obey the Pauli exclusion principle. 
The photon ($\gamma$) is the gauge boson that mediates the electromagnetic interactions, and there are three such bosons, \ie $W^+$, $W^-$, $Z$ responsible for the weak interactions, while the strong interactions are mediated by 8 gluons ($G$) \cite{shifman1979qcd}. In addition, there is the Higgs boson \cite{atlas2012observation, bezrukov2012higgs} that gives the mass to all the standard model particles with which the field interacts through the Higgs mechanism \cite{higgs1966spontaneous}. A list of all the standard model particles and some of their properties are presented in table \ref{table1}.\noindent \begin{table}[t!] \centering{ \scalebox{0.755}{ \begin{subtable}{1.5 \textwidth} \begin{tabular}{|c||c|c|c|c||c|c|c|c|} \toprule \hline \multicolumn{9}{|c|}{Fermions} \\ \hline \multicolumn{5}{|c||}{Quarks} & \multicolumn{4}{c|}{Leptons} \\ \hline \multirow{1}{*} { Generation} & Name & Symbol & Charge [$e$] & Mass [GeV] & Name & Symbol & Charge [$e$] & Mass [GeV] \\ \cline{1-9} \multirow{2}{*} {1st} & up & u & $+2/3$ & $2.2^{+0.6}_{-0.4} \times 10^{-3}$& electron & e & $-1$ & $0.511 \times 10^{-3}$ \\ & down & d & $-1/3$ & $4.7^{+0.5}_{-0.4} \times 10^{-3}$ & $e$-neutrino & $\nu_e$ & $0$ & $< 2 \times 10^{-9}$ \\ \cline{1-9} \multirow{2}{*} {2nd} & charm & c & $+2/3$ & $1.27 \pm 0.03$& muon & $\mu$ & $-1$ & $0.106$ \\ & strange & s & $-1/3$ & $96^{+8}_{-4} \times 10^{-3}$& $\mu$-neutrino& $\nu_{\mu}$ & $0$ & $< 0.19 \times 10^{-3}$ \\ \cline{1-9} \multirow{2}{*} {3rd} & top & t & $+2/3$ & $174.2 \pm 1.4$& tau & $\tau$ & $-1$ & $1.777$ \\ & bottom & b & $-1/3$ & $4.18 \pm 0.04$ & $\tau$-neutrino & $\nu_{\tau}$ & $0$ & $< 0.0182$ \\ \hline \end{tabular} \end{subtable}}} \\ \centering{\scalebox{0.8586}{ \begin{subtable}{1.5 \textwidth} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multicolumn{8}{|c|}{Bosons} \\ \hline Name & Symbol & Charge [$e$]& Spin & Mass [GeV] & Interactions & Range & Interaction with \\ \hline Photon & $\gamma$ & $0$ & $+1$ & $0$ & 
Electromagnetism& $\infty$& Charge \\ \cline{1-8} W-boson &$W^{\pm}$ & $\pm 1$ & $+1$ & 80.4 & \multirow{2}{*} {Weak} & \multirow{2}{*} {$10^{-18}$} & weak isospin \\ Z-boson&$Z^0$ & $0$ & $+1$ & 91.2 & && + hypercharge \\ \cline{1-8} Gluons &$G_{i=1, \dots, 8}$ & $0$ & $+1$ & $0$ & Strong & $10^{-15}$& color \\ \cline{1-8} Higgs &$H$ & $0$ & $0$ & 125 &&& \\ \hline \bottomrule \end{tabular} \end{subtable}}} \caption{Overview of the standard model particles and some of their properties.} \label{table1} \end{table} The six quarks in the standard model are classified by their so-called flavors. The three quark generations are ordered by increasing mass from the first to the third generation. The up-type quark in each generation has an electric charge of $+2/3$, while each down-type quark carries an electric charge of $-1/3$. Quarks are the only fundamental particles that experience all four fundamental interactions. They participate in strong interactions because of their color charges, which come in three kinds, \ie red (r), green (g), and blue (b). They have never been observed in nature as free states, but only confined in bound states called hadrons \cite{gell2010quarks}. Leptons likewise come in six different flavors, and their three generations are similarly ordered by increasing mass. The electron-type leptons carry electric charge $-1$, while the associated neutrinos are electrically neutral. Unlike quarks, leptons do not carry color charge and do not participate in strong interactions. In turn, every generation has two chiral manifestations, a left-handed and a right-handed one. Only left-handed particles can participate in weak interactions via $W^{\pm}$ bosons \cite{commins1983weak}. Both left-handed particles in a given quark or lepton generation are assigned a so-called weak-isospin quantum number, identifying them as partners of each other with respect to the weak interaction. 
One important remark should be mentioned here$:$ only fermions from the first generation build up the observable stable matter in the universe. In contrast, the particles of the other two generations, and compounds of them, always decay into lighter particles from lower generations. In fact, the role of these last two generations in describing the visible universe is not yet clearly understood. Mathematically, the SMPP can be defined as a renormalizable quantum field theory (QFT) based on the following local gauge symmetry \begin{equation} G_{\text{SM}} = SU(3)_C \otimes SU(2)_L \otimes U(1)_Y \:. \end{equation} Each of these gauge groups models a different one of the three fundamental forces incorporated into the SM framework$:$ the electromagnetic, the weak, and the strong forces. The electromagnetic interaction of the standard model particles is described by the theory of quantum electrodynamics (QED), which is a gauge theory based on a $U(1)_{\text{em}}$ symmetry group. The electroweak theory was then formulated to unify the electromagnetic and weak interactions between quarks and leptons into a single framework able to describe both interactions, based on the electroweak gauge group $SU(2)_L \otimes U(1)_Y$. The $SU(2)_L$ group refers to the weak isospin charge $I$, while $U(1)_Y$ refers to the weak hypercharge $Y$ \cite{nishijima1955charge, gell1956interpretation}. The description of the electroweak interactions then requires three massive gauge bosons, $W^{\pm}$ and $Z$, in addition to the photon $\gamma$. In contrast, the theory of quantum chromodynamics describes the strong interaction based on the gauge group of local $SU(3)_C$ transformations of the quark-color fields. QCD describes the strong interaction between quarks, which arises from the exchange of the eight massless gluons $G_{i=1, \dots, 8}$ that couple to the color charge of the fermions. 
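The electric charges listed in table \ref{table1} follow from the weak isospin and hypercharge assignments through the Gell-Mann--Nishijima relation (in the normalization where
\begin{equation}
Q = I_3 + \frac{Y}{2} \:;
\end{equation}
some texts absorb the factor of $2$ into $Y$). For example, the left-handed up quark has $I_3 = +1/2$ and $Y = +1/3$, giving $Q = +2/3$, while the left-handed electron has $I_3 = -1/2$ and $Y = -1$, giving $Q = -1$.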
For further convenience, the standard model Lagrangian can be written as the sum of four parts$:$ the gauge interactions, the fermion interactions, the Higgs interaction, and the Yukawa interactions. Each of these four terms refers to one of the interaction mechanisms between the particles of the standard model. Therefore, the most general Lagrangian density of the standard model can be written as \begin{align} \L_{\text{SM}} & = \L_{\text{Gauge}} + \L_{\text{Dirac}} +\L_{\text{Higgs}} + \L_{\text{Yukawa}} \:. \end{align} The first term represents the kinetic terms for the gauge bosons and describes the interactions between them \begin{equation} \L_{\text{Gauge}} = - \frac{1}{4} G^a_{\mu \nu} G_a^{\mu \nu} - \frac{1}{4} W^i_{\mu \nu} W_i^{\mu \nu} - \frac{1}{4} B_{\mu \nu} B^{\mu \nu} \:, \end{equation} where $G^a_{\mu}(a=1, \dots,8)$ are the gluons of the strong interactions, and $W^i_{\mu}(i=1,2,3)$ and $B_{\mu}$ are the gauge bosons of the electroweak interactions. The covariant field strength tensors are defined as follows \begin{align} G^a_{\mu \nu} &= \partial_\mu G^a_\nu - \partial_\nu G^a_\mu - g_s f^{abc} G^b_\mu G^c_\nu \:, \\ W^i_{\mu \nu} &= \partial_\mu W^i_\nu - \partial_\nu W^i_\mu - g \epsilon^{ijk} W^j_\mu W^k_\nu \:, \\ B_{\mu \nu} &= \partial_\mu B_\nu - \partial_\nu B_\mu \:, \end{align} where $g_s$ is the strong interaction coupling strength, and the structure constants $f^{abc}$ are defined by $[ \tau^a,\tau^b ] = i f^{abc} \tau^c$, where $\tau^a$ are the generators of the $SU(3)_C$ group and $a, b, c$ run from $1$ to $8$. Similarly, $g$ is the weak interaction coupling strength, and the structure constants $\epsilon^{abc}$ are defined by $[ \lambda^a,\lambda^b ] = i \epsilon^{abc} \lambda^c$, where $\lambda^a$ are the generators of the $SU(2)_L$ group and $a, b, c$ run from $1$ to $3$. 
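The commutation relations defining the structure constants can be checked explicitly in the fundamental representation of $SU(2)_L$, where the generators are $\lambda^a = \sigma^a/2$ with $\sigma^a$ the Pauli matrices. A minimal numerical sketch (using NumPy; the indices here run over $0,1,2$ rather than $1,2,3$):

```python
import numpy as np

# Pauli matrices; lambda^a = sigma^a / 2 are the SU(2) generators
# in the fundamental representation.
sigma = [
    np.array([[0, 1], [1, 0]], dtype=complex),
    np.array([[0, -1j], [1j, 0]], dtype=complex),
    np.array([[1, 0], [0, -1]], dtype=complex),
]
lam = [s / 2 for s in sigma]

def levi_civita(a, b, c):
    """Totally antisymmetric symbol epsilon^{abc} for indices 0, 1, 2."""
    return (a - b) * (b - c) * (c - a) / 2

# Check [lambda^a, lambda^b] = i * eps^{abc} * lambda^c for all pairs.
max_err = 0.0
for a in range(3):
    for b in range(3):
        comm = lam[a] @ lam[b] - lam[b] @ lam[a]
        rhs = sum(1j * levi_civita(a, b, c) * lam[c] for c in range(3))
        max_err = max(max_err, float(np.max(np.abs(comm - rhs))))

print(f"max deviation: {max_err:.2e}")  # numerically zero
```

The same check can be repeated for $SU(3)_C$ with the eight Gell-Mann matrices and the corresponding structure constants $f^{abc}$.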
The second term contains the kinetic terms for the fermions, which describe the fermion interactions with the gauge bosons as well as their interactions with each other \begin{equation} \L_{\text{Dirac}} = \sum_f i \bar{\psi}_i \slashed{D} \psi_i + h.c. \:. \end{equation} The summation runs over all of the fermions, where $\psi_i$ and $\bar{\psi}_i$ describe the fermion field and the conjugate field. The covariant derivative $\slashed{D}$ features all the gauge bosons without self-interactions. The beauty of this term is that it contains the description of the electromagnetic, weak, and strong interactions, while the covariant derivative distinguishes between them. The $h.c.$ term represents the hermitian conjugate of the Dirac interaction. The third part of the Lagrangian is the Higgs part \begin{equation} \L_{\text{Higgs}} = (D^\mu \phi)^\dagger (D_\mu \phi) - V(\phi^\dagger \phi) \:, \end{equation} where $\phi$ represents the Higgs field and the Higgs potential is given by \begin{equation} V(\phi^\dagger \phi) = \mu^2 \phi^\dagger \phi + \frac{\lambda}{2} (\phi^\dagger \phi)^2 \:. \end{equation} This part contains only the Higgs boson and the electroweak gauge bosons. The first term describes how the gauge bosons couple to the Higgs field, generating their masses, while the second term represents the potential of the Higgs field. The last part describes the Yukawa interactions \cite{yukawa1935interaction}, and consists of the most general possible couplings of the Higgs field to the fermion fields$:$ \begin{equation} \L_{\text{Yukawa}} = - y_{ij} \bar{\psi}_{Li} \phi \psi_{Rj} + h.c. \:, \end{equation} where $y_{ij}$ is the dimensionless Yukawa coupling matrix. This part describes how the fermions couple to the Higgs field $\phi$ and is thought to be responsible for giving fermions their masses when electroweak symmetry breaking occurs \cite{englert1964broken, higgs1964broken, higgs1964brokenn}. 
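For $\mu^2 < 0$, the Higgs potential above is minimized away from the origin; setting its derivative with respect to $\phi^\dagger \phi$ to zero gives
\begin{equation}
\mu^2 + \lambda \, \phi^\dagger \phi = 0 \quad \Longrightarrow \quad \langle \phi^\dagger \phi \rangle = - \frac{\mu^2}{\lambda} \equiv \frac{v^2}{2} \:,
\end{equation}
and expanding the Yukawa term around this vacuum expectation value $v$ produces fermion mass terms with $m_f = y_f v / \sqrt{2}$.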
The character of neutrino masses is not yet known, and the $h.c.$ term is the hermitian conjugate of the Yukawa interaction, which gives mass to the antimatter particles. Up to this point, we have constructed the standard model Lagrangian based on the mechanisms of interaction. Schematically, it may also be useful to divide our approach to studying the model Lagrangian into two sectors$:$ \begin{equation} \L_{\text{SM}} = \L_{\text{EW}} + \L_{\text{QCD}} \:. \end{equation} The first part is the electroweak sector of the standard model, which is the subset of terms consisting of the $SU(2)_L$ and $U(1)_Y$ gauge fields as well as all the fermions with non-zero charges under these groups. The EW Lagrangian is then of the general form \begin{equation} \L_{\text{EW}} = - \frac{1}{4} W^i_{\mu \nu} W_i^{\mu \nu} - \frac{1}{4} B_{\mu \nu} B^{\mu \nu} + \sum_f \bar{l}_f (i \slashed{D}_{\text{EW}} - m_f) l_f \:. \end{equation} The behavior of the electroweak Lagrangian is defined by the EW covariant derivative$:$ \begin{equation} \slashed{D}_{\text{EW}} = \gamma^\mu (\partial_\mu + i g \lambda^a W^a_\mu + i g' \frac{Y}{2} B_\mu) \:, \end{equation} where $W^a_\mu$ and $B_\mu$ are the three gauge fields of the $SU(2)_L$ group and the gauge field of the $U(1)_Y$ group, respectively. Here $g'$ is another coupling constant, and $Y$ is the hypercharge operator, the generator of the $U(1)_Y$ group. The strong interaction in the SMPP is described by the quantum chromodynamics sector, which is composed of the $SU(3)$ gauge fields and the fields that are non-singlets under this gauge group. Consequently, the most general gauge-invariant Lagrangian of QCD reads \begin{equation} \L_{\text{QCD}} = - \frac{1}{4} G^a_{\mu \nu} G_a^{\mu \nu} + \sum_f \bar{q}_f (i \slashed{D}_s - m_f) q_f + h.c. + \L_{\theta} \:. \end{equation} The summation runs over all the quark fields $q_f$, where $f$ is the quark flavor, and $m_f$ is the corresponding mass.
The QCD sector is characterized by the QCD covariant derivative $\slashed{D}_s$, which contains the coupling between the quarks and the gauge fields and is defined as \begin{equation} \slashed{D}_s = \gamma^\mu (\partial_\mu + i g_s \tau^a G^a_\mu) \:, \end{equation} where $G^a_\mu$ represent the $8$ gluon fields of the strong interaction. As we discussed before, the term $h.c.$ is the hermitian conjugate of these sectors, and it is required to complete the gauge kinetic term. Lastly, the QCD gauge invariance allows for one additional term, which we have labeled the $\theta$-term \cite{baker2006improved}. This term has the following form \begin{equation} \L_\theta = \bar{\theta} \frac{g_s^2}{64 \pi^2} \epsilon^{\mu \nu \alpha \beta} G^a_{\mu \nu} G^a_{\alpha \beta} \:. \end{equation} Here $\epsilon^{\mu \nu \alpha \beta}$ is the totally antisymmetric tensor in four dimensions. The $\theta$-term arises as a consequence of the anomalous breaking of the axial $U(1)_A$ symmetry in the QCD Lagrangian. Adding this term to the Lagrangian leads to another fundamental problem, the problem of strong CP (charge-parity) violation \cite{hooft1994symmetry, hooft1994computation}. The axion \cite{weinberg1978new, wilczek1978problem} is a very promising solution to this problem and, at the same time, a possible dark matter candidate. We will discuss this problem in more detail in chapter \ref{ch3} to understand why adding the $\theta$-term to the QCD Lagrangian is necessary and how axions can give rise to solutions to both the strong CP problem and the dark matter problem. The SMPP is currently well accepted as the best description of nature at microscopic scales. Within the theoretical framework of the SMPP, a wide range of phenomena can be described to an impressive degree of accuracy, and its predictions have been verified experimentally to extraordinary accuracy.
Perhaps the most significant success of the SMPP is the theoretical prediction of the Higgs boson, almost $50$ years before its experimental detection in $2012$ by the ATLAS and CMS collaborations at the LHC \cite{atlas2012observation, chatrchyan2012observation}. Other successes of the model include the predictions of the $W$ and $Z$ bosons, the gluon, and the top and charm quarks, all before they had been observed. \section{Problems with the standard model} As a matter of fact, most of our information about the structure of our universe is based on the SMPP together with the SMC, and for simplicity, let us call the two models together the standard model (SM). Despite all these successes, the standard model does not provide a complete picture of nature, and it does not answer all questions. There are a number of theoretical and experimental reasons to believe that the standard model is not yet complete. A set of the major unsolved problems that cannot be addressed by the standard model is listed below. \begin{itemize} \item {\bf Gravity puzzle.} The SMPP is extremely successful in describing the electromagnetic, weak, and strong interactions, while gravity, the last fundamental interaction, is treated only classically by general relativity and is not yet incorporated in the SMPP. There is a hypothesis that gravitational interactions are mediated by a massless spin-2 particle called the graviton, but this particle has not yet been observed due to the relative weakness of gravitation in comparison with the other fundamental forces. The possibility of a theory for massive gravitons is commonly referred to as Massive Gravity, but for the moment it cannot be promoted into a renormalizable quantum theory \cite{schmidt2013classically}. For these reasons, both gravity and the graviton are not included in the SMPP.
Furthermore, there is still a possibility that general relativity does not hold precisely on very large scales or in very strong gravitational fields. In any case, general relativity breaks down at the big bang, where quantum gravity effects become dominant. According to a naive interpretation of general relativity that ignores quantum mechanics, the initial state of the universe, at the beginning of the big bang, was a singularity. Seeking reasonable explanations for these issues might be reason enough to look for new physics beyond general relativity \cite{rovelli2004quantum, oriti2009approaches}. \item {\bf The gauge hierarchy problem.} The SMPP cannot explain the large differences in the coupling constants of the forces at low energy scales. In particular, there is no explanation of the mystery of why gravity is so much weaker than the other forces. In the same context, the mass of the Higgs boson has been measured as 125 GeV \cite{atlas2012observation, bezrukov2012higgs}, while quantum corrections calculated within the standard model drive this mass to enormously larger values. In a nutshell, the gauge hierarchy problem is the question of why the physical Higgs boson mass is so small. It is possible to restore these values to the proper ones through fine-tuning, but this is considered to be unnatural. This leads many to believe that there must be a better solution, but the problem is not yet settled \cite{gildener1976gauge, hatanaka1998gauge}. \item {\bf Origin of mass.} The Higgs mechanism is introduced in the SMPP as the mechanism that generates the particle masses through a Yukawa-type interaction. The Higgs boson is the first fundamental scalar particle observed in nature. It gives masses to the fermions and the $W$ and $Z$ bosons. However, the standard model does not tell us why this happens.
It is also still not clear whether this particle is fundamental or composite, or if there are other Higgs bosons \cite{wilczek2012origins}. \item {\bf Neutrino mass.} The SMPP predicts neutrinos to be massless \cite{pontecorvo1968neutrino}. However, the experimental observation of neutrino oscillations implies that neutrinos are massive particles \cite{fukuda1998measurements}. An extension of the standard model containing a massive right-handed, sterile neutrino can solve this problem. In such a model, the standard model neutrinos acquire mass, and the so-called seesaw mechanism \cite{yanagida1979horizontal, gell1979supergravity, minkowski1977mu} explains the smallness of their masses \cite{mohapatra1980neutrino}. \item {\bf Flavor problem.} The flavor problem \cite{aranda2000u} concerns the questions of why the SMPP contains precisely three copies of the fermions, and why the masses of these fermions are so hierarchical rather than of the same order. For example, it is not clear why the mass of the electron is about $0.511 \ \text{MeV}$, while the top quark has a mass of around $173 \ \text{GeV}$. \item {\bf Matter-antimatter asymmetry.} According to the SMC, it is generally assumed that equal amounts of matter and antimatter should have been created after the big bang. However, the visible universe today appears to consist almost entirely of matter rather than antimatter. One of the current challenges in physics is to figure out what happened to cause this asymmetry between matter and antimatter \cite{ryden2017introduction}. A possible explanation could come from the study of CP-violation, which addresses a very fundamental question$:$ are the laws of physics the same for matter and antimatter, or are matter and antimatter intrinsically different$?$ The answer to this question may hold the key to solving the mystery of the matter-dominated universe.
\item {\bf The strong CP problem.} At the end of the previous section, we mentioned that the strong CP problem results from the $\theta$-term in the QCD Lagrangian. This term contains the vacuum angle $\bar{\theta}$ with no apparent preferred value, while current experiments set a strong limit on its value$;$ it must be $\bar{\theta} \lesssim 10^{-10}$. The problem of why the value of $\bar{\theta}$ is so small is known as the strong CP problem. It seems unlikely that the angle would be so close to zero by pure chance$;$ there should be a deeper explanation for this behavior \cite{hooft1994symmetry, hooft1994computation}. \item {\bf Inflation.} In cosmology, the best way to explain the puzzles of the horizon problem, the flatness problem, and the origin of perturbations is to extend the SMC with the theory of inflation \cite{ryden2017introduction}. The theory assumes that in the first fraction of a second after the big bang, the universe went through a stage of extremely rapid expansion called inflation. The theory usually requires adding a new heavy particle to the SMPP, driving an accelerated expansion of the universe at its very early stages. This new particle is called the inflaton and is supposed to fill the whole universe, driving the dynamics of its expansion, before eventually producing the standard model particles that our world is made of today. Indeed, finding the correct theoretical description of inflation requires new physics BSM, and it would not be easy to understand otherwise. \item {\bf Dark matter and dark energy.} Because of the unexpected discovery that the expansion of our universe is not slowing down but instead speeding up, it became clear that our universe contains about $4.9\%$ ordinary matter, $26.8\%$ Dark Matter (DM), and $68.3\%$ Dark Energy (DE) \cite{aghanim2018planck}.
However, the nature of dark matter and dark energy, as well as the cause of the accelerated expansion of our universe, are still unknown \cite{bertone2005particle}. This problem represents one of the major unresolved issues in contemporary physics. \end{itemize} The standard model is unable to answer such questions, and these problems remain an open area for current research, motivating us to continue looking for new physics beyond the standard model. At the time of writing this thesis, no evidence has been found for physics beyond the standard model. Nonetheless, the search for physics beyond the standard model remains an important guideline for new ideas proposed to answer these questions. In the following section, we discuss some of the approaches that are explored in this direction. \section{Models beyond the standard model} Despite the criticism described above, the incredible accuracy of the SMPP and the SMC suggests that the standard model is simply incomplete rather than incorrect. Perhaps these models are only different effective parts or phases of a bigger picture that includes new physics hidden deep in the subatomic world or in the dark recesses of the universe. This is why a first step in building a new model that could address some of the problems is to verify that it agrees with the predictions of the standard model, and why many new models aim to extend the standard model rather than to provide an entirely new approach. Such models are typically called beyond standard model (BSM) theories. There is a plethora of models that address the standard model problems in many different ways, and much effort in theoretical physics is now devoted to introducing new approaches beyond the standard model to solve some of its shortcomings. Hence we need to find an extension that tackles some or maybe all of the issues mentioned above in order to generalize the standard model. Some of these approaches are enumerated below.
\begin{itemize} \item {\bf Supersymmetric theories.} One of the most popular extensions of the standard model is supersymmetry, a fundamental symmetry between fermions and bosons that introduces a set of new superpartners with opposite spin statistics for each standard model particle \cite{wess1989supergauge, wess1992supersymmetry}. While the bosons have a positive contribution to the total vacuum energy, the fermionic contribution is negative. In non-supersymmetric theories, it is unreasonable to expect that the fermions would exactly cancel the contributions of the bosons to give this small number. In supersymmetry, it is posited that every particle and its superpartner are degenerate in mass. Therefore, in addition to the usual quantum corrections to the Higgs mass from standard model particles, there would be a similar contribution to the Higgs mass with the opposite sign and the same magnitude from the superpartners. These two terms exactly cancel in the limit of exact supersymmetry, and there is thus no hierarchy problem. Breaking of this symmetry at the electroweak scale could theoretically explain the small number. So far, though, no such symmetry has been found in nature. \item {\bf Extra-dimensional theories.} Another exciting possible way to extend the standard model is by adding extra spatial dimensions to the common four-dimensional spacetime \cite{kaluza1921unitatsproblem, klein1991quantum}. These theories often reside at high energies and will, therefore, manifest as effective theories at low energy scales. From the common four-dimensional point of view, particles that propagate through the extra dimensions are effectively perceived as towers of heavy particles. The extra dimensions can be warped and provide an alternative solution to the hierarchy problem. In these models, the weak hierarchy is induced simply because the Planck scale is red-shifted to the weak scale by the warp factor.
In general, these models explain the weakness of gravity by diluting gravity in a large bulk volume, or by localizing the graviton away from the standard model. \item {\bf Grand unified theories.} This proposal to extend the standard model is exquisite because it attempts to unify the three gauge couplings of the standard model into a single one, so that the strong and electroweak forces unify into a single gauge theory \cite{ross1984grand}. This unification must necessarily take place at some high energy scale of order $10^{16} \ \text{GeV}$, called the grand unified scale, at which all three couplings become approximately equal. The central feature of these theories is that above the grand unified scale, the three gauge symmetries of the standard model unify into one single gauge symmetry with a simple gauge group and just one coupling constant. Below the grand unified scale, the symmetry is spontaneously broken to the standard model symmetries. Popular choices for the unifying group are the special unitary group in five dimensions $SU(5)$ and the special orthogonal group in ten dimensions $SO(10)$. Such theories leave most of the open questions above unanswered, except that they reduce the number of independent parameters, since there is only one gauge coupling at large energies. Unfortunately, with our present precision understanding of the gauge couplings and spectrum of the standard model, the running of the three gauge couplings does not unify at a single coupling at higher energies$;$ they cross each other at different energies, and further model building is required in order to make this work \cite{allanach2016beyond}. \item {\bf Theories of everything.} The idea of unifying the various forces of nature is not limited to the unification of the strong interaction with the electroweak one but extends to include gravity as well.
Finding a theory of everything that thoroughly explains and links together all known physical phenomena is presently considered one of the most elusive goals of theoretical physics \cite{ellis1986superstring}. In practical terms, the immediate goal in this regard is to develop a theory that reconciles quantum mechanics and gravity, that is, to build a quantum theory of gravity. Additional features, such as overcoming conceptual flaws in either theory or accurate prediction of particle masses, would be desired. The challenges in putting together such a theory are not just conceptual$;$ they include the experimental aspects of the very high energies needed to probe exotic realms. Several ideas and principles have been proposed in this direction in the attempts to unify the different interactions. String theory \cite{polchinski1998string} is one such proposal, and many theoretical physicists think that such theories are the next theoretical step toward a true theory of everything. Theories such as loop quantum gravity are also thought by some to be promising candidates for the mathematical unification of quantum mechanics and gravity, requiring less drastic changes to existing theories. However, recent work places stringent limits on the putative effects of quantum gravity on the speed of light and disfavors some current models of quantum gravity \cite{abdo2009limit}. \end{itemize} \section{Motivation for an axion dark matter search} At the present time, the dark matter mystery is one of the greatest unsolved problems common to the SMPP and the SMC. About $85\%$ of the universe's gravitating matter is nonluminous, and its nature and distribution are, for the most part, unknown \cite{komatsu2011seven}. Elucidating these issues is one of the most important problems in fundamental physics. Beginning with the nature of dark matter, one possibility is that it is made of new fundamental particles \cite{bertone2005particle}.
We study dark matter scenarios in BSM physics and look for distinctive dark matter signatures in direct and indirect dark matter searches, using astrophysical and cosmological probes. A complete understanding of the nature of dark matter requires utilizing several branches of physics and astronomy. The creation of dark matter during the hot and rapid expansion of the early universe is understood through statistical mechanics and thermodynamics. Particle physics is necessary to propose candidates for the dark matter content and explore its possible interactions with ordinary baryonic matter. General relativity, astrophysics, and cosmology dictate how dark matter acts on large scales and how the universe may be viewed as a laboratory to study dark matter. Many other areas of physics come into play as well, making the study of dark matter a diverse and interdisciplinary field. Furthermore, the profusion of ground- and satellite-based measurements in recent years has rapidly advanced the field, making it dynamic and timely$;$ we are truly entering the era of precision cosmology \cite{garrett2011dark}. In an attempt to explain the particle nature of the dark matter component, we study models of very light dark matter candidates called axions and axion-like particles (ALPs). Axions are hypothetical elementary particles introduced by Peccei and Quinn to solve the CP problem of the strong interactions in quantum chromodynamics. Furthermore, other particles with properties similar to those of the axion are postulated in many extensions of the standard model of particle physics and are also referred to as ALPs. One exciting aspect of axions and ALPs is that they might interact very weakly with the standard model particles. This property makes axions and ALPs plausible candidates that might contribute to the dark matter density of the universe.
The capability of axions and ALPs to contribute to the discovery of the dark matter composition, in addition to solving other problems in the standard model, strongly encourages us to pursue research on such interesting dark matter candidates. \section{Overview and outline of the thesis} The standard model of particle physics, together with the standard model of cosmology, provides the best understanding of the origin of matter and the most acceptable explanation of the behavior of the universe. However, the shortcomings of the two standard models in solving some problems within their frameworks promote the search for new physics beyond the standard models. Dark matter is one of the strongest motivations for going beyond the standard models. In this thesis, we focus on understanding the nature of dark matter through the study of the phenomenological aspects of axion and axion-like particle dark matter candidates, in cosmology and astrophysics. In chapter \ref{ch2}, we review the current status of dark matter research. We briefly explain the first hints that dark matter exists, elaborate on the strong evidence physicists and astronomers have accumulated in the past years, discuss possible dark matter candidates, and describe various detection methods used to probe the dark matter's mysterious properties. The theoretical background on the QCD axion, including the strong CP problem, the Peccei-Quinn solution, and the phenomenological models of the axion, is described, and the main properties of the invisible axion are briefly discussed, in chapter \ref{ch3}. The possible role that axions and ALPs can play in explaining the mystery of dark matter is the topic of chapter \ref{ch4}. To discuss whether they correctly explain the present abundance of dark matter, we investigate their production mechanisms in the early universe.
Afterwards, we discuss the recent astrophysical, cosmological, and laboratory bounds on the axion coupling to ordinary matter. In chapter \ref{ch5}, we consider a homogeneous cosmic ALP background (CAB) analogous to the cosmic microwave background (CMB) and motivated by many string theory models of the early universe. The coupling between photons and the CAB ALPs traveling in cosmic magnetic fields allows ALPs to oscillate into photons and vice versa. Using the M87 jet environment, we test the CAB model that was put forward to explain the soft X-ray excess in the Coma cluster via CAB ALP conversion into photons. We then demonstrate the potential of the active galactic nuclei (AGN) jet environment to probe low-mass ALP models and to potentially exclude the model proposed to explain the Coma cluster soft X-ray excess. We turn our attention in chapter \ref{ch6} to a scenario in which ALPs may form a Bose-Einstein condensate (BEC) and, through their gravitational attraction and self-interactions, thermalize to spatially localized clumps. The coupling between ALPs and photons allows the spontaneous decay of ALPs into pairs of photons. For ALP condensates with very high occupation numbers, the stimulated decay of ALPs into photons is possible, and thus the photon occupation number can receive Bose enhancement and grow exponentially. We study the evolution of the ALP field due to stimulated decays in the presence of an electromagnetic background, which exhibits an exponential increase in the photon occupation number, taking into account the role of the cosmic plasma in modifying the photon growth profile. We focus on investigating the plasma effects in modifying the early universe stability of ALPs, as this may have consequences for attempts to detect the decay of cold dark matter (CDM) ALPs with forthcoming radio telescopes such as the Square Kilometer Array (SKA).
Finally, chapter \ref{ch7} is devoted to summarizing our arguments and drawing our conclusions. We point out that research on axions and ALPs will be one of the main frontiers in the near future, since the discovery of these particles could solve some of the common unresolved problems between particle physics and cosmology and take us a step forward towards understanding nature. For completeness, some useful notations and conversion relations are outlined in the appendix. \chapter{\textbf{General Context and Overview of Dark Matter}} \label{ch2} During the last century, the development of astrophysical and cosmological observations has provided rich information that significantly improved our understanding of the universe. One of the most astounding revelations is that ordinary baryonic matter is not the dominant form of material in the universe$;$ instead, some strange invisible matter fills our universe \cite{garrett2011dark}. This new form of matter is known as dark matter, and it is roughly five times more abundant than ordinary baryonic matter \cite{aghanim2018planck}. Although this strange dark matter has not yet been detected in the laboratory, there is a great deal of observational evidence that points to the necessity of its existence. The purpose of this chapter is to present a brief overview of the evidence for the existence of dark matter and to study its properties, possible candidates, and detection methods. \section{What is dark matter\boldmath$?$} \label{Sec.21} In principle, direct information on cosmology can be obtained by measuring the spectrum of mass as it evolves with cosmic time, which could enable direct reconstruction of the present mass density of the universe \cite{cline2001sources}. But each type of observational test encounters degeneracies that cannot be resolved without considering an additional sort of matter that behaves differently and cannot be observed by all observational techniques.
Maybe this new type of matter does not interact strongly enough with anything we can readily detect or see, and therefore it is basically invisible to us and is referred to as ``dark matter''. The name dark matter is really just a label for the hole in our fundamental understanding of nature$;$ something is missing, and we do not know what it is. Precisely, it is called dark because it does not emit, reflect, or absorb light, and since it has no identifiable form, it is called by the most generic word, matter, as it behaves like matter. The only reason we believe dark matter makes up about $85\%$ of the mass of the known universe is its observable gravitational effects \cite{lisanti2017lectures}. It is hardly the first time that scientists have invoked unseen entities to account for phenomena that seemed inexplicable at the time. Eventually, some such ghostly entities turned out to be real, and the rest were disproved \cite{livio2004dark}. \section{Evidence of existence of dark matter} \label{Sec.22} In this section, we review the observational and astrophysical evidence for the presence of dark matter at a wide variety of scales, from the scale of the smallest galaxies to clusters of galaxies and cosmological scales. \subsection{The discovery of Neptune} Let us here consider a historical precedent that might be related to dark matter$:$ the discovery of the planet Neptune. Early in the 19th century, astronomers noticed that the newly discovered planet Uranus was not following the orbit predicted by Newton's theory of gravitation. Some speculated that an unseen world was tugging on it. In 1846, Le Verrier assumed that this unseen world was an undiscovered planet, and accordingly, he calculated its expected location in the sky based upon Newton's laws \cite{le1846recherches}. This hypothetical planet was then observed later in 1846 by Galle and is now known as the planet Neptune \cite{galle1846account}.
We refer to this historical incident because it is similar to the current situation of dark matter. Before the discovery of Neptune, it was just a theoretical hypothesis representing the unseen mass or ``dark matter''. Of course, we know now that Neptune is not part of dark matter, but the idea of discovering something missing or unseen is the same. \subsection{Discovery of missing mass ``dark matter''} The evidence for dark matter came for the first time in 1933, when the Swiss astronomer Fritz Zwicky studied the movement of galaxies in the nearby Coma cluster (99 Mpc away from the Milky Way) \cite{zwicky1933rotverschiebung, babcock1939rotation}. The magnitudes of the velocities of the galaxies with respect to each other were found to be much larger than the values that would be consistent with gravitational confinement based on the potential well arising from the visible matter alone. The argument was based on estimating the Coma cluster dynamical mass using the virial theorem \cite{jeans1921dynamical, hillstatistical}. The theorem provides a general equation relating the average total kinetic energy $ \langle K \rangle$ and the average total potential energy $\langle V \rangle$ of a bound system in equilibrium \begin{equation} \label{eq.2.1} 2 \langle K \rangle + \langle V\rangle = 0 \:. \end{equation} For the moment, if the Coma cluster consists of $N$ galaxies, then its average total kinetic energy can be written as \begin{equation} \label{eq.2.2} \langle K \rangle = \frac{1}{2} \sum_i^N m_i v_i^2 = \frac{1}{2} v^2 \sum_i m_i = \frac{1}{2} M_{\text{c}} v^2 \:, \end{equation} where $v^2$ is the mean squared velocity of the galaxies, and $M_{\text{c}}$ is the total mass of the Coma cluster.
Then, supposing that the Coma cluster is spherical, its average gravitational potential energy is approximately \begin{equation} \label{eq.2.3} \langle V \rangle = - \frac{1}{2} \sum_i^N \sum_{j>i} \frac{G m_i m_j}{R_{ij}} = - \frac{3}{5} \frac{ GM^2_{\text{c}}}{R_{\text{c}}} \:, \end{equation} where the sum is taken over all possible pairs of galaxies. Here $R_{ij}$ is the effective distance between two galaxies and $R_{\text{c}}$ is the effective radius of the Coma cluster. Thus, we have obtained expressions for the two terms in equation \eqref{eq.2.1}, which leads to \begin{equation} v^2 = \frac{3}{5} \frac{ G M_{\text{c}}}{R_{\text{c}}} \:. \end{equation} The last formula provides a method to estimate the total mass of the cluster in terms of its mean squared velocity. The unexpected observations, based on the total luminous mass of the Coma cluster, showed that the average velocity of the cluster galaxies was so high that \begin{equation} v^2 \gg \frac{3}{5} \frac{ G M_{\text{c}}}{R_{\text{c}}} \:. \end{equation} This implies that the Coma cluster may not obey the virial theorem, and that the cluster is not even a gravitationally bound system. In such a case, the system is dominated by kinetic energy, individual galaxies should escape from the cluster, and hence the cluster must decay. But this situation is not consistent with observations. Another possible scenario is that the system is in virial equilibrium, but it possesses a deeper gravitational potential, and accordingly, there is much additional nonluminous matter. That said, the proof of the existence of such nonluminous matter may only come with its direct detection \cite{sanders2010dark}. Therefore, Zwicky concluded that there must be a large amount of invisible matter within the cluster, which he termed ``dark matter''. Nevertheless, this was viewed as too unconventional, so the idea was not taken seriously and was ignored at that time.
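To get a feeling for the numbers involved, the virial relation above can be inverted to $M_{\text{c}} = 5 v^2 R_{\text{c}} / (3G)$ and evaluated numerically. The following short Python sketch does this with illustrative round values for the velocity dispersion and cluster radius (roughly Coma-like assumptions chosen here for orientation, not Zwicky's actual data):

```python
# Order-of-magnitude virial mass estimate for a Coma-like galaxy cluster.
# The velocity dispersion v and radius R below are illustrative round
# numbers, assumed for this sketch.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
MPC = 3.086e22       # one megaparsec, m


def virial_mass(v, R):
    """Cluster mass from 2<K> + <V> = 0 with <V> = -(3/5) G M^2 / R,
    i.e. M = 5 v^2 R / (3 G)."""
    return 5.0 * v**2 * R / (3.0 * G)


v = 1.0e6            # ~1000 km/s galaxy velocity dispersion (assumed)
R = 3.0 * MPC        # effective cluster radius ~3 Mpc (assumed)

M = virial_mass(v, R)
print(f"Virial mass ~ {M / M_SUN:.1e} solar masses")  # of order 1e15 M_sun
```

With these inputs the dynamical mass comes out at around $10^{15}$ solar masses, far above what the cluster's luminous matter alone can account for, which is the essence of Zwicky's argument.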
The possibility that this missing matter would be non-baryonic was unthinkable at this time$;$ nevertheless, this evidence is now regarded as the first hint for the existence of dark matter. Zwicky estimated the discrepancy between the mean density of the Coma cluster obtained from his observations and the mean density derived from the luminous matter to be about a factor of $400$ (or $100$, depending on the assumed Hubble constant). Observations of X-ray emitting hot gas in galaxy clusters reveal that most of the baryonic content is in the form of such hot gas, whose mass easily exceeds the mass of stars in the individual galaxies by a factor of $3\textup{--}4$ \cite{jones1984structure}. Since Zwicky did not count this hot gas, a fraction of the missing matter was actually found later. Modern observations confirm the discrepancy, though it is reduced to a factor of $5\textup{--}6$. We now know that luminous stars make up only a tiny $1\%$ of the total cluster mass$;$ an additional $14\%$ is in the form of a baryonic hot intracluster medium, and the remaining $85\%$ is in the form of dark matter \cite{aghanim2018planck}. \subsection{Rotation curves of spiral galaxies} Some of the most substantial evidence for the existence of dark matter was found in the 1970s, thanks to the significant contributions of the astronomers Rubin, Ford, and Thonnard in studying the rotational motion of stars in spiral galaxies \cite{rubin1970rotation, rubin1978extended, rubin1980rotational}. Once again, explaining the results required the existence of vast amounts of invisible matter. In their work, they measured the rotational velocities of spiral galaxies via redshifts and used the data to calculate the corresponding mass distribution $M(r)$. They then compared the observations with the values expected from applying Newton's law to a spherically symmetric distribution of mass.
Equating gravitational and centripetal force gives \begin{equation} F = \frac{G m M(r)}{r^2} = \frac{m v^2(r)}{r} \:, \end{equation} where $v(r)$ is the rotational velocity, $r$ is the distance from the galactic center, and $M(r)$ is the mass enclosed within the distance $r$. The rotation curve is defined as the plot of the rotational velocity as a function of $r$, and it can be described by the following formula \begin{equation} \label{eq.2.7} v(r) = \sqrt{\frac{G M(r)}{r}} \:. \end{equation} Spiral galaxies are composed of a dense central bulge and a thin disk in the outer region, so most of the mass is located in the central bulge. If we assume a constant density for the bulge, then for $ r \ll R_c$ the mass increases as the volume $(\propto r^3)$, while at large distances $r \gg R_c$ the mass is essentially independent of $r$. Inserting this into equation \eqref{eq.2.7} gives the expected rotation curve \begin{equation} v(r) \propto \begin{cases} \: r & \quad r \ll R_c \:, \\ \: r^{-1/2} & \quad r \gg R_c \:. \end{cases} \end{equation} Thus, the velocity is expected to increase linearly with $r$ until it reaches a maximum, and then to fall off. The observations do not agree with this expectation. Instead of falling off at large $r$, the observed rotation curves remain flat with increasing $r$, at least out to values of $r$ comparable with the disk radius$;$ see a typical example in figure \ref{Fig.2.1}. To explain this unexpected flat behavior, one can assume either a modified theory of gravity or the existence of a considerable amount of invisible mass, interacting only gravitationally and extending far beyond the limits of the visible galaxy. The dark matter explanation thus imposes itself as one of the most powerful solutions to the rotation curve problem on the galactic scale. \begin{figure}[t!]
\centering \includegraphics[scale=0.5]{rcurve} \caption{Rotation curve of a typical spiral galaxy, predicted and observed.} \label{Fig.2.1} \end{figure} \subsection{Gravitational lensing} \begin{figure}[t!] \centering \includegraphics[scale=0.4]{grav_lensing} \caption[An illustration of gravitational lensing. Light from a distant galaxy is bent by a foreground galaxy, and when this light is observed on Earth, we see virtual images of the distant galaxy at different sky positions. The light also gets magnified, and this is visible in the top image, which is brighter than the lower image. The shape of the top images is also altered compared to the original shape of the galaxy. Image credit$:$ NASA/JPL-Caltech, 2010.]{An illustration of gravitational lensing. Light from a distant galaxy is bent by a foreground galaxy, and when this light is observed on Earth, we see virtual images of the distant galaxy at different sky positions. The light also gets magnified, and this is visible in the top image, which is brighter than the lower image. The shape of the top images is also altered compared to the original shape of the galaxy. Image credit$:$ NASA/JPL-Caltech, 2010.} \label{Fig.2.2} \end{figure} Another very solid piece of evidence for the presence of dark matter comes from studying galaxy clusters. Direct measurements of the total mass of a cluster can be made based on the gravitational lensing effect \cite{einstein1911influence, einstein1936lens}. This technique exploits the principle of general relativity that massive objects curve spacetime. If clusters contain large amounts of dark matter, this additional mass produces a deeper distortion in the fabric of spacetime, and thereby a stronger bending of the paths of light rays. Accordingly, the cluster acts as a giant lens, distorting the images of galaxies behind it.
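For orientation, the deflection geometry implies an Einstein angle $\theta_E = \sqrt{(4GM/c^2)\, d_{\text{LS}}/(d_L d_S)}$. A rough numerical sketch for a cluster-scale lens follows$;$ the lens mass and all distances below are illustrative assumptions, not data from the text:

```python
import math

# Order-of-magnitude Einstein angle,
# theta_E = sqrt((4 G M / c^2) * d_LS / (d_L * d_S)).
# Lens mass and distances are illustrative cluster-scale assumptions.
G = 6.674e-11          # m^3 kg^-1 s^-2
C = 2.998e8            # speed of light, m/s
M_SUN = 1.989e30       # kg
MPC = 3.086e22         # m

M = 1.0e14 * M_SUN                                   # assumed lens mass
d_L, d_S, d_LS = 1000 * MPC, 2000 * MPC, 1000 * MPC  # assumed distances

theta = math.sqrt(4 * G * M / C**2 * d_LS / (d_L * d_S))  # radians
theta_arcsec = theta * 180 / math.pi * 3600
print(f"theta_E ~ {theta_arcsec:.0f} arcsec")
```

The angle comes out at tens of arcseconds, consistent with the scale of the giant arcs observed around massive clusters.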
This phenomenon is known as strong gravitational lensing, see figure \ref{Fig.2.2}. The angular separation between the different images is \begin{equation} \theta = \sqrt{\frac{4GM}{c^2} \frac{d_{\text{LS}}}{d_L d_S}} \:, \end{equation} where $M$ is the mass of the object acting as the lens, $d_{\text{LS}}$ is the distance from the lens to the source, and $d_L$, $d_S$ are the distances from the observer to the lens and to the source, respectively. Hence, the sizes and distances involved in the images captured by telescopes are directly linked to the mass of the lensing foreground cluster. When a large gravitational mass located between a background source and the observer bends the light strongly enough to produce multiple images, the effect is called strong lensing. If the background source lies exactly behind a massive circular object in the foreground, a complete ``Einstein ring’’ appears$;$ in more complicated cases, such as a background source that is slightly offset or a lens with a complex shape, one can still observe arcs or multiple images of the same source. The mass distribution of the lens can then be inferred from the measurement of the ``Einstein radius’’ or, more generally, from the positions and shapes of the source images. Once again, the total measured mass of the lens is not in agreement with the estimate from the luminous mass. This leads once more to the conclusion that a large fraction of the mass of clusters is composed of dark matter. This technique has also been used to create the first three-dimensional maps of the dark matter distribution in cosmic space, providing evidence for the large-scale structure of matter in the universe and constraints on cosmological parameters. \subsection{Bullet cluster} \begin{figure}[t!]
\centering \includegraphics[scale=0.52]{Bullet_cluster} \caption[Matter distribution in X-rays (colors) and using the gravitational lensing (contours) of the Bullet cluster.]{Matter distribution in X-rays (colors) and using the gravitational lensing (contours) of the Bullet cluster. Figure taken from reference \cite{clowe2006direct}.} \label{Fig.2.3} \end{figure} The existence of dark matter can also be inferred from the comparison between the luminous mass of a cluster and the mass determined from the X-ray emission of its electron component$;$ for more detail about this method, see references \cite{sadat1997clusters, piffaretti2008total}. The X-ray emission allows inferring the temperature of the gas, which in turn gives information about its mass through the equation of hydrostatic equilibrium. For a system with spherical symmetry, hydrostatic equilibrium implies \begin{equation} \dfrac{d P}{d r} = - \frac{G M(r) \rho(r)}{r^2} \:, \end{equation} where $P$, $M(r)$, and $\rho(r)$ are respectively the pressure, enclosed mass, and density of the gas at radius $r$. In order to rewrite this formula in terms of the temperature $T$, we can use the equation of state of an ideal gas, $PV = N k_B T$, where $V$ is the volume of the gas, $k_B$ is the Boltzmann constant, and $N$ is the total number of electrons and ionized nuclei in the gas, which can be expressed as $N = M/(\mu m_p)$, where $M$ is the total mass of the gas, $m_p$ is the proton mass, and $\mu \simeq 0.6$ is the average molecular weight. Since $M/V = \rho$, the equation of hydrostatic equilibrium for an ideal gas now reads \begin{equation} \dfrac{d \log \rho}{d \log r} + \dfrac{d \log T}{d \log r} = - \frac{G \mu m_p M(r)}{k_B T(r) r} \:. \end{equation} The temperature of clusters is roughly constant outside their cores, and the density profile of the observed gas at large radii roughly follows a power law with an index between $-2$ and $-1.5$.
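A quick numerical check confirms the order of magnitude of the expected gas temperature. This is a sketch$;$ only $\mu = 0.6$ and the logarithmic slopes between $-1.5$ and $-2$ come from the text, while the reference values $M(r) = 10^{14} \ \mathrm{M}_{\odot}$ at $r = 1 \ \text{Mpc}$ anticipate the relation quoted next:

```python
# Numerical check that the hydrostatic equation with logarithmic slopes
# of magnitude 1.5 to 2 yields k_B T of order 1-2 keV for
# M(r) = 1e14 solar masses at r = 1 Mpc.
G = 6.674e-11          # m^3 kg^-1 s^-2
M_P = 1.673e-27        # proton mass, kg
M_SUN = 1.989e30       # kg
MPC = 3.086e22         # m
KEV = 1.602e-16        # keV in joules
MU = 0.6               # mean molecular weight, from the text

M, r = 1.0e14 * M_SUN, 1.0 * MPC
for slope in (1.5, 2.0):   # |d log rho/d log r + d log T/d log r|
    kT = G * MU * M_P * M / (r * slope) / KEV
    print(f"slope {slope}: k_B T ~ {kT:.2f} keV")
# The two values bracket the (1.3 - 1.8) keV prefactor quoted in the text.
```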
Therefore, for the baryonic mass of a typical cluster, the temperature should obey the relation \begin{equation} k_B T \approx (1.3 - 1.8) \ \text{keV} \left( \frac{M(r)}{10^{14} \mathrm{M}_{\odot}} \right) \left( \frac{1 \ \text{Mpc}}{r} \right) \:. \end{equation} The disparity between the temperature obtained from this relation when $M(r)$ is identified with the baryonic mass and the observed temperature, $k_B T \approx 10 \ \text{keV}$, suggests the existence of a substantial amount of dark matter in clusters. Based on this method, one of the most direct empirical pieces of evidence for the existence of dark matter comes from studying the Bullet cluster, which formed out of a collision of two smaller clusters \cite{clowe2006direct}. When the two galaxy clusters passed through each other, the ordinary matter components collided and slowed down, while the dark matter components passed through each other without interacting or slowing down. The collision between the two galaxy clusters has thus led to the separation of the dark matter and ordinary matter components of each cluster. This separation was detected by comparing X-ray images of the luminous matter taken with the Chandra X-ray Observatory with measurements of the cluster's total mass from gravitational lensing observations, see figure \ref{Fig.2.3}. This method not only gives evidence for the existence of dark matter but also locates the dark matter and ordinary matter in the cluster and reveals differences between their behaviors. While the two smaller clumps of ordinary matter are moving away from the center of the collision with low speeds, the two large clumps of dark matter are moving in front of them with higher speeds. This points to the collisionless behavior of the dark matter components, and implies that the self-interactions of dark matter must be very weak.
It is interesting that this can be counted as direct evidence for dark matter, as it is independent of the details of Newtonian gravitational laws. \subsection{Evidence on cosmological scales} The analysis of the cosmic microwave background \cite{fixsen1996cosmic, penzias1965measurement, dicke1965cosmic} is another useful tool, not only for providing proof of dark matter but also for determining the total amount of dark matter in the universe. The CMB is the thermal relic radiation left over from the early stages of the universe at redshift $z \sim 1100$, around $380,000$ years after the big bang. The CMB consists of photons emitted during the recombination era, when the free electrons and protons\footnote{There is an approximation here that all baryons in the universe at this time are in the form of protons.} combined into neutral hydrogen atoms, thus allowing photons to decouple from matter and stream freely across the universe. In this way, the mean free path of the photons increased suddenly from very short to very long length scales, quickly followed by their decoupling. The spectrum of the CMB is well described by a blackbody function$;$ due to the decoupling, its temperature evolves differently from that of matter. The blackbody radiation at the recombination temperature evolves into blackbody radiation in the present universe at a lower temperature. Because the temperature is proportional to the mean photon energy, which has redshifted with the expansion of the universe, the photons retain information about the state of the universe at the recombination epoch, and thus carry remnant information about the general properties of matter in the early universe. At the present date, one observes the CMB as radiation with a nearly perfect blackbody spectrum at temperature $T_0=2.725 \ \text{K}$, corresponding to the energy $k_B T_0 =2.35 \times10^{-4} \ \text{eV}$.
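As a trivial arithmetic cross-check of the quoted temperature-energy conversion:

```python
# Check that T_0 = 2.725 K corresponds to k_B T_0 ~ 2.35e-4 eV.
K_B_EV = 8.617e-5      # Boltzmann constant in eV/K
T0 = 2.725             # present CMB temperature, K

energy_ev = K_B_EV * T0
print(f"k_B T0 ~ {energy_ev:.2e} eV")
assert abs(energy_ev - 2.35e-4) < 1e-6
```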
Precision measurements of the anisotropies in the angular distribution of the CMB temperature across the sky map the presence of overdensities and underdensities in the primordial plasma before recombination, see the left panel of figure \ref{Fig.2.4}. For this reason, one can read information on the baryon and matter distribution of the universe in the spectrum of CMB anisotropies. The observed temperature of the CMB as a function of the angular position in the sky differs only by a small amount from the mean, and therefore the anisotropies are represented as a temperature difference \begin{equation} \frac{\delta T}{T} (\theta, \phi) = \frac{T(\theta, \phi) - \bar{T}}{\bar{T}} \:. \end{equation} Expanding this temperature difference in spherical harmonics, the analogue of a Fourier series in spherical coordinates, gives \begin{equation} \frac{\delta T}{T} (\theta, \phi) = \sum_{\ell=1}^{ \infty} \sum_{m=- \ell}^{\ell} a_{\ell m} Y_{\ell m} (\theta, \phi) \:, \end{equation} where $Y_{\ell m}(\theta, \phi)$ are the spherical harmonics and $a_{\ell m}$ are the multipole moments. On large scales the sky is extremely uniform and the anisotropies are extremely small, $\delta T/T \sim 10^{-5}$. The variance $C_\ell$ of a given moment is defined as \begin{equation} C_\ell \equiv \langle \vert a_{\ell m} \vert^2 \rangle \equiv \frac{1}{2 \ell+1} \sum_{m=-\ell}^{\ell} \vert a_{\ell m} \vert^2 \:. \end{equation} On small sections of the sky, where the universe is relatively flat, the spherical harmonic analysis becomes ordinary Fourier analysis in two dimensions. In this limit, $\ell$ becomes the Fourier wavenumber. Since the angular wavelength is $\theta= 2\pi/\ell$, large multipole moments correspond to small angular scales. From observations, it appears to be a reasonable approximation that the temperature fluctuations are Gaussian, see the right panel of figure \ref{Fig.2.4}.
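The estimator $C_\ell$ can be illustrated with a toy Gaussian sky. This is a sketch using simulated multipole moments, not real CMB data, and for simplicity the $a_{\ell m}$ are treated as real numbers:

```python
import math, random

# Toy illustration of the estimator C_l = (1/(2l+1)) * sum_m |a_lm|^2:
# draw the 2l+1 moments a_lm from a Gaussian with true variance C_true
# and check that the estimator recovers it, up to cosmic variance of
# relative size sqrt(2/(2l+1)).
random.seed(0)
C_true = 1.0
ell = 200
a_lm = [random.gauss(0.0, math.sqrt(C_true)) for _ in range(2 * ell + 1)]
C_hat = sum(a * a for a in a_lm) / (2 * ell + 1)
print(f"C_hat = {C_hat:.3f} (true value {C_true})")
assert 0.5 < C_hat < 1.5   # well within cosmic variance for l = 200
```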
Consequently, the information from the CMB can be accurately represented as a function of the multipole moment. \begin{figure}[t!] \centering \includegraphics[scale=.44]{planck-2015-cmb} \includegraphics[scale=.44]{planck-2015-temp} \caption[Left panel$:$ The full-sky map of the temperature anisotropies of the CMB as observed by the Planck satellite in 2015. Right panel$:$ Planck 2015 temperature power spectrum data in blue and the best-fit base $\mathrm{\Lambda} \text{CDM}$ theoretical spectrum in red.]{Left panel$:$ The full-sky map of the temperature anisotropies of the CMB as observed by the Planck satellite in 2015. Image credit$:$ ESA and the Planck Collaboration. Right panel$:$ Planck 2015 temperature power spectrum data in blue and the best-fit base $\mathrm{\Lambda} \text{CDM}$ theoretical spectrum in red. Image credit$:$ ESA and the Planck Collaboration.} \label{Fig.2.4} \end{figure} Now it is essential to understand the causes and meaning of the underlying anisotropies, which give rise to the so-called acoustic peaks in the power spectrum in the right panel of figure \ref{Fig.2.4}. They are primarily the result of a competition between baryons and photons in the primordial baryon-photon plasma. The pressure of the relativistic photons works to erase temperature anisotropies, while the heavy non-relativistic baryons tend to form dense halos of matter, thus creating sizable local anisotropies. The competition between these two effects creates acoustic waves in the baryon-photon plasma and is responsible for the observed acoustic oscillations. Each peak of this distribution can be related to cosmological parameters, thus providing a means of constraining cosmology through measurements of the CMB. The most recent such measurements come from the Planck Collaboration \cite{aghanim2018planck}.
According to these observations, the energy content of the universe comprises $68.3\%$ dark energy, $26.8\%$ dark matter, and $4.9\%$ baryonic matter, see figure \ref{Fig.2.5}. This provides compelling evidence for the existence of dark matter in large abundance throughout the universe. \begin{figure}[t!] \centering \includegraphics[scale=.50]{Planck_cosmic_recipe} \caption[The contents of the universe, according to recent results from the Planck Satellite.]{The contents of the universe, according to recent results from the Planck Satellite. Image credit$:$ ESA and the Planck Collaboration.} \label{Fig.2.5} \end{figure} \section{The need for non-baryonic dark matter} The evidence presented in the previous section is already sufficient to conclude that most of the matter in the universe is in the form of dark matter. The nature of this dark matter, whether it is baryonic or non-baryonic, is not yet known. Although we remain open to the possibility that at least a portion of the dark matter content is baryonic, there are strong cosmological pieces of evidence that bias us toward the hypothesis of non-baryonic dark matter. In this section, based on \cite{bertone2005particle, coc2004updated, gondolo1991cosmic, kolb1981early, sarkar2002high}, we clarify this issue. \subsection{Big bang nucleosynthesis} The modeling of the early universe by the standard big bang model gives a scenario that leads to the present cosmic abundances of elements outside the stars. According to the theory, our universe is thought to have begun as an infinitesimally small, infinitely hot, infinitely dense singularity around $\sim 14$ billion years ago. Some theories suggest that immediately after the big bang, the universe was too hot for any matter to exist and was filled with an unstable form of energy whose nature is not yet known.
Soon after, the universe started expanding and cooling down to the point where quarks could condense out of the energy, until a sort of thermal equilibrium existed between matter and energy. Then things were cool enough for quarks, leptons, and gauge bosons to associate with one another and form protons and neutrons, and more familiar particles like electrons and photons began to appear. The temperature dropped as the universe expanded, until the neutrons and protons froze out. About one minute after the big bang, when the universe had cooled enough, big bang nucleosynthesis (BBN) began to create the light elements. A few hundred thousand years later, the universe had cooled enough for electrons to be bound to nuclei, forming atoms. This is known as the recombination era, at which the universe first became transparent to radiation. Before that, photons were scattered by the free electrons, making the universe opaque. It is the radiation emitted during this recombination that makes up the cosmic microwave background radiation that we can still detect today. As things continued, the universe cooled sufficiently that stars, galaxies, clusters, and superclusters formed over a long period following the dark ages. What interests us here is that the production of the light nuclei ${}^2\text{H}, {}^3\text{He}, {}^4\text{He}$, and ${}^7\text{Li}$ occurred exclusively during the first few minutes after the big bang, while the heavier elements are thought to have their origins in the interiors of stars, which formed much later in the history of the universe. Consequently, no more light elements were formed after BBN. Moreover, these elements are not easily destroyed in stellar interiors. Therefore the baryonic contribution to the total mass contained in the universe can be determined from measurements of the present abundances of these light elements in cosmic rays.
The resulting elemental abundances depend only on the nuclear reaction rates and the baryon-to-photon ratio $\eta$ at the time. The nuclear reaction rates can be calculated theoretically and measured in the laboratory. The parameter $\eta$ is the present ratio of the baryon number density $n_b$ to the photon number density $n_\gamma$. This quantity is directly related to the value of $\mathrm{\Omega}_b h^2$ deduced by Planck as \begin{equation} \eta \equiv \frac{n_b}{n_\gamma} \approx 2.738 \times 10^{-8} \: \mathrm{\Omega}_b h^2 = (6.11 \pm 0.04) \times 10^{-10} \:, \end{equation} where $\mathrm{\Omega}_b$ represents the baryon density of the universe, and $h \approx H_0/100 \ \text{km} \ \text{s}^{-1} \ \text{Mpc}^{-1} = 0.6727 \pm 0.0066$ is the dimensionless Hubble parameter. Matching the observed abundances of the light elements in today's universe to the BBN predictions requires satisfying the condition$:$ \begin{equation} \mathrm{\Omega}_b h^2 = 0.02225 \pm 0.00016 \:. \end{equation} The BBN predictions for the abundances of the light elements as a function of the baryon-to-photon ratio, compared to observations, are shown in figure \ref{Fig.2.6}. The best-fit BBN prediction is in good agreement with the value inferred from the CMB power spectrum, a baryon density of $\mathrm{\Omega}_b \approx 0.04$. Combined with the total matter density inferred from the CMB observations, BBN not only confirms the existence of dark matter but also provides one more piece of evidence for its non-baryonic nature. Independent measurements of the total mass density of the universe from large-scale structure give $\mathrm{\Omega}_m \approx 0.29$. This directly implies that the remaining $\mathrm{\Omega}_{\textit{leftover}} \approx 0.25$ must be dark matter. Furthermore, we gain another invaluable piece of information about its nature$:$ we see immediately that dark matter must be predominantly non-baryonic.
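The conversion between $\mathrm{\Omega}_b h^2$ and $\eta$ quoted above can be cross-checked directly (pure arithmetic from the values in the text):

```python
# Cross-check of the baryon-to-photon ratio from the Planck value of
# Omega_b h^2, using the conversion factor quoted in the text.
omega_b_h2 = 0.02225
eta = 2.738e-8 * omega_b_h2
print(f"eta ~ {eta:.2e}")
# Consistent with the quoted (6.11 +/- 0.04) x 10^-10.
assert abs(eta - 6.11e-10) < 0.04e-10
```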
Being non-baryonic allows the possibility that dark matter is not capable of interacting with photons electromagnetically, which sits very well with the fact that it does indeed appear dark. This is also consistent with the dark matter being dissipationless$;$ were it able to absorb and re-emit photons, it would obscure stars within distant halos, as well as radiate away angular momentum and collapse with baryons to form stellar and galactic disks. \begin{figure}[t!] \centering \includegraphics[scale=.50]{BBN} \caption[BBN predictions for the abundances of light elements as a function of the baryon-to-photon ratio compared to observations.]{BBN predictions for the abundances of light elements as a function of the baryon-to-photon ratio compared to observations. Figure taken from reference \cite{bertone2005particle}.} \label{Fig.2.6} \end{figure} \subsection{Cosmic structure formation} The final cosmological argument for the need for dark matter arises from the growth of cosmic structure, from the nearly uniform distribution of baryonic matter imprinted on the CMB to the wide variety of structures observed today. A perfectly homogeneous expanding universe stays that way forever$;$ there would be no structures. However, this is not the case for our universe, as is evident from density maps of the universe and galaxy redshift surveys. The present-day distribution of matter is very inhomogeneous, at least on scales up to about $100 \ \text{Mpc}$. The standard theory of structure formation assumes that the universe was initially almost perfectly homogeneous, with a tiny modulation of its density field. The action of gravity then enhances the density contrast as time goes on, leading to the formation of galaxies or clusters when the self-gravity of an overdense region becomes large enough to decouple it from the overall Hubble expansion.
In this sense, one can use the value of the density fluctuations at the last scattering surface to estimate their present value. At very early times, the CMB indicates that the universe had a thermal distribution at the recombination era with temperature fluctuations $\delta T /T \sim 10^{-5}$. Because radiation and baryons were coupled before recombination, the baryon density must also have had fluctuations $\delta \rho /\rho \sim 10^{-5}$. Now let us look at how fluctuations in density evolve with time. For simplicity, we consider a flat, matter-dominated universe with $\mathrm{\Omega}_m = 1$ as a good approximation at early times. For the flat background, the Friedmann equation reads \begin{equation} H^2 - \frac{8}{3} \pi G \rho = 0 \:. \end{equation} Now consider a small region of this universe which has a little more matter than the rest, with density $\rho' > \rho$. Because of the excess matter, it behaves like a closed universe and evolves slightly differently$:$ \begin{equation} H^2 - \frac{8}{3} \pi G \rho' = - \frac{kc^2}{R^2(t)} \:. \end{equation} Subtracting one equation from the other, one easily gets the following expression for the fractional overdensity of the region$:$ \begin{equation} \frac{\delta \rho}{\rho} \equiv \frac{\rho' - \rho}{\rho} = \frac{3 k c^2}{8 \pi G R^2 \rho} \:. \end{equation} This implies that the fractional overdensity of the region depends on the evolving quantities $R$ and $\rho$ as follows$:$ \begin{equation} \frac{\delta \rho}{\rho} \sim \frac{1}{R^2 \rho} \sim \frac{1}{R^2 R^{-3}} \sim R \:. \end{equation} Therefore, as the universe gets bigger, the overdensity grows along with it. Since scale factor and redshift are related by $R= (1+z)^{-1}$, we can calculate the ratio between the present value of the expansion parameter and its value at the last scattering surface. The CMB is a picture of the universe at a redshift of $z \sim 1100$, while the present has no redshift, $z=0$.
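Putting in the numbers from the scaling $\delta \rho/\rho \sim R = (1+z)^{-1}$ (pure arithmetic from the values above):

```python
# Growth of the fractional overdensity between last scattering (z ~ 1100)
# and today (z = 0), using delta ~ R = 1/(1+z).
z_lss = 1100
growth = (1 + z_lss) / (1 + 0)        # ratio R_today / R_LSS
delta_today = 1e-5 * growth           # baryon-only prediction
print(f"growth ~ {growth:.0f}, delta_today ~ {delta_today:.0e}")
assert 1e3 <= growth < 1.2e3          # roughly a factor 10^3
assert delta_today < 0.1              # far below the observed ~1e5 contrast
```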
Thus, \begin{equation} \frac{R_{\text{today}}}{R_{\text{LSS}}} \sim 10^3 \:. \end{equation} If there is only baryonic matter, $\delta \rho_b / \rho_b$ cannot grow before the last scattering surface, so \begin{equation} \left( \frac{\delta \rho}{\rho} \right)_{\text{today}} = 10^3 \times \left( \frac{\delta \rho}{\rho} \right)_{\text{LSS}} \approx 10^{-2} \:. \end{equation} This cannot explain today's observed density fluctuations$;$ in the Milky Way, for example, the density contrast is about $10^5$. The density of a galaxy is thus much greater than the density of the universe as a whole, and taking into account only the baryonic matter content of the universe cannot explain such large density fluctuations. The existence of such non-linear structures today implies that the growth of fluctuations must have been driven by non-baryonic dark matter, which was already non-relativistic at recombination. The dark matter perturbations could grow before recombination, creating gravitational wells into which the baryons fall after they decouple at recombination. \section{Basic properties of dark matter} \label{Sec.23} Gradually it has also become clear that dark matter is required not only to keep galaxies and clusters stable but also to structure the entire universe. It provides the scaffolding for the formation of stars, galaxies, and clusters. We strongly believe that the dynamics of astrophysical and cosmological systems, ranging from the size of a galaxy to the whole universe, cannot be explained without assuming the existence of dark matter. At the same time, the identity of dark matter has far-reaching implications in particle physics. It may improve our understanding of possible new physics beyond the standard model of particle physics. Therefore we discuss in this section some of the main properties that must characterize any possible dark matter candidate.
The ideas in this section were discussed in \cite{sahni20045, lazarides2007particle, matarrese2011dark}. \begin{itemize} \item {\bf Relic abundance.} Astrophysical measurements indicate that the relic dark matter density accounts for about $85\%$ of the total matter density in the universe. It is possible that the dark matter particles were produced during the very early stages of the universe after the big bang, by either standard thermal production via scattering interactions in the thermal bath or nonthermal mechanisms. A good dark matter candidate must therefore be producible through such mechanisms under early-universe conditions with the correct abundance to account for this relic density. \item {\bf Electrically neutral.} Dark matter is not observed to shine and cannot be detected by telescopes$;$ for this reason, dark matter is considered to be optically dark. A successful dark matter candidate must therefore be electrically neutral, or at least have very weak electromagnetic interactions. This requires either that the dark matter particle has vanishing, or at least very small, electric charge and electric and magnetic dipole moments, or that the particles are very heavy. The major consequence of this behavior is that dark matter does not couple strongly to photons and consequently cannot cool by radiating them. Thus, it does not collapse to the centers of galaxies as the baryons do by radiating their energy away electromagnetically$;$ in this sense dark matter is very nearly dissipationless. \item {\bf Interaction strength.} So far, all astrophysical and cosmological indications of the presence of dark matter come from its gravitational effects. Dark matter particles therefore should not interact, or should at least interact only very weakly, not only with photons and electrically charged particles, but also with other electrically neutral particles.
The reason is that all interactions between dark matter and baryons except gravity need to be very weak during the recombination epoch$;$ otherwise, they would change the CMB acoustic peaks and would allow the dark matter to cool radiatively, affecting structure formation. \item {\bf Collisionless.} Furthermore, as mentioned above, astrophysical and cosmological observations indicate that dark matter interacts through gravity only, and that if the dark matter particles possess any other interactions, they must be very weak. To be consistent with observations, the dark matter particles must be nearly collisionless, with no self-interactions, or at least with self-interactions satisfying several astrophysical constraints. \item {\bf Temperature.} Dark matter was probably non-relativistic (``cold’’) during the formation of large-scale structures, as relativistic particles would cause the universe to be less clumpy than it is today. As a result, one can conclude that relativistic particles cannot constitute the majority of the dark matter$;$ instead, it should be dominated by non-relativistic species. Moreover, cold dark matter is capable of explaining the observed properties of galaxies quite well. However, this does not totally rule out the possibility that a fraction of dark matter consists of nearly massless particles moving at relativistic velocities (``hot’’). An intermediate possibility is (``warm’’) dark matter$:$ particles that were still relativistic at their decoupling temperature but cooled down to become non-relativistic by the time of matter-radiation equality. Another possibility is a mixed dark matter composed of several distinct particle species with different temperatures. \item {\bf Stability.} Dark matter must be either stable on cosmological timescales or at least long-lived, with a lifetime comparable to or longer than the present age of the universe.
This ensures that the particles can survive from the early universe until now. Another possible scenario to make dark matter stable is that its destruction and formation are in equilibrium. \item {\bf Non-baryonic nature.} Another important conclusion derived from observations is that the total matter content amounts to about $29\%$ of the total energy density of the whole universe. In contrast, the results of the CMB, together with the predictions from BBN, suggest that only about $4\%$ of the total energy budget of the universe is made out of ordinary baryonic matter. This directly implies that at least the majority of dark matter must be non-baryonic, constituting about $25\%$ of the total energy density of the universe. \item {\bf Fluid.} Dark matter must behave as a perfect fluid on large scales, which means that its granularity is sufficiently fine not to have been directly detected yet through various effects. \end{itemize} \section{Dark matter candidates} \label{Sec.26} Despite the overwhelming success of dark matter in explaining cosmic phenomena, we do not know what particle or particles it is made of. There are a great number of dark matter candidate particles, some of which are considered more promising than others. Unfortunately, observations do not yet place strong bounds on the possible candidates. In this section, we outline some of the most popular candidates for the dark matter particles. While this list is by no means exhaustive, we will attempt to cover the range of possibilities that have been considered at least qualitatively. We will proceed very roughly from the smallest mass candidates at $10^{-22} \ \text{eV}$ to the largest at $10^{72} \ \text{eV}$ \cite{baltz2004dark}. \subsection{WIMPs} Weakly interacting massive particles (WIMPs) are suggested to be one of the most plausible candidates for the CDM component in the universe \cite{jungman1996supersymmetric, bergstrom2000non}.
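A standard back-of-the-envelope estimate (the textbook freeze-out relation, see e.g. \cite{jungman1996supersymmetric}) relates the thermal relic density of such a particle to its annihilation cross section,
\begin{equation}
\Omega_{\chi} h^2 \approx \frac{3 \times 10^{-27} \ \text{cm}^3 \, \text{s}^{-1}}{\langle \sigma_{A} v \rangle} \:,
\end{equation}
so that a weak-scale cross section $\langle \sigma_{A} v \rangle \sim 3 \times 10^{-26} \ \text{cm}^3 \, \text{s}^{-1}$ yields $\Omega_{\chi} h^2 \sim 0.1$, close to the observed dark matter abundance.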
They represent a catch-all term for a class of particles that interact with a strength of the order of the weak-force interaction. They are therefore expected to interact with ordinary matter only through gravity and the weak force. In addition, they are suggested to have large masses near the weak scale, that is, between $10 \ \text{GeV}$ and a few $\text{TeV}$. This class of candidates is particularly interesting because of the WIMP miracle: the relic abundance of WIMPs, predicted independently by a few different particle physics theories, is approximately that required to explain all the extra gravitational force. It is also probably the largest class of dark matter candidates, firstly because it consists of hundreds of suggested particles, and secondly because their rest masses exceed those of the baryons, so they could account for much of the dark matter if, as most theories predict, they are common in the universe. \subsection{Axions and axion-like particles} Axions arise in the Peccei-Quinn solution to the problem of CP violation in the theory of strong interactions \cite{peccei1977cp}. On the one hand, they represent an essential extension of the standard model by offering solutions to its internal problems. On the other hand, they are considered a promising and well-motivated dark matter candidate. If axions do exist, they meet all the requirements of a successful cold dark matter candidate. They interact weakly with the baryons, leptons, and photons of the standard model. Also, they are non-relativistic during the time when structure begins to form. Moreover, they are capable of providing some or even all of the CDM density. In addition, they are relatively light and electrically neutral. Although the axion mass is arbitrary over the range $10^{-6} \ \text{eV}$ to a few $\text{eV}$, the symmetry breaking occurs at a high energy scale and hence early in the universe.
Axion theories predict that the universe is filled with a very cold Bose-Einstein condensate of primordial axions, which never comes into thermal equilibrium. The axions in this condensate are always non-relativistic, and are therefore a CDM candidate rather than the HDM candidate one would suppose from their mass. Furthermore, there are abundant theories beyond the standard model, such as supersymmetric theories and string theory models, that predict many other particles very similar to the QCD axions, called axion-like particles (ALPs). In general, the properties of this more general class of ALPs are very similar to those of the QCD axions and are determined by their coupling to two photons. The main difference between the two categories is that the mass of the QCD axions is related to the coupling parameter, while this is not necessarily the case for the ALPs, making the corresponding parameter space larger. In addition to the properties that make them very viable candidates for the dark matter, their ability to solve other problems in physics of different theoretical origins makes the study of the phenomenology of axions and ALPs an extraordinarily exciting subject for current research. More technical details about the theoretical origin and the characteristics of such particles will be discussed in the following two chapters. \subsection{Fuzzy dark matter} Fuzzy dark matter (FDM) \cite{hu2000fuzzy}, motivated by string theory, has recently become a popular candidate for dark matter that may resolve some of the current small scale problems of the cold dark matter model. It is also known as ultralight axions, ultralight scalar dark matter, or wave dark matter. The main idea of this model is that the dark matter particles are made of ultra-light bosons in a Bose-Einstein condensate state. The rest mass of FDM is believed to be about $10^{-22} \ \text{eV}$, and the corresponding de Broglie wavelength is $\sim 1 \ \text{kpc}$.
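To see where this scale comes from, assume a typical galactic virial velocity $v \approx 100 \ \text{km/s}$ (an illustrative value, not fixed by the discussion above); the de Broglie wavelength is then
\begin{equation}
\lambda_{\text{dB}} = \frac{2 \pi \hbar}{m v} = \frac{hc}{mc^2} \, \frac{c}{v} \approx \frac{1.24 \times 10^{-6} \ \text{eV m}}{10^{-22} \ \text{eV}} \times 3 \times 10^{3} \approx 3.7 \times 10^{19} \ \text{m} \approx 1.2 \ \text{kpc} \:,
\end{equation}
consistent with the kpc scale quoted above.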
Therefore, the quantum effects of FDM play an important role in structure formation. \subsection{Sterile neutrinos} Dark matter could also be a special form of neutrino, which is tricky because neutrinos are already special. As we know, the standard model neutrinos come in three different flavors: electron, muon, and tau neutrinos. They can shift between flavors as they travel through space \cite{gonzalez2008phenomenology}. Dark matter neutrinos, if they exist, are predicted to interact with ordinary matter only when they flip between flavors. What supports this hypothesis is that neutrinos already fulfill many of the qualifications of a dark matter candidate. They have mass and are weakly interacting. However, given their relic density, they are not massive enough to contribute significantly to the total amount of dark matter in the universe. Furthermore, their low interaction rate and low mass correspond to a long free streaming length, meaning that they could not account for structure formation on scales below $\sim 40 \ \text{Mpc}$. Therefore the standard model neutrinos are excluded. However, sterile neutrinos, which do not interact via the weak force, with a mass of at least $\sim 10 \ \text{keV}$, have been proposed as a candidate. \subsection{Supersymmetric candidates} Supersymmetry is one of the most popular proposals for physics beyond the standard model and is flexible enough to offer several dark matter candidates \cite{jungman1996supersymmetric, roszkowski1993supersymmetric}. Supersymmetric theories extend the symmetry properties of spacetime in ordinary quantum field theory to provide a link between bosons and fermions, \ie between particles with integer spins and particles with half-integer spins. If supersymmetry exists in nature, then each standard model particle should have a corresponding superpartner with the opposite spin-statistics.
So far, no unbroken supersymmetry has been observed in nature, and it is thus obvious that if nature is described by supersymmetry, it has to be broken at some high energy scale. Indeed, if supersymmetry is realized in a low energy world like the one in which we live, it must be broken. It is common in supersymmetric theories with conserved R-parity that the supersymmetry breaking allows for a mass difference between the superpartners. In this framework, the lightest superpartner (LSP) is absolutely stable and only weakly interacting, and therefore it happens to be a very suitable dark matter candidate. The most studied supersymmetric dark matter candidates are listed below. \begin{itemize} \item {\bf Gravitinos.} The first dark matter candidate proposed by supersymmetry was the gravitino \cite{pagels1982supersymmetry, feng2003superweakly1, feng2003superweakly2}. It is a spin $3/2$ particle and the superpartner of the graviton in local supersymmetry, \ie supergravity. If the gravitino is the LSP, in some models it is often quite light, \ie of order $\text{keV}$, and would thus be considered a warm dark matter candidate. The overproduction of gravitinos is somewhat problematic in cosmology, though not insurmountably so. If gravitinos are not the LSP and are sufficiently heavy, they decay during or after the BBN, giving rise to the famous ``gravitino problem''. For these reasons, gravitinos can not account for the entire relic density of the dark matter composition. \item {\bf Neutralinos.} The lightest neutralino is the most favored supersymmetric dark matter candidate. Several supersymmetric models contain four neutralinos, characterized as electroweakly interacting particles with spin $1/2$ \cite{weinberg1982cosmological, goldberg1983constraint}. If the lightest neutralino is the LSP, it is considered a perfect dark matter candidate.
The success of such models depends on the fact that very light neutralinos are suitable to account for the proper relic abundance of the dark matter composition. \item {\bf Sneutrinos.} Another interesting possibility for supersymmetric dark matter candidates is the partners of the neutrinos. However, this candidate is quite disfavored because one can not explain a relic density consistent with the dark matter: the sneutrinos annihilate very effectively, and this requires their masses to be above $500 \ \text{GeV}$. Given their quite large elastic scattering cross sections on nuclei, they would be easily detectable in current experiments, which has not been the case for dark matter so far \cite{hagelin1984perhaps, ibanez1984scalar, falk1994heavy}. \item {\bf Axinos.} The axino is the supersymmetric partner of the axion and shares many of the same properties. The theory does not predict the mass of the axino, and it could, in principle, be the lightest supersymmetric particle rather than the neutralino. Axinos may be produced in decays of heavier supersymmetric particles and thereby achieve a relic density appropriate to a dark matter candidate. They might be either warm or cold dark matter depending on the conditions in the early universe \cite{rajagopal1991cosmological}. \item {\bf Q-balls.} Supersymmetric theories predict the existence of stable non-topological solitons known as Q-balls. They are characterized by their relatively strong self-interactions and can be absolutely stable if large enough. Therefore, they have been suggested as promising candidates for collisional dark matter \cite{kusenko1998supersymmetric}. \item {\bf Split SUSY.} It has recently been argued that the successes of supersymmetric models hinge on the gauginos having masses at the weak scale, while having the scalar superpartners at the weak scale is somewhat problematic.
The null results of all low energy supersymmetric searches led to the development of Split SUSY models, which are based only on gauge unification and dark matter as guiding principles. If the Higgs fine-tuning problem is simply ignored, the scalar superpartners can be made very heavy while keeping light gauginos and Higgsinos. This seems to be an exciting scenario for new supersymmetric dark matter candidates \cite{wells2003implications, arkani2005supersymmetric}. \end{itemize} \subsection{Lightest Kaluza-Klein particle} There are exotic dark matter candidates that arise in models with universal extra dimensions. According to such models, our four-dimensional spacetime may be embedded in a higher-dimensional space at high energy or short distance. This has some interesting theoretical ramifications, such as the unification of forces, a concept introduced by Kaluza in 1921 when he unified electromagnetism and gravity in a 5-dimensional theory \cite{appelquist2001bounds}. The additional spatial dimensions could be compactified, meaning that they are finite and probably very small. The simplest case would be a compactified dimension in the shape of a circle. Momentum along such a circle would be quantized. While standard model physics would exist in the lowest state, with no momentum along the compactified dimension, there would also be the possibility of excitations, forming a ladder of excited Kaluza-Klein (KK) states \cite{antoniadis1990possible}. This would correspond to additional particles outside the standard model, and potentially more possible dark matter candidates. \subsection{Chaplygin gas} One very exciting proposal attempts to unify dark energy and dark matter in a class of simple cosmological models based on the use of a peculiar perfect fluid dubbed the Chaplygin gas \cite{chaplygin1944gas, bilic2002unification, fabris2002density}.
This perfect fluid is characterized by the exotic equation of state $p = -A/\rho$, where $A$ is a positive real parameter. The pure Chaplygin gas has been extended to the so-called generalized Chaplygin gas with the equation of state $p = -A/\rho^{\alpha}$, where $\alpha$ is another positive real parameter \cite{kamenshchik2001alternative, bento2002generalized}. This type of fluid can arise in certain string-inspired models involving d-branes. The interesting feature of this model is that it naturally provides a universe that undergoes a transition from a decelerating phase driven by dust-like matter at moderate redshift to an accelerating universe at later stages of its evolution. \subsection{Mirror matter} The concept of hidden sectors of matter is the modern version of the old idea of a mirror copy of the world; see \cite{foot2004mirror} for a review. In this scenario, the dark matter could just be ordinary matter in the mirror world, the universe thus consisting of our known world and the mirror world. The matter in the mirror world is constrained to interact with our world only very weakly, and the only significant communication is gravitational. This scenario can be constructed in a braneworld context, where our world and a mirror world are two branes in a higher-dimensional space. \subsection{Branons} String theory naturally contains objects of many different dimensions, called branes \cite{cembranos2003brane}. These would naturally have fluctuations characterizable as particles, the so-called branons. Branons are massive and weakly interacting particles that have been proposed to explain the missing matter problem within the so-called brane-world models. In this framework, branons can be made into suitable cold dark matter candidates, both thermal and non-thermal. \subsection{WIMPzillas} This scenario is based on the presence of non-thermally produced superheavy particles called WIMPzillas.
These particles may be produced by gravitational interactions at the end of inflation. For mass scales of $10^{13}$ GeV, if these particles are stable or at least long-lived, they may account for the dark matter \cite{chung1998nonthermal, chung1998superheavy}. In addition, such particles might decay with a lifetime much longer than the age of the universe, providing a source of ultra high energy cosmic rays; this is the so-called ``top-down'' scenario for UHECR production. \subsection{Primordial black holes} A viable option to explain the nature of dark matter is that it is composed of small-mass black holes called primordial black holes (PBHs), which might have been formed under the right conditions in the very early universe \cite{chapline1975cosmological}. The production of these PBHs is enhanced during periods when the equation of state softens ($p < \rho/3$), such as during a first-order phase transition. This is easy to understand: when the pressure support lessens, objects collapse quickly and form PBHs. The last such phase transition in the universe is the quark-hadron phase transition, at a temperature $T \sim 100$ MeV. The mass contained in the horizon at this epoch is very roughly a solar mass. Since PBHs are produced before the BBN, they are considered non-baryonic, non-relativistic, and effectively collisionless, and thus could be a very promising candidate for the cold dark matter in the universe. \subsection{Self-interacting dark matter} This is an increasingly popular option, assuming that dark matter is not just one particle but a collection of particles \cite{carlson1992self}. Just as ordinary matter comprises a whole collection of different particles, so too could dark matter. But because ordinary particles and dark particles do not interact much, we may never know. Perhaps the only way we could observe these particles would be indirectly, through their gravitational effect on the evolution of the cosmos.
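The horizon-mass estimate quoted above for primordial black holes can be made explicit. Taking the horizon mass to be of order $c^3 t / G$ and using the standard radiation-era time-temperature relation (a rough estimate, assuming an effective number of relativistic degrees of freedom $g_* \sim 10$ at the quark-hadron transition), one finds
\begin{equation}
M_H \sim \frac{c^3 t}{G} \sim M_{\odot} \left( \frac{T}{100 \ \text{MeV}} \right)^{-2} \:,
\end{equation}
so that the quark-hadron transition at $T \sim 100$ MeV indeed corresponds to roughly a solar mass within the horizon.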
\section{Dark matter searches} \label{Sec.25} So far, we have presented the most popular possibilities for the dark matter candidates. If dark matter exists, tremendous numbers of dark matter particles are expected to fill the entire universe, moving around everywhere, and could even be passing through the Earth at each moment. Indeed, a given dark matter candidate must meet a bevy of conditions to be consistent with astrophysical and cosmological observations; however, some of these conditions might be relaxed in the context of a suitable scenario. Although such observations constrain quantities on impressively large scales, they do not allow for the precise determination of the properties of the dark matter particles. Therefore, many experiments are currently working to detect the dark matter particles, in order to probe their properties and the underlying fundamental theory, or at least the existence of one of them. However, because of their weak interactions with ordinary matter, they are very hard to detect. The diagram used to group all possible interactions between dark matter and the standard model particles is shown in figure \ref{Fig.2.61}. Based on these distinct interaction processes, there are three different complementary ways to detect dark matter: direct detection, indirect detection, and collider detection. These different categories are discussed in this section based on materials elaborated in detail in references \cite{bertone2010particle, baer2005direct, klasen2015indirect}. \begin{figure} [t!]
\centering \begin{tikzpicture} [square/.style={draw, rounded corners, minimum width=width("#1"), minimum height=width("#1")+2*\pgfshapeinnerysep, node contents={#1}}] \draw [fill=lightgray] (0,0) ellipse (1.5cm and 1cm); \node at (3.5,1.5) (v100) [square={\bf{DM}}, fill=lightgray]; \node at (3.5,-1.5) (v100) [square={\bf{DM}}, fill=lightgray]; \node at (-3.5,1.5) (v100) [square={\bf{SM}}, fill=lightgray]; \node at (-3.5,-1.5) (v100) [square={\bf{SM}}, fill=lightgray]; { \newcounter{tmp} \foreach \s in {latex'} { \stepcounter{tmp} \begin{scope}[yshift=-\thetmp cm] \draw[darkblue,line width=1.0mm, arrows={-\s}] (-3.5,-1.2)-- (3.5,-1.2); \end{scope}} \foreach \s in {latex'} { \stepcounter{tmp} \begin{scope}[yshift=-\thetmp cm] \draw[darkblue,line width=1.0mm, arrows={-\s}] (4.4,0.5) --+ (0,3.0); \end{scope}} \foreach \s in {latex'} { \stepcounter{tmp} \begin{scope}[yshift=-\thetmp cm] \draw[darkblue,line width=1.0mm, arrows={-\s}] (3.5,5.2) -- (-3.5,5.2); \end{scope}} } \draw[black, line width=1.0mm] (-3.05,-1.3)--(-1.25,-0.55); \draw[black, line width=1.0mm] (1.25,0.55)--(3,1.3); \draw[black, line width=1.0mm] (3.0,-1.3)--(1.25,-0.55); \draw[black, line width=1.0mm] (-1.25,0.55)--(-3.05,1.3); \draw[black] (1.4,0.25) node[above,left] {\bf{Some kind of}}; \draw[black] (1.25,-0.25) node[above,left] {\bf{interaction}}; \draw[black] (2.2,-2.6) node[above,left] {\bf{Production at Collider}}; \draw[black] (2.0,2.6) node[above,left] {\bf{Indirect Detection}}; \draw[black] (4.8,1.7) node[above,left, rotate=90] {\bf{Direct Detection}}; \end{tikzpicture} \caption{Schematic diagram of the different strategies to detect a dark matter particle, based on the distinct interaction processes.} \label{Fig.2.61} \end{figure} \subsection{Direct detection} Direct detection is one of the most promising ways to search for dark matter particles. Currently, numerous direct dark matter experiments are running around the world.
Cosmic rays constantly arriving at the Earth collide with the upper atmosphere and create showers of particles reaching the surface of the Earth. Thus it is essential to construct dark matter detectors deep underground to avoid the background from these cosmic rays. The main idea of the direct detection experiments is to search for observable signals based on the effects of galactic dark matter particles passing through the detector. These effects may take the form of elastic or inelastic scattering of the dark matter particles with atomic nuclei, or with electrons in the detector material. The result of this type of interaction could be the production of heat or light, which can be measured. Such an interaction can be represented as \begin{equation} \text{DM} + \text{SM} \rightarrow \text{DM} + \text{SM} \:. \end{equation} Of special interest, some experiments using different detector materials and various detection techniques have claimed to find hints of a dark matter particle. However, their results are not entirely consistent with each other and are also in disagreement with results from several different experiments that have not found any observable signal for dark matter. These results therefore remain controversial and must be clarified by new experimental data. \subsection{Indirect detection} In parallel to the direct detection experiments, a wide range of indirect detection projects have been developed to search for dark matter particles. The dark matter may annihilate or decay under certain conditions to produce standard model products, including charged particles, neutrinos, and photons. The detection of such products would constitute an indirect detection of dark matter. This type of interaction is described by \begin{equation} \text{DM} + \text{DM} \rightarrow \text{SM} + \text{SM} \:.
\end{equation} For example, the annihilation of two dark matter particles can produce gamma rays that can be detected by space- or ground-based gamma-ray telescopes. This example is of particular interest because gamma-ray photons can travel across long astrophysical distances and would allow the identification of annihilation sources. Therefore, a wide array of cosmic-ray and gamma-ray observatories are searching for indirect dark matter signals from astrophysical sources. \subsection{Collider detection} It may also be possible to produce dark matter particles and detect them in the laboratory. The idea here is to collide ordinary standard model particles coming from opposite directions at tremendous energies and very high speeds, in the hope that heretofore undiscovered particles will emerge from the collisions. The search in this case is based on a possible process that is the inverse of the annihilation process \begin{equation} \text{SM} + \text{SM} \rightarrow \text{DM} + \text{DM} \:. \end{equation} The most famous example based on this technique is the set of experiments at the Large Hadron Collider (LHC) at CERN, which collides two beams of protons at the highest energies and then looks for dark matter signals. Unfortunately, none of these experiments has yet been able to detect any effect attributed to dark matter. In addition, there is no guarantee that any given particle observed at colliders is the astrophysical dark matter particle, even if it has the properties listed above. In order to resolve this issue, the particle properties acquired from collider signals must be verified against those obtained from signals arising from astrophysical sources.
\chapter{\textbf{The Strong CP Problem and Axions}} \label{ch3} The discussion in this chapter provides an overview of the problem of CP violation in the theory of strong interactions and its solution; for a thorough review of this topic, see, for example, references \cite{peccei1989strong, peccei2008strong, kim2010axions, dine2000tasi, dine2017axions}. We start by introducing the $U(1)_A$ problem and its solution, which in turn begets the strong CP violation problem. After a brief look at other proposed solutions, we will describe the generally preferred Peccei-Quinn solution to the strong CP violation problem and the resulting axion, together with some experimental considerations. The ability of axions to solve the strong CP problem, as well as their predicted properties, makes them extraordinarily exciting candidates for the CDM particle in the universe. \section{The chiral QCD symmetry and the \boldmath$U(1)_A$ problem} As we discussed in section \ref{Sec.1.2}, the quantum chromodynamics sector is the gauge theory that describes the strong interactions in the standard model of particle physics. The classical QCD Lagrangian for $N_f$ flavors reads \cite{peskin2018introduction} \begin{equation} \L_{\text{QCD}} = - \frac{1}{4} G^a_{\mu \nu} G_a^{\mu \nu} + \sum_f \bar{q}_f (i \slashed{D}_s - m_f) q_f \:, \end{equation} where $q_f$ are the quark fields with flavor $f$, which runs over all six presently known flavors, $u, d, s, c, b, \ \text{and}\ t$, with the corresponding masses $m_f$, and $\slashed{D}_s$ is the covariant derivative containing the coupling between the quarks and the gauge fields, defined as \begin{equation} \slashed{D}_s = \gamma^{\mu} (\partial_{\mu} + ig_s \tau^a G_{\mu}^{a}) \:, \end{equation} where $g_s$ is the strong interaction coupling strength, $\tau^a$ are the generators of the $SU(3)_C$ group, and $G_{\mu}^{a}$ represents the $8$ gluon fields of the strong interaction.
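For completeness, the gluon field strength tensor appearing in the Lagrangian above takes the standard non-abelian form (with $f^{abc}$ the $SU(3)$ structure constants, and with the sign conventions implied by the covariant derivative above):
\begin{equation}
G^a_{\mu \nu} = \partial_{\mu} G^a_{\nu} - \partial_{\nu} G^a_{\mu} - g_s f^{abc} G^b_{\mu} G^c_{\nu} \:.
\end{equation}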
The classical QCD Lagrangian possesses a local $SU(N_c)$ color symmetry, with $N_c = 3$. In the limit that all quark masses vanish, the left-handed and right-handed fermions decouple, and the model shows a further exact global $G= U_L(N_f) \otimes U_R(N_f)$ chiral symmetry. More technically, all transformations which treat left-handed and right-handed fields separately are chiral transformations. We can then conclude that the global $U(N_f)\times U(N_f)$ flavor symmetry is an exact symmetry of only the massless theory, since the mass term breaks chiral symmetry explicitly. Therefore, the left-handed and right-handed charges are decoupled and operate separately. Each of them generates an $SU(N_f)$ group of transformations. The whole chiral group is then decomposed into the direct product of two $SU(N_f)$ groups, which will be labeled with the subscripts L and R, respectively. For example, if we consider the two flavor theory with the $u$-quark and $d$-quark, the chiral symmetry implies \begin{equation} \left( \begin{matrix} u \\ d \end{matrix} \right) \rightarrow (U_L P_L + U_R P_R) \left( \begin{matrix} u \\ d \end{matrix} \right) \:, \end{equation} where $P_{L, R} \equiv \frac{1}{2} (1\mp \gamma^5)$ are the usual Dirac projection operators that produce the left and right projections when acting on the Dirac spinors $u$ and $d$. Here, $U_{L, R}$ are unitary two-by-two matrices, and $\gamma^5$ is the fifth gamma matrix. Before going further in our argument, some important notes have to be mentioned. A symmetry can be exact or approximate. If a symmetry is clearly realized in all states seen in nature, it is referred to as an exact symmetry, and if it is valid only under certain conditions, it is known as an approximate symmetry. Another possibility is that a symmetry may be spontaneously or explicitly broken.
According to the Goldstone theorem \cite{goldstone1961field}, the spontaneous breaking of an exact symmetry gives rise to massless scalar bosons, known as Goldstone bosons or sometimes also as Nambu-Goldstone bosons (NGBs). Moreover, the Goldstone theorem states that the number of massless particles is equal to the number of generators of the spontaneously broken symmetry. In contrast, the explicit breaking of an exact symmetry gives rise to pseudo-Nambu-Goldstone bosons (PNGBs), which are massive but light. Equivalently, the spontaneous breaking of an approximate symmetry also gives rise to PNGBs. In fact, the assumption of massless quarks is very sensible if we only consider the two lightest quarks, the $u$-quark and $d$-quark, because their masses are much smaller than the typical QCD scale $\mathrm{\Lambda}_{\text{QCD}}$. Even considering three flavors would be an acceptable, though worse, approximation, since the mass of the $s$-quark introduced in the theory is comparable to $\mathrm{\Lambda}_{\text{QCD}}$. We now return to our main argument. The chiral QCD symmetry is exact in the massless quark limit, and when the real quark masses are taken into account, it is spontaneously broken by the vacuum down to an approximate large global symmetry $U(N_f)_V \otimes U(N_f)_A$. The first part of this symmetry, $U(N_f)_V = SU(N_f)_I \otimes U(1)_B$, represents the vector symmetry, consisting of isospin symmetry times the global baryonic symmetry, respectively. Both of these two symmetries have been realized in nature. The second part corresponds to the axial term, $U(N_f)_A = SU(N_f)_A \otimes U(1)_A$. Here, $SU(N_f)_A$ denotes the axial transformation symmetry, which is spontaneously broken in nature. PNGBs arise as a consequence of this breaking and are identified with the pions, the kaons, and the $\eta$ meson. However, the exact axial symmetry $U(1)_A$ is also not realized in nature, and it is expected to be broken.
But no corresponding suitable PNGB has been found yet. To summarize, the classical QCD Lagrangian shows a $U(1)_A$ symmetry; however, it has never been observed in nature, which implies that it has to be broken, but at the same time, if such a symmetry is broken, there must be a PNGB associated with the symmetry breaking, which has yet to be identified. This issue is known as the $U(1)_A$ problem \cite{hooft1976computation, hooft1986instantons}. \section{The resolution of the \boldmath$U(1)_A$ problem} The resolution of the $U(1)_A$ problem begins from a brief remark by Weinberg that, somehow, $U(1)_A$ is not a genuine symmetry of QCD, albeit, in the massless quark limit, it seems to be present. Then 't Hooft \cite{hooft1976symmetry} came up with a reasonable explanation of the problem. He realized that the QCD vacuum has a more complicated structure, which in effect makes $U(1)_A$ not a true symmetry of the strong interactions, even though it is an apparent symmetry of the QCD Lagrangian in the limit of vanishing quark masses. Thus, this might appear to have explained the $U(1)_A$ problem: there is no mystery surrounding the missing PNGB. Mathematically, to resolve the $U(1)_A$ problem, an extension to the classical QCD Lagrangian has to be supplied. This extension is known as the QCD $\theta$-term. Now let us explain how to obtain this term. If $U(1)_A$ were indeed obeyed, the associated Noether current \cite{adler1969axial} \begin{equation} J^{\mu}_5 = \sum_q \bar{q} \gamma^{\mu} \gamma^5 q \:, \end{equation} where the sum is over light quarks and $\gamma^\mu$ are the Dirac matrices, would be conserved, \ie $\partial_\mu J^{\mu}_5 = 0$. However, it turns out that the divergence of the axial current $J^{\mu}_5$ gets quantum corrections from the triangle graph, which connects it to two gluon fields with quarks going around the loop.
Therefore, as a consequence of this anomaly, the divergence of the axial current obtains nonzero quantum corrections and fails to be conserved, \begin{equation} \partial_\mu J^{\mu}_5 = \frac{g_s^2 N_f}{32 \pi^2} G_a^{\mu \nu} \tilde{G}_{\mu \nu}^a \:, \end{equation} where $G_a^{\mu \nu}$ is the gluon field strength and $\tilde{G}_{\mu \nu}^a = \frac{1}{2} \epsilon_{\mu \nu \alpha \beta} G_a^{\alpha \beta}$ is its dual. Here $\epsilon_{\mu \nu \alpha \beta}$ is the antisymmetric Levi-Civita symbol in four dimensions. Hence, in the limit of vanishing quark masses, although formally QCD is invariant under a $U(1)_A$ transformation\newpage \noindent \begin{equation} q_f \rightarrow e^{i \alpha \gamma^5 / 2} q_f \:, \end{equation} where $\alpha$ is an arbitrary parameter, the chiral anomaly changes the action, \begin{equation} \delta S= \alpha \int d^4 x \: \partial_{\mu} J^{\mu}_5 = \alpha \frac{g_s^2 N_f}{32 \pi^2} \int d^4 x \: G_a^{\mu \nu} \tilde{G}_{\mu \nu}^a \:. \end{equation} However, as it turns out, there is a further complication here. The pseudo-scalar density entering the anomaly can be shown to be a total derivative, \begin{equation} G_a^{\mu \nu} \tilde{G}_{\mu \nu}^a = \partial_{\mu} K^{\mu} \:, \end{equation} where $K^{\mu}$ is the current \begin{equation} K^{\mu} = \epsilon^{\mu \alpha \beta \gamma} A_{a \alpha} (G^a_{\beta \gamma} - \frac{g_s}{3} f_{abc} A_{b \beta}A_{c \gamma}) \:. \end{equation} Here, $A_a^{\mu}$ are the gluon fields, and $f_{abc}$ are the QCD structure constants. These identities imply that $\delta S$ is a pure surface integral \begin{equation} \label{eq.3.10} \delta S = \alpha \frac{g_s^2 N_f}{32 \pi^2} \int d^4x \: \partial_{\mu} K^{\mu} = \alpha \frac{g_s^2 N_f}{32 \pi^2} \int d\sigma_{\mu} K^{\mu} \:. \end{equation} Of course, if the anomaly does not contribute to the action, the system is still invariant under $U(1)_A$, and we have our problem back.
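The step from the volume integral of $\partial_\mu K^\mu$ to a surface integral in equation \eqref{eq.3.10} is just the divergence theorem. A toy three-dimensional check may help fix ideas (an illustration only; the vector field below is an arbitrary assumption, not the gauge current $K^\mu$):

```python
import numpy as np

# Divergence-theorem sanity check on the unit cube for the toy field
# F = (x^2, x*y, z) (an arbitrary choice): the volume integral of div F
# must equal the outward flux through the boundary.  Here
# div F = 2x + x + 1 = 3x + 1, so the volume integral is 3/2 + 1 = 5/2.
n = 100
g = (np.arange(n) + 0.5) / n                  # midpoint grid on (0, 1)
X, Y, Z = np.meshgrid(g, g, g, indexing="ij")
volume_integral = (3.0 * X + 1.0).mean()      # mean over unit cube = integral

# Outward flux: only three faces contribute (F.n vanishes on x=0, y=0, z=0):
#   x = 1 face: F_x = 1        -> flux 1
#   y = 1 face: F_y = x * 1    -> flux mean(x) = 1/2
#   z = 1 face: F_z = 1        -> flux 1
flux = 1.0 + g.mean() + 1.0
print(volume_integral, flux)                  # both equal 2.5
```

The physical question in the text is then whether this boundary contribution vanishes, which depends entirely on the behaviour of the field at infinity.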
In fact, this is the case when using the naive boundary condition $A_a^{\mu}=0$ at spatial infinity, since it leads to a vanishing surface term, $\int d\sigma_{\mu} K^{\mu}=0$, and hence a zero total contribution to the action. However, 't Hooft realized that there are topologically non-trivial field configurations, called instantons, that contribute to this operator, and thus it cannot be neglected. Further, he showed that $A_a^{\mu}=0$ is not the correct boundary condition; instead, the appropriate choice is to take $A_a^{\mu}$ to be a pure gauge field at spatial infinity, \ie $A_a^{\mu}$ should be either zero or a gauge transformation of zero at spatial infinity. With these boundary conditions, it turns out that there are gauge configurations for which $\int d\sigma_{\mu} K^{\mu} \neq 0$ and, therefore, the system is no longer invariant under $U(1)_A$. Thus, $U(1)_A$ is not a true symmetry of QCD. Let us now discuss in some detail how instantons give rise to a non-trivial contribution to the surface integral in equation \eqref{eq.3.10}. Instantons (sometimes referred to as pseudo-particles) are classical solutions of the equations of motion in Euclidean rather than Minkowski spacetime, with finite non-zero action. Instantons are a genuinely non-perturbative effect, in quantum mechanics as well as in quantum field theory, invisible at any order of perturbation theory. They describe tunneling between different vacua of a theory, which qualitatively changes the structure of the quantum vacuum. These effects can lead to the dynamical breaking of the $U(1)_A$ symmetry in QCD. For concreteness, consider a Yang-Mills theory; for simplicity, we restrict ourselves to $SU(2)$ QCD in the temporal gauge $A_a^0=0$, in which only the spatial gauge fields $A_a^{i}$ remain.
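The tunneling interpretation can be made concrete in a quantum-mechanical toy model. The sketch below (an illustration of the analogy only; the potential and its parameters are assumptions, not taken from the text) computes the Euclidean action of the instanton of a one-dimensional double well, $S_0=\int\sqrt{2V}\,dx$, and checks it against the closed form:

```python
import numpy as np

# Quantum-mechanical analogue of the tunneling events described above
# (assumed toy model, not the QCD gauge field): a double-well potential
# V(x) = lam * (x^2 - a0^2)^2.  The instanton interpolates between the two
# minima x = -a0 and x = +a0, with Euclidean action
# S0 = integral_{-a0}^{+a0} sqrt(2 V(x)) dx = sqrt(2 lam) * 4 a0^3 / 3.
lam, a0 = 1.0, 1.0

x = np.linspace(-a0, a0, 200001)
integrand = np.sqrt(2.0 * lam) * (a0**2 - x**2)   # = sqrt(2 V) on [-a0, a0]
S0_numeric = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x))
S0_exact = np.sqrt(2.0 * lam) * 4.0 * a0**3 / 3.0
print(S0_numeric, S0_exact)
```

The tunneling amplitude between the wells then scales as $e^{-S_0/\hbar}$, the quantum-mechanical analogue of the instanton suppression $e^{-8\pi^2/g_s^2}$ in QCD, which is why the effect is invisible in perturbation theory.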
Under a gauge transformation, these fields transform as\newpage \noindent \begin{equation} \frac{1}{2} \sigma_a A_a^{\mu} \equiv A^{\mu} \rightarrow \mathrm{\Omega} A^{\mu} \mathrm{\Omega}^{-1} + \frac{i}{g_s} \nabla^{\mu} \mathrm{\Omega} \mathrm{\Omega}^{-1} \:, \end{equation} where $\sigma_a$ and $\mathrm{\Omega}$ denote the Pauli matrices and an element of the gauge group $SU(2)$, respectively. This means that the vacuum configurations either vanish or have the form $i g_s^{-1} \nabla^{\mu} \mathrm{\Omega} \mathrm{\Omega}^{-1}$. In the $A_a^{0}=0$ gauge, these vacuum configurations can be classified by how $\mathrm{\Omega}$ goes to unity as $\vert x \vert \rightarrow \infty$, \begin{equation} \mathrm{\Omega}_n \rightarrow e^{i2\pi n} \quad \text{as} \quad \vert x \vert \rightarrow \infty \quad \text{with} \quad n= 0, \pm 1, \pm 2, \dots \:. \end{equation} The quantum number $n$ is known as the winding number and is defined as \begin{equation} n = \frac{i g_s^3}{24 \pi^2} \int d^3x \: \tr (\epsilon_{ijk} A_n^i A_n^j A_n^k) \:. \end{equation} The winding number $n$ characterizes the homotopy class of the mapping $S^3 \rightarrow SU(2)$, \ie a mapping from the compactified three-dimensional Euclidean space to the $SU(2)$ group manifold. It is an integer for a pure gauge field. In contrast, for a field that vanishes at spatial infinity, it can be expressed in terms of a non-zero surface integral. This implies a non-zero vacuum-vacuum transition amplitude. Each vacuum is characterized by a distinct winding number, so we can refer to them as $n$-vacua, $\vert n \rangle$, denoting the pure gauge configurations. Furthermore, since the vacua are degenerate and instantons allow transitions between them, the physical vacuum state must be written as a superposition of these $n$-vacua.
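The idea of classifying maps by a winding number can be illustrated one dimension lower. The sketch below (a hypothetical $U(1)$-on-a-circle analogy, not the $SU(2)$ formula above) recovers the integer $n$ of the map $\mathrm{\Omega}(x) = e^{inx}$ from a discretized phase integral, $n = \frac{1}{2\pi}\oint dx\, \partial_x \arg \mathrm{\Omega}$:

```python
import numpy as np

# Lower-dimensional analogy of the winding number (an assumption for
# illustration): a map Omega: S^1 -> U(1), Omega(x) = exp(i n x), winds
# around U(1) exactly n times.  Accumulating the phase differences between
# neighbouring sample points and dividing by 2*pi recovers the integer n.
def winding_number(omega):
    dphase = np.angle(np.roll(omega, -1) / omega)   # phase steps, each in (-pi, pi]
    return int(round(dphase.sum() / (2.0 * np.pi)))

x = np.linspace(0.0, 2.0 * np.pi, 10000, endpoint=False)
print([winding_number(np.exp(1j * n * x)) for n in (0, 1, -2, 3)])  # -> [0, 1, -2, 3]
```

In the gauge-theory case the same integer arises from the $\int d^3x \, \tr(\epsilon_{ijk} A^i A^j A^k)$ expression above, since the group manifold of $SU(2)$ is itself a three-sphere.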
Therefore the true vacuum, known as the theta vacuum, is written as \begin{equation} \vert \theta \rangle = \sum_n e^{-in\theta} \vert n \rangle \:, \end{equation} where $\theta$ is an unknown, $2\pi$-periodic number referred to as the vacuum angle. We may note that the $\theta$ vacuum is nothing but the Fourier transform of the $n$-vacua. Note also that a gauge field configuration interpolating between $\vert n \rangle$ and $\vert n+1 \rangle$ is a well-defined solution; such a tunneling event is called an instanton. The effect of this $\theta$-vacuum structure is that the effective action gains an additional term. The path integral formulation of the vacuum-to-vacuum transition amplitude involves an effective action that depends on the vacuum angle $\theta$, \begin{equation} S_{\text{eff}} [A] = S_0 [A] + \theta \frac{g_s^2}{32 \pi^2} \int d^4 x \: G_a^{\mu \nu} \tilde{G}^a_{\mu \nu} \:, \end{equation} with $ S_0 [A] $ being the usual QCD action. This means the QCD Lagrangian now has an additional $\theta$-term \cite{peccei2008strong} \begin{equation} \L_\theta = \theta \frac{g_s^2}{32 \pi^2} G_a^{\mu \nu} \tilde{G}^a_{\mu \nu} \:. \end{equation} Since quarks are not massless, and if we consider the weak as well as the strong interactions, then we have a general quark mass term in the Lagrangian, which can be written as \begin{equation} \L_{\text{mass}} = \bar{q}_{iR} M_{ij} q_{jL} + h.c. \:, \end{equation} where $M_{ij}$ is the complex quark mass matrix. Consequently, the $U(1)_A$ transformations lead to an additional phase when we diagonalize the mass matrix. Thus the vacuum angle picks up contributions both from QCD and from the electroweak sector. So in the full theory, the value of the $\theta$-term gains an additional contribution equal to the argument of the determinant of the matrix $M_{ij}$, denoted $\arg \det M$.
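As a small numerical illustration of how the mass matrix feeds into the vacuum angle (hypothetical numbers; the matrix below is random, not the physical quark mass matrix), one can compute $\arg \det M$ directly and verify that a chiral rephasing of a single quark shifts it by exactly the rephasing angle:

```python
import numpy as np

# Toy illustration (assumed random matrix, not the physical mass matrix):
# the electroweak contribution to the vacuum angle is arg det M.
rng = np.random.default_rng(0)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
arg_det_M = np.angle(np.linalg.det(M))

theta = 0.3                                    # assumed bare QCD angle
theta_bar = (theta + arg_det_M) % (2.0 * np.pi)
print(theta_bar)

# A chiral rephasing of a single quark multiplies one row of M by a phase
# and shifts arg det M by exactly that phase (mod 2*pi):
M2 = M.copy()
M2[0, :] *= np.exp(1j * 0.5)
shift = (np.angle(np.linalg.det(M2)) - arg_det_M) % (2.0 * np.pi)
print(shift)   # -> 0.5
```

This is the mechanism by which complex quark masses, generated in the electroweak sector, contribute to the effective vacuum angle.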
Hence, the total physical vacuum angle becomes \cite{peccei2008strong} \begin{equation} \label{eq.3.18} \bar{\theta} = \theta + \arg \det M \:. \end{equation} This new vacuum term is called the effective vacuum angle, $\bar{\theta}$, and generally it is non-zero. Hence, the additional $\theta$-term in the QCD Lagrangian has to be rewritten in the following form \cite{peccei2008strong} \begin{equation} \L_\theta = \bar{\theta} \frac{g_s^2}{32 \pi^2} G_a^{\mu \nu} \tilde{G}^a_{\mu \nu} \:. \end{equation} By introducing this extra term in the QCD Lagrangian, we have successfully allowed for non-vanishing contributions to the action by the anomalous current. Thus $U(1)_A$ is in no way a symmetry of QCD, and we no longer expect an associated conserved Noether current or Goldstone mode. The $U(1)_A$ problem has indeed been solved. \section{The strong CP problem} As we discussed above, the $U(1)_A$ problem can be explained by the complex structure of the QCD vacuum \cite{callan1976structure}. The mathematical resolution of the problem requires including the $\theta$-term in the total QCD Lagrangian. However, solving the $U(1)_A$ problem leads to a new fine-tuning problem, referred to as the strong CP problem. The additional $\theta$-term violates both P (parity) and T (time reversal), but conserves C (charge conjugation). This makes it a source of CP violation in the strong interactions. Although the total QCD Lagrangian includes such a CP-violating term, there is no experimental indication of CP violation in the strong interactions. For example, the electric dipole moment (EDM) of the neutron is one of the CP-violating observables most sensitive to the value of $\bar{\theta}$. An estimate of the neutron EDM obtained within chiral perturbation theory can be conveniently expressed in the form \cite{crewther1979chiral} \begin{equation} \label{eq.3.20} d_n = - \frac{m_u m_d}{m_u + m_d} \bar{\theta} (\bar{q} i \gamma^5 q) \:.
\end{equation} Accordingly, the calculation yields the estimate \begin{equation} \vert d_n \vert \sim ( 2.4 \pm 1.0) \times 10^{-16} \bar{\theta} \ e \ \text{cm} \:, \end{equation} where $e$ is the electric charge. The current experimental bound is \cite{baker2006improved} \begin{equation} \vert d_n \vert < 2.9 \times 10^{-26} \ e \ \text{cm} \:. \end{equation} This experimental result implies that the parameter $\bar{\theta}$ is bounded as $\vert \bar{\theta} \vert \lesssim 10^{-10}$. Thus we are tempted to think that $\bar{\theta}$ is zero. Nevertheless, there is no natural reason to expect $\bar{\theta}$ to be this small. In principle, $\bar{\theta}$ is a free parameter and may take a value anywhere in the range from $0$ to $2\pi$. Further, CP violation occurs in the standard model through complex quark masses, so the natural value of $\bar{\theta}$ is expected to be of order one. It therefore seems that there is an unnatural cancellation between the two unrelated parameters of equation \eqref{eq.3.18}. In fact, we would like to understand why the sum of two contributions with very different physical origins is extremely small, if not zero. This is the strong CP problem. \section{The resolution of the strong CP problem} As we saw in the previous section, the strong CP problem is clearly a very serious issue. There are three scenarios to solve this problem. In this section, we briefly discuss these possible solutions. Then in the next section, we discuss in detail the most favored solution: the axion solution. \begin{itemize} \item {\bf Massless quark solution.} The first suggestion is to set the up quark mass to zero, so that the right-hand side of equation \eqref{eq.3.20} vanishes. The neutron EDM then vanishes regardless of the value of $\bar{\theta}$. However, the massless quark solution is excluded, as shown in reference \cite{di2016qcd}: lattice QCD simulations indicate a non-zero up quark mass.
This is also supported by the properties of the known mesons, including the pions, which require all of the quarks to have a non-zero bare mass. \item {\bf Solution with spontaneous CP breaking.} The second possibility arises if CP is a symmetry of nature that is spontaneously broken, in which case there would be no strong CP problem at all \cite{nelson1984naturally, barr1984solving}. Interestingly, even though we observe CP violation in nature, we can make use of this idea if CP is truly a fundamental symmetry of nature at high energies that is spontaneously broken, such that at low energies we do not observe it as a symmetry. In this way, one starts with a theory that is CP invariant, with $\bar{\theta}= 0$ at the Lagrangian level. Then, CP is spontaneously broken by the vacuum expectation value (VEV) of a CP-odd scalar. The interactions of this scalar may be engineered such that it generates both the required CP-violating phase and $\bar{\theta}< 10^{-10}$. Although several models exist where this is successfully achieved, they require rather disturbing features, \eg complex Higgs VEVs, which cause further problems. Moreover, the biggest drawback of this possible solution is that experimental data are in excellent agreement with the CKM model, in which CP is broken explicitly rather than spontaneously. \item {\bf The axion solution.} The Peccei-Quinn or axion solution is the most widely accepted explanation of the smallness of the parameter $\bar{\theta}$ \cite{peccei1977constraints, peccei1977cp}. The model is based on the idea of introducing an additional chiral symmetry into the standard model Lagrangian, which effectively rotates the $\theta$-vacua away. In essence, this makes the theory insensitive to any additional source of CP violation generated at higher energies and accordingly solves the strong CP problem.
The description of the mechanism of this solution is the subject of the next section. \end{itemize} \section{The Peccei-Quinn mechanism} R. Peccei and H. Quinn proposed this theory in 1977 to solve the strong CP problem. They assumed that the standard model Lagrangian has an additional global chiral $U(1)$ symmetry, named after them the Peccei-Quinn (PQ) symmetry $U(1)_{\text{PQ}}$ \cite{peccei1977constraints, peccei1977cp}. This symmetry is spontaneously broken at some large energy scale $f_a$, which gives rise to a massless NGB, called the axion. The consequence of introducing such a symmetry is to promote the vacuum angle from a static CP-violating constant to a dynamical CP-conserving field: the axion field. As we will describe below, when the effective potential of the axion is minimized, the $\bar{\theta}$ dependence cancels, and the strong CP problem disappears. Hence, when we add the axion field to the standard model Lagrangian, and in order to make it invariant under the new $U(1)_{\text{PQ}}$ symmetry, the Lagrangian has to take the form \begin{align} \label{eq.3.23} \L_{\text{total}} &= \L_{SM} + \L_{\bar{\theta}} + \L_{a} \nonumber \\ &= \L_{SM} + \bar{\theta} \frac{g_s^2}{32 \pi^2} G_b^{\mu \nu} \tilde{G}^b_{\mu \nu} - \frac{1}{2} \partial_{\mu} a \partial^{\mu} a + \L_{\text{int}} [\partial^\mu a/f_a , \psi] + \xi \frac{a}{f_a} \frac{g_s^2}{32 \pi^2} G_b^{\mu \nu} \tilde{G}^b_{\mu \nu} \nonumber \\ &= \L_{SM} - \frac{1}{2} \partial_\mu a \partial^\mu a +\L_{\text{int}} [\partial^\mu a/f_a , \psi] + \left( \bar{\theta} + \xi \frac{a}{f_a} \right) \frac{g_s^2}{32 \pi^2} G_b^{\mu \nu} \tilde{G}^b_{\mu \nu} \:, \end{align} where $\L_a$ refers to the Lagrangian of the new axion field.
Looking at the previous equation, one realizes that the axion part of the Lagrangian contains the usual kinetic and interaction terms, in addition to another term which is required to ensure that the Noether current associated with the new $U(1)_{\text{PQ}}$ symmetry has a chiral anomaly, \begin{equation} \partial_\mu J^{\mu}_{\text{PQ}} =\xi \frac{g_s^2}{32 \pi^2} G_b^{\mu \nu} \tilde{G}^b_{\mu \nu} \:, \end{equation} where $\xi$ is a coefficient. Since the axion field $a(x)$ is the NGB of the broken symmetry, it shifts under a $U(1)_{\text{PQ}}$ transformation, \begin{equation} a(x) \rightarrow a(x) + \alpha f_a \:, \end{equation} where $\alpha$ is the transformation parameter and $f_a$ is the order parameter associated with the breaking of the $U(1)_{\text{PQ}}$ symmetry. In this way, we see that an axial transformation can shift $a$ so as to remove the $\bar{\theta}$ dependence of the theory and thus provide a dynamical solution to the strong CP problem. This is very important, as it means that the physical vacuum angle is actually $\bar{\theta}+\xi \langle a \rangle / f_a$, where $\langle a \rangle$ denotes the VEV of $a$, and $f_a$ is the scale of the spontaneous breaking of the $U(1)_{\text{PQ}}$ symmetry. Because of the non-trivial structure of the vacuum, the PQ symmetry is also explicitly broken, in addition to being spontaneously broken. Therefore, the axion becomes a PNGB and picks up a small mass. This also means that the axion acquires a nontrivial effective potential \begin{equation} \left \langle \frac{\partial V_{\text{eff}}}{\partial a} \right \rangle = - \frac{\xi}{f_a} \frac{g_s^2}{32 \pi^2} \left \langle G_b^{\mu \nu} \tilde{G}^b_{\mu \nu} \right \rangle \Big \vert_{\langle a \rangle = - \bar{\theta} f_a / \xi} =0 \:.
\end{equation} If we were to neglect the nontrivial vacuum structure, \ie the instanton effects, the circle of minimum potential would be parallel to the plane, with degenerate ground states, and all values $0 \leq \xi \frac{\langle a \rangle}{f_a} \leq 2 \pi $ would be allowed. In this situation, the breaking of the PQ symmetry remains purely spontaneous. But with a nontrivial vacuum structure, the instanton effects must be taken into account, and the circle of minimum potential becomes tilted. Hence, in this scenario, the PQ symmetry becomes explicitly broken, and an axion mass $m_a \neq 0$ is generated. In this case, the mechanism by which the $\bar{\theta}$-term is eliminated from the theory is the relaxation of the axion field to the minimum of its effective potential. This is known as the Peccei-Quinn solution and occurs at $\langle a \rangle = - \bar{\theta} f_a / \xi$. To understand the core of the solution, we may look at equation \eqref{eq.3.23}. One then realizes that generating a potential for the axion field that is periodic in the effective vacuum angle $\bar{\theta} + \langle a \rangle \xi / f_a$ requires \begin{equation} V_{\text{eff}} \sim \cos \left( \bar{\theta} + \xi \frac{\langle a \rangle}{f_a}\right) \:. \end{equation} In order to minimize the potential $V_{\text{eff}}$, Peccei and Quinn showed that setting the term inside the brackets to zero (and not $\pi$) is the correct choice. Hence, we get the condition \begin{equation} \bar{\theta} + \xi \frac{\langle a \rangle}{f_a} = 0 \quad \Leftrightarrow \quad \langle a \rangle= - \frac{f_a}{\xi} \bar{\theta} \:, \end{equation} and as the axion field evolves and the potential minimum is reached, the CP-violating term is removed from the Lagrangian \eqref{eq.3.23}. The strong CP problem is solved. What has happened is that we have essentially traded the fixed parameter $\bar{\theta}$ for a dynamical variable with a CP-conserving minimum: the axion field.
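A quick numerical sketch of this relaxation (with an assumed toy normalization $V \propto 1-\cos$, so that the minimum sits at vanishing argument as stated above, and hypothetical parameter values) confirms that the potential is minimized at $\langle a\rangle = -\bar{\theta} f_a/\xi$:

```python
import numpy as np

# Toy relaxation of the axion field (assumed normalization and parameters):
# V(a) = 1 - cos(theta_bar + xi * a / f_a), minimized on a fine grid.
theta_bar, xi, f_a = 1.2, 1.0, 1.0     # hypothetical values
a = np.linspace(-np.pi, np.pi, 2000001) * f_a / xi
V = 1.0 - np.cos(theta_bar + xi * a / f_a)
a_min = a[np.argmin(V)]
print(a_min, -theta_bar * f_a / xi)    # both close to -1.2
```

At the minimum the effective vacuum angle $\bar{\theta} + \xi \langle a \rangle / f_a$ vanishes identically, whatever value $\bar{\theta}$ started from.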
As the field evolves, it effectively relaxes the CP-violating term to zero. Now, defining the physical axion as $a_{\text{phys}} \equiv a-\langle a \rangle$, we can rewrite the Lagrangian \eqref{eq.3.23} in terms of $a_{\text{phys}}$, which no longer contains a CP-violating $\bar{\theta}$-term. It takes the following form \begin{equation} \label{eq.3.29} \L_{\text{total}} = \L_{\text{SM}} - \frac{1}{2} \partial_\mu a_{\text{phys}} \partial^\mu a_{\text{phys}} + \L_{\text{int}} [\partial^\mu a_{\text{phys}}/f_a , \psi] + \xi \frac{a_{\text{phys}}}{f_a} \frac{g_s^2}{32 \pi^2} G_b^{\mu \nu} \tilde{G}^b_{\mu \nu} \:. \end{equation} \section{The axion dynamics and models} As we discussed in the previous section, the presence of the QCD anomaly is necessary to induce the axion potential, whose minimum is located at $\bar{\theta}_{\text{eff}}=0$. Expanding the effective potential $V_{\text{eff}}$ around the minimum gives the axion mass \begin{equation} m_a^2 = \left \langle \frac{\partial^2 V_{\text{eff}}}{\partial a^2} \right \rangle = - \frac{\xi}{f_a} \frac{g_s^2}{32 \pi^2} \left \langle \dfrac{\partial}{\partial a} G_b^{\mu \nu} \tilde{G}^b_{\mu \nu} \right \rangle \Big \vert_{\langle a \rangle = - \bar{\theta} f_a / \xi} \propto \frac{1}{f_a} \:. \end{equation} The standard axion mass has been estimated using several methods, such as current algebra techniques \cite{bardeen1978current} and effective Lagrangian approaches \cite{bardeen1987constraints}. These calculations show that the axion mass and interactions are characterized by the scale parameter $f_a$ of the spontaneous breaking of the PQ symmetry. In the original model, the $U(1)_{\text{PQ}}$ symmetry breakdown coincided with electroweak breaking, $f_a = v_F$, with $v_F \simeq 250$ GeV. This identified the visible axion, with a mass of order $100 \ \text{keV}$ to $1 \ \text{MeV}$, and the associated models have become known as visible axion models.
If axions had this mass, their couplings would be large enough for them to be detected experimentally. Since all search results were null, astrophysical and laboratory searches have ruled out this type of axion. Nevertheless, the initial assumption that the scale parameter $f_a$ is close to the electroweak scale was not necessary. Indeed, the original realization of the axion has been excluded experimentally, and $f_a$ is now thought to lie much higher. When $f_a \gg v_F$, the axion is very light, very weakly coupled, and very long-lived. Models where this occurs have become known as invisible axion models. These invisible, light, and weakly interacting axions are, therefore, very promising dark matter candidates. In this section, we briefly review the properties of the standard weak-scale axions and then generalize the discussion to the invisible axion models. \subsection{The visible axion models} The original model of the axion was proposed by Weinberg and Wilczek \cite{weinberg1978new, wilczek1978problem}, based on the idea of Peccei and Quinn \cite{peccei1977constraints, peccei1977cp}. This is called the Peccei-Quinn-Weinberg-Wilczek (PQWW) model, or the visible axion model. Since the basic ingredient of the axion model is a global chiral $U(1)$ symmetry of the standard model Lagrangian, an extension of the standard model is clearly required. In principle, the axion would be embedded in the phase of the Higgs field of the usual standard model. But one Higgs doublet is not enough to give rise to the axion, because three of its Goldstone modes are absorbed by the longitudinal degrees of freedom of the standard model gauge bosons, and the remaining Higgs boson has a potential. Therefore, the simplest extension that makes the standard model Lagrangian invariant under the $U(1)_{\text{PQ}}$ symmetry is a model whose Higgs sector contains at least two scalar field doublets.
This minimal model introduces exactly two Higgs fields, $\phi_1$ and $\phi_2$, to absorb independent chiral transformations of the quarks and leptons. The presence of the two Higgs doublets allows the Lagrangian density to be invariant under the desired chiral $U(1)_{\text{PQ}}$ transformations \begin{align} \qquad \qquad \qquad \qquad \begin{split} a & \rightarrow a + \alpha \, v_F \:, \\ \phi_{1} & \rightarrow e^{i 2 \alpha / x} \phi_{1} \:, \\ \phi_{2} & \rightarrow e^{i 2 \alpha x} \phi_{2} \:, \end{split} \begin{split} u_{Rj} & \rightarrow e^{-i \alpha x} u_{Rj} \:, \\ d_{Rj} & \rightarrow e^{-i \alpha / x} d_{Rj} \:, \\ \ell_{Rj} & \rightarrow e^{-i \alpha / x} \ell_{Rj} \:. \end{split} \qquad \qquad \qquad \qquad \end{align} The relevant parts of the Yukawa sector of the standard model Lagrangian involving the Higgs doublet fields can be written as \begin{equation} \label{eq.3.32} \L_{\text{Yukawa}} = y_{ij}^u \bar{q}_{Li} u_{Rj} \phi_1 + y_{ij}^d \bar{q}_{Li} d_{Rj} \phi_2 + y_{ij}^{\ell} \bar{\ell}_{Li} \ell_{Rj} \phi_2 + h.c. \:. \end{equation} The usual Yukawa couplings between these Higgs scalars and the fermions spontaneously break the chiral flavor symmetry and give masses to the quarks and leptons. Through this spontaneous symmetry breaking, the two Higgs doublets $\phi_1$ and $\phi_2$ develop the nonzero VEVs \begin{equation} \langle \phi_1^0 \rangle = v_1 \quad \text{and} \quad \langle \phi_2^0 \rangle = v_2 \:, \end{equation} where $\phi_1^0$ and $\phi_2^0$ are the neutral components of $\phi_1$ and $\phi_2$, respectively. The VEVs of the Higgs doublets break both the electroweak symmetry and the $U(1)_{\text{PQ}}$ symmetry at a scale $f_a =\sqrt{v_1^2 +v_2^2} $. In this simplest model, the $U(1)_{\text{PQ}}$ breaking scale is taken to be the same as the electroweak symmetry breaking scale.
Corresponding to the two Higgs doublets, there are now four physical Higgs scalars and four Nambu-Goldstone (NG) modes. Three of the NG modes give masses to the $W^{\pm}$ and $Z$ bosons, and the remaining one becomes the axion field. The symmetries of the Lagrangian show that we have a $U (1)$ phase for each Higgs doublet. The axion in this model is the common phase field of the Higgs doublets $\phi_1$ and $\phi_2$ orthogonal to weak hypercharge. If the ratio of the VEVs of the Higgs doublets is defined as $x=v_2/v_1$, then it is easy to isolate the axion content of $\phi_1$ and $\phi_2$ as \begin{equation} \phi_1 = \frac{v_1}{\sqrt{2}} \left( \begin{matrix} 1 \\ 0 \end{matrix} \right) e^{i x a / f} \quad \text{and} \quad \phi_2 = \frac{v_2}{\sqrt{2}} \left( \begin{matrix} 0 \\ 1 \end{matrix} \right) e^{i a /x f} \:. \end{equation} One linear combination of these phases is the electroweak hypercharge degree of freedom, which is absorbed by the $Z$ boson; the other degree of freedom is the axion field \begin{equation} a = \frac{1}{f_a} (v_1 \Im \phi_1^0 - v_2 \Im \phi_2^0 ) \:. \end{equation} Since the $U(1)_{\text{PQ}}$ symmetry is spontaneously broken, the corresponding NGB, the axion, would be massless. But the QCD gluon anomaly breaks this symmetry explicitly, and hence the axion becomes a PNGB with a small mass. The mass of the standard axion was estimated to be \begin{equation} m_a^{\text{st}} = \frac{m_\pi f_\pi}{v} N_g (x+\frac{1}{x}) \frac{\sqrt{m_u m_d}}{m_u + m_d} \simeq 25 N_g (x+\frac{1}{x}) \ \text{keV} \:, \end{equation} where $m_\pi$ is the pion mass, $f_\pi \simeq 92 \ \text{MeV}$ is the pion decay constant, and $N_g$ is the number of quark generations. The axion thus acquires its mass directly by mixing with the $\pi^0$, which occurs through the gluon coupling. The $\pi^0$ mixing then induces a further coupling of the axion to two photons.
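The numbers in the mass formula above can be checked directly. The sketch below (assuming present-day quark masses $m_u \simeq 2.2$ MeV and $m_d \simeq 4.7$ MeV, which differ slightly from the values used in the original estimates) evaluates $m_a^{\text{st}}$ for $N_g=3$ and $x=1$:

```python
import math

# Numerical evaluation of the standard (visible) axion mass formula.
# Input values are assumptions: m_pi = 135 MeV, f_pi = 92 MeV, v = 250 GeV,
# m_u = 2.2 MeV, m_d = 4.7 MeV, N_g = 3 generations, x = v2/v1 = 1.
m_pi, f_pi, v = 135.0, 92.0, 250.0e3          # MeV
m_u, m_d = 2.2, 4.7                           # MeV
N_g, x = 3, 1.0

m_a_st_keV = 1e3 * (m_pi * f_pi / v) * N_g * (x + 1.0 / x) \
             * math.sqrt(m_u * m_d) / (m_u + m_d)
print(m_a_st_keV)   # ~140 keV, i.e. ~23 keV per unit of N_g * (x + 1/x)
```

This reproduces the quoted $\simeq 25\, N_g (x+1/x)$ keV coefficient up to the choice of light quark masses.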
Writing the interaction Lagrangian describing this coupling as \begin{equation} \label{eq.3.37} \L_{a \gamma \gamma} = \frac{\alpha}{4 \pi} K_{a \gamma \gamma} \frac{a_{\text{phys}}}{f_a} F^{\mu \nu} \tilde{F}_{\mu \nu} \:, \end{equation} where $F^{\mu \nu}$ is the electromagnetic field strength and $K_{a \gamma \gamma}$ is the axion-two-photon coupling constant of $\bm{\mathcal{O}}(1)$, defined as \begin{equation} K_{a \gamma \gamma} = N_g (x + \frac{1}{x}) \frac{m_u}{m_u + m_d} \:. \end{equation} Then, for all axion models, the mass of the axion and its $\pi$ and $\eta$ couplings, respectively, can be characterized by \begin{equation} m_a = \lambda_m m_a^{\text{st}} \left( \frac{v}{f_a} \right) \:, \quad \xi_{a \pi} = \lambda_3 \frac{f_\pi}{f_a} \:, \quad \xi_{a \eta} = \lambda_0 \frac{f_\eta}{f_a} \:, \end{equation} where $\lambda_m$, $\lambda_3$, and $\lambda_0$ are model parameters of $\bm{\mathcal{O}}(1)$. All axion models can thus be characterized by the model-dependent parameter $K_{a \gamma \gamma}$ together with the parameters $\lambda_m$, $\lambda_3$, and $\lambda_0$, which for the PQ model are \begin{align} \begin{split} \lambda_m &= N_g \left( x + \frac{1}{x} \right) \:, \\ \lambda_3 &= \frac{1}{2} \left[ \left( x + \frac{1}{x} \right) - N_g \left( x + \frac{1}{x} \right) \frac{m_d-m_u}{m_d + m_u} \right] \:, \\ \lambda_0 &= \frac{1}{2} (1-N_g) \left( x + \frac{1}{x} \right) \:. \end{split} \end{align} \subsection{The invisible axion models} In the visible axion models, when the axion mass is less than the electron mass, the axion can decay into two photons via the interaction given by equation \eqref{eq.3.37}, and hence it is long-lived. When the axion mass is more than twice the electron mass, it can decay rapidly into an $e^{+} e^{-}$ pair and becomes short-lived, \begin{equation} \tau (a \rightarrow e^{+} e^{-}) = \frac{8 \pi v^2 v_2^2}{m_e^2 v_1^2 \sqrt{m_a^2 - 4 m_e^2}} \:.
\end{equation} Unfortunately, both of these possibilities have been ruled out experimentally. In addition, the strongest constraint ruling out the visible axion came from the nonobservation of the K-decay to an axion, whose branching ratio has been estimated as \begin{equation} Br (K^{+} \rightarrow \pi^{+} + a) \simeq 3 \times 10^{-5} (x+\frac{1}{x})^2 \:. \end{equation} This estimate is well above the experimental constraint on the K-decay obtained at KEK, \begin{equation} Br (K^{+} \rightarrow \pi^{+} + \text{nothing}) \leqslant 3.8 \times 10^{-8} \:, \end{equation} where ``nothing'' among the decay products includes the long-lived axions, which would escape detection. We conclude from this argument that the original PQ model with $f_a = v_F$ and its associated visible axion are ruled out by experiment. On the other hand, invisible axion models with $f_a \gg v_F$ are still viable. These models avoid the problem of the original PQWW model by allowing the $U(1)_{\text{PQ}}$ symmetry breaking to occur at a scale $f_a$ much higher than the electroweak scale, since the couplings of the axion to other particles are suppressed by $1/f_a$. Therefore, the axion in such models is very light, very weakly coupled, and very long-lived. More technically, the essence of the invisible axion models is to introduce a new complex scalar field $\phi$ that is a singlet under $SU(2)_L \times U(1)_Y$. This field is called the Peccei-Quinn field; it carries only a PQ charge and does not participate in the electroweak interactions. The $U(1)_{\text{PQ}}$ symmetry is then broken by the VEV of this field, at a scale much larger than the electroweak one. Under a $U(1)_{\text{PQ}}$ transformation, the PQ field $\phi$ transforms as \begin{equation} \phi \rightarrow e^{i \alpha} \phi \:.
\end{equation} If we impose the potential for the PQ field $\phi$ \begin{equation} V(\phi) = \frac{\lambda}{4} (\vert \phi \vert^2 - f_a^2)^2 \:, \end{equation} it acquires the VEV $\vert \langle \phi \rangle \vert = f_a \gg v_F$. Then, instead of defining the axion as the phase direction of the standard model Higgs doublets as in the visible axion models, the invisible axion field is essentially the phase of the PQ field $\phi$, \begin{equation} \phi = \frac{f_a}{\sqrt{2}} e^{i a / f_a} \:. \end{equation} There are two main classes of such invisible axion models, depending on whether or not the axion has a direct coupling to leptons. These two classes are the Kim-Shifman-Vainshtein-Zakharov (KSVZ)-type and the Dine-Fischler-Srednicki-Zhitnitsky (DFSZ)-type models. We briefly review both types here. \subsubsection*{The KSVZ model} The KSVZ model \cite{kim1979weak, shifman1980can} postulates the existence of new heavy quarks $Q$ that carry the PQ charge, while the ordinary quarks and leptons do not. The QCD anomaly is then obtained via the Yukawa coupling between the heavy quarks $Q$ and the PQ field, which can be written as \begin{equation} \L_{\text{KSVZ}} = - y_{Q} \bar{Q}_L \phi Q_R + h.c. \:. \end{equation} Under the $U(1)_{\text{PQ}}$ symmetry, the heavy quark $Q$ transforms as \begin{align} \qquad \qquad \qquad \qquad \begin{split} Q_L & \rightarrow e^{i \alpha/2} Q_L \:, \end{split} \begin{split} Q_R & \rightarrow e^{-i \alpha/2} Q_R \:.
\end{split} \qquad \qquad \qquad \qquad \end{align} A calculation analogous to that of the PQWW model yields the following axion characteristic parameters for the KSVZ model \begin{equation} \lambda_m =1 \:, \quad \lambda_3 = - \frac{1}{2} \frac{m_d - m_u}{m_d + m_u} \:, \quad \lambda_0 = - \frac{1}{2} \:, \end{equation} and \begin{equation} K_{a \gamma \gamma} = 3 q_{Q}^2 - \frac{4 m_d + m_u}{3(m_d + m_u)} \:, \end{equation} where $q_{Q}$ is the electric charge of the heavy quarks $Q$. \subsubsection*{The DFSZ model} The DFSZ model \cite{dine1981simple, zhitnitskii1980possible} is an extension of the PQWW model which realizes the QCD anomaly without the need to introduce heavy quarks. As in the PQWW model, both the quarks and leptons carry PQ charge, and hence two standard model Higgs doublets $\phi_1$ and $\phi_2$ are still required. In addition, a complex scalar field $\phi$ is added. The trick now is that the quarks and leptons couple directly to the Higgs doublets $\phi_1$ and $\phi_2$ through the usual Yukawa terms \eqref{eq.3.32}, but do not couple to the PQ field $\phi$. However, the quarks and leptons feel the effects of the axion through the interactions of the PQ field $\phi$ with the two Higgs doublets $\phi_1$ and $\phi_2$. The PQ field couples to the two Higgs doublets through the scalar potential \begin{align} V(\phi_1, \phi_2, \phi) = \frac{\lambda_1}{4} (\phi_1^{\dagger} \phi_1 - v_1^2)^2 &+ \frac{\lambda_2}{4} (\phi_2^{\dagger} \phi_2 - v_2^2)^2 + \frac{\lambda}{4} (\vert \phi \vert^2 - f_a^2)^2 + (a \phi_1^{\dagger} \phi_1 + b \phi_2^{\dagger} \phi_2 ) \vert \phi \vert^2 \nonumber \\ &+ c(\phi_1 \cdot \phi_2 \phi^2 + h.c.) + d \vert \phi_1 \cdot \phi_2 \vert^2 + e \vert \phi_1^{\dagger} \phi_2 \vert^2 \:.
\end{align} The Lagrangian density is invariant under the $U(1)_{\text{PQ}}$ symmetry transformation \begin{align} \qquad \qquad \qquad \qquad \begin{split} \phi_{1} & \rightarrow e^{-i \alpha} \phi_{1} \:, \\ \phi_{2} & \rightarrow e^{-i \alpha} \phi_{2} \:, \\ \phi & \rightarrow e^{i \alpha} \phi \:, \end{split} \begin{split} u_{Rj} & \rightarrow e^{i \alpha} u_{Rj} \:, \\ d_{Rj} & \rightarrow e^{i \alpha} d_{Rj} \:, \\ \ell_{Rj} & \rightarrow e^{i \alpha} \ell_{Rj} \:. \end{split} \qquad \qquad \qquad \qquad \end{align} The axion is then a linear combination of the phases of the three scalar fields $\phi_1^0$, $\phi_2^0$, and $\phi$. Defining $X_1= 2 v_2^2/v^2$ and $X_2= 2 v_1^2/v^2$, the contribution of the axion field $a$ to $\phi_1$ and $\phi_2$ can be isolated as \begin{equation} \phi_1 = \frac{v_1}{\sqrt{2}} \left( \begin{matrix} 1 \\ 0 \end{matrix} \right)e^{i X_1 a /f} \:, \quad \text{and} \quad \phi_2 = \frac{v_2}{\sqrt{2}} \left( \begin{matrix} 0 \\1 \end{matrix} \right)e^{i X_2 a /f} \:. \end{equation} The axion characteristic parameters for the DFSZ axion model have been calculated to be \begin{equation} \lambda_m =1 \:, \quad \lambda_3 = \frac{1}{2} \frac{X_1 - X_2}{2 N_g} - \frac{m_d - m_u}{m_d + m_u} \:, \quad \lambda_0 = \frac{1-N_g}{2N_g} \:, \end{equation} and \begin{equation} K_{a \gamma \gamma} = \frac{3}{4} - \frac{4 m_d + m_u}{3(m_d + m_u)} \:. \end{equation} \section{Properties of the invisible axion} As discussed in the previous section, the existence of the standard visible axion has been ruled out experimentally; therefore, in this section we describe the properties of the invisible axion. In general, the axion is a neutral pseudo-scalar particle with a very small mass and very weak interactions with matter, consistent with the fact that it has not been detected yet.
The most significant feature of the axion is that its properties depend on the energy scale $f_a$ of the spontaneous breaking of the PQ symmetry. These properties are the axion mass $m_a$ and the coupling constants $g_{ai}$ of the axion to other particles $i$; both are inversely proportional to $f_a$ \begin{equation} m_a \propto \frac{1}{f_a} \:, \quad \text{and} \quad g_{ai} \propto \frac{1}{f_a} \:. \end{equation} In principle, the axion interacts mainly with gluons and photons, and it could also interact with fermions. The different axion models describe these interactions and express the coupling strengths as functions of $f_a$. Some features of these interactions are discussed below; see references \cite{kaplan1985opening, srednicki1985axion} for a detailed discussion of these topics. \subsection{Coupling to gluons} The coupling of axions to gluons is described by the interaction term in the total Lagrangian density \eqref{eq.3.29}, given by \begin{equation} \mathcal{L}_{aG} = \frac{\alpha_s}{8 \pi f_a} \ a \ G^a_{\mu \nu} \tilde{G}_a^{\mu \nu} \:, \end{equation} where $\alpha_s$ is the strong fine-structure constant and $a$ refers to the axion field. Figure \ref{Fig.3.1} shows the Feynman diagram for the axion-gluon interaction. \begin{figure}[ht!]
\begin{center} \begin{tikzpicture} \draw[scalarr, black, line width=.5mm] (0,0)-- (2,0); \draw[fermion, black, line width=.5mm] (2,0)--(3,0.5); \draw[fermion, black, line width=.5mm] (3,-0.5)--(2,0); \draw[fermion, black, line width=.5mm] (3,0.5)--(3,-0.5); \draw[gluon, black, line width=.5mm] (3,0.5)--(5,0.5); \draw[gluon, black, line width=.5mm] (3,-0.5)--(5,-0.5); \node at (1,0.3) {\bf{a}}; \node at (5.3,0.5) {\bf{G}}; \node at (5.3,-0.5) {\bf{G}}; \end{tikzpicture} \caption[Feynman diagram of the axion coupling with gluons.]{Feynman diagram of the axion coupling with gluons.} \label{Fig.3.1} \end{center} \end{figure} \subsection{Mass of axions} The coupling of the axion to gluons also makes possible a mixing with pions, and the resulting axion mass can be expressed as \begin{equation} \label{eq.3.58} m_a = \frac{m_\pi f_\pi}{f_a} \frac{\sqrt{m_u m_d}}{m_u+m_d} \simeq 6 \times 10^{-6} \ \text{eV} \ \left( \frac{10^{12} \ \text{GeV}}{f_a} \right) \:. \end{equation} Astrophysical and cosmological observations imply that $f_a \sim 10^{9} \text{--} 10^{12} \ \text{GeV}$; hence, the axion has a very tiny mass $m_a \sim 10^{-6} \text{--} 10^{-3} \ \text{eV}$. \subsection{Coupling to photons} The mixing of axions with pions, due to the coupling to gluons, permits the description of the interactions of axions with photons by the Primakoff effect. Figure \ref{Fig.3.2} shows the Feynman diagrams for the axion-photon interaction: a triangle loop through fermions carrying both PQ and electric charges, and axion-pion mixing; these two contributions produce the generic coupling of axions to photons.
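Returning briefly to the mass formula \eqref{eq.3.58}, its numerical prefactor can be checked with a short script; this is only an illustrative sketch, assuming the rough input values $m_u \approx 2.2$ MeV, $m_d \approx 4.7$ MeV, $m_\pi \approx 135$ MeV and $f_\pi \approx 92$ MeV, which are not specified in the text:

```python
# Sanity check of m_a = (m_pi f_pi / f_a) * sqrt(m_u m_d) / (m_u + m_d),
# cf. eq. (3.58). All input values are assumed approximate figures.
import math

m_u, m_d = 2.2, 4.7        # light quark masses [MeV] (assumed)
m_pi, f_pi = 135.0, 92.0   # pion mass and decay constant [MeV] (assumed)
f_a = 1e12 * 1e3           # PQ scale, 10^12 GeV expressed in MeV

m_a_MeV = (m_pi * f_pi / f_a) * math.sqrt(m_u * m_d) / (m_u + m_d)
m_a_eV = m_a_MeV * 1e6     # convert MeV -> eV

print(f"m_a = {m_a_eV:.2e} eV")   # close to the quoted 6e-6 eV
```

With these inputs the script gives $m_a \approx 5.8 \times 10^{-6}$ eV at $f_a = 10^{12}$ GeV, in agreement with the quoted estimate.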
The coupling of an axion to two photons contributes to the Lagrangian density the term \begin{equation} \mathcal{L}_{a \gamma \gamma} = - \frac{1}{4} g_{a \gamma \gamma} \ a \ F^{\mu \nu} \tilde{F}_{\mu \nu} = g_{a \gamma \gamma} \ \mathbf{E} \cdot \mathbf{B} \ a \:, \end{equation} where $g_{a \gamma \gamma}$ is the axion-two-photon coupling constant, $F^{\mu \nu}$ is the photon field strength tensor, $\tilde{F}_{\mu \nu}$ denotes its dual, $\mathbf{E}$ is the electric field, and $\mathbf{B}$ is the magnetic field. The magnitude of the coupling constant $g_{a \gamma \gamma}$ is parameterized by \begin{equation} \label{eq.3.60} g_{a \gamma \gamma} = \frac{\alpha}{2 \pi f_a} C_{a \gamma \gamma} \:, \end{equation} where $\alpha = e^2/4\pi$ is the fine-structure constant, and $C_{a \gamma \gamma}$ is a numerical coefficient given by \begin{equation} C_{a \gamma \gamma} = \frac{E}{N} - \frac{2}{3} \frac{4+Z}{1+Z} \:, \end{equation} where $Z \equiv m_u/m_d$ is the ratio between the up-quark and down-quark masses, and $E/N$ is the ratio between the electromagnetic and color anomalies. The value of $E/N$ is zero in the KSVZ model, while it depends on the charge assignment of the leptons in the DFSZ model. \begin{figure}[ht!]
\begin{center} \begin{tikzpicture} \pgfmathsetmacro{\CosValueee}{cos(30)} \pgfmathsetmacro{\SinValueee}{sin(30)} \draw[scalarr, black, line width=.5mm] (0,0)-- (2,0); \draw[fermion, black, line width=.5mm] (2,0)--(3,0.5); \draw[fermion, black, line width=.5mm] (3,-0.5)--(2,0); \draw[fermion, black, line width=.5mm] (3,0.5)--(3,-0.5); \draw[vector, black, line width=.5mm] (3,0.5)--(5,0.5); \draw[vector, black, line width=.5mm] (3,-0.5)--(5,-0.5); \node at (1,0.3) {\bf{a}}; \node at (5.3,0.5) {$\gamma$}; \node at (5.3,-0.5) {$\gamma$}; \node at (8,0) {\bf{a} $\hspace{10mm} \pi^{0} $}; \draw[scalarr, black, line width=.5mm] (9,0)-- (10.5,0); \draw[vector, black, line width=.5mm] (10.5,0)--(10.5+2*\CosValueee,2*\SinValueee); \draw[vector, black, line width=.5mm] (10.5,0)--(10.5+2*\CosValueee,-2*\SinValueee); \draw[fermionn, black, line width=.5mm] (7.6,-0.05)--(8.2,-0.05); \draw[fermionn, black, line width=.5mm] (8.3,-0.05)--(7.7,-0.05); \node at (10.4,0) {\Large $\bullet$}; \node at (10.8+2*\CosValueee,2*\SinValueee) {$\gamma$}; \node at (10.8+2*\CosValueee,-2*\SinValueee) {$\gamma$}; \end{tikzpicture} \caption[Feynman diagrams of the axion-photon coupling: coupling of the axion to two photons via a triangle loop through fermions carrying PQ and electric charges (left panel), and axion-pion mixing producing the generic coupling of axions to photons (right panel).]{Feynman diagrams of the axion-photon coupling: coupling of the axion to two photons via a triangle loop through fermions carrying PQ and electric charges (left panel), and axion-pion mixing producing the generic coupling of axions to photons (right panel).} \label{Fig.3.2} \end{center} \end{figure} \subsection{Coupling to fermions} Fermions like electrons and quarks exhibit a Yukawa coupling to axions if they carry the PQ charge.
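Before detailing the fermion couplings, it is instructive to evaluate the photon coupling \eqref{eq.3.60} numerically. The sketch below assumes a KSVZ-type model with $E/N = 0$, a quark-mass ratio $Z \approx 0.56$, and $f_a = 10^{12}$ GeV; all three inputs are illustrative assumptions, not values fixed by the text:

```python
# Evaluate C_agg = E/N - (2/3)(4+Z)/(1+Z) and g_agg = alpha C_agg / (2 pi f_a),
# cf. eq. (3.60). E/N, Z and f_a below are assumed illustrative inputs.
import math

alpha = 1 / 137.0      # fine-structure constant
E_over_N = 0.0         # KSVZ-type model (assumed)
Z = 0.56               # m_u / m_d (assumed)
f_a = 1e12             # PQ scale [GeV] (assumed)

C_agg = E_over_N - (2.0 / 3.0) * (4 + Z) / (1 + Z)
g_agg = alpha * C_agg / (2 * math.pi * f_a)   # [GeV^-1]

print(f"C_agg = {C_agg:.2f}")                 # about -1.95
print(f"|g_agg| = {abs(g_agg):.2e} GeV^-1")   # of order 1e-15 GeV^-1
```

The resulting magnitude, $|g_{a\gamma\gamma}| \sim 2 \times 10^{-15}$ GeV$^{-1}$ at $f_a = 10^{12}$ GeV, illustrates just how feeble the coupling of an invisible axion to photons is.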
The Feynman diagrams of the axion direct coupling with electrons, the higher-order coupling of the axion and the electron, and the axion-to-nucleon coupling, respectively, are shown in figure \ref{Fig.3.3}. The coupling of the axion to fermions contributes to the Lagrangian density a term equal to \begin{equation} \mathcal{L}_{a f} = \frac{g_{a f}}{2 m_f} (\bar{\psi}_f \gamma^\mu \gamma_5 \psi_f) \partial_\mu \ a \:, \end{equation} where $\psi_f$ is the fermion field, $m_f$ is its mass, and $g_{a f}$ represents the axion-fermion coupling constant, which can be written as \begin{equation} \label{eq.3.63} g_{a f} = \frac{C_f m_f}{f_a} \:. \end{equation} The dimensionless coefficient $C_f$ plays the role of an effective PQ charge, so that $g_{a f}$ acts as a Yukawa coupling, and an axion fine-structure constant $\alpha_{a f} = g_{a f}^2 / 4 \pi$ can be defined. In some axion models the interaction coefficient $C_e$, which describes the effective axion-electron coupling at tree level, has been calculated. In addition, although no free quarks exist below the QCD scale $\mathrm{\Lambda}_{\text{QCD}} \approx 200 \ \text{MeV}$, the tree-level coupling of axions to light quarks and the mixing of axions with pions allow one to calculate the interaction coefficients of the axion with the proton, $C_p$, and the neutron, $C_n$, and therefore to derive the effective axion-to-nucleon coupling. \begin{figure}[ht!]
\begin{center} \begin{tikzpicture} \pgfmathsetmacro{\CosValue}{cos(30)} \pgfmathsetmacro{\SinValue}{sin(30)} \pgfmathsetmacro{\CosValuee}{cos(60)} \pgfmathsetmacro{\SinValuee}{sin(60)} \pgfmathsetmacro{\CosValueee}{cos(40)} \pgfmathsetmacro{\SinValueee}{sin(40)} \pgfmathsetmacro{\CosValueeee}{cos(70)} \pgfmathsetmacro{\SinValueeee}{sin(70)} \pgfmathsetmacro{\CosValueeeee}{cos(190)} \pgfmathsetmacro{\SinValueeeee}{sin(190)} \draw[fermion, black, line width=.5mm] (0,0)-- (2*\CosValue,2*\SinValue); \draw[fermion, black, line width=.5mm] (2*\CosValue,2*\SinValue)-- (4*\CosValue,4*\SinValue); \draw[scalarrr, black, line width=.5mm] (2*\CosValue,2*\SinValue)-- (2*\CosValue+2*\CosValuee,2*\SinValue+2*\SinValuee); \node at (2*\CosValue,2*\SinValue) {\Large $\bullet$}; \node at (-0.2,-0.1) {$\bm{e}^{-}$}; \node at (4*\CosValue+0.3,4*\SinValue+0.1) {$\bm{e}^{-}$}; \node at (2*\CosValue+2*\CosValuee+0.2,2*\SinValue+2*\SinValuee+0.2) {\bf{a}}; \node at (2*\CosValue+0.2,2*\SinValue-0.3) {$\bm{g_{ae}}$}; \draw[fermion, black, line width=.5mm] (5,0)-- (5+2*\CosValue,2*\SinValue); \draw[fermion, black, line width=.5mm] (5+2*\CosValue,2*\SinValue)-- (5+4*\CosValue,4*\SinValue); \draw[fermion, black, line width=.5mm] (5,0)-- (5+4*\CosValue,4*\SinValue); \draw[scalarrr, black, line width=.5mm] (6.4,1.8)-- (6.4+2*\CosValueee,1.8+2*\SinValueee); \draw[vector, black, line width=.5mm] (5+0.72*2*\CosValue,0.72*2*\SinValue)-- (5+0.72*2*\CosValue+0.666*\CosValueeee,0.72*2*\SinValue+0.666*\SinValueeee); \draw[antivector, black, line width=.5mm] (5+1.4*2*\CosValue,1.4*2*\SinValue)-- (5+1.4*2*\CosValue+0.666*\CosValueeeee,1.4*2*\SinValue-0.666*\SinValueeeee); \draw[fermionnn, black, line width=.5mm] (5+0.72*2*\CosValue+0.666*\CosValueeee,0.72*2*\SinValue+0.666*\SinValueeee) -- (6.4,1.8); \draw[fermionnn, black, line width=.5mm] (6.85,1.55) -- (6.4,1.8); \draw[fermionnn, black, line width=.5mm] (5+0.72*2*\CosValue+0.666*\CosValueeee,0.72*2*\SinValue+0.666*\SinValueeee) 
--(5+1.4*2*\CosValue+0.666*\CosValueeeee,1.4*2*\SinValue-0.666*\SinValueeeee); \node at (5+0.72*2*\CosValue,0.72*2*\SinValue) {\Large $\bullet$}; \node at (5+1.4*2*\CosValue,1.4*2*\SinValue) {\Large $\bullet$}; \node at (6.4,1.8) {\Large $\bullet$}; \node at (5+0.72*2*\CosValue+0.666*\CosValueeee,0.72*2*\SinValue+0.666*\SinValueeee) {\Large $\bullet$}; \node at (5+1.4*2*\CosValue+0.666*\CosValueeeee,1.4*2*\SinValue-0.666*\SinValueeeee) {\Large $\bullet$}; \node at (5-0.2,-0.1) {$\bm{e}^{-}$}; \node at (5+4*\CosValue+0.3,4*\SinValue+0.1) {$\bm{e}^{-}$}; \node at (6.6+2*\CosValueee,2.0+2*\SinValueee) {\bf{a}}; \node at (5+2*\CosValue+0.2,2*\SinValue-0.3) {$\bm{g_{ae}}$}; \draw[fermion, black, line width=.5mm] (10,0)-- (10+2*\CosValue,2*\SinValue); \draw[fermion, black, line width=.5mm] (10+2*\CosValue,2*\SinValue)-- (10+4*\CosValue,4*\SinValue); \draw[scalarrr, black, line width=.5mm] (10+2*\CosValue,2*\SinValue)-- (10+2*\CosValue+2*\CosValuee,2*\SinValue+2*\SinValuee); \node at (10+2*\CosValue,2*\SinValue) {\Large $\bullet$}; \node at (10-0.2,-0.1) {\bf{N}}; \node at (10+4*\CosValue+0.3,4*\SinValue+0.1) {\bf{N}}; \node at (10+2*\CosValue+2*\CosValuee+0.2,2*\SinValue+2*\SinValuee+0.2) {\bf{a}}; \node at (10+2*\CosValue+0.2,2*\SinValue-0.3) {$\bm{g_{aN}}$}; \end{tikzpicture} \caption[Feynman diagrams of axion direct coupling with electrons, higher-order coupling of axion and electron, and axion-to-nucleon coupling, respectively.]{Feynman diagrams of axion direct coupling with electrons, higher-order coupling of axion and electron, and axion-to-nucleon coupling, respectively.} \label{Fig.3.3} \end{center} \end{figure} \subsection{Further processes} There are some other processes involving axions that are quite relevant in the context of astrophysics. The predominant emission process of axions in stars is the Primakoff effect $\gamma + Ze \rightarrow Ze + a$, in which a photon is converted into an axion in the presence of strong electromagnetic fields.
This process is quite relevant in axion searches because axions could be reconverted into photons (and detected) inside a strong magnetic field via the inverse Primakoff effect. Further processes in axion models with a tree-level coupling to electrons might dominate the axion emission in white dwarfs and red giants. These processes are the Compton process $\gamma + e^{-} \rightarrow e^{-} + a$ and the Bremsstrahlung emission $ e^{-} + Ze \rightarrow Ze + e^{-} + a$. Also, the axion-nucleon Bremsstrahlung $N+N \rightarrow N+N+a$ could be relevant in supernova explosions \cite{garcia2015solar}. \subsection{Lifetime of axions} Due to the coupling with photons, the axion decays into two photons with a decay rate that can be expressed as follows \begin{align} \mathrm{\Gamma}_{a \rightarrow \gamma \gamma} &= \frac{g_{a \gamma \gamma}^2 m_a^3}{64 \pi} = \frac{\alpha^2}{256 \pi^3} C_{a \gamma \gamma}^2 \frac{m_a^3}{f_a^2} \nonumber \\ & \simeq 2.2 \times 10^{-51} \ \text{s}^{-1} \left( \frac{10^{12} \ \text{GeV}}{f_a} \right)^5 \:, \end{align} where we used the axion-to-photon coupling from \eqref{eq.3.60}, the axion mass from \eqref{eq.3.58}, and $C_{a \gamma \gamma}=1$ for simplicity. This gives the axion a lifetime of $ \tau_a \equiv \mathrm{\Gamma}^{-1}_{a \rightarrow \gamma \gamma} \sim 4.5 \times 10^{50} \ \text{s}$. This lifetime exceeds the age of the universe, $\sim 4.3 \times 10^{17} \ \text{s}$, for $f_a \gtrsim 10^5 \ \text{GeV}$. Hence, the invisible axion is almost stable, which motivates us to consider it as the dark matter of the universe. \section{Status of axion search} In the visible axion models, the PQ symmetry was broken along with the electroweak symmetry and the axion couples to matter directly. These models were ruled out by laboratory limits shortly after their conception. The invisible axion models, on the other hand, consider the scale of PQ symmetry breaking to be much higher than the electroweak scale.
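The decay rate and lifetime quoted in the previous section can be reproduced with a few lines of arithmetic. The sketch below is only an illustrative check: it assumes $C_{a\gamma\gamma}=1$, takes the axion mass from \eqref{eq.3.58} at $f_a = 10^{12}$ GeV, and uses $\hbar \approx 6.58 \times 10^{-16}$ eV s to convert the rate from natural units (eV) to s$^{-1}$:

```python
# Reproduce Gamma(a -> gamma gamma) = alpha^2 C^2 m_a^3 / (256 pi^3 f_a^2)
# for f_a = 10^12 GeV with C_agg = 1 (assumed), working in eV.
import math

alpha = 1 / 137.0
C_agg = 1.0                    # assumed, as in the text
f_a_eV = 1e12 * 1e9            # 10^12 GeV in eV
m_a_eV = 5.8e-6                # axion mass at this f_a, cf. eq. (3.58)
hbar_eV_s = 6.582e-16          # hbar [eV s], converts eV -> s^-1

Gamma_eV = alpha**2 * C_agg**2 * m_a_eV**3 / (256 * math.pi**3 * f_a_eV**2)
Gamma_per_s = Gamma_eV / hbar_eV_s
tau_s = 1 / Gamma_per_s

print(f"Gamma ~ {Gamma_per_s:.1e} s^-1, tau ~ {tau_s:.1e} s")
```

The script gives $\mathrm{\Gamma} \approx 2 \times 10^{-51}$ s$^{-1}$ and $\tau_a \approx 5 \times 10^{50}$ s, matching the quoted values to within the rounding of the input mass, and the lifetime indeed dwarfs the age of the universe.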
Hence, these models predict a very light axion with very weak coupling to ordinary matter. Such a weak coupling would allow the axion to evade existing experimental searches, and hence the invisible axion model is allowed by present experiments. Although the invisible axions are very light, very weakly coupled, and have not yet been discovered, they are not necessarily totally invisible. The allowed range of parameters is highly constrained by astrophysical and cosmological considerations and also by laboratory measurements. We will not discuss here how the invisible axions might affect astrophysics and cosmology; instead, we will illustrate the astrophysical and cosmological bounds on the invisible axions in the next chapter. \chapter*{\textbf{Dedication}} \addcontentsline{toc}{chapter}{Dedication} \vspace*{\fill} \begin{center} \begin{minipage}{.8\textwidth}{\begin{center} {\large \bf This thesis is dedicated to my Mother.} \\ {\bf For her endless love, support, and encouragement at all times.}\end{center}} \end{minipage} \end{center} \vfill \chapter*{\textbf{List of Symbols}} \addcontentsline{toc}{chapter}{List of Symbols} \chaptermark{List of Symbols} \begin{abbrv} \item[$s$] Action \item[$G$] Universal gravitational constant \item[$H$] Hubble parameter \item[$h$] Dimensionless Hubble parameter \item[$R$] Scale factor of the universe \item[$c$] Speed of light in vacuum \item[$T_{\mu \nu}$] Energy momentum tensor \item[$g_{\mu \nu}$] Metric tensor \item[$G_{\mu \nu}$] Einstein tensor \item[$\mathfrak{R}$] Ricci scalar \item[$\mathfrak{R}_{\mu \nu}$] Ricci tensor \item[$\mathrm{\Gamma}^{\alpha}_{\mu \nu}$] Christoffel symbol \item[$R_c(t)$] Radius of curvature \item[$R_c$] Present radius of curvature \item[$\mathrm{\Lambda}$] Cosmological constant \item[$\phi$] Scalar field \item[$a$] Axion or ALP field \item[$\rho$] Energy density of the universe \item[$\rho_a$] Energy density of axions and ALPs \item[$n_a$] Number density of axions and ALPs
\item[$m_a$] Mass of axions or ALPs \item[$g_{a \gamma}$] Coupling parameter of axions or ALPs with photons \item[$\tau_a$] Lifetime of axions or ALPs \item[$m_e$] Mass of electrons \item[$n_e$] Number density of electrons \item[$P$] Pressure of the universe \item[$\nu$] Expanding velocity of the universe \item[$\mathrm{\Omega}_{\mathrm{\Lambda}}$] Vacuum energy density parameter \item[$\mathrm{\Omega_{m}}$] Matter energy density parameter \item[$F_{\mu \nu}$] Electromagnetic field tensor \item[$\tilde{F}^{\mu \nu}$] Dual electromagnetic field tensor \item[$f_a$] Energy scale of the PQ symmetry breaking \item[$g_{a\gamma}$] ALP-photon coupling parameter \item[$\alpha$] Fine structure constant \item[$e$] Electric charge \item[$E$] Electric field \item[$B$] Magnetic field \item[$T$] Temperature \item[$k_B$] Boltzmann constant \item[$\mathrm{M}_{\odot}$] Solar mass \end{abbrv} \chapter*{\textbf{Acknowledgments}} \addcontentsline{toc}{chapter}{Acknowledgments} This PhD thesis has been carried out at the University of the Witwatersrand, since January 2018. The research in this thesis was supported by the DST/NRF SKA post-graduate bursary initiative. Making this thesis come alive was my biggest dream in life. Even when I was experiencing hardship and depression due to sudden detours in my life, I still managed to persevere and complete my dream. It was not easy, but somehow, I made it through. After Almighty God, many people deserve thanks for their support and help. It is, therefore, my greatest pleasure to express my gratitude to them all in this acknowledgment. First and foremost, I would like to thank the Almighty God for giving me the strength, patience, and knowledge that enabled me to efficiently and effectively tackle this project. In the process of putting this thesis together, I realized how true this gift of writing is for me. You gave me the power to believe in my passion and pursue my dreams. I could never have done this without the faith I have in you. 
Besides, I want to express my sincere gratitude to my supervisor Dr. G. Beck, for the patient guidance, encouragement, and advice he has provided throughout my time as his student. I have been extremely lucky to have a supervisor who cared so much about my work, responded to my questions and queries so promptly, and provided insightful and interesting questions and comments about the work. My special thanks go to Prof. S. Colafrancesco, who, although no longer with us, continues to inspire by his example and dedication to the students he served over the course of his career. I would also like to thank Prof. A. Chen, Prof. R. de Mello Koch, Prof. K. Goldstein, and Prof. \'A. V\'eliz-Osorio for their excellent and patient technical assistance, for believing in my potential, and for the nice moments we spent together. Finally, I must express my very profound gratitude to my family, friends, and colleagues, for providing me with unfailing support and continuous encouragement throughout my study duration and through the process of researching and writing this dissertation. This accomplishment would not have been possible without them. Thank you all for always being there for me. I have finally made it. \newpage \chapter*{\textbf{Abstract}} \addcontentsline{toc}{chapter}{Abstract} Cosmology and particle physics are closer today than ever before, with several searches underway at the interface between cosmology, particle physics, and field theory. The mystery of dark matter (DM) is one of the greatest common unsolved problems between these fields.
It is now established, based on many astrophysical and cosmological observations, that only a small fraction of the total matter content of the universe is made of baryonic matter, while the vast majority is constituted by dark matter. However, the nature of such a component is still unknown. One theoretically well-motivated approach to understanding the nature of dark matter is to look for light pseudo-scalar candidates for dark matter such as axions and axion-like particles (ALPs). Axions are hypothetical elementary particles resulting from the Peccei-Quinn (PQ) solution to the strong CP (charge-parity) problem in quantum chromodynamics (QCD). Furthermore, many theoretically well-motivated extensions to the standard model of particle physics (SMPP) predict the existence of more pseudo-scalar particles, similar to the QCD axion, called ALPs. Axions and ALPs are characterized by their coupling with two photons. While the coupling parameter for axions is related to the axion mass, there is no direct relation between the coupling parameter and the mass of ALPs. Nevertheless, it is expected that ALPs share the same phenomenology as axions. In the past years, axions and ALPs regained popularity and slowly became one of the most appealing candidates to contribute to the dark matter density of the universe. In this thesis, we start by illustrating the current status of axions and ALPs as dark matter candidates. One exciting aspect of axions and ALPs is that they can interact with photons, albeit very weakly. Therefore, we focus on studying the phenomenology of axion and ALP interactions with photons to constrain some of their properties. In this context, we consider a homogeneous cosmic ALP background (CAB) analogous to the cosmic microwave background (CMB) and motivated by many string theory models of the early universe.
The coupling between the CAB ALPs traveling in cosmic magnetic fields and photons allows ALPs to oscillate into photons and vice versa. Using the M87 jet environment, we test the CAB model that is put forward to explain the soft X-ray excess in the Coma cluster due to CAB ALPs conversion into photons. Then we demonstrate the potential of the active galactic nuclei (AGNs) jet environment to probe low-mass ALP models and to potentially exclude the model proposed to explain the Coma cluster soft X-ray excess. Further, we adopt a scenario in which ALPs may form a Bose-Einstein condensate (BEC) and, through their gravitational attraction and self-interactions, they can thermalize to spatially localized clumps. The coupling between ALPs and photons allows the spontaneous decay of ALPs into pairs of photons. For ALP condensates with very high occupation numbers, the stimulated decay of ALPs into photons is also possible, and thus the photon occupation number can receive Bose enhancement and grows exponentially. We study the evolution of the ALPs field due to their stimulated decays in the presence of an electromagnetic background, which exhibits an exponential increase in the photon occupation number by taking into account the role of the cosmic plasma in modifying the photon growth profile. In particular, we focus on quantifying the effect of the cosmic plasma on the stimulated decay of ALPs as this may have consequences on the detectability of the radio emissions produced from this process by the forthcoming radio telescopes such as the Square Kilometer Array (SKA) telescopes with the intention of detecting the cold dark matter (CDM) ALPs. Finally, finding evidence for the presence of axions or axion-like particles would point to new physics beyond the standard model (BSM). This should have implications in developing our understanding of the nature of dark matter and the physics of the early universe evolution. 
{\bf Keywords:} dark matter, axions, axion-like particles, strong CP problem, Peccei-Quinn solution, ALP-photon coupling, cosmic ALP background, Coma cluster soft X-ray excess, Bose-Einstein condensate, stimulated decay of ALPs, Square Kilometer Array, physics beyond the standard model \chapter*{\textbf{List of Publications}} \addcontentsline{toc}{chapter}{List of Publications} Some parts of this thesis have been submitted in the form of the following research papers to international journals for publication. In particular, the content of chapter \ref{ch5} is based on the released publications \cite{ayad2020probing, ayad2019phenomenology}. In addition, the results presented in chapter \ref{ch6} are based on the released publication \cite{ayad2020potential} and the forthcoming publication \cite{ayad2020quantifying}. These references are listed below for convenience. \begin{itemize} \item[1.] A. Ayad and G. Beck. \textit{Probing a cosmic axion-like particle background within the jets of active galactic nuclei}. Journal of Cosmology and Astroparticle Physics, 2020(04): 055--055, April 2020. This work has been published in the Journal of Cosmology and Astroparticle Physics (JCAP). ArXiv e-Print: 1911.10078 [astro-ph.HE]. \item[2.] A. Ayad and G. Beck. \textit{Phenomenology of axion-like particles coupling with photons in the jets of active galactic nuclei}. This work has been accepted for publication in the South African Institute of Physics (SAIP)-2019 Conference Proceedings. ArXiv e-Print: 1911.10075 [astro-ph.HE]. \item[3.] A. Ayad and G. Beck. \textit{Potential of SKA to detect CDM ALPs with radio astronomy}. This work has been published in the International Conference on Neutrinos and Dark Matter (NDM)-2020 with the Andromeda Conference Proceedings. ArXiv e-Print: 2007.14262 [hep-ph]. \item[4.] A. Ayad and G. Beck. \textit{Quantifying the effect of cosmic plasma on the stimulated decay of axion-like particles}.
This work has been submitted for possible publication in the Journal of Cosmology and Astroparticle Physics (JCAP). ArXiv e-Print: 2010.05773 [astro-ph.HE]. \end{itemize} \chapter*{\textbf{Declaration}} \addcontentsline{toc}{chapter}{Declaration} I, the undersigned, hereby declare that the work contained in this PhD thesis is my original work and that any work done by others or by myself previously has been acknowledged and referenced accordingly. This thesis is submitted to the School of Physics, Faculty of Sciences, University of the Witwatersrand, Johannesburg, South Africa, in fulfillment of the requirements for the degree of Doctor of Philosophy in Physics. It has not been submitted before for any degree or examination in any other university. \vspace{1.5cm} \includegraphics[height=1.0cm]{images/Signature.pdf} \hrule Ahmed Ayad Mohamed Ali \makebox[2in][r]{12 February 2021} \newpage \chapter*{\textbf{List of Abbreviations}} \addcontentsline{toc}{chapter}{List of Abbreviations} \chaptermark{List of Abbreviations} \begin{abbrv} \item[ALPs] Axion-like particles \item[CP] Charge-parity \item[PT] Parity-time \item[PQ] Peccei-Quinn \item[DM] Dark matter \item[DE] Dark energy \item[CDM] Cold dark matter \item[HDM] Hot dark matter \item[WDM] Warm dark matter \item[FDM] Fuzzy dark matter \item[AGNs] Active galactic nuclei \item[SMBH] Supermassive black hole \item[SMPP] Standard model of particle physics \item[SMC] Standard model of cosmology \item[SM] Standard models of physics \item[BSM] Beyond the Standard Models \item[QM] Quantum mechanics \item[GR] General relativity \item[GSW] Glashow, Salam, and Weinberg \item[FRW] Friedmann-Robertson-Walker \item[FLRW] Friedmann-Lema{\^\i}tre-Robertson-Walker \item[QCD] Quantum chromodynamics \item[QED] Quantum electrodynamics \item[QFT] Quantum field theory \item[EW] Electroweak \item[CMB] Cosmic microwave background \item[CAB] Cosmic axion or ALP background \item[WIMP] Weakly Interacting Massive Particles \item[BEC]
Bose-Einstein condensate \item[SKA] Square kilometer array \item[HERA] Hydrogen epoch of reionisation array \item[EDGES] Global epoch of reionization signature \item[BB] Big bang \item[BBN] Big bang nucleosynthesis \item[SUSY] Supersymmetry \item[MSSM] Minimal Supersymmetric Standard Model \item[LSP] Lightest superpartner \item[LHC] Large Hadron Collider \item[CERN] European Organization for Nuclear Research \item[NG] Nambu-Goldstone \item[NGB] Nambu-Goldstone boson \item[PNGB] Pseudo-Nambu-Goldstone boson \item[EDM] Electric dipole moments \item[PQWW] Peccei-Quinn-Weinberg-Wilczek \item[VEV] Vacuum expectation value \item[KEK] High Energy Accelerator Research Organization \item[KSVZ] Kim–Shifman–Vainshtein–Zakharov \item[DFSZ] Dine–Fischler–Srednicki–Zhitnitsky \item[PBHs] Primordial black holes \item[QGP] Quark-gluon plasma \item[KK] Kaluza-Klein \item[HB] Horizontal branch \item[RGB] Red-giant branch \item[ICM] Intracluster medium \item[SI] International system of units \item[kpc] Kiloparsec \item[Mpc] Megaparsec \end{abbrv}
\section{Introduction} When a solid body is subjected to a varying stress, acoustic waves (i.e. pressure waves propagating inside the body), often reaching very high frequencies, are generated. Examples of this phenomenon are the noise produced by hitting a metal block with a hammer or the creaking of a wooden floor; the waves propagate inside and on the surface of the materials before dissipating in the surrounding gaseous medium. The process of generating \emph{acoustic waves} in stressed materials is called \emph{acoustic emission} (AE). The acoustic emission can be recorded by means of a transducer (i.e. a sensor in contact with the solid body which transforms the elastic waves into electric signals). The emission of acoustic waves may also be associated with microfractures inside the solid or, in general, a degradation of its condition. Therefore an analysis of the acoustic emissions can reveal the level of degradation of the solid. In particular we note that the study of acoustic emission is developing in the field of \emph{tool condition monitoring} (TCM), where it is important, for example, to avoid damaging machinery and to maximise productive capacity. AE analysis is an easily implemented and economical technique and it allows real-time monitoring of working tool conditions. Other statistical analyses have been previously conducted using the same experimental setup described in this article \citep{farrelly:metal,petri:tcm2,petri:tcm}. Results demonstrated that acoustic emission analysis is particularly relevant for the study of the ageing of work tools and the authors elucidated methods for the elicitation and reconstruction of the pdf of the root mean square amplitude signal. In this study we analyse some specific high frequency acoustic emissions by means of time series statistical-probabilistic models: ARMA models \citep{box:time,batt:prev} and models for point processes \citep{batt:prev,cox:stoch}.
An objective of this study is to assess their suitability for explaining AE phenomena, and to establish if it might be possible to implement a tool wear monitoring algorithm. The experimental setup (presented in figure~\ref{figure_zero}) consisted of a mechanical lathe working on stainless steel bars and a transducer with which the resulting acoustic emissions were recorded. This signal was preamplified and filtered by a band-pass filter to isolate the frequencies of interest. The digitization was performed by means of a digital oscilloscope with a sample frequency of $f_0= 2.5$ MHz. The cutting speed ranged from $0.5$ to $1$ m\,s$^{-1}$ and the cutting depth was $2$ mm. \begin{figure} \centering \includegraphics[scale=1]{setup3.eps} \caption{\label{figure_zero}The acoustic emissions were recorded by means of a transducer, preamplified, filtered, digitized and then stored.} \end{figure} Three types of working tools were used for the analyses: new, partially and totally worn. For the new and totally worn tools we conducted one acquisition with each of eight different tools. For the partially worn tools only four acquisitions were conducted. For each acquisition we recorded $15$ AE time series, each composed of $40960$ consecutive points (i.e. $614400$ points for each acquisition). The time duration of a single time series is $16.38$ ms. All analyses were conducted with the R environment for statistical computing (\cite{r}, http://www.R-project.org), a very powerful programming environment mainly used for statistical data analysis and modelling. \section{Data collected and preliminary analyses} When we take a close look at the collected time series that result from the experiments, it is possible to distinguish two different and superimposed parts of the signal.
As we can see in the lower part of figure~\ref{figure_one}, \begin{figure} \centering \includegraphics[scale=0.45]{rms-1.eps} \caption{\label{figure_one}A typical AE signal with its RMS transform above with a threshold at level 20.} \end{figure} there is a \emph{continuous} part characterized by a relatively constant variability (essentially due to plastic deformation of the material) and a more interesting part composed of bursts of different and high amplitudes (usually associated with microfractures in the tool or with material splinters striking it). The natural time scale of the cutting process and the short duration of each time series suggest that each time series will be effectively stationary; this is verified by the Dickey-Fuller test \citep[par. 18.3.3]{greene:ecan}. Because of the large amount of data (due to the high sample rate necessary to capture the highest frequencies), it is necessary to perform a \emph{data reduction} by means of a transform that has some physical meaning \citep{kannatey:wear}. \begin{definition}[RMS Values] Let $x_s$, $s \in \mathbb{N}$ be a time series taking values in $\mathbb{R}$ and $T$ the number of samples in some interval over which the RMS $y_t$ is to be calculated. We define \begin{equation} y_t = \sqrt{ \frac{ \sum_{j=1}^{T} \ x_{ \left( t-1 \right) T+j}^2 }{ T } }, \qquad t \in \mathbb{N}. \end{equation} \end{definition} If a time series is of length $N$ (a multiple of $T$), then the RMS series $y_t\in \mathbb{R}$ has $N/T$ points, each proportional to the acoustic energy emitted in the interval $(tT-T,tT]$. Our choice of $T$ is motivated by examination of the AE spectral density function $\tilde{x}(f/f_0)$, which is naturally separable into lower and higher frequency regions by an almost zero density region centred at $f/f_0\sim 5\times 10^{-3}$.
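The RMS reduction in the definition above is straightforward to implement. The following is a minimal sketch (in Python for concreteness; the analyses in this paper were carried out in R, and the function name `rms_reduce` is ours), applied to a purely illustrative signal with one synthetic burst:

```python
import numpy as np

def rms_reduce(x, T):
    """Block RMS transform: y_t is the root mean square of the
    T consecutive samples x_{(t-1)T+1}, ..., x_{tT}."""
    x = np.asarray(x, dtype=float)
    N = (len(x) // T) * T            # drop any incomplete final block
    return np.sqrt((x[:N].reshape(-1, T) ** 2).mean(axis=1))

# Illustrative signal: background noise plus one exponentially decaying
# burst, mimicking the continuous part and a single microfracture event.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 40960)
x[20000:20100] += 30.0 * np.exp(-np.arange(100) / 20.0)

y = rms_reduce(x, T=100)             # T = 100 as chosen in the text
```

Each output point summarises one block of $T$ samples, so a $40960$-point series reduces to a few hundred RMS values, with burst blocks standing out clearly above the noise level.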
Furthermore, analysis of $\tilde{x}(f/f_0)$ on a moving window demonstrates that the low frequency region is largely constant while the higher part depends critically on the presence or absence of bursts in the window. As bursts are typically separated by O($10^4$) samples, we conclude that the lower part of the spectrum is due to the process generating the series of bursts, whereas the higher part is due both to the dynamics of individual bursts and to the structure of the continuous part of the AE signal. In order to obtain a meaningful RMS, that is, to reduce the data to a small number of points characterising the AE noise level, we chose $T=100$ corresponding to the spectral gap at $f/f_0=5\times 10^{-3}$ (equal to a physical interval of $40\,\mu$s). \section{Results for ARMA models} We now subject this RMS series $y_t$ to analysis to determine if some feature of the resulting model can be exploited to estimate the wear level. ARMA (Auto Regressive Moving Average) linear models are widely used for modelling stationary time series in general \citep*{brock:theory,pries:spectral,shum} and are very flexible with respect to real-world applications. \citet{76377} have successfully discriminated normal and pathological patients using ARMA analyses on acoustic signals from the respiratory tract. \citet{Hol} have used a more generalised ARMA model to forecast failure in mechanical systems and \citet{freq} have presented a review of acoustic applications of ARMA models, defining in detail a protocol for the analysis of acoustic spectral features. Some other examples may be taken from the fields of medicine \citep*{cor}, finance \citep*{Gha,wen}, languages \citep*{paw} and engineering (\cite{kannatey:wear}, \cite{freq2}, \cite{wind}, \cite{ground}, \cite{ground2}, \cite{Na}). Previous research on machining by means of techniques related to time series analysis was conducted by Professor Wu and his team, and reported in several papers dating back to the late 1980s.
Amongst others, we mention \citet{Kim1989282}, \citet{Fassois1989153}, \citet{Yang1985336}, \citet{Ahn198591} and \citet{1989MSSP}. Although few naturally occurring processes are intrinsically linear, the aim of this section of the analysis is to understand if ARMA models are suitable for the purpose of representing acoustic emission signals, and if a linear approximation could give us enough information about the dynamical process itself. \begin{definition}[ARMA processes] Let us consider a real valued process $X_t, t \in \mathbb{N}$. It is called an $\text{ARMA} \left(p,q \right)$ process (combining an $AR \left( p \right)$ and $MA \left( q \right)$ model) if \begin{equation} \label{arma} X_t=c_0+ \phi_1 X_{t-1} + \cdots + \phi_p X_{t-p} + \epsilon_t + \theta_1 \epsilon_{t-1} + \cdots + \theta_q \epsilon_{t-q} \end{equation} where $c_0$ is the intercept, and $p$ and $q$ are respectively the numbers of parameters in the autoregressive and moving average parts of the process. The $p + q$ parameters $\phi_1 \cdots \phi_p,\theta_1 \cdots \theta_q$ must be chosen so that the roots of the autoregressive and moving average characteristic polynomials lie outside the unit circle, which ensures stationarity and invertibility of the process (for $p=q=1$ this reduces to $|\phi_1|<1$ and $|\theta_1|<1$). The process $\epsilon_t$, called the innovations, is taken to be white noise (see e.g. the classical texts \cite{box:time} or \cite{brock:theory}). \end{definition} For each level of wear and for each recorded time series we have conducted a full analysis following the Box-Jenkins iterative procedure \citep*{box:time}. The resulting best model is generally very simple; often it has only three or four parameters. We can analyse the results, for example, for an RMS time series taken with a fully worn tool. At the right side of figure~\ref{figure_two} we can see the autocorrelation function and the partial autocorrelation function of the series.
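Diagnostics of this kind are easy to reproduce. Below is a sketch in Python (the original analyses used R; the helper names are ours): a sample autocorrelation function and a least-squares AR(1) fit, checked on a simulated series with parameters close to those estimated in eq.~(\ref{model}):

```python
import numpy as np

def sample_acf(x, max_lag):
    """Sample autocorrelation function at lags 0..max_lag."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = np.sum(x ** 2)
    return np.array([1.0] + [np.sum(x[:-k] * x[k:]) / denom
                             for k in range(1, max_lag + 1)])

def fit_ar1(y):
    """Least-squares fit of y_t = c0 + phi*y_{t-1} + eps_t;
    returns (c0, phi, residuals)."""
    Y, X = y[1:], y[:-1]
    phi, c0 = np.polyfit(X, Y, 1)    # slope = phi, intercept = c0
    return c0, phi, Y - (c0 + phi * X)

# Simulate an AR(1) with parameters close to those of the worn tool.
rng = np.random.default_rng(1)
c0_true, phi_true, n = 7.55, 0.76, 5000
y = np.empty(n)
y[0] = c0_true / (1.0 - phi_true)    # start at the stationary mean
for t in range(1, n):
    y[t] = c0_true + phi_true * y[t - 1] + rng.normal()

c0_hat, phi_hat, resid = fit_ar1(y)
acf = sample_acf(y, 5)               # decays roughly like phi_true**lag
```

For an AR(1) the theoretical acf decays geometrically while the pacf vanishes after lag one, which is exactly the signature used in the identification step.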
Their shapes (the acf's decay is exponential and the pacf is zero after lag one) and an analysis of the spectral density function suggest a simple AR$\left(1\right)$. Furthermore, if we apply the procedure in a completely automated manner \citep[par. 9.3]{brock:theory}, the resulting best model is indeed an AR$\left(1\right)$ model. \begin{align} \label{model} X_t & = \hat{c_0} + \hat{\phi} X_{t-1} + \hat{\epsilon}_{t} \nonumber \\ & = 7.5499 + 0.7632 X_{t-1} + \hat{\epsilon}_{t} \end{align} where $\hat{c_0}$ is the estimated intercept, $\hat{\phi}$ is the sole estimated autoregressive parameter and $\hat{\epsilon}_t$ are the residuals (i.e. the estimated innovations). All the parameters are statistically significant. \begin{definition}[BIC -- Bayesian Information Criterion] We define the Bayesian Information Criterion for a model with $(p+q)$ parameters and $N$ observations as \begin{align} BIC \left( p,q \right) & = N \log \hat{\sigma}^2 \left( p,q \right) + \left( p+q \right) \log N \nonumber \\ & - \left( N-p-q \right) \log \left( 1-\frac{p+q}{N} \right) + \left( p+q \right) \log \left( \frac{1}{p+q} \left( \frac{\hat{\sigma}^2_*}{ \hat{\sigma}^2 \left(p,q \right)} \right) \right) \end{align} where $p$ is the number of parameters in the autoregressive part of the model, $q$ is the number of parameters in the moving average part, $\hat{\sigma}^2 \left(p,q \right)$ is the residuals variance calculated after having fitted an $ARMA \left(p,q \right)$ model and $\hat{\sigma}^2_*$ is the sample variance of the observations (for details see \cite{pries:spectral} page 375). \end{definition} To better understand the behaviour of the BIC it is useful to give the following representation, also presented in \cite{pries:spectral}: \begin{equation} BIC \left( p,q \right) = N \log \hat{\sigma}^2 \left( p,q \right) + \left( p+q \right) \log N + o \left( p+q \right) \end{equation} The BIC must thus be minimised w.r.t.
$(p,q)$ in order to select the model which best explains the observations with the minimum number of parameters. At the left side of figure~\ref{figure_two} we can see the levelplot for the BIC matrix. \begin{figure} \makebox{\includegraphics[scale=0.27]{aic.eps}} \caption{\label{figure_two}BIC matrix and autocorrelation functions.} \end{figure} For each combination of order $\left(p,q\right)$ of the AR and MA parts, maximum likelihood estimates for the parameters are computed and BIC$\left(p,q\right)$ is recorded on the matrix. The minimum BIC value indicates that the best model is that of eq.~(\ref{model}). Following the usual procedure to validate the model, we conduct an analysis of \emph{whiteness} on the residuals. The autocorrelation and partial autocorrelation functions for the estimated residuals are both within the white noise confidence band for strictly positive lags, and the spectral density function is uniform. Whiteness tests in both the time and frequency domains (Ljung-Box for various lags and cumulative periodogram tests) were conducted with positive outcomes. This model therefore explains the whole linear dependence between the variables in the process. At this point we have a model which adequately describes the stationary part of the AE RMS signal. Furthermore, we can see that the mean of the residuals' variance decreases with increasing tool wear. However, the decrease of the residuals' variance (with respect to the wear level) is not due to a better explanation of the data by the model, but instead to the decreasing number of bursts in the time series. Therefore ARMA models are more suitable for the description of the essentially continuous transformed signal part (the one due to plastic deformation). \begin{figure} \centering \makebox{\includegraphics[scale=0.5]{residuals.eps}} \caption{\label{res}A typical residuals time series after fitting an ARMA model.
We note that bursts are still present.} \end{figure} \section{Point processes}\label{popro} Taking a closer look at the residuals (figure \ref{res}), we note that bursts are still present, though with a smaller amplitude. Therefore the bursts are not explained by the AR(1) model of the preceding section, and therefore not by any $\text{ARMA}(p,q)$. If we wish to understand the evolution of the acoustic emissions with respect to the wear level of the tool, we must take into account the dynamical and statistical process that generates the bursts. The acoustic emission bursts are usually associated with micro-fractures in the work tool. They are, effectively, singular events (the exponential decay is due to the transducer response). Furthermore, the number of bursts seems to decrease with the wear level, indicating a decrease in the mean burst frequency as the tool wears. Point processes are a tool widely used in modelling inherently point-like phenomena (see e.g. \cite{Paparo}, \cite{telo}). In this section we consider the bursts as the outcome of a point process and try to understand the behaviour of this process as the wear level increases. Figure \ref{figure_four} shows the overall point process for new tools and the observed waiting times between bursts. The waiting times between events are registered and presented in figure~\ref{figure_four}(b), indexed by their order in the sequence of bursts. The identification of the events was performed by placing a threshold at various levels to obtain information about bursts of different amplitudes. In particular we chose four amplitude levels: $40$, $50$, $60$ and $70$. These thresholds are expressed in arbitrary units proportional to the signal's amplitude, which depends on the data collection chain. For an example see fig.~\ref{figure_one}, where the placement of a threshold at level 20 identifies two burst events in the RMS-transformed time series.
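The event-extraction step can be sketched as follows (Python for concreteness; the function names are ours). A burst is registered at each upward crossing of the threshold, so a burst lasting several consecutive samples counts as a single event of the point process:

```python
import numpy as np

def burst_events(y, threshold):
    """Indices of upward threshold crossings: one event per burst."""
    above = np.asarray(y) > threshold
    onsets = above & ~np.r_[False, above[:-1]]   # above now, not before
    return np.flatnonzero(onsets)

def waiting_times(events):
    """Inter-event intervals of the resulting point process."""
    return np.diff(events)

# Toy RMS series (arbitrary units): three bursts above threshold 40.
y = np.array([5, 6, 45, 50, 7, 5, 41, 6, 5, 60, 62, 61, 4])
ev = burst_events(y, threshold=40)   # events at indices 2, 6 and 9
wt = waiting_times(ev)               # waiting times 4 and 3
```

Repeating this for each threshold level yields one waiting-time series per threshold, in units of the RMS sampling interval.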
The waiting times process is then calculated for all the thresholds considered and is in effect a renewal process \citep{cox:renewal}. It is possible to verify the lack of correlation (second order dependence) in the waiting times process by calculating the spectral density function and the complete and partial autocorrelation functions. \begin{figure} \centering \makebox{\includegraphics[scale=0.45]{point-pro-graph.eps}} \caption{\label{figure_four}a) The sequence of burst events as a function of time. b) The waiting time between events indexed by their order in the sequence of bursts.} \end{figure} The hypotheses of exponentially distributed waiting times (Poisson process) and Pareto (power-law) distributed waiting times (fractal point process), as in \citet{thurner:fracpoin} and \citet{lowen:fract}, are addressed by conducting Kolmogorov-Smirnov goodness-of-fit tests on the p.d.f. for all thresholds and wear levels. The tests were significant, rejecting both hypotheses, so another hypothesis for the distribution must be proposed. As stated in \citet{malevergne:weitopar}, the Weibull distribution (also known as \emph{Stretched Exponential}) can have (for typical parameter values) both the features of an exponential distribution and of a Pareto distribution: \begin{equation} f \left( x \right) = \frac{ \alpha}{ \beta} \left( \frac{x}{ \beta} \right)^{ \alpha -1} e^{ - \left( \frac{x}{ \beta } \right)^{ \alpha}} \end{equation} where $\beta$ and $\alpha$ are, respectively, the scale and shape parameters ($x \geq 0, \alpha,\beta > 0$). When $\alpha = 1$ a Weibull is an exponential distribution with rate $\lambda = \frac{1}{ \beta}$. This kind of distribution, therefore, allows us to describe point processes that have waiting times following an \emph{exponential behaviour}, but that can exhibit a heavier tail.
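Maximum-likelihood fitting of the Weibull distribution reduces to a one-dimensional root find for the shape parameter, after which the scale parameter and the mean $\hat{\mu} = \hat{\beta}\,\Gamma(1+1/\hat{\alpha})$ follow in closed form. A self-contained sketch (Python; in practice one would use a statistics package for the fit and the Kolmogorov-Smirnov test, as was done here in R):

```python
import math
import numpy as np

def weibull_mle(x, lo=0.05, hi=20.0, iters=80):
    """Maximum-likelihood Weibull fit; returns (alpha, beta, mean).

    The shape alpha solves the profile-likelihood equation
        sum(x^a log x)/sum(x^a) - 1/a - mean(log x) = 0,
    whose left side is increasing in a, so bisection suffices.
    """
    x = np.asarray(x, dtype=float)
    mean_log = np.mean(np.log(x))

    def g(a):
        xa = x ** a
        return np.sum(xa * np.log(x)) / np.sum(xa) - 1.0 / a - mean_log

    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0.0 else (lo, mid)
    alpha = 0.5 * (lo + hi)
    beta = np.mean(x ** alpha) ** (1.0 / alpha)
    return alpha, beta, beta * math.gamma(1.0 + 1.0 / alpha)

# Sanity check: exponential data is Weibull with alpha = 1, so the fit
# should recover a shape near 1 and a mean near the true mean.
rng = np.random.default_rng(2)
a_hat, b_hat, mu_hat = weibull_mle(rng.exponential(83.0, 4000))
```

On data with a heavier-than-exponential tail the same estimator returns $\hat{\alpha}<1$, which is exactly the regime reported in Table~\ref{results}.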
\begin{table} \caption{\label{results}Maximum Likelihood Estimates and P-Values of Kolmogorov-Smirnov Tests} \centering \fbox{% \begin{tabular}{*{9}{c}} \multicolumn{1}{c}{Thresh.} & \multicolumn{3}{c}{New} & \multicolumn{3}{c}{Worn} & \multicolumn{2}{c}{P-Value} \\ & $\hat{\alpha}$ & $\hat{\beta}$ & $ \hat{ \mu} $ & $\hat{\alpha}$ & $\hat{\beta}$ & $ \hat{ \mu} $ & New & Worn \\ \hline 70 & 0.845 & 98.53 & 90.92 & 0.764 & 138.98 & 124.56 & 0.9907 & 0.3494 \\ 60 & 0.875 & 119.68 & 111.95 & 0.786 & 138.99 & 125.38 & 0.5507 & 0.2388 \\ 50 & 0.822 & 109.99 & 100.53 & 0.942 & 148.79 & 144.06 & 0.606 & 0.1154 \\ 40 & 0.754 & 83.1 & 74.27 & 0.907 & 138.90 & 132.02 & 0.3204 & 0.1055 \\ \end{tabular}} \end{table} Table~\ref{results} summarises the maximum likelihood estimates for the Weibull parameters $\hat{\alpha}$ and $\hat{\beta}$. We have calculated the estimates for each threshold and each wear level, found the estimated mean $\hat{\mu}$, and performed a Kolmogorov-Smirnov test to see if the Weibull hypothesis can hold. As can be seen from the P-values in the table, the test is not significant at any level or threshold. Table~\ref{results} contains only the estimates for the \emph{new} and \emph{totally worn} tools. The half worn tool has sufficient data for this analysis only with the lowest threshold; the result is consistent with the other wear levels (i.e. $\hat{\mu}=113.15$). Figure~\ref{figure_three} shows the maximum likelihood Weibull fit for the distribution of waiting times between the bursts (new tools, threshold equal to $40$). The estimated parameters are $\hat{\alpha} = 0.75$ and $\hat{\beta} = 83.1$. In the inset we note that, for all thresholds considered, the estimated mean of the waiting times increases with the tool wear level. For purposes of TCM we could therefore monitor the mean of the estimated distribution to decide whether it is necessary to change the tool or not.
\section{Conclusions} \begin{figure} \centering \includegraphics[scale=0.45]{fit.eps} \caption{\label{figure_three}Weibull fit for the pdf of the waiting times process. The mean inter-burst waiting time consistently increases with increasing tool wear.} \end{figure} In this analysis we have shown an application of time series models to acoustic emission signals from metal cutting processes. Before this work few stochastic analyses had been performed on this particular type of data. After transforming the data by means of the RMS transform, we applied standard time series statistical techniques to explain the underlying process that generates the phenomenon under consideration. Initially we used linear ARMA models together with the well known Box-Jenkins iterative procedure. We found that these linear models with a small number of parameters are suitable for the description of the linear contribution of the background part of the signal. The variance of the residuals decreases when the wear level increases, but this effect is due mainly to the decreasing number of bursts in the time series. Even though, for the purposes of TCM, we could in principle monitor that variance, we have obtained better results looking directly at the underlying mechanism generating the burst process. A renewal point process with Weibull-distributed waiting times seems to represent the burst process very well. In particular, for each threshold and each wear level the Kolmogorov-Smirnov test is not significant. When the tool wear level increases, the tail of the distribution becomes heavier and the estimated mean of the waiting times process increases. The number of bursts actually decreases with increasing tool wear; we could therefore identify the estimated mean as the parameter that should be used to monitor the condition of the working tool (the tool will be substituted when the estimated mean becomes sufficiently high).
In conclusion, it is important to underline that, from a methodological point of view, both ARMA models and Weibull point processes are suitable for modelling the phenomenon (even though ARMA models are somewhat limited to explaining the linear contribution of the plastic deformation). What appears important is that the shape of the distribution of waiting times changes, in the sense that the tail becomes heavier when the wear level increases. Therefore the process creating the bursts seems to evolve in a fundamental manner. In future work, it would certainly be very interesting to follow the evolution of the distribution shape along the whole life of the cutting tool to try to understand better the dynamical properties of the underlying process. \vspace{.5cm} \textit{Acknowledgement:} The authors wish to thank the referees for their helpful comments and insights. \nocite{*} \bibliographystyle{Chicago}
\section{Main results}\label{intro} In \cite{W} and \cite{DW}, the authors initiated a partial computation of the connective $KU$-homology groups, $ku_*(K(\bold Z_2,2))$, of the mod-2 Eilenberg-MacLane space $K(\bold Z_2,2)$ in separate studies of Stiefel-Whitney classes of manifolds. We eventually turned to the associated cohomology groups, $ku^*(K(\bold Z_2,2))$, and were able to give a complete determination, via the Adams spectral sequence (ASS). This generalized nicely to the odd primes, and then we found a duality result (\cite{DD}) relating these homology and cohomology groups which enabled us to determine the homology groups $ku_*(K(\bold Z_p,2))$. Let $K_2=K(\bold Z_p,2)$, with the prime $p$ implicit. We begin with a description of the $ku^*$-module $ku^*(K_2)$. Note that $ku^*=\bold Z_{(p)}[v]$ with $|v|=-2(p-1)$. We find that depiction via ASS charts is the most insightful way to envision the groups. There is a very nice interplay between extensions (multiplication by $p$) seen in Ext ($h_0$-extensions) and exotic extensions. We depict the ASS with cohomological (co)degrees increasing from right-to-left. We write $|x|=d$ if $x\in ku^d(K_2)$ or the associated $E_2$-term. In $ku^*(K_2)$, there is a trivial submodule whose Poincar\'e series when $p=2$ is described at the end of Section \ref{E2sec}. It plays no role and {\bf will be ignored from now on}. As a $ku^*$-module, $ku^*(K_2)$ is generated by certain products of elements of $E_2^{0}$, $y_0$, $y_i=y_0^{p^i}$, with $|y_i|=2p^i$, $z_j$ for $j\ge0$ with $|z_j|=2(p^{j+1}+1)$, and $q$ with $|q|=9$ if $p=2$, and $|q|=4p-1$ if $p$ is odd. The even-graded part $ku^{\text{ev}}(K_2)$ is formed from shifted copies of $ku^*$-modules $A_k$ and $B_k$, which can be defined inductively as follows. \begin{defin}\label{ABdef} Let $k_0=1$ if $p$ is odd, and $k_0=2$ if $p=2$. Let $B_{k_0-1}=0$. Let $A_0=\langle z_0\rangle$ for all $p$. 
Inductively $$B_k\text{ is built from }z_{k-1}^{p-1}B_{k-1},\ TP_{p^k-k}[v]z_k, \text{ and }y_{k-1}^{p-1}B_{k-1},\text{ if }k\ge k_0$$ and $$A_k\text{ is built from }z_{k-1}^{p-1}B_{k-1},\ TP_{p^k}[v]z_k, \text{ and }y_{k-1}^{p-1}A_{k-1},\text{ if }k\ge1$$ with extensions determined by \begin{equation}\label{extns}pz_k=vz_{k-1}^p\text{ for $k\ge2$, and }py_{k-1}^{p-1}z_{k-1}=v^{p^{k-1}(p-1)}z_{k}.\end{equation} \end{defin} Here $TP_i[v]$ is the truncated polynomial algebra over $\bold Z_p$ with generator $v$ and relation $v^i$. When we write something like $zB$, we mean that all elements of $B$ are multiplied by the element $z$. Saying ``is built from'' means that these are successive quotients in a filtration as a $ku^*$-module. The extension formulas are only asserted up to multiplication by a unit in $\bold Z_p$, and can both occur on an element. For example, in Figure \ref{B7}, we have, in grading 116 when $p=2$, $2y_3z_3z_4=vy_3z_2^2z_4+v^8z_4^2$. Figure \ref{B7} should enable the reader to envision $A_k$ and $B_k$ for $p=2$ and $k\le5$, and, by extrapolating, for all $k$. Elements connected by dashed lines are in $A_5$ but not in $B_5$. The long red lines, sometimes slightly curved, are the exotic extensions. The portion in gradings $\le102$, not including the top $v$-tower or the extensions to it, is $y_4A_4$ (or $y_4B_4$ if the dashed part is omitted). The portion in gradings $\ge106$, not including the $v$-tower on $z_5$ or the $h_0$-extensions from it, is $z_4B_4$. The portion in the lower right corner, in gradings $\le 84$, is $y_3y_4A_3$, and $y_2y_3y_4A_2$ is in gradings $\le 74$. In Figure \ref{oddchart}, we present a schematic of $A_3$ and $B_3$ at the odd primes. Again the dashed portion is in $A_3$, but not $B_3$, and the triangle in the lower right portion is $y_1^{p-1}y_2^{p-1}A_1$.
A generating set as a $\bold Z_p[v]$-module for $B_k$ is \begin{equation}\label{Bk}\biggl\{z_j\prod_{i=j}^{k-1}\{z_i^{p-1},y_i^{p-1}\}:\ k_0\le j\le k\biggr\},\end{equation} while $A_k$ has additional generators $$\begin{cases}z_1y_1\cdots y_{k-1}&p=2\\ z_0y_0^{p-1}\cdots y_{k-1}^{p-1}&\text{all }p.\end{cases}$$ The notation here means a product over all choices of one of the two elements in each factor. For example, $\prod_{i=1}^2\{z_i^{p-1},y_i^{p-1}\}=z_1^{p-1}z_2^{p-1}+z_1^{p-1}y_2^{p-1}+y_1^{p-1}z_2^{p-1}+y_1^{p-1}y_2^{p-1}$. An empty product is defined to equal 1. The following theorem explains how the portion of $ku^*(K_2)$ in even gradings is a direct sum of shifted versions of $A_k$ and $B_k$. \begin{thm}\label{evthm} Let $M_p[S]$ denote the set of monomials in the elements of a set $S$ raised to powers $<p$. Let \begin{equation}\label{Mk}\mathcal{M}_k=(M_p[z_k,y_k]-\{z_k^{p-1},y_k^{p-1}\})\cdot M_p[z_i,y_i:\ i>k].\end{equation} Let $\mathcal{M}_k^A$ be the set of monomials in $\mathcal{M}_k$ with no $z$-factors, and $\mathcal{M}_k^B=\mathcal{M}_k-\mathcal{M}_k^A$. Then $$ku^{\text{ev}}(K_2)=\bigoplus_{k\ge1}\biggl(\bigoplus_{M\in\mathcal{M}_k^A}M\cdot A_k\oplus\bigoplus_{M\in \mathcal{M}_k^B}M\cdot B_k\biggr)$$ plus a trivial $ku^*$-module.\end{thm} Note that the monomial 1 is in $\mathcal{M}_k^A$, so $A_k$ appears by itself, but $B_k$ does not. 
\tikzset{ testpic/.pic= {\draw (0,0) -- (1,1) -- (1,0) -- (5,4) -- (5,2) -- (29,26); \draw [dashed] (29,26) -- (34,28); \draw [dashed] [color=red] (34,28) -- (34,0); \draw (0,0) -- (34,0); \draw (3,2) -- (3,0); \draw (4,1) -- (4,3); \draw [color=red] (5,0) to[out=98, in=262] (5,4); \draw [color=red] (10,0) to[out=98, in=262] (10,8); \draw [color=red] (11,1) to[out=98, in=262] (11,9); \draw [color=red] (12,2) to[out=98, in=262] (12,10); \draw [color=red] (13,3) to[out=98, in=262] (13,11); \draw [color=red] (22,0) to[out=98, in=262] (22,4); \draw [color=red] (22,3) to[out=83, in=270] (22,19); \draw [color=red] (21,2) to[out=83, in=270] (21,18); \draw [color=red] (20,1) to[out=83, in=270] (20,17); \draw [color=red] (19,0) to[out=83, in=270] (19,16); \draw [color=red] (27,0) to[out=83, in=270] (27,8); \draw [color=red] (14,0) -- (14,4); \draw [color=red] (23,4) -- (23,20); \draw [color=red] (24,5) -- (24,21); \draw [color=red] (25,6) -- (25,22); \draw [color=red] (26,7) -- (26,23); \draw [color=red] (27,8) -- (27,24); \draw [color=red] (28,1) -- (28,25); \draw [color=red] (29,2) -- (29,26); \draw [color=red] (30,3) -- (30,11); \draw [color=red] (31,0) -- (31,4); \node at (0,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (1,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (1,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (2,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (3,2) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (4,3) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (5,4) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (2,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (3,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (4,2) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (5,3) 
{\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (6,4) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (7,5) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (8,6) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (9,7) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (10,8) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (11,9) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (12,10) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (13,11) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (3,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (4,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (5,2) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (6,3) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (7,4) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (8,5) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (9,6) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (10,7) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (11,8) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (12,9) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (13,10) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (14,11) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (15,12) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (16,13) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (17,14) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (18,15) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (19,16) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at 
(20,17) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (21,18) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (22,19) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (23,20) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (24,21) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (25,22) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (26,23) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (27,24) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (28,25) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (29,26) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (30,26.4) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (31,26.8) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (32,27.2) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (33,27.6) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (34,28) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (5,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (6,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (9,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (10,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (10,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (11,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (12,2) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (13,3) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (14,4) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (14,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (15,1) 
{\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (17,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (18,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (18,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (19,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (20,2) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (21,3) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (22,4) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (19,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (20,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (21,2) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (22,3) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (23,4) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (24,5) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (25,6) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (26,7) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (27,8) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (28,9) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (29,10) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (30,11) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (22,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (23,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (26,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (27,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (27,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (28,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at 
(29,2) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (30,3) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (31,4) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (31,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (32,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (33,2) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (33,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (34,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (34,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (34,3) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (32,5) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (33,6) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (34,7) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (31,12) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (32,13) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (33,14) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (34,15) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \draw (17,0) -- (18,1) -- (18,0) -- (22,4); \draw (19,0) -- (30,11); \draw (19,0) -- (19,1) -- (20,2) -- (20,1) -- (21,2) -- (21,3) -- (22,4) -- (22,3); \draw (2,1) -- (2,0) -- (13,11) -- (13,10); \draw (6,3) -- (6,4) -- (7,5) -- (7,4) -- (8,5) -- (8,6) -- (9,7) -- (9,6) -- (10,7) -- (10,8) -- (11,9) -- (11,8) -- (12,9) -- (12,10); \draw (3,0) -- (5,2); \draw (9,0) -- (10,1) -- (10,0) -- (14,4); \draw (5,0) -- (6,1); \draw (14,0) -- (15,1); \draw (22,0) -- (23,1); \draw (26,0) -- (27,1) -- (27,0) -- (31,4); \draw (31,0) -- (32,1); \draw [dashed] [color=red] (33,0) -- (33,27.6); \draw [dashed] [color=red] (32,1) -- (32,27.2); \draw [dashed] [color=red] (31,4) -- (31,26.8); \draw [dashed] 
[color=red] (30,11) -- (30,26.4); \draw [dashed] (33,0) -- (34,1); \draw [dashed] (32,1) -- (34,3); \draw [dashed] (31,4) -- (34,7); \draw [dashed] (30,11) -- (34,15); \node at (0,-.4) {$136$}; \node at (3,-.4) {$130$}; \node at (0,-.9) {$z_2^2z_3z_4$}; \node at (3,-.9) {$z_5$}; \node at (2,-.9) {$z_4^2$}; \node at (5,-.4) {$126$}; \node at (5,-.9) {$y_2z_2z_3z_4$}; \node at (10,-.4) {$116$}; \node at (10,-.9) {$y_3z_3z_4$}; \node at (14,-.4) {$108$}; \node at (14,-.9) {$y_2y_3z_3z_4$}; \node at (17,-.4) {$102$}; \node at (17,-.9) {$y_4z_2^2z_3$}; \node at (19,-.4) {$98$}; \node at (19,-.9) {$y_4z_4$}; \node at (22,-.4) {$92$}; \node at (22,-.9) {$y_2y_4z_2z_3$}; \node at (27,-.9) {$y_3y_4z_3$}; \node at (27,-.4) {$82$}; \node at (31,-.4) {$74$}; \node at (31,-.9) {$y_2y_3y_4z_2$}; \node at (34,-.4) {$68$}; \node at (34,-.9) {$y_0y_1y_2y_3y_4z_0$}; }} \begin{minipage}{6in} \begin{fig}\label{B7} {\bf $B_5$ and $A_5$ when $p=2$.} \begin{center} \begin{tikzpicture} \pic[rotate=90,scale=.56,transform shape] {testpic}; \end{tikzpicture} \end{center} \end{fig} \end{minipage} \vfill\eject \tikzset{ testpic4/.pic= {\draw (-1,0) -- (70,0); \draw (2,0) -- (66,48); \draw (39,0) -- (67,21); \draw (1,0) -- (29,21); \draw (0,0) -- (12,9); \draw (38,0) -- (50,9); \draw (18,0) -- (30,9); \draw (56,0) -- (68,9); \draw [dotted] (12,9) -- (68,9); \draw [dotted] (29,21) -- (67,21); \draw [color=red] (51,9) -- (51,36.75); \node at (66,48) {\,\begin{picture}(-1,1)(1,-1)\circle*{8}\end{picture}\ }; \node at (67,48.75) {\,\begin{picture}(-1,1)(1,-1)\circle*{8}\end{picture}\ }; \node at (68,49.5) {\,\begin{picture}(-1,1)(1,-1)\circle*{8}\end{picture}\ }; \node at (69,50.25) {\,\begin{picture}(-1,1)(1,-1)\circle*{8}\end{picture}\ }; \node at (67,21) {\,\begin{picture}(-1,1)(1,-1)\circle*{8}\end{picture}\ }; \node at (68,21.75) {\,\begin{picture}(-1,1)(1,-1)\circle*{8}\end{picture}\ }; \node at (69,22.5) {\,\begin{picture}(-1,1)(1,-1)\circle*{8}\end{picture}\ }; \node at (68,9) 
{\,\begin{picture}(-1,1)(1,-1)\circle*{8}\end{picture}\ }; \node at (69,9.75) {\,\begin{picture}(-1,1)(1,-1)\circle*{8}\end{picture}\ }; \draw (39,0) -- (39,.75); \draw (41,1.5) -- (41,2.25); \draw (43,3) -- (43,3.75); \draw (45,4.5) -- (45,5.25); \draw (47,6) -- (47,6.75); \draw (49,7.5) -- (49,8.25); \draw (50,8.25) -- (50,9); \draw [color=red] (56,0) -- (56,40.5); \draw [color=red] (58,1.5) -- (58,42); \draw [color=red] (60,3) -- (60,43.5); \draw [color=red] (62,4.5) -- (62,45); \draw [color=red] (64,6) -- (64,46.5); \draw [color=red] (67,8.25) -- (67,21); \draw [color=red] [dashed] (67,20.25) -- (67,48.75); \draw [color=red] [dashed] (68,9) -- (68,49.5); \draw [color=red] [dashed] (69,0) -- (69,50.25); \draw [color=red] (66,7.5) -- (66,48); \draw [color=red] (54,11.25) -- (54,39); \draw [color=red] (52,9.75) -- (52,37.5); \draw [color=red] (39,0) to[out=80, in=270] (39,27.75); \draw [color=red] (41,1.5) to[out=80, in=270] (41,29.25); \draw [color=red] (43,3) to[out=80, in=270] (43,30.75); \draw [color=red] (45,4.5) to[out=80, in=270] (45,32.25); \draw [color=red] (47,6) to[out=80, in=270] (47,33.75); \draw [color=red] (50,8.25) to[out=85, in=270] (50,36); \draw (2,0) -- (2,1.5); \draw (1,0) -- (1,.75); \draw (4,1.5) -- (4,3); \draw (6,3) -- (6,4.5); \draw (8,4.5) -- (8,6); \draw (10,6) -- (10,7.5); \draw (12,7.5) -- (12,9); \draw (14,9) -- (14,9.75); \draw (16,10.5) -- (16,11.25); \draw (18,12) -- (18,12.75); \draw (20,13.5) -- (20,14.25); \draw (22,15) -- (22,15.75); \draw (24,16.5) -- (24,17.25); \draw (26,18) -- (26,18.75); \draw (29,20.25) -- (29,21); \draw [color=red] (18,0) to[out=98, in=262] (18,12.75); \draw [color=red] (20,1.5) to[out=98, in=262] (20,14.25); \draw [color=red] (22,3) to[out=98, in=262] (22,15.75); \draw [color=red] (24,4.5) to[out=98, in=262] (24,17.25); \draw [color=red] (26,6) to[out=98, in=262] (26,18.75); \draw [color=red] (29,8.25) to[out=98, in=262] (29,21); \draw [dotted] (18,12.75) -- (56,12.75); \node [font=\fontsize{40}{0}] at 
(34,9) {$p-2$}; \node [font=\fontsize {40}{0}] at (34,12.75) {$p^2-p$}; \node [font=\fontsize {40}{0}] at (34,21) {$p^2-3$}; \node [font=\fontsize {40}{0}] at (64,50.25) {$p^3-1$}; \node [font=\fontsize {40}{0}] at (46,36.75) {$p^3-p^2+p-2$}; \draw [dotted] (49,36.75) -- (51,36.75); \draw [dotted] (66,50.25) -- (69,50.25); \node [font=\Huge] at (18,-1) {$y_1^{p-1}z_1z_2^{p-1}$}; \node [font=\Huge] at (56,-1) {$y_1^{p-1}y_2^{p-1}z_1$}; \node [font=\Huge] at (36,-1) {$y_2^{p-1}z_1^p$}; \node [font=\Huge] at (41,-1) {$y_2^{p-1}z_2$}; \node [font=\Huge] at (69,-1) {$y_0^{p-1}y_1^{p-1}y_2^{p-1}z_0$}; \node [font=\Huge] at (1,-1) {$z_2^p$}; \node [font=\Huge] at (2.5,-1) {$z_3$}; \node [font=\Huge] at (-1.5,-1) {$z_1^pz_2^{p-1}$}; \node at (69,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{8}\end{picture}\ }; }} \bigskip \begin{minipage}{6in} \begin{fig}\label{oddchart} {\bf Schematic of $A_3$ and $B_3$ for odd $p$.} \begin{center} \begin{tikzpicture} \pic[rotate=90,scale=.28,transform shape] {testpic4}; \end{tikzpicture} \end{center} \end{fig} \end{minipage} \vfill\eject Now we describe the portion of $ku^*(K_2)$ in odd gradings. Let $P[S]$ denote the polynomial algebra on a set $S$, and $TP_i[S]=P[S]/(s^i:\ s\in S)$, the truncated polynomial algebra. Let $\Lambda_j=TP_p[z_i:\ i\ge j]$. Note that if $p=2$, $\Lambda_j$ is an exterior algebra. For $i\le j$, let \begin{equation}\label{zij}z_{i,j}=z_i(z_i\cdots z_{j-1})^{p-1}.\end{equation} If $j=i$, then $z_{i,j}=z_i$. \begin{defin}\label{Sdef} For $\ell>k\ge1$, let $S_{k,\ell}=TP_{k+1}[v]\langle z_{k_0,\ell},\ldots,z_{\ell-k-1+k_0,\ell}\rangle$ with $pz_{i,\ell}=vz_{i-1,\ell}$ and $pz_{k_0,\ell}=0$. \end{defin} \noindent For example, $S_{5,8}$ with $p=2$ is depicted in Figure \ref{S710}.
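For instance, unwinding (\ref{zij}) with $p=2$ (a routine expansion, recorded here for the reader's convenience),
$$z_{2,5}=z_2(z_2z_3z_4)^{2-1}=z_2^2z_3z_4,$$
the monomial labeling codegree $136$ at the bottom of Figure \ref{B7}.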
\bigskip \begin{minipage}{6in} \begin{fig}\label{S710} {\bf $S_{5,8}$ if $p=2$} \begin{center} \begin{tikzpicture}[scale=.3] \draw (0,0) -- (10,10); \draw (2,0) -- (12,10); \draw (4,0) -- (14,10); \draw (2,0) -- (2,2); \draw (4,0) -- (4,4); \draw (6,2) -- (6,6); \draw (8,4) -- (8,8); \draw (10,6) -- (10,10); \draw (12,8) -- (12,10); \draw (-1,0) -- (5,0); \node at (-.6,-.9) {$1040$}; \node at (3.9,-.9) {$1036$}; \node at (-.6,-1.9) {$z_{2,8}$}; \node at (3.9, -1.9) {$z_{4,8}$}; \node at (0,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (2,2) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (4,4) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (6,6) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (8,8) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (10,10) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (2,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (4,2) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (6,4) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (8,6) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (10,8) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (12,10) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (4,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (6,2) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (8,4) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (10,6) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (12,8) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (14,10) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \end{tikzpicture} \end{center} \end{fig} \end{minipage} The following result describes the portion of $ku^*(K_2)$ in odd gradings. 
The exponent of $p$ in an integer $i$ is denoted simply by $\nu(i)$; the prime $p$ is implicit. The element $q$ here has grading $9$ when $p=2$ and $4p-1$ when $p$ is odd, as mentioned earlier. \begin{thm} \label{oddthm} There is an isomorphism of $ku^*$-modules $$ku^{\text{odd}}(K_2)\approx \bigoplus_{i\ge1}\bigoplus_{\ell\ge\nu(i)+2}q y_1^{i-1}S_{\nu(i)+1,\ell}\otimes TP_{p-1}[z_\ell]\otimes\Lambda_{\ell+1}.$$\end{thm} The non-visual, formulaic form of our result is as follows. \begin{thm}\label{formula} The $ku^*$-module $ku^*(K_2)$ is isomorphic to a trivial $ku^*$-module plus \begin{eqnarray}&&P[y_1^{p-1}]y_0^{p-1}z_0\oplus\bigoplus_{t\ge1}TP_{p^t}[v]\otimes P[y_t]z_t\label{a1}\\ &\oplus&\label{a2}\bigoplus_{t\ge k_0}TP_{p^t-t}[v]\otimes P[y_t]z_t\Lambda_t\\ &\oplus&\label{a3}\bigoplus_{i\ge1}TP_{\nu(i)+2}[v]q y_1^{i-1}\bigoplus_{\ell\ge0}z_{k_0+\ell,\ell+\nu(i)+2}\Lambda_{\ell+\nu(i)+2}.\end{eqnarray} Multiplication by $p$ in (\ref{a1}) and (\ref{a2}) is determined by (\ref{extns}) and in (\ref{a3}) as in Definition \ref{Sdef}.\end{thm} Our initial interest in this project was $ku_*(K_2)$ (\cite{W},\cite{DW}), but we first achieved success in computing $ku^*(K_2)$. In \cite[Example 3.4]{DD}, the following result was proved. \begin{thm}\label{dual} There is an isomorphism of $ku_*$-modules $ku_*(K_2)\approx(ku^{*+2p}K_2)^\vee$.\end{thm} Here $M^\vee=\operatorname{Hom}(M,\bold Z/p^\infty)$, the Pontryagin dual, localized at $p$. A homotopy chart for $ku_*(K_2)$ could be thought of as a shifted version of the homotopy chart of $ku^*(K_2)$ viewed upside-down and backwards. For example, the element of $ku^{108}(K_2)^\vee$ dual to the element $v^4y_3z_3z_4$ in Figure \ref{B7} corresponds to the generator of a $\bold Z_4$ in $ku_{104}(K_2)$ on which $v^4$ acts nontrivially. This element can be seen in Figure \ref{H_*pic}. A remarkable property, for which one explanation is given in Section \ref{optsec}, is that $B_k$ is self-dual as a $ku^*$-module.
One way of stating this is to let $\widetilde B_k$ denote $B_k$ with its indices negated. Then there is an isomorphism of $ku_*$-modules \begin{equation}\Sigma^{2(p^{k+1}+p^{k}+(k+1)p-k+1)}\widetilde B_k\approx B_k^\vee.\label{Bd}\end{equation} For example, with $p=2$, the second smallest generator $Y$ of $\Sigma^{208}\widetilde B_5$ is in grading $208-134=74$ and has $2Y\ne0$ and $v^4Y\ne0$. (See Figure \ref{B7}.) The second generator $Z$ of $B_5^\vee$ is dual to the class in position $(74,4)$ in Figure \ref{B7}, and also satisfies $2Z\ne0$ and $v^4Z\ne0$. The isomorphism (\ref{Bd}) can be proved by induction on $k$ using Definition \ref{ABdef}. A complete description of the $ku_*$-module $ku_*(K_2)$ is immediate from Theorems \ref{evthm}, \ref{oddthm}, and \ref{dual}. However, one might like a complete description of its ASS. We can write formulas for the $E_2$-term and differentials, but will not do so here. In Theorem \ref{ku*thm} we give a complete description of the $E_\infty$-term of the ASS of $ku_*(K_2)$ with exotic extensions included, in terms of the charts described in Section \ref{intro}. In \cite{DD}, a comparison was made of a chart for $A_3$ and its $ku_*$ analogue. Here we present in Figure \ref{H_*pic} the $ku_*$ analogue of Figure \ref{B7}. This presents the portion of the ASS of $ku_*(K_2)$ dual to $A_5$ with $p=2$ under the isomorphism of Theorem \ref{dual}. The chart dual to $B_5$ is obtained from this by removing the classes connected by dashed lines, and lowering the remaining tower so that the bottom is in filtration 0. The resulting chart is isomorphic to the $B_5$ part of Figure \ref{B7}. 
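The suspension exponent in (\ref{Bd}) is easily evaluated in the case just discussed (routine arithmetic): with $p=2$ and $k=5$,
$$2\bigl(p^{k+1}+p^{k}+(k+1)p-k+1\bigr)=2(64+32+12-5+1)=2\cdot104=208,$$
giving the $\Sigma^{208}$ above.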
\vfill\eject \tikzset{ testpic3/.pic= {\draw (0,5) -- (1,6) -- (1,5) -- (5,9) -- (5,7) -- (29,31); \draw (-3,0) -- (33,0); \draw (3,7) -- (3,5); \draw (4,6) -- (4,8); \draw [dashed] (3,5) -- (-2,0) -- (-2,5); \draw [dashed] (-2,1) -- (2,5); \draw [dashed] (-2,1) -- (-1,2) -- (-1,1); \draw [dashed] (0,2) -- (0,5) -- (-2,3); \draw [dashed] (-2,2) -- (1,5) -- (1,3); \draw [dashed] (-2,4) -- (-1,5) -- (-1,2); \draw [dashed] (2,4) -- (2,5); \draw [color=red] (5,0) to[out=98, in=262] (5,9); \draw [color=red] (10,0) to[out=98, in=262] (10,13); \draw [color=red] (11,1) to[out=98, in=262] (11,14); \draw [color=red] (12,2) to[out=98, in=262] (12,15); \draw [color=red] (13,3) to[out=98, in=262] (13,16); \draw [color=red] (22,0) to[out=98, in=262] (22,4); \draw [color=red] (22,3) to[out=83, in=270] (22,24); \draw [color=red] (21,2) to[out=83, in=270] (21,23); \draw [color=red] (20,1) to[out=83, in=270] (20,22); \draw [color=red] (19,0) to[out=83, in=270] (19,21); \draw [color=red] (27,0) to[out=83, in=270] (27,8); \draw [color=red] (14,0) -- (14,4); \draw [color=red] (23,4) -- (23,25); \draw [color=red] (24,5) -- (24,26); \draw [color=red] (25,6) -- (25,27); \draw [color=red] (26,7) -- (26,28); \draw [color=red] (27,8) -- (27,29); \draw [color=red] (28,1) -- (28,30); \draw [color=red] (29,2) -- (29,31); \draw [color=red] (30,3) -- (30,11); \draw [color=red] (31,0) -- (31,4); \node at (0,5) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (1,5) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (1,6) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (2,6) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (3,7) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (4,8) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (5,9) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (2,5) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node 
at (3,6) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (4,7) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (5,8) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (6,9) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (7,10) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (8,11) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (9,12) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (10,13) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (11,14) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (12,15) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (13,16) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (3,5) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (4,6) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (5,7) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (6,8) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (7,9) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (8,10) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (9,11) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (10,12) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (11,13) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (12,14) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (13,15) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (14,16) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (15,17) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (16,18) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (17,19) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; 
\node at (18,20) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (19,21) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (20,22) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (21,23) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (22,24) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (23,25) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (24,26) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (25,27) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (26,28) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (27,29) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (28,30) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (29,31) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (5,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (6,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (9,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (10,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (10,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (11,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (12,2) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (13,3) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (14,4) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (14,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (15,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (17,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (18,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (18,0) 
{\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (19,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (20,2) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (21,3) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (22,4) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (19,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (20,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (21,2) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (22,3) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (23,4) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (24,5) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (25,6) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (26,7) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (27,8) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (28,9) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (29,10) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (30,11) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (22,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (23,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (26,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (27,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (27,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (28,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (29,2) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (30,3) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (31,4) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at 
(31,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (32,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \draw (17,0) -- (18,1) -- (18,0) -- (22,4); \draw (19,0) -- (30,11); \draw (19,0) -- (19,1) -- (20,2) -- (20,1) -- (21,2) -- (21,3) -- (22,4) -- (22,3); \node at (-2,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (-2,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (-2,2) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (-2,3) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (-2,4) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (-2,5) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (-1,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (-1,2) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (-1,3) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (-1,4) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (-1,5) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (0,2) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (0,3) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (0,4) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (1,3) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (1,4) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (2,4) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \draw (2,6) -- (2,5) -- (13,16) -- (13,15); \draw (6,8) -- (6,9) -- (7,10) -- (7,9) -- (8,10) -- (8,11) -- (9,12) -- (9,11) -- (10,12) -- (10,13) -- (11,14) -- (11,13) -- (12,14) -- (12,15); \draw (3,5) -- (5,7); \draw (9,0) -- (10,1) -- (10,0) -- (14,4); \draw (5,0) -- (6,1); \draw (14,0) -- (15,1); \draw (22,0) -- (23,1); \draw (26,0) -- (27,1) -- (27,0) -- (31,4); \draw (31,0) -- (32,1); 
\node at (-2,-.4) {$64$}; \node at (5,-.4) {$78$}; \node at (10,-.4) {$88$}; \node at (14,-.4) {$96$}; \node at (17,-.4) {$102$}; \node at (22,-.4) {$112$}; \node at (26,-.4) {$120$}; \node at (32,-.4) {$132$}; }} \bigskip \begin{minipage}{6in} \begin{fig}\label{H_*pic} {\bf Portion of $ku_*(K_2)$ corresponding to $B_5$ and $A_5$.} \begin{center} \begin{tikzpicture} \pic[rotate=90,scale=.48,transform shape] {testpic3}; \end{tikzpicture} \end{center} \end{fig} \end{minipage} \bigskip \vfill\eject We observe that in even gradings of the ASS for $ku_*(K_2)$, $h_0$-extensions exactly correspond to exotic extensions in the ASS of $ku^{*+2p}(K_2)$, and vice versa. As a typical example of the duality, the summands of $ku^{82}(K_2)$, $ku^{82}(K_2)^\vee$, and $ku_{78}(K_2)$ in Figures \ref{B7} and \ref{H_*pic} are all isomorphic to $\bold Z_8\oplus\bold Z_2$. But for the $ku_*$-module structure, it is $ku^{82}(K_2)^\vee$ and $ku_{78}(K_2)$ that correspond, since in both, the element that is divisible by 4, in position $(82,0)$ and $(78,7)$, resp., is also divisible by $v^7$ for $A_5$ and by $v^4$ for $B_5$. \begin{thm}\label{ku*thm} The $E_\infty$-term of the ASS of $ku_*(K_2)$ with exotic extensions included contains exactly the following. \begin{itemize} \item There is a trivial $ku_*$-module, which when $p=2$ has generators corresponding to those enumerated at the end of Section \ref{E2sec} with gradings decreased by 4, and similarly when $p$ is odd. \item For every $S_{k,\ell}$ occurring in a summand of Theorem \ref{oddthm}, there is a chart of the same form as Figure \ref{S710} with $v$-towers of height $k+1$ on generators in gradings $2p^{\ell+1}+2(p-1)(i-k_0-1)$ for $1\le i\le \ell-k$. One must add to this the grading of the other factors accompanying $S_{k,\ell}$ in Theorem \ref{oddthm}. 
\item For each occurrence of $B_k$ in Theorem \ref{evthm}, there is a summand $$\Sigma^{2(p^{k+1}+p^{k}+kp-k+1)}\widetilde B_k$$ with gradings increased by those of other factors accompanying $B_k$ in Theorem \ref{evthm}. Here $\widetilde B_k$ is as defined prior to (\ref{Bd}). \item For each summand $y_k^eA_k$ in Theorem \ref{evthm}, there is a variant of $\Sigma^{2(p^{k+1}+p^{k}+kp-k+1)}\widetilde B_k$ with gradings increased by $2ep^k$. In this variant, the initial $v$-towers are pushed up by $k$ filtrations and surrounded with a triangle of classes of the sort appearing in the lower left corner of Figure \ref{H_*pic}. See Remark \ref{rk}. \end{itemize} \end{thm} \begin{proof} Theorem \ref{dual} and our results for $ku^*(K_2)$ give the $ku_*$-module structure of $ku_*(K_2)$, but that is not the same as the ASS picture. Expanding on work done in \cite{DW} and \cite{W} and using methods such as those in Section \ref{E2sec}, we were able to write the $E_2$-term of the ASS for $ku_*(K_2)$, and had conjectured the differentials (but not the extensions) prior to embarking on our $ku$-cohomology project. We were unable to {\em prove} the differentials, probably because we had not taken sufficient advantage of the exact sequence with $k(1)_*(K_2)$. Now that we know the 2-orders and $v$-heights of generators (by grading, at least, if not by name), it is straightforward to see that the differentials must be as we expected. The isomorphism (\ref{Bd}) plays an important role here; the left hand side gives the ASS form of the right hand side. \end{proof} \begin{rmk}\label{rk}{\rm The unusual portion of the ASS chart for part of $ku_*(K_2)$ in the lower left of Figure \ref{H_*pic} is obtained from \cite[Fig.~4.2]{DW} with $d_6$-differentials on all odd-graded towers. For $A_k$, it will be a triangle going up to filtration $k$, with all but the first two dots on the top row being part of $B_k$.}\end{rmk} The structure of the rest of the paper is as follows.
In Section \ref{E2sec}, we compute the $E_2$-term of the ASS for $ku^*(K_2)$. In Section \ref{difflsec} we determine the differentials in this ASS. In order to do so, we need to compare with $k(1)^*(K_2)$, where $k(1)$ is the spectrum for mod-$p$ connective $KU$-theory, using the exact sequence \begin{equation}\label{LES}\to k(1)^{*-1}(K_2)\to ku^*(K_2) \mapright{p} ku^*(K_2)\to k(1)^*(K_2)\to ku^{*+1}(K_2)\mapright{p}.\end{equation} In Section \ref{difflsec}, we restate results about $k(1)^*(K_2)$ from \cite{DRW}. At the end of that section, we show how the descriptions of $ku^*(K_2)$ in Theorems \ref{evthm} and \ref{oddthm} are obtained once we know the differentials. This exact sequence is also used in determining the exotic extensions of (\ref{extns}), which is done in Section \ref{extnsec}. In Section \ref{LESsec}, we propose complete formulas for the exact sequence (\ref{LES}), and then in Section \ref{allsec}, we show that our proposed formulas account for all elements of $k(1)^*(K_2)$ exactly once. The main point of Section \ref{allsec} is to prove that there are no additional exotic extensions in $ku^*(K_2)$. An exotic extension $p\cdot A=B$ implies that $A$ is not in the image from $k(1)^{*-1}(K_2)$, and $B$ does not map nontrivially to $k(1)^*(K_2)$, so once we have shown that all elements are accounted for, there can be no more extensions. Many of our formulas in Section \ref{LESsec} are forced by naturality. However, many others occur in regular families, but with surprising filtration jumps. We could probably prove that the homomorphisms {\em must} be as we claim, by showing that there are no other possibilities, but we prefer to forgo doing that. In the optional Section \ref{optsec}, we discuss in more detail how the charts are obtained and provide an explanation for the duality result (\ref{Bd}). \section{The $E_2$-term of the ASS for $ku^*(K_2)$} \label{E2sec} We will need some notation. By $H^*K_2$, we understand $H^*(K(\bold Z_p,2);\bold Z_p)$.
Let $E$ denote an exterior algebra, $P$ a polynomial algebra, and $TP_n[x]=P[x]/(x^n)$ the truncated polynomial algebra. In all cases these will be over $\bold Z_p$, the integers mod $p$. Let $\overline{E}$ denote the augmentation ideal of an exterior algebra, and $E_1 = E[Q_0,Q_1]$, where $Q_i$ are the Milnor primitives. Because $Q_i^2 = 0$, we have homology groups $H_*(-;Q_i)$ defined for $E_1$-modules. We let $\langle y_1, y_2, \ldots \rangle$ denote the $\bold Z_p$-span of classes $y_i$. The Adams spectral sequence (ASS) for $ku^*(K_2)$ has $E_2^{s,t} = \operatorname{Ext}_{\mathcal{A}}^{s,t}(H^*(bu),H^*K_2)$, where $\mathcal{A}$ is the mod $p$ Steenrod algebra and $H^*(bu) \approx \mathcal{A}/\mathcal{A}(Q_0,Q_1)$. Using a standard change-of-rings theorem \cite{Liu64}, this is $\operatorname{Ext}_{E_1}^{s,t}(\bold Z_p,H^*K_2)$. This converges to $ku^{-(t-s)}(K_2)$. We depict this with $E_2^{s,t}$ in position $(t-s,s)$ as usual, but label the axis with codegrees, the negative of the homotopical degree, so the left side of the chart will have positive gradings and refer to cohomological grading. In an attempt to avoid confusion, we rewrite this as $G_2^{-(t-s),s}$. With this notation, the differentials are $d_r : G_r^{a,b} \longrightarrow G_r^{a+1,b+r}$, multiplication by the element $v \in ku^{-2(p-1)}$ (also considered in $G_r^{-2(p-1),1}$) is $v : G_r^{a,b} \longrightarrow G_r^{a-2(p-1),b+1}$, and multiplication by the element representing $p \in ku^0$ ($h_0 \in G_r^{0,1}$) is $h_0 : G_r^{a,b} \longrightarrow G_r^{a,b+1}$. We will later define elements $z_j \in G_2^{2(p^{j+1}+1),0}$ for $j \ge 0$ and elements $$z_{i,j} \in G_2^{2(p^{j+1}+1+(p-1)(j-i)),0}$$ as in (\ref{zij}) satisfying the properties in Definition \ref{Sdef}. For $j \ge k_0$, we define $W_j = \langle z_{j,j},z_{j-1,j},\ldots, z_{k_0,j}\rangle$.
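For example (a routine check of the grading formula), with $p=2$ and $j=5$ the codegree of $z_{i,5}$ is $2(2^6+1+(5-i))$, so
$$|z_{5,5}|=130,\qquad |z_{4,5}|=132,\qquad |z_{3,5}|=134,\qquad |z_{2,5}|=136,$$
in agreement with the labels in Figure \ref{fig2} below.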
We also have $y_i \in G_2^{2p^i,0}$ for $i\ge0$, and \begin{equation}\label{qdef}q\in G^{9,0}_2\text{ if }p=2,\text{ and in }G^{4p-1,0}_2\text{ if $p$ is odd.}\end{equation} One last definition: let $\Lambda_{j+1}=TP_p[z_i:\,i\ge j+1]$. A picture of $P[v]\otimes W_5$ as a $P[v,h_0]$-module with $p=2$ appears in Figure \ref{fig2}. \bigskip \begin{minipage}{5in} \begin{fig}\label{fig2} {\bf A depiction of $P[v]\otimes W_5$} \begin{center} \begin{tikzpicture}[scale=.55] \draw (-1,0) -- (10,0); \draw [->] (0,0) -- (11,5.5); \draw [->] (2,0) -- (11,4.5); \draw [->] (4,0) -- (11,3.5); \draw [->] (6,0) -- (11,2.5); \draw (2,0) -- (2,1); \draw (4,0) -- (4,2); \draw (6,0) -- (6,3); \draw (8,1) -- (8,4); \draw (10,2) -- (10,5); \node at (0,-.5) {$136$}; \node at (2,-.5) {$134$}; \node at (4,-.5) {$132$}; \node at (6,-.5) {$130$}; \node at (0,-1.3) {$z_{2,5}$}; \node at (2,-1.3) {$z_{3,5}$}; \node at (4,-1.3) {$z_{4,5}$}; \node at (6,-1.3) {$z_{5,5}$}; \node at (0,0) {$\scriptstyle\bullet$}; \node at (2,1) {$\scriptstyle\bullet$}; \node at (4,2) {$\scriptstyle\bullet$}; \node at (6,3) {$\scriptstyle\bullet$}; \node at (8,4) {$\scriptstyle\bullet$}; \node at (10,5) {$\scriptstyle\bullet$}; \node at (2,0) {$\scriptstyle\bullet$}; \node at (4,1) {$\scriptstyle\bullet$}; \node at (6,2) {$\scriptstyle\bullet$}; \node at (8,3) {$\scriptstyle\bullet$}; \node at (10,4) {$\scriptstyle\bullet$}; \node at (4,0) {$\scriptstyle\bullet$}; \node at (6,1) {$\scriptstyle\bullet$}; \node at (8,2) {$\scriptstyle\bullet$}; \node at (10,3) {$\scriptstyle\bullet$}; \node at (6,0) {$\scriptstyle\bullet$}; \node at (8,1) {$\scriptstyle\bullet$}; \node at (10,2) {$\scriptstyle\bullet$}; \end{tikzpicture} \end{center} \end{fig} \end{minipage} \bigskip The remainder of this section is devoted to the proof of the following result.
\begin{thm}\label{E2} The $E_2$ term of the Adams spectral sequence for the reduced $ku^*(K_2)$ is isomorphic as a $P[h_0,v]$-module to $$ P[v,y_1]\otimes E[q] \otimes\bigl( \bigoplus_{j\ge k_0}(W_j\otimes TP_{p-1}[z_j]\otimes\Lambda_{j+1})\bigr) $$ $$ \oplus \bigl(P[h_0,v,y_1]\otimes E[ v^{k_0}q]\bigr) \oplus \biggl( P[y_1]\otimes\begin{cases}\langle y_0^{p-1}z_0\rangle&p\text{ odd}\\ \langle y_0z_0,z_1,h_0y_0z_0=vz_1\rangle&p=2.\end{cases} \biggr) $$ \noindent plus a trivial $P[h_0,v]$-module. \end{thm} Some of the algebra structure of this $E_2$ will be useful later. For example, the product structure among the $z_j$'s will be clear, and also the formula \begin{equation}\label{x9z4}(v^2q)^2 = v^4z_2,\end{equation} holds when $p=2$ since, as we shall see, in $H^*(K_2)$, $x_9^2-Q_0x_{17}\in\on{im}(Q_1)$. We will give a detailed proof when $p=2$, and then sketch the minor changes for odd $p$. There are two parts to proving this theorem. First, we must give a complete description of the $E_1$-module structure of $H^*K_2$. Second, we have to compute $\operatorname{Ext}_{E_1}^{*,*}(\bold Z_2,-)$ of this. We begin the first part. Serre (\cite{Ser}) showed that $H^*K_2$ is a polynomial algebra on classes $u_{2^j+1}$ in degree $2^j+1$ for $j\ge0$ defined by $u_2=\iota_2$ and $u_{2^{j+1}+1}=\on{Sq}^{2^j}u_{2^j+1}$ for $j\ge0$. We easily have $$ Q_0(u_2)=u_3,\ Q_0(u_3)=0,\ Q_0(u_{2^j+1})=u_{2^{j-1}+1}^2\text{ for }j\ge2, $$ and $$ Q_1(u_2)=u_5,\ Q_1(u_3)=u_3^2,\ Q_1(u_5)=0,\ Q_1(u_{2^j+1})=u_{2^{j-2}+1}^4\text{ for }j\ge3. 
$$ Let $x_5=u_5+u_2u_3$ and write $H^*K_2$ as an associated graded object: $$ P[u_2^2]\otimes E[x_5] \otimes \bigl( E[u_2] \otimes P[u_3] \bigr) \otimes_{j\ge 2} \left( E[u_{2^{j+1}+1}]\otimes P[(u_{2^j+1})^2] \right) $$ From this, we can read off \begin{lem} \label{Q0} $$ H_*(H^*K_2;Q_0)=P[u_2^2]\otimes E[x_5] $$ \end{lem} Letting $x_9=u_9+u_3^3$ and $x_{17}=u_{17}+u_2u_5^3$, we rewrite again as \begin{gather*} P[u_2^2] \otimes TP_4[x_9]\otimes TP_4[x_{17}]\otimes_{j>4} E[(u_{2^j+1})^2]\\ \otimes \bigl( E[u_2]\otimes P[u_5]\bigr) \otimes \bigl( E[u_3] \otimes P[u_3^2]\bigr) \otimes_{j>4} \bigl(E[u_{2^j+1}] \otimes P[(u_{2^{j-2}+1})^4]\bigr). \end{gather*} Again we read off \begin{lem}\label{Q1} $$ H_*(H^*K_2;Q_1) = P[u_2^2] \otimes TP_4[x_9] \otimes TP_4[x_{17} ] \otimes_{j>4} E[(u_{2^j+1})^2] $$ \end{lem} An associated graded version of this is \begin{lem}\label{Q1g} $$ H_*(H^*K_2;Q_1) = P[u_2^2] \otimes E[x_9] \otimes E[x_{17} ] \otimes_{j>2} E[(u_{2^j+1})^2] $$ \end{lem} \noindent The bulk of the work here is finding a nice splitting of $H^*K_2$ as an $E_1$-module. Let $N$ be the $E_1$-submodule with a single nonzero element in each of the gradings 5, 7, 8, 9, and 10, with generators $x_5=u_5 + u_2 u_3$, $x_7=u_2u_5$, and $x_9= u_9 + u_3^3$, satisfying $Q_0x_7=Q_1x_5=x_8$ and $Q_0x_9=Q_1x_7=x_{10}$. It has a $Q_0$-homology class $x_5$ and a $Q_1$-homology class $x_9$. This class $x_9$ is called $q$ in Theorem \ref{E2} and in all other sections. A picture of $N$ is in Figure \ref{N}. In pictures such as this, straight lines indicate $Q_0=\on{Sq}^1$ and curved lines $Q_1$. 
\bigskip \begin{minipage}{6in} \begin{fig}\label{N} {\bf An $E_1$-module $N$.} \begin{center} \begin{tikzpicture}[scale=.35] \draw (4,0) -- (6,0); \draw (8,0) -- (10,0); \draw (0,0) to[out=45, in=135] (6,0); \draw (4,0) to[out=315, in=225] (10,0); \node at (0,-.7) {$5$}; \node at (8,.7) {$9$}; \node at (4,-1) {$7$}; \node at (10,.7) {$10$}; \node at (0,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (4,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (6,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (8,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (10,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \end{tikzpicture} \end{center} \end{fig} \end{minipage} \bigskip The $E_1$-submodule $P[u_2^2]\oplus P[u_2^2]\otimes N$ carries the $Q_0$-homology of $H^*K_2$, while the remaining $Q_1$-homology is, written in our usual way as an associated graded version, \begin{equation}\label{rest} P[u_2^2]\otimes E[x_9]\otimes\overline{E}[x_{17},u_{2^j+1}^2,\ j > 2]. \end{equation} We will exhibit a $Q_0$-free $E_1$-submodule $R$ whose $Q_1$-homology is exactly the above $\overline{E}$. Moreover, $N\otimes R$ contains an $E_1$-split summand $S$ which maps isomorphically to $\langle x_9\rangle\otimes R$. It is premature to state this because we haven't defined $R$ and $S$ yet, but for the record: \begin{prop} \label{T} As an $E_1$ module, $\widetilde{H}^*K_2$ is isomorphic to $T \oplus F$ where $F$ is free over $E_1$ and $T$ is $$ P[u_2^2]\otimes \bigl(\langle u_2^2\rangle\oplus N\oplus R\oplus S\bigr) $$ \end{prop} \begin{center} \textbf{A start on $R$ and $S$.} \end{center} For this to make sense, we need to find $R$ and $S$. The module $R$ is a direct sum of shifted versions of modules $L_k$, $k \ge 0$, which have generators $g_{2i}$, $0\le i\le k$, with $Q_1g_{2i}=Q_0g_{2i+2}$ for $0\le i<k$, $Q_0g_0\ne0$, and $Q_1g_{2k}=0$. 
For example, $L_3$ is depicted in Figure \ref{L3}. \bigskip \begin{minipage}{6in} \begin{fig}\label{L3} {\bf The $E_1$-module $L_3$.} \begin{center} \begin{tikzpicture}[scale=.35] \draw (0,0) -- (2,0); \draw (4,0) -- (6,0); \draw (8,0) -- (10,0); \draw (12,0) -- (14,0); \draw (0,0) to[out=45, in=135] (6,0); \draw (4,0) to[out=315, in=225] (10,0); \draw (8,0) to[out=45, in=135] (14,0); \node at (0,-.7) {$g_0$}; \node at (4,-1) {$g_2$}; \node at (8,1) {$g_4$}; \node at (12,-.7) {$g_6$}; \node at (0,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (2,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (4,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (6,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (8,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (10,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (12,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (14,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \end{tikzpicture} \end{center} \end{fig} \end{minipage} \bigskip A splitting map, $\langle x_9\rangle \otimes L_k \longrightarrow N \otimes L_k$, for the epimorphism $N\otimes L_k\to \langle x_9\rangle \otimes L_k$ is defined by $$ x_9 g_{2i} \mapsto x_9\otimes g_{2i}+x_7\otimes g_{2i+2}+x_5\otimes g_{2i+4}\text{ for }0\le i\le k-2,$$ $x_9 g_{2k-2} \mapsto x_9\otimes g_{2k-2}+x_7\otimes g_{2k}$, and $x_9 \otimes g_{2k} \mapsto x_9\otimes g_{2k}$. \bigskip \begin{center} \textbf{The $E_1$-module $M_j$} \end{center} Let $$ x_{2^j+1}=u_{2^j+1}+ \begin{cases}u_2u_5^3&j=4\\ u_2u_3u_5^2u_9^2&j=5\\ u_3u_5^2u_9^2u_{17}^2&j=6\\ 0&j>6 \end{cases} \text{ and } w_{2^j-1}= \begin{cases}u_2u_3u_5^2&j=4\\ u_3u_5^2u_9^2&j=5\\ 0&j>5. \end{cases}$$ Then $Q_0x_{2^j+1}=u_{2^{j-1}+1}^2+Q_1w_{2^j-1}$, so $Q_0x_{2^j+1}$ and $u_{2^{j-1}+1}^2$ represent the same $Q_1$-homology class. 
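As a sample of how these choices fit together, the case $j=4$ of the displayed relation can be checked directly from the formulas for $Q_0$ and $Q_1$ given earlier in this section:

```latex
\begin{align*}
Q_0x_{17} &= Q_0u_{17} + Q_0(u_2u_5^3)
           = u_9^2 + u_3u_5^3 + u_2u_3^2u_5^2,\\
Q_1w_{15} &= Q_1(u_2u_3u_5^2)
           = u_5\,u_3u_5^2 + u_2\,u_3^2\,u_5^2
           = u_3u_5^3 + u_2u_3^2u_5^2,
\end{align*}
% so indeed Q_0 x_{17} = u_9^2 + Q_1 w_{15}, using
% Q_0u_2=u_3, Q_0u_5=u_3^2, Q_0u_{17}=u_9^2,
% Q_1u_2=u_5, Q_1u_3=u_3^2, and Q_1u_5=0.
```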
Define $E_1$-modules $M_j$ inductively by $M_3=0$, and for $j\ge4$ there is a short exact sequence of $E_1$-modules \begin{equation}\label{Mdef} 0\to u_{2^{j-2}+1}^2M_{j-1}\to M_j\to M_j'\to0, \end{equation} where $M_j'=\langle x_{2^j+1},Q_0x_{2^j+1}\rangle$ and $Q_1x_{2^j+1}=u_{2^{j-2}+1}^2Q_0x_{2^{j-1}+1}$. The above definitions of the $x_{2^j+1}$ are necessary to get this formula to work right. There is an isomorphism of $E_1$-modules $M_j\approx\Sigma^{2^j+1}L_{j-4}$ given by \begin{equation}\label{LM} \Sigma^{2^j+1} g_{2i} \mapsto \begin{cases} x_{2^j+1} & i = 0 \\ u_{2^{j-2}+1}^2 x_{2^{j-1}+1} & i = 1 \\ u_{2^{j-2}+1}^2 u_{2^{j-3}+1}^2 x_{2^{j-2}+1} & i = 2 \\ u_{2^{j-2}+1}^2 u_{2^{j-3}+1}^2 \cdots u_{2^{j-i-1}+1}^2 x_{2^{j-i}+1} & 2 < i \le j-4 \\ \end{cases} \end{equation} And we have \begin{equation}\label{two} H_*(M_j;Q_1)= \begin{cases} \langle u_9^2 , u_{17} \rangle & j = 4 \\ \langle u_{17}^2 , u_9^2 u_{17} \rangle & j = 5 \\ \langle u_{33}^2 , u_{17}^2 u_9^2 u_{17} \rangle & j = 6 \\ \langle u_{2^{j-1}+1}^2,u_{2^{j-2}+1}^2\cdots u_9^2x_{17}\rangle & j > 6 \\ \end{cases} \end{equation} \bigskip \begin{center} \textbf{The $E_1$-module $R$} \end{center} Let \begin{equation}\label{R}R=\bigoplus_{j\ge4}M_j\otimes E[u_{2^j+1}^2,u_{2^{j+1}+1}^2,\ldots].\end{equation} Then $H_*(R;Q_1)=\overline{E}[x_{17},u_9^2,u_{17}^2,\ldots]$, since monomials in $\overline{E}$ without $x_{17}$ appear from a first term (of the two in (\ref{two})) in $H_*(M_j\otimes E;Q_1)$, where $j$ is minimal such that $u_{2^{j-1}+1}^2$ appears in the monomial, while those with $x_{17}$, and also containing a product $u_9^2\cdots u_{2^{j-2}+1}^2$ of maximal length, occur as a second term in $H_*(M_j\otimes E;Q_1)$. \begin{proof}[Proof of Proposition \ref{T}] We have the $E_1$-submodule $T$ given in Proposition \ref{T}. Because this contains all of the $Q_0$ and $Q_1$ homology, what remains must be free over $E_1$ by \cite{Wal62}. 
\end{proof} \begin{proof}[Proof of Theorem \ref{E2}] We compute $\operatorname{Ext}_{E_1}(\bold Z_2,T)$ with $T$ as in Proposition \ref{T}. We will not be concerned with the free $E_1$-module $F$, but we will give the Poincar\'e series for it later. Each copy of $E_1$ in $F$ gives a $\bold Z_2$ in $G^{*,0}$ corresponding to the top class $Q_0Q_1$. That $$ \operatorname{Ext}_{E_1}^{*,*}(\bold Z_2,P[u_2^2])=P[v,h_0,y_1] $$ with $y_1\in G_2^{4,0}$ should be clear, given our labeling conventions. We normally work with the reduced cohomologies, so the $y_1^0$ generator above would be ignored. The $y_1$ notation is particularly useful when we consider all primes $p$. It is $y_0^{p}$, where $y_0\in G_2^{2,0}$, so $|y_1|=2p$. We compute $\operatorname{Ext}_{E_1}(\bold Z_2,N)$ in two ways, using two different filtrations of $N$. From the two computations we will see that the generator of the resulting towers can be thought of either as $v^2x_9$ or as $h_0^2x_5$. Using Figure \ref{N} as our guide, our first filtration is $\langle x_5, x_{8} \rangle $, $\langle x_7, x_{10} \rangle $, and $\langle x_9 \rangle $. The $\operatorname{Ext}$ on $x_9\in G^{9,0}$ is just $P[v,h_0]$. For the other two, we get $h_0$-towers on $x_{10}\in G^{10,0}$ and $x_8\in G^{8,0}$. The extensions in $N$ show these two $h_0$-towers are connected by multiplication by $v$. In addition, a $d_1$ is forced on us by the extensions. Figure \ref{fig3} describes this completely. 
\medskip \begin{minipage}{6in} \begin{fig}\label{fig3} {\bf The first computation of $\operatorname{Ext}_{E_1}(\bold Z_2,N)$} \begin{center} \begin{tikzpicture}[scale=.45] \draw (-1,0) -- (7,0); \draw (0,0) -- (0,5); \draw [dotted] (0,0) -- (2,1); \draw [dotted] (0,1) -- (2,2); \draw [dotted] (0,2) -- (2,3); \draw [dotted] (1,0) -- (7,3); \draw [dotted] (1,1) -- (7,4); \draw [dotted] (1,2) -- (7,5); \draw (1,0) -- (1,5); \draw (2,0) -- (2,5.5); \draw (3,1) -- (3,5.5); \draw (5,2) -- (5,6); \draw (7,3) -- (7,7); \draw [- >] (1,0) -- (.1,.9); \draw [- > ] (1,1) -- (.1,1.9); \draw [->] (1,2) -- (.1,2.9); \draw [->] (3,1) -- (2.1,1.9); \draw [->] (3,2) -- (2.1,2.9); \draw [->] (3,3) -- (2.1,3.9); \node at (0,-.5) {$10$}; \node at (2,-.5) {$8$}; \node at (5,-.5) {$5$}; \node at (7,-.5) {$3$}; \draw (9.5,0) -- (17.5,0); \draw (12,0) -- (12,1); \draw [dotted] (10,0) -- (12,1); \draw (15,2) -- (15,6); \draw [dotted] (15,2) -- (17,3); \draw [dotted] (15,3) -- (17,4); \draw [dotted] (15,4) -- (17,5); \draw (17,3) -- (17,7); \node at (10,-.5) {$10$}; \node at (12,-.5) {$8$}; \node at (15,-.5) {$5$}; \node at (17,-.5) {$3$}; \node at (8.5,2) {$\Rightarrow$}; \node at (14,2) {$v^2x_9$}; \end{tikzpicture} \end{center} \end{fig} \end{minipage} \medskip Again referring to Figure \ref{N}, our second filtration is $\langle x_9, x_{10} \rangle$, $\langle x_7, x_{8} \rangle$, and $\langle x_5 \rangle$. Now our $\operatorname{Ext}$ groups are $P[v,h_0]$ on $x_5 \in G^{5,0}$, $P[v]$ on $x_8 \in G^{8,0}$ and $x_{10} \in G^{10,0}$. Again, the $d_1$ is forced by the extensions in $N$. Figure \ref{fig4} describes the result. 
\medskip \begin{minipage}{6in} \begin{fig}\label{fig4} {\bf The second computation of $\operatorname{Ext}_{E_1}(\bold Z_2,N)$} \begin{center} \begin{tikzpicture}[scale=.45] \draw (-1,0) -- (7,0); \draw [dotted] (0,0) -- (8,4); \draw [dotted] (2,0) -- (8,3); \draw (2,0) -- (2,1); \draw (4,1) -- (4,2); \draw (6,2) -- (6,3); \draw (5,0) -- (5,6); \draw [dotted] (5,0) -- (7,1); \draw [dotted] (5,1) -- (7,2); \draw [dotted] (5,2) -- (7,3); \draw [dotted] (5,3) -- (7,4); \draw [dotted] (5,4) -- (7,5); \draw (7,1) -- (7,7); \draw [->] (5,0) -- (4.1,.9); \draw [->] (5,1) -- (4.1,1.9); \draw [->] (7,1) -- (6.1,1.9); \draw [->] (7,2) -- (6.1,2.9); \draw (9.5,0) -- (17.5,0); \draw (12,0) -- (12,1); \draw [dotted] (10,0) -- (12,1); \draw (15,2) -- (15,6); \draw [dotted] (15,2) -- (17,3); \draw [dotted] (15,3) -- (17,4); \draw [dotted] (15,4) -- (17,5); \node at (8.5,2) {$\Rightarrow$}; \draw (17,3) -- (17,7); \node at (10,-.5) {$10$}; \node at (12,-.5) {$8$}; \node at (15,-.5) {$5$}; \node at (17,-.5) {$3$}; \node at (8.5,2) {$=$}; \node at (14,2) {$h_0^2x_5$}; \node at (0,-.5) {$10$}; \node at (2,-.5) {$8$}; \node at (5,-.5) {$5$}; \node at (7,-.5) {$3$}; \end{tikzpicture} \end{center} \end{fig} \end{minipage} \medskip This concludes the computation of $\operatorname{Ext}$ for $P[u_2^2]\otimes (\langle u_2^2 \rangle \oplus N)$ of Proposition \ref{T}. The result is the second line of Theorem \ref{E2}. We need to compute $\operatorname{Ext}$ for $P[u_2^2] \otimes (R \oplus S)$ and show it is the same as the top line in Theorem \ref{E2}. Since $S \approx \langle x_9 \rangle \otimes R$, all we need to do is $P[u_2^2] \otimes R$ and ignore the $E[x_9]$. Similarly we can ignore the $P[u_2^2]$ and the $P[y_1]$ because for every power of $u_2^2$ we will have a copy of the answer indexed by powers of $y_1$. All we have left now is $R$, but $R$ is just many copies of the various $M_j$ and the indexing for the number of copies is given by the $\Lambda_{j+1}$. 
All that remains is to show that $\operatorname{Ext}_{E_1}(\bold Z_2,M_j) \approx P[v]\otimes W_{j-2}$.\footnote{The reason for this awkward shift is that the gradings for $z_j$ which give the elegant statements in Definition \ref{ABdef} and elsewhere are not particularly convenient in developing the $E_2$ statement.} Recall that $M_j = \Sigma^{2^j+1} L_{j-4}$. We can filter $L_{j-4}$ into pairs of elements $g_{2i}, Q_0 g_{2i}$, for $0 \le i \le j-4$. $\operatorname{Ext}$ for each of these gives a $P[v]$ on the element $Q_0 g_{2i}$ represented by $z_{j-i-2,j-2} \in G^{2^j+2+2i,0}$. There is no $d_1$, but undoing the filtration does solve the extension problem and gives us $h_0 z_{k,j-2} = v z_{k-1,j-2}$. This completes our computation and thus our proof. \end{proof} \begin{rmk}{\rm To illustrate the last computation in the proof, consider the generators of the $v$-towers for $\operatorname{Ext}_{E_1}(\bold Z_2,M_7)$. They are $z_5$, $z_4^2$, $z_3^2z_4$, and $z_2^2z_3z_4$, which are the elements we have called $z_{5,5}$, $z_{4,5}$, $z_{3,5}$, and $z_{2,5}$, as pictured in Figure \ref{fig2}. For future reference, we note that (with $\sim$ meaning homologous)} \begin{equation}\label{zj}z_j=Q_0x_{2^{j+2}+1}\sim u_{2^{j+1}+1}^2=Q_0u_{2^{j+2}+1}=Q_0Q_{j+2}\iota_2=Q_{j+2}Q_0\iota_2.\end{equation} \end{rmk} We now describe briefly the changes required when $p$ is odd. We have $$H^*(K_2)=P[y_0]\otimes P[g_1,g_2,\ldots]\otimes E[u_0,u_1,\ldots],$$ with $|y_0|=2$, $|g_i|=2(p^i+1)$, $|u_i|=2p^i+1$, $Q_0y_0=u_0$, $Q_0u_i=g_i$, $Q_1y_0=u_1$, $Q_1u_0=g_1$, $Q_1u_i=g_{i-1}^p$, $i\ge2$. Let $y_1=y_0^p$. Then, similarly to the case $p=2$, $$H_*(H^*K_2;Q_0)=P[y_1]\otimes E[y_0^{p-1}u_0].$$ Let $N=\langle y_0^{p-1}u_0, q=y_0^{p-1}u_1, Q_0q=Q_1(y_0^{p-1}u_0)\rangle$. Then $P[y_1]\oplus P[y_1]\otimes N$ carries the $Q_0$-homology and part of the $Q_1$-homology. 
Similarly to (\ref{rest}), the rest of the $Q_1$-homology is $$P[y_1]\otimes E[q]\otimes \overline{E[w_1]\otimes TP_p[g_2,g_3,\ldots]},$$ where $w_1=u_2+u_0g_1^{p-1}$. There are $E_1$-submodules $M_j$ for $j\ge2$, defined inductively by $M_2=\langle w_1,g_2=Q_0w_1\rangle$, $M_j'=\langle u_j,g_j=Q_0u_j \rangle$ for $j\ge3$, and for $j\ge3$, there exists a short exact sequence of $E_1$-modules $$0\to g_{j-1}^{p-1}M_{j-1}\to M_j\to M_j'\to0,$$ with $Q_1u_j=g_{j-1}^p$. There is an isomorphism of $E_1$-modules $M_j\approx \Sigma^{2p^j+1}L_{j-2}$, where $L_j$ is similar to Figure \ref{L3}, but with $i$th generator ($i\ge0$) in grading $2(p-1)i$ rather than $2i$. Let $$R=\bigoplus_{j\ge2}M_j\otimes TP_{p-1}[g_j]\otimes TP_p[g_{j+1},\ldots].$$ Then $H_*(R;Q_1)=\overline{E[w_1]\otimes TP_p[g_2,g_3,\ldots]}$, and so, similarly to Proposition \ref{T}, up to free $E_1$-modules \begin{equation}\label{Rp}H^*K_2\approx P[y_1]\otimes(\langle y_1\rangle\oplus N\oplus R\oplus qR).\end{equation} Similarly to Figure \ref{fig4}, $\operatorname{Ext}_{E_1}(\bold Z_p,N)$ can be read off from Figure \ref{Nfig}. This gives the third summand and $vq$ part of the second summand in Theorem \ref{E2}, while the $\langle y_1\rangle$ part of (\ref{Rp}) gives the non-$vq$ part of the second summand. For the first summand in Theorem \ref{E2}, we replace $g_j$ by $z_{j-1}$, and then note that $\operatorname{Ext}_{E_1}(\bold Z_p,M_j)\approx P[v]\otimes W_{j-1}$, similar to Figure \ref{fig2}. For example, $M_3$ has $v$-towers on $g_3$ and $g_2^p$, which are renamed $z_2=z_{2,2}$ and $z_1^p=z_{1,2}$, the generators of the $v$-towers of $W_2$. This completes our sketch of proof of Theorem \ref{E2} when $p$ is odd. 
\bigskip \begin{minipage}{6in} \begin{fig}\label{Nfig} {\bf Computation of $\operatorname{Ext}_{E_1}(\bold Z_p,N)$} \begin{center} \begin{tikzpicture}[scale=.45] \draw (-1,0) -- (25,0); \draw (9,0) -- (9,6); \draw (19,1) -- (19,7); \node at (0,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (8,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (18,2) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (9,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (9,2) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (19,2) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (19,3) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (0,-.8) {$4p$}; \node at (0,-1.8) {$y_0^{p-1}g_1$}; \node at (13,-1) {$vq$}; \draw [->] [dashed] (13,-.7) -- (9.2,.8); \node at (9,-.8) {$2p+1$}; \node at (19,-.8) {$3$}; \draw [->] (9,0) -- (8.3,.7); \draw [->] (19,1) -- (18.3,1.7); \node at (22,3) {$\cdot$}; \node at (22.5,3.5) {$\cdot$}; \node at (23,4) {$\cdot$}; \end{tikzpicture} \end{center} \end{fig} \end{minipage} \bigskip We explain here the reason for the $k_0$ in Definition \ref{ABdef}. In Theorem \ref{E2}, $y_0^{p-1}z_0$ and $z_1$ are in the part that is not multiplied by higher $z$'s when $p=2$, but when $p$ is odd, they form the module $M_2$, whose Ext is $P[v]\otimes W_1$, which is multiplied by higher $z$'s. Since $B_k$'s are multiplied by higher $z$'s, but $A_k$'s are not, this explains why $z_1$ is in $B_1$ when $p$ is odd, but not when $p=2$. The reason for the split in Theorem \ref{E2} is the difference in the submodules $N$. Its second class is $y_0^{p-1}Q_1y_0$ in each. Applying $Q_1$ yields $y_0^{p-2}(Q_1y_0)^2$. This is 0 when $p$ is odd, but not when $p=2$. 
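The Poincar\'e series manipulation promised at the end of this section (``easy to put into a computer'') amounts to truncated power-series arithmetic. A minimal Python sketch follows, expanding only the first term, the PS of $H^*K_2$; the cutoff degree and all names are our own choices, and the full count of free generators would subtract the other two terms of the displayed formula in the same way.

```python
# Truncated power-series arithmetic over Z, used to expand the Poincare
# series of H^*K_2 = P[u_2, u_3, u_5, u_9, ...], whose generators lie in
# degrees 2^k + 1.  Series are coefficient lists truncated at degree N.

N = 40  # truncation degree (an arbitrary choice for illustration)

def mul(a, b):
    """Multiply two truncated series given as coefficient lists."""
    c = [0] * (N + 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j > N:
                    break
                c[i + j] += ai * bj
    return c

def geometric(d):
    """The series 1/(1 - x^d), truncated at degree N."""
    s = [0] * (N + 1)
    for i in range(0, N + 1, d):
        s[i] = 1
    return s

# Poincare series of H^*K_2: product over k >= 0 of 1/(1 - x^(2^k + 1)).
ps = [1] + [0] * N
k = 0
while 2**k + 1 <= N:
    ps = mul(ps, geometric(2**k + 1))
    k += 1

print(ps[:8])  # -> [1, 0, 1, 1, 1, 2, 2, 2]
```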
The reason that the portion of Ext corresponding to $N$ is not multiplied by higher $z$'s is that it gives part of the $Q_0$-homology, and this is not multiplied by higher $z$'s. \bigskip We close this section with an enumeration of the unimportant $\bold Z_2$-classes in $ku^*(K_2)$ when $p=2$. \begin{center} \textbf{More on the $E_1$-free part when $p=2$} \end{center} If we compute $\operatorname{Ext}_{E_1}(\bold Z_2,F)$ for the $E_1$-free part of $H^*K_2$, we just get a $\bold Z_2$ corresponding to the top element of each copy of $E_1$. If we find the Poincar\'e series (PS) for the free part, all we have to do to get the PS for these elements is multiply by $\frac{x^4}{(1+x)(1+x^3)}$. The Poincar\'e series for the free part is obtained by subtracting the PS for the non-free part of Proposition \ref{T} from that of $H^*K_2$. This is: $$ \prod_{k\ge 0} \frac{1}{(1-x^{2^k+1})} -\frac{1}{(1-x^4)} \bigl(1 + x^5+x^7+x^8+x^9+x^{10}\bigr) $$ $$ -\frac{1}{(1-x^2)(1-x^4)} \sum_{j\ge 4} \bigl( x^{2^j+1}(1+x^9)(1+x)(1-x^{2j-6}) \prod_{k\ge j} (1+x^{2^{k+1}+2}) \bigr) $$ The first term is the PS for $H^*K_2$. The second is the PS for $P[u_2^2]\otimes (\langle 1 \rangle \oplus N)$. The last term is more complicated; it accounts for the $S$ and $R$ summands. The $(1-x^4)$ in the denominator is for the $P[u_2^2]$. The $x^9$ is the shift that takes $R$ to $S$. The $(1+x)$ is because they are $Q_0$-free. The $x^{2^j+1} (1-x^{2j-6})/(1-x^2)$ is for the odd part of $M_j$ and the remainder is for $\Lambda$. This is easy to put into a computer and calculate. For example, the number of free generators in degree 79 is 245. \section{Differentials in the ASS of $ku^*(K_2)$} \label{difflsec} The main theorem of this section determines the differentials in the ASS for $ku^*(K_2)$. \begin{thm}\label{diffl} The differentials in the spectral sequence whose $E_2$-term was given in Theorem \ref{E2} are as follows. 
All $v$-towers are involved, either as source or target, in exactly one of these. Here $M$ refers to any monomial (possibly $=1$) in the specified algebra. Recall that $\Lambda_j=TP_p[z_i:\,i\ge j]$, which is an exterior algebra if $p=2$. Also, recall $y_t=y_1^{p^{t-1}}$. We give reference numbers to the differentials when $p$ is odd, but references to these also apply to the corresponding differential when $p=2$, as the proofs are extremely similar. First with $p=2$. \begin{eqnarray*} d_{\nu(i)+2}(y_1^i)&=&h_0^{\nu(i)}v^2q y_1^{i-1},\ i\ge1;\\ d_{\nu(i)+2}(y_1^iz_jM)&=&v^{\nu(i)+2}q y_1^{i-1}z_{j-\nu(i),j}M,\\ &&j\ge \nu(i)+2,\ M\in\Lambda_j;\\ d_{2^t-t}(h_0^{t-2}v^2q y_1^{2^{t-1}-1}M)&=&v^{2^t}z_tM,\\ &&t\ge2,\ M\in P[y_t];\\ d_{2^t-t}(q y_1^{2^{t-1}-1}z_{j-(t-2),j}M)&=&v^{2^t-t}z_tz_jM,\\ &&j\ge t\ge2,\ M\in P[y_t]\otimes \Lambda_{j+1}.\end{eqnarray*} Now with $p$ odd. \begin{eqnarray}d_{\nu(i)+2}(y_1^i)&=&h_0^{\nu(i)+1}vq y_1^{i-1},\ i\ge1;\label{1}\\ d_{\nu(i)+2}(y_1^iz_jM)&=&v^{\nu(i)+2}q y_1^{i-1}z_{j-\nu(i)-1,j}M,\label{2}\\ &&j\ge\nu(i)+2,\ M\in \Lambda_j;\nonumber\\ d_{p^t-t}(h_0^{t-1}vq y_1^{p^{t-1}-1}M)&=&v^{p^t}z_tM,\label{3}\\ &&t\ge1,\ M\in P[y_t];\nonumber\\ d_{p^t-t}(q y_1^{p^{t-1}-1}z_{j-(t-1),j}M)&=&v^{p^t-t}z_tz_jM,\label{4}\\ &&j\ge t\ge1,\ M\in P[y_t]\otimes TP_{p-1}[z_j]\otimes \Lambda_{j+1}.\nonumber\end{eqnarray} \end{thm} The proof occupies the rest of this section; at the end of the section we also explain briefly how, apart from the exotic extensions, this leads to our description of $ku^*(K_2)$ in Section \ref{intro}. By \cite[Theorem A]{Tam}, $Q_jQ_0\iota_2$ is in the image from $BP^*(K_2)$, and hence must be a permanent cycle in our ASS. Thus by (\ref{zj}), $z_j$ is a permanent cycle, and so (\ref{2}) follows from (\ref{1}), and (\ref{4}) follows from (\ref{3}), using $pz_{i,\ell}=vz_{i-1,\ell}$, as noted in Definition \ref{Sdef}. 
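The lengths of these differentials are governed by the $p$-adic valuation $\nu(i)$. A throwaway Python check of the resulting pattern of $d_{\nu(i)+2}$ on $y_1^i$ for $p=2$ (the function and variable names are ours, purely for illustration):

```python
# p-adic valuation nu(i): the exponent of the largest power of p dividing i.
# For p = 2 this governs the differential d_{nu(i)+2}(y_1^i) above.

def nu(i, p=2):
    """Return the p-adic valuation of a positive integer i."""
    v = 0
    while i % p == 0:
        i //= p
        v += 1
    return v

# Differential lengths nu(i) + 2 on y_1^i for i = 1..8 when p = 2:
lengths = [nu(i) + 2 for i in range(1, 9)]
print(lengths)  # -> [2, 3, 2, 4, 2, 3, 2, 5]
```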
The differentials (\ref{1}) follow from the result of \cite{Br} that $H^{2pi+1}(K_2;\bold Z)\approx\bold Z/p^{\nu(i)+2}\oplus\bigoplus\bold Z_p$. See also \cite[Proposition 1.3.5]{Cle} when $p=2$. The ASS converging to $H^*(K_2;\bold Z)$ has $E_2=\operatorname{Ext}_{A_0}(\bold Z_p,H^*K_2)$, where $A_0=\langle 1,Q_0\rangle$. We depict this $E_2$ similarly to our ASS for $ku^*(K_2)$. It has an $h_0$-tower for each element of $H_*(H^*K_2;Q_0)$, which was described in Lemma \ref{Q0}. These come in pairs in grading $2pi$ and $2pi+1$ corresponding to $y_1^i$ and $y_1^{i-1}y_0^{p-1}u_0$. In order to get the $\bold Z/p^{\nu(i)+2}$, there must be a $d_{\nu(i)+2}$-differential, as pictured on the right hand side of Figure \ref{Brfig}. Similarly to Figures \ref{fig3} and \ref{fig4}, we have, for $p=2$ and $i\ge1$, an $h_0$-tower in the ASS for $ku^*(K_2)$ arising from $G^{4i+1,2}$, called either $h_0^2y_1^{i-1}x_5$ or $v^2y_1^{i-1}q$. There is also an $h_0$-tower arising from $y_1^i\in G^{4i,0}$. The classes $y_1$ and $x_5$ correspond to cohomology classes $u_2^2$ and $u_5+u_2u_3$. Under the morphism $ku^*(K_2)\to H^*(K_2;\bold Z)$, these towers map across, as suggested in Figure \ref{Brfig}. We deduce the $d_{\nu(i)+2}$-differential claimed in (\ref{1}), propagated by the action of $v$. Note that $x_9=q$. 
\bigskip \begin{minipage}{6in} \begin{fig}\label{Brfig} {\bf $ ku^*(K_2)\to H^*(K_2;\bold Z)$} \begin{center} \begin{tikzpicture}[scale=.55] \draw (-1,0) -- (3,0); \draw (.4,2.2) -- (0,2) -- (0,7); \draw (2.4,.2) -- (2,0) -- (2,7); \draw [->] (2,0) -- (.5,4); \draw (7,0) -- (11,0); \draw (8,0) -- (8,7); \draw (10,0) -- (10,7); \draw [->] (10,0) -- (8.5,4); \node at (2,-.5) {$2pi$}; \node at (0,-.5) {$2pi+1$}; \node at (10,-.5) {$2pi$}; \node at (8,-.5) {$2pi+1$}; \node at (1,-1.3) {$ku^*(K_2)$}; \node at (9,-1.3) {$H^*(K_2;\bold Z)$}; \node at (5,2) {$\longrightarrow$}; \end{tikzpicture} \end{center} \end{fig} \end{minipage} \bigskip The situation when $p$ is odd is extremely similar, using Figure \ref{Nfig}. The difference is that the $h_0$-tower in $2pi+1$ in the $ku^*$ ASS starts in filtration 1 rather than 2. Its generator can be called $vy_1^{i-1}q$. In Figure \ref{low}, we depict many of the differentials asserted in Theorem \ref{diffl} in grading $\le36$ when $p=2$. Regarding the third (final) summand in Theorem \ref{E2}, which is $P[y_1]\otimes A_1$ when $p=2$, we have included $y_1A_1$, $y_1^3A_1$, and $y_1^5A_1$. Not included are the portions involving (\ref{1}) and (\ref{2}) when $i$ is odd, as this portion self-annihilates. What is shown is (\ref{1}) for $i=2$, 4, and 6, (\ref{3}) for $(t,k)=(1,0)$, $(1,1)$, $(1,2)$, and $(2,0)$, and (\ref{4}) with $t=1$, $k=0$, and $j=4$. 
\bigskip \tikzset{ testpic2/.pic= {\draw (2,0) -- (20.6,9.3); \draw [color=blue] (-1,0) -- (29,0); \draw (18,0) -- (28.6,5.3); \draw (27,2) -- (29.6,3.3); \draw [color=red] (27,2) -- (26,4); \draw [color=red] (29,3) -- (28,5); \node at (18,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (19.6,.8) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (22,2) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (24,3) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (26,4) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (28,5) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (27,2) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (29,3) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (2,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (4,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (6,2) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (8,3) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (10,4) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (12,5) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (14,6) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (16,7) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (18,8) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (20,9) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (0,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (2,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (4,2) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (6,3) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (8,4) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \draw 
(0,0) -- (8.6,4.3); \draw (2,0) -- (2,1); \draw (4,1) -- (4,2); \draw (6,2) -- (6,3); \draw (8,3) -- (8,4); \draw (5,0) -- (9.6,2.3); \draw [color=red] (5,0) -- (4,2); \draw [color=red] (7,1) -- (6,3); \draw [color=red] (9,2) -- (8,4); \node at (5,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (7,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (9,2) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (0,-.5) {$36$}; \node at (4,-.5) {$32$}; \node at (8,-.5) {$28$}; \node at (12,-.9) {$24$}; \node at (16,-.5) {$20$}; \node at (20,-.5) {$y_1^4$}; \node at (20,-.9) {$16$}; \node at (24,-.5) {$12$}; \node at (28,-.5) {$8$}; \node at (6,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (8,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (14,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (16,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (22,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (24,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (8,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (16,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (24,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \draw (6,0) -- (8,1) -- (8,0); \draw (14,0) -- (16,1) -- (16,0); \draw (22,0) -- (24,1) -- (24,0); \node at (22,-.4) {$y_1z_1$}; \node at (0,.4) {$z_2^2$}; \node at (1.7,.3) {$z_3$}; \node at (2.4,-.4) {$y_1^4z_2$}; \node at (5.3,-.4) {$y_1qz_2$}; \node at (18,-.4) {$z_2$}; \node at (26.3,2) {$v^2y_1q$}; \draw (10,0) -- (20.6,5.3); \draw (19,5.3) -- (19,2) -- (21,3) -- (21,4) -- (19,3); \draw [color=red] (19,2) -- (18,4); \draw [color=red] (21,3) -- (20,5); \draw [color=red] (19,3) -- (18,8); \draw [color=red] (21,4) -- (20,9); \node at (10,-.4) {$y_1^2z_2$}; \node at (18.3,2) {$v^2y_1^3q$}; \draw (21,3) 
-- (21.6,3.3); \draw (21,4) -- (21.6,4.3); \draw (28,0) -- (28,1.3); \draw (27,2) -- (27,4.3); \draw [color=red] (28,0) -- (27,3); \draw [color=red] (28,1) -- (27,4); \node at (28.3,.3) {$y_1^2$}; \node at (28,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (28,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (27,3) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (27,4) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \draw (20,0) -- (20,1.3); \node at (20,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (20,1.2) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \draw (20,0) -- (20.6,.3); \draw (20,1.2) -- (20.6,1.5); \draw [color=red] (20,0) -- (19,4); \draw [color=red] (20,1.2) -- (19,5); \node at (10,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (12,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (14,2) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (16,3) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (18,4) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (20,5) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (19,2) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (19,3) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (19,4) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \draw (19,4) -- (19.6,4.3); \draw (19,5) -- (19.6,5.3); \node at (19,5) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (21,3) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (21,4) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \draw (21,4) -- (21,4.3); \node at (2.3,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (4.3,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (6.3,2) 
{\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (8.3,3) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (10.3,4) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (12.3,5) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \draw (2.3,0) -- (12.9,5.3); \draw (11,2) -- (13.6,3.3); \node at (11,2) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (13,3) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \draw [color=red] (11,2) -- (10.3,4); \draw [color=red] (13,3) -- (12.3,5); \draw (12,0) -- (12,.5); \node at (12,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (12,-.4) {$y_1^6$}; \draw (11,2) -- (11,3.3); \node at (11,3) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \draw [color=red] (12,0) -- (11,3); \node at (11,1.6) {$v^2y_1^5q$}; \draw [color=red] (19,3) -- (18,8); \draw [color=red] (21,4) -- (20,9); \draw (12,0) -- (12.6,.3); \draw (11,3) -- (11.6,3.3); }} \bigskip \begin{minipage}{6in} \begin{fig}\label{low} {\bf Some differentials with $p=2$} \begin{center} \begin{tikzpicture} \pic[rotate=90,scale=.6,transform shape] {testpic2}; \end{tikzpicture} \end{center} \end{fig} \end{minipage} \bigskip In order to establish the remaining differentials, we will need the following description of $k(1)^*(K_2)$, which is proved in \cite{DRW}. We shift by 1 the subscripts of the classes $z_j$ and $w_j$ used there. The formulas for $r(j)$ and $r'(j)$ are as in \cite{DRW}. We recapitulate some of their properties. Those stated here but not there are easily proved by induction. \begin{prop} \cite{DRW} \label{rprop}For $j\ge0$, $z_j$ is the reduction of the class in $ku^*(K_2)$ and satisfies $|z_j|=2(p^{j+1}+1)$. The classes $w_j$ satisfy $|w_1|=2p^2+1$, $|w_2|=2p^3-2p^2+6p-3$, and $w_{j+2}=y_j^{p-1}w_jz_{j+1}^{p-1}$. The integers $r(j)$ and $r'(j)$ satisfy the following properties. 
\begin{eqnarray}&&r(0)=1,\ r(1)=p,\ r(j+2)=r(j)+p^{j+1}(p-1)+1;\label{rrec}\\ &&r'(0)=p-1,\ r'(1)=p^2-p,\nonumber\\ &&r'(j+2)=r'(j)+p^{j+2}(p-1)-1;\label{rprec}\\ &&r(j)-r'(j-1)=j,\label{r1}\\ &&r(j)+r'(j)=p^{j+1},\label{r2}\\ &&r(j+2)+r'(j)=p^{j+2}+1,\label{r3}\\ &&(p-1)(r(j-1)+j-1)<p^j,\label{r4}\\ &&p^{j+1}-p^j\le r'(j)<p^{j+1}-p^{j-1}.\label{r5} \end{eqnarray} \end{prop} \begin{thm}\label{DRWthm}\cite{DRW} For any $p$, $k(1)^*(K_2)$ is an ignorable trivial $k(1)^*$-module plus \begin{eqnarray*}&&\bigoplus_{j>0}TP_{r(j)}[v]\otimes P[y_{j+1}]\otimes TP_{p-1}[y_j]\otimes \overline{E}[w_{j}]\otimes E[w_{j+1}]\otimes \Lambda_{j+1}\\ &\oplus&\bigoplus_{j\ge1}TP_{r'(j-1)}[v]\otimes P[y_{j}]\otimes E[w_{j}]\otimes\overline{TP}_p[z_{j}]\otimes \Lambda_{j+1}\\ &\oplus&\bigoplus_{j\ge1} P[y_1]\otimes E[q ]\otimes \overline{E}[z_j^p]\otimes \Lambda_{j+1}\oplus P[y_1]\otimes\biggl(\overline{E}[y_0^{p-1}z_0]\oplus\begin{cases}\overline{E}[z_1]&p=2\\ 0&p\text{ odd.}\end{cases}\biggr)\end{eqnarray*}\end{thm} \noindent The last line was not discussed in \cite{DRW}; it is from free $E[Q_1]$ summands which are not part of free $E_1$ summands, and plays a very important role. Now we continue the proof of Theorem \ref{diffl}. We have already proved (\ref{1}) and (\ref{2}). As already noted, the $z_j$'s are infinite cycles by \cite{Tam}, and so the differentials in (\ref{4}) are implied as soon as the corresponding differential in (\ref{3}) is proved. As a warmup, we consider the cases $t=2$ and 3 of (\ref{3}) when $p=2$. We make extensive use of the exact sequence (\ref{LES}). Referring to Figure \ref{low} is useful. In even gradings $\le14$, $k(1)^*(K_2)=0$ in positive filtration, by Theorem \ref{DRWthm}. Thus the map $ku^*(K_2)\to k(1)^*(K_2)$ implies that in the ASS for $ku^*(K_2)$, $v^sz_2$ must be hit by a differential or divisible by 2 for $s\ge2$. In grading $<8$, there is nothing that can divide it, and the only odd-grading $v$-tower in that range is on $v^2y_1q$. 
Thus $d_2(v^2y_1q)=v^4z_2$, the case $t=2$, $M=1$ of (\ref{3}). Since $d_2(y_1^{2k})=0$ by (\ref{1}), the case $t=2$ of (\ref{3}) follows for any $M$ by the derivation property. An analogous argument does not work at the odd primes. Similarly $v^sz_3$ must be hit or divisible for $s\ge4$, and examination of options in Figure \ref{low} shows that we must have $d_5(h_0v^2y_1^3q)=v^8z_3$, preceded by extensions. Since $d_5(y_1^8)=h_0^3v^2y_1^7q$, we deduce the case $t=3$, $M\in P[y_1^8]$ of (\ref{3}) using the derivation property, (\ref{x9z4}) and $h_0z_2=0$. We do not have {\it a priori} knowledge that $y_1^4z_3$ is a permanent cycle in the ASS of $ku^*(K_2)$. However, if it supported a nonzero differential, then the tower of $v$-height 4 on $y_1^4z_3$ in the ASS of $k(1)^*(K_2)$ would have to map to $v^tC$ for $0\le t\le3$ for some $C$ in positive filtration in grading 51 in the ASS of $ku^*(K_2)$. Then $v^4C$ must be $d_r(B)$ with $r\ge5$ and $B$ in filtration 0 in grading 42. ($B$ cannot have higher filtration since everything is $v$-towers, and $v^3C$ cannot be hit.) But the only possible $B$ is $y_1^6z_2$, and we already know that $v^4y_1^6z_2\in\on{im}(d_4)$. (Ordinarily this would not preclude the possibility of $B$ supporting a differential, but it does since everything is $v$-towers.) Thus $y_1^4z_3$ is a permanent cycle, and consideration of its image in $k(1)^*(K_2)$ implies that $v^sy_1^4z_3$ is hit by a differential for some $s\ge4$. The only element in odd grading $<42$ not yet accounted for is $h_0v^2y_1^7q$ in grading 33, and so this must be the source of the differential. This is the case $t=3$, $M=y_1^4$ of (\ref{3}). The validity for all $M=y_1^{8i+4}$ (and $t=3$) now follows similarly to what we did for $M=y_1^{8i}$ at the beginning of this paragraph. Now we switch our attention to the odd primes. The situation when $p=2$ is extremely similar. We want to prove the following version of (\ref{3}). 
\begin{equation}\label{3n}d_{p^t-t}(h_0^{t-1}vqy_1^{(i+1)p^{t-1}-1})=v^{p^t}y_1^{ip^{t-1}}z_t.\end{equation} Now we work toward proving this. We illustrate with $p=5$, but it should be clear how it generalizes to an arbitrary prime. One new thing is the Divisibility Criterion as invoked in \cite{DRW}. Each mod $(p-1)$ value of $i$ can be considered separately. We will consider (\ref{3n}) with $p=5$ and $i=4\ell$; other congruences follow similarly. We index the differential (\ref{3n}) by $(\ell,t)$. We write $T$ (for vertical Tower) for the class $h_0^{t-1}vqy_1^{(4\ell+1)5^{t-1}-1}$, and $M$ (for Monomial) for $y_1^{4\ell\cdot5^{t-1}}z_t$. We write $|T|$ for $\frac12(\text{grading of }T+1)$. The $\frac12$ avoids extraneous factors of 2 that always cancel out. The $+1$ is so that this indicates the grading (times $\frac12$) of the class that it hits. $|M|$ denotes $\frac12$ times the grading of $M$, and $M'$ equals $\frac12$ times the grading of $v^hM$, where $h$ is the $v$-height of $M$ in $k(1)^*(K_2)$. We wish to show that the differentials {\it must be} as claimed. Constraint C1 is that if $T_2\to M_1$ (by which we mean a certain $T$ hits a certain $M$, with the possibility $T_2=T_1$ allowed), then $|T_2|\le M_1'$. (This says that the $v$-tower on $M_1$ cannot be hit while its image in $k(1)^*(K_2)$ is nonzero.) Constraint C2 says that if $T(5\ell+1,t-1)\to M_1$ and $T(\ell,t)\to M_2$, then $|M_2|>|M_1|$. Since $|T(5\ell+1,t-1)|=|T(\ell,t)|$, this says that as you move up an $h_0$ tower, differentials must get longer (unless they are hitting into an $h_0$ tower, which is not the case here). Constraint C3 says that if $T_2\to M_1$, then there exists $M_3$ with $|M_3|\ge M_1'$ and $M_3'\le|T_2|$. Alternatively, if $T_3\to M_3$ has already been proved, then $|T_3|\le |T_2|$ (and $|M_3|\ge M_1'$). The reason for C3 is that there must be extensions into the $M_1$-tower from grading $M_1'$ to $|T_2|+4$.
The nonzero classes on the $v$-tower (on $M_3$) supporting the extensions must go to at least $|T_2|+4$, and it has nonzero classes at least to $M_3'+4$, and if $T_3\to M_3$ was already proved, it has nonzero classes to $|T_3|+4$. Note that we are saying that the $v$-tower on $M_1$ maps to 0 in $k(1)^*(K_2)$ once we get to grading $M_1'$ (and hence in gradings $\le M_1'$ it is either hit by differentials or is divisible by $p$). There might be classes of higher filtration in $k(1)^*(K_2)$ to which it could map, but, if so, we can modify the generator of the $M_1$ tower by the class on the tower sitting above it. Also note that it is possible that extensions from the tower $M_3$ above don't start from the generator, if there are $h_0$-extensions on the tower for awhile. See Figure \ref{figM3}. There is an exception to the C3 requirement for $T(\ell,1)\to M(\ell,1)$. Here the extension into $v^4y_1^{4\ell}z_1$ is obtained from the special class $y_1^{4\ell}y_0^4z_0$. \bigskip \begin{minipage}{6in} \begin{fig}\label{figM3} {\bf The role of $M_3$} \begin{center} \begin{tikzpicture}[scale=.25] \draw (0,0) -- (55,11); \draw (-1,0) -- (60,0); \draw (20,0) -- (55,7); \draw (18,0) -- (28,2); \draw (20,0) -- (20,.4); \draw (24,.8) -- (24,1.2); \draw (28,1.6) -- (28,2); \draw [->] (51,0) -- (50,10); \draw [dotted] (30,2) -- (30,6); \draw [dotted] (49,5.8) -- (49,9.8); \draw [dotted] (35,3) -- (35,7); \draw [dotted] (40,4) -- (40,8); \draw [dotted] (45,5) -- (45,9); \node at (0,-.8) {$|M_1|$}; \node at (20,-.8) {$M_3$}; \node at (50,10.8) {$|T_2|$}; \node at (48.5,5) {$|T_2|+4$}; \node at (30,-.8) {$M_1'$}; \end{tikzpicture}\end{center} \end{fig} \end{minipage} \bigskip With the above conventions, we have $|T|=5^t(4\ell+1)+1$, $|M|=5^t(4\ell+5)+1$, and $M'=|M|-4r'(t-1)$, where $4r'(t-1)$ has the values 16, 80, 412, and 2076 for $t=1$, 2, 3, and 4. Increasing from $t$ to $t+2$ increases this by $4^2\cdot5^{t+1}-4$. 
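The quantities just introduced are purely numerical, so they are easy to check by machine. The following Python sketch (our addition; the names \texttt{r}, \texttt{rp}, \texttt{T}, \texttt{M}, \texttt{Mp} are ad hoc) implements the recurrences (\ref{rrec}) and (\ref{rprec}), verifies properties (\ref{r1})--(\ref{r3}), and reproduces sample rows of Table \ref{tabl}:

```python
# Machine check (our addition) of the recurrences (rrec), (rprec), properties
# (r1)-(r3), and the formulas for |T|, |M|, M' against sample table rows
# (p = 5, i = 4*l).

def r(j, p=5):
    # r(0) = 1, r(1) = p, r(j+2) = r(j) + p^(j+1)(p-1) + 1
    if j == 0:
        return 1
    if j == 1:
        return p
    return r(j - 2, p) + p ** (j - 1) * (p - 1) + 1

def rp(j, p=5):
    # r'(0) = p-1, r'(1) = p^2-p, r'(j+2) = r'(j) + p^(j+2)(p-1) - 1
    if j == 0:
        return p - 1
    if j == 1:
        return p * p - p
    return rp(j - 2, p) + p ** j * (p - 1) - 1

for p in (2, 3, 5):
    for j in range(1, 10):
        assert r(j, p) - rp(j - 1, p) == j                  # (r1)
        assert r(j, p) + rp(j, p) == p ** (j + 1)           # (r2)
        assert r(j + 2, p) + rp(j, p) == p ** (j + 2) + 1   # (r3)

def T(l, t): return 5 ** t * (4 * l + 1) + 1     # |T(l,t)|
def M(l, t): return 5 ** t * (4 * l + 5) + 1     # |M(l,t)|
def Mp(l, t): return M(l, t) - 4 * rp(t - 1)     # M'(l,t) = |M| - 4 r'(t-1)

# rows (l, t, |T|, |M|, M') sampled from the table
for l, t, a, b, c in [(0, 1, 6, 26, 10), (0, 2, 26, 126, 46),
                      (7, 2, 726, 826, 746), (1, 3, 626, 1126, 714),
                      (0, 4, 626, 3126, 1050), (30, 2, 3026, 3126, 3046)]:
    assert (T(l, t), M(l, t), Mp(l, t)) == (a, b, c)
```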
We consider the cases in order of increasing $|M|$ and, for equal values of $|M|$, increasing $\ell$. We tabulate a representative sample in Table \ref{tabl}. We omit listing values of $\ell\equiv3,4$ mod 5 because they behave similarly to $\ell\equiv2$. \vfill\eject \begin{table}[h] \caption{Cases in order} \label{tabl} \renewcommand{\arraystretch}{1.15} \begin{tabular}{cc|ccc|ccc|ccc} $\ell$&$t$&$|T|$&$|M|$&$M'$&\qquad\qquad&$\ell$&$t$&$|T|$&$|M|$&$M'$\\ \hline 0&1&6&26&10&&36&1&726&746&730\\ 1&1&26&46&30&&37&1&746&766&750\\ 2&1&46&66&50&&7&2&726&826&746\\ 0&2&26&126&46&&40&1&806&826&810\\ 5&1&106&126&110&&41&1&826&846&830\\ 6&1&126&146&130&&42&1&846&866&850\\ 7&1&146&166&150&&8&2&826&926&846\\ 1&2&126&226&146&&45&1&906&926&910\\ 10&1&206&226&210&&46&1&926&946&930\\ 11&1&226&246&230&&47&1&946&966&950\\ 12&1&246&266&250&&9&2&926&1026&946\\ 2&2&226&326&246&&50&1&1006&1026&1010\\ 15&1&306&326&310&&51&1&1026&1046&1030\\ 16&1&326&346&330&&52&1&1046&1066&1050\\ 17&1&346&366&350&&1&3&626&1126&714\\ 3&2&326&426&346&&10&2&1026&1126&1046\\ 20&1&406&426&410&&55&1&1106&1126&1110\\ 21&1&426&446&430&&56&1&1126&1146&1130\\ 22&1&446&466&450&&57&1&1146&1166&1150\\ 4&2&426&526&446&&11&2&1126&1226&1146\\ 25&1&506&526&510&&60&1&1206&1226&1210\\ 26&1&526&546&530&&61&1&1226&1246&1230\\ 27&1&546&566&550&&62&1&1246&1266&1250\\ 0&3&126&626&214&&&&$\vdots$&&\\ 5&2&526&626&546&&154&1&3086&3106&3090\\ 30&1&606&626&610&&0&4&626&3126&1050\\ 31&1&626&646&630&&5&3&2626&3126&2714\\ 32&1&646&666&650&&30&2&3026&3126&3046\\ 6&2&626&726&646&&155&1&3106&3126&3110\\ 35&1&706&726&710&&156&1&3126&3146&3130 \end{tabular} \end{table} \bigskip We begin by noting that C1, C2, and C3 are satisfied by $T(\ell,t)\to M(\ell,t)$. C1 is quite clear from Table \ref{tabl} and follows in general from $r'(t-1)<p^t$. C2 is immediate from the construction of the table. C3 when $t=1$ was handled by the exception noted previously. 
C3 for $T(\ell,t)\to M(\ell,t)$ is satisfied by $M_3=M(5\ell+1,t-1)$, as $|T(5\ell+1,t-1)|=|T(\ell,t)|$ and $|M(5\ell+1,t-1)|\ge M'(\ell,t)$ follows from $r'(t-1)\ge p^t-p^{t-1}$. We now seek to show that all other possibilities are eliminated. We first note that if $t\ge3$ there are many cases $M$ occurring between $(5\ell+1,t-1)$ and $(\ell,t)$ with $M'>|T(\ell,t)|$ so that C1 allows the possibility of $T(\ell,t)\to M$. For example, if $(\ell,t)=(1,3)$, $M$ could be any case between $(36,1)$ and $(52,1)$. Assume that we have handled all cases through $(5\ell+1,t-1)$; i.e., we have proved that $T\to M$ must be as in the table. We seek to show that $T(\ell,t)$ cannot hit any $M$ preceding it in the list. By C2, $T(\ell,t)$ cannot hit any $M$ with $|M|=|M(5\ell+1,t-1)|$, and it cannot hit anything before $(5\ell+1,t-1)$ because those cases have already been handled. We will use C3 to show it cannot hit anything above it with $|M|>|M(5\ell+1,t-1)|$. First note that $M_3'\le|T(\ell,t)|$ is not satisfied by any $M$'s after $(5\ell+1,t-1)$ since $M'>|T|>|T(\ell,t)|$, so those cannot work as $M_3$. The cases with $|M|\le |M(5\ell+1,t-1)|$ cannot work as $M_3$ because they do not satisfy the $|M_3|\ge M_1'$ criterion for C3. Thus we have shown that if we are done through $(5\ell+1,t-1)$, then $T(\ell,t)$ cannot hit any $M$ preceding it in the list. Consider the smallest $|M|$ such that, for some $(\ell,t)$, $T(\ell,t)\to M$ with $M\ne M(\ell,t)$. Then we have just seen that $M$ cannot be between $M(5\ell+1,t-1)$ and $M(\ell,t)$. It cannot be on or before $M(5\ell+1,t-1)$ by C2, since $|T(5\ell+1,t-1)|=|T(\ell,t)|$. And it cannot come after $M(\ell,t)$, for then $M(\ell,t)$ would not be hit by $T(\ell,t)$, contradicting minimality of $|M|$. This completes most of the proof of (\ref{3n}) and hence of Theorem \ref{diffl}. \bigskip Underlying the above analysis has been an assumption that the $M$-classes are always hit by $T$-classes. 
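The inequalities on which this elimination rests can likewise be verified numerically. The following Python sketch (our addition; names ad hoc) checks constraint C1, the identity $|T(5\ell+1,t-1)|=|T(\ell,t)|$, and the inequality $|M(5\ell+1,t-1)|\ge M'(\ell,t)$ invoked for C3, for $p=5$ and $i=4\ell$:

```python
# Machine check (our addition) of the inequalities used in the elimination,
# for p = 5 and i = 4*l: C1 (|T(l,t)| <= M'(l,t)), equal |T| moving up the
# h_0-tower, and the M_3 inequality |M(5l+1,t-1)| >= M'(l,t) used in C3.

def rp(j, p=5):
    # r'(j) via the recurrence (rprec)
    if j == 0:
        return p - 1
    if j == 1:
        return p * p - p
    return rp(j - 2, p) + p ** j * (p - 1) - 1

def T(l, t): return 5 ** t * (4 * l + 1) + 1
def M(l, t): return 5 ** t * (4 * l + 5) + 1
def Mp(l, t): return M(l, t) - 4 * rp(t - 1)

for t in range(1, 7):
    for l in range(200):
        assert T(l, t) <= Mp(l, t)                  # C1 for T(l,t) -> M(l,t)
        if t >= 2:
            assert T(5 * l + 1, t - 1) == T(l, t)   # |T(5l+1,t-1)| = |T(l,t)|
            assert M(5 * l + 1, t - 1) >= Mp(l, t)  # M_3 = M(5l+1,t-1) in C3
```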
We show now that it could not have occurred that an $M$-class supported a differential. Assume that $M=y_1^{ip^{t-1}}z_t$ is the $M$-class of lowest grading which supports a differential. In $k(1)^*(K_2)$, $M$ supports a $v$-tower of $v$-height $r'(t-1)$ by \ref{DRWthm}. We will show at the end of the proof that there is a number $\Delta\le t$ such that $v^iM$ maps nontrivially to $ku^{*+1}(K_2)$ if and only if $i\le r'(t-1)-\Delta$. (Usually $\Delta=1$.) The image of $M$ in $ku^{|M|+1}(K_2)$ is a class $C$ of positive filtration such that $v^{r'(t-1)-\Delta}C\ne0$ and $v^{r'(t-1)-\Delta+1}C=0\in ku^*(K_2)$, so there must be a differential in the ASS of $ku^*(K_2)$ from a filtration-0 class hitting a class of filtration $\ge r'(t-1)-\Delta+2$ in grading $|M|+1-2(p-1)(r'(t-1)-\Delta+1)$. (The reason that the differential must start from filtration 0 is that in even gradings, $E_2$ consists entirely of $v$-towers starting in filtration 0.) This differential cannot come from another such $M$ because of our lowest-grading assumption, and it cannot come from an $M$ of higher grading because its grading is too small. It cannot come from a product of one or more $z$'s times one of these $M$'s because $z$'s are infinite cycles. We must rule out the possibility that this differential is one of type (\ref{2}). They are distinguished by having the smallest $z$-subscript at least 2 greater than the $p$-exponent of the exponent of $y_1$. The differential on $C$ has subscript $\ge r'(t-1)-\Delta+2$, and so the class in (\ref{2}) would be $y_1^{\ell p^{r'(t-1)-\Delta}}Z$ for some positive integer $\ell$, where $Z$ is a product of $z_j$'s with $j\ge r'(t-1)-\Delta+2$, and each $j$ appears at most $p-1$ times, except that the smallest $j$ might appear $p$ times. 
Equating this grading with $|M|-2(p-1)(r'(t-1)-\Delta+1)$, and cancelling a common factor 2 from all terms yields \begin{equation}\ell p^{r'(t-1)-\Delta+1} + \sum_j (p^{j+1}+1) =ip^t+p^{t+1}+1-(p-1)(r'(t-1)-\Delta+1).\label{ppt}\end{equation} Using (\ref{r2}) and (\ref{r4}) and $\Delta\le t$, the right hand side of (\ref{ppt}) equals $p^t(i+1)+(p-1)(r(t-1)+\Delta-1)+1\equiv (p-1)(r(t-1)+\Delta-1)+1$ mod $p^t$, with $(p-1)(r(t-1)+\Delta-1)+1\le p^t$ (strict if $t>2$). Since $r'(t-1)-\Delta>t$, this implies that the $\displaystyle\sum_j$ on the left hand side of (\ref{ppt}) must contain at least $(p-1)(r(t-1)+\Delta-1)+1$ summands. We obtain \begin{eqnarray*}&&\sum p^{j}\ge p\cdot p^{r'(t-1)-\Delta+2}+(p-1)(p^{r'(t-1)-\Delta+3}+\cdots+ p^{r'(t-1)+r(t-1)})\\ &=&p^{r'(t-1)+r(t-1)+1}=p^{p^t+1},\end{eqnarray*} so $\sum p^{j+1}\ge p^{p^t+2}$, and hence $p^t(i+1)>p^{p^t+2}$. Thus $i\ge p^{p^t-t+2}>p^{p^t-2t}$. Since $d_{p^t-t+1}(y_1^{p^{p^t-t-1}})$ is defined, \begin{equation}\label{pp2t}d_r(y_1^{p^{p^t-t-1}})=0\text{ for }r\le p^t-t,\end{equation} and by the lowest-grading assumption, $ d_{p^t-t}(h_0^{t-1}vqy_1^{(i-p^{p^t-2t}+1)p^{t-1}-1})=v^{p^t}y_1^{(i-p^{p^t-2t})p^{t-1}}z_t$ and $y_1^{(i-p^{p^t-2t})p^{t-1}}z_t$ is a permanent cycle. Since $$y_1^{ip^{t-1}}z_t=y_1^{(i-p^{p^t-2t})p^{t-1}}z_t\cdot y_1^{p^{p^t-t-1}},$$ we deduce that $y_1^{ip^{t-1}}z_t$ survives to $E_{p^t-t}$ and (\ref{3n}), using the derivation property of differentials. Now we consider the need for $\Delta$ in the above argument. The worry is that maybe part of the $v$-tower on $M$ in $k(1)^*(K_2)$ might be in the image from $ku^*(K_2)$, due to a filtration jump from a lower tower, as sketched in Figure \ref{unw}, so that only a smaller part of the $M$-tower in $k(1)^*(K_2)$ maps to $ku^{*+1}(K_2)$. 
\bigskip \begin{minipage}{6in} \begin{fig}\label{unw} {\bf An unwanted possibility} \begin{center} \begin{tikzpicture}[scale=.15] \draw (-3,0) -- (25,0); \draw (0,0) -- (15,15); \draw (10,0) -- (25,15); \draw (32,0) -- (59,0); \draw (35,0) -- (55,20); \draw (45,0) -- (49,4); \draw (68,0) -- (84,0); \draw (70,3) -- (84,17); \draw [->] (0,0) -- (-2,5); \draw [->] (5,5) -- (3,10); \draw [->] (10,10) -- (8,15); \node at (15,5) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (13,5){$c$}; \node at (20,10) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (18,10) {$c'$}; \node at (50,15) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (50,13) {$c$}; \node at (55,20) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (53,20) {$c'$}; \draw [->] (58,0) -- (56,21); \draw [->] (52,0) -- (50,5); \draw [->] [color=red] (13,2) -- (46,2); \draw [->] [color=red] (18,7) -- (51,17); \draw [->] [color=red] (43,7) -- (76,10); \node at (0,-1.6) {$|M_1|$}; \node at (10,-1.6) {$|M_2|$}; \node at (35,-1.6) {$|M_1|$}; \node at (45,-1.6) {$|M_2|$}; \node at (69.5,-1.6) {$|M_1|+1$}; \node at (12.5,-5) {$ku^*(K_2)$}; \node at (47,-5) {$k(1)^*(K_2)$}; \node at (75,-5) {$ku^{*+1}(K_2)$}; \end{tikzpicture} \end{center} \end{fig} \end{minipage} \bigskip The monomials $M_\varepsilon=y_{t_\varepsilon}^{i_\varepsilon}z_{t_\varepsilon}$ ($\varepsilon=1,2$) have $|M_\varepsilon|=2(p^{t_\varepsilon}(i_\varepsilon+p)+1)$ and are truncated in $k(1)^*(K_2)$ in grading $M_\varepsilon'=|M_\varepsilon|-2(p-1)r'(t_\varepsilon-1)$. In $ku^*(K_2)$, $M_2$ is truncated in grading $|T_2|=|v^{p^{t_2}}M_2|=2(p^{t_2}(i_2+1)+1)$. In Figure \ref{unw}, elements $c$ are in grading $M_2'$, and $c'$ is in grading $M_1'+2(p-1)$. 
The necessary condition for nontrivial image in $k(1)^*(K_2)$ (and hence $\Delta>1$) is \begin{equation}\label{TM}|T_2|+2(p-1)\le M_1'+2(p-1)\le M_2'.\end{equation} If this occurs, then we might have $\Delta$ as large as $\dfrac{M_2'-M_1'}{2(p-1)}+1$. We now show in Lemma \ref{hell} that if (\ref{TM}) holds, then $(M_2'-M_1')/(2(p-1))<t$, establishing the claim made earlier about $\Delta\le t$. We restrict to $p=5$, $i=4\ell$ for simplicity, and so that the reader can refer to Table \ref{tabl} as an aid. The argument easily generalizes to any prime and any congruence. We divide everything by 2 as was done above, and also subtract off the $+1$ which occurs in formulas for $|M|$ and $|T|$, so the numbers will be 1 smaller than those in the table. \begin{lem}\label{hell} If $t_1>t_2$ and $$5^{t_2}(4\ell_2+1)+4\le 5^{t_1}(4\ell_1+5)-4r'(t_1-1)+4\le 5^{t_2}(4\ell_2+5)-4r'(t_2-1),$$ then $$\tfrac14\bigl(5^{t_2}(4\ell_2+5)-4r'(t_2-1)-(5^{t_1}(4\ell_1+5)-4r'(t_1-1))\bigr)<t_1-1.$$\end{lem} \begin{proof} If there is a counterexample to this, then there is one with $\ell_1=0$, since $\ell_2$ could be decreased by $5^{t_1-t_2}\ell_1$, so it suffices to use $\ell_1=0$. Let $Q(k)=(5^{2k}-1)/24$ (called $q(k)$ in \cite[Lemma 5.3]{DRW}). Then, using \cite[Lemma 5.5]{DRW}, for $t=2k+\delta$ with $\delta=1$ or 2, $$5^{t+1}-4r'(t-1)=5^{2k+\delta}+16\cdot 5^\delta Q(k)+4k+4\cdot5^{\delta-1}.$$ Since $16\cdot 5^\delta Q(k)+4k+4\cdot5^{\delta-1}<3\cdot 5^{2k+\delta}$, the hypothesis of the lemma says that $5^{t_1+1}-4r'(t_1-1)$ mod $4\cdot 5^{t_2}$ lies in the mod-$(4\cdot 5^{t_2})$ interval $[5^{t_2},5^{t_2+1}-4r'(t_2-1)-4]$. Let $t_1=2k_1+\delta_1$ and $t_2=2k_2+\delta_2$. The condition is restated as \begin{equation}\label{x1}5^{2k_1+\delta_1}+16\cdot 5^{\delta_1}Q(k_1)+4k_1+4\cdot 5^{\delta_1-1}\end{equation} lies in the mod-$(4\cdot5^{t_2})$ interval \begin{equation}\label{I}[5^{t_2},5^{t_2}+16\cdot5^{\delta_2}Q(k_2)+4k_2+4\cdot5^{\delta_2-1}-4].\end{equation} Let $\delta_2=1$. 
The reduction mod $4\cdot 5^{t_2}$ of (\ref{x1}) is \begin{equation}\label{x2}5^{t_2}+16\cdot 5^{\delta_1}Q(k_2)+4k_1+4\cdot5^{\delta_1-1}.\end{equation} Let $\delta_1=2$. Then $5^{t_2}+16\cdot5^{\delta_1}Q(k_2)>4\cdot 5^{t_2}$ and equals $5^{2k_2+2}-(2000Q(k_2-1)+100)$, so (\ref{x2}) will first be in the interval (\ref{I}) when $4k_1+20=2000Q(k_2-1)+100$, hence $k_1=500Q(k_2-1)+20$, so $t_1=1000Q(k_2-1)+42$. The left hand side of the conclusion of the lemma is $\frac18(M_2'-M_1')$ with $M_1'$ and $M_2'$ as in (\ref{TM}). For $k_1=500Q(k_2-1)+20$, the value of $M_1'$ is at the left end of the interval (\ref{I}), and so $\frac18(M_2'-M_1')$ equals $\frac14$ times (the length of (\ref{I}) plus 4), which is $$20Q(k_2)+k_2+1=500Q(k_2-1)+k_2+21=\tfrac12t_1+k_2.$$ Since $k_2\ll t_1$, this is less than $t_1-1$, as claimed. If $k_1$ is increased from the value $500Q(k_2-1)+20$, the value of $t_1$ increases, while $M_2'-M_1'$ decreases, since $M_1'$ is moving through the interval, so the inequality asserted in the lemma is satisfied more strongly. Now, with $\delta_2=1$ continuing, let $\delta_1=1$. Since $k_1>k_2$, (\ref{x2}) lies outside the interval (\ref{I}) until $80Q(k_2)+4k_1+4=4\cdot 5^{t_2}$, so $$k_1=5^{2k_2+1}-20Q(k_2)-1=100Q(k_2)+4$$ and $t_1=200Q(k_2)+9$. Again $\frac18(M_2'-M_1')=20Q(k_2)+k_2+1\approx\frac1{10}t_1+k_2$, so the conclusion of the lemma is satisfied more strongly. A similar analysis works when $\delta_2=2$. In this case $\frac18(M_2'-M_1')\approx\frac12t_1+k_2$ if $\delta_1=1$, and $\frac18(M_2'-M_1')\approx\frac1{10}t_1+k_2$ if $\delta_1=2$.\end{proof} We close this section by explaining how Theorems \ref{E2} and \ref{diffl} lead to the descriptions of $ku^*(K_2)$ given in Theorems \ref{evthm} and \ref{oddthm}, modulo exotic extensions. We begin with the portion in even gradings and restrict our attention to odd $p$. All elements in the $P[h_0,v,y_1]$ part of Theorem \ref{E2} support differentials of type (\ref{1}).
Note that $y_0^{p^k-1}=y_0^{p-1}y_1^{p^{k-1}-1}=\prod_{j=0}^{k-1}y_j^{p-1}$. The first is easiest to write, the second occurs in Theorem \ref{E2}, and the third in \ref{ABdef} and Figure \ref{oddchart}. From \ref{ABdef}, $y_0^{p^k-1}z_0$ is in $A_k$ for $k\ge1$, the bottom right element in Figure \ref{oddchart}. Then \begin{equation}\label{Pyy}P[y_1]y_0^{p-1}z_0=\bigoplus \mathcal{M}_k^A\cdot y_0^{p^k-1}z_0\subset \bigoplus \mathcal{M}_k^AA_k.\end{equation} The first part occurs in Theorem \ref{E2} and the last part in Theorem \ref{evthm}. Now we consider $P[y_1]\otimes\bigoplus_{j\ge1} W_j\otimes TP_{p-1}[z_j]\otimes\Lambda_{j+1}$ in Theorem \ref{E2}. The $\bigoplus$ part is all monomials $z_\ell M$ with $\ell\ge1$ and $M\in\Lambda_\ell$. From Theorem \ref{diffl}, $y_1^iz_\ell M$ supports a differential (\ref{2}) if $\ell\ge\nu(i)+2$, while those with $\nu(i)\ge\ell-1$ are hit by differentials (\ref{3}) and (\ref{4}), yielding $v$-towers with heights as given in \ref{ABdef}. These are all monomials in $\bigoplus_{\ell\ge1}P[y_\ell,y_{\ell+1},\ldots]z_\ell\Lambda_\ell$. From \ref{ABdef} or (\ref{Bk}), the generators of the $v$-towers in $B_k$ are all $$z_j\prod_{i=j}^{k-1}\{z_i^{p-1},y_i^{p-1}\},\ 1\le j\le k.$$ Let $(z_\ell M)_i$ be the $y_i^ez_i^{e'}$ factors of $ M$. Then $\mathcal{M}_kB_k$ consists of all monomials $z_\ell M$ such that $(z_\ell M)_i$ equals $y_i^{p-1}$ or $z_i^{p-1}$ for $i<k$, but not for $\ell\le i=k$, and so every monomial $z_\ell M$ is in a unique $\mathcal{M}_kB_k$. From Theorem \ref{diffl}, $z_\ell M$ has $v$-height $p^\ell$ if and only if $M$ contains no $z$-factors, which explains the split into $\mathcal{M}_k^A$ and $\mathcal{M}_k^B$ in Theorem \ref{evthm}. Now we address the odd gradings. The $P[h_0,v,y_1]vq$ part of Theorem \ref{E2} is totally removed either as sources (\ref{3}) or targets (\ref{1}) of differentials. See grading 17 in Figure \ref{low} for a nice illustration. 
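The exponent identity $y_0^{p^k-1}=\prod_{j=0}^{k-1}y_j^{p-1}$ noted above reduces, via $y_{j+1}=y_j^p$, to the geometric-series identity $(p-1)\sum_{j=0}^{k-1}p^j=p^k-1$ on exponents of $y_0$; a trivial machine check (our addition):

```python
# Trivial check (our addition): with y_{j+1} = y_j^p, i.e. y_j = y_0^(p^j),
# the identity y_0^(p^k - 1) = prod_{j<k} y_j^(p-1) is the statement
# (p-1) * (1 + p + ... + p^(k-1)) = p^k - 1 on exponents of y_0.
for p in (2, 3, 5, 7):
    for k in range(1, 10):
        assert (p - 1) * sum(p ** j for j in range(k)) == p ** k - 1
```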
The $qy_1^{i-1}S_{\nu(i)+1,\ell}$ part of Theorem \ref{oddthm} is formed from $TP_{\nu(i)+2}[v]qy_1^{i-1}W_\ell$ in \ref{E2} using (\ref{2}). The generators of $S_{\nu(i)+1,\ell}$ are $z_{1,\ell},\ldots,z_{\ell-\nu(i)-1,\ell}$, but to see the differential from (\ref{2}), one should write $z_{t,\ell}=z_{t,t+\nu(i)+1}Z_{t+\nu(i)+1}^\ell$, where \begin{equation}\label{Zdef}Z_i^j=(z_i\cdots z_{j-1})^{p-1}\text{ for }j>i, \text{ with }Z_i^i=1.\end{equation} The remaining generators of $qy_1^{i-1}W_\ell$, namely $qy_1^{i-1}z_{j,\ell}$ with $ \ell-\nu(i)\le j\le\ell$, support differentials (\ref{4}). \section{Exotic extensions}\label{extnsec} In this section, we prove the following expansion of (\ref{extns}). \begin{thm}\label{extnthm} If $i\ge0$ and $k\ge k_0$, $$py_{k}^iy_{k-1}^{p-1}z_{k-1}=v^{p^{k-1}(p-1)}y_{k}^iz_{k}$$ with an additional term $vy_{k}^iy_{k-1}^{p-1}z_{k-2}^p$ if $k\ge k_0+2$.\end{thm} The additional term is seen in Ext, and will be ignored in the rest of this section. We have included the factor $y_{k}^i$, which is not automatic since $y_{k}^i$ is not a permanent cycle. Since, for example, $y_{k+1}=y_k^p$, we need not consider $y_i$ for $i>k$. It is automatic that this formula can be multiplied by $z_j$'s, since they do survive the spectral sequence. The extension is deduced from the exact sequence $$ku^*(K_2)\mapright{\cdot p} ku^*(K_2)\longrightarrow k(1)^*(K_2)$$ and the fact that $v^{r'(k-1)}y_{k}^iz_{k}=0$ in $k(1)^*(K_2)$ with $r'(k-1)\ge p^k(p-1)$. Thus $v^{r'(k-1)}y_{k}^iz_{k}$ must be divisible by $p$ in $ku^*(K_2)$, and, as we will show, the $v$-tower on $y_{k}^iy_{k-1}^{p-1}z_{k-1}$ provides the only classes that can do the dividing. Once we know the division formula toward the end of the $v$-tower, we can deduce that it holds earlier in the tower, as well. For example, $r'(2)=p^3-p^2+p-2$, which is the height in the top $v$-tower in Figure \ref{oddchart} where the extensions into it do not also involve an $h_0$-extension. 
We deduce the extensions from the earlier part of the $v$-tower on $y_2^{p-1}z_2$ by naturality. We illustrate in Figure \ref{extnfig}, using the notation of the preceding section. Thus $T_i$ is the class satisfying $d_r(T_i)=v^rM_i$. Here the portion of the top tower to the right of $M_1'$ must be divisible by $p$. The tower providing the extension must have $M_1'\le |M_2|<|M_1|$ and $|T_2|\le |T_1|$. \bigskip \begin{minipage}{6in} \begin{fig}\label{extnfig} {\bf Conditions for extension} \begin{center} \begin{tikzpicture}[scale=.25] \draw (0,0) -- (36,9); \draw (20,0) -- (38,4.5); \draw (-1,0) -- (41,0); \node at (24,6) {$\bullet$}; \node at (0,-1.2) {$M_1$}; \node at (20,-1.2) {$M_2$}; \draw [dotted] (24,1) -- (24,6); \draw [dotted] (28,2) -- (28,7); \draw [dotted] (32,3) -- (32,8); \draw [dotted] (36,4) -- (36,9); \node at (24,9) {$M_1'$}; \draw [->] (24,8) -- (24,6.5); \draw [->] (38,10.5) -- (38,9.5); \node at (38,11.5) {$|T_1|$}; \node at (40,-1.2) {$|T_2|$}; \end{tikzpicture}\end{center} \end{fig} \end{minipage} \bigskip As we did for the differentials in the previous section, we will perform the argument for $p=5$. It will be clear that it generalizes to an arbitrary odd prime, and with minor modification to $p=2$. Also, we use $i=4\ell$ in Theorem \ref{extnthm}. If instead we used $i=4\ell+d$ for $1\le d\le 3$, it would just add the same amount to the quantities $|M|$, $|T|$, and $M'$ involved in the argument. We can use Table \ref{tabl} to envision the analysis, with the $t$ there replaced by $k$. For a monomial $M(\ell,k)=y_k^{4\ell}z_k$, we have, after dividing by 2, $|M|=5^k(4\ell+5)+1$, $|T|=5^k(4\ell+1)+1$, and $5^k(4\ell+1.16)+1< M'\le 5^k(4\ell+1.8)+1$, using (\ref{r5}). With $M_1$ and $M_2$ as in Figure \ref{extnfig}, we will show that $M_2(5\ell+1,k-1)$ is the unique monomial satisfying the inequalities stated just before Figure \ref{extnfig} for $M_1(\ell,k)$. Note that $M(5\ell+1,k-1)=y_k^{4\ell}y_{k-1}^4z_{k-1}$.
We omit the $+1$ in all the formulas. The inequalities are satisfied by $M_2(5\ell+1,k-1)$ since $$5^k(4\ell+1.8)\le 5^{k-1}(4(5\ell+1)+5)<5^k(4\ell+5)\text{ and }5^{k-1}(4(5\ell+1)+1)\le 5^k(4\ell+1).$$ If $k_2\ge k$, then the first inequality, after dividing by $5^k$, becomes $$4\ell+1.8\le 5^{k_2-k}(4\ell_2+5)<4\ell+5,$$ which cannot be satisfied since the middle term is $\equiv1$ mod 4. If $k_2<k-1$, then $$M_1'-|T_1|>5^k\cdot.16\ge4\cdot5^{k_2}=|M_2|-|T_2|,$$ which is inconsistent with two of the inequalities. Let $k_2=k-1$. If $\ell_2<5\ell+1$, then $$|M_2|=5^{k-1}(4\ell_2+5)\le5^{k-1}(4\cdot5\ell+5)<5^k(4\ell+1.16)<M_1',$$ contradicting one of the inequalities. If $k_2=k-1$ and $\ell_2>5\ell+1$, then $$|T_2|\ge 5^{k-1}(4(5\ell+2)+1)>5^k(4\ell+1)=|T_1|,$$ contradicting one of the inequalities. We deduce that $M_2=y_k^{4\ell}y_{k-1}^4z_{k-1}$, as claimed. We should perhaps have noted that the extensions could not have come from classes with more than one $z_j$-factor, because these are $z_j$ times a class on which the extensions have already been determined. \section{Proposed formulas for the exact sequence (\ref{LES})}\label{LESsec} In this section we propose what we feel must be the correct complete formulas for the exact sequence (\ref{LES}). Some homomorphisms are forced by naturality, but many others involve significant filtration jumps. However, they all occur in several families with nice properties. The 10-term exact sequence (\ref{10}) shows how the $S_{k,\ell}$ portions and the exotic extensions yield compatibility of the differing $v$-tower heights in $ku^*(K_2)$ and $k(1)^*(K_2)$. In Section \ref{allsec}, we show that all elements of $k(1)^*(K_2)$ are accounted for exactly once in these homomorphisms, which implies that there can be no more exotic extensions. This does not require us to prove that our homomorphism formulas are actually correct, as discussed at the end of Section \ref{intro}. We will focus on the case when $p$ is odd.
We could incorporate all primes together at the expense of involving the parameter $k_0$, but things are complicated enough without that. In an earlier version of this paper (\cite{2}), a thorough analysis when $p=2$ was performed. We propose that (\ref{LES}) can be split into exact sequences of length 4 and 10 (not including 0's at the end). There are subgroups of $k(1)^*(K_2)$ called $G_k^1$ and $G_k^2$ for $k\ge1$ and $G^i_{k,\ell}$ for $3\le i\le6$ and $1\le k<\ell$ such that there are exact sequences \begin{equation}\label{Aseq}0\to G_k^1\to A_k\mapright{p}A_k\to G_k^2\to 0\end{equation} for $k\ge1$, and, for $1\le k<\ell$, \begin{eqnarray}\nonumber0&\to& G^3_{k,\ell}\to y_kB_{k}Z_k^\ell\mapright{p} y_kB_{k}Z_k^\ell\to G^4_{k,\ell}\to y_1^{p^{k-1}-1}q S_{k,\ell}\\ &\mapright{p}&y_1^{p^{k-1}-1}q S_{k,\ell}\to G^5_{k,\ell}\to B_{k}z_\ell\mapright{p}B_{k}z_\ell\to G^6_{k,\ell}\to0,\label{10}\end{eqnarray} with $Z_k^\ell$ as defined in (\ref{Zdef}). The sequence (\ref{Aseq}) can be tensored with $TP_{p-1}[y_k]\otimes P[y_{k+1}]$, while (\ref{10}) can be tensored with $TP_{p-1}[y_k]\otimes P[y_{k+1}]\otimes TP_{p-1}[z_\ell]\otimes \Lambda_{\ell+1}$. If $p$ is odd, there are also exact sequences \begin{equation}\label{78}0\to G^7_{k,e}\to B_kz_k^e\mapright{p} B_kz_k^e\to G^8_{k,e}\to 0\end{equation} for $k\ge1$ and $1\le e\le p-2$. This can be tensored with $P[y_k]\otimes\Lambda_{k+1}$. One can verify that the totality of $A_k$ and $B_k$ groups in these exact sequences agrees with that in Theorem \ref{evthm}. We will study these exact sequences by breaking them up into short exact sequences and isomorphisms involving kernels and cokernels of $\cdot p$. Let $K_k^A=\on{ker}(\cdot p|A_k)$, $K_k^B=\on{ker}(\cdot p|B_k)$, $C_k^A=\operatorname{coker}(\cdot p|A_k)$, and $C_k^B=\operatorname{coker}(\cdot p|B_k)$. 
There are important elements $g_k\in K_k^A$ and $K_k^B$ defined (up to unit coefficients) by $g_1=z_1$, $g_2=v^{p-2}z_2$, and, for $k\ge1$, \begin{equation}\label{gdef}g_{k+2}=v^{r'(k)-1}z_{k+2}+g_ky_k^{p-1}z_{k+1}^{p-1}.\end{equation} To see that this is in $\on{ker}(\cdot p)$, we use (\ref{extns}) to see that $p\cdot v^{r'(k)-1}z_{k+2}=v^{r'(k)}z_{k+1}^p$, and that the $v^{r'(k-2)-1}z_k$ term in $g_k$ yields $v^{r'(k-2)-1}v^{p^k(p-1)}z_{k+1}z_{k+1}^{p-1}$ in $p\cdot g_ky_k^{p-1}z_{k+1}^{p-1}$. Using (\ref{rprec}), these terms cancel. Other terms in $p\cdot g_ky_k^{p-1}z_{k+1}^{p-1}$ yield 0 since $g_k\in\on{ker}(\cdot p)$. The $v$-towers in $K_k^A$ are generated by \begin{equation}\label{gns}g_k\text{ and }g_jz_j^{p-1}\prod_{i=j+1}^{k-1}\{z_i^{p-1},y_i^{p-1}\},\ 1\le j\le k-1.\end{equation} For example, using Figure \ref{oddchart} when $k=3$, these are $g_3=v^{p^2-p-1}z_3+y_1^{p-1}z_1z_2^{p-1}$, $g_2z_2^{p-1}=v^{p-2}z_2^p$, $g_1z_1^{p-1}z_2^{p-1}$, and $g_1z_1^{p-1}y_2^{p-1}$. The $v$-heights are $p^k-(r'(k-2)-1)$ for $g_k$, and $p^j-j-(r'(j-2)-1)$ for the others, since they are determined by $v$-heights of $z_j$ in $B_k$. The map $G_k^1\to K_k^A$ sends $w_k$ to $g_k$ and \begin{equation}w_jP\mapsto g_jP\text{ for }P=z_j^{p-1}\prod_{i=j+1}^{k-1}\{z_i^{p-1},y_i^{p-1}\}\label{wg},\end{equation} with $w_j$ as in \ref{rprop} and \ref{DRWthm}. The $v$-height of $w_j$ is $r(j)$ if it is not accompanied by $z_j$, and $r'(j-1)$ if it is. By (\ref{r3}) in the first case, and by (\ref{r2}) combined with (\ref{r1}) in the second, the $v$-heights agree, so (\ref{wg}) is an isomorphism on $v$-towers. For $L=K_k^A$ or $K_k^B$ or $C_k^A$ or $C_k^B$, we say that a $\bold Z_p$ in $L$ is a class of $v$-height 1 in $L$ which is not part of a larger $v$-tower in $L$. There is one $\bold Z_p$ in $K_3^A$, as can be seen in Figure \ref{oddchart}. This is the element $v^{p-2}y_1^{p-1}z_1z_2^{p-1}$.
Note that for $i<p-1$, $v^iy_1^{p-1}z_1z_2^{p-1}+v^{i+p^2-p-1}z_3$ is part of a $v$-tower in $K_3^A$, which continues with the elements $v^{i}z_3$ for $i>p^2-3$, but $v^iy_1^{p-1}z_1z_2^{p-1}$ itself is in $K_3^A$ only for $i=p-2$. Using \ref{ABdef}, we find that the $\bold Z_p$'s in $K_k^A$ are \begin{equation}\label{KkA}v^{p^t-t-1}(y_t\cdots y_{j-1})^{p-1}z_tz_j^{p-1}\prod_{i=j+1}^{k-1}\{z_i^{p-1},y_i^{p-1}\}\text{ for }1\le t<j<k.\end{equation} For example, the elements $v^{p-2}(y_1y_2)^{p-1}z_1$ and $v^{p^2-3}y_2^{p-1}z_2$ in Figure \ref{oddchart} yield elements in $K_4^A$ after being multiplied by $z_3^{p-1}$. The basic formula for the homomorphism from part of $k(1)^*(K_2)$ to $\bold Z_p$'s in various $K_k^A$ and $K_k^B$, possibly tensored with other classes as in Theorem \ref{evthm}, is \begin{equation}\label{KK}\bigl(q(y_1\cdots y_t)^{p-1}z_{j-t,j}\mapsto v^{p^t-t-1}y_t^{p-1}z_tz_j\bigr)\otimes P[y_j]\otimes TP_{p-1}[z_j]\otimes \Lambda_{j+1}\text{ for }j>t\ge1.\end{equation} The domain elements are in the first half of the third line of Theorem \ref{DRWthm}. The ones that are in $G^1_k$ in the isomorphism $G^1_k\to K_k^A$ can be extracted using (\ref{KkA}). The isomorphism $G^3_{k,\ell}\to y_kK_k^B Z_k^\ell$ in (\ref{10}) is given using formulas analogous to (\ref{wg}) and (\ref{KK}). There are several minor differences. One is that the $v$-tower on $y_kg_kZ_k^\ell$ is truncated due to $v^{p^k-k}z_k=0$ in $B_k$ (as opposed to $v^{p^k}z_k=0$ in $A_k$). This is compatible with the fact that the $v$-height of $w_kz_k$ in $k(1)^*(K_2)$ is $k$ less than that of $w_k$, using Theorem \ref{DRWthm} and (\ref{r1}). The other is that $K_k^B$ has additional $\bold Z_p$'s \begin{equation}\label{KkB}v^{p^t-t-1}(y_t\cdots y_{k-1})^{p-1}z_t\text{ for }1\le t\le k-1,\end{equation} as seen in Figure \ref{oddchart} when $k=3$, but these are always multiplied by higher $z$'s, and so (\ref{KK}) applies. 
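To illustrate (\ref{KK}) in the smallest case, take $t=1$ and $j=2$; multiplying by $z_2^{p-2}$ from the $TP_{p-1}[z_2]$ factor recovers the single $\bold Z_p$ in $K_3^A$ noted earlier: \begin{equation*}qy_1^{p-1}z_{1,2}\otimes z_2^{p-2}\ \mapsto\ v^{p-2}y_1^{p-1}z_1z_2\cdot z_2^{p-2}=v^{p-2}y_1^{p-1}z_1z_2^{p-1},\end{equation*} in agreement with the $t=1$, $j=2$, $k=3$ case of (\ref{KkA}).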
The isomorphisms $C_k^A\to G_k^2$ and $C_k^Bz_\ell\to G_{k,\ell}^6$ are defined simply by sending an element to one with the same name. Moreover $C_k^A=C_k^B$ except for $(y_0\cdots y_{k-1})^{p-1}z_0\in C_k^A-C_k^B$. When $k=3$, we see that the $\bold Z_p$'s in $C_k^B$ are $\{z_1^pz_2^{p-1},\,z_2^p,\,y_2^{p-1}z_1^p\}$ in Figure \ref{oddchart}.\footnote{The class $y_2^{p-1}z_1^p$ should really be called $y_2^{p-1}z_1^p+v^{p^2(p-1)-1}z_3$ so that $v$ times it is divisible by $p$, hence 0 in $C_k^B$, but we will ignore this fine-tuning.} For future reference, \begin{equation}\label{Ck}\text{$\bold Z_p$'s in }C_k^B\text{ are }\bigl\{z_t^p\prod_{i=t+1}^{k-1}\{z_i^{p-1},y_i^{p-1}\}:\, 1\le t<k\bigr\}.\end{equation} The corresponding elements in $k(1)^*(K_2)$ are from the third line of \ref{DRWthm}. The $v$-towers in $C_k^A=C_k^B$ are generated by \begin{equation}\label{Ctow}z_k\text{ and }y_t^{p-1}z_t\prod_{i=t+1}^{k-1}\{z_i^{p-1},y_i^{p-1}\},\ 1\le t<k.\end{equation} We will show that the $v$-height of $z_k$ in $C_k^B$ is $r'(k-1)$, which equals its $v$-height in $k(1)^*(K_2)$. It follows from \ref{ABdef} that the $v$-height of $y_t^{p-1}z_t\prod_{i=t+1}^{k-1}\{z_i^{p-1},y_i^{p-1}\}$ equals $r'(t-1)$, establishing the isomorphisms out of $C_k^A$ and $C_k^Bz_\ell$. In Figure \ref{oddchart}, the $v$-height of $z_3$ equalling $p^3-p^2+p-2=r'(2)$ is apparent. The proof of the claim about $v$-heights is by induction. By (\ref{rprec}), $r'(k-1)-r'(k-3)=p^{k-1}(p-1)-1$. Let $D=(|z_k|-|y_{k-1}^{p-1}z_{k-1}|)/(2(p-1))=p^{k-1}(p-1)$. This is the filtration on the $z_k$-tower above the element $y_{k-1}^{p-1}z_{k-1}$. We show that $v^{i-1+D}z_k$ is divisible by $p$ if and only if $v^iz_{k-2}$ is divisible by $p$. Thus the difference of the $v$-heights in cokernels equals the difference of the corresponding $r'$ values. 
From Theorem \ref{extnthm}, we have $$pv^{i-1}y_{k-1}^{p-1}z_{k-1}=v^{i-1+D}z_k+v^iy_{k-1}^{p-1}z_{k-2}^p.$$ The claim follows, since $v^iy_{k-1}^{p-1}z_{k-2}^p$ is divisible by $p$ if and only if $v^iz_{k-2}$ is, by \ref{ABdef}. The analysis of (\ref{78}) is extremely similar. Now $S_{k,\ell}$ becomes involved. Let $S_{k,\ell}^K=\on{ker}(\cdot p|S_{k,\ell})$ and $S_{k,\ell}^C=\operatorname{coker}(\cdot p|S_{k,\ell})$. Then $S_{k,\ell}^K$ consists of $TP_{k+1}[v]\langle z_{1,\ell}\rangle$ plus $\bold Z_p$'s on $v^kz_{i,\ell}$ for $2\le i\le \ell-k$, while $S_{k,\ell}^C$ has $TP_{k+1}[v]\langle z_{\ell-k,\ell}\rangle$ plus $\bold Z_p$'s on $z_{i,\ell}$ for $1\le i<\ell-k$. Next we consider the short exact sequence \begin{equation}\label{G4seq}0\to y_kC_k^BZ_k^\ell\mapright{\phi} G_{k,\ell}^4\mapright{\psi} y_1^{p^{k-1}-1}qS_{k,\ell}^K\to 0.\end{equation} The map $\phi$ sends everything except the $v$-tower on $y_kz_kZ_k^\ell$ to classes with the same name, and the heights of these $v$-towers agree, as seen above. The class $y_kz_kZ_k^\ell=y_kz_{k,\ell}$ maps to a $\bold Z_p$ with the same name in $k(1)^*(K_2)$. We have $\psi(w_kw_{k+1}Z_{k+1}^\ell)=qy_1^{p^{k-1}-1}z_{1,\ell}$. Then $v^{k+1}w_kw_{k+1}Z_{k+1}^\ell\in\on{ker}(\psi)$, and we have $$\phi(vy_kz_{k,\ell})=v^{k+1}w_kw_{k+1}Z_{k+1}^\ell.$$ We illustrate this in the schematic Figure \ref{fig7}, in which $X$, $\circ$, and $\bullet$ map to elements with the same symbol. The expressions at the end of the $v$-towers are their $v$-heights. In particular, $v^{r'(k-1)}y_kz_{k,\ell}=0$ in $y_kC_k^BZ_k^\ell$. The $v$-heights agree by (\ref{r1}), and the gradings match by an induction proof. The $\bold Z_p$'s in $y_1^{p^{k-1}-1}qS^K_{k,\ell}$ are hit by $\psi(y_kz_{i+k-1,\ell})=y_1^{p^{k-1}-1}qv^kz_{i,\ell}$, $2\le i\le\ell-k$, another interesting filtration jump. 
\bigskip \begin{minipage}{6in} \begin{fig}\label{fig7} {\bf Towers in exact sequence.} \begin{center} \begin{tikzpicture}[scale=.25] \draw (0,0) -- (13,0); \draw (1,0) -- (13,12); \draw (15,0) -- (32,0); \draw (16,0) -- (32,16); \draw (35,0) -- (40,0); \draw (36,0) -- (39.5,3.5); \node at (2,1) {$\circ$}; \node at (14.2,13.2) {$r'(k-1)$}; \node at (20,4) {$\circ$}; \node at (16,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (16,-1.6) {$w_kw_{k+1}Z_{k+1}^\ell$}; \node at (36,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (36,-1.4) {$y_1^{p^{k-1}-1}qz_{1,\ell}$}; \node at (40.2,4.2) {$k+1$}; \node at (22.4,4.2) {$k+1$}; \node at (33.2,17.2) {$r(k)$}; \node at (1,0) {$X$}; \node at (19,0) {$X$}; \node at (7,-3) {$y_kC_k^BZ_k^\ell$}; \node at (22,-3) {$G_{k,\ell}^4$}; \node at (40,-4.2) {$y_1^{p^{k-1}-1}qS_{k,\ell}^K$}; \node at (14,7) {$\mapright{\phi}$}; \node at (33,7) {$\mapright{\psi}$}; \node at (1,-1.5) {$y_kz_{k,\ell}$}; \end{tikzpicture} \end{center} \end{fig} \end{minipage} \bigskip Finally we consider the short exact sequence \begin{equation}\label{G5seq}0\to y_1^{p^{k-1}-1}qS^C_{k,\ell}\mapright{\phi'}G^5_{k,\ell}\mapright{\psi'}K_k^Bz_\ell\to0.\end{equation} Similarly to (\ref{gns}), the generators of $v$-towers in $K_k^B$ are $g_k$ and, for $1\le j<k$, elements of the form $g_jz_j^{p-1}\prod_{i=j+1}^{k-1}\{z_i^{p-1},y_i^{p-1}\}$. The morphism $\psi'$ is determined by $w_j\mapsto g_j$. The $v$-heights of the corresponding elements in $k(1)^*(K_2)$ and $K_k^B$ both equal $r'(j-1)$ for $j<k$. However, the $v$-height of $w_kz_\ell$ is $r(k)$, which is $k$ greater than $r'(k-1)$. We have $\phi'(vy_1^{p^{k-1}-1}qz_{\ell-k,\ell})=v^{r'(k-1)}w_kz_\ell$. The class $y_1^{p^{k-1}-1}qz_{\ell-k,\ell}$ at the base of the $v$-tower maps to a $\bold Z_p$ with the same name. The picture is quite similar to Figure \ref{fig7} with $k+1$ and $r'(k-1)$ interchanged. 
The $\bold Z_p$ classes $y_1^{p^{k-1}-1}qz_{i,\ell}$ for $1\le i<\ell-k$ are mapped by $\phi'$ to classes with the same name in $G_{k,\ell}^5\subset k(1)^*(K_2)$. The $\bold Z_p$'s in $K_k^Bz_\ell$ are of the same form as in (\ref{KkA}), and are hit by analogues of (\ref{KK}). \section{All accounted for}\label{allsec} In this section, we show that all elements of $k(1)^*(K_2)$ are involved in exactly one of the homomorphisms involving some $G$-group described in the preceding section. As discussed earlier, this implies that there can be no exotic extensions in $ku^*(K_2)$ other than those in (\ref{extns}), because an additional extension would decrease the number of elements in $\on{ker}(\cdot p|ku^*(K_2))$ and $\operatorname{coker}(\cdot p|ku^*(K_2))$, and these must correspond to elements of $k(1)^*(K_2)$. It also provides an excellent check on our analysis. Let $p$ be odd, and $$G^i=\begin{cases}\displaystyle\bigoplus_{k\ge1}G_k^i\otimes TP_{p-1}[y_k]\otimes P[y_{k+1}]&1\le i\le 2\\ \displaystyle\bigoplus_{1\le k<\ell}G_{k,\ell}^i\otimes TP_{p-1}[y_k]\otimes P[y_{k+1}]\otimes TP_{p-1}[z_\ell]\otimes\Lambda_{\ell+1}&3\le i\le 6\\ \displaystyle\bigoplus_{k\ge1}\bigoplus_{e=1}^{p-2}G_{k,e}^i\otimes P[y_k]\otimes\Lambda_{k+1}&7\le i\le8.\end{cases}$$ \begin{thm}\label{allthm} $G^1\oplus\cdots\oplus G^8$ equals $k(1)^*(K_2)$, as described in Theorem \ref{DRWthm}.\end{thm} As throughout the paper, $\bold Z_p$'s coming from $E_1$-submodules of $H^*(K_2)$ are ignored here. The remainder of this section is devoted to the proof of Theorem \ref{allthm}. There are four parts of Theorem \ref{DRWthm}. We deal with them one at a time. {\bf Case} 1. $P[y_1]y_0^{p-1}z_0$. In (\ref{Pyy}), it is shown that these classes form a subset of $\bigoplus \mathcal{M}_k^AA_k$, and they map to classes with the same name in $G^2$. {\bf Case} 2. $\bigoplus_{j>0}TP_{r(j)}[v]\otimes P[y_{j+1}]\otimes TP_{p-1}[y_j]\otimes \overline{E}[w_{j}]\otimes E[w_{j+1}]\otimes \Lambda_{j+1}$. 
The generators of $v$-towers of height $r(j)$ occur in $G^1$, $G^4$, and $G^5$. From (\ref{wg}), only $w_j$ is in $G_j^1$. So $G^1$ has $TP_{p-1}[y_j]\otimes P[y_{j+1}]w_j$. From Figure \ref{fig7}, $G^4_{j,\ell}$ has $w_jw_{j+1}Z_{j+1}^\ell$. Note that $\bigoplus_\ell Z_{j+1}^\ell TP_{p-1}[z_\ell]\otimes \Lambda_{\ell+1}=\Lambda_{j+1}$, since the $\ell$-component gives the monomials whose smallest non-$(p-1)$-power is a power of $z_\ell$, so $G^4$ contains $P[y_{j+1}]\otimes TP_{p-1}[y_j]w_{j}w_{j+1}\otimes \Lambda_{j+1}$. From the analysis following (\ref{G5seq}), $G^5_{j,\ell}$ has only $w_jz_\ell$ of $v$-height $r(j)$, so $G^5$ will have $P[y_{j+1}]\otimes TP_{p-1}[y_j]w_{j}\otimes \overline{\Lambda}_{j+1}$. Thus $G^1\oplus G^5$ contains the part without $w_{j+1}$, while $G^4$ contains the part with $w_{j+1}$. {\bf Case} 3. $\bigoplus_{j\ge1}TP_{r'(j-1)}[v]\otimes P[y_{j}]\otimes E[w_{j}]\otimes\overline{TP}_p[z_{j}]\otimes \Lambda_{j+1}$. The generators of $v$-towers of height $r'(j-1)$ occur in each $G^i$ as follows. \begin{itemize} \item [$G^1$:] $\displaystyle{w_jz_j^{p-1}\bigoplus_{k\ge j+1}TP_{p-1}[y_k]\otimes P[y_{k+1}]\otimes\bigoplus_{i=j+1}^{k-1}\{z_i^{p-1},y_i^{p-1}\}}$. This can be deduced from (\ref{wg}). 
\item[$G^2$:] From (\ref{Ctow}), $$z_jTP_{p-1}[y_j]\otimes P[y_{j+1}]\oplus y_j^{p-1}z_j\bigoplus_{k\ge j+1}TP_{p-1}[y_k]\otimes P[y_{k+1}]\otimes\prod_{i=j+1}^{k-1}\{z_i^{p-1},y_i^{p-1}\}.$$ \item[$G^3$:] We use (\ref{gns}) and (\ref{wg}) and adapt some arguments used in Case 2 to obtain $$w_jz_j^{p-1}\biggl(\overline{TP}_p[y_j]\otimes P[y_{j+1}]\otimes\Lambda_{j+1}\oplus\bigoplus_{k\ge j+1}\overline{TP}_p[y_k]P[y_{k+1}]z_k^{p-1}\Lambda_{k+1}\prod_{i=j+1}^{k-1}\{z_i^{p-1},y_i^{p-1}\}\biggr).$$ \item[$G^4$:] We use (\ref{Ctow}) and (\ref{G4seq}) to obtain $$y_j^{p-1}z_j\bigoplus_{k\ge j+1}\overline{TP}_p[y_k]\otimes P[y_{k+1}]z_k^{p-1}\Lambda_{k+1}\prod_{i=j+1}^{k-1}\{z_i^{p-1},y_i^{p-1}\}.$$ \item[$G^5$:] We use (\ref{G5seq}) and $\bigoplus_{\ell>k}z_\ell TP_{p-1}[z_\ell]\otimes\Lambda_{\ell+1}\approx\overline{\Lambda}_{k+1}$ to obtain $$w_jz_j^{p-1}\bigoplus_{k\ge j+1}TP_{p-1}[y_k]\otimes P[y_{k+1}]\otimes\overline{\Lambda}_{k+1}\otimes\prod_{i=j+1}^{k-1}\{z_i^{p-1},y_i^{p-1}\}.$$ \item[$G^6$:] We combine the analysis for $G^2$ and the observation used for $G^5$ to obtain \begin{eqnarray*}&&z_jTP_{p-1}[y_j]\otimes P[y_{j+1}]\otimes\overline{\Lambda}_{j+1}\\ &\oplus& y_j^{p-1}z_j\bigoplus_{k\ge j+1}TP_{p-1}[y_k]\otimes P[y_{k+1}]\otimes\overline{\Lambda}_{k+1}\otimes\prod_{i=j+1}^{k-1}\{z_i^{p-1},y_i^{p-1}\}\end{eqnarray*} \item[$G^7$:] Similarly to $G^3$, we have $$\bigoplus_{e=1}^{p-2}\biggl(w_jz_j^e\otimes P[y_j]\otimes\Lambda_{j+1}\oplus w_jz_j^{p-1}\bigoplus_{k\ge j+1}z_k^e\otimes P[y_k]\otimes\Lambda_{k+1}\otimes\prod_{i=j+1}^{k-1}\{z_i^{p-1},y_i^{p-1}\}\biggr).$$ \item[$G^8$:] Using (\ref{Ctow}), we get $$\bigoplus_{e=1}^{p-2}\biggl(z_j^{e}\otimes P[y_j]\otimes\Lambda_{j+1}\oplus y_j^{p-1}z_j\bigoplus_{k\ge j+1}z_k^e\otimes P[y_k]\otimes\Lambda_{k+1}\otimes\prod_{i=j+1}^{k-1}\{z_i^{p-1},y_i^{p-1}\}\biggr).$$ \end{itemize} We begin by analyzing the portion including the factor $w_j$. 
We will show that $$G^1\oplus G^3\oplus G^5\oplus G^7= P[y_j]w_j\otimes\overline{TP}_p[z_j]\otimes\Lambda_{j+1}.$$ Here, and in the remainder of our analysis of Case 3, $G^i$ refers just to the relevant portion of $G^i$, here the part with $TP_{r'(j-1)}[v]w_j$. The first part of $G^7$ gives all terms with $z_j^e$ for $1\le e\le p-2$. The remaining part has factors $w_jz_j^{p-1}$, which we will omit writing. Combining $G^1$ and $G^5$ removes the bar in $G^5$. The first part of $G^3$ gives the part with positive exponent of $y_j$, which we now omit. Let $E_\ell=P[y_\ell]\otimes\Lambda_\ell$, thought of as monomials in $y_i$ and $z_i$ for $i\ge\ell$ with exponents $\le p-1$. The remaining parts of the $G^i$'s under consideration combine to \begin{equation}\label{big}\bigoplus_{k\ge j+1}\biggl(TP_{p-1}[y_k]\oplus y_kz_k^{p-1}TP_{p-1}[y_k]\oplus\bigoplus_{e=1}^{p-2}z_k^eTP_p[y_k]\biggr)\otimes E_{k+1}\otimes\prod_{i=j+1}^{k-1}\{z_i^{p-1},y_i^{p-1}\}.\end{equation} We wish to show this equals $E_{j+1}$. The portion in parentheses is all monomials in $TP_p[y_k,z_k]$ except $y_k^{p-1}$ and $z_k^{p-1}$. For a monomial $M$ in $E_{j+1}$, let $M_i$ denote its $y_i^sz_i^t$ factor. The $k$-summand in (\ref{big}) is all monomials $M$ in $E_{j+1}$ for which $k$ is the smallest $i$ such that $M_i$ is neither $y_i^{p-1}$ nor $z_i^{p-1}$. For example, the monomial $y_{j+1}^{p-1}z_{j+2}$ occurs in the $k=j+2$ summand, with $y_{j+1}^{p-1}$ contributed by $\prod_{i=j+1}^{k-1}\{z_i^{p-1},y_i^{p-1}\}$ and $z_{j+2}$ by the $e=1$ term. Thus the sum over all $k$ yields all of $E_{j+1}$, as claimed. A very similar argument shows that the $G^2\oplus G^4\oplus G^6\oplus G^8$ part for Case 3 equals the portion which includes just the 1 in $E[w_j]$; i.e., $P[y_j]\otimes \overline{TP}_p[z_j]\otimes\Lambda_{j+1}$. \medskip {\bf Case} 4. $\bigoplus_{j\ge1} P[y_1]\otimes E[q ]\otimes \overline{E}[z_j^p]\otimes \Lambda_{j+1}$. We first consider the part without the $q$, and fix $j$ and omit writing the $z_j^p$. The desired answer is $P[y_1]\otimes\Lambda_{j+1}$. These come from the $\bold Z_p$'s in $G^2\oplus G^4\oplus G^6\oplus G^8$. 
Similarly to Case 3, $G^2$ and $G^6$ combine to give $$\bigoplus_{k\ge j+1}TP_{p-1}[y_k]\otimes P[y_{k+1}]\otimes\Lambda_{k+1}\otimes\prod_{i=j+1}^{k-1}\{z_i^{p-1},y_i^{p-1}\}.$$ This, together with the portion of $G^4$ from $\on{im}(\phi)$ in (\ref{G4seq}) obtained using (\ref{Ck}), and the $\bold Z_p$'s in $G^8$ obtained using (\ref{Ck}), gives exactly (\ref{big}), which we showed equals $P[y_{j+1}]\otimes\Lambda_{j+1}$.\footnote{Here the classes in (\ref{big}) are $\bold Z_p$'s and are multiplied by $z_j^p$, whereas in Case 3 they were multiplied by $w_jz_j^{p-1}$ and were generators of $v$-towers of height $r'(j-1)$.} The element $X$ in Figure \ref{fig7} with $k$ replaced by $j$ yields, from $G^4$, \begin{eqnarray*}&&y_jTP_{p-1}[y_j]\otimes P[y_{j+1}]\otimes\bigoplus_{\ell>j}Z_{j+1}^\ell TP_{p-1}[z_\ell]\otimes\Lambda_{\ell+1}\\ &=&y_jTP_{p-1}[y_j]\otimes P[y_{j+1}]\otimes\Lambda_{j+1},\end{eqnarray*} which combines with the portion just obtained to yield $P[y_j]\otimes\Lambda_{j+1}$. The last line of the $G^4_{k,\ell}$ discussion in Section \ref{LESsec} describes $\bold Z_p$'s in $G^4$ mapped by $\psi$ in (\ref{G4seq}). Those with a $z_j^p$ factor yield \begin{eqnarray*}&&\bigoplus_{k=1}^{j-1}y_kTP_{p-1}[y_k]P[y_{k+1}]\bigoplus_{\ell>j}Z_{j+1}^\ell TP_{p-1}[z_\ell]\Lambda_{\ell+1}\\ &=&\bigoplus_{k=1}^{j-1}(P[y_k]-P[y_{k+1}])\otimes\Lambda_{j+1}\\ &=&(P[y_1]-P[y_j])\otimes\Lambda_{j+1}.\end{eqnarray*} Combining this with the result of the preceding paragraph yields the desired $P[y_1]\otimes\Lambda_{j+1}$. \medskip We finish this section by showing that the $\bold Z_p$'s including a factor $q$ are obtained exactly once. We omit writing the $q$. The classes which we must obtain are $P[y_1]\bigoplus_{j\ge1}z_j^p\Lambda_{j+1}$. There are eight ways these appear in $G^i$-sets. 
\begin{enumerate} \item In $G^1$, using (\ref{KkA}) and (\ref{KK}), for $1\le i<j<k$, $$y_1^{p^{j-1}-1}z_{i,j}z_j^{p-2}\prod_{s=j+1}^{k-1}\{z_s^{p-1},y_s^{p-1}\}\otimes TP_{p-1}[y_k]\otimes P[y_{k+1}].$$ \item In $G^3$, using (\ref{KkB}) and (\ref{KK}), for $1\le i<k<\ell$, $$y_1^{p^{k-1}-1}y_kz_{i,k}z_k^{p-2}Z_{k+1}^\ell\otimes TP_{p-1}[y_k]\otimes P[y_{k+1}]\otimes TP_{p-1}[z_\ell]\otimes \Lambda_{\ell+1}.$$ \item In $G^3$, using (\ref{KkA}) and (\ref{KK}), for $1\le i<j<k<\ell$, $$y_1^{p^{j-1}-1}y_kz_{i,j}z_j^{p-2}\prod_{s=j+1}^{k-1}\{z_s^{p-1},y_s^{p-1}\}Z_k^\ell\otimes TP_{p-1}[y_k]\otimes P[y_{k+1}]\otimes TP_{p-1}[z_\ell]\otimes \Lambda_{\ell+1}.$$ \item From $\on{im}(\phi')$ in (\ref{G5seq}), for $1\le k<\ell$ and $1\le i\le \ell-k$, $$y_1^{p^{k-1}-1}z_{i,\ell}\otimes TP_{p-1}[y_k]\otimes P[y_{k+1}]\otimes TP_{p-1}[z_\ell]\otimes \Lambda_{\ell+1}.$$ \item From $\psi'$ in (\ref{G5seq}), using (\ref{KkB}) and (\ref{KK}), for $k<\ell$ and $\ell-k<i<\ell$, $$y_1^{p^{k-1}-1}z_{i,\ell}\otimes TP_{p-1}[y_k]\otimes P[y_{k+1}]\otimes TP_{p-1}[z_\ell]\otimes \Lambda_{\ell+1}.$$ \item From $\psi'$ in (\ref{G5seq}), using (\ref{KkA}) and (\ref{KK}), for $i<j<k<\ell$, $$y_1^{p^{j-1}-1}z_{i,j}z_j^{p-2}\prod_{s=j+1}^{k-1}\{z_s^{p-1},y_s^{p-1}\}\cdot z_\ell\otimes TP_{p-1}[y_k]\otimes P[y_{k+1}]\otimes TP_{p-1}[z_\ell]\otimes \Lambda_{\ell+1}.$$ \item From (\ref{78}), using (\ref{KkB}) and (\ref{KK}), for $i<k$ and $1\le e\le p-2$, $$y_1^{p^{k-1}-1}z_{i,k}z_k^{e-1}P[y_k]\otimes\Lambda_{k+1}.$$ \item From (\ref{78}), using (\ref{KkA}) and (\ref{KK}), for $i<j<k$ and $1\le e\le p-2$, $$y_1^{p^{j-1}-1}z_{i,j}z_j^{p-2}\prod_{s=j+1}^{k-1}\{z_s^{p-1},y_s^{p-1}\}\cdot z_k^eP[y_k]\otimes\Lambda_{k+1}.$$ \end{enumerate} \medskip First combine (1)+(6) to put a $\otimes\Lambda_{k+1}$ at the end of (1), and then, similarly to the simplification of (\ref{big}), combine with (3)+(8) to get 
\begin{equation}\label{S1}\bigoplus_{i<j}y_1^{p^{j-1}-1}P[y_{j+1}]z_{i,j}z_j^{p-2}\Lambda_{j+1}.\end{equation} We combine and relabel (4)+(5) to give \begin{equation}\label{S2}\bigoplus_{i<j}y_1^{p^{j-1}-1}TP_{p-1}[y_j]P[y_{j+1}]z_{i,j+1}\Lambda_{j+1}\end{equation} together with \begin{equation}\label{S4}\bigoplus_{i\ge j\ge1}y_1^{p^{j-1}-1}TP_{p-1}[y_j]P[y_{j+1}]z_i^p\Lambda_{i+1}.\end{equation} Let $Y(s)=y_1^{p^s-1}TP_{p-1}[y_{s+1}]P[y_{s+2}]=\langle y_1^i:\nu(i+1)=s\rangle$. Then (\ref{S4}) is \begin{equation}\label{name}\bigoplus_{i>s\ge0}Y(s)z_i^p\Lambda_{i+1}.\end{equation} We simplify and relabel (2) to \begin{equation}\label{S3}\bigoplus_{i<j}y_1^{p^{j-1}-1}y_jTP_{p-1}[y_j]P[y_{j+1}]z_{i,j}z_j^{p-2}\Lambda_{j+1}.\end{equation} (\ref{S1}), (\ref{S3}), and (7) combine to give $$\bigoplus_{i<j}y_1^{p^{j-1}-1}P[y_j]z_{i,j}TP_{p-1}[z_j]\Lambda_{j+1}=\bigoplus_{i\le j-1\le t}Y(t)z_{i,j}TP_{p-1}[z_j]\Lambda_{j+1}.$$ For any $t\ge i$, the coefficient of $Y(t)z_i^p$ in (\ref{S2}) plus this is $$Z_{i+1}^{t+2}\Lambda_{t+2}\oplus\bigoplus_{j=i+1}^{t+1}Z_{i+1}^jTP_{p-1}[z_j]\Lambda_{j+1}=\Lambda_{i+1},$$ as the second part has all monomials not divisible by $Z_{i+1}^{t+2}$. Combining this with (\ref{name}) yields the desired result, $$\bigoplus_{s\ge0}Y(s)\bigoplus_{i\ge1}z_i^p\Lambda_{i+1}.$$ \section{An explanation of self-duality of $B_k$}\label{optsec} In this optional section, we discuss some observations about the ASS of $ku^*(K_2)$ and $ku_*(K_2)$ which, among other things, provide an explanation of the self-dual nature of the $B_k$ summands which occur in both $ku^*(K_2)$ and $ku_*(K_2)$. We restrict to $p=2$. 
We first observe that, for $k\ge1$, there is an $E_1$-submodule, $\mathcal{M}_k$, of $H^*(K_2)$ such that $\operatorname{Ext}_{E_1}(\bold Z_2,\mathcal{M}_k)$ (resp.~$\operatorname{Ext}_{E_1}(\mathcal{M}_k,\bold Z_2)$) is closed under the differentials in the ASS converging to $ku^*(K_2)$ (resp.~$ku_*(K_2)$), yielding the chart $A_k$ (resp.~the $ku$-homology analogue of $A_k$ discussed in Theorem \ref{ku*thm}). For example, with $M_j$ as in (\ref{Mdef}) and $N$ as in Figure \ref{N}, $\mathcal{M}_3$ is as depicted in Figure \ref{M5pic}. \bigskip \begin{minipage}{6in} \begin{fig}\label{M5pic} {\bf The $E_1$-module $\mathcal{M}_3$.} \begin{center} \begin{tikzpicture}[scale=.3] \draw (4,0) -- (6,0); \draw (8,0) -- (10,0); \draw (0,0) to[out=45, in=135] (6,0); \draw (4,0) to[out=315, in=225] (10,0); \node at (0,-.7) {$17$}; \node at (18,.7) {$26$}; \node at (32,-1) {$33$}; \node at (26,.7) {$30$}; \node at (38,-1) {$36$}; \node at (10,.7) {$22$}; \node at (0,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (-2,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (4,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (6,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (8,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (10,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (16,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (18,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (26,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (28,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (32,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (34,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (36,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (38,0) 
{\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \draw (16,0) -- (18,0); \draw (26,0) -- (28,0); \draw (32,0) -- (34,0); \draw (36,0) -- (38,0); \draw (32,0) to[out=45, in=135] (38,0); \node at (-2,-2) {$y_1^4$}; \node at (5,-2) {$y_1^3N$}; \node at (17,-2) {$y_1^2M_4$}; \node at (27,-2) {$y_1x_9M_4$}; \node at (34.5,-2) {$M_5$}; \end{tikzpicture} \end{center} \end{fig} \end{minipage} \bigskip \noindent The two ASSs for $\mathcal{M}_3$ will yield the charts for $A_3$ and its homology analogue pictured in \cite{DD}. The situation for $B_k$ is slightly more complicated. There is no $E_1$-submodule of $H^*(K_2)$ which, by itself, can give a chart $B_k$ or $B_kz_\ell$. Some of the differentials that truncate $v$-towers in $B_kz_\ell$ come from classes that are part of a summand that includes $y_1^{2^{k-1}-1}qS_{k,\ell}$. We find that, for $2\le k<\ell$, there is an $E_1$-submodule $\mathcal{M}_{k,\ell}$ of $H^*(K_2)$ such that $\operatorname{Ext}_{E_1}(\bold Z_2,\mathcal{M}_{k,\ell})$ is closed under the differentials in the ASS converging to $ku^*(K_2)$ and yields the chart $$B_kz_\ell\oplus y_1^{2^{k-1}-1}qS_{k,\ell}\oplus y_kB_kZ_k^\ell.$$ Note that these three subsets of $ku^*(K_2)$ appeared together in the 10-term exact sequence (\ref{10}). This $\mathcal{M}_{k,\ell}$ is symmetric; i.e., there is an integer $D$ such that $\Sigma^D\mathcal{M}_{k,\ell}^*$ and $\mathcal{M}_{k,\ell}$ are isomorphic $E_1$-modules, where $\mathcal{M}_{k,\ell}^*$ is obtained from $\mathcal{M}_{k,\ell}$ by negating gradings and dualizing $Q_0$ and $Q_1$. This implies that the $v$-towers in $\operatorname{Ext}_{E_1}(\bold Z_2,\mathcal{M}_{k,\ell})$ and $\operatorname{Ext}_{E_1}(\mathcal{M}_{k,\ell},\bold Z_2)$ correspond nicely. Moreover, the differentials in the two ASSs correspond as well, yielding isomorphic charts, although the gradings in one decrease from left to right, while in the other they increase. 
We illustrate with an example, $\mathcal{M}_{3,4}$, and then discuss the implication for self-duality of $B_k$. In Figure \ref{56}, we depict $\mathcal{M}_{3,4}$. \bigskip \begin{minipage}{6in} \begin{fig}\label{56} {\bf The $E_1$-module $\mathcal{M}_{3,4}$.} \begin{center} \begin{tikzpicture}[scale=.24] \draw (0,0) -- (2,0); \draw (4,0) -- (6,0); \draw (0,0) to[out=45, in=135] (6,0); \draw (26,2) to[out=315, in=225] (32,2); \draw (22,2) to[out=45, in=135] (28,2); \draw (36,0) to[out=315, in=225] (42,0); \draw (58,0) to[out=45, in=135] (64,0); \node at (0,-1) {$70$}; \node at (10,-1) {$75$}; \node at (20,-1) {$80$}; \node at (52,-1) {$96$}; \node at (64,-1) {$102$}; \node at (42,1) {$91$}; \node at (0,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (2,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (4,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (6,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (12,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (10,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (20,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (22,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (40,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (42,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (32,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (34,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (36,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (38,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (32,1) {$86$}; \node at (52,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (54,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (58,0) 
{\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (60,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (62,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (64,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (22,2) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (24,2) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (26,2) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (28,2) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (30,2) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (32,2) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (42,2) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (44,2) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \draw (10,0) -- (12,0); \draw (20,0) -- (22,0); \draw (32,0) -- (34,0); \draw (36,0) -- (38,0); \draw (32,0) to[out=45, in=135] (38,0); \node at (3,-3) {$y_1^7x_9M_5$}; \node at (11,-3) {$y_1^6z_3M_4$}; \node at (20,-3) {$y_1^5x_9z_3M_4$}; \node at (27,-3) {$y_1^4M_6$}; \node at (36,-3) {$y_1^3x_9M_6$}; \node at (43,-3) {$y_1^2z_4M_4$}; \node at (53,-3) {$y_1x_9z_4M_4$}; \node at (61,-3) {$z_4M_5$}; \draw (22,2) -- (24,2); \draw (26,2) -- (28,2); \draw (30,2) -- (32,2); \draw (42,2) -- (44,2); \draw (40,0) -- (42,0); \draw (52,0) -- (54,0); \draw (58,0) -- (60,0); \draw (62,0) -- (64,0); \end{tikzpicture} \end{center} \end{fig} \end{minipage} \bigskip In Figure \ref{MASS}, we depict the ASS chart for both $\operatorname{Ext}_{E_1}(\bold Z_2,\mathcal{M}_{3,4})$ and $\operatorname{Ext}_{E_1}(\mathcal{M}_{3,4},\bold Z_2)$. They are isomorphic except that, from left to right, the gradings start with 102 for the first and 70 for the second. 
We label the portions of the chart corresponding to the eight summands of $\mathcal{M}_{3,4}$ just by the $M$-factor, since accompanying factors differ for the two versions. For example, the $M_5$ on the left-hand side is $z_4M_5$ for the first spectral sequence, and is $y_1^7x_9M_5$ for the second. \bigskip \tikzset{ testpic4/.pic={ \node at (0,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (2,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (2,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (4,2) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (6,3) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (4,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (6,2) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (8,3) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (10,4) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (12,5) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (14,6) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (5,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (7,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (10,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (12,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (14,2) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (16,3) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (11,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (13,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (15,2) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (17,3) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (19,4) 
{\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (21,5) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (13,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (15,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (17,2) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (15,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (17,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (16,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (18,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (20,2) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (22,3) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (18,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (20,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (22,2) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (24,3) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (26,4) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (28,5) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (30,6) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (20,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (22,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (21,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (23,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (26,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (28,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (30,2) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (32,3) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at 
(29,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (31,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (31,0) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \node at (33,1) {\,\begin{picture}(-1,1)(1,-1)\circle*{4.5}\end{picture}\ }; \draw [thick] (0,0) -- (2,1) -- (2,0) -- (10,4); \draw [thick] (10,0) -- (12,1); \draw [thick] (11,0) -- (17,3); \draw [thick] (16,0) -- (18,1) -- (18,0) -- (26,4); \draw [thick] (26,0) -- (28,1); \draw [color=red] (10,0) -- (10,4); \draw [color=red] (26,0) -- (26,4); \draw [dashed] (2,1) -- (6,3); \draw [dashed] (5,0) -- (7,1); \draw [dashed] (10,4) -- (16,7); \draw [dotted] (5,0) -- (4,2); \draw [dotted] (7,1) -- (6,3); \draw [dashed] (12,1) -- (16,3); \draw [dashed] (13,0) -- (17,2); \draw [dashed] (15,0) -- (17,1); \draw [dotted] (13,0) -- (12,5); \draw [dotted] (15,1) -- (14,6); \draw [dotted] (15,0) -- (14,2); \draw [dotted] (17,1) -- (16,3); \draw [dashed] (13,0) -- (13,1); \draw [dashed] (15,0) -- (15,2); \draw [dashed] (17,1) -- (17,3); \draw [dotted] (17,2) -- (16,7); \draw [dashed] (17,3) -- (21,5); \draw [dashed] (18,1) -- (22,3); \draw [dashed] (20,0) -- (22,1); \draw [dashed] (21,0) -- (23,1); \draw [dashed] (20,0) -- (20,2); \draw [dashed] (22,1) -- (22,3); \draw [dotted] (20,0) -- (19,4); \draw [dotted] (22,1) -- (21,5); \draw [dotted] (21,0) -- (20,2); \draw [dotted] (23,1) -- (22,3); \draw [dashed] (26,4) -- (30,6); \draw [dashed] (28,1) -- (32,3); \draw [dashed] (29,0) -- (31,1); \draw [dashed] (31,0) -- (33,1); \draw [dashed] (31,0) -- (31,1); \draw [dotted] (29,0) -- (28,5); \draw [dotted] (31,1) -- (30,6); \draw [dotted] (31,0) -- (30,2); \draw [dotted] (33,1) -- (32,3); \node at (0,-.6) {$102$}; \node at (0,-1.2) {$70$}; \node at (2,-.6) {$M_5$}; \node at (6,-.6) {$M_4$}; \node at (9.6,-.6) {$92$}; \node at (9.6,-1.2) {$80$}; \node at (11.2, -.6) {$M_4$}; \node at (14,-.6) {$M_6$}; \node at (16,-.6) {$86$}; \node at (16,-1.2) {$86$}; \node at 
(19,-.6) {$M_6$}; \node at (22,-.6) {$M_4$}; \node at (25.6,-.6) {$76$}; \node at (25.6,-1.2) {$96$}; \node at (27.2, -.6) {$M_4$}; \node at (30,-.6) {$M_5$}; \draw [color=blue] (-1,0) -- (33,0); }} \bigskip \begin{minipage}{6in} \begin{fig}\label{MASS} {\bf Two ASSs for $\mathcal{M}_{2,3}$.} \begin{center} \begin{tikzpicture} \pic[rotate=90,scale=.6,transform shape] {testpic4}; \end{tikzpicture} \end{center} \end{fig} \end{minipage} \bigskip For the $ku^*(K_2)$ version, $B_3z_4$ is on the left hand side of Figure \ref{MASS}, and $y_3B_3z_3$ on the right hand side, with $y_1^3qS_{3,4}$ separating them. The duality isomorphism in Theorem \ref{dual} says that the Pontryagin dual of $B_3z_4$ is isomorphic as a $ku_*$-module to $\Sigma^4$ of the right hand side of the $ku_*(K_2)$ version of Figure \ref{MASS}, and we see that this is isomorphic to a shifted version of $B_3$ with indices negated. This is the self-duality statement, that the Pontryagin dual of $B_k$ is isomorphic as a $ku_*$-module to a shifted version of $B_k$ with indices negated. \def\rule{.6in}{.6pt}{\rule{.6in}{.6pt}}
\section{Introduction} Brownian motion is the most popular stochastic process and has a tremendous number of applications in science. In one dimension, a \emph{free} Brownian motion $x(t)$ evolves according to the Langevin equation \begin{align} \dot x(t) =\sqrt{2\,D}\,\eta(t)\,, \label{eq:BMeom} \end{align} where $D$ is the diffusion coefficient and $\eta(t)$ is an \emph{uncorrelated} Gaussian white noise with zero mean and correlations $\langle \eta(t)\eta(t') \rangle =\delta(t-t')$. In many practical situations, it is necessary to simulate Brownian motion numerically. This can be easily done by discretising the Langevin equation (\ref{eq:BMeom}) over small time increments $\Delta t$: \begin{align} x(t+\Delta t) = x(t) + \sqrt{2\,D}\,\eta(t)\,\Delta t\,,\label{eq:BMeomd} \end{align} and drawing at each time step a Gaussian random variable $\sqrt{2\,D}\,\eta(t)\,\Delta t$ with zero mean and variance $2\,D\,\Delta t$. In many applications, such as in the study of foraging animals \cite{Giuggioli05,Randon09,MajumdarCom10,Murphy92,Boyle09}, financial stock markets \cite{Shepp79,Majumdar08}, or in statistical testing \cite{CB2012,Kol1933}, one is only interested in particular trajectories that satisfy some condition. For instance, one can decide to study only \emph{bridge} trajectories which, as their name suggests, are trajectories that start at the origin and return to the origin after a fixed time $t_f$. How can one efficiently generate such bridge configurations for Brownian motion? A naive algorithm would be to generate all possible trajectories of Brownian motion up to time $t_f$, starting at the origin, and retain only those that come back to the origin at time $t_f$. Such a naive method is obviously computationally wasteful. This is part of a more general question: how to efficiently sample atypical rare trajectories with a given statistical weight, which is typically very small \cite{BCDG2002,GKP2006,GKLT2011,KGGW2018,Gar2018,Rose21,Rose21area}?
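For concreteness, the discretisation (\ref{eq:BMeomd}) can be implemented in a few lines (a minimal Python sketch; the function name and parameter values are illustrative and not part of the text):

```python
import numpy as np

def simulate_brownian(D=0.5, dt=1e-3, t_f=1.0, n_traj=2000, seed=0):
    """Euler discretisation of the free Langevin equation: each increment
    sqrt(2 D) eta dt is Gaussian with zero mean and variance 2 D dt."""
    rng = np.random.default_rng(seed)
    n_steps = int(round(t_f / dt))
    increments = np.sqrt(2.0 * D * dt) * rng.standard_normal((n_traj, n_steps))
    # prepend x(0) = 0 and accumulate the increments along each trajectory
    return np.concatenate([np.zeros((n_traj, 1)),
                           np.cumsum(increments, axis=1)], axis=1)
```

As a sanity check, the variance of the endpoints of many such trajectories converges to $2\,D\,t_f$, as expected for free diffusion.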
In the context of Brownian motion, one can also ask how to generate other constrained Brownian motions, going beyond the bridge. Examples of such constrained Brownian motions include Brownian excursions, Brownian meanders, reflected Brownian motions, etc. \cite{Yor2000,Majumdar05Ein,MP2010,Dev2010,PY2018}. Fortunately, constrained Brownian motions have been extensively studied and there exist several methods to sample them \cite{Doob,Pitman,MajumdarEff15,CT2013}. One of them, which is quite powerful and perhaps the easiest relies on writing an effective Langevin equation with an effective force term that implicitly accounts for the constraint \cite{MajumdarEff15,CT2013}. For the Brownian bridge $x_B(t)$, the effective Langevin equation reads \cite{MajumdarEff15,CT2013} \begin{align} \dot x_B(t) = \sqrt{2\,D}\,\eta(t) -\frac{x_B(t)}{t_f-t}\,, \label{eq:BMeomb} \end{align} where the subscript $B$ refers to the bridge condition, and the additional term is an effective force term that implicitly accounts for the bridge constraint. The effective Langevin equation (\ref{eq:BMeomb}) can be discretised over time to numerically generate Brownian bridge trajectories with the appropriate statistical weight. The concept of effective Langevin equation is quite robust and can be easily extended to other types of constrained Brownian motions such as excursions, meanders and non-intersecting Brownian motions \cite{CT2013,MajumdarEff15,Orland,Baldassarri2021,Grela2021}. In addition, the concept was recently extended to the case of discrete-time random walks with arbitrary jump distributions, including fat-tailed distributions, and was also shown to be quite a versatile method \cite{DebruyneRW21}. 
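A minimal sketch of how the effective equation (\ref{eq:BMeomb}) can be discretised to sample bridges (the function name and defaults are our own; the last time step pins the trajectory back to the origin up to a fluctuation of order $\sqrt{2\,D\,\Delta t}$):

```python
import numpy as np

def simulate_bridge(D=0.5, dt=1e-3, t_f=1.0, seed=0):
    """Euler discretisation of dot x = sqrt(2 D) eta - x / (t_f - t)."""
    rng = np.random.default_rng(seed)
    n_steps = int(round(t_f / dt))
    x = np.zeros(n_steps + 1)
    for i in range(n_steps):
        t = i * dt
        drift = -x[i] * dt / (t_f - t)      # effective force of the bridge
        x[i + 1] = x[i] + np.sqrt(2.0 * D * dt) * rng.standard_normal() + drift
    return x
```

At the last step the drift $-x\,\Delta t/(t_f-t)$ cancels the current position entirely, so the trajectory ends at the origin up to one noise increment.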
While for Markov processes, such as the Brownian motion, the effects of constraints (e.g., bridges, excursions, meanders, etc) can be included in an effective Langevin equation (alternatively in effective transition probabilities for discrete-time processes), a similar effective Langevin approach is still lacking for non-Markovian processes, which are, however, abundant in nature \cite{Hanggi1995}. For such processes, there are thus two levels of complexity: (i) the non-Markovian nature of the dynamics, implying temporal correlations in the history of the process, and (ii) the effects of the additional geometrical constraints such as the bridge constraint. This two-fold complexity renders the derivation of an effective Langevin equation rather challenging for non-Markovian processes. The goal of this paper is to study an example of a non-Markovian process for which we show that the effective Langevin equation, ensuring the geometric constraints, can be derived exactly. Our example of a non-Markovian stochastic process is the celebrated run-and-tumble dynamics of a particle in one dimension, also known as the persistent random walk \cite{kac1974,weiss2002, masoliver2017}, which is of much current interest in the context of active matter \cite{berg08,marchetti13,cates15}. The run-and-tumble particle (RTP) is a simple model that describes self-propelled particles, such as \textit{E. coli} bacteria \cite{berg08}, which are able to move autonomously, rendering them inherently different from the standard passive Brownian motion. Active noninteracting particles, including the run-and-tumble model, have been studied extensively in the recent past, both experimentally and theoretically \cite{berg08,marchetti13,cates15,bechinger16,tailleur08}. Even for such noninteracting systems, a plethora of interesting phenomena have been observed, arising purely from the ``active nature'' of the driving noise.
Such phenomena include, e.g., non-trivial density profiles \cite{Bijnens20,Martens12,Basu19,Basu20,Dhar19,Singh20,Santra20,Dean21}, dynamical phase transitions \cite{Doussal20,Gradinego19,Mori21}, anomalous transport properties \cite{Doussal20,Dor19,Demaerel19,Banerjee20}, or interesting first-passage and extremal statistics \cite{Orsingher90,Orsingher95,Lopez14,CinqueF20,CinqueS20,Foong92,Masoliver92,Angelani14,Angelani15,Artuso14,Evans18,Weiss87,Malakar18,Ledoussal19,MoriL20,MoriE20,DebruyneSur21,HartmannConvex20,Singh2019}. In its simplest form, a \emph{free} one-dimensional RTP moves (runs) with a fixed velocity $v_0$ in the positive direction during a random time $\Delta t$ drawn from an exponential distribution $p(\Delta t)=\gamma\, e^{-\gamma \Delta t}$, after which it changes direction (tumbles) and goes in the negative direction during another random time. The process continues and the particle performs this run-and-tumble motion indefinitely. The position of the particle $x(t)$ evolves according to the Langevin equation \begin{align} \dot x(t)=v_0\,\sigma(t)\,,\label{eq:eom} \end{align} where $\sigma(t)$ is a telegraphic noise that switches between the values $1$ and $-1$ with a \emph{constant} rate $\gamma$ (see figure \ref{fig:telegraphic}). During an infinitesimal time interval $dt$, the particle changes direction with probability $\gamma\, dt$ or remains in the same direction with the complementary probability $1- \gamma\, dt$: \begin{align} \sigma(t+dt) = \left\{\begin{array}{rl}\sigma(t)\, \quad & \text{with \, prob.~ }=1-\gamma\, dt\, , \\ -\sigma(t)\, \quad &\text{with \, prob.~ } =\gamma\, dt\, . \end{array}\right. \label{eq:telegraphic} \end{align} Consequently, the time between two consecutive tumbles $\Delta t$ is drawn independently from an exponential distribution $p(\Delta t)=\gamma \, e^{- \gamma \Delta t}$ and the sequence of tumbling times follows a Poisson process with constant rate $\gamma$ (see figure \ref{fig:telegraphic}).
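For illustration, the local switching rule (\ref{eq:telegraphic}) can be simulated directly by discretising time (a sketch; the function name and defaults are our own and not part of the text):

```python
import numpy as np

def telegraphic_noise(gamma=1.0, dt=1e-3, t_f=10.0, sigma0=1, seed=0):
    """During each time step of size dt the sign flips with probability
    gamma * dt, as in the telegraphic switching rule."""
    rng = np.random.default_rng(seed)
    n_steps = int(round(t_f / dt))
    sigma = np.empty(n_steps + 1, dtype=int)
    sigma[0] = sigma0
    flips = rng.random(n_steps) < gamma * dt   # one Bernoulli trial per step
    for i in range(n_steps):
        sigma[i + 1] = -sigma[i] if flips[i] else sigma[i]
    return sigma
```

The number of sign changes over a window $t_f$ is then Poisson distributed with mean $\gamma\,t_f$ (up to $O(\Delta t)$ discretisation corrections).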
\begin{figure}[t] \centering \includegraphics[width=0.4\textwidth]{telegraphic_noise.pdf} \caption{Telegraphic noise $\sigma(t)$ driving the sign of the velocity of the RTP. The signal switches with a constant rate $\gamma$. The time between two consecutive switches $\Delta t$ is drawn independently from an exponential distribution $p(\Delta t)=\gamma \, e^{- \gamma \Delta t}$. The sequence of tumbling times $t_1,\ldots,t_n$ follows a Poisson process of constant rate $\gamma$.} \label{fig:telegraphic} \end{figure} To generate a trajectory $x(t)$ of a free RTP starting from the origin with a given initial velocity \begin{align} x(0)=0\,,\quad \dot x(0) = \sigma_0\,v_0\,,\label{eq:init} \end{align} where $\sigma_0=\pm 1$, one simply generates a sequence of tumbling times $t_1,\ldots,t_n$ that follow a \emph{homogeneous} Poisson process of constant rate $\gamma$: \begin{align} t_{m+1} = t_{m} + \Delta t_m\,,\label{eq:tm} \end{align} where $\Delta t_m$ are independently drawn from an exponential distribution $p(\Delta t)=\gamma\, e^{-\gamma\,\Delta t}$. Then, the trajectory $x(t)$ of the particle is simply obtained by integrating the equation of motion (\ref{eq:eom}), which yields the piecewise linear function: \begin{align} x(t) = \sigma_0\,v_0\,(-1)^n\, (t-t_n)+\sum_{m=0}^{n-1} \sigma_0\, v_0\, (-1)^m \, (t_{m+1}-t_{m}) \,,\label{eq:xt} \end{align} where $n$ is such that $t_n$ is the latest tumbling time before $t$, i.e. such that $t_n<t<t_{n+1}$. The sum in (\ref{eq:xt}) accounts for all complete runs that happened before $t$ and the first term corresponds to the last run that is not yet completed at time $t$. This sampling method works well to generate \emph{free} run-and-tumble trajectories.
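The free sampling method described above can be sketched as follows (illustrative Python; the function name and defaults are assumptions, not part of the text):

```python
import numpy as np

def free_rtp(v0=1.0, gamma=1.0, t_f=10.0, sigma0=1, seed=0):
    """Draw exponential waiting times (a homogeneous Poisson process of
    rate gamma) and integrate the ballistic runs between tumbles."""
    rng = np.random.default_rng(seed)
    times, xs = [0.0], [0.0]
    t, x, sigma = 0.0, 0.0, sigma0
    while True:
        wait = rng.exponential(1.0 / gamma)
        if t + wait >= t_f:                  # last, incomplete run up to t_f
            xs.append(x + sigma * v0 * (t_f - t))
            times.append(t_f)
            return np.array(times), np.array(xs)
        t += wait
        x += sigma * v0 * wait               # one complete run
        times.append(t)
        xs.append(x)
        sigma = -sigma                       # tumble
```

Between tumbles the speed is exactly $v_0$, so every trajectory stays inside the light cone $|x(t)|\leq v_0\,t$.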
However, as in the case of Brownian motion, some applications require sampling only specific trajectories, such as bridge trajectories where, in addition to satisfying the initial condition (\ref{eq:init}), the particle must also return to the origin after a fixed time $t_f$ with a given velocity $\sigma_f\,v_0$: \begin{align} x(t_f)=0\,,\quad \dot x(t_f) = \sigma_f\, v_0\,,\label{eq:final} \end{align} where $\sigma_f =\pm 1$. Note that the final position need not be the origin but can be any fixed point in space -- here for simplicity we only consider the case where the final position coincides with the origin. One possible application of run-and-tumble bridge trajectories is in the context of animal foraging, where animals typically return to their nest after a fixed time, and one could study the persistence and memory effects in their trajectories \cite{Giuggioli05,Randon09,MajumdarCom10,Murphy92,Boyle09}. Unfortunately, as in the case of Brownian motion, obtaining realisations of bridge trajectories using the free sampling method would be computationally wasteful. As argued in the introduction, one needs an efficient algorithm to generate run-and-tumble bridge trajectories, in a similar spirit as the effective Langevin equation (\ref{eq:BMeomb}) for Brownian motion. In this paper, we derive an exact effective Langevin equation for RTPs to generate bridge trajectories efficiently. We show that the effective process, which automatically takes care of the bridge constraints (\ref{eq:init}) and (\ref{eq:final}), can be written as \begin{align} \dot x(t)=v_0\,\sigma^*(x,\dot x,t\,|\,\sigma_0,t_f,\sigma_f)\,,\label{eq:effeom} \end{align} where $\sigma^*(x,\dot x,t\,|\,\sigma_0,t_f,\sigma_f)$ is now an effective telegraphic noise that switches between the values $1$ and $-1$ with a space-time dependent rate $ \gamma^*(x,\dot x,t\,|\,\sigma_0,t_f,\sigma_f)$, which we compute exactly (\ref{eq:effB}).
Finally, we show how to extend the method to other types of constrained RTP trajectories, such as the excursion (a bridge RTP that is additionally constrained to remain above the origin) and the meander (where the RTP is constrained not to cross the origin and with a free end point). We illustrate our method by numerical simulations (the code is available as a Python notebook in \cite{github}). The rest of the paper is organised as follows. In section \ref{sec:bridge}, we present the derivation of the effective Langevin equation for the bridge RTP and derive the effective tumbling rate that accounts for the bridge constraint. In section \ref{sec:gen}, we generalise the effective Langevin equation to the case of other constrained run-and-tumble trajectories such as the excursion and the meander and derive their effective tumbling rates. Finally, in section \ref{sec:sum}, we conclude and provide perspectives for further research. Some useful results on the run-and-tumble process are recalled in \ref{app:prop}. \section{Generating run-and-tumble bridges} \label{sec:bridge} The derivation of the effective Langevin equation for the bridge RTP follows similar ideas to the ones developed for continuous and discrete time Markov processes \cite{MajumdarEff15,DebruyneRW21}. The key point is that the free run-and-tumble process, though non-Markovian in the $x$-coordinate, becomes Markovian in the phase space $(x,\dot x)$. Therefore, a bridge trajectory satisfying the initial and final conditions (\ref{eq:init})-(\ref{eq:final}) can be decomposed into two independent paths over the time intervals $[0,t]$ and $[t,t_f]$ (see figure \ref{fig:bridges}). \begin{figure}[t] \centering \includegraphics[width=0.4\textwidth]{bridge.pdf} \caption{A sketch of a run-and-tumble bridge trajectory that starts at the origin with a positive velocity $\dot x=+v_0$ and returns to the origin at a fixed time $t_f$ with a negative velocity $\dot x=-v_0$. 
Due to the Markov property in the extended phase space $(x,\dot x)$, the bridge trajectory can be decomposed into two independent parts: a left part over the time interval $[0,t]$, where the particle freely moves from the point $(0,+v_0)$ to the point $(x,-v_0)$ at time $t$ and a right part over the time interval $[t,t_f]$, where it moves from the point $(x,-v_0)$ at time $t$ to the point $(0,-v_0)$ at time $t_f$. The combination of the finite velocity of the particle and the bridge condition induces a double sided light cone in which the particle must remain (shaded red region).} \label{fig:bridges} \end{figure} As a result, the bridge probability distribution $P_B(x,t,\sigma\,|\,\sigma_0,t_f,\sigma_f)$ to find the particle at $x$ with a velocity $\dot x =\sigma\,v_0$ at time $t$ given that it satisfies the bridge conditions (\ref{eq:init})-(\ref{eq:final}) can be decomposed as a simple product \begin{align} P_B(x,t,\sigma\,|\,\sigma_0,t_f,\sigma_f)=\frac{P(x,t,\sigma\,|\,\sigma_0)\,Q(x,t_f-t,\sigma\,|\,\sigma_f)}{P(x=0,t_f,\sigma_f\,|\,\sigma_0)}\,,\label{eq:Pb} \end{align} where the subscript $B$ refers to ``bridge''. The first term $P(x,t,\sigma|\sigma_0)$ in (\ref{eq:Pb}) accounts for the first path over $[0,t]$ and is the probability density of the free particle to be located at position $x$ at time $t$ with velocity $\dot x=\sigma\, v_0$ given that it started at the origin with velocity $\dot x=\sigma_0\, v_0$. This is usually referred to as the forward propagator. The second term $Q(x,t,\sigma\,|\,\sigma_f)$ is the probability density of the free particle to reach the origin at time $t$ with velocity $\dot x=\sigma_f\,v_0$ given that it started at $x$ with velocity $\dot x=\sigma\, v_0$. We will refer to it as the backward propagator. The denominator in (\ref{eq:Pb}) is a normalisation factor that accounts for all the bridge trajectories such that $\int_{-\infty}^{\infty} dx \sum_{\sigma=\pm}P_B(x,t,\sigma\,|\,\sigma_0,t_f,\sigma_f)=1$. 
Using Markov properties, one can see that the free forward propagator $P(x,t,\sigma|\sigma_0)$ and backward propagator $Q(x,t,\sigma\,|\,\sigma_f)$ evolve according to Fokker-Planck equations. For conciseness, we will drop the conditional dependence in the differential equations below and use the shorthand notation $P(x,t,\sigma)\equiv P(x,t,\sigma|\sigma_0)$, $Q(x,t,\sigma)\equiv Q(x,t,\sigma\,|\,\sigma_f)$. To obtain the Fokker-Planck equations for the forward propagator, let us consider an infinitesimal time interval $[t-dt,t]$ and suppose that the particle is located at $x$ at time $t$ with velocity $\dot x=\sigma\,v_0$. In the time interval $[t-dt,t]$, we see from the telegraphic equation (\ref{eq:telegraphic}) that the particle either travelled with velocity $\dot x=\sigma \,v_0$ from $x-\sigma \,v_0\,dt$ to $x$ or tumbled with velocity $\dot x=-\sigma \,v_0$ and remained at $x$. The first event happens with probability $1-\gamma\,dt$ and the second event happens with the complementary probability $\gamma\,dt$. We can now write the following equation for the forward propagator \begin{align} P(x,t,\sigma) = (1-\gamma\,dt)\,P(x-\sigma \,v_0\,dt,t-dt,\sigma) + \gamma\,dt \,\,P(x,t-dt,-\sigma)\,.\label{eq:Pdt} \end{align} Expanding (\ref{eq:Pdt}) to first order in $dt$ and writing separate equations for $\sigma=+1$ and $\sigma=-1$, we find that $P(x,t,\sigma)$ satisfies a set of two coupled equations, called the \emph{forward} Fokker-Planck equations: \begin{subequations} \begin{align} \partial_t P(x,t,+)&=- v_0\,\partial_x P(x,t,+)-\gamma\, P(x,t,+)+\gamma\, P(x,t,-)\,,\\ \partial_t P(x,t,-)&= +v_0\,\partial_x P(x,t,-)-\gamma\, P(x,t,-)+\gamma\, P(x,t,+)\,. \end{align} \label{eq:P} \end{subequations} The forward propagator $P(x,t,\sigma|\sigma_0)$ of the free particle can be obtained analytically by solving the differential equations (\ref{eq:P}) on the real line along with the initial condition $P(x,t=0,\sigma|\sigma_0)=\delta_{\sigma,\sigma_0}\,\delta(x)$.
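For illustration, the forward equations (\ref{eq:P}) can also be integrated numerically with a simple upwind scheme (this particular scheme is our own sketch, not part of the paper; at CFL number one the advection steps are exact, so the scheme conserves probability and keeps all the mass inside the light cone):

```python
import numpy as np

def solve_forward_fp(v0=1.0, gamma=1.0, t_f=2.0, dx=0.01):
    """Upwind integration of the forward equations: P+ is advected to the
    right, P- to the left, and tumbling exchanges probability between
    the two channels with probability gamma * dt per step."""
    dt = dx / v0                          # CFL number v0 * dt / dx = 1
    n_steps = int(round(t_f / dt))
    x = np.arange(-1.5 * v0 * t_f, 1.5 * v0 * t_f + dx / 2, dx)
    Pp = np.zeros_like(x)
    Pm = np.zeros_like(x)
    Pp[len(x) // 2] = 1.0 / dx            # delta(x) initial condition, sigma0 = +1
    for _ in range(n_steps):
        Pp = np.roll(Pp, 1)               # exact advection at CFL = 1
        Pm = np.roll(Pm, -1)
        Pp, Pm = ((1 - gamma * dt) * Pp + gamma * dt * Pm,
                  (1 - gamma * dt) * Pm + gamma * dt * Pp)
    return x, Pp, Pm
```

The update is a convex combination of nonnegative fields, so positivity and the total probability are preserved exactly.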
To obtain the Fokker-Planck equations for the backward propagator, we instead consider an infinitesimal time interval $[0,dt]$ and suppose that the particle is initially located at $x$ at time $t=0$ with velocity $\dot x=\sigma\,v_0$. In the time interval $[0,dt]$, the particle either travelled with velocity $\dot x=\sigma \,v_0$ to $x+\sigma \,v_0\,dt$ or tumbled to a velocity $\dot x=-\sigma \,v_0$ and remained at $x$. After either of these two events, the particle must reach the origin in a time $t-dt$. Therefore, we can write the following equation for the backward propagator \begin{align} Q(x,t,\sigma) = (1-\gamma\,dt)\,Q(x+\sigma \,v_0\,dt,t-dt,\sigma) + \gamma\,dt \,\,Q(x,t-dt,-\sigma)\,,\label{eq:Qdt} \end{align} which, after expanding to first order in $dt$, gives the \emph{backward} Fokker-Planck equations: \begin{subequations} \begin{align} - \partial_t Q(x,t,+)= +v_0\,\partial_x Q(x,t,+)-\gamma \,Q(x,t,+)+\gamma\, Q(x,t,-)\,,\\ - \partial_t Q(x,t,-)=- v_0\,\partial_x Q(x,t,-)-\gamma\, Q(x,t,-)+\gamma\, Q(x,t,+)\,. \end{align} \label{eq:Q} \end{subequations} The backward propagator $Q(x,t,\sigma|\sigma_f)$ of the free particle can be obtained analytically by solving the differential equations (\ref{eq:Q}) on the real line along with the initial condition $Q(x,t=0,\sigma|\sigma_f)=\delta_{\sigma,\sigma_f}\,\delta(x)$. The derivation can be found in e.g. \cite{DebruyneSur21} and the results are recalled in \ref{app:prop}. It is now easy to show that the bridge propagator $ P_B(x,t,\sigma\,|\,\sigma_0,t_f,\sigma_f)$ defined in (\ref{eq:Pb}) in terms of $P$ and $Q$ satisfies a similar set of Fokker-Planck equations.
Omitting the conditional dependence for conciseness, we find that the bridge propagator satisfies the effective Fokker-Planck equations \begin{subequations} \begin{align} \partial_t P_B(x,t,+) &= - v_0\partial_xP_B(x,t,+)- \gamma_B^*(x,+,t) P_B(x,t,+) + \gamma_B^*(x,-,t) P_B(x,t,-)\,,\\[1em] \partial_t P_B(x,t,-) &= + v_0\partial_xP_B(x,t,-)- \gamma_B^*(x,-,t) P_B(x,t,-)+ \gamma_B^*(x,+,t) P_B(x,t,+)\,, \end{align} \label{eq:effFPb} \end{subequations} where the transition rates are now space-time dependent: \begin{subequations} \begin{align} \gamma_B^*(x,\dot x=+v_0,t\,|\,\sigma_0,t_f,\sigma_f) &= \gamma\, \frac{Q(x,\tau,-\,|\,\sigma_f)}{Q(x,\tau,+\,|\,\sigma_f)}\,,\\[1em] \gamma_B^*(x,\dot x=-v_0,t\,|\,\sigma_0,t_f,\sigma_f) &= \gamma\, \frac{Q(x,\tau,+\,|\,\sigma_f)}{Q(x,\tau,-\,|\,\sigma_f)}\,, \end{align} \label{eq:effBQ} \end{subequations} where $\tau=t_f-t$ and $Q$ is the free backward propagator satisfying the backward Fokker-Planck equations (\ref{eq:Q}). One can easily check that the effective equations (\ref{eq:effFPb}) conserve the probability current such that the bridge propagator is indeed normalised to unity $\int_{-\infty}^{\infty} dx \sum_{\sigma=\pm}P_B(x,t,\sigma\,|\,\sigma_0,t_f,\sigma_f)=1$. Physically, the effective tumbling rate is the free tumbling rate that is modified in such a way that tumbling events that bring the particle closer to the origin are more likely to happen. Using the expression of the free backward propagator (recalled in \ref{app:prop}), we find the exact expressions of the transition rates.
For example, when $\sigma_0=+1$ and $\sigma_f=-1$, we get \begin{subequations} \begin{align} \gamma_B^*(x,\dot x=+v_0,t\,|\,+,t_f,-) &= 2\,\gamma\,\delta[f(\tau,x)]+\,\gamma\,\sqrt{\frac{g(\tau,x)}{f(\tau,x)}} \frac{I_1[ h(\tau,x)]}{I_0[ h(\tau,x)]}\,,\\[1em] \gamma_B^*(x,\dot x=-v_0,t\,|\,+,t_f,-) &= \gamma\, \frac{1}{2\,\delta[f(\tau,x)]+\sqrt{\frac{g(\tau,x)}{f(\tau,x)}} \frac{I_1[ h(\tau,x)]}{I_0[ h(\tau,x)]}}\,, \end{align} \label{eq:effB} \end{subequations} where $\tau=t_f-t$. In the expressions (\ref{eq:effB}), $I_0(z)$ and $I_1(z)$ denote the modified Bessel functions while the functions $f$, $g$, and $h$ are defined as \begin{align} f(t,x) = \gamma\,t-\frac{\gamma\,x}{v_0}\,,\quad g(t,x)= \gamma\,t+\frac{\gamma\,x}{v_0}\,, \quad h(t,x)=\sqrt{f(t,x)\,g(t,x)}\,. \label{eq:fgh} \end{align} The Dirac delta terms in the effective rates (\ref{eq:effB}) force the particle to remain in the double sided light cone defined as (see figure \ref{fig:bridges}) \begin{align} \left\{\begin{array}{ll} |x|\leq v_0\, t\,, &\text{when }\, 0\leq t \leq \frac{t_f}{2}\, ,\\ |x|\leq v_0 \,(t_f-t) \,, &\text{when }\, \frac{t_f}{2}\leq t \leq t_f\, ,\label{eq:lc} \end{array} \right. \end{align} which is a natural boundary induced by the combination of the finite velocity of the particle along with the bridge constraint. In practice, when performing numerical simulations, these Dirac delta terms can be safely removed from the effective tumbling rates and can be replaced by hard constraints such that the particle must remain in the double sided light cone (\ref{eq:lc}). By comparing the effective Fokker-Planck equations for the bridge propagator (\ref{eq:effFPb}) with the ones for the free propagator (\ref{eq:P}), one can see that the bridge constraint is encoded in the space-time dependence of the tumbling rates and leads to the effective Langevin equation (\ref{eq:effeom}) with a space-time dependent telegraphic noise presented in the introduction.
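Strictly inside the light cone, the rates (\ref{eq:effB}) can be evaluated as follows (an illustrative sketch with the Dirac delta terms dropped, to be combined with the hard light-cone constraint; SciPy's exponentially scaled Bessel functions are used to avoid overflow at large $h$):

```python
import numpy as np
from scipy.special import ive  # exponentially scaled modified Bessel functions

def bridge_rates(x, t, v0=1.0, gamma=1.0, t_f=5.0):
    """Effective bridge tumbling rates for sigma0 = +1, sigma_f = -1,
    evaluated strictly inside the light cone (delta terms omitted)."""
    tau = t_f - t
    f = gamma * (tau - x / v0)
    g = gamma * (tau + x / v0)
    h = np.sqrt(f * g)
    # I1(h)/I0(h) via scaled Bessel functions: the factor e^h cancels
    ratio = np.sqrt(g / f) * ive(1, h) / ive(0, h)
    return gamma * ratio, gamma / ratio  # rates for sigma = +1 and sigma = -1
```

Note that, away from the delta terms, the two rates multiply to $\gamma^2$, and the rate for $\dot x=+v_0$ grows with $x$: tumbles towards the origin are favoured, as they must be for a bridge.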
RTPs with space and time dependent tumbling rates are relatively easy to simulate and there have been quite a few recent studies on them~\cite{Doussal20,Dor19,Singh20,Angelani14}. Unlike these models, where the space and time dependency of the tumbling rates is ``put in by hand'', here we see from first principles how geometric constraints, such as the bridge condition, naturally generate space-time dependent tumbling rates. To generate trajectories of RTPs with space-time dependent tumbling rates, one proceeds as follows. Instead of generating a sequence of tumbling times that follow a \emph{homogeneous} Poisson process with constant rate $\gamma$, as presented in the introduction, one needs to generate a sequence of times that follow a \emph{non-homogeneous} Poisson process with a variable rate. There exist several methods to generate non-homogeneous Poisson processes (see \cite{Lewis79} for a review). A quick and simple method is to discretise the effective equation (\ref{eq:effeom}) over small time increments $\Delta t$ which, omitting the conditional dependence, reads \begin{align} x_B(t+\Delta t) = x_B(t) + v_0\,\Delta t\,\sigma_B^*(x_B,\dot x_B,v_0,t)\,,\label{eq:eomd} \end{align} and to evolve the telegraphic signal according to \begin{align} \sigma_B^*(x_B,\dot x_B,v_0,t+\Delta t) = \left\{\begin{array}{rl}\sigma_B^*(x_B,\dot x_B,v_0,t)\, \quad & \text{with \, prob.~ }=1-\gamma_B^*(x_B,\dot x_B,t)\, \Delta t\, , \\ -\sigma_B^*(x_B,\dot x_B,v_0,t)\, \quad &\text{with \, prob.~ } =\gamma_B^*(x_B,\dot x_B,t)\, \Delta t\, . \end{array}\right.
\label{eq:telegraphicN} \end{align} This method is very simple to implement but nevertheless requires choosing the time increments $\Delta t$ sufficiently small such that the switching probabilities in (\ref{eq:telegraphicN}) do not exceed unity, which can be an issue if one is interested in regimes close to the light cone structure, where the effective rates become large and might require more advanced sampling techniques \cite{Lewis79}. Nevertheless, this method effectively generates run-and-tumble bridge trajectories and works well in practice (see left panel in figure \ref{fig:bridge}). In the right panel in figure \ref{fig:bridge}, we computed numerically the probability distribution of the position at some intermediate time $t=t_f/2$, by generating bridge trajectories from the effective tumbling rates (\ref{eq:effB}), and compared it to the theoretical position distribution for the bridge propagator, which can be easily computed by substituting the free forward and backward propagators (recalled in \ref{app:prop}) in the expression of the bridge propagator in (\ref{eq:Pb}): \begin{subequations} \begin{align} P_B(x,t,-\,|\,+,t_f,-)&=\frac{\gamma}{2\,v_0}\frac{I_0[h(t,x)]}{I_0[\gamma\, t_f]}\,\left(2\,\delta[f(\tau,x)]+\sqrt{\frac{g(\tau,x)}{f(\tau,x)}}I_1[h(\tau,x)]\right)\,, \\ P_B(x,t,+\,|\,+,t_f,-)&=P_B(x,\tau,-\,|\,+,t_f,-)\,, \end{align} \label{eq:Pbridge} \end{subequations} where $\tau=t_f-t$. In the expressions (\ref{eq:Pbridge}), $I_0(z)$ and $I_1(z)$ denote the modified Bessel functions. As can be seen in figure \ref{fig:bridge}, the agreement is excellent. \begin{figure}[t] \subfloat{% \includegraphics[width=0.5\textwidth]{bridgeTraj.pdf}% }\hfill \subfloat{% \includegraphics[width=0.5\textwidth]{bridgeDistP.pdf}% }\hfill \caption{\textbf{Left panel:} A typical bridge trajectory of a RTP starting at the origin with a positive velocity $\dot x=+v_0$ and returning to the origin after a time $t_f=5$ with a negative velocity $\dot x=-v_0$.
The trajectory was generated using the effective tumbling rates (\ref{eq:effB}). \textbf{Right panel:} Position distribution at $t=t_f/2$ for a RTP starting at the origin with a positive velocity $\dot x=+v_0$ and returning to the origin after a time $t_f=5$ with a negative velocity $\dot x=-v_0$. The position distribution $P_B(x,t,+\,|\,+,t_f,-)$ obtained numerically by sampling from the effective tumbling rates (\ref{eq:effB}) is compared with the theoretical prediction (\ref{eq:Pbridge}). The agreement is excellent. Note that the Dirac delta function in (\ref{eq:Pbridge}) is not shown, in order to fit the data within the limited window size.}\label{fig:bridge} \end{figure} Note that in the diffusive limit when \begin{align} v_0\rightarrow\infty\,,\quad \gamma\rightarrow\infty \,,\quad \text{with } D\equiv\frac{v_0^2}{2\gamma} \text{ fixed}\,,\label{eq:bml} \end{align} where $D$ is the effective diffusion coefficient, the effective tumbling rates (\ref{eq:effB}) both become the same constant $\gamma$, which is independent of $x$ and $t$. The signature of the bridge constraint can be found in the subleading terms of this limit, which give \begin{subequations} \begin{align} \gamma_B^*(x,\dot x=+v_0,t\,|\,+,t_f,-) &\sim \gamma+\frac{x}{\tau\,\sqrt{2\,D}}\,\gamma^{\frac{1}{2}}+O(\gamma^{0}),\\ \gamma_B^*(x,\dot x=-v_0,t\,|\,+,t_f,-) &\sim \gamma-\frac{x}{\tau\,\sqrt{2\,D}}\,\gamma^{\frac{1}{2}}+O(\gamma^{0})\,. \end{align} \label{eq:effgbm} \end{subequations} Note that one needs to retain the subleading terms up to order $O(\sqrt{\gamma})$ in order to capture the nontrivial $x$-dependence, which indeed ensures the bridge condition.
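This expansion can be obtained explicitly (a short sketch of the computation, which is not spelled out above). Away from the light cone, where the Dirac delta terms vanish, set $u=x/(v_0\,\tau)$, which is of order $\gamma^{-1/2}$ in the limit (\ref{eq:bml}). Then \begin{align} \sqrt{\frac{g(\tau,x)}{f(\tau,x)}}=\sqrt{\frac{1+u}{1-u}}=1+\frac{x}{v_0\,\tau}+O(\gamma^{-1})\,,\qquad \frac{I_1[h(\tau,x)]}{I_0[h(\tau,x)]}=1-\frac{1}{2\,h(\tau,x)}+O(h^{-2})\,, \end{align} with $h(\tau,x)=\gamma\,\tau\,\sqrt{1-u^2}\to\gamma\,\tau$. Multiplying the two factors and using $v_0=\sqrt{2\,D\,\gamma}$ yields $\gamma_B^*(x,\dot x=+v_0,t\,|\,+,t_f,-)\sim \gamma+x\,\gamma^{1/2}/(\tau\,\sqrt{2\,D})$ to the two leading orders, as announced; the expansion of the rate for $\dot x=-v_0$ follows from the identity $\gamma_B^*(+)\,\gamma_B^*(-)=\gamma^2$, which holds away from the light cone.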
Upon inserting these rates in the effective Fokker-Planck equations (\ref{eq:effFPb}) and solving for $P_B(x,t)\equiv P_B(x,t,+)+P_B(x,t,-)$ by adding and subtracting the two equations, we find that the first order terms in the tumbling rates cancel out and we recover the well-known effective Fokker-Planck equation for Brownian motion \begin{align} \partial_t P_B(x,t) = D\partial_x[\partial_x P_B(x,t) - 2 P_B(x,t)\partial_x\ln(Q(x,\tau))]\,,\label{eq:DiffB} \end{align} where $\tau=t_f-t$ and $Q(x,\tau)=\frac{1}{\sqrt{4\pi D\tau}}e^{-x^2/4D\tau}$ is the free Brownian backward propagator. This Fokker-Planck equation leads to the effective Langevin equation (\ref{eq:BMeomb}) that generates Brownian bridges presented in the introduction. \section{Generalisation to other constrained run-and-tumble trajectories} \label{sec:gen} In the previous section, we obtained effective tumbling rates to generate bridge run-and-tumble trajectories. In this section, we generalise the method to other types of constrained run-and-tumble trajectories, namely excursions and meanders. \subsection{Generating run-and-tumble excursions} An excursion is a bridge trajectory that is further constrained to remain above the origin.
The particle must start from the origin $x_0=0$, necessarily in the state $\sigma_0=+1$, and return to the origin at the time $t_f$, necessarily in the state $\sigma_f=-1$, while never crossing the origin: \begin{align} x(0)=x(t_f)=0\,,\quad \dot x(0)=+v_0\,,\quad x(t')\geq 0\quad \forall t' \in [0,\,t_f]\,,\quad \dot x(t_f)=-v_0\,.\label{eq:exc} \end{align} Similarly to the bridge propagator (\ref{eq:Pb}), the propagator for an excursion can be written as (see figure \ref{fig:excursions}) \begin{align} P_E(x,t,\sigma\,|\,t_f)=\frac{P_{\text{absorbing}}(x,t,\sigma)\,Q_{\text{absorbing}}(x,t_f-t,\sigma)}{P_{\text{absorbing}}(x=0,t_f)}\,,\label{eq:Pe} \end{align} where the subscript $E$ refers to ``excursion'', and $P_{\text{absorbing}}$ and $Q_{\text{absorbing}}$ are now the forward and backward propagator of the free RTP in the presence of an absorbing boundary located at the origin. \begin{figure}[t] \centering \includegraphics[width=0.4\textwidth]{excursion.pdf} \caption{A sketch of a run-and-tumble excursion trajectory that starts at the origin and returns to the origin at a fixed time $t_f$ while remaining positive. Due to the Markov property in the extended phase space $(x,\dot x)$, the excursion trajectory can be decomposed into two independent parts: a left part over the interval $[0,t]$, where the particle moves from the point $(0,+v_0)$ to the point $(x,-v_0)$ at time $t$ while staying positive and a right part over the interval $[t,t_f]$, where it moves from the point $(x,-v_0)$ at time $t$ to the point $(0,-v_0)$ at time $t_f$ while staying positive. 
The combination of the finite velocity of the particle and the excursion condition induces a positive double sided light cone in which the particle must remain (shaded red region).} \label{fig:excursions} \end{figure} They satisfy the set of Fokker-Planck equations (\ref{eq:P}) and (\ref{eq:Q}) that must now be solved on the half line with the initial condition $P_{\text{absorbing}}(x,t\!=\!0,\sigma)=\delta_{\sigma,+}\delta(x)$ and $Q_{\text{absorbing}}(x,t\!=\!0,\sigma)=\delta_{\sigma,-}\delta(x)$. The boundary conditions at $x=0$ can be obtained by looking at the differential forms (\ref{eq:Pdt})-(\ref{eq:Qdt}) and are found to be $P_{\text{absorbing}}(x\!=\!0,t,+)=0$ and $Q_{\text{absorbing}}(x\!=\!0,t,-)=0$. Following the steps in the previous section, we find that the analogs of the effective tumbling rates (\ref{eq:effBQ}) are given by \begin{subequations} \begin{align} \gamma_E^*(x,\dot x=+v_0,t\,|\,t_f) &= \gamma\, \frac{Q_{\text{absorbing}}(x,\tau,-)}{Q_{\text{absorbing}}(x,\tau,+)}\,,\\[1em] \gamma_E^*(x,\dot x=-v_0,t\,|\,t_f) &= \gamma\, \frac{Q_{\text{absorbing}}(x,\tau,+)}{Q_{\text{absorbing}}(x,\tau,-)}\,, \end{align} \label{eq:gEQ} \end{subequations} where $\tau=t_f-t$ and $Q_{\text{absorbing}}$ is the backward propagator of the free particle in the presence of an absorbing boundary.
Using its expression (recalled in \ref{app:prop}), we find the exact expressions of the transition rates: \begin{subequations} \begin{align} \gamma_E^*(x,\dot x=+v_0,t\,|\,t_f) &= 2\,\frac{\gamma v_0\tau}{ x}\,\delta[f(\tau,x)]+\frac{\gamma^2\, x}{v_0}\,\sqrt{\frac{g(\tau,x)}{f(\tau,x)}}\frac{I_1[h(\tau,x)]}{\frac{\gamma x}{v_0}I_0[h(\tau,x)]+\sqrt{\frac{f(\tau,x)}{g(\tau,x)}}I_1[h(\tau,x)]}\,,\\ \gamma_E^*(x,\dot x=-v_0,t\,|\,t_f) &= \gamma\, \frac{1}{2\,\frac{v_0\tau}{ x}\,\delta[f(\tau,x)]+\frac{\gamma\, x}{v_0}\,\sqrt{\frac{g(\tau,x)}{f(\tau,x)}}\frac{I_1[h(\tau,x)]}{\frac{\gamma x}{v_0}I_0[h(\tau,x)]+\sqrt{\frac{f(\tau,x)}{g(\tau,x)}}I_1[h(\tau,x)]}}\,, \end{align} \label{eq:effE} \end{subequations} where $\tau=t_f-t$. In the expressions (\ref{eq:effE}), $I_0(z)$ and $I_1(z)$ denote the modified Bessel functions while the functions $f$, $g$, and $h$ are defined in (\ref{eq:fgh}). As in the bridge case, the Dirac delta terms in the effective rates (\ref{eq:effE}) force the particle to remain in the positive double sided light cone defined as (see figure \ref{fig:excursions}) \begin{align} \left\{\begin{array}{ll} 0\leq x\leq v_0\, t\,, &\text{when }\, 0\leq t \leq \frac{t_f}{2}\, ,\\ 0\leq x\leq v_0 \,(t_f-t) \,, &\text{when }\, \frac{t_f}{2}\leq t \leq t_f\, ,\label{eq:plc} \end{array} \right. \end{align} which is a natural boundary induced by the combination of the finite velocity of the particle along with the excursion constraint. In practice, when performing numerical simulations, these Dirac delta terms can be safely removed from the effective tumbling rates and replaced by hard constraints such that the particle must remain in the positive double sided light cone (\ref{eq:plc}). The effective rates (\ref{eq:effE}) generate run-and-tumble excursion trajectories (see left panel in figure \ref{fig:excursion}).
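For readers who wish to reproduce such trajectories, the sampling loop can be sketched as follows. This is a minimal illustration rather than the code used for the figures: the exact rates (\ref{eq:effE}) involve the functions $f$, $g$ and $h$ of the appendix, so the rate is left as a pluggable callable (a constant-rate stand-in is used below), and the Dirac delta terms are replaced by the hard light-cone constraint (\ref{eq:plc}).

```python
import random

def sample_excursion(gamma_eff, t_f, v0, dt=1e-3, seed=0):
    """Time-discretised sampling of a run-and-tumble excursion on [0, t_f].

    gamma_eff(x, sigma, t) is the effective tumbling rate; in practice the
    exact rates of eq. (effE), without their Dirac-delta part, would be
    plugged in here.  The delta terms are replaced by the hard constraint
    that the particle stays inside the positive double sided light cone.
    """
    rng = random.Random(seed)
    x, sigma, t = 0.0, +1, 0.0          # an excursion starts at (0, +v0)
    ts, xs = [t], [x]
    while t < t_f - dt:
        # tumble with probability gamma_eff * dt (first-order discretisation)
        if rng.random() < gamma_eff(x, sigma, t) * dt:
            sigma = -sigma
        x += sigma * v0 * dt
        t += dt
        # hard constraint: remain inside the positive double sided light cone
        lo, hi = 0.0, v0 * min(t, t_f - t)
        if x < lo or x > hi:
            sigma = -sigma               # forced tumble at the cone boundary
            x = min(max(x, lo), hi)
        ts.append(t)
        xs.append(x)
    return ts, xs

# constant-rate stand-in for the space-time dependent rate of eq. (effE)
gamma, v0, t_f = 1.0, 1.0, 5.0
ts, xs = sample_excursion(lambda x, s, t: gamma, t_f, v0)
```

Replacing the lambda by the exact $\gamma_E^*$ of (\ref{eq:effE}) (minus its delta part) yields properly weighted excursions; with the constant stand-in the loop only illustrates the mechanics of the constraint.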
In the right panel of figure \ref{fig:excursion}, we numerically computed the probability distribution of the position at the intermediate time $t=t_f/2$ by generating excursion trajectories from the effective tumbling rates (\ref{eq:effE}), and compared it to the theoretical position distribution, which is easily obtained by substituting the forward and backward propagators of a free particle in the presence of an absorbing boundary (recalled in \ref{app:prop}) into the expression (\ref{eq:Pe}) of the excursion propagator: \begin{subequations} \begin{align} P_E(x,t,+) &= \frac{\gamma\,t_f}{I_1[\gamma t_f]\,(v_0\,\tau+x)}\left(\frac{\gamma x}{v_0}\,I_0[h(\tau,x)]+\sqrt{\frac{f(\tau,x)}{g(\tau,x)}}I_1[h(\tau,x)]\right)\nonumber\\ &\quad \times \left(\delta[f(t,x)]+\frac{\gamma\,x}{v_0}\frac{1}{\sqrt{f(t,x)\,g(t,x)}} I_1[h(t,x)]\right)\,,\\ P_E(x,t,-)&=P_E(x,\tau,+)\,, \end{align} \label{eq:Pexcursion} \end{subequations} where $\tau=t_f-t$. In the expressions (\ref{eq:Pexcursion}), $I_0(z)$ and $I_1(z)$ denote the modified Bessel functions while the functions $f$, $g$, and $h$ are defined in (\ref{eq:fgh}). As can be seen in figure \ref{fig:excursion}, the agreement is excellent. \begin{figure}[t] \subfloat{% \includegraphics[width=0.5\textwidth]{excursionTraj.pdf}% }\hfill \subfloat{% \includegraphics[width=0.5\textwidth]{excursionDistP.pdf}% }\hfill \caption{\textbf{Left panel:} A typical excursion trajectory of a RTP starting at the origin and returning to the origin after a time $t_f=5$ while remaining positive. The trajectory was generated using the effective tumbling rates (\ref{eq:effE}). \textbf{Right panel:} Position distribution at $t=t_f/2$ for a RTP starting at the origin and returning to the origin after a time $t_f=5$ while remaining positive.
The position distribution $P_E(x,t,+\,|\,t_f)$ obtained numerically by sampling from the effective tumbling rates (\ref{eq:effE}) is compared with the theoretical prediction (\ref{eq:Pexcursion}). The agreement is excellent. Note that the Dirac delta function in (\ref{eq:Pexcursion}) is not shown, as it does not fit within the limited window of the plot.}\label{fig:excursion} \end{figure} As in the bridge case, we can compute the diffusive limit (\ref{eq:bml}) of the effective rates (\ref{eq:effE}) to find that they take a rather simple form \begin{subequations} \begin{align} \gamma_E^*(x,\dot x=+v_0,t\,|\,t_f) &\sim \gamma + \left(\frac{x}{\tau\,\sqrt{2D}}-\frac{\sqrt{2D}}{x}\right)\,\gamma^{\frac{1}{2}}+O(\gamma^{-1})\,,\\ \gamma_E^*(x,\dot x=-v_0,t\,|\,t_f) &\sim\gamma - \left(\frac{x}{\tau\,\sqrt{2D}}-\frac{\sqrt{2D}}{x}\right)\,\gamma^{\frac{1}{2}}+O(\gamma^{-1})\,, \end{align} \label{eq:gEbm} \end{subequations} which, upon insertion into the effective Fokker-Planck equations (\ref{eq:effFPb}), gives back the effective Langevin equation for Brownian excursions \cite{MajumdarEff15}. \subsection{Generating run-and-tumble meanders} A meander is a trajectory that starts at the origin and stays above it, regardless of its final position. The particle must start from the origin $x_0=0$, necessarily in the state $\sigma_0=+1$, and remain above the origin up to time $t_f$: \begin{align} x(0)=0\,,\quad \dot x(0)=+v_0\,,\quad x(t')\geq 0\quad \forall t' \in [0,\,t_f]\,.\label{eq:mea} \end{align} Similarly to the bridge propagator (\ref{eq:Pb}), the propagator for a meander can be written as (see figure \ref{fig:meanders}) \begin{align} P_M(x,t,\sigma\,|\,t_f)=\frac{P_{\text{absorbing}}(x,t,\sigma)\,S(x,t_f-t,\sigma)}{S(x=0,t_f,+)}\,,\label{eq:Pm} \end{align} where the subscript $M$ refers to ``meander'' and $P_{\text{absorbing}}$ is the forward propagator in the presence of an absorbing boundary located at the origin, defined in the previous section.
In expression (\ref{eq:Pm}), $S(x,t,\sigma)$ denotes the survival probability, {\em i.e.} the probability that a free particle starting at $x$ in the state $\sigma$ does not cross the origin up to time $t$. \begin{figure}[t] \centering \includegraphics[width=0.4\textwidth]{meander.pdf} \caption{A sketch of a run-and-tumble meander trajectory that starts at the origin and remains positive up to a time $t_f$. Due to the Markov property in the extended phase space $(x,\dot x)$, the meander trajectory can be decomposed into two independent parts: a left part over the interval $[0,t]$, where the particle moves from the point $(0,+v_0)$ to the point $(x,-v_0)$ at time $t$ while staying positive, and a right part over the interval $[t,t_f]$, where it moves from the point $(x,-v_0)$ at time $t$ to an arbitrary point at time $t_f$ while staying positive. The combination of the finite velocity of the particle and the meander condition induces a positive single sided light cone in which the particle must remain (shaded red region). Note that once the particle is beyond the line $x=-v_0\,(t-t_f)$ in the $(x,t)$ plane (green dashed line), the particle survives for sure and the tumbling rates return to their free constant value $\gamma_M^*(x,\dot x,t\,|\,t_f)=\gamma$.} \label{fig:meanders} \end{figure} The survival probability satisfies the same Fokker-Planck equations as the backward propagator (\ref{eq:Q}) but must be solved on the half line with the initial condition $S(x,t=0,\sigma)=\Theta(x)$, where $\Theta$ is the Heaviside step function, {\em i.e.} $\Theta(x)=1$ if $x> 0$ and $\Theta(x)=0$ if $x<0$. One can show, again using the differential form (\ref{eq:Qdt}), that the boundary condition must be $S(x=0,t,-)=0$.
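The survival probability entering the meander construction can also be tabulated numerically. The sketch below is illustrative and assumes the backward equations take the standard two-state form $\partial_t S_\pm = \pm v_0\,\partial_x S_\pm + \gamma\,(S_\mp - S_\pm)$ with $S(x,0,\sigma)=\Theta(x)$ and $S(0,t,-)=0$ (the concrete form of (\ref{eq:Q}) is given earlier in the paper); a first-order upwind scheme is used.

```python
import numpy as np

def survival_rtp(gamma=1.0, v0=1.0, L=10.0, nx=2001, t_max=2.0):
    """Upwind finite-difference sketch of the RTP survival probability
    S(x, t, sigma) on the half line with an absorbing origin."""
    xg = np.linspace(0.0, L, nx)
    dx = xg[1] - xg[0]
    dt = 0.5 * dx / v0                  # CFL-stable time step
    Sp = np.ones(nx)                    # S(x, t, +), initially Theta(x)
    Sm = np.ones(nx)                    # S(x, t, -)
    Sm[0] = 0.0                         # absorbed when moving left at x = 0
    t = 0.0
    while t < t_max:
        dSp = v0 * (np.roll(Sp, -1) - Sp) / dx + gamma * (Sm - Sp)
        dSm = -v0 * (Sm - np.roll(Sm, 1)) / dx + gamma * (Sp - Sm)
        Sp = Sp + dt * dSp
        Sm = Sm + dt * dSm
        Sp[-1] = 1.0                    # far boundary: survival is certain
        Sm[0] = 0.0                     # absorbing boundary condition
        t += dt
    return xg, Sp, Sm

gamma, v0 = 1.0, 1.0
xg, Sp, Sm = survival_rtp(gamma=gamma, v0=v0)
# ratio entering the effective meander rate for the + state
gm_plus = gamma * Sm / np.maximum(Sp, 1e-12)
```

The tabulated $S_\pm$ can then be used in the ratio $\gamma\,S(x,\tau,\mp)/S(x,\tau,\pm)$ that defines the effective meander rates; far from the absorbing wall the ratio tends to the free value $\gamma$, as expected.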
Following the steps in the previous section, we find that the analogs of the effective tumbling rates (\ref{eq:effBQ}) are given by \begin{subequations} \begin{align} \gamma_M^*(x,\dot x=+v_0,t\,|\,t_f) &= \gamma\, \frac{S(x,\tau,-)}{S(x,\tau,+)}\,,\\[1em] \gamma_M^*(x,\dot x=-v_0,t\,|\,t_f) &= \gamma\, \frac{S(x,\tau,+)}{S(x,\tau,-)}\,, \end{align} \label{eq:gMS} \end{subequations} where $\tau=t_f-t$ and $S$ is the survival probability of the free particle in the presence of an absorbing boundary. Using its expression (recalled in \ref{app:prop}), we find the exact expressions of the transition rates: \begin{subequations} \begin{align} \gamma_M^*(x,\dot x=+v_0,t\,|\,t_f) &= \gamma\,\frac{1-\int_0^{\tau} dt' F(t',x,-)}{1-\int_0^{\tau} dt' F(t',x,+)}\,,\\ \gamma_M^*(x,\dot x=-v_0,t\,|\,t_f) &= \gamma\,\frac{1-\int_0^{\tau} dt' F(t',x,+)}{1-\int_0^{\tau} dt' F(t',x,-)}\,, \end{align} \label{eq:effM} \end{subequations} where $\tau=t_f-t$. In the expressions (\ref{eq:effM}), the function $F(t,x,\sigma)$ is the first-passage distribution (see \ref{app:prop}) given by \begin{subequations} \begin{align} F(t,x,+) &= \gamma\,\frac{e^{-\gamma t}}{g(t,x)}\left(\frac{\gamma x}{v_0}\,I_0[\gamma h(t,x)]+\sqrt{\frac{f(t,x)}{g(t,x)}}\,I_1[\gamma h(t,x)]\right)\,,\\ F(t,x,-) &= \gamma\,e^{-\gamma t}\,\left(\delta[f(t,x)]+\frac{\gamma\, x}{v_0\,\sqrt{h(t,x)}}\,I_1[\gamma h(t,x)]\right)\,, \end{align} \label{eq:Fm} \end{subequations} where $I_0(z)$ and $I_1(z)$ denote the modified Bessel functions while the functions $f$, $g$, and $h$ are defined in (\ref{eq:fgh}). As in the bridge case, the Dirac delta terms in the effective rates (\ref{eq:effM}) force the particle to remain in the positive single sided light cone defined as \begin{align} 0\leq x\leq v_0\, t\,, \quad \text{when }\,\quad 0\leq t \leq t_f\,,\label{eq:pslc} \end{align} which is a natural boundary induced by the combination of the finite velocity of the particle along with the meander constraint.
In practice, when performing numerical simulations, these Dirac delta terms can be safely removed from the effective tumbling rates and replaced by hard constraints such that the particle must remain positive. Note that once the particle is beyond the line $x=-v_0\,(t-t_f)$ in the $(x,t)$ plane, the particle survives for sure and the tumbling rates return to their free constant value $\gamma$ (see figure \ref{fig:meanders}). The effective rates (\ref{eq:effM}) generate run-and-tumble meander trajectories (see left panel in figure \ref{fig:meander}). In the right panel of figure \ref{fig:meander}, we numerically computed the probability distribution of the position at the intermediate time $t=3\,t_f/4$ by generating meander trajectories from the effective tumbling rates (\ref{eq:effM}), and compared it to the theoretical position distribution, which is easily obtained by substituting the forward propagator and the survival probability in the presence of an absorbing boundary (recalled in \ref{app:prop}) into the expression (\ref{eq:Pm}) of the meander propagator: \begin{subequations} \begin{align} P_M(x,t,+) &= \frac{e^{\gamma\,t_f}}{v_0}\, \frac{F(t,x,-)\left[1-\int_0^{\tau} dt' F(t',x,+)\right]}{I_0(\gamma\,t_f)+I_1(\gamma\,t_f)}\,,\\ P_M(x,t,-)&=P_M(x,\tau,+)\,, \end{align} \label{eq:Pmeander} \end{subequations} where $\tau=t_f-t$ and $F$ is defined in (\ref{eq:Fm}). In the expressions (\ref{eq:Pmeander}), $I_0(z)$ and $I_1(z)$ denote the modified Bessel functions while the functions $f$, $g$, and $h$ are defined in (\ref{eq:fgh}). \begin{figure}[t] \subfloat{% \includegraphics[width=0.5\textwidth]{meanderTraj.pdf}% }\hfill \subfloat{% \includegraphics[width=0.5\textwidth]{meanderDistP.pdf}% }\hfill \caption{\textbf{Left panel:} A typical meander trajectory of a RTP starting at the origin and remaining positive up to time $t_f=5$. The trajectory was generated using the effective tumbling rates (\ref{eq:effM}).
\textbf{Right panel:} Position distribution at $t=3\,t_f/4$ for a meander RTP starting at the origin and remaining positive up to time $t_f=5$. The position distribution $P_M(x,t,+\,|\,t_f)$ obtained numerically by sampling from the effective tumbling rates (\ref{eq:effM}) is compared with the theoretical prediction (\ref{eq:Pmeander}). The agreement is excellent. The distribution exhibits two regimes, one below $x=-v_0(t-t_f)=5/4$ and one beyond $x=5/4$, due to the region in the $(x,t)$ plane where the tumbling rates return to their free constant value $\gamma$ (see figure \ref{fig:meanders}). Note that the Dirac delta function of the never-tumbling trajectory is not shown, as it does not fit within the limited window of the plot.}\label{fig:meander} \end{figure} As can be seen in figure \ref{fig:meander}, the agreement is excellent. As in the bridge case, we can compute the diffusive limit (\ref{eq:bml}) of the effective rates (\ref{eq:effM}) to find that they take a rather simple form \begin{subequations} \begin{align} \gamma_M^*(x,\dot x=+v_0,t\,|\,t_f) &\sim \gamma - \sqrt{\frac{2\gamma}{\pi\tau}}\frac{e^{-\frac{x^2}{4D\tau}}}{\text{erf}\left(\frac{x}{\sqrt{4D\tau}}\right)}+O(\gamma^{-1})\,,\\ \gamma_M^*(x,\dot x=-v_0,t\,|\,t_f) &\sim\gamma + \sqrt{\frac{2\gamma}{\pi\tau}}\frac{e^{-\frac{x^2}{4D\tau}}}{\text{erf}\left(\frac{x}{\sqrt{4D\tau}}\right)}+O(\gamma^{-1})\,, \end{align} \label{eq:gMbm} \end{subequations} which, upon insertion into the effective Fokker-Planck equations (\ref{eq:effFPb}), gives back the effective Langevin equation for Brownian meanders \cite{MajumdarEff15}. \section{Summary and outlook} \label{sec:sum} In this paper, we studied run-and-tumble bridge trajectories, which constitute a prominent example of a non-Markovian constrained process. We provided an efficient way to generate them numerically by deriving an effective Langevin equation for the constrained dynamics.
We showed that the tumbling rate of the RTP acquires a space-time dependence that naturally encodes the bridge constraint. We derived the exact expression of the effective tumbling rate and showed how it leads to an efficient sampling of run-and-tumble bridge trajectories. The method is quite versatile, and we extended it to other types of constrained run-and-tumble trajectories such as excursions and meanders. It would be interesting to generalise our results to higher dimensions and study geometrical properties of bridge trajectories such as their convex hull. Indeed, the convex hull is a natural observable that appears in the study of the motion of foraging animals and measures the spatial extent of their territory \cite{Randon09}. In this context, the bridge constraint would enforce the condition that the animal must return to its home after a fixed amount of time. Another possible extension of this work would be to derive effective equations of motion to generate other types of constrained trajectories. For instance, it would be interesting to study various constraints on linear statistics, such as trajectories with a fixed area below the curve. \section*{Acknowledgments} This work was partially supported by the Luxembourg National Research Fund (FNR) (App. ID 14548297).
\section{HD~34282 peculiarities} \label{sec:HD34282} HD~34282 (V1366~Ori, PDS~176) is a Herbig Ae (HAe) star (the cool A-range subset of the Herbig AeBe stars) that is also a \DSS. Pulsations were originally discovered by \citet{2004MNRAS.352L..11A}, with ten frequencies later identified by \citet{2006MmSAI..77...97A} using multi-site ground-based photometry. The observed frequencies, from 64.7 to 79.4 cycles/day (d$^{-1}$), are among the highest frequencies detected in a \DSS. In order to resolve the star's unusual oscillation spectrum in more detail, the {\em Microvariability and Oscillations of STars} ({\em MOST}) satellite \citep{2003PASP..115.1023W} observed HD~34282 for 31~days in 2007 December. These observations reveal a unique spectrum: 22 frequencies detected, forming groups of frequencies at regular intervals of about 44~\mhz, which may be indicative of the large spacing between successive orders of radial pulsation. The amplitudes of these groups steadily grow to the highest-frequency group at around 79~d$^{-1}$, above which point there is an abrupt cut-off in pulsation power, which we will show is consistent with the star pulsating just below the acoustic cut-off frequency. Importantly, the acoustic cut-off frequency has not been previously identified as a factor in the pulsation spectrum of any \DSS. Here we report on the results of the {\em MOST} observations and our attempts to model the oscillation frequencies that have led to these conclusions. HD~34282 was first identified as an HAe object by \citet{1994A&AS..104..315T}, and is therefore assumed to be a pre-main-sequence (PMS) star. Spectral classifications have ranged from A0 to A3 \citep[\eg][]{2003AJ....126.2971V, 2001A&A...378..116M, 2004A&A...419..301M}. The latest Hipparcos reductions report a parallax of 5.2~$\pm$~1.7 milliarcseconds, corresponding to a distance of 191~$^{+89}_{-46}$~pc \citep{2007A&A...474..653V}.
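As a quick consistency check (an illustrative calculation, not part of the original analysis), the quoted distance range follows from the reciprocal of the parallax:

```python
# Hipparcos parallax of HD 34282: 5.2 +/- 1.7 milliarcseconds
p, dp = 5.2, 1.7
d = 1000.0 / p                 # central distance in parsecs (~192 pc)
d_hi = 1000.0 / (p - dp)       # +1 sigma (smaller parallax, larger distance)
d_lo = 1000.0 / (p + dp)       # -1 sigma
```

Simple inversion gives roughly $192^{+93}_{-47}$~pc, consistent with the quoted $191^{+89}_{-46}$~pc to within rounding of the published parallax.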
The above spectral classifications place HD~34282 blueward of the classical instability strip, relatively close to the zero-age main sequence (ZAMS; see Fig.~\ref{fig:HRDpos}). At this position in the Hertzsprung-Russell (HR) diagram, the star's PMS nature is ambiguous -- it has either already arrived on the ZAMS, or is just about to arrive there. \citet{2001A&A...378..116M} report a projected rotational velocity (\vsini)$~= 129 \pm 11$~km~s$^{-1}$. \citet{2004A&A...419..301M} found that HD~34282 has an anomalously low metallicity, [Fe/H]~$=-0.8$ (fractional metal content by mass, $Z=0.004$). Such a low metallicity for a supposedly PMS star in the disk of the Milky Way poses questions regarding the true evolutionary status of the star; unless there are patches of low-metallicity material within the interstellar medium from which the star could form, a newly formed star should not have such low metal content. One solution is that perhaps this star is also a $\lambda$-Bootis star, with depressed levels of heavier metals at the surface, but near-solar values for carbon, nitrogen, and oxygen (CNO) throughout, and near-solar values for heavy metals below the surface layers of the star as well. The $\lambda$-Bootis characteristics could be the result of recent preferential accretion of metal-depleted gas over metal-rich dust, \eg~as outlined in \citet{2002MNRAS.335L..45K}. In this case HD~34282 could consistently be a PMS (or new main-sequence) star without otherwise needing to question the low metal abundance of the star. In the past, \citet{1998AJ....116.2530G} observed this star as part of a systematic campaign to identify $\lambda$-Bootis characteristics in young A-type stars, in which they classified HD~34282 as {\em A0.5~Vb~(shell)r} in their extended Morgan-Keenan classification system, but failed to detect $\lambda$-Bootis characteristics.
On the other hand, \citet{1998AJ....116.2530G} did {\em not} report an anomalously low $Z$ for the star, as later found by \citet{2004A&A...419..301M}. The source of this discrepancy is unknown. Low metallicity and high \vsini~are confirmed by Amado et al. (in preparation) for heavier metals, but analysis of the required lighter metals (CNO plus S), needed to determine $\lambda$~Bootis status, has not yielded an answer one way or the other -- blending of the relevant spectral lines with other lines, caused by the high \vsini~of HD~34282, is beyond the current capabilities of stellar atmosphere models. We now present the {\em MOST} observations and data reductions of this star (Section~\ref{sec:MOST}), refine the position of HD~34282 in the HRD using Tycho data (Section~\ref{sec:HRD_pos}), and show the results of an asteroseismic analysis of the {\em MOST} observed frequencies (Section~\ref{sec:asteroseis}). \section{{\em MOST} observations} \label{sec:MOST} \begin{table} \caption{Pulsation frequencies in order of decreasing SigSpec significance (sig column) for HD~34282 in d$^{-1}$ and $\mu$Hz, with respective last-digit errors given in parentheses. Amplitudes are in mmag; S/N values were derived from Period04.
The A06 ID cross references the same frequencies as identified by \citet{2006MmSAI..77...97A}.} \label{tab:freqs} \begin{scriptsize} \begin{tabular}{lcccccc} \hline ID & freq & freq & amp & sig & S/N & A06 \\ & [d$^{-1}$] & [$\mu$Hz] & [mmag] & & & ID\\ \hline $f_1$ & 79.423(1) & 919.24(1) & 6.344 & 945.14 & 45.50 & A10 \\ $f_2$ & 79.252(2) & 917.27(2) & 3.523 & 378.97 & 47.61 & A9 \\ $f_3$ & 75.416(2) & 872.87(2) & 3.339 & 360.40 & 23.07 & A7 \\ $f_4$ & 75.864(2) & 878.05(3) & 2.427 & 227.04 & 22.42 & A8 \\ $f_5$ & 75.356(2) & 872.18(3) & 2.205 & 189.99 & 31.13 & A6 \\ $f_6$ & 71.589(3) & 828.58(3) & 2.075 & 171.87 & 12.37 & A4 \\ $f_7$ & 71.525(3) & 827.84(3) & 1.862 & 158.92 & 10.07 & A3 \\ $f_8$ & 57.060(3) & 660.42(3) & 1.681 & 150.24 & 17.38 & A1 \\ $f_9$ & 71.972(2) & 833.01(3) & 1.630 & 183.24 & 14.77 & A5 \\ $f_{10}$ & 68.152(3) & 788.80(3) & 1.475 & 147.62 & 17.91 & A2 \\ $f_{11}$ & 64.695(3) & 748.79(4) & 1.245 & 119.38 & 20.16 & - \\ $f_{12}$ & 61.053(4) & 706.63(5) & 0.754 & 60.01 & 12.23 & - \\ $f_{13}$ & 67.787(5) & 784.58(5) & 0.714 & 54.56 & 10.37 & - \\ $f_{14}$ & 71.043(5) & 822.26(5) & 0.707 & 50.67 & 7.22 & - \\ $f_{15}$ & 75.402(4) & 872.70(5) & 0.658 & 56.83 & 6.94 & - \\ $f_{16}$ & 53.427(5) & 618.37(6) & 0.553 & 45.81 & 10.96 & - \\ $f_{17}$ & 67.534(5) & 781.65(6) & 0.524 & 48.45 & 9.83 & - \\ $f_{18}$ & 75.448(6) & 873.24(7) & 0.492 & 32.21 & 14.60 & - \\ $f_{19}$ & 68.669(6) & 794.78(7) & 0.448 & 32.34 & 6.84 & - \\ $f_{20}$ & 72.319(6) & 837.03(7) & 0.418 & 28.38 & 5.43 & - \\ $f_{21}$ & 60.353(6) & 698.53(7) & 0.377 & 27.20 & 7.98 & - \\ $f_{22}$ & 67.465(7) & 780.84(8) & 0.345 & 22.67 & 7.05 & - \\ \hline \end{tabular} \end{scriptsize} \end{table} On 2003 June 30 the {\em MOST} satellite was launched into a polar, Sun-synchronous circular orbit with an altitude of 820~km. It carries a 15-cm Rumak-Maksutov telescope with a single, custom broadband (350 to 750 nm) optical filter attached to a CCD photometer. 
{\em MOST}'s orbital period is 101.413 minutes, which corresponds to an orbital frequency of $\sim$14.2~d$^{-1}$. {\em MOST} observed HD~34282 from 2007 December~4 to 2008 January~4 (31 days) as an uninterrupted Direct Imaging Target. Individual exposures were $\sim 3$~s each with 20 consecutive images stacked on board the satellite, giving a sampling time of $\sim 60$~s per co-added measurement. The {\em MOST} on-board clocks are updated with time stamps from the ground stations (synchronized with atomic time) every day. Individual exposure start times are accurate to 0.01\,s, resulting in an even higher accuracy for the total exposure times. Barycentric corrections are for the Earth. Two independent methods for the reduction of {\em MOST} Direct Imaging Photometry have been developed: 1) a combination of classical aperture photometry and point-spread function fitting to the Direct Imaging Subrasters, as developed by \citet{2006ApJ...646.1241R}; 2) a data-reduction pipeline for space-based, open-field photometry that includes automated corrections for cosmic-ray hits and a stepwise pixel-to-pixel decorrelation of stray-light effects on the CCD \citep{2008CoAst.152...77H}. Tests on several {\em MOST} data sets in which both methods were used gave no significant differences in the quality of the extracted light curves \citep[\eg see][]{2009A&A...494.1031Z}. For HD~34282, the {\em MOST} Direct Imaging data were reduced using the method developed by \citet{2006ApJ...646.1241R}. The resulting light curve consists of 23093 data points, as outliers from the phases of the {\em MOST} orbit with the highest stray-light counts had to be discarded in the reduction. Additionally, there are two gaps in the light curve, which resulted from interruptions of the HD~34282 observations for a high-priority {\em MOST} target of opportunity. HD~34282 has a bright companion at a distance of about 3 arcminutes.
In Direct Imaging Mode, the focal-plane scale is about 3 arcseconds per pixel and the raster used is about 20 pixels wide, resulting in a mask size of about 60 arcseconds. This safely excludes contamination of the HD~34282 {\em MOST} light curve by a brighter star about 180 arcseconds away. For the frequency analysis, the {\sc Period04} \citep{2005CoAst.146...53L} and {\sc SigSpec} \citep{2007A&A...467.1353R} software packages were used, and the respective results compared for consistency. HD~34282 was observed during a period of relatively unvarying extinction, as only moderate peak-to-peak irregular variability in the integrated light of less than 0.1 magnitude is observed (top panel of Fig.~\ref{lcs}). Twenty-two frequencies thought to be intrinsic to the star ({\em i.e.} non-instrumental) are found between 50 and 85~d$^{-1}$ (578 and 926~$\mu$Hz), corresponding to periods ranging from 18 to 30 minutes. These frequencies are listed in Table~\ref{tab:freqs} and shown in Fig.~\ref{amps}. Ten of these frequencies were previously detected by \citet{2006MmSAI..77...97A}, given by the A06 ID in the final column of Table~\ref{tab:freqs}. Twelve new, lower-amplitude frequencies are detected due to the enhanced sensitivity of {\em MOST} compared to the ground-based instrumentation. A number of frequencies in the power spectrum of Fig.~\ref{amps} (\eg at 51~d$^{-1}$) are not intrinsic but are either identified as aliases of the pulsation frequencies with the {\em MOST} orbital frequency, and disappear with appropriate pre-whitening, or as instrumental frequencies related to the orbit of the satellite. The modulation of stray light with the orbital period of the satellite is itself modulated slightly with a 1~d$^{-1}$ frequency. The Sun-synchronous orbit of {\em MOST} brings it over almost the same point on Earth after one day. The albedo pattern of the Earth introduces a 1~d$^{-1}$ modulation of the amplitude of the 14.2~d$^{-1}$ modulation of scattered Earthshine.
Therefore, all significant peaks that can be related to these instrumental effects within the frequency resolution were discarded. None of the 22 frequencies are identified as combination frequencies of the others, {\em i.e.} all would appear to be independent frequencies. \begin{figure} \includegraphics[width=\columnwidth]{fig1.eps} \caption{Top panel: complete 31-day {\em MOST} light curve of HD~34282; bottom panel: one-day subset of the {\em MOST} light curve illustrating the pulsational variability.} \label{lcs} \end{figure} \begin{figure*} \includegraphics[width=1.80\columnwidth]{fig2.eps} \caption{Top panel: complete amplitude spectrum from 0 to 100~d$^{-1}$ (bottom axis) or from 0 to 1157.407~\mhz\, respectively (top axis), where the corresponding spectral window is given as inset; bottom panel: zoom into the amplitude spectrum between 50 and 85~d$^{-1}$ (bottom axis) or from 578 to 984~\mhz\, respectively (top axis) where the 22 identified pulsation frequencies are marked (in red using negative values). Note that unmarked peaks are aliases of the pulsation frequencies with the {\em MOST} orbital frequency. Dark grey lines mark the multiples of the {\em MOST} orbital frequency and light grey lines are the respective 1~d$^{-1}$ side lobes.} \label{amps} \end{figure*} Collectively, the frequency spectrum of HD~34282 in Fig.~\ref{amps} is quite striking, with distinct groups of frequencies spaced every 44~\mhz, ending abruptly at the high-frequency end. Other stars that exhibit similar frequency groupings are 44 Tau \citep{2008A&A...478..855L} and HD~144277 \citep{2011A&A...533A.133Z}. None of the stars for which frequency clustering has been previously observed shows the sudden cut-off in amplitude at high frequencies that HD~34282 does.
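The $\sim$44~\mhz~group spacing can be checked directly against Table~\ref{tab:freqs}. Taking the strongest member of each of the seven groups as its centre -- an illustrative convention, not the formal averaging performed later -- gives:

```python
# strongest frequency in each of the seven groups of Table 1, in microHz
centres = [660.42, 706.63, 748.79, 788.80, 828.58, 872.87, 919.24]
spacings = [b - a for a, b in zip(centres, centres[1:])]
mean_spacing = sum(spacings) / len(spacings)   # ~43 microHz
```

The individual differences range from about 40 to 46~\mhz~and average to about 43~\mhz, consistent with the quoted $\sim$44~\mhz~group spacing.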
\section{Stellar and asteroseismic modelling} \subsection{HR Diagram position} \label{sec:HRD_pos} Determining an HAe star's theoretical HR diagram position from observations is particularly difficult, mostly due to the circumstellar material obscuring the star. Given these difficulties, we purposely take a broad approach to illustrate the relative precision of the asteroseismic analysis below. A forthcoming paper by Amado et al. (in preparation) will address the fundamental parameters of the star in much greater detail, parameters which will need to be reconciled with the asteroseismic analysis. Here, three potential positions for HD~34282 are considered, displayed in Fig.~\ref{fig:HRDpos}, all derived using broad-band photometry. \begin{figure} \begin{centering} \includegraphics[width=\columnwidth]{fig3.eps} \caption{Possible positions of HD~34282 in the HR diagram: ``A'' from \citet{2003AJ....126.2971V}; ``B'' and ``C'' from Tycho photometry of Fig.~\ref{fig:VT}. The instability strip for the first three radial modes of PMS stars is as originally calculated by \citet{1998ApJ...507L.141M}. The colouring indicates the large spacing of models that fall within the observational range of HD~34282.} \label{fig:HRDpos} \end{centering} \end{figure} An effective temperature ($T_{eff}$) range of 8420 to 9520~K is used, encompassing spectral classes determined by various authors ranging from A0 to A3. In the HR diagram, this places the star blueward of the instability strip as determined by \citet{1998ApJ...507L.141M} for the first three ($n=0$ to 2) radial modes of PMS stars (driven by the $\kappa$-opacity mechanism). However, HD~34282 (as will be shown) is pulsating in higher overtones than $n=2$, and so its placement to the left of the instability strip in the HR diagram is not surprising, and follows a trend outlined in \citet{2011PhDT......Casey}.
$T_{eff}$ and bolometric correction (BC) values were extracted from the spectral types by comparison to the tables published in \citet{1996imsa.book.....O}, based upon the work of \citet{1982lbg6.conf.....schmidt}. These same tables were used to determine the amount of reddening, $E(B-V)$, by comparing the intrinsic ($B-V$)$_0$ colours to broad-band observations of $B-V$. The overall extinction in $V$, $A_V$, was determined assuming $A_V = R_V E(B-V)$, where $R_V = 3.1$ is the empirically determined ratio of selective-to-overall extinction that usually applies to extinction caused by interstellar material, as measured, {\em e.g.}, by \citet{1994RMxAA..29..163T}. Importantly, in the case of HAe stars, which are often subject to large and variable levels of circumstellar extinction, this assumption is quite likely wrong. At high levels of $A_V$ (more than 1.5 magnitudes or so), stars may start to appear bluer instead of redder with increased $A_V$, a result of the star's surrounding dust clouds reflecting light into the line of sight of the observer, causing a bluing effect. In this case $R_V=3.1$ will underestimate the intrinsic brightness of the star, mispositioning it in the HR diagram. Unfortunately, without studies such as that of \citet{1996A&A...309..809V}, the true value of $R_V$ cannot be determined. This is illustrated by position A of Fig.~\ref{fig:HRDpos}, derived from $V=9.84 \pm 0.02$, $B-V = 0.17 \pm 0.02$ of \citet{2003AJ....126.2971V}. Given the range of possible spectral classes and distances (the latter from the Hipparcos range cited in the introduction), the resulting error box is a trapezoid in the HR diagram. Significantly, position A falls essentially below the ZAMS, consistent with HD~34282 suffering from the bluing effect, but it is also possible that the true parallax falls outside the $1\sigma$ Hipparcos parallax uncertainty quoted in Section~\ref{sec:HD34282}, which could also result in an underestimate of the star's luminosity.
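The extinction chain just described can be made concrete with a short numerical example. The intrinsic colour $(B-V)_0 \approx 0.0$ adopted below is an illustrative round number for an early-A dwarf, not the tabulated value actually used:

```python
import math

V, BmV = 9.84, 0.17        # broad-band photometry used for position A
BmV0 = 0.0                 # assumed intrinsic (B-V)_0, illustrative only
R_V = 3.1                  # interstellar selective-to-overall extinction ratio
d = 191.0                  # Hipparcos distance in pc

E_BmV = BmV - BmV0                            # colour excess E(B-V)
A_V = R_V * E_BmV                             # extinction in V (~0.53 mag)
M_V = V - A_V - 5.0 * math.log10(d / 10.0)    # absolute magnitude (~2.9)
```

The resulting $M_V \approx 2.9$ is several magnitudes fainter than a typical A0 dwarf, illustrating how an underestimated $A_V$ can push position A to, or below, the ZAMS.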
If the star {\em is} subject to bluing, this may be revealed by time-series broad-band photometry. Positions B and C are derived from Tycho~2 photometry \citep{2000A&A...355L..27H}, displayed in Fig.~\ref{fig:VT}, in which the light curve of the star in Tycho $V$ ($V_T$) magnitudes is shown (Tycho $B$, $B_T$, also exists, but is not shown here). $B_T$ and $V_T$ filters can be transformed to Johnson $B$ and $V$ filters using the prescription detailed in Appendix C of \citet{2002AJ....124.1670M}, and addendum \citep{2006AJ....131.2360M}. With this, a Johnson $V$, $B-V$ colour-magnitude diagram has been constructed in Fig.~\ref{fig:VT}b, displaying a potential bluing effect. There is one distinct brightening event, during which the starlight is more likely to obey an $R_V=3.1$ reddening law, and hence two data points from this event, marked ``B'' and ``C'' in Fig.~\ref{fig:VT} (both parts), are used to calculate the corresponding HR diagram positions shown in Fig.~\ref{fig:HRDpos}. Unfortunately, the data set is not complete enough to determine whether $R_V=3.1$ is correct for either point (or what the correct value should be); however, corrections for these two points will certainly give better estimates of $A_V$ than the data used to determine position A. Unsurprisingly, positions B and C are quite a bit brighter than position~A. \begin{figure} \begin{centering} \includegraphics[width=0.75\columnwidth]{fig4a.eps} \includegraphics[width=0.75\columnwidth]{fig4b.eps} \caption{a) Tycho $V$-band light curve for HD~34282; b) CMD for HD~34282, each point representing a different point in the Tycho 2 light curve. Johnson magnitudes were converted from Tycho magnitudes. Positions B and C provide the corresponding positions in the HR diagram in Fig.~\ref{fig:HRDpos}.} \label{fig:VT} \end{centering} \end{figure} As will be shown below, position B seems to agree best with the asteroseismic analysis.
The above analysis shows the broad range of intrinsic luminosities that can be obtained for an HAe star if the data are not treated properly. The above is not meant to be a final analysis, but an indicator that more observations are needed. Long-term multi-filter observations are currently under way to determine the true reddening law of HD~34282, and a forthcoming paper will address the fundamental parameters of HD~34282 in much greater detail (Amado et al., in preparation). \subsection{Asteroseismic analysis} \label{sec:asteroseis} \subsubsection{Large spacing} The unique frequency spectrum of HD~34282 suggests that pulsation frequencies are being triggered around a ``main'' frequency, the central frequency of each group perhaps indicative of successive orders of radial pulsation. If this is true, then the group spacing of approximately 44~\mhz~will match the asteroseismic average large spacing between radial orders of pulsation \citep{2010aste.book.....A}. Figure~\ref{fig:HRDpos} shows the large spacings of the stellar models under consideration. Models with a large spacing between 40 and 50~\mhz~that would match the observed group spacings of HD~34282 coincide with position ``B'' in the HR diagram. Further analysis, presented below, strongly supports this general region as the true location of HD~34282 in the HR diagram. The stellar models used here are the same as used in \citet{2009ApJ...704.1710G} for the PMS \DSS s in NGC~2264. To summarize, a large grid of stellar models, closely spaced in position in the HR diagram, was constructed using the {\sc yrec} stellar evolution code \citep{2008Ap&SS.316...31D}. Solar metallicity ($Z=0.02$) models between 1.00 and 5.00~$\msun$ and low metallicity ($Z=0.004$) models between 1.00 and 3.00~$\msun$ (in increments of 0.01~$\msun$ for both cases) were considered.
Both PMS and post-ZAMS evolutionary tracks were evolved, the former starting on the Hayashi track with a polytrope and ending at the ZAMS, and the latter using the ZAMS as a starting point and ending near the base of the red giant branch. For each model within an evolutionary track, adiabatic and non-adiabatic oscillation frequencies $\nu_{n{\ell}m}$ were calculated using Guenther's non-adiabatic stellar pulsation program \citep{1994ApJ...422..400G}. Radial orders $n=0$ to 30 and angular degrees $\ell=0$ to 3 were calculated. The effects of rotation were not considered. \subsubsection{``Averaged'' frequencies.} It is possible that some of the individual frequencies within each group could be caused by time variations in the amplitude of, for example, the radial modes, including frequencies that might be damped. In this case, a Fourier transform spectrum would show broadened or distinct modes depending on the resolution of the time-series data. Note that in this case not all of the detected 22 frequencies would be separate pulsation frequencies, and so caution must be taken when comparing the observed frequencies to models. Unfortunately, for the closest pairs, the temporal extent of the observations ($\sim$ 31~days) is insufficient to test this hypothesis, as the frequency resolution from considering only part of the light curve is not high enough to distinguish between \eg $f_1$ and $f_2$. However, if all the frequencies {\em are} stable then some physical phenomenon must be selectively driving the pulsation frequencies in these distinct groups. Here we postulate that each group is triggered around successive radial orders of pulsation, and as an experiment construct an ``average'' frequency for each group for comparison to models.
We computed the weighted average of each frequency group $H_j$, according to: \begin{equation} \label{eq:ampav} H_j=\frac{\sum_{i=1}^{N_j}{f_{ji}a^2_{ji}}}{\sum_{i=1}^{N_j}a^2_{ji}}, \end{equation} where $a_{ji}$ and $f_{ji}$ are the $i^{th}$ constituent amplitudes and frequencies of the $j^{th}$ grouping, with resultant weighted frequency, $H_j$. $N_j$ is the number of frequencies included in the $j^{th}$ group. The weighted averaged frequencies for each group are listed in Table~\ref{tab:V1366_Ori_weighted} in order of increasing frequency. The squared amplitude for each $H_j$ is given by the sum of the constituent squared amplitudes. For comparison, Fig.~\ref{fig:V1366_Ori_weighted} shows the a) unweighted and b) weighted frequency spectra of the star. \begin{table} \begin{minipage}{\columnwidth} \resizebox{\columnwidth}{!} { \input{table2} } \end{minipage} \caption{Weighted frequencies for HD~34282.} \label{tab:V1366_Ori_weighted} \end{table} \begin{figure} \begin{centering} \includegraphics[width=\columnwidth]{fig5a.eps} \includegraphics[width=\columnwidth]{fig5b.eps} \caption{Squared amplitudes for a) unweighted and b) weighted frequencies of HD~34282 as detected by {\em MOST}.} \label{fig:V1366_Ori_weighted} \end{centering} \end{figure} In order to compare these values to models, uncertainties need to be assigned to the $H_j$, a somewhat speculative venture. We choose an uncertainty of $\pm 1$~\mhz~for each $H_j$, matching the approximate range of the two constituent frequencies of the highest-amplitude group, $H_8$.
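The group-averaging of Eq.~(\ref{eq:ampav}) amounts to an amplitude-squared weighted mean; a minimal sketch, using made-up constituent frequencies and amplitudes, is:

```python
def weighted_group(freqs, amps):
    """Amplitude-squared weighted mean frequency H_j of one group,
    plus the group squared amplitude (sum of constituent a^2)."""
    w = [a * a for a in amps]                              # weights a_ji^2
    H = sum(f * wi for f, wi in zip(freqs, w)) / sum(w)    # Eq. (ampav)
    return H, sum(w)

# Two illustrative constituents (microHz, mmag); not real HD 34282 values
H, A2 = weighted_group([918.0, 920.0], [3.0, 1.0])
```

The stronger constituent dominates, pulling the group frequency toward it, which is the intended behaviour when one mode in a group carries most of the power.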
To locate the best match between the observed pulsation spectrum and the model spectra we quantified the fits using the $\chi^2$ equation: \begin{equation} \label{eq:chisq} \chi^2= \frac{1}{N}\sum^{N}_{i=1}\frac{(\nu_{obs,i}-\nu_{mod,i})^2}{\sigma^2_{obs,i}+\sigma^2_{mod,i}}, \end{equation} where $N$ is the number of observed frequencies, $\nu_{obs,i}$ and $\nu_{mod,i}$ are the $i^{th}$ observed and model frequencies respectively, and $\sigma^2_{obs,i}$ and $\sigma^2_{mod,i}$ are the $i^{th}$ observed- and model-frequency uncertainties respectively. As in \citet{2009ApJ...704.1710G}, $\sigma^2_{mod,i}$ is small compared to $\sigma^2_{obs,i}$ and is ignored for the purposes of these calculations. The top two panels of Fig.~\ref{fig:V1366_Ori_z02_echelle} show \chisq fits to our $Z=0.02$ PMS model grid, along with a sample echelle diagram. Only models within the grid that have \chisq $< 3.0$ are shown. The trapezoid corresponds to position~B for HD~34282, and the large black square to our model best fit to the weighted-averaged frequencies. The fits are to radial-order modes only. \begin{figure*} \begin{center} \begin{minipage}[l]{\columnwidth} \includegraphics[width=0.75\textwidth]{fig6_topleft.eps} \includegraphics[width=0.75\textwidth]{fig6_bottomleft.eps} \end{minipage} \begin{minipage}[r]{0.98\columnwidth} \includegraphics[width=0.75\textwidth]{fig6_topright.eps} \includegraphics[width=0.75\textwidth]{fig6_bottomright.eps} \end{minipage} \caption{\chisq fits to weighted frequencies of HD~34282, $\ell = 0$ modes only. Top Row: $Z$=0.02, best model fit \chisq=1.7, $M$=2.30~$\msun$, \logT=3.960, \logl=1.629. Bottom row: $Z$=0.004, best model fit \chisq=4.7, $M$=1.92~$\msun$, \logT=3.935, \logl=1.478. Both echelle diagrams use a folding frequency of 43.4~\mhz.} \label{fig:V1366_Ori_z02_echelle} \end{center} \end{figure*} The bottom two panels show the fits to a $Z=0.004$ model grid. 
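The $\chi^2$ statistic of Eq.~(\ref{eq:chisq}), with the model uncertainties neglected as stated in the text, is straightforward to evaluate for one trial model; all numbers below are illustrative:

```python
def chi_square(obs, model, sig):
    """Reduced chi^2 of Eq. (chisq); model-frequency uncertainties
    are neglected relative to the observed ones, as in the text."""
    return sum((o - m) ** 2 / s ** 2
               for o, m, s in zip(obs, model, sig)) / len(obs)

# Three hypothetical weighted group frequencies (microHz), +-1 uHz each,
# against one trial model's radial-mode frequencies
chi2 = chi_square([830.0, 874.0, 919.0],
                  [831.0, 873.5, 919.5],
                  [1.0, 1.0, 1.0])
```

In the grid search this value is computed for every model, and only models with $\chi^2 < 3.0$ are retained in the figures.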
The best fit to the $Z=0.02$ grid is slightly better than the best fit to the lower-metallicity grid, and thus marginally favours a $\lambda$ Bootis nature for the star. The results are encouraging, particularly given the speculative nature of the analysis. In both cases the observed spectrum of HD~34282 appears to correspond to high radial-order modes, with $n=13$ to 20. Of particular note, in Fig.~\ref{fig:V1366_Ori_z02_echelle} there are no theoretical frequencies displayed above $n=20$, as any pulsation frequency above this value would be greater than the model's acoustic cut-off frequency, a theoretical maximum pulsation frequency for the model, the consequences of which will be addressed in the following subsection. Note that if we plot the original frequencies in a similar echelle diagram as in the top of Fig.~\ref{fig:V1366_Ori_z02_echelle} the frequencies would scatter slightly about the $\ell=0$ (and $\ell=2$) model frequencies, but would not spread to the $\ell=1$ (and 3) model frequencies (see Fig.~\ref{fig:V1366_echelle}). \begin{figure} \begin{centering} \includegraphics[width=\columnwidth]{fig7.eps} \caption{Echelle diagram of the unweighted frequencies of HD~34282 compared to the $Z=0.02$ model of Fig.~\ref{fig:V1366_Ori_z02_echelle}. The frequencies do not extend to $\ell=1$ or 3 modes.} \label{fig:V1366_echelle} \end{centering} \end{figure} Further \chisq~fits were performed with $\ell = 1$ modes only, but the results are not shown. The results are similar in quality, with a line of good fits that is slightly cooler than that of the $\ell = 0$ case, returning models with similar large spacings and radial orders that run from $n=13$ to 20. Simultaneous fits to both $\ell = 0$ and 1 modes did not yield good results, indicating that the frequency groups are not probing the separation between $\ell = 0$ and 1 modes.
Similarly, fits of the unweighted frequencies to $\ell=0$ to 3 modes did not yield good results, producing extremely high \chisq values of around $10^5$. Overall, then, the frequency groups displayed in the pulsation spectrum of HD~34282 are consistent with those groups representing successive orders of radial pulsation of the star, the mechanism producing this pattern being unknown at this time. \subsubsection{Acoustic cut-off frequency} The average amplitudes of the frequencies of the groups increase monotonically with frequency from 700~\mhz\, onward, then abruptly stop, with no periodic signal detected above $\sim 920$~\mhz. We believe that this abrupt drop off corresponds to the acoustic cut-off frequency for HD~34282 and that models with acoustic cut-off frequencies above this frequency can be ruled out. In standard theory, p-modes above the acoustic cut-off frequency are no longer reflected back at the surface but continue on as travelling waves into the atmosphere of the star, where they quickly radiate away their energy. Indeed, in the early days of helioseismology we expected to see an abrupt drop off in the Sun's p-mode spectrum above the theoretically-predicted solar acoustic cut-off frequency \citep{1992A&A...266..532F}. In fact, in the case of the Sun, regularly spaced modes above the acoustic cut-off frequency are observed with amplitudes that decrease at higher frequencies \citep{1988ESASP.286..279J,1988ApJ...334..510L}. Again we expected these modes to form a continuous spectrum, being stochastically driven travelling waves. But in the case of the Sun the modes are spaced out in a pattern that mimics the spacing of the trapped p-modes below the acoustic cut-off frequency.
Currently, we believe that the pseudomodes, as they are called, are driven by turbulent convection in the atmosphere and that their regular spacings are caused by simple geometric interference as the waves travel around the star \citep{1994ApJ...428..827K,1998ApJ...504L..51G,2011ApJ...743...99J}. For most stars, the amplitudes of the p-modes do decrease below the detection threshold well before reaching the theoretical acoustic cut-off frequency. Therefore, the rise in amplitudes with a sudden drop off at the highest observed frequency for HD~34282 is unusual. At this time we do not believe that the acoustic cut-off frequency is below the highest observed frequency in HD~34282, that is, we do not believe any of the observed modes are pseudomodes. If the higher frequency modes we observe were pseudomodes then they should have short lifetimes and random phases. Indeed this could provide a possible explanation for the multiple peaks within each group. Furthermore, as Fig.~\ref{fig:PMSacf} shows, within the uncertainties of HD~34282's HR-diagram position, viable models with acoustic cut-off frequencies as low as $\sim 300$~\mhz~are possible. But what leads us to doubt this possibility is the fact that the amplitudes of the averaged modes continue to increase with frequency. All currently proposed models to explain the existence of the Sun's pseudomodes predict a drop in amplitudes with increasing frequency. Therefore, we speculate that the acoustic cut-off frequency, indeed, provides an upper frequency limit to the regularly spaced p-modes in HD~34282 and that the acoustic cut-off frequency is near or above $\sim~920$~\mhz. Fig.~\ref{fig:PMSacf} shows the acoustic cut-off frequencies for PMS $Z=0.02$ model-grid stars. Under our assumption, we can eliminate models whose acoustic cut-off frequencies are lower than the highest frequency observed, i.e., $f_1$ at $\sim 920$~\mhz.
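The trend of the acoustic cut-off frequency across the HR diagram can be illustrated with the familiar scaling relation $\nu_{ac} \propto g\, T_{eff}^{-1/2}$; this back-of-the-envelope estimate is not the model-grid calculation used in the paper, and the solar calibration and stellar parameters below are illustrative assumptions only.

```python
def nu_ac(M, R, Teff, nu_sun=5300.0, Teff_sun=5777.0):
    """Scaling-relation acoustic cut-off frequency in microHz.
    M and R are in solar units; g/g_sun = M / R^2."""
    return nu_sun * (M / R ** 2) * (Teff / Teff_sun) ** -0.5

# Same mass and Teff, two assumed radii: the more luminous (larger-R)
# model has the lower cut-off, which is the trend exploited in the text
compact = nu_ac(M=2.3, R=2.0, Teff=9100.0)
inflated = nu_ac(M=2.3, R=3.5, Teff=9100.0)
```

At fixed $T_{eff}$, increasing the radius (luminosity) lowers $\nu_{ac}$, which is why a near-horizontal line in the HR diagram separates models that can and cannot support the highest observed frequency.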
As shown in Fig.~\ref{fig:PMSacf}, for fixed $T_{eff}$, the acoustic cut-off frequency decreases as the luminosity of the model star increases. Hence, for HD~34282 there is a near-horizontal line in the HR diagram (solid, black line passing through the sample model in Fig.~\ref{fig:PMSacf}), above which models have too low an acoustic cut-off frequency to meet our assumption (that the acoustic cut-off frequency is above 920~\mhz). Only the models in the region between this line and the ZAMS are viable. Finally we note that our best model fit to the averaged mode frequencies (see Fig.~\ref{fig:V1366_Ori_z02_echelle}) lies just below this line, in agreement with our assumption that the highest observed frequency is just below the acoustic cut-off frequency. \begin{figure} \begin{centering} \includegraphics[width=\columnwidth]{fig8.eps} \caption{Acoustic cut-off frequencies of the PMS models. Models with an acoustic cut-off frequency less than 100~\mhz~and greater than 1000~\mhz~are not shown, as they are outside the observed \DS~pulsation range. Only models between the thick solid line and the ZAMS can support the pulsations of HD~34282. The black square indicates the $Z=0.02$ sample model of Fig.~\ref{fig:V1366_Ori_z02_echelle}.} \label{fig:PMSacf} \end{centering} \end{figure} \subsubsection{Rotational splitting} Here we comment upon the minimum role that rotation must play in the pulsation spectrum of the star. Individual (unaveraged) frequencies could not be identified as distinct low $\ell$-valued modes, so we consider the possibility that the frequencies within groups correspond to a rotationally-split mode. Within a group some frequencies have separations of about 2 to 4~\mhz, but given the observed \vsini, Fig.~\ref{fig:V1366_Ori_HR_maxrot} shows these frequencies are unlikely to be from the same $\ell$ mode.
A first-order estimate of the rotational splitting, $\Delta f$, between two successive $m$ modes (\eg between $m$ and $m+1$) within a multiplet implied by $v$ (the surface equatorial velocity) is $\Delta f = 1/P_{\rm rot}$, where $P_{\rm rot} = (2\pi R_*)/v$ is the surface equatorial rotation period of the star and $R_*$ is the radius of the particular stellar model in question \citep{2010aste.book.....A}.\footnote{It is important to note that for large values of the ratio $R_*/v$, this estimate of $\Delta f$ does not apply in detail, but only on average \citep[][Deupree, private communication]{2008ApJ...679.1499L,2010ApJ...721.1900D}.} For each model in the grid, Fig.~\ref{fig:V1366_Ori_HR_maxrot} shows the equatorial velocity required to give $\Delta f = 4$~\mhz. Only the most luminous models have $v$ consistent with the \vsini~$=~129 \pm 11$~km~s$^{-1}$ of \citet{2001A&A...378..116M}. The solid black line that parallels the ZAMS in Fig.~\ref{fig:V1366_Ori_HR_maxrot} separates models that are consistent with the acoustic cut-off frequency constraint (below the line) from those that are not (above the line). If we accept the acoustic cut-off frequency constraint, then rotation as a possible source of the 2 to 4~\mhz~differences between frequencies within each group is ruled out. If rotational splittings are present in the spectrum then the splittings must be at least $\sim 10$~\mhz. As rotation rates become higher, rotational splittings become progressively non-linear, multiplets originating from different unsplit modes begin to overlap, and mode identification becomes more difficult. Detailed 2D calculations, such as those by \citet{2010ApJ...721.1900D}, are required to calculate these frequencies, and are beyond the current scope of this paper. \begin{figure} \begin{centering} \includegraphics[width=\columnwidth]{fig9.eps} \caption{Surface equatorial rotational velocities required to give $\Delta f = 4$~\mhz~(colour bar).
Models between the solid black line and the ZAMS are those with an acoustic cut-off frequency greater than $f_1$ = 919.24~\mhz.} \label{fig:V1366_Ori_HR_maxrot} \end{centering} \end{figure} \section{Summary and conclusions} \label{sec:conclusions} {\em MOST} observations have discovered some 22 frequencies in the light curve of HD~34282 (12 more than previously observed); asteroseismic analysis suggests that these frequencies cluster around eight successive radial orders of pulsation. The amplitude of each pulsation group grows with frequency, with an abrupt cut-off in power after the highest frequency detected. We believe that the observed frequencies run right up to the acoustic cut-off frequency. We think it is unlikely that we are observing pseudomodes, i.e. untrapped travelling waves, because the amplitudes of the frequencies do not decrease with increasing frequency. The average frequencies compiled from the groups of frequencies simultaneously fit the highest eight radial orders below the acoustic cut-off frequency (orders $n=13$ through 20), and predict the acoustic cut-off frequency. Although we have focused our discussion on fitting radial modes to the weighted-average frequencies, equally viable fits for $\ell=1$ p-modes, exclusively, can be achieved. The $\ell=1$ best-fit models have similar large spacings, with only slightly lower temperatures and luminosities compared to the radial-mode models. Therefore, regardless of which angular degree is considered, the best-fit models to the averaged frequencies occupy nearly the same position in the HR diagram as those shown in Fig.~\ref{fig:V1366_Ori_z02_echelle}. The range of viable models in the HR diagram is also little affected by metallicity, with the lower-$Z$ example we tested predicting a slightly lower mass than the solar-$Z$ case.
The HR diagram position of HD~34282 remains difficult to determine -- the asteroseismic analysis in this work suggests position ``B'' in Fig.~\ref{fig:HRDpos} is the most appropriate, and that the star is therefore suffering from the blueing effect common to some heavily-obscured Herbig Ae stars. Further work in this area is needed, and multi-filter observations of HD~34282 are currently under way to determine the true level of obscuration and reddening of the star. The ultimate cause of the frequency groups is unknown; however, it may be an example of mode trapping combined with large rotational splittings. The large non-linear splittings expected with \vsini$=~129 \pm 11$~km~s$^{-1}$, coupled with a selection mechanism that drives only modes close in frequency space to that of a radial mode, would explain the strange pattern observed in HD~34282. Future theoretical calculations are needed to investigate this possibility. \section*{Acknowledgements} We wish to thank the referee for useful comments that allowed us to clarify certain parts of this paper. KZ is a recipient of an APART fellowship of the Austrian Academy of Sciences at the Institute of Astronomy of the University of Vienna. DBG, MPC, SMR and AFJM acknowledge the funding support of the Natural Sciences and Engineering Research Council of Canada. AFJM also acknowledges the funding support of FQRNT. RK and WWW are supported by the Austrian Science Fund (P22691-N16) and by the Austrian Research Promotion Agency-ALR. PJA acknowledges financial support of the previous Spanish Ministry of Science and Innovation (MICINN), currently Ministry of Economy and Competitiveness, grant AYA2010-14840. DD and ER acknowledge the support by the Junta de Andaluc\'{i}a and the Direcci\'{o}n General de Investigaci\'{o}n (DGI), project AYA2009-10394. \bibliographystyle{mn2e}
\section{Introduction} The basic idea of the brane worlds is that the universe is restricted to a brane inside a higher-dimensional space, called the ``bulk''. In this model, at least some of the extra dimensions are large (possibly infinite), and other branes may be moving through this bulk. Some of the first braneworld models were developed by Rubakov and Shaposhnikov \cite{Rubakov}, Visser \cite{Visser}, Randall and Sundrum \cite{Randall1}, \cite{Randall2}, Pavsic \cite{Matej}, and Gogberashvili \cite{Gogberashvili}. At least some of these models are motivated by string theory. Braneworlds in string theory were discussed in \cite{Antoniadis}; see for a review, for example, \cite{Dieter Lust}. Our approach will be very different from the present standard approaches to braneworlds in the context of string theories, however: in our approach a dynamical string tension is required. Our scenario could be enriched by incorporating aspects of the more traditional braneworlds, but these aspects will be ignored here to simplify the discussion. String theories have been considered by many physicists for some time as the leading candidate for the theory of everything, including gravity, the explanation of all the known particles and all of their known interactions (and probably more) \cite{stringtheory}. According to some, one unpleasant feature of string theory as usually formulated is that it has a dimensionful parameter, in fact its fundamental parameter: the tension of the string. This is so, at least, when the theory is formulated in the most familiar way. Here we instead consider the string tension as a dynamical variable, using the modified measures formalism, which was previously used for a certain class of modified gravity theories under the name of Two Measures Theories or Non-Riemannian Measures Theories; see for example \cite{d,b, Hehl, GKatz, DE, MODDM, Cordero, Hidden}.
In the context of this paper, it is also interesting to mention that the modified measure approach has been used to construct braneworld scenarios \cite{modified measures branes}. When applying these principles to string theory, one arrives at the modified measure approach to string theory, where rather than putting the string tension in by hand it appears dynamically. This approach has been studied in various previous works \cite{a,c,supermod, cnish, T1, T2, T3, cosmologyandwarped}. See also the treatment by Townsend and collaborators of dynamical string tension \cite{xx,xxx}. In our most recent papers on the subject \cite{cosmologyandwarped}, we have also introduced the ``tension scalar'', an additional background field that can be introduced into the theory for the bosonic case (and is expected to be well defined for all types of superstrings as well) and that changes the value of the tension of the extended object along its world sheet; we call it the tension scalar for obvious reasons. Before studying the issues specific to this paper, we review some of the material contained in previous papers. We first present the string theory with a modified measure, containing also gauge fields in the string world sheet; the integration of the equations of motion of these gauge fields gives rise to a dynamically generated string tension, and this string tension may differ from one string to another. Then we consider the coupling of the gauge fields in the string world sheet to currents in this world sheet; as a consequence, this coupling induces variations of the tension along the world sheet of the string. Then we consider a bulk scalar and show how this scalar can naturally induce a world sheet current that couples to the internal gauge fields.
The integration of the equation of motion of the internal gauge field leads to the remarkably simple equation that the local value of the tension along the string is given by $T= e \phi + T _{i} $, where $e$ is a coupling constant that defines the coupling of the bulk scalar to the world sheet gauge fields and $ T _{i} $ is an integration constant which can be different for each string in the universe. Each string is then considered as an independent system that can be quantized. We take into account the string tension generation by introducing the tension as a function of the scalar field as a factor inside a Polyakov-type action. The metric and the factor $e \phi + T _{i} $ then enter together in this effective action, so if there were just one string the factor could be incorporated into the metric and the condition of world sheet conformal invariance would not say very much about the scalar $\phi $. But if many strings probe the same regions of space time, then considering a background metric $g_{\mu \nu}$, for each string the ``string-dependent metric'' $(\phi + T _{i})g_{\mu \nu}$ appears, and in the absence of other background fields, like dilaton and antisymmetric tensor fields, Einstein's equations apply for each of the metrics $(\phi + T _{i})g_{\mu \nu}$, considering two types of strings with different tensions. We call $g_{\mu \nu}$ the universal metric, which in fact does not necessarily satisfy Einstein's equations. In the case of flat space for the string-associated metrics, in the Milne representation, for the case of two types of string tensions, we study the case where the two types of strings have positive string tensions, as opposed to our previous work \cite{cosmologyandwarped} where we found solutions with both positive and negative string tensions.
In the early universe the negative-tension strings have tensions that are large in magnitude but approach zero in the late universe, while the positive string tensions appear in the late universe, approaching a constant value. These solutions are absolutely singularity free. In contrast, we have also studied \cite{Escaping} the case of very different solutions, where both types of strings have positive tensions; these are singular and cannot be continued before a certain time (which corresponded to a bounce in our previous work \cite{cosmologyandwarped}). Here, at the origin of time, the string tensions of both types of strings approach plus infinity, so this opens the possibility of having no Hagedorn temperature in the early universe, and later on in the history of the universe as well, for this type of string cosmology scenario. The tensions can also become infinite at a certain location in the warped coordinate in a warped scenario. Here we will study a situation where we consider the metrics $(\phi + T _{i})g_{\mu \nu}$ for two types of string tensions. The two metrics will again satisfy Einstein's equations, and the two metrics will represent Minkowski space and Minkowski space after a special conformal transformation. In this case, the locations where the two types of strings acquire an infinite tension are given by two surfaces. If the vector that defines the special conformal transformation is light-like, these two surfaces are planes, parallel to each other, and both move with the speed of light. If the vector is time-like or space-like, the two surfaces are spherical and expanding, and the distance between them approaches zero at large times (positive or negative). In both cases this represents a genuine braneworld scenario.
\section{The Modified Measure Theory String Theory} The standard world sheet string sigma-model action using a world sheet metric is \cite{pol1}, \cite{pol2}, \cite{pol3} \begin{equation}\label{eq:1} S_{sigma-model} = -T\int d^2 \sigma \frac12 \sqrt{-\gamma} \gamma^{ab} \partial_a X^{\mu} \partial_b X^{\nu} g_{\mu \nu}. \end{equation} Here $\gamma^{ab}$ is the intrinsic Riemannian metric on the 2-dimensional string worldsheet and $\gamma = det(\gamma_{ab})$; $g_{\mu \nu}$ denotes the Riemannian metric on the embedding spacetime. $T$ is a string tension, a dimensionful scale introduced into the theory by hand. \\ Now, instead of using the measure $\sqrt{-\gamma}$ on the 2-dimensional world-sheet, in the framework of this theory two additional worldsheet scalar fields $\varphi^i (i=1,2)$ are considered. A new measure density is introduced: \begin{equation} \Phi(\varphi) = \frac12 \epsilon_{ij}\epsilon^{ab} \partial_a \varphi^i \partial_b \varphi^j. \end{equation} There are no limitations on employing any other measure of integration different from $\sqrt{-\gamma}$. The only restriction is that it must be a density under arbitrary diffeomorphisms (reparametrizations) on the underlying spacetime manifold. The modified-measure theory is an example of such a theory. \\ Then the modified bosonic string action is (as formulated first in \cite{a} and later discussed and generalized also in \cite{c}) \begin{equation} \label{eq:5} S = -\int d^2 \sigma \Phi(\varphi)(\frac12 \gamma^{ab} \partial_a X^{\mu} \partial_b X^{\nu} g_{\mu\nu} - \frac{\epsilon^{ab}}{2\sqrt{-\gamma}}F_{ab}(A)), \end{equation} where $F_{ab}$ is the field-strength of an auxiliary Abelian gauge field $A_a$: $F_{ab} = \partial_a A_b - \partial_b A_a$.
\\ It is important to notice that the action (\ref{eq:5}) is invariant under conformal transformations of the internal metric combined with a diffeomorphism of the measure fields, \begin{equation} \label{conformal} \gamma_{ab} \rightarrow J\gamma_{ab}, \end{equation} \begin{equation} \label{diffeo} \varphi^i \rightarrow \varphi^{'i}= \varphi^{'i}(\varphi^i) \end{equation} such that \begin{equation} \label{measure diffeo} \Phi \rightarrow \Phi^{'}= J \Phi \end{equation} Here $J$ is the Jacobian of the diffeomorphism in the internal measure fields, which can be an arbitrary function of the world sheet space time coordinates, so this can indeed be called a local conformal symmetry. To check that the new action is consistent with the sigma-model one, let us derive the equations of motion of the action (\ref{eq:5}). \\ The variation with respect to $\varphi^i$ leads to the following equations of motion: \begin{equation} \label{eq:6} \epsilon^{ab} \partial_b \varphi^i \partial_a (\gamma^{cd} \partial_c X^{\mu} \partial_d X^{\nu} g_{\mu\nu} - \frac{\epsilon^{cd}}{\sqrt{-\gamma}}F_{cd}) = 0. \end{equation} Since $det(\epsilon^{ab} \partial_b \varphi^i )= \Phi$, assuming a non-degenerate case ($\Phi \neq 0$), we obtain \begin{equation} \label{eq:a} \gamma^{cd} \partial_c X^{\mu} \partial_d X^{\nu} g_{\mu\nu} - \frac{\epsilon^{cd}}{\sqrt{-\gamma}}F_{cd} = M = const. \end{equation} The equations of motion with respect to $\gamma^{ab}$ are \begin{equation} \label{eq:8} T_{ab} = \partial_a X^{\mu} \partial_b X^{\nu} g_{\mu\nu} - \frac12 \gamma_{ab} \frac{\epsilon^{cd}}{\sqrt{-\gamma}}F_{cd}=0. \end{equation} One can see that these equations are the same as in the sigma-model formulation. Taking the trace of (\ref{eq:8}) we get that $M = 0$. By solving $\frac{\epsilon^{cd}}{\sqrt{-\gamma}}F_{cd}$ from (\ref{eq:a}) (with $M = 0$) we obtain the standard string equations.
\\ The emergence of the string tension is obtained by varying the action with respect to $A_a$: \begin{equation} \epsilon^{ab} \partial_b (\frac{\Phi(\varphi)}{\sqrt{-\gamma}}) = 0. \end{equation} Then by integrating and comparing it with the standard action it is seen that \begin{equation} \frac{\Phi(\varphi)}{\sqrt{-\gamma}} = T. \end{equation} That is how the string tension $T$ is derived as a world sheet constant of integration, as opposed to the standard action (\ref{eq:1}) where the tension is put in ad hoc. Let us stress that the modified measure string theory action does not have any \textsl{ad hoc} fundamental scale parameters associated with it. This can be generalized to incorporate supersymmetry; see for example \cite{c}, \cite{cnish}, \cite{supermod}, \cite{T1}. For other mechanisms for dynamical string tension generation from added string world sheet fields, see for example \cite{xx} and \cite{xxx}. However, the fact that this string tension generation is a world sheet effect, and not a universal uniform string tension generation effect for all strings, has not been sufficiently emphasized before. Notice that each string in its own world sheet determines its own tension; therefore the tension is not universal for all strings. \section{Introducing Background Fields including a New Background Field, The Tension Field} Schwinger \cite{Schwinger} had an important insight and understood that all the information concerning a field theory can be studied by understanding how it reacts to sources of different types. This has been discussed in the textbook by Polchinski, for example \cite{Polchinski}. Then the target space metric and other external fields acquire dynamics which is enforced by the requirement of zero beta functions.
However, in addition to the traditional background fields usually considered in conventional string theory, one may consider as well an additional scalar field that induces currents in the string world sheet; since the current couples to the world sheet gauge fields, this produces a dynamical tension controlled by the external scalar field, as shown at the classical level in \cite{Ansoldi}. In the next two subsections we will study how this comes about in two steps: first we introduce world sheet currents that couple to the internal gauge fields in strings and branes, and second we define a coupling to an external scalar field by defining a world sheet current, coupled to the internal gauge fields in strings, that is induced by such an external scalar field. \subsection{Introducing world sheet currents that couple to the internal gauge fields} If to the action of the string we add a coupling to a world-sheet current $j ^{a}$, i.e. a term \begin{equation} S _{\mathrm{current}} = \int d ^{2} \sigma A _{a} j ^{a} , \label{eq:bracuract} \end{equation} then the variation of the total action with respect to $A _{a }$ gives \begin{equation} \epsilon ^{a b} \partial _{a } \left( \frac{\Phi}{\sqrt{- \gamma}} \right) = j ^{b} . \label{eq:gauvarbracurmodtotact} \end{equation} We thus see that in this case the current acts as a source for $\frac{\Phi}{\sqrt{- \gamma}}$, so the tension acquires a dynamical, world-sheet-dependent character. \subsection{How a world sheet current can naturally be induced by a bulk scalar field, the Tension Field} Suppose that we have an external scalar field $\phi (x ^{\mu})$ defined in the bulk. From this field we can define the induced conserved world-sheet current \begin{equation} j ^{b} = e \partial _{\mu} \phi \frac{\partial X ^{\mu}}{\partial \sigma ^{a}} \epsilon ^{a b} \equiv e \partial _{a} \phi \epsilon ^{a b} , \label{eq:curfroscafie} \end{equation} where $e$ is some coupling constant.
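The conservation of this induced current, $\partial_b j^b = 0$, follows purely from the antisymmetry of $\epsilon^{ab}$ and the commuting of partial derivatives. A minimal sympy sketch of this check (added for illustration; symbol names are ours):

```python
import sympy as sp

# world-sheet coordinates sigma^0, sigma^1 and the coupling constant e
s0, s1 = sp.symbols('sigma0 sigma1')
e = sp.Symbol('e')

# arbitrary smooth scalar phi restricted to the world sheet
phi = sp.Function('phi')(s0, s1)

# antisymmetric symbol eps^{ab} in 2D: eps^{01} = -eps^{10} = 1
eps = {(0, 0): 0, (0, 1): 1, (1, 0): -1, (1, 1): 0}
coords = [s0, s1]

# induced current j^b = e * d_a phi * eps^{ab}
j = [sum(e * sp.diff(phi, coords[a]) * eps[(a, b)] for a in range(2))
     for b in range(2)]

# divergence d_b j^b vanishes identically (mixed partials commute)
div_j = sp.simplify(sum(sp.diff(j[b], coords[b]) for b in range(2)))
print(div_j)  # 0
```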
The interaction of this current with the world sheet gauge field is also invariant under local gauge transformations in the world sheet of the gauge fields $A _{a} \rightarrow A _{a} + \partial_{a}\lambda $. For this case, (\ref{eq:gauvarbracurmodtotact}) can be integrated to obtain \begin{equation} T = \frac{\Phi}{\sqrt{- \gamma}} = e \phi + T _{i} , \label{eq:solgauvarbracurmodtotact2} \end{equation} or equivalently \begin{equation} \Phi = \sqrt{- \gamma}( e \phi + T _{i}) . \label{eq:solgauvarbracurmodtotact} \end{equation} The constant of integration $T _{i}$ may vary from one string to the other. Notice that the interaction is metric independent, since the internal gauge field does not transform under the conformal transformations. This interaction does not therefore spoil the world sheet conformal transformation invariance in the case where the field $\phi$ does not transform under this transformation. One may interpret (\ref{eq:solgauvarbracurmodtotact}) as the result of integrating out the internal gauge field classically (through integration of the equations of motion) or quantum mechanically (by functional integration, respecting the boundary condition that characterizes the constant of integration $T _{i}$ for a given string). Then replacing $ \Phi = \sqrt{- \gamma}( e \phi + T _{i})$ back into the remaining terms in the action gives a correct effective action for each string. Each string is going to be quantized with each one having a different $ T _{i}$. The consequences of an independent quantization of many strings with different $ T _{i}$ covering the same region of space time will be studied in the next section.
\subsection{Consequences from World Sheet Quantum Conformal Invariance on the Tension field, when several strings share the same region of space} \subsubsection{The case where all string tensions are the same, i.e., $T _{i}= T _{0}$, and the appearance of a target space conformal invariance } If all $T _{i}= T _{0}$, we just redefine our background field so that $e\phi+T _{0} \rightarrow e\phi$; then in the effective action of all the strings the same combination $e\phi g_{\mu \nu}$ appears, and only this combination will be determined by the requirement that the conformal invariance in the world sheet of all strings be preserved quantum mechanically, that is, that the beta function be zero. So in this case we will not be able to determine $e\phi$ and $ g_{\mu \nu}$ separately, just the product $e\phi g_{\mu \nu}$, so the equation obtained from equating the beta function to zero will have the target space conformal invariance $e\phi \rightarrow F(x)e\phi $, $g_{\mu \nu} \rightarrow F(x)^{-1}g_{\mu \nu} $. That is, there is no independent dynamics for the Tension Field in this case. On the other hand, if there are at least two types of string tensions, that symmetry will not exist and there is the possibility of determining $e\phi$ and $ g_{\mu \nu}$ separately, as we will see in the next subsection. \subsubsection{The case of two different string tensions } If we have a scalar field coupled to a string or a brane in the way described in the subsection above, i.e. through the current induced by the scalar field in the extended object, then according to eq. (\ref{eq:solgauvarbracurmodtotact}) we have two sources for the variability of the tension when going from one string to the other: one is the integration constant $T _{i}$, which varies from string to string, and the other is the local value of the scalar field, which also produces variations of the tension, even within the string or brane world sheet.
As we discussed in the previous section, we can incorporate the result of the tension as a function of the scalar field $\phi$, given as $e\phi+T_i$ for a string with constant of integration $T_i$, by defining the action that produces the correct equations of motion for such a string, adding also other background fields: the antisymmetric two-index field $A_{\mu \nu}$ that couples to $\epsilon^{ab}\partial_a X^{\mu} \partial_b X^{\nu}$ and the dilaton field $\varphi $ that couples to the topological density $\sqrt{-\gamma} R$, \begin{equation}\label{variablestringtensioneffectiveacton} S_{i} = -\int d^2 \sigma (e\phi+T_i)\frac12 \sqrt{-\gamma} \gamma^{ab} \partial_a X^{\mu} \partial_b X^{\nu} g_{\mu \nu} + \int d^2 \sigma A_{\mu \nu}\epsilon^{ab}\partial_a X^{\mu} \partial_b X^{\nu}+\int d^2 \sigma \sqrt{-\gamma}\varphi R . \end{equation} It is not our purpose here to do a full generic analysis of all possible background metrics, antisymmetric two-index tensor fields and dilaton fields; instead, we will take cases where the dilaton field is a constant or zero, and the antisymmetric two-index tensor field is pure gauge or zero. Then the demand of conformal invariance for $D=26$ becomes the demand that all the metrics \begin{equation}\label{tensiondependentmetrics} g^i_{\mu \nu} = (e\phi+T_i)g_{\mu \nu} \end{equation} will simultaneously satisfy the vacuum Einstein's equations. Notice that if we had just one string, or if all strings had the same constant of integration $T_i = T_0$, then all the $g^i_{\mu \nu}$ metrics are the same; then (\ref{tensiondependentmetrics}) is just a single field redefinition, and therefore there will be only one metric that will have to satisfy Einstein's equations, which of course will not impose a constraint on the tension field $\phi$.
The interesting case to consider is therefore many strings with different $T_i$. Let us consider the simplest case of two strings, labeled $1$ and $2$, with $T_1 \neq T_2$; then we will have two Einstein's equations, for $g^1_{\mu \nu} = (e\phi+T_1)g_{\mu \nu}$ and for $g^2_{\mu \nu} = (e\phi+T_2)g_{\mu \nu}$, \begin{equation}\label{Einstein1} R_{\mu \nu} (g^1_{\alpha \beta}) = 0 \end{equation} and, at the same time, \begin{equation}\label{Einstein2} R_{\mu \nu} (g^2_{\alpha \beta}) = 0 \end{equation} These two simultaneous conditions impose a constraint on the tension field $\phi$, because the metrics $g^1_{\alpha \beta}$ and $g^2_{\alpha \beta}$ are conformally related, but Einstein's equations are not conformally invariant, so the condition that Einstein's equations hold for both $g^1_{\alpha \beta}$ and $g^2_{\alpha \beta}$ is highly non trivial. Let us consider the case where one of the metrics, say $g^2_{\alpha \beta}$, is a Schwarzschild solution, either a 4D Schwarzschild solution times a flat torus of compactified extra dimensions or just a 26D Schwarzschild solution. In this case, it does not appear possible to have a conformally transformed $g^2_{\alpha \beta}$ solve the equations unless the conformal factor that relates the two metrics is a positive constant; let us call it $\Omega^2$. In that case $g^1_{\alpha \beta}$ is a Schwarzschild solution of the same type, just with a different mass parameter and different sizes of the extra dimensions if the compactified solution is considered.
A similar consideration holds for the case where the second metric is a Kasner solution; then in this case also, it does not appear possible to have a conformally transformed $g^2_{\alpha \beta}$ solve the equations unless the conformal factor that relates the two metrics is a constant. We will find other cases where the conformal factor is not a constant; let us call the conformal factor $\Omega^2$ in general, even when it is not a constant. One can also study metrics used to describe gravitational radiation; then again, multiplying both the background flat space and the perturbation by a constant gives us also a solution of the vacuum Einstein's equations. Then for these situations we have \begin{equation}\label{relationbetweentensions} e\phi+T_1 = \Omega^2(e\phi+T_2) \end{equation} which leads to a solution for $e\phi$ \begin{equation}\label{solutionforphi} e\phi = \frac{\Omega^2T_2 -T_1}{1 - \Omega^2} \end{equation} which leads to the tensions of the different strings being \begin{equation}\label{stringtension1} e\phi+T_1 = \frac{\Omega^2(T_2 -T_1)}{1 - \Omega^2} \end{equation} and \begin{equation}\label{stringtension2} e\phi+T_2 = \frac{(T_2 -T_1)}{1 - \Omega^2} \end{equation} Both tensions can be taken as positive if $T_2 -T_1$ is positive and $\Omega^2$ is also positive and less than $1$. It is important that we were forced to consider a multi metric situation. One must also realize that $\Omega^2$ is physical, because both metrics live in the same spacetime, so even if $\Omega^2$ is a constant, we are not allowed to perform a coordinate transformation, consisting for example of a rescaling of coordinates, for one of the metrics and not do the same transformation for the other metric.
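The elementary algebra behind (\ref{solutionforphi})--(\ref{stringtension2}) can be verified with a short sympy computation (symbol names are illustrative):

```python
import sympy as sp

ephi, T1, T2, Om2 = sp.symbols('ephi T1 T2 Omega2')

# relation between the two tensions, eq. (relationbetweentensions)
sol = sp.solve(sp.Eq(ephi + T1, Om2 * (ephi + T2)), ephi)[0]

# solution for e*phi, eq. (solutionforphi)
assert sp.simplify(sol - (Om2 * T2 - T1) / (1 - Om2)) == 0

# the resulting tensions, eqs. (stringtension1) and (stringtension2)
tension1 = sp.simplify(sol + T1)
tension2 = sp.simplify(sol + T2)
assert sp.simplify(tension1 - Om2 * (T2 - T1) / (1 - Om2)) == 0
assert sp.simplify(tension2 - (T2 - T1) / (1 - Om2)) == 0
print('tension relations verified')
```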
Another way to see that $\Omega^2$ is physical consists of considering the scalar built from the ratio of the two measures $\sqrt{-g^1}$ and $\sqrt{-g^2}$, where $ g^1 = \det ( g^1_{\alpha \beta})$ and $ g^2 = \det ( g^2_{\alpha \beta})$; we find that the scalar $\frac{\sqrt{-g^1}}{\sqrt{-g^2}} = \Omega^{D}$, showing that $\Omega$ is a coordinate invariant. \subsubsection{Flat space in Minkowski coordinates and flat space after a special conformal transformation } Let us study now a case where $\Omega^2$ is not a constant. For this we will consider two spaces related by a conformal transformation: flat space in Minkowski coordinates and flat space after a special conformal transformation. The flat space in Minkowski coordinates is \begin{equation}\label{Minkowski} ds_1^2 = \eta_{\alpha \beta} dx^{\alpha} dx^{\beta} \end{equation} where $ \eta_{\alpha \beta}$ is the standard Minkowski metric, with $ \eta_{00}= 1$, $ \eta_{0i}= 0 $ and $ \eta_{ij}= - \delta_{ij}$. This is of course a solution of the vacuum Einstein's equations. We now consider the conformally transformed metric \begin{equation}\label{Conformally transformed Minkowski} ds_2^2 = \Omega(x)^2 \eta_{\alpha \beta} dx^{\alpha} dx^{\beta} \end{equation} which we also demand to satisfy the $D$ dimensional vacuum Einstein's equations.
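The scaling of the determinants is easy to confirm: for $g^1_{\alpha\beta} = \Omega^2 g^2_{\alpha\beta}$ in $D$ dimensions, $\det g^1 = \Omega^{2D}\det g^2$. A sympy sketch for $D=4$, with an illustrative (arbitrarily chosen) diagonal Lorentzian $g^2$:

```python
import sympy as sp

Om = sp.Symbol('Omega', positive=True)
A_, B_, C_, E_ = sp.symbols('A B C E', positive=True)

# an illustrative diagonal Lorentzian metric g^2 in D = 4
g2 = sp.diag(A_, -B_, -C_, -E_)
g1 = Om**2 * g2                     # conformally related metric g^1

D = 4
ratio = sp.simplify(sp.sqrt(-g1.det()) / sp.sqrt(-g2.det()))
assert sp.simplify(ratio - Om**D) == 0
print('sqrt(-g1)/sqrt(-g2) = Omega^D for D = 4')
```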
Let us use the known transformation law of the Ricci tensor under a conformal transformation, applied to $g^1_{\alpha \beta}=\eta_{\alpha \beta}$ and $ g^2_{\alpha \beta}=\Omega(x)^2\eta_{\alpha \beta}$. Defining $ \Omega(x)= \theta^{-1}$, we obtain \begin{equation}\label{Conformally transformed Ricci} \begin{split} R^2_{\alpha \beta}=& R^1_{\alpha \beta} + (D-2) \nabla_{\alpha}\nabla_{\beta}(\ln \theta) + \eta _{\alpha \beta}\eta^{\mu \nu}\nabla_{\mu}\nabla_{\nu}(\ln \theta)+(D-2)\nabla_{\alpha}(\ln \theta)\nabla_{\beta}(\ln \theta) \\ & -(D-2)\eta _{\alpha \beta}\eta^{\mu \nu}\nabla_{\mu}(\ln \theta)\nabla_{\nu}(\ln \theta) \end{split} \end{equation} Since $g^1_{\alpha \beta}=\eta_{\alpha \beta}$, we obtain that $R^1_{\alpha \beta} =0$; also, the covariant derivatives above are covariant derivatives with respect to the metric $g^1_{\alpha \beta}=\eta_{\alpha \beta}$, so they are just ordinary derivatives. Taking this into account, after a bit of algebra we get that \begin{equation}\label{Conformally transformed Ricci explicit} \begin{split} R^2_{\alpha \beta}=& (D-2) \frac{\partial_{\alpha}\partial_\beta \theta}{\theta}+ \eta _{\alpha \beta}\eta^{\mu \nu}(\frac{\partial_{\mu}\partial_{\nu}\theta}{\theta}-\frac{\partial_{\mu}\theta\partial_{\nu}\theta}{\theta^2})\\ & -(D-2)\eta _{\alpha \beta}\eta^{\mu \nu}\frac{\partial_{\mu}\theta\partial_{\nu}\theta}{\theta^2} = 0 \end{split} \end{equation} By contracting (\ref{Conformally transformed Ricci explicit}) we obtain a relation between $\eta^{\mu \nu}\frac{\partial_{\mu}\partial_{\nu}\theta}{\theta}$ and $\eta^{\mu \nu}{\partial_{\mu}\theta\partial_{\nu}\theta}/\theta^2$, \begin{equation}\label{relation} 2\eta^{\mu \nu}\frac{\partial_{\mu}\partial_{\nu}\theta}{\theta} = D\frac{\eta^{\mu \nu}{\partial_{\mu}\theta\partial_{\nu}\theta}}{\theta^2} \end{equation} Using (\ref{relation}) to eliminate the nonlinear term $\eta^{\mu \nu}\frac{\partial_{\mu}\theta\partial_{\nu}\theta}{\theta^2}$ in (\ref{Conformally transformed Ricci explicit}) we obtain the
remarkably simple linear relation \begin{equation}\label{linear relation} \partial_{\alpha}\partial_\beta \theta - \frac{1}{D}\eta_{\alpha \beta}\eta^{\mu \nu}\partial_{\mu}\partial_{\nu}\theta = 0 \end{equation} So we now first find the most general solution of the linear equation (\ref{linear relation}), which is \begin{equation}\label{solution of linear relation} \theta= a_1 + a_2 K_{\mu}x^{\mu} + a_3 x^{\mu}x_{\mu} \end{equation} and then impose the nonlinear constraint (\ref{relation}), which implies \begin{equation}\label{solution of a1} a_1 = \frac{a^2_2 K_{\mu} K^{\mu}}{4 a_3} \end{equation} We further demand that $\theta(x^{\mu}=0)=1$, so that \begin{equation}\label{constrained solution of linear relation} \theta= 1 + a_2 K_{\mu}x^{\mu} + \frac{a^2_2 K_{\mu} K^{\mu}}{4} x^{\mu}x_{\mu} \end{equation} This coincides with the results of Culetu \cite{Culetu} for $D=4$. To identify this result with the result of a special conformal transformation (see discussions in \cite{Kastrup} and \cite{Zumino}), and to connect to standard notation, we identify $a_2 K_{\mu} = 2 a_{\mu}$, so that \begin{equation}\label{conformal factor of special conformal transformation} \theta= 1 +2 a_{\mu}x^{\mu} + a^2 x^2 \end{equation} where $ a^2 =a^{\mu}a_{\mu}$ and $ x^2= x^{\mu}x_{\mu}$. In this case, this conformal factor coincides with that obtained from the special conformal transformation \begin{equation}\label{special conformal transformation} x^{\prime \mu} = \frac{x ^{\mu} +a ^{\mu} x^2}{1 +2 a_{\nu}x^{\nu} + a^2 x^2} \end{equation} As discussed by Zumino \cite{Zumino}, the finite special conformal transformation mixes up the topology of space time in a complicated way, so it is not useful to interpret the finite special conformal transformations as mappings of spacetimes.
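One can verify symbolically that this $\theta$ indeed renders $\theta^{-2}\eta_{\alpha\beta}$ Ricci flat: multiplying the transformed-Ricci expression by $\theta^2$ gives a polynomial in $x^\mu$ that vanishes identically. A sympy sketch for $D=4$ with generic $a_\mu$ (added for illustration; not part of the original text):

```python
import sympy as sp

D = 4
x = sp.symbols('x0 x1 x2 x3')            # coordinates x^mu (x0 = t)
a = sp.symbols('a0 a1 a2 a3')            # covariant components a_mu
eta = sp.diag(1, -1, -1, -1)             # Minkowski metric, signature (+,-,-,-)
eta_inv = eta.inv()

a2 = sum(eta_inv[m, n] * a[m] * a[n] for m in range(D) for n in range(D))
x2 = sum(eta[m, n] * x[m] * x[n] for m in range(D) for n in range(D))
adotx = sum(a[m] * x[m] for m in range(D))          # a_mu x^mu

theta = 1 + 2 * adotx + a2 * x2          # the conformal factor found above

d = lambda f, m: sp.diff(f, x[m])
box = sum(eta_inv[m, n] * d(d(theta, m), n) for m in range(D) for n in range(D))
grad2 = sum(eta_inv[m, n] * d(theta, m) * d(theta, n)
            for m in range(D) for n in range(D))

# theta^2 * R^2_{alpha beta} from the transformed-Ricci formula;
# it must vanish for the metric theta^{-2} eta to be Ricci flat
for al in range(D):
    for be in range(D):
        R = ((D - 2) * theta * d(d(theta, al), be)
             + eta[al, be] * (theta * box - grad2)
             - (D - 2) * eta[al, be] * grad2)
        assert sp.expand(R) == 0
print('theta^{-2} eta is Ricci flat')
```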
In summary, we have two solutions of the Einstein's equations, $g^1_{\alpha \beta}=\eta_{\alpha \beta}$ and \begin{equation}\label{ conformally transformed metric} g^2_{\alpha \beta}= \Omega^2\eta_{\alpha \beta} =\theta^{-2}\eta_{\alpha \beta}=\frac{1}{( 1 +2 a_{\mu}x^{\mu} + a^2 x^2)^2} \eta_{\alpha \beta} \end{equation} We can then study the evolution of the tensions using $\Omega^2 =\theta^{-2}=\frac{1}{( 1 +2 a_{\mu}x^{\mu} + a^2 x^2)^2}$. We will consider two different cases: 1) $ a^2 =0 $, 2) $a^2 \neq 0 $. \subsubsection{Light Like Segment Compactification } Here we consider the case $ a^2 =0 $, and let us take $ a^{\mu} = (A,A,0,...,0) $. Then \begin{equation}\label{ conformally factor shock wave} \Omega^2 =\frac{1}{( 1 +2 a_{\mu}x^{\mu})^2}=\frac{1}{( 1 +2 A(t-x))^2} \end{equation} From this, let us calculate the tensions of the two string types and see that they will be constrained to be inside a segment that moves with the speed of light. At the boundaries of this segment the string tensions become infinite, so the strings cannot escape the segment. Equation (\ref{ conformally factor shock wave}) leads to the tensions of the different strings being \begin{equation}\label{stringtension1segment} e\phi+T_1 = \frac{(T_2 -T_1)}{1 - \Omega^2} = \frac{(T_2 -T_1)(1 +2A(t-x))^2}{4A(t-x)(1+A(t-x))} \end{equation} and \begin{equation}\label{stringtension2segment} e\phi+T_2 = \frac{\Omega^2(T_2 -T_1)}{1 - \Omega^2} = \frac{(T_2 -T_1)}{4A(t-x)(1+A(t-x))} \end{equation} Let us take $T_2 -T_1$ positive and $A$ negative; then both tensions above go to positive infinity when $t-x$ goes to zero from negative values. Also, both tensions above go to positive infinity when $t-x$ goes to the value $-1/A$ from above. That means that the strings are confined to the moving segment where $t-x$ lies between $0$ and $-1/A$. We call this phenomenon ``Light Like Segment Compactification''.
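The algebraic simplification of the two combinations $(T_2-T_1)/(1-\Omega^2)$ and $\Omega^2(T_2-T_1)/(1-\Omega^2)$ for this shock-wave conformal factor, and the location of the divergences at $t-x=0$ and $t-x=-1/A$, can be checked with a short sympy sketch (symbol names are illustrative):

```python
import sympy as sp

t, xx, A, T1, T2 = sp.symbols('t x A T1 T2')
u = t - xx                               # the light-like combination t - x
Om2 = 1 / (1 + 2 * A * u)**2             # shock-wave conformal factor

# the two combinations that enter the string tension formulas
comb_plain = sp.simplify((T2 - T1) / (1 - Om2))
comb_omega = sp.simplify(Om2 * (T2 - T1) / (1 - Om2))

# closed forms with the common denominator 4A(t-x)(1+A(t-x))
target_plain = (T2 - T1) * (1 + 2 * A * u)**2 / (4 * A * u * (1 + A * u))
target_omega = (T2 - T1) / (4 * A * u * (1 + A * u))
assert sp.simplify(comb_plain - target_plain) == 0
assert sp.simplify(comb_omega - target_omega) == 0

# the common denominator vanishes at t - x = 0 and t - x = -1/A,
# the two boundaries of the light-like segment where the tensions diverge
den = 4 * A * u * (1 + A * u)
assert den.subs(t, xx) == 0
assert sp.simplify(den.subs(t, xx - 1 / A)) == 0
print('shock-wave tension algebra verified')
```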
Light like extra dimensions have been considered by Searight \cite{Searight}, and braneworlds via lightlike branes were considered in \cite{Braneworlds via Lightlike Branes}, for example. To complete the discussion of the Light Like Segment Compactification, we present the universal metric. From the relation $g^1_{\mu \nu} = (e\phi+T_1)g_{\mu \nu}$ and the solution for $(e\phi+T_1)$ from (\ref{stringtension1segment}), we obtain \begin{equation}\label{universal metric light like} g_{\mu \nu} = \frac{1}{(e\phi+T_1)} g^1_{\mu \nu} = \frac{4A(t-x)(1+A(t-x))}{(T_2 -T_1)(1 +2A(t-x))^2}\eta_{\mu\nu} \end{equation} showing that this metric goes through zero and changes sign at the boundaries of the segment. By considering the strings confined to the segment we avoid this pathology; this provides another way to justify that we must have a light like segment compactification. In the next subsection we will see how thick braneworld scenarios, expanding at subluminal velocities, are obtained from Dynamical String Tension Theories. \subsubsection{ Braneworlds in Dynamical String Tension Theories } We now consider the case when $a^\mu$ is not light like, and we will find that for $a^2 \neq 0$, irrespective of sign, i.e. irrespective of whether $a^\mu$ is space like or time like, we will have thick braneworlds where strings can be constrained between two concentric spherically symmetric bouncing higher dimensional spheres, and where the distance between these two concentric spheres approaches zero at large times.
The string tensions of strings one and two are given by \begin{equation}\label{stringtension1forBraneworld} e\phi+T_1 = \frac{(T_2-T_1)( 1 +2 a_{\mu}x^{\mu} + a^2 x^2)^2}{( 1 +2 a_{\mu}x^{\mu} + a^2 x^2)^2-1}= \frac{(T_2-T_1)( 1 +2 a_{\mu}x^{\mu} + a^2 x^2)^2}{(2 a_{\mu}x^{\mu} + a^2 x^2)(2+2 a_{\mu}x^{\mu} + a^2 x^2)} \end{equation} \begin{equation}\label{stringtension2forBraneworld} e\phi+T_2 = \frac{(T_2-T_1)}{( 1 +2 a_{\mu}x^{\mu} + a^2 x^2)^2-1}= \frac{(T_2-T_1)}{(2 a_{\mu}x^{\mu} + a^2 x^2)(2+2 a_{\mu}x^{\mu} + a^2 x^2)} \end{equation} Then the locations where the string tensions go to infinity are determined by the conditions \begin{equation}\label{boundariesforBraneworld1} 2 a_{\mu}x^{\mu} + a^2 x^2 = 0 \end{equation} or \begin{equation}\label{boundariesforBraneworld2} 2 +2 a_{\mu}x^{\mu} + a^2 x^2 = 0 \end{equation} Let us start by considering the case where $a^\mu$ is time like; then without losing generality we can take $a^\mu = (A, 0, 0,...,0)$. In this case the denominator in (\ref{stringtension1forBraneworld}), (\ref{stringtension2forBraneworld}) is \begin{equation}\label{denominatortimelike} (2 a_{\mu}x^{\mu} + a^2 x^2)(2+2 a_{\mu}x^{\mu} + a^2 x^2) = (2At +A^2(t^2-x^2))(2+2At+A^2(t^2-x^2)) \end{equation} The condition (\ref{boundariesforBraneworld1}) implies then that \begin{equation}\label{bubbleboundaryforBraneworld1a} x^2_1 + x^2_2 + x^2_3.....+ x^2_{D-1}- (t+ \frac{1}{A})^2 = -\frac{1}{A^2} \end{equation} while the other boundary of infinite string tension (\ref{boundariesforBraneworld2}) is given by \begin{equation}\label{bubbleboundaryforBraneworld1b} x^2_1 + x^2_2 + x^2_3.....+ x^2_{D-1}- (t+ \frac{1}{A})^2 = \frac{1}{A^2} \end{equation} So we see that (\ref{bubbleboundaryforBraneworld1b}) represents an exterior boundary which has a bouncing motion with a minimum radius $\frac{1}{A}$ at $t = - \frac{1}{A}$. The denominator (\ref{denominatortimelike}) is positive between these two bubbles.
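Two of the algebraic steps here can be verified with sympy: the factorization $(1+q)^2-1 = q(2+q)$, with $q = 2a_\mu x^\mu + a^2x^2$, used in the tension formulas above, and the completion of the square that turns the conditions $q=0$ and $2+q=0$ into the two sphere equations of the time-like case. A sketch (with $r$ denoting the spatial radius):

```python
import sympy as sp

# factorization used in the tension denominators: (1+q)^2 - 1 = q (2+q)
q = sp.Symbol('q')
assert sp.expand((1 + q)**2 - 1 - q * (2 + q)) == 0

# time-like case a^mu = (A, 0, ..., 0): q = 2At + A^2 (t^2 - r^2),
# with r^2 the squared spatial radius x_1^2 + ... + x_{D-1}^2
t, A = sp.symbols('t A')
r = sp.Symbol('r', positive=True)
q_tl = 2 * A * t + A**2 * (t**2 - r**2)

# q = 0 is equivalent to r^2 - (t + 1/A)^2 = -1/A^2 (inner boundary)
assert sp.simplify(q_tl + A**2 * (r**2 - (t + 1 / A)**2 + 1 / A**2)) == 0

# 2 + q = 0 is equivalent to r^2 - (t + 1/A)^2 = +1/A^2 (outer boundary)
assert sp.simplify(2 + q_tl + A**2 * (r**2 - (t + 1 / A)**2 - 1 / A**2)) == 0
print('boundary conditions rewritten as concentric spheres')
```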
So for $T_2 -T_1$ positive the tensions are positive and diverge at the boundaries defined above. The internal boundary (\ref{bubbleboundaryforBraneworld1a}) exists only for times $t$ smaller than $-\frac{2}{A}$ or bigger than $0$, so in the time interval $(-\frac{2}{A},0)$ there is no inner surface of infinite tension strings. This inner surface collapses to zero radius at $t=-\frac{2}{A}$ and emerges again from zero radius at $t=0$. For large positive or negative times, the difference between the upper radius and the lower radius goes to zero as $t \rightarrow \infty$, \begin{equation}\label{asymptotic} \sqrt{\frac{1}{A^2} +(t+ \frac{1}{A})^2 } -\sqrt{-\frac{1}{A^2} +(t+ \frac{1}{A})^2 }\rightarrow \frac{1}{t A^2}\rightarrow 0 \end{equation} and of course the same holds as $t \rightarrow -\infty$. This means that for very large early or late times the segment where the strings would be confined (since they will avoid having infinite tension) will be very narrow, and the resulting scenario will be that of a brane world for late or early times, while in the bouncing region the inner surface does not exist. We now consider the case where $a^\mu$ is space like; then without losing generality we can take $a^\mu = (0, A, 0,...,0)$. In this case the denominator in (\ref{stringtension1forBraneworld}), (\ref{stringtension2forBraneworld}) is \begin{equation}\label{denominatorspacelike} (2 a_{\mu}x^{\mu} + a^2 x^2)(2+2 a_{\mu}x^{\mu} + a^2 x^2) = (-2Ax^1-A^2(t^2- \vec{x}^2))(2-2Ax^1-A^2(t^2-\vec{x}^2)) \end{equation} where $\vec{x}= (x^1, x^2,...., x^{D-1})$ represents the spatial part of $x^{\mu}$, and $\vec{x}^2= (x^1)^2 +(x^2)^2+....+ (x^{D-1})^2$.
We then consider the first boundary where the string tensions approach infinity according to (\ref{boundariesforBraneworld1}), \begin{equation}\label{bubbleboundaryforBraneworld2a} -( x_1 -\frac{1}{A})^2 - x^2_2 - x^2_3.....- x^2_{D-1}+ t^2 = -\frac{1}{A^2} \end{equation} which describes a bouncing bubble with minimum radius $\frac{1}{A}$ at $t=0$. The case (\ref{boundariesforBraneworld2}) gives \begin{equation}\label{bubbleboundaryforBraneworld2b} -( x_1 -\frac{1}{A})^2 - x^2_2 - x^2_3.....- x^2_{D-1}+ t^2 = \frac{1}{A^2} \end{equation} (\ref{bubbleboundaryforBraneworld2b}) is an internal boundary which exists only for times $t$ smaller than $-\frac{1}{A}$ or bigger than $\frac{1}{A}$; between $-\frac{1}{A}$ and $\frac{1}{A}$ there is no inner surface of infinite tension strings. This inner surface collapses to zero radius at $t=-\frac{1}{A}$ and emerges again from zero radius at $t=\frac{1}{A}$. So the situation is very similar to that of the case where the vector $a^\mu$ is time like, just that the roles of the two conditions, corresponding to $\Omega = 1$ and $\Omega = -1$, get exchanged. Between these two boundaries the two factors in the denominator (\ref{denominatorspacelike}) are positive, while at the boundaries one or the other approaches zero and the tensions diverge, so again for $T_2 -T_1$ positive the tensions are positive and diverge at the boundaries. Once again, for large positive or negative times, the difference between the upper radius and the lower radius goes to zero, implying that the strings will be confined to a very small segment at large early or late times, so then again we get an emergent brane world scenario. The strings, and therefore all matter and gravity, will consequently be confined to a very small segment of size $ \frac{1}{t A^2}$, very small for large $t$.
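The large-$t$ statement (\ref{asymptotic}) about the shrinking gap between the two radii can be confirmed with a short sympy computation (taking $A>0$ for the check):

```python
import sympy as sp

t = sp.Symbol('t', positive=True)
A = sp.Symbol('A', positive=True)

outer = sp.sqrt(1 / A**2 + (t + 1 / A)**2)    # outer bubble radius
inner = sp.sqrt(-1 / A**2 + (t + 1 / A)**2)   # inner bubble radius (large t)
gap = outer - inner

# the gap falls off as 1/(A^2 t): the coefficient of the 1/t tail is 1/A^2
leading = sp.limit(gap * t, t, sp.oo)
assert sp.simplify(leading - 1 / A**2) == 0
print('gap between the bubbles ~ 1/(A^2 t) at large t')
```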
At the moment of the bounce there is no brane world; there is only one exterior bubble, which represents the infinite tension location. The brane is generated dynamically after a period of time by the appearance of the inner bubble, which completes the trapping of the strings between two surfaces. To complete the discussion of the braneworld case, we present the universal metric. From the relation $g^1_{\mu \nu} = (e\phi+T_1)g_{\mu \nu}$ and the solution for $(e\phi+T_1)$ from (\ref{stringtension1forBraneworld}), we obtain \begin{equation}\label{universal metric braneworld} g_{\mu \nu} = \frac{1}{(e\phi+T_1)} g^1_{\mu \nu} = \frac {(2 a_{\mu}x^{\mu} + a^2 x^2)(2+2 a_{\mu}x^{\mu} + a^2 x^2)}{(T_2-T_1)( 1 +2 a_{\mu}x^{\mu} + a^2 x^2)^2}\eta_{\mu\nu} \end{equation} showing that this metric becomes zero at the boundaries where the tensions go to infinity, and if we extend the metric beyond the boundary the metric changes signature. If our basic postulate is that the signature of the metric $g_{\mu \nu}$ is $(+,-,-,-,-,-,-,-,....)$, then we are not allowed to extend the space time beyond the region of space time where the string tensions are positive. By considering the strings confined within the boundaries we have defined we avoid these pathologies; this provides another way to justify that we must have a braneworld scenario. \section{Discussion: Comparison with Standard Approaches to Braneworlds and Perspectives} Our approach is, however, very different from the present standard approaches to braneworlds in the context of string theories, since in our approach a dynamical string tension has been used. Our scenario could be enriched by incorporating aspects of the more traditional braneworlds, like introducing D-branes between the surfaces where the string tensions go to infinity, so open strings could end before their tensions approach an infinite value, or the surfaces where the tensions diverge could themselves be defined as D-branes for open strings.
These possibilities have been ignored here to simplify the discussion. In any case, given that the tension of the strings diverges at the two boundaries we have defined, all strings are confined between those boundaries, the closed strings also; so, unlike more traditional braneworlds, gravity does not escape to the bulk, and in fact in the framework proposed here a braneworld scenario using just closed strings is perfectly possible. In spite of the differences with the more conventional approaches, one should nevertheless ask whether some of the typical signatures of these approaches still appear here. To start with, in order to calculate some of the features of the brane, one would have to see if the branes we have obtained here have some flexibility or if they are rigid. The degree of flexibility of the brane is measured by associating a tension to the brane. For a small brane tension, i.e., a flexible brane-world model, brane excitations, branons, will be relevant, and in this case branons are the only new relevant low-energy particles \cite{Branons}. So far our calculations do not give us an indication concerning the tension of these branes, whether this tension is big, small or infinite, since we have seen the brane appearing in a most symmetric way. In order to see if the brane is rigid, we would have to consider how the solution responds when perturbed; from this we could identify a brane tension. This is a project for further research. Another subject which is very important is the combination of the braneworld with compactification of some dimensions. Indeed, the brane makes one dimension small, and this would be enough if we were considering just five dimensions, but in string theory we must consider $26$ dimensions, so additional ways to reduce the effective dimensions to four must be considered; compactification as in Kaluza Klein scenarios comes to mind.
In this respect, it is important that the branes obtained here are thick $D$ dimensional branes, with one dimension getting smaller and smaller as time increases, so effectively we obtain a $D-1$ dimensional brane moving in a $D$ dimensional universe, as compared to a $4$ brane (usually denoted as a 3 brane), where SM particles live in our 4 dimensional manifold, used in standard string theory brane worlds \cite{Antoniadis}. According to the standard scenario, some other fields, like gravity, will live in the bulk. Here there will be no bulk: all fields live in the initially thick brane, which as time advances becomes a very thin brane, thinner and thinner as time advances, and the modes at asymptotic times should be calculated in a $D-1$ dimensional space with a number of these dimensions compactified, so that the remaining uncompactified space time coordinates are only four. So, for the braneworld scenario advocated here, we could proceed as in ref. \cite{KK}, but with the crucial modification that gravity is not in the bulk but rather in the brane; in fact nothing should be in the bulk. In the early universe, however, the brane is thicker, so cavity excitations, where fields reflect from the boundaries of the thick brane, and associated Casimir effects could be important there. There could also be interactions between the cavity modes and the Kaluza Klein excitations that could modify the predictions for the Kaluza Klein gravitons which are obtained in the more conventional braneworld models \cite{KK}. Finally, since the string tensions go to infinity at the boundaries, there is the option of avoiding the Hagedorn Temperature, which is proportional to the string tension, as was discussed in the case of other examples where the string tensions go to infinity in certain regions \cite{Escaping}. This possibility of obtaining very high temperatures could also give rise to interesting observational consequences for the graviton spectrum.
This could have interesting consequences for the early universe, or for collision experiments where there will be the possibility of thermalization. \textbf{Acknowledgments} I thank Oleg Andreev, David Andriot, Stefano Ansoldi, David Benisty, Thomas Curtright, Euro Spallucci, Emil Nissimov, Svetlana Pacheva, Tatiana Vulfs, Hitoshi Nishino, Subhash Rajpoot, Luciano Rezzolla, Horst Stöcker, Jurgen Struckmeier, David Vasak, Johannes Kirsch, Dirk Kehm, Luca Mezincescu and Matthias Hanauske for useful discussions. I also want to thank the Foundational Questions Institute (FQXi) and the COST actions Quantum Gravity Phenomenology in the multi messenger approach, CA18108, and Gravitational waves, Black Holes and Fundamental Physics, CA16104, for support; special thanks to the Frankfurt Institute for Advanced Study for hospitality and financial support, to the Astrophysics Group at Goethe University for providing me with the opportunity of presenting these results in a seminar at one of their Astro Coffee Seminars, https://astro.uni-frankfurt.de/astrocoffee/, and to The University of Miami for the possibility of presenting these results at the Miami 2021 conference, https://cgc.physics.miami.edu/Miami2021/.
\section{Introduction} It is well known that Glauber dynamics (local Markov chains) on spin systems (such as the Ising, Potts, and hard-core models, graph colorings, etc.) suffer an exponential slowdown at low temperatures. This is due to the emergence of multiple phases in the state space, which are separated by narrow bottlenecks that are hard for the dynamics to cross. Much effort has been devoted to overcoming this obstacle, including the Swendsen-Wang dynamics~\cite{SW} (which allows large-scale, non-local moves), various dynamics on alternative representations of the spin system (including the subgraph dynamics~\cite{JSIsing}, polymer dynamics~\cite{CGGPS} and random-cluster dynamics~\cite{Grimmett}) and non-dynamical methods based on the cluster expansion~\cite{HPR-Algorithmic-Pirogov-Sinai}. (See Section~\ref{subsec:related} below for a summary of what is known about these methods.) However, there is a much more ``obvious'' solution to this problem, at least in cases where the phases and their respective {\it ground states\/} (maximum-likelihood configurations) are well understood: simply initialize the standard Glauber dynamics to be in a random mixture of the ground states (one for each phase), and run it as usual. The intuition is that, presumably, the {\it only\/} obstacle to rapid mixing is the slow transitions between phases, so we should expect it to converge rapidly ``within each phase''; since at low temperatures the overall probability distribution on configurations is approximated by a mixture of the single-phase distributions, this should suffice for global convergence. While this paradigm has been frequently proposed, and there is a folklore belief that it is valid, it has apparently eluded rigorous analysis except in the special case of the mean-field Ising model (i.e., on the complete graph)~\cite{LLP}, which reduces to a 1D process. 
In this paper, we prove that this paradigm works for the classical Ising model on the $d$-dimensional lattice~${\mathbb Z}^d$ throughout the low-temperature regime. We believe that the methods we develop may be useful in establishing the analogous paradigm in more general scenarios (see Section~\ref{subsec:extensions} for more details). To state our results more precisely, we first remind the reader of the definition of the Ising model. Given a finite graph $G=(V,E)$, configurations of the Ising model are assignments $\sigma:V\to \{\pm 1\}$ of one of two {\it spins}, denoted $\pm 1$, to each vertex. The probability that the model is in configuration~$\sigma$ is specified by the {\it Gibbs distribution\/} at \emph{inverse temperature} $\beta>0$: \begin{equation}\label{eqn:gibbs} \pi(\sigma) = \frac{1}{\mathcal Z_{G,\beta}}\, \exp(-\beta|C(\sigma)|), \end{equation} where $C(\sigma)$ is the set of edges $\{u,v\}\in E$ connecting vertices of different spins (i.e., edges in the cut induced by the spins) and $\mathcal Z_{G,\beta}$ is the normalizing factor, or partition function. Note that the distribution~\eqref{eqn:gibbs} favors configurations with fewer cut edges, and this bias increases with the parameter~$\beta >0$. The Gibbs distribution can be seen as a symmetric mixture of its restrictions to two phases: the {\it plus\/} phase, in which the {\it magnetization\/} (excess of $+1$ over $-1$ spins) is non-negative, and the {\it minus\/} phase, in which the magnetization is non-positive. Whereas at high temperatures (small~$\beta$) this perspective is uninformative, at low temperatures (large $\beta$) the Gibbs distribution becomes bimodal, one mode for each phase, and the boundary between the phases (the set of configurations with magnetization zero) has exponentially small weight. 
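As a concrete illustration of the Gibbs distribution~\eqref{eqn:gibbs} (an aside, not part of the formal development; the helper names below are ours), the following Python sketch computes $\pi$ exactly by brute-force enumeration on a tiny graph, which is feasible only for a handful of vertices:

```python
import itertools
import math

def cut_size(sigma, edges):
    """|C(sigma)|: number of edges whose endpoints carry different spins."""
    return sum(1 for u, v in edges if sigma[u] != sigma[v])

def gibbs_distribution(n_vertices, edges, beta):
    """Exact Gibbs distribution pi(sigma) proportional to exp(-beta*|C(sigma)|),
    with the partition function obtained by enumerating all 2^n configurations."""
    configs = list(itertools.product([+1, -1], repeat=n_vertices))
    weights = [math.exp(-beta * cut_size(s, edges)) for s in configs]
    Z = sum(weights)  # the normalizing factor Z_{G,beta}
    return {s: w / Z for s, w in zip(configs, weights)}

# Demo on a 4-cycle at beta = 1: the bias toward configurations with few
# cut edges makes the two monochromatic ground states the most likely.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
pi = gibbs_distribution(4, edges, beta=1.0)
assert abs(sum(pi.values()) - 1.0) < 1e-9
assert pi[(1, 1, 1, 1)] == max(pi.values())
```

Increasing $\beta$ concentrates $\pi$ further on the two ground states, matching the bimodality described above.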
In this paper we focus on the classical setting where the graph $G$ is a $d$-dimensional torus $(\mathbb Z/n\mathbb Z)^d$ (or, equivalently, a cube of side length~$n$ in $\mathbb{Z}^d$ with ``periodic'' boundaries) for $d\ge 2$. We write $N=n^d$ for the number of vertices in~$(\mathbb Z/n\mathbb Z)^d$. In this physically relevant setting, the separation of the two phases alluded to above is known to occur at a {\it critical value} $\beta=\beta_c(d)$. More precisely, for $\beta<\beta_c(d)$ the correlation between spins $\sigma_u,\sigma_v$ goes to zero as the distance between~$u$ and~$v$ goes to infinity, and the (normalized) magnetization satisfies a central limit theorem about zero. For $\beta>\beta_c(d)$ correlations remain uniformly bounded away from zero, and the normalized magnetization converges to a $\frac{1}{2}$-$\frac{1}{2}$ mixture of two point masses at~$\pm m_*(\beta)$. We refer to Figure~\ref{fig:magnetization-schematic} for a schematic of this phase transition and how it relates to mixing of the Glauber dynamics. We emphasize that this picture is only heuristic, as the geometry of the configurations is lost when projecting onto the magnetization: see also Remark~\ref{rem:worst-case-mixing-restricted-chain} below.
\begin{figure}[t] \begin{subfigure}[b]{.32\textwidth} \begin{tikzpicture}[scale = .5] \node at (0,0) {\includegraphics[width = 5cm]{High-temp-FE.pdf}}; \draw[|-|] (-5,-2.8)--(5,-2.8); \draw[->] (0,-2.8)--(0,3); \shade[ball color = red!40, opacity = 1] (0,-2.525) circle (.25cm); \draw [->] (.3,-2.5) to [out=10,in=220] (1,-2.2); \draw [->] (-.3,-2.5) to [out=180-10,in=180-220] (-1,-2.2); \end{tikzpicture} \subcaption{$\beta<\beta_c$} \end{subfigure} \begin{subfigure}[b]{.32\textwidth} \begin{tikzpicture}[scale = .5] \node at (0,0) {\includegraphics[width = 5cm]{Low-temp-FE.pdf}}; \draw[|-|] (-5,-2.8)--(5,-2.8); \draw[->] (0,-2.8)--(0,3); \shade[ball color = red!40, opacity = 1] (4.525,2.7) circle (.25cm); \shade[ball color = red!40, opacity = 1] (-4.525,2.7) circle (.25cm); \draw [->] (4.525,2.4) to [out=-95,in=85] (4.475,1.7); \draw [->] (-4.525,2.4) to [out=-85,in=95] (-4.475,1.7); \end{tikzpicture} \subcaption{$\beta>\beta_c$} \end{subfigure} \begin{subfigure}[b]{.32\textwidth} \begin{tikzpicture}[scale = .5] \node at (0,0) {\includegraphics[width = 5cm]{Low-temp-FE.pdf}}; \draw[|-|] (-5,-2.8)--(5,-2.8); \draw[->] (0,-2.8)--(0,3); \draw [->] (4.525,2.4) to [out=-95,in=85] (4.475,1.7); \shade[ball color = red!40, opacity = 1] (4.525,2.7) circle (.25cm); \draw[rectangle, fill = gray, opacity = .4] (-5,-2.8)--(0,-2.8)--(0,3)--(-5,3)--(-5,-2.8); \end{tikzpicture} \subcaption{$\beta>\beta_c$} \end{subfigure} \caption{The normalized magnetization $\frac{1}{|V|}\sum_{v}\sigma_v$ (on the $x$-axis) plotted against $F_\beta(m)$, the negative logarithm of the probability that $\frac{1}{|V|}\sum_{v}\sigma_v=m$ (on the $y$-axis). \\ \\ (\textsc{a}) At high temperatures, $F_\beta(m)$ is minimized at $m=0$, and is strictly convex. The magnetization does not pose any obstacle to mixing, and the Glauber dynamics can mix rapidly from every initialization. 
(\textsc{b}) At low temperatures, $F_\beta(m)$ is bimodal, being minimized at $\pm m_*(\beta)$, and it takes exponentially long for the dynamics to transition from one mode to the other. However, $F_\beta(m)$ is locally convex around $\pm m_*(\beta)$, and thus the global magnetization should not be an obstacle to mixing if one starts in a symmetric mixture of the all $+1$ and all $-1$ configurations. This is our approach in Theorem~\ref{thm:Ising-torus-mixing}. (\textsc{c}) The restricted dynamics, considered in Theorem~\ref{thm:Ising-plus-phase}, rejects any transition that would make the magnetization negative (region shaded gray). This restriction ensures that there is no bottleneck in the magnetization within the restricted space of configurations, so the restricted dynamics initialized from the all $+1$ configuration can, in principle, mix rapidly.} \label{fig:magnetization-schematic} \end{figure} The Glauber dynamics for the Ising model is a Markov chain on configurations that is very simple to describe. At each step, a vertex~$v\in V$ is chosen uniformly at random and the spin $\sigma_v$ is replaced by a random spin selected according to the correct conditional distribution given the spins on the neighbors of~$v$. This Markov chain is ergodic and reversible w.r.t.\ the Gibbs distribution~\eqref{eqn:gibbs} and hence converges to it. In addition to providing a natural and easy-to-implement algorithm for sampling from the Gibbs distribution~\eqref{eqn:gibbs}, the dynamics mimics the thermodynamic evolution of the underlying physical system, and is thus of interest in its own right. The key quantity associated with the Glauber dynamics is its rate of convergence, or {\it mixing time}, which measures the time until the total-variation distance to the stationary distribution is small starting from a {\it worst-case\/} initial distribution. 
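A single step of the (discrete-time) Glauber dynamics just described can be sketched as follows. Under the convention~\eqref{eqn:gibbs}, resampling $\sigma_v$ given its neighbors yields $\sigma_v=+1$ with probability $1/(1+e^{-\beta S})$, where $S$ is the sum of the neighboring spins; the code and demo parameters below are ours and purely illustrative.

```python
import math
import random

def glauber_step(sigma, neighbors, beta, rng):
    """One discrete-time Glauber (heat-bath) update for pi ~ exp(-beta*|C(sigma)|):
    pick a uniform vertex and resample its spin from the conditional
    distribution given the spins on its neighbors."""
    v = rng.randrange(len(sigma))
    s = sum(sigma[w] for w in neighbors[v])      # local field S at v
    p_plus = 1.0 / (1.0 + math.exp(-beta * s))   # P(sigma_v = +1 | neighbors)
    sigma[v] = +1 if rng.random() < p_plus else -1
    return sigma

# Tiny demo: 2D torus (Z/4Z)^2, started from the all +1 ground state.
n, d = 4, 2
N = n ** d
def torus_nbrs(i):
    x, y = divmod(i, n)
    return [((x + 1) % n) * n + y, ((x - 1) % n) * n + y,
            x * n + (y + 1) % n, x * n + (y - 1) % n]
nbrs = [torus_nbrs(i) for i in range(N)]
rng = random.Random(0)
sigma = [+1] * N
for _ in range(100 * N):                         # roughly 100 full sweeps
    glauber_step(sigma, nbrs, beta=2.0, rng=rng)
assert all(s in (+1, -1) for s in sigma)
```

The choice $\beta=2.0$ here is an arbitrary low-temperature value for this parametrization; the sketch makes no claim about mixing, only about the update rule itself.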
Classical results have established that the mixing time is $\Theta(N\log N)$ throughout the high-temperature regime $\beta<\beta_c(d)$~\cite{MaOl1,LS-information-percolation}, and exponentially slow ($\exp(\Theta(N^{1-1/d}))$) throughout the low-temperature regime $\beta>\beta_c(d)$~\cite{Pisztora96,Bodineau05} (where the constants in the $\Theta$ notation here depend on $\beta$ and~$d$). In this latter case, the slow mixing is exhibited by a bottleneck between the plus and minus phases: with high probability, the time for the dynamics initialized in the $\pmb{+1}$ configuration to enter the minus phase is exponentially large. Here and throughout the paper, we will use $\pmb{+1}$ (respectively, $\pmb{-1}$) to denote a configuration consisting of all $+1$ spins (respectively, all $-1$ spins). Our main result proves that, if we initialize the dynamics on the torus in the $\pmb{+1}$ or $\pmb{-1}$ configurations with probability $\frac 12$ each (denoted by $\nu^{\pmb{\pm}}$), then the mixing time is very fast (in fact optimal).\footnote{By ``mixing time'' here we mean the time~$t_{\textsc{mix}}$ until the variation distance from stationarity is at most~$\frac{1}{4}$, as in the standard definition from a worst-case initial distribution. In the standard setting, this implies that the time to reach variation distance~$\varepsilon$ is $O(t_{\textsc{mix}}\log\varepsilon^{-1})$. In our setting, due to our {\it specific\/} initial distribution, this ``boosting'' no longer automatically holds. However, in all our mixing time results in this paper, the time to reach variation distance~$\varepsilon$ is indeed $O(t_{\textsc{mix}}\log\varepsilon^{-1})$, provided that~$\varepsilon\ge e^{-o(n)}$.} \begin{theorem}\label{thm:Ising-torus-mixing} For every $d\ge 2$ and every $\beta>\beta_c(d)$, the mixing time of the Glauber dynamics for the Ising model on $(\mathbb Z/n\mathbb Z)^d$ initialized from the distribution~$\nu^{\pmb{\pm}}$ is $\Theta(N\log N)$ where $N= n^d$.
\end{theorem} Recall that the worst-case mixing time is exponential in $N^{1-1/d}$ for all $d\ge 2$ and $\beta>\beta_c(d)$; to the best of our knowledge, there was previously no sub-exponential bound known for well-chosen initializations. Combined with the aforementioned $O(N\log N)$ mixing time at all high temperatures from every initialization, the conclusion of Theorem~\ref{thm:Ising-torus-mixing} holds for all non-critical temperatures $\beta\ne\beta_c(d)$. (At the critical point $\beta=\beta_c(d)$ the mixing time is much more delicate; in two dimensions it is known to be polynomially bounded~\cite{LS-critical-2D-Ising}, while for $d>2$ a polynomial bound is conjectured but not known.) \subsubsection*{Proof approach} To prove Theorem~\ref{thm:Ising-torus-mixing}, we introduce a novel notion that we call \emph{weak spatial mixing within a phase}. Weak spatial mixing (WSM) is a classical notion capturing the decay of correlations in spin systems, and has been extensively used in the analysis of these spin systems and their dynamics. In the case of the Ising model on~$(\mathbb Z/n\mathbb Z)^d$, WSM says (informally) that for every vertex $v$, the influence of the spins at a distance greater than~$r$ from~$v$ on the distribution at $v$ decays exponentially with~$r$. I.e., for every~$n$ and $r<n/2$, \begin{equation}\label{eqn:wsmintro} \max_{\tau: B_{r}^c(v) \to \{\pm 1\}} \| \pi(\sigma_v\in \cdot \mid \sigma_{B_{r}^c(v)}=\tau) - \pi(\sigma_v\in \cdot ) \|_{{\textsc{tv}}} \le Ce^{-r/C} \end{equation} for some constant~$C$, where $B_{r}^c(v)$ denotes the set of vertices outside the $\ell_\infty$ ball of radius~$r$ centered at~$v$, and $\Vert\cdot\Vert_{\textsc{tv}}$ is total variation distance. The phase transition in the Ising model corresponds to the fact that WSM holds for all $\beta<\beta_c(d)$ and breaks down as soon as $\beta \ge \beta_c(d)$. 
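While establishing~\eqref{eqn:wsmintro} rigorously is delicate, the quantity it controls can be computed exactly on very small boxes. The following brute-force sketch (ours, purely illustrative) compares the marginal at the center of a $5\times5$ box under all-plus versus all-minus boundary spins: the gap is small at high temperature, where the boundary's influence decays, and close to $1$ at low temperature, where WSM fails.

```python
import itertools
import math

def center_marginal(beta, boundary_spin, r=1):
    """P(sigma_v = +1) at the center of a (2r+3)x(2r+3) box whose outer ring
    is frozen to boundary_spin, computed by brute-force enumeration of the
    interior spins (feasible only for tiny r)."""
    m = 2 * r + 3                                  # box side length
    interior = [(x, y) for x in range(1, m - 1) for y in range(1, m - 1)]
    idx = {v: i for i, v in enumerate(interior)}
    center = (m // 2, m // 2)
    num = den = 0.0
    for conf in itertools.product([+1, -1], repeat=len(interior)):
        def spin(x, y):
            return conf[idx[(x, y)]] if (x, y) in idx else boundary_spin
        cut = sum(1 for x in range(m) for y in range(m)
                  for (a, b) in [(x + 1, y), (x, y + 1)]
                  if a < m and b < m and spin(x, y) != spin(a, b))
        w = math.exp(-beta * cut)
        den += w
        if spin(*center) == +1:
            num += w
    return num / den

def gap(beta):
    """TV distance between the center marginals under the two boundaries."""
    return abs(center_marginal(beta, +1) - center_marginal(beta, -1))

# High temperature: weak boundary influence; low temperature: strong.
assert gap(0.2) < gap(2.0)
```

The inverse temperatures $0.2$ and $2.0$ are arbitrary representatives of the two regimes under the cut-counting Hamiltonian $|C(\sigma)|$.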
To define {\it WSM within a phase}, we let $\widehat\pi$ denote the Gibbs distribution on $(\mathbb Z/n\mathbb Z)^d$ restricted to the plus phase (i.e., conditioned on having non-negative magnetization), and replace~\eqref{eqn:wsmintro} by the requirement that, for every $n$ and $r< n/2$, \begin{equation}\label{eqn:wsmphaseintro} \Vert \pi(\sigma_v\in \cdot \mid\sigma_{B_{r}^c(v)} = \pmb{+1}) - \widehat\pi(\sigma_v \in \cdot)\Vert_{{\textsc{tv}}} \le Ce^{-r/C}. \end{equation} (By spin-flip symmetry, if~\eqref{eqn:wsmphaseintro} holds then the analogous bound for the minus phase also holds.) Note that~\eqref{eqn:wsmphaseintro} is a much weaker condition than~\eqref{eqn:wsmintro} as it requires correlations to decay only in the plus phase, with $+1$ spins outside $B_r(v)$---loosely, it can be viewed as a ``monotone'' version of~\eqref{eqn:wsmintro}.\footnote{In our formal proofs, we actually work with a slightly more general version of~\eqref{eqn:wsmphaseintro} where $v$ is replaced by a box in $(\mathbb Z/n\mathbb Z)^d$: see Definition~\ref{def:wsm-within-a-phase}. However, in our setting, these definitions of WSM within a phase can be seen to be equivalent.} Armed with this new notion, we prove Theorem~\ref{thm:Ising-torus-mixing} in two stages. The first stage concerns the Glauber dynamics restricted to a single phase, say the plus phase, by ignoring all updates that would make the magnetization negative (so that its stationary distribution is~$\widehat\pi$). We show that if WSM within a phase holds, this restricted dynamics mixes rapidly when initialized from its ground state. \begin{theorem}\label{thm:Ising-plus-phase} Suppose that WSM within a phase holds. Then the mixing time of the Ising Glauber dynamics on $(\mathbb Z/n\mathbb Z)^d$ restricted to the plus phase, initialized from the $\pmb{+1}$ configuration, is $\Theta(N\log N)$. 
\end{theorem} \begin{remark}\label{rem:worst-case-mixing-restricted-chain} In the case of the complete graph (i.e., the mean-field Ising model), the \emph{worst-case} mixing time of the restricted dynamics was shown to be $O(N\log N)$ in~\cite{LLP}. However, this is not the case in a setting with non-trivial geometry such as~$\mathbb Z^d$. Indeed, it is possible to construct configurations (e.g., a droplet of $-1$ spins of small but macroscopic size, surrounded by $+1$ spins), whose normalized magnetization exceeds $m_*(\beta)$ but for which the mixing time of the restricted dynamics will be at least polynomially slower than $N\log N$. Our result demonstrates that such bottlenecks are avoided when the dynamics are started in the $\pmb{+1}$ configuration. \end{remark} Given the conclusion of Theorem~\ref{thm:Ising-plus-phase} (and its symmetric counterpart for the minus phase), it is straightforward to prove that WSM within a phase implies the upper bound in Theorem~\ref{thm:Ising-torus-mixing}, i.e., that the original Glauber dynamics with initialization~$\nu^{\pmb{\pm}}$ has mixing time $O(N\log N)$ as well; this follows because $\pi$ itself is essentially a uniform mixture of its restrictions $\widehat \pi$ and $\widecheck \pi$ to the plus and minus phases, respectively. To prove Theorem~\ref{thm:Ising-plus-phase}, we derive a simultaneous recursion on time and distance~$r$, inspired by the classical \emph{high-temperature} recursion of Martinelli and Olivieri~\cite{MaOl1}; however, our low-temperature version uses only the weaker WSM condition~\eqref{eqn:wsmphaseintro} and requires substantial modification to cope with the restriction to the plus phase. (See Section~\ref{sec:relaxation-within-phase} for details.) The second stage of our proof of Theorem~\ref{thm:Ising-torus-mixing} is to show that WSM within a phase holds at all low temperatures. 
\begin{theorem}\label{thm:Ising-wsm-within-phase-intro} For every $d\ge 2$ and every $\beta>\beta_c(d)$, the Ising model on $(\mathbb Z/n\mathbb Z)^d$ has WSM within a phase. \end{theorem} Our proof of Theorem~\ref{thm:Ising-wsm-within-phase-intro} starts from a recent result of Duminil-Copin, Goswami and Raoufi~\cite{DCGR20}, showing a WSM-type property for the {\it random-cluster representation\/} of the Ising model at all low temperatures in all dimensions. (The random-cluster representation is oblivious to the phases of the Ising model.) In order to lift the random-cluster WSM property to our Ising WSM property within a phase, we combine careful revealing and coupling schemes with a sophisticated coarse-graining approach first introduced by Pisztora~\cite{Pisztora96}: coarse-graining replaces single vertices in~$\mathbb Z^d$ with boxes of large but constant size to boost a marginally super-critical model into a very super-critical one\footnote{The coarse-graining is crucial to proving the result throughout the entire low-temperature regime; if we were only interested in proving it for sufficiently large $\beta$, the exponential tails on connected components of $-1$ spins could help to simplify the argument.}. (See Section~\ref{sec:Ising-wsm-within-phase} for more details.) Finally, we note that the $\Omega(N\log N)$ lower bound in Theorem~\ref{thm:Ising-torus-mixing} follows along similar lines to the same lower bound for general spin systems from a worst-case initialization due to Hayes and Sinclair~\cite{HayesSinclair}; however, adaptations are required to handle the fact that we are constrained to start with the specific initialization~$\nu^{\pmb{\pm}}$ rather than the carefully chosen one in~\cite{HayesSinclair}. 
\subsubsection*{Ising model with plus boundary conditions} A key ingredient in Theorem~\ref{thm:Ising-torus-mixing} is the analysis of the mixing time ``within a phase'', where we constrain the dynamics to remain within the phase by censoring moves outside (Theorem~\ref{thm:Ising-plus-phase}). An alternative approach to analyzing dynamics within a phase is to consider the $N$-vertex box $\{1,...,n\}^d$ in~$\mathbb Z^d$ with $\pmb{+1}$ boundary condition outside the box. Here the phase constraint is enforced by the boundary condition. There is a famous conjecture that the mixing time from a worst-case initialization (most pertinently, the $\pmb{-1}$ configuration) is polynomial in~$N$ at low temperatures~\cite{Martinelli-SP,MaTo,LMST}; however, the best bound currently known is $N^{O(\log N)}$ in $d=2$~\cite{LMST}, while no sub-exponential bound is known in $d\ge 3$. These works also considered the mixing time from the $\pmb{+1}$ initialization, which is conjectured to be $O(N\log N)$; they deduced that in $d=2$ this mixing time is $N^{1+o(1)}$ (see~\cite[Corollary 1.9]{MaTo} and~\cite[Corollary 4]{LMST}). In Section~\ref{sec:mixing-plus-bc}, we introduce a notion of \emph{strong spatial mixing (SSM) within a phase}, and show, using our recursive scheme described above, that it implies the conjectured bound of $O(N\log N)$ mixing from the $\pmb{+1}$ initialization. We expect SSM within a phase to hold at all $\beta>\beta_c(d)$ for all $d\ge 2$; given the regimes in which we are able to prove that it holds, we get the following. \begin{theorem}\label{thm:Ising-mixing-plus-bc} Suppose either (i) $d=2$ and $\beta>\beta_c(d)$; or (ii) $d\ge 3$ and $\beta$ is sufficiently large. Then the mixing time of the Glauber dynamics in an $N$-vertex box in~$\mathbb Z^d$ with the $\pmb{+1}$ boundary condition, initialized from the $\pmb{+1}$ configuration, is $\Theta(N\log N)$. 
\end{theorem} \begin{remark} As in Remark~\ref{rem:worst-case-mixing-restricted-chain}, the worst-case mixing time of this chain is at least polynomially slower than $N\log N$; indeed, obtaining any polynomial upper bound is a fundamental open question. On the other hand, Theorem~\ref{thm:Ising-mixing-plus-bc} proves optimal mixing for the $\pmb{+1}$ initialization. Another class of interesting initializations that has been considered consists of those in which each spin is set to be~$+1$ independently with probability $p=1-\delta$ for small fixed~$\delta>0$ (independent of~$\beta$); while these initializations have predominantly $+1$ spins, unlike our $\pmb{+1}$ initialization they do not stochastically dominate the stationary distribution. On $d$-ary trees, the convergence rate from such initializations was studied in~\cite{CaMaTree}, and shown to be much faster than the worst-case mixing time (previously studied in~\cite{MSW-trees-bc}). \end{remark} \subsubsection*{Exponential relaxation of the infinite-volume dynamics} A closely related setting is the \emph{infinite-volume dynamics}, where the Glauber dynamics is defined on all of $\mathbb Z^d$ using continuous-time updates (see Section~\ref{sec:prelim} for a formal definition). It is a straightforward consequence of monotonicity (see~\cite{Martinelli-notes}) that when initialized from the $\pmb{+1}$ configuration, this dynamics converges to the infinite-volume plus measure $\pi_{\mathbb Z^d}^{\pmb{+}}$ as $t\to \infty$, in the following sense: for any function $f$ that is local (depends only on finitely many spins), its expectation under the dynamics initialized from $\pmb{+1}$ (denoted $X_{t,\mathbb Z^d}^{\pmb{+}}$) converges to its expectation under $\pi_{\mathbb Z^d}^{\pmb{+}}$. As a consequence of our approach, we obtain the following sharp bound on the rate of this convergence.
\begin{corollary}\label{cor:infinite-volume-relaxation} For every $d\ge 2$ and every $\beta>\beta_c(d)$, for every local function $f$, there exists $C_f$ such that \begin{align}\label{eqn:infvolcor} \big|\mathbb E[f(X_{t,\mathbb Z^d}^{\pmb{+}})] - \pi_{\mathbb Z^d}^{\pmb{+}} [f(\sigma)]\big|\le C_f e^{ - \Omega(t)}\,,\qquad \mbox{for all $t\ge 0$}\,. \end{align} \end{corollary} Note that this result may be viewed as analogous to a version of Theorem~\ref{thm:Ising-mixing-plus-bc} that only measures the mixing time \emph{in the bulk\/} (i.e., sufficiently far from the boundary). The absence of boundary effects in infinite volume allows us to use WSM within a phase to establish~\eqref{eqn:infvolcor} for all $\beta>\beta_c(d)$ in all dimensions~$d$, in contrast to Theorem~\ref{thm:Ising-mixing-plus-bc}, which required the stronger SSM within a phase property and hence required stronger assumptions on~$\beta$ when $d\ge 3$. Corollary~\ref{cor:infinite-volume-relaxation} stands in contrast to the result of~\cite{Bodineau-Martinelli02}, which in $d=2$ gives a {\it lower\/} bound of $e^{-O(\sqrt{t}\,{\rm polylog}(t))}$ on the left-hand side of~\eqref{eqn:infvolcor} averaged over an initialization drawn from $\pi_{\mathbb Z^d}^{\pmb{+}}$ (replacing the uniform $\pmb{+1}$ initialization in~\eqref{eqn:infvolcor}). \subsection{Possible extensions} \label{subsec:extensions} We briefly discuss some potential extensions of our results above, leaving them as open problems. Perhaps the most canonical generalization of the Ising model is the Potts model: there, each vertex can be assigned one of $q$ possible spins, with the same Gibbs distribution as in~\eqref{eqn:gibbs} except that now $C(\sigma)$ denotes the {\it $q$-way cut\/} induced by the spins; the Ising model is the case $q=2$. 
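For concreteness, the $q$-way cut and the proposed ground-state mixture for the Potts model can be sketched as follows (illustrative code with our naming; every monochromatic configuration has an empty cut and hence maximal Gibbs weight):

```python
import random

def potts_cut(sigma, edges):
    """Size of the q-way cut: edges whose endpoints receive different spins."""
    return sum(1 for u, v in edges if sigma[u] != sigma[v])

def ground_state_mixture(n_vertices, q, rng):
    """Sample the initialization placing weight 1/q on each monochromatic
    ground state (all vertices assigned one of the q spins)."""
    color = rng.randrange(q)
    return [color] * n_vertices

edges = [(0, 1), (1, 2), (2, 0)]          # a triangle
sigma = ground_state_mixture(3, q=3, rng=random.Random(1))
assert potts_cut(sigma, edges) == 0       # ground states have empty cut
assert potts_cut([0, 0, 1], edges) == 2   # a non-monochromatic example
```

For the Ising case $q=2$ this uniform mixture is exactly the initialization $\nu^{\pmb{\pm}}$ of Theorem~\ref{thm:Ising-torus-mixing}; as noted below for the random-cluster model, the appropriate mixture need not be uniform in general.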
The natural generalization of Theorem~\ref{thm:Ising-torus-mixing} would suggest $\Theta(N\log N)$ mixing time for the low-temperature Potts Glauber dynamics, starting from the initialization that puts equal weight~$\frac{1}{q}$ on each of the ground states, i.e., the configurations in which all spins are the same. Indeed, this behavior was conjectured in~\cite{HPR-Algorithmic-Pirogov-Sinai}, and our Theorem~\ref{thm:Ising-torus-mixing} resolves this conjecture for the case $q=2$ throughout the low-temperature regime. A major obstacle to extending our current analysis to this scenario is the lack of monotonicity in the Potts model for~$q\ge 3$. A closely related model that \emph{does} exhibit monotonicity is the {\it random-cluster model}. The random-cluster model is an extensively studied graphical representation of the Ising and Potts models, in which configurations are random subgraphs weighted by their number of edges and number of connected components~\cite{Grimmett}. The Glauber dynamics for the random-cluster model (which updates edges rather than vertex spins) on $(\mathbb Z/n\mathbb Z)^d$ does not suffer from the exponential slowdown at low temperatures of the Ising and Potts models, but is expected to experience such a slowdown {\it at the critical point} if $q$ is larger than some $q_c(d)$, due to a bottleneck between an ``ordered phase'' (marked by the existence of a giant component) and a ``disordered phase'' (where the subgraph is shattered). This slowdown was proved for~$q$ sufficiently large in~\cite{BCT,BCFKTVV}, and for all $q>q_c(2) = 4$ when $d=2$~\cite{GL1}. We predict that this bottleneck can be overcome if the dynamics is initialized from a mixture of the corresponding ground states, i.e., the full and empty subgraphs. (In this case, interestingly, the right mixture is not uniform, but has weights determined by the parameters $q$ and~$d$.)
Indeed, our proof of Theorem~\ref{thm:Ising-plus-phase} can be seen to generalize immediately to this setting, so the main missing piece is establishing that the appropriate analog of WSM within a phase holds. Lastly, we expect that both our methods and results for the Ising model can be extended to arbitrary amenable graphs (i.e., graphs of sub-exponential growth) at sufficiently low temperatures. As the worst-case mixing time of the Ising Glauber dynamics is expected to be polynomial at criticality, we end with the following (admittedly ambitious) open problem: is the mixing time of the Ising Glauber dynamics initialized from $\nu^{{\pmb{\pm}}}$ polynomial on \emph{every} graph on $N$ vertices and for every $\beta>0$? \subsection{Related work} \label{subsec:related} There is a vast literature on Glauber dynamics for spin systems, which we do not attempt to summarize here. We focus only on attempts to circumvent low-temperature bottlenecks---our main concern in this paper. As mentioned earlier, the only setting in which an analog of our Theorem~\ref{thm:Ising-torus-mixing} was previously known to hold is for the mean-field Ising model~\cite{LLP}, where the question reduces to a 1-dimensional argument based just on the magnetization. Other approaches to this problem have typically started by transforming the spin system to an alternative representation, which does not suffer from the same bottleneck. We summarize these here, though we emphasize that our motivation in this paper is to explore the more direct approach of using the original Glauber dynamics on the spin system itself. The early work of~\cite{JSIsing}, which gave a polynomial-time sampler for the Ising model on all graphs at all temperatures, was based on the so-called ``even subgraph'' representation of the model.
The Glauber dynamics for the random-cluster model has been seen as a promising route to overcoming low-temperature bottlenecks in both the Ising and Potts models, as it is oblivious to the choice of phase. For the $q=2$ case of the Ising model,~\cite{GuoJer} proved that the random-cluster dynamics mixes in polynomial time on all graphs at all temperatures. For general $q$,~\cite{BS} showed that the random-cluster dynamics on subsets of $\mathbb Z^2$ mixes in $O(N\log N)$ time at all low temperatures (see also~\cite{BGVfull} where different boundary conditions were considered). More generally, the classical proof of~\cite{MaOl1} (shown to readily extend to random-cluster dynamics in~\cite{HarelSpinka}) implies fast mixing of the random-cluster dynamics on $(\mathbb Z/n\mathbb Z)^d$ whenever the random-cluster model has a WSM property; recall that this property was shown to hold at all low temperatures in all dimensions when $q=2$ in~\cite{DCGR20}. The Swendsen-Wang dynamics~\cite{SW} exploits the Edwards-Sokal coupling~\cite{ES} of spins and edges in the Potts and random-cluster models to update large clusters of spins in a single step, and thus jump between phases. By virtue of a comparison technique introduced in~\cite{Ullrich-random-cluster}, the above bounds on the random-cluster dynamics translate to bounds on the Swendsen-Wang dynamics up to a multiplicative factor of~$N$. In special settings, one can do better than this lossy comparison and obtain optimal bounds: at low temperatures this includes the complete graph~\cite{LNNP14,GSV,BS-MF}, $\mathbb Z^2$~\cite{Martinelli-SW,BCPSV} and trees~\cite{BZSV-SW-trees}. Finally, there has been a recent series of papers based on the polymer representation of the Ising and Potts models (also known as the cluster expansion). This began with the work~\cite{HPR-Algorithmic-Pirogov-Sinai}, which gave polynomial-time sampling algorithms for the Potts model on $(\mathbb Z/n\mathbb Z)^d$ at sufficiently low temperatures.
Further work~\cite{BCHPT-Potts-all-temp} derives similar results at all temperatures, provided the parameter~$q$ is sufficiently large as a function of the dimension~$d$. In a related direction, \cite{CGGPS,GGS} obtain rapid mixing results for the so-called ``polymer dynamics'' at sufficiently low temperatures on expander graphs, and deduce polynomial mixing for Glauber dynamics restricted to a small region around the ground states (measured in terms of the size of its largest polymer). It should be noted that such approaches are ultimately based on convergence of the cluster expansion and are unlikely to cover the full low-temperature regime for small~$q$. \par\medskip\noindent {\bf Outline of paper.} In Section~\ref{sec:prelim}, we recall necessary preliminaries on the Ising model and its Glauber dynamics. In Section~\ref{sec:relaxation-within-phase}, we prove the mixing time upper bounds of Theorems~\ref{thm:Ising-torus-mixing} and~\ref{thm:Ising-plus-phase} assuming WSM within a phase holds. Then, in Section~\ref{sec:Ising-wsm-within-phase}, we prove Theorem~\ref{thm:Ising-wsm-within-phase-intro}, establishing that WSM within a phase does indeed hold at all low temperatures. In Section~\ref{sec:mixing-plus-bc}, we prove the mixing time upper bound with $\pmb{+1}$ boundary conditions of Theorem~\ref{thm:Ising-mixing-plus-bc}. In Section~\ref{sec:lower-bound}, we obtain the key estimate for our mixing time lower bounds. Finally, in Section~\ref{sec:proofs-of-main-theorems}, we combine these various pieces to deduce our main theorems. \subsection*{Acknowledgements} The authors thank Fabio Martinelli for useful discussions and for suggesting the application in Corollary~\ref{cor:infinite-volume-relaxation}. R.G.\ thanks the Miller Institute for Basic Research in Science for its support. The research of A.S.\ is supported in part by NSF grant CCF-1815328.
\section{Preliminaries and notation}\label{sec:prelim} In this section, we recap our notation, and important preliminaries about the Ising model and monotone Markov chains that we appeal to throughout the paper. \subsection{Underlying geometry} Throughout, we will work on rectangular subgraphs of the $d$-dimensional lattice $\mathbb Z^d$, with vertex set \[ \Lambda_m := [-m,m]^d \cap \mathbb Z^d\,, \] and edge set $E(\Lambda_m) = \{\{u,v\}:d(u,v)=1\}$, where $d(\cdot,\cdot)$ is the $\ell_1$ distance in $\mathbb Z^d$. The (inner) boundary vertices of $\Lambda_m$ are denoted $\partial \Lambda_m= \{w\in \Lambda_m: d(w,\mathbb Z^d \setminus \Lambda_m) = 1\}$. For a vertex $v$, we will frequently consider its ($\ell_\infty$) ball of radius $r$, which we denote $B_r(v)=\{w\in \mathbb Z^d: d_\infty(w,v)\le r\}$ where $d_\infty(\cdot,\cdot)$ is the $\ell_\infty$ distance in $\mathbb Z^d$. We also consider the torus $\mathbb T_m = (\mathbb Z/(2m\mathbb Z))^d$ with nearest neighbor edges, where the $\ell_1$ distance is measured modulo $2m$. One naturally identifies the fundamental domain of $\mathbb T_m$ with $\Lambda_m$, but with vertices on opposite sides identified with one another as one vertex---we denote this graph $\Lambda_m^p$ indicating its equivalence to $\Lambda_m$ with periodic boundary conditions. \subsection{The Ising model} Recall the definition of the Ising Gibbs distribution~\eqref{eqn:gibbs} on a graph $G$ at inverse temperature $\beta>0$. We denote the set of configurations of the model by $\Omega = \{\pm 1\}^{V(G)}$. We indicate the underlying graph by a subscript, e.g., $\pi_{G}$. (The parameter $\beta$ will always be fixed, and thus its dependency is suppressed.) We use $\pi_G[\cdot]$ to denote the corresponding expectation. For an Ising configuration $\sigma\in \Omega$, let $M(\sigma) = \sum_{v\in V(G)} \sigma_v$ be its {\it magnetization}. 
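These conventions (the torus $\mathbb T_m$ with $\ell_1$ distances measured modulo $2m$, and the magnetization $M(\sigma)$) translate directly into code; the following Python sketch is purely illustrative.

```python
import itertools

def torus_vertices(m, d):
    """Vertices of the torus T_m = (Z/2mZ)^d."""
    return list(itertools.product(range(2 * m), repeat=d))

def torus_neighbors(v, m):
    """Nearest neighbors of v in T_m: l1 distance 1, coordinates mod 2m."""
    out = []
    for i in range(len(v)):
        for delta in (+1, -1):
            w = list(v)
            w[i] = (w[i] + delta) % (2 * m)
            out.append(tuple(w))
    return out

def magnetization(sigma):
    """M(sigma) = sum over vertices v of sigma_v."""
    return sum(sigma.values())

m, d = 2, 2                        # the torus T_2 = (Z/4Z)^2
V = torus_vertices(m, d)
sigma = {v: +1 for v in V}         # the ground state +1
assert len(V) == (2 * m) ** d
assert all(len(torus_neighbors(v, m)) == 2 * d for v in V)
assert magnetization(sigma) == len(V)   # +1 lies in the plus phase
```

In this representation, the sets $\widehat\Omega$ and $\widecheck\Omega$ introduced next are simply the configurations with `magnetization(sigma) >= 0` and `<= 0`, respectively.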
We will frequently work with the following subsets of the configuration space, as indicated in the introduction: \[ \widehat \Omega= \{\sigma: M(\sigma) \ge 0\}\,, \qquad \mbox{and}\qquad \widecheck \Omega = \{\sigma: M(\sigma) \le 0\}\,. \]We also use $\widehat \pi_G$ and $\widecheck\pi_G$ to denote the conditional distributions $\pi_G (\cdot \mid \widehat \Omega)$ and $\pi_G (\cdot \mid \widecheck \Omega)$ respectively. \subsubsection*{Ising phase transition on $\mathbb Z^d$} The Ising model has a famous phase transition on $\mathbb Z^d$, manifested in finite volumes as follows. For every $d\ge 2$, there exists a critical $\beta_c(d)>0$ such that (1) for all $\beta<\beta_c(d)$, the spin-spin correlation $\pi_{\mathbb T_m}(\sigma_v = \sigma_w) - \frac 12$ decays to zero exponentially with $d(v,w)$; and (2) for all $\beta>\beta_c(d)$, $\pi_{\mathbb T_m}(\sigma_v = \sigma_w) - \frac 12$ is bounded away from zero, uniformly in $m$ and $v,w\in \mathbb T_m$. As a result, for $\beta>\beta_c(d)$ there is no spatial mixing of the form~\eqref{eqn:wsmintro}, and $(2m)^{-d} M(\sigma)$ converges to a two-atomic distribution at $\pm m_*(\beta)$ for some $m_*(\beta)>0$. The works~\cite{Pisztora96,Bodineau05} established a \emph{surface-order} large deviation principle: for all $\beta>\beta_c(d)$, for all $\epsilon<m_*(\beta)$, \begin{align}\label{eq:surface-order-LDP} \pi_{\mathbb T_m}\big((2m)^{-d}M(\sigma) \in [-\epsilon,\epsilon]\big) \le Ce^{ - m^{d-1}/C}\,. \end{align} \subsubsection{Boundary conditions} When considering the Ising model on subgraphs of $\mathbb Z^d$, e.g., $\Lambda_m$, we will consider the model with \emph{boundary conditions}. A boundary condition is an arbitrary configuration $\eta\in \{\pm 1\}^{\mathbb Z^d}$. 
The Ising measure on $\Lambda_m$ with boundary condition~$\eta$ is the conditional distribution $$\pi_{\Lambda_{m+1}}\big(\sigma_{\Lambda_m}\in \cdot \mid \sigma_{\partial \Lambda_{m+1}} = \eta_{\partial \Lambda_{m+1}}\big)\,,$$ where we write $\sigma_A$ for a configuration~$\sigma$ restricted to vertices in a subset~$A$. We use $\pi_{\Lambda_m^\eta}$ to denote this conditional distribution over $\{\pm 1\}^{\Lambda_m}$, and use $\pmb{+}$ or $\pmb{+1}$ to denote the all-$+1$ configuration, and $\pmb{-}$ or $\pmb{-1}$ to denote the all-$-1$ configuration. \subsection{Markov chains and monotonicity relations} Consider a (discrete-time) ergodic Markov chain with transition matrix $P$ on a finite state space $\Omega$, reversible with respect to a stationary distribution~$\pi$. Denote the chain initialized from $x_0\in \Omega$ by $(X_t^{x_0})_{t \in \mathbb N}$. It is well known that, for any~$x_0$, the distance $d_{{\textsc{tv}}}(x_0; t) := \|\mathbb P(X_{t}^{x_0}\in \cdot) -\pi\|_{\textsc{tv}}$ is non-increasing in~$t$ and converges to zero as $t\to\infty$. (Here $\Vert\cdot\Vert_{\textsc{tv}}$ is the total variation distance, i.e., half the $\ell_1$ norm.) The {\it $\epsilon$-mixing time from initialization~$x_0$} is defined as $t_{\textsc{mix}}^{x_0}(\epsilon) = \min\{t: d_{\textsc{tv}}(x_0;t) \le \epsilon\}$, and by convention, we set $t_{\textsc{mix}}^{x_0} = t_{\textsc{mix}}^{x_0}(1/4)$ and $t_{\textsc{mix}} = \max_{x_0}t_{\textsc{mix}}^{x_0}$. \subsubsection*{Continuous-time dynamics} It will be convenient in our analysis to work in continuous rather than discrete time. The {\it continuous-time Glauber dynamics\/} on a graph~$G$ is defined as follows. Assign every vertex an independent rate-1 Poisson clock; when the clock at $v$ rings, replace the spin $\sigma_v$ by a random spin sampled according to the correct conditional distribution given the spins on the neighbors of~$v$.
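A discrete-time caricature of this heat-bath update, run with shared update sites and uniforms for two ordered initializations (anticipating the grand coupling of Definition~\ref{def:grand-coupling} below), is sketched here on a cycle; the graph and parameter choices are illustrative only.

```python
import math
import random

def heat_bath_p_plus(sigma, v, nbrs, beta):
    """pi(sigma_v = +1 | spins on the neighbors of v): logistic in the local field."""
    h = sum(sigma[w] for w in nbrs[v])
    return 1.0 / (1.0 + math.exp(-2.0 * beta * h))

def grand_coupling_step(chains, nbrs, beta, rng):
    """One shared update: the same site v and the same uniform U for every chain."""
    v = rng.randrange(len(nbrs))
    u = rng.random()
    for sigma in chains:
        sigma[v] = +1 if u <= heat_bath_p_plus(sigma, v, nbrs, beta) else -1

# Cycle of length 10 (periodic boundary); chains started from all-plus and all-minus.
n, beta = 10, 0.6
nbrs = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}
rng = random.Random(0)
top, bot = [+1] * n, [-1] * n
for _ in range(2000):
    grand_coupling_step([top, bot], nbrs, beta, rng)
    assert all(a >= b for a, b in zip(top, bot))  # coordinatewise order is preserved
```

Since the heat-bath probability $1/(1+e^{-2\beta h})$ is increasing in the local field $h$, sharing the uniforms preserves the coordinatewise order of the two chains at every step, which the assertion checks.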
Recall (e.g., from~\cite[Theorem~20.3]{LP}) the standard fact that \begin{align}\label{eq:continuous-time-discrete-time-comparison} d_{{\textsc{tv}}}^{\textrm{cont}}(x_0 ; C t) \le d_{\textsc{tv}}(x_0; |V(G)| t) \le d_{{\textsc{tv}}}^{\textrm{cont}}(x_0 ; C^{-1} t)\,, \end{align} for some absolute constant $C$, where $d_{\textsc{tv}}^{\textrm{cont}}$ is the corresponding distance for the continuous-time dynamics. That is, the mixing times of the discrete- and continuous-time dynamics differ only by a factor of~$O(|V(G)|)$, where $|V(G)|$ is the volume. As such, for our $\Theta(N\log N)$ mixing time results stated in the introduction, it suffices to prove that the continuous-time dynamics has mixing time $\Theta(\log N)$. Abusing notation, from this point on, we let $(X_t^{x_0})_{t\ge 0}$, and all other Ising Glauber dynamics we consider, be the continuous-time chains. \subsubsection*{The grand coupling} A standard tool in the study of Markov chains is the \emph{grand coupling}, which places chains with all possible initializations in a common probability space. \begin{definition}\label{def:grand-coupling} To each vertex~$v\in V(G)$ we independently assign an infinite sequence of times $(T_1^{v}, T_2^{v},...)$ given by the rings of a rate-1 Poisson clock, and an infinite sequence $(U_{1}^{v},U_2^v,...)$ of $\mbox{Unif}[0,1]$ random variables. The dynamics $(X_{t}^{x_0})_{t\ge 0}$ is generated as follows (where $\pi$ denotes the stationary distribution): \begin{enumerate} \item Set $X_0^{x_0} =x_0$. Let $t_1< t_2<...$ be the times in $\bigcup_{k}\bigcup_{v}\{T_k^v\}$ in increasing order (almost surely, these times are distinct). \item For $j\ge 1$, let $(v,k)$ be the unique pair such that $t_j=T_k^v$.
Set $X_t^{x_0}= X_{t_{j-1}}^{x_0}$ for all $t\in [t_{j-1},t_j)$; then set $X_{t_j}^{x_0}(\Lambda_n\setminus \{v\}) = X_{t_{j-1}}^{x_0}(\Lambda_n\setminus \{v\})$ and \begin{align}\label{eq:update-rule} X_{t_j}^{x_0}(v) = \begin{cases}+1 & \mbox{if } U_{k}^{v}\le \pi(\sigma_v = +1 \mid \sigma_{\Lambda_n\setminus v} = X_{t_{j-1}}^{x_0}(\Lambda_n \setminus v)) \\ -1 & \mbox{otherwise} \end{cases}\,. \end{align} \end{enumerate} \end{definition} By using the same times $(T_1^v,T_2^v,...)_{v\in \mathbb Z^d}$ and uniform random variables $(U_1^v,U_2^v,...)_{v\in \mathbb Z^d}$, and taking $\pi = \pi_{A^\eta}$, the above construction gives a grand coupling of $(X_{A^\eta,t}^{x_0})_{t\ge 0}$, the Markov chains on $A\subset \mathbb Z^d$ with boundary conditions $\eta$, initialized from all possible configurations~$x_0$. It is well known (and simple to check) that in the case of the Ising model this coupling is {\it monotone}, in the sense that if $x_0 \ge x_0'$ and $\eta \ge \eta'$, then $X_{A^\eta,t}^{x_0}\ge X_{A^{\eta'},t}^{x_0'}$ for all $t\ge 0$. (Monotonicity does not hold for many other models, such as the Potts model.) \subsubsection*{The restricted Glauber dynamics} A crucial tool in our analysis will be the Glauber dynamics \emph{restricted to $\widehat \Omega$} (and, symmetrically, to~$\widecheck \Omega$). This chain, denoted $(\widehat X_t^{x_0})_{t\ge 0}$, is defined exactly as in Definition~\ref{def:grand-coupling}, except that if the update in~\eqref{eq:update-rule} would cause~$\sigma$ to leave~$\widehat \Omega$, we set $\widehat X_{t_j}^{x_0}(v) = +1$ deterministically (i.e., leave $\sigma_v$ unchanged). It is easy to check that $(\widehat X_t)$ is a Markov chain reversible w.r.t.\ $\widehat \pi$. \subsection{Notational disclaimers} Throughout the paper, $d\ge 2$ and $\beta>\beta_c(d)$ will be fixed and understood from context. All our results should be understood as holding uniformly over sufficiently large $n$ or $m$.
The letter $C>0$ will be used frequently, always indicating the existence of a constant that is independent of $r,n,m$, etc., but that may depend on $\beta, d$ and may differ from line to line. Finally, we use big-$O$, little-$o$ and big-$\Omega$ notation in the same manner, where the hidden constants may depend on $\beta, d$. \section{Fast mixing on the torus given WSM within a phase}\label{sec:relaxation-within-phase} In this section, we establish that WSM within a phase implies fast mixing of the restricted dynamics $\widehat X_{\Lambda_n^p,t}^{\pmb{+}}$ to $\widehat \pi_{\Lambda_n^p}$ (Theorem~\ref{thm:Ising-plus-phase}), and in turn of the full dynamics $X_{\Lambda_n^p,t}^{\nu^{{\pmb{\pm}}}}$ to $\pi_{\Lambda_n^p}$ (Theorem~\ref{thm:Ising-torus-mixing}). Let us begin by formalizing this goal by defining WSM within a phase as the following slightly stronger version of~\eqref{eqn:wsmphaseintro}, where we consider marginals on sets that are balls. \begin{definition}\label{def:wsm-within-a-phase} Fix $d\ge 2$ and $\beta>\beta_c(d)$. We say that the Ising model on $\Lambda_n$ has \emph{weak spatial mixing (WSM) within a phase} if for every $r< n$ and every $v\in \Lambda_n$, \begin{align*} \|\pi_{B_r^{\pmb{+}}(v)}(\sigma_{B_{r/2}(v)} \in \cdot) - \widehat \pi_{\Lambda_n^p}(\sigma_{B_{r/2}(v)}\in \cdot)\|_{\textsc{tv}}\le Ce^{ - r/C}. \end{align*} \end{definition} \noindent We aim to prove the following, which combined with Theorem~\ref{thm:Ising-wsm-within-phase-intro} implies the upper bound of Theorem~\ref{thm:Ising-torus-mixing}. Recall that we use $N= (2n)^d$ to denote the number of vertices in $\Lambda_n^p$. \begin{theorem} \label{thm:Ising-torus-restatement} Suppose WSM within a phase holds. Then for every $t \ge 0$, \begin{align}\label{eqn:torus-restmt-thm} \|\mathbb P(X_{\Lambda_n^p,t}^{\nu^{{\pmb{\pm}}}} \in \cdot) - \pi_{\Lambda_n^p}\|_{\textsc{tv}} \le C n^d e^{ - t/C} \vee Ce^{ - n/C}\,.
\end{align} In particular, the (continuous-time) $\epsilon$-mixing time of $X_{\Lambda_n^p,t}^{\nu^{{\pmb{\pm}}}}$ is $O(\log N \log (1/\epsilon))$ for all $\epsilon \ge e^{ - o(n)}$. \end{theorem} \noindent The proof of this will go hand in hand with proving fast mixing of the restricted chain $\widehat X_t$ (formally defined in Section~\ref{sec:prelim}), giving the upper bound of Theorem~\ref{thm:Ising-plus-phase}, as follows. \begin{theorem}\label{thm:Ising-mixing-restricted-chain} Suppose WSM within a phase holds. Then for every $t\ge 0$, \begin{align*} \|\mathbb P(\widehat X_{\Lambda_n^p,t}^{\pmb{+}} \in \cdot) - \widehat\pi_{\Lambda_n^p}\|_{\textsc{tv}} \le Cn^d e^{ - t/C} \vee C e^{ - n/C}\,. \end{align*} In particular, the (continuous-time) $\epsilon$-mixing time of $\widehat X_{\Lambda_n^p,t}^{\pmb{+}}$ is $O(\log N \log(1/\epsilon))$ for all $\epsilon \ge e^{- o(n)}$. \end{theorem} For ease of notation, we always write $\widehat \pi = \widehat \pi_{\Lambda_n^p}$ and, in this section, $X_{t}^\sigma = X_{\Lambda_n^p,t}^\sigma$ and $\widehat X_t^\sigma = \widehat X_{\Lambda_n^p,t}^{\sigma}$. \subsection{Single-site relaxation within a phase} The goal of this subsection is to prove the following exponential rate of relaxation of the {\it single-site marginals\/} of $X_{t}^{\pmb{+}}$ to $\widehat \pi$. This will be a key step towards Theorem~\ref{thm:Ising-mixing-restricted-chain}. \begin{proposition}\label{prop:relaxation-within-phase} Suppose WSM within a phase holds. Then for every $v\in \Lambda_n$, \begin{align*} |\mathbb P(X_{\Lambda_n^p,t}^{\pmb{+}}(v)=+1) - \widehat \pi_{\Lambda_n^p}(\sigma_v = +1)| \le Ce^{ - t/C} \qquad \mbox{for all }t\le n\,. \end{align*} \end{proposition} Our starting point for proving Proposition~\ref{prop:relaxation-within-phase} is the recursive scheme of~\cite{MaOl1}, used to prove fast mixing of the \emph{high-temperature} Ising dynamics under the standard WSM condition. 
We transform this into a scheme that works at low temperatures, restricted to a single phase of the model, assuming instead our new WSM within a phase condition (Definition~\ref{def:wsm-within-a-phase}). The need to work with the restricted dynamics $\widehat X_t$ and its stationary measure $\widehat \pi$ introduces various technical obstacles to the recursive scheme of~\cite{MaOl1}. These include the absence of global monotonicity of the restricted dynamics, and the fact that there is no \emph{minimal\/} configuration within a phase (only the distribution~$\widehat \pi$). While~\cite{MaOl1} (and extensions such as~\cite{HarelSpinka}) recurse over the probability of disagreement between two Markov chains initialized from the maximal and minimal configurations respectively, in our setting the minimal configuration lies outside the plus phase. Instead, we will recurse over the difference in expectations $\mathbb E[X_t^{\pmb{+}}(v)] - \widehat \pi[\sigma_v]$. Conveniently, this allows us to work with the Markov semigroup. Namely, for $f: \Omega \to \mathbb R$, define the function $P_t f: \Omega \to \mathbb R$ as \begin{align}\label{eq:Pt-def} P_t f(\sigma) := \mathbb E[f( X_{t}^{\sigma})]\,. \end{align} Our main goal will be to establish a recurrence relation (in time) for $P_t f_v$, where \begin{align}\label{eq:fv-def} f_v(\sigma) : = \sigma_v - \widehat \pi[\sigma_v]\,. \end{align} \subsubsection{Monotonicity and the restricted chain}\label{subsec:monotonicity-restricted} We first overcome the absence of monotonicity of the restricted chain using the observation that its typical hitting time to $\partial \widehat \Omega$ is exponentially long. Throughout the paper, let us denote by $\widehat \tau^{x_0}$ the hitting time to $\partial \widehat \Omega$ of the restricted dynamics $\widehat X_{t}^{x_0}$. By Definition~\ref{def:grand-coupling} and monotonicity, together with the definition of the restricted dynamics, one observes the following.
\begin{claim}\label{clm:monotonicity-relations} For every $0\le t\le \widehat \tau^{\widehat \pi}$, we have $\widehat X_{t}^{\widehat \pi} = X_{t}^{\widehat \pi} \le X_{t}^{\pmb{+}} = \widehat X_{t}^{\pmb{+}}$. \end{claim} Claim~\ref{clm:monotonicity-relations} leads to the following lower bound on $\widehat \tau^{\widehat \pi}\le \widehat \tau^{\pmb{+}}$. \begin{lemma}\label{lem:zero-magnetization-hitting-time} For every $t\ge 0$, we have $\mathbb P(\widehat \tau^{\pmb{+}}\le t) \le \mathbb P(\widehat \tau^{\widehat \pi}\le t) \le C (t\vee 1) e^{ - n^{d-1}/C}$. \end{lemma} \begin{proof} Fix the sequence of times $t_1,t_2,...$ at which some clock rings in $\Lambda_n$: note that this is distributed as a Poisson clock with rate $|\Lambda_n|$. By definition of the Glauber dynamics, even conditionally on this clock sequence, for all $t$, the law of $\widehat X_{t}^{\widehat \pi}$ is stationary, and thus distributed as $\widehat \pi$. The number of clock rings by time $t$ is at most $n^{d-1}|\Lambda_n|(t\vee 1)$ except with probability $Ce^{- n^{d-1}/C}$. Therefore, we have $$\mathbb P(\widehat \tau^{\widehat \pi} \le t) \le Ce^{ - n^{d-1}/C} + \sum_{i\le n^{d-1}|\Lambda_n|(t\vee 1)} \mathbb P(\widehat X_{t_i}^{\widehat \pi}\in \partial \widehat \Omega) \le Ce^{ - n^{d-1}/C} + Cn^{2d-1}(t\vee 1) \widehat \pi(\partial \widehat \Omega)\,,$$ which implies the desired bound via~\eqref{eq:surface-order-LDP} and $\widehat \pi(\partial \widehat \Omega)\le 2 \pi(|M(\sigma)|\le 1)$. \end{proof} \subsubsection{Main recursion}\label{subsec:main-recurrence} We now prove the following recurrence, and deduce Proposition~\ref{prop:relaxation-within-phase}. For ease of notation, we use $P_t f_v(\pmb{+})$ to denote $P_t f_v(\sigma)$ where $\sigma$ is the $\pmb{+1}$ configuration. \begin{proposition}\label{prop:MO-recurrence-within-phase} Suppose that the Ising model has WSM within a phase.
Then for every $t\le n$ and $r< n/2$, \begin{align}\label{eqn:recurrence} P_{2t} f_v (\pmb{+}) \le C r^d \big(P_t f_v (\pmb{+})\big)^2 + Ce^{ - r/C}\,. \end{align} \end{proposition} \begin{proof} Fix $t$, $v\in \Lambda_n$, and consider the quantity $P_{2t} f_v(\pmb{+})$. Define the event \begin{align*} E_t(v,r) := \{X_t^{\pmb{+}}(B_r(v)) = \widehat X_t^{\widehat \pi}(B_r(v))\}\,, \end{align*} under the grand coupling. By the Markov property, $P_{2t} f = P_t P_t f$, so we can decompose \begin{align}\label{eq:P-2t-f-splitting} P_{2t}f_v(\pmb{+}) = \mathbb E[P_tf_v(X_t^{\pmb{+}})] = \mathbb E[P_t f_v (X_t^{\pmb{+}}) \mathbf 1\{E_t(v,r)\}] + \mathbb E[P_t f_v( X_t^{\pmb{+}}) \mathbf 1\{E_t(v,r)^c\}]\,, \end{align} where the expectation is over both $\widehat \pi$ and the dynamics under the grand coupling of $(X_t^{\sigma}, \widehat X_t^{\sigma})_{\sigma,t}$. Before proceeding, let us make the following observation: for any increasing $f: \Omega \to\mathbb R$ and every~$t$, the function $P_t f$ is increasing. Indeed, for any $\sigma' \ge \sigma$, under the grand coupling of the dynamics we have $X_t^{\sigma'} \ge X_t^{\sigma}$, and therefore $f(X_t^{\sigma'}) \ge f(X_t^{\sigma})$. Taking the expectation of $f(X_t^{\sigma'}) - f(X_t^\sigma)$ under this coupling, we get $P_t f(\sigma') \ge P_t f(\sigma)$. Also, for a subset $A$ of $\Lambda_n$, introduce the function $P_{t,A^{\pmb{+}}} f: \Omega_A \to \mathbb R$ defined as \begin{align*} P_{t,A^{\pmb{+}}} f(\sigma) := \mathbb E[ f(X_{A^{\pmb{+}},t}^\sigma)]\,, \end{align*} where $X_{A^{\pmb{+}},t}^{\sigma}$ is the dynamics in $A$ with $\pmb{+1}$ boundary conditions, i.e., with all sites in $\Lambda_n \setminus A$ fixed to be $+1$, initialized from $\sigma$. The grand coupling of the dynamics naturally extends to $X_{A^{\pmb{+}},t}^{\sigma}$ for all subsets $A \subset \Lambda_n$. Observe that for $A\ni v$, if $f:\Omega_A \to\mathbb R$ is an increasing function, then $P_t f(\sigma) \le P_{t,A^{\pmb{+}}}f (\sigma_A)$.
Indeed this can be seen by the grand coupling, just as the increasing nature of $P_t f$ was. We now bound the first term of~\eqref{eq:P-2t-f-splitting} as \begin{align*} \mathbb E[P_t f_v(X_t^{\pmb{+}}) \mathbf 1\{E_t(v,r)\}] & \le \mathbb E[P_{t,B_r^{\pmb{+}}(v)} f_v(X_t^{\pmb{+}}(B_r(v))) \mathbf 1\{E_t(v,r)\}] \\ & \le \mathbb E[P_{t,B_r^{\pmb{+}}(v)} f_v(\widehat X_t^{\widehat \pi}(B_r(v)))] = \widehat \pi[P_{t,B_r^{\pmb{+}}(v)} f_v(\sigma_{B_r(v)})]\,, \end{align*} where the last equality used stationarity of $\widehat X_t^{\widehat \pi}$. We can then bound \begin{align} \widehat \pi[P_{t,B_r^{\pmb{+}}(v)} & f_v(\sigma_{B_r(v)})] \nonumber\\ & \le \pi_{B_{2r}^{\pmb{+}}(v)}[P_{t,B_r^{\pmb{+}}(v)} f_v(\sigma_{B_r(v)})] + \big|\widehat \pi[P_{t,B_r^{\pmb{+}}(v)} f_v(\sigma_{B_r(v)})] - \pi_{B_{2r}^{\pmb{+}}(v)}[P_{t,B_r^{\pmb{+}}(v)} f_v(\sigma_{B_r(v)})]\big| \nonumber\\ & \le \pi_{B_{r}^{\pmb{+}}(v)}[P_{t,B_r^{\pmb{+}}(v)} f_v(\sigma_{B_r(v)})] + 2\|\widehat \pi (\sigma_{B_r(v)}\in \cdot) - \pi_{B_{2r}^{\pmb{+}}(v)}(\sigma_{B_r(v)} \in \cdot) \|_{\textsc{tv}}\,,\label{eqn:ajs1} \end{align} where in the second inequality the change in domain in the first term is by monotonicity and the fact that $P_{t,B_r^{\pmb{+}}(v)} f_v$ is an increasing function, and the $2$ multiplying the variation distance comes from the fact that $|P_{t,B_r^{\pmb{+}}(v)}f_v(\sigma)|\le 2$ for all $\sigma$. By the WSM within a phase assumption, the second term in~\eqref{eqn:ajs1} is at most $Ce^{ - r/C}$. To bound the first term in~\eqref{eqn:ajs1}, we use the stationarity of $\pi_{B_r^{\pmb{+}}(v)}$ under $P_{t,B_r^{\pmb{+}}(v)}$ to obtain \begin{align*} \pi_{B_{r}^{\pmb{+}}(v)}[P_{t,B_r^{\pmb{+}}(v)} f_v(\sigma_{B_r(v)})] = \pi_{B_r^{\pmb{+}}(v)}[f_v(\sigma)] = \pi_{B_r^{\pmb{+}}(v)}[\sigma_v] - \widehat \pi[\sigma_v]\,. 
\end{align*} Again, this is bounded by $Ce^{ - r/C}$ via the WSM within a phase assumption, so the entire first term of~\eqref{eq:P-2t-f-splitting} is similarly bounded. We now turn to the second term in~\eqref{eq:P-2t-f-splitting}. This can be bounded as \begin{align}\label{eqn:ajs2} \mathbb E[P_{t} f_v(X_t^{\pmb{+}})\mathbf 1\{E_t(v,r)^c\}] \le \mathbb P(E_t(v,r)^c) \max_{\sigma}P_t f_v(\sigma) \le \mathbb P (E_t(v,r)^c) P_t f_v(\pmb{+}) \,, \end{align} by the monotonicity of the function $P_t f_v$. By Claim~\ref{clm:monotonicity-relations}, we can express \begin{align*} \mathbb P(E_t(v,r)^c) & \le \mathbb P(\widehat \tau^{\widehat \pi}<t) + \sum_{u\in B_r(v)} \mathbb P(X_t^{\pmb{+}}(u) \ne \widehat X_t^{\widehat \pi}(u), \widehat \tau^{\widehat \pi}\ge t)\,. \end{align*} Using that, while $t\le \widehat \tau^{\widehat \pi}$, in order for $X_{t}^{\pmb{+}}(u) \ne \widehat X_t^{\widehat \pi}(u)$, the former must be $+1$ while the latter is $-1$, we get \begin{align*} \mathbf 1\{X_t^{\pmb{+}}(u) \ne \widehat X_t^{\widehat \pi}(u)\} \mathbf 1\{\widehat\tau^{\widehat \pi}\ge t\} = \mathbf 1\{X_t^{\pmb{+}}(u) = +1\}\mathbf 1\{\widehat\tau^{\widehat \pi}\ge t\}- \mathbf 1\{\widehat X_t^{\widehat \pi}(u) = +1\}\mathbf 1\{\widehat\tau^{\widehat \pi}\ge t\}\,. \end{align*} Thus, \begin{align*} \mathbb P(E_t(v,r)^c) & \le \mathbb P(\widehat\tau^{\widehat \pi}<t) + \sum_{u\in B_r(v)} \Big(\mathbb E[X_t^{\pmb{+}}(u)] - \mathbb E[\widehat X_t^{\widehat \pi}(u) \mathbf 1\{\widehat\tau^{\widehat \pi} \ge t\}]\Big) \\ & \le \mathbb P(\widehat\tau^{\widehat \pi}<t) + \sum_{u\in B_r(v)} \Big(\mathbb E[X_t^{\pmb{+}}(u)] - \mathbb E[\widehat X_t^{\widehat \pi}(u)] + \mathbb P(\widehat\tau^{\widehat \pi}<t)\Big)\,.
\end{align*} Using the stationarity of $\widehat \pi$ for $\widehat X_t$, and in the second line the transitivity of the torus and Lemma~\ref{lem:zero-magnetization-hitting-time}, we obtain \begin{align}\label{eqn:ajs3} \mathbb P(E_t(v,r)^c) & \le \mathbb P(\widehat\tau^{\widehat \pi} <t)( 1 + |B_r(v)|) + |B_r(v)| \max_{u\in B_r(v)}P_t f_u(\pmb{+}) \nonumber \\ & \le Ce^{ -n^{d-1}/C} + |B_r(v)| P_t f_v(\pmb{+})\,. \end{align} Substituting~\eqref{eqn:ajs3} into~\eqref{eqn:ajs2} gives a bound on the second term in~\eqref{eq:P-2t-f-splitting} which, when combined with our earlier bound of $Ce^{- r/C}$ on the first term, yields the desired inequality~\eqref{eqn:recurrence}. \end{proof} \begin{proof}[\textbf{\emph{Proof of Proposition~\ref{prop:relaxation-within-phase}}}] The difference under consideration is exactly $\frac{1}{2}|P_t f_v(\pmb{+})|$. We first argue that, for $t\le n$, $P_t f_v(\pmb{+})$ cannot be very negative; namely, \begin{align*} P_t f_v(\pmb{+}) = 2\big(\mathbb E[(\mathbf 1\{X_t^{\pmb{+}}(v) = +1\} - \mathbf 1\{\widehat X_t^{\widehat \pi}(v) = +1\})]\big) \ge - 2\mathbb P(\widehat \tau^{\widehat \pi}\le t)\,, \end{align*} since on $\{\widehat \tau^{\widehat \pi}>t\}$ the difference of the two indicators is non-negative. Thus, by Lemma~\ref{lem:zero-magnetization-hitting-time}, we get $P_t f_v(\pmb{+}) \ge - Ce^{ - n^{d-1}/C}$. It now suffices to upper bound $a_n(t): = P_t f_v(\pmb{+}) \vee 0$. By the FKG inequality, both expectations constituting $P_t f_v(\pmb{+})$ are non-negative, so $0\le a_n(t)\le 1$, and $a_n(t)$ is uniformly (in $n,t$) bounded away from $1$ since $\beta>\beta_c$. Moreover, $a_n(t)$ is non-increasing in $t$ (as can be seen, e.g., from~\cite[Lemmas 2.1 and 2.3]{PWcensoring}) and satisfies the same recurrence relation~\eqref{eqn:recurrence} as $P_t f_v(\pmb{+})$. We can then use the following lemma, whose proof is standard and deferred to Appendix~\ref{sec:recurrence-solution}.
\begin{lemma}\label{lem:recurrence-solution} Suppose $0\le a_n(t)\le 1$ is a non-increasing (in $t$) sequence that is uniformly (in $n,t$) bounded away from $1$. Suppose further that, for every $\epsilon>0$, there exists $T(\epsilon)$ such that $a_n(t)<\epsilon$ for all $t\ge T(\epsilon)$ and all $n$ sufficiently large. If the sequence satisfies, for all $t\le n$ and $r<n/2$, \begin{align*} a_n(2t)\le Cr^d a_n(t)^2 + Ce^{- r/C}\,, \end{align*} then for all $n$ sufficiently large, $a_n(t)\le C'e^{ - t/C'}$ for all $t\le n$. \end{lemma} In order to apply Lemma~\ref{lem:recurrence-solution}, we need to show that for every $\epsilon>0$, there exists $t_0$ such that for all sufficiently large $n$, we have $P_t f_v(\pmb{+}) \le \epsilon$ for all $t\ge t_0$, and therefore also $a_n(t)\le \epsilon$. In order to see this, fix $\epsilon>0$. Let $\ell(\epsilon)$ be sufficiently large that the variation distance obtained from the WSM property in Definition~\ref{def:wsm-within-a-phase}, with $r = \ell$ and $n\ge \ell$, is less than $\epsilon/2$. Then, let $t_0(\ell)$ be larger than $ t_{\textsc{mix}}(X_{B_\ell^{\pmb{+}}(v)}) \log (2/\epsilon)$, so that the sub-multiplicativity of $\max_{x_0}d_{\textsc{tv}}(x_0;t)$ in time (see, e.g.,~\cite{LP}) implies that, for all $t\ge t_0$, \begin{align*} P_t f_v(\pmb{+}) & \le \mathbb P(X_{B_\ell^{\pmb{+}}(v),t}^{\pmb{+}}(v) = +1) - \widehat \pi(\sigma_v = +1) \\ & \le |\widehat \pi(\sigma_v = +1) - \pi_{B_\ell^{\pmb{+}}(v)}(\sigma_v =+1)| + \|\mathbb P(X_{B_\ell^{\pmb{+}}(v),t}^{\pmb{+}}\in \cdot) - \pi_{B_\ell^{\pmb{+}}(v)}\|_{\textsc{tv}} <\epsilon\,. \end{align*} By Lemma~\ref{lem:recurrence-solution}, we obtain $P_t f_v(\pmb{+})\le a_n(t)\le Ce^{- t/C}$ for all $t\le n$, and $n$ sufficiently large. Combined with the earlier lower bound on $P_t f_v(\pmb{+})$, we obtain the desired exponential decay for $|P_t f_v(\pmb{+})|$.
\end{proof} \subsection{Proofs of Theorems~\ref{thm:Ising-torus-restatement} and \ref{thm:Ising-mixing-restricted-chain}}\label{subsec:fast-mixing-torus} To prove these theorems from Proposition~\ref{prop:relaxation-within-phase}, it remains to reduce the total variation distances between $\widehat X_t^{\pmb{+}}$ or $X_{t}^{\nu^{{\pmb{\pm}}}}$ and their respective stationary distributions to sums of the distances of their one-point marginals. Such a bound is not true in general, but can be established in our setting using the monotonicity over sub-exponential timescales implied by Claim~\ref{clm:monotonicity-relations} and Lemma~\ref{lem:zero-magnetization-hitting-time}. \begin{proof}[\textbf{\emph{Proof of Theorem~\ref{thm:Ising-mixing-restricted-chain}}}] Consider the total variation distance \begin{align*} \|\mathbb P(\widehat X_{t}^{\pmb{+}}\in \cdot) - \widehat \pi\|_{\textsc{tv}} = \|\mathbb P(\widehat X_{t}^{\pmb{+}}\in \cdot) - \mathbb P(\widehat X_{t}^{\widehat \pi}\in\cdot)\|_{\textsc{tv}}\,. \end{align*} Using the standard characterization of the variation distance as the probability of disagreement under an \emph{optimal} coupling, we can bound the above by the following probability of disagreement under the grand coupling: \begin{align*} \mathbb P\big(\widehat X_{t}^{\pmb{+}} \ne \widehat X_{t}^{\widehat \pi}\big) & \le \mathbb E\big[ \mathbf 1\{\widehat X_{t}^{\pmb{+}} \ne \widehat X_{t}^{\widehat \pi}\} \mathbf 1\{\widehat \tau^{\widehat \pi} > t\}\big] + \mathbb P(\widehat \tau^{\widehat \pi} \le t) \le \mathbb P\big(X_{t}^{\pmb{+}} \ne X_{t}^{\widehat \pi}\big) + \mathbb P(\widehat \tau^{\widehat \pi} \le t)\,, \end{align*} where in the second inequality we used Claim~\ref{clm:monotonicity-relations}. The second term on the right-hand side is at most $Ce^{ - n^{d-1}/C}$ by Lemma~\ref{lem:zero-magnetization-hitting-time}.
By monotonicity of the grand coupling, the first term is at most \begin{align*} \sum_{v\in \Lambda_n^p} \Big(\mathbb P(X_{t}^{\pmb{+}}(v) = +1) & - \mathbb P(X_{t}^{\widehat \pi}(v) = +1)\Big) \\ & \le \sum_{v\in \Lambda_n^p} \Big(\mathbb E[X_{t}^{\pmb{+}}(v)] - \big(1 -\mathbb E[(1-\widehat X_{t}^{\widehat \pi}(v))\mathbf 1\{\widehat\tau^{\widehat \pi} >t\}]\big) + \mathbb P(\widehat \tau^{\widehat \pi} \le t)\Big) \\ & \le \sum_{v\in \Lambda_n^p} \Big(\mathbb E[X_{t}^{\pmb{+}}(v)] - \widehat \pi[\sigma_v] + \mathbb P(\widehat \tau^{\widehat \pi} \le t) \Big), \end{align*} where in the first inequality we again used Claim~\ref{clm:monotonicity-relations} to switch from $X_{t}^{\widehat\pi}$ to the stationary chain $\widehat X_{t}^{\widehat \pi}$. Combining the above, and using Lemma~\ref{lem:zero-magnetization-hitting-time} again, we find that \begin{align*} \mathbb P\big(\widehat X_{t}^{\pmb{+}} \ne \widehat X_{t}^{\widehat \pi}\big) \le \sum_{v\in \Lambda_n^p} P_t f_v(\pmb{+}) + C (t \vee 1) n^d e^{ - n^{d-1}/C}\,, \end{align*} where we recall the definition of $P_t f_v$ from~\eqref{eq:Pt-def}--\eqref{eq:fv-def}. By Proposition~\ref{prop:relaxation-within-phase}, the right-hand side above is at most $Cn^d e^{ - t/C}$ for all $t\le n$, as desired. \end{proof} \noindent We now conclude the mixing time upper bound on $\mathbb T_n$ initialized in $\nu^{\pmb{\pm}}$, again assuming WSM within a phase. 
\begin{proof}[\textbf{\emph{Proof of Theorem~\ref{thm:Ising-torus-restatement}}}] Consider the total variation distance \begin{align*} \|\mathbb P(X_{t}^{\nu^{{\pmb{\pm}}}} \in \cdot) - \pi\|_{\textsc{tv}} & \le \big\|\frac 12 \mathbb P(X_{t}^{\pmb{+}}\in \cdot ) + \frac 12 \mathbb P(X_{t}^{\pmb{-}}\in \cdot) -\frac 12 \widehat \pi - \frac 12 \widecheck\pi \big\|_{\textsc{tv}} + \big\|\pi - \frac 12\widehat \pi - \frac 12 \widecheck \pi\big\|_{\textsc{tv}} \\ & \le \frac 12 \Big[ \|\mathbb P(X_{t}^{\pmb{+}}\in \cdot )- \widehat \pi\|_{\textsc{tv}} + \|\mathbb P(X_{t}^{\pmb{-}}\in \cdot ) - \widecheck \pi\|_{\textsc{tv}} \Big] + \pi(\partial \widehat \Omega) + \pi(\partial \widecheck \Omega)\,. \end{align*} The second inequality here used the triangle inequality for the first term, and the spin-flip symmetry of the Ising model, together with the definitions of $\widehat \Omega$ and $\widecheck \Omega$, for the second. The last two terms are bounded by $Ce^{ - n^{d-1}/C}$ by~\eqref{eq:surface-order-LDP}. The first two terms are symmetric, so we only consider one of them, aiming to show that it is at most $\epsilon/3$ for $t = O(\log N \log \epsilon^{-1})$. By the triangle inequality, we have \begin{align}\label{eqn:ajs4} \|\mathbb P(X_{t}^{\pmb{+}}\in \cdot) - \widehat \pi\|_{\textsc{tv}} \le \|\mathbb P(X_{t}^{\pmb{+}}\in \cdot) - \mathbb P(\widehat X_{t}^{\pmb{+}}\in \cdot)\|_{\textsc{tv}} + \|\mathbb P(\widehat X_{t}^{\pmb{+}}\in \cdot) - \widehat \pi\|_{\textsc{tv}}\,. \end{align} By Claim~\ref{clm:monotonicity-relations} and the grand coupling, the first term in~\eqref{eqn:ajs4} is at most $\mathbb P(\widehat\tau^{\pmb{+}}\le t) \le C(t\vee 1) e^{ - n^{d-1}/C}$. The second term is bounded by Theorem~\ref{thm:Ising-mixing-restricted-chain}. Altogether, we deduce the desired inequality~\eqref{eqn:torus-restmt-thm}. 
\end{proof} \section{WSM within a phase for the low-temperature Ising model}\label{sec:Ising-wsm-within-phase} In this section we prove Theorem~\ref{thm:Ising-wsm-within-phase-intro}, showing that WSM within a phase holds for the Ising model at all low temperatures in all dimensions. Recall that this is the only missing ingredient in the proof of the upper bound of our main result, Theorem~\ref{thm:Ising-torus-mixing}. To prove Theorem~\ref{thm:Ising-wsm-within-phase-intro}, we will construct an explicit coupling between $\sigma\sim \pi_{B_r^{\pmb{+}}(v)}$ and $\sigma'\sim \widehat \pi$ such that $\sigma$ and $\sigma'$ agree on $B_{r/2}(v)$ except with probability $Ce^{ - r/C}$. An analogous coupling for the corresponding random-cluster representations $\omega,\omega'$ is available due to recent results in~\cite{DCGR20}. We construct a ``good'' set~$\ensuremath{\mathcal E}$ of pairs of random-cluster configurations such that, if $(\omega,\omega')\in \ensuremath{\mathcal E}$, we can lift the coupling of $\omega,\omega'$ to a coupling of the Ising configurations $(\sigma,\sigma')$ on $B_{r/2}(v)$ via the \emph{Edwards--Sokal coupling}~\cite{ES}. We then use a powerful coarse-graining technique of~\cite{Pisztora96} to construct a coupling under which $(\omega,\omega')\in \ensuremath{\mathcal E}$ except with probability~$Ce^{ - r/C}$. Sections~\ref{subsec:random-cluster-rep} and~\ref{subsec:coarse-graining} describe the necessary background on the random-cluster representation and coarse-graining techniques, respectively. In Section~\ref{subsec:coupling-rc-configurations}, we use coarse-graining to couple the random-cluster configurations $\omega,\omega'$ and define a high-probability event~$\ensuremath{\mathcal E}$ for this coupling.
In Section~\ref{subsec:edwards-sokal-pi-hat}, we give a way to generate samples from the conditional measure $\widehat \pi$ using the random-cluster representation, and in Section~\ref{subsec:proof-of-wsm} we combine these ingredients to couple the Ising configurations corresponding to $\omega,\omega'$ on $B_{r/2}(v)$ on the event~$\ensuremath{\mathcal E}$. This results in the proof of Theorem~\ref{thm:Ising-wsm-within-phase-intro}. \subsection{The random-cluster representation}\label{subsec:random-cluster-rep} We first formally define the random-cluster representation of the Ising model, and the Edwards--Sokal coupling of the random-cluster model to the Ising model. For more details, and a discussion of the random-cluster model for non-integer~$q$, we refer the reader to~\cite{Grimmett}. For $p\in [0,1]$ and $q>0$, the random-cluster model at parameters $(p,q)$ on a (finite) graph $G = (V(G), E(G))$ is the distribution over edge-subsets $\omega \subset E(G)$, naturally identified with $\omega\in \{0,1\}^{E(G)}$ via $\omega(e) := 1$ if and only if $e\in \omega$, given by \begin{align}\label{eq:rcmeasure} \pi^{\textsc{rc}}_{G}(\omega) = \frac{1}{\mathcal Z^\textsc{rc}_{G,p,q}} p^{|\omega|} (1-p)^{|E(G)| - |\omega|} q^{|\mathsf{Comp}(\omega)|}\,, \end{align} where $|\mathsf{Comp}(\omega)|$ is the number of connected components (clusters) in the subgraph $(V(G),\omega)$. When $\omega(e) = 1$, we say that $e$ is \emph{wired} or \emph{open}, and when $\omega(e) = 0$, we say it is \emph{free} or \emph{closed}. If $x,y$ are in the same connected component of the subgraph $(V(G),\omega)$, we write $x\xleftrightarrow[]{\omega} y$. \subsubsection{Random-cluster boundary conditions} When $G$ is a subset of $\mathbb Z^d$, e.g., $G = \Lambda_m$, it will be important for us to introduce the notion of \emph{boundary conditions} for the random-cluster model. 
\begin{definition} A random-cluster \emph{boundary condition} $\xi$ on a subset $G \subset \mathbb Z^d$ is a partition of the (inner) boundary $\partial G = \{v\in G: d(v,\mathbb Z^d \setminus G) =1\}$ such that the vertices in each part of the partition are identified with one another. The random-cluster measure with boundary condition~$\xi$, denoted $\pi^{\xi}_{G,p,q}$, is the same as in~\eqref{eq:rcmeasure} except that $\mathsf{Comp}(\omega)$ is replaced by $\mathsf{Comp}(\omega;\xi)$, counted with this vertex identification. Alternatively, $\xi$ can be seen as introducing ghost ``wirings'' of vertices in the same part of the partition. \end{definition} The \emph{free} boundary condition, $\xi = \mathbf{0}$, is the one whose partition of $\partial G$ consists only of singletons. The \emph{wired} boundary condition on $\partial G$, denoted $\xi = \mathbf{1}$, is the one whose partition has all vertices of $\partial G$ in the same part. There is a natural stochastic order on boundary conditions, given by $\xi \le \xi'$ if $\xi$ is a refinement of~$\xi'$. The wired/free boundary conditions are then maximal/minimal under this order. Finally, a class of boundary conditions that will recur are those induced by a configuration on $\mathbb Z^d \setminus G$: given a random-cluster configuration $\eta$ on $\mathbb Z^d \setminus G$, the boundary condition it induces on $G$ is given by the partition in which $v,w\in \partial G$ are in the same part if and only if $v\xleftrightarrow[]{\eta} w$. \subsubsection*{Edwards--Sokal coupling} For $q=2$, there is a canonical way to couple the random-cluster model to the Ising model so that the random-cluster model encodes the correlation structure of the Ising model. (An analogous coupling holds for all integer~$q$, relating the random-cluster model to the Potts model.)
The ($q=2$) \emph{Edwards--Sokal} coupling $\pi^\textsc{es}_G$ is the probability distribution over spin-edge pairs $(\omega,\sigma)\in \{0,1\}^{E(G)} \times \{\pm 1\}^{V(G)}$ given by: \begin{enumerate} \item sampling a random-cluster configuration $\omega\sim \pi^\textsc{rc}_G$ at parameters $p=1-e^{ - \beta}$ and $q=2$; \item independently assigning (coloring) each connected component $\ensuremath{\mathcal C}$ of $\omega$ a random variable $\eta_\ensuremath{\mathcal C}$ which is $\pm1$ with probability $\frac 12$-$\frac 12$, and setting $\sigma_v = \eta_\ensuremath{\mathcal C}$ for every $v\in \ensuremath{\mathcal C}$. \end{enumerate} The marginal of this coupling on $\sigma$ then gives a sample from $\pi_G$ at inverse temperature~$\beta$. We will work with this coupling extensively, as it will give us a mechanism for boosting couplings of random-cluster configurations to couplings of Ising configurations. The coupling can also be used in the presence of Ising boundary conditions that are all $\pmb{+1}$ (symmetrically, all $\pmb{-1}$) as follows. In step (1) above, draw $\omega\sim \pi_{G^\mathbf{1}}^\textsc{rc}$, and in step (2) above, if $\ensuremath{\mathcal C}({\partial G})$ is the connected component containing the boundary $\partial G$, then deterministically set $\eta_{\ensuremath{\mathcal C}({\partial G})} = +1$ (symmetrically, $-1$). \subsubsection*{Random-cluster phase transition on $\mathbb Z^d$} The $q=2$ random-cluster model has a phase transition matching that of the Ising model on subsets of $\mathbb Z^d$. Namely, there exists $p_c(d) = 1-e^{ - \beta_c(d)}$ such that, when $p<p_c(d)$, the probability under $\pi^\textsc{rc}_{\mathbb T_m}$ that $v\stackrel{\omega}\leftrightarrow w$ decays exponentially in $d(v,w)$, while when $p>p_c(d)$, that probability stays uniformly (in $m$ and $v,w\in \mathbb T_m$) bounded away from zero.
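The coloring stage (step (2)) of the Edwards--Sokal coupling, including the forced coloring of the boundary component under wired boundary conditions, can be sketched as follows; this is a hypothetical Python fragment whose function name and interface are ours:

```python
import random

def es_coloring(components, plus_component=None, rng=random):
    """Step (2) of the Edwards--Sokal coupling: color each open cluster
    independently +1/-1 with probability 1/2 each.  If `plus_component`
    (e.g. the cluster of the wired boundary) is given, it is
    deterministically colored +1, modeling the plus-boundary variant."""
    sigma = {}
    for comp in components:
        if plus_component is not None and comp == plus_component:
            spin = +1
        else:
            spin = rng.choice([-1, +1])
        for v in comp:
            sigma[v] = spin  # spins are constant on each cluster
    return sigma
```

By construction, the resulting spin configuration is constant on every cluster, which is exactly how the random-cluster model encodes the Ising correlation structure.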
Pisztora~\cite{Pisztora96}, towards proving~\eqref{eq:surface-order-LDP}, obtained more refined information on the random-cluster model at $p>p_c$, showing that configurations typically have one macroscopically sized giant component, and all other components have exponential tails on their size. Recently, using inputs from the \emph{random-current representation} of the Ising model, the following weak spatial mixing property was established for the $q=2$ (Ising) random-cluster model in general dimension in~\cite{DCGR20}. \begin{theorem}[{\cite{DCGR20}}]\label{thm:rc-wsm} Let $d \ge 2$, $q=2$ and $p> p_c(2,d)$. Then \begin{align*} \|\pi_{\Lambda_n^\mathbf{1}}^\textsc{rc}(\omega(\Lambda_{n/2})\in \cdot) - \pi_{\Lambda_n^\mathbf{0}}^\textsc{rc}(\omega(\Lambda_{n/2}) \in \cdot)\|_{\textsc{tv}} \le Ce^{ - n/C}\,. \end{align*} \end{theorem} This result suggests, roughly, that the lack of spatial mixing in the Ising model at low temperatures comes solely from the second step in the Edwards--Sokal coupling, where the random-cluster components are assigned independent spins. Our Theorem~\ref{thm:Ising-wsm-within-phase-intro} is a formalization of this intuition. \subsection{Coarse graining of the random-cluster model}\label{subsec:coarse-graining} In this subsection, we describe the coarse-graining technique of~\cite{Pisztora96}, which will be a key tool in our proof of Theorem~\ref{thm:Ising-wsm-within-phase-intro}. Consider a tiling of $\mathbb T_m$ by blocks ($\ell_\infty$ balls of radius~$k$). The coarse-graining approach assigns to each block an indicator random variable according to some property of the random-cluster configuration in the block that serves as a signature of the low-temperature regime. By taking $k$ sufficiently large (depending on $p,d$), the probability that a block satisfies that property can be made arbitrarily close to~$1$. 
The resulting process can then be compared to a Bernoulli percolation process so that, when $k$ is sufficiently large, the set of blocks {\it not\/} satisfying the signature property form a sub-critical percolation process, with exponential tails. Thus, moving to blocks serves to {\it boost\/} the spatial prevalence of the signature property. \subsubsection{$k$-good blocks} For a fixed $k$, let $\Lambda_m^{(k)} = \Lambda_m \cap k\mathbb Z^d$, and tile $\mathbb T_m = \Lambda_m^p$ by overlapping blocks $(B_x)_{x\in \Lambda_m^{(k)}}$ given by $B_x = B_k(x)$, i.e., the $\ell_\infty$ balls of $\mathbb T_m$ of radius $k$, centered about $x\in \Lambda_{m}^{(k)}$, when the fundamental domain of $\mathbb T_m$ is identified with $\Lambda_m$. (Without loss of generality, we will assume that $m$ is a multiple of~$k$---it will be evident that any remainder issues could otherwise be handled easily.) The following will be our signature property for these blocks. \begin{figure} \centering \includegraphics[width = .4\textwidth]{bad-block-low-temp-2.png} \caption{A $k$-bad block of a low-temperature random-cluster configuration: even though its giant component (blue) intersects all boundary sides as desired, the configuration has a second distinct component (red) of size greater than $k$.} \label{fig:k-good-block} \end{figure} \begin{definition}\label{def:good-low-temp} When $p> p_c(d)$, a random-cluster configuration $\omega$ on a block $B_x$, is called $k$-\emph{good} if \begin{enumerate} \item There is at most one open cluster of size (number of vertices) at least $k$ in $\omega$. \item There exists an open cluster in $\omega$ intersecting all $2d$ sides of $\partial B_x$. \end{enumerate} Given $k$ and a random-cluster configuration $\omega$ on $\mathbb T_m$, we say $B_x$ is $k$-\emph{good} in $\omega$ if $\omega(B_x)$ is $k$-good. We call $\omega$ (resp., $B_x$) $k$-\emph{bad} if it is not $k$-good. See Figure~\ref{fig:k-good-block} for a visualization of a $k$-bad block. 
\end{definition} The definition of \emph{good} blocks in~\cite{Pisztora96} was actually stronger than the above (cf.~\cite[Theorem 3.1]{Pisztora96}); we borrow our definition from~\cite{DCGR20}. The results of~\cite{Pisztora96}, together with~\cite{Bodineau05}, directly imply the following. \begin{lemma}[{\cite{Pisztora96,Bodineau05}}]\label{lem:good-whp-low-temp} For all $d\ge 2$ and $p>p_c(d)$, there exists $C>0$ such that, for every $k$, \begin{align}\label{eqn:PBlemma} \inf_{\xi} \pi^\textsc{rc}_{B_{2k}^{\xi}(v)} (B_v\mbox{ is $k$-good}) \ge 1-Ce^{ - k/C}\,, \end{align} where the infimum is over boundary conditions $\xi$ on $B_{2k}(v)$. \end{lemma} \noindent Note that the infimum in~\eqref{eqn:PBlemma} is important because the event that $\omega(B_v)$ is $k$-good is {\it not\/} monotone. \subsubsection{The coarse-grained percolation process} The above notions of $k$-goodness can be used to define a coarse-graining of a configuration $\omega$ on $\mathbb T_m$, via the percolation process of $k$-\emph{good} blocks. \begin{definition}\label{def:k-good-coarse-graining} Fix $q\ge 1$, $d\ge 2$ and $p>p_c(d)$. For $k$ fixed, and a configuration $\omega$ on $\mathbb T_m$, define a {\it coarse-grained site percolation} $\eta(\omega) = \eta^{(k)}(\omega): \mathbb T_{m}^{(k)} \to \{0,1\}$ by setting the random variable $\eta_x(\omega)$ to be 1 (open) if $\omega(B_x)$ is $k$-good (according to Definition~\ref{def:good-low-temp}), and $0$ if it is $k$-bad. \end{definition} By~\cite[Theorem 1.3]{LSS97}, on a graph $G$ of degree at most $d$, for every $r$ and $\varepsilon>0$, there exists $\delta(r,\varepsilon,d)>0$ such that every site percolation process $\zeta: V(G)\to \{0,1\}$ having \begin{align*} \min_{v\in V(G)} \mathbb P(\zeta_v = 1\mid \{\zeta_w: d_G(w,v)\ge r\}) \ge 1-\delta \end{align*} stochastically dominates i.i.d.\ $\mbox{Ber}(1-\varepsilon)$ site percolation on $G$.
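The block-level check underlying the coarse-graining of Definition~\ref{def:k-good-coarse-graining} can be sketched in code: a toy Python implementation of conditions (1)--(2) of Definition~\ref{def:good-low-temp} for a two-dimensional block $B_k(0)=[-k,k]^2$ (helper names are ours; the configuration is passed as its list of open edges):

```python
from collections import defaultdict

def clusters(vertices, open_edges):
    """Connected components of the subgraph (vertices, open_edges), via DFS."""
    adj = defaultdict(list)
    for u, v in open_edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, comps = set(), []
    for s in vertices:
        if s in seen:
            continue
        stack, comp = [s], set()
        seen.add(s)
        while stack:
            x = stack.pop()
            comp.add(x)
            for y in adj[x]:
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        comps.append(comp)
    return comps

def is_k_good(k, vertices, open_edges):
    """Toy check of k-goodness for the 2D block B = [-k,k]^2:
    (1) at most one open cluster with >= k vertices, and
    (2) some open cluster meets all 4 sides of the boundary."""
    comps = clusters(vertices, open_edges)
    big = [c for c in comps if len(c) >= k]
    if len(big) > 1:
        return False
    sides = [lambda x, y: x == -k, lambda x, y: x == k,
             lambda x, y: y == -k, lambda x, y: y == k]
    return any(all(any(s(x, y) for (x, y) in c) for s in sides) for c in comps)
```

A cross-shaped open cluster through the center is $k$-good; adding a second disjoint cluster of size at least $k$ (as in Figure~\ref{fig:k-good-block}) violates condition (1).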
This implies that, by taking $k$ large enough, we can ensure that $\eta$ stochastically dominates an i.i.d.\ percolation process on $\mathbb T_m^{(k)}$ whose parameter can be taken as close to $1$ as we like. The result is in fact quantitative in its relation between $r, \varepsilon, d$ and $\delta$; in particular, both $\varepsilon$ and $\delta$ can be taken to be exponentially decaying in~$k$. Thus, we get the following sharper statement. \begin{corollary} \label{cor:stoch-domination} Fix $d\ge 2$ and $p>p_c(d)$. There exists $C$ such that, for every $k$, if $\omega \sim \pi_{\mathbb T_m}^\textsc{rc}$ and $(\tilde \eta_x)_{x\in \mathbb T_m^{(k)}}$ are i.i.d.\ $\mbox{Ber}(1-Ce^{ - k/C})$, we have the stochastic domination \begin{align*} \eta(\omega) \succeq \tilde \eta\qquad \mbox{ on }\qquad \mathbb T_m^{(k)}\,. \end{align*} By similar reasoning, for any $\xi$, if $\omega \sim \pi_{\Lambda_m^\xi}^{\textsc{rc}}$, we have \begin{align*} \eta(\omega) \succeq \widetilde \eta \qquad \mbox{ on }\qquad \Lambda_{m-2k}^{(k)}\,. \end{align*} \end{corollary} \subsubsection{Separating surfaces of good blocks} The percolation process of $k$-\emph{good} blocks, and in particular the sub-criticality of $k$-\emph{bad} blocks, controls the typical connectivity structure of the random-cluster model, as in its application in~\cite{Pisztora96}. Given suitable a priori inputs, it can moreover be used to couple configurations and obtain spatial mixing properties of the measure up to exact thresholds; this was done for the Ising model in~\cite{DCGR20}, using inputs from its random-current representation. Let us formalize what we mean by a separating surface. We begin by defining clusters of the coarse-grained percolation process, both on the original coarse-grained graph $\mathbb T_m^{(k)}$ as well as under a stronger form of adjacency, which we call $\star$-adjacency (allowing diagonal adjacencies), corresponding to $\ell_\infty$ distance.
\begin{definition}\label{def:k-adjacency} Call two vertices $x,y \in k\mathbb Z^d$ \emph{$k$-adjacent}, denoted $x\sim_{k} y$, if they are distance at most $k$ apart. Call two vertices $x,y\in k\mathbb Z^d$ \emph{$k$-$\star$-adjacent}, denoted $x\sim_{k}^\star y$, if they are at $\ell_\infty$ distance at most $k$. Observe that for $x\ne y$, we have $x\sim_k^\star y$ if and only if $B_k(x) \cap B_k(y)\ne \emptyset$. \end{definition} \begin{definition} Consider a site percolation $\zeta: \Lambda^{(k)} \to \{0,1\}$ for some $\Lambda^{(k)}\subset k\mathbb Z^d$. An {\it (open) $k$-cluster of $\zeta$} is a maximal $k$-connected (via $k$-adjacency) component of the vertices $\{v\in \Lambda^{(k)}: \zeta_v = 1\}$. An {\it (open) $k$-$\star$-cluster of $\zeta$} is a maximal $k$-$\star$-connected (via $k$-$\star$-adjacency) component of $\{v\in \Lambda^{(k)}: \zeta_v = 1\}$. The $k$-cluster containing a vertex~$v$ is denoted $\ensuremath{\mathcal C}_v(\zeta)$, and the $k$-$\star $-cluster containing~$v$ is denoted $\ensuremath{\mathcal C}_v^\star(\zeta)$. We write $x\xleftrightarrow[]{\zeta} y$ if $y\in \ensuremath{\mathcal C}_x(\zeta)$, and $x\xleftrightarrow[]{\zeta}_\star y$ if $y\in \ensuremath{\mathcal C}_x^\star(\zeta)$. We will often be interested in the clusters of the {\it complement\/} of $\zeta$, i.e., $\mathbf 1-\zeta$, defined pointwise as $(\mathbf 1-\zeta)_x = 1-\zeta_x$ for all $x$ in the domain of~$\zeta$. \end{definition} \begin{definition}\label{def:separating-surface} Consider a percolation $\zeta: \Lambda_m^{(k)} \to \{0,1\}$ and let $l<m-2k$. We say that $\zeta$ has an {\it open separating $k$-surface\/} in $\Lambda_m^{(k)}\setminus \Lambda_l^{(k)}$ if the following event occurs: $$\Big\{\zeta: \Lambda_{l+k}^{(k)}\xleftrightarrow[]{\mathbf{1}-\zeta}_\star \Lambda_{m-k}^{(k)}\Big\}^c\,,$$ i.e., there is no $(\mathbf{1} - \zeta)$-open $k$-$\star$-cluster intersecting both $\Lambda_{l+k}^{(k)}$ and $\Lambda_{m-k}^{(k)}$.
Any (minimal) set $\Gamma$ of open sites of $\Lambda_{m}^{(k)}$ in~$\zeta$ that serves as a witness to this event (i.e., the event holds no matter the values of $\zeta$ outside $\Gamma$) is called an \emph{open separating $k$-surface}. \end{definition} \begin{remark}\label{rem:outermost-separating-surface} Observe that for any fixed set of vertices $\partial \Lambda_m^{(k)} \subset A^{(k)} \subset \Lambda_m^{(k)}$, if we let $D = \bigcup_{v\in A^{(k)}} \ensuremath{\mathcal C}^\star_{v}(\zeta)$ denote its open $\star$-component(s) in $\zeta$, then if $D \cap \Lambda_{l+k}^{(k)} = \emptyset$, the outer $\star$-boundary $$\partial_{\textrm{out}} D = \{u \in \Lambda_m^{(k)} \setminus D : u\sim^{k}_\star D\}$$ of $D$ in $\Lambda_m^{(k)}$ yields an open separating $k$-surface of $\zeta$ in $\Lambda_m^{(k)}\setminus \Lambda_l^{(k)}$. \end{remark} One observes~\cite{DeuschelPisztora96} that if $\Gamma_k \subset \Lambda_m^{(k)} \setminus \Lambda_l^{(k)}$ is an open separating $k$-surface, it is a connected set of open sites of~$\zeta$ such that any $k$-$\star$-connected path from $\Lambda_l^{(k)}$ to $\Lambda_m^{(k)}$ intersects $\Gamma_k$. Thus, $\Gamma_k$ necessarily splits $k\mathbb Z^d \setminus \Gamma_k$ into exactly one infinite $\star$-connected component and some finite ones: call the infinite one $\mathsf{Ext}(\Gamma_k)$ and let $\mathsf{Int}(\Gamma_k) = \Lambda_m^{(k)}\setminus (\Gamma_k \cup \mathsf{Ext}(\Gamma_k))$. \begin{figure} \centering \includegraphics[width = .4\textwidth]{Separating-surface-config.pdf} \qquad \qquad \includegraphics[width = .402\textwidth]{Separating-surface-itself-2.pdf} \caption{Left: a random-cluster configuration at $p>p_c(q,d)$ with its coarse graining; the coarse graining features a separating surface $\Gamma_k$ (cyan). 
Right: The separating surface $\Gamma_k$, with its corresponding blocks $\Gamma$, is shown in blue; the distribution in ${\mathsf{Int}}(\Gamma)$ (shaded green) is conditionally independent of the distribution in ${\mathsf{Ext}}(\Gamma)\setminus \Gamma$ (white).} \label{fig:separating-surface} \end{figure} \begin{lemma}\label{lem:separating-surface-disconnects-information} Let $\eta(\omega)$ be as in Definition~\ref{def:k-good-coarse-graining}. Consider a random-cluster configuration $\omega$ in $\mathbb Z^d$ and let $\Gamma_k(\omega)$ be any open separating $k$-surface of $\eta(\omega)$. If $$\Gamma = \bigcup_{x\in \Gamma_k} B_x \qquad \mbox{and}\qquad \mathsf{Ext}(\Gamma) = \bigcup_{x\in \mathsf{Ext}(\Gamma_k)}B_x\,,$$ then the boundary conditions induced by $\omega(\Gamma \cup \mathsf{Ext}(\Gamma))$ on $\Lambda_m \setminus (\Gamma \cup \mathsf{Ext}(\Gamma))$ depend only on $\omega(\Gamma)$. \end{lemma} We refer the reader to Figure~\ref{fig:separating-surface} for a visualization of the above. An analog of Lemma~\ref{lem:separating-surface-disconnects-information} was established in~\cite[Section 3.2]{DCGR20}. For completeness, we include a proof in Appendix~\ref{app:deferred-proofs}. Before proceeding, however, we note a key observation that is central to this low-temperature coarse-graining and to the above lemma, and which we will also appeal to later. \begin{observation}\label{obs:connected-component-good-blocks} Suppose $A_k$ is a $k$-connected open set of $\Lambda_n^{(k)}$ in $\eta^{(k)}(\omega)$, and let $A = \bigcup_{x\in A_k} B_x$. Then $\omega(A)$ has exactly one component of size greater than or equal to $k$.
\end{observation} \subsection{Coupling random-cluster configurations via coarse-graining}\label{subsec:coupling-rc-configurations} A key tool in our proof of WSM within a phase for the Ising model up to its critical point is a coupling of random-cluster configurations $\omega(\Lambda_{r/2}),\omega'(\Lambda_{r/2})$ that are drawn from distinct random-cluster measures, e.g., $\pi^{\textsc{rc}}_{\Lambda_r^\mathbf{1}}$ and $\pi^\textsc{rc}_{\Lambda_n^p}$. Our aim in this subsection is to also couple the boundary conditions induced by $\omega(\Lambda_r \setminus \Lambda_{r/2})$ and $\omega'(\Lambda_n \setminus \Lambda_{r/2})$ on $\Lambda_{r/2}$, as well as, respectively, the component of $\omega$ intersecting $\partial \Lambda_r$ and the largest component of $\omega'$. This stronger coupling will be central to our ability to couple the corresponding Ising configurations from $\pi_{B_r^{\pmb{+}}(v)}$ and $\widehat \pi$. To this end, we define a different coarse-graining which applies to pairs of random-cluster configurations. This coarse-graining is based on one found in~\cite{DCGR20} and used for proving Theorem~\ref{thm:rc-wsm}. \begin{definition} For random-cluster configurations $(\omega,\omega')$, we say that a block $B_v$ is $k$-\emph{very good} if $\omega(B_v) = \omega'(B_v)$ and that common configuration is $k$-good, per Definition~\ref{def:good-low-temp}. Define the $k$-\emph{very-good coarse graining}, $\zeta^{(k)}(\omega, \omega'): \Lambda_n^{(k)} \to \{0,1\}$ as taking the value $1$ at $v$ if $B_v$ is $k$-very good, and $0$ otherwise. \end{definition} \begin{definition} Let $\ensuremath{\mathcal E}_{\textsc{vg}}$ be the event that the very good coarse-graining $\zeta^{(k)}(\omega,\omega')$ has an open separating $k$-surface (of $k$-very good blocks) in the annulus $\Lambda_{m}\setminus \Lambda_{m/2}$. \end{definition} The main result of this subsection is the following; it is very similar to Lemma 3.3 of~\cite{DCGR20}.
\begin{lemma}\label{lem:E-very-good-probability} Suppose $k$ is sufficiently large. Then there is a monotone coupling $\mathbb P$ such that for any $\xi,\xi'$, if $\omega\sim \pi^{\textsc{rc}}_{\Lambda_m^{\xi}}$ and $\omega' \sim \pi^{\textsc{rc}}_{\Lambda_m^{\xi'}}$, \begin{align*} \mathbb P((\omega,\omega') \notin \ensuremath{\mathcal E}_{\textsc{vg}}) \le Ce^{ - m/C}\,, \end{align*} and such that, on the event $\ensuremath{\mathcal E}_{\textsc{vg}}$, if $\Gamma_k$ denotes the outermost separating surface of $\zeta^{(k)}(\omega,\omega')$, \begin{enumerate} \item The boundary conditions induced on ${\mathsf{Int}}(\Gamma)$ by $\omega(E(\Lambda_m^\xi) \setminus {\mathsf{Int}}(\Gamma))$ are identical to those induced by $\omega'(E(\Lambda_m^{\xi'}) \setminus {\mathsf{Int}}(\Gamma))$; and \item $\omega({\mathsf{Int}}(\Gamma)) = \omega'({\mathsf{Int}}(\Gamma))$. \end{enumerate} \end{lemma} The proof of this lemma is very similar to that of~\cite[Lemma 3.3]{DCGR20}, where it was used to establish Theorem~\ref{thm:rc-wsm}. Due to this similarity, we defer the proof to Appendix~\ref{app:deferred-proofs}, including it there for completeness. Intuitively, the coupling goes by revealing, from the outside in, the components of $k$-very bad blocks, so that when the revealing process terminates one has exposed only the outermost separating surface of $k$-very good blocks and nothing inside it. Lemma~\ref{lem:separating-surface-disconnects-information} then ensures that the two configurations can be drawn from the identity coupling inside the separating surface of $k$-very good blocks.
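The separating-surface event itself is a simple reachability computation on closed sites under $\star$-adjacency. The following two-dimensional Python sketch (our own simplification, with the $k$-rescaling suppressed and all names ours) illustrates the witness check of Definition~\ref{def:separating-surface}:

```python
from collections import deque

def has_separating_surface(zeta, inner, outer):
    """Toy 2D version of the separating-surface event: zeta maps sites to
    0/1; return True iff no *-connected path (8-adjacency) of closed
    (zeta == 0) sites joins the `inner` set to the `outer` set."""
    bad = {s for s, v in zeta.items() if v == 0}
    seen = set(s for s in inner if s in bad)
    queue = deque(seen)
    star_nbrs = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                 if (dx, dy) != (0, 0)]
    while queue:
        (x, y) = queue.popleft()
        if (x, y) in outer:
            return False          # a closed *-path reaches the outside
        for dx, dy in star_nbrs:
            nxt = (x + dx, y + dy)
            if nxt in bad and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True
```

When the check succeeds, the open sites blocking every closed $\star$-path play the role of the surface $\Gamma_k$ above.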
\subsubsection{Using coarse-graining to match the giant components} While the very good separating surface is sufficient for coupling the component structure of the random-cluster configurations $(\omega(\Lambda_{r/2}), \omega'(\Lambda_{r/2}))$, we also need to couple their corresponding Ising configurations, including in the presence of (a) $\pmb{+1}$ boundary conditions or (b) conditioning of the form of $\widehat \pi$. This requires a more refined understanding of the component structures of the random-cluster measures, which is the purpose of this section. We define two different events for the random-cluster measure on $\Lambda_m$, used together in the proof of Theorem~\ref{thm:Ising-wsm-within-phase-intro}. \begin{definition}\label{def:E-m-A} For $A\subset \Lambda_{m/2}$, define $\ensuremath{\mathcal E}_{m,A}$ as the set of $\omega$ such that the $k$-good coarse-graining $\eta^{(k)}(\omega)$ has a $k$-open path connecting $\partial A^{(k)}$ to $\partial \Lambda_{m-k}^{(k)}$, and the largest open cluster in that path is connected to $\partial \Lambda_m$ in $\omega$. \end{definition} While the above event avoided discussing the coarse-graining within distance $k$ of the boundary $\partial \Lambda_m$, the next event is to be viewed for coarse-grainings of the torus $\Lambda_m^p$; therefore we note that the notions of cluster and adjacency of blocks are to be viewed as being on the coarse-graining $\mathbb T^{(k)}_m$. \begin{definition}\label{def:E-m-p} Fix $\theta$. For $A \subset \Lambda_{m/2}$, let $\ensuremath{\mathcal E}_{m,A}^{\theta}$ be the set of $\omega$ such that \begin{enumerate} \item There is at most one cluster of $k$-good blocks of $\omega$ having size greater than $m/4k$. \item The largest cluster of $k$-good blocks has size at least $\theta |\Lambda_m^{(k)}|$, and contains a $k$-good path connecting $\partial A^{(k)}$ to $\partial \Lambda_{m-k}^{(k)}$. 
\end{enumerate} We also write $\ensuremath{\mathcal E}_{m}^\theta$ to denote $\ensuremath{\mathcal E}_{m,A}^{\theta}$ with the last requirement of a $k$-good path connecting $\partial A^{(k)}$ to $\partial \Lambda_{m-k}^{(k)}$ omitted. Thus for all $A$, we have $\ensuremath{\mathcal E}_{m,A}^\theta\subset \ensuremath{\mathcal E}_{m}^\theta$. \end{definition} \subsubsection{Bounding the probability of the $\ensuremath{\mathcal E}_{m,A},\ensuremath{\mathcal E}_{m,A}^{\theta}$ events} The goal of this subsection is to prove the following pair of results for $k$ sufficiently large. \begin{proposition}\label{prop:E-m-probability} Let $d\ge 2$, $p>p_c(d)$, and $v= (0,\dots,0)$. For $k$ sufficiently large, uniformly over all boundary conditions $\xi$ and all $r\le m$, we have \begin{align*} \pi_{\Lambda_m^\xi}^\textsc{rc} \big((\ensuremath{\mathcal E}_{m,B_{r/2}(v)})^c\big) \le Ce^{ - r/C}\,. \end{align*} \end{proposition} \begin{proposition}\label{prop:E-m-theta-probability} Let $d\ge 2$, $p>p_c(d)$, and $v= (0,\dots,0)$. For all $k$ sufficiently large, there exists $\theta_0(k,p,d)$ such that for every $\theta<\theta_0$, for all $r\le m$, \begin{align*} \pi_{\Lambda_m^p}^\textsc{rc}\big((\ensuremath{\mathcal E}_{m,B_{r/2}(v)}^{\theta})^c\big) \le Ce^{ - r/C}\,. \end{align*} \end{proposition} \begin{proof}[\textbf{\emph{Proof of Proposition~\ref{prop:E-m-probability}}}] Fix $v,r$ and for ease of notation, let $B= B_{r/2}(v)$. Consider the $k$-good coarse-graining $\eta^{(k)}(\omega)$, and recall from Corollary~\ref{cor:stoch-domination} that it stochastically dominates the $\mbox{Ber}(1-Ce^{ - k/C})$ independent percolation process $\tilde \eta$ on $\Lambda_{m-2k}^{(k)}$.
Let us consider the event $\tilde \ensuremath{\mathcal E}_{m,B}$ that a coarse-grained percolation~$\eta$ has an open component $\ensuremath{\mathcal C}_1(\eta)$ connecting $\partial B^{(k)}$ to $\partial \Lambda_{m-2k}^{(k)}$ and moreover, that component satisfies \begin{align*} |\ensuremath{\mathcal C}_1(\eta) \cap \partial \Lambda_{m-2k}^{(k)}| \ge |\partial \Lambda_{m-2k}^{(k)}|/2\,. \end{align*} Observe that $\tilde \ensuremath{\mathcal E}_{m,B}$ is increasing, and the event $\{\eta^{(k)}(\omega)\in \tilde \ensuremath{\mathcal E}_{m,B}\}$ is measurable with respect to $\omega(\Lambda_{m-k})$. By its increasing nature and the above stochastic domination, \begin{align*} \pi_{\Lambda_m^{\xi}}^{\textsc{rc}}(\eta^{(k)}(\omega)\in \tilde \ensuremath{\mathcal E}_{m,B}^c) \le \mathbb P(\tilde \eta \in \tilde \ensuremath{\mathcal E}_{m,B}^c)\,. \end{align*} Then by standard facts about Bernoulli percolation, $\widetilde\eta$ belongs to $\tilde\ensuremath{\mathcal E}_{m,B}$ with probability $1-Ce^{ - r^{d-1}/C}$ as long as $k$ is sufficiently large, so that it is sufficiently super-critical. (This can be seen by looking at the process $\mathbf{1} - \tilde \eta$, and noticing that in order for the complement of the above event to occur, $\mathbf{1} - \tilde \eta$ needs a $\star$-component of size at least proportional to $|\partial B^{(k)}|$.) Thus we have \begin{align}\label{eqn:ajs5} \pi_{\Lambda_m^\xi}^\textsc{rc}\big(\ensuremath{\mathcal E}_{m,B}^c\big) \le Ce^{ - r^{d-1}/C} + \max_{\omega(\Lambda_{m-k}):\, \eta^{(k)}(\omega) \in \tilde\ensuremath{\mathcal E}_{m,B}} \pi^\textsc{rc}_{\Lambda_m^{\xi}}\big(\ensuremath{\mathcal E}_{m,B}^c \mid \omega(\Lambda_{m-k})\big)\,. \end{align} Now fix any configuration $\omega(\Lambda_{m-k})$ with $\eta^{(k)}(\omega) \in \tilde \ensuremath{\mathcal E}_{m,B}$. We consider the probability that there is a connection in $\omega$ between the largest component of the $k$-good path from $\partial B^{(k)}$ to $\partial \Lambda_{m-2k}^{(k)}$ and the boundary $\partial \Lambda_m$.
Notice that since $\eta^{(k)}(\omega)\in \tilde\ensuremath{\mathcal E}_{m,B}$, there are at least $|\partial \Lambda_{m-2k}^{(k)}|/(4d)$ disjoint blocks $(B_{x_i})$ for $x_i \in \partial \Lambda_{m-2k}^{(k)}$ that are $k$-good in $\eta^{(k)}(\omega)$ and are part of $\ensuremath{\mathcal C}_1 (\eta^{(k)}(\omega))$. For each of these blocks $B_{x_i}$, there is a vertex $v_i \in \partial \Lambda_{m-k}$ contained in the largest component of $\omega(B_{x_i})$. Now notice that there is a collection $(\gamma_i)$ of edge-disjoint paths connecting $v_i$ to $\partial \Lambda_m$, and each of these paths has length $k$. The probability that a fixed such path is open is at least $c^k$ for $c(p)>0$, independently of the configuration on $E(\Lambda_m)\setminus \gamma_i$, so bounding the probability that none of them is open gives: \begin{align*} \max_{\omega(\Lambda_{m-k}):\, \eta^{(k)}(\omega)\in \tilde\ensuremath{\mathcal E}_{m,B}} \pi^\textsc{rc}_{\Lambda_m^{\xi}} (\ensuremath{\mathcal E}_{m,B}^c \mid \omega(\Lambda_{m-k})) \le \mathbb P\big(\mbox{Bin}(|\partial \Lambda_{m-2k}^{(k)}|/(4d), c^k) = 0\big) \le Ce^{ - m^{d-1}/C}\,. \end{align*} Plugging this bound into~\eqref{eqn:ajs5} then gives the desired bound on the probability of $\ensuremath{\mathcal E}_{m,B}^c$. \end{proof} \begin{proof}[\textbf{\emph{Proof of Proposition~\ref{prop:E-m-theta-probability}}}] Fix $v,r$ and let $B = B_{r/2}(v)$. Fix $\theta<\theta_0(k,p,d)$ sufficiently small, to be chosen later. For ease of notation, let $\ensuremath{\mathcal E}_m = \ensuremath{\mathcal E}_{m,B}^\theta$, and write $\ensuremath{\mathcal E}_{m} = \ensuremath{\mathcal E}_{m,1}\cap \ensuremath{\mathcal E}_{m,2}$, where the indices $1$ and $2$ correspond to the two items in Definition~\ref{def:E-m-p}. Clearly we have \begin{align*} \pi_{\Lambda_m^p}^{\textsc{rc}} (\mathcal E_m^c) \le \pi_{\Lambda_m^p}^{\textsc{rc}}(\ensuremath{\mathcal E}_{m,1}^c) + \pi_{\Lambda_m^p}^{\textsc{rc}}(\ensuremath{\mathcal E}_{m,2}^c)\,.
\end{align*} Let us begin by bounding the probability of $\ensuremath{\mathcal E}_{m,1}^c$. By Corollary~\ref{cor:stoch-domination}, $\mathbf{1} - \eta^{(k)}(\omega) \preceq \mathbf{1} - \widetilde \eta$ on $\mathbb T_m = \Lambda_m^p$, where $\widetilde \eta$ is a $\mbox{Ber}(1-Ce^{ - k/C})$ percolation process on $\mathbb T_m^{(k)}$. If $\eta^{(k)}(\omega)$ has more than one $k$-cluster of good blocks of size greater than $m/4k$, then $\mathbf{1} - \eta^{(k)}(\omega)$ must have a $k$-$\star$-cluster (of $k$-bad blocks) of size at least $m/4k$. The probability of that event is in turn less than that of $\mathbf{1} - \widetilde \eta$ having a $k$-$\star$-cluster of size at least $m/4k$; but the percolation parameter of $\mathbf{1} - \widetilde \eta$ is $Ce^{- k/C}$, which when $k$ is large enough is sub-critical on $\mathbb T_m^{(k)}$ endowed with $\star$-adjacency. In particular, this has probability at most $C\exp( - m/C)$. Next we turn to bounding the probability of $\ensuremath{\mathcal E}_{m,2}^c$. The probability of the giant component not connecting $\partial B^{(k)}$ to $\partial \Lambda_{m-k}^{(k)}$ is bounded as in the proof of Proposition~\ref{prop:E-m-probability} by $Ce^{ - r^{d-1}/C}$. It remains to bound the probability of that giant component not having density at least $\theta$ in $\Lambda_m^{(k)}$. By standard properties of super-critical Bernoulli percolation, there exists $\theta_0(k)>0$ (going to $1$ as $k\to \infty$) such that with probability $1-Ce^{ - m^{d-1}/C}$, as long as $k$ is sufficiently large, the process $\widetilde \eta$ has a connected component of size at least $\theta_0 |\Lambda_m^{(k)}|$. This also implies the desired property for $\eta^{(k)}(\omega)$ by Corollary~\ref{cor:stoch-domination}.
\end{proof} \subsection{Edwards--Sokal representation of the Ising measure in the plus phase}\label{subsec:edwards-sokal-pi-hat} The proof of Theorem~\ref{thm:Ising-wsm-within-phase-intro} will couple $\widehat \pi$ to $\pi_{B_{r}^{\pmb{+}}(v)}$ via an intermediate distribution $\widetilde \pi^\textsc{es}$ defined as follows. This latter measure's coloring stage will be more tractable to couple to that of $\pi_{B_r^{\pmb{+}}(v)}^\textsc{es}$ later. \begin{definition} Let $\widetilde \pi^\textsc{es}_{\Lambda_n^p}$ be the joint distribution $(\widetilde \omega, \widetilde \sigma)$, obtained by \begin{enumerate} \item Sampling $\widetilde \omega \sim \pi_{\Lambda_n^p}^{\textsc{rc}}$. \item Sampling $\widetilde \sigma$ by coloring the largest connected component of $\widetilde \omega$ with $+1$ (breaking ties according to some fixed ordering over all connected components), and coloring all other components independently and uniformly over $\{\pm 1\}$. \end{enumerate} \end{definition} \begin{remark}\label{rem:pi-hat-edwards-sokal} We can also extend the distribution $\widehat \pi$ to a joint distribution over Ising and random-cluster configurations. When $|\Lambda_n|$ is odd, it is clear that $\widehat \pi$ can be sampled from by first drawing a sample $\widehat \omega\sim \pi_{\Lambda_n^p}^\textsc{rc}$ then coloring its connected components independently, uniformly over $\{\pm 1\}$, conditional on the magnetization being positive (because, irrespective of $\widehat \omega$, this has probability exactly $\frac 12$). When $|\Lambda_n|$ is even, we claim that the marginal of this distribution is within $Ce^{ - n^{d-1}/C}$ of $\widehat \pi$: indeed this is evident because it gives the correct relative weights to all configurations with non-zero magnetization, and the entire mass of zero magnetization (i.e., $\widehat \Omega \cap \widecheck \Omega$) is at most $Ce^{ - n^{d-1}/C}$ by~\eqref{eq:surface-order-LDP}. 
As this error is negligible everywhere, we abuse notation and simply denote this joint distribution by $\widehat \pi^\textsc{es}_{\Lambda_n^p}$. \end{remark} The main result of this section will be the following. \begin{proposition}\label{prop:tilde-hat-comparison} Fix $d\ge 2$ and let $\beta>\beta_c(d)$. Then, \begin{align*} \|\widetilde \pi_{\Lambda_n^p} - \widehat \pi_{\Lambda_n^p}\|_{\textsc{tv}} \le Ce^{ - n/C}\,. \end{align*} \end{proposition} The main ingredient in the proof of Proposition~\ref{prop:tilde-hat-comparison} is the following coupling of their coloring stages when the random-cluster draws are in $\ensuremath{\mathcal E}_{n}^{\theta}$. \begin{lemma}\label{lem:tilde-hat-ising-coupling} Consider any configuration $\omega$ in $\ensuremath{\mathcal E}_{n}^\theta$. Then \begin{align*} \|\widetilde \pi_{\Lambda_n^p}^\textsc{es} (\widetilde \sigma \in \cdot \mid \widetilde \omega = \omega) - \widehat\pi_{\Lambda_n^p}^\textsc{es} (\widehat \sigma \in \cdot \mid \widehat \omega = \omega)\|_{\textsc{tv}} \le Ce^{ - n/C}\,. \end{align*} \end{lemma} \begin{proof} Fix any $\omega \in \ensuremath{\mathcal E}_{n}^\theta$ and consider any event $A$ for the Ising configuration. Let $\widetilde \Omega = \widetilde \Omega(\omega)$ be the event that the largest component of $\omega$ is colored $+1$; call $\widehat \Omega(\omega)$ the event that the magnetization after the coloring step is (strictly) positive.
Consider \begin{align*} \widetilde \pi^\textsc{es}_{\Lambda_n^p} (A \mid \omega) - \widehat\pi^\textsc{es}_{\Lambda_n^p} (A \mid \omega) & = \frac{\pi^\textsc{es}_{\Lambda_n^p} (A, \widetilde \Omega(\omega) \mid \omega)}{\pi^\textsc{es}_{\Lambda_n^p}(\widetilde \Omega(\omega)\mid \omega)} - \frac{\pi^\textsc{es}_{\Lambda_n^p} (A , \widehat \Omega(\omega) \mid \omega)}{\pi_{\Lambda_n^p}^\textsc{es}(\widehat \Omega(\omega) \mid \omega)} \\ & \le 2 \big(\pi_{\Lambda_n^p}^\textsc{es}(A, \widetilde \Omega(\omega) \mid \omega) - \pi_{\Lambda_n^p}^\textsc{es} (A , \widehat \Omega(\omega) \mid \omega)\big) + \big| \pi_{\Lambda_n^p}^\textsc{es}(\widehat \Omega(\omega) \mid \omega) - \frac{1}{2}\big|\,. \end{align*} The difference of the two probabilities in the first term can be bounded as \begin{align*} \pi^\textsc{es}_{\Lambda_n^p}(\widetilde \Omega^c(\omega) ,\widehat \Omega(\omega) \mid \omega) + \pi^\textsc{es}_{\Lambda_n^p}(\widetilde \Omega(\omega),\widehat \Omega^c(\omega) \mid \omega) \le \pi^\textsc{es}_{\Lambda_n^p}(\widehat \Omega(\omega) \mid \widetilde \Omega^c(\omega),\omega) + \pi^\textsc{es}_{\Lambda_n^p}(\widehat \Omega^c(\omega) \mid \widetilde \Omega(\omega),\omega)\,. \end{align*} Since $\pi^\textsc{es}_{\Lambda_n^p}(\widetilde \Omega(\omega)\mid \omega) = \frac 12$ exactly, the difference $\pi_{\Lambda_n^p}^\textsc{es}(\widehat \Omega(\omega) \mid \omega) - \frac 12$ is also bounded in absolute value by \begin{align*} \big|\pi^\textsc{es}_{\Lambda_n^p}(\widehat \Omega(\omega) \mid \omega) - \pi^\textsc{es}_{\Lambda_n^p}(\widetilde \Omega(\omega) \mid \omega)\big| \le \pi^\textsc{es}_{\Lambda_n^p}(\widetilde \Omega^c(\omega) ,\widehat \Omega(\omega) \mid \omega) + \pi^\textsc{es}_{\Lambda_n^p}(\widetilde \Omega(\omega),\widehat \Omega^c(\omega) \mid \omega)\,. \end{align*} Thus it suffices to control these two quantities. They are shown to be small by identical arguments so let us just consider the latter, where we are conditioning on the largest component in $\omega$ being colored $+1$, and considering the probability on that event that the magnetization is non-positive.
Let $\ensuremath{\mathcal C}_1,\ensuremath{\mathcal C}_2,\dots$ be the components of $\omega$ ordered in decreasing size, so that $|\ensuremath{\mathcal C}_1|\ge \theta k^{1-d} |\Lambda_n|$, and $|\ensuremath{\mathcal C}_j|\le k^{d-1} n$ for all $j\ne 1$. Here, the additional factors of $k^{1-d}$ and $k^d$ come from the facts that the minimum volume of the largest component of $\omega(B_x)$ for a block $B_x$ is $k$, while the maximum volume of any component of $\omega(B_x)$ is $k^d$. Then since these clusters are colored independently, the probability that the magnetization is non-positive is at most \begin{align*} \pi_{\Lambda_n^p}^\textsc{es} \big(M(\sigma)\le 0 \mid \widetilde \Omega(\omega), \omega\big)\le \pi_{\Lambda_n^p}^\textsc{es}\Big(M\Big(\sigma\Big(\bigcup_{j\ge 2} \ensuremath{\mathcal C}_j\Big)\Big) < - \theta k^{1-d} |\Lambda_n| \,\, \Big\vert \,\, \omega\Big)\,. \end{align*} Notice that this magnetization can be expressed as the sum of independent random variables $Z_j$ that take values $\pm |\ensuremath{\mathcal C}_j|$ with probability $\frac 12$ each. By Hoeffding's inequality, this is at most \begin{align*} \mathbb P\Big(\sum_j Z_j < -\theta k^{1-d} |\Lambda_n|/2\Big) \le \exp \Big( - \frac{\theta^2 k^{2-2d} |\Lambda_n|^2}{n^d |\ensuremath{\mathcal C}_2|}\Big) \le C\exp( - \theta^2 n^{d-1}/C)\,, \end{align*} which for $d\ge 2$ is at most $C\exp( - n/C)$.
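In this step, the magnetization of the small clusters is a sum of independent symmetric increments $\pm|\ensuremath{\mathcal C}_j|$, whose lower tail Hoeffding's inequality bounds by $\exp(-t^2/(2\sum_j |\ensuremath{\mathcal C}_j|^2))$. A minimal numerical sketch of this step (the cluster sizes below are illustrative placeholders, not samples from the random-cluster measure):

```python
import math
import random

def hoeffding_tail(sizes, t):
    # Hoeffding bound: P(sum_j Z_j < -t) <= exp(-t^2 / (2 sum_j |C_j|^2))
    # for independent Z_j = +/-|C_j| with probability 1/2 each.
    return math.exp(-t * t / (2.0 * sum(c * c for c in sizes)))

def empirical_tail(sizes, t, trials=5000, seed=0):
    # Monte Carlo estimate of the same tail under independent fair colorings.
    rng = random.Random(seed)
    hits = sum(1 for _ in range(trials)
               if sum(c if rng.random() < 0.5 else -c for c in sizes) < -t)
    return hits / trials

sizes = [30] * 10 + [5] * 200   # illustrative sizes |C_2| >= |C_3| >= ...
t = 300.0
bound = hoeffding_tail(sizes, t)
emp = empirical_tail(sizes, t)
```

The simulated tail sits comfortably below the Hoeffding bound; in the proof the bound is applied with threshold of order $\theta k^{1-d}|\Lambda_n|$, and the variance proxy $\sum_{j\ge 2}|\ensuremath{\mathcal C}_j|^2 \le |\ensuremath{\mathcal C}_2|\,|\Lambda_n|$ is how the denominator $n^d|\ensuremath{\mathcal C}_2|$ arises.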
\end{proof} \begin{proof}[\textbf{\emph{Proof of Proposition~\ref{prop:tilde-hat-comparison}}}] By using the identity coupling for the random-cluster draws from $\pi_{\Lambda_n^p}^\textsc{rc}$, one easily obtains the bound \begin{align*} \|\widetilde \pi_{\Lambda_n^p} - \widehat \pi_{\Lambda_n^p}\|_{\textsc{tv}} \le \pi_{\Lambda_n^p}^{\textsc{rc}}\big((\ensuremath{\mathcal E}_{n,\Lambda_{n/2}}^\theta)^c\big) + \max_{\omega\in \ensuremath{\mathcal E}_{n,B_{r/2}(v)}^\theta} \|\widetilde \pi_{\Lambda_n^p}^\textsc{es} (\widetilde \sigma \in \cdot \mid \widetilde \omega = \omega) - \widehat\pi_{\Lambda_n^p}^\textsc{es} (\widehat \sigma \in \cdot \mid \widehat \omega = \omega)\|_{\textsc{tv}}\,. \end{align*} The desired bound then follows from Proposition~\ref{prop:E-m-theta-probability} together with Lemma~\ref{lem:tilde-hat-ising-coupling}. \end{proof} \subsection{Proof of WSM within a phase}\label{subsec:proof-of-wsm} We prove Theorem~\ref{thm:Ising-wsm-within-phase-intro} using a combination of the above properties for the good and very good coarse-grained percolation processes for $\omega \sim \pi_{B_r^\mathbf{1}(v)}^\textsc{rc}$ and $\omega' \sim \pi_{\Lambda_n^p}^{\textsc{rc}}$. Recall the event $\ensuremath{\mathcal E}_{\textsc{vg}}$ and, abusing notation slightly, say $(\omega,\omega')\in \ensuremath{\mathcal E}_{\textsc{vg}}$ if $(\omega(B_r(v)), \omega'(B_r(v)))\in \ensuremath{\mathcal E}_{\textsc{vg}}$. Also recall the events from Definitions~\ref{def:E-m-A}--\ref{def:E-m-p}. \begin{lemma}\label{lem:coupling-Ising-configurations} Consider a pair of configurations $(\omega,\omega')$ satisfying $(\ensuremath{\mathcal E}_{r,B_{r/2}(v)} \times \ensuremath{\mathcal E}_{n,B_{r/2}(v)}^{\theta}) \cap \ensuremath{\mathcal E}_{\textsc{vg}}$; let $\Gamma = \Gamma_k$ denote the outermost open separating $k$-surface of very good blocks $\zeta^{(k)}(\omega,\omega')$ in $B_{r}(v)\setminus B_{r/2}(v)$.
If $\omega({\mathsf{Int}}(\Gamma)) = \omega'({\mathsf{Int}}(\Gamma))$, then there exists a coupling of $$\sigma^{(r)}\sim \pi_{B_r^{\pmb{+}}(v)}^{\textsc{es}} ( \cdot \mid \omega^{(r)} =\omega)\qquad \mbox{and}\qquad \sigma^{(n)}\sim \widetilde \pi_{\Lambda_n^p}^\textsc{es}(\cdot \mid \widetilde \omega^{(n)} = \omega')$$ such that $\sigma^{(r)}_{B_{r/2}(v)} = \sigma^{(n)}_{B_{r/2}(v)}$ with probability one. \end{lemma} \begin{proof} We first show that under the conditions of the lemma, the two configurations $\omega,\omega'$ satisfy \begin{enumerate} \item Their induced component structures agree on $B_{r/2}(v)$, and \item A component in $\omega(B_{r/2}(v))$ is connected to $\partial B_{r}(v)$ in $\omega$ if and only if that component is a subset of the largest component of $\omega'$. \end{enumerate} The fact that their induced component structures agree on $B_{r/2}(v)$ is immediate from the facts that ${\mathsf{Int}}(\Gamma)\supset B_{r/2}(v)$ on the event $\ensuremath{\mathcal E}_{\textsc{vg}}$, and Lemma~\ref{lem:separating-surface-disconnects-information}. Turning to item (2), consider the union of $\Gamma$ and the portion of the path of $k$-good blocks connecting $\partial B_{r/2}(v)$ to $\partial B_{r}(v)$ exterior to $\Gamma$. This union must be connected, by definition of $\Gamma$ being a separating surface. Since this includes a block whose side coincides with $\partial B_r(v)$, the large component (of size at least $k$) of $\omega(\Gamma)$ is connected to $\partial B_{r}(v)$ in $\omega$. By similar reasoning, we claim that the largest component of $\omega'$ coincides with the largest component of $\omega'(\Gamma)$. In order to see this, notice that by Definition~\ref{def:E-m-p}, on $\ensuremath{\mathcal E}_{n,B_{r/2}(v)}^\theta$, the largest component of $\omega'$ coincides with its largest component in the path connecting $\partial B_{r/2}^{(k)}(v)$ to $\partial \Lambda_{n}^{(k)}$.
Now reasoning as above, this implies that the largest component of $\omega'$ also coincides with the largest component of $\omega'(\Gamma)$, as expected. We now construct a coupling of the corresponding Ising configurations given a pair $(\omega,\omega')$ satisfying (1)--(2) above. Recall that via the Edwards--Sokal coupling, $\pi^\textsc{es}_{B_r^{\pmb{+}}(v)}$ is obtained by drawing a sample $\omega^{(r)} \sim \pi^\textsc{rc}_{B_r^\mathbf{1}(v)}$, setting the state of the component of the boundary to be $+1$, and coloring all other components independently uniformly among $\{\pm 1\}$. The measure $\widetilde \pi^\textsc{es}_{\Lambda_n^p}$ is similarly obtained by drawing a sample $\widetilde \omega^{(n)} \sim \pi^\textsc{rc}_{\Lambda_n^p}$ and setting its largest component to be $+1$ (and coloring all other components independently uniformly among $\{\pm 1\}$). By item (2), we have \begin{align*} \ensuremath{\mathcal C}_{\partial B_r(v)}(\omega) \cap B_{r/2}(v) = \ensuremath{\mathcal C}_{1}(\omega') \cap B_{r/2}(v)\,, \end{align*} where $\ensuremath{\mathcal C}_1(\omega')$ is the largest component of $\omega'$. Those sites are all colored $+1$ in both~$\sigma^{(r)}$ and~$\sigma^{(n)}$. Consider the remaining vertices, $B_{r/2}(v)\setminus \ensuremath{\mathcal C}_{\partial B_r(v)}(\omega)$. Those vertices have the same induced component structure given the configurations $\omega(B_{r}(v)\setminus B_{r/2}(v))$ and $\omega'(\Lambda_n\setminus B_{r/2}(v))$. Therefore, we can assign colors to the clusters of $\omega,\omega'$ in the following way. \begin{enumerate} \item Fix an enumeration of all the vertices in $\Lambda_n$, but such that the vertices in $B_{r/2}(v)$ are enumerated before any vertices in $\Lambda_n \setminus B_{r/2}(v)$ are. \item For each $v\in \Lambda_n$, let $s_v$ be an i.i.d.\ uniform $\{\pm 1\}$ spin.
\item For every cluster $\ensuremath{\mathcal C}$ of $\omega$ (disjoint from $\ensuremath{\mathcal C}_{\partial B_r(v)}$), for all $w\in \ensuremath{\mathcal C}$, let $\sigma^{(r)}_w = s_{v_{\min}(\ensuremath{\mathcal C})}$ where $v_{\min}(\ensuremath{\mathcal C})$ is the smallest (in the enumeration) vertex in $\ensuremath{\mathcal C}$. \item For every cluster $\ensuremath{\mathcal C}'$ of $\omega'$ (disjoint from $\ensuremath{\mathcal C}_1(\omega')$), for all $w\in \ensuremath{\mathcal C}'$, let $\sigma^{(n)}_w = s_{v_{\min}(\ensuremath{\mathcal C}')}$ where $v_{\min}(\ensuremath{\mathcal C}')$ is the smallest (in the enumeration) vertex in $\ensuremath{\mathcal C}'$. \end{enumerate} Since every cluster that intersects $B_{r/2}(v)$ in either of $\omega,\omega'$ has its smallest vertex in $B_{r/2}(v)$, the color assignments will be coupled such that all vertices in $B_{r/2}(v)$ get the same colors under $\sigma^{(r)}$ and $\sigma^{(n)}$. \end{proof} Finally, we are able to prove Theorem~\ref{thm:Ising-wsm-within-phase-intro} from the introduction, which states that the Ising model satisfies WSM within a phase at all low temperatures. We recall that, combined with the results of Section~\ref{sec:relaxation-within-phase}, this concludes the proof of our main result, Theorem~\ref{thm:Ising-torus-mixing}. \begin{proof}[{\textbf{\emph{Proof of Theorem~\ref{thm:Ising-wsm-within-phase-intro}}}}] We will construct couplings between the two Ising distributions, $\pi_{B_{r}^{\pmb{+}}(v)}$ and $\widetilde \pi_{\Lambda_n^p}$, such that the two agree on $B_{r/2}(v)$ except with probability $Ce^{ - r/C}$. This suffices by a triangle inequality together with Lemma~\ref{lem:tilde-hat-ising-coupling} and the discussion preceding it in Remark~\ref{rem:pi-hat-edwards-sokal}.
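The coloring rule of Lemma~\ref{lem:coupling-Ising-configurations} can be sketched concretely: since coupled clusters are colored by the shared spin of their smallest enumerated vertex, two cluster partitions that agree on the inner region receive identical colors there. A toy illustration (the cluster data and labels are purely for demonstration and do not come from an actual random-cluster sample):

```python
import random

def color_clusters(clusters, site_spins):
    # Steps (3)-(4): give every vertex of a cluster the spin s_{v_min} of the
    # smallest-enumerated vertex of that cluster.
    sigma = {}
    for cluster in clusters:
        spin = site_spins[min(cluster)]
        for w in cluster:
            sigma[w] = spin
    return sigma

# Vertices 0..3 play the role of B_{r/2}(v) (enumerated first), 4..9 the rest.
# The two partitions agree when restricted to {0, 1, 2, 3}.
clusters_r = [{0, 1, 4}, {2, 5}, {3}, {6, 7}, {8, 9}]
clusters_n = [{0, 1, 9}, {2, 4, 5}, {3, 6}, {7, 8}]

rng = random.Random(1)
s = {v: rng.choice([-1, +1]) for v in range(10)}   # shared i.i.d. spins s_v

sig_r = color_clusters(clusters_r, s)
sig_n = color_clusters(clusters_n, s)
inner_agree = all(sig_r[v] == sig_n[v] for v in range(4))
```

Vertices $0$--$3$ stand in for $B_{r/2}(v)$, so `inner_agree` holds for any draw of the shared spins; outside that region the two colorings are free to disagree.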
We can then upper bound, under the coupling of $\omega\sim \pi_{B_r^\mathbf{1}(v)}^\textsc{rc}$ and $\omega'\sim \pi_{\Lambda_n^p}^\textsc{rc}$ from Lemma~\ref{lem:coupling-Ising-configurations} (more precisely, first exposing $\omega'(\Lambda_n^p \setminus B_r(v))$ then using that coupling on $B_r(v)$), \begin{align*} \|\pi_{B_r^{\pmb{+}}(v)} & (\sigma_{B_{r/2}(v)}\in \cdot)- \widetilde \pi_{\Lambda_n^p}(\sigma_{B_{r/2}(v)} \in \cdot) \|_{\textsc{tv}} \\ & \le \mathbb P \big((\omega,\omega') \notin (\ensuremath{\mathcal E}_{r,B_{r/2}(v)} \times \ensuremath{\mathcal E}_{n,B_{r/2}(v)}^\theta) \cap \ensuremath{\mathcal E}_{\textsc{vg}}\big) \\ & \quad + \mathbb E \big[\|\pi_{B_r^{\pmb{+}}(v)}^\textsc{es}(\sigma_{B_{r/2}(v)}\in \cdot\mid \omega) - \widetilde \pi_{\Lambda_n^p}^\textsc{es}(\sigma_{B_{r/2}(v)}\in \cdot \mid \omega')\|_{\textsc{tv}} \mid (\ensuremath{\mathcal E}_{r,B_{r/2}(v)} \times \ensuremath{\mathcal E}_{n,B_{r/2}(v)}^\theta) \cap \ensuremath{\mathcal E}_{\textsc{vg}} \big]\,. \end{align*} Using the monotone coupling given to us by Lemma~\ref{lem:E-very-good-probability}, on the event $\ensuremath{\mathcal E}_{\textsc{vg}}$ we have $\omega({\mathsf{Int}}(\Gamma)) = \omega'({\mathsf{Int}}(\Gamma))$ and we can apply the coupling of Ising configurations given by Lemma~\ref{lem:coupling-Ising-configurations} to see that the second term on the right-hand side above is zero. It therefore suffices for us to bound the probability of $\big((\ensuremath{\mathcal E}_{r,B_{r/2}(v)} \times \ensuremath{\mathcal E}_{n,B_{r/2}(v)}^\theta) \cap \ensuremath{\mathcal E}_{\textsc{vg}}\big)^c$. This bound follows from combining Proposition~\ref{prop:E-m-probability} applied at $m = r$, Proposition~\ref{prop:E-m-theta-probability} applied at $m=n$, and Lemma~\ref{lem:E-very-good-probability}.
\end{proof} \section{Fast mixing with plus boundary conditions}\label{sec:mixing-plus-bc} In this section, we use arguments similar to those of Section~\ref{sec:relaxation-within-phase} to establish Theorem~\ref{thm:Ising-mixing-plus-bc}, showing that the Ising dynamics on $\Lambda_n^{\pmb{+}}$ mixes rapidly when initialized from the $\pmb{+1}$ configuration. The presence of boundary conditions complicates the mixing behavior near the boundary, especially in $d\ge 3$. Therefore, the WSM within a phase condition, while sufficient to bound the total variation distance to equilibrium \emph{in the bulk}, i.e., in an interior box $\Lambda_{n/2}$ sufficiently far from the boundary, is not enough to control the overall mixing time on $\Lambda_n^{\pmb{+}}$. In analogy with the well-known notion of \emph{strong spatial mixing}, which is used to handle boundary conditions in the high-temperature regime, we introduce the following notion. \begin{definition}\label{def:ssm-within-a-phase} We say the Ising model on $\Lambda_n^{\pmb{+}}$ has \emph{strong spatial mixing (SSM) within a phase} if for every $r< n$, and every $v\in \Lambda_n$, \begin{align*} \|\pi_{B_{r}^{\pmb{+}}(v)}(\sigma_{B_{r/2}(v)} \in \cdot) - \pi_{\Lambda_n^{\pmb{+}}}(\sigma_{B_{r/2}(v)}\in \cdot)\|_{\textsc{tv}} \le Ce^{ - r/C}\,. \end{align*} \end{definition} Note the crucial difference between this definition and that of WSM within a phase (Definition~\ref{def:wsm-within-a-phase}). Here the variation distance is still required to decay exponentially with the radius~$r$ of the box, {\it even in the presence of a fixed plus boundary on~$\Lambda_n$}, which may be much closer to~$v$ than~$r$ when $B_r(v)$ intersects~$\partial\Lambda_n$. This section is broken up into proofs of the following two propositions, which are analogous to the earlier Proposition~\ref{prop:relaxation-within-phase} and Theorem~\ref{thm:Ising-wsm-within-phase-intro} respectively. 
\begin{proposition}\label{prop:single-site-relaxation-plus-bc} Suppose the Ising model satisfies SSM within a phase. Then for every $v\in \Lambda_n$, \begin{align*} \|\mathbb P(X_{\Lambda_n^{\pmb{+}},t}^{\pmb{+}}(v)\in \cdot) - \pi_{\Lambda_n^{\pmb{+}}}(\sigma_v\in \cdot) \|_{\textsc{tv}} \le Ce^{ - t/C}\,. \end{align*} \end{proposition} \begin{proposition}\label{prop:ssm-within-phase} The Ising model has SSM within a phase in both of the following settings: \begin{enumerate} \item $d=2$ and $\beta>\beta_c(d)$; and \item $d\ge 3$ and $\beta$ is sufficiently large (depending on $d$). \end{enumerate} \end{proposition} For ease of notation, throughout this section we will use the abbreviations $\pi^{\pmb{+}} = \pi_{\Lambda_n^{\pmb{+}}}$ and $X^\sigma_t = X_{\Lambda_n^{\pmb{+}},t}^\sigma$. \subsection{Relaxation to equilibrium with plus boundary conditions and ground state initialization} The proof here is similar to that in Section~\ref{sec:relaxation-within-phase}, proving fast relaxation within a phase on the torus. While we have to deal with the presence of boundary conditions here, these are handled by the stronger assumption of SSM within a phase, and other elements of the proof are simplified by the fact that the Markov chain $X_{t}$ is indeed monotone. The quantity on which we will recurse is now $\mathbb E[X_{t}^{\pmb{+}}(v)] - \pi^{\pmb{+}}[\sigma_v]$. Recall the semigroup notation, and in this section define $P_t f: \Omega \to \mathbb R$ by \begin{align*} P_t f(\sigma) := \mathbb E[f(X_{t}^{\sigma})]\,. \end{align*} Our main goal will be to establish a recurrence relation analogous to that in Proposition~\ref{prop:MO-recurrence-within-phase} for $P_t f_v$, where \begin{align*} f_v(\sigma) : = \sigma_v - \pi^{\pmb{+}}[\sigma_v]\,. \end{align*} \begin{proposition}\label{prop:MO-recurrence-with-b.c.} Suppose the Ising model satisfies SSM within a phase.
Then for all $t\ge 0$, for every $r\le n$, \begin{align}\label{eqn:recurrence-bc} \max_{v\in \Lambda_{n}} P_{2t} f_v (\pmb{+}) \le C r^d \big(\max_{v\in \Lambda_{n}} P_t f_v (\pmb{+})\big)^2 + Ce^{ - r/C}\,. \end{align} \end{proposition} \begin{proof} Fix $t$ and $v\in \Lambda_{n}$, and consider the quantity $P_{2t} f_v(\pmb{+})$. Introduce the event \begin{align*} E_t(v,r) := \{X_t^{\pmb{+}}(B_r(v)) = X_t^{\pi^{\pmb{+}}}(B_r(v))\}\,. \end{align*} By the Markov property, $P_{2t} f = P_t P_t f$, so that we can decompose \begin{align}\label{eq:P-2t-f-splitting-bc} P_{2t} f_v (\pmb{+}) = \mathbb E[P_t f_v (X_t^{\pmb{+}}) \mathbf 1\{E_t(v,r)\}] + \mathbb E[P_t f_v(X_t^{\pmb{+}}) \mathbf 1\{E_t(v,r)^c\}]\,, \end{align} where the expectation is over both the initialization $\pi^{\pmb{+}}$ and the dynamics under the grand coupling. Before proceeding, let us observe, exactly as in the proof of Proposition~\ref{prop:MO-recurrence-within-phase}, that for any increasing $f: \Omega \to\mathbb R$, the function $P_t f$ is increasing. Also as before, for a subset $A$ of $\Lambda_n$, introduce the function $P_{t,A^{\pmb{+}}} f: \Omega_A \to \mathbb R$ as \begin{align*} P_{t,A^{\pmb{+}}} f(\sigma) := \mathbb E[ f(X_{A^{\pmb{+}},t}^\sigma)]\,, \end{align*} where $X_{A^{\pmb{+}},t}^{\sigma}$ is the dynamics in $A$ with $\pmb{+1}$ boundary conditions, i.e., with all sites in $\Lambda_n \setminus A$ fixed to be $+1$. The grand coupling of the dynamics naturally extends to $X_{A^{\pmb{+}},t}^{\sigma}$ for all subsets $A \subset \Lambda_n$. And again as in the proof of Proposition~\ref{prop:MO-recurrence-within-phase}, we see that if $f:\Omega_A \to\mathbb R$ is an increasing function and $v\in A$, then $P_t f(\sigma) \le P_{t,A^{\pmb{+}}}f (\sigma_A)$. 
We now bound the first term on the right-hand side of~\eqref{eq:P-2t-f-splitting-bc} as \begin{align*} \mathbb E[ P_t f_v(X_t^{\pmb{+}}) \mathbf 1\{E_t(v,r)\}] & \le \mathbb E[P_{t,B_r^{\pmb{+}}(v)} f_v(X_t^{\pmb{+}}(B_r(v)))\mathbf 1\{E_t(v,r)\}] \\ & \le \mathbb E[ P_{t,B_r^{\pmb{+}}(v)} f_v (X_t^{\pi^{\pmb{+}}}(B_r(v)))] = \pi^{\pmb{+}}[P_{t,B_r^{\pmb{+}}(v)} f_v(\sigma_{B_r(v)})]\,. \end{align*} Here in the first inequality we used the above monotonicity between $P_t f_v$ and $P_{t,B_r^{\pmb{+}}(v)} f_v$, and in the second inequality we used the indicator function to replace $X_t^{\pmb{+}}(B_r(v))$ with $X_t^{\pi^{\pmb{+}}}(B_r(v))$. We can now use the monotonicity $\pi_{\Lambda_n^{\pmb{+}}} \preceq \pi_{B_r^{\pmb{+}}(v)}$, together with the fact that $P_{t,B_r^{\pmb{+}}(v)} f_v$ is an increasing function, to get \begin{align*} \pi^{\pmb{+}}[P_{t,B_r^{\pmb{+}}(v)} f_v(\sigma_{B_r(v)})] \le \pi_{B_r^{\pmb{+}}(v)}[P_{t,B_r^{\pmb{+}}(v)} f_v(\sigma_{B_r(v)})]\,. \end{align*} By stationarity of $\pi_{B_r^{\pmb{+}}(v)}$ under the dynamics $P_{t,B_r^{\pmb{+}}(v)}$, the right-hand side is equal to \begin{align*} \pi_{B_r^{\pmb{+}}(v)}[P_{t,B_r^{\pmb{+}}(v)} f_v(\sigma_{B_r(v)})] = \pi_{B_r^{\pmb{+}}(v)}[f_v(\sigma)] = \pi_{B_r^{\pmb{+}}(v)}[\sigma_v]- \pi^{\pmb{+}}[\sigma_v]\,. \end{align*} By the SSM within a phase property of the Ising model, this is at most $Ce^{ - r/C}$. We now turn to the second term in~\eqref{eq:P-2t-f-splitting-bc}. This can be bounded as \begin{align*} \mathbb E[P_{t} f_v(X_t^{\pmb{+}})\mathbf 1\{E_t(v,r)^c\}] \le \mathbb P(E_t(v,r)^c) \max_{\sigma}P_t f_v(\sigma) \le \mathbb P (E_t(v,r)^c) P_t f_v(\pmb{+})\,, \end{align*} by monotonicity of the function $P_t f_v$.
By monotonicity again, we can bound \begin{align*} \mathbb P(E_t(v,r)^c) \le \sum_{u\in B_r(v)} \mathbb P(X_t^{\pmb{+}}(u) \ne X_t^{\pi^{\pmb{+}}}(u)) & \le \sum_{u\in B_r(v)}\Big[\mathbb P(X_t^{\pmb{+}}(u) = +1) - \mathbb P(X_t^{\pi^{\pmb{+}}}(u) = +1)\Big] \\ & \le \sum_{u\in B_r(v)} \frac12 \Big[\mathbb E[X_t^{\pmb{+}}(u)] - \pi^{\pmb{+}}[\sigma_u]\Big] \\ & \le \frac12 |B_r(v)| \max_{u\in B_r(v)} P_t f_u(\pmb{+})\,. \end{align*} Combining the above in similar fashion to the proof of Proposition~\ref{prop:MO-recurrence-within-phase}, we obtain~\eqref{eqn:recurrence-bc} as desired. \end{proof} \begin{proof}[\textbf{\emph{Proof of Proposition~\ref{prop:single-site-relaxation-plus-bc}}}] Let $a_n(t) = \max_{v\in \Lambda_n} P_t f_v(\pmb{+})$ and notice this is non-increasing in $t$ and non-negative. As argued in the proof of Proposition~\ref{prop:relaxation-within-phase}, replacing the WSM within a phase assumption with the SSM within a phase assumption, we have for every $\epsilon>0$ that there exists $t_0$ such that, for sufficiently large $n$, for all $t\ge t_0$, $a_n(t) <\epsilon$. Then, by Lemma~\ref{lem:recurrence-solution}, we get that for all $n$ sufficiently large and $t\le n$, \begin{align*} \max_{v\in \Lambda_n} P_{t} f_v(\pmb{+}) = a_n(t) \le Ce^{ - t/C}\,. \end{align*} The total variation distance between two Bernoulli random variables is bounded by the difference in their expectations, so this concludes the proof. \end{proof} \subsection{SSM within a phase under plus boundary conditions} Our aim in this section is to establish Proposition~\ref{prop:ssm-within-phase}, showing that the Ising model satisfies SSM within a phase throughout its low-temperature regime when $d=2$, and at sufficiently low temperatures when $d\ge 3$.
This additional temperature constraint when $d\ge 3$ arises because the coarse-graining technique we used in Section~\ref{sec:Ising-wsm-within-phase} is not useful near the boundary condition: specifically, the domination in Corollary~\ref{cor:stoch-domination} holds only at distance at least $2k$ from the boundary. Our proofs in this section still rely on \emph{separating surfaces}, this time of $+1$ spins, to couple Ising configurations directly. Though we require the assumption that $\beta$ is sufficiently large in order to ensure the existence of such separating surfaces when $d\ge 3$ (and indeed they do not exist for $\beta$ all the way down to $\beta_c(d)$), we still expect the SSM within a phase property, and therefore Theorem~\ref{thm:Ising-mixing-plus-bc}, to in fact hold for \emph{all} $\beta>\beta_c(d)$, even in dimensions $d\ge 3$. Before getting into the proof, let us define separating surfaces of $+1$ spins. \begin{definition} For a vertex $v\in\Lambda_n$, we say a configuration $\sigma$ has a {\it $+1$-separating surface\/} in $B_{r}(v)\setminus B_{r/2}(v)$ if there is no $\star$-path of $-1$ spins connecting $\partial B_{r/2}(v)$ to $\partial B_{r}(v)$. \end{definition} The outermost such separating surface can be revealed as follows: if one reveals all $\star$-components of $-1$ spins meeting $\partial B_r(v)$, then the outer boundary of the revealed set is exactly the outermost $+1$-separating surface in $B_r(v)\setminus B_{r/2}(v)$. \begin{proof}[\textbf{\emph{Proof of Proposition~\ref{prop:ssm-within-phase}}}] For a pair of configurations $\sigma,\sigma'$, we say $(\sigma,\sigma')\in \ensuremath{\mathcal E}_{\textsc{ss}}$ if the pair $\sigma,\sigma'$ share a $+1$-separating surface in $B_{r}(v)\setminus B_{r/2}(v)$, i.e., $\sigma$ has a $+1$-separating surface $\Gamma$ which is also all $+1$ in $\sigma'$.
We construct a monotone coupling of $\sigma^{(r)}\sim \pi_{B_r^{\pmb{+}}(v)}$ and $\sigma^{(n)}\sim \pi_{\Lambda_n^{\pmb{+}}}$ such that, on the event $(\sigma^{(r)},\sigma^{(n)})\in \ensuremath{\mathcal E}_{\textsc{ss}}$, we have $\sigma^{(r)}_{{\mathsf{Int}}(\Gamma)} = \sigma^{(n)}_{{\mathsf{Int}}(\Gamma)}$ and in particular, therefore, they agree on all of $B_{r/2}(v)$ with probability~1. First of all, expose the spin configuration $\sigma^{(n)}_{\Lambda_n\setminus B_{r}(v)}$. Let $V_0$ track the vertices for which we have revealed $(\sigma^{(r)}_v,\sigma^{(n)}_v)$, and initialize $V_0 = \Lambda_n \setminus B_r(v)$. Initialize $W_0 = \partial B_r(v)$, and update $W_j$ as follows: \begin{enumerate} \item If $W_{j-1}\ne \emptyset$, select arbitrarily some $v_j \in W_{j-1}$, and sample \begin{align*} \sigma^{(r)}_{v_j} \sim \pi_{B_r^{\pmb{+}}(v)}(\cdot \mid \sigma^{(r)}_{V_{j-1}}) \qquad \mbox{and}\qquad \sigma^{(n)}_{v_j} \sim \pi_{\Lambda_n^{\pmb{+}}}(\cdot \mid \sigma^{(n)}_{V_{j-1}}) \end{align*} in a monotone way (i.e., using the same uniform random variable). Then, letting $N(v_j) = \{w\in \Lambda_n: w\sim_\star v_j\}$, update $V_j = V_{j-1} \cup \{v_j\}$, and $W_j = W_{j-1}\setminus \{v_j\}$ if $\sigma^{(n)}_{v_j} = +1$ and $W_j = (W_{j-1}\cup N(v_j))\setminus \{v_1,\dots,v_j\}$ otherwise. \item If $W_{j-1} = \emptyset$, set $\tau := j-1$ and sample \begin{align*} \sigma^{(r)}_{\Lambda_n \setminus V_{j-1}} = \sigma^{(n)}_{\Lambda_n \setminus V_{j-1}}\quad \sim \quad \pi_{\Lambda_n^{\pmb{+}}}\big( \cdot \mid \sigma^{(n)}_{V_{j-1}}\big)\,. \end{align*} \end{enumerate} Similar revealing schemes have been used extensively in the literature, and it is straightforward to see that this is a valid coupling such that, with probability $1$, $\sigma^{(r)}\ge \sigma^{(n)}$ in $B_r(v)$.
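The revealing scheme above is, at its core, a breadth-first exploration of the $\star$-components of $-1$ spins attached to the outer boundary, which halts precisely when a $+1$-separating surface has been uncovered. A schematic single-configuration illustration on a small two-dimensional box (toy spin configurations in place of Gibbs samples):

```python
from collections import deque

def reveal_minus_clusters(sigma, n):
    # Breadth-first reveal of the *-components of -1 spins meeting the outer
    # boundary of an n x n box; returns the set of revealed sites.
    star = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)]
    frontier = deque((x, y) for x in range(n) for y in range(n)
                     if x in (0, n - 1) or y in (0, n - 1))
    revealed = set()
    while frontier:
        v = frontier.popleft()
        if v in revealed:
            continue
        revealed.add(v)
        if sigma[v] == -1:   # a -1 spin: keep exploring its *-neighborhood
            for dx, dy in star:
                w = (v[0] + dx, v[1] + dy)
                if 0 <= w[0] < n and 0 <= w[1] < n and w not in revealed:
                    frontier.append(w)
    return revealed

n = 9
inner = {(x, y) for x in range(3, 6) for y in range(3, 6)}  # role of B_{r/2}(v)

# Configuration 1: scattered -1 spins on the boundary that do not *-percolate.
sigma1 = {(x, y): -1 if ((x in (0, n - 1) or y in (0, n - 1)) and (x + y) % 2 == 0)
          else +1 for x in range(n) for y in range(n)}
revealed = reveal_minus_clusters(sigma1, n)
separating = revealed.isdisjoint(inner)     # exploration stops: E_ss-type event

# Configuration 2: a column of -1 spins reaching from the boundary inward.
sigma2 = {(x, y): +1 for x in range(n) for y in range(n)}
for y in range(5):
    sigma2[(4, y)] = -1
revealed2 = reveal_minus_clusters(sigma2, n)
separating2 = revealed2.isdisjoint(inner)   # fails: no separating surface
```

In the first configuration the exploration dies out near the boundary, so the inner box is untouched (the analogue of $\ensuremath{\mathcal E}_{\textsc{ss}}$); in the second, a column of $-1$ spins carries the exploration into the inner box.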
Observe, next, that the event $\ensuremath{\mathcal E}_{\textsc{ss}}$ is equivalent to the event that $V_{\tau} \cap B_{r/2}(v) = \emptyset$, and on that event, the inner boundary $\partial V_{\tau}$ exactly forms a separating surface in $B_{r}(v)\setminus B_{r/2}(v)$ shared between $\sigma^{(n)}$ and $\sigma^{(r)}$. As such, in the last stage of the above coupling, the two will take the same configuration on all of $B_{r/2}(v)$. Therefore, we see that using the above coupling we have \begin{align*} \|\pi_{B_r^{\pmb{+}}(v)}(\sigma_{B_{r/2}(v)}\in \cdot) - \pi_{\Lambda_n^{\pmb{+}}}(\sigma_{B_{r/2}(v)}\in \cdot)\|_{\textsc{tv}} \le \mathbb P(\ensuremath{\mathcal E}_{\textsc{ss}}^c). \end{align*} By monotonicity of the above coupling, if $\sigma^{(n)}$ has a separating surface in $B_{r}(v)\setminus B_{r/2}(v)$, it is shared with~$\sigma^{(r)}$: thus we can bound the probability on the right-hand side by the probability that $\sigma^{(n)}$ does not have a $+1$-separating surface in $B_{r}(v)\setminus B_{r/2}(v)$. In particular, the above total variation distance is at most \begin{align*} \pi_{\Lambda_n^{\pmb{+}}}\big(\partial B_{r/2}(v) \stackrel{(-)}\longleftrightarrow_\star \partial B_{r}(v)\big) \le \pi_{\mathbb Z^d}^{\pmb{+}}\big(\partial B_{r/2}(v) \stackrel{(-)}\longleftrightarrow_\star \partial B_{r}(v)\big), \end{align*} where the event in the probabilities is that there is a $\star$-connected path of $-1$ spins from $\partial B_{r/2}(v)$ to $\partial B_r(v)$ in $\sigma$. In order for there to be such a path, in the corresponding random-cluster model there must be a path of closed edges (with adjacency now viewed in the dual graph, in which edges of the primal graph are dual to $(d-1)$-cells), either separating $\partial B_{r/2}(v)$ from infinity, or intersecting both $\partial B_{r/2}(v)$ and $\partial B_{r}(v)$. In particular, that path must have size at least $r$.
The probability of such an event is then seen to be at most $Ce^{ - r/C}$ in both of the following scenarios: \begin{itemize} \item when $d=2$, for all $\beta>\beta_c(d)$, by the exponential decay of dual-connectivities in the corresponding supercritical random-cluster model ($p>p_c(d)$). \item when $d\ge 3$, for $p$ sufficiently large, by the exponential decay of clusters of $(d-1)$-dimensional plaquettes dual to closed edges in the random-cluster model. \end{itemize} Both of the above facts are standard; we refer to~\cite{Grimmett} for details. Putting all this together, we get $\mathbb P(\ensuremath{\mathcal E}_{\textsc{ss}}^c)\le Ce^{ - r/C}$, as desired. \end{proof} \section{Lower bounds on mixing times from ground state initializations}\label{sec:lower-bound} We adapt the proof approach of~\cite{HayesSinclair} to set up the proofs of the $\Omega(\log N)$ (continuous-time) lower bounds of Theorems~\ref{thm:Ising-torus-mixing}--\ref{thm:Ising-mixing-plus-bc}. However, the proof of~\cite{HayesSinclair} used a carefully chosen initialization according to a conditional stationary distribution; moreover, the conditioning was on certain vertices being initialized to the opposite of their most likely value at stationarity. In contrast, here we are aiming to show that the mixing time to the plus phase, when initialized in the $\pmb{+1}$ configuration, is at least $\Omega(\log N)$. We are able to take advantage of the monotonicity of the Ising model to adapt the approach of~\cite{HayesSinclair} to this setting. Throughout this section, for a positive integer~$R$, define $$A_R := \Lambda_{n/2}^{(3R)} = \Lambda_{n/2}\cap 3R\mathbb Z^d\,.$$ Let $\mathbf B_R = \bigcup_{v\in A_R} B_R(v)$ and $\ensuremath{\mathcal E}_{+,\mathbf B_R^c}$ be the event that all vertices of $\Lambda_n \setminus \mathbf B_R$ are assigned $+1$.
\subsection{Fraction of plus sites in $A_R$ for the dynamics} Consider the quantity \begin{align*} F_+(\sigma) = \frac{1}{|A_R|} \sum_{v\in A_R} \mathbf 1\{\sigma_v = +1\}\,, \end{align*} which counts the fraction of vertices of $A_R$ that have spin~$+1$ in~$\sigma$, and let \begin{align*} \vartheta_+ = \pi_{\Lambda_n^{\pmb{+}}}\big[F_+(\sigma) \mid \ensuremath{\mathcal E}_{+,\mathbf{B}_R^c}\big] = \pi_{B_R^{\pmb{+}}(v)}(\sigma_v= +1)\,. \end{align*} Let $\rho = \rho(\beta,d)$ be the probability that the spin at a vertex $v$ is $-1$, given that all $2d$ of its neighbors are $+1$, and notice that $(1-\vartheta_+)\ge \rho>0$. \begin{definition} Let $\ensuremath{\mathcal A}^+_{>\epsilon}$ be the event $\{\sigma: F_+(\sigma) \ge \vartheta_+ + \epsilon\}$. \end{definition} Notice that $\ensuremath{\mathcal A}^+_{>\epsilon}$ is an increasing event and is measurable with respect to the configuration on $A_R$. We can symmetrically define the function $F_-(\sigma), \vartheta_-$ and $\ensuremath{\mathcal A}_{>\epsilon}^-$. Finally, define the distribution \begin{align*} \mu_+ = \pi_{\Lambda_n^{\pmb{+}}}(\cdot \mid \ensuremath{\mathcal E}_{+,\mathbf{B}_R^c}, \sigma_{A_R} = \pmb{+1})\,. \end{align*} \begin{lemma}\label{lem:lower-bound-fraction-tail} Let $\epsilon = \epsilon(t) = \rho/(2e^{t/\rho})$. Then for all $R = o(n)$ and $t= o(R)$, we have \begin{align*} \mathbb P(X_{\Lambda_n^{\pmb{+}},t}^{\mu_+} \notin \ensuremath{\mathcal A}_{>\epsilon}^{+}) \le C|A_R|e^{-R/C} + Ce^{ - \epsilon^2|A_R|/C}\,. \end{align*} \end{lemma} \begin{proof} Let $\overline X_{\Lambda_n^{\pmb{+}},t}^{\mu_+}$ be the Glauber dynamics initialized in $\mu_+$ which additionally ignores all updates that occur in $\mathbf{B}_R^c$. This chain is naturally coupled to $X_{\Lambda_n^{\pmb{+}},t}^{\mu_+}$. 
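The choice $\epsilon(t) = \rho/(2e^{t/\rho})$ in Lemma~\ref{lem:lower-bound-fraction-tail} is tuned so that the drift lower bound $(1-\vartheta_+)e^{-t/(1-\vartheta_+)}$ appearing in its proof dominates $2\epsilon(t)$: the map $x\mapsto xe^{-t/x}$ is increasing in $x>0$, and $1-\vartheta_+\ge \rho$. A quick numerical check of this elementary fact (the value of $\rho$ below is illustrative):

```python
import math

def drift_lower_bound(x, t):
    # x * exp(-t/x): lower bound on E[F_+] - theta_+ at time t when the
    # stationary minus-probability is x = 1 - theta_+ (with x >= rho).
    return x * math.exp(-t / x)

rho = 0.1   # illustrative value of rho(beta, d); any rho in (0, 1) works
t = 2.0
eps = rho / (2.0 * math.exp(t / rho))   # the lemma's choice of eps(t)

# Since x -> x e^{-t/x} is increasing, the drift bound dominates 2*eps
# for every admissible x = 1 - theta_+ strictly above rho.
xs = [rho + 0.01 * k for k in range(1, 90)]
ok = all(drift_lower_bound(x, t) >= 2.0 * eps for x in xs)
```

With these parameters the domination holds across the whole admissible range, mirroring the step "the right-hand side is at least $\vartheta_+ + 2\epsilon$" in the proof.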
By standard bounds on the speed of information propagation (disagreement percolation) in Glauber dynamics for a nearest-neighbor spin system (see e.g.,~\cite[Lemma 3.1]{HayesSinclair}), we have for all $t = o(R)$, \begin{align*} \mathbb P(\overline X_{\Lambda_n^{\pmb{+}},t}^{\mu_+}(A_R) \ne X_{\Lambda_n^{\pmb{+}},t}^{\mu_+}(A_R)) \le C|A_R|\exp( - R/C)\,. \end{align*} It therefore suffices for us to bound the probability $\mathbb P(\overline X_{\Lambda_n^{\pmb{+}},t}^{\mu_+} \notin \ensuremath{\mathcal A}_{>\epsilon}^{+})$. By stationarity of $\pi_{\Lambda_n^{\pmb{+}}}(\cdot \mid \ensuremath{\mathcal E}_{+,\mathbf B_R^c})$ for the chain $\overline X_{\Lambda_n^{\pmb{+}},t}^{\mu_+}$, the \emph{total monotonicity property} (see e.g., \cite[Lemmas~3.5 \& 4.1]{HayesSinclair}) implies that \begin{align*} \mathbb E[F_+(\overline X_{\Lambda_n^{\pmb{+}},t}^{\mu_+})]\ge \vartheta_+ + (1-\vartheta_+)e^{ - t/(1-\vartheta_+)}\,. \end{align*} With our choice of $\epsilon(t)$, the right-hand side is at least $\vartheta_+ + 2\epsilon$. At the same time, observe that $\overline X_{\Lambda_n^{\pmb{+}},t}^{\mu_+}(A_R)$ is a collection of $|A_R|$ i.i.d.\ random variables, since the boxes $(B_R(v))_{v\in A_R}$ are disjoint and all updates outside $\mathbf B_R$ are ignored; therefore by a Chernoff bound, \begin{align*} \mathbb P\Big(F_+(\overline X_{\Lambda_n^{\pmb{+}},t}^{\mu_+}) \le \vartheta_+ + \epsilon\Big) \le C\exp( - \epsilon^2 |A_R|/C)\,. \end{align*} Combining the above, we deduce the desired bound. \end{proof} By monotonicity of the dynamics, and the fact that $\ensuremath{\mathcal A}_{>\epsilon}^+$ is increasing, we immediately get the following. \begin{corollary}\label{cor:lower-bound-dynamics} Let $R= (\log n)^4$, $t = \delta \log n$ and $\epsilon = \rho/(2e^{t/\rho})$. Then for $\delta>0$ sufficiently small, \begin{align*} \mathbb P\Big(X_{\Lambda_n^{\pmb{+}},t}^{\pmb{+}} \in \ensuremath{\mathcal A}_{>\epsilon}^+\Big) \ge 1-o(1)\,.
\end{align*} \end{corollary} \subsection{Fraction of plus sites in $A_R$ at equilibrium} Our lower bound of Theorem~\ref{thm:Ising-mixing-plus-bc} follows if we show the following upper bound on the probability of $\ensuremath{\mathcal A}_{>\epsilon}^+$ at equilibrium. \begin{lemma}\label{lem:lower-bound-stationary} Let $R = (\log n)^4$, $t = \delta \log n$, and $\epsilon = \rho/(2e^{ t/\rho})$. Then for $\delta>0$ sufficiently small \begin{align*} \pi_{\Lambda_n^{\pmb{+}}}(\ensuremath{\mathcal A}_{>\epsilon}^+) \le o(1)\,. \end{align*} \end{lemma} \begin{proof} By monotonicity, and the increasing nature of $\ensuremath{\mathcal A}_{>\epsilon}^+$, we have $\pi_{\Lambda_n^{\pmb{+}}}(\ensuremath{\mathcal A}_{>\epsilon}^+) \le \pi_{\Lambda_n^{\pmb{+}}}(\ensuremath{\mathcal A}_{>\epsilon}^+ \mid \ensuremath{\mathcal E}_{+,\mathbf{B}_{R}^c})$. Now $\pi_{\Lambda_n^{\pmb{+}}}[F_+(\sigma) \mid \ensuremath{\mathcal E}_{+,\mathbf B_R^c}] = \vartheta_+$, and the distribution $\pi_{\Lambda_n^{\pmb{+}}}(\cdot \mid \ensuremath{\mathcal E}_{+,\mathbf B_R^c})$ is a product measure over $\pi_{B_R^{\pmb{+}}(v)}$ for $v\in A_R$. Thus by a Chernoff bound, \begin{align}\label{eq:pi-fraction-bound} \pi_{\Lambda_n^{\pmb{+}}}(\ensuremath{\mathcal A}_{>\epsilon}^+ \mid \ensuremath{\mathcal E}_{+,\mathbf B_R^c}) \le C\exp( - \epsilon^2 |A_R|/C) \le o(1)\,, \end{align} with our choices of $\epsilon,t,R$ and $\delta$ sufficiently small. \end{proof} \section{Proofs of main theorems}\label{sec:proofs-of-main-theorems} In this final section, we assemble the various pieces from the preceding sections to conclude the proofs of our main theorems from the introduction. Recall that Theorem~\ref{thm:Ising-wsm-within-phase-intro} was already proved in Section~\ref{sec:Ising-wsm-within-phase}. \begin{proof}[\textbf{\emph{Proof of Theorem~\ref{thm:Ising-mixing-plus-bc}}}] We begin by proving the upper bound on the mixing time. 
Using the monotone coupling and a union bound over sites, we can bound the total variation distance as \begin{align*} \|\mathbb P(X_{\Lambda_n^{\pmb{+}},t}^{\pmb{+}}\in \cdot) - \pi_{\Lambda_n^{\pmb{+}}}\|_{\textsc{tv}} \le \mathbb P\big(X_{\Lambda_n^{\pmb{+}},t}^{\pmb{+}}\ne X_{\Lambda_n^{\pmb{+}},t}^{\pi^{\pmb{+}}}\big) \le \sum_{v\in \Lambda_n} \big(\mathbb P(X_{\Lambda_n^{\pmb{+}},t}^{\pmb{+}}(v) = +1) - \pi_{\Lambda_n^{\pmb{+}}}(\sigma_v = +1)\big)\,. \end{align*} Each summand on the right-hand side is exactly the total variation distance on the corresponding single site, which, by Propositions~\ref{prop:single-site-relaxation-plus-bc} and~\ref{prop:ssm-within-phase} under the assumptions of Theorem~\ref{thm:Ising-mixing-plus-bc}, is at most $Ce^{ - t/C}$; the sum is therefore at most $C|\Lambda_n|e^{ - t/C}$, which is $o(1)$ for some $t = O(\log N)$, as desired. We now turn to the lower bound on the mixing time. By the definition of total variation distance, \begin{align*} \|\mathbb P(X_{\Lambda_n^{\pmb{+}},t}^{\pmb{+}}\in \cdot) - \pi_{\Lambda_n^{\pmb{+}}}\|_{\textsc{tv}} \ge \mathbb P(X_{\Lambda_n^{\pmb{+}},t}^{\pmb{+}}\in \ensuremath{\mathcal A}_{>\epsilon}^+) - \pi_{\Lambda_n^{\pmb{+}}}(\ensuremath{\mathcal A}_{>\epsilon}^+)\,. \end{align*} By Corollary~\ref{cor:lower-bound-dynamics} and Lemma~\ref{lem:lower-bound-stationary}, there exist choices of $\epsilon, R$ and $t = \Omega(\log N)$ such that this difference is at least $1-o(1)$, as desired. \end{proof} \begin{proof}[\textbf{\emph{Proof of Theorem~\ref{thm:Ising-plus-phase}}}] The upper bound on the mixing time was established in Theorem~\ref{thm:Ising-mixing-restricted-chain}.
For the lower bound, we use the triangle inequality to obtain \begin{align}\label{eq:plus-phase-lower-bound} \mathbb P(\widehat X_{\Lambda_n^p,t}^{\pmb{+}}\in \ensuremath{\mathcal A}_{>\epsilon}^+) - \widehat \pi_{\Lambda_n^p}(\ensuremath{\mathcal A}_{>\epsilon}^+) & \ge \mathbb P(X_{\Lambda_n^p,t}^{\pmb{+}}\in \ensuremath{\mathcal A}_{>\epsilon}^+) - \pi_{\Lambda_n^{\pmb{+}}}(\ensuremath{\mathcal A}_{>\epsilon}^+) - \|\mathbb P(\widehat X_{\Lambda_n^p,t}^{\pmb{+}}\in \cdot) - \mathbb P(X_{\Lambda_n^p,t}^{\pmb{+}}\in \cdot)\|_{\textsc{tv}} \nonumber \\ & \qquad - \|\widehat \pi_{\Lambda_n^p}(\sigma_{\Lambda_{n/2}}\in \cdot) - \pi_{\Lambda_n^{\pmb{+}}}(\sigma_{\Lambda_{n/2}}\in \cdot)\|_{\textsc{tv}}\,. \end{align} Now by the disagreement percolation bound referred to earlier (\cite[Lemma 3.1]{HayesSinclair}), we have for all $t = o(n)$, \begin{align}\label{eq:disagreement-percolation-2} \mathbb P\Big(X_{\Lambda_n^p,t}^{\pmb{+}}(\Lambda_{n/2})\ne X_{\Lambda_n^{\pmb{+}},t}^{\pmb{+}}(\Lambda_{n/2})\Big) \le C e^{ - n/C}\,. \end{align} Since $\ensuremath{\mathcal A}_{>\epsilon}^+$ is measurable with respect to the configuration on $A_R \subset \Lambda_{n/2}$, by \eqref{eq:disagreement-percolation-2} together with Corollary~\ref{cor:lower-bound-dynamics} and Lemma~\ref{lem:lower-bound-stationary}, there exist choices of $\epsilon, R$ and $t = \Omega(\log N)$ such that the difference between the first two terms in~\eqref{eq:plus-phase-lower-bound} is $1-o(1)$. By Claim~\ref{clm:monotonicity-relations}, the third term is at most $\mathbb P(\widehat\tau^{\pmb{+}}\le t)$, which is at most $Ce^{ - n^{d-1}/C}$ by Lemma~\ref{lem:zero-magnetization-hitting-time}. Finally, the fourth term is at most $Ce^{ - n/C}$ by the WSM within a phase property, as established in Theorem~\ref{thm:Ising-wsm-within-phase-intro}.
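Combining the bounds on the four terms on the right-hand side of~\eqref{eq:plus-phase-lower-bound}, we conclude that for these choices of $\epsilon, R$ and $t = \Omega(\log N)$,
\begin{align*}
\mathbb P(\widehat X_{\Lambda_n^p,t}^{\pmb{+}}\in \ensuremath{\mathcal A}_{>\epsilon}^+) - \widehat \pi_{\Lambda_n^p}(\ensuremath{\mathcal A}_{>\epsilon}^+) \ge 1-o(1) - Ce^{ - n^{d-1}/C} - Ce^{ - n/C} = 1-o(1)\,,
\end{align*}
implying the desired lower bound on the mixing time of the restricted chain.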
\end{proof} \begin{proof}[\textbf{\emph{Proof of Theorem~\ref{thm:Ising-torus-mixing}}}] The upper bound on the mixing time follows directly from Theorem~\ref{thm:Ising-wsm-within-phase-intro} together with Theorem~\ref{thm:Ising-torus-restatement}. For the lower bound, we can bound the total variation distance by \begin{align}\label{eqn:lbvd} \|\mathbb P(X_{\Lambda_n^p,t}^{\nu^{\pmb{\pm}}}\in \cdot) - \pi_{\Lambda_n^p}\|_{\textsc{tv}} \ge \mathbb P(X_{\Lambda_n^p,t}^{\nu^{{\pmb{\pm}}}} \in \ensuremath{\mathcal A}_{>\epsilon}^+ \cup \ensuremath{\mathcal A}_{>\epsilon}^-) - \pi_{\Lambda_n^p}( \ensuremath{\mathcal A}_{>\epsilon}^+ \cup \ensuremath{\mathcal A}_{>\epsilon}^-)\,. \end{align} We bound the first term on the right-hand side of~\eqref{eqn:lbvd} from below: \begin{align*} \mathbb P(X_{\Lambda_n^p,t}^{\nu^{{\pmb{\pm}}}}\in \ensuremath{\mathcal A}_{>\epsilon}^+ \cup \ensuremath{\mathcal A}_{>\epsilon}^-) \ge \frac 12 \mathbb P(X_{\Lambda_n^p,t}^{\pmb{+}}\in \ensuremath{\mathcal A}_{>\epsilon}^+) + \frac 12 \mathbb P(X_{\Lambda_n^p,t}^{\pmb{-}}\in \ensuremath{\mathcal A}_{>\epsilon}^-)\,. \end{align*} It follows from~\eqref{eq:disagreement-percolation-2} together with Corollary~\ref{cor:lower-bound-dynamics} (and their symmetric versions for $X^{\pmb{-}}_{\Lambda_n^p,t}$) that there exist choices of $\epsilon,R$ and $t = \Omega(\log N)$ such that this is $1-o(1)$. Turning to the second term on the right-hand side of~\eqref{eqn:lbvd}, we have by a union bound and monotonicity \begin{align*} \pi_{\Lambda_n^p}(\ensuremath{\mathcal A}_{>\epsilon}^+\cup\ensuremath{\mathcal A}_{>\epsilon}^-) \le \pi_{\Lambda_n^{\pmb{+}}}(\ensuremath{\mathcal A}_{>\epsilon}^+) + \pi_{\Lambda_n^{\pmb{-}}}(\ensuremath{\mathcal A}_{>\epsilon}^-)\,. \end{align*} By Lemma~\ref{lem:lower-bound-stationary} and its symmetric version for $\pi_{\Lambda_n^{\pmb{-}}}$, for the same choices of $\epsilon$ and $R$, this is $o(1)$.
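Quantitatively, the decay in the Chernoff bounds behind these estimates can be checked directly. With $t = \delta \log n$ one has $\epsilon = (\rho/2)e^{ - t/\rho} = (\rho/2) n^{ - \delta/\rho}$, so if $|A_R|$ is of order $(n/R)^d$ (a normalization we assume here for illustration, corresponding to the sites of $A_R$ being spaced $R$ apart in $\Lambda_{n/2}$), then
\begin{align*}
\epsilon^2 |A_R| \asymp n^{d - 2\delta/\rho} (\log n)^{ - 4d} \to \infty \qquad \text{whenever } \delta < \rho d/2\,,
\end{align*}
so that $C\exp( - \epsilon^2 |A_R|/C) = o(1)$ for $\delta>0$ sufficiently small, as used above.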
Thus the right-hand side of~\eqref{eqn:lbvd} is $1-o(1)$ for some $t=\Omega(\log N)$, implying the desired mixing time lower bound. \end{proof} \begin{proof}[\textbf{\emph{Proof of Corollary~\ref{cor:infinite-volume-relaxation}}}] If one repeats the proof of Proposition~\ref{prop:single-site-relaxation-plus-bc} identically, but with $X_{\Lambda_n^{\pmb{+}},t}^{\pmb{+}}$ and $\pi_{\Lambda_n^{\pmb{+}}}$ replaced by the infinite-volume quantities $X_{\mathbb Z^d,t}^{\pmb{+}}$ and $\pi_{\mathbb Z^d}^{\pmb{+}}= \lim_{m\to\infty} \pi_{\Lambda_m^{\pmb{+}}}$, respectively, one obtains the following. Suppose that the Ising model satisfies, for every $v\in \mathbb Z^d$ and every $r$, \begin{align}\label{eq:infinite-volume-WSM-within-a-phase} \|\pi_{B_r^{\pmb{+}}(v)}(\sigma_{B_{r/2}(v)}\in \cdot) - \pi_{\mathbb Z^d}^{\pmb{+}}(\sigma_{B_{r/2}(v)}\in \cdot)\|_{\textsc{tv}} \le Ce^{- r/C}\,; \end{align} then for every $v\in \mathbb Z^d$, \begin{align}\label{eq:infinite-volume-single-site-relaxation} \mathbb P(X_{\mathbb Z^d,t}^{\pmb{+}}(v)= +1) - \pi_{\mathbb Z^d}^{\pmb{+}}(\sigma_v = +1) \le Ce^{ - t/C}\,. \end{align} Note that~\eqref{eq:infinite-volume-WSM-within-a-phase} is an infinite-volume analog of the WSM within a phase property.
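For completeness, we recall why the weak limit $\pi_{\mathbb Z^d}^{\pmb{+}} = \lim_{m\to\infty} \pi_{\Lambda_m^{\pmb{+}}}$ is well defined: by the FKG inequality, enlarging a volume with plus boundary conditions only moves the plus boundary farther away, so the finite-volume marginals are stochastically decreasing,
\begin{align*}
\pi_{\Lambda_{m'}^{\pmb{+}}}(\sigma_B \in \cdot) \preceq \pi_{\Lambda_m^{\pmb{+}}}(\sigma_B \in \cdot) \qquad \text{for all } m'\ge m \text{ and every fixed } B\subset \Lambda_m\,,
\end{align*}
where $\preceq$ denotes stochastic domination; in particular, these monotone limits exist.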
We can reduce~\eqref{eq:infinite-volume-WSM-within-a-phase} to the finite-volume WSM within a phase property proved in Theorem~\ref{thm:Ising-wsm-within-phase-intro} via a triangle inequality: for any event $A\subseteq \{\pm 1\}^{B_{r/2}(v)}$, \begin{align*} |\pi_{B_r^{\pmb{+}}(v)}(\sigma_{B_{r/2}(v)}\in A) - \pi_{\mathbb Z^d}^{\pmb{+}}(\sigma_{B_{r/2}(v)}\in A)| & = \lim_{m\to\infty} |\pi_{B_r^{\pmb{+}}(v)}(\sigma_{B_{r/2}(v)}\in A) - \pi_{\Lambda_m^{\pmb{+}}}(\sigma_{B_{r/2}(v)}\in A)| \\ & \le \lim_{m\to\infty} \|\pi_{B_r^{\pmb{+}}(v)}(\sigma_{B_{r/2}(v)}\in \cdot) - \widehat \pi_{\Lambda_{2m}^{p}}(\sigma_{B_{r/2}(v)}\in \cdot)\|_{\textsc{tv}} \\ & \qquad + \lim_{m\to\infty} \|\widehat \pi_{\Lambda_{2m}^{p}}(\sigma_{B_{r/2}(v)}\in \cdot) - \pi_{\Lambda_{m}^{\pmb{+}}} (\sigma_{B_{r/2}(v)}\in \cdot)\|_{\textsc{tv}}\,. \end{align*} By Definition~\ref{def:wsm-within-a-phase} and Theorem~\ref{thm:Ising-wsm-within-phase-intro}, the right-hand side is at most $\lim_{m\to\infty} (Ce^{ - r/C} + Ce^{ - m/C}) = Ce^{ - r/C}$, yielding~\eqref{eq:infinite-volume-WSM-within-a-phase}, and thus~\eqref{eq:infinite-volume-single-site-relaxation}. To deduce the exponential relaxation for local functions~\eqref{eqn:infvolcor}, note that for any $f:\{\pm 1\}^{B_{r/2}(v)}\to \mathbb R$, \begin{align*} |\mathbb E[f(X_{\mathbb Z^d,t}^{\pmb{+}})] - \pi_{\mathbb Z^d}^{\pmb{+}}[f(\sigma)]|& \le {\|f\|}_\infty \|\mathbb P(X_{\mathbb Z^d,t}^{\pmb{+}}(B_{r/2}(v))\in \cdot) - \pi_{\mathbb Z^d}^{\pmb{+}}(\sigma_{B_{r/2}(v)}\in \cdot) \|_{\textsc{tv}} \\ & \le {\|f\|}_\infty \sum_{u\in B_{r/2}(v)} \big(\mathbb P(X_{\mathbb Z^d,t}^{\pmb{+}}(u)= +1) - \pi_{\mathbb Z^d}^{\pmb{+}}(\sigma_u = +1)\big)\,, \end{align*} which by~\eqref{eq:infinite-volume-single-site-relaxation} is at most $C {\|f\|}_\infty r^d e^{ - t/C}$, yielding the desired bound~\eqref{eqn:infvolcor} with $C_f = C{\|f\|}_\infty r^d$. \end{proof}